Let's take a look at the new features of the Samsung Cloud Platform Console.
Resource management using Copilot
Information about resources and services
Easy and fast integrated search
Improved accessibility for Account and Favorites
Services for large-scale organizations and systematic user management
An LLM service that can be used without a separate service request
1.2 - Overview
Samsung Cloud Platform
Samsung Cloud Platform is a cloud environment that virtualizes and provides various resource pools such as computing, storage, networking, and database required by companies.
Through Samsung Cloud Platform, you can use the resources you need in a cloud environment without procuring hardware or physical space. Fees are charged according to actual usage, enabling efficient budget operation, and companies can reduce the cost of building and managing their own server environments.
The main terms used in Samsung Cloud Platform are as follows.
Term
Detailed Description
Region
A region is a geographically separated unit of cloud service provision, consisting of one or more data centers to ensure availability
Account
The basic unit and billing unit required to use Samsung Cloud Platform services
Root User
The user who created the account, with the highest authority
IAM User
A user created by the Root user within the Account with limited privileges
Service
Various types of IT services and infrastructure (Compute, Storage, Network, etc.) provided by Samsung Cloud Platform
Resource
An individual unit object (Entity) created and managed by the user while using the service
Fig. Samsung Cloud Platform key terms
Components
Samsung Cloud Platform provides the Service Portal, Console, and Documentation.
Classification
Detailed Description
Service Portal
Introduces Samsung Cloud Platform services and pricing, and provides information such as customer support
Console
A self-service interface for creating Accounts and resources, checking costs, and using other Samsung Cloud Platform features
Documentation
Samsung Cloud Platform’s integrated documentation service that provides user guides, API/CLI references, etc.
A region is a geographically separated unit of cloud service provision, consisting of one or more data centers to ensure availability. Services provided by Samsung Cloud Platform are managed and provided on a region-by-region basis, so the available services may differ by region. Refer to the user guide for each service.
Region Name
Region
Korea West
Korea West (kr-west1)
South Korea (administrative)
South Korea 1 (kr-south1)
South Korea (public)
South Korea 2 (kr-south2)
South Korea (Internet)
South Korea 3 (kr-south3)
Table. Samsung Cloud Platform provided region items
Account
To use Samsung Cloud Platform, you must create an account through membership registration. The user who creates the account is its Root user and is responsible for paying the account's fees. After registering a payment method, you can create resources.
The Root user can create users in IAM and add them to user groups; users created this way are IAM users.
The general flow from creating an Account to configuring a cloud environment and performing tasks is as follows.
When a user signs up, an Account is created and that user becomes its Root user.
Once the Root user registers a payment method, resources can be created on Samsung Cloud Platform.
The Root user creates IAM users and adds them to user groups so that each can perform tasks according to their permissions.
IAM users apply for the services they need and perform their tasks.
The detailed tasks for each role are as follows.
Role
Responsible Task
Root user
The user with the highest authority in the Account
Can manage all resources within the Account.
Can create and manage IAM users, user groups, and policies.
IAM user
A user with limited privileges within an Account
Can manage resources only within the permissions set by the Root user
Can prevent unnecessary access to resources by granting only the necessary permissions
Table. Roles and Responsibilities
1.3 - Getting Started with Console
1.3.1 - Login
To use the Samsung Cloud Platform Console, you must create an Account through membership registration. The user who creates the Account is its Root user and is responsible for paying the Account's bills. After registering for membership and a payment method, you can create resources. For more information, see Registering a payment method.
Note
Root users can create users in Management > IAM and add them to user groups. Users created in this way can log in as IAM users. For more information on user creation, see IAM.
Join membership
To create an account on the Samsung Cloud Platform Console, follow the procedure below.
On the login page, select Root User as the user type and click the Sign Up button. You will be taken to the Sign Up page.
On the member registration page, proceed with identity verification.
Once identity verification is complete, click the Next button.
Item
Mandatory
Description
Automatic input prevention
Required
Enter the characters shown in the image into the input field
Mobile phone number
Required
Enter your mobile phone number
Enter the mobile phone number and click the Authentication button to receive an authentication number
Enter the authentication number sent to your mobile phone and click the Confirm button
If the authentication number is valid, identity verification is complete
Table. Personal Authentication Information
Enter membership information, select a region, and agree to the terms.
Item
Mandatory
Description
Region
Required
Select the subscriber's region
Service Terms Agreement
Required
Check whether the service terms agreement is accepted
Personal information collection and usage agreement
Required
Check for personal information collection and usage agreement
Personal Information Overseas Transfer Consent
Required
Check for personal information overseas transfer consent
Are you 14 years old or older?
Required
Check if you are 14 years old or older
Personal Information Collection and Usage Agreement
Optional
Check for Personal Information Collection and Usage Agreement
Table. Input information for membership registration
On the Member Information Input page, enter the required information.
Item
Mandatory
Description
ID(email)
Required
Email address to be used as the subscriber ID
Username
Required
Subscriber name
Can be entered within 60 characters using Korean and English, numbers, and spaces
Account name
Required
Name of the Account the subscriber will use
Can be entered within 60 characters using Korean and English, numbers, and spaces
Password
Required
Password to be used by the member; must be 9-20 characters
Must include at least one uppercase letter (English), lowercase letter (English), number, and special character (!@#$%&*^)
ID cannot be used as password
Same character cannot be used more than 3 times
Easy-to-guess passwords cannot be used
Sequences of 4 or more consecutive characters or numbers (e.g., abcd, 1234) cannot be used
Password change cycle: 90 days
Password Confirmation
Required
Confirmation of the password to be used by the member
Mobile phone number
Required
Mobile phone number used for self-authentication
Notification Language
Required
Language for notifications such as email and SMS provided by Samsung Cloud Platform
Can be changed after logging in under Notification Popover > Notification Settings
Table. Member Information Input
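The password rules in the table above can be sketched as a simple validator. This is an illustrative example only; the function name and the exact interpretation of each rule (noted in the comments) are assumptions, not the Console's actual implementation:

```python
import re

# Characters the form accepts as "special characters" (from the table above)
SPECIALS = "!@#$%&*^"

def is_valid_password(password: str, user_id: str) -> bool:
    """Illustrative check of the password rules listed above."""
    # Must be 9-20 characters long
    if not 9 <= len(password) <= 20:
        return False
    # Must include at least one uppercase letter, lowercase letter,
    # number, and allowed special character
    if not (re.search(r"[A-Z]", password)
            and re.search(r"[a-z]", password)
            and re.search(r"[0-9]", password)
            and any(c in SPECIALS for c in password)):
        return False
    # The ID (email) cannot be used as (part of) the password
    if user_id and user_id.lower() in password.lower():
        return False
    # Assumed interpretation: the same character cannot be repeated
    # three or more times in a row
    if re.search(r"(.)\1\1", password):
        return False
    # Sequences of 4 or more consecutive characters/numbers (e.g. "abcd", "1234")
    for i in range(len(password) - 3):
        chunk = password[i:i + 4]
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(3)):
            return False
    return True
```

A password such as "Example#2026" passes these checks, while one that is too short, repeats a character, or contains a run like "1234" is rejected.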
After entering all the information, click the Complete button; a verification email will be sent to the entered email address.
When you receive the email, click the verification button to complete membership registration.
Log in
The Samsung Cloud Platform Console has two user types: Root user and IAM user. The Root user is the user who created the Account and performs tasks that require unrestricted access rights. An IAM user is a user within the Account who performs daily tasks; the Root user can create IAM users. For more information on user creation, see IAM.
Root user login
To log in to the Samsung Cloud Platform Console as a root user, follow these procedures.
On the login page, select Root user as the user type, enter your ID (email), and click the Next button.
You will be taken to the Root user login page. Enter your password.
Select the means for receiving the authentication number and click the Send Authentication Number button.
Enter the received authentication number and click the Login button.
If you log in successfully, you will be taken to the Console Home page.
Find ID/Password
If you have lost your ID or password, click the Find ID/Password button, verify your account information, and then try logging in again.
Caution
Please enter your password and authentication number correctly. If you enter your password or authentication number incorrectly more than 5 times, your account will be locked for security reasons.
If the account is locked, the locked-account information is provided to the user.
If allowed access IPs are set, you cannot log in from an unallowed IP. Click the Access allowed IP not set item to release the IP restriction.
If you logged in after releasing the allowed IP setting, go to the Management > IAM > My Info. page and reconfigure it for security reasons. For more information, see Access IP Management.
IAM user login
To log in to the Samsung Cloud Platform Console as an IAM user, follow these procedures.
On the login page, select IAM user as the user type, enter the Account information, and click the Next button.
Reference
For Account information, enter the Account ID or Account alias, which is provided by the Root user.
You will be taken to the IAM User Login page. Enter the IAM username and password.
Select an authentication method and click the Next button.
Enter the received authentication number and click the Next button.
If you log in successfully, you will be taken to the Console Home page.
Note
IAM users who lose their ID or password must contact the Root user; to recover a password, request that the Root user reset it.
Caution
Please enter the password and authentication number correctly. If you enter the password or authentication number incorrectly more than 5 times, the account will be locked for security reasons.
If the account is locked, the locked-account information is delivered to the user or administrator.
If allowed access IPs are set, you cannot log in from an unallowed IP. Click the Access allowed IP not set item to release the IP restriction.
If you logged in after releasing the allowed IP setting, go to the Management > IAM > My Info. page and reconfigure it for security reasons. For more information, see Access IP Management.
Switch to user
You can switch to the Root user or an IAM user after logging in to the Samsung Cloud Platform Console.
Note
To switch users after logging in, the Root user and the IAM user must have the same email address and phone number.
To switch users, follow the procedure below.
On the Console Home page, click the Switch User button to the right of the Account name. The Switch User popup window opens.
Click the name of the user you want to switch to from the list of users by Account. A popup window confirming the user switch opens.
Check the user name and click the Confirm button. The user switch is completed and you will be taken to the Console Home page.
Console language change
You can set the language to be used in the Samsung Cloud Platform Console.
Change from the login page
To change the language displayed on the Samsung Cloud Platform Console, click the language selector at the top right of the login page, select the language to use, and then log in.
Change from the Console page
If you are logged in to the Samsung Cloud Platform Console, you can change the language by clicking the language at the bottom of the page.
Modify user information
You can change user information such as username, password, mobile phone number, etc.
To modify user information, click My menu > My Info. at the top right of the Console.
1.3.2 - Console
When you first log in to the Samsung Cloud Platform Console, you will be taken to the Console page.
Console
On the Console page, you can view the Console Home page and configure the Console Home widgets. You can also view the list of all Samsung Cloud Platform services. The Console Home and All Services items in the Console's left menu provide the functions below.
Provided Features
Description
Console Home
Provides important information about Samsung Cloud Platform and shortcut widgets for services
For detailed information about Console Home, see Console Home
All Services
View all Samsung Cloud Platform service categories and the service list in the Console, and navigate to each service
For detailed information about all services, see All Services
Table. Console Home Function
Console Home
When you first log in to the Samsung Cloud Platform Console, you are taken to the Console page, where Console Home is shown by default. Console Home is composed of widgets, which can be changed by clicking the Dashboard Settings button at the top right.
Provided Features
Description
Welcome
Greeting and Samsung Cloud Platform text introduction
Provides My Info., Documentation, and Notification List buttons
Recent Visited Service
List of services visited recently
Click a service name to go to that service
Copilot
Introduction to Copilot or list of conversations with Copilot
Click the conversation list to go to Copilot
For detailed information about Copilot, see Copilot
Architecture Diagram
Provides resource information in diagram form so that relationships between resources can be grasped at a glance
In Console Home, provides three types of resources: VPC, Subnet, and Virtual Server
When a specific resource is clicked, the main information of the resource can be viewed directly without navigating to the resource detail screen
Payment Method Registration
Prompts you to register a payment method, which is required to use all resources of the Samsung Cloud Platform Console
This widget is no longer displayed after payment method registration is completed
Support Center
A service that provides technical support, standard architecture, incident response, service inquiries/answers, etc., needed when using the Samsung Cloud Platform Console
Provides service requests, inquiries, and the Knowledge Center
Predicts this month's estimated amount and lets you check the billing amount
Provides this month's usage amount, last month's usage amount, the end-of-month billing estimate, the recent 6-month average amount, and this month's usage amount by service category
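The end-of-month billing estimate mentioned above is typically a pro-rata projection of month-to-date usage. The sketch below illustrates one such calculation; the function name and formula are assumptions for illustration, not the Console's documented method:

```python
import calendar
from datetime import date

def estimate_month_end_bill(month_to_date_amount: float, today: date) -> float:
    """Linearly project month-to-date usage to a full-month estimate.

    A simple pro-rata assumption, shown for illustration only; the
    Console's actual estimation method may differ.
    """
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return round(month_to_date_amount / today.day * days_in_month, 2)
```

Under this assumption, 100 units of usage accrued by April 10 would project to about 300 units by April 30.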
Users can rearrange all Console Home widgets except the Welcome widget. For more details, see Dashboard Settings.
Setting up the Dashboard
From the Console page, click Console Home in the left menu. You will be taken to the Console Home page.
When you first log in, the default page is the Console > Console Home screen.
On the Console Home page, click the Dashboard Settings button at the top right. The Dashboard Settings popup opens.
In Widget Settings of the Dashboard Settings popup, change the items and order of the widgets displayed on Console Home.
All widgets except the Welcome widget can have their settings changed by the user.
Select a widget's checkbox to add it to the dashboard; deselect it to remove it.
Change the order by clicking and holding the hamburger button next to a widget name and moving it up or down.
In Widget Settings, selecting Select All selects all widgets; deselecting it deselects all widgets except Welcome.
In the Preview of the Dashboard Settings popup, check the widget changes and then click the Save button.
The configured widgets can be previewed on the right.
Check the changed settings on the Console Home page.
All Services
You can view the service categories and service list provided by Samsung Cloud Platform at a glance and navigate to each service.
1.3.3 - Integrated Management
Integrated management means the collection of functions located at the top of the Samsung Cloud Platform Console.
Integrated Management Function
In Integrated Management, you can view the list of services and the list of recently visited services, converse with Copilot, and check notifications received in the Samsung Cloud Platform Console. You can also view the current Samsung Cloud Platform region and the list of regions you can switch to. In the My menu, you can view user and Account information.
Category
Detailed description
Service
Search for Samsung Cloud Platform services, etc., and go to the service
For detailed information about Service, see Service
Copilot
Copilot provided by Samsung Cloud Platform
For detailed information about Copilot, see Copilot
Integrated Search
Search Samsung Cloud Platform services, Console documents, Marketplace, etc.
For detailed information about Integrated Search, see Integrated Search
Notification
Check notifications received from Samsung Cloud Platform Console
For detailed information about Notification, see Notification
Support
Navigate to Support Center, Documentation, and Announcements
Copilot is only available in Korea West (kr-west1) and Korea East (kr-east1).
Service
You can search for services provided by Samsung Cloud Platform by keyword, service category, recent visits, favorites, etc., and navigate to the service.
Category
Detailed description
Search term lookup
Search using a keyword within the service list
Searches service names and descriptions using the entered keyword
Search requires at least two characters
Recent Visits
List of recently visited services
List of recently visited services and service description
All Services
A list of all services sorted in ascending order
Table. Service Items
Add to Favorites
You can add the service to favorites.
Click the Service button at the top of the Console. The Service popup window opens.
In the Service popup window, find the service to add to favorites in All Services or Recent Visits. Click the star icon to the left of the service name you want to add.
Confirm that the star icon has turned yellow.
In Service > Favorites, you can check the services added to favorites.
You can also view your favorite services at the top of the Console.
Check Favorites
You can view the services added to your favorites and navigate to the respective service.
Click the Service button at the top of the Console. The Service popup window opens.
In the Service popup window, click Favorites. You can check the services added to your favorites.
In Service > Favorites, click the name of the service you want to go to. You will be taken to that service.
Remove from Favorites
You can remove a service added to your favorites.
Click the Service button at the top of the Console. The Service popup window opens.
In the Service popup window, click Favorites. You can check the services added to your favorites.
Deselect the star icon to the left of the service name you added to favorites.
You can deselect the star icon to the left of a service name anywhere it appears, such as in All Services.
In Service > Favorites, confirm that the removed service no longer appears.
The service is also removed from the favorites shown at the top of the Console.
Copilot
Copilot is a generative AI-based conversational assistant that can help you understand, build, scale, and operate the Samsung Cloud Platform. You can start a conversation with Copilot by clicking the Copilot icon in the Samsung Cloud Platform Console.
For more details, refer to Copilot. Below are some example questions you can ask Copilot.
Example question
How can I check the estimated charges?
How do I create a Virtual Server?
Please show the list of Virtual Server resources of my Account.
Find the Virtual Server with IP 192.168.21.1
How do I add a user to a user group?
Download cost details
What is the estimated bill amount for this month?
Reference
Copilot is only available in Korea West (kr-west1) and Korea East (kr-east1).
Integrated Search
You can easily find services and documents provided by Samsung Cloud Platform using integrated search.
To find the desired service or document using integrated search, follow these steps.
Enter the keyword you want to search for in the integrated search box. A search results popup will open.
When you enter a keyword in the integrated search box, the keyword is auto-completed and a popup window with the search results for the completed keyword opens.
If you click the View All button in the search results category, the full results searched in that category will be displayed.
Category
Detailed description
Service, Console documents, Marketplace
Enter at least two characters for the keyword you want to search
Resource Name, Resource ID
Enter the resource name or resource ID you want to find in the form: / followed by the resource name (or resource ID)
Enter at least two characters of the resource name (or resource ID)
Table. How to input keywords for each search item
Click the information to be checked in the search result popup window. It will move to the corresponding page.
Reference
The scope that can be found with integrated search is as follows.
Services and resource names, resource IDs within Samsung Cloud Platform
User guide documents: How-to guides, API reference, CLI reference
Knowledge Center
Marketplace
Notification
You can view notifications received from the Samsung Cloud Platform Console, and you can go to the notification settings.
Category
Detailed description
Notification List
Latest 10 notifications received from Samsung Cloud Platform Console
Click the View All Notifications button to go to the Notification popup window
All: Latest 10 of all received notifications
Unread: Latest 10 unread notifications among all received
Mark All as Read: Mark all received notifications as read
Selecting a notification item allows viewing detailed information
Click the Notification button at the top right of the Console to view the latest 10 notifications received in the Console.
When you select a specific notification item from the latest 10 notifications list, you will be taken to the notification detail popup.
Category
Detailed description
Resource Name
Name of the resource for which the notification occurred
Notification Type
Notification Occurrence Type
Announcement: Notification received about announcements
Service: Notification received per service
Cost Management: Notification received about Cost Management
IAM: Notification received about IAM
Notification Manager: Notification received about Notification Manager
Support: Notification received about Support such as inquiries, service requests, etc.
Creator
Notification Creator
Notification creation time
Time when the notification was created
Table. Notification detailed information items
Check Notification Settings
If you click the Notification button at the top right of the Console, you can view the list of the latest 10 notifications received in the Console.
Click the Notification Settings button next to Notification. The Notification > Notification Settings popup opens.
In the Notification > Notification Settings popup window, you can check the notification settings.
Category
Detailed description
Notification language
Language to receive notifications
Notification Target > Notification Type
Notification Types by Target
Announcements: Notifications received about announcements
Service: Notifications received per service
Cost Management: Notifications received about Cost Management
IAM: Notifications received about IAM
Notification Manager: Notifications received about Notification Manager
Support: Notifications received about Support such as inquiries, service requests, etc.
Table. Notification Settings Items
Edit notification settings
Click the Notification button at the top right of the Console to view the list of the latest 10 notifications received in the Console.
Click the Notification Settings button next to Notification. The Notification > Notification Settings popup window opens.
In the Notification > Notification Settings popup window, you can check the notification settings.
Click the Edit button in the Notification > Notification Settings popup window to modify the notification settings.
Category
Detailed description
Notification language
Language to receive notifications
Notification Target > Notification Type
For each notification target, you can select the notification channels to receive (email, SMS); the default is email
Announcements: Notifications received about announcements
Service: Notifications received per service
Cost Management: Notifications received about Cost Management
IAM: Notifications received about IAM
Notification Manager: Notifications received about Notification Manager
Support: Notifications received about Support such as inquiries, service requests, etc.
Table. Notification Settings Modification Items
Support
Support provides Support Center, Documentation, and Announcements.
Support Center provides technical support, standard architecture, incident response, service inquiries/answers, etc., needed when using Samsung Cloud Platform. Please refer to Support Center.
Documentation is a user guide that provides easy and clear guidance on the concepts of various services, Console usage, and utilization methods. See the User Guide.
Announcements are notices provided to users in the Samsung Cloud Platform Console.
Region
You can check the region of Samsung Cloud Platform and view the list of regions that can be changed.
The services provided by Samsung Cloud Platform are managed on a per-region basis, so the services offered may differ slightly. A region is a geographically separated unit of cloud service provision, consisting of one or more data centers to ensure availability. For some services such as the Samsung Cloud Platform Console or IAM, there is no need to select a region as they are global services.
Region Name
Region
Global
-
Korea West
Korea West (kr-west1)
Korea East
Korea East (kr-east1)
South Korea (administrative)
South Korea 1 (kr-south1)
South Korea (public)
South Korea 2 (kr-south2)
South Korea (Internet)
South Korea 3 (kr-south3)
Table. Console region selectable items
My menu
Click the profile button at the top right of the Console to view the functions provided in the My menu. You can check the Account ID and the IAM or Root username, and navigate to Account, My Info., and Cost Management.
Provided Features
Description
Account ID
Account ID logged into Samsung Cloud Platform Console
IAM username or Root username
IAM username or Root username logged into Samsung Cloud Platform Console
Through the generative AI-based work assistance service Copilot, you can use Samsung Cloud Platform more conveniently. Users can easily utilize Copilot using familiar interfaces and natural language, allowing them to conduct work accurately and efficiently.
You can start a conversation with Copilot by clicking the Copilot button in the Samsung Cloud Platform Console.
Question tips
Clearly describe the reason for your request and its ultimate goal to obtain an accurate answer.
To solve complex problems, make small, simple requests step by step.
Use the terminology used in Samsung Cloud Platform.
Caution
Sensitive work-related information cannot be used, so be careful not to enter it.
Answers generated by generative AI may contain inaccurate content, so be sure to review them before use.
For general questions about Samsung Cloud Platform, select General Query; for questions about the resources you are using, select Resource Lookup.
Using Copilot
You can use Copilot via the Copilot button at the top of the Samsung Cloud Platform Console and the Copilot button at the bottom right.
You can also use Copilot through the Copilot widget on the Dashboard of the Console Home page, which is the first login screen of the Samsung Cloud Platform Console.
Click the Copilot button at the top of the Console. The Copilot popup window opens.
Check the Copilot popup window. The features provided by Copilot are as follows.
Category
Detailed description
Ask > General Query
General questions about using Samsung Cloud Platform
When you drag text in the Samsung Cloud Platform Console, the Copilot button appears. Click the Copilot button to search the text via Copilot.
If you click the info button displayed next to the item on the creation page of each service, detailed guidance is provided through Copilot.
General Query
Through Copilot, you can ask general questions about the features, usage, etc. provided by Samsung Cloud Platform.
To make a general query to Copilot, follow the steps below.
Click the Copilot button at the top of the Console. The Copilot popup window opens.
In the prompt at the bottom of the Copilot popup window, select General Query and ask overall questions about Samsung Cloud Platform.
Example questions
Which services must be created in advance before creating a Virtual Server?
What network settings are required to create a Virtual Server?
Question tips
Clearly describe the reason for your request and its ultimate goal to obtain an accurate answer.
To solve complex problems, make small, simple requests step by step.
Use the terminology used in Samsung Cloud Platform.
Caution
Sensitive work-related information cannot be used, so be careful not to enter it.
Answers generated by generative AI may contain inaccurate information, so be sure to review them before use.
For general questions about Samsung Cloud Platform, select General Query; for questions about the resources you are using, select Resource Lookup.
Resource Lookup
You can ask questions about resources created on the Samsung Cloud Platform through Copilot.
To ask Copilot a question related to Resource, follow the steps below.
Click the Copilot button at the top of the Console. The Copilot popup window opens.
In the prompt at the bottom of the Copilot popup window, select Resource Lookup and ask a question about your Samsung Cloud Platform resources.
Example questions
Please show the list of Virtual Servers.
Show Load Balancers in the Active state.
Question tips
Clearly describe the reason for your request and its ultimate goal to obtain an accurate answer.
To solve complex problems, make small, simple requests step by step.
Use the terminology used in Samsung Cloud Platform.
Caution
Sensitive work-related information cannot be used, so be careful not to enter it.
Answers generated by generative AI may contain inaccurate content, so be sure to review them before use.
For general questions about Samsung Cloud Platform, select General Query; for questions about the resources you are using, select Resource Lookup.
Start a new conversation
You can start a new conversation with Copilot.
To start a new conversation with Copilot, follow the steps below.
Click the Copilot button at the top of the Console. Copilot popup window opens.
Click the Start New Conversation button at the top right of the Copilot popup window. A new conversation begins.
Check conversation list
You can view the list of conversations you had with Copilot.
To view the list of conversations with Copilot, follow the steps below.
Click the Copilot button at the top of the Console. The Copilot popup window opens.
Click the Conversation List button at the top left of the Copilot popup window. The conversation list expands.
Click the conversation you want to check in the list to move to that conversation.
Check recommended prompts
You can view the recommended prompt questions provided by Copilot.
To check the Copilot recommended prompt questions, follow the steps below.
Click the Copilot button at the top of the Console. The Copilot popup window opens.
Click the Recommended Prompt button on the left side of the prompt at the bottom of the Copilot popup window. The Recommended Prompt popup opens.
In the Recommended Prompt popup window, you can check questions useful for your work.
Category
Detailed description
Product
Are there any prerequisite services that need to be set up before creating a Virtual Server?
What are the constraints between Virtual Server and Bare Metal Server?
Are there any precautions when using a Bare Metal Server?
Pricing
What is the contract pricing policy?
What is the time basis for pricing calculation?
How can I check the estimated cost?
Member
How do I register for an IAM member?
If I forget my password, how can I reset it?
How do I change user information and password?
Other
How should network configuration and firewall settings be set up to access a Samsung Cloud Platform Virtual Machine from an overseas internal network?
If there are errors or improvements when using the service, how should I inquire?
Table. Recommended Prompt Question Items
Reference
Recommended prompts can also be found on the Service Home page of the service provided by Samsung Cloud Platform Console.
View in a new window
You can check and use Copilot in a new browser tab.
To use Copilot in a new tab, follow these steps.
Click the Copilot button at the top of the Console. The Copilot popup window opens.
Click the View in New Window button at the top right of the Copilot popup window. Copilot opens in a new browser tab.
Click the Copilot tab opened in the browser. You will be taken to the Copilot page.
Product
Are there any prerequisite services that must be set up before creating a Virtual Server?
What are the constraints between Virtual Server and Bare Metal Server?
Are there any precautions when using Bare Metal Server?
Pricing
What is the contract pricing policy?
What is the time basis for pricing calculation?
How can I check the estimated cost?
Member
How do I register an IAM member?
If I forget my password, how can I reset it?
How do I change user information and password?
Other
How should network configuration and firewall settings be set up to access a Samsung Cloud Platform Virtual Machine from an overseas internal network?
How should I inquire if there are errors or improvements when using the service?
You can also check the questions by clicking the Recommended Prompt button on the left side of the question input area.
Table. Copilot page detailed items
Reference
Recommended prompts can also be found on the Service Home page of the service provided in the Samsung Cloud Platform Console.
2 - Compute
Based on the highest level of stability in Korea, Compute provides optimal computing resources conveniently and flexibly according to the purpose of use.
2.1 - Virtual Server
2.1.1 - Overview
Service Overview
Virtual Server is a virtual server optimized for cloud computing. Without purchasing infrastructure resources such as CPU and Memory individually, you can freely allocate and use as much capacity as you need, whenever you need it. You can use resources with performance optimized for your computing purposes, such as development, testing, and application execution, in a cloud environment.
Key Features
Easy and convenient computing environment configuration: Through the web-based Console, users can handle everything from Virtual Server provisioning to resource and cost management as self-service. If you need to change the capacity of major resources such as CPU or Memory while using a Virtual Server, you can easily scale up or down without operator intervention.
Provision of various service types: Provides virtualized vCore/Memory resources according to predefined server types (1 to 128 vCores).
Standard Virtual Server: Provides commonly used computing specs (up to 16 vCores, 256 GB)
High Capacity Virtual Server: Provided when resources larger than the Standard Virtual Server specs are needed
Strong security: Protects servers by controlling Inbound/Outbound traffic to and from the external internet or other VPCs (Virtual Private Cloud) through the Security Group service. In addition, you can operate computing resources stably through real-time monitoring.
Service Architecture
Figure. Virtual Server Architecture
Provided Functions
Virtual Server provides the following functions.
Automatic provisioning and management: Provides everything from Virtual Server provisioning to resource and cost management through the web-based Console. If you need to change the capacity of major resources such as CPU or Memory while using a Virtual Server, you can change it immediately using the server type modification function.
Standard server types and Images: Provides virtualized vCore/Memory resources according to standard server types, along with standard OS Images.
Storage connection: Provides additional storage beyond the OS disk. You can additionally attach Block Storage, File Storage, and Object Storage.
Network connection: You can connect a Virtual Server's general subnet/IP and Public NAT IP. A local subnet connection is provided for communication between servers. These settings can be modified on the detail page.
Security Group: Protects servers by controlling Inbound/Outbound traffic to and from the external internet or other VPCs through the Security Group service.
Monitoring: You can check monitoring information for computing resources, such as CPU, Memory, and Disk, through the Cloud Monitoring service.
Backup and recovery: You can back up and recover Virtual Server Images through the Backup service.
Cost management: You can create, stop, and terminate servers as needed; since billing is based on actual usage time, you can check costs according to usage.
ServiceWatch integration: You can monitor data through the ServiceWatch service.
Components
Virtual Server provides standard server types and standard OS Images. Users can select and use them according to the desired service scale.
Image
You can create and manage Images. The main functions are as follows.
Image creation: You can create an Image from the configuration of a Virtual Server in use, or create an Image by uploading your own Image file to Object Storage.
Shared Image creation: You can create a Shared Image from an Image with Private visibility.
Sharing to another Account: You can share an Image with another Account.
Server Group
Through Server Group settings, you can place a Virtual Server, and the Block Storage added when creating it, close together or distributed across racks and hosts. The main functions are as follows.
Server Group creation: You can set Virtual Servers belonging to the same Server Group to Anti-Affinity (distributed placement), Affinity (close placement), or Partition (Virtual Server and Block Storage distributed placement).
The OS Images provided by Virtual Server are as follows.
OS Image Version
EoS Date
Alma Linux 8.10
2029-05-31
Alma Linux 9.6
2025-11-17
Oracle Linux 8.10
2029-07-31
Oracle Linux 9.6
2025-11-25
RHEL 8.10
2029-05-31
RHEL 9.4
2026-04-30
RHEL 9.6
2027-05-31
Rocky Linux 8.10
2029-05-31
Rocky Linux 9.6
2025-11-30
Ubuntu 22.04
2027-06-30
Ubuntu 24.04
2029-06-30
Windows 2016
2027-01-12
Windows 2019
2029-01-09
Windows 2022
2031-10-14
Table. Virtual Server Provided OS Image Versions
Reference
Linux operating systems such as Alma Linux and Rocky Linux are provided only in even minor versions, except for the last release of each major version. This policy ensures the stability and consistency of the SCP system.
Check the EOS (End of Support) and EOL (End of Life) dates of the operating system, and apply new or additional individual packages as needed to maintain a stable environment.
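To keep environments current, it can help to compare each Image's EoS date against the current date. Below is a minimal Python sketch of such a check; the dictionary is a hand-copied subset of the table above, not data fetched from the platform.

```python
from datetime import date

# Subset of the EoS dates listed in the table above (hand-copied, illustrative)
EOS_DATES = {
    "Ubuntu 22.04": date(2027, 6, 30),
    "Ubuntu 24.04": date(2029, 6, 30),
    "Rocky Linux 9.6": date(2025, 11, 30),
    "Windows 2016": date(2027, 1, 12),
}

def is_supported(image: str, today: date) -> bool:
    """Return True if the OS image has not yet passed its EoS date."""
    return today <= EOS_DATES[image]

print(is_supported("Ubuntu 24.04", date(2026, 1, 1)))     # → True
print(is_supported("Rocky Linux 9.6", date(2026, 1, 1)))  # → False
```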
Server Type
The server types supported by Virtual Server are as follows. For details on server types, refer to Virtual Server Server Type.
Standard s1v2m4
Category
Example
Detailed Description
Server Type
Standard
Classification of provided server types
Standard: Composed of commonly used standard specs (vCPU, Memory)
High Capacity: Larger-capacity server specs than Standard
Server Spec
s1
Server type family and generation
s1: s means standard spec; 1 means the 1st generation provided in Samsung Cloud Platform v2
s2: s means standard spec; 2 means the 2nd generation provided in Samsung Cloud Platform v2
h2: h means high-capacity server spec; 2 means the 2nd generation provided in Samsung Cloud Platform v2
Server Spec
v2
Number of vCores
v2: 2 virtual cores
Server Spec
m4
Memory capacity
m4: 4GB Memory
Table. Virtual Server Server Type
Constraints
Reference
If you create a Virtual Server with Rocky Linux or Oracle Linux, additional settings are required for time synchronization (NTP: Network Time Protocol). Other Images are configured automatically and require no separate settings. For details, refer to Configure Linux NTP.
If you created a RHEL or Windows Server before August 2025, you need to modify the RHEL Repository and WKMS (Windows Key Management Service) settings. For details, refer to Configure RHEL Repo and WKMS.
Prerequisite Services
This is a list of services that need to be configured in advance before creating this service. For details, please prepare in advance by referring to the guide provided for each service.
2.1.1.1 - Server Type
Virtual Server provides server types suitable for the purpose of use. A server type consists of various combinations of CPU, Memory, Network Bandwidth, and so on. The host server used by a Virtual Server is determined by the server type selected when creating it. Select a server type according to the specifications of the application you want to run on the Virtual Server.
The server types supported by Virtual Server are as follows.
Standard s1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type family
Standard: Composed of commonly used standard specifications (vCPU, Memory)
High Capacity: Larger-capacity server specifications than Standard
Server Specification
s1
Server type family and generation
s1: s means standard specification; 1 means the 1st generation provided by Samsung Cloud Platform v2
s2: s means standard specification; 2 means the 2nd generation provided by Samsung Cloud Platform v2
h2: h means high-capacity server specification; 2 means the 2nd generation provided by Samsung Cloud Platform v2
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory Capacity
m4: 4GB Memory
Table. Virtual Server server type format
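The server type format described above can be decoded programmatically. The following minimal Python sketch parses a server type string into its components; the function name and return shape are illustrative, not part of the platform.

```python
import re

def parse_server_type(name: str) -> dict:
    """Split a server type such as 's1v2m4' into its parts:
    family (s = Standard, h = High Capacity), generation, vCores, memory (GB)."""
    m = re.fullmatch(r"([sh])(\d+)v(\d+)m(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized server type: {name}")
    return {
        "family": {"s": "Standard", "h": "High Capacity"}[m.group(1)],
        "generation": int(m.group(2)),
        "vcores": int(m.group(3)),
        "memory_gb": int(m.group(4)),
    }

print(parse_server_type("s1v2m4"))
# → {'family': 'Standard', 'generation': 1, 'vcores': 2, 'memory_gb': 4}
print(parse_server_type("h2v128m1536")["memory_gb"])  # → 1536
```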
s1 Server Type
The s1 server type of Virtual Server provides standard specifications (vCPU, Memory) and is suitable for a wide range of applications.
1st generation of Samsung Cloud Platform v2: up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
s1v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
s1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
s1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
s1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
s1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
s1v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
s1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
s1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
s1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
s1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
s1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
s1v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
s1v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
s1v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
s1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
s1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
s1v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
s1v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
s1v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
s1v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
s1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
s1v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
s1v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
s1v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
s1v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
s1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
s1v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
s1v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
s1v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
s1v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
s1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
s1v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
s1v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
s1v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
s1v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
s1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
s1v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
s1v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
s1v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
s1v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
s1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Virtual Server server type specifications - s1 server type
s2 Server Type
The s2 server type of Virtual Server provides standard specifications (vCPU, Memory) and is suitable for a wide range of applications.
2nd generation of Samsung Cloud Platform v2: up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
s2v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
s2v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
s2v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
s2v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
s2v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
s2v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
s2v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
s2v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
s2v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
s2v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
s2v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
s2v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
s2v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
s2v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
s2v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
s2v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
s2v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
s2v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
s2v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
s2v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
s2v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
s2v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
s2v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
s2v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
s2v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
s2v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
s2v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
s2v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
s2v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
s2v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
s2v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
s2v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
s2v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
s2v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
s2v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
s2v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
s2v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
s2v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
s2v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
s2v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
s2v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Virtual Server server type specifications - s2 server type
h2 Server Type
The h2 server type of Virtual Server provides large-capacity server specifications and is suitable for applications that process large-scale data.
2nd generation of Samsung Cloud Platform v2: up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
High Capacity
h2v24m48
24 vCore
48 GB
Up to 25 Gbps
High Capacity
h2v24m96
24 vCore
96 GB
Up to 25 Gbps
High Capacity
h2v24m192
24 vCore
192 GB
Up to 25 Gbps
High Capacity
h2v24m288
24 vCore
288 GB
Up to 25 Gbps
High Capacity
h2v32m64
32 vCore
64 GB
Up to 25 Gbps
High Capacity
h2v32m128
32 vCore
128 GB
Up to 25 Gbps
High Capacity
h2v32m256
32 vCore
256 GB
Up to 25 Gbps
High Capacity
h2v32m384
32 vCore
384 GB
Up to 25 Gbps
High Capacity
h2v48m96
48 vCore
96 GB
Up to 25 Gbps
High Capacity
h2v48m192
48 vCore
192 GB
Up to 25 Gbps
High Capacity
h2v48m384
48 vCore
384 GB
Up to 25 Gbps
High Capacity
h2v48m576
48 vCore
576 GB
Up to 25 Gbps
High Capacity
h2v64m128
64 vCore
128 GB
Up to 25 Gbps
High Capacity
h2v64m256
64 vCore
256 GB
Up to 25 Gbps
High Capacity
h2v64m512
64 vCore
512 GB
Up to 25 Gbps
High Capacity
h2v64m768
64 vCore
768 GB
Up to 25 Gbps
High Capacity
h2v72m144
72 vCore
144 GB
Up to 25 Gbps
High Capacity
h2v72m288
72 vCore
288 GB
Up to 25 Gbps
High Capacity
h2v72m576
72 vCore
576 GB
Up to 25 Gbps
High Capacity
h2v72m864
72 vCore
864 GB
Up to 25 Gbps
High Capacity
h2v96m192
96 vCore
192 GB
Up to 25 Gbps
High Capacity
h2v96m384
96 vCore
384 GB
Up to 25 Gbps
High Capacity
h2v96m768
96 vCore
768 GB
Up to 25 Gbps
High Capacity
h2v96m1152
96 vCore
1152 GB
Up to 25 Gbps
High Capacity
h2v128m256
128 vCore
256 GB
Up to 25 Gbps
High Capacity
h2v128m512
128 vCore
512 GB
Up to 25 Gbps
High Capacity
h2v128m1024
128 vCore
1024 GB
Up to 25 Gbps
High Capacity
h2v128m1536
128 vCore
1536 GB
Up to 25 Gbps
Table. Virtual Server server type specifications - h2 server type
2.1.1.2 - Monitoring Metrics
Virtual Server monitoring metrics
The following table shows the monitoring metrics of Virtual Server that can be checked through Cloud Monitoring. For more information on how to use Cloud Monitoring, please refer to the Cloud Monitoring guide.
Basic monitoring metrics are available without installing an Agent; see Table. Virtual Server Monitoring Metrics (Basic) below. Additional metrics that can be collected by installing an Agent are listed in Table. Virtual Server Additional Monitoring Metrics (Agent Installation Required).
For Windows OS, memory-related metrics can only be retrieved if the Agent is installed.
Performance Item
Detailed Description
Unit
Memory Total [Basic]
Total memory bytes
bytes
Memory Used [Basic]
Currently used memory bytes
bytes
Memory Swap In [Basic]
Bytes of memory swapped in
bytes
Memory Swap Out [Basic]
Bytes of memory swapped out
bytes
Memory Free [Basic]
Unused memory bytes
bytes
Disk Read Bytes [Basic]
Read bytes
bytes
Disk Read Requests [Basic]
Number of Read Requests
cnt
Disk Write Bytes [Basic]
Write bytes
bytes
Disk Write Requests [Basic]
Number of Write Requests
cnt
CPU Usage [Basic]
1-minute average system CPU usage rate
%
Instance State [Basic]
Instance Status
state
Network In Bytes [Basic]
Received bytes
bytes
Network In Dropped [Basic]
Received packets dropped
cnt
Network In Packets [Basic]
Received packet count
cnt
Network Out Bytes [Basic]
Transmitted bytes
bytes
Network Out Dropped [Basic]
Transmitted packets dropped
cnt
Network Out Packets [Basic]
Transmitted packet count
cnt
Table. Virtual Server Monitoring Metrics (Basic)
Performance Item
Detailed Description
Unit
Core Usage [IO Wait]
The ratio of CPU time spent in waiting state (disk waiting)
%
Core Usage [System]
The ratio of CPU time spent in kernel space
%
Core Usage [User]
The ratio of CPU time spent in user space
%
CPU Cores
The number of CPU cores on the host
cnt
CPU Usage [Active]
Percentage of CPU time in use, excluding Idle and IO Wait states
%
CPU Usage [Idle]
The ratio of CPU time spent in idle state.
%
CPU Usage [IO Wait]
The ratio of CPU time spent in a waiting state (disk waiting)
%
CPU Usage [System]
The percentage of CPU time used by the kernel
%
CPU Usage [User]
The percentage of CPU time used in the user area
%
CPU Usage/Core [Active]
Percentage of CPU time in use, excluding Idle and IO Wait states
%
CPU Usage/Core [Idle]
The ratio of CPU time spent in idle state.
%
CPU Usage/Core [IO Wait]
The ratio of CPU time spent in a waiting state (disk waiting)
%
CPU Usage/Core [System]
The percentage of CPU time used by the kernel
%
CPU Usage/Core [User]
The percentage of CPU time used in the user area
%
Disk CPU Usage [IO Request]
The ratio of CPU time spent executing input/output requests for the device
%
Disk Queue Size [Avg]
The average queue length of requests executed for the device.
num
Disk Read Bytes
The number of bytes read from the device per second.
bytes
Disk Read Bytes [Delta Avg]
Average of system.diskio.read.bytes_delta for each Disk
bytes
Disk Read Bytes [Delta Max]
Maximum of system.diskio.read.bytes_delta for each Disk
Sum of system.network.in.bytes_delta for each network
bytes
Network In Bytes [Delta]
Received byte count delta
bytes
Network In Dropped
Number of packets dropped among incoming packets
cnt
Network In Errors
Number of errors during reception
cnt
Network In Packets
Received packet count
cnt
Network In Packets [Delta Avg]
Individual Networks’ average of system.network.in.packets_delta
cnt
Network In Packets [Delta Max]
Individual Network’s system.network.in.packets_delta maximum value
cnt
Network In Packets [Delta Min]
Individual Network’s system.network.in.packets_delta minimum value
cnt
Network In Packets [Delta Sum]
The sum of system.network.in.packets_delta of individual Networks
cnt
Network In Packets [Delta]
Received packet count delta
cnt
Network Out Bytes
Sent byte count
bytes
Network Out Bytes [Delta Avg]
Average of system.network.out.bytes_delta for each network
bytes
Network Out Bytes [Delta Max]
Maximum of system.network.out.bytes_delta for each network
bytes
Network Out Bytes [Delta Min]
Minimum of system.network.out.bytes_delta for each network
bytes
Network Out Bytes [Delta Sum]
Sum of system.network.out.bytes_delta for each network
bytes
Network Out Bytes [Delta]
Sent byte count delta
bytes
Network Out Dropped
Number of packets dropped among outgoing packets
cnt
Network Out Errors
Number of errors during transmission
cnt
Network Out Packets
Transmitted packet count
cnt
Network Out Packets [Delta Avg]
Average of system.network.out.packets_delta for each network
cnt
Network Out Packets [Delta Max]
Maximum of system.network.out.packets_delta for each network
cnt
Network Out Packets [Delta Min]
Minimum of system.network.out.packets_delta for each network
cnt
Network Out Packets [Delta Sum]
Sum of system.network.out.packets_delta for each network
cnt
Network Out Packets [Delta]
Sent packet count delta
cnt
Open Connections [TCP]
All open TCP connections
cnt
Open Connections [UDP]
All open UDP connections
cnt
Port Usage
Accessible port usage rate
%
SYN Sent Sockets
Number of sockets in SYN_SENT state (when connecting from local to remote)
cnt
Kernel PID Max
kernel.pid_max value
cnt
Kernel Thread Max
Maximum number of kernel threads
cnt
Process CPU Usage
The percentage of CPU time consumed by the process after the last update
%
Process CPU Usage/Core
The percentage of CPU time used by the process since the last event
%
Process Memory Usage
Percentage of main memory (RAM) occupied by the process
%
Process Memory Used
Resident Set Size: the amount of memory the process occupies in RAM
bytes
Process PID
Process pid
pid
Process PPID
Parent process’s pid
pid
Processes [Dead]
Number of dead processes
cnt
Processes [Idle]
Number of idle processes
cnt
Processes [Running]
Number of running processes
cnt
Processes [Sleeping]
Number of sleeping processes
cnt
Processes [Stopped]
Number of stopped processes
cnt
Processes [Total]
Total number of processes
cnt
Processes [Unknown]
Number of processes whose status is unknown or cannot be retrieved
cnt
Processes [Zombie]
Number of zombie processes
cnt
Running Process Usage
Process usage rate
%
Running Processes
Number of running processes
cnt
Running Thread Usage
Thread usage rate
%
Running Threads
Total number of threads in running processes
cnt
Context Switches
Number of context switches (per second)
cnt
Load/Core [1 min]
Load average for the last 1 minute divided by the number of cores
cnt
Load/Core [15 min]
Load average for the last 15 minutes divided by the number of cores
cnt
Load/Core [5 min]
Load average for the last 5 minutes divided by the number of cores
cnt
Multipaths [Active]
Number of external storage connection paths in active state
cnt
Multipaths [Failed]
Number of external storage connection paths in failed state
cnt
Multipaths [Faulty]
Number of external storage connection paths in faulty state
cnt
NTP Offset
Measured offset of the last sample (time difference between the NTP server and the local environment)
num
Run Queue Length
Length of the queue of processes waiting to run
num
Uptime
OS uptime in milliseconds
ms
Context Switches CPU
number of context switches (per second)
cnt
Disk Read Bytes [Sec]
Bytes read from the Windows logical disk in 1 second
Windows only
cnt
Disk Read Time [Avg]
Average data read time (seconds)
Windows only
sec
Disk Transfer Time [Avg]
Average disk wait time (seconds)
Windows only
sec
Disk Write Bytes [Sec]
The number of bytes written to the Windows logical disk in 1 second
Windows only
cnt
Disk Write Time [Avg]
Average data write time (seconds)
Windows only
sec
Pagingfile Usage
Paging file usage rate
Windows only
%
Pool Used [Non Paged]
Nonpaged Pool usage among kernel memory
Windows only
bytes
Pool Used [Paged]
Paged Pool usage among kernel memory
Windows only
bytes
Process [Running]
The number of processes currently running
Windows only
cnt
Threads [Running]
The number of threads currently running
Windows only
cnt
Threads [Waiting]
The number of threads waiting for processor time
Windows only
cnt
Table. Virtual Server Additional Monitoring Metrics (Agent Installation Required)
2.1.1.3 - ServiceWatch Metrics
Virtual Server sends metrics to ServiceWatch. Basic monitoring provides data collected at 5-minute intervals; when detailed monitoring is enabled, you can view data collected at 1-minute intervals.
Reference
To view metrics in ServiceWatch, refer to the ServiceWatch guide.
The following are basic metrics for the namespace Virtual Server.
In the table below, metrics with metric names marked in bold are selected as key metrics among the basic metrics provided by Virtual Server.
Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch. You can also check key metrics on the monitoring tab of the Virtual Server detail page.
For each metric, the user guide indicates which statistic values are meaningful when querying it; among these, the statistic marked in bold is the key statistic value. On the service dashboard or monitoring tab, key metrics are displayed using their key statistic values.
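As an arithmetic illustration only: each statistic value is an aggregate of the raw samples in a collection window. The sketch below mirrors how Sum, Average, Maximum, and Minimum would be derived from five 1-minute CPU Usage samples (ServiceWatch performs this aggregation itself; the function name is illustrative):

```python
def summarize(samples):
    """Compute the statistic values reported for a metric over one
    aggregation window: Sum, Average, Maximum, Minimum."""
    return {
        "Sum": sum(samples),
        "Average": sum(samples) / len(samples),
        "Maximum": max(samples),
        "Minimum": min(samples),
    }

# Five 1-minute CPU Usage (%) samples rolled up into one 5-minute data point
cpu = [12.0, 18.5, 25.0, 16.5, 8.0]
print(summarize(cpu))
# → {'Sum': 80.0, 'Average': 16.0, 'Maximum': 25.0, 'Minimum': 8.0}
```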
Performance Item (Metric Name)
Detailed Description
Unit
Meaningful Statistics
Instance State
Instance state display
1 - Active
0 - Off
None
Sum
CPU Usage
CPU usage
Percent
Average
Maximum
Minimum
Disk Read Bytes
Amount read from block device (bytes)
Bytes
Sum
Average
Maximum
Minimum
Disk Read Requests
Number of read requests from block device
Count
Sum
Average
Maximum
Minimum
Disk Write Bytes
Amount written to block device (bytes)
Bytes
Sum
Average
Maximum
Minimum
Disk Write Requests
Number of write requests to block device
Count
Sum
Average
Maximum
Minimum
Network In Bytes
Amount received on network interface (bytes)
Bytes
Sum
Average
Maximum
Minimum
Network In Dropped
Number of received packets dropped on network interface
Count
Sum
Average
Maximum
Minimum
Network In Packets
Number of received packets on network interface
Count
Sum
Average
Maximum
Minimum
Network Out Bytes
Amount transmitted on network interface (bytes)
Bytes
Sum
Average
Maximum
Minimum
Network Out Dropped
Number of transmitted packets dropped on network interface
Count
Sum
Average
Maximum
Minimum
Network Out Packets
Number of transmitted packets on network interface
Count
Sum
Average
Maximum
Minimum
Table. Virtual Server Basic Metrics
Reference
For information on how to collect metrics using ServiceWatch Agent, refer to the ServiceWatch Agent guide.
2.1.2 - How-to guides
Users can create Virtual Server services by entering required information and selecting detailed options through Samsung Cloud Platform Console.
Create Virtual Server
You can create and use Virtual Server service in Samsung Cloud Platform Console.
To create a Virtual Server, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Create Virtual Server button. You will be moved to the Create Virtual Server page.
On the Create Virtual Server page, enter information required for service creation and select detailed options.
In the Image and version selection area, select the required information.
Category
Required
Detailed Description
Image
Required
Select the type of Image provided
Standard: Samsung Cloud Platform standard provided Image
Alma Linux, Oracle Linux, RHEL, Rocky Linux, Ubuntu, Windows
Custom: User created Image
Kubernetes: Image for Kubernetes
RHEL, Ubuntu
Marketplace: Image subscribed from Marketplace
Image Version
Required
Select version of selected Image
Provides version list of provided server Image
Table. Virtual Server Image and Version Selection Input Items
In the Service information input area, enter or select the required information.
Category
Required
Detailed Description
Server Count
Required
Number of servers to create simultaneously
Only numbers can be entered; enter a value between 1 and 100
Service Type > Server Type
Required
Virtual Server server type
Standard: Standard specs generally used
High Capacity: Large capacity server specs larger than Standard
Delete on termination: When Delete on Termination is selected, the Volume is also deleted when the server is terminated
Volumes with snapshots are not deleted even when Delete on termination is enabled
Multi-attach volumes are deleted only when the server being deleted is the last remaining server connected to the volume
Max IOPS: Enter a maximum IOPS value between 5,000 and 20,000
Cannot be set when the disk type is HDD, HDD_KMS, or HDD_MultiAttach
Server Group
Optional
Set servers belonging to the same Server Group to Anti-Affinity (distributed placement), Affinity (close placement), Partition (Virtual Server and Block Storage distributed placement)
Select Use and then select Server Group
Select Create New to create Server Group
Servers belonging to the same Server Group are placed in Best Effort manner according to selected policy
Select policy from Anti-Affinity (distributed placement), Affinity (close placement), Partition (Virtual Server and Block Storage distributed placement)
Table. Virtual Server Service Information Input Items
Caution
When using Partition (Virtual Server and Block Storage distributed placement) policy among Server Group policies, Block Storage Volume cannot be additionally allocated after Virtual Server creation, so create all necessary Block Storage at the Virtual Server creation stage.
In the Required information input area, enter or select the required information.
Category
Required
Detailed Description
Server Name
Required
Enter a name to identify the server when the selected server count is 1
The hostname is set to the entered server name
Enter up to 63 characters using English letters, numbers, spaces, and the special characters - and _
Network Settings > Create New Network Port
Required
Set network where Virtual Server will be installed
VPC Name: Select pre-created VPC
General Subnet: Select pre-created general Subnet
IP: Select Auto Create or Input; if Input is selected, you can enter the IP directly
NAT: Available only when the server count is 1 and an Internet Gateway is connected to the VPC. Check Use to select a NAT IP
NAT IP: Select NAT IP
If there is no NAT IP to select, click Create New button to create Public IP
Click Refresh button to check and select created Public IP
When Public IP is created, fees are charged according to Public IP fee standard
Local Subnet (Optional): Select Use for Local Subnet
Not required for service creation
Must select pre-created Local Subnet
IP: Select Auto Create or Input; if Input is selected, you can enter the IP directly
Security Group: Settings required to connect to server
Select: Select pre-created Security Group
Create New: If there is no Security Group to apply, can create separately in Security Group service
Can select up to 5
If Security Group is not set, all connections are blocked by default
Security Group must be set to allow necessary connections
Network Settings > Specify Existing Network Port
Required
Set network where Virtual Server will be installed
VPC: Select pre-created VPC
General Subnet: Select pre-created general Subnet and Port
NAT: Available only when the server count is 1 and an Internet Gateway is connected to the VPC. Check Use to select a NAT IP
NAT IP: Select NAT IP
If there is no NAT IP to select, click Create New button to create Public IP
Click Refresh button to check and select created Public IP
Local Subnet (Optional): Select Use for Local Subnet
Select pre-created Local Subnet and Port
Keypair
Required
User authentication method to use when connecting to the server
Create New: Create a new one when a new Keypair is needed
For the new Keypair creation method, refer to Create Keypair
Default login account list by OS
Alma Linux: almalinux
Oracle Linux: cloud-user
RHEL: cloud-user
Rocky Linux: rocky
Ubuntu: ubuntu
Windows: sysadmin
Table. Virtual Server Required Information Input Items
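The default login account list above can be expressed as a small shell helper for scripting connections. The function name and the keypair file name are illustrative, not part of the platform.

```shell
# Sketch: OS-to-default-account mapping from the table above.
# Example usage (IP and keypair file are placeholders):
#   ssh -i my-keypair.pem "$(default_user Ubuntu)"@<server IP>
default_user() {
  case "$1" in
    "Alma Linux")   echo almalinux ;;
    "Oracle Linux") echo cloud-user ;;
    "RHEL")         echo cloud-user ;;
    "Rocky Linux")  echo rocky ;;
    "Ubuntu")       echo ubuntu ;;
    "Windows")      echo sysadmin ;;
    *)              echo "unknown OS" >&2; return 1 ;;
  esac
}
```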
In the Additional information input area, enter or select the required information.
Category
Required
Detailed Description
Lock
Optional
Set whether to use Lock
When Lock is used, prevents operations such as server termination, start, stop to prevent accidental malfunction
Init Script
Optional
Script to execute when server starts
Depending on the Image type, the init script must be written as a Batch script for Windows, or as a Shell script or cloud-init script for Linux.
Can enter up to 45,000 bytes
Tag
Optional
Add tags
Can add up to 50 per resource
Click Add Tag button and then enter or select Key, Value values
Table. Virtual Server Additional Information Input Items
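For the Init Script item above, a minimal Linux Shell-script example might look like the following. The file path and message are illustrative assumptions, not console defaults.

```shell
#!/bin/bash
# Hypothetical init script (Shell script type for a Linux Image):
# records a provisioning marker at first boot. Path/message are examples only.
echo "provisioned at $(date -u +%FT%TZ)" > /tmp/first-boot.log
```

Before pasting a script, you can check that it stays under the 45,000-byte limit with, for example, `wc -c < init.sh`.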
On the Summary panel, check the created detailed information and estimated billing amount, then click the Create button.
When creation is complete, check the created resource on the Virtual Server List page.
Notice
When entering a server name, any spaces and underscores (_) are converted to hyphens (-) when the OS hostname is set. Keep this in mind when setting the OS hostname.
Example: If the server name is 'server name_01', the OS hostname is set to 'server-name-01'.
If you need to keep server names unique, create servers with different server names (prefixes).
When creating servers, numbers are not automatically incremented based on the server name (prefix), so Virtual Servers with the same name may be created.
Example: If you first create 2 Virtual Servers using 'test' as the server name (prefix), 'test-1' and 'test-2' are created. If you later create 2 more Virtual Servers using 'test' as the prefix again, 'test-1' and 'test-2' are created again.
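The hostname conversion described in the notice above can be sketched in one line of shell; the helper name is illustrative.

```shell
# Sketch: spaces and underscores in the server name become hyphens in the OS hostname.
to_hostname() { printf '%s' "$1" | tr ' _' '--'; }
to_hostname "server name_01"   # → server-name-01
```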
Reference
If you create Virtual Server with Rocky Linux or Oracle Linux, additional settings are required for time synchronization (NTP: Network Time Protocol). For details, refer to Configure Linux NTP.
For RHEL and Windows Server instances created before July 2025, you need to modify the RHEL Repository and WKMS (Windows Key Management Service) settings. For details, refer to Configure RHEL Repo and WKMS.
Check Virtual Server Details
In the Virtual Server service, you can check and modify the overall resource list and detailed information. The Virtual Server Detail page is composed of Detail Information, Monitoring, Tags, and Operation History tabs.
To check detailed information of Virtual Server service, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to check detailed information for. You will be moved to the Virtual Server Detail page.
Virtual Server Detail page displays status information and additional function information, and is composed of Detail Information, Monitoring, Tags, Operation History tabs.
Networking: Process in progress during server creation
Scheduling: Process in progress during server creation
Block_Device_Mapping: Connecting Block Storage during server creation
Spawning: State where server creation process is in progress
Active: Usable state
Powering_off: State when stop is requested
Deleting: Server deletion in progress
Reboot_Started: State where Reboot is in progress
Error: Error state
Migrating: State where server is being migrated to another host
Reboot: State where Reboot command is transmitted
Rebooting: Restart in progress
Rebuild: State where Rebuild command is transmitted
Rebuilding: State when Rebuild is requested
Rebuild_Spawning: State where Rebuild process is in progress
Resize: State where Resize command is transmitted
Resizing: Resize in progress
Resize_Prep: State when server type modification is requested
Resize_Migrating: State where server is moving to another host while Resize is in progress
Resize_Migrated: State where server has completed moving to another host while Resize is in progress
Resize_Finish: Resize completed
Revert_Resize: Server Resize or migration failed for some reason. Target server is cleaned and original source server is restarted
Shutoff: State when Powering off is completed
Verify_Resize: State where server type confirmation or server type reversion is selectable after Resize_Prep progresses according to the server type modification request
Resize_Reverting: State when server type reversion is requested
Resize_Confirming: State where server Resize request is being confirmed
Server Control
Buttons to change server status
Start: Start stopped server
Stop: Stop running server
Restart: Restart running server
Create Image
Create a user image from the current server image
For detailed image creation method, refer to Create Image
Console Log
Check current server console log
Can check console log output from current server. For details, refer to Check Console Log
Create Dump
Create current server Dump
Dump file is created inside Virtual Server
For detailed Dump creation method, refer to Create Dump
Rebuild
Delete existing Virtual Server OS area data and settings, and configure by Rebuilding as new server
Display whether ServiceWatch detailed monitoring is enabled
To enable ServiceWatch detailed monitoring, click the Modify button to set it
For details, refer to Enable ServiceWatch Detailed Monitoring
For details on the ServiceWatch service, refer to ServiceWatch Overview
Not provided for Virtual Servers created by an Auto-Scaling Group or from Marketplace
Network
Virtual Server network information
VPC, General Subnet, IP and status, Public NAT IP and status, Private NAT IP and status, Virtual IP, Security Group
If an IP change is needed, click the Modify button to set it
Can only be modified when the Virtual Server status is Active or Shutoff
For the Default port, Default is displayed next to the IP, and it cannot be detached
If a Security Group change is needed, click the Modify button to set it
If a Virtual IP change is needed, modify it on the Virtual IP Management tab of the Networking > VPC > Subnet Detail page
Add as New Network Port: Select General Subnet and IP
Can select another General Subnet within the same VPC
IP: Select Auto Create or Input; if Input is selected, you can enter the IP directly
Add as Existing Network Port: Select a pre-created General Subnet and port
Local Subnet
Virtual Server Local Subnet information
Local Subnet, Local Subnet IP, Security Group name, Virtual IP
If a Security Group change is needed, click the Modify button to set it
Add as New Network Port: Select Local Subnet and IP
Can select another Local Subnet within the same VPC
IP: Select Auto Create or Input; if Input is selected, you can enter the IP directly
Add as Existing Network Port: Select a pre-created Local Subnet and port
Block Storage
Block Storage information connected to the server
Volume ID, Volume name, Disk type, Capacity, Connection information, Type, Delete on termination, Status
Add: Connect additional Block Storage when needed
Modify Delete on termination: Modify the Delete on termination value of the Block Storage selected from the list
More > Disconnect: Disconnect the Block Storage selected from the list
Cannot disconnect the OS default Storage
Table. Virtual Server Detail Information Tab Items
Monitoring
On the Virtual Server List page, you can monitor the ServiceWatch metrics of the selected resource.
On the Monitoring tab, you can view monitoring charts for the Virtual Server; each chart is based on the available ServiceWatch metrics.
Category
Detailed Description
Period Setting Area
Select period to apply to chart
The metrics query period can be set from the current time back to a maximum of 455 days
Timezone Setting Area
Select timezone to apply to chart
Reset Button
Reset all manipulations or settings made on chart
Refresh Setting Area
Select refresh period of chart
Refresh button redisplays information based on current time
Click refresh period to select desired period: Off, 10 seconds, 1 minute, 2 minutes, 5 minutes, 15 minutes
Tags
On the Virtual Server List page, you can check tag information of selected resource, and add, modify, or delete tags.
Category
Detailed Description
Tag List
Tag list
Can check tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from existing Key and Value lists
Table. Virtual Server Tag Tab Items
Operation History
On the Virtual Server List page, you can check operation history of selected resource.
Category
Detailed Description
Operation History List
Resource change history
Can check operation details, operation date and time, resource type, resource name, operation result, operator information
Click the corresponding resource in the Operation History List. The Operation History Detail popup window opens.
Table. Virtual Server Operation History Tab Detail Information Items
Control Virtual Server Operation
If operation control of created Virtual Server resource is needed, you can perform tasks on Virtual Server List or Virtual Server Detail page.
You can start stopped servers, and stop or restart running servers.
Start Virtual Server
You can start stopped (Shutoff) Virtual Server. To start Virtual Server, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to start among stopped (Shutoff) servers to move to the Virtual Server Detail page.
On the Virtual Server List page, you can Start through More button on the right for each resource.
After selecting multiple servers with checkbox, you can control multiple servers simultaneously through Start button at the top.
On the Virtual Server Detail page, click Start button at the top to start the server. Check changed server status in Status Display item.
When Virtual Server start is completed, server status changes from Shutoff to Active.
If server control and management functions of created Virtual Server resource are needed, you can perform tasks on Virtual Server List or Virtual Server Detail page.
Create Image
You can create Image of running Virtual Server.
Reference
This content guides how to create user Image from running Virtual Server.
Create user Image by clicking Create Image button on Virtual Server List or Virtual Server Detail page.
When Virtual Server server type is changed, monitoring performance metric data may not be collected normally for a while. Normal performance metrics are collected in the next collection cycle (1 minute).
If an IP change is performed, you can no longer communicate with the previous IP, and the IP change cannot be cancelled while in progress.
The server is rebooted to apply the changed IP.
If the server belongs to a Load Balancer server group, you must delete the existing IP from the LB server group and add the changed IP as a member of the LB server group.
For servers using Public NAT/Private NAT, first disable Public NAT/Private NAT, and after the IP change is complete, enable it again.
Whether Public NAT/Private NAT is used can be changed by clicking the Modify button of Public NAT IP/Private NAT IP on the Virtual Server Detail page.
Enable ServiceWatch Detailed Monitoring
By default, Virtual Server is linked with ServiceWatch for basic monitoring. If needed, you can enable detailed monitoring to more quickly identify and respond to operational issues. For details on ServiceWatch, refer to ServiceWatch Overview.
Caution
Basic monitoring is provided free of charge, but additional fees are charged when detailed monitoring is enabled. Please be careful when using.
To enable Virtual Server ServiceWatch detailed monitoring, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to enable ServiceWatch detailed monitoring. You will be moved to the Virtual Server Detail page.
On the Virtual Server Detail page, click ServiceWatch detailed monitoring Modify button. You will be moved to Modify ServiceWatch Detailed Monitoring popup window.
On the Modify ServiceWatch Detailed Monitoring popup window, select Enable, check guide text and then click Confirm button.
Check ServiceWatch detailed monitoring item on Virtual Server Detail page.
Disable ServiceWatch Detailed Monitoring
Caution
For cost efficiency, keep detailed monitoring enabled only when necessary and disable it otherwise.
To disable Virtual Server ServiceWatch detailed monitoring, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to disable ServiceWatch detailed monitoring. You will be moved to the Virtual Server Detail page.
On the Virtual Server Detail page, click ServiceWatch detailed monitoring Modify button. You will be moved to Modify ServiceWatch Detailed Monitoring popup window.
On the Modify ServiceWatch Detailed Monitoring popup window, deselect Enable, check guide text and then click Confirm button.
Check ServiceWatch detailed monitoring item on Virtual Server Detail page.
Virtual Server Management Additional Functions
For Virtual Server server management, you can check console log, create Dump, and perform Rebuild. To check Virtual Server console log, create Dump, and perform Rebuild, follow the procedure below.
Check Console Log
You can check current console log of Virtual Server.
To check Virtual Server console log, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to check console log. You will be moved to the Virtual Server Detail page.
On the Virtual Server Detail page, click Console Log button. You will be moved to Console Log popup window.
Check console log output on Console Log popup window.
Create Dump
To create Virtual Server Dump file, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to check detailed information. You will be moved to the Virtual Server Detail page.
On the Virtual Server Detail page, click Create Dump button.
Dump file is created inside Virtual Server.
Perform Rebuild
You can delete the existing Virtual Server's OS area data and settings and reconfigure it as a new server by rebuilding.
To perform Virtual Server Rebuild, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, click the resource to perform Rebuild. You will be moved to the Virtual Server Detail page.
On the Virtual Server Detail page, click Rebuild button.
During Virtual Server Rebuild, server status changes to Rebuilding and when Rebuild is completed, returns to state before Rebuild execution.
Terminating unused Virtual Server can reduce operating costs. However, since Virtual Server termination can immediately stop running services, you must fully consider the impact of service interruption before proceeding with termination.
Caution
Please note that data cannot be recovered after service termination.
To terminate Virtual Server, follow the procedure below.
Click All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will be moved to the Virtual Server List page.
On the Virtual Server List page, select the resource to terminate and click Terminate Service button.
Termination of connected storage varies depending on the Delete on termination setting; see Termination Constraints.
When termination is complete, check if resource is terminated on Virtual Server List page.
Termination Constraints
If termination is not possible when Virtual Server termination is requested, you will be guided through a popup window. Refer to the cases below.
Termination Not Possible
When File Storage is connected: First disconnect File Storage connection.
When LB server group is connected: First disconnect LB server group Pool connection.
When Lock is set: Change Lock setting to not used and then try again.
When Backup is connected: First disconnect Backup connection.
When Auto-Scaling Group connected to Virtual Server is not In Service state: Change state of connected Auto-Scaling Group and then try again.
The termination of connected storage varies depending on the Delete on termination setting; refer to Deletion by Delete on termination Setting below.
Deletion by Delete on termination Setting
Volume deletion varies depending on whether Delete on termination is set.
When Delete on termination is not set: Volume is not deleted even if Virtual Server is terminated.
When Delete on termination is set: Volume is deleted when Virtual Server is terminated.
Volumes with snapshots are not deleted even if Delete on termination is set.
Multi attach volumes are deleted only when the server to be deleted is the last remaining server connected to the volume.
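The deletion rules above can be summarized as a small decision helper. This is an illustrative sketch only, not a real platform API; the argument names are assumptions.

```shell
# Sketch of the volume deletion rules: a volume is deleted on server termination
# only if Delete on termination is set, it has no snapshots, and (for multi-attach
# volumes) no other server remains connected.
volume_deleted_on_terminate() {
  local delete_on_term=$1 has_snapshot=$2 other_servers_attached=$3
  if [ "$delete_on_term" = "true" ] && [ "$has_snapshot" = "false" ] \
     && [ "$other_servers_attached" = "false" ]; then
    echo deleted
  else
    echo kept
  fi
}
```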
2.1.2.1 - Image
Users can create the Image service within the Virtual Server service through the Samsung Cloud Platform Console by entering the required information and selecting detailed options.
Image generation
You can create and use the Image service while using the Virtual Server service on the Samsung Cloud Platform Console.
To create an Image, follow the steps below.
Click the All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
Click the Image menu on the Service Home page. You will be moved to the Image List page.
Click the Image Create button on the Image List page. You will be moved to the Image Create page.
In the Service Information Input area, enter or select the required information.
Category
Required
Detailed description
Image name
Required
Name of the Image to create
Enter within 255 characters using English letters, numbers, spaces, and special characters (-, _)
Image file > URL
Required
Enter URL after uploading Image file to Object Storage
You can copy the URL from the Object Storage Details page
The Object Storage Bucket where the Image file is uploaded must be in the same zone as the server to be created
The Image file can only have the .qcow2 extension
Upload a verified, secure Image file to minimize security risks
OS type
Required
OS type of the uploaded Image file
Select from Alma Linux, CentOS, Oracle Linux, RHEL, Rocky Linux, Ubuntu
Minimum Disk
Required
Minimum disk size (GB) for the Image to be created
Enter a value between 0 and 12,288 GB
Minimum RAM
Required
Minimum RAM capacity (GB) of the image to be created
Enter a value between 0 and 2,097,151 GB
Visibility
Required
Indicates access permissions for the Image
Private: Can be used only within the Account
Shared: Can be shared between Accounts
Protected
Optional
Select whether Image deletion is prohibited
Checking Use prevents accidental deletion of the Image
This setting can be changed after Image creation
Table. Image Service Information Input Items
In the Additional Information Input area, enter or select the required information.
Category
Required
Detailed description
Tag
Optional
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Image additional information input items
On the Summary panel, check the detailed information and estimated billing amount, then click the Complete button.
When creation is complete, check the created resources on the Image List page.
Check Image Details
In the Image service, you can view and edit the full resource list and detailed information. The Image Detail page consists of Detail Information, Tags, and Operation History tabs.
To view detailed information of the Image service, follow the steps below.
Click the All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
Click the Image menu on the Service Home page. You will be moved to the Image List page.
On the Image List page, click the resource to view detailed information for. You will be moved to the Image Detail page.
The Image Detail page displays status information and additional feature information, and consists of Detail Information, Tags, and Operation History tabs.
Category
Detailed description
Image status
Status of the Image created by the user
Active: Available
Queued: When an Image creation request is made, the Image is uploaded and waiting for processing
Importing: When an Image creation request is made, the Image is uploaded and being processed
Create shared Image
Create Image to share with another Account
Can be created only when the Image’s Visibility is private and the Image has snapshot information
Share with another Account
Image can be shared with another Account
If the Image’s Visibility is Shared, it can be shared with another Account
Only displayed for Images created by Create shared Image or by uploading a qcow2 file
Image Delete
Button to delete the Image
If the Image is deleted, it cannot be recovered
Table. Image status information and additional functions
Detailed Information
On the Image List page, you can view detailed information of the selected resource and edit the information if needed.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Image, it means Image SRN
Resource Name
Image Name
Resource ID
Image’s unique resource ID
Creator
User who created the Image
Creation time
Image creation time
Editor
User who modified the Image information
Modification Date
Date/Time when Image information was modified
Image name
Image name
Minimum Disk
Image’s minimum disk capacity (GB)
If you need to modify the minimum disk, click the Edit button to set it
Minimum RAM
Minimum RAM capacity of the Image (GB)
OS type
Image’s OS type
Alma Linux, CentOS, Oracle Linux, RHEL, Rocky Linux, SLES, Ubuntu
OS hash algorithm
OS hash algorithm method
Visibility
Displays access permissions for the Image
Private: Can be used only within the Account
Shared: Can be shared between Accounts
Protected
Select whether image deletion is prohibited
Enabling this setting prevents accidental deletion of the Image
Image size
Image size
If the generated Image size is 1GB or less, it is displayed as 1GB.
Image Type
Classification by Image creation method
Snapshot-Based: When the configuration of the currently used Virtual Server is created as an Image
Image-Based: When an Image is created by uploading a qcow2 extension file or by creating a shared Image
Image file URL
Image file URL uploaded to Object Storage when creating an Image
Not displayed for Images created via the Image creation menu on the Virtual Server detail page, but displayed when the Image file is uploaded to Object Storage.
Sharing Status
Status of sharing images with other Accounts
Approved Account ID: ID of the Account that has been approved for sharing
Modification Date/Time: The date/time when sharing was requested to another Account; when the sharing status changes from Pending to Accepted, it is updated to that date/time
Status: Approval status
Accepted: Approved and being shared
Pending: Waiting for approval
Sharing stopped: Sharing has been stopped
Table. Image detailed information tab items
Tag
On the Image List page, you can view the tag information of the selected resource, and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can view the tag’s Key, Value information
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of Keys and Values
Table. Image tag tab items
Operation History
You can view the operation history of the selected resource on the Image List page.
Category
Detailed description
Operation History List
Resource change history
You can check the operation date/time, resource ID, resource name, operation details, event topic, operation result, and operator information
Table. Image operation history tab detailed information items
Image Resource Management
Describes the control and management functions of the generated Image.
Create Image for Sharing
Create an Image to share with another Account.
Notice
A shared Image can be created only when the Image's Visibility is Private and the Image has snapshot information.
A shared Image includes only the OS area disk volume as the imaging target. Additionally connected data volumes are not included in the Image, so if needed, copy the data to a separate volume and use the volume migration function.
To create an image for sharing, follow the steps below.
Log in to the shared Account and click the All Services > Compute > Virtual Server menu. Go to the Virtual Server’s Service Home page.
Click the Image menu on the Service Home page. Navigate to the Image List page.
Click the Image to create a shared Image on the Image List page. You will be taken to the Image Details page.
Click the Create Shared Image button. A popup window notifying the creation of a shared Image will open.
After checking the notification content, click the Complete button.
Share Image to another Account
You can share a created Image with another Account.
Notice
Only Images created by uploading a .qcow2 file, or created via Create Shared Image on the Image Details page, can be shared with other Accounts.
The Image to be shared must have Visibility set to Shared.
To share the Image with another Account, follow these steps.
Log in to the shared Account and click the All Services > Compute > Virtual Server menu. Navigate to the Virtual Server’s Service Home page.
Click the Image menu on the Service Home page. It navigates to the Image List page.
On the Image List page, click the Image you want to share with another Account. It moves to the Image Details page.
Click the Share to another Account button. A popup window notifying Image sharing opens.
After checking the notification content, click the Confirm button. It moves to the Share Image with another Account page.
On the Share Image with another Account page, enter the Shared Account ID and click the Complete button. A popup notifying Image sharing opens.
Category
Required
Detailed description
Image name
-
Name of the Image to share
Input not allowed
Image ID
-
Image ID to share
Input not allowed
Shared Account ID
Required
Enter another Account ID to share
Enter within 64 characters using English letters, numbers, and special character -
Table. Image sharing items to another Account
After checking the notification content, click the Confirm button. You can check the information in the sharing status of the Image Details page.
When first requested, the status is Pending, and when approval is completed by the Account to be shared, it changes to Accepted, and if approval is denied, it changes to Rejected.
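The Shared Account ID format from the table above (up to 64 characters: English letters, numbers, and the special character -) can be checked client-side before submitting. This is an illustrative helper, not a console feature.

```shell
# Sketch: validate a Shared Account ID against the documented format
# (1-64 characters: English letters, numbers, and '-').
valid_account_id() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9-]{1,64}$'
}
```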
Receive shared Image from another Account
To receive an Image shared from another Account, follow the steps below.
Log in to the account to be shared and click the All Services > Compute > Virtual Server menu. Go to the Service Home page of the Virtual Server.
Click the Image menu on the Service Home page. It navigates to the Image List page.
On the Image List page, click the More > Image Share Request List button. The Image Share Request List popup window opens.
In the Image Share Request List popup window, click the Approve or Reject button for the Image to be shared.
Category
Detailed description
Image name
shared Image name
OS type
OS type of shared Image
Owner Account ID
Owner Account ID of shared Image
Creation time
Creation time of shared Image
Approval
Approve the shared Image
Reject
Reject processing of the shared Image
Table. Image sharing request list item
After checking the notification content, click the Confirm button. You can check the shared Image in the Image list.
Image Delete
You can delete unused images. However, once an image is deleted it cannot be recovered, so you should fully consider the impact before proceeding with the deletion.
Caution
Please be careful as data cannot be recovered after deleting the service.
To delete Image, follow the steps below.
Click the All Services > Compute > Virtual Server menu. You will be moved to the Service Home page of Virtual Server.
Click the Image menu on the Service Home page. Go to the Image list page.
On the Image list page, select the resource to delete and click the Delete button.
On the Image List page, you can also select multiple Image checkboxes and click the Delete button at the top of the resource list.
When deletion is complete, check on the Image List page whether the resource has been deleted.
2.1.2.2 - Keypair
Users can create a Keypair within the Virtual Server service by entering the required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Keypair
You can create and use a Keypair while using the Virtual Server service in the Samsung Cloud Platform Console.
Follow these steps to create a Keypair.
Click the All Services > Compute > Virtual Server menu to go to the Virtual Server Service Home page.
On the Service Home page, click the Keypair menu to go to the Keypair List page.
On the Keypair List page, click the Create Keypair button to go to the Create Keypair page.
In the Service Information Input section, enter the required information.
Item
Required
Description
Keypair Name
Required
Name of the Keypair to create. Use English letters, numbers, spaces, and special characters (-, _) up to 255 characters.
Keypair Type
Required
ssh
Table. Keypair service information input items
In the Additional Information Input section, enter or select the required information.
Item
Required
Description
Tags
Optional
Add tags (up to 50 per resource). Click the Add Tag button and enter/select Key and Value.
Table. Additional Keypair information input items
Review the entered information and click the Create button.
After creation, the new Keypair will appear on the Keypair List page.
Caution
After creation, you can download the Private Key only once. It cannot be reissued, so ensure you have downloaded it.
Store the downloaded Private Key in a secure location.
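On a Linux client, the downloaded Private Key is typically used with an SSH client to connect to the Virtual Server. The sketch below assumes this workflow; the key file name, user name, and IP address are hypothetical examples.

```shell
# Restrict permissions on the downloaded private key;
# SSH clients refuse keys that are readable by other users
chmod 600 my-keypair.pem

# Connect to the Virtual Server with the key (user name and IP are examples)
ssh -i my-keypair.pem rocky@192.168.1.10
```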
Viewing Keypair Details
The Keypair service allows you to view and edit the resource list and detailed information. The Keypair Details page consists of Details, Tags, and Activity History tabs.
To view the details of a Keypair, follow these steps.
All Services > Compute > Virtual Server – Click to go to the Virtual Server Service Home page.
On the Service Home page, click the Keypair menu to go to the Keypair List page.
On the Keypair List page, click the resource you want to view details for. You will be taken to the Keypair Details page.
The Keypair Details page displays status information and additional feature information, organized into Details, Tags, and Activity History tabs.
Details
You can view the detailed information of the selected resource from the Keypair List page, and modify the information if needed.
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform (for Keypair, refers to Keypair SRN)
Resource Name
Keypair name
Resource ID
Unique resource ID of the Keypair
Creator
User who created the Keypair
Creation Time
Timestamp when the Keypair was created
Modifier
User who modified the Keypair information
Modification Time
Timestamp when the Keypair information was modified
Keypair Name
Name of the Keypair
Fingerprint
Unique value to identify the key
User ID
User ID of the Keypair creator
Public Key
Public key information
Table. Keypair details tab items
Tags
You can view, add, modify, or delete the tags of a selected resource from the Keypair List page.
Item
Description
Tag List
List of tags. You can view the Key and Value of each tag. Up to 50 tags can be added per resource. When entering a tag, you can search the existing Key and Value lists to select them.
Table. Keypair tags tab items
Activity History
You can view the activity history of a selected resource from the Keypair List page.
Item
Description
Activity History List
Resource change history, including operation time, resource ID, resource name, operation details, event topic, operation result, and operator information.
Table. Keypair activity history tab details
Managing Keypair Resources
This section describes the control and management functions for Keypair.
Retrieving the Public Key
Follow these steps to retrieve the public key.
All Services > Compute > Virtual Server – Click to go to the Virtual Server Service Home page.
On the Service Home page, click the Keypair menu to go to the Keypair List page.
On the Keypair List page, click the More button at the top and then click Retrieve Public Key. You will be taken to the Retrieve Public Key page.
In the Required Information Input section, enter or select the required information.
Item
Required
Description
Keypair Name
Required
Name of the Keypair to retrieve
Keypair Type
Required
ssh
Public Key
Required
Enter the public key.
File Upload: Click the Attach File button to attach a public key file (only .pem files are allowed).
Public Key Input: Paste the copied public key value (you can copy the public key value from the Keypair Details page).
Table. Required fields for retrieving a public key
Review the entered information and click the Complete button.
After completion, the newly created resource will appear on the Keypair List page.
Info
Even if you register a Keypair via Retrieve Public Key, the Private Key cannot be re-downloaded. Use the Private Key that was issued when the Keypair was originally created.
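If you still have the originally downloaded private key, you can derive the matching public key locally with OpenSSH's ssh-keygen before pasting it on the Retrieve Public Key page. This is a sketch; the file names are hypothetical.

```shell
# Derive the public key from the downloaded private key
ssh-keygen -y -f my-keypair.pem > my-keypair.pub

# Print the fingerprint to compare with the Fingerprint value
# shown on the Keypair Details page
ssh-keygen -lf my-keypair.pub
```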
Deleting a Keypair
You can delete an unused Keypair. However, once a Keypair is deleted, it cannot be recovered, so please review the impact carefully before deletion.
Caution
After deleting the service, the data cannot be recovered, so please be careful.
To delete a Keypair, follow these steps.
All Services > Compute > Virtual Server – Click to go to the Virtual Server Service Home page.
On the Service Home page, click the Keypair menu to go to the Keypair List page.
On the Keypair List page, select the resource you want to delete and click the Delete button.
You can select multiple Keypairs using checkboxes and click the Delete button at the top of the resource list.
After deletion is complete, verify that the resource has been removed from the Keypair List page.
2.1.2.3 - Server Group
Users can enter the required information for a Server Group within the Virtual Server service and select detailed options through the Samsung Cloud Platform Console to create the service.
Server Group Create
You can create and use the Server Group service while using the Virtual Server service in the Samsung Cloud Platform Console.
To create a Server Group, follow the steps below.
Click the All Services > Compute > Virtual Server menu to go to the Virtual Server Service Home page.
Click the Server Group menu on the Service Home page to go to the Server Group list page.
On the Server Group list page, click the Server Group Create button to go to the Server Group Create page.
In the Service Information Input area, enter or select the required information.
Category
Required
Detailed description
Server Group name
Required
Name of the Server Group to create
Enter within 255 characters using English letters, numbers, spaces, and special characters (-, _)
Policy
Required
Set Anti-Affinity (distributed placement), Affinity (proximate placement), Partition (distributed placement of Virtual Server and Block Storage) for Virtual Servers belonging to the same Server Group
Anti-Affinity (distributed placement) and Affinity (proximate placement) policies place Virtual Servers belonging to the same Server Group based on the selected policy in a Best Effort manner, but are not absolutely guaranteed.
Anti-Affinity (distributed placement): A policy that places servers belonging to a Server Group on different racks and hosts as much as possible
Affinity (proximate placement): A policy that places servers belonging to a Server Group close together within the same rack and host as much as possible
Partition (distributed placement of Virtual Server and Block Storage): A policy that places Virtual Servers belonging to a Server Group and the Block Storage connected to those servers in different distribution units (Partitions)
The Partition (distributed placement of Virtual Server and Block Storage) policy displays the Partition number together so that it is clear which Partition each Virtual Server and its associated Block Storage belong to.
Partition numbers are assigned based on the Partition Size (up to 3) set for the Server Group.
Table. Server Group Service Information Input Items
In the Additional Information Input area, enter or select the required information.
Category
Required
Detailed description
Tag
Optional
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Server Group Additional Information Input Items
Check the input information and click the Complete button.
When creation is complete, check the created resources on the Server Group List page.
Server Group View detailed information
The Server Group service allows you to view and edit the resource list and detailed information. The Server Group Details page consists of Details, Tags, and Activity History tabs.
To view detailed information of a Server Group, follow the steps below.
Click the All Services > Compute > Virtual Server menu to go to the Virtual Server Service Home page.
Click the Server Group menu on the Service Home page. You will be taken to the Server Group List page.
On the Server Group List page, click the resource you want to view. You will be taken to the Server Group Details page.
The Server Group Details page displays status information and additional feature information, and consists of Details, Tags, and Activity History tabs.
Detailed Information
On the Server Group List page, you can view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Server Group, it means Server Group SRN
Resource Name
Server Group Name
Resource ID
Unique resource ID of Server Group
Creator
User who created the Server Group
Creation time
Server Group creation time
Server Group name
Server Group name
Policy
Anti-Affinity(distributed placement), Affinity(proximal placement), Partition(distributed placement of Virtual Server and Block Storage)
Server Group Member
List of Virtual Servers belonging to the Server Group
Members cannot be modified after the initial Virtual Server is created
Anti-Affinity (distributed placement) and Affinity (proximate placement) policies define only the relative placement relationships between Virtual Servers, and the SCP Console provides only the list of Virtual Servers belonging to the policy.
The Partition (distributed placement of Virtual Server and Block Storage) policy displays the Partition number together to clearly indicate which Partition the Virtual Server and its associated Block Storage belong to. The Partition number is assigned based on the Partition Size set for the Server Group (maximum 3).
Table. Server Group detailed information tab items
Tag
On the Server Group List page, you can view, add, modify, or delete the tags of the selected resource.
Category
Detailed description
Tag List
List of tags. You can view each tag's Key and Value.
Up to 50 tags can be added per resource.
When entering a tag, search and select from the existing Key and Value lists.
Table. Server Group Tag Tab Items
Activity History
You can view the activity history of the selected resource on the Server Group List page.
Category
Detailed description
Activity History List
Resource change history, including operation time, resource ID, resource name, operation details, event topic, operation result, and operator information.
Table. Server Group activity history tab items
Server Group Delete
You can delete unused Server Groups. However, once a Server Group is deleted it cannot be recovered, so please review the impact thoroughly in advance before proceeding with deletion.
Caution
Please be careful as data cannot be recovered after deleting the service.
To delete a Server Group, follow these steps.
Click the All Services > Compute > Virtual Server menu to go to the Virtual Server Service Home page.
On the Service Home page, click the Server Group menu to go to the Server Group List page.
On the Server Group List page, select the resource to delete and click the Delete button.
To delete multiple Server Groups, select their checkboxes and click the Delete button at the top of the resource list.
When deletion is complete, check whether the resource has been deleted on the Server Group List page.
2.1.2.4 - IP Change
You can change the IP of the Virtual Server and add network ports to the Virtual Server to set the IP.
IP Change
You can change the IP of the Virtual Server.
Caution
If you proceed with changing the IP, you will no longer be able to communicate with that IP, and you cannot cancel the IP change while it is in progress.
The server will be rebooted to apply the changed IP.
If the server is running the Load Balancer service, you must delete the existing IP from the LB server group and directly add the changed IP as a member of the LB server group.
Servers using Public NAT/Private NAT must disable Public NAT/Private NAT before changing the IP: first disable it, complete the IP change, and then configure it again.
You can toggle Public NAT/Private NAT by clicking the Edit button of the Public NAT IP/Private NAT IP item on the Virtual Server Details page.
To change the IP, follow the steps below.
Click the All Services > Compute > Virtual Server menu to go to the Virtual Server Service Home page.
Click the Virtual Server menu on the Service Home page to go to the Virtual Server List page.
On the Virtual Server List page, click the resource whose IP you want to change. You will be taken to the Virtual Server Details page.
On the Virtual Server Details page, click the Edit button of the IP item. The Edit IP popup opens.
In the Edit IP popup window, select a Subnet and set the new IP.
Input: Enter the new IP directly.
Automatic Generation: Automatically generate and apply the IP.
When the settings are complete, click the Confirm button.
When the popup notifying of the IP modification opens, click the Confirm button.
Setting IP on the server after adding network ports
If you create a Virtual Server with Ubuntu Linux, after adding a network port on Samsung Cloud Platform, additional IP configuration is required on the server.
As the root user of the Virtual Server’s OS, use the ip command to check the assigned network interface name.
ip a
Code block. ip command - network interface check command
If an interface has been added, it appears in the output of the ip command.
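On Ubuntu, a newly added interface is usually configured through Netplan. The sketch below is a minimal example assuming the new interface appeared as eth1 and should obtain its IP via DHCP; the interface name and file name are assumptions to adapt to your environment.

```shell
# Create a netplan file for the added interface (eth1 is an example name)
cat <<'EOF' > /etc/netplan/60-eth1.yaml
network:
  version: 2
  ethernets:
    eth1:
      dhcp4: true
EOF

# Apply the configuration and confirm the interface received an IP
netplan apply
ip a show eth1
```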
If a user creates a Virtual Server with Rocky Linux or Oracle Linux via the Samsung Cloud Platform Console, additional configuration is required for time synchronization (NTP: Network Time Protocol). For other OS standard Linux images (RHEL, Alma Linux, Ubuntu), NTP is already configured, so no additional setup is needed.
Install NTP Daemon
You can install the chrony daemon to configure NTP. To install the chrony daemon, follow the steps below.
Reference
For detailed information about chrony, please refer to the chronyc page.
Check whether the chrony package is installed using the dnf command as the root user of the OS of the Virtual Server.
Run the chronyc sources -v command (verbose output) to check the IP address of the configured NTP server and verify whether synchronization is in progress.
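The steps above can be sketched as follows, assuming a dnf-based distribution such as Rocky Linux or Oracle Linux and a root shell:

```shell
# Install chrony if it is not already present
dnf list installed chrony || dnf install -y chrony

# Enable and start the chrony daemon
systemctl enable --now chronyd

# List the configured NTP servers with verbose output
# to verify synchronization is in progress
chronyc sources -v
```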
If the user created a RHEL or Windows Server Virtual Server prior to August 2025 via the Samsung Cloud Platform Console, the RHEL Repository and WKMS (Windows Key Management Service) settings need to be modified.
The SCP RHEL Repository is a repository provided by SCP to support user environments such as VPC Private Subnet where external access is restricted. Since the SCP RHEL Repository synchronizes with each Region Local Repository according to the internal schedule, it is recommended to switch to an external Public Mirror site to quickly apply the latest patches.
RHEL Repository Configuration Guide
In Samsung Cloud Platform, when using RHEL, you can install and download the same packages as the official RHEL Repository by utilizing the RHEL Repository provided by SCP. SCP provides the latest version of the repository for the given major version by default. To set up the RHEL repository, follow the steps below.
As the root user of the Virtual Server's OS, use the cat command to check the /etc/yum.repos.d/scp.rhel8.repo or /etc/yum.repos.d/scp.rhel9.repo settings.
cat /etc/yum.repos.d/scp.rhel8.repo
Code block. repo configuration check (RHEL8)
cat /etc/yum.repos.d/scp.rhel9.repo
Code block. repo configuration check (RHEL9)
When checking the configuration file, the following result is displayed.
Use a text editor (e.g., vim) to open the /etc/hosts file.
Modify the /etc/hosts file with the content below and save it.
198.19.2.13 scp-rhel8-ip scp-rhel9-ip scp-rhel-ip
Code block. /etc/hosts file setting change
Verify the RHEL Repository connection configured on the server using the yum command.
yum repolist -v
Code block. repository connection settings check
If the RHEL Repository is successfully connected, you can check the Repository list.
Repo-id            : rhel-8-appstream
Repo-name          : rhel-8-appstream
Repo-revision      : 1718903734
Repo-updated       : Fri 21 Jun 2024 02:15:34 AM KST
Repo-pkgs          : 38,260
Repo-available-pkgs: 25,799
Repo-size          : 122 G
Repo-baseurl       : http://scp-rhel8-ip/rhel/8/appstream
Repo-expire        : 172,800 second(s) (last: Thu 08 Aug 2024 07:27:57 AM KST)
Repo-filename      : /etc/yum.repos.d/scp.rhel8.repo

Repo-id            : rhel-8-baseos
Repo-name          : rhel-8-baseos
Repo-revision      : 1718029433
Repo-updated       : Mon 10 Jun 2024 11:23:52 PM KST
Repo-pkgs          : 17,487
Repo-available-pkgs: 17,487
Repo-size          : 32 G
Repo-baseurl       : http://scp-rhel8-ip/rhel/8/baseos
Repo-expire        : 172,800 second(s) (last: Thu 08 Aug 2024 07:27:57 AM KST)
Repo-filename      : /etc/yum.repos.d/scp.rhel8.repo

Repo-id            : rhel-8-baseos-debug
Repo-name          : rhel-8-baseos-debug
Repo-revision      : 1717662461
Repo-updated       : Thu 06 Jun 2024 05:27:41 PM KST
Repo-pkgs          : 17,078
Repo-available-pkgs: 17,078
Repo-size          : 100 G
Repo-baseurl       : http://scp-rhel8-ip/rhel/8/baseos-debug
Repo-expire        : 172,800 second(s) (last: Thu 08 Aug 2024 07:27:57 AM KST)
Repo-filename      : /etc/yum.repos.d/scp.rhel8.repo
Code block. Repository list check
Windows Key Management Service Configuration Guide
In Samsung Cloud Platform, when using Windows Server, you can authenticate genuine products by using the Key Management Service provided by SCP. Follow the steps below.
Right-click the Windows Start icon and run Windows PowerShell (Administrator), or run cmd from the Windows Run menu.
In Windows PowerShell (Administrator) or cmd, run the command below to register the KMS Server.
slmgr /skms 198.19.2.23:1688
Code block. WKMS Settings
After executing the KMS Server registration command, check the notification popup indicating successful registration, then click OK.
Figure. WKMS setting check
In Windows PowerShell (Administrator) or cmd, run the command below to activate the product.
slmgr /ato
Code block. Windows Server activation settings
After confirming the notification popup that the product activation was successful, click OK.
Figure. Windows Server genuine activation verification
In Windows PowerShell (Administrator) or cmd, run the command below to check whether activation succeeded.
slmgr /dlv
Code block. Windows Server genuine activation verification
After confirming the license details in the notification popup, click OK.
Figure. Windows Server genuine activation verification
2.1.2.7 - Installing ServiceWatch Agent
Users can install the ServiceWatch Agent on a Virtual Server to collect custom metrics and logs.
Note
Collecting custom metrics/logs via the ServiceWatch Agent is currently only available on Samsung Cloud Platform for Enterprise. It will be offered for other offerings in the future.
Caution
Metrics collected via the ServiceWatch Agent are considered custom metrics and incur charges, unlike the default metrics collected from each service. Therefore, it is recommended to remove or disable unnecessary metric collection settings.
ServiceWatch Agent
The agents required to collect custom metrics and logs for ServiceWatch on a Virtual Server can be broadly divided into two types: Prometheus Exporter and Open Telemetry Collector.
Category
Description
Prometheus Exporter
Provides metrics of a specific application or service in a format that Prometheus can scrape.
Depending on the OS, you can use the Node Exporter for Linux servers and the Windows Exporter for Windows servers.
Open Telemetry Collector
Acts as a centralized collector that gathers telemetry data such as metrics and logs from distributed systems, processes it (filtering, sampling, etc.), and exports it to multiple backends (e.g., Prometheus, Jaeger, Elasticsearch).
Exports data to the ServiceWatch Gateway so that ServiceWatch can collect metric and log data.
Table. ServiceWatch Agent types
Collectors can be enabled or disabled using flags.
--collector.{name}: Enables a specific metric collector.
--no-collector.{name}: Disables a specific metric collector.
To disable all default metrics and enable only specific collectors, use --collector.disable-defaults --collector.{name} ...
Below is a description of the main collectors.
Collector
Description
Labels
meminfo
Provides memory statistics
-
filesystem
Provides filesystem statistics such as used disk space
device: Physical or virtual device path where the filesystem is located (e.g., /dev/sda1)
fstype: Filesystem type (e.g., ext4, xfs, nfs, tmpfs)
mountpoint: Path where the filesystem is mounted on the host OS; serves as an intuitive way to distinguish disks (e.g., /, /var/lib/docker, /mnt/data)
Table. Description of main Node Exporter collectors
For detailed information on available metrics and configuration, see the Node Exporter > Collector page.
Available metrics may vary depending on the version of Node Exporter you use. See the Node Exporter repository.
Caution
Metrics collected via the ServiceWatch Agent are considered custom metrics and incur charges, unlike the default metrics collected from each service. Remove or disable unnecessary metric collection to avoid excessive charges.
Enable and start the service.
Register the Node Exporter service and verify the registered service and configured metrics.
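As a sketch, registering and verifying the service might look like the commands below. This assumes a systemd unit named node_exporter has already been created and that node_exporter listens on its default port 9100.

```shell
# Reload systemd so the new unit file is picked up, then enable and start it
systemctl daemon-reload
systemctl enable --now node_exporter

# Verify the service is running
systemctl status node_exporter --no-pager

# Verify the configured metrics are exposed (9100 is the default port)
curl -s http://localhost:9100/metrics | grep -m1 '^node_'
```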
After completing the Node Exporter setup, you must install the Open Telemetry Collector provided by ServiceWatch to finish the ServiceWatch Agent configuration. See ServiceWatch > Using ServiceWatch Agent for details.
Installing Prometheus Exporter for Virtual Server (Windows)
Install the Prometheus Exporter on a Windows server following the steps below.
Installing Windows Exporter
Install the Windows Exporter according to the steps below.
Test the Windows Exporter execution.
By default, Windows Exporter enables all collectors, but to collect only desired metrics, enable the following collectors:
Available metrics may vary depending on the version of Windows Exporter you use. See the Windows Exporter repository.
Caution
Metrics collected via the ServiceWatch Agent are considered custom metrics and incur charges, unlike the default metrics collected from each service. Remove or disable unnecessary metric collection to avoid excessive charges.
Register the service and verify.
$ sc.exe create windows_exporter binPath= "C:\Temp\windows_exporter-0.31.3-amd64.exe --collectors.enabled memory,logical_disk,os" DisplayName= "Prometheus Windows Exporter" start= auto
$ Start-Service windows_exporter
Use the --config.file option to specify a YAML configuration file.
$ .\windows_exporter.exe --config.file=config.yml
$ .\windows_exporter.exe --config.file="C:\Program Files\windows_exporter\config.yml"   # When using an absolute path, wrap it in quotes
$ sc.exe create windows_exporter binPath= "C:\Temp\windows_exporter-0.31.3-amd64.exe --config.file=C:\Temp\config.yml" DisplayName= "Prometheus Windows Exporter" start= auto
$ Start-Service windows_exporter
Code block. Register Windows Exporter with config file
Note
When using both a configuration file and command‑line options, command‑line options take precedence over configuration file values.
Info
After completing the Windows Exporter setup, you must install the Open Telemetry Collector provided by ServiceWatch to finish the ServiceWatch Agent configuration. See ServiceWatch > Using ServiceWatch Agent for details.
Node Exporter Metrics
Main Node Exporter metrics
The following are the collector and metric information available through Node Exporter. Collectors can be enabled, and specific metrics can be activated.
Category
Collector
Metric
Description
Memory
meminfo
node_memory_MemTotal_bytes
Total memory
Memory
meminfo
node_memory_MemAvailable_bytes
Available memory (used for determining memory shortage)
Memory
meminfo
node_memory_MemFree_bytes
Free memory
Memory
meminfo
node_memory_Buffers_bytes
IO buffers
Memory
meminfo
node_memory_Cached_bytes
Page cache
Memory
meminfo
node_memory_SwapTotal_bytes
Total swap
Memory
meminfo
node_memory_SwapFree_bytes
Remaining swap
Filesystem
filesystem
node_filesystem_size_bytes
Total filesystem size
Filesystem
filesystem
node_filesystem_free_bytes
Total free space
Filesystem
filesystem
node_filesystem_avail_bytes
Space actually available to unprivileged users
Table. Main Node Exporter metrics
Node Exporter collector and metric collection settings
Node Exporter enables most collectors by default, but you can enable or disable specific collectors as needed.
Enable specific collectors only
When you want to use only memory and filesystem collectors:
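A sketch of the flags, assuming the node_exporter binary is in the current directory:

```shell
# Disable all default collectors, then enable only meminfo and filesystem
./node_exporter \
  --collector.disable-defaults \
  --collector.meminfo \
  --collector.filesystem
```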
Code block. Enable specific Node Exporter collectors (disable defaults)
Enable filesystem collector for specific mount points:
./node_exporter \
  --collector.disable-defaults \
  --collector.filesystem \
  --collector.filesystem.mount-points-include="/|/data"   # collect only the / (root) and /data mount points
Code block. Enable filesystem collector for specific mount points
Enable filesystem collector excluding specific mount points:
./node_exporter \
  --collector.disable-defaults \
  --collector.filesystem \
  --collector.filesystem.mount-points-exclude="/boot|/var/log"   # exclude the /boot and /var/log mount points
Code block. Exclude specific mount points from filesystem collector
Disable specific collectors (no-collector)
When you do not want to use the filesystem collector:
./node_exporter --no-collector.filesystem
Code block. Disable specific Node Exporter collector
Configure collector as a systemd service (recommended)
Metrics collected via the ServiceWatch Agent are considered custom metrics and incur charges, unlike the default metrics collected from each service. It is recommended to configure only the necessary metrics.
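A minimal systemd unit for this purpose might look like the sketch below; the binary path and collector flags are assumptions to adapt to your installation.

```shell
# Write a hypothetical unit file for node_exporter
cat <<'EOF' > /etc/systemd/system/node_exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
# Collect only the memory and filesystem metrics discussed above
ExecStart=/usr/local/bin/node_exporter --collector.disable-defaults --collector.meminfo --collector.filesystem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Pick up the new unit, then enable and start it
systemctl daemon-reload
systemctl enable --now node_exporter
```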
Windows Exporter Metrics
Main Windows Exporter metrics
The following are the collector and metric information available through Windows Exporter. Collectors can be enabled, and specific metrics can be activated.
Category
Collector
Metric
Description
Memory
memory
windows_memory_available_bytes
Available memory
Memory
memory
windows_memory_cache_bytes
Cached memory
Memory
memory
windows_memory_committed_bytes
Committed memory
Memory
memory
windows_memory_commit_limit
Commit limit
Memory
memory
windows_memory_pool_paged_bytes
Paged pool
Memory
memory
windows_memory_pool_nonpaged_bytes
Non‑paged pool
Disk
logical_disk
windows_logical_disk_free_bytes
Free space
Disk
logical_disk
windows_logical_disk_size_bytes
Total capacity
Disk
logical_disk
windows_logical_disk_read_bytes_total
Total read bytes
Disk
logical_disk
windows_logical_disk_write_bytes_total
Total written bytes
Disk
logical_disk
windows_logical_disk_read_seconds_total
Read latency
Disk
logical_disk
windows_logical_disk_write_seconds_total
Write latency
Disk
logical_disk
windows_logical_disk_idle_seconds_total
Idle time
Table. Main Windows Exporter metrics
Windows Exporter collector and metric collection settings
Windows Exporter enables most collectors by default, but you can configure only the desired collectors.
Enable specific collectors only
If you want to use only the memory and logical disk collectors:
# The --collectors.enabled option disables defaults and enables only the listed collectors
.\windows_exporter.exe --collectors.enabled="memory,logical_disk"
Code block. Enable specific Windows Exporter collectors
Note
Windows Exporter does not require you to disable unused collectors; when --collectors.enabled is used, only the collectors listed in the option are collected.
Configure collector as a service (recommended)
# Register windows_exporter as a service
sc.exe create windows_exporter binPath= "C:\Temp\windows_exporter-0.31.3-amd64.exe --config.file=C:\Temp\config.yml" DisplayName= "Prometheus Windows Exporter" start= auto

# Start the service
Start-Service windows_exporter
Code block. Register service
# Note this is not an exhaustive list of all configuration values
collectors:
  enabled: logical_disk,memory  # Set collectors to enable
collector:
  service:
    include: "windows_exporter"
  scheduled_task:
    include: /Microsoft/.+
log:
  level: debug
scrape:
  timeout-margin: 0.5
telemetry:
  path: /metrics
web:
  listen-address: ":9182"
Code block. Service configuration file
Filter specific metrics
Using the Open Telemetry Collector configuration, you can select only the required metrics collected by Windows Exporter. For guidance on pre‑configuring the Open Telemetry Collector for ServiceWatch, see Prerequisite Open Telemetry Collector configuration for ServiceWatch.
Caution
Metrics collected via the ServiceWatch Agent are considered custom metrics and incur charges, unlike the default metrics collected from each service. It is recommended to configure only the necessary metrics.
2.1.3 - API Reference
API Reference
2.1.4 - CLI Reference
CLI Reference
2.1.5 - Release Note
Virtual Server
2026.03.19
FEATURE Standard Image addition, SSD_Provisioned disk type addition, and ServiceWatch metric monitoring feature addition
OS Image addition provision
Standard Image has been added. (Windows Server 2016)
SSD volume with configurable IOPS and Throughput has been added.
You can select SSD_Provisioned disk type when creating Block Storage.
You can set IOPS and Throughput maximum values.
You can view Virtual Server ServiceWatch metric monitoring graphs on the detail page.
2025.12.16
FEATURE Virtual Server feature addition
OS Image addition provision
Standard Image has been added. (Alma Linux 9.6, Oracle Linux 9.6, RHEL 9.6, Rocky Linux 9.6)
New Server Group policy addition
Partition (Virtual Server and Block Storage distributed placement) policy has been added.
You can collect custom metrics and logs by installing Virtual Server ServiceWatch Agent.
2025.10.23
FEATURE Server name change feature addition and ServiceWatch service integration provision
You can change the server name on the Virtual Server detail page of Samsung Cloud Platform Console.
When changing the server name, only the information in Samsung Cloud Platform Console is changed, not the OS’s Hostname.
ServiceWatch service integration provision
You can monitor data through ServiceWatch service.
2025.07.01
FEATURE Virtual Server feature addition and Image sharing method change
Virtual Server feature addition
IP, Public NAT IP, Private NAT IP configuration feature has been added.
LLM Endpoint for using LLM is provided.
You can select OS Image subscribed from Marketplace when creating Virtual Server.
2nd generation server type has been added.
2nd generation (s2) server type based on Intel 4th generation (Sapphire Rapids) Processor has been added. For details, refer to Virtual Server Server Type
Image sharing method between Accounts has been changed.
You can share by creating a new qcow2 Image or Image for sharing.
2025.02.27
FEATURE NAT configuration feature and OS Image, server type addition
Virtual Server feature addition
NAT configuration feature has been added in Virtual Server.
OS Image addition provision
Standard Image has been added. (Alma Linux 8.10, Alma Linux 9.4, Oracle Linux 8.10, Oracle Linux 9.4, RHEL 8.10, RHEL 9.4, Rocky Linux 8.10, Rocky Linux 9.4, Ubuntu 24.04)
Image for Kubernetes has been added. You can create Kubernetes Engine using Image for Kubernetes.
2nd generation server type addition
2nd generation (h2) server type based on Intel 4th generation (Sapphire Rapids) Processor has been added. For details, refer to Virtual Server Server Type
Samsung Cloud Platform common function change
Common CX changes, such as those to Account, IAM, Service Home, and tags, have been reflected.
2024.10.01
NEW Virtual Server service official version release
Virtual Server service official release.
We have released a virtualization server that allows you to freely allocate and use as much as you need at the necessary time without purchasing infrastructure resources individually.
2024.07.02
NEW Beta version release
We have released a virtualization server that allows you to freely allocate and use as much as you need at the necessary time without purchasing infrastructure resources individually.
2.2 - Virtual Server Auto-Scaling
2.2.1 - Overview
Service Overview
Virtual Server Auto-Scaling is a service that automatically scales resources up or down according to demand. You can add or terminate the number of servers running the application according to predefined conditions or schedule. Auto-Scaling Group uses a pre-created Launch Configuration as a pre-configuration template to create servers, and can adjust and manage the number of servers. It adjusts so that the number does not fall below the specified minimum number of servers or exceed the maximum number of servers. If you register a schedule with Auto-Scaling Group, you can set the number of servers according to the predetermined schedule. If you register a policy, you can increase or decrease the number of servers based on predefined conditions.
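The bounds rule described above (the group never falls below the minimum or exceeds the maximum number of servers) can be sketched as a small Python snippet. The function name is illustrative only, not part of any Samsung Cloud Platform API.

```python
def clamp_server_count(desired: int, minimum: int, maximum: int) -> int:
    """Return the server count an Auto-Scaling Group would actually maintain.

    Illustrative sketch: the group never runs fewer than `minimum`
    or more than `maximum` servers, regardless of the requested count.
    """
    return max(minimum, min(desired, maximum))

# A policy asks for 12 servers, but the group allows at most 10:
print(clamp_server_count(12, 2, 10))  # prints 10
# Demand drops to 0, but the group keeps at least the minimum:
print(clamp_server_count(0, 2, 10))   # prints 2
```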
Features
Easy and convenient computing environment configuration: Through the web-based Console, users can easily configure the required computing environment themselves via Self Service, from creating Launch Configurations to creating/modifying/deleting Auto-Scaling Groups.
Elastic Resource Usage: You can elastically use computing resources according to the service’s load and usage. Users can schedule resource usage for predictable specific time periods, and can adjust resource usage to prepare for temporary connections from an unspecified large number of users.
Availability Improvement: Virtual Server Auto-Scaling adjusts resources to match variable demand so that the traffic required by the user can always be processed. Through this, users can achieve improved application performance and availability.
Maximizing Cost Reduction Effect: By using resources only as needed according to demand fluctuations, unnecessary costs can be reduced. Through flexible resource usage according to traffic increases or decreases at specific times such as night, weekends, and month-end, the cost reduction effect can be maximized.
Service Architecture Diagram
Figure. Virtual Server Auto-Scaling Diagram
Provided Features
Virtual Server Auto-Scaling provides the following features.
Launch Configuration: It is a configuration template used to create a Virtual Server in an Auto-Scaling Group. When creating a Launch Configuration, you set information about the Virtual Server such as image, server type, key pair, block storage, etc.
Server Count Adjustment: Provides several ways to adjust the number of servers. Using policies, you can add a Virtual Server when the load exceeds a threshold and release the Virtual Server when demand is low, maintaining application availability and reducing costs. You can add and release Virtual Servers according to a schedule, and you can also manually adjust the number of servers in an Auto-Scaling Group as needed.
Load Balancer integration: You can use a Load Balancer to evenly distribute application traffic to Virtual Server. Whenever a Virtual Server is added or removed, it is automatically registered and deregistered with the Load Balancer.
Network Connection: You can connect the Auto-Scaling Group to a general subnet, use automatic IP allocation, and assign a Public NAT IP. A local subnet connection is also provided for inter-server communication.
Security Group applied: Security Group is a virtual logical firewall that controls inbound/outbound traffic generated on a Virtual Server. Inbound rules control incoming traffic to the Virtual Server, and Outbound rules control outgoing traffic from the Virtual Server.
Monitoring: You can view monitoring information such as CPU, Memory, Disk of Virtual Servers created in the Auto-Scaling Group through the Cloud Monitoring service. Based on the monitoring information, you can use Auto-Scaling policies to set thresholds for load, and when the threshold is exceeded, you can add or remove servers.
Components
Virtual Server Auto-Scaling creates an Auto-Scaling Group from a Launch Configuration, and monitors and manages the servers in it.
Launch Configuration
This is a Configuration template used to create a Virtual Server in an Auto-Scaling Group. The main features are as follows.
Image: Provides OS standard images and user-created custom images. Users can select and use them according to the service they want to configure.
Keypair: Provides the Keypair method for a secure OS access method.
Init script: The user can define a script to be executed when the Virtual Server starts.
Auto-Scaling Group
An Auto-Scaling Group uses a Launch Configuration as a pre-configuration template for server creation; you can create an Auto-Scaling Group to adjust and manage the number of servers. The main features are as follows.
Launch Configuration: It is a configuration template used to create a Virtual Server in an Auto-Scaling Group.
Server Count Settings: Virtual Server Auto-Scaling provides several ways to adjust the number of servers in an Auto-Scaling Group.
Fixed Server Count Method: When creating an Auto-Scaling Group, this method keeps the default settings by maintaining the configured number of servers without any added schedules or policies. Refer to Create Auto-Scaling Group and set the Min, Desired, Max server counts.
Manual Server Count Adjustment Method: In an Auto-Scaling Group, this method increases or decreases the number of servers by modifying the server count to the desired amount. You can choose whether to manually set the desired number of servers. Refer to Modify Server Count.
Schedule Reservation Method: You can schedule daily, weekly, monthly, or one-time, and set the desired number of servers at a specified time. This is useful when you can predict when to reduce or increase the number of servers. If you use the schedule method, refer to Manage Schedule to add and manage schedules.
Policy Method: You can use a policy as a way to dynamically adjust servers. When the set threshold based on monitoring metrics is exceeded, it adjusts the number of servers. At this time, you can choose one of three ways to adjust the server count. Increase or decrease the number of servers by a specified number, increase or decrease by a specified ratio, or fix the number of servers to an entered value. When servers start and terminate due to the policy, the monitoring metric CPU usage may temporarily exceed the threshold registered in the policy. However, because this is a temporary moment, a cooldown period is set to avoid judging it as an abnormal situation. If you want to use the policy method, refer to Manage Policies.
Load Balancer: Whenever a Virtual Server is added or terminated, it automatically connects to and disconnects from the Load Balancer registered in the Auto-Scaling Group.
Auto-Scaling Group's Load Balancer operates in conjunction with the Load Balancer service from February 2025.
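The three policy adjustment modes and the cooldown behavior described above can be sketched as follows. All names and the ratio-rounding choice (ceiling) are illustrative assumptions, not the platform's actual implementation.

```python
import math


def apply_policy(current: int, mode: str, amount: float,
                 minimum: int, maximum: int) -> int:
    """Illustrative sketch of the three server-count adjustment modes."""
    if mode == "change_by_count":      # increase/decrease by a number
        target = current + int(amount)
    elif mode == "change_by_ratio":    # increase/decrease by a ratio (%)
        target = current + math.ceil(current * amount / 100)
    elif mode == "set_to_value":       # fix the count to the entered value
        target = int(amount)
    else:
        raise ValueError(f"unknown mode: {mode}")
    # The result always stays within the group's Min/Max bounds.
    return max(minimum, min(target, maximum))


def in_cooldown(now_s: float, last_action_s: float, cooldown_s: float) -> bool:
    """During cooldown, a temporary metric spike is not treated as abnormal."""
    return (now_s - last_action_s) < cooldown_s


print(apply_policy(4, "change_by_count", 2, 1, 10))   # prints 6
print(apply_policy(4, "change_by_ratio", 50, 1, 10))  # prints 6
print(apply_policy(4, "set_to_value", 8, 1, 10))      # prints 8
```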
Constraints
The constraints of Virtual Server Auto-Scaling are as follows.
Category
Description
Number of Virtual Servers per Auto-Scaling Group
50 or fewer
Number of policies per Auto-Scaling Group
12 or fewer
Number of schedules per Auto-Scaling Group
20 or fewer
Number of LB server groups and ports per Auto-Scaling Group
3 or fewer
Table. Virtual Server Auto-Scaling Group Constraints
Caution
If the Image you are using is a discontinued Standard Image, Scale-out will not work. If it is a Custom Image, Scale-out continues to work even after that Image version is no longer provided.
Before the end of support for the Image you are using, we recommend replacing the Launch Configuration with the latest version of the Image or a Custom Image.
The following services must be pre-configured before creating this service. Refer to the guide provided for each service and prepare them in advance.
Table. Virtual Server Auto-Scaling Preliminary Service
2.2.1.1 - Monitoring Metrics
Virtual Server Auto-Scaling is a service provided for Virtual Server targets, providing individual Virtual Server monitoring metrics and monitoring metrics provided by Cloud Monitoring-based policies.
Virtual Server monitoring metrics
The following table shows the monitoring metrics of Virtual Server that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, please refer to the Cloud Monitoring guide.
For Windows OS, memory-related metrics are not provided.
Performance Item
Detailed Description
Unit
Memory Total [Basic]
Total memory bytes
bytes
Memory Used [Basic]
Currently used memory bytes
bytes
Memory Swap In [Basic]
Bytes swapped in (disk to memory)
bytes
Memory Swap Out [Basic]
Bytes swapped out (memory to disk)
bytes
Memory Free [Basic]
Unused memory bytes
bytes
Disk Read Bytes [Basic]
Read bytes
bytes
Disk Read Requests [Basic]
Number of Read Requests
cnt
Disk Write Bytes [Basic]
Write bytes
bytes
Disk Write Requests [Basic]
Write Request Count
cnt
CPU Usage [Basic]
1-minute average system CPU usage rate
%
Instance State [Basic]
Instance Status
state
Network In Bytes [Basic]
Received bytes
bytes
Network In Dropped [Basic]
Received Packet Drop
cnt
Network In Packets [Basic]
Received Packet Count
cnt
Network Out Bytes [Basic]
Transmission bytes
bytes
Network Out Dropped [Basic]
Transmission Packet Drop
cnt
Network Out Packets [Basic]
Transmission Packet Count
cnt
Table. Virtual Server Monitoring Metrics (Default Provided)
Monitoring metrics provided by Cloud Monitoring-based policies
The following table shows the monitoring metrics provided by the policy of Cloud Monitoring-based Auto-Scaling Group. For more information on policy settings, see Managing Policies.
Performance Item
Detailed Description
Unit
CPU Usage [Basic]
1-minute average system CPU usage rate
%
Memory Used [Basic]
Currently used memory bytes
bytes
Network In Bytes [Basic]
Received bytes
bytes
Network In Packets [Basic]
Number of Received Packets
cnt
Network Out Bytes [Basic]
Transmission bytes
bytes
Network Out Packets [Basic]
Transmission Packet Count
cnt
Table. Monitoring metrics provided by Cloud Monitoring-based policies
2.2.1.2 - ServiceWatch Metrics
Virtual Server Auto-Scaling is a service offered for Virtual Servers that provides individual Virtual Server monitoring metrics as well as monitoring metrics supplied by ServiceWatch‑based policies.
Refer to the ServiceWatch guide for how to view metrics in ServiceWatch.
ServiceWatch monitoring metrics provided by the Auto-Scaling Group policy
The table below shows the ServiceWatch monitoring metrics provided by the Auto-Scaling Group policy. For detailed information on Auto-Scaling Group policy configuration, see Managing Policies.
Performance Item
Detailed Description
Unit
CPU Usage
CPU usage
Percent
Network In Bytes
Bytes received on the network interface
Bytes
Network In Packets
Number of packets received on the network interface
Count
Network Out Bytes
Bytes transmitted on the network interface
Bytes
Network Out Packets
Number of packets transmitted on the network interface
Count
Table. ServiceWatch monitoring metrics provided by the Auto-Scaling Group policy
2.2.2 - How-to guides
Users can create an Auto-Scaling Group service by entering the required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating an Auto-Scaling Group
You can create an Auto-Scaling Group service through the Samsung Cloud Platform Console.
Note
To create an Auto-Scaling Group, you need to create a Launch Configuration in advance.
Please refer to Creating a Launch Configuration.
To create an Auto-Scaling Group, follow these steps:
Click All Services > Compute > Virtual Server menu. It will move to the Virtual Server’s Service Home page.
Click the Auto-Scaling Group menu. It will move to the Auto-Scaling Group list page.
On the Auto-Scaling Group list page, click the Create Auto-Scaling Group button. It will move to the Create Auto-Scaling Group page.
On the Create Auto-Scaling Group page, enter the information required to create the service.
In the Launch Configuration section, select a Launch Configuration.
You can create a new Launch Configuration by clicking the Create Launch Configuration button.
In the Service Information Input section, enter or select the required information.
Category
Required
Detailed Description
Auto-Scaling Group Name
Required
Auto-Scaling Group name
Manage servers of the same type and purpose as a group
Server Name
Required
Server name within the Auto-Scaling Group
An identifier to distinguish servers created within the Auto-Scaling Group, automatically assigned based on the input server name and sequence
Number of Servers
Required
Number of servers to create in the Auto-Scaling Group
Enter a value between 0 and 20 (Min≤Desired≤Max)
Min: Set the minimum number of servers for the Auto-Scaling Group to maintain
Desired: Set the target number of servers within the Auto-Scaling Group, also meaning the initial number of servers created when the Auto-Scaling Group is created
Max: Set the maximum number of servers that the Auto-Scaling Group can maintain
After creating the Auto-Scaling Group, you can modify the settings using the Modify button. For more information, refer to Modifying the Number of Servers
Manual Desired Server Count Setting
Optional
Choose whether to manually change the Desired server count
Click the Add Notification button to open the Add Notification popup window
For more information on notification settings, refer to the detailed information
Click the Modify button in the notification recipient list to change the notification information
Set Later
Optional
Set the notification recipient and method after creating the Auto-Scaling Group, on the detailed information page
Table. Auto-Scaling Group Notification Settings Items
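The server-count rule in the table above (each value between 0 and 20, with Min ≤ Desired ≤ Max) can be expressed as a small validation sketch. The function name is an illustrative assumption, not part of the console API.

```python
def validate_server_counts(minimum: int, desired: int, maximum: int) -> None:
    """Raise ValueError unless 0 <= Min <= Desired <= Max <= 20.

    Illustrative sketch of the console input rule described above.
    """
    for name, value in (("Min", minimum), ("Desired", desired), ("Max", maximum)):
        if not 0 <= value <= 20:
            raise ValueError(f"{name} must be between 0 and 20")
    if not minimum <= desired <= maximum:
        raise ValueError("counts must satisfy Min <= Desired <= Max")


validate_server_counts(1, 2, 5)  # valid: passes silently
```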
In the Additional Information Input section, enter or select the required information.
Category
Required
Detailed Description
Tag
Optional
Add a tag
Up to 50 tags can be added per resource
Click the Add Tag button, then enter or select the Key and Value
Table. Auto-Scaling Group Additional Information Input Items
In the Summary panel, review the created details and estimated billing amount, then click the Complete button.
After creation is complete, you can find the created Auto-Scaling Group on the Auto-Scaling Group list page.
Checking Auto-Scaling Group Details
The Auto-Scaling Group service allows you to view and modify the overall resource list and detailed information. The Auto-Scaling Group details page consists of Details, Policy, Schedule, Virtual Server, Load Balancer, Tag, and Work History tabs.
To check the Auto-Scaling Group details, follow these steps:
Click All Services > Compute > Virtual Server menu. It will move to the Virtual Server’s Service Home page.
Click the Auto-Scaling Group menu. It will move to the Auto-Scaling Group list page.
On the Auto-Scaling Group list page, click the resource you want to check the details for. It will move to the Auto-Scaling Group details page.
The Auto-Scaling Group details page displays status information and additional feature information, and consists of Details, Policy, Schedule, Virtual Server, Load Balancer, Tag, and Work History tabs.
Category
Detailed Description
Auto-Scaling Group Status
The status of the Auto-Scaling Group created by the user
Creating: Auto-Scaling Group creation in progress
In Service: Serviceable state
Scale In: Scale In in progress
Scale Out: Scale Out in progress
Cool Down: Cool-down wait in progress
Terminating: Auto-Scaling Group deletion in progress
Attach to LB: Connecting to Load Balancer in progress
Detach from LB: Detaching from Load Balancer in progress
Auto-Scaling Group Deletion
Button to delete the Auto-Scaling Group
Table. Auto-Scaling Group Status Information and Additional Features
Details
On the Auto-Scaling Group Details page, you can check the detailed information of the selected resource and modify the information if necessary.
Category
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Auto-Scaling Group refers to Auto-Scaling Group SRN
Resource Name
Resource name
Auto-Scaling Group refers to Auto-Scaling Group name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Modification Time
Time when the service information was modified
Auto-Scaling Group Name
Auto-Scaling Group name
Launch Configuration Name
Launch Configuration name selected when creating the Auto-Scaling Group
Configuration template used when creating a Virtual Server in the Auto-Scaling Group
Tag
You can check the tag information of the selected resource on the Auto-Scaling Group Details page and add, modify, or delete tags.
Category
Detailed Description
Tag List
Tag list
Key and Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, you can search and select from the existing Key and Value list
Table. Auto-Scaling Group Details - Tag Tab Items
Work History
You can check the work history of the selected resource on the Auto-Scaling Group Details page.
Category
Detailed Description
Work History List
Resource change history
Work time, resource ID, resource name, work details, event topic, work result, and worker information can be checked
Table. Auto-Scaling Group Details - Work History Tab Items
Managing Auto-Scaling Group Resources
If you need to manage the created Auto-Scaling Group, you can perform tasks on the Auto-Scaling Group Details page.
Modifying Launch Configuration
You can modify the Launch Configuration of the Auto-Scaling Group.
Note
Modifying the Launch Configuration does not apply to existing servers in the Auto-Scaling Group. It only applies to newly created servers. If you want to apply the modified Launch Configuration to all servers in the Auto-Scaling Group, adjust the server count (Desired) to 0 to delete all existing servers, and then modify the server count (Desired) to the desired quantity.
To modify the Launch Configuration of the Auto-Scaling Group, follow these steps:
Click All Services > Compute > Virtual Server. The Virtual Server Service Home page opens.
Click Auto-Scaling Group. The Auto-Scaling Group List page opens.
On the Auto-Scaling Group List page, click the resource for which you want to modify the Launch Configuration. The Auto-Scaling Group Details page opens.
Click the Modify button next to the Launch Configuration name. The Modify Launch Configuration popup window opens, where you can view the list of available Launch Configurations.
Category
Detailed Description
Launch Configuration Name
Launch Configuration name
Image
Launch Configuration OS image
Server Type
Launch Configuration server type
Block Storage
Launch Configuration Block Storage settings
Auto-Scaling Group Count
Number of Auto-Scaling Groups to which the Launch Configuration is applied
Detailed View
Button to view detailed Launch Configuration information
Table. Launch Configuration List Items
In the Modify Launch Configuration popup window, select the Launch Configuration you want to modify and click OK. The Launch Configuration Modification Notification popup window opens. Check the message and click OK.
Modifying Server Count
You can modify the server count of the Auto-Scaling Group.
Note
The maximum number of servers that can be set is 50. However, if a Load Balancer is present, the number of servers not connected to the Load Balancer is excluded.
To modify the server count of the Auto-Scaling Group, follow these steps:
Click All Services > Compute > Virtual Server. The Virtual Server Service Home page opens.
Click Auto-Scaling Group. The Auto-Scaling Group List page opens.
On the Auto-Scaling Group List page, click the resource for which you want to modify the server count. The Auto-Scaling Group Details page opens.
Click the Edit Server Count button. The Edit Server Count popup window opens.
In the Edit Server Count popup window, enter the required items and click the Confirm button.
Classification
Required
Detailed Description
Server Count > Min
Required
Modify the minimum number of servers
Set the minimum number of servers that the Auto-Scaling Group will maintain
Server Count > Desired
Required
Modify the target server count
Set the target server count in the Auto-Scaling Group
If Desired Server Count Manual Setting is Not Used, you cannot modify the Desired server count. To modify the Desired server count, refer to Modifying Desired Server Count Manual Setting
Server Count > Max
Required
Modify the maximum server count
Set the maximum number of servers that the Auto-Scaling Group can maintain
Table. Auto-Scaling Group Server Count Modification Items
Canceling a Virtual Server Created in an Auto-Scaling Group
A Virtual Server created in an Auto-Scaling Group can be canceled by reducing the desired number of servers.
To cancel a Virtual Server created in an Auto-Scaling Group, follow these steps:
Click All Services > Compute > Virtual Server. You will be taken to the Virtual Server’s Service Home page.
Click Auto-Scaling Group. You will be taken to the Auto-Scaling Group List page.
On the Auto-Scaling Group List page, click the resource you want to cancel. You will be taken to the Auto-Scaling Group Details page.
Click the Edit button in the Server Count section. The Edit Server Count popup window will open.
In the Edit Server Count popup window, reduce the Desired count and click the Confirm button. The Desired server count will be adjusted, and the Virtual Server will be canceled.
Notice
If Manual Setting of Desired Server Count is set to Not Used, you will not be able to modify the Desired server count. To modify the Desired server count, refer to Modifying Manual Setting of Desired Server Count.
Modifying Desired Server Count Manual Setting
You can change the Desired server count manual setting of the Auto-Scaling Group.
Note
If you do not select Use for the Desired server count manual setting, you cannot modify the Desired server count in the detailed information server count modification.
To modify the Desired server count manual setting of the Auto-Scaling Group, follow these steps:
Click the All Services > Compute > Virtual Server menu. The Virtual Server Service Home page opens.
Click the Auto-Scaling Group menu. The Auto-Scaling Group List page opens.
On the Auto-Scaling Group List page, click the resource for which you want to change the Desired server count manual setting. The Auto-Scaling Group Details page opens.
Click the Edit button for the server count. The Desired Server Count Manual Setting popup window opens.
In the Desired Server Count Manual Setting popup window, select whether to use it and click the Confirm button.
Setting Security Group
You can set the Security Group for the Auto-Scaling Group.
Note
If you modify the Security Group, it will not be applied to the existing servers in the Auto-Scaling Group, but only to new servers created afterwards. If you want to apply the modified Security Group to all servers in the Auto-Scaling Group, adjust the server count (Desired) to 0 to delete all existing servers, and then modify the server count (Desired) to the desired number.
To set the Security Group for the Auto-Scaling Group, follow these steps:
Click the All Services > Compute > Virtual Server menu. The Virtual Server Service Home page opens.
Click the Auto-Scaling Group menu. The Auto-Scaling Group List page opens.
On the Auto-Scaling Group List page, click the resource for which you want to set the Security Group. The Auto-Scaling Group Details page opens.
Click the Edit button for the Security Group. The Security Group Modification popup window opens, where you can view the list of available Security Groups.
Classification
Detailed Description
Security Group Name
Security Group name
Table. Security Group List Items
In the Security Group Modification popup window, select the Security Group and click the Confirm button. The Security Group Modification Notification popup window opens. Check the message in the notification popup window and click the Confirm button.
Managing Additional Auto-Scaling Group Information
You can set the Load Balancer to use and select the LB server group for the Auto-Scaling Group. For an Auto-Scaling Group that is using a Load Balancer, you can change it to not use it.
Modifying Load Balancer Draining Timeout
You can set the Load Balancer Draining Timeout for the Auto-Scaling Group.
Note
Draining Timeout is the time to wait before disconnecting the server from the Load Balancer.
You can set the Draining Timeout to safely clean up sessions, as there may be remaining sessions connected to the server.
If the Load Balancer is Not Used, the Draining Timeout cannot be set.
The default value is 300 seconds, and you can set it to a minimum of 1 second and a maximum of 3,600 seconds.
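The rule in the note above (default 300 seconds, valid range 1 to 3,600) can be expressed as a small validation sketch. The function name is illustrative, not part of the console API.

```python
from typing import Optional

DEFAULT_DRAINING_TIMEOUT_S = 300  # documented default


def resolve_draining_timeout(seconds: Optional[int]) -> int:
    """Return the effective Draining Timeout in seconds.

    Illustrative sketch: falls back to the documented default when unset,
    and enforces the documented 1-3,600 second range.
    """
    if seconds is None:
        return DEFAULT_DRAINING_TIMEOUT_S
    if not 1 <= seconds <= 3600:
        raise ValueError("Draining Timeout must be between 1 and 3,600 seconds")
    return seconds


print(resolve_draining_timeout(None))  # prints 300
print(resolve_draining_timeout(60))    # prints 60
```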
To set the Load Balancer Draining Timeout for the Auto-Scaling Group, follow these steps:
Click the All Services > Compute > Virtual Server menu. The Virtual Server Service Home page opens.
Click the Auto-Scaling Group menu. The Auto-Scaling Group List page opens.
On the Auto-Scaling Group List page, click the resource for which you want to set the Load Balancer Draining Timeout. The Auto-Scaling Group Details page opens.
Click the Load Balancer tab. The Load Balancer list page opens.
Click the Edit button for the Draining Timeout. The Draining Timeout Modification popup window opens.
In the Draining Timeout Modification popup window, select whether to use the Draining Timeout and enter the Draining Timeout time (in seconds).
In the Draining Timeout Modification popup window, check the input values and click the Confirm button. The Draining Timeout Modification Notification popup window opens. Check the message in the notification popup window and click the Confirm button.
Using Load Balancer
You can modify the Load Balancer for the Auto-Scaling Group. To set the Load Balancer for the Auto-Scaling Group, follow these steps:
Note
When the Auto-Scaling Group’s server is created, it is automatically connected to the selected Load Balancer’s LB server group as a member, and when the server is terminated, it is disconnected from the LB server group.
If the Draining Timeout is Used, the server is disconnected from the LB server group after waiting for the Draining Timeout (in seconds).
For Load Balancer modification, the member is detached from the LB server group and waits in the Detach from LB state. For Scale In, the member is disconnected from the LB server group and waits in the Scale In state.
Click the All Services > Compute > Virtual Server menu. The Virtual Server Service Home page opens.
Click the Auto-Scaling Group menu. The Auto-Scaling Group List page opens.
On the Auto-Scaling Group List page, click the resource for which you want to set the Load Balancer. The Auto-Scaling Group Details page opens.
Click the Load Balancer tab. The Load Balancer list page opens.
Click the Edit button for the Load Balancer. The Load Balancer Modification popup window opens.
In the Load Balancer Modification popup window, select whether to use it. If you select Use, you can select the Load Balancer.
Classification
Detailed Description
LB Server Group
LB server group name
Select the LB server group created in the selected VPC
LB server groups using Weighted Round Robin or Weighted Least Connection load balancing cannot be selected
Port
LB server group port information
Enter the port information required for registering the LB server group member
Enter a value between 1 and 65,534
Table. Load Balancer List Items
You can add an LB server group by clicking the + button. Up to 3 can be added. You can remove the added Load Balancer by clicking the X button.
Check the Load Balancer list and click the Confirm button. The Load Balancer Modification Notification popup window opens. Check the message in the notification popup window and click the Confirm button.
Caution
Be cautious when detaching/attaching servers from Load Balancer, as it may affect the service.
If Draining Timeout is in use, setting Load Balancer to not in use or removing some connected Load Balancers using the X button will not immediately detach the server. The server will be detached from Load Balancer after waiting for the Draining Timeout (seconds). At this time, Auto-Scaling Group will be in Detach from LB state.
You can modify the Load Balancer of Auto-Scaling Group to not in use. To set Load Balancer to not in use in Auto-Scaling Group, follow the procedure below.
Click All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click Auto-Scaling Group menu. Move to the Auto-Scaling Group list page.
Click the resource to set Load Balancer in the Auto-Scaling Group list page. Move to the Auto-Scaling Group details page.
Click the Load Balancer tab. Move to the Load Balancer list page.
Click the Modify button of Load Balancer. The Load Balancer modification popup window opens.
Select whether to use Load Balancer in the Load Balancer modification popup window. If you deselect Use, Load Balancer will not be used.
Confirm the deselection of Use and click the Confirm button. The Load Balancer Modification Notification popup window opens. Check the message in the notification popup window and click the Confirm button.
Deleting Auto-Scaling Group
Deleting unused Auto-Scaling Groups can reduce operating costs. However, deleting an Auto-Scaling Group may immediately stop the service in operation, so you must consider the impact of service termination before proceeding with the deletion.
Caution
Be cautious, as data cannot be recovered after deletion.
To delete an Auto-Scaling Group, follow the procedure below.
Click All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click Auto-Scaling Group menu. Move to the Auto-Scaling Group list page.
Click the resource to delete in the Auto-Scaling Group list page. Move to the Auto-Scaling Group details page.
Click the Delete Auto-Scaling Group button.
After deletion is complete, check if the resource has been deleted in the Auto-Scaling Group list page.
2.2.2.1 - Launch Configuration
To create an Auto-Scaling Group, you need to create a Launch Configuration in advance.
Creating a Launch Configuration
You can create and use a Launch Configuration on the Samsung Cloud Platform Console.
To create a Launch Configuration, follow these steps:
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Launch Configuration menu. It moves to the Launch Configuration list page.
Click the Create Launch Configuration button on the Launch Configuration list page. It moves to the Create Launch Configuration page.
Select the required information in the Image and Version Selection section of the Create Launch Configuration page and click the Next button.
Table. Launch Configuration Service Information Input Items
Enter the information in the Additional Information Input section of the Create Launch Configuration page and click the Next button.
Category
Required
Description
Init Script
Optional
Script that runs when the server starts using the Launch Configuration
Enter within 45,000 bytes
The Init Script must be a batch script for Windows or a shell script or cloud-init for Linux, depending on the selected image.
Tag
Optional
Add a tag
Up to 50 tags can be added per resource
Click the Add Tag button, enter the Key and Value, or select them
Table. Launch Configuration Additional Information Input Items
Check the input information and estimated cost on the Create Information Confirmation page, and click the Complete button.
After creation is complete, check the created Launch Configuration on the Launch Configuration list page.
Checking Launch Configuration Details
The Launch Configuration service allows you to check the overall resource list and detailed information, and modify it. The Launch Configuration details page consists of Details, Tags, and Work History tabs.
To check the Launch Configuration details, follow these steps:
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Launch Configuration menu. It moves to the Launch Configuration list page.
Click the resource to check the details on the Launch Configuration list page. It moves to the Launch Configuration details page.
The top of the Launch Configuration details page displays status information and additional feature information, and consists of Details, Tags, and Work History tabs.
Category
Description
Launch Configuration Status
The status of the Launch Configuration created by the user
Active: Available status
Launch Configuration Deletion
Button to delete the Launch Configuration
Table. Launch Configuration Status Information and Additional Features
Details
You can check and modify the detailed information of the selected resource on the Launch Configuration list page.
Category
Description
Service
Service category
Resource Type
Service name
SRN
Unique resource ID in Samsung Cloud Platform
In the case of Launch Configuration, it refers to the Launch Configuration SRN
Resource Name
Resource name
In the case of Launch Configuration, it refers to the Launch Configuration name
Resource ID
Unique resource ID in the service
Creator
The user who created the service
Creation Time
The time when the service was created
Modifier
The user who modified the service information
Modification Time
The time when the service information was modified
Launch Configuration Name
Launch Configuration name
Image
The image name selected when creating the Launch Configuration
The OS image used when creating a server using the Launch Configuration for the Auto-Scaling Group
Number of Auto-Scaling Groups
The number of Auto-Scaling Groups using the Launch Configuration
Server Type
The server type set in the Launch Configuration
Block Storage
Block Storage information set in the Launch Configuration
Type and capacity
Keypair
Server authentication information set in the Launch Configuration
Keypair information used to connect to the server created using the Launch Configuration for the Auto-Scaling Group
Init Script
Init Script set in the Launch Configuration
Script that runs when the server starts using the Launch Configuration for the Auto-Scaling Group
Table. Launch Configuration Details Tab Items
Tags
You can check the tag information of the selected resource on the Launch Configuration list page, and add, change, or delete it.
Category
Description
Tag List
Tag list
Check the Key and Value information of the tag
Up to 50 tags can be added per resource
Search and select from the existing Key and Value list when entering tags
Table. Launch Configuration Tags Tab Items
Work History
You can check the work history of the selected resource on the Launch Configuration list page.
Category
Description
Work History List
Resource change history
Check the work time, resource ID, resource name, work details, event topic, work result, and worker information
Table. Launch Configuration Work History Tab Detailed Information Items
Deleting a Launch Configuration
You can reduce operating costs by deleting unused Launch Configurations. However, deleting a Launch Configuration may immediately stop the operating service, so you should consider the impact of stopping the service before proceeding with the deletion.
Caution
Data cannot be recovered after deletion, so proceed with caution.
To delete a Launch Configuration, follow these steps:
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Launch Configuration menu. It moves to the Launch Configuration list page.
Click the resource to delete on the Launch Configuration list page. It moves to the Launch Configuration details page.
Click the Delete Launch Configuration button.
After deletion is complete, check that the resource has been deleted on the Launch Configuration list page.
Caution
A Launch Configuration applied to an Auto-Scaling Group cannot be deleted.
Delete the Auto-Scaling Group first, and then delete the Launch Configuration.
2.2.2.2 - Managing Policies
The number of servers in an Auto-Scaling Group can be dynamically adjusted based on monitoring metrics: when a metric exceeds the set threshold, the number of servers is adjusted. You can choose one of three ways to adjust the count: increase or decrease the number of servers by a specified number, increase or decrease it by a specified ratio, or fix it to a specified value.
When a server is started or terminated by a policy, a monitoring metric such as CPU usage may temporarily exceed the threshold set in the policy. Because this is only momentary, a cooldown time is set so that it is not judged as an abnormal situation. You can add and manage policies for an Auto-Scaling Group created in the Samsung Cloud Platform Console.
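As a rough sketch of this evaluation, assuming metric samples collected over the evaluation period and epoch-second timestamps (the function and parameter names are illustrative, not a platform API):

```python
def should_scale(samples, operator, threshold, last_action_ts, cooldown_s, now):
    """Decide whether a policy fires: the condition must hold for every
    sample in the evaluation period, and the cooldown since the last
    scaling action must have elapsed. Illustrative sketch only."""
    compare = {
        ">=": lambda v: v >= threshold,
        ">":  lambda v: v > threshold,
        "<=": lambda v: v <= threshold,
        "<":  lambda v: v < threshold,
    }[operator]
    condition_met = all(compare(v) for v in samples)
    in_cooldown = (now - last_action_ts) < cooldown_s
    return condition_met and not in_cooldown
```

For example, with a 300-second cooldown, CPU samples of 65, 70, and 61 against the condition ">= 60" fire the policy only if at least 300 seconds have passed since the last scaling action.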
Adding a Policy
You can add a policy to an Auto-Scaling Group. To add a policy to an Auto-Scaling Group, follow these steps:
Click All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. Move to the Auto-Scaling Group List page.
On the Auto-Scaling Group List page, click the resource to view detailed information. Move to the Auto-Scaling Group Details page.
Click the Policy Tab. Move to the Policy Tab page.
Click the Add Policy button. The Add Policy popup window opens.
Classification
Required
Detailed Description
Classification
Required
Policy classification
Scale In: Server reduction
Scale Out: Server increase
Policy Name
Required
Policy name for distinction
Execution Condition
Required
Condition for executing the policy
Statistic: Method of calculating the metric type
Average: Average of servers in the Auto-Scaling Group
Min: Minimum value among servers in the Auto-Scaling Group
Max: Maximum value among servers in the Auto-Scaling Group
Note: Memory usage policy is not available for Windows servers
Operator: >=, >, <=, <
Threshold: Threshold for the metric type
Period: Continuous time to trigger the execution condition (N minutes)
Execution Unit
Required
Method of executing the policy
Policy Type: Select the type of policy to execute.
Increase or decrease the number of servers by a specified number: Increase or decrease the target value
Increase or decrease the number of servers by a specified ratio: Increase or decrease the target value ratio
Fix the number of servers to a specified value: Fix the target value
Target Value: Number or ratio to execute the selected Policy Type
Cooldown
Required
Time to wait (in seconds) when a server is started or terminated due to a policy
Default value is 300 seconds, and it can be set between 60 seconds and 3,600 seconds.
Table. Add Policy Popup Items
Note
Policy > Cooldown Setting
When a server is started or terminated by a policy, the group waits for the set cooldown time. A monitoring metric such as CPU usage may temporarily exceed the threshold set in the policy, but because this is only a momentary spike and not a condition for adjusting the number of servers, the cooldown time provides a waiting period.
Guide
Policy execution operates within the set minimum and maximum number of servers.
Even if a policy would increase, decrease, or fix the number of servers beyond the configured minimum or maximum, the group stays within the set minimum and maximum number of servers.
Example: If the minimum number of servers is 3, fixing the number of servers to 1 will not reduce the group to 1; it is maintained at the minimum of 3.
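The limit behavior described in this guide can be sketched as a simple clamp (illustrative helper, not the platform's own code):

```python
def apply_limits(target, min_servers, max_servers):
    """Clamp a policy's computed server count to the group's configured
    minimum and maximum, as the guide above describes."""
    return max(min_servers, min(target, max_servers))
```

With a minimum of 3, fixing the count to 1 still yields 3, matching the example above.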
In the Add Policy popup window, enter the required values and click the Confirm button. The added policy can be checked in the Policy List.
Policy Creation Example
The following is an explanation of the policy example. Refer to it when creating a policy.
Policy Example Explanation 1
Classification
Execution Condition
Execution Unit
Cooldown
Scale Out
Average CPU Usage >= 60% for 1 minute
Increase the number of servers by a specified number, 1 server
300 seconds
Table. Auto-Scaling Group Policy Example 1
If the average CPU usage of the servers in the Auto-Scaling Group is 60% or higher for 1 minute, 1 server is added.
When a server is added, the cooldown time is 300 seconds. During the cooldown time, no additional servers are added or terminated due to the policy.
After the cooldown time ends, the policy execution condition is checked again.
Policy Example Explanation 2
Classification
Execution Condition
Execution Unit
Cooldown
Scale In
Min CPU Usage <= 5% for 1 minute
Decrease the number of servers by a specified ratio, 50%
300 seconds
Table. Auto-Scaling Group Policy Example 2
If the minimum CPU usage of the servers in the Auto-Scaling Group is 5% or lower for 1 minute, 50% of the current number of servers are terminated.
When a server is terminated, the cooldown time is 300 seconds. During the cooldown time, no additional servers are added or terminated due to the policy.
After the cooldown time ends, the policy execution condition is checked again.
Policy Example Explanation 3
Classification
Execution Condition
Execution Unit
Cooldown
Scale Out
Max CPU Usage >= 90% for 1 minute
Fix the number of servers to a specified value, 5 servers
300 seconds
Table. Auto-Scaling Group Policy Example 3
If the maximum CPU usage of the servers in the Auto-Scaling Group is 90% or higher for 1 minute, the number of servers is fixed to 5.
During the server creation, the cooldown time is 300 seconds. During the cooldown time, no additional servers are added or terminated due to the policy.
After the cooldown time ends, the policy execution condition is checked again.
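The three execution units used in the examples above can be summarized as follows. The rounding of ratio results is an assumption (rounding up); the exact rounding rule is not documented here, and the function name is illustrative:

```python
import math

def new_server_count(current, policy_type, target_value):
    """Compute the post-policy server count for the three execution units.
    Assumption: ratio results are rounded up (math.ceil)."""
    if policy_type == "number":        # increase/decrease by a fixed number
        return current + target_value  # use a negative target_value for Scale In
    if policy_type == "ratio":         # increase/decrease by a percentage
        return current + math.ceil(current * target_value / 100)
    if policy_type == "fixed":         # fix to the specified value
        return target_value
    raise ValueError(policy_type)
```

For instance, Example 2's "decrease by 50%" on a group of 4 servers yields 2, before the minimum/maximum limits are applied.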
Modifying a Policy
You can modify a policy of an Auto-Scaling Group. To modify a policy of an Auto-Scaling Group, follow these steps:
Click All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. It moves to the Auto-Scaling Group list page.
In the Auto-Scaling Group list page, click on the resource to check the detailed information. It moves to the Auto-Scaling Group details page.
Click on the Policy tab. It moves to the Policy tab page.
Click on the More > Edit button of the policy to be modified. The Policy modification popup opens.
Classification
Required
Detailed Description
Classification
Required
Policy classification
Scale In: Server count reduction
Scale Out: Server count increase
Policy Name
Required
Policy name for distinction
Execution Condition
Required
Condition for executing the policy
Statistic: Method of calculating Metric Type
Average: Average of servers in Auto-Scaling Group
Min: Minimum value among servers in Auto-Scaling Group
Max: Maximum value among servers in Auto-Scaling Group
Note: Memory usage policy cannot be set for Windows servers
Operator: >=, >, <=, <
Threshold: Threshold corresponding to Metric Type
Period: Continuous time (N minutes) to trigger the execution condition
Execution Unit
Required
Method of executing the policy
Policy Type: Select the type of policy to be executed.
Increase or decrease the server count by a specified number: Increase or decrease the server count by the target value
Increase or decrease the server count by a specified ratio: Increase or decrease the server count by the target value ratio
Fix the server count to the input value: Fix the server count to the target value
Target Value: The number or ratio of the selected Policy Type to be executed
Cooldown
Required
Waiting time (in seconds) when a server is started or terminated due to a policy
Default value is 300 seconds, and it can be set between 60 seconds and 3,600 seconds
Table. Policy modification popup items
Click the Confirm button after entering the required values in the Policy modification popup window.
Policy Addition and Modification Restrictions
There are restrictions when adding or modifying policies, depending on the policy classification, execution condition, and execution condition range.
Refer to the examples of restrictions below and add or modify policies accordingly.
Example 1 - Check for duplicate registration of policy classification and execution condition
Duplicate registration is not allowed when the policy classification (Scale Out or Scale In) and execution condition (Metric type) are the same.
Policy Classification
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric Type)
Execution Condition Range
Scale Out
ScaleOutPolicy
Average
CPU Usage
>= 60%
Table. Policy restriction example 1 - Pre-registered policy
If a policy is already registered as shown above, it is not possible to add or modify a policy with the same classification (Scale Out) and execution condition (Metric type = CPU Usage).
Example 2 - Check the execution condition range for the same execution condition (Metric type) and execution condition (Statistic)
When the policy distinction (Scale Out or Scale In) is different, the execution condition range (Comparison operator + Threshold) cannot be duplicated for the same execution condition (Metric type) and execution condition (Statistic).
Policy Distinction
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric type)
Execution Condition Range
Scale Out
ScaleOutPolicy
Average
CPU Usage
>= 60%
Table. Policy Constraint Example 2 - Pre-registered Policy
In the case where a policy is registered as above, it is not possible to add a policy or modify it as follows:
Since a Scale Out policy is already registered for an average CPU Usage of 60% or higher, a Scale In policy for an average CPU Usage of 60% or lower cannot be registered: the value 60% would satisfy both execution condition ranges, making them duplicates.
Policy Distinction
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric type)
Execution Condition Range
Scale In
AddUpdatePolicy
Average
CPU Usage
<= 60%
Table. Policy Constraint Example 2 - Policy that cannot be added
If a policy is already registered as shown above, it is not possible to add or modify a policy with the same execution condition (Metric type = CPU Usage) and execution condition (Statistic = Average), and an execution condition range that overlaps with the existing policy.
Example 3 - Check the execution condition range for the same execution condition (Metric type) and execution condition (Statistic)
When the policy distinction (Scale Out or Scale In) is different, the execution condition range (Comparison operator + Threshold) cannot be duplicated for the same execution condition (Metric type) and execution condition (Statistic).
Policy Distinction
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric type)
Execution Condition Range
Scale In
ScaleInPolicy
Average
CPU Usage
<= 10%
Table. Policy Constraint Example 3 - Pre-registered Policy
In the case where a policy is registered as above, it is not possible to add or modify a policy as follows:
Since a Scale In policy is already registered for an average CPU usage of 10% or less, a Scale Out policy cannot be registered with a range of less than 60%, 60% or less, 10% or more, or greater than 9%, because each of these ranges overlaps the registered one.
Policy Distinction
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric type)
Execution Condition Range
Scale Out
AddUpdatePolicy1
Average
CPU Usage
< 60%
Scale Out
AddUpdatePolicy2
Average
CPU Usage
<= 60%
Scale Out
AddUpdatePolicy3
Average
CPU Usage
>= 10%
Scale Out
AddUpdatePolicy4
Average
CPU Usage
> 9%
Table. Policy Constraint Example 3 - Policies that cannot be added
Example 4 - Registration is possible when the execution condition range does not overlap
When the policy distinction (Scale Out or Scale In) is different, it is possible to register even if the execution condition (Statistic) is different or the execution condition range (Comparison operator + Threshold) does not overlap for the same execution condition (Metric type).
Policy Distinction
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric type)
Execution Condition Range
Scale Out
ScaleOutPolicy
Average
CPU Usage
>= 60%
Table. Policy constraint example 4 - Pre-registered policy
In the case where a policy is registered as above, it is possible to add or modify a policy as follows. If the execution condition range does not overlap or the execution condition (Statistic) is different, registration is possible.
Policy Distinction
Policy Name
Execution Condition (Statistic)
Execution Condition (Metric type)
Execution Condition Range
Scale In
AddUpdatePolicy1
Average
CPU Usage
<= 10%
Scale In
AddUpdatePolicy2
Min
CPU Usage
<= 60%
Table. Policy constraint example 4 - Policies that can be added
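The duplicate checks in Examples 2 to 4 reduce to a range-overlap test: two policies with the same execution condition (Metric type) and Statistic conflict when some metric value satisfies both execution condition ranges. A sketch of that test (hypothetical helper; the console performs an equivalent check server-side):

```python
def ranges_overlap(op1, t1, op2, t2):
    """Return True when some metric value satisfies both execution
    condition ranges (comparison operator + threshold), in which case
    the two policies are treated as duplicates."""
    INF = float("inf")
    def interval(op, t):
        # Map an operator/threshold pair to (low, low_inclusive, high, high_inclusive).
        return {
            ">=": (t, True, INF, False),
            ">":  (t, False, INF, False),
            "<=": (-INF, False, t, True),
            "<":  (-INF, False, t, False),
        }[op]
    lo1, lo1_inc, hi1, hi1_inc = interval(op1, t1)
    lo2, lo2_inc, hi2, hi2_inc = interval(op2, t2)
    # Intersect the two intervals, tracking endpoint inclusiveness.
    if lo1 == lo2:
        lo, lo_inc = lo1, lo1_inc and lo2_inc
    else:
        lo, lo_inc = (lo1, lo1_inc) if lo1 > lo2 else (lo2, lo2_inc)
    if hi1 == hi2:
        hi, hi_inc = hi1, hi1_inc and hi2_inc
    else:
        hi, hi_inc = (hi1, hi1_inc) if hi1 < hi2 else (hi2, hi2_inc)
    return lo < hi or (lo == hi and lo_inc and hi_inc)
```

For instance, ">= 60" and "<= 60" overlap at exactly 60 (Example 2), while ">= 60" and "<= 10" do not overlap and can coexist (Example 4).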
Deleting a Policy
It is possible to delete a policy from an Auto-Scaling Group. To delete a policy, follow the procedure below.
Click All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. Move to the Auto-Scaling Group List page.
On the Auto-Scaling Group List page, click the resource to check the detailed information. Move to the Auto-Scaling Group Details page.
Click the Policy Tab. Move to the Policy Tab page.
Select the policy to delete and click the Delete button. The Policy Delete Confirmation popup window opens.
Confirm the Policy Delete Confirmation popup window and click the Confirm button.
2.2.2.3 - Managing Schedules
You can set a daily, weekly, monthly, or one-time schedule that adjusts the number of servers to a desired value at a fixed time. This is useful when you can predict when the number of servers should be increased or decreased.
Add schedule
You can add a schedule to the Auto-Scaling Group. To add a schedule to the Auto-Scaling Group, follow these steps.
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. It moves to the Auto-Scaling Group list page.
On the Auto-Scaling Group list page, click the resource to check the detailed information. It moves to the Auto-Scaling Group details page.
Click the Schedule Tab. It moves to the Schedule Tab page.
Click the Add Schedule button. The Add Schedule popup window opens.
Classification
Required
Detailed Description
Schedule Name
Required
Name to distinguish the schedule
Server count selection
Required
When performing a schedule, select the number of servers to adjust
Min: The minimum number of servers that the Auto-Scaling Group will maintain
Desired: The target number of servers within the Auto-Scaling Group
Max: The maximum number of servers that the Auto-Scaling Group can maintain
Enter the number of servers
Required
Enter the value of the selected server number
Min value: Please enter a value between 0 and 50. (Min≤Desired≤Max)
Desired value: Please enter a value between 0 and 50. (Min≤Desired≤Max)
Max value: Please enter a value between 0 and 50. (Min≤Desired≤Max)
Period
Required
Schedule execution period
Daily: You can set the start date and end date, and permanent settings for daily schedule execution. You can also set time and time zone
Weekly: You can set start date and end date, permanent settings, and time and time zone settings. You can also select the day of the week for weekly schedule execution
Monthly: You can set start date and end date, permanent settings, and time and time zone settings. You can also enter the date for monthly schedule execution
Once: You can set time and time zone settings. You can also set the date for one-time schedule execution
Start Date
Select
Set schedule start date
Cannot be set to a date prior to the current date. The default is the current date.
End Date
Select
Set schedule end date
Cannot be set to a date prior to the current date. The default is the current date + 7.
Permanent
Select
If permanent is set, the schedule end date is set to 9999-12-31
Time
Required
Schedule execution time setting
Can be set in 30-minute units. A time earlier than the current date and time cannot be set
Time Zone
Required
Time zone corresponding to the schedule execution time (e.g., Asia/Seoul (GMT +09:00))
Day of the week
Required
If Weekly is selected as the period, select the day of the week on which to run the schedule
Date
Required
If Monthly is selected as the period, enter the Date on which to run the schedule
Please enter one or more values between -31 and 31, excluding 0. (Example: 3,4,5)
If Once is selected as the period, set the date on which to run the schedule
It cannot be set before the current date. The default value is the current date.
Table. Add Schedule Popup Items
In the Add Schedule popup window, enter the required values and click the OK button.
Check the message in the Add Schedule Confirmation popup window, then click the Confirm button.
Reference
If you select Monthly for the schedule period, you must enter the execution date (Date). Refer to the following when registering the schedule.
If you enter a number greater than 0, it means the date of the month.
Example: If you enter 1, it will be August 1, September 1, …, December 1
If you enter a number less than 0, it will be calculated from the last day of the month.
Entering -1 means the last day of the month.
Example: August 31, September 30, …, December 31
If -2 is entered, it means the day before the last day of the month.
Example: August 30, September 29, …, December 30
Because the last day of the month varies (31st, 30th, 29th, or 28th), negative numbers counted back from the last day are used, as shown above, to handle schedules that must run on the last day of each month.
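A sketch of this negative-date rule using Python's standard calendar module; the helper name and the None return for nonexistent dates (e.g. 31 in September) are assumptions, not platform behavior:

```python
import calendar
import datetime

def resolve_monthly_date(year, month, day):
    """Resolve a schedule Date entry (-31..31, excluding 0) to a calendar
    date, counting negative values back from the last day of the month.
    Returns None when the resolved day does not exist in that month."""
    if day == 0 or not -31 <= day <= 31:
        raise ValueError("Date must be between -31 and 31, excluding 0")
    last = calendar.monthrange(year, month)[1]  # last day of the month
    resolved = day if day > 0 else last + 1 + day
    if 1 <= resolved <= last:
        return datetime.date(year, month, resolved)
    return None
```

With this rule, -1 resolves to August 31 in August and -2 to September 29 in September, matching the examples above.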
Notice
When the schedule is executed, if the minimum number of servers set in the schedule is greater than the desired number of servers, or the maximum number of servers is less than the desired number, the desired number of servers is also adjusted.
If there are schedules with overlapping execution times, they may not run normally. Please try to avoid overlapping execution times if possible.
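The server-count validation from the table above and the desired-count adjustment described in the notice can be sketched as follows (hypothetical helper; actual console validation may differ):

```python
def validate_and_adjust(minimum, desired, maximum):
    """Check that each count is within 0..50 and Min <= Max, then pull
    Desired into the [Min, Max] range as the notice describes."""
    for value in (minimum, desired, maximum):
        if not 0 <= value <= 50:
            raise ValueError("server counts must be between 0 and 50")
    if minimum > maximum:
        raise ValueError("Min must not exceed Max")
    return min(max(desired, minimum), maximum)
```

For example, a schedule setting Min to 3 while Desired is 1 results in a Desired of 3.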
Modify Schedule
You can modify the schedule of the Auto-Scaling Group. To modify the schedule of the Auto-Scaling Group, follow these steps.
Click All services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. It moves to the Auto-Scaling Group list page.
On the Auto-Scaling Group list page, click the resource to check the detailed information. Move to the Auto-Scaling Group details page.
Click the Schedule Tab. It moves to the Schedule Tab page.
Click the More > Edit button of the schedule you want to modify. The Edit Schedule popup window will open.
Classification
Required
Detailed Description
Schedule Name
Required
Name to distinguish the schedule
Server count selection
Required
When performing a schedule, select the number of servers to adjust
Min: The minimum number of servers that the Auto-Scaling Group will maintain
Desired: The target number of servers within the Auto-Scaling Group
Max: The maximum number of servers that the Auto-Scaling Group can maintain
Enter the number of servers
Required
Enter the value of the selected server number
Min value: Please enter a value between 0 and 50. (Min≤Desired≤Max)
Desired value: Please enter a value between 0 and 50. (Min≤Desired≤Max)
Max value: Please enter a value between 0 and 50. (Min≤Desired≤Max)
Period
Required
Schedule execution period
Daily: You can set the start date and end date, and permanent settings for daily schedule execution. You can also set time and time zone
Weekly: You can set start date and end date, permanent settings, and time and time zone. You can also select the day of the week for weekly schedule execution
Monthly: You can set start date and end date, permanent settings, and time and time zone. You can also enter the date for monthly schedule execution
Once: You can set time and time zone. You can also set the date for one-time schedule execution
Start Date
Select
Set schedule start date
Cannot be set to a date prior to the current date. The default is the current date.
End Date
Select
Set schedule end date
Cannot be set to a date prior to the current date. The default is the current date + 7.
Permanent
Select
If permanent is set, the schedule end date is set to 9999-12-31
Time
Required
Schedule execution time setting
Can be set in 30-minute units. A time earlier than the current date and time cannot be set
Time Zone
Required
Time zone corresponding to the schedule execution time (e.g. Asia/Seoul (GMT +09:00))
Day
Required
If Weekly is selected as the period, select the day of the week on which to run the schedule
Date
Required
If Monthly is selected as the period, enter the Date on which to run the schedule
Please enter one or more values from -31 to 31, excluding 0. (Example: 3,4,5)
If Once is selected as the period, set the date on which to run the schedule
Cannot be set to a date before the current date. The default value is the current date.
Table. Schedule Modification Popup Items
In the Modify Schedule popup window, enter the required values and click the Confirm button.
Check the message in the Schedule Modification Confirmation popup window and click the Confirm button.
Delete Schedule
You can delete the schedule of the Auto-Scaling Group. To delete the schedule of the Auto-Scaling Group, follow the next procedure.
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. It moves to the Auto-Scaling Group list page.
On the Auto-Scaling Group list page, click the resource to check the detailed information. Move to the Auto-Scaling Group details page.
Click the Schedule Tab. It moves to the Schedule Tab page.
Select the schedule to be deleted and click the Delete button. The Schedule Deletion Confirmation popup window will open.
Check the message in the Schedule Deletion Confirmation popup window and click the Confirm button.
2.2.2.4 - Managing Notifications
You can specify the notification recipient to send a notification message via E-mail or SMS for a specific situation.
Reference
The notification method (E-mail or SMS) can be set on the Notification Settings page by selecting Service > Virtual Server Auto-Scaling as the notification target.
You can add notifications to the Auto-Scaling Group. To add notifications to the Auto-Scaling Group, follow these steps.
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. It moves to the Auto-Scaling Group list page.
Click the resource to add notification information on the Auto-Scaling Group list page. It moves to the Auto-Scaling Group details page.
Click the Notification Tab. You will be taken to the Notification Tab page.
Click the Add Notification button. The Add Notification popup window opens.
In the Add Notification popup window, enter the required values and click the Confirm button.
Classification
Detailed Description
Notification Point
Notification point when Auto-Scaling Group alert occurs
Server creation, Server termination, Server creation failure, Server termination failure, When policy execution conditions are met
Multiple selections are possible
Notification Recipient
User to receive notification when notification occurs
Click the Add Notification Recipient button to select a user
Only Samsung Cloud Platform users can be selected as recipients
Table. Notification Items
Caution
When adding a notification recipient, verify that the user has an email address before adding them. Only users with a login history (users who have registered their email address or mobile phone number) can receive notifications.
Check the message in the Add Notification Confirmation popup window, then click the Confirm button.
Modify Notification
You can modify the notification information of the Auto-Scaling Group. To modify the notification information of the Auto-Scaling Group, follow the procedure below.
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
Click the Auto-Scaling Group menu. It moves to the Auto-Scaling Group list page.
Click the resource to modify the notification information on the Auto-Scaling Group list page. It moves to the Auto-Scaling Group details page.
Click the Notification Tab. It moves to the Notification Tab page.
Click the More > Edit button for the notification information you want to modify in the notification list. The Edit Notification popup window opens.
Modify Notification In the notification modification popup window, modify the notification information and click the Confirm button.
Category
Detailed Description
Notification Point
The point at which a notification is sent when an Auto-Scaling Group event occurs
Server creation, Server termination, Server creation failure, Server termination failure, When policy execution conditions are met
Multiple selections are possible
Table. Notification Modification Items
Check the message in the Edit Notification Confirmation popup window, then click the Confirm button.
Delete Notification
You can delete the notification of Auto-Scaling Group. To delete the notification of Auto-Scaling Group, follow the procedure below.
Click the All Services > Compute > Virtual Server menu. You will be taken to the Virtual Server Service Home page.
Click the Auto-Scaling Group menu. You will be taken to the Auto-Scaling Group list page.
On the Auto-Scaling Group list page, click the resource whose notification you want to delete. You will be taken to the Auto-Scaling Group details page.
Click the Notification tab. You will be taken to the Notification tab page.
Select the notification to delete from the notification list, then click the Delete button. The Delete Notification Confirmation popup window opens.
Check the message in the Delete Notification Confirmation popup window, then click the Confirm button.
2.2.3 - API Reference
API Reference
2.2.4 - CLI Reference
CLI Reference
2.2.5 - Release Note
Virtual Server Auto-Scaling
2025.07.01
FEATURE: New feature added
Added notification feature to Virtual Server Auto-Scaling.
You can add notification settings in the Auto-Scaling Group creation or detail screen.
You can set the scaling policy when creating an Auto-Scaling Group.
You can set the Draining Timeout when connecting to the Load Balancer.
An Auto-Scaling Group can be connected to up to 50 Virtual Server instances, and to up to 3 LB server groups and ports.
2025.02.27
FEATURE: Virtual Server Auto-Scaling/Load Balancer service linkage release and NAT setting feature addition
Virtual Server Auto-Scaling feature change
Released in conjunction with the Load Balancer service released in February 2025.
NAT setting feature has been added to Auto-Scaling Group.
Samsung Cloud Platform common feature changes
Common CX changes were applied to Account, IAM, Service Home, tags, and more.
2024.11.19
NEW: Virtual Server Auto-Scaling Service Official Version Release
Virtual Server Auto-Scaling creates and manages Auto-Scaling Groups through a Launch Configuration, and monitors and manages the servers in each group.
It provides a schedule method that sets the desired number of servers at a fixed time, and a policy method that adjusts the number of servers based on CPU utilization.
2.3 - GPU Server
2.3.1 - Overview
Service Overview
GPU Server is a virtualized computing service that lets you freely allocate and use infrastructure resources such as CPU, GPU, and memory as needed, at the desired time, without purchasing them individually. It is suitable for tasks that require fast computation in a cloud environment, such as AI model experimentation, prediction, and inference, and lets you flexibly select and use resources with performance optimized for the task type and scale.
GPU Server provides the following features:
Provided Features
GPU Server Management: Through the web-based Console, users can self-service the full lifecycle, from GPU Server provisioning to monitoring and billing, including creation, deletion, and changes.
Provisioning by GPU Quantity: You can configure a virtual server by freely selecting the quantity of H100/A100 GPUs according to project purpose and scale.
High Performance GPU Provision: Provides high-performance GPU servers at the physical server level using the Pass-through method.
Storage Connection: Provides additional connected storage besides OS disks. You can connect and use Block Storage, File Storage, and Object Storage.
Strong Security Application: Protects servers safely by controlling Inbound/Outbound traffic exchanged with external internet or other VPCs (Virtual Private Cloud) through the Security Group service.
Monitoring: You can check monitoring information such as CPU, Memory, Disk, and GPU status corresponding to computing resources through the Cloud Monitoring service.
Network Setting Management: You can easily change the server's subnet/IP from the values set at initial creation, and use or release a NAT IP as needed.
Key Pair Method: Provides a Key Pair method instead of ID/PW access for secure OS access.
Image Management: You can create and manage Custom Images, with sharing functionality between projects.
ServiceWatch Service Integration Provision: You can monitor data through the ServiceWatch service.
Components
GPU Server provides GPU, NVSwitch, and NVLink on top of virtualized computing resources.
Warning
NVSwitch can be activated and used only for instance types that allocate 8 GPUs to a single GPU Server.
Specifications by GPU Type
A GPU (Graphics Processing Unit) performs the calculations needed to render the images that make up the computer screen. Because it is specialized for parallel processing, it can process large amounts of data quickly and handle large-scale parallel workloads such as artificial intelligence (AI) and data analysis.
The following are the specifications of GPU types provided by the GPU Server service.
Item
A100 Type
H100 Type
Service Provision Method
Pass-through
Pass-through
GPU Architecture
NVIDIA Ampere
NVIDIA Hopper
GPU Memory
80 GB
80 GB
GPU Transistors
54 billion (TSMC 7N)
80 billion (TSMC 4N)
FP16 Tensor Core (Dense)
312 TFLOPS
989 TFLOPS
FP8 Tensor Core (Dense)
Not supported
1,979 TFLOPS
FP4 Tensor Core (Dense)
Not supported
Not supported
GPU Memory Bandwidth
2,039 GB/s HBM2e
3,352 GB/s HBM3
NVLink Performance
NVLink 3
NVLink 4
NVLink Signaling Rate
25 GB/s (x12)
25 GB/s (x18)
NVSwitch GPU-to-GPU Bandwidth
600 GB/s
900 GB/s
Total NVSwitch Aggregate Bandwidth
4.8 TB/s
7.2 TB/s
Table. GPU Type Specifications
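The aggregate NVSwitch figures in the table follow directly from the per-GPU bandwidth and the 8-GPU configuration on which NVSwitch is available. A minimal sketch of the arithmetic:

```python
# NVSwitch GPU-to-GPU bandwidth per GPU, from the table above (GB/s)
per_gpu_bandwidth = {"A100": 600, "H100": 900}

# NVSwitch is available only on 8-GPU instance types, so the total
# aggregate bandwidth is simply 8 x the per-GPU figure.
for gpu, bw in per_gpu_bandwidth.items():
    aggregate_tb_s = 8 * bw / 1000  # GB/s -> TB/s
    print(f"{gpu}: {aggregate_tb_s} TB/s")  # A100: 4.8 TB/s, H100: 7.2 TB/s
```

This reproduces the Total NVSwitch Aggregate Bandwidth row: 4.8 TB/s for A100 and 7.2 TB/s for H100.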
Server Type
The server types provided by GPU Server are as follows. For a detailed description of the server types provided by GPU Server, see GPU Server Server Types.
Item
Server Type
CPU vCore
Memory(GB)
GPU Quantity
GPU-A100-1
g1v16a1
16
234
1
GPU-A100-1
g1v32a2
32
468
2
GPU-A100-1
g1v64a4
64
936
4
GPU-A100-1
g1v128a8
128
1872
8
GPU-H100-2
g2v12h1
12
234
1
GPU-H100-2
g2v24h2
24
468
2
GPU-H100-2
g2v48h4
48
936
4
GPU-H100-2
g2v96h8
96
1872
8
Table. GPU Server Server Types
OS and GPU Driver Version
The operating systems (OS) supported by GPU Server are as follows:
OS
OS Version
GPU Driver Version
Ubuntu
22.04
535.183.06
Ubuntu
24.04
570.195.03
RHEL
8.10
535.183.06
Table. GPU Server OS and GPU Driver Version
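When automating provisioning or validation, it can help to keep the supported OS/driver combinations above as data. A purely illustrative sketch (the function and variable names are assumptions, not part of the service):

```python
# Supported OS / GPU driver combinations, transcribed from the table above.
SUPPORTED = {
    ("Ubuntu", "22.04"): "535.183.06",
    ("Ubuntu", "24.04"): "570.195.03",
    ("RHEL", "8.10"): "535.183.06",
}

def expected_driver(os_name: str, os_version: str) -> str:
    """Return the GPU driver version shipped for a supported OS, or raise."""
    try:
        return SUPPORTED[(os_name, os_version)]
    except KeyError:
        raise ValueError(f"{os_name} {os_version} is not a supported GPU Server OS")

print(expected_driver("Ubuntu", "24.04"))  # 570.195.03
```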
Prerequisite Services
These are services that must be prepared in advance before creating this service. Prepare them by referring to the corresponding user guides.
GPU Server is classified according to the GPU Type provided, and the GPU used in the GPU Server is determined by the server type selected when creating the GPU Server. Please select the server type according to the specifications of the application you want to run on the GPU Server.
The server types supported by the GPU Server are as follows.
GPU-H100-2 g2v12h1
Category
Example
Detailed description
Server Type
GPU-H100-2
Provided server type classification
GPU-H100-2
GPU-H100 means the provided GPU type
2 means the generation
GPU-A100-1
GPU-A100 means the provided GPU type
1 means the generation
Server specifications
g2
Provided server type classification and generation
g2
g means GPU server specifications
2 means generation
Server specifications
v12
Number of vCores
v12: 12 virtual cores
Server specifications
h1
GPU type and quantity
h1
h means GPU-H100
1 means 1 GPU
a2
a means GPU-A100
2 means 2 GPUs
Table. GPU Server server type format
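The naming convention in the table can be decoded mechanically. A hypothetical parser for illustration only (the regex and field names are assumptions, not a service API):

```python
import re

# g<generation>v<vCores><gpu-letter><gpu-count>, e.g. g2v12h1
PATTERN = re.compile(r"^g(?P<gen>\d+)v(?P<vcores>\d+)(?P<gpu>[ah])(?P<count>\d+)$")
GPU_TYPES = {"a": "GPU-A100", "h": "GPU-H100"}  # letters from the table above

def parse_server_type(name: str) -> dict:
    """Decode a GPU Server type name into its components."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    return {
        "generation": int(m["gen"]),
        "vcores": int(m["vcores"]),
        "gpu_type": GPU_TYPES[m["gpu"]],
        "gpu_count": int(m["count"]),
    }

print(parse_server_type("g2v12h1"))
# {'generation': 2, 'vcores': 12, 'gpu_type': 'GPU-H100', 'gpu_count': 1}
```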
g1 server type
The g1 server type is a GPU Server that uses NVIDIA A100 Tensor Core GPU, suitable for high-performance applications.
Provides up to 8 NVIDIA A100 Tensor Core GPUs
Equipped with 6,912 CUDA cores and 432 Tensor cores per GPU
Supports up to 128 vCPUs and 1,920 GB of memory
Maximum 40 Gbps networking speed
600GB/s GPU and NVIDIA NVSwitch P2P communication
Category
Server Type
GPU
CPU
Memory
GPU Memory
Network Bandwidth
GPU-A100-1
g1v16a1
1
16 vCore
234 GB
80 GB
up to 20 Gbps
GPU-A100-1
g1v32a2
2
32 vCore
468 GB
160 GB
up to 20 Gbps
GPU-A100-1
g1v64a4
4
64 vCore
936 GB
320 GB
up to 40 Gbps
GPU-A100-1
g1v128a8
8
128 vCore
1872 GB
640 GB
up to 40 Gbps
Table. GPU Server server type > GPU-A100-1 server type
g2 server type
The g2 server type is a GPU Server that uses NVIDIA H100 Tensor Core GPU, suitable for high-performance applications.
Up to 8 NVIDIA H100 Tensor Core GPUs provided
Equipped with 16,896 CUDA cores and 528 Tensor cores per GPU
Supports up to 96 vCPUs and 1,920 GB of memory
Maximum networking speed of 40 Gbps
900GB/s GPU and NVIDIA NVSwitch P2P communication
Category
Server Type
GPU
CPU
Memory
GPU Memory
Network Bandwidth
GPU-H100-2
g2v12h1
1
12 vCore
234 GB
80 GB
up to 20 Gbps
GPU-H100-2
g2v24h2
2
24 vCore
468 GB
160 GB
up to 20 Gbps
GPU-H100-2
g2v48h4
4
48 vCore
936 GB
320 GB
up to 40 Gbps
GPU-H100-2
g2v96h8
8
96 vCore
1872 GB
640 GB
up to 40 Gbps
Table. GPU Server server type > GPU-H100-2 server type
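In both server type tables, system memory and GPU memory scale linearly with GPU count: 234 GB of memory and 80 GB of GPU memory per GPU. A quick arithmetic check against the table rows:

```python
# Per-GPU allocations implied by the GPU-A100-1 and GPU-H100-2 tables above
MEM_PER_GPU_GB = 234      # system memory per GPU
GPU_MEM_PER_GPU_GB = 80   # HBM per GPU

for gpus in (1, 2, 4, 8):
    print(gpus, gpus * MEM_PER_GPU_GB, gpus * GPU_MEM_PER_GPU_GB)
# 1 234 80 / 2 468 160 / 4 936 320 / 8 1872 640 -- matching the table rows
```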
2.3.1.2 - Monitoring Metrics
GPU Server Monitoring Metrics
The following table shows the monitoring metrics of the GPU Server that can be checked through Cloud Monitoring.
Basic monitoring metrics are provided even without installing an Agent; see Table. GPU Server Basic Monitoring Metrics (Basic) below. For additional metrics that can be collected by installing an Agent, see Table. GPU Server Additional Monitoring Metrics (Agent Installation Required) below.
For detailed Cloud Monitoring usage, please refer to the Cloud Monitoring guide.
Performance Item Name
Description
Unit
Memory Total [Basic]
Total available memory in bytes
bytes
Memory Used [Basic]
Currently used memory in bytes
bytes
Memory Swap In [Basic]
Memory swapped in, in bytes
bytes
Memory Swap Out [Basic]
Memory swapped out, in bytes
bytes
Memory Free [Basic]
Unused memory in bytes
bytes
Disk Read Bytes [Basic]
Read bytes
bytes
Disk Read Requests [Basic]
Number of read requests
cnt
Disk Write Bytes [Basic]
Written bytes
bytes
Disk Write Requests [Basic]
Number of write requests
cnt
CPU Usage [Basic]
Average system CPU usage over 1 minute
%
Instance State [Basic]
Instance state
state
Network In Bytes [Basic]
Received bytes
bytes
Network In Dropped [Basic]
Dropped received packets
cnt
Network In Packets [Basic]
Received packets
cnt
Network Out Bytes [Basic]
Sent bytes
bytes
Network Out Dropped [Basic]
Dropped sent packets
cnt
Network Out Packets [Basic]
Sent packets
cnt
Table. GPU Server Basic Monitoring Metrics (Basic)
Performance Item Name
Description
Unit
GPU Count
Number of GPUs
cnt
GPU Memory Usage
GPU memory usage rate
%
GPU Memory Used
Used GPU memory
MB
GPU Temperature
GPU temperature
℃
GPU Usage
GPU utilization
%
GPU Usage [Avg]
Average GPU usage rate
%
GPU Power Cap
Maximum power capacity of the GPU
W
GPU Power Usage
Current power usage of the GPU
W
GPU Memory Usage [Avg]
Average GPU memory usage rate
%
GPU Count in use
Number of GPUs in use by jobs on the node
cnt
Execution Status for nvidia-smi
Execution result of the nvidia-smi command
status
Core Usage [IO Wait]
CPU time spent in IO wait state
%
Core Usage [System]
CPU time spent in system space
%
Core Usage [User]
CPU time spent in user space
%
CPU Cores
Number of CPU cores on the host
cnt
CPU Usage [Active]
CPU time used, excluding idle and IO wait states
%
CPU Usage [Idle]
CPU time spent in idle state
%
CPU Usage [IO Wait]
CPU time spent in IO wait state
%
CPU Usage [System]
CPU time used by the kernel
%
CPU Usage [User]
CPU time used by user space
%
CPU Usage/Core [Active]
CPU time used per core, excluding idle and IO wait states
%
CPU Usage/Core [Idle]
CPU time spent in idle state per core
%
CPU Usage/Core [IO Wait]
CPU time spent in IO wait state per core
%
CPU Usage/Core [System]
CPU time used by the kernel per core
%
CPU Usage/Core [User]
CPU time used by user space per core
%
Disk CPU Usage [IO Request]
CPU time spent on IO requests
%
Disk Queue Size [Avg]
Average queue length of requests
num
Disk Read Bytes
Bytes read from the device per second
bytes
Disk Read Bytes [Delta Avg]
Average delta of bytes read from the device
bytes
Disk Read Bytes [Delta Max]
Maximum delta of bytes read from the device
bytes
Disk Read Bytes [Delta Min]
Minimum delta of bytes read from the device
bytes
Disk Read Bytes [Delta Sum]
Sum of delta of bytes read from the device
bytes
Disk Read Bytes [Delta]
Delta of bytes read from the device
bytes
Disk Read Bytes [Success]
Total bytes successfully read
bytes
Disk Read Requests
Number of read requests to the device per second
cnt
Disk Read Requests [Delta Avg]
Average delta of read requests to the device
cnt
Disk Read Requests [Delta Max]
Maximum delta of read requests to the device
cnt
Disk Read Requests [Delta Min]
Minimum delta of read requests to the device
cnt
Disk Read Requests [Delta Sum]
Sum of delta of read requests to the device
cnt
Disk Read Requests [Success Delta]
Delta of successful read requests to the device
cnt
Disk Read Requests [Success]
Total successful read requests
cnt
Disk Request Size [Avg]
Average size of requests to the device
num
Disk Service Time [Avg]
Average service time of requests to the device
ms
Disk Wait Time [Avg]
Average wait time of requests to the device
ms
Disk Wait Time [Read]
Average read wait time of the device
ms
Disk Wait Time [Write]
Average write wait time of the device
ms
Disk Write Bytes [Delta Avg]
Average delta of bytes written to the device
bytes
Disk Write Bytes [Delta Max]
Maximum delta of bytes written to the device
bytes
Disk Write Bytes [Delta Min]
Minimum delta of bytes written to the device
bytes
Disk Write Bytes [Delta Sum]
Sum of delta of bytes written to the device
bytes
Disk Write Bytes [Delta]
Delta of bytes written to the device
bytes
Disk Write Bytes [Success]
Total bytes successfully written
bytes
Disk Write Requests
Number of write requests to the device per second
cnt
Disk Write Requests [Delta Avg]
Average delta of write requests to the device
cnt
Disk Write Requests [Delta Max]
Maximum delta of write requests to the device
cnt
Disk Write Requests [Delta Min]
Minimum delta of write requests to the device
cnt
Disk Write Requests [Delta Sum]
Sum of delta of write requests to the device
cnt
Disk Write Requests [Success Delta]
Delta of successful write requests to the device
cnt
Disk Write Requests [Success]
Total successful write requests
cnt
Disk Writes Bytes
Bytes written to the device per second
bytes
Filesystem Hang Check
Filesystem hang check (normal: 1, abnormal: 0)
status
Filesystem Nodes
Total number of filesystem nodes
cnt
Filesystem Nodes [Free]
Total number of available filesystem nodes
cnt
Filesystem Size [Available]
Available disk space in bytes
bytes
Filesystem Size [Free]
Free disk space in bytes
bytes
Filesystem Size [Total]
Total disk space in bytes
bytes
Filesystem Usage
Disk space usage rate
%
Filesystem Usage [Avg]
Average disk space usage rate
%
Filesystem Usage [Inode]
Inode usage rate
%
Filesystem Usage [Max]
Maximum disk space usage rate
%
Filesystem Usage [Min]
Minimum disk space usage rate
%
Filesystem Usage [Total]
Total disk space usage rate
%
Filesystem Used
Used disk space in bytes
bytes
Filesystem Used [Inode]
Used inode space in bytes
bytes
Memory Free
Total available memory in bytes
bytes
Memory Free [Actual]
Actual available memory in bytes
bytes
Memory Free [Swap]
Available swap memory in bytes
bytes
Memory Total
Total memory in bytes
bytes
Memory Total [Swap]
Total swap memory in bytes
bytes
Memory Usage
Memory usage rate
%
Memory Usage [Actual]
Actual memory usage rate
%
Memory Usage [Cache Swap]
Cache swap usage rate
%
Memory Usage [Swap]
Swap memory usage rate
%
Memory Used
Used memory in bytes
bytes
Memory Used [Actual]
Actual used memory in bytes
bytes
Memory Used [Swap]
Used swap memory in bytes
bytes
Collisions
Network collisions
cnt
Network In Bytes
Received bytes
bytes
Network In Bytes [Delta Avg]
Average delta of received bytes
bytes
Network In Bytes [Delta Max]
Maximum delta of received bytes
bytes
Network In Bytes [Delta Min]
Minimum delta of received bytes
bytes
Network In Bytes [Delta Sum]
Sum of delta of received bytes
bytes
Network In Bytes [Delta]
Delta of received bytes
bytes
Network In Dropped
Dropped received packets
cnt
Network In Errors
Received errors
cnt
Network In Packets
Received packets
cnt
Network In Packets [Delta Avg]
Average delta of received packets
cnt
Network In Packets [Delta Max]
Maximum delta of received packets
cnt
Network In Packets [Delta Min]
Minimum delta of received packets
cnt
Network In Packets [Delta Sum]
Sum of delta of received packets
cnt
Network In Packets [Delta]
Delta of received packets
cnt
Network Out Bytes
Sent bytes
bytes
Network Out Bytes [Delta Avg]
Average delta of sent bytes
bytes
Network Out Bytes [Delta Max]
Maximum delta of sent bytes
bytes
Network Out Bytes [Delta Min]
Minimum delta of sent bytes
bytes
Network Out Bytes [Delta Sum]
Sum of delta of sent bytes
bytes
Network Out Bytes [Delta]
Delta of sent bytes
bytes
Network Out Dropped
Dropped sent packets
cnt
Network Out Errors
Sent errors
cnt
Network Out Packets
Sent packets
cnt
Network Out Packets [Delta Avg]
Average delta of sent packets
cnt
Network Out Packets [Delta Max]
Maximum delta of sent packets
cnt
Network Out Packets [Delta Min]
Minimum delta of sent packets
cnt
Network Out Packets [Delta Sum]
Sum of delta of sent packets
cnt
Network Out Packets [Delta]
Delta of sent packets
cnt
Open Connections [TCP]
Open TCP connections
cnt
Open Connections [UDP]
Open UDP connections
cnt
Port Usage
Port usage rate
%
SYN Sent Sockets
Number of sockets in SYN_SENT state
cnt
Kernel PID Max
Maximum PID value
cnt
Kernel Thread Max
Maximum thread value
cnt
Process CPU Usage
CPU time used by the process
%
Process CPU Usage/Core
CPU time used by the process per core
%
Process Memory Usage
Resident Set size
%
Process Memory Used
Used memory by the process
bytes
Process PID
Process ID
PID
Process PPID
Parent process ID
PID
Processes [Dead]
Number of dead processes
cnt
Processes [Idle]
Number of idle processes
cnt
Processes [Running]
Number of running processes
cnt
Processes [Sleeping]
Number of sleeping processes
cnt
Processes [Stopped]
Number of stopped processes
cnt
Processes [Total]
Total number of processes
cnt
Processes [Unknown]
Number of unknown processes
cnt
Processes [Zombie]
Number of zombie processes
cnt
Running Process Usage
Process usage rate
%
Running Processes
Number of running processes
cnt
Running Thread Usage
Thread usage rate
%
Running Threads
Number of running threads
cnt
Context Switches
Context switches per second
cnt
Load/Core [1 min]
Load per core over 1 minute
cnt
Load/Core [15 min]
Load per core over 15 minutes
cnt
Load/Core [5 min]
Load per core over 5 minutes
cnt
Multipaths [Active]
Number of active multipath connections
cnt
Multipaths [Failed]
Number of failed multipath connections
cnt
Multipaths [Faulty]
Number of faulty multipath connections
cnt
NTP Offset
Measured offset from the NTP server
num
Run Queue Length
Run queue length
num
Uptime
System uptime in milliseconds
ms
Context Switches
Context switches per second
cnt
Disk Read Bytes [Sec]
Bytes read from the device per second
cnt
Disk Read Time [Avg]
Average read time from the device
sec
Disk Transfer Time [Avg]
Average disk transfer time
sec
Disk Usage
Disk usage rate
%
Disk Write Bytes [Sec]
Bytes written to the device per second
cnt
Disk Write Time [Avg]
Average write time to the device
sec
Pagingfile Usage
Paging file usage rate
%
Pool Used [Non Paged]
Non-paged pool usage
bytes
Pool Used [Paged]
Paged pool usage
bytes
Process [Running]
Number of running processes
cnt
Threads [Running]
Number of running threads
cnt
Threads [Waiting]
Number of waiting threads
cnt
Table. GPU Server Additional Monitoring Metrics (Agent Installation Required)
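The Load/Core metrics above are the system load averages divided by the core count. On Linux you can approximate them with the standard library (a sketch only, not the Agent's actual implementation):

```python
import os

cores = os.cpu_count() or 1
load_1, load_5, load_15 = os.getloadavg()  # 1/5/15-minute load averages

# Load/Core as reported for each interval
for label, load in (("1 min", load_1), ("5 min", load_5), ("15 min", load_15)):
    print(f"Load/Core [{label}]: {load / cores:.2f}")
```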
2.3.1.3 - ServiceWatch Metrics
GPU Server sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at 5-minute intervals. When detailed monitoring is enabled, you can view data collected at 1-minute intervals.
Notice
GPU Server’s basic and detailed monitoring provide the same metrics as Virtual Server, and use the Virtual Server namespace.
GPU-related metrics are provided through ServiceWatch Agent. For information on how to collect metrics using ServiceWatch Agent, refer to the ServiceWatch Agent guide.
Reference
To view metrics in ServiceWatch, refer to the ServiceWatch guide.
The following are basic metrics for the namespace Virtual Server.
In the table below, metrics with metric names marked in bold are selected as key metrics among the basic metrics provided by Virtual Server.
Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch.
For each metric, the user guide indicates which statistic values are meaningful when querying it, and the statistic shown in bold among them is the key statistic. In the service dashboard, key metrics are displayed using their key statistics.
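As noted above, basic monitoring collects data at 5-minute intervals and detailed monitoring at 1-minute intervals, so the expected datapoint counts over a query window are easy to derive:

```python
BASIC_INTERVAL_S = 300     # 5-minute collection interval (basic monitoring)
DETAILED_INTERVAL_S = 60   # 1-minute collection interval (detailed monitoring)

window_s = 3600  # one hour
print(window_s // BASIC_INTERVAL_S)     # 12 datapoints/hour with basic monitoring
print(window_s // DETAILED_INTERVAL_S)  # 60 datapoints/hour with detailed monitoring
```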
Performance Item
Detailed Description
Unit
Meaningful Statistics
Instance State
Instance state display
1 - Active
0 - Off
None
Sum
CPU Usage
CPU usage
Percent
Average
Maximum
Minimum
Disk Read Bytes
Amount read from block device (bytes)
Bytes
Sum
Average
Maximum
Minimum
Disk Read Requests
Number of read requests from block device
Count
Sum
Average
Maximum
Minimum
Disk Write Bytes
Amount written to block device (bytes)
Bytes
Sum
Average
Maximum
Minimum
Disk Write Requests
Number of write requests to block device
Count
Sum
Average
Maximum
Minimum
Network In Bytes
Amount received on network interface (bytes)
Bytes
Sum
Average
Maximum
Minimum
Network In Dropped
Number of received packets dropped on network interface
Count
Sum
Average
Maximum
Minimum
Network In Packets
Number of received packets on network interface
Count
Sum
Average
Maximum
Minimum
Network Out Bytes
Amount transmitted on network interface (bytes)
Bytes
Sum
Average
Maximum
Minimum
Network Out Dropped
Number of transmitted packets dropped on network interface
Count
Sum
Average
Maximum
Minimum
Network Out Packets
Number of transmitted packets on network interface
Count
Sum
Average
Maximum
Minimum
Table. Virtual Server Basic Metrics
2.3.2 - How-to guides
Users can enter the required information for a GPU Server in the Samsung Cloud Platform Console, select detailed options, and create the service.
Create GPU Server
You can create and use GPU Server services from the Samsung Cloud Platform Console.
If you want to create a GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Create GPU Server button. You will be taken to the Create GPU Server page.
On the Create GPU Server page, enter the information required to create the service and select detailed options.
In the Image and version selection area, select the required information.
Category
Required or not
Detailed description
Image
Required
Select provided image type
Ubuntu
Image version
Required
Select version of the chosen image
Provides a list of server image versions offered
Table. GPU Server image and version selection input items
In the Service Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Server count
Required
Number of GPU Server servers to create simultaneously
Only numbers can be entered, and input a value between 1 and 100
Service Type > Server Type
Required
GPU Server Server Type
Indicates the server specifications by GPU type; select a server that includes 1, 2, 4, or 8 GPUs
For detailed information about the server types provided by GPU Server, refer to GPU Server Server Type
Service Type > Planned Compute
Optional
Resource status with Planned Compute set
In Use: Number of resources with Planned Compute set that are currently in use
Configured: Number of resources with Planned Compute set
Coverage Preview: Amount applied per resource by Planned Compute
Planned Compute Service Application: Go to the Planned Compute service application page
Table. GPU Server required information input items
In the Additional Information Input area, enter or select the required information.
Category
Required
Detailed description
Lock
Optional
Set whether to use Lock
Using Lock prevents actions such as server termination, start, and stop from being executed, preventing accidental operations
Init script
Optional
Script executed when the server starts
The Init script must be written as a Batch script for Windows, a Shell script or cloud‑init for Linux, depending on the image type.
Up to 45,000 bytes can be entered
Tag
Optional
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. GPU Server Additional Information Input Items
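The init script field is limited to 45,000 bytes. A hypothetical pre-flight check you could run before pasting a script into the console (the function name is an assumption for illustration):

```python
INIT_SCRIPT_MAX_BYTES = 45_000  # limit stated in the table above

def check_init_script(script: str) -> None:
    """Raise if the init script exceeds the console's byte limit."""
    size = len(script.encode("utf-8"))
    if size > INIT_SCRIPT_MAX_BYTES:
        raise ValueError(f"init script is {size} bytes; limit is {INIT_SCRIPT_MAX_BYTES}")

# Example: a minimal Linux shell init script (illustrative)
check_init_script("#!/bin/bash\napt-get update -y\n")
```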
In the Summary panel, check the detailed information and the estimated billing amount, then click the Complete button.
When creation is complete, check the created resources on the GPU Server list page.
Check GPU Server Detailed Information
In the GPU Server service, you can view and edit the full resource list and detailed information. The GPU Server details page consists of the Details, Tags, and Job History tabs.
To view detailed information about the GPU Server service, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
Click the GPU Server menu on the Service Home page. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource whose detailed information you want to view. You will be taken to the GPU Server details page.
The GPU Server details page displays status information and additional feature information, and consists of the Details, Tags, and Job History tabs.
Network
GPU Server's network information
VPC, general Subnet, IP, NAT IP, NAT IP status, Security Group
If you need to change the NAT IP value, you can set it by clicking the Edit button
If you need to change the Security Group, you can set it by clicking the Edit button
Add as new network: select a general Subnet and IP
You can select another general Subnet within the same VPC
IP can be set to auto-generate or user input; if input is selected, the user can directly enter the IP
Add with existing port: select a pre-created general Subnet and port
Local Subnet
GPU Server’s Local Subnet Information
Local Subnet, Local Subnet IP, Security Group
If a Security Group change is needed, you can click the Edit button to set it
Add as New Network: Select Local Subnet and IP
You can select a different Local Subnet within the same VPC
IP can be auto-generated or user input; if Input is selected, the user enters the IP directly
Add with Existing Port: Select a pre-created Local Subnet and port
Block Storage
Information of Block Storage connected to the server
Volume ID, Volume Name, Type, Capacity, Connection Info, Category, Delete on termination, Status
Add: Additional Block Storage can be connected if needed
Modify Delete on termination: Modify Delete on termination value
Disconnect: Disconnect the additionally connected Block Storage
Table. GPU Server detailed information tab items
Tags
On the GPU Server List page, you can view the tag information of the selected resource, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag Key and Value information can be checked
Up to 50 tags can be added per resource
When entering tags, you can search and select from existing Key and Value lists
Table. GPU Server Tag Tab Items
Job History
You can view the job history of the selected resource on the GPU Server list page.
Category
Detailed description
Job History List
Resource change history
Job date and time, Resource ID, Resource name, Job details, Event topic, Job result, Worker information
Table. Job History Tab Detailed Information Items
GPU Server Operation Control
If you need to control the operation of created GPU Server resources, you can do so on the GPU Server list or GPU Server details page.
You can start, stop, and restart servers.
GPU Server Start
You can start a shutoff GPU Server. To start the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
Click the GPU Server menu on the Service Home page. You will be taken to the GPU Server list page.
On the GPU Server list page, click the shutoff resource you want to start. You will be taken to the GPU Server details page.
On the GPU Server list page, you can also start each resource via the More button on the right.
After selecting multiple servers with the checkboxes, you can control them simultaneously through the Start button at the top.
On the GPU Server details page, click the Start button at the top to start the server, then check the changed server status in the Status Display item.
When the GPU Server start is complete, the server status changes from Shutoff to Active.
If you need server control and management functions for created GPU Server resources, you can perform the work on the GPU Server resource list or GPU Server details page.
Create Image
You can create an image of a running GPU server.
Reference
This content explains how to create a user Custom Image from a running GPU Server.
On the GPU Server list or GPU Server details page, click the Create Image button to create a user Custom Image.
To create an Image of the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the GPU Server menu. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource from which you want to create an Image. You will be taken to the GPU Server details page.
On the GPU Server details page, click the Create Image button. You will be taken to the Create Image page.
In the Service Information Input area, enter the required information.
Category
Required
Detailed description
Image Name
Required
Image name to be generated
Enter within 200 characters using English letters, numbers, spaces, and special characters (-_)
Table. Image Service Information Input Items
Check the input information and click the Complete button.
When creation is complete, check the created resources on the All Services > Compute > GPU Server > Image List page.
Notice
If you create an Image, the generated Image is stored in the Object Storage used as internal storage, so Object Storage usage fees are charged.
The integrity of the file system cannot be guaranteed for an image created from a GPU Server in the Active state, so creating the image after shutting down the server is recommended.
ServiceWatch Enable Detailed Monitoring
By default, GPU Server is linked with ServiceWatch basic monitoring under the Virtual Server namespace. You can enable detailed monitoring as needed to identify and address operational issues more quickly. For more information about ServiceWatch, see the ServiceWatch Overview (/userguide/management/service_watch/overview/).
Reference
GPU Server provides basic and detailed monitoring in the same namespace as Virtual Server.
GPU Server’s GPU metrics are scheduled to be provided by ServiceWatch Agent. (Scheduled for Dec 2025)
Caution
Basic monitoring is provided free of charge, but enabling detailed monitoring incurs additional charges. Please keep this in mind when using it.
To enable ServiceWatch detailed monitoring on the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
Click the GPU Server menu on the Service Home page. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource for which you want to enable ServiceWatch detailed monitoring. You will be taken to the GPU Server details page.
On the GPU Server details page, click the ServiceWatch detailed monitoring Edit button. The ServiceWatch Detailed Monitoring Edit popup window opens.
In the ServiceWatch Detailed Monitoring Edit popup window, select Enable, check the guidance text, and click the Confirm button.
On the GPU Server details page, check the ServiceWatch detailed monitoring items.
Disable ServiceWatch Detailed Monitoring
Caution
To keep costs efficient, keep detailed monitoring enabled only when absolutely necessary, and disable it otherwise.
To disable the detailed monitoring of ServiceWatch on the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the GPU Server menu. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource for which to disable ServiceWatch detailed monitoring. You will be taken to the GPU Server Details page.
On the GPU Server Details page, click the ServiceWatch detailed monitoring Edit button. The ServiceWatch Detailed Monitoring Edit popup window opens.
In the ServiceWatch Detailed Monitoring Edit popup window, deselect Enable, check the guidance text, and click the Confirm button.
On the GPU Server Details page, check the ServiceWatch detailed monitoring items.
GPU Server Management Additional Features
For GPU Server management, you can view console logs, create a Dump, and perform a Rebuild. To use these features, follow the steps below.
Check console log
You can view the current console log of the GPU Server.
To check the console logs of the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the GPU Server menu. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource whose console log you want to view. You will be taken to the GPU Server Details page.
On the GPU Server Details page, click the Console Log button. The Console Log popup window opens.
Check the console log displayed in the Console Log popup window.
Create Dump
To create a Dump file of the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the GPU Server menu. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource for which to create a Dump. You will be taken to the GPU Server Details page.
On the GPU Server Details page, click the Create Dump button.
The dump file is created inside the GPU Server.
Perform Rebuild
You can delete all data and settings of the existing GPU Server and rebuild it on a new server.
To perform the Rebuild of the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the GPU Server menu. You will be taken to the GPU Server list page.
On the GPU Server list page, click the resource to Rebuild. You will be taken to the GPU Server Details page.
On the GPU Server Details page, click the Rebuild button.
During the Rebuild, the server status changes to Rebuilding; when the Rebuild is completed, the status returns to its state before the Rebuild.
Cancel GPU Server
You can reduce operating costs by canceling an unused GPU Server. However, canceling a GPU Server may immediately stop the services currently running on it, so proceed with the cancellation only after fully considering the impact of the service interruption.
Caution
Please note that data cannot be recovered after service termination.
To cancel the GPU Server, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the GPU Server menu. You will be taken to the GPU Server list page.
On the GPU Server list page, select the resource to cancel, and click the Cancel Service button.
Whether connected storage is also terminated depends on its Delete on termination setting; refer to Termination Constraints.
When termination is complete, check on the GPU Server list page whether the resource has been terminated.
Termination Constraints
If a termination request for the GPU Server cannot be processed, a popup window explains the reason. Refer to the cases below.
Cancellation not allowed
If File Storage is connected: disconnect the File Storage first.
If an LB Pool is connected: disconnect the LB Pool first.
If Lock is set: change the Lock setting to unused and try again.
The termination of attached storage depends on the Delete on termination setting.
Deletion according to the Delete on termination setting
Whether a volume is also deleted depends on its Delete on termination setting.
If Delete on termination is not set: the volume is not deleted even when you terminate the GPU Server.
If Delete on termination is set: the volume is deleted when you terminate the GPU Server.
Volumes with a Snapshot will not be deleted even if Delete on termination is set.
A multi-attach volume is deleted only when the server being deleted is the last remaining server attached to the volume.
2.3.2.1 - Image Management
Through the Samsung Cloud Platform Console, users can enter the required information for the Image service within the GPU Server service, select detailed options, and create the service.
Image Generation
You can create an image of a running GPU Server. To create an image of a GPU Server, please refer to Create Image.
Check Image Detailed Information
The Image service allows you to view and edit the full resource list and detailed information. The Image detail page consists of Detailed Information, Tag, and Work History tabs.
To view detailed information of the Image service, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Image menu. You will be taken to the Image list page.
On the Image list page, click the resource whose detailed information you want to view. You will be taken to the Image Details page.
The Image Details page displays status information and additional feature information, and consists of Detailed Information, Tag, and Work History tabs.
Category
Detailed description
Image Status
Status of the Image created by the user
Active: available for use
Queued: the Image has been uploaded and is waiting to be processed
Importing: the Image has been uploaded and is being processed
Share to another Account
Shares the Image with another Account
The Image's Visibility must be Shared to share it with another Account
Delete Image
Button to delete the Image
If the Image is deleted, it cannot be recovered
Table. GPU Server Image status information and additional features
Detailed Information
Image list page allows you to view detailed information of the selected resource and edit the information if necessary.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Means the SRN of a GPU Server Image
Resource Name
Image Name
Resource ID
Image ID
Creator
User who created the Image
Creation date/time
Date/time when the image was created
Editor
User who edited the Image
Edit date/time
Date and time the image was edited
Image name
Image name
Minimum Disk
Image’s minimum disk capacity (GB)
If you need to modify the minimum disk, click the Edit button to set it
Minimum RAM
Image’s minimum RAM capacity (GB)
OS type
Image’s OS type
OS hash algorithm
OS hash algorithm method
Visibility
Displays access permissions for images
Private can be used only within the project, and Shared can be shared across projects
Protected
Select whether image deletion is prohibited
If checked, it can prevent accidental deletion of images
This setting can be changed after image creation
Image File URL
Image file URL uploaded when generating image
Not displayed for images created via the image generation menu on the GPU Server detail page
Sharing Status
Status of sharing images with other Accounts
Approved Account ID: ID of the Account that has been approved for sharing
Modification Date/Time: the date/time when sharing was requested to another Account; if the sharing status later changes from Pending to Accepted, it is updated to that date/time
Status: Approval Status
Accepted: Approved and being shared
Pending: Waiting for approval
Delete: Sharing has been stopped
Table. Image detailed information tab items
Tag
On the Image list page, you can view the tag information of the selected resource, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
Tag’s Key and Value information can be checked
Up to 50 tags can be added per resource
When entering tags, you can search and select from previously created Key and Value lists
Table. Image tag tab items
Work History
You can view the operation history of the selected resource on the Image list page.
Category
Detailed description
Work History List
Resource Change History
You can check the work date/time, Resource ID, Resource name, work details, Event topic, work result, and worker information
Table. GPU Server Image Job History Tab Detailed Information Items
Image Resource Management
Describes the control and management functions of the generated image.
Share to another account
To share the Image with another Account, follow the steps below.
Log in to the Account that will share the image and click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Image menu. You will be taken to the Image list page.
On the Image list page, click the Image to share. You will be taken to the Image Details page.
Click the Share to another Account button. You will be taken to the Share image to another Account page.
The Share to another Account feature allows you to share the Image with another Account. To share the Image, the Image's Visibility must be Shared.
On the Share image to another Account page, enter the required information and click the Complete button.
Category
Required or not
Detailed description
Image Name
-
Name of the image to share
Input not allowed
Image ID
-
Image ID to share
Input not allowed
Shared Account ID
Required
Enter another Account ID to share
Enter within 64 characters using English letters, numbers, and the special character (-)
Table. Required input items for sharing images to another Account
You can check the sharing status information on the Image Details page.
At the initial request, the status is Pending; when the target Account approves the share, it changes to Accepted.
Notice
Only Images created by uploading an Image file can be shared with another Account. A Custom Image created from the Image of a running GPU Server cannot currently be shared with another Account; this feature is planned for a future release.
Receive sharing from another account
To receive an Image shared from another Account, follow the steps below.
Log in to the Account receiving the share and click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Image menu. You will be taken to the Image list page.
On the Image list page, click the Receive Image Share button. The Receive Image Share popup window opens.
In the Receive Image Share popup window, enter the resource ID of the Image to receive, and click the Confirm button.
When image sharing is completed, you can check the shared Image in the Image list.
Delete Image
You can delete unused Images. However, once an Image is deleted it cannot be recovered, so you should fully consider the impact before proceeding with the deletion.
Caution
Please be careful because data cannot be recovered after deleting the service.
To delete the image, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Image menu. You will be taken to the Image list page.
On the Image list page, select the resource to delete and click the Delete button.
To delete multiple Images at once, select their check boxes on the Image list page and click the Delete button at the top of the resource list.
When deletion is complete, check on the Image List page whether the resource has been deleted.
2.3.2.2 - Using Multi-instance GPU in GPU Server
After creating a GPU Server, you can enable the MIG (Multi-instance GPU) feature on the GPU Server’s VM (Guest OS) and create an instance to use it.
Multi-instance GPU (NVIDIA A100) Overview
The NVIDIA A100, based on the NVIDIA Ampere architecture, supports Multi-instance GPU (MIG), which securely partitions the GPU into up to 7 independent GPU instances for running CUDA (Compute Unified Device Architecture) applications. MIG provides independent GPU resources to multiple users, allocating compute resources in a way optimized for GPU usage while utilizing high-bandwidth memory (HBM) and cache. Users can maximize GPU utilization by running in parallel workloads that individually do not reach the GPU's maximum compute capacity.
Figure. Multi-instance GPU configuration diagram
Using Multi-instance GPU Feature
To use the Multi-instance GPU feature, create a GPU Server service on the Samsung Cloud Platform and then create a VM Instance (Guest OS) with an A100 GPU assigned. After the GPU Server is created, follow the MIG enable and MIG release procedures below.
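Before enabling MIG, you can confirm that MIG is currently disabled on each GPU in the VM Instance (Guest OS). A minimal check, assuming the NVIDIA driver and nvidia-smi are already installed:

```shell
# Query the current and pending MIG mode of every GPU; "Disabled" means MIG is off
$ nvidia-smi --query-gpu=index,mig.mode.current,mig.mode.pending --format=csv
```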
Code block. nvidia-smi command - Check GPU inactive state (2)
In the VM Instance(GuestOS), enable MIG for each GPU and reboot the VM Instance.
$ nvidia-smi -i 0 -mig 1
Enabled MIG mode for GPU 00000000:05:00.0
All done.
# reboot
Code Block. nvidia-smi Command - MIG Activation
Note
If the GPU monitoring agent displays the following warning message, stop the nvsm and dcgm services before enabling MIG.
Warning: MIG mode is in pending enable state for GPU 00000000:05:00.0: In use by another client. 00000000:05:00.0 is currently being used by one or more other processes (e.g. CUDA application or a monitoring application such as another instance of nvidia-smi).
# systemctl stop nvsm
# systemctl stop dcgm
After completing the MIG work, restart the nvsm and dcgm services.
Check the GPU status after applying MIG in the VM Instance(GuestOS).
MIG mode must be in Enabled state.
$ nvidia-smi
Mon Sep 27 09:44:33 2021+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------|
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... Off | 00000000:05:00.0 Off | On |
| N/A 32C P0 59W / 400W | 0MiB / 81251MiB | 0% Default |
| | | Enabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| MIG devices: |
+-----------------------------------------------------------------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|=============================================================================|
| No MIG devices found |
+-----------------------------------------------------------------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Code block. nvidia-smi command - Check GPU activation status (1)
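The GPU usage example that follows assumes MIG instances have already been created. MIG instance creation is also done with nvidia-smi; a sketch, where profile ID 19 (a 1g slice on the A100) is only an example and should be chosen from the profiles listed for your GPU:

```shell
# List the GPU instance profiles supported by GPU 0
$ nvidia-smi mig -i 0 -lgip
# Create a GPU instance from a chosen profile, plus a default compute instance (-C)
$ nvidia-smi mig -i 0 -cgi 19 -C
```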
+-----------------------------------------------------------------------------+
| MIG devices: |
+-----------------------------------------------------------------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|=============================================================================|
| 0000 | 66562MiB / 81251MiB | 980 | 70511 |
| | 5MiB / 13107... | | |
+-----------------------------------------------------------------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 00017483 C python 66559MiB |
+-----------------------------------------------------------------------------+
Code block. Example of checking GPU usage
MIG Instance deletion and release
To delete a MIG instance and release MIG mode, follow the procedures below.
$ nvidia-smi mig -i 0 -dgi
Successfully destroyed GPU instance ID 0 from GPU 0
Code block. nvidia-smi command - GPU Instance deletion example
$ nvidia-smi mig -i 0 -lgi
No GPU instances found: Not found
Code block. nvidia-smi command - GPU Instance deletion verification example
Disable MIG
Disable MIG and then reboot.
$ nvidia-smi -mig 0
Disabled MIG Mode for GPU 00000000:05:00.0
All done.
Code Block. nvidia-smi command - MIG disable
$ nvidia-smi
Mon Sep 30 05:18:28 2021+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------|
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... Off | 00000000:05:00.0 Off | 0 |
| N/A 33C P0 60W / 400W | 0MiB / 81251MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| MIG devices: |
+-----------------------------------------------------------------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|=============================================================================|
| No MIG devices found |
+-----------------------------------------------------------------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Code Block. nvidia-smi command - Check GPU status
2.3.2.3 - Using NVSwitch on GPU Server
After creating a GPU Server, you can enable the NVSwitch feature in the GPU Server's VM (Guest OS) to use fast P2P (GPU-to-GPU) communication between GPUs.
Exploring NVIDIA NVSwitch for Multi GPU
The NVIDIA A100 GPU server is a multi-GPU system based on the NVIDIA Ampere architecture, with 8 Ampere 80 GB GPUs installed on the baseboard. The GPUs on the baseboard are connected to 6 NVSwitches through NVLink ports, so GPU-to-GPU communication on the baseboard uses the full 600 GBps of bandwidth. As a result, the 8 GPUs in the A100 GPU server can be connected and operated as if they were one, maximizing GPU-to-GPU performance.
Figure. NVSwitch(600 GBps) 6 units 8 GPU configuration diagram
Create GPU NVSwitch
To use the GPU NVSwitch feature, create a GPU Server service on the Samsung Cloud Platform, create a VM Instance (GuestOS) with 8 A100 GPUs assigned, and activate the Fabricmanager.
Caution
NVSwitch can only be activated and used for products with 8 A100 GPUs assigned to a single GPU server (g1v128a8 (vCPU 128 | Memory 1920G | A100(80GB)*8)).
GPU Servers created with a Windows OS image currently do not support NVSwitch (Fabric Manager).
NVSwitch Installation and Operation Check (Fabric Manager Activation)
To operate NVSwitch, install Fabric Manager on the GPU Instance using the following procedure.
Install NVIDIA GPU Driver (470.52.02 Version) on the GPU server.
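The Fabric Manager package version must match the installed driver branch. A sketch of the installation and startup commands, assuming an Ubuntu image and driver branch 470 (package names differ on other distributions):

```shell
# Install the Fabric Manager package matching the NVIDIA driver branch (470 here)
$ sudo apt-get install -y nvidia-fabricmanager-470
# Start Fabric Manager and enable it at boot
$ sudo systemctl enable --now nvidia-fabricmanager
```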
Code Block. NVIDIA Fabric Manager Installation and Operation
Check the status of NVIDIA Fabric Manager running on the GPU server.
In normal operation, the status is shown as active (running)
$ systemctl status nvidia-fabricmanager
Code Block. Check NVIDIA Fabric Manager Operation Status
Figure. NVSwitch installation - Checking the operation status of Fabric Manager
Check the NVSwitch operation status on the GPU server.
In normal operation, GPU-to-GPU links are shown as NV12
$ nvidia-smi topo --matrix
Code block. NVSwitch operation status check
Figure. NVSwitch Installation - Checking NVSwitch Operation Status
2.3.2.4 - Keypair Management
Through the Samsung Cloud Platform Console, users can enter the required information for a Keypair within the GPU Server service, select detailed options, and create the service.
Create Keypair
You can create and use the Keypair service while using the GPU Server service on the Samsung Cloud Platform Console.
To create a keypair, follow these steps.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Keypair menu. You will be taken to the Keypair list page.
On the Keypair list page, click the Create Keypair button. You will be taken to the Create Keypair page.
In the Service Information Input area, enter the required information.
Category
Required or not
Detailed description
Keypair name
Required
Name of the Keypair to create
Enter within 255 characters using English letters, numbers, spaces, and special characters (-, _)
Keypair type
Required
ssh
Table. Keypair Service Information Input Items
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Keypair additional information input items
Caution
After creation is complete, you can download the Key only once. Reissuance is not possible, so make sure the download has succeeded.
Save the downloaded Private Key in a safe place.
Check the input information and click the Complete button.
When creation is complete, check the created resource on the Keypair List page.
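The downloaded private key is used when connecting to the GPU Server over SSH. A sketch, where the file name, user name, and server address are placeholders:

```shell
# SSH refuses keys readable by other users; restrict permissions first
$ chmod 400 my-keypair.pem
# Connect to the GPU Server (the user name depends on the image; the address is an example)
$ ssh -i my-keypair.pem ubuntu@192.0.2.10
```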
Check Keypair Detailed Information
The Keypair service allows you to view and edit the full resource list and detailed information. The Keypair Details page consists of Detailed Information, Tag, and Work History tabs.
To view detailed information about the Keypair, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Keypair menu. You will be taken to the Keypair list page.
On the Keypair list page, click the resource whose detailed information you want to view. You will be taken to the Keypair Details page.
The Keypair Details page displays status information and additional feature information, and consists of Detailed Information, Tag, and Work History tabs.
Detailed Information
On the Keypair List page, you can view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Keypair, it means Keypair SRN
Resource Name
Keypair Name
Resource ID
Keypair’s unique resource ID
Creator
User who created the Keypair
Creation time
Time when the keypair was created
Editor
User who modified the Keypair information
Modification Date/Time
Timestamp of Keypair information modification
Keypair name
Keypair name
Fingerprint
Unique value for identifying a Key
User ID
User ID of the user who created the Keypair
Public Key
Public Key Information
Table. Keypair detailed information tab items
Tag
On the Keypair list page, you can view the tag information of the selected resource, and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
Tag’s Key, Value information can be checked
Up to 50 tags can be added per resource
When entering a tag, you can search and select from the list of previously created Keys and Values
Table. Keypair Tag Tab Items
Work History
On the Keypair list page, you can view the operation history of the selected resource.
Table. Keypair operation history tab detailed information items
Keypair Resource Management
Describes the control and management functions of the keypair.
Import Public Key
To import a public key, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Keypair menu. You will be taken to the Keypair list page.
On the Keypair list page, click the More button at the top, then click the Import Public Key button. You will be taken to the Import Public Key page.
In the Required Information Input area, enter or select the required information.
Category
Required
Detailed description
Keypair name
Required
Name of the Keypair to create
Keypair type
Required
ssh
Public Key
Required
Enter Public Key
Load File: click the Attach File button to select and attach the public key file
Only files with the .pem extension can be attached
Enter Public Key: paste the copied public key value
The public key value can be copied from the Keypair Details page
Table. Required input items for importing a public key
Check the entered information and click the Complete button.
Once creation is complete, check the created resource on the Keypair List page.
Delete Keypair
You can delete unused Keypairs. However, once a Keypair is deleted it cannot be recovered, so please review the impact thoroughly in advance before proceeding with deletion.
Caution
Please be careful as data cannot be recovered after deleting the service.
To delete a keypair, follow the steps below.
Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
On the Service Home page, click the Keypair menu. You will be taken to the Keypair list page.
On the Keypair list page, select the resource to delete and click the Delete button.
To delete multiple Keypairs at once, select their check boxes on the Keypair list page and click the Delete button at the top of the resource list.
After deletion is complete, check on the Keypair list page whether the resource has been deleted.
2.3.2.5 - ServiceWatch Agent Installation
Users can install ServiceWatch Agent on GPU Server to collect custom metrics and logs.
Reference
Custom metrics/log collection through ServiceWatch Agent is currently available only in Samsung Cloud Platform For Enterprise. It will be available in other offerings in the future.
Caution
Metric collection through ServiceWatch Agent is classified as custom metrics, and unlike basic metrics collected from each service, fees are charged. Therefore, it is recommended to remove or disable unnecessary metric collection settings.
ServiceWatch Agent
There are two main types of Agents that must be installed to collect ServiceWatch custom metrics and logs on GPU Server: Prometheus Exporter and Open Telemetry Collector.
Category | Detailed Description
Prometheus Exporter | Provides metrics of specific applications or services in a format that Prometheus can scrape. For collecting server OS metrics, use Node Exporter for Linux servers or Windows Exporter for Windows servers, depending on the OS type. For collecting OS metrics on GPU Server, you can use Node Exporter as with Virtual Server; for details, refer to Virtual Server > ServiceWatch Agent. For GPU metrics, you can use DCGM (NVIDIA Data Center GPU Manager) Exporter.
Open Telemetry Collector | A centralized collector that gathers telemetry data such as metrics and logs from distributed systems, processes it (filtering, sampling, etc.), and sends it to multiple backends (e.g., Prometheus, Jaeger, Elasticsearch). Sends data to the ServiceWatch Gateway so that ServiceWatch can collect metric and log data.
Install NVSwitch Configuration and Query (NSCQ) Library
Reference
NVSwitch Configuration and Query (NSCQ) Library is required for Hopper or earlier Generation GPUs.
Notice
The installation commands below can be run only in an environment with internet access.
If internet access is not available, download libnvidia-nscq from https://developer.download.nvidia.com/compute/cuda/repos/ and upload it to the server.
For metrics that can be collected with GPU DCGM Exporter and configuration methods, refer to DCGM Exporter Metrics.
Install DCGM (datacenter-gpu-manager)
The datacenter-gpu-manager-4-cuda12 package refers to a specific version of NVIDIA's Data Center GPU Manager (DCGM), a package for managing and monitoring NVIDIA data center GPUs. In particular, cuda12 indicates that the tool is built for CUDA 12, and datacenter-gpu-manager-4 means DCGM version 4.x. This tool provides various functions including GPU status monitoring, diagnostics, alert systems, and power/clock management.
Check CUDA version.
nvidia-smi | grep CUDA
Code block. Check CUDA version
| NVIDIA-SMI 535.183.06 Driver Version: 535.183.06 CUDA Version: 12.2 |
DCGM Exporter is a tool that collects various GPU metrics such as GPU usage, memory usage, temperature, and power consumption based on NVIDIA Data Center GPU Manager (DCGM) and exposes them for use in monitoring systems such as Prometheus.
Code block. Check datacenter-gpu-manager-exporter configuration file result example
Check the configuration provided at DCGM Exporter installation, remove # for necessary metrics, and add # for unnecessary metrics.
vi /etc/dcgm-exporter/default-counters.csv
## Example ##
...
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge, Ratio of cycles the tensor (HMMA) pipe is active.
DCGM_FI_PROF_DRAM_ACTIVE, gauge, Ratio of cycles the device memory interface is active sending or receiving data.
# DCGM_FI_PROF_PIPE_FP64_ACTIVE, gauge, Ratio of cycles the fp64 pipes are active.
# DCGM_FI_PROF_PIPE_FP32_ACTIVE, gauge, Ratio of cycles the fp32 pipes are active.
...
Code block. datacenter-gpu-manager-exporter metric configuration example
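Toggling the leading # is easy to script. The sketch below applies the same edits with sed to a local sample file; on the server you would target /etc/dcgm-exporter/default-counters.csv instead (back it up first).

```shell
# Create a small sample counters file (stands in for
# /etc/dcgm-exporter/default-counters.csv).
cat > counters-sample.csv <<'EOF'
DCGM_FI_DEV_GPU_TEMP, gauge, GPU temperature (in C).
# DCGM_FI_PROF_PIPE_FP64_ACTIVE, gauge, Ratio of cycles the fp64 pipes are active.
EOF

# Disable (comment out) a metric you do not need, to avoid custom-metric charges.
sed -i 's/^DCGM_FI_DEV_GPU_TEMP/# DCGM_FI_DEV_GPU_TEMP/' counters-sample.csv

# Enable (uncomment) a metric you want to collect.
sed -i 's/^# DCGM_FI_PROF_PIPE_FP64_ACTIVE/DCGM_FI_PROF_PIPE_FP64_ACTIVE/' counters-sample.csv

cat counters-sample.csv
```

Note that `sed -i` as written is GNU sed syntax, which is what the Linux images here provide.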
Metric collection through ServiceWatch Agent is classified as custom metrics, and unlike basic metrics collected from each service, fees are charged. Therefore, unnecessary metric collection should be removed or disabled to avoid excessive charges.
Enable and Start DCGM Service
Enable and start the nvidia-dcgm service.
systemctl enable --now nvidia-dcgm
Code block. Enable and start nvidia-dcgm service command
Enable and start the nvidia-dcgm-exporter service.
systemctl enable --now nvidia-dcgm-exporter
Code block. Enable and start nvidia-dcgm-exporter service command
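Once both services are running, you can verify that metrics are being exposed. DCGM Exporter listens on port 9400 by default (an assumption; check the service unit if your installation differs), so on the server you would run `curl -s localhost:9400/metrics`. The sketch below parses a captured sample exposition line the same way, so the logic can be seen without GPU hardware:

```shell
# A sample exposition line as served by DCGM Exporter (captured example).
sample='DCGM_FI_DEV_GPU_TEMP{gpu="0",device="nvidia0"} 41'

# The metric name is everything before the label block; the value is the
# last whitespace-separated field.
name=${sample%%\{*}
value=${sample##* }
echo "$name = $value"
```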
Notice
If you have completed DCGM Exporter configuration, you must install the Open Telemetry Collector provided by ServiceWatch to complete ServiceWatch Agent configuration. For details, refer to ServiceWatch > Use ServiceWatch Agent.
Install Prometheus Exporter for GPU Metrics (for RHEL)
Install ServiceWatch Agent to collect GPU Server metrics in the following order.
Install NVSwitch Configuration and Query (NSCQ) Library (for RHEL)
Reference
NVSwitch Configuration and Query (NSCQ) Library is required for Hopper or earlier Generation GPUs.
For RHEL, check whether libnvidia-nscq is installed, then install it.
Notice
The installation commands below can be run only in an environment with internet access.
If internet access is not available, download libnvidia-nscq from https://developer.download.nvidia.com/compute/cuda/repos/ and upload it to the server.
Updating Subscription Management repositories.
Last metadata expiration check: 0:03:15 ago on Wed 19 Nov 2025 01:23:48 AM EST.
Dependencies resolved.
=============================================
Package Architecture Version Repository Size
=============================================
Disabling module profiles:
nvidia-driver/default
nvidia-driver/fm
Resetting modules:
nvidia-driver
Transaction Summary
=============================================
Is this ok [y/N]: y
Code block. Reset NVIDIA Driver DNF module status result example
Enable the NVIDIA Driver module.
dnf module enable nvidia-driver:535-open
Code block. Enable NVIDIA Driver module
Updating Subscription Management repositories.
Last metadata expiration check: 0:04:22 ago on Wed 19 Nov 2025 01:23:48 AM EST.
Dependencies resolved.
=============================================
Package Architecture Version Repository Size
=============================================
Enabling module streams:
nvidia-driver 535-open
Transaction Summary
=============================================
Is this ok [y/N]: y
Code block. Enable NVIDIA Driver module result example
Code block. Check NVSDM library module list result example
Install libnvsdm.
dnf install libnvsdm-580.105.08-1
Code block. NVSDM library installation
Updating Subscription Management repositories.
Last metadata expiration check: 0:08:18 ago on Wed 19 Nov 2025 01:05:28 AM EST.
Dependencies resolved.
========================================================================
Package Architecture Version Repository Size
========================================================================
Installing:
libnvsdm x86_64 580.105.08-1 cuda-rhel8-x86_64 675 k
Installing dependencies:
infiniband-diags x86_64 48.0-1.el8 rhel-8-for-x86_64-baseos-rpms 323 k
libibumad x86_64 48.0-1.el8 rhel-8-for-x86_64-baseos-rpms 34 k
Transaction Summary
========================================================================
Install 3 Packages
Total download size: 1.0 M
Installed size: 3.2 M
Is this ok [y/N]: y
Code block. NVSDM library installation command result example
For metrics that can be collected with GPU DCGM Exporter and configuration methods, refer to DCGM Exporter Metrics.
Install DCGM (datacenter-gpu-manager) (for RHEL)
The datacenter-gpu-manager-4-cuda12 package refers to a specific version of NVIDIA's Data Center GPU Manager (DCGM), a package for managing and monitoring NVIDIA data center GPUs. In particular, cuda12 indicates that the tool is built for CUDA 12, and datacenter-gpu-manager-4 means DCGM version 4.x. This tool provides various functions including GPU status monitoring, diagnostics, alert systems, and power/clock management.
| NVIDIA-SMI 535.183.06 Driver Version: 535.183.06 CUDA Version: 12.2 |
Code block. Check CUDA version result example
CUDA_VERSION=12
Code block. Set CUDA version command
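Instead of hard-coding CUDA_VERSION, you can derive it from the nvidia-smi output shown above. On the GPU server that would be `CUDA_VERSION=$(nvidia-smi | sed -n 's/.*CUDA Version: \([0-9]*\)\..*/\1/p' | head -n1)`; the sketch below applies the same parsing to a captured sample line so it can be followed without GPU hardware:

```shell
# Captured sample of the nvidia-smi header line.
line='| NVIDIA-SMI 535.183.06   Driver Version: 535.183.06   CUDA Version: 12.2 |'

# Extract the CUDA major version (the digits before the dot).
CUDA_VERSION=$(printf '%s\n' "$line" | sed -n 's/.*CUDA Version: \([0-9]*\)\..*/\1/p')
echo "$CUDA_VERSION"
```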
Check datacenter-gpu-manager-cuda module list.
dnf list datacenter-gpu-manager-4-cuda${CUDA_VERSION} --showduplicates
Code block. Check datacenter-gpu-manager-cuda module list
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:00:34 ago on Wed 19 Nov 2025 12:26:56 AM EST.
Available Packages
datacenter-gpu-manager-4-cuda12.x86_64 1:4.0.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.1.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.1.1-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.2.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.2.2-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.2.3-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.2.3-2 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.3.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.3.1-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.4.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.4.1-1 cuda-rhel8-x86_64
datacenter-gpu-manager-4-cuda12.x86_64 1:4.4.2-1 cuda-rhel8-x86_64
Code block. Check datacenter-gpu-manager-cuda module list result example
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:07:12 ago on Wed 19 Nov 2025 12:26:56 AM EST.
Dependencies resolved.
===================================================================================================
Package Architecture Version Repository Size
===================================================================================================
Installing:
datacenter-gpu-manager-4-cuda12 x86_64 1:4.4.2-1 cuda-rhel8-x86_64 554 M
Installing dependencies:
datacenter-gpu-manager-4-core x86_64 1:4.4.2-1 cuda-rhel8-x86_64 9.9 M
Installing weak dependencies:
datacenter-gpu-manager-4-proprietary x86_64 1:4.4.2-1 cuda-rhel8-x86_64 5.3 M
datacenter-gpu-manager-4-proprietary-cuda12 x86_64 1:4.4.2-1 cuda-rhel8-x86_64 289 M
Transaction Summary
====================================================================================================
Install 4 Packages
...
Is this ok [y/N]: y
Code block. datacenter-gpu-manager-cuda installation result example
DCGM Exporter is a tool that collects various GPU metrics such as GPU usage, memory usage, temperature, and power consumption based on NVIDIA Data Center GPU Manager (DCGM) and exposes them for use in monitoring systems such as Prometheus.
Add CUDA Repository to DNF. (If you have already run this command, proceed to the next step.)
dnf list datacenter-gpu-manager-exporter --showduplicates
Code block. Check datacenter-gpu-manager-exporter module list
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:02:11 ago on Wed 19 Nov 2025 12:26:56 AM EST.
Available Packages
datacenter-gpu-manager-exporter.x86_64 4.0.1-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.1.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.1.1-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.1.3-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.5.0-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.5.1-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.5.2-1 cuda-rhel8-x86_64
datacenter-gpu-manager-exporter.x86_64 4.6.0-1 cuda-rhel8-x86_64
Code block. Check datacenter-gpu-manager-exporter module list result example
Install datacenter-gpu-manager-exporter.
dcgm-exporter 4.5.x requires glibc 2.34 or higher, but RHEL 8 provides glibc 2.28, so specify version 4.1.3-1 to install.
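You can confirm the glibc version on the server before choosing a package version. A minimal check (the exact wording of the `ldd --version` first line varies slightly by distribution, but it ends with the glibc version):

```shell
# The last field of the first line of `ldd --version` is the glibc version,
# e.g. "2.28" on RHEL 8.
glibc_version=$(ldd --version | head -n1 | awk '{print $NF}')
echo "glibc $glibc_version"
```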
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:07:12 ago on Wed 19 Nov 2025 12:26:56 AM EST.
Dependencies resolved.
====================================================================================================
Package Architecture Version Repository Size
====================================================================================================
Installing:
datacenter-gpu-manager-exporter x86_64 4.1.3-1 cuda-rhel8-x86_64 26 M
...
Is this ok [y/N]: y
Code block. datacenter-gpu-manager-exporter installation result example
Code block. Check datacenter-gpu-manager-exporter configuration file result example
Check the configuration provided at DCGM Exporter installation, remove # for necessary metrics, and add # for unnecessary metrics.
vi /etc/dcgm-exporter/default-counters.csv
## Example ##
...
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge, Ratio of cycles the tensor (HMMA) pipe is active.
DCGM_FI_PROF_DRAM_ACTIVE, gauge, Ratio of cycles the device memory interface is active sending or receiving data.
# DCGM_FI_PROF_PIPE_FP64_ACTIVE, gauge, Ratio of cycles the fp64 pipes are active.
# DCGM_FI_PROF_PIPE_FP32_ACTIVE, gauge, Ratio of cycles the fp32 pipes are active.
...
Code block. datacenter-gpu-manager-exporter metric configuration example
Reference
For metrics that can be collected with the GPU DCGM Exporter and configuration methods, refer to DCGM Exporter Metrics.
Caution
Metric collection through ServiceWatch Agent is classified as custom metrics, and unlike basic metrics collected from each service, fees are charged. Therefore, unnecessary metric collection should be removed or disabled to avoid excessive charges.
Enable and Start DCGM Service (for RHEL)
Enable and start the nvidia-dcgm service.
systemctl enable --now nvidia-dcgm
Code block. Enable and start nvidia-dcgm service command
Enable and start the nvidia-dcgm-exporter service.
systemctl enable --now nvidia-dcgm-exporter
Code block. Enable and start nvidia-dcgm-exporter service command
Notice
If you have completed DCGM Exporter configuration, you must install the Open Telemetry Collector provided by ServiceWatch to complete ServiceWatch Agent configuration. For details, refer to ServiceWatch > Use ServiceWatch Agent.
DCGM Exporter Metrics
DCGM Exporter Key Metrics
The key GPU metrics provided by DCGM Exporter are as follows.
Category | DCGM Field | Prometheus Metric Type | Summary
Clocks | DCGM_FI_DEV_SM_CLOCK | gauge | SM clock frequency (in MHz)
Clocks | DCGM_FI_DEV_MEM_CLOCK | gauge | Memory clock frequency (in MHz)
Temperature | DCGM_FI_DEV_GPU_TEMP | gauge | GPU temperature (in C)
Power | DCGM_FI_DEV_POWER_USAGE | gauge | Power draw (in W)
Utilization | DCGM_FI_DEV_GPU_UTIL | gauge | GPU utilization (in %)
Utilization | DCGM_FI_DEV_MEM_COPY_UTIL | gauge | Memory utilization (in %)
Memory Usage | DCGM_FI_DEV_FB_FREE | gauge | Frame buffer memory free (in MiB)
Memory Usage | DCGM_FI_DEV_FB_USED | gauge | Frame buffer memory used (in MiB)
NVLink | DCGM_FI_DEV_NVLINK_BANDWIDTH_TOTAL (8-GPU only) | counter | Total number of NVLink bandwidth counters for all lanes
To collect additional metrics beyond the default settings, remove the leading # from the corresponding line in default-counters.csv.
For metrics you do not want to collect among the default configured metrics, add a leading # or delete the line.
# Format
# If line starts with a '#' it is considered a comment
# DCGM FIELD, Prometheus metric type, help message
# Clocks
DCGM_FI_DEV_SM_CLOCK, gauge, SM clock frequency (in MHz).
DCGM_FI_DEV_MEM_CLOCK, gauge, Memory clock frequency (in MHz).
# Temperature
DCGM_FI_DEV_MEMORY_TEMP, gauge, Memory temperature (in C).
DCGM_FI_DEV_GPU_TEMP, gauge, GPU temperature (in C).
# Power
DCGM_FI_DEV_POWER_USAGE, gauge, Power draw (in W).
DCGM_FI_DEV_TOTAL_ENERGY_CONSUMPTION, counter, Total energy consumption since boot (in mJ).
# PCIE
# DCGM_FI_PROF_PCIE_TX_BYTES, counter, Total number of bytes transmitted through PCIe TX via NVML.
# DCGM_FI_PROF_PCIE_RX_BYTES, counter, Total number of bytes received through PCIe RX via NVML.
...
Code block. default-counters.csv configuration example
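Because every enabled (uncommented) line is a billable custom metric, it is worth counting them after editing. A sketch against a local sample file; on the server, point the same command at /etc/dcgm-exporter/default-counters.csv:

```shell
# Sample counters file: one comment header, two enabled metrics, one disabled.
cat > default-counters-sample.csv <<'EOF'
# Clocks
DCGM_FI_DEV_SM_CLOCK, gauge, SM clock frequency (in MHz).
DCGM_FI_DEV_MEM_CLOCK, gauge, Memory clock frequency (in MHz).
# DCGM_FI_PROF_PCIE_TX_BYTES, counter, Total number of bytes transmitted through PCIe TX via NVML.
EOF

# Count lines that are neither comments nor blank: these are the metrics
# that will actually be collected (and billed).
enabled=$(grep -cv -e '^#' -e '^$' default-counters-sample.csv)
echo "$enabled enabled metric lines"
```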
2.3.3 - API Reference
API Reference
2.3.4 - CLI Reference
CLI Reference
2.3.5 - Release Note
GPU Server
2025.10.23
FEATURE Add new features and provide ServiceWatch service integration functionality
ServiceWatch service integration provision
You can monitor data through the ServiceWatch service.
You can select a RHEL image when creating a GPU Server.
Keypair management feature has been added.
You can create a keypair to use, or retrieve a public key and apply it.
2025.07.01
FEATURE GPU Server feature addition, image sharing method change, and GPU Server usage guide addition
GPU Server feature addition
IP, Public NAT IP, Private NAT IP configuration feature has been added.
LLM Endpoint is provided for LLM usage.
The method of sharing images between accounts has been changed.
GPU Server RHEL OS and GPU driver version have been added.
2025.02.27
FEATURE Common Feature Change
GPU Server feature addition
NAT setting feature has been added to GPU Server.
Samsung Cloud Platform Common Feature Change
Account, IAM and Service Home, tags, etc. have been reflected in common CX changes.
2024.10.01
NEW GPU Server Service Official Version Release
GPU Server service has been officially launched.
We have launched a virtualization computing service that allows you to allocate and use infrastructure resources such as CPU, GPU, and memory as needed, at the required time, without having to purchase them individually.
2.4 - Bare Metal Server
2.4.1 - Overview
Service Overview
Bare Metal Server is a high-performance cloud computing service that does not use virtualization technology and allocates physically separated computing resources such as CPU and memory for exclusive use. Since it is not affected by other cloud users, you can reliably operate performance-sensitive services.
Features
Easy and convenient computing environment setup: Through the web-based console, you can easily use everything from Bare Metal Server provisioning to resource management and cost management. You can receive a server with standard specs (CPU, Memory, Disk) allocated exclusively and use it immediately.
Providing High-Performance Computing Environment: We provide servers suitable for workloads that require large capacity and high performance, such as real-time systems, HPC (High Performance Computing), and servers with heavy I/O demands, in a physically isolated environment.
Efficient Service Provision: We ensure performance and stability through optimal server selection and in-house testing. Customers can select the optimal resources for their service environment through the various specifications of Bare Metal Servers offered by Samsung Cloud Platform.
Service Diagram
Figure. Bare Metal Server diagram
Provided Features
Bare Metal Server provides the following features.
Auto Provisioning and Management: Through the web-based console, you can easily use everything from Bare Metal Server provisioning to resource management and cost management.
Providing Various Server Types and OS Images: Provides CPU/Memory/Disk resources of standard server types, and offers various standard OS images.
Storage Connection: Provides additional connected storage besides the OS disk. You can connect and use Block Storage, File Storage, and Object Storage.
Network Connection: You can configure the Bare Metal Server's general subnet/IP settings and connect a Public NAT IP. A local subnet connection is provided for communication between servers. These settings can be modified on the detail page.
Monitoring: You can view monitoring information such as CPU, Memory, Disk, which are computing resources, through Cloud Monitoring. To use the Cloud Monitoring service of Bare Metal Server, you need to install the Agent. Please be sure to install the Agent for stable Bare Metal Server service usage. For more details, refer to Bare Metal Server Monitoring Metrics.
Backup and Recovery: Bare Metal Server’s Filesystem backup and recovery can be used through the Backup service.
Efficient Cost Management: You can easily create/terminate servers as needed, and because billing is based on actual usage time, you can use it cost-effectively in various unpredictable situations.
Local Disk Partition Creation: You can create and use up to 10 local disk partitions.
Terraform Provisioning: Provides an IaC environment through Terraform.
Components
Bare Metal Server provides various OS standard images and standard server types. Users can select and use them according to the scale of the service they want to configure.
OS Image Provided Version
The OS images supported by Bare Metal Server are as follows.
OS Image Version | EoS Date
Oracle Linux 9.6 | 2032-06-30
RHEL 8.10 | 2029-05-31
RHEL 9.4 | 2026-04-30
RHEL 9.6 | 2027-05-31
Rocky Linux 8.10 | 2029-05-31
Rocky Linux 9.6 | 2025-11-30
Ubuntu 22.04 | 2027-06-30
Ubuntu 24.04 | 2029-06-30
Windows 2019 | 2029-01-09
Windows 2022 | 2031-10-14
Table. Bare Metal Server Provided OS Image Version
Server Type
The server types supported by Bare Metal Server are as follows. For more details about the server types supported by Bare Metal Server, see Bare Metal Server Server Type.
s3v16m64_metal
Category | Example | Detailed Description
Server Generation | s3 | Provided server classification and generation. s3: s means the standard specification (vCPU, Memory) commonly used for Standard, and 3 means the generation
CPU vCore | v16 | vCore count. v16: Allocated vCores are twice the number of physical cores (16 vCores correspond to 8 physical cores). Provided with Hyper-Threading enabled by default, which can be disabled when creating a service
Memory | m64 | Memory capacity. m64: 64GB Memory
Table. Bare Metal Server Server Type
Preceding Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
VPC: A service that provides an independent virtual network in a cloud environment
Table. Bare Metal Server Pre-service
2.4.1.1 - Server Type
Bare Metal Server Server Types
Bare Metal Server provides server types according to usage purposes. Server types are configured with various combinations of CPU, Memory, etc. The server used for Bare Metal Server is determined by the server type selected when creating Bare Metal Server. Please select a server type according to the specifications of the application you want to run on Bare Metal Server.
The server types supported by Bare Metal Server follow the format below.
s3v16m64_metal
Category | Example | Description
Server Generation | s3 / h3 | Server category and generation. s3: s means standard specifications, 3 means the generation. h3: h means high-capacity server specifications, 3 means the generation
CPU vCore | v16 | Number of vCores. v16: Allocated vCores are double the physical cores (16 vCores are physically 8 cores). Hyper-Threading is enabled by default and can be disabled when creating the service
Memory | m64 | Memory capacity. m64: 64GB Memory
Table. Bare Metal Server server type format
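The naming convention above can be decomposed mechanically. A small sketch (the server type string is just an example; the field meanings are those defined in the table):

```shell
# Split <series><generation>v<vCores>m<memoryGB>_metal into its fields.
server_type='h3v128m1024_metal'
set -- $(printf '%s\n' "$server_type" \
    | sed -n 's/^\([sh]\)\([0-9]*\)v\([0-9]*\)m\([0-9]*\)_metal$/\1 \2 \3 \4/p')
series=$1        # s = standard, h = high-capacity
generation=$2
vcores=$3
memory_gb=$4
physical_cores=$((vcores / 2))   # allocated vCores are twice the physical cores
echo "series=$series gen=$generation vCore=$vcores mem=${memory_gb}GB physicalCores=$physical_cores"
```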
s4/h4 Server Types
Bare Metal Server s4 server type is provided with standard specifications (vCPU, Memory) and is suitable for high-performance applications because physically isolated resources are allocated exclusively.
Also, Bare Metal Server h4 server type is provided with high-capacity server specifications and is suitable for high-performance applications for large-scale data processing.
Supports 5 types of vCPU in total (16, 32, 64, 96, 128 vCore)
Intel 6th generation (Granite Rapids) Processor
Supports up to 64 physical cores, 128 vCPUs, and 2,048 GB of memory
Provides up to 2 x 1.92 TB Internal Disks (for OS)
Server Type | Physical CPU | vCPU | Memory | CPU Type | Internal Disk (OS)
s4v16m64_metal | 8 Core | 16 vCore | 64 GB | Intel Xeon 6507P up to 4.3GHz | 480GB * 2EA
s4v16m128_metal | 8 Core | 16 vCore | 128 GB | Intel Xeon 6507P up to 4.3GHz | 480GB * 2EA
s4v16m256_metal | 8 Core | 16 vCore | 256 GB | Intel Xeon 6507P up to 4.3GHz | 480GB * 2EA
h4v32m128_metal | 16 Core | 32 vCore | 128 GB | Intel Xeon 6517P up to 4.0GHz | 960GB * 2EA
h4v32m256_metal | 16 Core | 32 vCore | 256 GB | Intel Xeon 6517P up to 4.0GHz | 960GB * 2EA
h4v32m512_metal | 16 Core | 32 vCore | 512 GB | Intel Xeon 6517P up to 4.0GHz | 960GB * 2EA
h4v64m256_metal | 32 Core | 64 vCore | 256 GB | Intel Xeon 6737P up to 4.0GHz | 1.92TB * 2EA
h4v64m512_metal | 32 Core | 64 vCore | 512 GB | Intel Xeon 6737P up to 4.0GHz | 1.92TB * 2EA
h4v64m1024_metal | 32 Core | 64 vCore | 1024 GB | Intel Xeon 6737P up to 4.0GHz | 1.92TB * 2EA
h4v96m512_metal | 48 Core | 96 vCore | 512 GB | Intel Xeon 6520P up to 3.4GHz | 1.92TB * 2EA
h4v96m768_metal | 48 Core | 96 vCore | 768 GB | Intel Xeon 6520P up to 3.4GHz | 1.92TB * 2EA
h4v96m2048_metal | 48 Core | 96 vCore | 2048 GB | Intel Xeon 6520P up to 3.4GHz | 1.92TB * 2EA
h4v128m512_metal | 64 Core | 128 vCore | 512 GB | Intel Xeon 6737P up to 4.0GHz | 1.92TB * 2EA
h4v128m1024_metal | 64 Core | 128 vCore | 1024 GB | Intel Xeon 6737P up to 4.0GHz | 1.92TB * 2EA
h4v128m2048_metal | 64 Core | 128 vCore | 2048 GB | Intel Xeon 6737P up to 4.0GHz | 1.92TB * 2EA
Table. Bare Metal Server server type specifications > s4/h4 server types
s3/h3 Server Types
The Bare Metal Server s3 server type is provided with standard specifications (vCPU, Memory) and is suitable for high-performance applications because its physical resources are dedicated to a single user.
The Bare Metal Server h3 server type is provided with high-capacity server specifications and is suitable for high-performance, large-scale data processing workloads.
Supports 5 types of vCPU in total (16, 32, 64, 96, 128 vCore)
Intel 4th generation (Sapphire Rapids) Processor
Supports up to 64 physical cores, 128 vCPUs, and 2,048 GB of memory
Provides up to 2 x 1.92 TB Internal Disks (for OS)
Server Type
Physical CPU
vCPU
Memory
CPU Type
Internal Disk(OS)
s3v16m64_metal
8 Core
16 vCore
64 GB
Intel Xeon Gold 6434 up to 4.1GHz
480 GB * 2EA
s3v16m128_metal
8 Core
16 vCore
128 GB
Intel Xeon Gold 6434 up to 4.1GHz
480 GB * 2EA
s3v16m256_metal
8 Core
16 vCore
256 GB
Intel Xeon Gold 6434 up to 4.1GHz
480 GB * 2EA
h3v32m128_metal
16 Core
32 vCore
128 GB
Intel Xeon Gold 6444Y up to 4.0GHz
960 GB * 2EA
h3v32m256_metal
16 Core
32 vCore
256 GB
Intel Xeon Gold 6444Y up to 4.0GHz
960 GB * 2EA
h3v32m512_metal
16 Core
32 vCore
512 GB
Intel Xeon Gold 6444Y up to 4.0GHz
960 GB * 2EA
h3v64m256_metal
32 Core
64 vCore
256 GB
Intel Xeon Gold 6448H up to 3.2GHz
1.92 TB * 2EA
h3v64m512_metal
32 Core
64 vCore
512 GB
Intel Xeon Gold 6448H up to 3.2GHz
1.92 TB * 2EA
h3v64m1024_metal
32 Core
64 vCore
1024 GB
Intel Xeon Gold 6448H up to 3.2GHz
1.92 TB * 2EA
h3v96m384_metal
48 Core
96 vCore
384 GB
Intel Xeon Gold 6442Y up to 3.3GHz
1.92 TB * 2EA
h3v96m768_metal
48 Core
96 vCore
768 GB
Intel Xeon Gold 6442Y up to 3.3GHz
1.92 TB * 2EA
h3v96m1536_metal
48 Core
96 vCore
1536 GB
Intel Xeon Gold 6442Y up to 3.3GHz
1.92 TB * 2EA
h3v128m512_metal
64 Core
128 vCore
512 GB
Intel Xeon Gold 6448H up to 3.2GHz
1.92 TB * 2EA
h3v128m1024_metal
64 Core
128 vCore
1024 GB
Intel Xeon Gold 6448H up to 3.2GHz
1.92 TB * 2EA
h3v128m2048_metal
64 Core
128 vCore
2048 GB
Intel Xeon Gold 6448H up to 3.2GHz
1.92 TB * 2EA
Table. Bare Metal Server server type specifications > s3/h3 server types
s2/h2 Server Types
Notice
New service applications for s2/h2 server types have been discontinued. Existing services are not affected.
The Bare Metal Server s2 server type is provided with standard specifications (vCPU, Memory) and is suitable for high-performance applications because its physical resources are dedicated to a single user.
The Bare Metal Server h2 server type is provided with high-capacity server specifications and is suitable for high-performance, large-scale data processing workloads.
Supports 5 types of vCPU in total (16, 24, 32, 72, 96 vCore)
Intel 3rd generation (Ice Lake) Processor
Supports up to 48 physical cores, 96 vCPUs, and 1,024 GB of memory
Provides up to 2 x 1.92 TB Internal Disks (for OS)
Server Type
Physical CPU
vCPU
Memory
CPU Type
Internal Disk(OS)
s2v16m64_metal
8 Core
16 vCore
64 GB
Intel Xeon Gold 6334 up to 3.6GHz
480 GB * 2EA
s2v16m128_metal
8 Core
16 vCore
128 GB
Intel Xeon Gold 6334 up to 3.6GHz
480 GB * 2EA
s2v16m256_metal
8 Core
16 vCore
256 GB
Intel Xeon Gold 6334 up to 3.6GHz
480 GB * 2EA
h2v24m96_metal
12 Core
24 vCore
96 GB
Intel Xeon Gold 5317 up to 3.4GHz
960 GB * 2EA
h2v24m192_metal
12 Core
24 vCore
192 GB
Intel Xeon Gold 5317 up to 3.4GHz
960 GB * 2EA
h2v24m384_metal
12 Core
24 vCore
384 GB
Intel Xeon Gold 5317 up to 3.4GHz
960 GB * 2EA
h2v32m128_metal
16 Core
32 vCore
128 GB
Intel Xeon Gold 6346 up to 3.6GHz
960 GB * 2EA
h2v32m256_metal
16 Core
32 vCore
256 GB
Intel Xeon Gold 6346 up to 3.6GHz
960 GB * 2EA
h2v32m512_metal
16 Core
32 vCore
512 GB
Intel Xeon Gold 6346 up to 3.6GHz
960 GB * 2EA
h2v72m256_metal
36 Core
72 vCore
256 GB
Intel Xeon Gold 6354 up to 3.6GHz
1.92 TB * 2EA
h2v72m512_metal
36 Core
72 vCore
512 GB
Intel Xeon Gold 6354 up to 3.6GHz
1.92 TB * 2EA
h2v72m1024_metal
36 Core
72 vCore
1024 GB
Intel Xeon Gold 6354 up to 3.6GHz
1.92 TB * 2EA
h2v96m384_metal
48 Core
96 vCore
384 GB
Intel Xeon Gold 6342 up to 3.3GHz
1.92 TB * 2EA
h2v96m768_metal
48 Core
96 vCore
768 GB
Intel Xeon Gold 6342 up to 3.3GHz
1.92 TB * 2EA
Table. Bare Metal Server server type specifications > s2/h2 server types
2.4.1.2 - Monitoring Metrics
Bare Metal Server Monitoring Metrics
The following table shows the monitoring metrics for Bare Metal Server that can be checked through Cloud Monitoring.
Guide
To collect monitoring metrics from a Bare Metal Server, the user must install the Agent by following the guide. For stable use of the Bare Metal Server service, install the Agent first. For the Agent installation method and detailed Cloud Monitoring usage, refer to the Cloud Monitoring guide.
Performance Item
Detailed Description
Unit
Core Usage [IO Wait]
The ratio of CPU time spent in a waiting state (disk wait)
%
Core Usage [System]
The ratio of CPU time spent in kernel space
%
Core Usage [User]
The ratio of CPU time spent in user space
%
CPU Cores
The number of CPU cores on the host
cnt
CPU Usage [Active]
The percentage of CPU time used, excluding idle and IOWait states
%
CPU Usage [Idle]
The ratio of CPU time spent in an idle state
%
CPU Usage [IO Wait]
The ratio of CPU time spent in a waiting state (disk wait)
%
CPU Usage [System]
The percentage of CPU time used by the kernel
%
CPU Usage [User]
The percentage of CPU time used by the user area
%
CPU Usage/Core [Active]
The percentage of CPU time used, excluding idle and IOWait states
%
CPU Usage/Core [Idle]
The ratio of CPU time spent in an idle state
%
CPU Usage/Core [IO Wait]
The ratio of CPU time spent in a waiting state (disk wait)
%
CPU Usage/Core [System]
The percentage of CPU time used by the kernel
%
CPU Usage/Core [User]
The percentage of CPU time used by the user area
%
Disk CPU Usage [IO Request]
The ratio of CPU time spent executing I/O requests for the device
%
Disk Queue Size [Avg]
The average queue length of requests executed for the device
num
Disk Read Bytes
The number of bytes read from the device per second
bytes
Disk Read Bytes [Delta Avg]
The average of system.diskio.read.bytes_delta for individual disks
bytes
Disk Read Bytes [Delta Max]
The maximum of system.diskio.read.bytes_delta for individual disks
bytes
Disk Read Bytes [Delta Min]
The minimum of system.diskio.read.bytes_delta for individual disks
bytes
Disk Read Bytes [Delta Sum]
The sum of system.diskio.read.bytes_delta for individual disks
bytes
Disk Read Bytes [Delta]
The delta of system.diskio.read.bytes for individual disks
bytes
Disk Read Bytes [Success]
The total number of bytes read successfully
bytes
Disk Read Requests
The number of read requests for the disk device per second
cnt
Disk Read Requests [Delta Avg]
The average of system.diskio.read.count_delta for individual disks
cnt
Disk Read Requests [Delta Max]
The maximum of system.diskio.read.count_delta for individual disks
cnt
Disk Read Requests [Delta Min]
The minimum of system.diskio.read.count_delta for individual disks
cnt
Disk Read Requests [Delta Sum]
The sum of system.diskio.read.count_delta for individual disks
cnt
Disk Read Requests [Success Delta]
The delta of system.diskio.read.count for individual disks
cnt
Disk Read Requests [Success]
The total number of successful reads
cnt
Disk Request Size [Avg]
The average size of requests executed for the device (in sectors)
num
Disk Service Time [Avg]
The average service time for I/O requests executed for the device (in milliseconds)
ms
Disk Wait Time [Avg]
The average time spent waiting for I/O requests executed for the device
ms
Disk Wait Time [Read]
The average disk read wait time
ms
Disk Wait Time [Write]
The average disk write wait time
ms
Disk Write Bytes [Delta Avg]
The average of system.diskio.write.bytes_delta for individual disks
bytes
Disk Write Bytes [Delta Max]
The maximum of system.diskio.write.bytes_delta for individual disks
bytes
Disk Write Bytes [Delta Min]
The minimum of system.diskio.write.bytes_delta for individual disks
bytes
Disk Write Bytes [Delta Sum]
The sum of system.diskio.write.bytes_delta for individual disks
bytes
Disk Write Bytes [Delta]
The delta of system.diskio.write.bytes for individual disks
bytes
Disk Write Bytes [Success]
The total number of bytes written successfully
bytes
Disk Write Requests
The number of write requests for the disk device per second
cnt
Disk Write Requests [Delta Avg]
The average of system.diskio.write.count_delta for individual disks
cnt
Disk Write Requests [Delta Max]
The maximum of system.diskio.write.count_delta for individual disks
cnt
Disk Write Requests [Delta Min]
The minimum of system.diskio.write.count_delta for individual disks
cnt
Disk Write Requests [Delta Sum]
The sum of system.diskio.write.count_delta for individual disks
cnt
Disk Write Requests [Success Delta]
The delta of system.diskio.write.count for individual disks
cnt
Disk Write Requests [Success]
The total number of successful writes
cnt
Disk Write Bytes
The number of bytes written to the device per second
bytes
Filesystem Hang Check
Filesystem (local/NFS) hang check (normal: 1, abnormal: 0)
status
Filesystem Nodes
The total number of file nodes in the filesystem
cnt
Filesystem Nodes [Free]
The total number of available file nodes in the filesystem
cnt
Filesystem Size [Available]
The available disk space for non-privileged users (in bytes)
bytes
Filesystem Size [Free]
The available disk space (in bytes)
bytes
Filesystem Size [Total]
The total disk space (in bytes)
bytes
Filesystem Usage
The percentage of used disk space
%
Filesystem Usage [Avg]
The average of filesystem.used.pct for individual filesystems
%
Filesystem Usage [Inode]
The inode usage rate
%
Filesystem Usage [Max]
The maximum of filesystem.used.pct for individual filesystems
%
Filesystem Usage [Min]
The minimum of filesystem.used.pct for individual filesystems
%
Filesystem Usage [Total]
The total filesystem usage
%
Filesystem Used
The used disk space (in bytes)
bytes
Filesystem Used [Inode]
The inode usage
bytes
Memory Free
The total available memory (in bytes), excluding system cache and buffer memory
bytes
Memory Free [Actual]
The actual available memory (in bytes)
bytes
Memory Free [Swap]
The available swap memory
bytes
Memory Total
The total memory
bytes
Memory Total [Swap]
The total swap memory
bytes
Memory Usage
The percentage of used memory
%
Memory Usage [Actual]
The percentage of actual used memory
%
Memory Usage [Cache Swap]
The cache swap usage rate
%
Memory Usage [Swap]
The percentage of used swap memory
%
Memory Used
The used memory
bytes
Memory Used [Actual]
The actual used memory (in bytes), calculated as total memory minus actual available memory
bytes
Memory Used [Swap]
The used swap memory
bytes
Collisions
Network collisions
cnt
Network In Bytes
The number of received bytes
bytes
Network In Bytes [Delta Avg]
The average of system.network.in.bytes_delta for individual networks
bytes
Network In Bytes [Delta Max]
The maximum of system.network.in.bytes_delta for individual networks
bytes
Network In Bytes [Delta Min]
The minimum of system.network.in.bytes_delta for individual networks
bytes
Network In Bytes [Delta Sum]
The sum of system.network.in.bytes_delta for individual networks
bytes
Network In Bytes [Delta]
The delta of received bytes
bytes
Network In Dropped
The number of dropped incoming packets
cnt
Network In Errors
The number of errors during reception
cnt
Network In Packets
The number of received packets
cnt
Network In Packets [Delta Avg]
The average of system.network.in.packets_delta for individual networks
cnt
Network In Packets [Delta Max]
The maximum of system.network.in.packets_delta for individual networks
cnt
Network In Packets [Delta Min]
The minimum of system.network.in.packets_delta for individual networks
cnt
Network In Packets [Delta Sum]
The sum of system.network.in.packets_delta for individual networks
cnt
Network In Packets [Delta]
The delta of received packets
cnt
Network Out Bytes
The number of transmitted bytes
bytes
Network Out Bytes [Delta Avg]
The average of system.network.out.bytes_delta for individual networks
bytes
Network Out Bytes [Delta Max]
The maximum of system.network.out.bytes_delta for individual networks
bytes
Network Out Bytes [Delta Min]
The minimum of system.network.out.bytes_delta for individual networks
bytes
Network Out Bytes [Delta Sum]
The sum of system.network.out.bytes_delta for individual networks
bytes
Network Out Bytes [Delta]
The delta of transmitted bytes
bytes
Network Out Dropped
The number of dropped outgoing packets
cnt
Network Out Errors
The number of errors during transmission
cnt
Network Out Packets
The number of transmitted packets
cnt
Network Out Packets [Delta Avg]
The average of system.network.out.packets_delta for individual networks
cnt
Network Out Packets [Delta Max]
The maximum of system.network.out.packets_delta for individual networks
cnt
Network Out Packets [Delta Min]
The minimum of system.network.out.packets_delta for individual networks
cnt
Network Out Packets [Delta Sum]
The sum of system.network.out.packets_delta for individual networks
cnt
Network Out Packets [Delta]
The delta of transmitted packets
cnt
Open Connections [TCP]
The number of open TCP connections
cnt
Open Connections [UDP]
The number of open UDP connections
cnt
Port Usage
The port usage rate
%
SYN Sent Sockets
The number of sockets in the SYN_SENT state (when connecting from local to remote)
cnt
Kernel PID Max
The value of kernel.pid_max
cnt
Kernel Thread Max
The value of kernel.threads-max
cnt
Process CPU Usage
The percentage of CPU time consumed by the process since the last update
%
Process CPU Usage/Core
The percentage of CPU time used by the process since the last event
%
Process Memory Usage
The percentage of main memory (RAM) used by the process
%
Process Memory Used
The resident set size, which is the amount of memory used by the process in RAM
bytes
Process PID
The process ID
pid
Process PPID
The parent process ID
pid
Processes [Dead]
The number of dead processes
cnt
Processes [Idle]
The number of idle processes
cnt
Processes [Running]
The number of running processes
cnt
Processes [Sleeping]
The number of sleeping processes
cnt
Processes [Stopped]
The number of stopped processes
cnt
Processes [Total]
The total number of processes
cnt
Processes [Unknown]
The number of processes with unknown or unsearchable status
cnt
Processes [Zombie]
The number of zombie processes
cnt
Running Process Usage
The process usage rate
%
Running Processes
The number of running processes
cnt
Running Thread Usage
The thread usage rate
%
Running Threads
The total number of threads running in running processes
cnt
Context Switches
The number of context switches (per second)
cnt
Load/Core [1 min]
The load averaged over the last 1 minute, divided by the number of cores
cnt
Load/Core [15 min]
The load averaged over the last 15 minutes, divided by the number of cores
cnt
Load/Core [5 min]
The load averaged over the last 5 minutes, divided by the number of cores
cnt
Multipaths [Active]
The number of external storage connection paths with status = active
cnt
Multipaths [Failed]
The number of external storage connection paths with status = failed
cnt
Multipaths [Faulty]
The number of external storage connection paths with status = faulty
cnt
NTP Offset
The measured offset (time difference between the NTP server and the local environment) of the last sample
num
Run Queue Length
The length of the run queue
num
Uptime
The OS uptime (in milliseconds)
ms
Context Switches
The number of CPU context switches (per second)
cnt
Disk Read Bytes [Sec]
The number of bytes read from the Windows logical disk per second
cnt
Disk Read Time [Avg]
The average data read time (in seconds)
sec
Disk Transfer Time [Avg]
The average disk wait time
sec
Disk Usage
The disk usage rate
%
Disk Write Bytes [Sec]
The number of bytes written to the Windows logical disk per second
cnt
Disk Write Time [Avg]
The average data write time (in seconds)
sec
Pagingfile Usage
The paging file usage rate
%
Pool Used [Non Paged]
The non-paged pool usage in kernel memory
bytes
Pool Used [Paged]
The paged pool usage in kernel memory
bytes
Process [Running]
The number of currently running processes
cnt
Threads [Running]
The number of currently running threads
cnt
Threads [Waiting]
The number of threads waiting for processor time
cnt
Table. Bare Metal Server Monitoring Metrics (Available when Agent is installed)
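As the table notes, CPU Usage [Active] is the percentage of CPU time excluding the Idle and IO Wait states. Given two samples of cumulative CPU time counters (the field names here follow the Linux /proc/stat layout; the function name is ours, not part of the Agent), the value can be derived as a sketch:

```python
def cpu_usage_active(prev: dict, curr: dict) -> float:
    """Percentage of CPU time spent in non-idle, non-iowait states
    between two cumulative counter samples (jiffies, as in /proc/stat)."""
    total = sum(curr.values()) - sum(prev.values())
    if total <= 0:
        return 0.0
    inactive = (curr["idle"] - prev["idle"]) + (curr["iowait"] - prev["iowait"])
    return 100.0 * (total - inactive) / total

prev = {"user": 100, "system": 50, "idle": 800, "iowait": 50}
curr = {"user": 160, "system": 70, "idle": 950, "iowait": 70}
print(round(cpu_usage_active(prev, curr), 1))  # 32.0
```

The per-core variants in the table apply the same calculation to each core's counters individually.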
2.4.2 - How-to guides
The user can input required information for a Bare Metal Server through the Samsung Cloud Platform Console, select detailed options, and create the service.
Bare Metal Server Create
You can create and use the Bare Metal Server service from the Samsung Cloud Platform Console.
To create a Bare Metal Server, follow the steps below.
Click the All Services > Compute > Bare Metal Server menu. You will be taken to the Bare Metal Server Service Home page.
Click the Bare Metal Server Create button on the Service Home page. You will be taken to the Bare Metal Server Create page.
On the Bare Metal Server Create page, enter the information required to create the service and select detailed options.
In the Image and Version Selection area, select the required information.
Category
Required or not
Detailed description
Image
Required
Select the type of image provided
RHEL
Rocky Linux
Ubuntu
Windows
Image Version
Required
Select version of the chosen image
Provides a list of versions of the provided server images
Table. Bare Metal Server Image and Version Input Items
In the Service Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Server count
Required
Number of Bare Metal Server servers to create simultaneously
Only numbers can be entered, and must be between 1 and 5
Automatically creates an account to provide automation functions after Bare Metal Server creation
The account is used only for inter-system interface purposes
Password is encrypted and cannot be accessed outside the system
If the account is deleted, network changes and some automation functions will be restricted
Table. Bare Metal Server Service Information Input Items
In the Required Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Administrator Account
Required
Set the administrator account and password to be used when connecting to the server
For RHEL and Ubuntu OS, the account is fixed as root
For Windows OS, enter 5 to 20 characters using lowercase English letters and numbers
The name Administrator is not allowed
Server Name
Required
Enter a name to distinguish the Bare Metal Server when the selected number of servers is 1
The hostname is set to the entered server name
Must start with a lowercase English letter and contain only lowercase letters, numbers, and the special character (-), within 3 to 15 characters
Must not end with a special character (-)
Server Name Prefix
Required
Enter a prefix to distinguish each Bare Metal Server created when the selected number of servers is 2 or more
Names are automatically generated in the form of the entered value (prefix) + ‘-###’
Must start with a lowercase English letter, and be entered using lowercase letters, numbers, and special characters (-) within 3 to 15 characters
Must not end with a special character (-)
Network Settings
Required
Set the network where the Bare Metal Server will be installed
Select a pre-created VPC
General Subnet: Select a pre-created general Subnet
IP can be set to Auto-generated or User input; if Input is selected, the user enters the IP directly
NAT: Available only when there is a single server and the VPC has an Internet Gateway attached
When checked, a NAT IP can be selected
NAT IP: Select a NAT IP
If there is no NAT IP to select, click the Create New button to generate a Public IP
Click the Refresh button to view and select the created Public IP
Creating a Public IP incurs charges according to the Public IP pricing policy
Local Subnet (Optional): Choose to use a local Subnet
Not a required element for creating the service
Select a pre-created local Subnet
IP can be set to Auto-generated or User input; if Input is selected, the user enters the IP directly
Table. Bare Metal Server Required Information Input Items
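The server name rules above (start with a lowercase letter; only lowercase letters, digits, and '-'; 3 to 15 characters; must not end with '-') and the prefix + ‘-###’ pattern can be checked before submitting the form. A sketch, with our own helper names, and assuming the auto-generated numbering starts at 001 (the document only specifies the ‘-###’ form):

```python
import re

# 3-15 chars, starts with a lowercase letter, no trailing '-'
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{1,13}[a-z0-9]$")

def valid_server_name(name: str) -> bool:
    """Check a server name against the rules in the table above."""
    return NAME_RE.fullmatch(name) is not None

def generated_names(prefix: str, count: int):
    """Names auto-generated from a prefix when creating 2 or more servers:
    user input value (prefix) + '-###' (starting number is an assumption)."""
    if not 2 <= count <= 5:
        raise ValueError("server count for prefix naming must be 2 to 5")
    return [f"{prefix}-{i:03d}" for i in range(1, count + 1)]

print(valid_server_name("bm-web01"))     # True
print(generated_names("bm-web", 3))
```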
Caution
Please control traffic access to the Bare Metal Server with a firewall or similar means; Security Group is not provided. The Bare Metal Server firewall can only be used for traffic control between the Bare Metal Server and a Virtual Server. To use the Bare Metal Server’s firewall, follow the steps below.
Separate the VPC of the Bare Metal Server: Separate them so that the Bare Metal Server and Virtual Server do not use the same VPC.
Create Transit Gateway: Please create the Transit Gateway.
The integration between the VPC of Virtual Server and the VPC of Bare Metal Server uses a Transit Gateway.
When creating a Transit Gateway integration in the VPC of a Bare Metal Server, you must also create the Bare Metal Server’s firewall.
Firewall Rule registration: Register a rule in the Firewall of the Bare Metal Server.
In the Additional Information Input area on the Bare Metal Server Create page, enter or select the required information.
Category
Required or not
Detailed description
Local disk partition
Select
Set whether to use local disk partition
Up to 10 can be created, including the root partition
Up to 90% of total capacity can be used
After checking Use, partition information can be set
Root partition information setting
Partition type: flat, lvm selectable
Partition name: enter partition name
Can be entered only when partition type is lvm
Enter within 15 characters, starting with a letter and including letters, numbers, and special characters (-)
Partition size: enter at least 50 GB
Filesystem type: select according to the used image
For RHEL, Rocky Linux: xfs, ext4
For Ubuntu: ext4, xfs, btrfs
For SLES: btrfs, xfs, ext4
Mount point: start with special character / and enter within 15 characters, including letters, numbers, and special characters (-)
If Filesystem type is swap, entry not allowed
Available capacity: 90% of the default disk capacity provided when selecting a server
When setting partition size, the remaining capacity is automatically calculated and displayed
Total partition disk amount cannot exceed available capacity
Additional partition information setting
Partition type: flat, lvm selectable
Partition name: enter partition name
Can be entered only when partition type is lvm
Enter within 15 characters, starting with a letter and including letters, numbers, and special characters (-)
Partition size: enter at least 1 GB
Filesystem type: select according to the used image>
For RHEL, Rocky Linux: xfs, ext4, swap
For Ubuntu: ext4, xfs, btrfs, swap
For SLES: btrfs, xfs, ext4, swap
Mount point: start with special character / and enter within 15 characters, including letters, numbers, and special characters (-)
If Filesystem type is swap, entry not allowed
Available capacity: 90% of the default disk capacity provided when selecting a server
When setting partition size, the remaining capacity is automatically calculated and displayed
Total partition disk amount cannot exceed available capacity
Placement Group
Select
Servers belonging to the same Placement group are distributed across different racks
Provides distributed placement for up to 2 servers belonging to the same Placement group
For distribution of 3 or more servers, add additional Placement groups
Applicable only at initial creation; cannot be modified after creation
If you terminate the last server belonging to a Placement group, that Placement group is automatically deleted
Lock
Select
Enabling Lock prevents accidental actions by blocking server termination, start, and stop
Hyper Threading
Select
Set logical cores to operate at twice the number of physical cores
Uncheck the box to turn off Hyper Threading
Cannot be changed after server creation
Init Script
Select
Script to run when the server starts
Init Script must be selected differently depending on the image type
For Windows: Select Batch Script
For Linux: Shell Script
Table. Bare Metal Server Additional Information Input Items
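The local disk partition constraints above (at most 10 partitions including the root partition, total size at most 90% of disk capacity, root partition at least 50 GB, additional partitions at least 1 GB) can be pre-checked before filling in the form. A sketch under those stated rules (the function name is ours, not a platform API):

```python
def check_partition_plan(disk_gb: float, root_gb: float, extra_gb: list) -> list:
    """Return a list of rule violations for a local disk partition plan;
    an empty list means the plan fits the constraints described above."""
    errors = []
    if 1 + len(extra_gb) > 10:
        errors.append("at most 10 partitions including the root partition")
    if root_gb < 50:
        errors.append("root partition must be at least 50 GB")
    for size in extra_gb:
        if size < 1:
            errors.append("additional partitions must be at least 1 GB")
            break
    available = disk_gb * 0.9  # up to 90% of total capacity can be used
    if root_gb + sum(extra_gb) > available:
        errors.append(f"total exceeds available capacity ({available:.0f} GB)")
    return errors

# e.g. a 480 GB internal disk: 432 GB is usable for partitions
print(check_partition_plan(480, 100, [50, 200]))  # []
print(check_partition_plan(480, 100, [400]))
```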
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When creation is complete, check the created resources on the Bare Metal Server List page.
Check Bare Metal Server Detailed Information
In the Bare Metal Server service, you can view and edit the full resource list and detailed information. The Bare Metal Server Details page consists of the Detailed Information, Tag, and Work History tabs.
To view the detailed information of a Bare Metal Server, follow the steps below.
Click the All Services > Compute > Bare Metal Server menu. You will be taken to the Bare Metal Server Service Home page.
Click the Bare Metal Server menu on the Service Home page. You will be taken to the Bare Metal Server List page.
On the Bare Metal Server List page, click the resource whose detailed information you want to view. You will be taken to the Bare Metal Server Details page.
The Bare Metal Server Details page displays status information and additional feature information, and consists of the Detailed Information, Tag, and Work History tabs.
Category
Detailed description
Bare Metal Server status
Status of the Bare Metal Server created by the user
Creating: server is being created
Running: creation complete and usable
Editing: IP is being changed
Unknown: error state
Starting: server is starting
Stopping: server is stopping
Stopped: server has stopped
Terminating: termination in progress
Terminated: termination complete
Server Control
Button to change server status
Start: Start a stopped server
Stop: Stop a running server
Service termination
Button to cancel the service
Table. Bare Metal Server status information and additional functions
Detailed Information
On the Bare Metal Server List page, you can view the detailed information of the selected resource and, if necessary, edit it.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Bare Metal Server, it means Bare Metal Server SRN
Resource Name
Resource Name
In Bare Metal Server, it means the server name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Editor
User who edited the service information
Modification DateTime
Date and time when service information was modified
Lock
If Lock is enabled, server termination/start/stop is blocked to prevent accidental actions
To change the Lock setting, click the Edit button
Hyper Threading
Hyper Threading usage/not usage indication
Hyper Threading is a setting that makes the logical core count operate at twice the number of physical cores
Network
Network information of Bare Metal Server
VPC, General Subnet, IP and status, Public NAT IP and status, Private NAT IP and status
If IP change is needed, click the Edit button to set
Local Subnet
Local Subnet information of Bare Metal Server
Local Subnet name, Local Subnet IP, Vlan ID, Interface Name
If you need to add a Local Subnet, click the Add button to set
Block Storage
Block Storage information connected to the server
Volume name, disk type, capacity, status
Click the Add button to go to the Block Storage creation screen
Init Script
View the Init Script entered when creating the server
Table. Bare Metal Server detailed information tab items
Tag
On the Bare Metal Server List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
Can view the tag’s Key and Value information
Up to 50 tags can be added per resource
When entering tags, search and select from the existing list of created Keys and Values
Table. Bare Metal Server Tag Tab Items
Work History
You can view the operation history of the selected resources on the Bare Metal Server List page.
Category
Detailed description
Work History List
Resource Change History
You can check the work date and time, Resource ID, Resource name, work details, event topic, work result, and worker information
Table. Bare Metal Server Work History Tab Detailed Information Items
Bare Metal Server Resource Management
If you need server control and management functions for the created Bare Metal Server resources, you can perform the tasks on the Bare Metal Server List or Bare Metal Server Details page.
Bare Metal Server Operation Control
You can start, stop, and restart a Bare Metal Server.
To control the operation of Bare Metal Server, follow the steps below.
Click the All Services > Compute > Bare Metal Server menu. You will be taken to the Bare Metal Server Service Home page.
Click the Bare Metal Server menu on the Service Home page. You will be taken to the Bare Metal Server List page.
On the Bare Metal Server List page, you can select multiple servers and control them simultaneously using the Start and Stop buttons at the top of the table.
You can also start and stop a server on the Bare Metal Server Details page.
On the Bare Metal Server List page, click the resource whose operation you want to control; you will be taken to the Bare Metal Server Details page.
Check the server status and complete the change using each Server Control button.
Start: Start the stopped server.
Stop: Stops the running server.
Guide
When a Bare Metal Server is stopped, the server’s power turns off.
Since it may affect applications or storage in use, we recommend shutting down the OS and then stopping.
After shutting down the OS, be sure to also stop in the Console.
Operation control unavailable
If a Bare Metal Server cannot be started when you request a start, refer to the following.
When Lock is set: After changing the Lock setting to disabled, try again.
If the Bare Metal Server’s status is not Stopped: Change the Bare Metal Server’s status to Stopped, then try again.
If stopping a Bare Metal Server request is not possible, refer to the following.
If Lock is set: Change the Lock setting to disabled, then try again.
If the Bare Metal Server’s status is not Running: Change the Bare Metal Server’s status to Running, then try again.
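The start/stop preconditions above (Lock disabled, and the server in the matching state: Stopped to start, Running to stop) amount to a simple check. A sketch with our own names, not a platform API:

```python
def can_control(action: str, status: str, locked: bool):
    """Return (allowed, reason) for a server control request, following
    the operation-control constraints described above."""
    if locked:
        return False, "Lock is set; change the Lock setting to disabled first"
    required = {"start": "Stopped", "stop": "Running"}[action]
    if status != required:
        return False, f"status must be {required} (currently {status})"
    return True, "ok"

print(can_control("start", "Stopped", locked=False))  # (True, 'ok')
print(can_control("stop", "Stopped", locked=False))
```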
Add Block Storage
You can add Block Storage to a Bare Metal Server.
To add Block Storage, follow the steps below.
Click the All Services > Compute > Bare Metal Server menu. You will be taken to the Bare Metal Server Service Home page.
Click the Bare Metal Server menu on the Service Home page. You will be taken to the Bare Metal Server List page.
On the Bare Metal Server List page, click the server to add Block Storage. You will be taken to the Bare Metal Server Details page.
Click the Add button in the Block Storage item on the Bare Metal Server Details page.
When the popup window confirming the Block Storage addition opens, click the Confirm button. You will be taken to the Block Storage(BM) Create page.
On the Block Storage(BM) Create page, enter the information required to create the service and create the Block Storage.
For detailed information about creating Block Storage(BM), please refer to Block Storage(BM) Create.
After adding Block Storage, navigate to the Bare Metal Server Details page and verify that the Block Storage has been added.
Caution
After creating Block Storage, you cannot increase the capacity.
Bare Metal Server Termination
If you terminate an unused Bare Metal Server, you can reduce operating costs. However, terminating a Bare Metal Server may cause the running service to stop immediately, so you should consider the impact of service interruption sufficiently before proceeding with the termination.
Caution
Please note that data cannot be recovered after service termination.
Caution
If you terminate servers with attached Block Storage(BM) one at a time, the servers are terminated but the attached Block Storage(BM) is not; terminate it directly from the Block Storage(BM) service.
To cancel the Bare Metal Server, follow the steps below.
All Services > Compute > Bare Metal Server Click the menu. Navigate to the Service Home page of Bare Metal Server.
Click the Bare Metal Server menu on the Service Home page. Go to the Bare Metal Server list page.
Bare Metal Server List page, select the resource to cancel, and click the Cancel Service button.
You can select multiple resources and delete them simultaneously.
You can also terminate by clicking the Service Termination button on the Bare Metal Server Details page of the resource to be terminated.
When termination is complete, check on the Bare Metal Server List page whether the resource has been terminated.
Termination Constraints
When a Bare Metal Server termination request cannot be processed, a popup window provides guidance. Refer to the cases below.
Termination not allowed
Block Storage(BM) is connected (simultaneous termination of 2 or more servers): Disconnect the Block Storage(BM) first.
If File Storage is connected: Please disconnect the File Storage first.
For detailed information on how to cancel, please refer to File Storage Cancel.
If Lock is set: After changing the Lock setting to disabled, try again.
If there is a Backup Agent or Load Balancer connection resource: Terminate the connection of that resource first.
If resource management tasks for Bare Metal Server are in progress on the same account: After the Bare Metal Server resource management tasks are completed, please try again.
If the Bare Metal Server’s status is not Running or Stopped: Change the Bare Metal Server’s status to Running or Stopped, then try again.
If the server that cannot be terminated simultaneously is included: Please select only the resources that can be terminated and try again.
Local Subnet Setup
After a Bare Metal Server is created, if you add a local Subnet on the Bare Metal Server Details page, you must configure the local Subnet's network settings yourself.
First Connection(kr-west)
If no local Subnet is connected to the Bare Metal Server and you are adding the first connection, proceed according to the guide below.
Caution
This guide applies when adding the first local Subnet connection to a server in kr-west (Korea West).
After adding a new VLAN, set the IP for the Bonding configuration.
Change the ID and IP in the example code to the assigned ID and IP.
[root@localhost ~]# vi /etc/netplan/50-cloud-init.yaml
network:
  bonds:
    bond-mgt:
      interfaces:
      - ens2f1    # Enter the Interface Name confirmed on the Bare Metal Server Details page.
      - ens4f1    # Enter the Interface Name confirmed on the Bare Metal Server Details page.
      mtu: 1500
      parameters:
        mii-monitor-interval: 100
        mode: active-backup
        transmit-hash-policy: layer2
  ethernets:
    ens2f1:
      match:
        macaddress: 68:05:ca:d4:09:91
      mtu: 1500
      set-name: ens2f1
    ens4f1:
      match:
        macaddress: 68:05:ca:d4:09:01
      mtu: 1500
      set-name: ens4f1
  vlans:
    bond-mgt.20:    # Replace 20 with the VLAN ID confirmed in the SCP Console.
      addresses:
      - 192.168.0.10/24    # Set to the local Subnet IP confirmed in the SCP Console.
      id: 20    # Set to the VLAN ID confirmed in the SCP Console.
      link: bond-mgt
      mtu: 1500
Code block. IP Settings
Apply the changes to the system.
# netplan apply
Code block. Reflect changes
Check the interface status.
# ip a
or
# bash /usr/local/bin/ip.sh
Code block. Interface lookup
Linux – Setting up Subnet on CentOS/Red Hat
After adding a local Subnet on the CentOS/Red Hat operating system, follow the steps below to configure the network.
Caution
If you set the Interface name incorrectly, the IP information in use may be deleted, so be careful.
On the Bare Metal Server Details page, check the Interface Name.
Modify the following command and execute.
#!/usr/bin/bash
IP_ADDR="10.1.1.3/24"     # Set the local Subnet IP confirmed in the Console.
VLAN_ID="7"               # Set the VLAN ID confirmed in the Console.
BOND_NAME="bond-mgt"
BOND_IF_name1="ens2f1"    # Enter the Interface Name confirmed on the Bare Metal Server Details page.
BOND_IF_name2="ens4f0"    # Enter the Interface Name confirmed on the Bare Metal Server Details page.

# Delete Connection
nmcli con down "Bond ${BOND_NAME}"
nmcli con del "Bond ${BOND_NAME}"
nmcli con down "System ${BOND_IF_name1}"
nmcli con down "System ${BOND_IF_name2}"
nmcli con del "System ${BOND_IF_name1}"
nmcli con del "System ${BOND_IF_name2}"

# Create Bonding
nmcli con add con-name ${BOND_NAME} type bond ifname ${BOND_NAME}
nmcli con mod ${BOND_NAME} con-name "Bond ${BOND_NAME}"
nmcli con mod "Bond ${BOND_NAME}" ipv4.method disabled
nmcli con mod "Bond ${BOND_NAME}" ipv6.method ignore
nmcli con mod "Bond ${BOND_NAME}" connection.autoconnect yes
nmcli con mod "Bond ${BOND_NAME}" +bond.options mode=active-backup \
  +bond.options xmit_hash_policy=layer2 \
  +bond.options miimon=100 \
  +bond.options num_grat_arp=1 \
  +bond.options downdelay=0 \
  +bond.options updelay=0

# Assign bond-slave
nmcli con add type bond-slave ifname ${BOND_IF_name1} con-name "${BOND_IF_name1}" master ${BOND_NAME}
nmcli con mod ${BOND_IF_name1} con-name "System ${BOND_IF_name1}"
nmcli con add type bond-slave ifname ${BOND_IF_name2} con-name "${BOND_IF_name2}" master ${BOND_NAME}
nmcli con mod ${BOND_IF_name2} con-name "System ${BOND_IF_name2}"

# Connection UP
nmcli con up "Bond ${BOND_NAME}"

# add vlan
nmcli con add type vlan ifname "${BOND_NAME}.${VLAN_ID}" con-name "${BOND_NAME}.${VLAN_ID}" id ${VLAN_ID} dev ${BOND_NAME}
nmcli con mod ${BOND_NAME}.${VLAN_ID} con-name "Vlan ${BOND_NAME}.${VLAN_ID}"
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.addresses ${IP_ADDR}
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.method manual
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv6.method ignore
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" connection.autoconnect yes
nmcli con up "Vlan ${BOND_NAME}.${VLAN_ID}"
nmcli device reapply ${BOND_NAME}.${VLAN_ID}
Code block. IP configuration script
Check the interface status.
# ip a
or
# bash /usr/local/bin/ip.sh
Code block. Interface lookup
Setting up Subnet on Windows
After adding a local Subnet in the Windows operating system, follow these steps to configure the network.
Right-click the Windows Start icon, then run the Windows PowerShell (Administrator) program.
Check the Interface Name on the Bare Metal Server Details page.
Run ncpa.cpl from the Windows Run menu.
Check whether the interface is activated, and if necessary, activate it.
Activate the interface whose Interface Name you confirmed on the Bare Metal Server Details page.
Run ncpa.cpl from the Windows Run menu to check the interface status.
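The interface check and activation above can also be done from the PowerShell window opened in step 1 instead of ncpa.cpl; a minimal sketch, where the adapter name "Ethernet 2" is a hypothetical placeholder for the Interface Name you confirmed:

```powershell
# List all network adapters and their status; locate the Interface Name
# confirmed on the Bare Metal Server Details page.
Get-NetAdapter

# "Ethernet 2" is a hypothetical placeholder - replace it with the
# Interface Name you confirmed.
Enable-NetAdapter -Name "Ethernet 2" -Confirm:$false

# Verify that the adapter status is now Up.
Get-NetAdapter -Name "Ethernet 2"
```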
First Connection (kr-south)
If there is no local subnet connected to the Bare Metal Server initially, and you are adding the first connection, proceed according to the guide below.
Caution
This guide applies when adding the first local Subnet connection to a server in kr-south (Korea South).
For the guide that applies to kr-west (Korea West), refer to the First Connection (kr-west) chapter.
Linux - Setting up Subnet on Ubuntu
To add a local Subnet on the Ubuntu operating system and proceed with network settings, follow the steps below.
After adding a new Vlan, set the IP.
Change the ID and IP in the example code to the assigned ID and IP.
[root@localhost ~]# vi /etc/netplan/50-cloud-init.yaml
network:
  bonds:
    bond-mgt:
      interfaces:
      - ens2f1
      - ens4f1
      mtu: 1500
      parameters:
        mii-monitor-interval: 100
        mode: active-backup
        transmit-hash-policy: layer2
  ethernets:
    ens2f1:
      match:
        macaddress: 68:05:ca:d4:09:91
      mtu: 1500
      set-name: ens2f1
    ens4f1:
      match:
        macaddress: 68:05:ca:d4:09:01
      mtu: 1500
      set-name: ens4f1
  vlans:
    bond-mgt.20:
      addresses:
      - 192.168.0.10/24
      id: 20
      link: bond-mgt
      mtu: 1500
    bond-mgt.21:    # Replace 21 with the VLAN ID confirmed in the Console.
      addresses:
      - 192.168.0.20/24    # Set to the local Subnet IP confirmed in the Console.
      id: 21    # Set to the VLAN ID confirmed in the Console.
      link: bond-mgt
      mtu: 1500
Code block. Vlan addition and IP setting
Reflect the modifications in the system.
# netplan apply
Code block. Reflect changes
Check the interface status.
# ip a
or
# bash /usr/local/bin/ip.sh
Code block. Interface lookup
Linux – Setting up Subnet on CentOS/Red Hat
After adding a local Subnet on CentOS/Red Hat operating system, follow the steps below to configure the network.
Caution
Be careful: if you configure the network incorrectly after adding a local Subnet, the IP information in use may be deleted.
Check the Bond Name for local Subnet.
# sh /usr/local/bin/ip.sh
Code block. Bonding check
Modify the following command and execute.
#!/usr/bin/bash
IP_ADDR="10.1.1.3/24"    # Set the local Subnet IP confirmed in the Console.
VLAN_ID="7"              # Set the VLAN ID confirmed in the Console.
BOND_NAME="bond-mgt"     # Set the Bond Name confirmed in step 1.

# add vlan
nmcli con add type vlan ifname "${BOND_NAME}.${VLAN_ID}" con-name "${BOND_NAME}.${VLAN_ID}" id ${VLAN_ID} dev ${BOND_NAME}
nmcli con mod ${BOND_NAME}.${VLAN_ID} con-name "Vlan ${BOND_NAME}.${VLAN_ID}"
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.addresses ${IP_ADDR}
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.method manual
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv6.method ignore
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" connection.autoconnect yes
nmcli con up "Vlan ${BOND_NAME}.${VLAN_ID}"
nmcli device reapply ${BOND_NAME}.${VLAN_ID}
Code block. IP configuration script
Check the interface status.
# ip a
or
# bash /usr/local/bin/ip.sh
Code block. Interface lookup
Setting up Subnet on Windows
After adding a local Subnet on the Windows operating system, follow these steps to configure the network.
Right-click the Windows Start icon, then run the Windows PowerShell (Administrator) program.
Check the Teaming name for local Subnet.
PS C:\> Get-NetAdapter
Code block. Windows interface check
After adding a new VLAN, set the IP.
Enter the Teaming name confirmed in step 2, and the Vlan ID and Local Subnet IP confirmed in the Console.
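As a sketch of this step, assuming Windows NIC Teaming (LBFO) is in use; the Teaming name team0, VLAN ID 7, and IP 10.1.1.3/24 below are placeholders for the Teaming name confirmed in step 2 and the values confirmed in the Console:

```powershell
# Add a VLAN interface on the existing team ("team0" and VLAN ID 7 are
# placeholders).
Add-NetLbfoTeamNic -Team "team0" -VlanID 7

# Assign the local Subnet IP confirmed in the Console; the new team
# NIC's alias is typically "<TeamName> - VLAN <ID>".
New-NetIPAddress -InterfaceAlias "team0 - VLAN 7" -IPAddress 10.1.1.3 -PrefixLength 24
```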
In the Windows Start menu, run ncpa.cpl to check the interface status.
Add second connection (kr-west, kr-south)
If a local Subnet is already connected to the Bare Metal Server, follow the guide below for the second, additional connection.
Because Bonding was already created when connecting the first local Subnet, there is no Bonding creation procedure when connecting the second local Subnet.
Please refer to the details below.
Notice
This guide applies commonly to kr-west and kr-south.
Linux - Setting up Subnet on Ubuntu
To add a local Subnet on the Ubuntu operating system and proceed with network configuration, follow the steps below.
After adding a new Vlan, set the IP.
Change the ID and IP of the example code to the assigned ID and IP.
[root@localhost ~]# vi /etc/netplan/50-cloud-init.yaml
network:
  bonds:
    bond-mgt:
      interfaces:
      - ens2f1
      - ens4f1
      mtu: 1500
      parameters:
        mii-monitor-interval: 100
        mode: active-backup
        transmit-hash-policy: layer2
  ethernets:
    ens2f1:
      match:
        macaddress: 68:05:ca:d4:09:91
      mtu: 1500
      set-name: ens2f1
    ens4f1:
      match:
        macaddress: 68:05:ca:d4:09:01
      mtu: 1500
      set-name: ens4f1
  vlans:
    bond-mgt.20:
      addresses:
      - 192.168.0.10/24
      id: 20
      link: bond-mgt
      mtu: 1500
    bond-mgt.21:    # Replace 21 with the VLAN ID confirmed in the Console.
      addresses:
      - 192.168.0.20/24    # Set to the local Subnet IP confirmed in the Console.
      id: 21    # Set to the VLAN ID confirmed in the Console.
      link: bond-mgt
      mtu: 1500
Code block. Vlan addition and IP configuration
Apply the changes to the system.
# netplan apply
Code block. Reflect changes
Check the interface status.
# ip a
or
# bash /usr/local/bin/ip.sh
Code block. Interface lookup
Linux – Setting up Subnet on CentOS/Red Hat
After adding a local Subnet on CentOS/Red Hat operating system, follow the steps below to configure the network.
Caution
Be careful: if you configure the network incorrectly after adding a local Subnet, the IP information in use may be deleted.
Check the Bond Name for the local Subnet.
# sh /usr/local/bin/ip.sh
Code block. Bonding check
Modify the following command and execute.
#!/usr/bin/bash
IP_ADDR="10.1.1.3/24"    # Set the local Subnet IP confirmed in the Console.
VLAN_ID="7"              # Set the VLAN ID confirmed in the Console.
BOND_NAME="bond-mgt"     # Set the Bond Name confirmed in step 1.

# add vlan
nmcli con add type vlan ifname "${BOND_NAME}.${VLAN_ID}" con-name "${BOND_NAME}.${VLAN_ID}" id ${VLAN_ID} dev ${BOND_NAME}
nmcli con mod ${BOND_NAME}.${VLAN_ID} con-name "Vlan ${BOND_NAME}.${VLAN_ID}"
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.addresses ${IP_ADDR}
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.method manual
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv6.method ignore
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" connection.autoconnect yes
nmcli con up "Vlan ${BOND_NAME}.${VLAN_ID}"
nmcli device reapply ${BOND_NAME}.${VLAN_ID}
Code block. IP configuration script
Check the interface status.
# ip a
or
# bash /usr/local/bin/ip.sh
Code block. Interface lookup
Setting up Subnet on Windows
After adding a local Subnet in the Windows operating system, follow the steps below to set up the network.
Right-click the Windows Start icon, then run the Windows PowerShell (Administrator) program.
Check the Teaming name for local Subnet.
PS C:\> Get-NetAdapter
Code block. Windows interface check
After adding a new VLAN, set the IP.
Enter the Teaming name confirmed in step 2, and the VLAN ID and local Subnet IP confirmed in the Console.
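A minimal sketch of this step, again assuming Windows NIC Teaming (LBFO); team0, VLAN ID 7, and 10.1.1.3/24 are placeholders for the Teaming name from step 2 and the values from the Console:

```powershell
# Add a VLAN interface on the existing team (placeholder team name and
# VLAN ID).
Add-NetLbfoTeamNic -Team "team0" -VlanID 7

# Assign the local Subnet IP from the Console to the new team NIC
# (alias is typically "<TeamName> - VLAN <ID>").
New-NetIPAddress -InterfaceAlias "team0 - VLAN 7" -IPAddress 10.1.1.3 -PrefixLength 24
```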
In the Windows Run menu, execute ncpa.cpl to check the interface status.
IP Change
You can change the IP for purposes such as migration or server replacement.
Caution
If you proceed with changing the IP, you will no longer be able to communicate with that IP, and you cannot cancel the IP change while it is in progress.
For a server running the Load Balancer service, you must delete the existing IP from the LB server group and add the changed IP as a member of the LB server group yourself.
Servers using Public NAT or Private NAT must disable and reconfigure Public NAT or Private NAT after an IP change.
If you are using Public NAT or Private NAT, first disable it, complete the IP change, and then configure it again.
You can change Public NAT or Private NAT usage by clicking the Edit button of the Public NAT IP or Private NAT item on the Bare Metal Server Details page.
If you want to change the IP, follow the steps below.
Click the All Services > Compute > Bare Metal Server menu to go to the Service Home page of Bare Metal Server.
Click the Bare Metal Server menu on the Service Home page to go to the Bare Metal Server list page.
On the Bare Metal Server list page, click the server whose IP you want to change. The Bare Metal Server Details page opens.
Click the Edit button next to the IP item on the Bare Metal Server Details page.
When the popup notifying IP modification opens, click the Confirm button. The IP Change popup opens.
In the IP Change popup window, proceed with the Step 1, Step 2, and Step 3 tasks in order.
Guide
When changing the IP, the detailed configuration of each IP change step depends on the subnet of the IP to be changed. Be sure to refer to the following examples and complete the work for each step.
When each progress step is completed successfully, the task status in the upper right corner is displayed as Completed, and you can proceed to the next step.
When performing the final check of Step 3, it is recommended to restart the server and then proceed with the inspection.
After confirming that all tasks have been completed successfully, click the Confirm button.
Change to an IP in the Same Subnet
This section explains how to set the IP for each operating system when the new IP uses the same subnet.
Linux – CentOS/Red Hat Operating System
Step 1
Follow the procedure below to proceed with the Step 1 tasks.
Select the Subnet to change.
Enter the IP to change.
Click the IP Allocation Request button.
When the popup notifying IP change confirmation opens, click the Confirm button.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Caution
If you proceed with the IP allocation request of Step 1, you cannot cancel or restore the IP change.
Step 2
Follow the procedure below to proceed with the Step 2 tasks.
Connect to the IP change target server using NAT IP for the IP change operation.
Notice
To prevent situations where communication is impossible during operation, it is recommended to connect via another Virtual Server or Bare Metal Server created in the same subnet.
Set the IP assigned in Step 1 on the server.
In the following example, replace 172.17.34.150 with the assigned IP.
After checking the information of the interface you want to change on the server, enter it instead of the example bond-srv.9.
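This step can be sketched with nmcli as follows; 172.17.34.150/24 and bond-srv.9 are the placeholders described above, and the connection name follows the "Vlan <bond>.<id>" convention used earlier in this guide:

```shell
# Placeholders: replace 172.17.34.150/24 with the IP assigned in Step 1
# and bond-srv.9 with the interface confirmed on the server.
nmcli con mod "Vlan bond-srv.9" ipv4.addresses 172.17.34.150/24

# Re-activate the connection; the terminal session will disconnect.
nmcli con up "Vlan bond-srv.9"
```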
If you set the IP, the terminal session will be disconnected.
After the Step 2 task is completed and the task status changes to Completed, you can reconnect to the terminal.
When all tasks are completed, select the task completion checkbox of Step 2 in the IP Change popup window.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Guide
If the task status of Step 2 has changed to Completed but there is still an issue with terminal connection, go to the All Services > Management > Support Center > Contact Us menu and submit an inquiry.
Step 3
Follow the procedure below to proceed with the Step 3 tasks.
Connect to the target server for IP change using NAT IP and check the communication status.
Use the following command to check again whether the previous configuration information remains and whether it has been changed correctly. If you can connect normally to the IP change target server, the changed IP is in normal communication status.
# bash /usr/local/bin/ip.sh
Code block. Communication status check
Reference
NAT IP does not change.
Once all tasks are completed, restart the server and then perform a final check.
Reference
It is recommended to perform the final check after restarting the server.
If there are no issues in the final inspection results, select the work completion checkbox of Step 3 in the IP Change popup window.
Linux – Ubuntu Operating System
Step 1
Follow the procedure below to proceed with the Step 1 tasks.
Select the Subnet to change.
Enter the IP to change.
Click the IP Allocation Request button.
If a popup notifying IP change confirmation opens, click the Confirm button.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Caution
If you proceed with the IP allocation request of Step 1, you cannot cancel or restore the IP change.
Step 2
Follow the procedure below to proceed with the Step 2 tasks.
To perform the IP change operation, connect to the IP change target server using a NAT IP.
Guide
To prevent situations where communication is impossible during operation, it is recommended to connect via another Virtual Server or Bare Metal Server created in the same subnet.
Set the IP assigned in Step 1 on the server.
In the following example, replace 172.17.34.150/24 with the assigned IP.
After checking the information of the Interface you want to change on the server, enter it instead of the example bond-srv.9.
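As a sketch, the corresponding netplan fragment looks like the following; bond-srv.9 and 172.17.34.150/24 are placeholders for the interface you confirmed and the IP assigned in Step 1:

```yaml
# Fragment of /etc/netplan/50-cloud-init.yaml - only the address of
# the VLAN being changed is edited.
vlans:
  bond-srv.9:
    addresses:
    - 172.17.34.150/24
    id: 9
    link: bond-srv
    mtu: 1500
```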
Use the netplan apply command to apply the changes to the system.
[root@localhost ~]# netplan apply
Code block. Run Netplan apply
Notice
If you set the IP, the terminal session will be disconnected.
After the Step 2 task is completed and the task status changes to Completed, you can reconnect to the terminal.
When all tasks are completed, select the task completion checkbox of Step 2 in the IP Change popup window.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Notice
If the task status of Step 2 has changed to Completed but there is an issue with terminal access, go to the All Services > Management > Support Center > Contact Us menu and submit an inquiry.
Step 3
Follow the procedure below to proceed with the Step 3 tasks.
Check the communication status by connecting to the IP change target server with NAT IP.
Use the following command to check again whether the pre-change configuration information remains and whether it has been changed correctly. If you can connect normally to the IP change target server, the changed IP is in normal communication status.
# bash /usr/local/bin/ip.sh
Code block. Communication status check
Reference
NAT IP does not change.
Once all tasks are completed, restart the server and then perform a final check.
Reference
It is recommended to perform the final check after restarting the server.
If there are no issues in the final inspection results, select the work completion checkbox of Step 3 in the IP Change popup window.
Windows Operating System
Step 1
Follow the procedure below to proceed with the Step 1 tasks.
Select the Subnet to change.
Enter the IP to change.
Click the IP Allocation Request button.
When the popup notifying IP change confirmation opens, click the Confirm button.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Caution
If you proceed with Step 1’s IP allocation request, you cannot cancel or restore the IP change.
Step 2
Follow the procedure below to proceed with the Step 2 tasks.
Connect to the target server for IP change using NAT IP for the IP change operation.
Guide
To prevent situations where communication is impossible during operation, it is recommended to connect via another Virtual Server or Bare Metal Server created in the same subnet.
Right-click the Windows Start icon, then run Windows PowerShell (Administrator).
Set the IP assigned in Step 1 on the server.
In the following example, replace 172.17.34.150 with the assigned IP.
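A sketch of this step in PowerShell; the interface alias "team0 - VLAN 9" and the address 172.17.34.150 are hypothetical placeholders for the interface on your server and the IP assigned in Step 1:

```powershell
# Remove the existing address from the interface being changed
# ("team0 - VLAN 9" is a hypothetical alias).
Remove-NetIPAddress -InterfaceAlias "team0 - VLAN 9" -Confirm:$false

# Assign the IP allocated in Step 1 (172.17.34.150 is a placeholder).
New-NetIPAddress -InterfaceAlias "team0 - VLAN 9" -IPAddress 172.17.34.150 -PrefixLength 24
```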
If you set the IP, the terminal session will be disconnected.
After the Step 2 task is completed and the task status changes to Completed, you can reconnect to the terminal.
When all tasks are completed, select the task completion checkbox of Step 2 in the IP change popup window.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Notice
If the task status of Step 2 has changed to Completed but there is an issue with terminal access, go to the All Services > Management > Support Center > Contact Us menu and submit an inquiry.
Step 3
Follow the procedure below to proceed with the Step 3 tasks.
Connect to the server targeted for IP change using NAT IP and check the communication status.
Use the following command to check again whether the previous configuration information remains and whether it has been changed correctly. If you can connect normally to the IP change target server, the changed IP is in normal communication status.
PS C:\> Get-NetIPAddress | Format-Table
Code block. Communication status check
Reference
NAT IP does not change.
Once all tasks are completed, restart the server and then perform a final check.
Reference
It is recommended to perform the final check after restarting the server.
If there are no issues with the final inspection results, select the work completion checkbox of Step 3 in the IP change popup window.
Change to an IP in Another Subnet
This section explains how to set the IP for each operating system when the new IP uses a different subnet.
Linux – CentOS/Red Hat Operating System
Step 1
Follow the procedure below to proceed with the Step 1 tasks.
Select the Subnet to change.
Enter the IP to change.
Click the IP Allocation Request button.
When the popup that notifies IP change confirmation opens, click the Confirm button.
If the task completes successfully, the Check Vlan ID and Check Default Gateway information is displayed, and the task status at the top right is shown as Completed.
Caution
If you proceed with the IP allocation request of Step 1, you cannot cancel or restore the IP change.
Step 2
Follow the procedure below to proceed with the Step 2 tasks.
Connect to the IP change target server with a NAT IP to perform the IP change operation.
Guide
To prevent situations where communication is impossible during operation, it is recommended to connect via another Virtual Server or Bare Metal Server created in the same subnet.
To add the IP to the server, add a new VLAN and set the IP.
Add VLAN: Create the interface with the VLAN ID confirmed in Step 1. In the following example, enter the assigned ID instead of 20.
IP settings: Enter the IP assigned in Step 1. In the following example, replace 192.168.0.10/24 with the assigned IP.
Default Gateway setting: Enter the Default Gateway IP assigned in Step 1. In the following example, replace 192.168.0.1 with the assigned Default Gateway IP.
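The sub-steps above can be sketched with nmcli, following the naming convention used earlier in this guide; bond-srv, VLAN ID 20, 192.168.0.10/24, and 192.168.0.1 are placeholders for the values confirmed in Step 1:

```shell
# Placeholders - replace with the values confirmed in Step 1.
BOND_NAME="bond-srv"
VLAN_ID="20"

# Add VLAN: create the interface with the VLAN ID confirmed in Step 1.
nmcli con add type vlan ifname "${BOND_NAME}.${VLAN_ID}" con-name "Vlan ${BOND_NAME}.${VLAN_ID}" id ${VLAN_ID} dev ${BOND_NAME}

# IP settings: assign the IP allocated in Step 1.
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.addresses 192.168.0.10/24
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.method manual

# Default Gateway setting: the terminal session disconnects after this
# takes effect.
nmcli con mod "Vlan ${BOND_NAME}.${VLAN_ID}" ipv4.gateway 192.168.0.1
nmcli con up "Vlan ${BOND_NAME}.${VLAN_ID}"
```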
If you set the Default Gateway on a new VLAN, the terminal session will be disconnected.
After the Step 2 task is completed and the task status changes to Completed, you can reconnect to the terminal.
When all tasks are completed, select the task completion checkbox of Step 2 in the IP Change popup window.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Guide
If the task status of Step 2 has changed to Completed but there is still an issue with terminal access, go to the All Services > Management > Support Center > Contact Us menu and submit an inquiry.
Step 3
Follow the procedure below to proceed with the Step 3 tasks.
Connect to the target server for IP change using NAT IP.
After checking the Default Gateway IP of the existing (pre-change) interface, delete it.
In the following example, enter the verified IP instead of 192.168.10.1.
# ip route del default via 192.168.10.1
Code block. Delete Default Gateway IP of existing interface
Connect to the IP change target server with a NAT IP and check the communication status.
Use the following command to check again whether the previous configuration information remains and whether it has been changed correctly. If you can connect normally to the IP change target server, the changed IP is in normal communication status.
# netstat -nr
# bash /usr/local/bin/ip.sh
Code block. Communication status check
Reference
NAT IP does not change.
After checking the VLAN information of the existing IP, delete it from the server.
In the following example, replace 30 with the ID you verified.
# nmcli con delete "Vlan bond-srv.30"
Code block. Delete Vlan information of existing IP
Once all tasks are completed, restart the server and then perform a final check.
Reference
It is recommended to perform the final check after restarting the server.
If there is no issue with the final inspection results, select the work completion checkbox of Step 3 in the IP Change popup.
Linux – Ubuntu Operating System
Step 1
Follow the procedure below to proceed with the Step 1 task.
Please select the Subnet to change.
Enter the IP to change.
Click the IP Allocation Request button.
When the popup notifying IP change confirmation opens, click the Confirm button.
If the task completes successfully, the Check Vlan ID and Check Default Gateway information is displayed, and the task status at the top right is shown as Completed.
Caution
Step 1’s IP allocation request cannot be cancelled or restored once processed.
Step 2
Follow the procedure below to proceed with the Step 2 task.
Connect to the IP change target server using a NAT IP for the IP change operation.
Guide
To prevent situations where communication is impossible during operation, it is recommended to connect via another Virtual Server or Bare Metal Server created in the same subnet.
To add the IP to be changed to the server, add a new VLAN and set the IP and Default Gateway.
In the following example, the new configuration is added below the settings verified in Step 1.
In the following example, enter the assigned Vlan ID and IP in place of the ID and IP placeholders.
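The example configuration itself is not shown here; a minimal netplan sketch of the kind of change described (all names and addresses below are placeholder assumptions — the parent interface, Vlan ID, IP, and Default Gateway must be replaced with the values verified in Step 1) might look like:

```yaml
# Hypothetical /etc/netplan/01-vlan.yaml fragment; every value below is a
# placeholder and must be replaced with the values assigned in Step 1.
network:
  version: 2
  vlans:
    bond-srv.30:              # parent interface + Vlan ID (placeholders)
      id: 30                  # Vlan ID verified in Step 1
      link: bond-srv          # parent interface; check with `ip link`
      addresses:
        - 192.168.20.10/24    # IP assigned in Step 1
      routes:
        - to: default
          via: 192.168.20.1   # Default Gateway assigned in Step 1
```

After saving the file, the netplan apply command described next loads this configuration.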
Use the netplan apply command to apply the changes to the system.
[root@localhost ~]# netplan apply
Code block. Netplan apply execution
Notice
If you set a new Default Gateway, the terminal session will be disconnected.
After the Step 2 task is completed and the task status changes to Completed, you can reconnect to the terminal.
When all tasks are completed, select the task completion checkbox of Step 2 in the IP Change popup window.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Notice
If the Step 2 task status has changed to Completed but you still cannot access the terminal, go to the All Services > Management > Support Center > Contact menu and submit an inquiry.
Step 3
Follow the procedure below to proceed with the Step 3 task.
Connect to the target server for IP change using NAT IP.
After checking the Default Gateway IP of the existing (pre-change) interface, delete it.
In the following example, the row marked Delete this line is the part to delete.
Code block. Delete Default Gateway IP of existing interface
Connect to the IP change target server using NAT IP and check the communication status.
Use the following command to check again whether the previous configuration information remains and whether it has been changed correctly. If you can connect normally to the IP change target server, the changed IP is in normal communication state.
# netstat -nr
# bash /usr/local/bin/ip.sh
Code block. Communication status check
Reference
NAT IP does not change.
Delete the existing IP.
In the following example, the row marked Delete this line is the part to delete.
When all tasks are completed, restart the server and then conduct a final check.
Reference
It is recommended to perform the final check after restarting the server.
If there are no issues in the final inspection results, select the work completion checkbox of Step 3 in the IP Change popup window.
Windows Operating System
Step 1
Follow the procedure below to proceed with the Step 1 task.
Select the Subnet to change.
Enter the IP to change.
Click the IP Allocation Request button.
When the popup that notifies IP change confirmation opens, click the Confirm button.
If the task completes successfully, the Check Vlan ID and Check Default Gateway information is displayed, and the task status at the top right is shown as Completed.
Caution
If you proceed with the IP allocation request of Step 1, you cannot cancel or revert the IP change.
Step 2
Follow the procedure below to proceed with the Step 2 task.
Connect to the IP change target server using NAT IP for the IP change operation.
Guide
To prevent situations where communication is impossible during operation, it is recommended to connect via another Virtual Server or Bare Metal Server created in the same subnet.
Right-click the Windows Start icon, then run Windows PowerShell (Administrator).
Add a VLAN and set the IP and default gateway.
Add VLAN: Create the interface for the Vlan ID identified in Step 1. In the following example, replace 20 with the assigned ID.
IP setting: Enter the IP assigned in Step 1. In the following example, replace 46 with the ifindex confirmed by Get-NetAdapter, and replace 192.168.0.10 with the assigned IP.
Default gateway setting: Enter the assigned Default gateway IP from Step 1. In the following example, replace 192.168.0.1 with the assigned Default gateway IP.
If you set a new Default Gateway, the terminal session will be disconnected.
After the Step 2 task is completed and the task status changes to Completed, you can reconnect to the terminal.
When all tasks are completed, select the task completion checkbox of Step 2 in the IP Change popup window.
If the task completes successfully, the task status in the upper right corner will be displayed as Completed.
Guide
If the Step 2 task status has changed to Completed but you still cannot access the terminal, go to the All Services > Management > Support Center > Contact menu and submit an inquiry.
Step 3
Follow the procedure below to proceed with the Step 3 task.
Connect to the IP change target server using NAT IP.
Check the existing Default Gateway IP for the interface index (ifindex).
If you delete the existing Default Gateway, the terminal session will be disconnected.
After the task is completed and the task status changes to Completed, you can reconnect to the terminal.
Check the communication status by connecting to the IP change target server with a NAT IP.
Use the following command to check again whether the previous configuration information remains and whether it has been changed correctly. If you can connect normally to the target server whose IP was changed, the changed IP is in normal communication state.
Once all tasks are completed, restart the server and then perform a final check.
Reference
It is recommended to perform the final check after restarting the server.
If there are no issues in the final inspection results, select the work completion checkbox of Step 3 in the IP Change popup window.
2.4.2.1 - Installing ServiceWatch Agent
Users can install the ServiceWatch Agent on Bare Metal Server to collect custom metrics and logs.
Note
Custom metrics/logs collection via ServiceWatch Agent is currently available only on Samsung Cloud Platform For Enterprise. It will be provided in other offerings in the future.
Caution
Metrics collection via ServiceWatch Agent is classified as custom metrics and incurs charges, unlike default collected metrics. Therefore, it is recommended to remove or disable unnecessary metric collection settings.
ServiceWatch Agent
The agents that need to be installed on Bare Metal Server for ServiceWatch’s custom metrics and log collection can be broadly divided into two types: Prometheus Exporter and Open Telemetry Collector.
Category
Description
Prometheus Exporter
Provides metrics of specific applications or services in a format that Prometheus can scrape
For collecting server OS metrics, Node Exporter for Linux servers and Windows Exporter for Windows servers can be used depending on the OS type.
Open Telemetry Collector
Serves as a centralized collector that collects telemetry data such as metrics and logs from distributed systems, processes them (filtering, sampling, etc.), and sends them to multiple backends (e.g., Prometheus, Jaeger, Elasticsearch, etc.)
Sends data to ServiceWatch Gateway to enable ServiceWatch to collect metrics and log data.
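To make the scrape format concrete: a Prometheus exporter serves plain-text metric lines over HTTP, which a collector then parses. The sketch below is illustrative only — the metric names are invented for the example, not actual Node Exporter output:

```python
# Minimal parser for the Prometheus text exposition format that an
# exporter serves on its /metrics endpoint. Metric names below are
# illustrative, not actual Node Exporter output.
import re

SAMPLE = """\
# HELP demo_cpu_seconds_total Illustrative counter.
# TYPE demo_cpu_seconds_total counter
demo_cpu_seconds_total{mode="user"} 123.5
demo_cpu_seconds_total{mode="system"} 45.25
"""

LINE_RE = re.compile(r'^(\w+)(?:\{([^}]*)\})?\s+([0-9.eE+-]+)$')

def parse_metrics(text):
    """Return a list of (name, labels_dict, value) tuples."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):  # skip HELP/TYPE comments
            continue
        m = LINE_RE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = {}
        if raw_labels:
            for pair in raw_labels.split(','):
                k, v = pair.split('=', 1)
                labels[k] = v.strip('"')
        samples.append((name, labels, float(value)))
    return samples

print(parse_metrics(SAMPLE))
```

Each parsed sample carries the metric name, its labels, and a numeric value — the shape of data the collector forwards to the ServiceWatch Gateway.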
The Samsung Cloud Platform Console provides the SCP RHEL Repository to support user environments where external access is restricted, such as VPC Private Subnets.
By using the SCP RHEL Repository, you can install and download the same packages as the official RHEL Repository.
Guide
If the user created RHEL before August 2025 through the Samsung Cloud Platform Console, they must modify the RHEL Repository settings.
Since the SCP RHEL Repository synchronizes with each Region Local Repository according to an internal schedule, it is recommended to switch to an external public mirror site to apply the latest patches quickly.
Samsung Cloud Platform provides the latest repository for the specified major version.
RHEL Repository Configuration Guide
When using RHEL, you can configure the SCP RHEL Repository to install and download the same packages as the official RHEL Repository.
To set up the RHEL repository, follow these steps.
On the Virtual Server, as the OS root user, use the cat command to check the /etc/yum.repos.d/scp.rhel8.repo or /etc/yum.repos.d/scp.rhel9.repo configuration.
cat /etc/yum.repos.d/scp.rhel8.repo
Code block. Check repo configuration (RHEL8)
cat /etc/yum.repos.d/scp.rhel9.repo
Code block. Check repo configuration (RHEL9)
When checking the configuration file, the following result is displayed.
You can now create and use up to 10 local disk partitions.
2025.07.01
FEATURE New Features and OS Images Added
You can release multiple resources simultaneously from the Bare Metal Server list.
You can change the IP of a general Subnet.
OS Images have been added.
RHEL 8.10, Ubuntu 24.04
2025.02.27
FEATURE Placement Group Feature, OS Images, and Server Types Added
Bare Metal Server features added
Distributes servers belonging to the same Placement Group across different racks.
OS Images added (RHEL 9.4, Rocky Linux 8.6, Rocky Linux 9.4)
3rd generation (s3/h3) server types based on Intel 4th generation (Sapphire Rapids) Processor added. For details, please refer to Bare Metal Server Server Types.
Samsung Cloud Platform common feature changes
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW Bare Metal Server Service Official Version Release
The Bare Metal Server service, which allows customers to use dedicated physical servers without virtualization, has been officially released.
2.5 - Multi-node GPU Cluster
2.5.1 - Overview
Service Overview
Multi-node GPU Cluster is a service that provides physical GPU servers without virtualization for large-scale high-performance AI calculations. It can cluster multiple GPUs using two or more bare metal servers with GPUs, and can be used conveniently with Samsung Cloud Platform’s high-performance storage and networking services.
Provided Features
Multi-node GPU Cluster provides the following functions.
Auto Provisioning and Management: Through the web-based Console, you can easily use the standard GPU Bare Metal model server with 8 GPU cards, from provisioning to resource and cost management.
Network Connection: Two or more Bare Metal Servers can be clustered through high-speed interconnects to process multiple GPUs, and by configuring the GPU Direct RDMA (Remote Direct Memory Access) environment, direct data IO between GPU memories is possible, enabling high-speed AI/Machine Learning calculations.
Storage Connection: Provides various attachable storages in addition to the OS disk. High-performance SSD NAS File Storage directly connected over a high-speed network, as well as Block Storage and Object Storage, can also be used in conjunction.
Network Setting Management: The server's Subnet/IP, which is set at creation, can be easily changed later. A management function is provided so that NAT IP can be used or released as needed.
Monitoring: You can check the monitoring information of computing resources such as CPU, GPU, Memory, Disk, etc. through Cloud Monitoring. To use the Cloud Monitoring service for Multi-node GPU Cluster, you need to install the Agent. Please install the Agent for stable service use. For more information, please refer to Multi-node GPU Cluster Monitoring Metrics.
Component
Multi-node GPU Cluster provides GPU as a Bare Metal Server type with standard images and server types, and NVSwitch and NVLink are provided.
GPU(H100)
GPU (Graphics Processing Unit) is specialized in parallel computation that can process large amounts of data quickly, enabling large-scale parallel processing in fields such as artificial intelligence (AI) and data analysis.
The following are the specifications of the GPU Type provided by the Multi-node GPU Cluster service.
Classification
H100 Type
Product Provisioning Method
Bare Metal
GPU Architecture
NVIDIA Hopper
GPU Memory
80GB
GPU Transistors
80 billion (TSMC 4N)
GPU Tensor Performance(based on FP16)
989.4 TFLOPs, 1,978.9 TFLOPs*
GPU Memory Bandwidth
3,352 GB/sec HBM3
GPU CUDA Cores
16,896 Cores
GPU Tensor Cores
528(4th Generation)
NVLink performance
NVLink 4
Total NVLink bandwidth
900 GB/s
NVLink Signaling Rate
25 Gbps (x18)
NVSwitch performance
NVSwitch 3
NVSwitch GPU bandwidth
900 GB/s
Total NVSwitch Aggregate Bandwidth
7.2TB/s
* With Sparsity
Table. GPU Type Specifications
OS and GPU Driver Version
The operating systems (OS) supported by Multi-node GPU Cluster are as follows.
OS
OS version
GPU driver version
Ubuntu
22.04
535.86.10, 535.183.06
Table. Multi-node GPU Cluster OS and GPU Driver Version
Server Type
The server types provided by Multi-node GPU Cluster are as follows. For a detailed description of the server types provided by Multi-node GPU Cluster, please refer to Multi-node GPU Cluster server type.
g2c96h8_metal
Classification
Example
Detailed Description
Server Generation
g2
Provided server generation
g2: g means GPU server, and 2 means generation
CPU
c96
Number of Cores
c96: Assigned Core is a physical core
GPU
h8
GPU type and quantity
h8: h means GPU type, and 8 means GPU quantity
Table. Multi-node GPU Cluster server type format
Preceding Service
The following services must be configured in advance before creating this service. For details, refer to the guide provided for each service and prepare them beforehand.
VPC
A service that provides an independent virtual network in a cloud environment
Fig. Multi-node GPU Cluster Pre-service
2.5.1.1 - Server Type
Multi-node GPU Cluster Server Type
Multi-node GPU Cluster is divided based on the provided GPU Type, and the GPU used in the Multi-node GPU Cluster is determined by the server type selected when creating a GPU Node. Please select the server type according to the specifications of the application you want to run in the Multi-node GPU Cluster.
The server types supported by Multi-node GPU Cluster are in the following format:
g2c96h8_metal
Classification
Example
Detailed Description
Server Generation
g2
Provided server generation
g2
g means GPU server specification
2 means generation
CPU
c96
Number of cores
c96: Assigned cores are physical cores
GPU
h8
GPU type and quantity
h8: h means GPU type, and 8 means GPU quantity
Table. Multi-node GPU Cluster server type format
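The format in the table above can be decomposed mechanically. Below is a small parsing sketch (a hypothetical helper for illustration, not part of any official Samsung Cloud Platform SDK):

```python
# Parse a Multi-node GPU Cluster server-type string such as "g2c96h8_metal"
# into its generation, core count, and GPU count. Hypothetical helper,
# not an official Samsung Cloud Platform API.
import re

SERVER_TYPE_RE = re.compile(r'^g(\d+)c(\d+)h(\d+)_metal$')

def parse_server_type(name):
    """Return {'generation': int, 'cores': int, 'gpus': int} or raise ValueError."""
    m = SERVER_TYPE_RE.match(name)
    if not m:
        raise ValueError(f"not a recognized server type: {name!r}")
    gen, cores, gpus = map(int, m.groups())
    return {"generation": gen, "cores": cores, "gpus": gpus}

print(parse_server_type("g2c96h8_metal"))
# {'generation': 2, 'cores': 96, 'gpus': 8}
```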
g2 Server Type
The g2 server type is a GPU Bare Metal Server using NVIDIA H100 Tensor Core GPU, suitable for large-scale high-performance AI computing.
Provides up to 8 NVIDIA H100 Tensor Core GPUs
Each GPU has 16,896 CUDA cores and 528 Tensor cores
Supports up to 96 vCPUs and 1,920 GB of memory
Supports up to 100 Gbps networking speed
900 GB/s GPU P2P communication via NVIDIA NVSwitch
Server Type
GPU
GPU Memory
CPU(Core)
Memory
Disk
GPU P2P
g2c96h8_metal
H100
640 GB
96 vCore
2 TB
SSD (OS) 960 GB × 2, NVMe SSD 3.84 TB × 4
900GB/s NVSwitch
Table. Multi-node GPU Cluster server type specification > H100 server type
2.5.1.2 - Monitoring Metrics
Multi-node GPU Cluster monitoring metrics
The following table shows the monitoring metrics of Multi-node GPU Cluster that can be checked through Cloud Monitoring.
Guide
To view monitoring metrics for Multi-node GPU Cluster, the user must install the Agent by following the guide. Please install the Agent for stable service use. For the Agent installation method and detailed Cloud Monitoring usage, please refer to the Cloud Monitoring guide.
Cluster GPU Count
Total GPU Count of nodes in the cluster: calculated as the sum of the GPU Count of each node in the same GPU Cluster
cnt
Cluster GPU Count In Use
Number of GPUs used by jobs in the cluster
Number of GPUs in use by processes in the cluster: the sum of the number of GPUs held by processes, obtained by parsing the ‘Processes:’ section at the bottom of the nvidia-smi output of the nodes in the same GPU Cluster
cnt
Cluster GPU Usage
Cluster internal GPU Utilization AVG
Average GPU utilization of nodes in the cluster: calculated as the average of each node’s GPU utilization in the same GPU Cluster
%
Cluster GPU Memory Usage [Avg]
Cluster GPU Memory Utilization AVG
Average Memory utilization of nodes in the cluster: calculated as the average of each node’s Memory utilization in the same GPU Cluster
%
Core Usage [IO Wait]
The ratio of CPU time spent in waiting state (disk waiting)
%
Core Usage [System]
The ratio of CPU time spent in kernel space
%
Core Usage [User]
The ratio of CPU time spent in user space
%
CPU Cores
The number of CPU cores on the host. The maximum value of the unnormalized usage ratio is 100% multiplied by the number of cores, while the normalized (per-core) ratio has a maximum of 100%.
cnt
CPU Usage [Active]
Percentage of CPU time used excluding Idle and IOWait states (if all 4 cores are used at 100%: 400%)
%
CPU Usage [Idle]
The ratio of CPU time spent in idle state.
%
CPU Usage [IO Wait]
The percentage of CPU time spent in waiting state (disk waiting)
%
CPU Usage [System]
Percentage of CPU time used by the kernel (in case of using all 4 cores 100%: 400%)
%
CPU Usage [User]
Percentage of CPU time used in the user area. (In case of using all 4 cores 100%, 400%)
%
CPU Usage/Core [Active]
Percentage of CPU time used excluding Idle and IOWait states (normalized value by number of cores, 100% if all 4 cores are used at 100%)
%
CPU Usage/Core [Idle]
The ratio of CPU time spent in idle state.
%
CPU Usage/Core [IO Wait]
The ratio of CPU time spent in waiting state (disk waiting)
%
CPU Usage/Core [System]
Percentage of CPU time used by the kernel (normalized value by number of cores, 100% if all 4 cores are used at 100%)
%
CPU Usage/Core [User]
Percentage of CPU time used in the user area. (normalized value by number of cores, 100% if all 4 cores are used at 100%)
%
Disk CPU Usage [IO Request]
The ratio of CPU time spent executing input/output requests for the device (device bandwidth utilization). If this value is close to 100%, the device is in a saturated state.
%
Disk Queue Size [Avg]
The average queue length of requests executed for the device.
num
Disk Read Bytes
The number of bytes read from the device per second.
bytes
Disk Read Bytes [Delta Avg]
Average of system.diskio.read.bytes_delta for each disk
bytes
Disk Read Bytes [Delta Max]
Maximum of system.diskio.read.bytes_delta for individual disks
bytes
Disk Read Bytes [Delta Sum]
Sum of system.diskio.read.bytes_delta of individual disks
bytes
Disk Read Bytes [Delta]
Delta value of system.diskio.read.bytes for each disk
bytes
Disk Read Bytes [Success]
The total number of bytes read successfully. On Linux, a sector size of 512 is assumed, so this value is the number of sectors read multiplied by 512
bytes
Disk Read Requests
The number of read requests for the disk device in 1 second
cnt
Disk Read Requests [Delta Avg]
Average of system.diskio.read.count_delta for each disk
cnt
Disk Read Requests [Delta Max]
Maximum of system.diskio.read.count_delta for individual disks
cnt
Disk Read Requests [Delta Min]
Minimum of system.diskio.read.count_delta for each disk
cnt
Disk Read Requests [Delta Sum]
Sum of system.diskio.read.count_delta of individual disks
cnt
Disk Read Requests [Success Delta]
Individual disk’s system.diskio.read.count delta
cnt
Disk Read Requests [Success]
Total number of successful read completions
cnt
Disk Request Size [Avg]
The average size of requests executed for the device (unit: sector)
num
Disk Service Time [Avg]
The average service time (in milliseconds) for input requests executed on the device.
ms
Disk Wait Time [Avg]
The average time (in milliseconds) spent by requests issued to the device.
ms
Disk Wait Time [Read]
Disk Average Wait Time
ms
Disk Wait Time [Write]
Disk Average Wait Time
ms
Disk Write Bytes [Delta Avg]
Average of system.diskio.write.bytes_delta for each disk
bytes
Disk Write Bytes [Delta Max]
Maximum of system.diskio.write.bytes_delta for each disk
bytes
Disk Write Bytes [Delta Sum]
Sum of system.diskio.write.bytes_delta of individual disks
bytes
Disk Write Bytes [Delta]
Delta value of system.diskio.write.bytes for each disk
bytes
Disk Write Bytes [Success]
The total number of bytes written successfully. On Linux, a sector size of 512 is assumed, so this value is the number of sectors written multiplied by 512
bytes
Disk Write Requests
The number of write requests to the disk device for 1 second
cnt
Disk Write Requests [Delta Avg]
Average of system.diskio.write.count_delta of individual disks
cnt
Disk Write Requests [Delta Max]
Maximum of system.diskio.write.count_delta for each disk
cnt
Disk Write Requests [Delta Min]
Minimum of system.diskio.write.count_delta for individual disks
cnt
Disk Write Requests [Delta Sum]
Sum of system.diskio.write.count_delta of individual disks
cnt
Disk Write Requests [Success Delta]
Individual disk’s system.diskio.write.count delta
cnt
Disk Write Requests [Success]
Total number of writes completed successfully
cnt
Disk Writes Bytes
The number of bytes written to the device per second.
bytes
Filesystem Hang Check
filesystem(local/NFS) hang check (normal:1, abnormal:0)
status
Filesystem Nodes
The total number of file nodes in the file system.
cnt
Filesystem Nodes [Free]
The total number of available file nodes in the file system.
cnt
Filesystem Size [Available]
The disk space (in bytes) available to non-privileged users.
bytes
Filesystem Size [Free]
Available disk space (bytes)
bytes
Filesystem Size [Total]
Total Disk Space (bytes)
bytes
Filesystem Usage
Used Disk Space Percentage
%
Filesystem Usage [Avg]
Average of individual filesystem.used.pct
%
Filesystem Usage [Inode]
Inode usage rate
%
Filesystem Usage [Max]
Maximum value among individual filesystem usage percentages
%
Filesystem Usage [Min]
Minimum of individual filesystem used percentages
%
Filesystem Usage [Total]
-
%
Filesystem Used
Used Disk Space (bytes)
bytes
Filesystem Used [Inode]
Inode usage
bytes
Memory Free
The total amount of available memory (bytes). It does not include memory used by system cache and buffers (see system.memory.actual.free).
bytes
Memory Free [Actual]
Actual available memory (bytes). The calculation method varies depending on the OS, and in Linux, it is either MemAvailable from /proc/meminfo or calculated from available memory, cache, and buffer if meminfo is not available. On OSX, it is the sum of available memory and inactive memory. On Windows, it is the same as system.memory.free.
bytes
Memory Free [Swap]
Available swap memory.
bytes
Memory Total
Total Memory
bytes
Memory Total [Swap]
Total swap memory.
bytes
Memory Usage
Used memory percentage
((Memory Total - Memory Free) / Memory Total) * 100
Memory Free: Current available free memory capacity
%
Memory Usage [Actual]
The percentage of memory actually used
((Memory Total - Memory Available) / Memory Total) * 100 or ((Memory Total - (Memory Free + Buffers + Cached)) / Memory Total) * 100
Memory Free: The capacity of free memory currently available
Buffers: The capacity of memory used by buffers
Cached: The capacity of memory used by page cache
%
Memory Usage [Cache Swap]
Cache swap usage rate
%
Memory Usage [Swap]
Used swap memory percentage
%
Memory Used
Used Memory
bytes
Memory Used [Actual]
Actual used memory (bytes), calculated as total memory minus available memory. The available memory is calculated differently depending on the OS (see system.memory.actual.free)
bytes
Memory Used [Swap]
Used swap memory.
bytes
Collisions
Network Collisions
cnt
Network In Bytes
Received byte count
bytes
Network In Bytes [Delta Avg]
Average of system.network.in.bytes_delta for each network
bytes
Network In Bytes [Delta Max]
Maximum of system.network.in.bytes_delta for each network
bytes
Network In Bytes [Delta Min]
Minimum of system.network.in.bytes_delta for each network
bytes
Network In Bytes [Delta Sum]
Sum of each network’s system.network.in.bytes_delta
bytes
Network In Bytes [Delta]
Received byte count delta
bytes
Network In Dropped
The number of packets dropped among incoming packets
cnt
Network In Errors
Number of errors during reception
cnt
Network In Packets
Received packet count
cnt
Network In Packets [Delta Avg]
Average of system.network.in.packets_delta for each network
cnt
Network In Packets [Delta Max]
Individual networks’ system.network.in.packets_delta maximum
cnt
Network In Packets [Delta Min]
Minimum of system.network.in.packets_delta for each network
cnt
Network In Packets [Delta Sum]
Sum of system.network.in.packets_delta of individual networks
cnt
Network In Packets [Delta]
Received packet count delta
cnt
Network Out Bytes
Transmitted byte count
bytes
Network Out Bytes [Delta Avg]
Average of system.network.out.bytes_delta for each network
bytes
Network Out Bytes [Delta Max]
Individual networks’ system.network.out.bytes_delta maximum
bytes
Network Out Bytes [Delta Min]
Minimum of system.network.out.bytes_delta for each network
bytes
Network Out Bytes [Delta Sum]
The sum of system.network.out.bytes_delta of individual networks
bytes
Network Out Bytes [Delta]
Transmitted byte count delta
bytes
Network Out Dropped
Number of packets dropped among outgoing packets. This value is not reported by the operating system, so it is always 0 in Darwin and BSD
cnt
Network Out Errors
Number of errors during transmission
cnt
Network Out Packets
Number of transmitted packets
cnt
Network Out Packets [Delta Avg]
Average of system.network.out.packets_delta for each network
cnt
Network Out Packets [Delta Max]
Maximum of system.network.out.packets_delta for each network
cnt
Network Out Packets [Delta Sum]
Sum of system.network.out.packets_delta of individual networks
cnt
Network Out Packets [Delta]
Number of transmitted packets delta
cnt
Open Connections [TCP]
All open TCP connections
cnt
Open Connections [UDP]
All open UDP connections
cnt
Port Usage
Port usage available for connection
%
SYN Sent Sockets
Number of sockets in SYN_SENT state (when connecting from local to remote)
cnt
Kernel PID Max
kernel.pid_max value
cnt
Kernel Thread Max
kernel threads-max value
cnt
Process CPU Usage
Percentage of CPU time consumed by the process after the last update. This value is similar to the %CPU value of the process displayed by the top command on Unix systems
%
Process CPU Usage/Core
Percentage of CPU time used by the process since the last event, normalized by the number of cores, with a value between 0~100%
%
Process Memory Usage
The percentage of main memory (RAM) occupied by the process
%
Process Memory Used
Resident Set size. The amount of memory a process occupies in RAM. In Windows, it is the current working set size
bytes
Process PID
Process PID
PID
Process PPID
Parent process’s pid
PID
Processes [Dead]
Number of dead processes
cnt
Processes [Idle]
idle process count
cnt
Processes [Running]
Number of running processes
cnt
Processes [Sleeping]
sleeping processes count
cnt
Processes [Stopped]
Number of stopped processes
cnt
Processes [Total]
Total number of processes
cnt
Processes [Unknown]
Number of processes whose state could not be retrieved or is unknown
cnt
Processes [Zombie]
Number of zombie processes
cnt
Running Process Usage
process usage rate
%
Running Processes
Number of running processes
cnt
Running Thread Usage
Thread usage rate
%
Running Threads
number of threads running in running processes
cnt
Instance Status
Instance status
state
Context Switches
context switch count (per second)
cnt
Load/Core [1 min]
Load for the last 1 minute divided by the number of cores
cnt
Load/Core [15 min]
The value of load divided by the number of cores for the last 15 minutes
cnt
Load/Core [5 min]
The value of load divided by the number of cores over the last 5 minutes
cnt
Multipaths [Active]
External storage connection path status = active count
cnt
Multipaths [Failed]
External storage connection path status = failed count
cnt
Multipaths [Faulty]
External storage connection path status = faulty count
cnt
NTP Offset
last sample’s measured offset (time difference between NTP server and local environment)
num
Run Queue Length
Execution Waiting Queue Length
num
Uptime
OS uptime (milliseconds)
ms
Context Switches
CPU context switch count (per second)
cnt
Disk Read Bytes [Sec]
Number of bytes read from the Windows logical disk per second
cnt
Disk Read Time [Avg]
Data Read Average Time (sec)
sec
Disk Transfer Time [Avg]
Disk average wait time
sec
Disk Usage
Disk Usage Rate
%
Disk Write Bytes [Sec]
Number of bytes written to the Windows logical disk per second
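The Memory Usage formulas listed in the table above can be checked numerically. The sketch below applies them to made-up byte values (the numbers are examples only, not measurements):

```python
# Compute Memory Usage and Memory Usage [Actual] using the formulas from
# the metrics table above. The byte values are made-up example numbers.
def memory_usage_pct(total, free):
    # ((Memory Total - Memory Free) / Memory Total) * 100
    return (total - free) / total * 100

def memory_usage_actual_pct(total, free, buffers, cached):
    # ((Memory Total - (Memory Free + Buffers + Cached)) / Memory Total) * 100
    return (total - (free + buffers + cached)) / total * 100

total   = 16 * 1024**3   # 16 GiB of total memory
free    =  4 * 1024**3   # free memory
buffers =  1 * 1024**3   # memory used by buffers
cached  =  3 * 1024**3   # memory used by page cache

print(round(memory_usage_pct(total, free), 1))                          # 75.0
print(round(memory_usage_actual_pct(total, free, buffers, cached), 1))  # 50.0
```

The [Actual] variant reports lower usage because buffer and cache memory is reclaimable and is counted as available.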
The user can enter the required information for the Multi-node GPU Cluster service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Multi-node GPU Cluster Getting Started
You can create and use a Multi-node GPU Cluster service in the Samsung Cloud Platform Console.
This service consists of GPU Node and Cluster Fabric services.
GPU Node Creation
To create a Multi-node GPU Cluster, follow the steps below.
Click the All Services > Compute > Multi-node GPU Cluster menu. You will be taken to the Multi-node GPU Cluster Service Home page.
Click the GPU Node creation button on the Service Home page. You will be taken to the GPU Node creation page.
On the GPU Node creation page, enter the information required to create the service and select detailed options.
Select the required information in the Image and Version Selection area.
Category
Required
Detailed description
Image
Required
Select provided image type
Ubuntu
Image Version
Required
Select version of the chosen image
Provides a list of versions of the provided server images
Table. GPU Node image and version selection items
In the Enter Service Information area, enter or select the required information.
Category
Required
Detailed description
Number of servers
Required
Number of GPU Node servers to create simultaneously
Only numbers can be entered, and the minimum number of servers to create is 2.
Only during the initial setup can you create 2 or more, and expansion is possible one at a time.
Service Type > Server Type
Required
GPU Node Server Type
Select desired CPU, Memory, GPU, Disk specifications
In the Required Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Administrator Account
Required
Set the administrator account and password to be used when connecting to the server
Ubuntu OS is provided fixed as root
Server Name Prefix
Required
Enter a Prefix to distinguish each GPU Node generated when the number of selected servers is 2 or more
Automatically generated as user input value (prefix) + ‘-###’ format
Start with a lowercase English letter, and use lowercase letters, numbers, and special characters (-) within 3 to 11 characters
Must not end with a special character (-)
Network Settings
Required
Set the network where the GPU Node will be installed
VPC Name:Select a pre-created VPC
General Subnet Name: Select a pre-created general Subnet
IP can be set to auto-generate or user input, and if input is selected, the user enters the IP directly
NAT: Can be used only when there is one server and the VPC has an Internet Gateway attached. Check Use to enable NAT IP selection. (At initial creation, at least 2 servers are required, so configure NAT afterwards on the resource detail page.)
NAT IP: Select NAT IP
If there is no NAT IP to select, click the Create New button to generate a Public IP
Click the Refresh button to view and select the created Public IP
Creating a Public IP incurs charges according to the Public IP pricing policy
Table. GPU Node required information entry items
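The Server Name Prefix rules in the table above can be expressed as a simple validation sketch (a hypothetical helper; the zero-padded ‘-###’ suffix format is an assumption based on the description):

```python
# Validate a GPU Node server-name prefix against the rules stated above:
# starts with a lowercase letter; lowercase letters, digits, and '-' only;
# 3 to 11 characters; must not end with '-'. Hypothetical helper, not an
# official Samsung Cloud Platform API.
import re

PREFIX_RE = re.compile(r'^[a-z][a-z0-9-]{1,9}[a-z0-9]$')

def is_valid_prefix(prefix):
    return bool(PREFIX_RE.match(prefix))

def node_name(prefix, index):
    """GPU Node names are generated as prefix + '-###' (padding assumed)."""
    if not is_valid_prefix(prefix):
        raise ValueError(f"invalid prefix: {prefix!r}")
    return f"{prefix}-{index:03d}"

print(is_valid_prefix("gpu-node"))   # True
print(is_valid_prefix("1node"))      # False (must start with a letter)
print(is_valid_prefix("node-"))      # False (must not end with '-')
print(node_name("gpu-node", 1))      # gpu-node-001
```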
In the Cluster Selection area, create or select a Cluster Fabric.
Category
Required
Detailed description
Cluster Fabric
Required
Set a group of GPU Node servers to which GPU Direct RDMA can be applied together
Optimal GPU performance and speed can be secured only within the same Cluster Fabric
To create a new Cluster Fabric, select New Input > Node pool, then enter the name of the Cluster Fabric to create
To add to an existing Cluster Fabric, select Existing Input > Node pool, then select the already created Cluster Fabric
Table. GPU Node Cluster Fabric selection items
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Lock
Optional
Using Lock prevents accidental actions that could terminate/start/stop the server
Init Script
Optional
Script to run when the server starts
Init Script must be selected differently depending on the image type
For Linux: Select Shell Script or cloud-init
Tag
Optional
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. GPU Node additional information input items
In the Summary panel, check the detailed information and estimated billing amount, then click the Complete button.
Once creation is complete, check the created resources on the GPU Node List page.
Caution
When the service is created, the GPU MIG/ECC settings are reset. To ensure the correct settings are applied, perform a one-time reboot after initial creation and verify that the settings have been applied before use.
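After the initial reboot, the applied settings can be verified with nvidia-smi, e.g. `nvidia-smi --query-gpu=index,ecc.mode.current,mig.mode.current --format=csv`. As a hedged sketch, the helper below parses a sample of that CSV output (values are illustrative, and the function name is an assumption) so the check logic can be shown without GPU hardware.

```shell
# Check that ECC is "Enabled" on every GPU row of the CSV output
# (sample data inlined; in practice, pipe the nvidia-smi query in).
all_ecc_enabled() {
  awk -F', ' 'NR > 1 && $2 != "Enabled" {bad = 1} END {exit bad}'
}

all_ecc_enabled <<'EOF' && echo "ECC enabled on all GPUs"
index, ecc.mode.current, mig.mode.current
0, Enabled, Disabled
1, Enabled, Disabled
EOF
```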
If Lock is used, server termination/start/stop is blocked to prevent accidental actions
To change the Lock attribute value, click the Edit button
Network
GPU Node network information
VPC name, general Subnet name, IP, IP status, NAT IP, NAT IP status
Block Storage
Block Storage information connected to the server
Volume name, disk type, capacity, status
Init Script
View the Init Script content entered when creating the server
Table. GPU Node detailed information tab items
Tag
On the Tag tab of the GPU Node List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
The Key and Value information of each tag can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of Keys and Values
Table. GPU Node Tag Tab Items
Work History
On the Work History tab of the GPU Node List page, you can view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Check the work details, work date and time, resource type, resource name, event topic, work result, and worker information
The Detailed Search button provides a detailed search function
Table. GPU Node Work History tab items
GPU Node Operation Control
If you need server control and management functions for created GPU Node resources, you can perform these tasks on the GPU Node List or GPU Node Details page.
You can start stopped GPU Node resources and stop running ones.
Starting a GPU Node
You can start a stopped GPU Node. To start the GPU Node, follow the steps below.
Click the All Services > Compute > Multi-node GPU Cluster menu. You will be taken to the Service Home page of Multi-node GPU Cluster.
Click the GPU Node menu on the Service Home page. You will be taken to the GPU Node List page.
On the GPU Node List page, select individual or multiple servers with the checkboxes, then start them via the More button at the top.
Alternatively, on the GPU Node List page, click the resource. The GPU Node Details page opens.
On the GPU Node Details page, click the Start button at the top to start the server.
Check the server status to confirm the status change is complete.
Stop GPU Node
You can stop a GPU Node that is running. To stop the GPU Node, follow the steps below.
Click the All Services > Compute > Multi-node GPU Cluster menu. You will be taken to the Service Home page of Multi-node GPU Cluster.
Click the GPU Node menu on the Service Home page. You will be taken to the GPU Node List page.
On the GPU Node List page, select individual or multiple servers with the checkboxes, then stop them via the Stop button at the top.
Alternatively, on the GPU Node List page, click the resource. You will be taken to the GPU Node Details page.
On the GPU Node Details page, click the Stop button at the top to stop the server.
Check the server status to confirm the status change is complete.
Terminating a GPU Node
You can terminate unused GPU Nodes to reduce operating costs. However, terminating the service may immediately stop running workloads, so consider the impact of service interruption carefully before proceeding with the termination.
Caution
Please note that data cannot be recovered after service termination.
To cancel the GPU Node, follow the steps below.
Click the All Services > Compute > Multi-node GPU Cluster menu. You will be taken to the Service Home page of the Multi-node GPU Cluster.
Click the Cluster Fabric menu on the Service Home page. You will be taken to the Cluster Fabric List page.
On the Cluster Fabric List page, select the resource to terminate, and click the Cancel Service button.
Resources using the same Cluster Fabric can be terminated simultaneously.
Once the termination is complete, check on the GPU Node List page whether the resources have been terminated.
Guide
The cases where GPU Node termination is not possible are as follows.
When Block Storage (BM) is connected: Please disconnect the Block Storage (BM) connection first.
If File Storage is connected: Please disconnect the File Storage first.
When Lock is set: Please change the Lock setting to unused and try again.
If the server that cannot be terminated simultaneously is included: Please re-select only the resources that can be terminated.
If the Cluster Fabric of the server you want to terminate is different: Select only resources that use the same Cluster Fabric.
Reference
If all GPU Nodes in the Cluster Fabric are deleted, the Cluster Fabric is automatically deleted.
2.5.2.1 - Cluster Fabric Management
Cluster Fabric is a service that helps manage servers (GPU Nodes) included in a GPU Cluster. Using Cluster Fabric, you can move servers between GPU Clusters in the same Node pool and optimize the performance and speed of GPUs within the same GPU Cluster.
Creating Cluster Fabric
Cluster Fabric can be created together with a GPU Node, and it cannot be created or deleted separately. When all GPU Nodes within a Cluster Fabric are terminated, the Cluster Fabric is automatically deleted. If you haven’t created a GPU Node, please create one first. For more information, refer to Creating a GPU Node.
Checking Cluster Fabric Details
Guide
Cluster Fabric can be created together with a GPU Node, and it cannot be created or deleted separately.
When all GPU Nodes within a Cluster Fabric are terminated, the Cluster Fabric is automatically deleted.
If you haven’t created a GPU Node, please create one first. For more information, refer to Creating a GPU Node.
You can check the created Cluster Fabric list and details, and move servers on the Cluster Fabric List page and Cluster Fabric Details page.
Click the All Services > Compute > Multi-node GPU Cluster menu. You will be taken to the Service Home page of the Multi-node GPU Cluster.
Click on the Cluster Fabric menu on the Service Home page. It will move to the Cluster Fabric List page.
On the Cluster Fabric List page, you can view the list of resources of the GPU Cluster created by the user.
Resource items other than required columns can be added through the Settings button.
Category
Required
Description
Resource ID
Optional
Cluster Fabric ID created by the user
Cluster Fabric Name
Required
Cluster Fabric name created by the user
Node Pool
Optional
A collection of nodes that can be bundled into the same Cluster Fabric
Number of Servers
Optional
Number of GPU Nodes
Server Type
Optional
Server type of the GPU Node
The user can check the number of cores, memory capacity, and GPU type and number of the created resource
Status
Optional
Status of the Cluster Fabric created by the user
Creation Time
Optional
Time when the Cluster Fabric was created
Table. Cluster Fabric resource list items
On the Cluster Fabric List page, click a resource to check its details. You will be taken to the Cluster Fabric Details page.
At the top of the Cluster Fabric Details page, status information and additional feature descriptions are displayed.
Category
Description
Cluster Fabric Status
Status of the Cluster Fabric created by the user
Creating: Cluster creation in progress
Active: Creation completed and available
Editing: IP change in progress
Deleting: Termination in progress
Deleted: Termination completed
Add Target Server
Function to move a server from another cluster to this cluster
Table. Cluster Fabric status information and additional features
Details
On the Details tab of the Cluster Fabric List page, you can check the details of the selected resource and bring in servers from other clusters.
Category
Description
Service
Service category
Resource Type
Service name
SRN
Unique resource ID in Samsung Cloud Platform
In Cluster Fabric, it means Cluster Fabric SRN
Resource Name
Resource name
In Cluster Fabric service, it means Cluster Fabric name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Modification Time
Time when the service information was modified
Cluster Fabric Name
Cluster Fabric name created by the user
Node Pool
A collection of nodes that can be bundled into the same Cluster Fabric
Target Server
List of GPU Nodes bound to the Cluster Fabric
Server name, server type, IP, status
Table. Cluster Fabric details tab items
Bringing in Cluster Fabric Servers
Using the Add Target Server feature on the Cluster Fabric Details page, you can bring in servers from other clusters and add them to the selected cluster.
Click the All Services > Compute > Multi-node GPU Cluster menu. You will be taken to the Service Home page of the Multi-node GPU Cluster.
Click the Cluster Fabric menu on the Service Home page. You will be taken to the Cluster Fabric List page.
On the Cluster Fabric List page, click a resource to check its details. You will be taken to the Cluster Fabric Details page.
Click the Add button on the right side of the target server on the details tab.
The target server addition popup window opens.
In Cluster Fabric, select a cluster.
The GPU Node bound to the selected cluster is retrieved, and you can select the GPU Node to bring in.
The selected GPU Node is listed at the bottom with the GPU Node name.
Click the Confirm button to complete.
Click the Cancel button to cancel the task.
Check that the added GPU Node appears in the target server list.
Terminating Cluster Fabric
When all GPU Nodes within a Cluster Fabric are terminated, the Cluster Fabric is automatically deleted. For more information, refer to Terminating a GPU Node.
2.5.2.2 - Installing ServiceWatch Agent
Users can install ServiceWatch Agent on the GPU Node of Multi-node GPU Cluster to collect custom metrics and logs.
Note
Custom metrics/logs collection through ServiceWatch Agent is currently available only in Samsung Cloud Platform For Enterprise. It is planned to be provided in other offerings in the future.
Warning
Metrics collection through ServiceWatch Agent is classified as custom metrics and charges are applied unlike default collected metrics, so it is recommended to remove or disable unnecessary metric collection settings.
ServiceWatch Agent
Two main types of agent can be installed to collect ServiceWatch custom metrics and logs on the GPU Nodes of a Multi-node GPU Cluster:
Prometheus Exporter and OpenTelemetry Collector.
Item
Description
Prometheus Exporter
Provides metrics of specific applications or services in a format that Prometheus can scrape
For OS metric collection on GPU Nodes, you can use Node Exporter for Linux servers and Windows Exporter for Windows servers depending on the OS type.
OpenTelemetry Collector
Acts as a central collector that gathers telemetry data such as metrics and logs from distributed systems, processes it (filtering, sampling, etc.), and sends it to multiple backends (e.g., Prometheus, Jaeger, Elasticsearch)
Enables ServiceWatch to collect metrics and log data by sending data to ServiceWatch Gateway.
Table. Description of Prometheus Exporter and OpenTelemetry Collector
Notice
If Kubernetes Engine is configured on the GPU Node, please check GPU metrics through the metrics provided by Kubernetes Engine.
If DCGM Exporter is installed on a GPU Node where Kubernetes Engine is configured, it may not operate normally.
Note
The ServiceWatch Agent guide for GPU metric collection on GPU Nodes can be used in the same way as for GPU Server.
For details, see GPU Server > ServiceWatch Agent.
2.5.2.3 - Multi-node GPU Cluster Service Scope and Inspection Guide
Multi-node GPU Cluster service scope
In the event of an IaaS HW level issue with the Multi-node GPU Cluster service, technical support is available through the Support Center's Contact Us. However, risks arising from changes such as OS kernel updates or application installation are the responsibility of the user, and technical support may not be possible in such cases, so be cautious when performing system updates or similar tasks.
IaaS HW level problem
HW fault event messages reported in the server's IPMI (iLO) HW monitoring console
GPU HW operation errors confirmed with the nvidia-smi command
HW error messages found during InfiniBand HCA card or InfiniBand Switch inspection
Caution
Multi-node GPU Cluster is a service sensitive to software version compatibility between Ubuntu OS, NVIDIA, and InfiniBand components, so official technical support is not available after changes such as a user's OS kernel update or application installation.
IaaS HW Inspection Guide
After applying for the Multi-node GPU Cluster service, it is recommended to check the IaaS HW level according to the inspection guide.
OS Kernel and Package Holding
Notice
If you do not want package versions to be updated automatically, it is recommended to block package updates using the apt-mark command.
It is recommended to block updates of the Linux kernel and IB-related package versions.
To hold the OS kernel and packages, follow the procedure below.
Use the following commands to check the versions of the kernel and IB-related packages.
root@bm-dev-001:~# dpkg -l | egrep -i "kernel|mlnx"
root@bm-dev-001:~# dpkg -l | egrep -i "kernel|nvidia"
root@bm-dev-001:~# dpkg -l | egrep -i "kernel|linux-image"
ii crash 7.2.8-1ubuntu1.20.04.1 amd64 kernel debugging utility, allowing gdb like syntax
ii dkms 2.8.1-5ubuntu2 all Dynamic Kernel Module Support Framework
ii dmeventd 2:1.02.167-1ubuntu1 amd64 Linux Kernel Device Mapper event daemon
ii dmsetup 2:1.02.167-1ubuntu1 amd64 Linux Kernel Device Mapper userspace library
ii iser-dkms 5.4-OFED.5.4.3.0.1.1 all DKMS support fo iser kernel modules
ii isert-dkms 5.4-OFED.5.4.3.0.1.1 all DKMS support fo isert kernel modules
ii kernel-mft-dkms 4.17.2-12 all DKMS support for kernel-mft kernel modules
ii kmod 27-1ubuntu2 amd64 tools for managing Linux kernel modules
ii knem 1.1.4.90mlnx1-OFED.5.1.2.5.0.1 amd64 userspace tools for the KNEM kernel module
ii knem-dkms 1.1.4.90mlnx1-OFED.5.1.2.5.0.1 all DKMS support for mlnx-ofed kernel modules
ii libaio1:amd64 0.3.112-5 amd64 Linux kernel AIO access library - shared library
ii libdevmapper-event1.02.1:amd64 2:1.02.167-1ubuntu1 amd64 Linux Kernel Device Mapper event support library
ii libdevmapper1.02.1:amd64 2:1.02.167-1ubuntu1 amd64 Linux Kernel Device Mapper userspace library
ii libdrm-amdgpu1:amd64 2.4.107-8ubuntu1~20.04.2 amd64 Userspace interface to amdgpu-specific kernel DRM services -- runtime
ii libdrm-common 2.4.107-8ubuntu1~20.04.2 all Userspace interface to kernel DRM services -- common files
ii libdrm-intel1:amd64 2.4.107-8ubuntu1~20.04.2 amd64 Userspace interface to intel-specific kernel DRM services -- runtime
ii libdrm-nouveau2:amd64 2.4.107-8ubuntu1~20.04.2 amd64 Userspace interface to nouveau-specific kernel DRM services -- runtime
ii libdrm-radeon1:amd64 2.4.107-8ubuntu1~20.04.2 amd64 Userspace interface to radeon-specific kernel DRM services -- runtime
ii libdrm2:amd64 2.4.107-8ubuntu1~20.04.2 amd64 Userspace interface to kernel DRM services -- runtime
ii linux-firmware 1.187.29 all Firmware for Linux kernel drivers
hi linux-generic 5.4.0.105.109 amd64 Complete Generic Linux kernel and headers
ii linux-headers-5.4.0-104 5.4.0-104.118 all Header files related to Linux kernel version 5.4.0
ii linux-headers-5.4.0-104-generic 5.4.0-104.118 amd64 Linux kernel headers for version 5.4.0 on 64 bit x86 SMP
ii linux-headers-5.4.0-105 5.4.0-105.119 all Header files related to Linux kernel version 5.4.0
ii linux-headers-5.4.0-105-generic 5.4.0-105.119 amd64 Linux kernel headers for version 5.4.0 on 64 bit x86 SMP
hi linux-headers-generic 5.4.0.105.109 amd64 Generic Linux kernel headers
ii linux-image-5.4.0-104-generic 5.4.0-104.118 amd64 Signed kernel image generic
ii linux-image-5.4.0-105-generic 5.4.0-105.119 amd64 Signed kernel image generic
hi linux-image-generic 5.4.0.105.109 amd64 Generic Linux kernel image
ii linux-libc-dev:amd64 5.4.0-105.119 amd64 Linux Kernel Headers for development
ii linux-modules-5.4.0-104-generic 5.4.0-104.118 amd64 Linux kernel extra modules for version 5.4.0 on 64 bit x86 SMP
ii linux-modules-5.4.0-105-generic 5.4.0-105.119 amd64 Linux kernel extra modules for version 5.4.0 on 64 bit x86 SMP
ii linux-modules-extra-5.4.0-104-generic 5.4.0-104.118 amd64 Linux kernel extra modules for version 5.4.0 on 64 bit x86 SMP
ii linux-modules-extra-5.4.0-105-generic 5.4.0-105.119 amd64 Linux kernel extra modules for version 5.4.0 on 64 bit x86 SMP
ii mlnx-ofed-kernel-dkms 5.4-OFED.5.4.3.0.3.1 all DKMS support for mlnx-ofed kernel modules
ii mlnx-ofed-kernel-utils 5.4-OFED.5.4.3.0.3.1 amd64 Userspace tools to restart and tune mlnx-ofed kernel modules
ii mlnx-tools 5.2.0-0.54303 amd64 Userspace tools to restart and tune MLNX_OFED kernel modules
ii nvidia-kernel-common-470 470.103.01-0ubuntu0.20.04.1 amd64 Shared files used with the kernel module
ii nvidia-kernel-source-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA kernel source package
ii nvidia-peer-memory 1.2-0 all nvidia peer memory kernel module.
ii nvidia-peer-memory-dkms 1.2-0 all DKMS support for nvidia-peer-memory kernel modules
ii rsyslog 8.2001.0-1ubuntu1.1 amd64 reliable system and kernel logging daemon
ii srp-dkms 5.4-OFED.5.4.3.0.1.1 all DKMS support fo srp kernel modules
Code block. Kernel, IB related package version check
Use the apt-mark command to hold package updates.
# apt-mark hold <package name>
Code block. Package update hold
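When many kernel/IB/NVIDIA packages need to be held, the hold commands can be generated from the `dpkg -l` listing checked above and reviewed before running. This is a sketch only: the function name, package-name filter, and the inlined sample listing are illustrative assumptions; in practice you would pipe the real `dpkg -l` output in.

```shell
# Generate "apt-mark hold" commands for installed (ii) packages whose
# names match kernel/IB/NVIDIA patterns; $2 is the package-name column.
list_hold_commands() {
  awk '/^ii/ && $2 ~ /mlnx|nvidia|linux-image|linux-headers/ {print "apt-mark hold " $2}'
}

# Sample dpkg -l style input for demonstration; rsyslog is excluded
# because only the package name (not the description) is matched.
list_hold_commands <<'EOF'
ii  linux-image-5.4.0-105-generic  5.4.0-105.119         amd64  Signed kernel image generic
ii  mlnx-ofed-kernel-dkms          5.4-OFED.5.4.3.0.3.1  all    DKMS support for mlnx-ofed kernel modules
ii  rsyslog                        8.2001.0-1ubuntu1.1   amd64  reliable system and kernel logging daemon
EOF
```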
Intel E810 Driver Update
Check the version of the Intel E810 driver and update it to the recommended version.
Notice
Server manufacturer Intel E810 driver recommended version: 1.15.4
Move the base driver tar file to the desired directory.
Example: /home/username/ice or /usr/local/src/ice
Untar/unzip the archive file.
x.x.x is the version number of the driver tar file.
tar zxf ice-x.x.x.tar.gz
Code block. Unzip file
Change to the driver src directory.
x.x.x is the version number of the driver tar file.
cd ice-x.x.x/src/
Code block. Directory change
Compile the driver module.
make install
Code Block. Driver Module Compile
After the update is complete, check the version.
lsmod | grep ice
modinfo ice | grep version
Code Block. Version Check
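The version reported by `modinfo ice | grep version` can be compared against the recommended 1.15.4 with version-aware sorting. A small sketch (the helper name is an assumption; `sort -V` is standard GNU coreutils):

```shell
# Return 0 if the installed version ($1) is at least the recommended
# version ($2), using sort -V so e.g. 1.9 < 1.15 compares correctly.
meets_recommended() {
  installed=$1; recommended=$2
  [ "$(printf '%s\n%s\n' "$recommended" "$installed" | sort -V | head -n1)" = "$recommended" ]
}

meets_recommended "1.15.4" "1.15.4" && echo "driver is up to date"
```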
NVIDIA driver check
Note
nvidia-smi topo, IB nv_peer_mem status check
To check the NVIDIA driver (nvidia-smi topo, IB nv_peer_mem status) and inspect the IaaS HW level, follow the procedure below.
Check the GPU driver and HW status.
user@bm-dev-001:~$ nvidia-smi topo -m
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    mlx5_0  mlx5_1  mlx5_2  mlx5_3  CPU Affinity  NUMA Affinity
GPU0     X      NV12    NV12    NV12    NV12    NV12    NV12    NV12    SYS     PXB     SYS     SYS     48-63         3
GPU1    NV12     X      NV12    NV12    NV12    NV12    NV12    NV12    SYS     PXB     SYS     SYS     48-63         3
GPU2    NV12    NV12     X      NV12    NV12    NV12    NV12    NV12    PXB     SYS     SYS     SYS     16-31         1
GPU3    NV12    NV12    NV12     X      NV12    NV12    NV12    NV12    PXB     SYS     SYS     SYS     16-31         1
GPU4    NV12    NV12    NV12    NV12     X      NV12    NV12    NV12    SYS     SYS     SYS     PXB     112-127       7
GPU5    NV12    NV12    NV12    NV12    NV12     X      NV12    NV12    SYS     SYS     SYS     PXB     112-127       7
GPU6    NV12    NV12    NV12    NV12    NV12    NV12     X      NV12    SYS     SYS     PXB     SYS     80-95         5
GPU7    NV12    NV12    NV12    NV12    NV12    NV12    NV12     X      SYS     SYS     PXB     SYS     80-95         5
mlx5_0  SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS
mlx5_1 PXB PXB SYS SYS SYS SYS SYS SYS SYS X SYS SYS
mlx5_2 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS X SYS
mlx5_3 SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS X
Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
Code Block. GPU Driver and HW Status Check
Check the NVSwitch HW status.
user@bm-dev-001:~$ nvidia-smi nvlink --status
GPU 0: NVIDIA A100-SXM4-80GB (UUID: GPU-2c0d1d6b-e348-55fc-44cf-cd65a954b36c)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 1: NVIDIA A100-SXM4-80GB (UUID: GPU-96f429d8-893a-a9ea-deca-feffd90669e9)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 2: NVIDIA A100-SXM4-80GB (UUID: GPU-2e601952-b442-b757-a035-725cd320f589)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 3: NVIDIA A100-SXM4-80GB (UUID: GPU-bcbfd885-a9f8-ec8c-045b-c521472b4fed)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 4: NVIDIA A100-SXM4-80GB (UUID: GPU-30273090-2d78-fc7a-a360-ec5f871dd488)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 5: NVIDIA A100-SXM4-80GB (UUID: GPU-5ce7ef61-56dd-fb18-aa7c-be610c8d51c3)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 6: NVIDIA A100-SXM4-80GB (UUID: GPU-740a527b-b286-8b85-35eb-b6b41c0bb6d7)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
GPU 7: NVIDIA A100-SXM4-80GB (UUID: GPU-1fb6de95-60f6-dbf2-ffca-f7680577e37c)
Link 0: 25 GB/s
Link 1: 25 GB/s
Link 2: 25 GB/s
Link 3: 25 GB/s
Link 4: 25 GB/s
Link 5: 25 GB/s
Link 6: 25 GB/s
Link 7: 25 GB/s
Link 8: 25 GB/s
Link 9: 25 GB/s
Link 10: 25 GB/s
Link 11: 25 GB/s
Code block. NVSwitch HW status check
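With 8 GPUs x 12 links, eyeballing the `nvidia-smi nvlink --status` output is error-prone; a line count of links not reporting the expected 25 GB/s is quicker. This is a hedged sketch: the helper name is an assumption, and a short sample (with an illustrative inactive link) is inlined so the logic runs without GPU hardware; in practice, pipe the real command output in.

```shell
# Count NVLink status lines that do not show the expected 25 GB/s;
# a healthy node should print 0.
count_degraded_links() {
  awk '/Link [0-9]+:/ && $0 !~ /25 GB\/s/ {bad++} END {print bad + 0}'
}

# Illustrative sample input; prints: 1
count_degraded_links <<'EOF'
GPU 0: NVIDIA A100-SXM4-80GB (UUID: GPU-2c0d1d6b-e348-55fc-44cf-cd65a954b36c)
Link 0: 25 GB/s
Link 1: <inactive>
EOF
```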
Check the InfiniBand (IB) HCA card HW status and link.
user@bm-dev-001:~$ ibdev2netdev -v
cat: /sys/class/infiniband/mlx5_0/device/vpd: Permission denied
0000:45:00.0 mlx5_0 (MT4123 - ) fw 20.29.1016 port 1 (ACTIVE) ==> ibs18 (Down)
cat: /sys/class/infiniband/mlx5_1/device/vpd: Permission denied
0000:0e:00.0 mlx5_1 (MT4123 - ) fw 20.29.1016 port 1 (ACTIVE) ==> ibs17 (Down)
cat: /sys/class/infiniband/mlx5_2/device/vpd: Permission denied
0000:c5:00.0 mlx5_2 (MT4123 - ) fw 20.29.1016 port 1 (ACTIVE) ==> ibs20 (Down)
cat: /sys/class/infiniband/mlx5_3/device/vpd: Permission denied
0000:85:00.0 mlx5_3 (MT4123 - ) fw 20.29.1016 port 1 (ACTIVE) ==> ibs19 (Down)
user@bm-dev-001:~$
root@bm-dev-001:~# ibstat
CA 'mlx5_0'
        CA type: MT4123
        Number of ports: 1
        Firmware version: 20.29.1016
        Hardware version: 0
        Node GUID: 0x88e9a4ffff5060ac
        System image GUID: 0x88e9a4ffff5060ac
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 8
                LMC: 0
                SM lid: 1
                Capability mask: 0x2651e848
                Port GUID: 0x88e9a4ffff5060ac
                Link layer: InfiniBand
CA 'mlx5_1'
        CA type: MT4123
        Number of ports: 1
        Firmware version: 20.29.1016
        Hardware version: 0
        Node GUID: 0x88e9a4ffff504080
        System image GUID: 0x88e9a4ffff504080
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 5
                LMC: 0
                SM lid: 1
                Capability mask: 0x2651e848
                Port GUID: 0x88e9a4ffff504080
                Link layer: InfiniBand
CA 'mlx5_2'
        CA type: MT4123
        Number of ports: 1
        Firmware version: 20.29.1016
        Hardware version: 0
        Node GUID: 0x88e9a4ffff505038
        System image GUID: 0x88e9a4ffff505038
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 2
                LMC: 0
                SM lid: 1
                Capability mask: 0x2651e848
                Port GUID: 0x88e9a4ffff505038
                Link layer: InfiniBand
CA 'mlx5_3'
        CA type: MT4123
        Number of ports: 1
        Firmware version: 20.29.1016
        Hardware version: 0
        Node GUID: 0x88e9a4ffff504094
        System image GUID: 0x88e9a4ffff504094
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 7
                LMC: 0
                SM lid: 1
                Capability mask: 0x2651e848
                Port GUID: 0x88e9a4ffff504094
                Link layer: InfiniBand
Code block. InfiniBand(IB) HCA card HW status and Link check
IB bandwidth communication check
To check the IB bandwidth communication status (ib_send_bw) and inspect the IaaS HW level, follow these steps.
Check the name of the IB HCA interface.
user@bm-dev-001:~$ ibdev2netdev
mlx5_0 port 1 ==> ibs18 (Down)
mlx5_1 port 1 ==> ibs17 (Down)
mlx5_2 port 1 ==> ibs20 (Down)
mlx5_3 port 1 ==> ibs19 (Down)
Code block. Check the name of IB HCA interface
Check the HCA interface that can communicate with IB Switch#1.
mlx5_0 port 1 ==> ibs18 (Down)
mlx5_2 port 1 ==> ibs20 (Down)
Code Block. HCA Interface Check
Check the HCA interface that can communicate with IB Switch#2.
mlx5_1 port 1 ==> ibs17 (Down)
mlx5_3 port 1 ==> ibs19 (Down)
Code Block. HCA Interface Check
Run the SERVER Side command to check the communication status.
The CLIENT Side command is entered afterward to establish mutual communication.
user@bm-dev-001:~$ ib_send_bw -d mlx5_3 -i 1 -F
************************************
* Waiting for client to connect... *
************************************
---------------------------------------------------------------------------------------
                    Send BW Test
 Dual-port       : OFF          Device         : mlx5_3
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : ON
 RX depth        : 512
 CQ Moderation   : 1
 Mtu             : 4096[B]
 Link type       : IB
 Max inline data : 0[B]
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
 local address: LID 0x07 QPN 0x002e PSN 0xa86622
 remote address: LID 0x0a QPN 0x002d PSN 0xfc58dd
---------------------------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]    MsgRate[Mpps]
 65536      1000           0.00               19827.40              0.317238
---------------------------------------------------------------------------------------
Code Block. Communication Status Check
Run the CLIENT Side command to check the communication status.
The SERVER Side command must be entered first to establish mutual communication.
root@bm-dev-003:~# ib_send_bw -d mlx5_3 -i 1 -F <SERVER Side IP>
---------------------------------------------------------------------------------------
                    Send BW Test
 Dual-port       : OFF          Device         : mlx5_3
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : ON
 TX depth        : 128
 CQ Moderation   : 1
 Mtu             : 4096[B]
 Link type       : IB
 Max inline data : 0[B]
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
 local address: LID 0x0a QPN 0x002a PSN 0x98a48e
 remote address: LID 0x07 QPN 0x002c PSN 0xe68304
---------------------------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]    MsgRate[Mpps]
 65536      1000           19008.49           19006.37              0.304102
---------------------------------------------------------------------------------------
Code Block. Communication Status Check
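As a sanity check, the client-side result above can be compared against the link rate reported by ibstat (Rate: 200, i.e. 200 Gb/s). The sketch below assumes ib_send_bw reports MB/sec with 1 MB = 10^6 bytes; perftest can also report Gb/s directly via its --report_gbits option.

```python
# Convert ib_send_bw's "BW average[MB/sec]" to Gb/s and compare it with
# the ibstat link rate (Rate: 200 -> 200 Gb/s).
def mbps_to_gbps(bw_mb_per_sec: float) -> float:
    # assumes 1 MB = 10**6 bytes, 8 bits per byte
    return bw_mb_per_sec * 8 / 1000.0

link_rate_gbps = 200.0   # from ibstat "Rate: 200"
bw_avg = 19006.37        # client-side "BW average[MB/sec]" above

gbps = mbps_to_gbps(bw_avg)
print(f"{gbps:.1f} Gb/s ({gbps / link_rate_gbps:.0%} of link rate)")
```

A result far below the link rate can indicate a degraded link or traffic routed through the wrong HCA interface.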
Check IB Service Related Kernel Modules
Check the IB service-related kernel modules (lsmod) to inspect the IaaS HW level.
Code block. IB service related kernel module check(1)
user@bm-dev-001:~$ service nv_peer_mem status
nv_peer_mem.service - LSB: Activates/Deactivates nv_peer_mem to \ start at boot time.
   Loaded: loaded (/etc/init.d/nv_peer_mem; generated)
   Active: active (exited) since Mon 2023-03-13 16:21:33 KST; 2 days ago
     Docs: man:systemd-sysv-generator(8)
  Process: 4913 ExecStart=/etc/init.d/nv_peer_mem start (code=exited, status=0/SUCCESS)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens9f0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: ens9f0
MII Status: up
Speed: 100000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 30:3e:a7:02:35:70
Slave queue ID: 0

Slave Interface: ens11f0
MII Status: up
Speed: 100000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 30:3e:a7:02:2f:e8
Slave queue ID: 0
Code Block. Service Network Check Command Result
Reference
If any Slave Interface is in the down state, report the issue through the Support Center's Contact Us so that it can be addressed.
Time Server and Time Synchronization Check after New Multi-node GPU Cluster Deployment
The OS image has the chrony daemon installed and set to synchronize with the SCP NTP server. Use the following command to check if there are any lines marked with ^* in the MS Name column.
Code block. chrony daemon installation check result
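The check command itself is not reproduced above; chrony's standard query is `chronyc sources`, whose MS column marks the currently selected (synchronized) source with `^*`. The sketch below illustrates the pattern being checked, using hypothetical output with placeholder server addresses.

```python
# Hypothetical `chronyc sources` output; addresses are placeholders,
# not the actual SCP NTP servers.
sample = """\
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 198.51.100.10                 2   10   377   245   +123us[ +145us] +/-  12ms
^- 198.51.100.11                 2   10   377   312   -480us[ -502us] +/-  25ms
"""

def synchronized_source(output: str):
    """Return the line of the currently selected (^*) source, or None."""
    for line in output.splitlines():
        if line.startswith("^*"):
            return line
    return None

line = synchronized_source(sample)
print("synchronized" if line else "NOT synchronized")
```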
GPU MIG/ECC Setting Initialization Check Guide
When you apply for the Multi-node GPU Cluster product, the GPU MIG/ECC settings are delivered initialized. However, to ensure the exact setting values are applied, reboot the node once at the beginning, then verify the settings according to this inspection guide before use.
Reference
MIG: Multi-Instance GPU
ECC: Error Correction Code
MIG Setup Initialization
Refer to the following for how to check and initialize MIG settings.
Use the following command to check if the status value of MIG M is Disabled.
Command

root@bm-dev-001:~# nvidia-smi
Code Block. MIG M. Setting Check Command
Confirmation result
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06 Driver version: 470.129.06 CUDA Version: 11.4 |
|----------------------------------+-----------------------------+------------------------|
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|==================================+=============================+========================|
| 0 NVIDIA A100-SXM... Off | 00000000:03:00.0 Off | Off |
| N/A 29C P0 57W / 400W | 0MiB / 81251MiB | 0% Default |
| | | Disabled |
+----------------------------------+-----------------------------+------------------------+
| 0 NVIDIA A100-SXM... Off | 00000000:0C:00.0 Off | Off |
| N/A 30C P0 58W / 400W | 0MiB / 81251MiB | 18% Default |
| | | Disabled |
+-----------------------------------------------------------------------------------------+
Code Block. MIG M. Initialization Setting Check Result
If MIG M.’s status value is not Disabled, use the following command to initialize MIG.
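The initialization command itself is not reproduced in this guide. On A100 systems, MIG mode is typically disabled with the standard nvidia-smi flag shown below (run as root); treat this as a sketch rather than the guide's own command, and note that a reboot or GPU reset is required afterward before re-checking that MIG M. shows Disabled.

```shell
# Assumption: standard nvidia-smi MIG flag (see NVIDIA's MIG user guide);
# reboot or reset the GPUs afterward for the change to take effect.
root@bm-dev-001:~# nvidia-smi -mig 0
```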
ECC Setup Initialization
Refer to the following for how to check and initialize the ECC settings.
Use the following command to check if the status value of Volatile Uncorr. ECC is Off.
Command

root@bm-dev-001:~# nvidia-smi
Code Block. ECC Setting Check Command
Confirmation result
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06 Driver version: 470.129.06 CUDA Version: 11.4 |
|----------------------------------+-----------------------------+------------------------|
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|==================================+=============================+========================|
| 0 NVIDIA A100-SXM... Off | 00000000:03:00.0 Off | Off |
| N/A 29C P0 57W / 400W | 0MiB / 81251MiB | 0% Default |
| | | Disabled |
+----------------------------------+-----------------------------+------------------------+
| 0 NVIDIA A100-SXM... Off | 00000000:0C:00.0 Off | Off |
| N/A 30C P0 61W / 400W | 0MiB / 81251MiB | 18% Default |
| | | Disabled |
+-----------------------------------------------------------------------------------------+
Code Block. ECC Setting Check Result
If the Volatile Uncorr. ECC status value is On*, reboot the node.
If the Volatile Uncorr. ECC status value is neither On* nor Off, use the following command to initialize ECC. After initialization, reboot and check that the status value is Off.
root@bm-dev-001:~# nvidia-smi --ecc-config=0
Code Block. ECC Initialization Command
2.5.3 - Release Note
Multi-node GPU Cluster
2025.07.01
FEATURE New feature added and monitoring linked
You can cancel multiple resources at the same time from the GPU Node list.
The nodes must use the same DataSet and Cluster Fabric.
It has been linked with Cloud Monitoring.
You can check major performance items in real-time in Cloud Monitoring.
2025.02.27
NEW Multi-node GPU Cluster Service Official Version Release
Multi-node GPU Cluster service has been launched.
Provides a service that offers physical GPU servers without virtualization for large-scale high-performance AI computing.
2.6 - Cloud Functions
2.6.1 - Overview
Service Overview
Cloud Functions is a serverless computing-based FaaS (Function as a Service) that allows you to run applications in the form of functions without the need for server provisioning.
Users can focus on writing and deploying application code, without the burden of managing servers or containers for scaling.
Features
Easy and convenient development environment: Developers can easily create Function resources connected to events in various environments using a Code Editor suitable for the chosen runtime, and can write and call code easily.
Serverless Computing: You can use a serverless type of code execution service for development in the Samsung Cloud Platform environment. The resources required to call and execute function-type applications are allocated and managed by Samsung Cloud Platform according to the scale of execution.
Efficient Cost Management: The called Function is charged only for the actual application runtime by aggregating usage (total number of calls, total call time). Functions with low usage are adjusted to Scale-to-zero state by Cloud Functions’ Scaler, preventing resource consumption, thus enabling efficient cost management.
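As a rough illustration of the usage-based billing model described above, the sketch below aggregates total calls and total execution time into a monthly estimate. The unit prices are made-up placeholders, not Samsung Cloud Platform's actual rates.

```python
# Illustrative only: these unit prices are hypothetical placeholders.
PRICE_PER_MILLION_CALLS = 0.20    # currency units per 1M calls (hypothetical)
PRICE_PER_GB_SECOND = 0.0000166   # currency units per GB-second (hypothetical)

def monthly_cost(calls: int, avg_ms: float, memory_gb: float) -> float:
    """Estimate cost from total calls and aggregated execution time."""
    call_cost = calls / 1_000_000 * PRICE_PER_MILLION_CALLS
    gb_seconds = calls * (avg_ms / 1000.0) * memory_gb
    return call_cost + gb_seconds * PRICE_PER_GB_SECOND

# Example: 3M calls/month, 120 ms average duration, 512 MB memory
print(round(monthly_cost(3_000_000, 120, 0.5), 2))
```

A function scaled to zero between calls incurs no execution-time charges while idle, which is what makes this model efficient for spiky workloads.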
Service Composition Diagram
Figure. Cloud Functions composition diagram
Provided Features
Cloud Functions provides the following features.
Code Writing Environment: Runtime-optimized Function creation, Code writing and editing
Function execution, environment management, monitoring: endpoint definition, Token management, access control setting, trigger setting, etc., definition and modification of operating environment/variables, calling/testing output for Deploy/Test, service deployment, progress status monitoring/logging
Serverless Computing: all elements required for code writing and deployment are managed by Samsung Cloud Platform, with automatic scale adjustment according to deployment
Sample Code Provided: Provides various sample codes through Blueprint, allowing for easy and quick start
Component
Runtime
Cloud Functions currently supports the following runtimes; support for additional runtimes will be added over time.
Runtime
Version
Go
1.21, 1.23
Java
17
Node.js
18, 20
PHP
8.1
Python
3.9, 3.10, 3.11
Table. Supported Runtime Items
Regional Provision Status
Cloud Functions service is available in the following environments.
Region
Availability
Korea West 1 (kr-west1)
Provided
Korea East 1 (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Cloud Functions Region-wise Availability
Preceding Service
The following services can optionally be configured before creating this service. For more information, refer to the guide provided for each service and prepare it in advance.
Cloud Functions sends metrics to ServiceWatch. The metrics provided by default monitoring are data collected at a 1‑minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Indicators
The following are the basic metrics for the Cloud Functions namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. Cloud Functions Basic Metrics
2.6.2 - How-to guides
The user can enter the required information for Cloud Functions through the Samsung Cloud Platform Console, select detailed options, and create the service.
Cloud Functions Create
Click the All Services > Compute > Cloud Functions menu. Navigate to the Service Home page of Cloud Functions.
Click the Create Cloud Functions button on the Service Home page. It navigates to the Create Cloud Functions page.
On the Create Cloud Functions page, enter the information required to create the service.
Category
Required
Detailed description
Function name
Required
Enter the Function name to create
Start with a lowercase English letter and use lowercase English letters, numbers, and hyphens (-), within 3 to 64 characters
Runtime
Required
Select Runtime creation method
Create new: Create a new Runtime
Start with Blueprint: Create using the Runtime source code provided by the service
Table. Cloud Functions Service Information Input Items
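The naming rule above can be expressed as a simple pattern. This is a sketch of the rule as stated (start with a lowercase letter; lowercase letters, digits, and hyphens; 3 to 64 characters), not the console's actual validator, and edge cases such as trailing hyphens may be further restricted.

```python
import re

# Sketch of the documented naming rule: first character is a lowercase
# letter, followed by 2-63 lowercase letters, digits, or hyphens.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,63}$")

def valid_function_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(valid_function_name("my-func-01"))   # starts lowercase, within length
print(valid_function_name("My-Func"))      # rejected: uppercase letters
print(valid_function_name("ab"))           # rejected: shorter than 3 chars
```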
In the Summary panel, check the detailed information and estimated billing amount, and click the Complete button.
When creation is complete, check the created resource on the Cloud Functions list page.
Cloud Functions Check Detailed Information
The Cloud Functions Details page consists of the Detail Information, Monitoring, Log, Code, Configuration, Trigger, Tag, and Work History tabs.
To view detailed information about the Cloud Functions service, follow these steps.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
Click the Function menu on the Service Home page. Move to the Function List page.
Click the resource to view detailed information on the Function list page. Go to the Function detail page.
The Function Details page displays status information and additional feature information, and consists of the Details, Monitoring, Log, Code, Configuration, Trigger, Tag, and Work History tabs.
Category
Detailed description
Cloud Functions status
Cloud Functions status information
Ready: green icon, state where normal function calls are possible
Not Ready: gray icon, state where normal function calls are not possible
Deploying: yellow icon, state where function is being created or changed, triggered by the following actions
function creation and modification
modify code with editor in the Code tab
inspect jar file in the Code tab
add and modify in the Trigger tab
modify in the Configuration tab
Running: blue icon, state where normal function calls are possible and cold start prevention policy is applied
Service cancellation
Button to cancel the service
Table. Cloud Functions status information and additional features
Detailed Information
On the Function list page, you can view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In the Cloud Functions service, it refers to the Function name
Resource ID
Unique resource ID of the service
Creator
User who created the service
Creation time
Date/time the service was created
Editor
User who modified the service
Modification Date and Time
Date and time the service was modified
Function name
Name of Cloud Function
Runtime
Runtime types and versions
LLM Endpoint
Click User Guide to view LLM Endpoint information and usage instructions
Table. Cloud Functions Details - Detailed Information Tab Items
Reference
For detailed information on how to use LLM by integrating AIOS, please refer to Integrate AIOS.
Monitoring
You can view the Cloud Functions usage information of the selected resource on the Function List page.
Category
Detailed description
Number of calls
Average number of times the function was called during the unit time (instances)
Execution Time
Average execution time (seconds) of the function during the unit time
Memory usage
Average memory usage (kb) used during the execution of the function per unit time
Current task count
If the function is called multiple times simultaneously, the average number of tasks generated per unit time for concurrent processing (count)
Successful call count
Average number of times (cases) the runtime code operated normally and delivered a response code per unit time when the function is called
Failed call count
Average number of calls with errors per unit time when the function is invoked
Includes failures due to response timeouts and runtime logic errors
Code
Handler Information: Execution Class and Method information
Compressed file name (.jar/.zip): Name of the currently set compressed file
File upload date and time: Upload date and time of the currently set compressed file
Transmission status: Compressed file transmission history
Transmission success: When the compressed file setting is successful
Reason for failure when compressed file transmission fails
Edit
Jar file can be changed
Can be changed by clicking the Get from Object Storage button on the Function code edit page
Enter the Private URL of the file in the Object Storage bucket to be fetched
For details on changing compressed files, refer to [Java Runtime Code Change](#java-runtime-code-change)
Table. Cloud Functions Details - Execution Items of Compressed Files (.jar/.zip) in Code Tab
Reference
In the case of Java Runtime, it does not provide UI code editing functionality, and you must select a compressed file (.jar/.zip) from the bucket of the Object Storage service.
If your Object Storage service authentication key has not been generated, Import from Object Storage cannot be executed; generate the authentication key in advance.
Configuration
On the Function list page, you can view the Cloud Functions configuration of the selected resource.
Category
Detailed description
General Configuration
Cloud Function memory and timeout settings
Memory: Maximum memory limit per function
Timeout: Maximum waiting time for a function call per function. After the timeout, the function goes into a Scale-to-zero state and terminates
Function execution: Minimum and maximum number of tasks
Click the Edit button to change the General Configuration settings
Environment Variable
Set runtime environment variables
When using environment variables, you can adjust the function’s behavior without updating the code
Click the Edit button to add or edit environment variables
Function URL
Issue an HTTPS URL address that can access the function
Click the Edit button to set activation status, authentication type, and allowed IP
When calling the function authenticated with IAM type, the header must include “x-scf-access-key”, “x-scf-secret-key”. In this case, policy and authentication key IP access control are not applied
Private connection configuration
Can be used in conjunction with PrivateLink Service
If you disable Access Control, the registered access information will be deleted, making function access control impossible, so it may be exposed to security attacks such as external scanning and hacking.
Reference
The memory allocated in General Configuration proportionally determines the number of CPU cores automatically assigned.
Setting the minimum execution count in General Configuration to 1 or more prevents cold starts, but incurs continuous costs.
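For IAM-type authentication, the guide above specifies the "x-scf-access-key" and "x-scf-secret-key" headers. The sketch below only constructs such a request without sending it; the Function URL and credentials are placeholders, not real endpoints or keys.

```python
import urllib.request

# Hypothetical Function URL; the header names come from the guide above.
function_url = "https://example-function.invalid/invoke"  # placeholder

req = urllib.request.Request(
    function_url,
    data=b'{"name": "scp"}',
    headers={
        "Content-Type": "application/json",
        "x-scf-access-key": "<ACCESS_KEY>",   # placeholder credential
        "x-scf-secret-key": "<SECRET_KEY>",   # placeholder credential
    },
    method="POST",
)

# The request is only constructed here, not sent.
print(req.get_method(), req.get_header("X-scf-access-key"))
```

Remember that with IAM-type authentication, policy and authentication-key IP access control are not applied to the call.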
Trigger
Function List page allows you to view and configure trigger information of the selected resource. If you set a trigger, the Function can be automatically executed when an event occurs.
Category
Detailed description
Cronjob
Use Cronjob as a trigger
Automatically invoke the function according to time or a scheduled interval
Click the Edit button to change the frequency and time zone
API Gateway
Use API Gateway as a trigger
You can view the API Gateway name and detailed information.
If the Cronjob trigger fires again before the function times out, the function runs in an overlapped (nested) manner, increasing the execution count and duration. This can accrue continuous additional costs and lead to high expenses, so use caution.
Reference
If the function is in the Deploying state, it cannot be edited.
About trigger settings, please refer to Trigger Setup.
Tag
In the Tag tab, you can view the resource’s tag information, and add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
Tag’s Key, Value information can be checked
Up to 50 tags can be added per resource
When entering tags, search and select from the existing list of created Keys and Values
Table. Cloud Functions Details - Tag Tab Items
Work History
You can check the work history of resources on the Work History page.
Category
Detailed description
Work History List
Resource Change History
Work details, work date and time, resource type, resource name, work result, worker information can be checked
When you click the corresponding resource in the Work History List, the Work History Details popup opens
Table. Cloud Functions Details - Work History Tab Items
Java Runtime Code Change
If you are using Java Runtime, you cannot modify the code directly, so you need to select and change the compressed file (.jar/.zip) in the bucket of the Object Storage service.
Follow the steps below to change the compressed file.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
On the Service Home page, click the Function menu. Navigate to the Function list page.
Click the resource to change the compressed file within the code on the Function List page. Navigate to the Function Details page.
Click the Edit button on the Code tab of the Function Details page. It moves to the Function Code Edit page.
Click the Import from Object Storage button. The Import from Object Storage popup opens.
Category
Detailed description
Java Runtime
Java Runtime information
Handler Information
Handler Information
Execution Class: Automatically entered when setting compressed file (.jar/.zip)
Execution Method: Automatically entered when setting compressed file (.jar/.zip)
Compressed file (.jar/.zip)
Set the compressed file to modify
Compressed file name (.jar/.zip): Displays the name of the compressed file. After setting Get from Object Storage, it is entered automatically
Get from Object Storage: Set the Object Storage to retrieve the compressed file (.jar/.zip)
Table. Cloud Functions Details - Function Code Modification Items
In Object Storage URL, enter the URL information of the Object Storage from which to retrieve the compressed file, then click the Confirm button. A notification popup will open.
The URL information can be found in the Folder List tab of the detailed page of the Object Storage to be retrieved, under the File Information > Private URL item.
Click the Confirm button. The name of the imported compressed file is displayed in the Compressed file name (.jar/.zip) field on the Function code edit page.
Click the Save button.
Caution
In case of a user whose authentication key has not been generated, Import from Object Storage cannot be executed.
If the URL does not exist or the compressed file corresponds to the following, it cannot be changed.
When using an unsupported extension
If there is a harmful file in the compressed file
If it exceeds the supported size
Cloud Functions Cancel
To cancel the Cloud Functions service, follow the steps below.
Click the All Services > Compute > Cloud Functions menu. Navigate to the Service Home page of Cloud Functions.
Click the Function menu on the Service Home page. Go to the Function List page.
On the Function list page, click the resource to cancel, then click the Cancel Service button.
When the termination is completed, check whether the resource has been terminated on the Function List page.
2.6.2.1 - Set Trigger
Set up trigger
Reference
By default, all trigger types can be added in Cloud Functions.
When an event is triggered from a specific product, the event must be delivered to Cloud Functions.
Cronjob Trigger Setup
To set up a Cronjob trigger, follow these steps.
Click the All Services > Compute > Cloud Functions menu. Navigate to the Service Home page of Cloud Functions.
Click the Function menu on the Service Home page. Navigate to the Function List page.
On the Function List page, click the resource to set a trigger. You will be taken to the Function Details page.
Click the Trigger tab, then click the Add Trigger button. The Add Trigger popup opens.
In the Add Trigger popup, select Cronjob as the Trigger Type. The required information input area appears at the bottom.
Category
Detailed description
Cronjob Settings
Set the trigger’s repeat frequency
Can be set in minutes, hours, days, months, and weekdays
Timezone setting
Set the trigger’s reference time zone
Table. Cronjob Trigger Required Information Items
After entering the required information, click the Confirm button.
When the pop-up window notifying addition opens, click the Confirm button.
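For illustration, a hypothetical Cronjob repeat-frequency expression in the common five-field order (minute, hour, day, month, day of week); the exact input format accepted by the console may differ:

```
0 9 * * 1-5
```

This would fire at 09:00 on weekdays, Monday through Friday, in the configured time zone.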
API Gateway Trigger Setup
To set up an API Gateway trigger, follow these steps.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
Click the Function menu on the Service Home page. Go to the Function List page.
On the Function List page, click the resource to set the trigger. Go to the Function Details page.
Click the Trigger tab, then click the Add Trigger button. The Add Trigger popup window opens.
In the Add Trigger popup window, select API Gateway as the Trigger Type. A required information input area appears at the bottom.
Category
Detailed description
API name
API selection
You can select an existing API or create a new one
Stage
Select deployment target
You can select an existing stage or create a new one
Table. API Gateway Trigger Required Information Items
After entering the required information, click the Confirm button.
When the popup notifying addition opens, click the Confirm button.
Setting up Multi Trigger
You can connect multiple triggers to a single function and use them.
Edit Trigger
To modify the added trigger, follow the steps below.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
Click the Function menu on the Service Home page. Navigate to the Function List page.
On the Function List page, click the resource whose trigger you want to edit. Navigate to the Function Details page.
Click the Trigger tab, then click the Edit button of the trigger whose settings you want to modify in the trigger list. The Edit Trigger popup window opens.
After modifying the setting values in the Edit Trigger popup window, click the Confirm button.
When the popup notifying the edit opens, click the Confirm button.
Delete Trigger
To delete the trigger, follow the steps below.
Caution
A trigger linked to a specific product is managed only by that product from the time of linking; when the function is terminated, the deletion status must be delivered to that product.
Click the All Services > Compute > Cloud Functions menu. Navigate to the Service Home page of Cloud Functions.
Click the Function menu on the Service Home page. Navigate to the Function List page.
On the Function List page, click the resource whose trigger you want to delete. Navigate to the Function Details page.
In the Trigger tab’s trigger list, after selecting the trigger to delete, click the Delete button.
Click the Confirm button when the popup notifying trigger deletion opens.
2.6.2.2 - AIOS Connect
AIOS Linking
You can use LLM by linking Cloud Functions with AIOS.
AIOS LLM Private Endpoint
The URL of the AIOS LLM private endpoint is as follows.
To integrate Cloud Functions with AIOS, you need to change the URL address in the Blueprint to match the LLM Endpoint used in each region.
To change the Blueprint source code, follow the steps below.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
On the Service Home page, click the Cloud Functions menu. Navigate to the Function List page.
On the Function List page, click the resource to be called via URL. You will be taken to the Function Detail page.
After clicking the Code tab, click the Edit button. Navigate to the Function Code Edit page.
After modifying the Blueprint source code (Python, Node.js, or Go Runtime), click the Save button.
Python source code
import json
import requests

def handle_request(params):
    # User writing area (Function details)
    url = "{AIOS LLM private endpoint}/{API}"  # Destination URL
    data = {
        "model": "openai/gpt-oss-120b",
        "prompt": "Write a haiku about recursion in programming.",
        "temperature": 0,
        "max_tokens": 100,
        "stream": False
    }
    try:
        response = requests.post(url, json=data, verify=True)
        return {
            'statusCode': response.status_code,
            'body': json.dumps(response.text)
        }
    except requests.exceptions.RequestException as e:
        return str(e)
Python source code
Node.js source code
const request = require('request');

/**
 * @description User writing area (Function details)
 */
exports.handleRequest = async function (params) {
    return await sendRequest(params);
};

async function sendRequest(req) {
    return new Promise((resolve, reject) => {
        url = "{AIOS LLM private endpoint}/{API}"
        data = {
            model: 'openai/gpt-oss-120b',
            prompt: 'Write a haiku about recursion in programming.',
            temperature: 0,
            max_tokens: 100,
            stream: false
        }
        const options = {
            uri: url,
            method: 'POST',
            body: data,
            json: true,
            strictSSL: false,
            rejectUnauthorized: false
        }
        request(options, (error, response, body) => {
            if (error) {
                reject(error);
            } else {
                resolve({
                    statusCode: response.statusCode,
                    body: JSON.stringify(body)
                });
            }
        });
    });
}
package gofunction

import (
    "bytes"
    "encoding/json"
    "io/ioutil"
    "net/http"
)

type PostData struct {
    Model       string `json:"model"`
    Prompt      string `json:"prompt"`
    Temperature int    `json:"temperature"`
    MaxTokens   int    `json:"max_tokens"`
    Stream      bool   `json:"stream"`
}

func HandleRequest(r *http.Request) (string, error) {
    url := "{AIOS LLM private endpoint}/{API}"
    data := PostData{
        Model:       "openai/gpt-oss-120b",
        Prompt:      "Write a haiku about recursion in programming.",
        Temperature: 0,
        MaxTokens:   100,
        Stream:      false,
    }
    jsonData, err := json.Marshal(data)
    if err != nil {
        panic(err)
    }
    req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    // Read response body
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    return string(body), nil
}
GO source code
2.6.2.3 - Blueprint Detailed Guide
Blueprint Overview
When creating Cloud Functions, you can set the Blueprint to utilize the Runtime source code provided by Cloud Functions.
Refer to the following for Blueprint items provided by Cloud Functions.
import json

def handle_request(params):
    # User writing area (Function details)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello Serverless World!')
    }
Hello World - Python source code
PHP source code
<?php
function handle_request() {
    # User writing area (Function details)
    $res = array(
        'statusCode' => 200,
        'body' => 'Hello Serverless World!',
    );
    return $res;
}
?>
Hello World - PHP source code
Check function call
After calling the function URL in the Configuration tab of the Function Details page, verify the response.
Hello Serverless World!
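Before calling the deployed function, the Blueprint handler can also be exercised locally; a minimal sketch, where handle_request mirrors the Hello World Blueprint above and the empty dict stands in for the request parameters:

```python
import json

def handle_request(params):
    # Same shape as the Hello World Blueprint
    return {
        'statusCode': 200,
        'body': json.dumps('Hello Serverless World!')
    }

# Simulate a function URL call with empty parameters
result = handle_request({})
print(result['statusCode'])        # 200
print(json.loads(result['body']))  # Hello Serverless World!
```

The returned dict carries the same statusCode and body fields that the function URL response exposes.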
# Execution after timeout
Explains the Execution after timeout setting and a function call example (using the function URL).
## Execution after timeout Setting
To set Execution after timeout, follow the steps below.
1. Click the **All Services > Compute > Cloud Functions** menu. Go to the **Service Home** page of Cloud Functions.
2. Click the **Function** menu on the **Service Home** page. Navigate to the **Function List** page.
3. On the **Function List** page, click the resource to set the trigger. The **Function Details** page will open.
4. Click the **Trigger** tab, then click the **Add Trigger** button. The **Add Trigger** popup opens.
5. In the **Add Trigger** popup window, select the **Trigger Type** item, enter the required information displayed at the bottom, and click the **Confirm** button.
* Required information varies depending on the trigger type.
Trigger Type
Input Item
API Gateway
API name: You can select an existing API or create a new one
Stage: You can select an existing stage or create a new one
Cronjob
Refer to the example and enter the trigger’s repeat frequency (minute, hour, day, month, day of week)
Timezone setting: select the reference time zone to apply
Table. Required input items when adding a trigger
6. After moving to the **Code** tab, click the **Edit** button. You will be taken to the **Function Code Edit** page.
7. After adding the processing logic for success and failure cases, click the **Save** button.
* Node.js source code
exports.handleRequest = async function (params) {
    /**
     * @description User writing area (Function details)
     */
    console.log("Hello world 3");
    await delay(3000);
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello Serverless World!'),
    };
    return response;
};

const delay = (ms) => {
    return new Promise(resolve => {
        setTimeout(resolve, ms)
    })
}
## Check function call
Call the **function URL** in the **Configuration** tab of the **Function Details** page and, after a certain amount of time, check the response.
Hello Serverless World!
# HTTP request body
Explains the HTTP request body parsing settings and a function call example (using the function URL).
## Setting HTTP request body
To set the HTTP request body, follow these steps.
1. Click the **All Services > Compute > Cloud Functions** menu. Go to the **Service Home** page of Cloud Functions.
2. Click the **Function** menu on the **Service Home** page. Navigate to the **Function** list page.
3. On the **Function List** page, click the resource to set the trigger. The **Function Details** page will open.
4. Click the **Trigger** tab, then click the **Add Trigger** button. The **Add Trigger** popup opens.
5. In the **Add Trigger** popup window, select the **Trigger Type** item, enter the required information displayed at the bottom, and click the **Confirm** button.
* Required information varies depending on the trigger type.
Trigger Type
Input Item
API Gateway
API name: You can select an existing API or create a new one
Stage: You can select an existing stage or create a new one
Cronjob
Refer to the example and enter the trigger’s repeat frequency (minute, hour, day, month, day of week)
Timezone setting: select the time zone to apply
Table. Required input items when adding a trigger
6. After moving to the **Code** tab, click the **Edit** button. You will be taken to the **Function Code Edit** page.
7. After adding processing logic for success and failure cases, click the **Save** button.
* Node.js source code
exports.handleRequest = async function (params) {
    /**
     * @description User writing area (Function details)
     */
    const response = {
        statusCode: 200,
        body: JSON.stringify(params.body),
    };
    return response;
};
import json

def handle_request(params):
    # User writing area (Function details)
    return {
        'statusCode': 200,
        'body': json.dumps(params.json)
    }
HTTP request body - Python source code
## Check function call
After calling the **function URL** in the **Configuration** tab of the **Function Details** page, check the request Body value and the response Body value.
* request Body value
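The echo behavior above can also be checked locally; a minimal sketch, where Params is a hypothetical stand-in for the request object that Cloud Functions passes to the handler:

```python
import json

class Params:
    # Hypothetical stand-in for the request object passed to the handler
    def __init__(self, body):
        self.json = body

def handle_request(params):
    # Echoes the request body back, as in the Blueprint above
    return {
        'statusCode': 200,
        'body': json.dumps(params.json)
    }

result = handle_request(Params({'message': 'hello'}))
print(result['body'])  # {"message": "hello"}
```

The response Body value is the request Body serialized back to JSON unchanged.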
# Send HTTP requests
Explains the HTTP request settings and function call example (using function URL).
## Send HTTP requests Setup
To configure Send HTTP requests, follow the steps below.
1. Click the **All Services > Compute > Cloud Functions** menu. Go to the **Service Home** page of Cloud Functions.
2. Click the **Function** menu on the **Service Home** page. Go to the **Function List** page.
3. Click the resource to set the trigger on the **Function List** page. It navigates to the **Function Details** page.
4. After clicking the **Trigger** tab, click the **Add Trigger** button. The **Add Trigger** popup opens.
5. In the **Add Trigger** popup window, select the **Trigger Type** item, enter the required information displayed at the bottom, and click the **Confirm** button.
* Required information varies depending on the type of trigger.
Trigger Type
Input Item
API Gateway
API name: You can select an existing API or create a new one
Stage: You can select an existing stage or create a new one
Cronjob
Refer to the example and enter the trigger’s repeat frequency (minute, hour, day, month, day of week)
Timezone setting: select the reference time zone to apply
Table. Required input items when adding a trigger
6. After moving to the **Code** tab, click the **Edit** button. You will be taken to the **Function Code Edit** page.
7. After adding the processing logic for success and failure cases, click the **Save** button.
* Node.js source code
const request = require('request');

/**
 * @description User writing area (Function details)
 */
exports.handleRequest = async function (params) {
    return await sendRequest(params);
};

async function sendRequest(req) {
    return new Promise((resolve, reject) => {
        // Port 80 and Port 443 are available
        url = "https://example.com"; // Destination URL
        const options = {
            uri: url,
            method: 'GET',
            json: true,
            strictSSL: false,
            rejectUnauthorized: false
        }
        request(options, (error, response, body) => {
            if (error) {
                reject(error);
            } else {
                resolve({
                    statusCode: response.statusCode,
                    body: JSON.stringify(body)
                });
            }
        });
    });
}
Send HTTP requests - Node.js source code
* Python source code
import json
import requests

def handle_request(params):
    # User writing area (Function details)
    # Port 80 and Port 443 are available
    url = "https://example.com"  # Destination URL
    try:
        response = requests.get(url, verify=True)
        return {
            'statusCode': response.status_code,
            'body': json.dumps(response.text)
        }
    except requests.exceptions.RequestException as e:
        return str(e)
Send HTTP requests - Python source code
## Check function call
After calling the **function URL** in the **Configuration** tab of the **Function Details** page, check the response.
<!doctype html>
<html>
<head>
    <title>Example Domain</title>
    <meta charset="utf-8" />
    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <style type="text/css">
    body {
        background-color: #f0f0f2;
        margin: 0;
        padding: 0;
        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
    }
    div {
        width: 600px;
        margin: 5em auto;
        padding: 2em;
        background-color: #fdfdff;
        border-radius: 0.5em;
        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);
    }
    a:link, a:visited {
        color: #38488f;
        text-decoration: none;
    }
    @media (max-width: 700px) {
        div {
            margin: 0 auto;
            width: auto;
        }
    }
    </style>
</head>
<body>
<div>
    <h1>Example Domain</h1>
    <p>This domain is for use in illustrative examples in documents. You may use this
    domain in literature without prior coordination or asking for permission.</p>
    <p><a href="https://www.iana.org/domains/example">More information...</a></p>
</div>
</body>
</html>
Check function call response
# Print logs
Explains the log output settings and function call example (using function URL).
## Print logs Setup
To set up Print logs, follow the steps below.
1. Click the **All Services > Compute > Cloud Functions** menu. Navigate to the **Service Home** page of Cloud Functions.
2. Click the **Function** menu on the **Service Home** page. Move to the **Function List** page.
3. On the **Function List** page, click the resource to set the trigger. The **Function Details** page will open.
4. Click the **Trigger** tab, then click the **Add Trigger** button. The **Add Trigger** popup opens.
5. In the **Add Trigger** popup window, select the **Trigger Type** item, enter the required information displayed at the bottom, and click the **Confirm** button.
* Required information varies depending on the trigger type.
Trigger Type
Input Item
API Gateway
API name: You can select an existing API or create a new one
Stage: You can select an existing stage or create a new one
Cronjob
Refer to the example and enter the trigger’s repeat frequency (minute, hour, day, month, day of week)
Timezone setting: select the reference time zone to apply
Table. Required input items when adding a trigger
6. After moving to the **Code** tab, click the **Edit** button. You will be taken to the **Function Code Edit** page.
7. After adding the processing logic for success and failure cases, click the **Save** button.
* Node.js source code
const winston = require('winston');

// Log module setting
const logger = winston.createLogger({
    format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.printf(info => info.timestamp + ' ' + info.level + ': ' + info.message)
    ),
    transports: [new winston.transports.Console()]
});

exports.handleRequest = async function (params) {
    /**
     * @description User writing area (Function details)
     */
    const response = {
        statusCode: 200,
        body: JSON.stringify(params.body),
    };
    logger.info(JSON.stringify(response, null, 2));
    return response;
};
import json
import logging

# Log module setting
logging.basicConfig(level=logging.INFO)

def handle_request(params):
    # User writing area (Function details)
    response = {
        'statusCode': 200,
        'body': json.dumps(params.json)
    }
    logging.info(response)
    return response
# Throw a custom error
Explains the Throw a custom error setting and a function call example (using the function URL).
## Throw a custom error Setting
To set Throw a custom error, follow the steps below.
1. Click the **All Services > Compute > Cloud Functions** menu. Go to the **Service Home** page of Cloud Functions.
2. Click the **Function** menu on the **Service Home** page. Move to the **Function List** page.
3. On the **Function List** page, click the resource to set the trigger. The **Function Details** page will open.
4. Click the **Trigger** tab, then click the **Add Trigger** button. The **Add Trigger** popup window opens.
5. In the **Add Trigger** popup window, select the **Trigger Type** item, enter the required information displayed at the bottom, and click the **Confirm** button.
* Required information varies depending on the trigger type.
Trigger Type
Input Item
API Gateway
API name: You can select an existing API or create a new one
Stage: You can select an existing stage or create a new one
Cronjob
Refer to the example and enter the trigger’s repeat frequency (minute, hour, day, month, day of week)
Timezone setting: select the time zone to apply
Table. Required input items when adding a trigger
6. After moving to the **Code** tab, click the **Edit** button. You will be taken to the **Function Code Edit** page.
7. After adding the processing logic for success and failure cases, click the **Save** button.
* Node.js source code
class CustomError extends Error {
    constructor(message) {
        super(message);
        this.name = 'CustomError';
    }
}

exports.handleRequest = async function (params) {
    /**
     * @description User writing area (Function details)
     */
    throw new CustomError('This is a custom error!');
};
Throw a custom error - Node.js source code
* Python source code
class CustomError(Exception):
    def __init__(self, message):
        self.message = message

def handle_request(params):
    raise CustomError('This is a custom error!')
Throw a custom error - Python source code
* PHP source code
<?php
class CustomError extends Exception {
    public function __construct($message) {
        parent::__construct($message);
        $this->message = $message;
    }
}

function handle_request() {
    throw new CustomError('This is a custom error!');
}
?>
## Check function call
After calling the **function URL** in the **Configuration** tab of the **Function Details** page, check for errors in the **Log** tab.
# Using Environment Variable
Explains the Using Environment Variable configuration and a function call example (using the function URL).
## Using Environment Variable Setup
To set Using Environment Variable, follow the steps below.
1. Click the **All Services > Compute > Cloud Functions** menu. Go to the **Service Home** page of Cloud Functions.
2. Click the **Function** menu on the **Service Home** page. Navigate to the **Function List** page.
3. On the **Function List** page, click the resource to set the trigger. Navigate to the **Function Details** page.
4. Click the **Trigger** tab, then click the **Add Trigger** button. The **Add Trigger** popup opens.
5. In the **Add Trigger** popup window, select the **Trigger Type** item, enter the required information displayed at the bottom, and click the **Confirm** button.
* Required information varies depending on the type of trigger.
Trigger Type
Input Item
API Gateway
API name: You can select an existing API or create a new one
Stage: You can select an existing stage or create a new one
Cronjob
Refer to the example and enter the trigger’s repeat frequency (minute, hour, day, month, day of week)
Timezone setting: select the reference time zone to apply
Table. Required input items when adding a trigger
6. After moving to the **Code** tab, click the **Edit** button. You will be taken to the **Function Code Edit** page.
7. After adding the processing logic for success and failure cases, click the **Save** button.
* Node.js source code
exports.handleRequest = async function (params) {
    /**
     * @description User writing area (Function details)
     */
    return process.env.test;
};
Using Environment Variable - Node.js source code
* Python source code
import json
import os

def handle_request(params):
    # User writing area (Function details)
    return os.environ.get("test")
Using Environment Variable - Python source code
* PHP source code
<?php
function handle_request() {
    # User writing area (Function details)
    return getenv("test");
}
?>
Using Environment Variable - PHP source code
9. After moving to the **Configuration** tab, click the **Edit** button in the **Environment Variable** area. The **Edit Environment Variable** popup window opens.
10. After entering the environment variable information, click the **Confirm** button.
Category
Detailed description
Name
Enter Key value
Value
Enter Value
Table. Environment Variable Input Items
## Check function call
After calling the **function URL** in the **Configuration** tab of the **Function Details** page, check the environment variable values in the **Log** tab.
2.6.2.4 - PrivateLink Service Integration
By linking Cloud Functions with the PrivateLink service, you can connect VPC to VPC and VPC to services within Samsung Cloud Platform without the external internet. Because data travels only over the internal network, security is enhanced, and no public IP, NAT, VPN, or internet gateway is required.
PrivateLink Service Integration
You can expose the function via PrivateLink Service so that it can be accessed privately from another VPC.
To integrate the PrivateLink service, follow the steps below.
Click the All Services > Compute > Cloud Functions menu. Navigate to the Service Home page of Cloud Functions.
Click the Cloud Functions menu on the Service Home page. You will be taken to the Function list page.
On the Function List page, click the resource to associate with PrivateLink. Navigate to the Function Details page.
Click the Configuration tab on the Function Details page.
Click the Edit button of PrivateLink Service in Private connection configuration. The Edit PrivateLink Service popup window opens.
In the Edit PrivateLink Service popup window, check the Use item of Activation Status, then click the Confirm button. The PrivateLink Service information is displayed in the Private Connection Configuration of the Configuration tab.
Category
Detailed description
Private URL
PrivateLink Service URL information
PrivateLink Service ID
PrivateLink Service ID information
Request Endpoint Management
List of PrivateLink Endpoints that requested connection to PrivateLink Service
Endpoint ID and approval status
Click the Approval Management button to change the status
Requesting: Endpoint that is requesting connection. Click Approve or Reject button to select approval
Active: Endpoint with completed connection. Click Block button to disconnect
Disconnected: Endpoint whose connection has been terminated. Click Reconnect button to connect
Reject: Endpoint whose connection request was denied
Table. PrivateLink Service Detailed Information Items
PrivateLink Endpoint Create
Create an entry point for accessing the PrivateLink Service from the user's VPC.
Caution
Additional costs may be incurred when creating an endpoint.
To create a PrivateLink Endpoint, follow these steps.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
On the Service Home page, click the Cloud Functions menu. It navigates to the Function list page.
Click the resource to associate with PrivateLink on the Function List page. You will be taken to the Function Details page.
Click the Configuration tab on the Function Details page.
Click the Add button of PrivateLink Endpoint in Configure Private Connection. The Add PrivateLink Endpoint popup opens.
In the Add PrivateLink Endpoint popup window, enter the PrivateLink Service ID and Alias information, then click the Confirm button.
When the popup notifying creation opens, click the Confirm button. In the Configuration tab’s Private connection configuration, the PrivateLink Endpoint information is displayed.
Category
Detailed description
PrivateLink Endpoint ID
PrivateLink Endpoint ID information
PrivateLink Service ID
PrivateLink Service ID information
Alias
Alias information
Status
Approval status of PrivateLink Endpoint
Requesting: Pending approval
Active: Approved and connected
Disconnected: Disconnected
Reject: Approval denied. Click the Re-request button to request again
Delete: Delete the endpoint
Table. PrivateLink Endpoint detailed information items
2.6.2.5 - Resource-Based Policy Guide
Overview of Resource-Based Policies
A resource-based policy for Cloud Functions is a policy attached to a resource that can allow or deny (Effect) specific actions (Action) for a given principal (Principal).
You can directly define the principal that can execute (Invoke) a function using resource-based policies.
Reference
While a typical IAM policy (Identity-based) grants permissions to a user, a resource-based policy is applied to the function itself to allow external access.
You can allow function calls by defining the following in a resource-based policy.
User of the specified Samsung Cloud Platform account
Specified source IP address range or CIDR block
Resource policies are defined as JSON policy documents attached to an API and control whether a specified security principal (typically an IAM role or group) can call the API.
Category
Explanation
Example
Principal
Specify the function caller
specific object storage bucket, API Gateway, other Samsung Cloud Platform accounts, etc.
Action
Define the allowed functionality
Mostly scf:InvokeFunction
Condition
Restrict to allow only in specific situations
Allow only requests from a bucket with a specific SRN.
Table. Entities that control whether an API can be called
Reference
Cloud Functions’ resource-based policies leverage the rules of IAM’s resource-based policies.
Although it is not automatically registered as a resource‑based policy for Cloud Functions, users can add and use it as needed.
The scenarios that users can add and utilize are as follows.
Cross-Account Access
When an IAM user in account A wants to invoke a function in account B, add account A to the function policy of account B.
Hybrid Access Control
You can configure it so that access is allowed only when both conditions are met—a specific user and a specific IP range—rather than restricting just the account or IP alone.
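The hybrid scenario above could be expressed as a resource-based policy document. The sketch below is illustrative only: the SRN-style account ID, user name, and IP range are hypothetical placeholders, and the field layout follows common cloud policy conventions rather than a confirmed Samsung Cloud Platform schema; only Effect, Principal, Action, and Condition come from the table above.

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "srn:scp:iam::ACCOUNT-A-ID:user/example-user",
      "Action": "scf:InvokeFunction",
      "Condition": {
        "IpAddress": { "SourceIp": "10.0.0.0/16" }
      }
    }
  ]
}
```

With both Principal and Condition present, a request is allowed only when the caller matches the principal and originates from the specified IP range.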
Resource-based policy management for Cloud Functions
To view and configure resource-based policies for Cloud Functions, follow these steps.
Click the All Services > Compute > Cloud Functions menu. Navigate to the Service Home page of Cloud Functions.
On the Service Home page, click the Function menu. Navigate to the Function List page.
On the Function List page, click the resource for which you want to set a policy. Navigate to the Function Details page.
Click the Configuration tab on the Function Details page.
Click the Edit button for the Resource-based policy permissions item. The Resource Policy edit popup window opens.
In the Resource Policy edit popup, select a Policy Template and then write the policy.
When you have finished writing, click the Confirm button.
When you click the Delete button, the registered policy is deleted.
Resource-based policy example
Users can define additional resource-based policies as needed or modify existing policies for use.
Note
For some features, a resource-based policy (or credential) must be registered before they can be used in Cloud Functions.
For the resource-based policy examples described in this guide, Cloud Functions registers the example policy automatically when the corresponding feature is enabled or linked.
Function URL - Authentication Type None
A policy that permits public invocation when the Principal is *.
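For example, a public-invoke policy of this kind might look like the following sketch. Only the Effect/Principal/Action fields and the `scf:InvokeFunction` action are taken from this guide; the exact document schema (such as the Statement wrapper) is an assumption:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "scf:InvokeFunction"
    }
  ]
}
```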
You can use functions in conjunction with the AIOS service.
Cloud Functions can be linked with AIOS to utilize LLM.
You can use functions in conjunction with the PrivateLink service.
Through Private connection (PrivateLink), you can internally connect Samsung Cloud Platform’s VPC to VPC, and VPC to services without going through the Internet.
The feature to upload Java Runtime executable files has been added.
You can upload a Java Runtime executable archive file (.jar/.zip) to Object Storage and configure the function to fetch it from there.
2025.07.01
NEW Cloud Functions Service Official Version Release
Cloud Functions service has been officially launched.
It is a serverless computing-based FaaS (Function as a Service) that easily runs function-style applications without the need for server provisioning.
2.7 - Virtual Server DR
In the event of a system disruption due to various disasters and risk factors, the Virtual Server and its connected Block Storage can be replicated to another region, restoring normal operating conditions in a short period of time.
2.7.1 - Overview
Service Overview
Virtual Server DR is a service that can quickly recover the system by replicating Virtual Server and connected Block Storage in a different region from the currently used region.
Even in the event of various disasters and unexpected situations that interrupt the system, Virtual Server DR can be used to quickly recover to a normal operating state.
Notice
Virtual Server DR service can be configured through a partner solution sold in the Samsung Cloud Platform’s Marketplace.
For more information about using Marketplace, please refer to Marketplace.
Caution
When you purchase and use services sold on the Marketplace, a separate contract is concluded with the Marketplace software supplier, and a separate tax invoice is issued.
If you have applied for a partner solution product for Virtual Server DR on the Marketplace, the application information will be sent to the person in charge by email. Please coordinate the product details and schedule with the person in charge. The software installation and cost will be charged based on the confirmed date.
Services sold in the Samsung Cloud Platform’s Marketplace are services sold by individual sellers, and SamsungSDS is an intermediary of electronic commerce and is not a party to the electronic commerce. Therefore, SamsungSDS does not guarantee or take responsibility for the service information and transactions sold by individual sellers.
Features
Easy DR Environment Configuration: You can easily configure a Virtual Server for DR configuration through partner solutions in Samsung Cloud Platform’s Marketplace.
Various Environment Configuration: Using partner solutions, you can configure various environments such as physical to virtual environment (P2V), virtual to virtual environment (V2V), and support multiple operating systems (Windows, Linux).
Service Composition Diagram
Figure. Virtual Server DR Configuration Diagram
Provided Features
For the main features, refer to the product catalog details page of the partner solution sold in the Samsung Cloud Platform Marketplace.
Note
For more information on using the Marketplace, please refer to Marketplace.
Prerequisite Services
This is a list of services that must be configured before creating this service. For details, refer to the guide provided for each service and prepare them in advance.
Virtual Server DR service has been officially released.
The system can be restored to normal operating conditions in a short period of time when it is interrupted by various disasters and risk factors.
2.8 - Block Storage
2.8.1 - Overview
Service Overview
Block Storage is a high-performance storage that stores data in block units arranged in a certain size and array. It is suitable for large-capacity, high-performance requirements such as databases and mail servers, and users can directly assign volumes to servers for use.
Key Features
Large-capacity Volume Provision: Volumes for OS configuration are created with at least the minimum capacity per image and can be expanded up to 12TB, and volumes for data storage other than the OS can be created and expanded from a minimum of 8GB to a maximum of 12TB. Capacity can be expanded safely while the volume remains online.
High Performance Based on Full SSD: Provides high durability and availability based on redundant Controllers and Disk Array Raid. Since Full SSD disks are provided by default, it is suitable for high-speed data processing tasks such as database workloads.
Snapshot Backup: Recovery of changed and deleted data is possible through the image snapshot function. Users can select a snapshot created at the desired time point from the list to perform recovery.
Service Architecture
Figure. Block Storage Architecture
Provided Features
Block Storage provides the following features:
Volume Name: Users can set or modify names for each volume.
Capacity: Volumes can be created with capacities from a minimum of 8GB to a maximum of 12TB, and can be expanded during use. OS basic volumes can be created with at least the minimum capacity per image.
Connected Server: You can connect or disconnect by selecting a Virtual Server.
Multi Attach: A volume can be attached to 2 or more servers with no limit on the number of connected servers per volume; a Virtual Server can attach up to 26 volumes.
Encryption: AES-256 algorithm encryption is applied by default to all volumes of Block Storage, and when the volume is HDD/SSD_KMS disk type, it additionally provides transfer encryption between the instance and the Block Storage section connected to the instance.
Snapshot: Recovery of changed and deleted data is possible through the image snapshot function. Users select a snapshot created at the desired time point from the list to recover.
Volume Transfer: You can transfer volumes to another Account through the volume transfer function.
Monitoring: You can check monitoring information such as IOPS, Latency, Throughput, etc. through the Cloud Monitoring service.
Components
You can create a volume by entering capacity according to your service scale and performance requirements and selecting a disk type. When using the snapshot function, you can recover data to the desired time point.
Volume
Volume is the basic creation unit of the Block Storage service and is used as data storage space. Users create a volume by selecting a name, capacity, and disk type, then connect it to a Virtual Server for use. The volume name creation rules are as follows:
Note
Enter within 255 characters using English letters, numbers, spaces, and special characters (-, _).
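The naming rule above can be checked client-side with a short script. The regular expression below is just an illustration of the stated rule (English letters, numbers, spaces, `-`, `_`, within 255 characters), not an official API:

```python
import re

# Illustration of the documented Block Storage volume naming rule:
# English letters, numbers, spaces, and the special characters - and _,
# within 255 characters.
VOLUME_NAME_RE = re.compile(r"^[A-Za-z0-9 _-]{1,255}$")

def is_valid_volume_name(name: str) -> bool:
    """Return True if the name follows the documented naming rule."""
    return bool(VOLUME_NAME_RE.fullmatch(name))

print(is_valid_volume_name("data-volume_01"))  # True
print(is_valid_volume_name("db/volume"))       # False: '/' is not allowed
```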
Snapshot
Snapshot is an image backup of a volume at a specific point in time. Users can check the snapshot name and creation date/time from the snapshot list and select the snapshot they want to recover, and can recover changed or deleted data through that snapshot. Notes for using snapshots are as follows:
Note
Snapshot creation date/time is based on Asia/Seoul(GMT +09:00).
You can recover a Block Storage volume to the latest snapshot by selecting the snapshot recovery button.
When selecting a specific snapshot from the snapshot list, you can recover by creating a new snapshot-based volume.
Snapshots are charged according to the size of the original Block Storage, so please delete unnecessary snapshots.
Prerequisite Services
This is a list of services that must be pre-configured before creating this service. Please prepare in advance by referring to the guides provided for each service.
The table below shows the monitoring metrics of Block Storage that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, refer to the Cloud Monitoring guide.
Performance Item Name
Description
Unit
Volume Total
Total Bytes
bytes
IOPS [Read]
IOPS (Read)
iops
IOPS [Write]
IOPS (Write)
iops
Latency Time [Read]
Delay Time (Read)
usec
Latency Time [Write]
Delay Time (Write)
usec
Throughput [Read]
Throughput (Read)
bytes/s
Throughput [Write]
Throughput (Write)
bytes/s
Table. Block Storage monitoring metrics
2.8.2 - How-to guides
Users can create the Block Storage service by entering required information through the Samsung Cloud Platform Console and selecting detailed options.
Creating Block Storage
You can create and use the Block Storage service through the Samsung Cloud Platform Console.
To create Block Storage, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
Click the Create Service button on the Block Storage page. This will take you to the Create Block Storage page.
Enter the required information for creating the service on the Create Block Storage page and select detailed options.
Item
Required
Description
Volume Name
Required
Volume name
Enter within 255 characters using English letters, numbers, spaces, and special characters (-, _)
Snapshot
Optional
Select snapshot to use when creating volume through snapshot
After checking the Use item, you can select a snapshot
When creating service through snapshot recovery volume, provides recovery snapshot name
If not selected, an empty volume is created
Disk Type
Required
Select disk type
SSD_Provisioned: SSD volume where IOPS and Throughput can be set
SSD/HDD: General SSD/HDD volume
SSD/HDD_KMS: Additional encryption volume
SSD/HDD_MultiAttach: Volume that can be connected to 2 or more servers
Cannot modify after service creation
When creating service through snapshot recovery volume, it is set the same as the original and cannot be modified
Capacity
Optional
Set capacity
Can be created within 8~12,288GB
Enter in 8GB increments
When creating a service from a snapshot recovery volume, enter a capacity equal to or larger than the original
Max IOPS
Required
Enter maximum IOPS value between 5,000~20,000
Can only set when Disk Type is SSD_Provisioned
Max Throughput
Required
Enter maximum Throughput value between 250~1,000
Can only set when Disk Type is SSD_Provisioned
Table. Block Storage service information input items
Review the detailed information and estimated charges in the Summary panel, then click the Create button.
Once creation is complete, verify the created resource on the Block Storage List page.
Note
AES-256 algorithm encryption is applied by default to all volumes of Block Storage.
Windows-based Virtual Server cannot use MultiAttach disks. Please use a separate replication method or solution.
When the volume is HDD/SSD_KMS disk type, it additionally provides transfer encryption between the instance and the Block Storage section connected to the instance.
Warning
When using HDD/SSD_KMS disk type, approximately 60% performance degradation may occur.
Viewing Block Storage Details
Block Storage service allows you to view and modify the complete resource list and detailed information. The Block Storage Details page consists of Details, Snapshot List, Tags, and Operation History tabs.
To view detailed information of Block Storage service, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
Click the resource for which you want to view detailed information on the Block Storage List page. This will take you to the Block Storage Details page.
The Block Storage Details page displays status information and additional feature information, and consists of Details, Snapshot List, Tags, and Operation History tabs.
Item
Description
Volume Status
Status of the volume
Creating: Being created
Downloading: Being created (OS image being applied)
Available: Creation complete, can connect to server
Reserved: Waiting for server connection
Attaching: Connecting to server
Detaching: Disconnecting from server
In Use: Server connection complete
Deleting: Service being terminated
Awaiting Transfer: Waiting for volume transfer
Extending: Capacity expansion
Error Extending: Abnormal state during capacity expansion
Backing Up: Volume being backed up
Restoring Backup: Volume backup being recovered
Error Backing Up: Volume backup abnormal state
Error Restoring: Volume backup recovery abnormal state
To perform detailed search, click the detailed search button
Table. Operation history tab detailed information items
Managing Block Storage Resources
If you need to modify settings of created Block Storage or add or delete connected servers, you can perform tasks on the Block Storage Details page.
Modifying Volume Name
You can modify the name of a volume. To modify the volume name, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
Click the resource for which you want to modify the volume name on the Block Storage List page. This will take you to the Block Storage Details page.
Click the Modify button of Volume Name. The Modify Volume Name popup window opens.
Enter the volume name and click the Confirm button.
Note
Enter within 255 characters using English letters, numbers, spaces, and special characters (-, _).
Expanding Capacity
You can expand the capacity of a volume. To expand capacity, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
Click the resource for which you want to expand capacity on the Block Storage List page. This will take you to the Block Storage Details page.
Click the Modify button of Capacity. The Modify Capacity popup window opens.
Enter the capacity and click the Confirm button.
Warning
Capacity reduction is not provided.
After capacity expansion, recovery to snapshots before expansion is not possible.
Only recovery by creating a new volume is possible with snapshots created before capacity expansion.
Note
Can expand to a capacity larger than the existing capacity, within 8~12,288GB.
Enter in 8GB increments.
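The expansion rules above can be expressed as a small check. This is a sketch of the documented constraints (larger than the current size, within the 8GB~12TB range, in 8GB increments; reduction is not provided), not an SDK function:

```python
# Documented Block Storage limits: 8GB minimum, 12TB (12,288GB) maximum,
# capacity is entered in 8GB increments.
MIN_GB, MAX_GB, UNIT_GB = 8, 12_288, 8

def can_expand(current_gb: int, requested_gb: int) -> bool:
    """Check a capacity-expansion request against the documented rules.

    Capacity reduction is not provided, so the request must be strictly
    larger than the current size.
    """
    return (
        requested_gb > current_gb
        and MIN_GB <= requested_gb <= MAX_GB
        and requested_gb % UNIT_GB == 0
    )

print(can_expand(24, 32))      # True: grows in 8GB units
print(can_expand(24, 24))      # False: reduction/no-op is not allowed
print(can_expand(24, 12_296))  # False: exceeds the 12,288GB maximum
```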
Modifying Connected Server
You can connect or disconnect servers. To modify connected server, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
Click the resource for which you want to modify connected server on the Block Storage List page. This will take you to the Block Storage Details page.
When adding Virtual Server connection, click the Add button in the Connected Server item. The Add Connected Server popup window opens.
Select the Virtual Server you want to connect and click the Confirm button.
When disconnecting Virtual Server connection, click the Disconnect button in the Connected Server item.
Be sure to perform disconnection work (Umount, Disk Offline) from the server before disconnecting.
Warning
Be sure to perform disconnection work (Umount, Disk Offline) from the server before disconnecting the connected server. If disconnected without OS work, a status error (Hang) may occur on the connected server. For details on server disconnection, see Disconnecting Server.
Note
You can connect Virtual Servers created in the same location as the Block Storage.
Virtual Servers using the Partition Server Group policy cannot be connected.
For the HDD/SSD_MultiAttach disk type, you can connect 2 or more Virtual Servers, and there is no limit on the number of connections.
Windows-based Virtual Servers cannot use MultiAttach disks and must use a separate replication method or solution.
A Virtual Server can connect up to 26 volumes, including the OS basic volume.
For the OS basic volume, you cannot modify the connected server or terminate the service.
After adding a connected server, the volume can be used after performing the connection work (Mount, Disk Online) on the server. For details on server connection, see Connecting to the Server.
Terminating Block Storage
You can reduce operating costs by terminating unused Block Storage. However, since terminating a service may immediately stop the operating service, you should proceed with termination after fully considering the impact of service interruption.
Warning
Be careful as data cannot be recovered after termination.
Block Storage volumes cannot be terminated in the following cases.
While connected to server
OS basic volume
Connected to Custom Image of Virtual Server
When volume status is not Available, Error, Error Extending, Error Restoring, Error Managing
When terminating by selecting 2 or more volumes, only volumes that can be terminated will be terminated.
To terminate Block Storage, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
Select the resource to terminate on the Block Storage List page and click the Terminate Service button.
When termination is complete, verify that the resource has been terminated on the Block Storage List page.
2.8.2.1 - Connecting to the Server
When using a volume on a server, connection or disconnection work is required. From the Block Storage Details page, add the connection server and then connect to the server to perform the connection work (Mount, Disk Online). After use, perform the disconnection work (Umount, Disk Offline) and then remove the connection server.
Connecting to the Server (Mount, Disk Online)
To use the volume added to the connection server, you must connect to the server and perform the connection work (Mount, Disk Online). Follow the procedure below.
Linux Operating System
Server Connection Example Configuration
Server OS: LINUX
Mount location: /data
Volume capacity: 24 GB
File system: ext3, ext4, xfs, etc.
Additional allocated disk: /dev/vdb
Click the All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Block Storage menu. Move to the Block Storage List page.
On the Block Storage List page, click the resource to be used by the connection server. Move to the Block Storage Details page.
Check the server in the Connection Server section and connect to it.
Refer to the procedure below to connect (Mount) the volume.
Switch to root privileges
$ sudo -i
Check the disk
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
Create a partition
# fdisk /dev/vdb
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-50331646, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-50331646, default 50331646):
Created a new partition 1 of type 'Linux' and of size 24 GiB.
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Set the partition format (e.g., ext4)
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
└─vdb1  252:17   0   24G  0 part
# mkfs.ext4 /dev/vdb1
mke2fs 1.46.5 (30-Dec-2021)
...
Writing superblocks and filesystem accounting information: done
Mount the volume
# mkdir /data
# mount /dev/vdb1 /data
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
└─vdb1  252:17   0   24G  0 part /data
# vi /etc/fstab
(add) /dev/vdb1 /data ext4 defaults 0 0
Item
Description
cat /etc/fstab
File system information file
Used when the server starts
df -h
Check the total disk usage of the mounted disk
fdisk -l
Check partition information
Physical disks are displayed with letters such as /dev/sda, /dev/sdb, /dev/sdc
Disk partitions are displayed with numbers such as /dev/sda1, /dev/sda2, /dev/sda3
Table. Mount Command Reference
Command
Description
m
Check the usage of the fdisk command
n
Create a new partition
p
Check the changed partition information
t
Change the system ID of the partition
w
Save the partition information and exit the fdisk settings
Windows Operating System
Click the All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Block Storage menu. Move to the Block Storage List page.
On the Block Storage List page, click the resource to be used by the connection server. Move to the Block Storage Details page.
Check the server in the Connection Server section and connect to it.
Refer to the procedure below to connect (Disk Online) the volume.
Right-click the Windows start icon and run Computer Management
In the Computer Management tree structure, select Storage > Disk Management
Check the disk
Bring the disk online
Initialize the disk
Format the partition
Check the volume
Disconnecting from the Server (Umount, Disk Offline)
Connect to the server and perform the disconnection work (Umount, Disk Offline), and then remove the connection server from the console.
Follow the procedure below.
Note
If you disconnect the server from the console without performing the disconnection work (Umount, Disk Offline) on the server, a server status error (Hang) may occur.
Be sure to perform the OS work first.
For the OS basic volume, connection server modification and service termination are not allowed.
Linux Operating System
Click the All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Block Storage menu. Move to the Block Storage List page.
On the Block Storage List page, click the resource to be disconnected from the connection server. Move to the Block Storage Details page.
Check the server in the Connection Server section and connect to it.
Refer to the procedure below to disconnect (Umount) the volume.
Umount the volume
# umount /data
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
└─vdb1  252:17   0   24G  0 part
# vi /etc/fstab
(delete) /dev/vdb1 /data ext4 defaults 0 0
Windows Operating System
Click the All Services > Compute > Virtual Server menu. Move to the Service Home page of Virtual Server.
Click the Block Storage menu. Move to the Block Storage List page.
On the Block Storage List page, click the resource to be disconnected from the connection server. Move to the Block Storage Details page.
Check the server in the Connection Server section and connect to it.
Unmount the file system.
Refer to the procedure below to disconnect (Disk Offline) the volume.
Right-click the Windows start icon and run Computer Management
In the Computer Management tree structure, select Storage > Disk Management
Right-click the disk to be removed and run Offline
Check the disk status
2.8.2.2 - Using Snapshots
You can create, delete, or restore snapshots of the created Block Storage using snapshots. You can perform actions on the Block Storage Details page and Snapshot List page.
Create Snapshot
You can create a snapshot of the current point in time. To create a snapshot, follow these steps.
Click the All Services > Compute > Virtual Server menu. Navigate to the Service Home page of Virtual Server.
Click the Block Storage menu. Navigate to the Block Storage List page.
On the Block Storage List page, click the resource for which you want to create a snapshot. Navigate to the Block Storage Details page.
Click the Create Snapshot button. The Create Snapshot popup window opens.
Enter the Snapshot Name and Description, then click the Confirm button. A snapshot of the current point in time is created.
Click the Snapshot List button. Navigate to the Block Storage Snapshot List page.
Check the created snapshot.
Caution
Snapshots are charged based on the size of the original Block Storage, so please delete unnecessary snapshots.
Note
The snapshot creation time is based on Asia/Seoul (GMT +09:00).
Edit Snapshot
You can edit snapshot information. To edit the snapshot name or description, follow the steps below.
Click the All Services > Compute > Virtual Server menu. Navigate to the Service Home page of Virtual Server.
Click the Block Storage menu. Navigate to the Block Storage List page.
On the Block Storage List page, click the resource whose snapshot information you want to edit. Navigate to the Block Storage Details page.
Click the Snapshot List button. Navigate to the Block Storage Snapshot List page.
Check the snapshot to edit, then click the More button.
Click the Edit button. The Edit Snapshot popup window opens.
Enter the Snapshot Name or Description and click the Confirm button.
Recover Snapshot
You can restore a Block Storage volume to the latest snapshot in Available state. To perform snapshot restoration, follow the steps below.
Click the All Services > Compute > Virtual Server menu. Navigate to the Service Home page of Virtual Server.
Click the Block Storage menu. Navigate to the Block Storage List page.
On the Block Storage List page, click the resource to recover from a snapshot. Navigate to the Block Storage Details page.
If a server is added in the Connected Server item, connect to the server and perform the disconnection work (Umount, Disk Offline).
For details on server disconnection, refer to Disconnecting from the Server.
On the Block Storage Details page, click the Disconnect button in the Connected Server item to remove the server. The connected server is removed.
Click the Confirm button. Navigate to the Create Block Storage page.
On the Create Block Storage page, enter the information required to create the service and select detailed options.
Enter the volume name and capacity. You can enter a capacity equal to or larger than the original volume.
The disk type is set the same as the original and cannot be modified.
Item
Required
Description
Volume Name
Required
Volume Name
Enter up to 255 characters using English letters, numbers, spaces, and special characters (-, _)
Disk Type
Required
Select Disk Type
HDD: Standard volume
SSD: High-performance standard volume
HDD/SSD_KMS: Volume that additionally provides transmission encryption between the instance and Block Storage
HDD/SSD_MultiAttach: Volume that can be attached to more than one server
Cannot be modified after service creation
When creating a service via snapshot recovery volume creation, it is set identical to the original and cannot be modified
Capacity
Optional
Capacity Setting
Can be created within 8~12,288GB
Enter in 8GB increments
When creating a service from a snapshot recovery volume, enter a capacity equal to or larger than the original
Recovery Snapshot Name
Optional
Name of the recovery snapshot used when creating the volume
Provides the recovery snapshot name when creating a service through snapshot recovery volume creation
Table. Block Storage Service Information Input Items
Review the detailed information and estimated charges in the Summary panel, then click the Complete button.
Once creation is complete, check the created resource on the Block Storage List page.
Delete Snapshot
You can select a snapshot to delete. To delete a snapshot, follow these steps.
Click the All Services > Compute > Virtual Server menu. Navigate to the Service Home page of Virtual Server.
Click the Block Storage menu. Navigate to the Block Storage List page.
On the Block Storage List page, click the resource whose snapshot you want to delete. Navigate to the Block Storage Details page.
Click the Snapshot List button. Navigate to the Block Storage Snapshot List page.
Check the snapshot name, description, and creation date/time, then click the More button of the snapshot to delete.
Click the Delete button. The snapshot is removed from the Snapshot List page.
2.8.2.3 - Volume Transfer
You can transfer a volume to a different Account; when transferred, the volume is removed from the original Account. You can perform volume transfer from the Block Storage List page or the Block Storage Details page.
Creating a Volume Transfer
You can transfer a volume to a different account within the region. To transfer a volume, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
On the Block Storage List page, select the resource to transfer, then click the More > Transfer Volume button at the top left of the list,
or click the Transfer Volume button at the top of the Block Storage Details page of the resource to transfer.
When the Transfer Volume popup window opens, check the name of the volume to transfer and click the Confirm button.
When the transfer completion popup window opens, click the Confirm button. The Volume Transfer ID and Approval Key information will be downloaded as a text file.
The volume changes to the Awaiting Transfer status.
Caution
Volume transfer is possible only within the same region.
Volume transfer is possible only when the volume is in the Available state. If it is in the In Use state, disconnect all connected servers first.
Canceling a Volume Transfer
You can cancel a volume transfer after it is created. To cancel a volume transfer, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
On the Block Storage List page, click the resource whose volume transfer you want to cancel. This will take you to the Block Storage Details page.
You can cancel when the volume is in the Awaiting Transfer state.
Click the Cancel Volume Transfer button. The Cancel Volume Transfer popup window opens.
Check the name of the volume whose transfer you want to cancel and click the Confirm button.
The volume changes to the Available status.
Receiving a Volume Transfer
You can receive volumes from other accounts within the region. To receive a volume, follow these steps:
Click the All Services > Compute > Virtual Server menu. This will take you to the Virtual Server Service Home page.
Click the Block Storage menu. This will take you to the Block Storage List page.
On the Block Storage List page, click the More > Receive Volume Transfer button at the top left of the list. The Receive Volume Transfer popup window opens.
Enter the Volume Transfer ID and Approval Key provided when the volume transfer was created.
The volume is created on the Block Storage List page.
Notice
It may take some time for the changes to be reflected.
The transferred volume is removed from the account that created the volume transfer.
2.8.3 - API Reference
API Reference
2.8.4 - CLI Reference
CLI Reference
2.8.5 - Release Note
Block Storage
2025.07.01
FEATURE: Snapshot Billing Policy Change and Monitoring Linkage
Snapshots are now charged based on the size of the original Block Storage volume.
Block Storage has been integrated with Cloud Monitoring.
You can check IOPS, Latency, Throughput information in Cloud Monitoring.
2025.02.27
FEATURE: Block Storage disk type added
Block Storage feature change
The HDD disk type has been added, and you can select the added type (HDD, HDD_MultiAttach, HDD_KMS) according to the purpose.
Samsung Cloud Platform common feature changes
Common CX changes have been applied to Account, IAM, Service Home, tags, and more.
2024.10.01
NEW: Block Storage Service Official Version Release
SSD_KMS disk type has been added.
When SSD_KMS is selected, encryption using a KMS (Key Management Service) encryption key is applied.
Released a high-performance storage service suitable for handling large-scale data and database workloads.
2024.07.02
NEW: Beta version release
Released a high-performance storage service suitable for handling large-scale data and database workloads.
3 - Storage
Provides a data storage service that enhances stability and efficiency through various storage configurations.
3.1 - Block Storage(BM)
3.1.1 - Overview
Service Overview
Block Storage is a high-performance storage that stores data in block units with a certain size and arrangement. It is suitable for large-capacity, high-performance requirements such as databases, mail servers, etc. and users can directly assign volumes to the server for use.
Key Features
High-capacity volume provision: You can create volumes of up to 16TB in size.
Full SSD-based high performance: Built on dualized controllers and RAID-based disk arrays, providing high durability and availability. Because full SSD disks are provided by default, it is suitable for high-speed data processing tasks such as database workloads.
Snapshot backup: Through the image snapshot function, you can recover data that has been changed or deleted. Select a snapshot created at the desired recovery point from the list and perform the recovery.
Replication: Creates an identical replica volume in a different location, and you can set the data replication cycle. If the original volume becomes unavailable due to a disaster or failure, service can continue through the replica volume.
Composition Diagram
Figure. Block Storage configuration diagram
Provided Function
Block Storage provides the following functions.
Volume Name: The user can set or modify the name by volume.
Capacity: Volume creation is possible with a capacity of at least 1GB and up to 16TB.
Connection Server: A Bare Metal Server or Multi Node GPU Cluster can be selected to connect or disconnect.
Multi Attach: A volume can be attached to up to 5 servers; there is no limit on the number of volumes that can be attached to a Bare Metal Server.
Encryption: Regardless of the disk type, all volumes have AES-256 algorithm encryption enabled by default.
Snapshot: You can create a snapshot at a specific point in time using the image snapshot feature or generate snapshots at regular intervals.
Capacity: The capacity of the snapshot storage space
Schedule: The automatic snapshot creation cycle
Recovery: Restore the original volume to the latest snapshot, or select a snapshot at a specific point in time to create a recovery volume
Recovery volume: A separate volume created with the same capacity as the original (incurring additional costs)
Replication: replicates the volume to a different location, and users can set the replication cycle.
Through the snapshot function, the replica volume can also take over the role of the original in the event of a disaster.
Volume Group: Sets a group of up to 16 Block Storage volumes, allowing for snapshot and replication settings at the group level.
Monitoring: You can check performance information such as IOPS, Latency, Throughput through the Cloud Monitoring service.
Components
You can create a volume by entering the capacity based on your service size and performance requirements and selecting the disk type. When using the snapshot function, you can restore data to the desired point in time.
Volume
Volume is the basic creation unit of the Block Storage service and is used as data storage space. The user creates a volume by setting the name, capacity, disk type, snapshot, and other options, and then attaches it to a Bare Metal Server for use. The volume naming rule is as follows.
It must start with an English letter and can be set within 3-28 characters using English letters, numbers, and the special character (-).
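The naming rule above can be checked locally before creating a volume. Below is a minimal sketch; `is_valid_volume_name` is a hypothetical helper for illustration, not part of any Samsung Cloud Platform tooling.

```shell
# Hypothetical helper: succeeds if the name satisfies the Block Storage
# volume naming rule (starts with an English letter; 3-28 characters
# using English letters, digits, and '-').
is_valid_volume_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9-]{2,27}$'
}
```

For example, `is_valid_volume_name "db-vol-01"` succeeds, while a name starting with a digit, containing an underscore, or shorter than 3 characters fails.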
Snapshot
Snapshot is an image backup of a volume at a specific point in time. The user can select the snapshot to be recovered by checking the snapshot name and creation time in the snapshot list, and can recover the data changed or deleted through the snapshot. The notes to refer to when using snapshots are as follows.
Reference
The snapshot creation time is based on Asia/Seoul (GMT +09:00) standard.
Up to 1,023 snapshots can be created. (Up to 128 can be created automatically through scheduling.)
Automatic creation through snapshot schedule settings is possible.
The snapshot capacity is added to the Block Storage(BM) fee, so size the snapshot storage space appropriately.
Volume Group
Volume Group is a group-level management function that allows users who have configured databases and applications across two or more volumes to create snapshots and replicas at a consistent point in time. Users can create a Volume Group by selecting a name and the target Block Storage volumes.
Preceding Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service and prepare in advance.
Bare Metal Server: High-performance physical server used without virtualization
Table. Block Storage(BM) Preceding Service
3.1.1.1 - Monitoring Metrics
Block Storage BM Monitoring Metrics
The following table shows the monitoring metrics of Block Storage BM that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, please refer to the Cloud Monitoring guide.
| Performance Item Name | Description | Unit |
| --- | --- | --- |
| Instance Status | Volume status | status |
| Volume Total | Total bytes | bytes |
| IOPS [Total] | IOPS (total) | iops |
| IOPS [Read] | IOPS (read) | iops |
| IOPS [Write] | IOPS (write) | iops |
| IOPS [Other] | IOPS (other) | iops |
| Latency Time [Total] | Latency (total) | usec |
| Latency Time [Read] | Latency (read) | usec |
| Latency Time [Write] | Latency (write) | usec |
| Latency Time [Other] | Latency (other) | usec |
| Throughput [Total] | Throughput (total) | bytes/s |
| Throughput [Read] | Throughput (read) | bytes/s |
| Throughput [Write] | Throughput (write) | bytes/s |
| Throughput [Other] | Throughput (other) | bytes/s |
Table. Block Storage BM Monitoring Metrics
3.1.2 - How-to Guides
The user can enter the required information for Block Storage (BM) and select detailed options through the Samsung Cloud Platform Console to create the service.
Block Storage(BM) Create
You can create and use the Block Storage (BM) service from the Samsung Cloud Platform Console.
To create Block Storage (BM), follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Navigate to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) Create button on the Service Home page. You will be taken to the Block Storage(BM) Create page.
On the Block Storage(BM) Create page, enter the information required to create the service and select detailed options.
| Category | Required | Detailed Description |
| --- | --- | --- |
| Volume Name | Required | Volume name<br>- Must start with an English letter; 3~28 characters using English letters, numbers, and the special character (-)<br>- Cannot be modified after service creation |
| Disk Type | Required | Select the disk type<br>- SSD: high-performance general volume<br>- HDD: general volume<br>- Cannot be modified after service creation |
| Capacity | Required | Capacity setting<br>- Enter a number between 1~16,384GB<br>- Cannot be modified after service creation |
| IOPS | Required | Enter the IOPS value<br>- Enter a number between 3,000~16,000<br>- The HDD type does not provide performance metric settings |
| Throughput | Required | Enter the throughput (MB/s)<br>- Enter a number between 125~1,000<br>- The HDD type does not provide performance metric settings |
| Connection Server | Required | Select the Bare Metal Server to connect<br>- Up to 8 Bare Metal Server connections are provided<br>- No limit on the number of volumes that can be connected to a Bare Metal Server |
Table. Block Storage(BM) Service Information Input Items
Check the detailed information and estimated charges in the Summary panel, and click the Complete button.
When creation is complete, check the created resources on the Block Storage (BM) List page.
Note
All volumes are encrypted by default with server-side encryption based on the AES-256 algorithm, regardless of disk type.
Snapshot schedule can be set on the detail page.
The performance metrics (IOPS, Throughput) of the configured storage are based on the maximum values and do not guarantee consistent values.
Caution
Capacity cannot be modified after service creation. If more space is needed, add another volume to the same server.
iSCSI Setup
Volumes created by the user, other than the default OS volume, require iSCSI configuration.
After checking the IP provided in the iSCSI information section of the Block Storage(BM) Details page, follow the iSCSI configuration method for your OS.
Linux Operating System
Notice
iSCSI information (Storage Target IP) was written assuming 10.40.40.41, 10.40.40.42.
Check the iSCSI information on the Block Storage(BM) details page.
Click the All Services > Storage > Block Storage(BM) menu. Navigate to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) menu on the Service Home page. You will be taken to the Block Storage(BM) List page.
On the Block Storage(BM) List page, click the resource to be used on the connected server. You will be taken to the Block Storage(BM) Details page.
Check the server in the Connection Server item, and then connect to it.
Follow the procedure below to configure iSCSI.
Discover the storage (Target IP) connection information.
Modify the Initiator Port Address. This is required only if the server is an Active Directory member; if it is not, skip this step.
The iqn is generated based on the hostname, but when the server is joined to Active Directory it changes to DNS format. Because the storage registers the basic hostname, in iqn.1991-05.com.microsoft:iqn01.scp.com, iqn01 is the hostname. Remove the DNS information and change it to the name registered when creating the OS in the Console, such as iqn.1991-05.com.microsoft:iqn01.
Register the Multipath I/O DSM and create an MPIO disk. When creating an MPIO disk, a prompt appears if a reboot is required; reboot by entering Y or pressing the Enter key.
Code block. Multipath I/O DSM registration, MPIO Disk creation
Check the Multipath I/O disk list and paths. mpclaim.exe can be used to check MPIO disk information; enter the generated MPIO disk number to check the disk path and status.
Code block. Multipath I/O Disk list and path verification
Verify the disk in Disk Management.
Check Block Storage(BM) Detailed Information
The Block Storage(BM) service allows you to view and edit the full resource list and detailed information. The Block Storage(BM) Details page consists of the Details, Snapshot List, Replication, Tags, and Operation History tabs.
To view detailed information of the Block Storage (BM) service, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Navigate to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) menu on the Service Home page. You will be taken to the Block Storage(BM) List page.
Click the resource to view detailed information on the Block Storage(BM) List page. You will be taken to the Block Storage(BM) Details page.
The Block Storage(BM) Details page displays status information and additional feature information, and consists of the Details, Snapshot List, Replication, Tags, and Operation History tabs.
| Category | Detailed Description |
| --- | --- |
| Volume Status | Status of the volume<br>- Creating: creating<br>- Available: creation completed, server connection possible<br>- Attaching: connecting to server<br>- Detaching: disconnecting from server<br>- In Use: server connection established<br>- Deleting: service termination in progress<br>- Editing: changing settings<br>- Error Deleting: abnormal state while deleting<br>- Error: abnormal state during creation |
| Create Replication | Create a replica at another location<br>For details, see [Create Replication](/userguide/storage/block_storage_bm/how_to_guides/replication.md/#복제-생성하기) |
| Snapshot Creation | Create a snapshot at a specific point in time<br>For details, see [Create Snapshot](/userguide/storage/block_storage_bm/how_to_guides/snapshot.md/#스냅샷-생성하기) |
| Service Termination | Button to terminate the service |
Table. Status Information and Additional Functions
Reference
For a recovery volume, the Snapshot List and Replication tabs are not displayed.
Detailed Information
The Details tab displays the detailed information of the selected resource and, if necessary, allows you to edit it.

| Category | Detailed Description |
| --- | --- |
| Service | Service name |
| Resource Type | Resource type |
| SRN | Unique resource ID in Samsung Cloud Platform<br>In the Block Storage(BM) service, it refers to the volume SRN |
| Resource Name | Resource name<br>In the Block Storage(BM) service, it refers to the volume name |
| Resource ID | Unique resource ID of the service |
| Creator | User who created the service |
| Creation Time | Date/time the service was created |
| Editor | User who modified the service |
| Modification Time | Date/time the service was modified |
| Volume Name | Volume name<br>To edit the volume name, click the Edit button<br>For details, see Edit Volume Name |
| Classification | Indicates whether the volume is the original or a replica in a replication relationship |
| Storage Volume Name | Volume name within the storage device<br>Used to identify the volume when analyzing failures and issues |
| Capacity | Volume capacity |
| IOPS | IOPS value set when creating the service<br>The HDD type does not provide this metric |
| Throughput | Throughput (MB/s) set when creating the service<br>The HDD type does not provide this metric |
| Disk Type | Disk type |
| Encryption | Encryption status<br>Encryption is provided by default regardless of disk type |
| Volume Group | Name of the Volume Group the volume belongs to |
| iSCSI Information | Storage Target IP information for server connection |
| Snapshot Capacity | Capacity of the snapshot storage space<br>Charges are incurred according to the set capacity<br>To set the snapshot capacity, click the Edit button (for details, see [Edit Snapshot Capacity](/userguide/storage/block_storage_bm/how_to_guides/_index.md/#스냅샷-용량-수정하기))<br>After setting the snapshot capacity, a schedule can be registered<br>To set the snapshot schedule, click the Edit button (for details, see [Edit Snapshot Schedule](/userguide/storage/block_storage_bm/how_to_guides/_index.md/#스냅샷-스케줄-수정하기)) |
| Snapshot | Name of the snapshot<br>Displayed when a snapshot exists<br>Click the name to go to the snapshot's detail page<br>For details, see [Create Snapshot](/userguide/storage/block_storage_bm/how_to_guides/snapshot.md/#복구본-생성하기) |
| Connected Server | Connected Bare Metal Server<br>**Server Name**: server name<br>**Image**: server's OS image<br>**Status**: server status<br>To add a Bare Metal Server connection, click the **Add** button<br>To remove a connection, click the **Disconnect** button<br>For details, see [Edit Connected Server](/userguide/storage/block_storage_bm/how_to_guides/_index.md/#연결-서버-수정하기) |
Table. Block Storage(BM) Detailed Information Items
Reference
For volumes created before December 18, 2025, IOPS and Throughput information is not displayed.
Snapshot List
The Snapshot List tab displays the snapshots of the selected resource.

| Category | Detailed Description |
| --- | --- |
| Snapshot Usage | Total capacity of stored snapshots |
| Snapshot Name | Snapshot name |
| Capacity | Snapshot capacity |
| Creation Time | Snapshot creation time |
| Additional Features > More | Snapshot management buttons<br>- Restore: restore the volume from a snapshot (for details, see Snapshot Restore)<br>- Create Restore Point: create a restore point from a snapshot<br>- Delete: delete a snapshot (for details, see Delete Snapshot) |
Table. Snapshot List Tab Detailed Information Items
Caution
If the maximum number of snapshots or the snapshot space threshold (around 90%) is exceeded, older snapshots will be deleted.
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
Snapshots can be created up to a maximum of 1,023 (the automatic creation count via schedule is up to 128), and if the maximum creation count is exceeded, no more snapshots can be created.
Snapshot recovery must be performed while all connected servers are disconnected (Umount, Disk Offline), and the recovered volume can be used after being reconnected (Mount, Disk Online).
Only one recovery copy can be created, and it is a separate volume that incurs the same charges as the original.
Reference
Snapshot creation date and time is based on Asia/Seoul (GMT +09:00).
SnapMirror files cannot be deleted while replication is in use.
When using Volume Group, set the snapshot schedule on the Volume Group (BM) detail information screen. The created snapshots can be viewed in the Block Storage (BM) snapshot list.
Replication
The Replication tab displays the replication information of the selected resource.

| Category | Detailed Description |
| --- | --- |
| Replication Status | Replication progress status according to policy settings |
| Volume Information | Volume information of the original and the replica<br>- Classification: whether the volume is the original or the replica<br>- Volume Name: volume name of the original or replica<br>- Location: location where the volume was created<br>- Permissions: user permissions of the volume, set according to the replication policy |
Table. Replication Tab Detailed Information Items
Caution
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
Reference
When creating a clone, a replica with the same disk type is created.
After setting the snapshot capacity, you can create a replica.
If using Volume Group, check the replication information on the Volume Group (BM) list page.
A replica can modify the connected server if the replication policy is stopped or deleted.
The replica can use the snapshot feature after the replication policy is deleted.
If the replication policy is stopped or the replication status is completed, you can modify the policy and schedule in the replica.
Tags
The Tags tab displays the tag information of the selected resource; you can add, modify, or delete tags.

| Category | Detailed Description |
| --- | --- |
| Tag List | Tag list<br>- You can check the Key and Value of each tag<br>- Up to 50 tags can be added per resource<br>- When entering tags, search and select from the previously created Key and Value list |

Table. Block Storage(BM) Tags Tab Items
Operation History
The Operation History tab displays the operation history of the selected resource.

| Category | Detailed Description |
| --- | --- |
| Operation History List | Resource change history<br>- You can check the operation date/time, resource type, resource ID, resource name, operation details, event topic, operation result, and operator<br>- Click the detailed search button to perform a detailed search |

Table. Operation History Tab Detailed Information Items
Block Storage(BM) Resource Management
If you need to modify the settings of a created Block Storage (BM) or add or delete a connected server, you can perform the task on the Block Storage (BM) Details page.
Edit Volume Name
You can edit the name of the volume. To edit the volume name, follow these steps.
Click the All Services > Storage > Block Storage(BM) menu. Go to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) menu on the Service Home page. You will be taken to the Block Storage(BM) List page.
On the Block Storage(BM) List page, click the resource whose volume name you want to edit. The Block Storage(BM) Details page will open.
Click the Edit button of the Volume Name item. The Volume Name Edit popup opens.
Enter the volume name and click the Confirm button.
Reference
It must start with an English letter and can be set using English letters, numbers, and the special character (-) within 3 to 28 characters.
Modify Snapshot Capacity
You can modify the capacity of the snapshot storage space. To modify the snapshot capacity, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Go to the Service Home page of Block Storage(BM).
On the Service Home page, click the Block Storage(BM) menu. Navigate to the Block Storage(BM) List page.
On the Block Storage(BM) List page, click the resource whose snapshot capacity you want to modify. The Block Storage(BM) Details page will open.
Click the Edit button of the Snapshot Capacity item. The Snapshot Capacity Edit popup window opens.
Set the Usage status and Generation Capacity (%), and click the Confirm button.
The generation capacity (%) can be selected in units of 50, between 100 and 500.
Caution
The charges change depending on whether snapshots are used and their size. (Example: if the volume size is 10GB and the snapshot creation size is 100%, a total of 20GB is charged.)
If the maximum number of snapshots or the snapshot space threshold (around 90%) is exceeded, older snapshots will be deleted.
If the size after modification is smaller than the original, older snapshots will be deleted first.
If snapshot usage is set to unused, all snapshots will be deleted.
Replication can be created after setting snapshot capacity.
After setting the snapshot capacity, it can be added to the Volume Group.
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
The replica can use the snapshot feature after the replication policy is deleted.
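The billing rule in the caution above can be sketched as simple arithmetic, assuming the snapshot storage space is billed as the generation capacity (%) of the volume size on top of the volume itself, as in the 10GB / 100% example.

```shell
# Sketch: total billed capacity = volume + snapshot storage space,
# where snapshot space = generation capacity (%) of the volume size.
billed_gb() {  # usage: billed_gb <volume_gb> <generation_capacity_pct>
  echo $(( $1 + $1 * $2 / 100 ))
}

billed_gb 10 100   # the example above: 10GB volume + 10GB snapshot space = 20
```

At the maximum setting of 500%, the same 10GB volume would be billed as 60GB in total.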
Edit Snapshot Schedule
You can modify the snapshot auto-creation interval. To modify the snapshot schedule, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Navigate to the Service Home page of Block Storage(BM).
On the Service Home page, click the Block Storage(BM) menu. You will be taken to the Block Storage(BM) List page.
On the Block Storage(BM) List page, click the resource whose snapshot schedule you want to edit. The Block Storage(BM) Details page will open.
Click the Edit button of the Snapshot Schedule item. The Snapshot Schedule Edit popup opens.
Set the snapshot auto generation status and generation interval, and click the Confirm button.
To create snapshots automatically, set Auto Creation to Enabled.
The creation cycle can be set to hourly, daily (with the hour), or weekly (with the day of week and hour).
Caution
Snapshots can be created up to a maximum of 1,023 (with up to 128 automatically created via schedule), and if the maximum number is exceeded, no more snapshots can be created.
Note
The snapshot schedule is based on Asia/Seoul (GMT +09:00).
After setting snapshot capacity, schedule registration is possible.
When a snapshot schedule is set, the volume cannot be added to a Volume Group.
Modify IOPS
You can modify the IOPS value. To modify the IOPS value, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Go to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) menu on the Service Home page. Navigate to the Block Storage(BM) list page.
Click the resource to modify the IOPS value on the Block Storage(BM) List page. It moves to the Block Storage(BM) Details page.
Click the Edit button of IOPS. The IOPS Edit popup opens.
After entering the IOPS value to change, click the Confirm button.
IOPS value can be entered between 3,000 and 16,000.
Reference
The IOPS value can be modified after the initial server attach.
In the case of a recovery copy, the IOPS value cannot be modified.
Modify Throughput
You can modify the Throughput speed. To modify the Throughput speed, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Go to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) menu on the Service Home page. Navigate to the Block Storage(BM) List page.
Click the resource to modify the Throughput speed on the Block Storage(BM) List page. Block Storage(BM) Details page will be opened.
Click the Edit button of Throughput. Throughput Edit popup window opens.
After entering the Throughput speed to change, click the Confirm button.
Throughput speed can be set to a value between 125 and 1,000.
Reference
Throughput speed can be modified after the initial server attach.
In the case of a recovery copy, the Throughput speed cannot be modified.
Edit Connection Server
You can connect or disconnect a Bare Metal Server or Multi Node GPU Cluster. To modify the connected server, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Go to the Service Home page of Block Storage(BM).
On the Service Home page, click the Block Storage(BM) menu. Go to the Block Storage(BM) List page.
On the Block Storage(BM) List page, click the resource to edit the connected server. You will be taken to the Block Storage(BM) Details page.
If you want to add a connection server, click the Add button in the Connection Server item. The Add Connection Server popup window will open.
After selecting the server you want to connect to, click the Confirm button.
If you want to disconnect the server, click the Disconnect button in the Connection Server item.
Be sure to connect to the server and perform the disconnect operation (Umount, Disk Offline) before disconnecting.
Caution
Connect to the server and be sure to perform disconnect operations (Umount, Disk Offline) before releasing the connected server. If you release without OS operations, a status error (Hang) may occur on the connected server. For detailed information on server disconnect, see Disconnect Server.
Reference
You can connect up to 8 Bare Metal Servers created in the same location as Block Storage.
There is no limit on the number of volume connections for Bare Metal Server.
When adding a connected server, you can use it after performing connection tasks (Mount, Disk Online) on the server. For detailed information about server connection, refer to Server Connection.
Please connect to the server and be sure to perform disconnect operations (Umount, Disk Offline) before releasing the connected server. If you release without OS operations, a status error (Hang) may occur on the connected server. For detailed information on server disconnect, refer to Disconnect Server.
If the replica’s replication policy is stopped or deleted, the connected server can be modified.
Block Storage(BM) Termination
You can reduce operating costs by terminating unused Block Storage (BM). However, if you terminate the service, the running service may be immediately stopped, so you should consider the impact of service interruption sufficiently before proceeding with the termination.
Caution
After termination, you cannot recover data, so be careful.
If there is a connected server, you can terminate only after removing all connected resources.
The volume can be terminated only when it is in the Available or Error state.
If a replication policy is in use, delete the replication policy from the connected replica before terminating.
If a Volume Group is in use, disconnect the Volume Group before terminating.
If there is a backup of the original, delete the backup before terminating.
To terminate Block Storage(BM), follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. Navigate to the Service Home page of Block Storage(BM).
Click the Block Storage(BM) menu on the Service Home page. Navigate to the Block Storage(BM) List page.
On the Block Storage(BM) List page, select the resource to terminate, and click the Terminate Service button.
When termination is complete, check whether the resource has been terminated on the Block Storage(BM) List page.
3.1.2.1 - Connecting to a Server
When using a volume on a server, a connection or disconnection operation is required. After adding a connection server on the Block Storage(BM) details page, access the server and perform Multi Path settings and connection operations (Mount, Disk Online). After you finish using the volume, perform disconnection operations (Umount, Disk Offline) and remove the connection server.
Configuring Multi Path
Before using a volume on a connection server, you need to configure Multi Path. Follow the procedure below.
Note
If you do not configure Multi Path, maintenance, failures, and similar events may affect the service.
Linux Operating System
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to set up Multi Path for. The Block Storage(BM) details page will be displayed.
In the Connected Server section, check the server and access it. Follow the guide below to configure Multi Path.
Device confirmation
The created volume can be confirmed using the fdisk -l command.
DM-Multipath application confirmation
The Linux system automatically applies Multi Path during the volume recognition process; you can confirm it using the multipath -ll command.
A volume with Multi Path applied uses a multipath device name in the format /dev/mapper/#####, not /dev/sd#, and can be confirmed using the fdisk -l command.
iSCSI replacement timeout value setting
Set the replacement timeout when connecting to iSCSI.
```shell
# vi /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 5
```

Change the default value of 120 to 5. After changing the above content, restart the iSCSI service:

```shell
# systemctl restart iscsid
```
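The same change can be made non-interactively instead of editing with vi. Below is a sketch; the helper function and its file-path argument are for illustration (on a real server the path is /etc/iscsi/iscsid.conf).

```shell
# Sketch: set the iSCSI replacement timeout to 5 without opening vi.
set_iscsi_timeout() {
  conf="$1"  # path to iscsid.conf
  sed -i 's/^node\.session\.timeo\.replacement_timeout *=.*/node.session.timeo.replacement_timeout = 5/' "$conf"
}

# On the server:
#   set_iscsi_timeout /etc/iscsi/iscsid.conf
#   systemctl restart iscsid   # restart so the change takes effect
```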
Windows Operating System
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to set up Multi Path for. The Block Storage(BM) details page will be displayed.
In the Connected Server section, check the server and access it. Follow the guide below to configure Multi Path.
Device confirmation
Click the Start > Server Manager menu to run Server Manager.
Click Server Manager > File and Storage Services > Volumes > Disks to confirm the iscsi device.
Before Multi Path is set up, one device appears as multiple devices, one per path.
MPIO installation (Reboot required)
Click Server Manager > Dashboard > Add roles and features.
The Add Roles and Features Wizard popup window will open. Click the Next button.
On the Select installation type page, select Role-based or feature-based installation and click the Next button.
On the Select destination server page, the current server will be automatically searched. Confirm the content and click the Next button.
On the Select features page, select Features on the left menu and check Multipath I/O. Then, click the Next button.
On the Confirm installation selections page, check Restart the destination server automatically if required. If a popup window opens, click Yes and then click the Install button.
The installation will start, and the server will automatically reboot.
After reconnecting to the server, the installation will be complete. Click the Close button to close the Wizard popup window.
Click Server Manager > Dashboard > Tools > MPIO.
On the Discover Multi-Paths tab, check Add support for iSCSI devices and click the Add button.
If a message is displayed, reboot the server again.
After the reboot is complete, you can confirm NETAPP Devices in MPIO devices.
Run diskmgmt.msc in the Windows Run dialog (or a command prompt) to open the Disk Management window.
You can confirm that MPIO is applied through the properties of the volume created in the Block Storage(BM) service.
Connecting to a Server (Mount, Disk Online)
To use a volume added to a connection server, you need to access the server and perform connection operations (Mount, Disk Online). Follow the procedure below.
Linux Operating System
Server Connection Example Configuration
Server OS: LINUX
Mount location: /data
Volume capacity: 24 GB
File system: ext3, ext4, xfs, etc.
Additional allocated disk: /dev/vdb
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to connect to the server. The Block Storage(BM) details page will be displayed.
In the Connected Server section, check the server and access it. Follow the guide below to connect (Mount) the volume.
Switch to root privileges
$ sudo -i
Confirm the disk
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
Create a partition
# fdisk /dev/vdb
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-50331646, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-50331646, default 50331646):
Created a new partition 1 of type 'Linux' and of size 24 GiB.
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Set the partition format (e.g., ext4)
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
└─vdb1  252:17   0   24G  0 part
# mkfs.ext4 /dev/vdb1
mke2fs 1.46.5 (30-Dec-2021)
...
Writing superblocks and filesystem accounting information: done
Mount the volume
# mkdir /data
# mount /dev/vdb1 /data
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
└─vdb1  252:17   0   24G  0 part /data
# vi /etc/fstab
(add) /dev/vdb1 /data ext4 defaults 0 0
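A hedged aside to the fstab step above: device names such as /dev/vdb1 can change across reboots on some systems, so mounting by filesystem UUID is a common hardening. The UUID value below is a placeholder for illustration only; on the server it would come from blkid.

```shell
# Sketch: build the fstab line from the filesystem UUID instead of the
# /dev/vdb1 device path, so the mount survives device renumbering.
# Placeholder UUID; on the real server obtain it with:
#   uuid=$(blkid -s UUID -o value /dev/vdb1)
uuid="3f8c2a1e-demo"
line="UUID=$uuid /data ext4 defaults 0 0"
echo "$line"   # append this line to /etc/fstab instead of the /dev/vdb1 entry
```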
Item
Description
cat /etc/fstab
Linux system file containing filesystem information. Used when the server starts.
df -h
Confirms the total disk usage of the mounted filesystems in the Linux system.
fdisk -l
Confirms partition information.
Physical disks are displayed with letters such as a, b, c, etc. (e.g., /dev/sda, /dev/sdb, /dev/sdc)
Disk partitions are displayed with numbers such as 1, 2, 3, etc. (e.g., /dev/sda1, /dev/sda2, /dev/sda3)
Table. Mount Command Reference
Command
Description
m
Displays the usage of the fdisk command.
n
Creates a new partition.
p
Displays the changed partition information.
t
Changes the system ID of the partition.
w
Saves the partition information and exits the fdisk settings.
Windows Operating System
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to connect to the server. The Block Storage(BM) details page will be displayed.
In the Connected Server section, check the server and access it. Follow the guide below to connect (Disk Online) the volume.
Right-click the Windows Start icon and run Computer Management.
In the Computer Management tree structure, select Storage > Disk Management.
Confirm the disk
Set the disk to Online
Initialize the disk
Format the partition
Confirm the volume
Disconnecting from the Server (Umount, Disk Offline)
Before disconnecting the connected server from the Console, you must first perform the detach operations (Umount, Disk Offline) on the server itself. Follow the procedure below.
Caution
If you disconnect the connected server from the Console without first detaching on the server (Umount, Disk Offline), a server status error (hang) may occur. Be sure to perform the OS-level task first.
Linux Operating System
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to disconnect from the server. The Block Storage(BM) details page will be displayed.
In the Connected Server section, check the server and access it. Follow the guide below to disconnect (Umount) the volume.
Volume Umount
# umount /data
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     252:0    0   24G  0 disk
├─vda1  252:1    0 23.9G  0 part /
├─vda14 252:14   0    4M  0 part
└─vda15 252:15   0  106M  0 part /boot/efi
vdb     252:16   0   24G  0 disk
└─vdb1  252:17   0   24G  0 part
# vi /etc/fstab
(delete) /dev/vdb1 /data ext4 defaults 0 0
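The Umount steps above can be sketched as a small script. The version below removes the /data entry from a temporary copy of fstab so it can be run safely for illustration; on the real server you would run umount /data as root and edit /etc/fstab itself.

```shell
# Sketch: drop the /data entry from a copy of fstab (safe to run anywhere).
fstab=$(mktemp)
printf '%s\n' \
  '/dev/vda1 /     ext4 defaults 0 0' \
  '/dev/vdb1 /data ext4 defaults 0 0' > "$fstab"

# Remove the line whose mount point is /data (do this after umount /data)
sed -i '/ \/data /d' "$fstab"

cat "$fstab"   # only the root entry remains
```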
Windows Operating System
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to disconnect from the server. The Block Storage(BM) details page will be displayed.
In the Connected Server section, check the server and access it. Follow the guide below to disconnect (Disk Offline) the volume.
Right-click the Windows Start icon and run Computer Management.
In the Computer Management tree structure, select Storage > Disk Management.
Right-click the disk to be removed and select Offline.
Confirm the disk status.
3.1.2.2 - Using Snapshots
You can create, delete, or recover snapshots of the created Block Storage (BM). You can perform these actions on the Block Storage (BM) details page and the Snapshot list page.
Create Snapshot
You can create a snapshot at the point in time you want. To create a snapshot, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource for which you want to create a snapshot. The Block Storage(BM) details page will be displayed.
Check the setting status of the Snapshot Capacity item.
Snapshots can only be created when storage space has been secured by setting the snapshot capacity.
Click the Create Snapshot button. The Create Snapshot popup window will open.
Click the Confirm button. A snapshot of the current point in time will be created.
Click Snapshot List. The Block Storage(BM) Snapshot list page will be displayed.
Check the created snapshot.
Caution
If the maximum number of snapshots or the snapshot space threshold (around 90%) is exceeded, snapshots are deleted starting with the oldest.
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
If there is a volume in the Volume Group with snapshot capacity not set, you cannot create a snapshot. Set the snapshot capacity for all volumes first.
Snapshots can be created up to a maximum of 1,023 (the automatic creation count via schedule is up to 128), and if the maximum creation count is exceeded, no more snapshots can be created.
Reference
The snapshot creation date and time is based on Asia/Seoul (GMT +09:00).
If you want to automatically create snapshots via a schedule, set the snapshot schedule on the Block Storage(BM) Details page.
Snapshot recovery must be performed while disconnected (Umount, Disk Offline) on all connected servers, and the recovered volume can be used after reconnecting (Mount, Disk Online).
After the snapshot recovery is completed, all snapshots created after the snapshot used for recovery will be deleted.
When restoring a snapshot, the volume is restored to that point.
If you are using Volume Group (BM), you can perform snapshot recovery from the detail page of Volume Group (BM).
Create a recovery copy
A snapshot of a Block Storage(BM) volume can be used to create a recovery copy. To create a recovery copy from a snapshot, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource you want to restore from a snapshot. The Block Storage(BM) details page will be displayed.
If a server has been added in the Connected Server section, connect to the server and perform the disconnect operation (Umount, Disk Offline) first.
Click Snapshot List. The Block Storage(BM) Snapshot list page will be displayed.
After confirming the Snapshot name and Creation date/time, click the More button of the snapshot from which you want to create a recovery copy.
Click the Create Recovery button. The snapshot recovery creation popup window will open.
Enter the Recovery volume name and click the Confirm button. A popup window notifying you of the recovery copy creation will open.
Click the Confirm button. The recovery copy creation request is complete.
Caution
Only one recovery copy can be created per original volume.
A recovery copy is a separate volume created with the same capacity as the original, and it incurs additional costs.
If you are using Volume Group (BM), you can create a recovery copy on the details page of Volume Group (BM).
Delete Snapshot
You can select a snapshot to delete. To delete a snapshot, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource whose snapshot you want to delete. The Block Storage(BM) details page will be displayed.
Click Snapshot List. The Block Storage(BM) Snapshot list page will be displayed.
After confirming the Snapshot name and Creation date/time, click the More button of the snapshot you want to delete.
Click the Delete button. The snapshot will be removed from the Snapshot list page.
Reference
Snapshots that contain snapmirror in the snapshot name cannot be deleted. snapmirror is included in the snapshot name when a replication is created.
If you are using Volume Group (BM), you can delete snapshots from the detail page of Volume Group (BM).
3.1.2.3 - Using Replication
You can create a replica of the created Block Storage(BM) in a different location and synchronize it periodically. You can perform these tasks on the Block Storage(BM) details page and the Replication page.
Reference
The kr-south region does not provide the Block Storage(BM) replication feature.
Create a Replica
You can create a replica volume in a different location. To create a replica volume, follow these steps.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource for which you want to create a replica. The Block Storage(BM) details page will be displayed.
Check the setting status of the Snapshot Capacity item.
A replica can only be created when storage space has been secured by setting the snapshot capacity.
Click the Create Replication button. The Create Replication popup window will open.
Enter the Replication location, Replica volume name, and Replication cycle, then click the Confirm button. A replica with the same disk type as the original will be created.
Replication location: Select a location different from that of the original Block Storage(BM) volume.
Replica volume name: Must start with an English letter and consist of 3-28 characters using English letters, numbers, and the special character (-).
Replication cycle: Choose from 5 minutes, 1 hour, daily, weekly, or monthly. Replication will be performed according to the selected cycle.
Daily: every day at 23:59:00
Weekly: every Sunday at 23:59:00
Monthly: on the 1st of every month at 23:59:00
Click Replication. The Replication page will be displayed.
Check the replication information.
Clicking the volume name of the original or replica takes you to the Block Storage(BM) details page of that volume.
Reference
When replicating, a replica with the same disk type is created.
A replica can be created only after the snapshot capacity is set, and the disk type of the created replica is the same as the original.
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
After the snapshot capacity is set, volumes added to a Volume Group can be replicated in units of a Group on the Volume Group page.
One replica can be created per volume, and additional data transfer fees apply when replicating across regions.
A volume created by replication cannot be added to a Volume Group.
If you are using Volume Group, you can check the replication information on the Volume Group (BM) details page.
Modify replication policy
You can change the replication status through replication policy modification.
Caution
During replication, you cannot modify the replication cycle and replication policy.
To modify the replication policy, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource whose replication policy you want to modify. The Block Storage(BM) details page will be displayed.
Click Replication. The Block Storage(BM) Replication page will be displayed.
Click the Edit button of the Replication Policy item. The Replication Policy Edit popup window will open.
In Use: performs replication. A paused policy can be changed to In Use.
Paused: temporarily suspends replication. A policy in use can be changed to Paused.
Delete: deletes the replication. A paused policy can be deleted; after deletion, the replication cannot be used again.
On the Block Storage(BM) Replication page, check the modified replication policy.
Caution
Be aware of the following when deleting a policy.
After deleting the policy, the replica is not converted to an original, and it cannot create replicas of its own.
After deleting the policy, you cannot reconnect to the existing replica; you can only create a new replica.
Data stored only in the replica while replication was paused will be deleted when replication is resumed.
While the replication policy is in use, the replica is in a read-only state and its data cannot be modified. Unmount the replica from all connected resources before using replication.
The replica can be mounted from the connected server only when the replication policy is deleted or paused.
Modify replication cycle
You can change the synchronization cycle between the original and the copy through replication cycle modification.
Caution
During replication, you cannot modify the replication cycle and replication policy.
To modify the replication cycle, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Block Storage(BM) menu. The Block Storage(BM) list page will be displayed.
On the Block Storage(BM) list page, click the resource whose replication cycle you want to modify. The Block Storage(BM) details page will be displayed.
Click Replication. The Block Storage(BM) Replication page will be displayed.
Click the Edit button of the Replication Cycle item. The Replication Cycle Edit popup window will open.
Replication cycle: Select from 5 minutes, 1 hour, daily, weekly, or monthly. Replication will be performed according to the selected cycle.
Daily: every day at 23:59:00
Weekly: every Sunday at 23:59:00
Monthly: on the 1st of every month at 23:59:00
On the Block Storage(BM) Replication page, check the modified replication cycle.
3.1.2.4 - Using Volume Group
The Volume Group(BM) service allows you to group up to 16 Block Storage(BM) volumes so that snapshots and replicas can be created at a consistent point in time. You can create the service by entering the required information for the Volume Group (BM) and selecting detailed options in the Samsung Cloud Platform Console.
Creating Volume Group (BM)
You can create and use the Volume Group(BM) service on the Samsung Cloud Platform Console.
To create a Volume Group(BM), follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
Click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the Create Volume Group(BM) button. The Volume Group(BM) creation page will be displayed.
Classification
Mandatory
Detailed Description
Volume Group name
Required
Name of the Volume Group
Must start with an English letter and consist of 3-28 characters using English letters, numbers, and the special character (-)
Cannot be modified after service creation
Target Volume
Required
Add target volumes to the Volume Group
Click the Add button, then select the target volume in the Add Target popup window
Target volume requirements
Snapshot capacity: set
Snapshot automatic creation, creation cycle: not used
Replication: not used
Up to 16 volumes can be added
Table. Volume Group(BM) Service Information Input Items
Click the Complete button.
Once creation is complete, check the created resource from the Volume Group(BM) list page.
Reference
When adding target volumes, only volumes that meet the following conditions can be added to the Volume Group (BM).
Snapshot capacity: set
Snapshot automatic creation, creation cycle: not used
Replication: not used
Up to 16 target volumes can be added.
Checking Volume Group(BM) detailed information
The Volume Group(BM) service allows you to check and modify the entire resource list and detailed information. The Volume Group(BM) details page consists of the details, snapshot list, replication, tags, and job history tabs.
To check the detailed information of the Volume Group(BM) service, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource whose detailed information you want to check. The Volume Group(BM) details page will be displayed.
The Volume Group(BM) details page displays status information and additional feature information, and consists of the details, snapshot list, replication, tags, and job history tabs.
Classification
Detailed Description
Volume Group status
Volume Group’s status
Creating: being created
Available: creation completed, server connection available
Table. Snapshot list tab detailed information items
Caution
When creating a snapshot in the Volume Group, a snapshot is created in the Block Storage (BM).
Volume Group snapshots are managed using the snapshot capacity and count of each target volume.
If the maximum number of snapshots or the snapshot space threshold (around 90%) is exceeded, snapshots are deleted starting with the oldest.
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
Snapshots can be created up to a maximum of 1,023 (the maximum number of automatic creations through scheduling is 128), and if the maximum number of creations is exceeded, no more snapshots can be created.
Snapshot recovery must be performed in a state where all connected servers are disconnected (Umount, Disk Offline), and the recovered volume can be used after reconnection (Mount, Disk Online).
Reference
The snapshot creation time is based on Asia/Seoul (GMT +09:00).
When replication is in use, snapshots whose names contain snapmirror cannot be deleted.
When using a Volume Group, set the snapshot schedule on the Volume Group (BM) details screen. Created snapshots can be checked in the Block Storage (BM) snapshot list.
Replication
You can check the replication information of the selected resource.
Replication Status
Replication progress status according to the policy settings
Volume Information
Volume information of the original and the replica
Classification: whether the volume is the original or the replica
Volume Group Name: Volume Group name of the original or the replica
Location: location where the volume was created
Authority: user authority of the volume, set according to the replication policy
Table. Replication tab detailed information items
Caution
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
Reference
When replicating, a replica with the same disk type is created.
A replica can be created only after the snapshot capacity is set.
If you are using a Volume Group, check the replication information on the Volume Group(BM) resource list page.
The connected server of a replica Block Storage can be modified only when the replication policy is paused or deleted.
A replica Block Storage can use the snapshot feature only after the replication policy is deleted.
If the replication policy is paused or the replication status is Completed, you can modify the policy and cycle from the replica.
Tag
You can check the tag information of the selected resource, and add, change, or delete tags.
Classification
Detailed Description
Tag List
Tag List
Check Key, Value information of the tag
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of created Key and Value
Fig. Volume Group(BM) Tag Tab Items
Work History
You can check the work history of the selected resource.
Classification
Detailed Description
Job history list
Resource change history
Check job time, resource type, resource ID, resource name, job details, event topic, job result, and worker information
Click the detailed search button to search in detail
Table. Work History Tab Detailed Information Items
Volume Group(BM) Resource Management
If you need to modify the settings of the created Volume Group(BM) or add or remove target volumes, you can perform the task on the Volume Group(BM) details page.
Modifying the Snapshot Schedule
You can modify the automatic snapshot creation cycle. To modify the snapshot schedule, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource whose snapshot schedule you want to modify. The Volume Group(BM) details page will be displayed.
Click the Edit button of the Snapshot Schedule item. The Snapshot Schedule Edit popup window will open.
Set the snapshot automatic creation and creation cycle, then click the Confirm button.
To create snapshots automatically according to the creation cycle, select Yes for Auto Creation.
The creation cycle can be set to daily (by hour) or weekly (by day of the week and hour).
Caution
Volume Group snapshots are managed using the snapshot capacity and count of each target volume.
Snapshots can be created up to a maximum of 1,023 (the maximum number of automatic creations through scheduling is 128), and if the maximum number of creations is exceeded, no more snapshots can be created.
Reference
The snapshot schedule is based on Asia/Seoul (GMT +09:00).
Modifying the target volume
You can add or remove target volumes. To modify the target volumes, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource whose target volumes you want to modify. The Volume Group(BM) details page will be displayed.
To add a target volume, click the Add button in the Target Volume section. The Add Volume popup window will open.
To disconnect a volume, click the Disconnect button in the Target Volume section.
Select the volume you want to add, then click the Confirm button.
Caution
If the Volume Group’s replication policy is in use, the target volumes cannot be modified.
Reference
When adding target volumes, only volumes that meet the following conditions can be added to the Volume Group (BM).
Snapshot capacity: set
Snapshot automatic creation, creation cycle: not used
Replication: not used
Up to 16 target volumes can be added.
Using Volume Group(BM) Snapshots
You can create or delete snapshots of the created Volume Group(BM), or restore it using a snapshot. You can perform these tasks on the Volume Group(BM) details page and the Snapshot list page.
Creating a snapshot
You can create a snapshot at the point in time you want. To create a snapshot, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource for which you want to create a snapshot. The Volume Group(BM) details page will be displayed.
Click the Create Snapshot button. The Create Snapshot popup window will open.
Click the Confirm button. A snapshot of the current point in time will be created.
Click Snapshot List. The Volume Group(BM) Snapshot list page will be displayed.
Check the created snapshot.
Caution
When creating a snapshot in the Volume Group, a snapshot is also created on the connected target volume.
Volume Group snapshots are managed using the snapshot capacity and count of each target volume.
If the maximum number of snapshots or the snapshot space threshold (around 90%) is exceeded, snapshots are deleted starting with the oldest.
If the snapshot capacity usage rate is high (around 90%), replication may be stopped.
Snapshots can be created up to a maximum of 1,023 (the automatic creation count via schedule is up to 128), and if the maximum creation count is exceeded, no more snapshots can be created.
Reference
The snapshot creation time is based on Asia/Seoul (GMT +09:00).
Snapshot storage space will be applied according to the settings on the Block Storage(BM) details page of the target volume.
If you want to automatically create snapshots through a schedule, set the snapshot schedule on the Volume Group(BM) details page.
Snapshot recovery must be performed with all target volumes disconnected (Umount, Disk Offline) from the connected server, and the recovered volume can be used after reconnection (Mount, Disk Online).
After the snapshot restoration is complete, all snapshots created after the snapshot used for restoration will be deleted.
Creating a recovery copy
You can create a recovery copy from a snapshot of the target volumes. To create a recovery copy from a snapshot, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource for which you want to create a recovery copy. The Volume Group(BM) details page will be displayed.
On the Volume Group(BM) details page, click Snapshot List. The Volume Group(BM) Snapshot list page will be displayed.
After confirming the Snapshot name and Creation date/time, click the More button of the snapshot from which you want to create a recovery copy.
Click the Create Recovery button. The snapshot recovery creation popup window will open.
Enter the Prefix, then click the Confirm button. A popup window notifying you of the recovery copy creation will open.
The name of each recovery copy is the entered prefix value + the original Block Storage volume name.
Click the Confirm button. The recovery copy creation request is complete.
Caution
Only one recovery copy can be created per original volume.
A recovery copy is a separate volume created with the same capacity as the original, and it incurs additional costs.
Deleting a snapshot
You can select a snapshot to delete. To delete a snapshot, follow the steps below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource whose snapshot you want to delete. The Volume Group(BM) details page will be displayed.
Click Snapshot List. The Volume Group(BM) Snapshot list page will be displayed.
After confirming the Snapshot name and Creation date/time, click the More button of the snapshot you want to delete.
Click the Delete button. The snapshot will be removed from the Snapshot list page.
Reference
When replication is in use, snapshots whose names contain snapmirror cannot be deleted.
Using Volume Group(BM) Replication
You can create replicas of the created Volume Group(BM) in other locations and synchronize them periodically and consistently. You can perform these tasks on the Volume Group(BM) details page and the Replication page.
Create a Replica
You can create a replica of the Volume Group and its volumes in a different location.
To create a replica, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu. The Block Storage(BM) Service Home page will be displayed.
On the Service Home page, click the Volume Group(BM) menu. The Volume Group(BM) list page will be displayed.
On the Volume Group(BM) list page, click the resource for which you want to create a replica. The Volume Group(BM) details page will be displayed.
Click the Create Replication button. The Create Replication popup window will open.
Enter the Replication location, Replica Volume Group name, Replica Block Storage name prefix, and Replication cycle, then click the Confirm button. A replica Volume Group and replica Block Storage volumes with the same disk type will be created.
Replication location: Select a location different from that of the original Volume Group (BM).
Replica Volume Group name: Must start with an English letter and consist of 3-28 characters using English letters, numbers, and the special character (-).
Replica Block Storage name prefix: Must start with an English letter and consist of 3-28 characters using English letters, numbers, and the special character (-).
Each replica Block Storage(BM) volume is created with the name ‘replica Block Storage name prefix + original Block Storage volume name’.
Replication cycle: Choose from 5 minutes, 1 hour, daily, weekly, or monthly. Replication will be performed according to the selected cycle.
Daily: every day at 23:59:00
Weekly: every Sunday at 23:59:00
Monthly: on the 1st of every month at 23:59:00
Click Replication. The Replication page will be displayed.
Check the replication information.
Clicking the Volume Group name of the original or replica takes you to the Volume Group(BM) details page.
Reference
When creating a replica in the Volume Group, a replica Volume Group and Block Storage (BM) are created.
The replica Block Storage has the same disk type as the original.
You can create one replica per Volume Group, and additional data transfer fees apply when replicating across regions.
Modify replication policy
You can change the replication status by modifying the replication policy.
Caution
While replication is in progress, the replication cycle and replication policy cannot be modified.
To modify the replication policy, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu to go to the Block Storage(BM) Service Home page.
On the Service Home page, click the Volume Group(BM) menu to go to the Volume Group(BM) list page.
On the Volume Group(BM) list page, click the resource whose replication policy you want to modify. You are taken to the Volume Group(BM) details page.
Click the Replication tab to go to the Volume Group(BM) Replication page.
Click the Replication Policy's Edit button. The Replication Policy Edit popup window opens.
Usage: Performs replication. A policy in Pause can be changed to Usage.
Pause: Temporarily suspends replication. A policy in Usage can be changed to Pause.
Delete: Deletes the replication. A policy in Pause can be changed to Delete; after deletion, replication cannot be used again.
Check the modified replication policy on the Volume Group(BM) Replication page.
Caution
Note the following when deleting a policy.
After deleting the policy, the replica cannot be converted to the original, and replication cannot be created again.
After deleting the policy, you cannot connect to the existing replica; you can only create a new replica.
Data stored only in the replica during a pause is deleted when replication is resumed.
While the replication policy is in use, the replica is read-only and its data cannot be modified. Unmount all connected resources before using replication.
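The allowed policy transitions described above (Usage to Pause, Pause to Usage, Pause to Delete, with Delete being final) can be sketched as a small helper. This is only a formalization of the rules in this guide, not a real CLI:

```shell
# Succeeds (exit 0) only for replication-policy transitions this guide allows.
can_transition() {  # usage: can_transition <from> <to>
  case "$1>$2" in
    "Usage>Pause"|"Pause>Usage"|"Pause>Delete") return 0 ;;  # allowed
    *) return 1 ;;  # e.g. Usage>Delete (pause first), or anything after Delete
  esac
}

can_transition Usage Pause  && echo "Usage -> Pause: allowed"
can_transition Usage Delete || echo "Usage -> Delete: not allowed (pause first)"
```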
Modify Replication Cycle
You can change the synchronization cycle between the original and the replica by modifying the replication cycle.
Caution
While replication is in progress, the replication cycle and replication policy cannot be modified.
To modify the replication cycle, follow the procedure below.
Click the All Services > Storage > Block Storage(BM) menu to go to the Block Storage(BM) Service Home page.
On the Service Home page, click the Volume Group(BM) menu to go to the Volume Group(BM) list page.
On the Volume Group(BM) list page, click the resource whose replication cycle you want to modify. You are taken to the Volume Group(BM) details page.
Click the Replication tab to go to the Volume Group(BM) Replication page.
Click the Replication Cycle's Edit button. The Replication Cycle Edit popup window opens.
Replication Cycle: Choose 5 minutes, 1 hour, daily, weekly, or monthly. Replication runs on the following schedule:
Daily: every day at 23:59:00
Weekly: every Sunday at 23:59:00
Monthly: on the 1st of every month at 23:59:00
Check the modified replication cycle on the Volume Group(BM) Replication page.
Volume Group(BM) cancellation
You can cancel an unused Volume Group(BM). There is no separate cancellation procedure; a Volume Group(BM) is deleted automatically when all volumes are disconnected from it.
Caution
After cancellation, it cannot be recovered, so please be careful.
3.1.3 - API Reference
API Reference
3.1.4 - CLI Reference
CLI Reference
3.1.5 - Release Note
Block Storage(BM)
2025.12.16
FEATURE: IOPS, Throughput setting feature added
You can set the volume performance metrics (IOPS, Throughput) and edit them on the detail page.
IOPS: 3,000 ~ 16,000
Throughput: 125 ~ 1,000
No separate charges during the preview period (billing planned for the first half of 2026).
2025.10.23
FEATURE: Snapshot recovery creation feature added
A feature to create a recovery volume using snapshots has been added.
A recovery volume is a separate volume created with the same capacity as the original; additional costs apply.
2025.07.01
FEATURE: New features and monitoring integration
The HDD disk type has been added. An HDD disk can be selected when creating Block Storage(BM).
An IaC environment is provided through Terraform.
The snapshot feature can be used on a replica volume.
Cloud Monitoring integration has been added.
You can view IOPS, Latency, and Throughput information in Cloud Monitoring.
2025.02.27
FEATURE: Replication and Volume Group features added
Block Storage(BM) feature changes
A Block Storage(BM) Replication feature that allows volumes to be replicated to another location has been added.
A Volume Group feature has been added: up to 16 Block Storage(BM) volumes can be grouped to create snapshots and replication at a consistent point in time.
Common CX changes have been applied to Account, IAM, Service Home, tags, and more.
2024.10.01
NEW: Block Storage(BM) service official version release
Launched a high-performance storage service suitable for handling large-scale data and database workloads.
3.2 - File Storage
3.2.1 - Overview
Service Overview
File Storage lets servers connected over the network easily store and share data, making it suitable for application environments that use multiple servers. You can select a disk type according to performance requirements, and the volume automatically expands or shrinks with the user's data size, without capacity limits.
Features
Free volume usage: Volume capacity automatically expands or shrinks as data is created and deleted, with no user configuration. There is no cost to create a volume; fees are charged only for the stored data capacity.
Various disk types: Users can select a disk type according to the usage purpose: HDD for cost-effective storage of large data, SSD for reduced response time and high IOPS performance, or high-performance SSD.
Snapshot backup: The image snapshot feature makes it possible to recover changed or deleted data, and when performing a disk backup, snapshots can be stored in a location different from the original. Users select the snapshot from the point in time they wish to recover and perform the recovery.
Replication: Creates an identical replica volume at a different location, on a data replication schedule set by the user. If the original volume becomes unusable due to a failure or disaster, service can continue through the replica volume.
Configuration Diagram
Figure. File Storage diagram
Provided Features
File Storage provides the following features.
Volume Name: Users can set or edit names per volume.
Disk Type: You can select the disk type according to the user’s performance requirements.
HDD: General Volume
SSD: High-performance general volume
High-performance SSD: Performance-optimized volume that can be connected to a Multi-node GPU Cluster
SSD SAP_S, SSD SAP_E: SAP Account dedicated volume
Protocol: You can select the protocol according to the user’s OS Image.
NFS: Primarily used on Linux operating systems
CIFS: Primarily used in Windows operating systems
Free volume usage: Provides flexible volume size based on the amount of stored data without user capacity settings.
Connection Resources: Virtual Server, Bare Metal Server, Multi-node GPU Cluster can be connected and used.
Encryption: Regardless of disk type, all volumes are encrypted with the XTS-AES-256 algorithm.
Snapshot: Through the image snapshot feature, you can create snapshots immediately or schedule them.
Retention Count: The number of snapshots retained that are automatically created via schedule
Schedule: Snapshot automatic creation interval
Recovery: Recover the original volume to the latest snapshot, or select a snapshot at a specific point in time to create a recovery volume
Replication: Replicates the volume to another location, and the user can set the replication interval.
Disk Backup: Stores snapshots on backup-dedicated HDD storage and allows you to select the backup location.
VPC Endpoint connection: File Storage can be used via Private Network connection from an external network.
Monitoring: You can view usage, IOPS, Throughput, etc., monitoring information through the Cloud Monitoring service.
ServiceWatch integration: You can monitor data through the ServiceWatch service.
Components
You can create a volume by selecting the disk type and protocol according to the user’s service environment and performance requirements. When using the snapshot feature, you can restore data to the point in time you want to recover.
Volume
A volume is the basic creation unit of the File Storage service and is used as data storage space. Users choose a name, disk type, and protocol (CIFS/NFS) to create a volume, then connect it to one or more servers. The volume naming rules are as follows.
Must start with a lowercase English letter and be 3 to 21 characters long, using lowercase letters, numbers, and the underscore (_).
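The naming rule above can be checked with a simple pattern. This is only an illustrative sketch; the console performs its own validation:

```shell
# Volume names: start with a lowercase letter, then 3-21 chars total of [a-z0-9_].
is_valid_volume_name() {
  echo "$1" | grep -Eq '^[a-z][a-z0-9_]{2,20}$'
}

is_valid_volume_name "fs_data01" && echo "fs_data01: valid"
is_valid_volume_name "1data"     || echo "1data: invalid (must start with a lowercase letter)"
is_valid_volume_name "ab"        || echo "ab: invalid (shorter than 3 characters)"
```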
Snapshot
A snapshot is an image backup at a specific point in time. Using the image snapshot feature, you can recover changed or deleted data, or store a snapshot in a location different from the original when performing a disk backup. Users select the snapshot from the point in time they want to recover from the snapshot list and perform the recovery. Notes on using snapshots are as follows.
Reference
Snapshot creation time is based on Asia/Seoul (GMT +09:00).
Up to 800 snapshots can be created, not counting snapshots generated by schedule (up to 128 can be created automatically via schedule).
By selecting the snapshot recovery button, you can restore the File Storage volume to the latest snapshot.
If you select a specific snapshot from the snapshot list, recovery is possible by creating a new volume based on the snapshot.
Automatic creation is possible through snapshot schedule settings.
Snapshot capacity is included in File Storage usage and incurs charges, so adjust stored capacity by setting the snapshot retention count.
Prerequisite Services
The following services must be configured before creating this service. For details, refer to each service's guide and prepare in advance.

| Service | Detailed Description |
| --- | --- |
| Bare Metal Server | High-performance physical server used without virtualization |

Table. File Storage Prerequisite Services
3.2.1.1 - Monitoring Metrics
File Storage monitoring metrics
The table below shows the File Storage monitoring metrics that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, refer to the Cloud Monitoring guide.

| Performance Item | Description | Unit |
| --- | --- | --- |
| Instance Status | File Storage volume status | status |
| Volume Used | Volume used capacity | bytes |
| Volume Usage | Volume usage rate | % |
| Volume Total | Total volume capacity | bytes |
| IOPS [Total] | IOPS (total) | iops |
| IOPS [Read] | IOPS (read) | iops |
| IOPS [Write] | IOPS (write) | iops |
| IOPS [Other] | IOPS (other) | iops |
| Latency Time [Total] | Latency (total) | usec |
| Latency Time [Read] | Latency (read) | usec |
| Latency Time [Write] | Latency (write) | usec |
| Latency Time [Other] | Latency (other) | usec |
| Throughput [Total] | Throughput (total) | bytes/s |
| Throughput [Read] | Throughput (read) | bytes/s |
| Throughput [Write] | Throughput (write) | bytes/s |
| Throughput [Other] | Throughput (other) | bytes/s |

Table. File Storage Monitoring Metrics
3.2.1.2 - ServiceWatch Metrics
File Storage sends metrics to ServiceWatch. The default monitoring metrics are collected at a 1-minute interval.
Reference
For how to check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are the basic metrics of the File Storage namespace.

| Metric Name | Detailed Description | Unit | Statistic |
| --- | --- | --- | --- |
| iops_read | IOPS (read) | - | Measured value at query time |
| iops_write | IOPS (write) | - | Measured value at query time |
| latency_read | Latency (read) | Microseconds | Average |
| latency_write | Latency (write) | Microseconds | Average |
| throughput_read | Throughput (read) | Bytes/Second | Measured value at query time |
| throughput_write | Throughput (write) | Bytes/Second | Measured value at query time |

Table. File Storage Basic Metrics
3.2.2 - How-to guides
Users can enter the required information for File Storage through the Samsung Cloud Platform Console, select detailed options, and create the service.
Caution
Service fees are charged based on usage without capacity constraints.
The volume is allocated 100 GB by default. To use more than 100 GB, inquire via the Support Center's Contact Us.
If you add or modify company information in Cost Management's Billing Management, the Mount name (IP information) of File Storage changes automatically.
Before modifying company information, be sure to check the service impact of the Mount name (IP information) change.
When the account is invited to another Organization, the Mount name (IP information) of File Storage changes automatically.
Refer to Check Account Information to verify the Account's company information first, then proceed.
File Storage Create
You can create and use the File Storage service from the Samsung Cloud Platform Console.
To create a File Storage, follow the steps below.
Click the All Services > Storage > File Storage menu to go to the File Storage Service Home page.
On the Service Home page, click the Create File Storage button. You are taken to the Create File Storage page.
On the Create File Storage page, enter the information required to create the service and select the detailed options.
| Category | Required | Detailed Description |
| --- | --- | --- |
| Volume Name | Required | Start with a lowercase English letter; 3 to 21 characters using lowercase letters, numbers, and the underscore (_)<br>Created as 'user input value + {6-character UUID composed of lowercase English letters and numbers}'<br>Cannot be modified after service creation |
| Disk Type | Required | HDD: standard volume<br>SSD: high-performance standard volume<br>High-Performance SSD: performance-optimized volume that can be attached to a Multi-node GPU Cluster<br>SSD SAP_S, SSD SAP_E: SAP Account dedicated volume (selectable only when created with an SAP Account)<br>Cannot be modified after service creation<br>When created via snapshot recovery volume creation, set identical to the original and cannot be modified |
| Protocol | Required | Protocol for sharing the volume over the network with servers<br>NFS: primarily used on Linux operating systems<br>CIFS: primarily used on Windows operating systems<br>When created via snapshot recovery volume creation, set identical to the original and cannot be modified |
| Protocol > Password | Required | Account password for volume access when using the CIFS protocol<br>6~20 characters including letters, numbers, and special characters ($%{}[]"\) |
| Protocol > Password Confirmation | Required | Re-enter the password identically |
| Recovery Snapshot Name | Optional | Name of the recovery snapshot used when creating the volume<br>Provided when creating a service via snapshot recovery volume creation |

Table. File Storage Service Information Input Items
Check the detailed information and estimated cost in the Summary panel, then click the Complete button.
Once creation is complete, check the created resource on the File Storage list page.
Warning
We recommend storing no more than 3.5 million files per directory within a volume. Exceeding 3.5 million files may cause severe performance degradation, so split directories to manage the number of stored files.
Reference
Regardless of disk type, all volumes have server-side encryption based on the XTS-AES-256 algorithm applied by default.
When using an SSD SAP disk type, if a failover occurs due to a failure, the LIF (storage mount IP) is transferred automatically. However, new volumes cannot be created on other storage.
Snapshot schedule can be set on the detail page.
View File Storage Detailed Information
You can view and edit the full resource list and detailed information of the File Storage service. The File Storage details page consists of the Detailed Information, Snapshot List, Replication, Disk Backup, Tags, and Job History tabs.
To view detailed information of the File Storage service, follow the steps below.
Click the All Services > Storage > File Storage menu to go to the File Storage Service Home page.
On the Service Home page, click the File Storage menu to go to the File Storage list page.
On the File Storage list page, click the resource whose details you want to view. You are taken to the File Storage details page.
The File Storage details page displays status information and additional feature buttons, and consists of the Detailed Information, Snapshot List, Replication, Disk Backup, Tags, and Job History tabs.

| Category | Detailed Description |
| --- | --- |
| Volume Status | Creating: creating<br>Creating From Snapshot: creating a new volume from a snapshot<br>Available: creation complete, server connection possible<br>Deleting: termination in progress<br>Error Deleting: abnormal state during deletion<br>Inactive: abnormal state<br>Error: abnormal state during creation<br>Migrating, Migrating To: temporary maintenance state<br>Reverting: reverting to a snapshot<br>Reverting Error: abnormal state during snapshot revert |
| Create Replication | Creates a replica at another location<br>Cannot be used on backups and replicas<br>For details, see [Create Replication](/userguide/storage/file_storage/how_to_guides/replication.md/#복제-생성하기) |
| Create Disk Backup | Creates a backup that stores snapshots in backup-dedicated HDD storage<br>Available for the **HDD** and **SSD** disk types<br>Up to 2 backups can be created<br>Cannot be used on backups and replicas<br>For details, see [Create Disk Backup](/userguide/storage/file_storage/how_to_guides/disk_backup.md/#디스크-백업-생성하기) |
| Create Snapshot | Immediately creates a snapshot at that point in time<br>For details, see [Create Snapshot](/userguide/storage/file_storage/how_to_guides/snapshot.md/#스냅샷-생성하기) |
| More > Snapshot Recovery | Recovers the volume with the latest snapshot in the **Available** state<br>For details, see [Snapshot Recovery](/userguide/storage/file_storage/how_to_guides/snapshot.md/#스냅샷-복구하기) |
| More > File-level Restore | Enables file-level restore within snapshots<br>For details, see [Using File-level Restore](/userguide/storage/file_storage/how_to_guides/file_restore.md/#파일-단위-복구-사용하기) |
| More > Disable File-level Restore | Disables file-level restore within snapshots<br>For details, see [Disable File-level Restore](/userguide/storage/file_storage/how_to_guides/file_restore.md/#파일-단위-복구-해제하기) |
| Service Cancellation | Button to cancel the service |

Table. Status Information and Additional Functions
Note
Disk backup cannot be used with high-performance SSD disk types.
Detailed Information
On the File Storage List page, you can view detailed information of the selected resource and, if necessary, edit the information.
| Category | Detailed Description |
| --- | --- |
| Service | Service name |
| Resource Type | Resource type |
| SRN | Unique resource ID in Samsung Cloud Platform<br>In the File Storage service, refers to the volume SRN |
| Resource Name | Resource name<br>In the File Storage service, refers to the volume name |
| Resource ID | The service's unique resource ID |
| Creator | User who created the service |
| Creation DateTime | Date and time the service was created |
| Volume Name | Volume name |
| Category | Classification of original status with respect to replication and disk backup |
| Disk Type | Disk type |
| Usage | Volume usage |
| Protocol | NFS or CIFS protocol for sharing the volume over the network with servers |
| Mount Name | Mount name per volume for server connection<br>Created as 'scp_{6-character UUID composed of lowercase English letters and numbers}'<br>A replica's Mount name can be checked after pausing the replication policy once |
| Encryption | Encryption status<br>Provided by default regardless of disk type |
| Snapshot > Schedule | Snapshot automatic creation cycle<br>Snapshots are automatically created and stored according to the schedule<br>Click the **Edit** button to change the snapshot settings<br>For details, see [Edit Snapshot](/userguide/storage/file_storage/how_to_guides/_index.md/#스냅샷-수정하기) |
| Connected Resources | **Resource Type**: service type of the connected resource<br>**Resource Name**: name of the connected resource<br>**IP**: IP information of the connected resource<br>**Status**: status of the connected resource<br>NFS volumes are recommended for Linux servers, CIFS volumes for Windows servers<br>Up to 300 connected resources can be added<br>Click the **Edit** button to add or remove connected resources<br>For details, see [Edit Connected Resources](/userguide/storage/file_storage/how_to_guides/_index.md/#연결-자원-수정하기) |
| VPC Endpoint | Connected VPC Endpoint service<br>**VPC Endpoint Name**: name of the connected VPC Endpoint<br>**IP**: IP information of the connected VPC Endpoint<br>**Status**: status of the connected VPC Endpoint<br>Click the **Edit** button to add or remove a VPC Endpoint<br>For details, see [Edit VPC Endpoint](/userguide/storage/file_storage/how_to_guides/_index.md/#vpc-endpoint-수정하기) |

Table. File Storage Detailed Information Items
Snapshot List
You can view the snapshot of the selected resource on the File Storage list page.
| Category | Detailed Description |
| --- | --- |
| Snapshot Usage | Total capacity of stored snapshots |
| Snapshot Name | Snapshot name |
| Capacity | Snapshot capacity |
| Creation Time | Snapshot creation time |
| Status | Available: creation complete, restorable<br>Creating: creating<br>Error: abnormal state<br>Deleting: deleting<br>Error Deleting: abnormal state during deletion<br>Restoring: restoring |
| Additional Features > More | Snapshot management button<br>Create Recovery Volume: creates a recovery volume based on the snapshot<br>For details on snapshot deletion, see Delete Snapshot |

Table. Snapshot List Items
Reference
Snapshot creation date and time is based on Asia/Seoul (GMT +09:00).
Up to 800 snapshots can be created, not counting snapshots generated by schedule (up to 128 can be created automatically via schedule).
If you are using replication or disk backup, the snapmirror snapshot cannot be deleted.
Clicking the snapshot recovery button restores the volume to the latest snapshot in the Available state.
Selecting Create Recovery Volume on the snapshot list creates a new volume based on the snapshot without modifying the existing volume.
The snapshot list can be viewed on the original and the backup; on a replica it can be viewed after the replication policy is deleted.
Snapshot creation, recovery, and recovery volume creation are available on the original.
When using replication or disk backup, you cannot recover with a snapshot; you can only create a recovery volume.
Replication
You can view the replication information of the selected resource on the File Storage list page.

| Category | Detailed Description |
| --- | --- |
| Replication Status | Replication progress status according to the policy settings |
| Volume Information | Original and replica volume information<br>Classification: whether the volume is the original or the replica<br>Volume Name: name of the original or replica volume<br>Location: location where the volume was created<br>Permission: user permission on the volume, set according to the replication policy |

Table. Replication Tab Detailed Information Items
Caution
When using replication, you cannot recover with a snapshot and can only create a recovery volume. If snapshot recovery is needed, delete the replication policy and recover with a snapshot.
Note
When creating a replica, it is created with the same disk type as the original.
The Replication Policy and Replication Cycle can be modified when the replica's replication status is Completed or Stopped.
File-level restore on the replica can be used when the replication status is Completed or Stopped.
The replica's connected server and VPC Endpoint can be modified when the replication status is Stopped.
Snapshot-related features on the replica can be used after the replication policy is deleted.
Disk Backup
You can view the disk backup information of the selected resource on the File Storage List page.
Notice
Only an original with the HDD or SSD disk type can use disk backup. It cannot be used on replicas or the high-performance SSD disk type.
Up to 2 backups can be created.
| Category | Detailed Description |
| --- | --- |
| Backup Policy | Backup policy set by the user<br>For details on modifying the backup policy, see Modify Backup Policy |
| Backup Cycle | Backup cycle of the original, set by the user<br>For details on modifying the backup cycle, see [Modify Backup Cycle](/userguide/storage/file_storage/how_to_guides/dsik_backup.md/#백업-주기-수정하기) |
| Backup Status | Backup progress status according to the policy settings |
| Backup Retention Count | Number of snapshots retained in the backup |
| Volume Information | Original and backup volume information<br>**Classification**: whether the volume is the original or the backup<br>**Volume Name**: name of the original or backup volume<br>**Location**: location where the volume was created<br>**Permission**: user permission on the volume, set according to the backup policy |

Table. Disk Backup Tab Detailed Information Items
Caution
When using disk backup, you cannot recover with a snapshot and can only create a recovery volume. If snapshot recovery is needed, cancel the backup and recover with a snapshot.
Reference
When a backup is created, it uses the HDD disk type regardless of the original's disk type.
The Backup Policy and Backup Cycle can be modified when the backup's status is Completed or Stopped.
File-level restore on the backup can be used when the backup status is Completed or Stopped.
The backup's connected server and VPC Endpoint can be modified when the backup status is Stopped.
On the backup, only snapshot deletion is possible.
Tags
On the File Storage list page, you can view the tag information of the selected resource and add, modify, or delete tags.

| Category | Detailed Description |
| --- | --- |
| Tag List | You can view the Key and Value information of tags<br>Up to 50 tags can be added per resource<br>When entering tags, search and select from the existing list of Keys and Values |

Table. File Storage Tags Tab Items
Job History
You can view the job history of the selected resource on the File Storage list page.

| Category | Detailed Description |
| --- | --- |
| Job History List | Resource change history<br>Shows the job date and time, resource type, resource ID, resource name, job details, event topic, job result, and worker information |

Table. Job History Tab Detailed Information Items
File Storage Resource Management
If you need to modify settings of the created File Storage or add or delete connected servers, you can perform the task on the File Storage Details page.
Edit Snapshot
You can modify the snapshot schedule and retention count. To modify the snapshot settings, follow the steps below.
Click the All Services > Storage > File Storage menu to go to the File Storage Service Home page.
On the Service Home page, click the File Storage menu to go to the File Storage list page.
On the File Storage list page, click the resource whose snapshot settings you want to modify. You are taken to the File Storage details page.
Click the Edit button of the Snapshot item. The Snapshot Edit popup window opens.
Set the Use status, Schedule, and Retention Count, then click the Confirm button.
To create snapshots automatically on a schedule, select Use.
The schedule can be set to daily at a specific hour, or weekly on a specific day of the week and hour.
Retention count: Enter a number between 1 and 128. If not entered, it defaults to 10.
Caution
If the retention count after modification is smaller than before, older snapshots are deleted first.
If you change the setting to unused, previously created snapshots are retained.
Snapshot capacity is included in File Storage usage and incurs charges, so adjust stored capacity by setting the snapshot retention count.
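The retention behavior described above (when the count shrinks, the oldest scheduled snapshots are deleted first) can be sketched as follows, using hypothetical snapshot names ordered oldest to newest:

```shell
retention=2
snapshots="snap-0900 snap-1000 snap-1100 snap-1200"   # hypothetical, oldest -> newest

# Keep only the newest $retention snapshots; older ones would be deleted first.
kept=$(echo "$snapshots" | tr ' ' '\n' | tail -n "$retention" | tr '\n' ' ')
echo "kept: $kept"
```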
Reference
Snapshot schedule is based on Asia/Seoul (GMT +09:00).
Edit Connected Resources
You can connect or disconnect resources. To modify the connected resources, follow the steps below.
Click the All Services > Storage > File Storage menu to go to the File Storage Service Home page.
On the Service Home page, click the File Storage menu to go to the File Storage list page.
On the File Storage list page, click the resource whose connected resources you want to edit. You are taken to the File Storage details page.
Click the Edit button of the Connected Resources item. The Select Connected Resources popup opens.
Select or deselect resources, then click the Confirm button.
You can select multiple resources at the same time.
Caution
Before removing a connected server, connect to the server and perform the disconnection work (umount, network drive disconnection) first. If you disconnect without these OS operations, the connected server may hang (status error). For details on disconnecting the server, refer to Disconnect Server.
Reference
You can connect up to 300 resources in the same location. To connect more than 300, use the API.
After adding a connected server, connect to the server and perform the connection work (mount, network drive connection) before use. For details on server connections, see Connect to Server.
The replica's connected server can be modified when the replication status is Stopped.
The backup's connected server can be modified when the backup status is Stopped.
VPC Endpoint Edit
You can connect or disconnect a VPC Endpoint.
Notice
If the disk type is SSD SAP_S, the VPC Endpoint cannot be modified.
To modify the VPC Endpoint connection, follow the steps below.
Click the All Services > Storage > File Storage menu to go to the File Storage Service Home page.
On the Service Home page, click the File Storage menu to go to the File Storage list page.
On the File Storage list page, click the resource whose VPC Endpoint connection you want to modify. You are taken to the File Storage details page.
Click the Edit button of the VPC Endpoint item. The Select VPC Endpoint popup opens.
Select or deselect VPC Endpoints, then click the Confirm button.
You can select multiple VPC Endpoints at the same time.
Caution
Before releasing a VPC Endpoint, connect to the server and perform the disconnection work (umount, network drive disconnection) first. If you release it without these OS operations, the connected server may hang (status error). For details on disconnecting the server, refer to Disconnect Server.
Even with the high-performance SSD disk type, data sharing is possible by connecting a VPC Endpoint, but note that connections from internal VPC resources are not guaranteed the full performance of the high-performance SSD.
For the high-performance SSD disk type, check the connection IP in the VPC Endpoint section and then apply for the VPC Endpoint service. You can add a VPC Endpoint created based on that connection IP.
Reference
The VPC Endpoint of the replica can be modified when the replication status is stopped.
The VPC Endpoint of the backup can be modified when the backup status is stopped.
File Storage Cancel
You can cancel unused File Storage to reduce operating costs. However, if you cancel the service, the running service may be stopped immediately, so you should consider the impact of service interruption sufficiently before proceeding with the cancellation.
Caution
Be careful because data cannot be recovered after termination.
Service termination is possible for File Storage volumes without connected resources. Please terminate the service after removing all connected servers.
Service termination is possible for File Storage volumes without a VPC Endpoint connection. Remove all VPC Endpoints before terminating the service.
If the volume is in Available, Error, Inactive state, it can be terminated.
The original using Replication or Disk Backup can be terminated after deleting the relevant policy.
Replication policy: Pause > Delete
Backup policy: Pause > Service termination
The replica can be terminated after the replication policy is deleted.
The backup copy can be terminated after the backup policy is stopped.
To cancel File Storage, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage list page.
On the File Storage List page, select the resource to cancel and click the Cancel Service button.
When termination is complete, check on the File Storage List page whether the resource has been terminated.
3.2.2.1 - Connecting to the Server
When using a volume on the server, connection or disconnection work is required. In the File Storage Details page, add the connection server and then connect to the server to perform the connection work (Mount, Network Drive Connection). After use, perform the disconnection work (Umount, Network Drive Disconnection) and then remove the connection server.
Connecting to the Server (Mount, Network Drive Connection)
To use the volume added to the connection server, you must connect to the server and perform the connection work (Mount, Network Drive Connection). Follow the procedure below.
Linux Operating System (NFS)
Server Connection Example Configuration
Mount information: 10.10.10.10:/filestorage
Mount location: /data
Click the All Services > Storage > File Storage menu. Move to the Service Home page of File Storage.
On the Service Home page, click the File Storage menu. Move to the File Storage List page.
On the File Storage List page, click the resource to be used by the connection server. Move to the File Storage Details page.
Check the server in the Connection Server section and connect to it.
Refer to the procedure below to connect (Mount) the volume.
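The connection (Mount) work can be sketched as follows, using the example configuration above (mount information 10.10.10.10:/filestorage, mount location /data). This is a minimal dry-run sketch that prints the commands to run as root; the values are the example's, so substitute the actual mount information shown on the File Storage Details page.

```shell
# Build the NFS mount commands from the example configuration above.
MOUNT_SRC="10.10.10.10:/filestorage"   # Mount information from the Details page
MOUNT_DIR="/data"                      # Mount location on the server

# Commands to run as root on the connection server:
MKDIR_CMD="mkdir -p ${MOUNT_DIR}"
MOUNT_CMD="mount -t nfs ${MOUNT_SRC} ${MOUNT_DIR}"
echo "$MKDIR_CMD"
echo "$MOUNT_CMD"
# After mounting, verify with: df -h /data
```

After running the printed commands as root, `df -h /data` should show the storage IP and export path as the filesystem source.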
Click the All Services > Storage > File Storage menu. Move to the Service Home page of File Storage.
On the Service Home page, click the File Storage menu. Move to the File Storage List page.
On the File Storage List page, click the resource to be used by the connection server. Move to the File Storage Details page.
Check the server in the Connection Server section and connect to it.
Refer to the procedure below to connect (Network Drive Connection) the volume.
Right-click This PC in File Explorer and click Map Network Drive.
Enter the drive and folder based on the detailed information of the File Storage volume to be connected, and then click Finish.
To set up automatic connection at login, select Reconnect at sign-in.
In the Network Credentials pop-up window, enter the account name (ID) and password, and then click OK.
The account name (ID) can be found on the File Storage Details page.
The password is the password set when Creating File Storage.
If you are connecting two or more volumes with the CIFS protocol set, add the Storage IP and Storage name to the hosts file and then connect.
* Mount names
\\10.10.10.10\filestorage1
\\10.10.10.10\filestorage2
* Hosts file entries to add
10.10.10.10 filestoragename1 #description
10.10.10.10 filestoragename2 #description
* Connect using the names written in the hosts file
When connecting filestorage1: \\filestoragename1\filestorage1
When connecting filestorage2: \\filestoragename2\filestorage2
Disconnecting from the Server (Umount, Network Drive Disconnection)
Connect to the server and perform the disconnection work (Umount, Network Drive Disconnection), and then release the connection server from the Console. Follow the procedure below.
Caution
If you release the connection server from the Console without performing the disconnection work (Umount, Network Drive Disconnection) on the server, a server status error (Hang) may occur. Be sure to perform the OS work first.
Linux Operating System (NFS)
Click the All Services > Storage > File Storage menu. Move to the Service Home page of File Storage.
On the Service Home page, click the File Storage menu. Move to the File Storage List page.
On the File Storage List page, click the resource to be released from the connection server. Move to the File Storage Details page.
Check the server information in the Connection Server section and connect to the server.
Refer to the commands listed below to perform the disconnection work (Umount).
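The disconnection (Umount) work can be sketched as follows; /data is the mount location assumed in the connection example, so substitute your actual mount location.

```shell
# Unmount the volume before releasing the connection server in the Console.
MOUNT_DIR="/data"                  # Mount location used when connecting
UMOUNT_CMD="umount ${MOUNT_DIR}"
echo "$UMOUNT_CMD"
# If the unmount fails with "device is busy", list the processes holding
# the mount first, e.g.: fuser -vm /data
```

Only release the connection server in the Console after the unmount has completed successfully.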
Click the All Services > Storage > File Storage menu. Move to the Service Home page of File Storage.
On the Service Home page, click the File Storage menu. Move to the File Storage List page.
On the File Storage List page, click the resource to be released from the connection server. Move to the File Storage Details page.
Check the server information in the Connection Server section and connect to the server.
Refer to the procedure below to disconnect (Network Drive Disconnection) the volume.
Right-click the already connected Network Drive in File Explorer.
Select Disconnect Network Drive.
3.2.2.2 - Using Snapshots
You can create, delete, or restore snapshots of the created File Storage. You can perform these actions on the File Storage Details page and Snapshot List page.
Create Snapshot
You can create a snapshot of the current point in time immediately. To create a snapshot, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource to create a snapshot from. Navigate to the File Storage Details page.
Click the Create Snapshot button. The Create Snapshot popup window opens.
Click the Confirm button. The snapshot is created.
Click the Snapshot List button. Navigate to the File Storage Snapshot List page.
Check the generated snapshot.
Caution
Snapshot charges are included in the File Storage usage fees.
When using replication or disk backup, you cannot recover with a snapshot, and you can only create a recovery volume.
Reference
Snapshot creation time is based on Asia/Seoul (GMT +09:00).
Up to 800 snapshots can be created manually; snapshots generated by schedule are not included in this count. (Up to 128 snapshots can be created automatically via schedule.)
If you want to automatically create snapshots via schedule, set up snapshots on the File Storage Details page.
For detailed information about the snapshot schedule, see Edit Snapshot.
Recover Snapshot
A File Storage volume can be restored to the latest snapshot in Available state. To restore the snapshot, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource to restore from a snapshot. Navigate to the File Storage Details page.
Click the Snapshot List button. Navigate to the File Storage Snapshot List page.
Check the latest snapshot in Available state. The volume will be restored with that snapshot.
Click the Snapshot Recovery button. The Snapshot Recovery popup opens.
After checking the Snapshot name and creation date/time, click the Confirm button.
When recovery starts, the status becomes Reverting; when recovery completes, the status returns to Available.
Caution
If you want to recover using a snapshot that is not the latest snapshot, recovery is possible by creating a recovery volume.
When restoring, it is restored to the latest snapshot in Available state, and restoration is not possible in the following situations.
If the File Storage volume is not in Available state
If there are no recoverable snapshots
If the latest snapshot changes during recovery creation
If the latest snapshot is not in Available state
Create snapshot recovery volume
You can create a volume using a snapshot. To create a snapshot recovery volume, follow these steps.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource to restore from a snapshot. Navigate to the File Storage Details page.
Click the Snapshot List button. It navigates to the File Storage Snapshot List page.
After checking Snapshot Name and Creation Date/Time, click the More button of the snapshot you want to restore.
Click the Confirm button. You will be taken to the Create File Storage page.
On the Create File Storage page, enter the information required to create the service and select detailed options.
Please enter the volume name and password.
The disk type and protocol are set the same as the original and cannot be modified.
Category
Required
Detailed description
Volume Name
Required
Volume Name
Start with a lowercase English letter and use lowercase letters, numbers, and the special character (_) to enter 3 to 21 characters
Generated as ‘user input value + {6-character UUID composed of lowercase English letters and numbers}’
Cannot be modified after service creation
Disk Type
Required
Select Disk Type
HDD: Standard Volume
SSD: High-performance Standard Volume
Cannot be modified after service creation
When creating a service via snapshot recovery volume creation, it is set identical to the original and cannot be modified
Protocol
Required
Protocol for sharing volumes over the network from the server
NFS: primarily used on Linux operating systems
CIFS: primarily used on Windows operating systems
When creating a service via snapshot recovery volume creation, it is set identical to the original and cannot be modified
Protocol > Password
Required
Set account password for volume access when using CIFS protocol
Enter 6~20 characters including letters, numbers and special characters (excluding $%{}[]"\)
Protocol > Password Confirmation
Required
Account Password Confirmation
Re-enter the password identically
Recovery Snapshot Name
Optional
Name of the recovery snapshot used when creating a volume
Provide the recovery snapshot name when creating a service through snapshot recovery volume creation
Table. File Storage Service Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When creation is complete, check the created resource on the File Storage List page.
Delete Snapshot
You can select a snapshot to delete. To delete a snapshot, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource whose snapshot you want to delete. Navigate to the File Storage Details page.
Click the Snapshot List button. It navigates to the File Storage Snapshot List page.
After checking the Snapshot name and Creation date/time, click the More button of the snapshot you want to delete.
Click the Delete button. The snapshot will be removed from the Snapshot List page.
Reference
SnapMirror files cannot be deleted while replication is in use.
3.2.2.3 - Restoring Files
You can restore data at the file level using the created snapshot. This task can be performed on the File Storage Details page.
Using File-Level Restoration
You can activate the file-level restoration feature and use it. After activating the feature, connect to the server and select and restore the data. To restore files, follow these steps:
Click the All Services > Storage > File Storage menu. You will be taken to the Service Home page of File Storage.
On the Service Home page, click the File Storage menu. You will be taken to the File Storage List page.
On the File Storage List page, click the resource you want to restore. You will be taken to the File Storage Details page.
Click the More button at the top right and then click the File-Level Restoration button. The File-Level Restoration popup window will open.
Click the Confirm button. This will activate the file-level restoration feature.
While the file-level restoration is being activated, the File-Level Restoration button will be displayed as Deactivate File-Level Restoration.
Check the server information in the Connected Server section and connect to the server.
Refer to the following procedure to restore the data:
Connect to the server and check the mount name of File Storage.
Move to the snapshot location under the mount name.
# cd /Mount_name/.snapshot/snapshot_name
In the snapshot location, check the file to be restored and restore it to the desired path.
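The restore step can be sketched as a copy out of the read-only snapshot directory. The mount name, snapshot name, and file name below are placeholders; substitute the values checked on the server and on the Snapshot List page.

```shell
# Copy a file from the snapshot directory back to the live volume (dry run).
MOUNT_DIR="/data"                                # mount name (placeholder)
SNAP_DIR="${MOUNT_DIR}/.snapshot/snapshot_name"  # snapshot location (placeholder)
RESTORE_CMD="cp -p ${SNAP_DIR}/lost_file.txt ${MOUNT_DIR}/lost_file.txt"
echo "$RESTORE_CMD"
# -p preserves the ownership and timestamps of the restored file.
```

Run the printed command on the connected server; the snapshot directory itself is read-only, so the copy cannot modify the snapshot.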
Deactivating File-Level Restoration
After using the file-level restoration feature, you can deactivate it. To deactivate file-level restoration, follow these steps:
Click the All Services > Storage > File Storage menu. You will be taken to the Service Home page of File Storage.
On the Service Home page, click the File Storage menu. You will be taken to the File Storage List page.
On the File Storage List page, click the resource you want to deactivate file-level restoration for. You will be taken to the File Storage Details page.
Click the More button at the top right and then click the Deactivate File-Level Restoration button. The Deactivate File-Level Restoration popup window will open.
Click the Confirm button. This will deactivate the file-level restoration feature.
3.2.2.4 - Using Disk Backup
After creating a backup of File Storage on a backup-dedicated HDD Storage, you can periodically store snapshots. You can perform tasks on the File Storage Details page and the Disk Backup page.
Caution
Backup copies created through Disk Backup can be restored using the File-level Restore feature. For detailed information about File-level Restore, see File-level Restore.
Guide
Disk backup is possible only when the original volume's disk type is HDD or SSD. Replicas and high-performance SSD volumes cannot be used.
Up to 2 backup copies can be created.
Create Disk Backup
You can create a backup volume on the original or another location’s backup-only HDD storage.
To create a backup volume, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource to create a backup for. Navigate to the File Storage Details page.
Click the Create Disk Backup button. The Create Disk Backup popup window opens.
After entering the backup information, click the Confirm button. A backup copy of the HDD disk type is created.
Backup Location: Select the location where the backup will be created.
Backup frequency: Choose from 1 hour, daily, weekly, or monthly. The backup is performed at the selected interval.
Daily: every day at 23:59:00
Weekly: every Sunday at 23:59:00
Monthly: on the 1st of each month at 23:59:00
Click the Disk Backup tab. Navigate to the Disk Backup page.
Check the Disk Backup information.
When selecting the volume name of the original or backup, you will be taken to the File Storage Details page of that volume.
Caution
When using disk backup, you cannot recover with a snapshot and can only create a recovery volume. If snapshot recovery is needed, cancel the backup and recover with a snapshot.
Reference
A backup of HDD type is created regardless of the original disk type.
You can create 2 backups per volume, and data transfer fees are added when backing up across regions.
Modify Disk Backup Policy
You can change the backup status by modifying the backup policy. To modify the backup policy, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource whose backup policy you want to modify. Navigate to the File Storage Details page.
Click the Disk Backup tab.
Click the Edit button of the Backup Policy. The Disk Backup Policy Edit popup window opens.
Use: Performs the backup. If the status is Pause, it can be changed to Use.
Pause: Temporarily pauses the backup. If the status is Use, it can be changed to Pause.
Delete: Deletes the backup. If the status is Pause, it can be changed to Delete; after deletion, the backup cannot be used again.
Check the modified backup policy on the Disk Backup page of File Storage.
Reference
The Backup Policy and Backup Schedule can be modified on the backup copy when the backup status is Completed or Stopped.
Modify Disk Backup Schedule
Through modifying the backup schedule, you can change the synchronization cycle between the original and the backup. To modify the backup schedule, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource whose backup cycle you want to modify. Navigate to the File Storage Details page.
Click the Disk Backup tab.
Click the Edit button of Backup Cycle. The Backup Cycle Edit popup window opens.
Backup frequency: Choose from 1 hour, daily, weekly, or monthly. Snapshots are created at the selected interval.
* Daily: every day at 23:59:00
* Weekly: every Sunday at 23:59:00
* Monthly: on the 1st of each month at 23:59:00
Check the modified backup schedule on the Disk Backup page of File Storage.
Reference
The Backup Policy and Backup Frequency can be modified on the backup copy when the backup status is Completed or Stopped.
Modify Disk Backup Retention Count
You can set the number of snapshots retained in the backup. To modify the backup retention count, follow these steps.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource whose backup retention count you want to modify. Navigate to the File Storage Details page.
Click the Disk Backup tab.
Click the Edit button of Backup Retention Count. The Edit Disk Backup Retention Count popup window opens.
Enter the backup retention count.
* The retention count can be set from 1 to 128.
Check the modified backup retention count on the Disk Backup tab of the File Storage Details page.
3.2.2.5 - Using Replication
You can create a replica of File Storage at another location and synchronize it periodically. You can perform tasks on the File Storage Details page and the Replication page.
Reference
The kr-south region does not provide the File Storage replication feature.
Create Clone
You can create a replica volume at a different location. To create a replica volume, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource to create a replica from. Navigate to the File Storage Details page.
Click the Create Clone button. The Create Clone popup opens.
Enter the Replication Location, Replication Volume Name, Password, and Replication Cycle, then click the Confirm button. A replica with the same disk type is created.
Replication Location: Select a location different from the original File Storage volume.
Clone Volume Name: Start with a lowercase English letter and use lowercase letters, numbers, and the special character (_) to enter 3 to 21 characters.
Password: This is the account password for accessing CIFS volumes. Enter 6 to 20 characters including letters, numbers, and special characters (excluding $%{}[]"\).
Replication cycle: Choose from 5 minutes, 1 hour, daily, weekly, or monthly. Replication will be performed at the selected interval.
Daily: Daily 23:59:00
Weekly: Every Sunday 23:59:00
Monthly: 1st of each month 23:59:00
Click the Replication tab. Navigate to the File Storage Replication page.
Check the replication information.
When selecting the volume name of the original or replica, you will be taken to the volume’s File Storage Details page.
Caution
When using replication, you cannot recover with a snapshot and can only create a recovery volume. If snapshot recovery is needed, delete the replication policy and recover with a snapshot.
Reference
When creating a replica, a replica with the same disk type and protocol is created.
One replica can be created per volume, and data transfer fees are added for cross-region replication.
Edit Replication Policy
Through modifying the replication policy, you can change the replication status. To modify the replication policy, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. You will be taken to the File Storage list page.
On the File Storage List page, click the resource to modify the replication policy. You will be taken to the File Storage Details page.
Click the Replication page. You will be taken to the File Storage Replication page.
Click the Edit button of Replication Policy. The Edit Replication Policy popup window opens.
Use: Performs replication. If the status is Pause, it can be changed to Use.
Pause: Temporarily pauses replication. If the status is Use, it can be changed to Pause.
Delete: Deletes the replication. If the status is Pause, it can be changed to Delete; after deletion, replication cannot be used again.
Check the modified replication policy on the File Storage Replication page.
Caution
After deletion, the replica becomes the original, so you cannot replicate to the previous configuration again by modifying the replication policy.
Data stored only in the replica after pausing will be deleted if replication is used again.
While the replication policy is set to Use, the replica is read-only and its data cannot be changed. Unmount all connected resources before using replication.
Reference
If the replication policy is stopped or the replication status is completed, you can modify the policy and schedule on the replica.
Modify Replication Cycle
Through modifying the replication cycle, you can change the synchronization cycle between the original and the replica. To modify the replication cycle, follow the steps below.
Click the All Services > Storage > File Storage menu. Navigate to the Service Home page of File Storage.
Click the File Storage menu on the Service Home page. Navigate to the File Storage List page.
On the File Storage List page, click the resource whose replication cycle you want to modify. Navigate to the File Storage Details page.
Click the Replication page. Navigate to the File Storage Replication page.
Click the Edit button of Replication Cycle. The Replication Cycle Edit popup opens.
Replication Cycle: Choose from 5 minutes, 1 hour, daily, weekly, monthly. Replication will be performed at the selected interval.
Daily: Daily 23:59:00
Weekly: Every Sunday 23:59:00
Monthly: 1st of each month 23:59:00
Check the modified Replication Cycle on the File Storage Replication page.
Reference
If the replication policy is stopped or the replication status is completed, you can modify the policy and schedule on the replica.
3.2.3 - API Reference
API Reference
3.2.4 - CLI Reference
CLI Reference
3.2.5 - Release Note
File Storage
2025.10.23
FEATURE Add disk type and provide ServiceWatch integration
The SSD SAP_E disk type, a SAP Account dedicated volume, has been added.
For SAP Account dedicated volumes, when a failure causes a failover, the LIF (storage mount IP) is transferred automatically.
Available only for SAP Accounts.
ServiceWatch service integration provision
You can monitor data through the ServiceWatch service.
2025.07.01
FEATURE Add disk type and disk backup feature
The high-performance SSD disk type has been added and can be used by connecting to a Multi-node GPU Cluster.
Through the Disk Backup feature, you can store snapshots in backup-dedicated HDD Storage, and you can select a location other than the original's.
2025.02.27
FEATURE Add disk type, replication, and VPC Endpoint connection features
File Storage feature change
SSD disk type has been added, allowing you to select the disk type according to the purpose.
You can create an identical replica volume at a different location and set the data replication cycle.
Through a VPC Endpoint connection, you can use File Storage from an external network.
Samsung Cloud Platform Common Feature Change
Common CX changes have been applied to Account, IAM, Service Home, tags, and more.
2024.10.01
NEW File Storage Service Official Version Release
Because it automatically expands or shrinks based on usage, users can use the volume without capacity limits.
You can select the connection target through the access control function.
2024.07.02
NEW Beta Version Release
We have launched the File Storage service, a storage that allows multiple client servers to share files via network connection.
3.3 - Object Storage
3.3.1 - Overview
Service Overview
Object Storage is an object storage service that allows users to easily store and use their desired data, with URL-based access for convenient data management.
It enables search and retrieval of large-scale data, and provides features such as encryption and version management.
It offers both Public and Private URLs, and Public URLs can be accessed even from the internet environment.
Key Features
S3 API Utilization: Easy and fast access from applications through a RESTful API; compatible with Amazon S3, so it can be used easily in applications integrated with Amazon S3.
Secure Usage: Provides encryption (SSE-S3), access control, and Public/Private access features, making it suitable for safely storing user data or backup data for service recovery.
Cost Efficiency: Users do not need to pre-set bucket capacity, and storage space is provided with efficient pricing that charges based on actual usage.
Replication: Can perform replication to buckets in different locations or the same location. Multiple replication policies can be set, and if the original bucket cannot be used due to failure or disaster, the service can be provided through the replicated bucket.
Architecture Diagram
Fig. Object Storage Architecture Diagram
Provided Features
Object Storage provides the following features.
Storage Management: Provides features for creating Object Storage, creating folders, deleting folders, uploading files, downloading files, and deleting files.
Version Management: When version management is used, all versions of uploaded files are managed. Previous files can be easily downloaded through the version list.
Encryption: When encryption is enabled, encryption is provided via SSE-S3 method.
Access Control: When access control is used, you can directly enter Public IPs allowed to access Object Storage, or select resources within the same Account allowed to access (Virtual Server, Bare Metal Server, VPC Endpoint, etc.).
Replication: Can perform replication to buckets in different locations or the same location.
Multiple replication policies can be set
Permission Management: By default, Private permission is provided, and Public permission and permission management features are provided.
Private Permission: Allows file sharing and downloading only to users who know the authentication key
Public Permission: Allows file sharing and downloading to anyone worldwide by accessing the file’s URL
Monitoring: Monitoring information such as total file count, data amount (Bytes), and HTTP Method request count can be checked through the Cloud Monitoring service.
ServiceWatch Service Integration: Data can be monitored through the ServiceWatch service.
Components
Authentication Key
The authentication key is an essential element that must be created in advance to use Object Storage. The purpose of using the authentication key is as follows.
An authentication key is required to create and access the Object Storage service from Samsung Cloud Platform Console.
The API provided by Object Storage is compatible with Amazon S3, and tools using Amazon S3 can be used in the same way. In this case, authentication key input is required, and it is used as a tool to identify users with permissions.
For detailed instructions on creating and checking authentication keys, see How-to guides > Create Authentication Key.
Bucket
A Bucket is the top-level folder, and all folders and files exist under the bucket. When you create Object Storage service in Samsung Cloud Platform Console, a bucket is created, and you can then upload folders or files. Bucket naming rules are as follows:
Bucket names must be at least 3 characters and at most 63 characters.
Bucket names can only consist of lowercase letters, numbers, periods ., and hyphens -.
Bucket names must start with a lowercase letter or number.
Bucket names must not have two periods adjacent to each other.
Bucket names must not end with a period or hyphen.
Bucket names must not have periods and hyphens adjacent to each other.
Bucket names must not use IP address format (e.g., 192.168.x.x).
Bucket names cannot use the name admin.
Bucket names must be unique within an Account/Region.
Previously used bucket names can be reused after 1 hour.
Valid bucket name examples:
cpexamplebucket1
scp-example-bucket-01
my-scp-object-storage
Invalid bucket name examples:
scp_example_bucket (includes underscore)
DocExampleBucket (includes uppercase)
-scp-example-bucket (starts with hyphen)
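As a rough illustration, part of the naming rules above can be checked with a single pattern. This sketch covers only the length, character set, and first/last character rules; the adjacency rules (such as `..` and `.-`), the IP-address format ban, and the `admin` ban are not covered.

```shell
# Simplified bucket-name check: 3-63 characters, lowercase letters,
# digits, periods, and hyphens; must start and end with a letter or digit.
is_valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

is_valid_bucket_name "scp-example-bucket-01" && echo "valid"
is_valid_bucket_name "DocExampleBucket"      || echo "invalid (uppercase)"
```

A full validator would additionally reject adjacent periods, period-hyphen pairs, IP-address-shaped names, and the reserved name admin.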
Folder
Folders are used to logically group files. Folder naming rules are as follows:
Folder names can consist of Korean, English, numbers, and special characters.
Special characters that cannot be entered are as follows:
Special Characters Not Allowed in Folder Names
Percent sign %
Ampersand &
Question mark ?
Exclamation mark !
Less than sign <, greater than sign >
Slash /
Equal sign =
Plus sign +
Dollar sign $
Pound sign #
Backtick `
Caret ^
Vertical bar/pipe |
Left curly brace {, right curly brace }
Left square bracket [, right square bracket ]
File
A File refers to data stored in Object Storage and is the same as a regular file. File naming rules are as follows:
File names can consist of Korean, English, numbers, and special characters.
Special characters that cannot be entered are as follows:
Special Characters Not Allowed in File Names
Percent sign %
Ampersand &
Question mark ?
Exclamation mark !
Less than sign <, greater than sign >
Slash /
Equal sign =
Plus sign +
Dollar sign $
Pound sign #
Backslash \
Backtick `
Caret ^
Vertical bar/pipe |
Left curly brace {, right curly brace }
Left square bracket [, right square bracket ]
Folder names and file names are separated by a slash /. The following are examples of valid folder names and file names:
Examples of Mixed Folder and File Name Usage
3scp-example
my.happy_photo-2024/20240101.jpg
video/2024/video01.wmv
Note
The path length including folder name, file name, and delimiter (/) is limited to 1,024 Bytes (based on UTF-8 Encoding).
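The 1,024-byte limit applies to the full object path (folder names, delimiters, and file name) in UTF-8 bytes, not characters; `wc -c` counts bytes, which matters for multi-byte characters such as Korean. A quick check, using the example path from above:

```shell
# Check whether an object path fits within the 1,024-byte UTF-8 limit.
KEY="my.happy_photo-2024/20240101.jpg"        # example path from above
LEN=$(printf '%s' "$KEY" | wc -c | tr -d ' ') # wc -c counts bytes
if [ "$LEN" -le 1024 ]; then
  echo "ok: ${LEN} bytes"
else
  echo "too long: ${LEN} bytes"
fi
```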
URL
You can access Object Storage buckets through URLs. Public and Private URLs are provided, allowing access not only from the same Samsung Cloud Platform environment but also from external internet environments. The URL structure is composed as follows:
When accessing files with Public Access enabled without an authentication key (Access Key, Secret Key), the account ID must be entered; otherwise, access is possible without the account ID.
Upload API: 5 GB per single upload operation, 5 TB with Multipart
Number of files in a bucket: up to 200 million
Table. Object Storage Constraints
Warning
It is recommended to store up to 200 million files in a bucket. If exceeding 200 million, severe performance degradation may occur, so manage the number of files.
When using S3 Backend Filesystem solutions (ex. s3fs, objectivefs, etc.), it is recommended not to use version management. Performance degradation may occur when using version management.
Note
After changing IAM permissions, it may take up to 30 seconds for the change to be applied when executing the Amazon S3 API.
Note
Korea South3 (kr-south3) region constraints
File upload and download functions through Samsung Cloud Platform Console are limited.
S3 API/CLI usage using Public URL is limited.
However, Private URL access through resources created in Samsung Cloud Platform Console (Virtual Server, etc.) is possible.
Korea South1 (kr-south1), Korea South2 (kr-south2) region constraints
Separate firewall settings must be allowed for Public URL access.
Object Storage provides functions such as service creation, list retrieval, folder list retrieval, folder creation, file upload, download, etc., through the Samsung Cloud Platform Console. Additionally, these functions are also provided via an API compatible with Amazon S3. Therefore, tools that use Amazon S3 can be used in the same way. To use Amazon S3’s utility, you need to create and verify an authentication key. For details, see Create Authentication Key.
Caution
When using an Amazon S3 utility, you must use the following versions. Using a different version may limit some features, so be careful.
SDK v2: 2.22.x or lower
SDK v1: 1.12.781 or lower
CLI v2: 2.22.x or lower
CLI v1: 1.36.x or lower
SDK for JavaScript v3: 3.728.0 or lower
SDK for Python (Boto3): 1.35.x or lower
Amazon S3 API
The list of Amazon S3 APIs supported by Samsung Cloud Platform Object Storage service is as follows.
Reference
For detailed information about the Amazon S3 API, please refer to Amazon S3 API Guide.
Category
Detailed description
head-bucket
Bucket Information Lookup
list-buckets
List bucket
get-bucket-versioning
Bucket versioning query
put-bucket-versioning
Modify bucket versioning
get-bucket-encryption
Bucket encryption settings query
put-bucket-encryption
Apply bucket encryption settings
delete-bucket-encryption
Delete bucket encryption setting
copy-object
Object copy, move, rename
put-object
Create object
get-object
Object download
list-objects
Object list query
head-object
Object detailed view
get-object-acl
Object ACL query
delete-object
Delete Object
If versioning is enabled, deleting a file adds a Delete Marker to the file and the Delete Marker becomes the latest version
If permanent deletion of a file is required, delete by specifying the version ID
list-object-versions
Object version list query
delete-object
Delete object version
presign
PUT object Presigned URL issuance
get-bucket-acl
Bucket public permission check
create-bucket
Create bucket
delete-bucket
Delete bucket
get-bucket-cors
Bucket CORS (Cross-Origin Resource Sharing) configuration check
put-bucket-cors
Create bucket CORS (PUT)
delete-bucket-cors
Bucket CORS Delete
put-bucket-tagging
Bucket tagging creation
get-bucket-tagging
Bucket tagging query
delete-bucket-tagging
Delete bucket tagging
put-bucket-website
Create bucket website
get-bucket-website
Bucket website query
delete-bucket-website
Delete bucket website
get-bucket-policy-status
Bucket policy status query
put-bucket-acl
Create bucket ACL
create-multipart-upload
Multipart upload creation
upload-part
Multipart upload execution
complete-multipart-upload
Multipart upload completed
list-multipart-uploads
Multipart upload list
abort-multipart-upload
Delete incomplete multipart upload
put-object-tagging
Object tagging creation
get-object-tagging
Object tagging query
delete-object-tagging
Delete object tagging
list-objects-v2
Object query (v2)
put-object-acl
Create object ACL
list-parts
Parts lookup
put-public-access-block
Public access block creation
get-public-access-block
Public access block query
delete-public-access-block
Public access block deletion
put-bucket-lifecycle
Create bucket lifecycle (only Expiration rule can be used)
get-bucket-lifecycle
Bucket Lifecycle Query
delete-bucket-lifecycle
Bucket Lifecycle Delete
put-bucket-replication
Modify bucket replication policy
When using replication-configuration, the following items need to be checked
The following table shows the monitoring metrics of Object Storage that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, please refer to the Cloud Monitoring guide.
Metric Name
Description
Unit
Objects
Number of objects stored in a bucket
cnt
Bucket Used
Amount of data stored in a bucket (bytes)
bytes
Requests [Upload Avg]
Average upload usage per bucket
bytes
Requests [Download Avg]
Average download usage per bucket
bytes
Requests [Total]
Total number of HTTP requests executed on a bucket
cnt
Requests [Get]
Number of HTTP GET requests executed on objects in a bucket
cnt
Requests [Head]
Number of HTTP HEAD requests executed on objects in a bucket
cnt
Requests [List]
Number of LIST requests executed on objects in a bucket
cnt
Requests [Post]
Number of HTTP POST requests executed on objects in a bucket
cnt
Requests [Put]
Number of HTTP PUT requests executed on objects in a bucket
cnt
Requests [Delete]
Number of HTTP DELETE requests executed on objects in a bucket
cnt
Table. Object Storage Monitoring Metrics
3.3.1.3 - ServiceWatch metric
Object Storage sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at a 1-minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Indicators
The following are the basic metrics for the namespace Object Storage.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. Object Storage Basic Metrics
3.3.2 - How-to guides
The user can enter the required information for Object Storage through the Samsung Cloud Platform Console, select detailed options, and create the service.
Creating an access key
To create and use the Object Storage service in the Samsung Cloud Platform Console, you need to generate an authentication key in advance.
Authentication key creation can be done from My Menu > My Info. > Authentication Key Management > Create Authentication Key. For more details, see IAM > Create Authentication Key.
Reference
The authentication key (Access Key, Secret Key) is used for authentication with Amazon S3 utilities.
The authentication key is used not only for Object Storage, but also for authentication in OpenAPI and CLI.
Up to 2 authentication keys can be generated.
Caution
If the authentication key expires, access rights to the Object Storage service will be restricted. To ensure smooth service usage, check the authentication key’s expiration period in advance.
If you disable the authentication key, access rights to the Object Storage service will be restricted.
Object Storage Create
You can create and use the Object Storage service in the Samsung Cloud Platform Console.
To create Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Object Storage Service Home page.
On the Service Home page, click the Create Object Storage button. You will be taken to the Create Object Storage page.
On the Create Object Storage page, enter the information required to create the service.
Category
Required
Detailed description
Bucket Name
Required
Bucket name created by the user
Must start with a lowercase English letter or digit, and consist of lowercase English letters, digits, hyphens (-), and periods (.), 3 to 63 characters long
A period (.) cannot appear two or more times consecutively
Table. Object Storage Required Information Input Items
Caution
In the Archive Storage service, if you create a bucket with a name that is already in use as an Archiving target, be aware that the configured Archiving policy will be applied.
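A bucket name can be validated against these rules before creation. A minimal Python sketch (the regex and function name are illustrative, derived from the rules in the table above):

```python
import re

# Check a bucket name against the rules above: 3-63 characters,
# lowercase letters / digits / "-" / ".", starting with a lowercase
# letter or digit, and no two consecutive periods.
BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{2,62}$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME.match(name)) and ".." not in name

print(is_valid_bucket_name("my-bucket.2024"))  # True
print(is_valid_bucket_name("My_Bucket"))       # False: uppercase and "_"
```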
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When creation is complete, check the created resource on the Object Storage List page.
Object Storage Check Detailed Information
In the Object Storage service, you can view and edit the full resource list and detailed information. The Object Storage Details page consists of the Details, Folder List, and Tag tabs.
To view detailed information of the Object Storage service, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Object Storage Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource whose detailed information you want to view. You will be taken to the Object Storage Details page.
The Object Storage Details page displays status information and additional feature information, and consists of the Details, Folder List, Replication, and Tag tabs.
Category
Detailed description
Bucket status
Bucket status
Active: Available state
Service cancellation
Button to cancel the service
Table. Status Information and Additional Functions
Note
Object Storage resources do not support operation history. If necessary, please check via the Logging & Audit service. For more details, see Logging & Audit > How-to Guides.
Detailed Information
On the Object Storage List page, you can view the detailed information of the selected resource and, if necessary, edit it.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In the Object Storage service, it refers to a bucket SRN
Resource Name
Resource Name
In Object Storage service, it refers to the bucket name
Resource ID
Unique resource ID in the service
Bucket Name
Bucket name created by the user
Category
Field that distinguishes original or replica
Currently only the original exists, and a replica will be added when the DR feature is introduced later
Usage
Total data usage of the bucket
Encryption
Encryption usage information
When encryption is used, SSE‑S3 encryption key method and AES256 encryption algorithm are applied
Encryption settings can be configured on the Object Storage Details page after creating Object Storage
Disable: Disables the policy (displayed as Enable when disabled)
Edit: Edits the replication target, target files, and options of the replication policy
For details on editing replication policies, see Edit Replication Policy
Delete: Deletes the policy
For details on deleting replication policies, see Delete Replication Policy
Table. Object Storage Replication Information Tab Items
Tag
On the Object Storage List page, you can view the tag information of the selected resource, and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
Tag’s Key and Value information can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of Keys and Values
Table. Object Storage Tag Tab Items
Work History
Reference
Object Storage resources do not support operation history. If needed, please check via the Logging & Audit service. For more details, see Logging & Audit > How-to Guides.
Object Storage Encryption Setup
You can configure a bucket so that the data stored in it is encrypted. After bucket encryption is set, the encryption setting is applied to newly uploaded data. When encryption is used, the SSE-S3 encryption key method and the AES256 encryption algorithm are applied.
Reference
Object Storage bucket encryption comes in two types: server-side encryption using Amazon S3 managed keys (SSE-S3) and server-side encryption using Key Management Service (KMS) keys (SSE-KMS). This service currently provides SSE-S3 as the default method; SSE-KMS will be provided later according to the service roadmap.
Information
If there is data saved before bucket encryption was set, the encryption settings will not be applied.
If you re-upload the file, encryption will be applied.
To set encryption for existing data, you need to re-upload it.
To use bucket encryption in Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Object Storage Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) for which to use encryption. The Object Storage Details page will open.
On the Object Storage Details page, check whether encryption is not in use.
If Encryption is not in use, click the Edit button. The Encryption Edit popup opens.
Check Encryption Use, then click the Confirm button.
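The same setting can be applied through the S3-compatible put-bucket-encryption API listed in the API table. A sketch of the configuration payload (the bucket name is illustrative); per the note above, SSE-S3 with AES256 is the method currently provided:

```python
# Configuration body for the put-bucket-encryption API: SSE-S3 with
# the AES256 algorithm, matching the console encryption setting.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256",  # SSE-S3 managed keys
            }
        }
    ]
}

# With a configured Boto3 client this would be applied as, e.g.:
# s3.put_bucket_encryption(
#     Bucket="my-bucket",
#     ServerSideEncryptionConfiguration=encryption_config,
# )
rule = encryption_config["Rules"][0]
print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
```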
Object Storage Cancel
You can cancel unused Object Storage to reduce operating costs. However, if you cancel the service, the running service may be immediately stopped, so you should consider the impact of service interruption sufficiently before proceeding with the cancellation.
Caution
Service termination is possible only for buckets with no stored data.
If a file is being uploaded, the upload will be canceled.
Please be careful as data cannot be recovered after deletion.
To cancel Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Object Storage Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, select the resource (bucket) to cancel, and click the Cancel Service button.
Enter the bucket name to confirm termination.
If you have entered the bucket name correctly, the Confirm button will be activated. Click the Confirm button.
When termination is complete, check on the Object Storage List page that the resource has been terminated.
3.3.2.1 - Access Control
If you set bucket access control to enabled, only resources that are allowed access can access the bucket. You can set it to allow access by entering a public IP or for resources created in the Samsung Cloud Platform Console.
Set up access control
You can set bucket access control to enabled.
To set up access control in Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. Navigate to the Object Storage list page.
Click the resource (bucket) to set access control on the Object Storage List page. It navigates to the Object Storage Details page.
Verify that Access Control is Unused on the Object Storage Details page.
Click the Edit button if Access Control is Unused. The Edit Access Control popup opens.
After checking Access Control Use, click the Confirm button. On the Object Storage Details page, Access Control will be changed to Use.
Notice
If you change access control to Use, you can set access permissions for Public IPs, service resources, and the Cloud Functions service.
Register Public IPs or service resources to allow access, or set whether to allow access for the Cloud Functions service.
Category
Detailed description
Public IP Allow
Add registered Public IP or CIDR
Example: 192.168.x.x, 192.168.x.x/24
Allow Service Resources
Select service resources created in the same Account/Region
Service: Service Name
Example: Virtual Server, GPU Server, Bare Metal Server, Multi-node GPU Cluster, VPC Endpoint, PostgreSQL, MariaDB, MySQL, EPAS, Microsoft SQL Server
Resource Name: Name of the service resource
Allow Cloud Functions service
Setting whether to allow Object Storage access to modify Java Runtime code in Cloud Functions service
When allowed, the Cloud Functions service can load Java Runtime executable files stored in Object Storage
Table. Access Control Items
Reference
If you modify the access permission, it may take up to 30 seconds for the changes to be completed.
Reference
South Korea (kr-south) region constraints
South Korea (kr-south) region does not provide Cloud Functions service, so the Cloud Functions Service Allowance feature cannot be used.
Allow Public IP Access
If bucket access control is set to enabled, you can add a public IP allowance.
To add a Public IP access permission in Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. Navigate to the Object Storage list page.
On the Object Storage List page, click the resource (bucket) for which to set access control. You will be taken to the Object Storage Details page.
On the Object Storage Details page, check whether access control is enabled.
If access control is not in use, click the Edit button, then change access control to Enabled in the Access Control popup.
The Allow IP Access, Allow Service Resources, and Allow Cloud Functions Service lists are displayed only when access control is enabled.
Click the Edit button in Public IP Allow. The Public IP Allow Edit popup window opens.
Enter the Public IP to allow access, and click the Add button.
Column
Required
Detailed description
Public IP Allowed
Required
Enter as a single IP or CIDR format (up to 150 entries)
192.168.x.x (IP format)
192.168.x.x/24 (CIDR format)
Table. Public IP Allowance Edit Popup Input Items
Check the items added to the list and press the Confirm button.
Check the added Public IP in the Object Storage Details page’s Access Control > Allow Public IP list.
Reference
If you modify the Public IP allowance, it may take up to 30 seconds for the changes to be completed.
Public IPs are allowed up to a maximum of 150.
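Entries can be sanity-checked before registration. A small sketch (function name illustrative) using Python's standard ipaddress module to verify the single-IP/CIDR format and the 150-entry limit from the table above:

```python
import ipaddress

# Validate "Public IP Allow" entries: each must be a single IP or a
# CIDR block, and at most 150 entries can be registered.
def validate_allow_list(entries: list[str]) -> bool:
    if len(entries) > 150:
        return False
    for entry in entries:
        try:
            # Accepts both "192.168.0.10" and "192.168.0.0/24".
            ipaddress.ip_network(entry, strict=False)
        except ValueError:
            return False
    return True

print(validate_allow_list(["192.168.0.10", "192.168.0.0/24"]))  # True
print(validate_allow_list(["not-an-ip"]))                        # False
```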
Allow access to service resources
If bucket access control is set to enabled, you can add service resources in the allowed service resources.
To allow access to service resources in Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) for which to set access control. You will be taken to the Object Storage Details page.
On the Object Storage Details page, check whether access control is enabled.
If access control is not in use, click the Edit button, then change access control to Enabled in the Access Control popup.
The Allow IP Access, Allow Service Resources, and Allow Cloud Functions Service lists are displayed only when access control is enabled.
Click the Edit button in Allow Service Resources. The Select Service Resources popup opens.
Notice
The allowed criteria per service are as follows.
Virtual Server/GPU Server/Bare Metal Server/Multi-node GPU Cluster: Allowed per server
VPC Endpoint: Allow per VPC Endpoint
PostgreSQL, MariaDB, MySQL, EPAS, Microsoft SQL Server: Allowed per cluster
To access Object Storage from the server, the following tasks are required.
Verify Object Storage IP via nslookup command on the server
Register rule through Security Group or Firewall service and apply to server
Target address: the Object Storage IP confirmed via the nslookup command above
Direction : Outbound
Service : TCP 80, 443 (80 when using http / 443 when using https)
Caution
Permission and revocation of service resource access are possible when each service's status is as follows. If the service is not in one of these statuses, previously permitted service resources may also be affected.
Virtual Server/GPU Server: statuses other than Build, Building, Networking, Scheduling, Block_Device_Mapping, Spawning, Deleting, and Error
Bare Metal Server/Multi-node GPU Cluster: Running, Starting, Stopping, Stopped
Select the server to allow access, and press the Confirm button.
Check the added server in the Object Storage Details page’s Access Control > Service Resource Allow list.
Reference
Modifying service resource permissions may take up to 30 seconds for changes to be completed.
Up to 150 service resources are allowed.
Allow Cloud Functions Service Access
If access control on the bucket is set to enabled, you can allow the Cloud Functions service to access Object Storage.
To allow the Cloud Functions service to access Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) for which to set access control. You will be taken to the Object Storage Details page.
On the Object Storage Details page, check whether access control is enabled.
If access control is disabled, click the Edit button, then change access control to Enabled in the Access Control popup.
The Allow IP Access, Allow Service Resources, and Allow Cloud Functions Service lists are displayed only when access control is enabled.
Click the Edit button in Cloud Functions Service Allow. The Cloud Functions Service Edit popup opens.
After checking Allow, click the Confirm button.
Reference
When the access permission setting for the Cloud Functions service is completed, the Cloud Functions service can retrieve the Java Runtime executable stored in Object Storage.
For loading the Java Runtime executable in the Cloud Functions service, refer to Change Java Runtime code.
Reference
South Korea (kr-south) region constraints
The South Korea (kr-south) region does not provide the Cloud Functions service, so the Allow Cloud Functions Service feature cannot be used.
3.3.2.2 - File and Folder Management
If you need to manage files, such as saving files to a created Object Storage bucket or downloading saved files, you can perform these tasks on the Object Storage Details and Folder List pages.
Create new folder
A new folder may need to be created to store new data in the bucket.
To create a new folder in Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) in which to create a new folder. You will be taken to the Object Storage Details page.
Click the Folder List tab. You will be taken to the Folder List page.
Click the New Folder button. The New Folder popup window opens.
Enter the folder name to use, and click the Confirm button. A popup window notifying you of the new folder's creation will open.
Caution
Folder names must not contain special characters that are not allowed. For more details, see Folder Name Creation Rules.
The total path length, including folder name, file name, and delimiter (/), is limited to within 1,024 Bytes (based on UTF-8 encoding).
After clicking the Confirm button, check the created folder in the Folder List.
File Upload
Reference
Korea South 3 (kr-south3) region constraints
The file upload and download functionality through the Samsung Cloud Platform Console is limited.
Using S3 API/CLI via Public URL is restricted.
However, accessing a Private URL through resources (such as Virtual Server) created in the Samsung Cloud Platform Console is possible.
Korea South1 (kr-south1), Korea South2 (kr-south2) region constraints
Public URL access requires allowing a separate firewall setting.
File Download
You can download files stored in the bucket.
To download a file from Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) containing the file to download. The Object Storage Details page will open.
Click the Folder List tab. You will be taken to the Folder List page.
Click the More > Download button located at the far right of the file to download. The file download will start.
Check in the browser that the file download has completed.
View file information
You can retrieve information about files stored in the bucket.
To view file information in Object Storage, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) containing the file whose information you want to view. The Object Storage Details page will open.
Click the Folder List tab. You will be taken to the Folder List page.
Click the More > File Info button located at the far right of the file. The File Info popup window opens.
Check the detailed file information in the File Info popup window.
Category
Detailed description
File name
File name
Content type
Object type
Total size
File size
Modification Date/Time
The date and time the file was most recently modified
Permission
Whether Public Access is allowed
URL
Provides Public and Private URL addresses for accessing the file path
Public: an address accessible from the external internet
Private: an address accessible from resources created in the same Account and Region of the Samsung Cloud Platform Console
Table. File Information Items
Copy file
You can copy the file stored in the bucket to the same location.
To copy a file, follow these steps.
Click the Storage > Object Storage menu. Go to the Object Storage list page.
On the Object Storage List page, click the resource (bucket) containing the file to copy. You will be taken to the Object Storage Details page.
Click the Folder List tab. You will be taken to the Folder List page.
Click the More > File Copy button at the far right of the file to copy. The File Copy popup window opens.
After entering the file name, click the Confirm button. A popup notifying you of the file copy will open.
Caution
Only files with a full path length (including bucket name, folder name, file name, delimiter (/)) of 1,024 Bytes or less (based on URL Encoding) can be copied.
In the Samsung Cloud Platform Console, you can only copy files when the file size is 5 GB or less. If the file size exceeds 5 GB, you can copy using the S3 API.
You can copy within the same bucket and folder.
If the bucket does not use versioning, files with the same name in the folder will be overwritten.
After clicking the Confirm button, check the copied file in the Folder List.
Delete files and folders
You can delete files and folders stored in the bucket.
To delete files and folders in Object Storage, follow the steps below.
Caution
When deleting a folder, all subfolders/files inside the folder will be deleted.
All versions will be deleted at once when the file is deleted.
Click the All Services > Storage > Object Storage menu. You will be taken to the Service Home page.
Click the Object Storage menu on the Service Home page. You will be taken to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) containing the file or folder to delete. You will be taken to the Object Storage Details page.
Click the Folder List tab. You will be taken to the Folder List page.
Click the More > Delete button located at the far right of the file or folder to delete. The Delete popup window will open.
You can also select multiple files or folders using the checkboxes on the left, then click the Delete button at the top to delete them all at once.
Click the Confirm button in the Delete popup to complete the deletion.
Check in the Folder List that the files or folders have been deleted.
Reference
When deleting multiple files and folders simultaneously, it may take a long time.
3.3.2.3 - Version Management
By setting version management on a bucket, you can manage the history of file modifications when uploading files with the same name.
Additionally, you can check the version list of files and download previous versions of files from the version list.
Notice
When using the version management feature, if you accidentally upload a file and overwrite an existing file, you can find the previous version through the version list.
Setting Up Version Management
You can set up version management on a bucket.
Warning
If you are using S3 Backend Filesystem solutions (ex. s3fs, objectivefs, etc.), it is recommended not to use version management. Performance degradation may occur when using version management.
To set up Object Storage version management, follow these steps:
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) for which you want to set up version management. You will move to the Object Storage Detail page.
On the Object Storage Detail page, check if Version Management is Disabled.
If Version Management is Disabled, click the Edit button. The Version Management Edit popup window will open.
Check Enable for version management, then click the Confirm button. On the Object Storage Detail page, Version Management will change to Enabled.
Note
When setting up version management for the first time, it may take some time for the changes to complete.
The time required may vary depending on the size of the bucket. Operations performed before the setup is complete may not have versions applied.
Checking Version List
You can check and manage versions of file uploads and modifications from the time version management is set up.
Notice
Version management of file uploads and modifications is possible from the time version management is set up.
All files have a version ID. However, before version management is set up, the version ID is null(-), and files uploaded after version management is set up are created and assigned a version ID.
For example, if you upload files with the same name to the same location, a new version file with a different version ID will be displayed in the version list even though the file name is the same.
To check the version list of Object Storage files, follow these steps:
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) that contains the file for which you want to check the version list. You will move to the Object Storage Detail page.
Click the Folder List tab. The folder and file list will be displayed.
In the folder and file list, click the More > Version List button located at the right end of the file for which you want to check the version list. The Version List popup window will open.
Component
Detailed Description
File Name
File name
Modified Date
Date and time when the file was modified
Version ID
Version ID assigned to each file
Version ID of files stored before setting up version management is displayed as null(-)
Files stored after version management is set up are created with a unique version ID
If files with the same name are uploaded to the same location, a version ID is created and the version file is added, which can be viewed in the version list
ETag
Value that identifies the specific version of an object
Size
Size of the version file
More
Provides additional features such as version file download and deletion
File Download: Download the selected version of the file
To restore a file to a previous version, download the previous version and upload it again
Delete: Delete the selected version of the file
If all version files are deleted, the original file is also deleted
To keep the original file, at least 1 version must be left
Delete
Delete the file
When the checkbox of the version file to delete is selected in the list, the button is activated
Table. Version List Popup Items
Downloading Version File
Note
Korea South3 (kr-south3) region constraints
File upload and download functions through Samsung Cloud Platform Console are limited.
S3 API/CLI usage using Public URL is limited.
However, Private URL access through resources created in Samsung Cloud Platform Console (Virtual Server, etc.) is possible.
Korea South1 (kr-south1), Korea South2 (kr-south2) region constraints
Separate firewall settings must be allowed for Public URL access.
You can download version files of files.
To download version files of Object Storage files, follow these steps:
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) that contains the version file you want to download. You will move to the Object Storage Detail page.
Click the Folder List tab. The folder and file list will be displayed.
In the folder and file list, click the More > Version List button located at the right end of the file for which you want to download the version file. The Version List popup window will open.
Click the More > File Download button located at the right end of the version file you want to download. File Download will start.
Check if the file download is complete in your browser.
Note
When you download a version file from the version list, the version ID is included in the file name.
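The restore procedure described above (download a previous version, then upload it again) can also be done through the S3-compatible API. In this sketch the endpoint URL is hypothetical, and the exact filename format the console uses when embedding the version ID is an assumption; `versioned_filename` only mirrors the idea.

```python
import os

def versioned_filename(key, version_id):
    """Build a local name that embeds the version ID, similar to what the
    console does when downloading a version file (exact format assumed)."""
    root, ext = os.path.splitext(os.path.basename(key))
    return f"{root}_{version_id}{ext}"

def restore_previous_version(bucket, key, version_id):
    """Restore a file to a previous version: download that version, then
    upload it again so it becomes the newest version."""
    import boto3  # deferred import: only needed for the live call
    s3 = boto3.client("s3", endpoint_url="https://objectstorage.example.com")  # hypothetical
    local = versioned_filename(key, version_id)
    s3.download_file(bucket, key, local, ExtraArgs={"VersionId": version_id})
    s3.upload_file(local, bucket, key)  # the re-upload becomes the newest version
```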
Deleting Version File
You can delete the version files of a file.
Warning
If you delete all version files, the original file will also be deleted. To avoid deleting the original file, you must leave at least 1 version.
Version files with object lock (WORM) set cannot be deleted.
To delete the version list of Object Storage files, follow these steps:
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) that contains the version file you want to delete. You will move to the Object Storage Detail page.
Click the Folder List tab. The folder and file list will be displayed.
In the folder and file list, click the More > Version List button located at the right end of the file for which you want to delete the version list. The Version List popup window will open.
Click the More > Delete button located at the right end of the version file you want to delete. Deletion will be completed.
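Version deletion is also possible through the S3-compatible API. Because deleting every version also deletes the original file, the helper below keeps the newest version and returns only the version IDs that are safe to delete; the endpoint URL in the live-call function is a hypothetical placeholder.

```python
def deletable_versions(versions):
    """Return version IDs that can be deleted while leaving at least one
    version, so the original file is preserved (the newest version is kept)."""
    if len(versions) <= 1:
        return []
    newest_first = sorted(versions, key=lambda v: v["LastModified"], reverse=True)
    return [v["VersionId"] for v in newest_first[1:]]

def delete_version(bucket, key, version_id):
    """Delete one specific version via the S3-compatible API."""
    import boto3  # deferred import: only needed for the live call
    s3 = boto3.client("s3", endpoint_url="https://objectstorage.example.com")  # hypothetical
    s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
```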
3.3.2.4 - Permission Management
Each file is provided with Private permission by default, and each file can be changed to Public permission through permission settings. Private permission allows file disclosure and download only to users who know the Access Key and Secret Key, but Public permission allows file disclosure and download to anyone worldwide when accessing the file’s Public URL, so caution is required.
Checking File Permissions
You can check the permission settings on the file.
To check permissions on Object Storage files, follow these steps:
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) that contains the file whose permissions you want to check. You will move to the Object Storage Detail page.
Click the Folder List tab. The folder and file list will be displayed.
In the folder and file list, click the More > File Info button located at the right end of the file whose information you want to view. The File Info popup window will open.
In the File Info popup window, check the Permission item.
Category
Detailed description
Permission
Public Access allowed or Public Access not allowed
Table. Permission Information Description
Allowing Public Access
You can set the file’s Public Access permission to allow.
To allow Public Access on an Object Storage file, follow these steps:
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource (bucket) that contains the file to allow Public Access. You will move to the Object Storage Detail page.
Click the Folder List tab. The folder and file list will be displayed.
Click the More > File Info button located at the right end of the file whose information you want to view. The File Info popup window will open.
Check that Permission is in the Public Access not allowed state, and click the Confirm button.
Click the More > Permission Management button located at the right end of the file. The Edit Permission Management popup window will open.
In the Edit Permission Management popup window, check Allow Public Access, and click the Confirm button. You will move to the Folder List page.
Click the More > File Info button located at the right end of the file. The File Info popup window will open.
Check that Permission is in the Public Access allowed state.
Notice
When Public Access is allowed, accessing the file’s Public URL allows the file to be publicly disclosed and downloaded by anyone worldwide. Please set it only if file disclosure is absolutely necessary.
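In S3-compatible services, "Allow Public Access" on a single file typically corresponds to a public-read ACL; that mapping, the endpoint URL, and the path-style Public URL layout below are all assumptions for illustration.

```python
def public_url(endpoint, bucket, key):
    """Build the Public URL form for a file (path-style layout assumed)."""
    from urllib.parse import quote
    return f"{endpoint.rstrip('/')}/{bucket}/{quote(key)}"

def allow_public_access(bucket, key):
    """Grant public-read on one file via the S3-compatible ACL API, the
    assumed equivalent of 'Allow Public Access' in the console."""
    import boto3  # deferred import: only needed for the live call
    s3 = boto3.client("s3", endpoint_url="https://objectstorage.example.com")  # hypothetical
    s3.put_object_acl(Bucket=bucket, Key=key, ACL="public-read")
```

As the notice above warns, once public-read is set, anyone who knows the Public URL can download the file, so apply it only when disclosure is absolutely necessary.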
3.3.2.5 - Replication Policy Management
You can perform replication to a bucket in a different location or the same location. You can set multiple replication policies, and if the original bucket is unavailable due to a failure or disaster, you can provide service through the replica bucket.
Notice
The replication feature operates in a 1:N structure, allowing replication within a region or between regions. When performing cross-region replication, data transfer fees are added.
The replication feature applies only to files uploaded after the replication policy is set.
Even if you delete the original version file, files in the replica bucket are not deleted.
Example: If set from Bucket A to Bucket B, even if the version file of Bucket A is deleted, it remains in Bucket B.
You can set up bidirectional replication.
Example: When set as Bucket A ↔ Bucket B, files uploaded to Bucket A are replicated to Bucket B, and files uploaded to Bucket B are replicated to Bucket A.
Files that have already been replicated are not replicated again.
Example: If set as Bucket A → Bucket B → Bucket C, files replicated from Bucket A → Bucket B are not replicated from Bucket B → Bucket C.
Reference
kr-south region does not provide inter-region Object Storage replication functionality.
Add replication policy
Notice
You can set replication on the created bucket.
To add a replication policy, you must set the version control feature to enabled.
To modify a replication policy, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page of Object Storage.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource whose replication policy you want to modify. You will move to the Object Storage Detail page.
Click the Replication tab.
In the replication policy list, click the More > Edit button of the policy you want to modify. The Edit Replication Policy popup window will open.
After modifying the replication policy information, click the Confirm button. A popup window notifying the replication policy modification will open.
Category
Required
Detailed description
Replication location
Required
Select replication location (region)
Other locations can be selected
Replication Bucket Name
Required
Enter the name of the replicated bucket
If set the same as the original bucket name, replication policy cannot be modified
If you set a bucket already in use as the replication bucket, replication policy cannot be modified
Target File
Required
Select files to replicate
All: Replicate all files
Prefix: Replicate files that start with the value entered as a prefix
Enter within 1,024 bytes based on UTF-8 encoding (same as file length constraint)
Special characters (%<>#\`^|{}[]) cannot be entered
When performing putObject using the S3 API’s prefix option, the prefix must start with /
Delete marker duplication
Select
Whether to use delete marker duplication
Table. Replication Policy Edit Popup Items
Click the Confirm button. The replication policy modification will be completed.
Caution
If there is an invalid policy, you cannot add a replication policy.
Example: If a policy remains after the replica bucket has been deleted, you cannot add a replication policy.
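A replication policy combines the popup items above: a destination bucket, a target-file filter (All or Prefix), and whether delete markers are replicated. In S3 form that is a replication rule like the one built below; whether Samsung Cloud Platform accepts this configuration through the S3 API is an assumption, and the console remains the documented path.

```python
def replication_rule(rule_id, dest_bucket_arn, prefix="", delete_marker=False):
    """Build one S3-style replication rule matching the popup items:
    target file (All vs Prefix) and delete marker replication."""
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},  # "" replicates all files
        "Destination": {"Bucket": dest_bucket_arn},
        "DeleteMarkerReplication": {
            "Status": "Enabled" if delete_marker else "Disabled"
        },
    }

def apply_replication(bucket, rules):
    """Apply rules via the S3-compatible API (availability on Samsung Cloud
    Platform is an assumption; the endpoint is a hypothetical placeholder)."""
    import boto3  # deferred import: only needed for the live call
    s3 = boto3.client("s3", endpoint_url="https://objectstorage.example.com")  # hypothetical
    s3.put_bucket_replication(
        Bucket=bucket,
        ReplicationConfiguration={"Role": "", "Rules": rules},  # Role may be unused here
    )
```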
Change replication policy status
You can enable or disable the replication policy to change whether the replication policy is performed.
To change the replication policy status, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page of Object Storage.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource whose replication policy status you want to change. You will move to the Object Storage Detail page.
Click the Replication tab.
In the replication policy list, click the More > Activate or More > Deactivate button for the policy whose status you want to change. A popup window notifying the replication policy status change will open.
Activate: Performs replication according to the replication policy.
Deactivate: Stops performing replication.
Click the Confirm button. The status of the replication policy will change.
Delete replication policy
You can delete unused replication policies.
To delete a replication policy, follow the steps below.
Click the All Services > Storage > Object Storage menu. You will move to the Service Home page of Object Storage.
On the Service Home page, click the Object Storage menu. You will move to the Object Storage List page.
On the Object Storage List page, click the resource whose replication policy you want to delete. You will move to the Object Storage Detail page.
Click the Replication tab.
In the replication policy list, click the More > Delete button of the policy you want to delete. A popup window notifying the deletion of the replication policy will open.
Click the Confirm button. The replication policy will be deleted.
Caution
If you change the usage of versioning for the source and replica buckets, replication will not be performed correctly.
If versioning of the source bucket is set to disabled, replication will not be performed. If set back to enabled, replication will be performed for files uploaded after the setting.
If you set versioning of the replica bucket to disabled, replication will be performed but versioning is not possible. If you set it back to enabled, versioning will apply from the point it is set.
If you delete the source bucket, the configured replication policy will also be deleted.
If you delete the replica bucket, the replication policy set on the source bucket remains.
If you recreate a replication bucket with the same bucket name as a deleted replication bucket, replication will be performed to that bucket.
3.3.3 - Release Note
Object Storage
2025.10.23
FEATURE Replication and file copy features added and Cloud Functions service integration
Object Storage’s replication feature has been added.
You can perform replication to a bucket in a different location or the same location, and you can set multiple replication policies.
File copy feature has been added.
You can copy the desired file within the same bucket and folder.
Cloud Functions service has been added to access control.
You can upload Java Runtime executable files in Cloud Functions.
2025.07.01
FEATURE Access control server resources and Presigned URL feature added
A server resource target product has been added to Object Storage access control.
Multi-node GPU Cluster, PostgreSQL, MariaDB, MySQL, EPAS, Microsoft SQL Server
Presigned URL has been added.
You can download the file using a Presigned URL for the set period of time.
You can perform Copyobject on encrypted files.
2025.04.28
FEATURE Amazon S3 version support added
Additional versions of the Amazon S3 SDK and Amazon S3 CLI that can be used have been added.
2025.02.27
FEATURE VPC Endpoint connection feature added
Object Storage feature change
VPC Endpoint can be used to access Object Storage from external networks.
Samsung Cloud Platform Common Function Change
Account, IAM and Service Home, tags, etc. have been reflected in the common CX changes.
2024.10.01
NEW Object Storage Service Official Version Release
Launched an object storage service that makes data storage and retrieval easy.
2024.07.02
NEW Beta version release
We have launched Object Storage, a service that provides a space (bucket) to economically store large amounts of data.
3.4 - Archive Storage
Automatically transfer data stored in Object Storage to Archive Storage for storage, and easily recover it when needed.
3.4.1 - Overview
Service Overview
Archive Storage is a storage service suitable for long-term storage of large amounts of data.
Users can set a schedule to automatically transfer infrequently used data stored in Object Storage to Archive Storage, thereby efficiently configuring storage and managing costs. Additionally, if a user requests, data stored in Archive Storage can be restored to Object Storage for use.
Special Features
Free capacity usage: If you create a bucket in Archive Storage, the bucket automatically expands or shrinks according to the user’s data archiving and deletion. There is no cost for creating a storage bucket, and you are only charged for storage usage.
Stable Data Recovery: Data that has been stored long-term can be searched and reliably recovered within 3 hours. When recovering data, select the target file and the Object Storage bucket where you want to store it to recover the data.
Cost Efficiency: Depending on the purpose and frequency of data usage, you can efficiently configure Object Storage and Archive Storage to store and manage data at a reasonable cost.
Convenient Use: You can conveniently use functions such as bucket creation, Archiving schedule setting, and data recovery on the Samsung Cloud Platform Console. After migrating data for all files in Object Storage, you can apply a source deletion policy.
Diagram
Figure. Archive Storage diagram
Provided Features
Archive Storage provides the following functions.
Archiving Plan Setting: Set a schedule to perform archiving on all files in the Object Storage bucket.
Data Recovery: You can recover to the Object Storage bucket and folder you want to store using folders or files stored in Archive Storage.
Archiving status monitoring: Archiving status (success, cancelled, failed, in progress, pending, skipped) can be checked.
Recovery status monitoring: You can check the recovery status (success, cancelled, failure, in progress, pending).
Encryption: If encryption is set to use, encryption is provided via the SSE‑S3 method.
Version Management: You can manage archived files by version and recover by selecting the desired version.
Components
Authentication Key
To create and access the Archive Storage service on the Samsung Cloud Platform Console, an authentication key is required.
Therefore, to use Archive Storage, you must generate an authentication key.
Bucket
A bucket is the top-level folder; all folders and files exist under the bucket. When you create an Archive Storage service in the Samsung Cloud Platform Console, a bucket is created, and thereafter you can upload folders or files. The bucket naming rules are as follows.
Bucket name must be at least 3 characters and at most 63 characters.
Bucket names can only consist of lowercase English letters, numbers, periods (.), and hyphens (-).
Bucket name must start with a lowercase English letter or a number.
Bucket names must not contain two consecutive periods.
Bucket names cannot end with a period or hyphen.
Bucket names cannot have periods and hyphens adjacent.
Bucket names do not use IP address format (e.g., 192.168.x.x).
You cannot use the name admin as a bucket name.
Bucket name must be unique within the Account.
A previously used bucket name can be reused after up to 1 hour.
Valid bucket name examples:
cpexamplebucket1
scp-example-bucket-01
my-scp-object-storage
Invalid bucket name examples:
scp_example_bucket (contains an underscore)
DocExampleBucket (contains uppercase letters)
-scp-example-bucket (starts with a hyphen)
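The naming rules above can be checked programmatically before attempting to create a bucket. The sketch below encodes exactly the listed rules (length, character set, starting character, consecutive or adjacent periods/hyphens, trailing characters, IP format, and the reserved name admin); the uniqueness and 1-hour reuse rules can only be checked server-side.

```python
import re

def is_valid_bucket_name(name):
    """Validate a bucket name against the documented naming rules."""
    if not 3 <= len(name) <= 63:
        return False                      # 3 to 63 characters
    if name == "admin":
        return False                      # reserved name
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*", name):
        return False                      # charset + must start with lowercase letter/digit
    if name[-1] in ".-":
        return False                      # cannot end with period or hyphen
    if ".." in name or ".-" in name or "-." in name:
        return False                      # no consecutive/adjacent periods and hyphens
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False                      # no IP address format
    return True
```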
Folder
A folder is used to logically group files. The folder naming rules are as follows.
Folder names can consist of Korean, English, numbers, and special characters.
The special characters that cannot be entered are as follows.
Special characters that cannot be used in folder names
percent sign (%)
ampersand (&)
question mark (?)
exclamation mark (!)
less-than sign (<), greater-than sign (>)
slash (/)
equals sign (=)
plus sign (+)
dollar sign ($)
pound sign (#)
apostrophe (')
caret (^)
vertical bar/pipe (|)
left curly brace ({), right curly brace (})
left bracket ([), right bracket (])
File
A file is the data stored in Archive Storage and is the same as a regular file. The file naming rules are as follows.
File names can consist of Korean, English, numbers, and special characters.
The special characters that cannot be entered are as follows.
Special characters not allowed in file names
percent sign (%)
ampersand (&)
question mark (?)
exclamation mark (!)
less-than sign (<), greater-than sign (>)
slash (/)
equals sign (=)
plus sign (+)
dollar sign ($)
pound sign (#)
backslash (\)
apostrophe (')
caret (^)
vertical bar/pipe (|)
left curly brace ({), right curly brace (})
left square bracket ([), right square bracket (])
Folder names and file names are separated by a slash (/). The following are examples of valid folder names and file names.
Example of mixed use of folder names and file names
3scp-example
my.happy_photo-2024/20240101.jpg
video/2024/video01.wmv
Note
The length of the path, including folder name, file name, and the delimiter (/), is limited to within 1,024 Bytes (based on UTF-8 encoding).
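The path rules above (forbidden special characters per segment, slash as separator, 1,024-byte UTF-8 limit) can be sketched as a pure check. The forbidden set below follows the file-name list (the folder-name list is the same minus the backslash); this is illustrative, not an exhaustive server-side validation.

```python
# '/' is excluded here because it is the folder/file separator.
FORBIDDEN_CHARS = set("%&?!<>=+$#\\'^|{}[]")

def is_valid_object_path(path):
    """Check a folder/file path against the documented rules:
    UTF-8 byte length <= 1,024 and no forbidden character in any segment."""
    if len(path.encode("utf-8")) > 1024:
        return False
    segments = path.split("/")
    return all(seg and not (set(seg) & FORBIDDEN_CHARS) for seg in segments)
```

Note that multi-byte characters (e.g. Korean) count as 3 bytes each in UTF-8, so the character limit is shorter than 1,024 for non-ASCII names.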
Constraints
Archive Storage’s constraints are as follows.
Category
Description
Number of creatable Archive Storage services
1,000 or less
File name length (including path)
1,024 Bytes or less
Table. Archive Storage Constraints
Reference
When using an IAM role, Archive Storage cannot be used via the Samsung Cloud Platform Console; only IAM users can use it.
Preliminary Service
This is a list of services that must be pre-configured before creating the service. For details, refer to the guide provided for each service and prepare in advance.
Object storage that facilitates data storage and retrieval
Table. Archive Storage Preliminary Service
3.4.1.1 - ServiceWatch Metrics
Archive Storage sends metrics to ServiceWatch. The metrics provided by default monitoring are data collected at a 1-minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Indicators
The following are the basic metrics for the Archive Storage namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. Archive Storage basic metrics
3.4.2 - How-to guides
The user can enter the required information for Archive Storage through the Samsung Cloud Platform Console, select detailed options, and create the service.
Create authentication key
To create and use the Archive Storage service in the Samsung Cloud Platform Console, you need to generate an authentication key in advance.
API key creation can be done from My menu > My Info. > API key management > Create API key. For detailed information, see IAM > Create API key.
Note
The authentication key is used not only for Archive Storage, but also for authentication in OpenAPI and CLI.
Up to 2 authentication keys can be generated.
Caution
If the authentication key expires, access to the Archive Storage service will be restricted. To ensure smooth service usage, please check the expiration period of the authentication key in advance.
If you disable the authentication key, access rights to the Archive Storage service will be restricted.
Creating Archive Storage
You can create and use the Archive Storage service from the Samsung Cloud Platform Console.
To create Archive Storage, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Create Archive Storage button. You will move to the Create Archive Storage page.
On the Create Archive Storage page, enter the information required to create the service and select detailed options.
Category
Required
Detailed description
Bucket Name
Required
Bucket name for performing Archiving
Start with a lowercase English letter or number, and enter 3 to 63 characters using lowercase letters, numbers, hyphens (-), and periods (.)
Cannot contain two or more consecutive periods
Periods (.) and hyphens (-) cannot be adjacent
Cannot end with a period (.) or hyphen (-)
IP format not allowed
admin name not allowed
Cannot use a bucket name already in use in the same region within the Samsung Cloud Platform Console
The Archive Storage Detail page consists of the Detailed Information, Folder List, Archiving History, Recovery History, and Tag tabs.
To view detailed information of the Archive Storage service, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Archive Storage menu. You will move to the Archive Storage List page.
On the Archive Storage List page, click the resource whose detailed information you want to view. You will move to the Archive Storage Detail page.
The Archive Storage Detail page displays status information and additional feature information, and consists of the Detailed Information, Folder List, Archiving History, Recovery History, and Tag tabs.
Category
Detailed description
Archive Storage status
Archive Storage status information
Active: available
Error: abnormal state during termination
Deleting: service termination in progress
Service termination
Button to terminate the service
Table. Archive Storage status information and additional functions
Detailed Information
In the Detailed Information tab, you can view the resource’s detailed information and, if necessary, edit the information.
Category
Detailed description
Service
Service group
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In Archive Storage service, it means the bucket name
Resource ID
Service’s unique resource ID
Bucket name
User-created bucket name
Usage
Total data usage of the bucket
Encryption
Encryption usage information
When encryption is used, the SSE‑S3 encryption key method and AES256 encryption algorithm are applied
Encryption settings are configured on the Archive Storage detail page after creating Archive Storage
In the Archiving History tab, you can view the list of archiving operations performed over a specific period.
Category
Detailed description
Query period
Select the period to check the Archiving history
Up to 90 days can be queried
Status
Select Archiving status
All, In progress, Success, Failure, Cancelled, Pending, Skip can be selected
Policy ID
Archiving Policy ID
When clicked, Status Check popup opens
Policy ID, execution time, status, target file, capacity, progress can be checked
Execution Date and Time
Archiving Start Time
Completion Date/Time
Archiving Completion Time
Status
Archiving status information
Cancel Task
Cancel Archiving Task
Enabled when the Archiving status is Pending or In progress
When clicked, Cancel Task popup opens
Table. Archive Storage Archiving History Tab Items
Recovery History
You can view the list of Archives that performed recovery tasks over a specific period in the Recovery History tab.
Category
Detailed description
Search period
Select the period to view the recovery history
Up to 90 days can be queried
Status
Select recovery status
All, In progress, Success, Failure, Cancel, Waiting, Skip can be selected
Recovery target, Archive recovery ID, Recovery date/time, Status, Original location can be verified
Recovery Target
Recovery Target Object Storage Name
When clicked, Check Status popup opens
Recovery Date and Time
Recovery Start Time
Completion Date/Time
Recovery Completion Time
Status
Recovery status information
Cancel recovery operation
Cancel recovery operation
Enabled when the recovery status is Pending or In progress
When clicked, Cancel operation popup opens
Table. Archive Storage Recovery History Tab Items
Tag
In the Tag tab, you can view the resource’s tag information, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
You can view the Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the existing list of Keys and Values
Table. Archive Storage tag tab items
Setting up Archive Storage Encryption
You can configure the data stored in the bucket to be encrypted. After setting bucket encryption, the encryption setting is applied to data uploaded thereafter. When encryption is used, the SSE‑S3 encryption key method and the AES256 encryption algorithm are applied.
Reference
Archive Storage’s bucket encryption can provide two types (SSE-S3/SSE-KMS).
SSE-S3 is server-side encryption (SSE-S3) using Amazon S3 managed keys.
SSE-KMS is server-side encryption (SSE-KMS) using Key Management Service (KMS) keys.
In this service, server-side encryption using Amazon S3 managed keys (SSE-S3) has been released as the default method, and server-side encryption using Key Management Service (KMS) keys (SSE-KMS) will be provided in the future according to the service roadmap.
Notice
If there is data saved before bucket encryption was set, the encryption settings will not be applied.
If you re-upload the file, encryption will be applied.
To set encryption for existing data, you need to re-upload.
To set up encryption, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Archive Storage menu. You will move to the Archive Storage List page.
On the Archive Storage List page, click the resource (bucket) that will use encryption. You will move to the Archive Storage Detail page.
On the Archive Storage Detail page, confirm that Encryption is Unused, and click the Edit button. The Edit Encryption popup window will open.
Check Use for Encryption, and click the Confirm button.
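In S3 terms, the SSE-S3 default encryption described above corresponds to a bucket-encryption configuration with the AES256 algorithm. The builder below shows that shape; whether Archive Storage accepts it through the S3-compatible API is an assumption (the console Edit popup is the documented path), and the endpoint URL is a hypothetical placeholder.

```python
def sse_s3_config():
    """Default-encryption configuration in S3 form: SSE-S3 key management
    with the AES256 algorithm, as described in the guide."""
    return {
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    }

def enable_bucket_encryption(bucket):
    """Apply the SSE-S3 default-encryption configuration to a bucket."""
    import boto3  # deferred import: only needed for the live call
    s3 = boto3.client("s3", endpoint_url="https://objectstorage.example.com")  # hypothetical
    s3.put_bucket_encryption(
        Bucket=bucket, ServerSideEncryptionConfiguration=sse_s3_config()
    )
```

As the notice above states, this applies only to data uploaded after the setting; existing files must be re-uploaded to be encrypted.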
Cancelling Archive Storage
Note
While archiving is in progress or an archive file is being restored, Archive Storage cannot be cancelled.
To cancel Archive Storage, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Archive Storage menu. You will move to the Archive Storage List page.
On the Archive Storage List page, select the resource to cancel, and click the Cancel Service button.
When termination is completed, check on the Archive Storage List page whether the resource has been terminated.
3.4.2.1 - Archiving Policy Management
You can add, modify, or delete Archiving policies.
Adding an Archiving Policy
To add an Archiving policy, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Archive Storage menu. You will move to the Archive Storage List page.
On the Archive Storage List page, click the resource whose detailed information you want to view. You will move to the Archive Storage Detail page.
Click the Add button of Policy Actions. The Add Archiving Policy popup opens.
After entering the Archiving policy information, click the Confirm button.
Category
Detailed description
Archiving target
Select the bucket of Object Storage to perform Archiving
Cannot be modified after initial setup
Only buckets within the same region can be added
Buckets used in policies of other Archive Storage resources cannot be selected
Target Object
Select Archiving target object
Only All can be selected
Archiving policy operation
When selected, archiving is performed on all version objects of Object Storage
When using Archive Storage version management: manage all versions separately
When not using Archive Storage version management: manage by overwriting the current version
Execution time
Enter the archiving execution time
Enter a number between 1 and 3,650
Table. Archiving policy addition popup items
Reference
Archiving target can be set only once initially. It cannot be modified after being set.
The policy operation performs archiving for all version files of Object Storage.
Archiving policy can only be edited, not deleted.
The Archiving policy is performed once a day, based on the time it was added.
Modifying an Archiving Policy
To modify the Archiving policy, follow the steps below.
Click the All Services > Storage > Archive Storage menu. Navigate to the Service Home page of Archive Storage.
Click the Archive Storage menu on the Service Home page. Go to the Archive Storage list page.
On the Archive Storage List page, click the resource whose detailed information you want to view. You will move to the Archive Storage Detail page.
In the Policy Actions list, select the policy you want to edit, and click the More > Edit button on the right. The Edit Archiving Policy popup will open.
After modifying the Archiving policy information, click the Confirm button.
Caution
If you reuse the bucket name of an Archiving target, the previously configured archiving policy will be applied, so be careful.
Changing Archiving Policy Status
You can change whether the Archiving policy is performed by enabling or disabling the Archiving policy.
To change the Archiving policy status, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Archive Storage menu. You will move to the Archive Storage List page.
On the Archive Storage List page, click the resource whose Archiving policy status you want to change. You will move to the Archive Storage Detail page.
In the Policy Actions list, click the More > Activate or More > Deactivate button for the policy whose status you want to change. A popup window notifying the Archiving policy status change will open.
Activate: Performs archiving according to the archiving policy.
Deactivate: Stops performing archiving.
Click the Confirm button. The status of the Archiving policy will change.
Cancelling an Archiving Operation
Notice
Only Archiving tasks that are pending or in progress can be cancelled.
To cancel Archiving, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will move to the Service Home page of Archive Storage.
On the Service Home page, click the Archive Storage menu. You will move to the Archive Storage List page.
On the Archive Storage List page, click the resource whose Archiving operation you want to cancel. You will move to the Archive Storage Detail page.
Click the Archiving History tab.
In the Archiving history list, check whether the status of the Archiving policy for which you want to cancel the job is Pending or In progress.
Both Object Storage and Archive Storage only manage the Current version (overwrite applied)
Delete Marker version containing file
Delete Marker version cannot be archived
When the latest version is a Delete Marker
Object Storage Details page’s Folder List tab cannot view the file
Archive Storage Details page’s Folder List tab can be viewed
When an intermediate version is a Delete Marker
Cannot view that version in the file version list of Object Storage and Archive Storage
After archiving, only the Delete Marker version is retained in Object Storage
Empty folder
Folder creation not possible in Archive Storage
Folder with files
Whether the folder is deleted from Object Storage after archiving depends on whether the folder itself is an Object
If the folder is an Object: the folder is not deleted and is retained
When the folder is created first and then files are uploaded
When the folder containing files is uploaded directly from Windows
If the folder is not an Object: the folder is deleted
When the folder containing files is uploaded directly from Linux
When the folder itself is restored from Archive Storage to Object Storage
When there is no target file
Perform the Archiving schedule as is
Archiving size: 0 B
When the target Object Storage bucket is terminated
The Archiving target display is retained
Clicking the Archiving target (Object Storage) link fails to retrieve the bucket
After the bucket is deleted, archiving is treated as failed
If a bucket with the same name is recreated, archiving continues with the new bucket
Table. Archiving method according to archiving target
Reference
Archiving is performed based on the Etag value of the Object Storage file.
After archiving, Version ID and Modification Timestamp are changed. However, Etag value remains the same.
3.4.2.2 - Using Version Control
You can set versioning on the bucket to manage all versions of Object Storage in Archive Storage. If you use version control, you can view the list of file versions, and from the version list you can select a previous version of the file to restore.
Version Control Setup
Follow the steps below to set up the version control feature.
Click the All Services > Storage > Archive Storage menu. You will be taken to the Archive Storage Service Home page.
Click the Archive Storage menu on the Service Home page. You will be taken to the Archive Storage List page.
On the Archive Storage List page, click the resource for which you want to use version control. You will be taken to the Archive Storage Details page.
On the Archive Storage Details page, confirm that Version Management is set to Unused, then click the Edit button. The Edit Version Management popup window opens.
Check the option to use version control and click the Confirm button. Version control on the Archive Storage Details page changes to In use.
Note
If you set up version control for the first time, it may take some time for changes to be completed.
The time required varies depending on the size of the bucket, and version control may not be applied before the settings are completed.
Check version list
You can view and manage the version list of the file. To view and manage the file’s version list, follow these steps.
Click the All Services > Storage > Archive Storage menu. You will be taken to the Archive Storage Service Home page.
Click the Archive Storage menu on the Service Home page. You will be taken to the Archive Storage List page.
On the Archive Storage List page, click the resource whose version list you want to view. You will be taken to the Archive Storage Details page.
On the Archive Storage Details page, click the Folder List tab.
Click the More > Version Management button of the file whose version list you want to check. The version list popup will open.
Category
Detailed description
Filename
File name under version control
Modification Date and Time
File’s modification date and time
Version ID
Version ID assigned to each individual file
File before version control setting: null(-) displayed
File after version control setting: unique ID value displayed
Etag
Value that identifies a specific version of a file
From the point when version control is set, you can check and manage the version ID of archived files.
Files archived before version control was set: the Version ID is displayed as null (-).
Files archived after version control was set: a version file is added, and a unique version ID is generated and assigned.
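The null(-) versus unique Version ID behavior can be illustrated with a short sketch over an S3-style version listing (the entries below are sample data, not an actual API response):

```python
# Sample version entries in the shape of an S3-style listing.
# Versions archived before version control was enabled carry the
# literal Version ID "null", shown as "-" in the console.
versions = [
    {"Key": "report.csv", "VersionId": "3HL4kqtJvjVBH40N", "IsLatest": True},
    {"Key": "report.csv", "VersionId": "null", "IsLatest": False},
]

# Entries from before version control was set.
pre_versioning = [v for v in versions if v["VersionId"] == "null"]
# The current version always carries IsLatest.
latest = next(v for v in versions if v["IsLatest"])

print(len(pre_versioning))   # 1
print(latest["VersionId"])   # 3HL4kqtJvjVBH40N
```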
Delete version file
To delete unused version files, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will be taken to the Archive Storage Service Home page.
Click the Archive Storage menu on the Service Home page. You will be taken to the Archive Storage List page.
On the Archive Storage List page, click the resource whose version files you want to manage. You will be taken to the Archive Storage Details page.
On the Archive Storage Details page, click the Folder List tab.
Click the More > Version Management button of the file whose version file you want to delete. The version list popup opens.
Click the More > Delete button for the version file to be deleted. The version file will be removed from the version list.
After selecting all version files to delete, you can click the Delete button at the top of the list to delete them all at once.
Caution
If you delete all version files, the original file will also be deleted. To keep the original file, leave at least one version.
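Because deleting every version also deletes the original file, any bulk cleanup should keep at least one version. A minimal sketch of that rule over plain data (the helper is hypothetical, not a console API):

```python
def versions_safe_to_delete(versions: list[dict]) -> list[dict]:
    """Return every version except the latest, so at least one
    version (and therefore the original file) is always kept."""
    return [v for v in versions if not v.get("IsLatest")]

versions = [
    {"VersionId": "v3", "IsLatest": True},
    {"VersionId": "v2", "IsLatest": False},
    {"VersionId": "v1", "IsLatest": False},
]
print([v["VersionId"] for v in versions_safe_to_delete(versions)])
# ['v2', 'v1'] -- the latest version v3 is preserved
```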
3.4.2.3 - Archive Recover
Archive Recover
Note
Recovery can be performed on a per-file basis, and even after recovery, data stored in Archive Storage is not deleted.
When restoring an archive file, select the target Object Storage bucket and folder; the file is restored to that location. Before starting the restoration, verify the target Object Storage first.
Recover from folder list
If you are not using the version control feature, you can recover by selecting the Archive file from the folder list. To check and recover the Archive file, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will be taken to the Archive Storage Service Home page.
Click the Archive Storage menu on the Service Home page. You will be taken to the Archive Storage List page.
On the Archive Storage List page, click the resource whose detailed information you want to view. You will be taken to the Archive Storage Details page.
Click the Folder List tab.
Select the folder or file name to restore from the folder list. The Delete and Restore buttons appear at the top of the list.
Click the Recover button. The Archive Recover popup window opens.
After selecting the recovery information, click the Confirm button.
Category
Detailed description
Bucket Name
Object Storage’s bucket name
Original location
Original file or folder location
Recovery target
Select recovery target bucket
Overwrite
Select whether to use the overwrite function
If the recovery target is an Object Storage bucket with versioning enabled, the overwrite function stores the existing file of the same name as a previous version
Table. Archive Recovery Popup Items
Caution
Recovery may fail due to character count restrictions on file and folder names. For these restrictions, refer to the file name creation rules and folder name creation rules.
After a recovery failure, if you use the overwrite function to perform recovery again, you may be charged duplicate fees.
Notice
If recovery fails, check whether the completed recovered files are present in the target Object Storage.
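One way to follow this advice after a failed recovery is to diff the expected file list against what actually arrived in the target bucket, and retry only the remainder. Both lists below are illustrative; retrying only the missing files also avoids overwrite-related duplicate fees:

```python
# Files the recovery job was supposed to produce (illustrative).
expected = {"logs/2025-01.gz", "logs/2025-02.gz", "logs/2025-03.gz"}
# Files actually present in the target Object Storage after the failure.
recovered = {"logs/2025-01.gz", "logs/2025-03.gz"}

# Retry only what is still missing.
still_missing = sorted(expected - recovered)
print(still_missing)  # ['logs/2025-02.gz']
```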
Recover from version list
If you use the version control feature, you can check the file’s version and restore it to the desired version.
Notice
To check a file’s version and recover it, the version control feature must be set to Use.
Use the version file to recover the Archive file by following these steps.
Click the All Services > Storage > Archive Storage menu. You will be taken to the Archive Storage Service Home page.
Click the Archive Storage menu on the Service Home page. You will be taken to the Archive Storage List page.
On the Archive Storage List page, click the resource whose detailed information you want to view. You will be taken to the Archive Storage Details page.
Click the Folder List tab.
In the folder list, click the More > Version Management button for the file whose version list you want to check. The version list popup will open.
In the version list, click the More > Recover button of the version file to be restored. The Archive Recover popup window opens.
After selecting the recovery information, click the Confirm button.
Category
Detailed description
Bucket name
Object Storage’s bucket name
Original location
Original file or folder location
Version List
Modification date and time of the version to be restored, version ID, Etag, size information
Recovery target
Select recovery target bucket
Overwrite
Select whether to use the overwrite function
If the recovery target is an Object Storage bucket with versioning enabled, the overwrite function stores the existing file of the same name as a previous version
Table. Archive Recovery Popup Items
Caution
Recovery may fail due to character count restrictions on file and folder names. For these restrictions, refer to the file name creation rules and folder name creation rules.
After a recovery failure, if you perform recovery again using the overwrite function, you may be charged duplicate fees.
Notice
If recovery fails, check whether the completed recovered files are present in the target Object Storage.
Cancel recovery operation
Notice
Only recovery tasks that are pending or in progress can be canceled.
To cancel the recovery operation, follow the steps below.
Click the All Services > Storage > Archive Storage menu. You will be taken to the Archive Storage Service Home page.
Click the Archive Storage menu on the Service Home page. You will be taken to the Archive Storage List page.
On the Archive Storage List page, click the resource whose recovery operation you want to cancel. You will be taken to the Archive Storage Details page.
Click the Recovery History tab.
In the recovery history list, check whether the status of the recovery target whose operation you want to cancel is Pending or In Progress.
Click the Cancel operation button. A cancel operation confirmation popup will open.
Click the Confirm button. The recovery operation will be canceled.
Learn about Archive recovery methods
The recovery methods according to the type of recovery are as follows.
Recovery Type
Status Description
General Recovery
If some of the original locations are pending: Pending
If all original locations are in progress: In progress
The operation is possible only when the status is In progress
When there is no file to recover (overwrite option not used)
Not shown in the status check list
Table. Recovery method according to recovery type
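The General Recovery status rule above (Pending while some original locations are still pending, In progress once all are in progress) can be sketched as a small aggregation. The helper and the "Unknown" fallback are illustrative, not part of the console:

```python
def overall_status(location_statuses: list[str]) -> str:
    """Aggregate per-location statuses into the displayed job status.

    Per the table: if any original location is still pending, the
    job shows Pending; only when all locations are in progress does
    it show In progress (and only then can it be worked on).
    """
    if any(s == "Pending" for s in location_statuses):
        return "Pending"
    if all(s == "In progress" for s in location_statuses):
        return "In progress"
    return "Unknown"  # fallback for states the table does not cover

print(overall_status(["Pending", "In progress"]))      # Pending
print(overall_status(["In progress", "In progress"]))  # In progress
```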
3.4.3 - API Reference
API Reference
3.4.4 - CLI Reference
CLI Reference
3.4.5 - Release Note
Archive Storage
2025.10.23
FEATURE Version control feature added
You can manage archived folders or files by version.
The user can view archived folders or files by version and select the desired version to delete or restore.
2025.07.01
NEW Archive Storage Service Official Version Release
Archive Storage service has been launched.
Automatically transfer data stored in Object Storage to Archive Storage for storage, and easily recover it when needed.
3.5 - Backup
3.5.1 - Overview
Service Overview
Backup is a service that backs up and restores the user’s data in a safe way. The backup policy includes the backup target, cycle, and retention period, so the user sets the backup plan according to the business environment and requirements.
Features
Backup scope: Provides optimal backup and recovery services suitable for various business purposes to safely preserve customers’ important data.
Flexible Policy Setting: The backup policy can be set according to the usage environment and data importance. The user can select the backup target and type, and specify the retention period and schedule.
Backup Network: The backup network connection has been designed to minimize the impact on services that may occur during backup. The backup system and network are restricted through access control to prevent unauthorized access.
Remote Backup: You can store backup copies in a different location. In the event of a disaster or failure at the backup location, you can use the backup copy stored in a different location to recover.
Composition
Figure. Backup Configuration Diagram
Provided Features
Backup provides the following functions.
Backup target: The user can select the target server they want to secure the backup for.
Backup Save: You can select the storage location of the backup.
Encryption: You can choose whether to encrypt or not. If selected, AES-256 algorithm encryption will be applied.
Retention period: You can set the retention period for the backup. Backups that have exceeded the retention period will be automatically deleted.
Schedule: The cycle of automatic backup creation.
Recovery: You can recover resources by selecting a backup created at the desired point in time.
Component
Backup provides scheduling functionality to automatically create backups at predefined cycles and times.
The user can set the retention period, etc. to secure the backup at the desired point in time, and Agent configuration may be required depending on the Backup target and type.
Backup Type
Agentless Backup and Agent Backup are provided, and the type can be selected according to the backup target.
Agentless Backup: Backups can be performed by creating the Backup service without configuring a separate Agent. VM Image backup of Virtual Server and GPU Server corresponds to Agentless Backup.
Agent Backup: Before creating a Backup service, it is necessary to create and configure a Backup Agent for the server to be backed up. This includes Filesystem backup for Bare Metal Servers.
Backup classification
Full Backup: Performs a backup on the entire data.
Incremental Backup: Backs up data changed based on the previous Full Backup.
Schedule
Backups are created automatically according to the generation cycle set by the user. A schedule can be set with up to 1 Full Backup and 6 Incremental Backups. With the immediate Backup feature, you can create a backup at a specific point in time outside the schedule.
The backup cycle that can be set as a schedule is as follows.
Daily: select the start time
Weekly: select the day of the week and start time
Monthly: select the week, day of the week, and start time
Day of the week: Mon, Tue, Wed, Thu, Fri, Sat, or Sun
Week: 1st, 2nd, 3rd, 4th, or last week
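A monthly schedule of the form "Nth weekday of the month" can be resolved to a concrete date as below. This is only a sketch of the calendar logic; the console computes this internally:

```python
import calendar
from datetime import date

def nth_weekday(year: int, month: int, weekday: int, week: str) -> date:
    """Resolve e.g. '2nd Monday' or 'last Friday' of a month.

    weekday: 0=Mon .. 6=Sun; week: '1'..'4' or 'last'.
    """
    # monthcalendar pads missing days with 0; keep real dates only.
    days = [wk[weekday] for wk in calendar.monthcalendar(year, month)
            if wk[weekday] != 0]
    day = days[-1] if week == "last" else days[int(week) - 1]
    return date(year, month, day)

print(nth_weekday(2025, 1, 0, "2"))     # 2025-01-13 (2nd Monday)
print(nth_weekday(2025, 1, 0, "last"))  # 2025-01-27 (last Monday)
```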
Storage Period
You can set the retention period of the backup created through the Backup service. The retention period that can be set is as follows.
Agentless Backup: You can choose from 2 weeks (14 days), 1 month (31 days), 3 months (90 days), 6 months (180 days), or 1 year (365 days). It is applied based on the date, so if you set it to 1 month, it will be kept for 31 days.
Agent Backup: You can choose from 2 weeks, 1 month, 3 months, 6 months, and 1 year. It is applied based on the unit interval, so if you set it to 1 month, the backup on January 1st will be kept until January 31st, the day before February 1st, and the backup on February 1st will be kept until the day before March 1st.
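The two retention rules differ in a subtle way: Agentless retention counts fixed days, while Agent retention spans a calendar interval. A sketch making both explicit; the logic is inferred from the description above, so treat the helpers as illustrative:

```python
import calendar
from datetime import date, timedelta

# Agentless: fixed day counts (1 month is defined as 31 days).
AGENTLESS_DAYS = {"2w": 14, "1m": 31, "3m": 90, "6m": 180, "1y": 365}

def agentless_expiry(created: date, period: str) -> date:
    return created + timedelta(days=AGENTLESS_DAYS[period])

def agent_expiry(created: date, months: int) -> date:
    """Calendar-interval rule: kept until the day before the same
    date `months` later (day clamped to the target month's length)."""
    y, m = divmod(created.month - 1 + months, 12)
    y, m = created.year + y, m + 1
    day = min(created.day, calendar.monthrange(y, m)[1])
    return date(y, m, day) - timedelta(days=1)

print(agentless_expiry(date(2025, 1, 1), "1m"))  # 2025-02-01 (31 days)
print(agent_expiry(date(2025, 1, 1), 1))         # 2025-01-31
print(agent_expiry(date(2025, 2, 1), 1))         # 2025-02-28
```

Note how a 1-month Agent backup created on February 1st expires earlier (28 days later) than an Agentless one (31 days later), exactly as the text describes.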
Preceding Service
These services must be configured before creating this service. For details, refer to the guide provided for each service and prepare in advance.
Day of Week: Choose among Mon, Tue, Wed, Thu, Fri, Sat, Sun
Start Time: Choose in 30‑minute increments (00~23 hour, 00/30 minutes)
Table. Backup Service Information Input Items
Summary Check the detailed information and estimated billing amount generated in the panel, and click the Complete button.
When creation is complete, check the created resources on the Backup List page.
Reference
Creation or termination of a Virtual Server or GPU Server may take about 180 minutes to be reflected in the backup target list.
VM Image backup (Agentless) of a Virtual Server can be performed to a remote location by selecting a storage location (excluding GPU Server).
To create a remote backup, create the Backup service in the location where the backup target server was created, and select the remote location in the Backup Save item.
Backup target can create one Backup service per region. By creating Backup services in two or more regions, stability against failures and disasters can be ensured.
To optimize backup and recovery speed, set the backup storage location to be the same as the target backup server’s location.
Caution
Agentless Backup has default encryption applied and cannot be modified.
Agent Backup applies server-side encryption based on the AES-256 algorithm when encryption is selected, and may cause a performance degradation of about 20 ~ 30%.
Agentless Backup cannot back up VM Images on servers with SSD_MultiAttach (a Block Storage disk type) attached.
The initial backup is created as a Full Backup regardless of the application details.
If you select a location in a different region for Backup storage or Backup replication, data transfer fees will be added.
Check Backup Detailed Information
The Backup service allows you to view and edit the full resource list and detailed information. Backup Details page consists of Detailed Information, Schedule, Backup History, Recovery Target, Recovery History, Replication, Tags, Operation History tabs.
To view detailed information of the Backup service, follow the steps below.
Click the All Services > Storage > Backup menu. Navigate to the Service Home page of Backup.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
Click the resource to view detailed information on the Backup List page. You will be taken to the Backup Details page.
Backup Details page displays status information and additional feature information, and consists of Details, Schedule, Backup History, Recovery Target, Recovery History, Replication, Tags, Operation History tabs.
Category
Detailed description
Backup status
Backup status information
Creating: Creation in progress
Available: Creation completed
Deleting: Service termination in progress
Editing: Changing settings
Error: Abnormal state
Error Deleting: Abnormal state during deletion
Restoring: Recovery in progress
Instant Backup
Instantly create a backup copy at the time of creation
For detailed information about Instant Backup, see [Instant Backup](/userguide/storage/backup/how_to_guides/on_demand.md#즉시-backup하기)
Service Cancellation
Button to cancel the service
Table. Status Information and Additional Functions
Detailed Information
Backup list page allows you to view detailed information of the selected resource and edit the information if needed.
Category
Detailed description
Service
Service group
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In the Backup service, it refers to the backup name
Resource ID
Unique resource ID of the service
Creator
User who created the service
Creation DateTime
DateTime when the service was created
Editor
User who modified the service
Modification DateTime
Date and time the service was edited
Backup name
Backup name
The copy’s backup name is the same as the original
Replication
Whether the resource is an original or a replica
Backup type
Backup type related to Agent configuration
Encryption
Encryption status
Backup target
Backup target server
Filesystem
Backup target FileSystem
The replica’s Filesystem is the same as the original
Retention Period
Storage period of recovery target (backup)
The retention period of the replica is the same as the original
If you need to modify the retention period, click the Edit button
For details on modifying the retention period, see [Edit Retention Period](#보관-기간-수정하기)
Table. Backup Details - Detailed Information Tab Items
Schedule
You can view the automatic backup schedule of the selected resource on the Backup List page.
Category
Detailed description
Schedule Edit
Schedule Edit Button
Schedule editing not allowed for copies
For detailed information on schedule editing, see Edit Schedule
Schedule Name
Schedule Name
Automatically set when creating a service or editing a schedule
Backup type
Full, Incremental Backup type
Backup cycle
Automatic backup generation cycle set by the user
Table. Backup Details - Schedule Tab Items
Reference
The schedule is based on Asia/Seoul (GMT +09:00).
The schedule can be set to 1 Full Backup and up to 6 Incremental Backups.
If it is in Available status, schedule can be modified.
The schedule of the replica is the same as the original, and can be checked in the original.
If the original is deleted, the replica’s schedule will be removed.
Backup History
On the Backup List page, you can view the immediate backup and scheduled backup execution history of the selected resources.
Category
Detailed description
Schedule Name
Schedule Name
Automatically set when creating a service or editing a schedule
Backup Timestamp
Backup Start Time
Completion Date and Time
Backup Completion Time
Status
Backup status information (Success/Failure/In progress)
Backup Cancel
Cancel button while backup in progress
For detailed information about backup cancellation, refer to Cancel Backup
Table. Backup Details - Backup History Tab Items
Reference
Backup and completion date and time are based on Asia/Seoul (GMT +09:00).
Queries can be made based on backup timestamps within up to 30 days.
A notification is sent when a backup fails, and it can be checked in Management > Logging&Audit.
In the replica, the Cancel Backup button is not displayed.
The backup history of the original and the replica is provided by the original.
Click the Excel Download button at the top of the list to download the retrieved Backup history list as an Excel file.
Caution
Agentless Backup: Among Block Storage disk types, servers with SSD-MultiAttach attached cannot perform VM Image backup.
Recovery Target
On the Backup list page, you can view the recovery target (backup) of the selected resource.
Category
Detailed description
Schedule
Schedule Name
Automatically set when creating a service or modifying a schedule
Backup Timestamp
Backup Start Time
Retention Period
Storage period of recovery target
The retention period of the replica is the same as the original and can be verified from the original.
Capacity
Capacity of the recovery target
Additional Features > More
Management button for recovery target
Recovery: Restore resources as recovery target
For detailed information about recovery, see Recover
Backup date and time is based on Asia/Seoul (GMT +09:00).
You can query based on the backup date/time within up to 30 days.
Click the Excel Download button at the top of the list to download the retrieved recovery target list as an Excel file.
Recovery History
You can view the recovery execution history of the selected resource on the Backup list page.
Category
Detailed description
Schedule
Schedule Name
Automatically set when creating a service or modifying a schedule
Recovery server name
Name of the server created through recovery
Backup date/time
Backup start time of the recovery target used for recovery
Recovery date and time
Recovery start time
Recovery Completion DateTime
Recovery Completion Time
Status
Recovery status information (Success/Failure/In progress)
Table. Backup Details - Recovery History Tab Items
Reference
Backup and recovery times are based on Asia/Seoul (GMT +09:00).
You can query based on the recovery date within up to 30 days.
By clicking the Excel download button at the top of the list, you can download the retrieved recovery history list as an Excel file.
Replication
You can view the replication information of the selected resource on the Backup list page.
Guide
Replication tab information is provided only when you set Agent Backup and Backup Replication to Use when applying for the Backup service.
In the case of Agentless Backup, the replication feature (replication tab) is not provided, and when applying for the service, you can select the backup storage location to store the backup copy in another location.
If you need to modify the schedule or retention period of a created backup, cancel the backup, or delete a recovery target, you can perform the tasks on the Backup Details, Schedule, Recovery Target pages.
Edit retention period
You can modify the retention period of the backup. To modify the retention period, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup List page, click the resource whose retention period you want to modify. You will be taken to the Backup Details page.
Click the Edit button next to Storage Period. The Edit Storage Period popup opens.
Select the storage period and click the Confirm button.
Check the modified retention period on the Backup Details page.
Click the Recovery Target tab. Check the modified retention period of the recovery targets.
Caution
Agentless Backup: When the retention period is modified, the retention period of all already created backup copies will be changed uniformly.
Agent Backup: After the retention period is modified, the changed retention period applies to backups generated thereafter.
Reference
If you modify the original, the same changes will be applied to the copy.
Filesystem Edit
You can modify the registered Filesystem. To modify the Filesystem, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup List page, click the resource whose Filesystem you want to modify. You will be taken to the Backup Details page.
Click the Edit button next to Filesystem. The Edit Filesystem popup window opens.
Modify the Filesystem and click the Confirm button.
After entering the Filesystem to add, click the Add Target button. The Filesystem will be added to the list.
Click the X button of the Filesystem to delete from the Filesystem list. The Filesystem will be removed from the list.
Caution
Filesystem with the same name or a Filesystem registered in another Backup cannot be added.
/dev/zero, /dev/null cannot be added to the Filesystem.
If it includes the following special characters per operating system, recovery is impossible.
Linux: :*<>| not allowed
Windows: /?*<>| not allowed
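The per-OS restrictions above can be checked before registration with a simple validator. This sketch is based only on the characters and paths listed here; actual console validation may differ:

```python
# Characters that make recovery impossible, per the guide.
FORBIDDEN = {
    "linux": set(':*<>|'),
    "windows": set('/?*<>|'),
}
# Paths that cannot be added to the Filesystem list.
BLOCKED_PATHS = {"/dev/zero", "/dev/null"}

def filesystem_ok(path: str, os_type: str) -> bool:
    if path in BLOCKED_PATHS:
        return False
    # Reject the path if it shares any character with the forbidden set.
    return not (set(path) & FORBIDDEN[os_type])

print(filesystem_ok("/var/log", "linux"))     # True
print(filesystem_ok("/dev/null", "linux"))    # False (blocked path)
print(filesystem_ok("C:\\data?", "windows"))  # False (contains '?')
```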
Edit Schedule
You can modify the backup schedule.
Guide
In the case of a copy, the schedule cannot be modified.
To modify the schedule, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup List page, click the resource whose schedule you want to modify. You will be taken to the Backup Details page.
Click the Schedule tab.
Click the Edit Schedule button. The Edit Schedule popup window opens.
Select the schedule and click the Confirm button.
Choose among Daily, Weekly, and Monthly.
Daily: set the start time.
Weekly: set the day of the week and start time.
Monthly: set the week, day of the week, and start time.
Week: you can choose the 1st, 2nd, 3rd, 4th, or last week.
Day of the week: you can select Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, or Sunday.
Start Time: can be selected in 30-minute intervals (hours 00-23, minutes 00/30).
Reference
The schedule is based on Asia/Seoul (GMT +09:00).
The schedule can be set to 1 Full Backup and up to 6 Incremental Backups, and at least one Full Backup schedule registration is required.
If the status is Available, the schedule can be modified.
After schedule modification, the initial backup is created as a Full Backup regardless of the application details.
If the original is edited, the same changes will be applied to the copy.
Backup Cancel
You can cancel an ongoing backup. To cancel the backup, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup List page, click the resource whose backup you want to cancel. You will be taken to the Backup Details page.
Click the Backup History tab.
Check the backup you want to cancel. If the backup status is In progress, cancellation is possible.
Click the Backup Cancel button. The Backup Cancel popup window opens.
After checking information such as Backup name, Backup date and time, click the Confirm button. The backup will be canceled.
Reference
If you cancel the backup, the backups of both the original and the copy will be canceled.
Delete recovery target
You can delete the generated recovery target (backup). To delete the recovery target, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup List page, click the resource whose recovery target you want to delete. You will be taken to the Backup Details page.
Click the Recovery Target tab.
After selecting all the items you want to delete, click the Delete button. Delete popup window opens.
After checking Backup schedule name, Backup date and time, click the Confirm button.
If you delete two or more items, you can only see the count of items to be deleted in the popup window.
Once deletion is complete, check on the Backup Recovery Target page whether the target has been deleted.
Reference
You can delete two or more recovery targets by using multi-select.
Even if you delete the recovery target of the original, the replica will not be deleted.
Cancel Backup Service
You can reduce operating costs by terminating unused backups. However, if you terminate the service, the running service may be stopped immediately, so you should consider the impact of service interruption sufficiently before proceeding with the termination.
Caution
All saved backup data and history will be deleted. After termination, data cannot be recovered, so be careful.
The service can be terminated if it is in the Available, Error, or Error Deleting state.
If you cancel the original Backup service, the replica’s schedule will be deleted, but the stored backup data will not be deleted.
An Agent Backup that uses replication cannot be terminated. Change the replication policy to Not used in the original, then terminate.
To cancel the Backup service, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup list page, select the resource to cancel and click the Cancel Service button.
Once termination is complete, check on the Backup List page whether the resource has been terminated.
3.5.2.1 - Immediate Backup
You can perform an immediate backup of the current point in time, regardless of the schedule settings for the created backup. This task can be performed on the Backup Details page.
Immediate Backup
You can immediately create a backup of the current point in time. To perform an immediate backup, follow these steps:
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
On the Service Home page, click the Backup menu. You will be taken to the Backup List page.
On the Backup List page, click the resource to perform the backup. You will be taken to the Backup Details page.
Click the Immediate Backup button. The Immediate Backup popup window will open.
Click the Confirm button. The immediate backup will be performed.
Click the Backup History tab.
Check the backup in progress.
Note
Immediate backup is possible when the status is Available.
Immediate backup is performed as a Full Backup.
3.5.2.2 - Recover
You can perform recovery using the recovery target (backup copy) of a created backup. This task can be performed on the Backup Recovery Target page.
VM Image Recover
You can restore a VM Image using the backup copy of the recovery target.
To recover, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Backup Service Home page.
Click the Backup menu on the Service Home page. You will be taken to the Backup List page.
On the Backup List page, click the resource for which you want to perform recovery. You will be taken to the Backup Details page.
Click the Recovery Target tab.
After checking the Schedule and Backup date/time, click the More button of the target you want to recover.
Click the Restore button. The Backup Restore popup window opens.
After checking the Backup target and Backup date/time, enter the recovery location, recovery server name, etc.
Recovery Location: For remote backup copies, select the recovery location (target server or backup location).
Recovery Server Name: Use English letters, numbers, spaces, and special characters (-_) within 63 characters.
Server Type: Set the server type.
Network Settings: If you are restoring to a different location than the backup target server, configure the network of the recovery server.
Security Group: Set the Security Group of the recovery server.
After checking the entered information, click Confirm.
Click the Recovery History tab.
Check the ongoing recovery tasks.
Caution
If it is in Available state, recovery is possible.
The original’s Security Group, NAT IP, Delete on termination settings are not restored, so set them after recovery.
The original server’s Server Group is not applied to the recovery server.
Restored servers cannot use the Rebuild function.
Reference
When restoring, a new server is created, and the Block Storage type and Keypair are set the same as the original.
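The recovery server name rule above (English letters, numbers, spaces, and - _ within 63 characters) can be pre-checked with a regex sketch; the pattern is inferred from the stated rule and is not an official validator:

```python
import re

# 1-63 characters: English letters, digits, spaces, hyphen, underscore.
NAME_RE = re.compile(r"^[A-Za-z0-9 _-]{1,63}$")

def valid_recovery_server_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

print(valid_recovery_server_name("web-restore_01"))  # True
print(valid_recovery_server_name("서버-01"))          # False (non-English letters)
print(valid_recovery_server_name("a" * 64))          # False (over 63 chars)
```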
Filesystem Recovery
You can restore using the backup copy of the recovery target. To restore, follow the steps below.
All Services > Storage > Backup Click the menu. Go to Backup’s Service Home page.
Click the Backup menu on the Service Home page. It navigates to the Backup List page.
Click the resource to perform recovery on the Backup list page. You will be taken to the Backup details page.
Click the Recovery Target page. Navigate to the Backup Recovery Target page.
After checking the Schedule and Backup time, click the More button of the target you want to recover.
Click the Restore button. The Backup Restore popup window opens.
Check the Backup target and Backup date and time, etc.
Click the Server Selection button of the recovery target item. The Server Selection popup window will open.
After selecting the server, click Confirm.
Only servers that have configured the Backup Agent can be selected among servers generated at the backup storage location.
Only servers that use the same Backup Master and OS as the target server can be selected.
After checking the Filesystem to recover in the list at the bottom of the Filesystem item, enter the recovery location.
Please enter the exact name of the Filesystem being used in the OS.
If /dev/zero or /dev/null is entered, recovery is not possible.
If the path includes the following special characters for each operating system, recovery is not possible.
Linux: :*<>|
Windows: /?*<>|
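As a quick illustration, the Linux restrictions above can be pre-checked with a small shell helper before entering the recovery location in the console. The helper name and logic are illustrative only, not part of the Backup service:

```shell
# Hypothetical pre-check for a Linux recovery path, based on the rules above:
# /dev/zero and /dev/null cannot be recovered, and paths containing
# the special characters : * < > | cannot be recovered.
check_linux_recovery_path() {
  case "$1" in
    /dev/zero|/dev/null) return 1 ;;   # blocked device paths
  esac
  # reject paths containing any forbidden special character
  printf '%s' "$1" | grep -q '[*<>:|]' && return 1
  return 0
}

check_linux_recovery_path "/data/app" && echo "ok"
check_linux_recovery_path "/dev/null" || echo "blocked"
```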
When Overwrite is selected, files with the same type and name are overwritten. If not selected, those files are excluded from recovery.
After verifying the entered information, click Confirm.
Click the Recovery History page. Navigate to the Backup Recovery History page.
Check the ongoing recovery work.
3.5.2.3 - Backup Agent Usage
Users can create and manage the agents required to use Filesystem backup in the Backup service.
Create Backup Agent
You can create and use the Backup Agent service in the Samsung Cloud Platform Console.
To create a Backup Agent, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Service Home page of Backup.
Click the Backup Agent menu on the Service Home page. It moves to the Backup Agent List page.
On the Backup Agent List page, click the Create Service button. The Backup Agent Creation page opens.
On the Backup Agent Creation page, enter the information required to create the service.
Category
Required
Detailed description
Target Server
Required
Select the target server to create the Backup Agent
Click the Select button to select the Bare Metal Server to create the Backup Agent
Table. Backup Agent Service Information Input Items
Check the detailed information in the Summary panel, and click the Complete button.
Once creation is complete, check the created resources on the Backup Agent List page.
Caution
After creating the Backup Agent, connect to the target server and install the Agent.
Once the connection status on the Backup Agent detailed information screen shows Success after installing the Agent, you can create a Backup service to back up the Filesystem.
Notice
After creating the Backup Agent service, connect to the Backup target server and install the Backup Agent.
For detailed information about installing the Backup Agent, refer to Install Backup Agent.
After installation is complete, check the connection status on the Backup Agent Details page.
Backup Agent View Detailed Information
You can view and edit the full resource list and detailed information of the Backup Agent service. The Backup Agent Details page consists of the Details, Tags, and Job History tabs.
To view detailed information of the Backup Agent service, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Service Home page of Backup.
On the Service Home page, click the Backup Agent menu. You will be taken to the Backup Agent List page.
Click the resource to view detailed information on the Backup Agent List page. Move to the Backup Agent Details page.
The Backup Agent Details page displays status information and additional feature information, and consists of the Details, Tags, and Job History tabs.
Category
Detailed description
Backup Agent status
Backup Agent status information
Creating: Creating
Available: Created
Deleting: Deleting
Error: Abnormal state
Service termination
Button to cancel the service
Table. Status Information and Additional Functions
Detailed Information
Backup Agent list page allows you to view detailed information of the selected resources and, if necessary, edit the information.
Category
Detailed description
service
service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In the Backup Agent service, it refers to the Backup Agent name.
Resource ID
Unique resource ID of the service
Creator
User who created the service
Creation time
Time the service was created
Editor
User who modified the service
Modification Date/Time
Date/Time the service was modified
Backup Agent name
Backup Agent name
Backup Master name
Backup management server name
Number of Backup policies
Number of Backup services connected to the target server
Backup Master IP
Backup management server connection IP
Service Category
Service Category of Backup Agent Target Server
Target Server
Backup Agent Target Server Information
Server Name: Target Server Name
Connection Status: Connection status between Backup Master and server
Click the Check Connection Status button to recheck the connection status
Backup IP: Backup IP information of the target server connected to Backup Master
Check Time: Last connection status check time
Gateway: Backup Gateway information of the target server connected to Backup Master
Table. Backup Agent detailed information items
Notice
After creating the Backup Agent service, connect to the backup target server and install the Backup Agent.
For detailed information about installing the Backup Agent, please refer to Install Backup Agent.
After installation is complete, check the connection status on the Backup Agent Details page.
Tag
Backup Agent List page allows you to view the tag information of selected resources, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
You can view the Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Tag Tab Items
Work History
On the Backup Agent List page, you can view the operation history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
You can check the work date/time, resource type, resource ID, resource name, work details, event topic, work result, and operator information
Table. Work History Tab Detailed Information Items
Backup Agent Cancel
You can cancel the unused Backup Agent.
Caution
The Backup Agent can be terminated only when it is in the Available or Error state.
If there is a Backup service connected to the Backup Agent, it cannot be terminated.
The Backup policy count is the number of Backup services connected to the Backup Agent. Terminate all connected Backup services before terminating the Backup Agent.
To cancel the Backup Agent, follow the steps below.
Click the All Services > Storage > Backup menu. You will be taken to the Service Home page of Backup.
On the Service Home page, click the Backup Agent menu. It navigates to the Backup Agent List page.
On the Backup Agent List page, select the resource to cancel, and click the Cancel Service button.
If termination is completed, check on the Backup Agent List page whether the resource has been terminated.
3.5.2.4 - Backup Agent Install
Caution
To install the Backup Agent, you must first apply for the Backup Agent service. Please refer to Backup Agent Apply.
To use Agent-based Backup, you need to connect to the target server and install the Backup Agent. After installing the Backup Agent, register the backup through the Backup service.
Pre-preparation work
To install the Backup Agent, check the installation space, download the Backup Agent installation file, verify the Backup Master/Agent information, and complete the other preparatory tasks below.
Backup Agent Check Installation Space
The required capacity for installing the Backup Agent is as follows.
| OS type | Path | Required capacity | Remarks |
|---|---|---|---|
| Linux | /tmp (temporarily used during installation) | 3 GB | If /usr space is insufficient, a separate space can be linked: # ln -s [separate space] /usr/openv |
| Linux | /opt | 3 GB | If /usr space is insufficient, a separate space can be linked: # ln -s [separate space] /usr/openv |
| Linux | openv installation path (usually /usr) | 3~5 GB | If /usr space is insufficient, a separate space can be linked: # ln -s [separate space] /usr/openv |
| Windows | Veritas installation path (usually C:\) | 3~5 GB | If space on C:\ is insufficient, it can be installed on another drive |
| - | Installation file | 10 GB | - |
Table. Required capacity when installing Backup Agent
Backup Agent installation file download
To download the Backup Agent installation file, follow these steps.
Click the All Services > Storage > Backup menu. You will be taken to the Service Home page of Backup.
Click the Backup Agent menu on the Service Home page. You will be taken to the Backup Agent List page.
Click the resource to view detailed information on the Backup Agent list page. It navigates to the Backup Agent details page.
After confirming the Installation file URL, go to the URL. You can download the Backup Agent installation file.
Linux OS
If you are using Linux OS, refer to the following example and enter the command.
# cd [download location]
# curl -O [installation file download URL] → Replace [installation file download URL] with the copied URL.
Windows OS
If you use Windows OS, you can download using a web browser or Windows PowerShell.
When using an internet browser
After launching the internet browser, enter the copied installation file download URL into the address bar. After moving to the download screen, the download will start.
Reference
It may take some time until it moves to the download screen.
If you do not go to the download screen, download using Windows PowerShell.
When using Windows PowerShell
Right-click the Windows Start icon, then run Command Prompt (Admin).
Run Windows PowerShell in administrator mode.
Enter the command, referring to the following example.
# wget "download URL" -OutFile "filename to save"
Check Backup Master and Agent information
To install the Backup Agent, Backup Master and Agent information is required for routing configuration and hosts file registration. Check the Backup Master IP, Backup IP, and Gateway information on the Backup Agent Details page.
For details, refer to Backup Agent View Detailed Information.
Setting up routing for communication with Backup Master
To communicate with Backup Master, you need to set up routing in the operating system.
Linux OS - CentOS/RHEL
Linux OS - To set up routing on CentOS/RHEL, follow the steps below.
Reference
The IP, Gateway, and Vlan ID information of the Backup Master and Agent used in the routing configuration example are as follows.
Backup Master IP: 10.242.8.4
Backup IP (Agent IP): 10.252.25.4
Gateway (Agent Gateway): 10.252.25.1
vlan: 100
vlan can be set in the Config file of the Bare Metal Server. For more details, see Local Subnet Setup.
Use the following command to check the backup network Interface information.
ip a
bash /usr/local/bin/ip.sh
Set up routing using the nmcli command.
# nmcli con mod "Vlan bond-srv.100" +ipv4.routes "10.242.8.0/24 10.252.25.1"
# nmcli device reapply bond-srv.100
Check that the routing settings have been applied correctly.
# telnet [Backup Master name] 1556
Linux OS - ubuntu
Linux OS - To set up routing on Ubuntu, follow the steps below.
Reference
The IP, Gateway, and Vlan ID information of the Backup Master and Agent used in the routing configuration example are as follows.
Backup Master IP: 10.242.8.4
Backup IP (Agent IP): 10.252.25.4
Gateway (Agent Gateway): 10.252.25.1
vlan: 100
vlan can be set in the Config file of the Bare Metal Server. For more details, see Local Subnet Setup.
Use the following command to check the backup network Interface information.
ip a
bash /usr/local/bin/ip.sh
Open the 50-cloud-init.yaml file and set up routing.
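The routing entry itself is not shown here; the following is a minimal sketch assuming the example values from the reference above, that the file is under the usual /etc/netplan/ path, and that the VLAN interface is named bond-srv.100 (all of these may differ in your environment):

```shell
# Minimal sketch: add a route to the Backup Master subnet in 50-cloud-init.yaml
# (assumed path: /etc/netplan/50-cloud-init.yaml), using the example values above —
# Backup Master IP 10.242.8.4, Agent gateway 10.252.25.1, VLAN interface bond-srv.100:
#
#   vlans:
#     bond-srv.100:
#       routes:
#         - to: 10.242.8.0/24
#           via: 10.252.25.1
#
# Apply the updated configuration:
sudo netplan apply
```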
Check that the routing settings have been applied correctly.
# telnet [Backup Master name] 1556
Windows OS
To set up routing in Windows OS, follow the steps below.
Reference
The IP, Gateway, and Vlan ID information of the Backup Master and Agent used in the routing configuration example are as follows.
Backup Master IP: 10.242.8.4
Backup IP (Agent IP): 10.252.25.4
Gateway (Agent Gateway): 10.252.25.1
ifindex: 100
ifindex can be set in the Config file of a Bare Metal Server. For more details, see Setting Local Subnet.
Right-click the Windows Start icon, then run Command Prompt (Admin).
Run Windows PowerShell in administrator mode.
Using the ipconfig, Get-NetAdapter commands, check the backup network interface index (ifindex) setting.
# ipconfig /all → Check the interface name assigned to the Agent IP.
# Get-NetAdapter → Refer to the interface name checked above to find the interface index.
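The steps above identify the interface index, but the route to the Backup Master must still be added. A minimal PowerShell sketch, assuming the example values from the reference above (gateway 10.252.25.1, ifindex 100); the destination prefix is derived from the Backup Master IP 10.242.8.4 and may differ in your environment:

```shell
# Add a persistent route to the Backup Master subnet via the backup gateway
# on the interface index confirmed above (example values; adjust to your environment).
New-NetRoute -DestinationPrefix "10.242.8.0/24" -InterfaceIndex 100 -NextHop "10.252.25.1"

# Verify the route and test connectivity to the Backup Master port (1556).
Get-NetRoute -DestinationPrefix "10.242.8.0/24"
Test-NetConnection -ComputerName 10.242.8.4 -Port 1556
```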
Add port for Backup Agent (when using Windows firewall)
If you are using the Windows firewall, you need to add the port for the Backup Agent, which communicates with the Backup Master, to the Windows firewall.
The ports and settings to add to the Windows firewall are as follows.
Inbound Rule: (TCP) 443, 1556, 13720, 13724, 13782 port
Action: allow the connection
Profile: select all (domain, private, public)
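If you prefer to add the rule from the command line rather than the Windows firewall UI, the settings above can be sketched in PowerShell as follows (the rule name "NetBackup Agent" is an arbitrary label, not a required value):

```shell
# Inbound rule for Backup Agent <-> Backup Master communication, per the list above:
# allow TCP 443, 1556, 13720, 13724, 13782 on all profiles. Run as administrator.
New-NetFirewallRule -DisplayName "NetBackup Agent" `
    -Direction Inbound -Action Allow -Protocol TCP `
    -LocalPort 443,1556,13720,13724,13782 `
    -Profile Domain,Private,Public
```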
Check whether the package is installed (when using RHEL 8)
If you are using RHEL 8, the libnsl package must be installed to install the Backup Agent.
Check if the libnsl package is installed using the following command.
# rpm -qa | grep libnsl
If the libnsl package is not installed, install the package using the following command.
# yum install libnsl
Backup Agent Install
Install the Backup Agent suitable for the operating system.
Linux OS
To install the Backup Agent on Linux OS, follow the steps below.
Move to the path where the Backup Agent 10.1.1 installation file was extracted, and change the permissions of the installation-related files.
# chmod 755 install
# chmod -R 755 NBClients
Run ./install.
[root@bkclientlin ~]# ./install
After the Install description, when confirming whether to proceed with the installation, enter y and press the Enter key.
Do you wish to continue? [y,n] (y) y
When checking the installation progress of Software on the client, enter y and press the Enter key.
Do you want to install the ~~~~~~~~~~~~ software for this client? [y,n] (y) y
When checking the Backup master server name, enter Backup Master name and press the Enter key.
Enter the name of the ~~~~~~ master server : bkmaster
When checking the client server configuration name, enter n and then enter the Agent information.
Agent information input required: Backup Agent name.scpbackup (e.g., agent_tgdrhw.scpbackup)
Would you like to use "bkclientlin" as the configured
name of the ~~~~~~~ client? [y,n] (y) n
agent_tgdrhw.scpbackup
Check the Master Server information and if there are no issues, enter y.
Master server [bkmaster] reports the following CA Certificate fingerprints:
SHA-256 Fingerprint: [BB:38:~~~~~~~~]
SHA-1 Fingerprint: [0D:E6:~~~~~~~]
Is this correct [y,n] y
When selecting the Java GUI and JRE option, enter 2. Java GUI and JRE package installation will be excluded.
Choose an option from the list below.
1) Include the Java GUI and JRE
2) Exclude the Java GUI and JRE
Java GUI and JRE option: [1,2] (2) :2
Excluding the installation of Java GUI and JRE packages
Verify that the installation was completed correctly.
Windows OS
To install the Backup Agent on Windows OS, follow these steps.
Guide
Backup Agent installation must be performed with the Default Administrator account. If you are not using the Default Administrator account, log in with the Default Administrator account and then proceed with the installation.
After moving to the folder where the Backup Agent 10.1.1 installation file has been extracted, run the Setup.exe file with administrator privileges.
Guide
If the installation zip file contains an EEB directory in the format NB_10.4.0.1_ETXXXXXX_1, after installing the Backup Agent, you must proceed with the patch work according to the following procedure.
When the Backup Agent installation is finished, go to the NB_10.4.0.1_ETXXXXXX_1 folder.
Run the eebinstaller_XXXXXXX_1_AMD64.exe file with administrator privileges.
If there are multiple files, install all of them.
After selecting the client to install in the client selection popup, click the OK button.
Select Yes in the Veritas NetBackup popup window. The installation popup window opens, and the installation steps are displayed on the left side of the popup.
When the Welcome step description is displayed, click the Next button.
When the License Agreement step description is displayed, select the Agree with the Veritas Software License Agreement item, then click the Next button.
When the Install Type step description is displayed, confirm that Install to the computer only is checked, then select Custom installation. When selection is finished, click the Next button.
Notice
If any item other than Install to the computer only is checked, verify whether the program is already installed.
When the Option step description is displayed, verify the installation path and click the Next button. Client option information is displayed.
You can click the Change… button to change the installation path.
After checking the Client Options information, click the Next button. Log on information will be displayed.
After checking the Log on information, click the Next button.
Safe Abort Option: If a system restart is required, this is an option to halt the installation (optional).
When the System Name step description is displayed, enter the server environment information and click the Next button. Master Server information is displayed.
Client Name: Enter the Agent information. Backup Agent name.scpbackup (e.g.: agent_tgdrhw.scpbackup)
Master Server Name: Enter the Backup Master name.
After checking the Master Server information, select the I recognize the fingerprint for this host. Proceed with the certificate deployment. item. When selection is finished, click the Next button.
When the Security step description is displayed, confirm that the certificate deployment succeeded, then click the Next button.
When the Install step description is displayed, check the Installation Summary content and click the Install button.
When installation is complete, click the Finish button.
Backup Agent Check Status
When the installation of Backup Agent is finished, perform a communication test with Backup Master and check the status.
Linux OS
To conduct a communication test with the Backup Master on Linux OS, follow the steps below.
After entering the following command, conduct a communication test with Backup Master and check the results.
Command: # /usr/openv/netbackup/bin/bpclntcmd -pn
An example of the communication test results is as follows.
# /usr/openv/netbackup/bin/bpclntcmd -pn
expecting response from server takre1p1bkm1
agent_tgdrhw.scpbackup agent_tgdrhw.scpbackup 10.252.25.4 52096
In the Samsung Cloud Platform Console, click the All Services > Storage > Backup menu. You will be taken to the Service Home page of Backup.
Click the Backup Agent menu on the Service Home page. You will be taken to the Backup Agent List page.
Click the resource to check the communication test on the Backup Agent List page. You will be taken to the Backup Agent Details page.
Click the Check Connection Status button in the Connection Status item. If it communicates normally with Backup Master, Success is displayed.
Verify that the result values of the Backup Master communication test performed on the OS and the Samsung Cloud Platform Console are the same.
Reference
If the result values of the Backup Master communication test performed on the OS and Samsung Cloud Platform Console differ, please contact the Support Center. Please refer to Inquiry.
Windows OS
To conduct a communication test with Backup Master on Windows OS, follow the steps below.
After entering the following command, perform a communication test with Backup Master and check the results.
An example of the communication test results is as follows.
# C:\Veritas\NetBackup\bin\bpclntcmd.exe -pn
expecting response from server takre1p1bkm1
agent_tgdrhw.scpbackup agent_tgdrhw.scpbackup 10.252.25.4 52096
Click the All Services > Storage > Backup menu in the Samsung Cloud Platform Console. Go to the Backup Service Home page.
Click the Backup Agent menu on the Service Home page. Navigate to the Backup Agent List page.
Click the resource to verify the communication test on the Backup Agent List page. It navigates to the Backup Agent Details page.
Click the Check Connection Status button in the Connection Status item. If it communicates properly with Backup Master, Success is displayed.
Verify that the result values of the Backup Master communication test performed on the OS and the Samsung Cloud Platform Console are the same.
Reference
If the result values of the Backup Master communication test performed on the OS and the Samsung Cloud Platform Console differ, please contact the Support Center. Refer to Inquiry.
Backup Agent Delete
If you do not use the Backup Agent, you must delete the Backup Agent installed on each operating system.
Linux OS
To delete the Backup Agent on Linux OS, follow the steps below.
Day of Week: Select from Mon, Tue, Wed, Thu, Fri, Sat, Sun
Start Time: Select in 30-minute increments (00-23 hours, 00/30 minutes)
Table. Backup Service Information Input Items
On the Summary panel, check the created detailed information and estimated billing amount, then click the Complete button.
When creation is complete, check the created resource on the Backup List page.
Warning
Agent Backup applies server-side encryption based on AES-256 algorithm when encryption is selected, and approximately 20-30% performance degradation may occur.
The initial Backup is created as a Full Backup regardless of the application details.
If you select a different region as the Backup Replication location, data transfer fees will be added.
Modifying Replication Policy
You can change the replication status by modifying the replication policy. To modify the replication policy, follow these steps:
Click the All Services > Storage > Backup menu. You will move to the Backup Service Home page.
On the Service Home page, click the Backup menu. You will move to the Backup List page.
On the Backup List page, click the resource for which you want to modify the replication policy. You will move to the Backup Detail page.
On the Backup Detail page, click the Replication tab.
Click the Edit button of Replication Policy. The Edit Replication Policy popup window will open.
Enable: Performs replication. If it is disabled, it can be changed to enabled.
Disable: Stops backup replication. If it is disabled, the Backup service can be terminated.
Check the modified replication policy on the Replication tab of the Backup Detail page.
3.5.3 - API Reference
API Reference
3.5.4 - CLI Reference
CLI Reference
3.5.5 - Release Note
Backup
2025.12.16
FEATURE: Copy tab and additional features added
Backup replication tab added
In the Backup detailed page, you can view the original and replica information in the Replication tab.
Add download feature for related backup information
In the Backup detail page, you can download the Backup history, recovery target, and recovery history list as an Excel file.
2025.07.01
FEATURE: Expanded recovery location
Expand recovery scope
When restoring with an agentless remote backup copy, you can select the restore location (target server or backup copy location).
2025.04.28
FEATURE: Backup location and target expansion
Expand backup location and target
Agentless-based remote backup: You can backup and restore to a different location from the backup target server.
Backup Agent feature added: By configuring the Agent, you can back up the filesystem of a Bare Metal Server.
2025.02.27
FEATURE: Common Feature Change
Samsung Cloud Platform Common Feature Change
Common CX changes have been applied to Account, IAM, Service Home, tags, and other features.
2024.12.23
NEW: Backup service official version launch
We launched the Backup service to provide a service that safely backs up and restores data.
Since backup policies include targets, frequency, retention period, etc., users can set backup plans according to their business environment and requirements.
3.6 - Parallel File Storage
Service Overview
Parallel File Storage is an all-NVMe-based high-performance parallel file storage that can process large amounts of data quickly and efficiently. It can be used in various fields such as AI/ML and big data analysis, and it distributes data across multiple storage nodes to improve data processing speed and reduce analysis time.
Features
High Performance and Reliability: Distributes data across multiple NVMe-based nodes to provide high performance and reliability. High-performance processing is possible regardless of file size, and even if a single node fails, data is safely maintained through other nodes.
Large-capacity volume: It can be reliably expanded while online, and its scalability is excellent, allowing use without capacity limits.
Snapshot Backup: Through the image snapshot feature, recovery of changed and deleted data is possible. Recovery is performed by using the snapshot created at the point in time you want to recover.
Diagram
Figure. Parallel File Storage diagram
Provided Features
Parallel File Storage provides the following features.
Volume Name: Users can set names for each volume.
Capacity: Volumes can be created with capacities from a minimum of 1 TB to a maximum of 1,000 TB.
Connected Resource: Can be connected and used in a Multi-node GPU Cluster.
Snapshot: Through the image snapshot feature, recovery of changed and deleted data is possible. Users select a snapshot created at the point in time they wish to recover from the list to perform the recovery.
Components
Volume
A volume is the basic creation unit of the Parallel File Storage service and is used as data storage space. Users create a volume by entering a name and capacity, then connect it to one or more Multi-node GPU Clusters for use. The volume name creation rules are as follows.
It must start with a lowercase English letter and can be set to 3 to 21 characters using lowercase letters, numbers, and the special character (_).
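The naming rule above can be expressed as a simple pattern check. The helper below is an illustrative sketch, not part of the console:

```shell
# Hypothetical pre-check for a Parallel File Storage volume name, per the rules above:
# starts with a lowercase English letter; 3-21 characters total; only lowercase
# letters, digits, and the underscore (_) are allowed.
is_valid_volume_name() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9_]{2,20}$'
}

is_valid_volume_name "ai_train_vol" && echo "valid"
is_valid_volume_name "1stvolume" || echo "invalid"   # must not start with a digit
```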
Snapshot
A snapshot is an image backup of a volume at a specific point in time. Users can view the snapshot name and creation date in the snapshot list to select the snapshot they want to restore, and can recover data that was changed or deleted using that snapshot. The notes for using snapshots are as follows.
Reference
The snapshot creation time is based on Asia/Seoul (GMT +09:00).
You can create up to 50 snapshots.
Snapshot capacity is included in File Storage usage and incurs charges, so please delete unnecessary snapshots.
Preceding Service
This is a list of services that must be pre-configured before creating the service. For details, refer to the guide provided for each service and prepare in advance.
Service providing many GPUs for large-scale high-performance AI computation
Table. Parallel File Storage Preceding Service
3.6.1 - Overview
Service Overview
Parallel File Storage is a high-performance parallel file storage based on All NVMe that can process large amounts of data quickly and efficiently.
Features
Data Processing Speed Improvement: By distributing file data across multiple storage nodes, it improves data processing speed and reduces analysis time.
Various Field Utilization: Through fast data processing speed and analysis time, it can be used in various fields such as AI/ML analysis, big data analysis, etc.
Diagram
Figure. Parallel File Storage diagram
Provided Features
Parallel File Storage provides the following features.
Volume Name: Users can set names for each volume.
Snapshot: You can create a snapshot to restore to a specific point in time.
Connection Resource: Can be connected and used in a Multi-node GPU Cluster.
Components
You can create a volume by selecting the disk type and protocol according to the user’s service environment and performance requirements.
When using the snapshot feature, you can restore data to the point in time you want to recover.
Volume
Volume is the basic creation unit of the Parallel File Storage service and is used as data storage space. Users select a name and capacity to create a volume, then connect and use it in a Multi-node GPU Cluster. The volume name creation rules are as follows.
Starts with a lowercase English letter and can be set to 3 to 21 characters using lowercase letters, numbers, and special character (_).
Snapshot
A snapshot is an image backup of a volume at a specific point in time. Using the image snapshot function, you can recover changed or deleted data. The user selects the snapshot created at the point in time they want to recover from the snapshot list and performs the recovery.
Reference
Up to 50 snapshots can be created.
You can recover by selecting a specific snapshot from the snapshot list and creating a new volume based on the snapshot.
Notice
The snapshot recovery feature will be provided later.
Preceding Service
This is a list of services that must be pre-configured before creating the service. For details, refer to the guide provided for each service and prepare in advance.
Physical GPU servers for large-scale high-performance AI computation
Table. Parallel File Storage Preceding Service
3.6.2 - How-to guides
The user can enter the required information for Parallel File Storage through the Samsung Cloud Platform Console, select detailed options, and create the service.
Parallel File Storage Create
You can create and use the Parallel File Storage service from the Samsung Cloud Platform Console. To create Parallel File Storage, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. You will be taken to the Service Home page of Parallel File Storage.
Click the Create Parallel File Storage button on the Service Home page. You will be taken to the Create Parallel File Storage page.
On the Parallel File Storage creation page, enter the information required to create the service.
Category
Required
Detailed description
Volume Name
Required
Enter volume name
Start with a lowercase English letter
Use lowercase letters, numbers, special character (_) to input 3 ~ 21 characters
Generated in the form ‘user input value+{6-character UUID composed of lowercase English letters and numbers}’
Cannot be modified after service creation
Capacity
Required
Enter the capacity to use
1 ~ 1000 TB available
Only expansion is possible after service creation
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Parallel File Storage Service Creation Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When the popup notifying creation opens, click the Confirm button.
When creation is complete, check the created resources on the Parallel File Storage list page.
Reference
Creating Parallel File Storage may take tens of minutes or more depending on the service scale.
Parallel File Storage Check Detailed Information
You can view and edit the full resource list and detailed information of the Parallel File Storage service. To check the detailed information of the service, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. You will be taken to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Navigate to the Parallel File Storage list page.
On the Parallel File Storage List page, click the resource to view detailed information. You will be taken to the Parallel File Storage Details page.
The Parallel File Storage Details page displays status information and additional feature information, and consists of the Details, Snapshot List, Tags, and Operation History tabs.
Category
Detailed description
Volume Status
Represents the status of the volume
Creating: In creation
Available: Creation complete, server connection possible
Extending: Capacity expansion in progress
Deleting: Service termination in progress
Error Deleting: Abnormal state during deletion
Error: Abnormal state during creation
Error Extending: Abnormal state during capacity expansion
Snapshot Creation
Button to immediately create a snapshot of the volume at the current point in time
Up to 50 can be created
For detailed information about snapshot creation, see Create Snapshot
Service cancellation
Button to cancel the service
Table. Parallel File Storage status information and additional features
Detailed Information
On the Parallel File Storage List page, you can view the detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In the Parallel File Storage service, it refers to a volume SRN
Resource Name
Resource Name
In the Parallel File Storage service, it refers to the volume name
Mount Name: Mount name per volume for server connection
{Storage IP}:/{Volume Name} is created
Mount Account: Click the View button and enter your password to view the mount account information
Connected Resources
List of connected resources (Multi-node GPU Server)
Resource Type: Service type of the connected resource
Resource Name: Name of the connected resource
IP: Connected resource IP information
Resource Status: Status of the connected resource
Connection Status: Connection status of the resource
If the connection status is partially successful, verify that the two N/W interfaces for Parallel File Storage connection in the Multi-node GPU Cluster are functioning properly, then disconnect and reconnect in Parallel File Storage to check the status
Resources can be added up to a maximum of 300
Click the Edit button to add or remove connected resources
Table. Parallel File Storage Details - Details Tab Items
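The Mount Name shown in the Details tab follows the pattern {Storage IP}:/{Volume Name}. A small sketch of how the full mount target string is composed (the IP and volume name below are illustrative placeholders):

```shell
# Placeholder values; substitute the Storage IP and Volume Name shown on your
# own Parallel File Storage Details page.
storage_ip="10.102.160.254"
volume_name="myvol_a1b2c3"

# Mount Name pattern: {Storage IP}:/{Volume Name}
mount_name="${storage_ip}:/${volume_name}"
echo "$mount_name"
```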
Parallel File Storage Resource Management
If you need to modify settings in Parallel File Storage or add or delete a connected server, you can perform the operation on the Parallel File Storage Details page.
Edit Capacity
You can expand the capacity of Parallel File Storage.
To modify the capacity, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. Navigate to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Go to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource whose capacity you want to modify. You will be taken to the Parallel File Storage Details page.
Click the Edit button of the Capacity item. The Capacity Edit popup window opens.
After entering the capacity to be expanded, click the Confirm button.
You can expand up to a maximum of 1000 TB, including the existing capacity.
When a popup notifying capacity expansion opens, click the Confirm button.
Edit Connected Resources
You can connect resources to Parallel File Storage or disconnect the connected resources.
Notice
You cannot modify connected resources while a previous modification is still in progress.
If communication with the target resource is lost or a connection is impossible, you cannot modify the connected resources.
You can connect up to 300 resources at a time. If you exceed 300, use the API.
To modify the connected resources, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. Navigate to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Navigate to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource whose connected resources you want to edit. You will be taken to the Parallel File Storage Details page.
Click the Edit button of the Connected Resource item. The Select Connected Resource popup window opens.
After selecting the resource to connect or unchecking the resource to disconnect, click the Confirm button.
You can select multiple resources at the same time.
Caution
Multi-node GPU Cluster server is connected to Parallel File Storage through two N/W interfaces. To optimize storage performance, please check that both N/W are properly connected.
On the Parallel File Storage Details page, if the resource's connection status is Partial Success, follow the steps below to verify it.
Verify that the 2 N/W interfaces for connecting Parallel File Storage in the Multi-node GPU Cluster are functioning properly.
After disconnecting from Parallel File Storage, reconnect.
Check the connection status of the resource on the Parallel File Storage Details page.
When disconnecting, you must first access the server and perform the disconnect operation (Umount, disconnect network drive).
If you disconnect without OS operation, a status error (Hang) may occur on the connection server.
For detailed information about the server unmount operation, please refer to Unmount Server.
When adding a connected server, you must first perform the connection tasks (Mount, network drive connection) on the server.
For detailed information about server connection, please refer to Connecting to Server.
Parallel File Storage Cancel
You can cancel unused Parallel File Storage to reduce operating costs. However, canceling may immediately terminate services currently in operation, so proceed only after fully considering the impact of discontinuing the service.
Caution
Be careful because data cannot be recovered after termination.
If there are resources connected to Parallel File Storage, you cannot cancel. Remove all connected resources before canceling the service.
You can only delete when the volume status is Available or Error.
To cancel Parallel File Storage, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. Go to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Go to the Parallel File Storage list page.
On the Parallel File Storage list page, select the resource to cancel, and click the Cancel Service button.
You can go to the Parallel File Storage Details page of the resource to be terminated and delete it individually.
If a popup notifying termination opens, click the Confirm button.
When the termination is completed, check on the Parallel File Storage list page whether the resource has been terminated.
3.6.2.1 - Using Snapshots
You can create, delete, or recover using snapshots of Parallel File Storage.
Guide
The snapshot recovery feature will be provided later.
Create Snapshot
You can create a snapshot of Parallel File Storage. To create a snapshot, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. Go to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Navigate to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource for which you want to create a snapshot. Go to the Parallel File Storage Details page.
On the Parallel File Storage Details page, click the Create Snapshot button.
If a popup notifying snapshot creation opens, click the Confirm button.
Click the Snapshot List tab. Navigate to the Parallel File Storage Snapshot List page.
Check the generated snapshot.
Caution
Snapshot fees are included in the Parallel File Storage usage fees.
Reference
You can create up to 50 snapshots.
Delete Snapshot
You can delete the snapshot of Parallel File Storage. To delete a snapshot, follow these steps.
Click the All Services > Storage > Parallel File Storage menu. Navigate to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Navigate to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource whose snapshot you want to delete. You will be taken to the Parallel File Storage Details page.
Click the Snapshot List tab on the Parallel File Storage Details page.
In the snapshot list, click the More > Delete button at the far right of the snapshot to be deleted.
Click the Confirm button when the popup notifying snapshot deletion opens.
3.6.2.2 - Install Agent
To use the Parallel File Storage service, you need to connect to the target server and install the Agent. After installing the Agent, mount the volume on the server to use Parallel File Storage.
Install Agent and Connect to Server (Mount)
Agent installation and server connection consist of the following six steps.
Agent installation
Account Login
Mount Point Creation
Filesystem Mount
Mount check
fstab registration
Agent Installation
Install the Agent using Mount IP.
Reference
The Mount IP can be found in the Mount Name item on the Details page of the Samsung Cloud Platform Console.
Click the All Services > Storage > Parallel File Storage menu. Go to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Navigate to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource to be used on the connected server. You will be taken to the Parallel File Storage Details page.
After checking the server in the Connection Server item, connect to that server.
Refer to the following example to install the Volume Agent and proceed with server connection (Mount).
curl <Mount IP>:14000/dist/v1/install | sh
root@RESD-s4sr3h:/# curl http://10.102.160.254:14000/dist/v1/install | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1424 100 1424 0 0 1978k 0 --:--:-- --:--:-- --:--:-- 1390k
Downloading WekaIO CLI 4.2.4.29-hcsf
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 58.7M 100 58.7M 0 0 1079M 0 --:--:-- --:--:-- --:--:-- 1088M
Installing...
Installing agent of version 4.2.4.29-hcsf
The agent is configured to detect cgroups - cgroups v1 not found, cgroups are disabled
Waiting for agent service to be ready
Installation finished successfully
WekaIO CLI 4.2.4.29-hcsf is now installed
Account Login
Log in using the mount information for server mount.
Reference
You can check the Mount name, Mount account, and password in the Mount information item of the detail page of Samsung Cloud Platform Console.
root@RESD-s4sr3h:/# weka user login -H 10.102.160.254
Organization (enter name or ID, default: 0) admin_org
Username: admin_reg
Password: ###########
+------------------------------+
| Login completed successfully |
+------------------------------+
Mount Point Creation
Create a mount point on the server for the filesystem mount.
#mkdir /mnt/weka
Filesystem Mount
Follow the steps below to mount the filesystem.
1. Use the ip a command to check the IP and Interface Name information for the mount.
root@RESD-s4sr3h:/# ip a |grep 10.102
inet 10.102.160.248/23 brd 10.102.161.255 scope global ibs4f0.8010
inet 10.102.160.249/23 brd 10.102.161.255 scope global ibP1s8f0.8010
Note
The IP information and Interface Name that can be confirmed in the above example are as follows.
IP: 10.102.160.248, 10.102.160.249
Interface Name: ibs4f0.8010, ibP1s8f0.8010
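The IP and interface extraction in step 1 can also be scripted. A sketch using the sample output shown above (the sample text stands in for the live ip a command):

```shell
# Sample lines standing in for the output of: ip a | grep 10.102
sample='    inet 10.102.160.248/23 brd 10.102.161.255 scope global ibs4f0.8010
    inet 10.102.160.249/23 brd 10.102.161.255 scope global ibP1s8f0.8010'

# The IP is the second field (with the prefix length stripped); the interface
# name is the last field of each line.
ips=$(echo "$sample" | awk '{split($2, a, "/"); print a[1]}')
ifaces=$(echo "$sample" | awk '{print $NF}')

echo "$ips"
echo "$ifaces"
```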
2. Execute the mount command using the verified IP and Interface Name.
mount -t wekafs -o net:ha=<Interface Name>,mgmt_ip='<IP1>+<IP2>' <Mount IP>/<Volume Name> /mnt/weka
root@RESD-s4sr3h:/# mount -t wekafs -o num_cores=8 -o net:ha=ibs4f0.8010,net:ha=ibP1s8f0.8010,mgmt_ip='10.102.160.10+10.102.160.11' 10.102.160.254/wekafs /mnt/weka
Mounting 10.102.161.254/bmtfs on /weka_fs
Basing mount on container client
Downloading [1/21] http://10.102.160.254:14000/dist/v1/image/envoy-fe-e6b882a6bce3c0de8cd9c7833df1a567.squashfs
Downloading [2/21] http://10.102.160.254:14000/dist/v1/image/weka-driver-1.0.0-d10ca9cff59b98778b4314014569e00f.squashfs
Downloading [3/21] http://10.102.160.254:14000/dist/v1/image/weka-driver-igb-uio-4.0.0-7eee7dc5b7f1d85a1be0e448d5e97312.squashfs
Downloading [4/21] http://10.102.160.254:14000/dist/v1/image/container-s3-tmp-1.57f-9cb61c7e0ae3ca9e2b476c191e4e84ab.squashfs
Downloading [5/21] http://10.102.160.254:14000/dist/v1/image/container-smbw-weka-4.7.12.3-9b67132a85a950260f048955dc33c7a9.squashfs
Downloading [6/21] http://10.102.160.254:14000/dist/v1/image/weka-drain-tools-2d01044c641816d9002ca594a6ae9d90.squashfs
Downloading [7/21] http://10.102.160.254:14000/dist/v1/image/container-ganesha-dev-weka-5-11becf16b21c9635daa23a247340a7bd.squashfs
Downloading [8/21] http://10.102.160.254:14000/dist/v1/image/dependencies-1.0.0-9b64fdba87a4d6e6efa9ab5250169ec8.squashfs
Downloading [9/21] http://10.102.160.254:14000/dist/v1/image/weka-container-2.3.0-be66bcc7c9739b15cacd910d7cac031e.squashfs
Downloading [10/21] http://10.102.160.254:14000/dist/v1/image/weka-hostside-faf9aa30ec9ac7521ffbc9589ac23deb.squashfs
Downloading [11/21] http://10.102.160.254:14000/dist/v1/image/api-6f501306831ff9a223a7f706c5a661e1.squashfs
Downloading [12/21] http://10.102.160.254:14000/dist/v1/image/weka-s3-3508f2f1afb4900ab11c4772e327b1ac.squashfs
Downloading [13/21] http://10.102.160.254:14000/dist/v1/image/weka-ganesha-5c6ef6d08e31f80580f50bab7d1b8134.squashfs
Downloading [14/21] http://10.102.160.254:14000/dist/v1/image/dashboard-dfb78995154ab40fb274037ac9fe8a45.squashfs
Downloading [15/21] http://10.102.160.254:14000/dist/v1/image/container-samba-weka-4.7.12.3-69835f740573b7ded6faed1dfe737bed.squashfs
Downloading [16/21] http://10.102.160.254:14000/dist/v1/image/weka-smbw-8a1430e5f0f2cca6d2a4af603d630882.squashfs
Downloading [17/21] http://10.102.160.254:14000/dist/v1/image/ui-1.0.0-5bc747765d326e6e1c3488285822f459.squashfs
Downloading [18/21] http://10.102.160.254:14000/dist/v1/image/weka-samba-8102bcf3d3a81f02755cb2e75b1b8d16.squashfs
Downloading [19/21] http://10.102.160.254:14000/dist/v1/image/weka-node-fbd17baa570969b6da7e5561f1eb652f.squashfs
Downloading [20/21] http://10.102.160.254:14000/dist/v1/image/ofed-b643ca3e4fa06d84416d463afe74a66a.squashfs
Downloading [21/21] http://10.102.160.254:14000/dist/v1/image/driver-uio-pci-generic-1.0.0-322a3daa84c41eeb6f0cafd0802fbf50.squashfs
Finished getting version 4.2.4.29-hcsf
Creating Weka container 'client' in version 4.2.4.29-hcsf
Preparing version 4.2.4.29-hcsf of container client
Base port was not explicitly provided, the container will use 14000
Applying resources
Starting container 'client'
Waiting for container 'client' to join cluster
Container "client" is ready (pid = 392216)
Calling the mount command
Cgroups v1 not found, running without cgroups
Mount completed successfully
Mount Check
Run the df -h command to check the mount status of the filesystem.
fstab Registration
Register fstab so that the volume automatically mounts on server reboot.
To register fstab, run the vi /etc/fstab command, then add the mount entry.
root@RESD-s4sr3h:/# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# / was on /dev/nvme2n1p2 during curtin installation
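A hypothetical fstab entry mirroring the mount command used earlier; the device path, option values, and IPs below are placeholders, so verify them against your own mount command before saving:

```shell
# Hypothetical /etc/fstab entry for the wekafs mount (placeholder values).
# Fields: device, mount point, filesystem type, options, dump, pass.
fstab_line="10.102.160.254/wekafs /mnt/weka wekafs net:ha=ibs4f0.8010,mgmt_ip=10.102.160.10+10.102.160.11 0 0"

# Sanity-check the entry shape: an fstab line has exactly six fields.
echo "$fstab_line" | awk '{print NF}'
```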
Disconnect Server (Umount)
To disconnect the server, first connect to the server and perform the disconnect operation (Umount), then disconnect the server from the Console.
To disconnect the server, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. Go to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. You will be taken to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource whose server you want to disconnect. You will be taken to the Parallel File Storage Details page.
After checking the server information in the Connection Server item, connect to the server.
Refer to the commands in the following example and proceed with the unmount operation (Umount).
umount /mnt/weka
vi /etc/fstab
3.6.2.3 - File-level recovery
You can restore data on a per-file basis using the generated snapshot.
Use file-level recovery
You can connect to the server and select and recover data. To perform file-level recovery, follow the steps below.
Click the All Services > Storage > Parallel File Storage menu. Go to the Service Home page of Parallel File Storage.
Click the Parallel File Storage menu on the Service Home page. Navigate to the Parallel File Storage list page.
On the Parallel File Storage list page, click the resource whose file you want to recover. You will be taken to the Parallel File Storage Details page.
After checking the connected server in the Connected Resources item, access that server.
Check the mount name of File Storage on the server.
Mount name is the same as the Mount Point set on the server for the Filesystem’s mount.
Go to the snapshot location under the Mount name.
# cd /MountName/.snapshots/snapshotName
After checking the recovery target file at the Snapshot location, recover it to the required path.
If needed, restart or create the container according to the application type.
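The recovery flow above can be simulated with local directories standing in for the mounted filesystem (all paths below are placeholders for /MountName/.snapshots/...):

```shell
# Simulate the file-level recovery flow locally: a temp directory stands in
# for the mounted Parallel File Storage volume.
mountpt=$(mktemp -d)

# A file as it existed at snapshot time, under the .snapshots location.
mkdir -p "$mountpt/.snapshots/snap01/data"
echo "original content" > "$mountpt/.snapshots/snap01/data/report.txt"

# Recover: copy the file from the snapshot location back to the live path.
mkdir -p "$mountpt/data"
cp "$mountpt/.snapshots/snap01/data/report.txt" "$mountpt/data/report.txt"

cat "$mountpt/data/report.txt"
```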
3.6.3 - API Reference
API Reference
3.6.4 - CLI Reference
CLI Reference
3.6.5 - Release Note
Parallel File Storage
2025.12.16
NEW Parallel File Storage Official Version Release
Parallel File Storage service has been officially launched.
File data can be distributed across multiple storage nodes to process large-scale data quickly and efficiently.
Through fast data processing speed and reduced analysis time, it can be used in various fields such as AI/ML analysis and big data analysis.
4 - Container
Containerized applications can be operated stably using Kubernetes by providing an execution, monitoring environment, and open-source software.
4.1 - Kubernetes Engine
4.1.1 - Overview
Service Overview
Kubernetes Engine is a service that provides lightweight virtual computing and containers, as well as a Kubernetes cluster to manage them. Users can utilize the Kubernetes environment without the complex preparation of installing, operating, and maintaining the Kubernetes Control Plane.
Features
Standard Kubernetes Environment Configuration: The standard Kubernetes environment can be used without separate configuration through the default Kubernetes Control Plane provided. It is compatible with applications in other standard Kubernetes environments, so you can use standard Kubernetes applications without modifying the code.
Easy Kubernetes Deployment: Provides secure communication between worker nodes and managed control planes, and quickly provisions worker nodes, allowing users to focus on building applications on the provided container environment.
Convenient Kubernetes Management: Provides various management features to conveniently use the created Kubernetes cluster, such as cluster information inquiry and cluster management, namespace management, and workload management through the dashboard for enterprise environments.
Service Composition Diagram
Figure. K8s Engine Configuration Diagram
Provided Features
Kubernetes Engine provides the following features.
Cluster Management: You can create and manage clusters to use the Kubernetes Engine service. After creating a cluster, you can add services necessary for operation, such as nodes, namespaces, and workloads.
Node Management: A node is a set of machines that run containerized applications. Every cluster must have at least one worker node to deploy applications. Nodes can be defined and used by defining a node pool. Nodes belonging to a node pool must have the same server type, size, and OS image, and multiple node pools can be created to establish a flexible deployment strategy.
Namespace Management: Namespace is a logical separation unit within a Kubernetes cluster, and is used to specify access permissions or resource usage limits by namespace.
Workload Management: Workload is an application running on Kubernetes Engine. You can create a namespace, then add or delete workloads. Workloads are created and managed item by item, such as deployments, pods, stateful sets, daemon sets, jobs, and cron jobs.
Service and Ingress Management: Service is an abstraction method that exposes applications running in a set of pods as a network service, and Ingress is used to expose HTTP and HTTPS paths from outside the cluster to the inside. After creating a namespace, you can create or delete services, endpoints, ingresses, and ingress classes.
Storage Management: When using Kubernetes Engine, you can create and manage the storage to be used. Storage is created and managed by items such as PVC, PV, and storage class.
Configuration Management: When there is a need to manage values that change inside a container according to multiple environments such as Dev/Prod, managing them with separate images due to environment variables is inconvenient and causes significant cost waste. In Kubernetes, you can manage environment variables or configuration values as variables from the outside so that they can be inserted when a Pod is created, and at this time, ConfigMap and Secret can be used.
Access Control: In cases where multiple users access a Kubernetes cluster, you can grant permissions for specific APIs or namespaces to restrict access. You can apply Kubernetes’ role-based access control (RBAC) feature to set permissions for clusters or namespaces. You can create and manage cluster roles, cluster role bindings, roles, and role bindings.
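As an illustration of the access control feature described above, the sketch below writes a minimal Kubernetes Role manifest to a file for a later kubectl apply -f. The namespace and role name are illustrative, not values defined by the service:

```shell
# Write a minimal RBAC Role manifest (read-only access to pods in one
# namespace). The namespace "dev" and role name "pod-reader" are examples.
role_file=$(mktemp)
cat > "$role_file" <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF

grep 'kind:' "$role_file"
```

A RoleBinding would then attach this Role to a specific user or group; cluster-wide permissions use ClusterRole and ClusterRoleBinding instead.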
Component
Control Plane
The Control Plane is the master node role in the Kubernetes Engine service. The master node is the management node of the cluster, and it plays a role in managing other nodes in the cluster. The cluster is the basic creation unit of the Kubernetes Engine service, and it is used to manage node pools, objects, controllers, and other components within it. Users set up the cluster name, control plane, network, File Storage, and other settings, and then create a node pool within the cluster to use it. The master node assigns tasks to the cluster, monitors the status of the nodes, and plays a role in data communication between nodes.
The cluster name creation rule is as follows.
It must start with an English letter and be 3-30 characters long, using English letters, numbers, and the special character (-).
The cluster name must not duplicate an existing cluster name.
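The naming rules above can be checked with a simple pattern. A sketch (the helper name is illustrative, and it assumes both upper and lower case English letters are accepted):

```shell
# Hypothetical helper: checks a candidate cluster name against the rules above
# (3-30 characters, starts with an English letter, only English letters,
# digits, and the hyphen character).
is_valid_cluster_name() {
  echo "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9-]{2,29}$'
}

is_valid_cluster_name "prod-cluster-01" && echo "name accepted"
is_valid_cluster_name "1cluster" || echo "name rejected"
```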
Worker Node
The Worker Node is a work node in the cluster, playing a role in performing the cluster’s tasks. The Worker Node receives tasks from the cluster’s master node, performs them, and reports the task results to the cluster’s master node. All nodes created within the node pool and namespace play the role of a worker node.
The creation rule of the node pool, which is a collection of worker nodes, is as follows.
A node pool must contain at least one node for application deployment to be possible.
Up to 100 nodes can be created in a node pool.
The 100-node limit applies to the cluster as a whole: for example, 100 node pools with 1 node each, or 50 node pools with 2 nodes each. Nodes can be distributed freely across node pools as long as the total does not exceed 100.
It is possible to set up Block Storage connected to the node pool.
It is possible to set the server type, size, and OS image for nodes belonging to the node pool, and all must be the same.
Auto-Scaling service allows you to set automatic node pool expansion/reduction according to the requirements of the deployed application.
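The node pool rules above can be summarized as a simple capacity check: the node counts of all pools must sum to at most 100. A sketch with illustrative pool sizes:

```shell
# Illustrative node counts for three node pools; the cluster-wide limit is
# 100 nodes across all pools.
pool_sizes="50 30 20"

total=0
for n in $pool_sizes; do
  total=$((total + n))
done

if [ "$total" -le 100 ]; then
  echo "within cluster limit"
else
  echo "exceeds cluster limit"
fi
```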
Preceding Service
This is a list of services that must be configured before creating this service. For details, refer to the guide provided for each service and prepare it in advance.
File Storage
A storage service that allows multiple clients to share files over the network
Used as a Persistent Volume
Table. Preceding services of Kubernetes Engine
4.1.1.1 - Monitoring Metrics
Kubernetes Engine Monitoring Metrics
The following table shows the monitoring metrics of Kubernetes Engine that can be checked through Cloud Monitoring. For detailed instructions on using Cloud Monitoring, refer to the Cloud Monitoring guide.
Performance Item
Detailed Description
Unit
Cluster Namespaces [Active]
Number of active namespaces
cnt
Cluster Namespaces [Total]
Total number of namespaces in the cluster
cnt
Cluster Nodes [Ready]
Number of nodes in READY state
cnt
Cluster Nodes [Total]
Total number of nodes in the cluster
cnt
Cluster Pods [Failed]
Number of failed pods in the cluster
cnt
Cluster Pods [Pending]
Number of pending pods in the cluster
cnt
Cluster Pods [Running]
Number of running pods in the cluster
cnt
Cluster Pods [Succeeded]
Number of succeeded pods in the cluster
cnt
Cluster Pods [Unknown]
Number of unknown pods in the cluster
cnt
Instance Status
Cluster status
status
Namespace Pods [Failed]
Number of failed pods in the namespace
cnt
Namespace Pods [Pending]
Number of pending pods in the namespace
cnt
Namespace Pods [Running]
Number of running pods in the namespace
cnt
Namespace Pods [Succeeded]
Number of succeeded pods in the namespace
cnt
Namespace Pods [Unknown]
Number of unknown pods in the namespace
cnt
Namespace GPU Clock Frequency
SM clock frequency in the namespace
MHz
Namespace GPU Memory Usage
Memory utilization in the namespace
%
Namespace GPU Usage
GPU utilization in the namespace
%
Node CPU Size [Allocatable]
Allocatable CPU in the node
cnt
Node CPU Size [Capacity]
CPU capacity in the node
cnt
Node CPU Usage
CPU usage in the node
%
Node CPU Usage [Request]
CPU request ratio in the node
%
Node CPU Used
CPU utilization in the node
status
Node Filesystem Usage
Filesystem usage in the node
%
Node Memory Size [Allocatable]
Allocatable memory in the node
bytes
Node Memory Size [Capacity]
Memory capacity in the node
bytes
Node Memory Usage
Memory utilization in the node
%
Node Memory Usage [Request]
Memory request ratio in the node
%
Node Memory Workingset
Memory working set in the node
bytes
Node Network In Bytes
Node network received bytes
bytes
Node Network Out Bytes
Node network transmitted bytes
bytes
Node Network Total Bytes
Node network total bytes
bytes
Node Pods [Failed]
Number of failed pods in the node
cnt
Node Pods [Pending]
Number of pending pods in the node
cnt
Node Pods [Running]
Number of running pods in the node
cnt
Node Pods [Succeeded]
Number of succeeded pods in the node
cnt
Node Pods [Unknown]
Number of unknown pods in the node
cnt
Pod CPU Usage [Limit]
CPU usage limit ratio in the pod
%
Pod CPU Usage [Request]
CPU request ratio in the pod
%
Pod CPU Usage
CPU usage in the pod
%
Pod GPU Clock Frequency
SM clock frequency in the pod
MHz
Pod GPU Memory Usage
Memory utilization in the pod
%
Pod GPU Usage
GPU utilization in the pod
%
Pod Memory Usage [Limit]
Memory usage limit ratio in the pod
%
Pod Memory Usage [Request]
Memory request ratio in the pod
%
Pod Memory Usage
Memory usage in the pod
bytes
Pod Network In Bytes
Pod network received bytes
bytes
Pod Network Out Bytes
Pod network transmitted bytes
bytes
Pod Network Total Bytes
Pod network total bytes
bytes
Pod Restart Containers
Container restart count in the pod
cnt
Workload Pods [Running]
-
cnt
Table. Kubernetes Engine Monitoring Metrics
4.1.1.2 - ServiceWatch Metrics
Kubernetes Engine sends metrics to ServiceWatch. The metrics provided as basic monitoring are data collected at 1-minute intervals.
Note
For information on how to check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are basic metrics for the Kubernetes Engine namespace.
Metrics with metric names shown in bold below are key metrics selected among the basic metrics provided by Kubernetes Engine.
Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch.
For each metric, the user guide describes which statistical value is meaningful when querying that metric, and the statistical value shown in bold among the meaningful statistics is the key statistic. You can query key metrics through key statistics in the service dashboard.
Metric Name
Detailed Description
Unit
Meaningful Statistics
cluster_up
Cluster up
Count
Sum
Average
Maximum
Minimum
cluster_node_count
Cluster node count
Count
Sum
Average
Maximum
Minimum
cluster_failed_node_count
Cluster failed node count
Count
Sum
Average
Maximum
Minimum
cluster_namespace_phase_count
Cluster namespace phase count
Count
Sum
Average
Maximum
Minimum
cluster_pod_phase_count
Cluster pod phase count
Count
Sum
Average
Maximum
Minimum
node_cpu_allocatable
Node CPU allocatable
-
Sum
Average
Maximum
Minimum
node_cpu_capacity
Node CPU capacity
-
Sum
Average
Maximum
Minimum
node_cpu_usage
Node CPU usage
-
Sum
Average
Maximum
Minimum
node_cpu_utilization
Node CPU utilization
-
Sum
Average
Maximum
Minimum
node_memory_allocatable
Node memory allocatable
Bytes
Sum
Average
Maximum
Minimum
node_memory_capacity
Node memory capacity
Bytes
Sum
Average
Maximum
Minimum
node_memory_usage
Node memory usage
Bytes
Sum
Average
Maximum
Minimum
node_memory_utilization
Node memory utilization
-
Sum
Average
Maximum
Minimum
node_network_rx_bytes
Node network receive bytes
Bytes/Second
Sum
Average
Maximum
Minimum
node_network_tx_bytes
Node network transmit bytes
Bytes/Second
Sum
Average
Maximum
Minimum
node_network_total_bytes
Node network total bytes
Bytes/Second
Sum
Average
Maximum
Minimum
node_number_of_running_pods
Node number of running pods
Count
Sum
Average
Maximum
Minimum
namespace_number_of_running_pods
Namespace number of running pods
Count
Sum
Average
Maximum
Minimum
namespace_deployment_pod_count
Namespace deployment pod count
Count
Sum
Average
Maximum
Minimum
namespace_statefulset_pod_count
Namespace statefulset pod count
Count
Sum
Average
Maximum
Minimum
namespace_daemonset_pod_count
Namespace daemonset pod count
Count
Sum
Average
Maximum
Minimum
namespace_job_active_count
Namespace job active count
Count
Sum
Average
Maximum
Minimum
namespace_cronjob_active_count
Namespace cronjob active count
Count
Sum
Average
Maximum
Minimum
pod_cpu_usage
Pod CPU usage
-
Sum
Average
Maximum
Minimum
pod_memory_usage
Pod memory usage
Bytes
Sum
Average
Maximum
Minimum
pod_network_rx_bytes
Pod network receive bytes
Bytes/Second
Sum
Average
Maximum
Minimum
pod_network_tx_bytes
Pod network transmit bytes
Bytes/Second
Sum
Average
Maximum
Minimum
pod_network_total_bytes
Pod network total bytes
Count
Sum
Average
Maximum
Minimum
container_cpu_usage
Container CPU usage
-
Sum
Average
Maximum
Minimum
container_cpu_limit
Container CPU limit
-
Sum
Average
Maximum
Minimum
container_cpu_utilization
Container CPU utilization
-
Sum
Average
Maximum
Minimum
container_memory_usage
Container memory usage
Bytes
Sum
Average
Maximum
Minimum
container_memory_limit
Container memory limit
Bytes
Sum
Average
Maximum
Minimum
container_memory_utilization
Container memory utilization
-
Sum
Average
Maximum
Minimum
node_gpu_count
Node GPU count
Count
Sum
Average
Maximum
Minimum
gpu_temp
GPU temperature
-
Sum
Average
Maximum
Minimum
gpu_power_usage
GPU power usage
-
Sum
Average
Maximum
Minimum
gpu_util
GPU utilization
Percent
Sum
Average
Maximum
Minimum
gpu_sm_clock
GPU SM clock
-
Sum
Average
Maximum
Minimum
gpu_fb_used
GPU FB usage
Megabytes
Sum
Average
Maximum
Minimum
gpu_tensor_active
GPU tensor active rate
-
Sum
Average
Maximum
Minimum
pod_gpu_util
Pod GPU utilization
Percent
Sum
Average
Maximum
Minimum
pod_gpu_tensor_active
Pod GPU tensor active rate
-
Sum
Average
Maximum
Minimum
Table. Kubernetes Engine Basic Metrics
4.1.2 - How-to guides
Users can enter the required information for the Kubernetes Engine and select detailed options to create a service through the Samsung Cloud Platform Console.
Create Kubernetes Engine
You can create and use the Kubernetes Engine service from the Samsung Cloud Platform Console.
You can create and manage clusters to use the Kubernetes Engine service. After creating a cluster, you can add services needed for operation such as nodes, namespaces, and workloads.
Caution
You can select up to 4 Security Groups in the network settings of Kubernetes Engine.
If you directly add a Security Group to nodes created by Kubernetes Engine on the Virtual Server service page, they may be automatically detached because they are not managed by Kubernetes Engine.
For nodes, the Security Group must be added/managed in the network settings of the Kubernetes Engine service.
Managed Security Group is automatically managed in Kubernetes Engine.
Do not use the Managed Security Group for your own purposes; if you delete it or add/delete rules, it will automatically be restored.
Creating a cluster
You can create and use a Kubernetes Engine cluster service from the Samsung Cloud Platform Console.
To create a Kubernetes Engine cluster, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Create Cluster button on the Service Home page. You will be taken to the Create Cluster page.
On the Create Cluster page, enter the information required for service creation and select detailed options.
In the Service Information input area, enter or select the required information.
Category
Required
Detailed description
Cluster Name
Required
Cluster Name
Must start with an English letter and may contain only English letters, numbers, and hyphens (-); 3-30 characters
Control Plane Settings > Kubernetes Version
Required
Select Kubernetes Version
Control Plane Settings > Private Endpoint Access Control
Select
Select whether to use Private Endpoint Access Control
After selecting Use, click Add to select resources that are allowed to access the private endpoint
Only resources in the same Account and same region can be registered
Regardless of the Use setting, the nodes of the cluster can access the private endpoint
Control Plane Settings > Public Endpoint Access/Access Control
Select
Select whether to use Public Endpoint Access/Access Control
After selecting Use, enter the Allowed Access IP Range in CIDR format (e.g., 192.168.99.0/24)
Set the access control IP range so that external users can access the Kubernetes API server endpoint
If external access is not needed, you can disable it to reduce security threats
ServiceWatch log collection
Select
Set whether to enable log collection so that logs for the cluster can be viewed in ServiceWatch
If Use is selected, 5 GB of log storage is provided free of charge for all services within the Account; usage beyond 5 GB is charged based on the stored amount
If you need to check cluster logs, it is recommended to enable the ServiceWatch log collection feature
Cloud Monitoring log collection
Select
Set whether to enable log collection so that logs for the cluster can be viewed in Cloud Monitoring
If Use is selected, 1 GB of log storage is provided free of charge for all services within the Account, and logs exceeding 1 GB are deleted sequentially
Network Settings
Required
Network connection settings for node pool
VPC Name: Select a pre-created VPC
Subnet Name: Choose a standard Subnet to use among the subnets of the selected VPC
Security Group: Click the Select button, then choose Security Groups in the Select Security Group popup window
Up to 4 Security Groups can be selected
File Storage Settings
Required
Select the file storage volume to be used in the cluster
Default Volume (NFS): Click the Search button and then select the file storage in the File Storage Selection popup. The default Volume file storage can only use the NFS format
Table. Kubernetes Engine service information input items
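The cluster-name rule above can be expressed as a regular expression. Below is a minimal sketch of the stated rule; it is an illustration, not the console's actual validation code.

```python
import re

# Start with an English letter; letters, digits, and hyphens only; 3-30 chars.
# The pattern illustrates the documented rule, not the console's own check.
CLUSTER_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9-]{2,29}$")

def is_valid_cluster_name(name: str) -> bool:
    return CLUSTER_NAME.fullmatch(name) is not None

print(is_valid_cluster_name("my-cluster-01"))  # True
print(is_valid_cluster_name("1cluster"))       # False: must start with a letter
print(is_valid_cluster_name("ab"))             # False: shorter than 3 characters
```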
In the Additional Information input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Kubernetes Engine Additional Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, and click the Create button.
When creation is complete, check the created resources on the Cluster List page.
Check cluster details
The Kubernetes Engine service allows you to view and edit the full resource list and detailed information. The Cluster Details page consists of Details, Node Pool, Tags, Work History tabs.
To view detailed cluster information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
Click the resource (cluster) you want to view detailed information for on the Cluster List page. You will be taken to the Cluster Details page.
The Cluster Details page displays the cluster’s status information and detailed information, and consists of Details, Node Pool, Tags, Work History tabs.
Category
Detailed description
Cluster Status
Kubernetes Engine cluster status
Creating: Creation in progress
Running: Created / Running
Updating: Version upgrade in progress
Deleting: Deletion in progress
Error: Error occurred
Service Termination
Button to terminate a Kubernetes Engine cluster
To terminate the Kubernetes Engine service, you must delete all node pools added to the cluster
If the service is terminated, running services may stop immediately, so terminate only after considering the impact of the service interruption
Table. Cluster status information and additional functions
Detailed Information
You can view detailed information of the selected resource on the Cluster List page, and modify the information if necessary.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In the Kubernetes Engine service, it refers to the cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation DateTime
DateTime when the service was created
Modifier
User who modified the service information
Modification DateTime
DateTime when service information was modified
Cluster Name
Cluster Name
LLM Endpoint
LLM Endpoint information
Control Plane Settings
Check the assigned Kubernetes control plane version and the access scope
If there is a Kubernetes version of the control plane that can be upgraded, click the Edit icon to perform a Cluster Version Upgrade. See Cluster Version Upgrade for details.
Click the Admin Kubeconfig Download/User Kubeconfig Download button for the private endpoint address to download the kubeconfig settings for each role as a yaml document.
Click the Edit icon of the private endpoint access control to modify usage and allowed resources.
Click the Admin Kubeconfig Download/User Kubeconfig Download button for the public endpoint address to download the kubeconfig settings for each role as a yaml document.
Click the Edit icon of the public endpoint access/control to modify usage and allowed IP range.
Click the Edit icon of ServiceWatch log collection to change usage. When log collection is enabled, view the cluster control plane’s Audit/Event logs in ServiceWatch > Log Group.
Click the Edit icon of Cloud Monitoring log collection to change usage. When log collection is enabled, view the cluster control plane’s Audit/Event logs in Cloud Monitoring > Log Analysis.
Network Settings
View VPC, Subnet, and Security Group information set when creating a Kubernetes Engine cluster
Click each setting to view detailed information on the detail page
If a Security Group change is needed, click the Edit icon to configure
Managed Security Group is an item provided by the system and is generated automatically
File Storage Settings
If you click the volume name, you can view detailed information on the storage detail page
Table. Cluster detailed information tab items
Reference
The version of Kubernetes Engine is denoted in the order [major].[minor].[patch], and you can upgrade only one minor version at a time.
Example: Version 1.11.x > 1.13.x (Not allowed) / Version 1.11.x > 1.12.x (Allowed)
If you are using a Kubernetes version that has reached end of support or a version that is scheduled to reach end of support, a red exclamation mark will appear to the right of the version. If this icon is displayed, we recommend upgrading the Kubernetes version.
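The one-minor-version-at-a-time rule can be sketched as a small check (illustrative only; versions follow the [major].[minor].[patch] notation above):

```python
def can_upgrade(current: str, target: str) -> bool:
    """Allow an upgrade only to the next minor version of the same major."""
    cur_major, cur_minor = (int(x) for x in current.split(".")[:2])
    tgt_major, tgt_minor = (int(x) for x in target.split(".")[:2])
    return cur_major == tgt_major and tgt_minor == cur_minor + 1

print(can_upgrade("1.12.3", "1.13.0"))  # True
print(can_upgrade("1.11.5", "1.13.0"))  # False: skips a minor version
```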
Node Pool
You can view cluster node pool information and add, modify, or delete. For detailed information on using node pools, refer to Managing Nodes.
Check the list of node pools created in the current cluster
Click the node pool name to go to the detail page and view detailed information
More menu
Provides node pool management features
Node information: Displays node name, version, and status information
Node pool upgrade: Node pool version upgrade
Node pool deletion: Delete node pool
Table. Node Pool Tab Items
Reference
If a red exclamation mark icon appears on the version of the node pool information, the server OS of that node pool is not supported in newer versions of Kubernetes. To ensure stable service, the node pool server OS must be upgraded.
To upgrade the node pool version, delete the existing node pool and then create a new node pool with a higher server OS version.
Tag
You can view the tag information of the selected resource on the Cluster List page, and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can check the Key and Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Cluster Tag Tab Items
Work History
You can view the operation history of the selected resource on the Cluster List page.
Category
Detailed description
Work History List
Resource Change History
You can check the work details, work date and time, resource type, resource name, work result, and worker information
Clicking a resource in the Work History list opens the Work History Details popup
Table. Cluster Work History Tab Items
Managing Cluster Resources
We provide cluster version upgrade, kubeconfig download, and control plane logging modification features for cluster resource management.
Caution
To use Kubernetes Engine, you need at least read permissions for VPC, VPC Subnet, Security Group, FileStorage, and Virtual Server. Even without create/delete permissions, Security Group and Virtual Server are created/deleted by Kubernetes Engine for lifecycle management purposes, and the creator/modifier is indicated as System.
Cluster Version Upgrade
If there is a version that can be upgraded from the cluster’s Kubernetes version, you can perform the upgrade on the Cluster Details page.
Reference
Before the cluster upgrade, check the following items.
Check if the cluster status is Running
Check that the status of all node pools in the cluster is Running or Deleting
Check that all node pool versions in the cluster are the same version as the cluster
Check if automatic scaling/downsizing of all node pools in the cluster and node auto-recovery feature are disabled
After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
You can upgrade only one minor version at a time.
Example: version 1.12.x > 1.13.x (possible) / version 1.11.x > 1.13.x (not possible)
After an upgrade, you cannot perform a downgrade or rollback, so to use the previous version again you must create a new cluster.
Caution
Since user systems using an end-of-support Kubernetes version may become vulnerable, upgrade the control plane and node pool versions directly in the Samsung Cloud Platform Console.
No separate cost will be incurred due to the upgrade.
Please perform compatibility testing for the upgrade version in advance to ensure stable system operation for users.
Cluster version upgrade preparation
There is no need to delete and recreate API objects when upgrading the cluster version. For the transitioned API, all existing API objects can be read and updated using the new API version. However, due to deprecated APIs in older Kubernetes versions, you may be unable to read or modify existing objects or create new ones. Therefore, to ensure system stability, it is recommended to migrate clients and manifests before the upgrade.
Migrate the client and manifest using the following method.
Download and install the new version of the client (e.g., kubectl), and modify the YAML manifests to refer to the new API.
Since the deprecated API differs for each cluster version, the scope of application and system impact may also differ. For detailed explanation, refer to the Kubernetes official documentation > Deprecation Guide.
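As an example of such a manifest migration, a Deployment written against an API group removed in older Kubernetes releases is updated to the current stable group. The object names below are illustrative; check the Deprecation Guide for the APIs that affect your cluster version.

```yaml
# Before (removed in Kubernetes 1.16):
#   apiVersion: extensions/v1beta1
#   kind: Deployment
#
# After: the same object served from the stable apps group.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # illustrative name
spec:
  replicas: 2
  selector:                    # required field in apps/v1
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example-registry/example-app:1.0   # illustrative image
```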
Upgrade cluster and node pool version
To update the cluster and node pool, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
Click the resource (cluster) to upgrade the version on the Cluster List page. You will be taken to the Cluster Details page.
Click the edit icon of Kubernetes version on the Cluster Details page. The Cluster version upgrade popup opens.
Select the Kubernetes version to upgrade, and click the Confirm button.
It may take a few minutes until the cluster upgrade is complete.
During the upgrade, the cluster status is shown as Updating, and when the upgrade is complete, it is shown as Running.
When the upgrade is complete, select the Node Pool tab. Go to the Node Pool page.
Click the More button of the node pool item and click Node Pool Upgrade. The Node Pool Version Upgrade popup window opens.
After checking the message in the Node Pool Version Upgrade popup window, click the Confirm button.
It may take a few minutes until the node pool upgrade is completed.
During the upgrade, the node pool status is shown as Updating, and when the upgrade is complete, it is shown as Running.
kubeconfig download
You can download the admin/user kubeconfig settings of the cluster’s public and private endpoints as a yaml document.
To download the kubeconfig settings of the cluster, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
Click the resource (cluster) to download the kubeconfig on the Cluster List page. You will be taken to the Cluster Details page.
On the Cluster Details page, click the Admin Kubeconfig Download or User Kubeconfig Download button for the desired endpoint.
You can download the kubeconfig file in yaml format for each permission.
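Once downloaded, the kubeconfig file can be used with kubectl. A command sketch follows; the file name is an assumption, so use whatever name the console saved the document under.

```shell
# Point kubectl at the downloaded kubeconfig (file name is an example).
export KUBECONFIG=./user-kubeconfig.yaml

# Verify that the endpoint in the file is reachable.
kubectl get nodes

# Inspect the cluster, user, and context the file defines.
kubectl config view --minify
```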
Modify private endpoint access control
You can change the private endpoint access control settings of the cluster.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, click the resource (cluster) for which you want to modify the private endpoint access control. You will be taken to the Cluster Details page.
Click the Edit icon of Private Endpoint Access Control on the Cluster Details page. The Edit Private Endpoint Access Control popup opens.
In the Edit Private Endpoint Access Control popup, set whether to use Private Endpoint Access Control, add the allowed access resources, and then click the Confirm button.
Modify public endpoint access/access control
You can change the public endpoint access control settings of the cluster.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, click the resource (cluster) for which you want to modify the public endpoint access control. You will be taken to the Cluster Details page.
Click the Edit icon of Public Endpoint Access/Access Control on the Cluster Details page. The Edit Public Endpoint Access/Access Control popup opens.
In the Edit Public Endpoint Access/Access Control popup, set whether to use public endpoint access control, add the allowed IP range, and then click the Confirm button.
Modify control plane log collection settings
You can change the log collection settings of the cluster’s control plane. Detailed logs of the cluster can be viewed in the ServiceWatch service or the Cloud Monitoring service.
Reference
You can still check cluster logs if you set up Cloud Monitoring log collection.
However, the Cloud Monitoring log collection feature is scheduled to be discontinued, so we recommend using ServiceWatch log collection.
To change the control plane log collection settings of the cluster, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. Go to the Cluster List page.
Click the resource (cluster) to modify control plane logging on the Cluster List page. You will be taken to the Cluster Details page.
On the Cluster Details page, click the Edit icon of ServiceWatch Log Collection. The ServiceWatch Log Collection popup opens.
The Cloud Monitoring log collection feature can be set in the same way.
In the ServiceWatch Log Collection popup window, set whether to use ServiceWatch log collection, and then click the Confirm button.
Reference
When log collection is enabled, you can view the Audit/Event logs of the cluster control plane in each service.
Modify Security Group
Caution
You can select up to 4 Security Groups in the network settings of Kubernetes Engine.
If you directly add a Security Group on the Virtual Server service page for nodes created by Kubernetes Engine, it may be automatically released because it is not managed by Kubernetes Engine.
For nodes, the Security Group must be added/managed in the network settings of the Kubernetes Engine service.
Managed Security Group is automatically managed in Kubernetes Engine.
Do not use the Managed Security Group for your own purposes; if you delete it or add/delete rules, it will automatically be restored.
Follow the steps below to modify the cluster’s Security Group.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
Click the resource (cluster) to modify the Security Group on the Cluster List page. You will be taken to the Cluster Details page.
Click the Edit icon of Security Group on the Cluster Details page. The Edit Security Group popup window opens.
After selecting or deselecting the Security Group to modify, click the Confirm button.
Terminate Cluster
Caution
If you terminate the cluster, all connected node pools will be deleted, and all data in all pods within the cluster will be permanently deleted.
To terminate the cluster, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, click the resource (cluster) you want to terminate. You will be taken to the Cluster Details page.
Click the Service Termination button on the Cluster Details page.
After reviewing the content in the Service Termination popup window, click the Confirm button.
4.1.2.1 - Node Management
A node is a collection of machines that run containerized applications. Every cluster must have at least one worker node to be able to deploy applications. Nodes can be used by defining node pools. Nodes belonging to a node pool must have the same server type, size, and OS image, and by creating multiple node pools, a flexible deployment strategy can be established.
After creating a Kubernetes Engine cluster, add a node pool and modify or delete it as needed.
Caution
It is recommended not to use the OS firewall on Kubernetes Engine nodes that use Calico.
The firewall settings of Samsung Cloud Platform are set to Inactive by default.
As recommended in the reference link below, in environments using Calico, it is recommended to set the firewall to an unused state.
If the node is designated as a Backup service target, node deletion is not possible, so the function below cannot be used.
Node pool reduction (including auto-scaling)
Node Pool Upgrade
Node pool auto recovery
Delete node pool
Add node pool
A node refers to a machine that runs containerized applications, and at least one node is required to deploy applications in a Kubernetes cluster. After the creation of a Kubernetes Engine cluster is complete, add a node pool on the details page.
You can define and use node pools, which are sets of nodes, in Kubernetes Engine. Nodes belonging to a node pool use the same server type, size, and OS image, so users can establish flexible deployment strategies by using multiple node pools.
Reference
In the Virtual Server menu, you can create a node pool using the user’s Custom Image. To create a node pool using a Custom Image, follow these steps.
Create a Virtual Server that includes the Kubernetes Engine image of Samsung Cloud Platform.
Use the Create Image function of that Virtual Server to create an image.
Select the registered Custom Image to create a node pool.
Click the Add button to enter the taint effect, key, and value
For the configuration method, see Node Pool Taint Settings
Advanced Settings
Select
Settings for detailed areas such as pods and logs for worker nodes
Click Use to select whether to apply advanced settings to the node pool being created
For the configuration method, see Advanced Node Pool Settings
Table. Kubernetes Engine node pool service information input items
Check the detailed information and estimated billing amount in the Summary panel, and click the Create button.
When creation is complete, check the created resources on the Cluster Details > Node Pool tab > Node Pool List page.
If the notification popup opens, click the Confirm button.
Edit Node Pool
If needed, modify the number of nodes in the node pool on the Kubernetes Engine details page.
Reference
If you modify the number of nodes, nodes are automatically added or removed, which terminates the containers running on removed nodes. Because containers move to other nodes, the running service may be interrupted.
To modify the number of nodes, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster whose node count you want to modify. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click the name of the node pool you want to edit. You will be taken to the Node Pool Details page.
On the Node Pool Details page, click the Edit icon to the right of Node Pool Information. The Edit Node Pool popup window opens.
In the Edit Node Pool popup window, modify the node pool information and click the Confirm button.
Upgrade Node Pool
If the Kubernetes version of the control plane and the version of the node pool are different, you can upgrade the node pool to synchronize the versions.
Caution
After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
When performing a node pool upgrade, a rolling update is carried out on the nodes belonging to the node pool. At this time, a momentary service interruption may occur, but this is a normal phenomenon due to the rolling update and will automatically normalize after a certain period.
The server OS version may differ depending on the Kubernetes version of the node pool.
To upgrade the node pool, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster on which you want to perform a node pool version upgrade. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click More > Node Pool Upgrade at the far right of the node pool row. The Node Pool Version Upgrade popup will open.
You can only upgrade the node pool when the node’s status is Running.
After checking the information in the Node Pool Version Upgrade popup window, click the Confirm button.
Node pool auto scaling/downsizing
Node pool auto scaling is a feature that automatically adjusts the number of node pools by adding new nodes to a specified node pool or removing existing nodes according to workload demands. This feature operates based on the node pool.
When node pool auto scaling/downsizing, it is adjusted based on the resource requests of pods running on the node pool’s nodes rather than actual resource usage, and it periodically checks the status of pods and nodes and executes auto scaling/downsizing tasks.
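The request-based decision described above can be sketched in a few lines. This is an illustration of the documented behavior (decisions use pod resource requests, not actual usage); the function and threshold below are simplifications, not the autoscaler's real implementation.

```python
def is_scale_down_candidate(pod_cpu_requests_m, node_allocatable_cpu_m,
                            threshold=0.5):
    """A node is a scale-down candidate when the sum of pod CPU requests
    is below the threshold share of its allocatable CPU (memory is
    checked the same way). Sketch of the documented condition only."""
    return sum(pod_cpu_requests_m) < threshold * node_allocatable_cpu_m

# A 4-core node (4000 millicores) running pods requesting 1500m in total:
print(is_scale_down_candidate([500, 500, 500], 4000))  # True (37.5% < 50%)
print(is_scale_down_candidate([1500, 1200], 4000))     # False (67.5%)
```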
To set up the node pool auto scaling/downsizing feature, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster for which you want to use the node auto scaling/downsizing feature. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click the name of the node pool you want to edit. You will be taken to the Node Pool Details page.
On the Node Pool Details page, click the Edit icon to the right of Node Pool Information. The Edit Node Pool popup window opens.
In the Edit Node Pool popup window, set Node Pool Auto Scaling to Enable.
After entering the minimum and maximum number of nodes, click the Confirm button.
Reference
Node pool auto-scaling settings can also be configured on the cluster node pool creation page.
Node pool expansion conditions
When a pod fails to run on the cluster due to insufficient resources (a Pending pod occurs)
Node pool reduction condition (when all satisfied)
If the sum of resource requests (CPU/Memory) of all pods running on a node is less than 50% of the node’s allocatable resources
If all pods running on the node can be run on another node (there must be no pods with PDB restrictions, etc.)
While using node pool auto scaling, add the following annotation to a node to prevent it from being deleted during scale-down.
Node pool auto scaling works only when the NotReady nodes are 45% or less of all nodes in the cluster and number no more than 3.
If nodes that were not created by the Kubernetes Engine service node pools are attached directly to the cluster, using this feature may cause a malfunction.
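In the upstream Kubernetes cluster-autoscaler, the annotation that protects a node from scale-down is the scale-down-disabled annotation. The sketch below uses that upstream key and a placeholder node name; confirm the exact key for Kubernetes Engine in the service documentation.

```shell
# Mark a node so the autoscaler will not remove it during scale-down.
# "my-node-1" is a placeholder node name.
kubectl annotate node my-node-1 \
  "cluster-autoscaler.kubernetes.io/scale-down-disabled=true"
```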
Auto-recover node pool
Node auto-recovery is a feature that, when an abnormal node is detected in the cluster, automatically deletes it and creates a new node to restore all node counts in the node pool to a normal state. This feature operates based on the node pool.
Caution
Node auto-recovery deletes the existing node and creates a new node when communication between K8S Control Planes fails due to node (Virtual Server) issues, stopped state, network issues, etc., according to the node auto-recovery conditions, so caution is required when using it.
When creating a node pool, it is restored according to the initially set conditions, and custom settings made after node creation are not restored.
If nodes that were not created by the Kubernetes Engine service node pools are attached directly to the cluster, using this feature may cause a malfunction.
To set up the node auto-recovery feature, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster for which you want to use the node auto-recovery feature. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click the name of the node pool you want to edit. You will be taken to the Node Pool Details page.
On the Node Pool Details page, click the Edit icon to the right of Node Pool Information. The Edit Node Pool popup window opens.
In the Edit Node Pool popup, set Node Auto Recovery to Enable, then click the Confirm button.
Reference
Node auto-recovery settings can also be configured on the cluster node pool creation page.
Nodes targeted for auto-recovery
A node that reports NotReady status in consecutive checks for a certain time threshold (about 10 minutes)
A node that does not report any status for a certain time threshold (about 10 minutes)
Nodes excluded from auto-recovery
A node that remains in Creating state and does not become Running when initially created
When five or more abnormal nodes occur simultaneously in the same node pool
Setting Node Pool Labels
Node pool labels are a feature for selectively scheduling workloads onto nodes.
Caution
Node pool labels are not applied to existing nodes; they are applied only to newly created nodes.
To apply a label to an existing node, set it directly with kubectl.
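Applying a label to an existing node directly with kubectl looks like this (the node name and label key/value are placeholders):

```shell
# Add a label to an already-running node; pods can then select it
# with a nodeSelector or node affinity rule.
kubectl label nodes my-node-1 workload-type=batch

# Remove the label again (the trailing '-' deletes the key).
kubectl label nodes my-node-1 workload-type-
```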
To set the node pool label, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster for which you want to set node pool labels. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click the name of the node pool you want to edit. You will be taken to the Node Pool Details page.
On the Node Pool Details page, click the Edit icon of the label. The Edit Label popup window opens.
In the Edit Label popup window, click the Add button to add as many labels as needed.
Enter the label information and click the Confirm button.
Setting Node Pool Taint
Node pool taint is a feature to prevent workloads from being scheduled onto nodes.
Caution
If you set a taint on all node pools, pods required for normal cluster operation may not run.
Node pool taints are not applied to existing nodes; they are applied only to newly created nodes.
To apply a taint to an existing node, set it directly with kubectl.
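Applying a taint to an existing node directly with kubectl looks like this (the node name, key, value, and effect are placeholders):

```shell
# Taint the node so only pods with a matching toleration are scheduled on it.
kubectl taint nodes my-node-1 dedicated=gpu:NoSchedule

# Remove the taint again (the trailing '-' deletes it).
kubectl taint nodes my-node-1 dedicated=gpu:NoSchedule-
```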
To set the node pool taint, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster for which you want to set node pool taints. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click the name of the node pool you want to edit. You will be taken to the Node Pool Details page.
On the Node Pool Details page, click the Edit icon of the taint. The Edit Taint popup window opens.
In the Edit Taint popup window, click the Add button to add as many taints as needed.
Enter the taint information and click the Confirm button.
Advanced Node Pool Settings
Node pool advanced settings is a feature to apply detailed settings such as the number of pods, PID, logs, image GC, etc. within a worker node.
Caution
Advanced settings cannot be modified after the node pool is created. If an incorrect value is entered, the node may not operate normally.
Reference
Each setting corresponds to the kubelet configuration as follows.
Container log maximum size MB: containerLogMaxSize
Container log maximum file count: containerLogMaxFiles
Pod PID limit: podPidsLimit
Unsafe Sysctl allowed: allowedUnsafeSysctls
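In the open-source kubelet, these fields appear in the KubeletConfiguration file as follows. The values are illustrative; the console applies them for you, so this is only for reference.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"      # Container log maximum size MB
containerLogMaxFiles: 5          # Container log maximum file count
podPidsLimit: 4096               # Pod PID limit
allowedUnsafeSysctls:            # Unsafe Sysctl allowed
  - "net.core.somaxconn"
```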
To perform advanced settings for the node pool, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster for which you want to configure node pool advanced settings. Navigate to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click Create Node Pool. You will be taken to the Create Node Pool page.
On the Create Node Pool page, set Advanced Settings to Enabled.
After enabling it, enter the required information in the items that appear.
On the Summary tab, confirm that the required information has been entered correctly, then click the Create button.
Delete node pool
If necessary, delete the node pool from the Kubernetes Engine details page.
To delete the node pool, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
On the Cluster List page, select the cluster that contains the node pool you want to delete. You will be taken to the Cluster Details page.
On the Cluster Details page, select the Node Pool tab, then click the More button at the far right of the node pool row. In the More menu, click Delete Node Pool.
In the Delete Node Pool popup window, select the checkbox, enter the name of the node pool to delete, then click the Confirm button.
The Confirm button is enabled only after you select the checkbox in the node pool deletion confirmation message.
Check node details
A node is a worker machine used in a Kubernetes cluster, containing the essential services required to run Pods. Each node is managed by the master components, and depending on the cluster configuration, virtual machines or physical machines can be used as nodes.
After creating the cluster, you can view information such as metadata and object information of the added nodes, and edit the resource file with a YAML editor.
To view detailed information of a node, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Node menu on the Service Home page. Navigate to the Node List page.
On the Node List page, select the cluster you want to view from the gear button at the top left, then click the Confirm button.
Click the node you want to view detailed information for. You will be taken to the Node Details page.
Category
Detailed description
Status Display
Displays the current status of the node
Detailed Information
Check the node’s Account information, metadata, and object information
YAML
Node resources can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply changes
When editing content, click the Diff button to view the changes
Event
Check events that occurred on the node
Pod
Check node’s pod information
A pod is the smallest compute unit that can be created, managed, and deployed in Kubernetes Engine
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check metadata information such as node labels, annotations, taints
Object Information
Displays the object information of the created node, such as internal IP, machine ID, capacity, resources, etc.
If GPU resources are present, check the number of GPUs in the Capacity > nvidia.com/gpu column
Table. Node Detailed Information Items
4.1.2.2 - Manage Namespaces
A namespace is a logical separation unit within a Kubernetes cluster, and it is used to specify access permissions or resource usage limits per namespace.
Create Namespace
To create a namespace, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Namespace menu on the Service Home page. Navigate to the Namespace List page.
On the Namespace List page, select the cluster where you want to create a namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
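The object information is typically entered as a Kubernetes manifest. A minimal namespace manifest could look like this; the name and label below are examples only:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace   # example name
  labels:
    team: demo              # example label; adjust to your own conventions
```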
You can check the namespace status and detailed information on the namespace detail page.
To view detailed namespace information, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Namespace menu on the Service Home page. Navigate to the Namespace List page.
On the Namespace List page, select the cluster that the namespace requiring detailed information belongs to from the gear button at the top left, then click Confirm.
Click on the item you want to view detailed information for on the Namespace List page. You will be taken to the Namespace Details page.
Category
Detailed description
Status Display
Displays the current status of the namespace
Namespace Deletion
Delete namespace
A namespace containing workloads cannot be deleted. To delete a namespace, all associated workloads must be deleted.
Detailed Information
Check the Account information and metadata information of the namespace
YAML
Namespaces can be edited in the YAML editor
Click the Edit button, modify the namespace, then click the Save button to apply changes
When editing content, click the Diff button to view the changes
Event
Check events that occurred within the namespace
Pod
Check pod information of the namespace
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the namespace
Table. Namespace detailed information items
Delete namespace
To delete a namespace, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click the Namespace menu on the Service Home page. You will be taken to the Namespace List page.
On the Namespace List page, select the cluster that the namespace you want to delete belongs to from the gear button at the top left, then click the Confirm button.
On the Namespace List page, select the namespace you want to delete. You will be taken to the Namespace Details page.
Click Delete Namespace on the Namespace Details page.
When the alert confirmation window appears, click the Confirm button.
Warning
After selecting the item you want to delete on the namespace list page, click Delete to delete the selected namespace.
A namespace that contains workloads cannot be deleted. To delete the namespace, delete all associated workloads.
4.1.2.3 - Manage Workload
A workload is an application that runs on Kubernetes Engine. You can create a namespace and then add or delete workloads. Workloads are created and managed per deployment, pod, stateful set, daemon set, job, and cron job.
Reference
Deployments, Pods, StatefulSets, DaemonSets, Jobs, and CronJobs services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.
To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace to change, then click the Confirm button. You can view the services created in the selected cluster/namespace.
Managing Deployments
A Deployment refers to a resource that provides updates for Pods and ReplicaSets. In workloads, you can create a Deployment and view detailed information or delete it.
Create Deployment
To create a deployment, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Deployment under the Workload menu on the Service Home page. You will be taken to the Deployment List page.
On the Deployment List page, select the cluster and namespace from the top-left gear button, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
The following is an example .yaml file showing the required fields and object spec for creating a deployment. (application/deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Code block. Required fields and object Spec for deployment creation
To view the deployment details, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
From the Service Home page, click Deployment under the Workload menu. Navigate to the Deployment List page.
On the Deployment List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
Select the item you want to view detailed information for on the Deployment List page. You will be taken to the Deployment Details page.
If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
Click each tab to view service information.
Category
Detailed description
Delete Deployment
Delete deployment
Detailed Information
Can check detailed information of deployment
YAML
Deployment resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
When editing content, click the Diff button to view the changed content
Event
Check events that occurred within the deployment
Pod
Check the pod information of the deployment
A pod is the smallest computing unit that can be created, managed, and deployed in Kubernetes Engine
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the deployment
Object Information
Check the object information of the deployment
Table. Deployment detailed information items
Delete Deployment
To delete the deployment, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Deployment under the Workload menu. Navigate to the Deployment List page.
On the Deployment List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Deployment List page, select the item you want to delete. You will be taken to the Deployment Details page.
Click Delete Deployment on the Deployment Details page.
When the alert confirmation window appears, click the Confirm button.
Caution
On the deployment list page, after selecting the item you want to delete, you can delete the selected deployment by clicking Delete.
Managing Pods
A pod is the smallest computing unit that can be created, managed, and deployed in Kubernetes, referring to a group of one or more containers. In a workload, you can create a pod and view its detailed information or delete it.
Create a pod
To create a pod, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Pod under the Workload menu. Navigate to the Pod List page.
On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
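A minimal pod manifest that could be entered as the object definition looks like this; the names and image are examples only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # example name
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2    # example image
    ports:
    - containerPort: 80
```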
To check the detailed pod information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Pod under the Workload menu on the Service Home page. You will be taken to the Pod List page.
On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
Select the item you want to view detailed information for on the Pod List page. You will be taken to the Pod Details page.
If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
Click each tab to view service information.
Category
Detailed description
Status Display
Displays the current status of the pod
Delete Pod
Delete pod
Detailed Information
Can view detailed information of the pod
YAML
Pod resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply changes
When editing content, click the Diff button to view the changed content
Event
Check events that occurred within the pod
Log
When you select a container, you can view the logs of that container in the pod
Account information
Check basic information about the Account such as Account name, location, creation date and time
Metadata Information
Check the pod’s metadata information
Object Information
Check the pod’s object information
Init Container Information
Check the init container information of the pod
Container Information
Check the pod’s container information
Table. Pod detailed information items
Delete Pod
To delete a pod, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Pod under the Workload menu on the Service Home page. Navigate to the Pod List page.
On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Pod List page, select the item you want to delete. You will be taken to the Pod Details page.
Click Delete Pod on the Pod Details page.
When the notification confirmation window appears, click the Confirm button.
Caution
After selecting the item you want to delete on the pod list page, you can delete the selected pod by clicking Delete.
Managing StatefulSet
StatefulSet refers to a workload API object used to manage the stateful aspects of an application. In a workload, you can create a StatefulSet and view detailed information or delete it.
Creating a StatefulSet
To create a StatefulSet, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click StatefulSet under the Workload menu on the Service Home page. You will be taken to the StatefulSet List page.
On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
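A minimal StatefulSet manifest that could be entered as the object definition looks like this; the names and image are examples, and the serviceName assumes a matching headless service exists:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                 # example name
spec:
  serviceName: nginx        # headless service the StatefulSet uses (example)
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2 # example image
        ports:
        - containerPort: 80
```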
To view the detailed information of the StatefulSet, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click StatefulSet under the Workload menu. Navigate to the StatefulSet List page.
On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
Select the item you want to view detailed information for on the StatefulSet List page. You will be taken to the StatefulSet Details page.
If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
Click each tab to view the service information.
Category
Detailed description
Delete StatefulSet
Delete the StatefulSet
Detailed Information
Can check detailed information of StatefulSet
YAML
StatefulSet resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
When editing content, click the Diff button to view the changed content
Event
Check events that occurred within the StatefulSet
Pod
Check the pod information of the StatefulSet
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the StatefulSet
Object Information
Check the object information of the StatefulSet
Table. StatefulSet detailed information items
Delete StatefulSet
To delete a StatefulSet, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
On the Service Home page, click StatefulSet under the Workload menu. Navigate to the StatefulSet List page.
On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
Select the item you want to delete on the StatefulSet List page. Go to the StatefulSet Details page.
Click Delete StatefulSet on the StatefulSet Details page.
If the notification confirmation window appears, click the Confirm button.
Caution
On the StatefulSet list page, after selecting the item you want to delete, you can delete the selected StatefulSet by clicking Delete.
Managing DaemonSets
DaemonSet refers to a resource that ensures that a copy of a pod runs on all nodes or some nodes. In workloads, you can create a DaemonSet and view detailed information or delete it.
Creating a DaemonSet
To create a DaemonSet, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click DaemonSet under the Workload menu. You will be taken to the DaemonSet List page.
On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
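A minimal DaemonSet manifest that could be entered as the object definition looks like this; the names and image are examples only:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-ds          # example name
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.16   # example image
```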
To view the detailed information of the DaemonSet, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click DaemonSet under the Workload menu on the Service Home page. You will be taken to the DaemonSet List page.
On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the DaemonSet List page, select the item you want to view detailed information for. You will be taken to the DaemonSet Details page.
If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
Click each tab to view the service information.
Category
Detailed description
DaemonSet Delete
Delete DaemonSet
Detailed Information
Can view detailed information of DaemonSet
YAML
DaemonSet resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply changes
When editing content, click the Diff button to view the changed content
Event
Check events that occurred within the DaemonSet
Pod
Check the pod information of the DaemonSet
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the DaemonSet
Object Information
Check the object information of the DaemonSet
Table. DaemonSet detailed information items
Delete DaemonSet
To delete a DaemonSet, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click DaemonSet under the Workload menu on the Service Home page. Navigate to the DaemonSet List page.
On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the DaemonSet List page, select the item you want to delete. You will be taken to the DaemonSet Details page.
Click Delete DaemonSet on the DaemonSet Details page.
If the Alert confirmation window appears, click the Confirm button.
Warning
On the DaemonSet list page, after selecting the item you want to delete, click Delete to delete the selected DaemonSet.
Job Management
A job refers to a resource that creates one or more pods and continues to run pods until the specified number of pods have successfully terminated. In a workload, you can create a job and view detailed information or delete it.
Create Job
To create a job, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Job under the Workload menu on the Service Home page. You will be taken to the Job List page.
On the Job List page, select the cluster and namespace from the top left gear button, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
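A minimal job manifest that could be entered as the object definition looks like this; the names, image, and command are examples only:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                  # example name
spec:
  completions: 1            # number of pods that must terminate successfully
  backoffLimit: 4           # retries before the job is marked as failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl:5.34    # example image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```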
To view detailed job information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Job under the Workload menu on the Service Home page. Navigate to the Job List page.
On the Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Job List page, select the item for which you want to view detailed information. You will be taken to the Job Details page.
Selecting Show system objects at the top of the list will display all items except the Kubernetes object entries.
Click each tab to view service information.
Category
Detailed description
Job Delete
Delete Job
Detailed Information
Can view detailed information of the job
YAML
Job resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply changes
When editing content, click the Diff button to view the changes
Event
Check events that occurred within the job
Pod
Check the pod information of the job
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the job’s metadata information
Object Information
Check the job’s object information
Table. Job Detailed Information Items
Delete Job
To delete a job, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Job under the Workload menu on the Service Home page. You will be taken to the Job List page.
On the Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Job List page, select the item you want to delete. You will be taken to the Job Details page.
Click Delete Job on the Job Details page.
When the alert confirmation window appears, click the Confirm button.
Caution
On the job list page, after selecting the item you want to delete, you can delete the selected job by clicking Delete.
Managing Cron Jobs
A cron job refers to a resource that periodically executes a job according to a schedule written in cron format. It can be used to run repetitive tasks at regular intervals, such as backups and report generation. In a workload, you can create a cron job and view its detailed information or delete it.
Create Cron Job
To create a cron job, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click CronJob under the Workload menu on the Service Home page. You will be taken to the CronJob List page.
On the CronJob List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
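A minimal cron job manifest that could be entered as the object definition looks like this; the names, schedule, image, and command are examples only:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup        # example name
spec:
  schedule: "0 2 * * *"       # cron format: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: busybox:1.36   # example image
            command: ["sh", "-c", "echo running backup"]
```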
To check the detailed information of the cron job, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Cron Job under the Workload menu on the Service Home page. You will be taken to the Cron Job List page.
On the CronJob List page, select the cluster and namespace from the top left gear button, then click Confirm.
On the Cron Job List page, select the item you want to view detailed information for. You will be taken to the Cron Job Details page.
If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
Click each tab to view service information.
Category
Detailed description
Cron job delete
Delete cron job
Detailed Information
Can view detailed information of cron job
YAML
Cron job resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply changes
When editing content, you can click the Diff button to view the changed content
Event
Check events that occurred within the cron job
Job
Check the job information of the Cron job. Selecting a job item moves to the job detail page
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the cron job
Object Information
Check the object information of the cron job
Table. Cronjob detailed information items
Delete Cron Job
To delete a cron job, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Cron Job under the Workload menu on the Service Home page. You will be taken to the Cron Job List page.
On the CronJob List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Cron Job List page, select the item you want to delete. You will be taken to the Cron Job Details page.
Click Delete Cron Job on the Cron Job Details page.
If the Notification Confirmation Window appears, click the Confirm button.
Warning
On the cron job list page, after selecting the item you want to delete, clicking Delete will delete the selected cron job.
4.1.2.4 - Service and Ingress Management
A service is an abstraction method that exposes applications running in a set of pods as a network service, and an ingress is used to expose HTTP and HTTPS paths from outside the cluster to inside the cluster. After creating a namespace, you can create or delete services, endpoints, ingresses, and ingress classes.
Reference
Service, endpoint, ingress, ingress class services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.
To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the services created in the selected cluster/namespace.
Service Management
You can create a service and view or delete its detailed information.
Create Service
To create a service, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Service under the Service and Ingress menu on the Service Home page. You will be taken to the Service List page.
On the Service List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
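A minimal service manifest that could be entered as the object definition looks like this; the name, selector, and ports are examples only:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service         # example name
spec:
  type: ClusterIP
  selector:
    app: nginx                # routes traffic to pods with this label (example)
  ports:
  - port: 80
    targetPort: 80
```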
To view detailed service information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
On the Service List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Service List page, select the item for which you want to view detailed information. You will be taken to the Service Details page.
If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
Click each tab to view service information.
Category
Detailed description
Delete Service
Delete the service
Detailed Information
Can check detailed service information
YAML
Service resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
When editing content, click the Diff button to view the changes
Event
Check events that occurred within the service
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the service’s metadata information
Object Information
Check the service’s object information
Table. Service Detailed Information Items
Delete Service
To delete the service, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
On the Service List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Service List page, select the item you want to delete. You will be taken to the Service Details page.
Click Delete Service on the Service Details page.
If the Notification Confirmation Window appears, click the Confirm button.
Caution
After selecting the item you want to delete on the service list page, click Delete to delete the selected service.
Manage Endpoints
You can create an endpoint and view or delete its detailed information.
Create Endpoint
To create an endpoint, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
Click Endpoint under the Service and Ingress menu on the Service Home page. Navigate to the Endpoint List page.
On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
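A minimal endpoints manifest that could be entered as the object definition looks like this; the name and IP are examples, assuming a service of the same name exists:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-service         # must match the name of the service it backs (example)
subsets:
- addresses:
  - ip: 10.0.0.10             # example backend IP
  ports:
  - port: 80
```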
Check endpoint detailed information
To view detailed endpoint information, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Endpoint under the Service and Ingress menu. Navigate to the Endpoint List page.
On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Endpoint List page, select the item you want to view detailed information for. You will be taken to the Endpoint Details page.
If you select Show System Objects at the top of the list, all items except the Kubernetes object entries will be displayed.
Click each tab to view service information.
Category
Detailed description
Endpoint Deletion
Delete endpoint
Detailed Information
Can check detailed information of the endpoint
YAML
Endpoint resource files can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
When editing content, click the Diff button to view the changes
Event
Check events that occurred within the endpoint
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the endpoint
Object Information
Check the endpoint’s object information
Table. Endpoint Detailed Information Items
Delete Endpoint
To delete the endpoint, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
On the Service Home page, click Endpoint under the Service and Ingress menu. You will be taken to the Endpoint List page.
On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Endpoint List page, select the item you want to delete. You will be taken to the Endpoint Details page.
Click Delete Endpoint on the Endpoint Details page.
When the notification confirmation window appears, click the Confirm button.
Reference
On the endpoint list page, after selecting the item you want to delete, click Delete to delete the selected endpoint.
Manage Ingress
Ingress is an API object that manages external (HTTP and HTTPS) access to services within the Kubernetes Engine cluster. It is used to expose workloads externally and provides L7 load balancing.
Create Ingress
To create an ingress, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Ingress under the Service and Ingress menu. You will be taken to the Ingress List page.
On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
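The object information entered in the popup is a standard Kubernetes Ingress manifest. A minimal sketch (the host, service name, and ingress class below are hypothetical):

```yaml
# Ingress routing HTTP traffic for example.com to a backend service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # hypothetical name
  namespace: default
spec:
  ingressClassName: nginx        # must match an existing IngressClass
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service    # hypothetical backend Service
            port:
              number: 80
```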
To view detailed ingress information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Ingress under the Service and Ingress menu. You will be taken to the Ingress List page.
On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Ingress List page, select the item you want to view. You will be taken to the Ingress Details page.
If you select Show System Objects at the top of the list, system objects are also displayed in addition to the Kubernetes object entries you created.
Click each tab to view service information.
Category
Detailed description
Delete Ingress
Deletes the ingress
Detailed Information
View detailed information of the ingress
YAML
The ingress's resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
While editing, click the Diff button to view the changes
Event
Check events that occurred within the ingress
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the ingress
Object Information
Check the object information of Ingress
Table. Ingress detailed information items
Delete Ingress
To delete the ingress, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Ingress under the Service and Ingress menu. You will be taken to the Ingress List page.
On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Ingress List page, select the item you want to delete. You will be taken to the Ingress Details page.
Click Delete Ingress on the Ingress Details page.
When the confirmation window appears, click the Confirm button.
Caution
On the Ingress list page, after selecting the item you want to delete, you can delete the selected Ingress by clicking Delete.
Manage Ingress Class
IngressClass is an API resource that allows multiple ingress controllers to be used in a single cluster. Each ingress must reference an IngressClass resource, which contains additional configuration including the name of the controller that should implement the class.
Create Ingress Class
To create an ingress class, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click IngressClass under the Service and Ingress menu. You will be taken to the IngressClass List page.
On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
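The object information entered in the popup is a standard Kubernetes IngressClass manifest. A minimal sketch (the class name and controller identifier below are hypothetical and depend on the installed controller):

```yaml
# IngressClass naming the controller that implements it
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                        # hypothetical class name, referenced by ingressClassName
  annotations:
    # optional: make this the cluster's default ingress class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx   # identifier of the controller that handles this class
```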
To view detailed ingress class information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click IngressClass under the Service and Ingress menu. You will be taken to the IngressClass List page.
On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the IngressClass List page, select the item you want to view. You will be taken to the IngressClass Details page.
If you select Show System Objects at the top of the list, system objects are also displayed in addition to the Kubernetes object entries you created.
Click each tab to view service information.
Category
Detailed description
Delete Ingress Class
Deletes the ingress class
Detailed Information
View detailed information of the ingress class
YAML
The ingress class's resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
While editing, click the Diff button to view the changes
Event
Check events that occurred within the Ingress class
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the Ingress class
Object Information
Check the object information of the Ingress class
Table. Ingress Class Detailed Information Items
Delete Ingress Class
To delete the ingress class, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click IngressClass under the Service and Ingress menu. You will be taken to the IngressClass List page.
On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the IngressClass List page, select the item you want to delete. You will be taken to the IngressClass Details page.
Click Delete Ingress Class on the IngressClass Details page.
When the confirmation window appears, click the Confirm button.
Warning
On the Ingress Class list page, after selecting the item you want to delete, clicking Delete will delete the selected Ingress Class.
4.1.2.5 - Storage Management
You can create and manage storage to use when using Kubernetes Engine. Storage is created and then managed for each of PVC, PV, and StorageClass items.
Reference
The PVC, PV, and StorageClass views default to the cluster (namespace) selected when the service was created. Even if you select other items in the list, the default cluster (namespace) setting is retained.
To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace to change and click the Confirm button. You can then view the services created in the selected cluster/namespace.
Notice
The items linked by storage type are as follows.
Type
Detailed Description
Block Storage
Supports a storage class that uses the product’s volume in conjunction with the Block storage product within Virtual Server
Object Storage
Can be linked with Samsung Cloud Platform products or external Object Storage
No separate configuration is required for the Kubernetes Engine, and it can be linked by directly configuring the workload (application) according to the Object Storage guide
File Storage
Supports storage classes of NFS and CIFS protocol volumes in conjunction with the File Storage product
For NFS protocol volumes, selection is required when creating a Kubernetes Engine (supports HDD, SSD disk types)
For CIFS protocol volumes, selection can be made when creating a Kubernetes Engine or after creation
Table. Storage linkage items by type
Manage PVC
A Persistent Volume Claim (PVC) is an object defined to request the required storage capacity. PVC provides high usability through abstraction and prevents data from being lost when the container lifecycle ends (maintaining data persistence).
Create PVC
To create a PVC, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click PVC under the Storage menu. You will be taken to the PVC List page.
On the PVC List page, after selecting the cluster and namespace from the top left gear button, click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
The bs-sc storage class supports using SSD-type volumes in conjunction with block storage products.
Access mode: RWO - ReadWriteOnce
Reclaim policy: Delete(when PVC is deleted, delete PV and stored data together), Retain(when PVC is deleted, retain PV and stored data)
Capacity expansion support: individual PVC expansion support (automatic volume expansion in 8 Gi increments)
Predefined Storage Class
Storage Class
Reclaim Policy*
Volume Expansion Allowed**
Mount Options
Remarks
nfs-subdir-external-sc (default)
Delete
Not supported
nfsvers=3, noresvport
Linked with default Volume (NFS) settings
nfs-subdir-external-sc-retain
Retain
Not supported
nfsvers=3, noresvport
Linked with default Volume (NFS) settings
bs-sc
Delete
Support
-
VirtualServer > BlockStorage product integration
bs-sc-retain
Retain
Support
-
VirtualServer > BlockStorage product integration
(*) To use a storage class other than the default, you need to specify the storage class name in PVC’s spec.storageClassName
(**) Users can change the default storage class directly (by adjusting the storageclass.kubernetes.io/is-default-class: "true" annotation)
Table. Predefined Storage Class List
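As noted in (*), a non-default storage class is selected by setting spec.storageClassName in the PVC. A sketch of a PVC using the bs-sc class described above (the PVC name is hypothetical):

```yaml
# PVC bound to the bs-sc storage class (Block Storage, RWO, Delete reclaim policy)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc           # hypothetical name
  namespace: default
spec:
  storageClassName: bs-sc  # omit to use the default class (nfs-subdir-external-sc)
  accessModes:
  - ReadWriteOnce          # the access mode supported by bs-sc
  resources:
    requests:
      storage: 8Gi
```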
Caution
The features of the reclaim policy are as follows.
Delete: If you delete the PVC, the associated PV and physical data will also be deleted.
Retain: Even if the PVC is deleted, the corresponding PV and physical data are not deleted and are retained. Since physical data not used by the workload may remain in storage, careful capacity management is required.
Caution
Consider the following when using volume expansion.
nfs-subdir-external-sc storage class
Cannot adjust the capacity of PVC. (Volume expansion not supported)
All PVs share the total capacity of the File Storage volume, so volume expansion for each PVC is not required.
bs-sc storage class
You can expand the PVC capacity. (Shrink function not supported)
The PV capacity is not guaranteed to match the amount requested by the PVC (expansion is performed in 8 Gi units).
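For the bs-sc class, expansion is requested by increasing spec.resources.requests.storage on the existing PVC; shrinking is not supported. A sketch of the edited spec (the values are illustrative):

```yaml
# Edited PVC spec requesting expansion; the request may only ever increase,
# and with 8 Gi granularity the provisioned capacity may be rounded up
spec:
  storageClassName: bs-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi   # raised from the original 8Gi to request expansion
```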
Create StorageClass
To create a storage class, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click StorageClass under the Storage menu. You will be taken to the StorageClass List page.
On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
To view detailed storage class information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click StorageClass under the Storage menu. You will be taken to the StorageClass List page.
On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the StorageClass List page, select the item you want to view. You will be taken to the StorageClass Details page.
If you select Show System Objects at the top of the list, system objects are also displayed in addition to the Kubernetes object entries you created.
Click each tab to view service information.
Category
Detailed description
Delete StorageClass
Deletes the storage class
Detailed Information
View detailed information of the storage class
YAML
The storage class's resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
While editing, click the Diff button to view the changes
Event
Check events that occurred within the storage class
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the storage class
Object Information
Check the object information of the storage class
Table. StorageClass detailed information items
Delete StorageClass
To delete the storage class, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click StorageClass under the Storage menu. You will be taken to the StorageClass List page.
On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the StorageClass List page, select the item you want to delete. You will be taken to the StorageClass Details page.
Click Delete StorageClass on the StorageClass Details page.
When the confirmation window appears, click the Confirm button.
Caution
On the storage class list page, after selecting the item you want to delete, click Delete to delete the selected storage class.
4.1.2.6 - Configuration Management
When values inside a container must change across environments such as development and production, building and managing a separate image for each set of environment variables is inconvenient and wastes significant cost.
In Kubernetes, you can manage environment variables and configuration values externally so that they can be changed from outside and injected when a Pod is created; ConfigMap and Secret are used for this purpose.
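The injection described above can be sketched with a ConfigMap whose value is mapped into a Pod's environment at creation time (all names and the image below are hypothetical examples):

```yaml
# ConfigMap holding a non-sensitive configuration value
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
# Pod injecting the ConfigMap value as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:1.25          # example image
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
```

Sensitive values follow the same pattern with a Secret and secretKeyRef instead of a ConfigMap and configMapKeyRef.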
Reference
ConfigMaps and secret services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.
To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the ConfigMap and Secret services created in the selected cluster/namespace.
Manage ConfigMap
You can write and manage the Config information used in the namespace as a ConfigMap.
Create ConfigMap
To create a ConfigMap, follow these steps.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click ConfigMap under the Configuration menu. You will be taken to the ConfigMap List page.
On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
To view detailed secret information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Secret under the Configuration menu. You will be taken to the Secret List page.
On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Secret List page, select the item you want to view. You will be taken to the Secret Details page.
If you select Show System Objects at the top of the list, system objects are also displayed in addition to the Kubernetes object entries you created.
Click each tab to view service information.
Category
Detailed description
Delete Secret
Deletes the secret
Detailed Information
View detailed information of the secret
YAML
The secret's resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
While editing, click the Diff button to view the changes
Event
Check events that occurred within Secret
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the secret’s metadata information
Object Information
Check the secret’s object information
Table. Secret Detailed Information Items
Delete Secret
To delete the secret, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Secret under the Configuration menu. You will be taken to the Secret List page.
On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the Secret List page, select the item you want to delete. You will be taken to the Secret Details page.
Click Delete Secret on the Secret Details page.
When the confirmation window appears, click the Confirm button.
Caution
Select the item you want to delete on the secret list page, then click Delete to delete the selected secret.
4.1.2.7 - Manage Permissions
Kubernetes clusters can be accessed by multiple users, and you can assign permissions for specific APIs or namespaces to define the access scope. By applying Kubernetes' role-based access control (RBAC) feature, you can set permissions per cluster or namespace.
You can create and manage cluster roles, cluster role bindings, roles, and role bindings.
Reference
ClusterRole, ClusterRoleBinding, Role, and RoleBinding services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.
To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace to change and click the Confirm button. You can view the services created in the selected cluster/namespace.
Reference
The RBAC API declares four kinds of Kubernetes objects: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.
You can set and manage access permissions on a per-cluster basis, including permissions for APIs or resources that are not limited to a namespace.
Create Cluster Role
To create a cluster role, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click ClusterRole under the Permissions menu. You will be taken to the ClusterRole List page.
On the ClusterRole List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
To view detailed cluster role binding information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click ClusterRoleBinding under the Permissions menu. You will be taken to the ClusterRoleBinding List page.
On the ClusterRoleBinding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the ClusterRoleBinding List page, select the item you want to view. You will be taken to the ClusterRoleBinding Details page.
If you select Show System Objects at the top of the list, system objects are also displayed in addition to the Kubernetes object entries you created.
Click each tab to view service information.
Category
Detailed description
Delete Cluster Role Binding
Deletes the cluster role binding
Detailed Information
View detailed information of the cluster role binding
YAML
The cluster role binding's resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
While editing, click the Diff button to view the changes
Event
Check events that occurred within the ClusterRoleBinding
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the cluster role binding
Role/Target Information
Check the role and target information of the cluster role binding
Table. Cluster Role Binding Detailed Information Items
Delete Cluster Role Binding
To delete the cluster role binding, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click ClusterRoleBinding under the Permissions menu. You will be taken to the ClusterRoleBinding List page.
On the ClusterRoleBinding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the ClusterRoleBinding List page, select the item you want to delete. You will be taken to the ClusterRoleBinding Details page.
Click Delete Cluster Role Binding on the ClusterRoleBinding Details page.
When the confirmation window appears, click the Confirm button.
Caution
On the ClusterRoleBinding list page, after selecting the item you want to delete, click Delete to delete the selected ClusterRoleBinding.
Manage Role
A role refers to a rule that specifies permissions for a specific API or resource. You can create and manage permissions that only allow access to the namespace to which the role belongs.
Create Role
To create a role, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click Role under the Permissions menu. You will be taken to the Role List page.
On the Role List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
In the Object Creation Popup, enter the object information and click the Confirm button.
To view detailed role binding information, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click RoleBinding under the Permissions menu. You will be taken to the RoleBinding List page.
On the RoleBinding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the RoleBinding List page, select the item you want to view. You will be taken to the RoleBinding Details page.
If you select Show System Objects at the top of the list, system objects are also displayed in addition to the Kubernetes object entries you created.
Click each tab to view service information.
Category
Detailed description
Delete Role Binding
Deletes the role binding
Detailed Information
View detailed information of the role binding
YAML
The role binding's resource file can be edited in the YAML editor
Click the Edit button, modify the resource, then click the Save button to apply the changes
While editing, click the Diff button to view the changes
Event
Check events that occurred within the role binding
Account Information
Check basic information about the Account such as Account name, location, creation date, etc.
Metadata Information
Check the metadata information of the role binding
Role/Target Information
Check the role and target information of the role binding
Table. Role Binding Detailed Information Items
Delete Role Binding
To delete the role binding, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
On the Service Home page, click RoleBinding under the Permissions menu. You will be taken to the RoleBinding List page.
On the RoleBinding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
On the RoleBinding List page, select the item you want to delete. You will be taken to the RoleBinding Details page.
Click Delete Role Binding on the RoleBinding Details page.
When the confirmation window appears, click the Confirm button.
Caution
On the role binding list page, after selecting the item you want to delete, you can delete the selected role binding by clicking Delete.
4.1.3 - Using Kubernetes Engine
To expose HTTP and HTTPS services from the cluster to the outside, configure external network communication by creating a service of type LoadBalancer.
Using Kubernetes Engine Guide
The Using Kubernetes Engine guide describes the following features. For more information, refer to the corresponding guide.
Guide
Description
Creating a LoadBalancer Service
Instructions on how to create a LoadBalancer-type service through a service manifest file
Table. Description of Using Kubernetes Engine Guide
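A service manifest of type LoadBalancer, as referenced in the guide above, generally looks like the following (the name, labels, and ports are hypothetical):

```yaml
# Service exposing a workload externally through a load balancer
apiVersion: v1
kind: Service
metadata:
  name: web-lb            # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web              # must match the Pod labels of the workload
  ports:
  - port: 80              # external port on the load balancer
    targetPort: 8080      # container port receiving the traffic
```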
4.1.3.1 - Authentication and Authorization
Kubernetes Engine has Kubernetes’ authentication and RBAC authorization features applied. This explains the authentication and authorization features of Kubernetes and how to link them with Kubernetes Engine and IAM.
Kubernetes Authentication and Authorization
This explains the authentication and RBAC authorization features of Kubernetes.
Authentication
The Kubernetes API server acquires the necessary information for user or account authentication from certificates or authentication tokens and proceeds with the authentication process.
For a detailed explanation of using kubectl and kubeconfig, refer to Accessing the Cluster.
Authorization
The Kubernetes API server checks if the user has permission for the requested action using the user information obtained through the authentication process and the RBAC-related objects. There are four types of RBAC-related objects as follows:
Object
Scope
Description
ClusterRole
Cluster-wide
Definition of permissions across all namespaces in the cluster
ClusterRoleBinding
Cluster-wide
Binding definition between ClusterRole and user
Role
Namespace
Definition of permissions for a specific namespace
RoleBinding
Namespace
Binding definition between ClusterRole or Role and user
Kubernetes has several predefined ClusterRoles. Some of these ClusterRoles do not have the prefix system:, which means they are intended for user use. These include the cluster-admin role that can be applied to the entire cluster using ClusterRoleBinding, and the admin, edit, and view roles that can be applied to a specific namespace using RoleBinding.
Default ClusterRole
Default ClusterRoleBinding
Description
cluster-admin
system:masters group
Grants superuser access to perform all actions on all resources.
When used in ClusterRoleBinding, it grants full control over all resources in the cluster and all namespaces.
When used in RoleBinding, it grants full control over the namespace and all resources in the namespace bound to the RoleBinding.
admin
None
Grants administrator access to the namespace when used with RoleBinding. When used in RoleBinding, it grants read/write access to most resources in the namespace, including the ability to create roles and role bindings. However, this role does not grant write access to resource quotas or the namespace itself.
edit
None
Grants read/write access to most objects in the namespace. This role does not grant the ability to view or modify roles and role bindings. However, this role allows access to secrets, which can be used to run pods in the namespace as any account, effectively granting API access at the account level.
view
None
Grants read-only access to most objects in the namespace. Roles and role bindings cannot be viewed. This role does not grant access to secrets, as reading secret contents would allow access to account credentials and potentially grant API access at the account level (a form of privilege escalation).
Table. Default ClusterRole and ClusterRoleBinding descriptions
In addition to the predefined ClusterRoles, you can define separate roles (or ClusterRoles) as needed. For example:
# Role that grants permission to view pods in the "default" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
Code block. Role that grants permission to view pods in a namespace
# ClusterRole that grants permission to view nodes
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
Code block. ClusterRole that grants permission to view nodes
To manage access to the Kubernetes Engine using Samsung Cloud Platform IAM, you need to understand the relationship between Kubernetes’ role binding and IAM.
The target (subjects) of role binding (or cluster role binding) can include individual users (User) or groups (Group).
User matches the Samsung Cloud Platform username, and Group matches the IAM user group name.
For role binding/cluster role binding, subjects.kind can be one of the following:
User: Binds to a Samsung Cloud Platform individual user.
Group: Binds to a Samsung Cloud Platform IAM user group.
Note
In addition to the above, a service account can also be specified, but a service account is generally not for users and cannot be bound to a Samsung Cloud Platform user.
The subjects.name of role binding/cluster role binding can be specified as follows:
User case: Samsung Cloud Platform individual username (e.g. jane.doe)
Group case: Samsung Cloud Platform IAM user group name (e.g. ReadPodsGroup)
Note
subjects.name is case-sensitive.
In this way, an IAM user group is bound to a role binding (or cluster role binding) written in the Kubernetes Engine cluster. Additionally, the permission to perform API operations included in the role (or cluster role) bound to the group is granted.
Example) Role Binding read-pods #1
An example of writing a User (Samsung Cloud Platform individual user) to a role binding is as follows:
# This role binding allows the user "jane.doe" to view pods in the "default" namespace.
# A "pod-reader" role must exist in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a role or cluster role.
  kind: Role  # Must be Role or ClusterRole.
  name: pod-reader  # Must match the name of the role or cluster role to bind.
  apiGroup: rbac.authorization.k8s.io
subjects:
# One or more "targets" can be specified.
- kind: User
  name: jane.doe
  apiGroup: rbac.authorization.k8s.io
Code block. Example of writing a User (Samsung Cloud Platform individual user) to a role binding
If a role binding like the above is created in a cluster, a user with the username jane.doe is granted the permission to perform the API actions defined in the pod-reader role.
Example) Role Binding read-pods #2
An example of writing a group (IAM user group) to a role binding is as follows:
# This role binding allows users in the "ReadPodsGroup" group to view pods in the "default" namespace.
# A "pod-reader" role must exist in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
# One or more "targets" can be specified.
- kind: Group
  name: ReadPodsGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Example of Role binding that allows the ReadPodsGroup group to view pods
If a role binding like the above is created in the cluster, users in the IAM user group ReadPodsGroup are granted the permission to perform API operations written in the pod-reader role.
Example) Cluster Role Binding read-nodes
# This cluster role binding allows users in the "ReadNodesGroup" group to view nodes.
# A cluster role named "node-reader" must exist.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: ReadNodesGroup
    apiGroup: rbac.authorization.k8s.io
Code block. Example of a cluster role binding that allows the ReadNodesGroup group to view nodes
When a cluster role binding like the one above is created in the cluster, users in the IAM user group ReadNodesGroup are granted the permissions to perform the API actions written in the cluster role node-reader.
Predefined Roles and Role Bindings for Samsung Cloud Platform
The Kubernetes Engine of Samsung Cloud Platform has predefined cluster role bindings scp-cluster-admin, scp-view, and scp-namespace-view, and the cluster role scp-namespace-view. The following table shows the binding relationships between these predefined roles and role bindings and Samsung Cloud Platform users. The cluster roles cluster-admin and view are predefined within the Kubernetes cluster itself. For more detailed explanations, refer to the Roles section.
| Cluster Role Binding | Cluster Role | Subjects (User) |
|---|---|---|
| scp-cluster-admin | cluster-admin | Group AdministratorGroup, Group OperatorGroup, User john.smith |
| scp-view | view | Group ViewerGroup |
| scp-namespace-view | scp-namespace-view | All authenticated users in the cluster |
Table. Predefined Roles and Role Bindings for Samsung Cloud Platform, IAM User Groups, and User Binding Relationships
According to the cluster role binding scp-cluster-admin, users in the IAM user groups AdministratorGroup or OperatorGroup, as well as the Kubernetes Engine product applicant, are granted cluster administrator permissions.
According to the cluster role binding scp-view, users in the ViewerGroup are granted cluster viewer permissions. More precisely, since it is linked to the predefined cluster role view in Kubernetes, access permissions for cluster-scoped resources (e.g., namespaces, nodes, ingress classes, etc.) and secrets within namespaces are not included. For more detailed explanations, refer to the Roles section.
According to the cluster role binding scp-namespace-view, all authenticated users in the cluster are granted namespace viewer permissions.
Note
Predefined roles and role bindings for Samsung Cloud Platform are created only once when the cluster product is applied.
Users can modify or delete predefined cluster role bindings and cluster roles for Samsung Cloud Platform as needed.
The details of predefined roles and role bindings for Samsung Cloud Platform are as follows:
Cluster Role Binding scp-cluster-admin
The cluster role binding scp-cluster-admin is bound to the cluster role cluster-admin and bound to the IAM user groups AdministratorGroup, OperatorGroup, and the SCP user (Kubernetes Engine cluster creator) according to the subjects.
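Based on the binding relationships in the table above, such a ClusterRoleBinding could look like the following sketch. The exact manifest generated by the platform may differ, and the creator subject name john.smith is illustrative:

```yaml
# Sketch of a ClusterRoleBinding like scp-cluster-admin, reconstructed from the
# table above; the actual platform-generated manifest may differ.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-cluster-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin            # predefined in Kubernetes
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: AdministratorGroup
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: OperatorGroup
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: john.smith             # Kubernetes Engine cluster creator (illustrative)
    apiGroup: rbac.authorization.k8s.io
```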
Cluster Role and Cluster Role Binding scp-namespace-view
Cluster Role scp-namespace-view is a role that defines the authority to view namespaces.
Cluster Role Binding scp-namespace-view is associated with Cluster Role scp-namespace-view and grants namespace view authority to all authenticated users in the cluster.
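The pair could be sketched as follows. This is an assumption-labeled reconstruction, not the platform's exact manifests; in particular, the subject covering "all authenticated users" is typically the built-in system:authenticated group:

```yaml
# Hedged sketch of the scp-namespace-view ClusterRole and ClusterRoleBinding;
# the actual platform-provided manifests may differ.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scp-namespace-view
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-namespace-view
roleRef:
  kind: ClusterRole
  name: scp-namespace-view
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: system:authenticated   # all authenticated users in the cluster
    apiGroup: rbac.authorization.k8s.io
```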
Code Block. Cluster Role and Cluster Role Binding scp-namespace-view Example
IAM User Group RBAC Use Case
This chapter explains examples of granting authority by major user scenarios.
The names of IAM user groups, ClusterRoleBindings/RoleBindings, and ClusterRoles presented here are examples for understanding. Administrators should define and apply appropriate names and authorities according to their needs.
| Scope | Use Case | IAM User Group | ClusterRoleBinding/RoleBinding | ClusterRole | Note |
|---|---|---|---|---|---|
| Cluster | Cluster Administrator | ClusterAdminGroup | ClusterRoleBinding cluster-admin-group | cluster-admin | Administrator for a specific cluster |
| Cluster | Cluster Editor | ClusterEditGroup | ClusterRoleBinding cluster-edit-group | edit | Editor for a specific cluster |
| Cluster | Cluster Viewer | ClusterViewGroup | ClusterRoleBinding cluster-view-group | view | Viewer for a specific cluster |
| Namespace | Namespace Administrator | NamespaceAdminGroup | RoleBinding namespace-admin-group | admin | Administrator for a specific namespace |
| Namespace | Namespace Editor | NamespaceEditGroup | RoleBinding namespace-edit-group | edit | Editor for a specific namespace |
| Namespace | Namespace Viewer | NamespaceViewGroup | RoleBinding namespace-view-group | view | Viewer for a specific namespace |
Table. Example binding relationships between IAM user groups, role bindings, and cluster roles by use case
Note
The ClusterRoles (cluster-admin, admin, edit, view) in the table above are predefined in the Kubernetes cluster. For more information, see the Role section.
Cluster Administrator
To create a cluster administrator, follow these steps:
Create an IAM user group named ClusterAdminGroup.
Create a ClusterRoleBinding with the following content in the target cluster:
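Using the example names from the use-case table above, such a ClusterRoleBinding could be sketched as follows (adjust names and permissions as needed):

```yaml
# Sketch of a ClusterRoleBinding that grants the ClusterAdminGroup IAM user group
# cluster administrator permissions; names follow the use-case table above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-group
roleRef:
  kind: ClusterRole
  name: cluster-admin          # predefined in Kubernetes
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: ClusterAdminGroup    # IAM user group created in the previous step
    apiGroup: rbac.authorization.k8s.io
```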
The default cluster role view is associated with it, and viewer permissions are granted for the namespace.
To create a namespace viewer, follow these steps:
Create an IAM user group: Create an IAM user group named NamespaceViewGroup.
Create a role binding: Create a role binding with the following content in the target cluster.
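Following the pattern of the use-case table above, the role binding could be sketched as follows. The names come from the table, and the target namespace dev is illustrative:

```yaml
# Sketch of a RoleBinding that grants the NamespaceViewGroup IAM user group viewer
# permissions in a namespace; the namespace "dev" is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-view-group
  namespace: dev               # target namespace (illustrative)
roleRef:
  kind: ClusterRole
  name: view                   # predefined in Kubernetes
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: NamespaceViewGroup   # IAM user group created in the previous step
    apiGroup: rbac.authorization.k8s.io
```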
This binding references the predefined view cluster role, granting viewer permissions in the specified namespace.
Practice Example
This chapter describes an example and procedure for applying an administrator to a specific namespace.
IAM user group: NamespaceAdminGroup
IAM policy: NamespaceAdminAccess
Role binding: namespace-admin-group
Create an IAM User Group
Note
For more information about IAM user groups, see IAM > User Group.
To create an IAM user group in Samsung Cloud Platform, follow these steps:
Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.
On the Service Home page, click User Group. The User Group List page appears.
On the User Group List page, click Create User Group.
Enter the required information in the Basic Information, Add User, Attach Policy, and Additional Information sections.
- User Group Name (Required): Enter the user group name.
  - Use Korean, English, numbers, and special characters (+=,.@-_), 3 to 24 characters.
  - Enter NamespaceAdminGroup as the user group name.
- Description (Optional): Description of the user group.
  - Enter a detailed description of up to 1,000 characters.
- User (Optional): Users to add to the user group.
  - The list of users registered in the account is displayed; when a checkbox is selected, the selected user's name is displayed at the top of the screen.
  - Click the Delete button at the top of the screen or uncheck the checkbox in the user list to cancel a selection.
  - If there are no users to add, click Create User at the bottom of the user list to register a new user, then refresh the user list and select the user.
- Policy (Optional): Policy to attach to the user group.
  - The list of policies registered in the account is displayed; when a checkbox is selected, the selected policy name is displayed at the top of the screen.
  - Select ViewerAccess in the policy list.
- Tag (Optional): Tags to add to the user group.
  - Up to 50 tags can be added per resource.
Table. User Group Creation Information Input Items
Click the Complete button. The User Group List page appears.
Note
In this practice example, the ViewerAccess policy (permission to view all resources) is attached for demonstration purposes.
If you do not need permission to view all resources in the Samsung Cloud Platform Console, you do not need to attach the ViewerAccess policy. Define and apply a separate policy according to your actual situation.
Create an IAM Policy
Note
If you do not need to grant Samsung Cloud Platform Console usage permissions, you do not need to perform this step.
Note
For more information about IAM policies, see IAM > Policy.
To create an IAM policy in Samsung Cloud Platform, follow these steps:
Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.
On the Service Home page, click Policy. The Policy List page appears.
On the Policy List page, click Create Policy. The Create Policy page appears.
Enter the required information in the Basic Information and Additional Information sections.
- Policy Name (Required): Enter the policy name.
  - Use Korean, English, numbers, and special characters (+=,.@-_), 3 to 128 characters.
  - Enter NamespaceAdminAccess as the policy name.
- Description (Optional): Description of the policy.
  - Enter a detailed description of up to 1,000 characters.
- Tag (Optional): Tags to add to the policy.
  - Up to 50 tags can be added per resource.
Table. Policy Creation Information Input Items - Basic Information and Additional Information
Click the Next button. The Permission Settings section appears.
Enter the required information in the Permission Settings section.
Select Kubernetes Engine in the Service section.
You can create a policy by importing an existing policy using Policy Import. For more information about Policy Import, see Policy Import.
- Control Type (Required): Select the policy control type.
  - Allow Policy: a policy that allows the defined permissions.
  - Deny Policy: a policy that denies the defined permissions; for the same target, the deny policy takes precedence.
  - Authentication scope: All Authentication (applied regardless of authentication method), API Key Authentication (applied to users who use API key authentication), or IAM Key Authentication, Console Login (applied to users who use IAM key authentication or console login).
- Applied IP (Required): IP addresses to which the policy is applied.
  - User-specified IP: the user registers and manages IP addresses directly.
    - Applied IP: IP addresses or ranges to which the policy is applied.
    - Excluded IP: IP addresses or ranges excluded from the Applied IP.
  - All IP: do not restrict IP access; access is allowed from all IP addresses, and if exceptions are needed, register Excluded IP entries to block the registered addresses.
Table. Policy creation information input items - Permission settings
Note
Permission settings provide Basic Mode and JSON Mode.
If you write in Basic Mode and then switch to JSON Mode or move to another screen, services with identical conditions are merged into one, and incomplete settings are discarded.
If the content written in JSON Mode is not valid JSON, you cannot switch to Basic Mode.
Click the Next button. The Input Information Check page appears.
Check the input information and click the Complete button. The Policy List page appears.
Add a User to the IAM User Group
To add a user to an IAM user group in Samsung Cloud Platform, follow these steps:
Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.
On the Service Home page, click User. The User List page appears.
On the User List page, click the user to add to the IAM user group. The User Details page appears.
On the User Details page, click the User Group tab.
On the User Group tab, click the Add User Group button. The Add User Group page appears.
On the Add User Group page, select the user group to add and click the Complete button. The User Details page appears.
Select NamespaceAdminGroup as the user group to add.
Create a role binding
Create a role binding by referring to the example below.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin-group
  namespace: dev  # target namespace
roleRef:
  kind: ClusterRole
  name: admin  # predefined cluster role in Kubernetes
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: NamespaceAdminGroup  # IAM user group created earlier
    apiGroup: rbac.authorization.k8s.io
Code block. Create a role binding
Verify the user
Verify that the user’s namespace permissions are applied normally.
To verify namespace user permissions in Samsung Cloud Platform, follow these steps:
Click All Services > Container > Kubernetes Engine. The Kubernetes Engine Service Home page appears.
On the Service Home page, click Workload > Pod. The Pod List page appears.
On the Pod List page, select the cluster and namespace using the gear button at the top left, and click Confirm.
On the Pod List page, verify that the pod list is retrieved.
If you select a namespace with permissions, the pod list is displayed.
If you select a namespace without permissions, a confirmation window indicates that you do not have permission to retrieve the list.
4.1.3.2 - Accessing the Cluster
kubectl Installation and Usage Guide
After creating a Kubernetes Engine service, you can use the Kubernetes command-line tool kubectl to execute commands against the Kubernetes cluster. Using kubectl, you can deploy applications, inspect and manage cluster resources, and view logs. Installation and usage instructions for kubectl are available in the official Kubernetes documentation.
You must use a kubectl version within one minor version of the cluster version. For example, if the cluster version is 1.30, you can use kubectl versions 1.29, 1.30, or 1.31.
Admin certificate kubeconfig
This kubeconfig uses the admin certificate as the authentication method when accessing the Kubernetes API.
Admin kubeconfig download
Go to Kubernetes Engine > Cluster List > Cluster Details and click the Admin kubeconfig Download button to download the kubeconfig file.
Caution
The admin kubeconfig can be downloaded only by an Admin.
There are separate private endpoint and public endpoint versions, and you can download each only once.
Admin kubeconfig use
Reference
By default, kubectl looks for a file named config in the $HOME/.kube directory. Alternatively, you can set the KUBECONFIG environment variable or use the --kubeconfig flag to specify a different kubeconfig file.
Private endpoints are by default only accessible from nodes of the respective cluster. For resources in the same Account and same region, you can allow access by adding them to the private endpoint access control settings.
If you need to access the cluster from the external internet, setting public endpoint access to enabled allows you to access using the public endpoint kubeconfig.
User authentication key kubeconfig
This kubeconfig uses the user’s Open API authentication key as the authentication method when accessing the Kubernetes API.
User kubeconfig download
Go to Kubernetes Engine > Cluster List > Cluster Details and click the User kubeconfig download button to download the kubeconfig file.
Caution
User kubeconfig download is only possible for users with cluster view permission.
Separate versions are provided for the private endpoint and the public endpoint.
Since the downloaded kubeconfig file does not contain the authentication key token, you must add the token information before use (see the next section).
Add authentication key token to user kubeconfig file
Below is an example of a user’s kubeconfig file. To use the kubeconfig file, you need to add the authentication key token (AUTHKEY_TOKEN) information in the token field inside the file.
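The exact file contents depend on your cluster; the following minimal sketch shows only the relevant structure, with the server address, certificate data, and names as placeholders. The point is the token field that must be filled in:

```yaml
# Minimal sketch of a user kubeconfig; all values are placeholders except the
# token field, which must be filled with AUTHKEY_TOKEN before use.
apiVersion: v1
kind: Config
clusters:
  - name: example-cluster
    cluster:
      server: https://123.123.123.123:6443
      certificate-authority-data: <BASE64_CA_DATA>
users:
  - name: example-user
    user:
      token: <AUTHKEY_TOKEN>   # add the Base64-encoded ACCESS_KEY:SECRET_KEY here
contexts:
  - name: example-context
    context:
      cluster: example-cluster
      user: example-user
current-context: example-context
```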
AUTHKEY_TOKEN can be generated by concatenating the authentication key’s ACCESS_KEY and SECRET_KEY with a colon (:) and then Base64 encoding it. The following is an example of creating AUTHKEY_TOKEN in a Linux environment.
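A minimal sketch in a Linux shell, assuming hypothetical key values (substitute your own ACCESS_KEY and SECRET_KEY):

```shell
# Hypothetical key values for illustration only.
ACCESS_KEY="YOUR_ACCESS_KEY"
SECRET_KEY="YOUR_SECRET_KEY"

# Concatenate with a colon and Base64-encode.
# printf '%s' avoids a trailing newline; -w 0 disables line wrapping for long keys.
AUTHKEY_TOKEN=$(printf '%s' "${ACCESS_KEY}:${SECRET_KEY}" | base64 -w 0)
echo "${AUTHKEY_TOKEN}"
```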
Code block. AUTHKEY_TOKEN value generation example
Note
For detailed information on authentication key generation, please refer to API Reference > Common > Samsung Cloud Platform Open API call procedure.
User kubeconfig execution example
The following are examples of executing kubectl with the user kubeconfig.
When access is blocked by access control or a firewall
$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
Unable to connect to the server: dial tcp 123.123.123.123:6443: i/o timeout
Code block. Example execution when access is blocked by access control or firewall
When AUTHKEY_TOKEN does not match and authentication fails
$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
error: You must be logged in to the server (Unauthorized)
Code block. Example execution when authentication fails because AUTHKEY_TOKEN does not match
When AUTHKEY_TOKEN authentication succeeds
$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
...
kube-node-lease Active 10d
kube-public Active 10d
kube-system Active 10d
Code block. Example execution when AUTHKEY_TOKEN authentication succeeds
When AUTHKEY_TOKEN authentication succeeds but permission is lacking
$ kubectl --kubeconfig=user-kubeconfig.yaml get nodes
Error from server (Forbidden): nodes is forbidden: User "jane.doe" cannot list resource "nodes" in API group "" at the cluster scope
Code block. Example execution when AUTHKEY_TOKEN authentication succeeds but lacks permission
Reference
If AUTHKEY_TOKEN authentication succeeds but there is no permission, it means that the authentication process was completed correctly, but the authority to perform the requested operation was not granted (authorized). For detailed information about authorization, see Authentication and Authorization.
4.1.3.3 - Using type LoadBalancer Service
Service Configuration Method
You can configure a LoadBalancer type Service by writing and applying a Service manifest file (example: my-lb-svc.yaml).
Caution
LoadBalancer is created in the cluster Subnet by default.
To create a LoadBalancer in a different Subnet, use the annotation service.beta.kubernetes.io/scp-load-balancer-subnet-id. For details, refer to Annotation Detailed Settings.
Follow these steps to write and apply a type LoadBalancer Service.
Write a Service manifest file my-lb-svc.yaml.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      appProtocol: tcp  # Refer to the LB service protocol type setting section
  type: LoadBalancer
Code block. Service manifest file my-lb-svc.yaml writing example
Deploy the Service manifest using the kubectl apply command.
kubectl apply -f my-lb-svc.yaml
Code block. Deploying Service manifest with kubectl apply command
Caution
When a type LoadBalancer Service is created, a corresponding Load Balancer service is automatically created. It may take a few minutes for the configuration to complete.
Do not arbitrarily modify the automatically created Load Balancer service and LB server group. Changes may be reverted or unexpected behavior may occur.
Check the Load Balancer configuration using the kubectl get service command.
# kubectl get service my-lb-svc
NAMESPACE   NAME        TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
default     my-lb-svc   LoadBalancer   172.20.49.206   123.123.123.123   80:32068/TCP   3m
Code block. Checking Load Balancer configuration with kubectl get service command
Protocol Type
You can set the protocol type by specifying it in the Service manifest. The following is a simple example.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    ...
  ports:
    - port: 80
      targetPort: 9376
      protocol: TCP     # Required (choose one of TCP, UDP)
      appProtocol: tcp  # Optional (leave blank or choose one of tcp, http, https)
  type: LoadBalancer    # Type load balancer
Code block. Service manifest writing example
The list of protocols (protocol and appProtocol) supported by Kubernetes Engine’s type Load Balancer Service and the settings applied to the Load Balancer service accordingly are as follows.
| Category | (k8s) protocol | (k8s) appProtocol | (LB) Service Category | (LB) LB Listener | (LB) LB Server Group | (LB) Health Check |
|---|---|---|---|---|---|---|
| L4 TCP | TCP | (tcp) | L4 | TCP {port} | TCP {nodePort} | TCP {nodePort} |
| L4 UDP | UDP | - | L4 | UDP {port} | UDP {nodePort} | TCP {nodePort} |
| L7 HTTP | TCP | http | L7 | HTTP {port} | TCP {nodePort} | TCP/HTTP {nodePort} |
| L7 HTTPS | TCP | https | L7 | HTTPS {port} | TCP {nodePort} | TCP/HTTP {nodePort} |
Table. k8s Service manifest and Load Balancer service application settings
According to the k8s Service manifest spec, you can specify multiple ports for a single service.
Caution
Depending on the Load Balancer service category (L4, L7), you cannot mix and use protocol layers within a single Service.
That is, L4(TCP, UDP) and L7(HTTP, HTTPS) cannot be used together in a single Service.
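For example, a single L4 Service may expose multiple ports as long as they all stay within the L4 layer; the following sketch (ports are hypothetical) is valid, whereas adding a port with appProtocol http or https to it would not be:

```yaml
# Sketch of a valid multi-port L4 Service: TCP and UDP ports can coexist,
# but L7 ports (appProtocol http/https) could not be mixed in. Ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-multiport-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: tcp-api
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: udp-metrics
      protocol: UDP
      port: 5000
      targetPort: 5000
  type: LoadBalancer
```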
Automatically add firewall rules (LB Source NAT IP, HealthCheck IP → member IP:Port)
When using this annotation, firewall rules are added as many as the number of ports in the type LB service, so a very large number of firewall rules may be added.
If having too many firewall rules is a burden, as an alternative, you can manually add firewall rules without using this annotation. For example, you can add firewall rules with the destination as the member IP’s NodePort range (30000-32767).
Table. Firewall-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-security-group-id: Automatically adds rules to the Security Group corresponding to the specified ID.
When using this annotation, rules are added to the Security Group as many as the number of ports in the type LB service, so a very large number of Security Group rules may be added.
If having too many Security Group rules is a burden, as an alternative, you can manually add Security Group rules without using this annotation. For example, you can add Security Group rules with the destination address as the Load Balancer’s Source NAT IP and health check IP, and the allowed port as the NodePort range (30000-32767).
Security Group rules added by this annotation are not automatically deleted even if this annotation is deleted or changed.
Can add multiple separated by commas. (example: ddc25ad8-6d3f-4242-8c86-2a059212ddc6,26ab7fe1-b3ea-4aa9-9e9d-35a7c237904e)
This annotation can be used simultaneously with service.beta.kubernetes.io/scp-load-balancer-security-group-name annotation, and rules are automatically added to all Security Groups that meet the conditions.
service.beta.kubernetes.io/scp-load-balancer-security-group-name: Automatically adds rules to the Security Group corresponding to the specified Name.
When using this annotation, rules are added to the Security Group as many as the number of ports in the type LB service, so a very large number of Security Group rules may be added.
If having too many Security Group rules is a burden, as an alternative, you can manually add Security Group rules without using this annotation. For example, you can add Security Group rules with the destination address as the Load Balancer’s Source NAT IP and health check IP, and the allowed port as the NodePort range (30000-32767).
Security Group rules added by this annotation are not automatically deleted even if this annotation is deleted or changed.
Can add multiple separated by commas (example: security-group-1,security-group-2)
This annotation can be used simultaneously with service.beta.kubernetes.io/scp-load-balancer-security-group-id annotation, and rules are automatically added to all Security Groups that meet the conditions.
Table. Security Group-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled: Specifies whether to use a Load Balancer Public NAT IP.
If this annotation is set to true and service.beta.kubernetes.io/scp-load-balancer-public-ip-id is not specified, IP is automatically assigned.
If this annotation is set to true and service.beta.kubernetes.io/scp-load-balancer-public-ip-id is specified, the Public IP corresponding to the specified ID is applied.
service.beta.kubernetes.io/scp-load-balancer-public-ip-id: Specifies the ID of the Public IP to use as the Load Balancer Public NAT IP.
If service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled is not set to true, this annotation is ignored.
If service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled is set to true and this annotation is specified, the Public IP corresponding to the specified ID is applied.
Table. Load Balancer-related settings in Kubernetes annotations
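Putting the annotations described above together, a Service manifest using them could look like the following sketch. All IDs and names are placeholders; only annotation keys documented above are used:

```yaml
# Sketch of a type LoadBalancer Service using the annotations described above.
# All IDs and names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc
  annotations:
    service.beta.kubernetes.io/scp-load-balancer-subnet-id: "<SUBNET_ID>"
    service.beta.kubernetes.io/scp-load-balancer-security-group-name: "security-group-1,security-group-2"
    service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled: "true"
    service.beta.kubernetes.io/scp-load-balancer-public-ip-id: "<PUBLIC_IP_ID>"
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
```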
- TCP and UDP cannot be used together with the same port number in the same k8s Service.
- The L7 Listener's routing rules support only the default URL path for the LB server group delivery method. To add other URL paths, add them directly in the Samsung Cloud Platform console.
- URL redirection is not supported.
Table. Constraints when using Kubernetes annotations
4.1.3.4 - Considerations for Use
Managed Port Constraints
The following ports are used for SKE management and cannot be used by services. In addition, if they are blocked by an OS firewall or similar, node functions or some features may not work properly.
| Port | Description |
|---|---|
| UDP 4789 | calico-vxlan |
| TCP 5473 | calico-typha |
| TCP 10250 | kubelet |
| TCP 19100 | node-exporter |
| TCP 19400 | dcgm-exporter |
Table. Managed Port List
kube-reserved resource constraints
kube-reserved is a feature that reserves node resources for system daemons that do not run as pods, such as the kubelet and the container runtime.
Reference
For more information on kube-reserved, refer to the Kubernetes documentation on reserving compute resources for system daemons.
Example: The resources reserved according to CPU size are as follows.
| CPU specification | Resource specification 1 | Resource specification 2 | Resource specification 3 | Resource specification 4 |
|---|---|---|---|---|
| kube-reserved CPU | 70 m | 80 m | 90 m | 110 m |
Table. Example of resources reserved according to CPU size
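As a rough illustration of how such a reservation affects schedulable capacity (the node size is hypothetical, and in practice system-reserved resources and eviction thresholds, if configured, are also subtracted):

```shell
# Illustrative arithmetic only: allocatable CPU = capacity - kube-reserved.
CAPACITY_MILLICORES=4000      # hypothetical 4-vCPU node
KUBE_RESERVED_MILLICORES=80   # example reservation from the table above
ALLOCATABLE=$((CAPACITY_MILLICORES - KUBE_RESERVED_MILLICORES))
echo "${ALLOCATABLE}m"
```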
Example: The resources reserved according to the memory size are as follows.
| Memory specification | Resource specification 1 | Resource specification 2 | Resource specification 3 | Resource specification 4 | Resource specification 5 | Resource specification 6 | Resource specification 7 |
|---|---|---|---|---|---|---|---|
| kube-reserved memory | 1 GB | 1.8 GB | 2.6 GB | 3.56 GB | 5.48 GB | 9.32 GB | 11.88 GB |
Table. Example of resources reserved according to memory size
4.1.3.5 - Version Information
Kubernetes Version and Support Period
Kubernetes Version Lifecycle
The Kubernetes open source software (OSS) community releases three minor versions annually, with a release cycle of approximately 15 weeks.
Released minor versions go through a support period of approximately 14 months (standard patch 12 months, maintenance 2 months) and become EOL (End of Life).
Information
For information on Kubernetes release and EOL timing and the support period, refer to the Kubernetes community release documentation.
Samsung Cloud Platform Kubernetes Engine (SKE) Version Provision Plan
SKE verifies and provides Stable status patch versions among released OSS minor versions. Therefore, there is a difference between the release timing of versions provided by SKE and the release timing of the same OSS version.
Additionally, for previously released versions, technical support ends sequentially, starting with the oldest versions, taking the open source EOL timing into account (End of Technical Support, EoTS).
The release and termination schedules for OSS and SKE are as follows.
| Version | OSS Release | OSS EOL | SKE Release | SKE EoTS |
|---|---|---|---|---|
| v1.29 | 2023-12-13 | 2025-02-28 | 2024-10 | 2026-03-31 |
| v1.30 | 2024-04-17 | 2025-06-28 | 2025-02 | 2026-06-30 |
| v1.31 | 2024-08-13 | 2025-10-28 | 2025-07 | 2026-10-28 |
| v1.32 | 2024-12-11 | 2026-02-28 | 2025-10 | 2027-02-28 |
| v1.33 | 2025-04-23 | 2026-06-28 | 2025-12 | 2027-06-28 |
| v1.34 | 2025-08-27 | 2026-10-27 | 2026-03 | 2027-10-27 |
Table. OSS and SKE release and termination schedules
Feature Limitations at End of Technical Support (EoTS)
When the Kubernetes version provided by SKE reaches the End of Technical Support (EoTS) state, features supported in that version may be limited.
New cluster creation → Not possible
Existing cluster upgrade → Possible (even if the target version has reached EoTS)
Creating node pools in an existing cluster → Possible
Note
EOL versions may have vulnerabilities, so upgrading to a higher version is recommended.
You can upgrade the control plane and node pools in the Samsung Cloud Platform Console, and no separate cost is incurred for the upgrade.
For stable operation, perform compatibility testing for the upgrade version before proceeding with the upgrade.
OS and GPU Driver
The OS and GPU driver version information available for each K8s server type is as follows.
Caution
OS versions provided may vary by K8s version.
When using GPU nodes, related K8s components (nvidia-device-plugin, dcgm-exporter) are configured by default in the cluster.
When deploying gpu-operator, conflicts may occur because these components would be configured in duplicate; it is recommended to deploy gpu-operator with the default-provided components excluded.
For OS with ended support, node pool creation is possible, but using the latest OS version is recommended.
| k8s Version | Standard and High Capacity | GPU |
|---|---|---|
| v1.29 | Ubuntu 22.04, RHEL 8.10, RHEL 8.8 (OS with ended support) | Ubuntu 22.04 (nvidia-535.183.06) |
| v1.30 | Ubuntu 22.04, RHEL 8.10, RHEL 8.8 (OS with ended support) | Ubuntu 22.04 (nvidia-535.183.06) |
| v1.31 | Ubuntu 22.04, RHEL 8.10, RHEL 8.8 (OS with ended support) | Ubuntu 22.04 (nvidia-535.183.06) |
| v1.32 | Ubuntu 22.04, RHEL 9.4 | Ubuntu 22.04 (nvidia-535.183.06) |
| v1.33 | Ubuntu 22.04, RHEL 9.4 | Ubuntu 22.04 (nvidia-535.183.06) |
| v1.34 | Ubuntu 22.04, RHEL 9.4 | Ubuntu 22.04 (nvidia-535.183.06) |
Table. K8s version and server type-specific OS / GPU driver versions
4.1.4 - API Reference
API Reference
4.1.5 - CLI Reference
CLI Reference
4.1.6 - Release Note
Kubernetes Engine
2026.03.19
FEATURE: Kubernetes version added, GPU VM custom image provision, k8s and OS version EoTS management logic provision, node pool OS image EOS response and upgrade default setting, Terraform kubeconfig not provided, type: LB setting related improvements
Kubernetes Engine feature changes
Supports Kubernetes v1.34 version.
Provides GPU VM custom image for node pools.
Provides EoTS management logic and display function for cluster and node pool k8s versions and node pool OS versions.
Provides OS selection dropdown feature when upgrading node pools.
Adds idle-timeout for type: LB L7 listeners and changes and improves the session-duration-time default value.
No longer provides the kubeconfig feature in Terraform.
2025.12.18
FEATURE: Kubernetes version added, node pool GPU Driver version display, MNGC node support (SR), node pool default disk maximum capacity changed, node pool validation added and supplemented
Kubernetes Engine feature changes
Supports Kubernetes v1.33 version.
Provides GPU Driver version information for node pool GPU nodes.
Provides MNGC nodes in SR request setting format.
Changes the maximum capacity of Block Storage for node pool OS from 1 TB to 12 TB, matching VM products.
Adds validation of label keys when creating/modifying node pools, and validation that server groups are not supported for GPU node pools.
2025.10.23
FEATURE: Kubernetes version added, node pool advanced setting feature, node pool server group setting, ServiceWatch integration, user Kubeconfig download, node pool upgrade supplemented to consider OS version
Kubernetes Engine feature changes
Supports Kubernetes v1.32 version.
Provides node pool advanced setting feature.
Provides node pool server group (Affinity or Anti-affinity) setting feature.
Provides a user Kubeconfig download feature alongside the existing administrator Kubeconfig download button.
Provides additional upgrade logic considering OS version when upgrading node pools.
Provides log collection feature based on ServiceWatch integration.
2025.07.01
FEATURE: Kubernetes version added, public endpoint provision, private endpoint access control targets added, node pool Label/Taint, Block Storage CSI, kubectl login plugin added
Kubernetes Engine feature changes
Supports Kubernetes v1.31 version.
Provides public endpoint for the cluster.
Adds the MNGC (Baremetal) and DevOps Service products to the cluster's private endpoint access control targets.
Provides node pool Label and Taint setting feature.
Provides Block Storage CSI and kubectl login plugin features.
Provides private endpoint and access control features.
Provides type: LoadBalancer feature.
2025.02.27
FEATURE: Kubernetes version added and Kubernetes version upgrade, Custom Image and GPU node creation features added
Kubernetes Engine feature changes
Supports Kubernetes v1.30 version.
Provides Kubernetes version upgrade feature for cluster and node pools.
Provides Multi-Security Group feature.
Provides Custom Image node and GPU node creation feature.
Samsung Cloud Platform common feature changes
Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.10.01
NEW: Kubernetes Engine service official version release
Released the Kubernetes Engine product, which provides lightweight virtual computing Containers and Kubernetes clusters to manage them.
Creates Container nodes and manages them through the cluster to enable deployment of various Container applications.
2024.07.02
NEW: Beta version release
Released Kubernetes Engine product Beta version.
4.2 - Container Registry
4.2.1 - Overview
Service Overview
Container Registry is a service that provides a registry for storing and managing container images and OCI (Open Container Initiative) standard artifacts. Users can easily store, manage, and share images using the Docker CLI.
Features
Easy Registry Management and Image Deployment: You can easily create a container registry for your project in Samsung Cloud Platform. By utilizing the standard Docker CLI, you can easily retrieve images from Container Registry for deployment, simplifying the development and service deployment flow.
Efficient Container Image Storage: You can store container images anytime, anywhere. Images are stored and retrieved in conjunction with Object Storage, enabling efficient image management. The registry also supports the Docker Registry V2 API specification, making it convenient to use.
Enhanced Security with Registry Management: You can safely store and use images in the Container Registry. Images are stored encrypted in Object Storage and transmitted via HTTPS. You can set repository-level access permissions using Samsung Cloud Platform IAM resource-based policies, and images can only be used according to the configured permissions.
Container Image Vulnerability Analysis: Container Registry provides a feature to analyze security vulnerabilities in stored container images. Users can select an image and scan it in a simple way to check the vulnerability results, and identify and remove vulnerabilities based on the analysis results.
Service Composition Diagram
Figure. Container_Registry Configuration Diagram
Provided Features
Container Registry provides the following features.
Registry Management: Provides Container Registry creation, deletion, registry access control management (private), and visibility features.
Repository Management: Repositories are created under a Container Registry and provide functions such as creation, inquiry, deletion, and security policy settings.
Image Management: Container images stored in a Repository, with functions such as image push, image pull, inquiry, deletion, tag management, and security policy settings.
Image Vulnerability Check: You can manually or automatically check the security vulnerabilities of OS packages and language packages of images stored in the Container Registry, as well as secrets included in the images. Users can identify and remove known vulnerabilities (CVE) and secrets based on the check results to prevent the use of unsafe images.
Component
Registry
A registry (Registry) is a repository, or collection of repositories, used to store, access, and manage container images. Container registries support the development of container-based applications as part of development and operational processes, and can be connected directly to container platforms and orchestration tools such as Docker and Kubernetes. The registry acts as an intermediary for sharing container images between systems, saving developers time when creating and delivering cloud-native applications. In Samsung Cloud Platform, the registry is provided in conjunction with Object Storage, and images are transmitted via HTTPS.
Repository
The Repository is a logical management unit for image tags; using a repository, you can manage image tags efficiently. A repository serves as centralized virtual storage in which versions of an application image are stored and shared, allowing developers within the same account to collaborate easily and to track and manage changes.
Image
An image is a package that contains all the files and settings required to run a container. An image plays a role similar to a class from which containers are created, and a container can be seen as a running instance of the image. For example, the Ubuntu image contains all the files necessary to run Ubuntu, and the MySQL image contains all the files, IDs, passwords, and port information necessary to run MySQL.
Prerequisite service
Container Registry has no prerequisite services.
4.2.1.1 - Monitoring Metrics
Container Registry monitoring metrics
The table below shows the monitoring metrics of Container Registry that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, please refer to the Cloud Monitoring guide.
Container Registry sends metrics to ServiceWatch. The metrics provided as basic monitoring are data collected at 1-minute intervals.
Note
For information on how to check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are basic metrics for the Container Registry namespace.
Metrics with metric names shown in bold below are key metrics selected among the basic metrics provided by Container Registry.
Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch.
For each metric, the guide describes which statistical values are meaningful when querying that metric; the statistic shown in bold among them is the key statistic. You can query key metrics through their key statistics in the service dashboard.
| Metric Name | Detailed Description | Unit | Meaningful Statistics |
| --- | --- | --- | --- |
| Image Pull Count [Allowed] | Number of allowed Image Tag (digest) Pulls | Count/Minute | Sum, Average, Maximum |
| Image Push Count [Denied] | Number of denied Image Tag (digest) Pushes | Count/Minute | Sum, Average, Maximum |
| Repository Count [Deleted] | Number of deleted Repositories | Count/Minute | Sum, Average, Maximum |
| Repository Count [Created] | Number of created Repositories | Count/Minute | Sum, Average, Maximum |
| Registry Login Count [Allowed] | Number of allowed Registry Logins | Count/Minute | Sum, Average, Maximum |
| Image Scan Count [Denied] | Number of denied Image Tag (digest) Scans | Count/Minute | Sum, Average, Maximum |
| Image Pull Count [Denied] | Number of denied Image Tag (digest) Pulls | Count/Minute | Sum, Average, Maximum |
| Registry Login Count [Denied] | Number of denied Registry Logins | Count/Minute | Sum, Average, Maximum |
| Image Push Count [Allowed] | Number of allowed Image Tag (digest) Pushes | Count/Minute | Sum, Average, Maximum |
| Image Scan Count [Allowed] | Number of allowed Image Tag (digest) Scans | Count/Minute | Sum, Average, Maximum |
| Image Count [Deleted] | Number of deleted Images | Count/Minute | Sum, Average, Maximum |
| Image Count [Created] | Number of created Images | Count/Minute | Sum, Average, Maximum |
| Image Tag Count [Deleted] | Number of deleted Image Tag (digest)s | Count/Minute | Sum, Average, Maximum |
Table. Container Registry Basic Metrics
4.2.2 - How-to guides
Users can create the Container Registry service through the Samsung Cloud Platform Console by entering the required information and selecting detailed options.
Create Container Registry
You can create and use the Container Registry service in the Samsung Cloud Platform Console.
Reference
Up to two Container Registries can be created per Account (one per visibility type).
To create a Container Registry service, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Container Registry Service Home page.
Click the Create Registry button on the Service Home page. You will be taken to the Create Registry page.
On the Registry creation page, enter the information required to create the service, and select detailed options.
In the Enter Service Information area, input or select the required information.
Category
Required
Detailed description
Registry Name
Required
Registry name created by the user
Must start with a lowercase English letter and be entered using lowercase English letters and numbers, 3 to 25 characters
Endpoint
Required
Set access type for registry endpoint
Private: Only private endpoint access control items can be set
Private&Public: Private endpoint access control items and public endpoint access control can be set
Private Endpoint Access Control
Select
Private Endpoint Access Control Settings
If Use is selected, only specific resources in the same region and Account as the registry can be allowed to access it
Click Add under Private Access Allowed Resources to add resources that are allowed to access the registry through the private endpoint
If Use is not selected, access is allowed from resources in all subnets within the same region
Public Endpoint Access Control
Select
Public Endpoint Access Control Settings
If Enable is selected, only the specific IPs you register are allowed to access the registry
Click Add under Public Access Allowed IPs to add IPs and resources that are allowed to access the registry through the public endpoint
If Enable is not selected, access is allowed from all external IPs in an internet environment
Visibility
Selection
Anonymous access setting for registry read (Pull) operations
Selecting Public allows unauthenticated anonymous users to perform read operations (Anonymous Pull) on all registry content.
This setting can be set to Public only at service creation time.
Table. Container Registry Service Information Input Items
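The registry-name rule above can be expressed as a regular expression: the name must start with a lowercase letter and contain only lowercase letters and digits, 3 to 25 characters in total. Below is a small sketch for pre-checking names locally; the function name is ours for illustration, not part of the service.

```shell
# Validate a registry name against the documented rule:
# starts with a lowercase letter, lowercase letters and digits only, 3-25 chars.
valid_registry_name() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9]{2,24}$'
}

valid_registry_name "myregistry01" && echo "valid"
valid_registry_name "1registry" || echo "rejected: must start with a lowercase letter"
valid_registry_name "my-registry" || echo "rejected: hyphens are not allowed"
```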
Caution
If you do not enable private endpoint access control, the customer's registry may be exposed to other resources within Samsung Cloud Platform.
If you do not enable public endpoint access control, external IP access is possible in an internet environment, so the user's registry data may be exposed externally via the internet. If external access is not required, uncheck the Use checkbox to minimize security threats.
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Container Registry Additional Information Input Items
In the Summary panel, check the detailed information and estimated billing amount, then click the Create button.
When creation is complete, check the created resource on the Registry list page.
Check Container Registry detailed information
You can view and edit the full list of Container Registry resources and their detailed information. The Registry Details page consists of the Details, Tags, and Work History tabs.
To view the detailed information of the Container Registry, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Container Registry Service Home page.
Click the Registry menu on the Service Home page. You will be taken to the Registry List page.
On the Registry List page, click the resource (Registry) whose detailed information you want to view. You will be taken to the Registry Details page.
The Registry Details page displays the registry's status and detailed information, and consists of the Details, Tags, and Work History tabs.
Category
Detailed description
Registry Status
Status of the registry
Creating: Creating
Running: Creation complete/operating normally
Editing: Changing settings
Terminating: Deleting
Error: Error occurred
Unknown: Unknown
User Guide
CLI-based Registry Usage Guide
Service termination
Button to cancel the service
Table. Container Registry status information and additional features
Detailed Information
Registry list page allows you to view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In the Container Registry service, it means registry SRN
Resource Name
Resource Name
In the Container Registry service, it refers to the registry name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation DateTime
Date and time the service was created
Editor
User who modified the service information
Modification Date and Time
Date and time when service information was modified
Registry Name
Name of the registry
Bucket Name
The name of the Samsung Cloud Platform Object Storage bucket where the registry data is stored
Usage
Data usage of the Object Storage bucket for the registry
Endpoint
Access type for registry endpoint
Click the Edit icon to change settings
Private Endpoint
Private endpoint URL available within the Samsung Cloud Platform network
Used as an endpoint that provides compatibility with Docker and OCI Client Tool for executing Pull and Push client commands
Copy button to copy the URL
Public Endpoint
Public endpoint URL accessible from outside the Samsung Cloud Platform network over the internet
Private Endpoint Access Control
Private Endpoint Access Control Settings
Click the Edit icon to change whether access control is used, and add or delete accessible resources
If access control is set to Enabled, only specific resources in the same region and Account as the registry can access it
If access control is not set to Enabled, access is allowed from resources in all subnets within the same region
Public Endpoint Access Control
Public Endpoint Access Control Settings
Click the Edit icon to change whether access control is enabled, and add or remove accessible IPs and resources
If access control is set to Enabled, only the specific IPs you register are allowed to access the registry
If access control is not set to Enabled, external IP access is possible in an internet environment
Visibility
Anonymous access setting for registry read (Pull) operations
If set to Public, unauthenticated anonymous users are allowed read operations (Anonymous Pull) for all content of the registry
This setting can be set to Public only at service creation
Table. Container Registry Detailed Information Tab Items
Tag
On the Registry list page, you can view the tag information of the selected resource, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
Can check the tag’s Key and Value information
Up to 50 tags can be added per resource
When entering tags, search and select from the existing Key and Value list
Table. Registry tag tab items
Work History
You can view the operation history of the selected resource on the Registry list page.
Category
Detailed description
Work History List
Resource Change History
Work date and time, resource type, resource name, work details, work result, operator name, path information can be checked
To perform detailed search, click the Detailed Search button
Table. Work History Tab Items
Cancel Container Registry
You can cancel an unused Container Registry to reduce operating costs. However, cancelling the service may immediately stop running services, so thoroughly consider the impact of service interruption before proceeding with the cancellation.
Caution
If there are resources linked to the Registry, it cannot be deleted. After terminating the linked service displayed in the “cannot cancel service” popup, delete the Registry.
When you cancel the service, all data, including the bucket linked to the Registry, will be deleted. Be careful as the data cannot be recovered after deletion.
To cancel the Container Registry, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Container Registry Service Home page.
Click the Registry menu on the Service Home page. You will be taken to the Registry List page.
On the Registry List page, click the resource (Registry) you want to cancel. You will be taken to the Registry Details page.
Click Cancel Service on the Registry Details page.
To confirm termination, click the checkbox and enter the Registry name to delete.
If you enter the Registry name correctly, the Confirm button will be activated. Click the Confirm button.
When termination is completed, check on the Registry list page whether the resource has been terminated.
4.2.2.1 - Manage Repository
A repository is a logical management unit for images within a registry. Using a repository, you can set the default security policy for the images created under it.
Create Repository
To create a repository, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Container Registry Service Home page.
On the Service Home page, click the Repository menu. You will be taken to the Repository List page.
Click the Create Repository button on the Repository List page. You will be taken to the Create Repository page.
At the top of the Repository List page, click the Settings icon to select an existing registry, or click Create New to create a registry.
Enter the required information on the Create Repository page and select the detailed options.
In the Service Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Registry Name
Required
Select the registry in which to create the repository
If no registry has been created, you can create a new one via the Create New button
Repository Name
Required
Name of the repository to create
Enter 3 to 30 characters using lowercase English letters, numbers, and the special character (-) (the start and end must be lowercase English letters or numbers)
Table. Repository Service Information Input Items
In the Repository Basic Policy Input area, enter or select the required information.
Category
Required
Detailed description
Image Scan
Option
Automatic vulnerability scanning for images created in the repository, and scan exclusion policy settings
Ability to set a default scan policy applied when an image is created in the repository
If automatic scanning is set to Enabled, the image’s vulnerabilities are automatically checked when the image is pushed. In this case, the vulnerability scanning cost is charged
If the scan exclusion policy is set to Enabled, you can specify the inspection targets and vulnerabilities to exclude during image scanning
Option to exclude Language Package checks, Secret checks, and vulnerabilities without a Fix Version
Excludable vulnerabilities: you can select one of the following levels
Exclude vulnerabilities at or below the (None / Unknown / Negligible / Low / Medium / High / Critical) level
Image Pull Restriction
Option
Policy settings for using the image Pull restriction feature generated in the repository and its limit values
You can set the default Pull restriction policy applied when an image is created in the repository
If you set the unscanned image Pull restriction to Enabled, pulling images that have not been vulnerability scanned is not allowed
If you set the vulnerable image Pull restriction policy to Enabled, pulling images is not allowed when Critical or High level vulnerabilities exceeding the entered value are found. The input and selectable values for this policy are as follows
Critical: 1 (default) ~ 9,999,999
High: 1 (default) ~ 9,999,999
Exclude vulnerabilities without a Fix Version
If Enabled is selected, vulnerabilities without a Fix Version (when there is no patch version for the vulnerable package/library) are excluded from the Pull restriction policy
Image lock status
Option
You can set a lock to prevent deleting or updating any images inside the repository
If the repository’s image lock status is Lock, the individual image Lock/Unlock functions within the repository are disabled
Changing the image lock status of a repository that is in Lock state to Unlock enables the individual image Lock/Unlock functions
Pushing new images is allowed
Image Tag Deletion
Option
You can set an automatic image deletion policy for images stored in the repository
If you select Enabled for policy activation, the image deletion policy is applied
Untagged Image automatic deletion, Old Image automatic deletion items set to Enabled will apply the respective image deletion policies
Enter an automatic deletion period in the policy; the image will be automatically deleted after the specified period has passed since its initial push
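The vulnerable-image Pull restriction above can be pictured as a simple threshold check. The sketch below is illustrative only, not the service's implementation; it assumes a pull is denied once the Critical or High vulnerability count reaches the configured limit (whether the comparison is "at or above" or strictly "above" the limit should be confirmed against the service's actual behavior).

```shell
# Decide whether a pull is allowed, given an image's vulnerability counts and
# the configured Critical/High limits (range 1-9,999,999; default 1 each).
pull_allowed() {
  local critical=$1 high=$2 crit_limit=$3 high_limit=$4
  if [ "$critical" -ge "$crit_limit" ] || [ "$high" -ge "$high_limit" ]; then
    return 1   # deny: a threshold was reached
  fi
  return 0     # allow
}

pull_allowed 0 0 1 1 && echo "pull allowed"
pull_allowed 2 0 1 1 || echo "pull denied: Critical threshold reached"
```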
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Description
Select
Repository description
Enter repository description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Repository Additional Information Input Items
Reference
Repository default policy input items set the default (initial) policies of Images created in the Repository (they act as a policy template applied at Image creation).
These settings can be changed on the detail screen after the Repository is created. Images created after a change to the Repository default policy input items use the changed policy; the policies of Images created before the change are not changed.
The default policy set on the Image can be modified in the Image detail screen.
In the Summary panel, check the detailed information and estimated billing amount, then click the Create button.
When creation is complete, check the created resources on the Repository List page.
Check repository detailed information
You can view and edit the full repository list and detailed information. The Repository Details page consists of the Details, Tags, and Work History tabs.
To view repository details, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Container Registry Service Home page.
Click the Repository menu on the Service Home page. You will be taken to the Repository List page.
On the Repository List page, click the resource (Repository) whose detailed information you want to view. You will be taken to the Repository Details page.
The Repository Details page displays the repository's status and detailed information, and consists of the Details, Tags, and Work History tabs.
Category
Detailed description
Repository Status
Display repository status
Active: Available state
Deleting: Deleting state
Inactive: State not available due to failure during deletion (only deletion request possible)
Editing: State where settings are being modified or sub-resources (images, tags) within the repository are being deleted
User Guide
Repository Usage Guide
Commands to use images within the repository via CLI can be checked
Delete Repository
Button to delete the repository
Table. Status Information and Additional Functions
Detailed Information
Repository list page allows you to view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Repository, it means repository SRN
Resource Name
Resource Name
In Repository, it means repository name
Resource ID
Unique resource ID in the service
Creator
User who created the repository
Creation Time
Repository Creation Time
Editor
User who modified the repository
Modification Date/Time
Repository Modification Date/Time
Repository Name
Repository name created by the user
Registry Name
Name of the registry to which the repository is connected
Click on the resource name to go to the detail page
Description
User-entered description for the generated repository
Edit icon can be clicked to change settings
Image
Link to view list of stored images in repository
Image Scan
Automatic image vulnerability scanning generated in repository and scan exclusion policy settings
Set the default scan policy applied when an image is created in the repository (acts as a policy setting template applied at image creation)
Edit icon click to change automatic image vulnerability scan enablement, scan exclusion policy usage, and detailed policies
If automatic scanning is set to Enabled, the image’s vulnerabilities are automatically checked when the image is pushed. This setting applies only to images pushed after automatic scanning is enabled, and vulnerability checking costs are billed during automatic scanning.
If the scan exclusion policy is set to Enabled, you can specify inspection targets and vulnerabilities to exclude during image scanning as follows
Excludable inspection targets
Exclude Language Packages
Exclude Secrets
Exclude vulnerabilities without a Fix Version
Excludable vulnerabilities: select one of the following levels
Exclude vulnerabilities at or below (None / Unknown / Negligible / Low / Medium / High / Critical) level
Image Pull Restriction
Policy settings for whether to use the image Pull restriction feature and its limit values for images generated in the repository
You can set the default Pull restriction policy applied when an image is created in the repository (acts as a policy setting template applied at image creation)
Click the Edit icon to change whether the image Pull restriction feature is used and its limit values
If you set the unscanned image Pull restriction to Enabled, pulling images that have not been vulnerability scanned is not allowed
If you set the vulnerable image Pull restriction to Enabled, pulling images is not allowed when Critical or High level vulnerabilities exceeding the entered value are found. The input and selectable values for this policy are as follows
Critical: 1(default) ~ 9,999,999
High: 1(default) ~ 9,999,999
Exclude vulnerabilities without a Fix Version
If Enabled is selected, vulnerabilities without a Fix Version (i.e., when there is no patch version for the vulnerable package/library) are excluded from the Pull restriction policy
Image lock status
Lock can be set to prevent deleting or updating any images inside the repository
Click the Edit icon to change the image lock status
If the repository’s image lock status is Lock, the Lock/Unlock function for individual images within the repository is disabled
If the image lock status of a repository that is in Lock state is set to Unlock, the Lock/Unlock function for individual images becomes enabled
Pushing new images is allowed
Image Tag Deletion
Set automatic deletion policy for images stored in the repository
Edit icon can be clicked to change the image tag deletion policy
If the deletion policy is set to Enabled, the image tag deletion policy can be applied
Select Enabled for the Untagged Image automatic deletion, Old Image automatic deletion items of the deletion policy to apply those image deletion policies
Enter the automatic deletion period in the deletion policy; the image will be automatically deleted after the set period has passed since it was first pushed
Table. Repository Detailed Information Tab Items
Tag
On the Repository List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can view the Key and Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Repository Tag Tab Items
Work History
You can view the operation history of the selected resource on the Repository list page.
Category
Detailed description
Work History List
Resource Change History
Work date and time, resource type, resource name, work details, work result, operator name, path information can be checked
Table. Work History Tab Items
Delete Repository
Caution
If an Image exists in the Repository, you cannot delete the Repository. To delete the Repository, first delete all Images in that Repository, then delete the Repository.
To delete a repository, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Container Registry Service Home page.
On the Service Home page, click the Repository menu. Go to the Repository list page.
On the Repository List page, click the resource (Repository) you want to delete. You will be taken to the Repository Details page.
On the Repository Details page, click Delete Repository.
In the Delete Repository popup window, enter the Repository name.
If you enter the Repository name correctly, the Confirm button will be enabled. Click the Confirm button.
When deletion is complete, check on the Repository List page whether the resource has been deleted.
4.2.2.2 - Image and Tag Management
Images are the logical management unit of tags. Users can efficiently manage image versions using tags.
Create Image
To create an image, a repository must be created first.
For detailed information about creating a repository, refer to Repository Management.
Images are created by pushing images or OCI-standard artifacts to the registry endpoint using the CLI.
To push an image via the CLI, refer to the official documentation of the client tool you are using, or to Using CLI.
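The push/pull flow with the standard Docker CLI typically looks like the sketch below. The registry endpoint, repository, and tag here are hypothetical placeholders; substitute the private or public endpoint shown on your Registry Details page. The commands are echoed rather than executed so the sketch stays self-contained.

```shell
# Hypothetical values -- replace with your registry endpoint, repository, and tag.
REGISTRY="myregistry.example.com"
REPOSITORY="web/backend"
TAG="1.0.0"
IMAGE="${REGISTRY}/${REPOSITORY}:${TAG}"

# Typical Docker CLI flow against the registry endpoint:
echo "docker login ${REGISTRY}"            # authenticate to the registry
echo "docker tag backend:latest ${IMAGE}"  # name the local image for the registry
echo "docker push ${IMAGE}"                # upload the image (creates it in the repository)
echo "docker pull ${IMAGE}"                # retrieve the image elsewhere
```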
Check image detailed information
You can view and edit the full image list and detailed information. The Image Details page consists of the Details, Tags, and Delete Policy Test tabs.
To view the image details, follow the steps below.
All Services > Container > Container Registry menu, click. Go to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. Navigate to the Image list page.
Image list at the top of the page, click the Settings icon to select the Registry name and Repository name where the Image for viewing detailed information is stored.
If there is no desired item, click Create New to register the Registry and Repository, then you can select it.
Click the resource (Image) to view detailed information on the Image List page. You will be taken to the Image Details page.
The Image Details page displays the Image's status and detailed information, and consists of the Details, Tags, and Delete Policy Test tabs.
Category
Detailed description
Image Status
Represents the status of the image
Active: Available state
Deleting: Deleting state
Inactive: State where deletion failed and not usable (only deletion request possible)
Editing: State of modifying settings or deleting image sub-resources (tags)
User Guide
CLI-based Image Usage Guide
Image Delete
Button that deletes the image
Table. Image status information and additional functions
Detailed Information
On the Image list page, you can view the detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Creator
User who generated the image
Creation time
Image creation time
Editor
User who edited the image
Edit date and time
Date and time the image was edited
Image name
User-generated image name
Registry name
Registry name and view link of the repository where the image is stored
Pulls
Number of times the image was pulled
Repository Name
Name of the repository where the image is stored and view link
Description
User-entered description for the image
Click the Edit icon to edit the description
Image Scan
Image vulnerability automatic scan and scan exclusion policy settings
Set image scan policies to automatically check vulnerabilities of pushed images or specify inspection targets and vulnerabilities to exclude during image scanning
Click the Edit icon to change whether image vulnerability automatic scanning is enabled, whether the scan exclusion policy is used, and detailed policies
If image automatic scanning is set to Enabled, the image’s vulnerabilities are automatically checked when the image is pushed. This setting applies to images pushed after automatic scanning is enabled, and vulnerability checking costs are charged during automatic scans
If the scan exclusion policy is set to Enabled, you can specify inspection targets and vulnerabilities to exclude during image scanning as follows
Excludable inspection targets
Exclude Language Package
Exclude Secret
Exclude vulnerabilities without Fix Version
Excludable vulnerabilities: you can select one of the following levels
Exclude vulnerabilities at levels (None / Unknown / Negligible / Low / Medium / High / Critical) and below
Image Pull Restriction
Set usage of Image Pull Restriction feature and restriction values
Using the Image Pull Restriction feature limits the pulling of unscanned or vulnerable images to minimize security threats
Click the Edit icon to change whether the Image Pull Restriction feature is used and its restriction values
If the unscanned image Pull restriction is set to Enabled, pulling images that have not been vulnerability-checked is not allowed
If the vulnerable image Pull restriction is set to Enabled, pulling images is not allowed when Critical or High level vulnerabilities exceeding the entered value are found. The input and selectable values for this policy are as follows
Critical: 1 (default) ~ 9,999,999
High: 1 (default) ~ 9,999,999
Exclude vulnerabilities without a Fix Version
When selected as Enabled, vulnerabilities without a Fix Version (i.e., vulnerable packages/libraries lacking a patch version) are excluded from the Pull restriction policy
Image lock status
You can set a lock so that the selected image cannot be deleted or updated
Click the Edit icon to change the image lock status
If the image lock status is Lock, the image and all internal Tags are changed to a locked state and cannot be deleted or updated
If you change the image lock status from Lock to Unlock, the image and all internal Tags can be deleted or updated
Image Tag Deletion
Set automatic deletion policy for images stored in the repository
Click the Edit icon to change the image tag deletion policy
If the deletion policy is set to Enabled, the image tag deletion policy can be applied
Select Enabled for the Untagged Image automatic deletion and Old Image automatic deletion items of the deletion policy to apply those image deletion policies
Enter the automatic deletion period in the deletion policy; the image will be automatically deleted after the set period has passed since it was first pushed
If you delete the image, all tags within the image will be deleted as well.
To delete the Image, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. You will be taken to the Image list page.
Click the Settings icon at the top of the Image list page and select the Registry name and Repository name where the Image to be deleted is stored.
Click the resource (Image) to delete on the Image List page. You will be taken to the Image Details page.
Click the Delete Image button on the Image Details page.
When the Delete Image popup appears, click the Confirm button.
After deletion is complete, check on the Image List page whether the resource has been deleted.
Check detailed image tag information
To view detailed image tag information, follow these steps.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. Navigate to the Image list page.
Click the Settings icon at the top of the Image list page and select the Registry name and Repository name where the Image whose details you want to view is stored.
On the Image list page, click the resource (Image) whose details you want to view. You will be taken to the Image Details page.
Click the Tags tab, to the right of the Details tab at the top of the Image Details page. You will be taken to the Tags list page.
Column
Detailed description
Tags
Tag name of image Digest
A single image Digest can have multiple tag names
Digest
Image Digest value
Size
Image Digest size
Edit Timestamp
Image Digest (Tags) Edit Timestamp
Inspection Date and Time
Image Digest (Tags) Vulnerability Inspection Date and Time
Vulnerability Check Result
Image Digest (Tags) Vulnerability Check Result
Summary of vulnerability count information and a view result button are displayed
View Result button can be clicked to view detailed vulnerability analysis results for the image tags
Status
Status for image Digest (Tags)
Active: Normal usable state
Deleting: Deleting state
Inactive: Deletion failed and not usable (only delete request possible)
Copy URL
Copy endpoint URL for using image Digest
You can copy the private/public endpoint URL to use in commands for using image Digest
More button
Menu to select delete, edit, vulnerability check, and detailed usage guide for image Digest (Tags)
Delete: Delete the specified image Digest (Tags)
Edit Tags: Tag name of the image Digest can be edited in the Tags edit window
Vulnerability Check: Vulnerability check available for image Digest (Tags)
Detailed Usage Guide: Can view a guide for using image Digest (Tags) based on CLI
Tags Lock: Can set a lock so that selected image Tags cannot be deleted or updated
Tags Unlock: Can remove the lock to allow selected image Tags to be deleted or updated
Table. Tags list items
Reference
Image digests that are in an Untagged state without a tag name are displayed as None in the Tags field.
Detailed Information
In the Tags list on the Image details page, click the Tags entry of the image Digest whose details you want to view. The detailed information window for the image Digest (Tags) appears.
Column
Detailed description
Tag Information
Displays tag name, digest, creation date and time, modification date and time
Click the Copy button at the far right of the digest value to copy the digest value
Manifest Information
Displays manifest type and detailed content
Click Copy Manifest to copy the manifest value
Click Download to download the manifest as a JSON file
Table. Tags detailed information window items
If you check the information in the tag detail window and click Confirm, the window will close.
Delete image tags
Caution
If there are other tags referencing the selected tag, you cannot delete the tag. Delete the referencing tags first, then delete the tag.
To delete an image tag, follow these steps.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. Go to the Image list page.
Click the Settings icon at the top of the Image List page and select the Registry name and Repository name where the Image to view detailed information is stored.
Click the resource (Image) to view detailed information on the Image List page. You will be taken to the Image Details page.
Click the Tags tab on the right side of the detailed information tab at the top of the Image Details page. You will be taken to the Tags List page.
In the Tags list, select the checkbox located to the left of the tag you want to delete, then click Delete.
If you select checkboxes for multiple items, you can delete multiple tags at once, and you can select and delete up to 50 tags at a time.
You can delete tags one by one by clicking the Delete button inside the more options button located at the right end of the tag to be deleted.
When the Delete Tags popup window opens, click Confirm.
Once deletion is complete, check on the Tags list page whether the resource has been deleted.
Testing image tag deletion policy
To test the image tag deletion policy you set, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. Navigate to the Image list page.
Click the Settings icon at the top of the Image list page and select the Registry name and Repository name where the Image whose details you want to view is stored.
On the Image list page, click the resource (Image) whose details you want to view. You will be taken to the Image Details page.
Click the Delete Policy Test tab, to the right of the Details tab at the top of the Image Details page. You will be taken to the Delete Policy Test tab page.
On the Delete Policy Test tab page, click the Policy Test button of the Delete Target Tags item. The delete policy test will be executed.
When the delete policy test execution notification popup opens, click the Confirm button.
When the test execution request is completed, the phrase Deletion policy test execution request has been completed is displayed.
Check the test results when the deletion policy test is completed.
The Delete Target Tags item shows the image tags (digests) that are subject to the deletion policy.
4.2.2.3 - Managing Image Security Vulnerabilities
By using the image security vulnerability inspection feature, you can manually or automatically check the OS package security vulnerabilities of images stored in the Container Registry and the Secrets included in the image. Users can identify and remove known vulnerabilities (CVE) and Secrets based on the inspection results to prevent the use of unsafe images.
Vulnerability Inspection Support Information
Supported OS
Vulnerability inspection function supports checking libraries installed via package manager on the following OS.
Supported OS
Ubuntu
CentOS
Oracle
Debian
Alpine
AWS Linux
RHEL
Suse
VMware Photon
Table. Supported OS Types
Support Language
The vulnerability inspection feature supports checks for the following Language.
Support Language
Python
PHP
Node.js
.NET
Go
Dart
Table. Supported Language Types I (Libraries installed with the Language package manager)
Support Language
Java
Table. Supported Language Types II (libraries identified based on pom.properties and MANIFEST.MF files included in jar, war, par, ear type files)
Support Secret
The vulnerability inspection feature supports detection of the following types of Secrets included in images.
Support Secret
AWS access key
GitHub personal access token
GitLab personal access token
Asymmetric Private Key
Table. Supported Secret Types
Check Image Security Vulnerabilities (Manual)
To check image security vulnerabilities, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. Navigate to the Image list page.
Click the Settings icon at the top of the Image list page and select the Registry name and Repository name where the Image to be checked is stored.
On the Image list page, click the resource (Image) to check for security vulnerabilities. You will be taken to the Image Details page.
Click the Tags tab, to the right of the Details tab at the top of the Image Details page. You will be taken to the Tags tab page.
On the Tags tab page, click the More button at the far right of the tag you want to check for security vulnerabilities, then click Vulnerability Check.
When the vulnerability check notification popup opens, click the Confirm button.
When the inspection starts, the message Vulnerability inspection will be conducted is displayed.
When the inspection is finished, the Vulnerability Inspection Results item will display a summary of the inspection results and a View Results button. Clicking the View Results button will open a popup where you can see the detailed analysis results of Vulnerabilities by Image Digest (Tags).
Reference
Clicking the View Results button shows the detailed vulnerability analysis results for the image tag.
If a red exclamation mark icon (!) appears in the inspection date/time field after a vulnerability check, the vulnerability check list of the Container Registry service has been updated. Because the image Digest (Tags) needs to be checked against the new vulnerability items, we recommend clicking Vulnerability Check to re-run the check.
To check the vulnerability assessment results, follow the steps below.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
Click the Image menu on the Service Home page. Navigate to the Image list page.
Click the Settings icon at the top of the Image list page and select the Registry name and Repository name where the Image whose details you want to view is stored.
On the Image list page, click the resource (Image) to check for security vulnerabilities. You will be taken to the Image Details page.
Click the Tags tab, to the right of the Details tab at the top of the Image Details page. You will be taken to the Tags tab page.
On the Tags tab page, click the View Results button of the Vulnerability Check Results item of the tag you want to check.
Check the results in the Vulnerabilities by Image Tags popup window, which shows the detailed analysis results.
Check inspection results by vulnerability unit
The Vulnerabilities tab of the Vulnerabilities by Image Tag detail page lets you view image security vulnerability inspection results by individual vulnerability.
Item
Detailed description
Vulnerability Check
Vulnerability Check button
Clicking the button starts the vulnerability check
However, if the tag status is Inactive, Vulnerability Check button is not activated
Inspection Date/Time
Vulnerability Inspection Date/Time
Distribution
OS name and version of the image Digest (Tags) under inspection
Refer to the supported OS list
Total number of vulnerabilities
Vulnerability assessment summary
The total number of detected vulnerabilities and the count by severity are displayed as a graph
Vulnerabilities are classified into six levels by severity (Critical, High, Medium, Low, Negligible, Unknown)
Table. Summary of Vulnerability Inspection Results
Vulnerability tab allows you to view the list of all discovered vulnerabilities.
Item
Detailed description
CVE
External link to verify the detected vulnerability ID (CVE ID) and detailed information about the vulnerability
CVE (Common Vulnerabilities and Exposures)
Severity
Severity of detected vulnerability
CVSS
CVSS (Common Vulnerability Scoring System) based vulnerability score
Category
Inspection target type of detected vulnerabilities
OS package or Language package is displayed
OS/Language
OS or Language package type of detected vulnerability
Refer to the list of supported OSes and supported Languages
Package
Name of package with discovered vulnerability
Current version
Current version of the package where vulnerability was found (vulnerable version)
Fixed version
Version where the vulnerability of the discovered package has been addressed
Modification status
Existence of a version with the vulnerability fixed for the package where the vulnerability was discovered (existence of a vulnerability patch version)
Expand button
View vulnerability detailed information
Expand button click displays detailed information about the vulnerability at the bottom
You can view the Description and Vectors results for the vulnerability. Detailed explanations for each Vector value are provided as tooltips
The detailed information opened with the Expand button can be closed by clicking the Collapse button
Table. Vulnerability List Item
Check inspection results by package unit
Clicking the Package tab on the Vulnerabilities by Image Tag detail page takes you to the package-specific vulnerability page. In the Package tab, you can view the image security vulnerability check results by package.
Item
Detailed Description
Vulnerability Check
Vulnerability Check button
Clicking the button starts the vulnerability check
However, if the tag status is Inactive, Vulnerability Check button is not activated
Inspection Date/Time
Vulnerability Inspection Date/Time
Distribution
OS name and version of the image Digest (Tags) to be inspected
Refer to the supported OS list
Total package count
Summary of total package information
The total number of discovered packages and the number of packages based on vulnerability presence are displayed as a graph.
Table. Summary items of package vulnerability inspection results
Package tab allows you to view the full package list, as well as the list of packages with discovered vulnerabilities and the list of packages without discovered vulnerabilities.
Item
Detailed description
Category
Type of discovered package
Display OS package or Language package
OS/Language
Detailed OS or Language type of the discovered package
Refer to the list of supported OS and supported Language
Package
Detected package name
Version
Current version of the package
Vulnerability Inspection Result
Summary Information of Number of Vulnerabilities Contained in Package
Type
OS or Language type and details of the discovered package
Table. Package List Items
Check results by secret unit
Clicking the Secret tab on the Vulnerabilities by Image Tag detail page takes you to the secret-specific vulnerability page. You can view the image security vulnerability check results by secret.
Item
Detailed description
Vulnerability Check
Vulnerability Check button
Click the button to start vulnerability check
However, if the tag status is Inactive, the Vulnerability Check button will not be activated
Inspection date/time
Vulnerability inspection date/time
Distribution
OS name and version of the target image Digest (Tags)
Refer to the supported OS list
Total number of vulnerabilities
Vulnerability result summary
The total number of detected vulnerabilities and the number of vulnerabilities by severity are displayed as a graph
Vulnerabilities are classified into six levels by severity (Critical, High, Medium, Low, Negligible, Unknown)
Table. Summary of secret vulnerability inspection results
In the Secret tab, you can view the full list of secret files, as well as the lists of files with discovered vulnerabilities and files without discovered vulnerabilities.
Item
Detailed description
File
File name of detected secret
Category
Detected secret type
Refer to the supported secret list
Severity
Detected secret severity
Match
Secret match information in detected file
Table. Secret list items
4.2.2.4 - Managing Image Tag Deletion Policies
The user can register and manage the image tag deletion policy.
Managing image tag deletion policies
The image tag deletion policy refers to the policy that automatically deletes an image when a certain period of time has passed since it was first pushed to the repository. If the image tag deletion policy is enabled, the image tags (digest) stored in the Container Registry will be automatically deleted according to the set deletion policy.
Notice
After enabling the deletion policy and setting it to use, the image tag (digest) to which the deletion policy is first applied will be deleted within a maximum of 3 days (72 hours). Subsequent image tags (digests) to which the deletion policy is applied will be deleted within a maximum of 1 day (24 hours).
Image tags (digests) to which the deletion policy is applied are permanently deleted and cannot be recovered.
Support deletion policy information
This section describes the supported image tag deletion policies.
Support Policy
It supports a policy that allows you to set automatic deletion and period for image tags (digest).
Support Policy
Untagged Image
Old Image
Table. Image tag deletion support policy type
Set the image tag (digest) deletion policy
To set the image tag (digest) deletion policy, follow these steps.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
On the Service Home page, click the Image menu. You will be taken to the Image list page.
Click the gear button at the top of the Image list page. The Registry/Repository Settings popup window opens.
In the Registry/Repository Settings popup window, select the Registry name and Repository name where the Image to apply the deletion policy to is stored, and click the Confirm button.
On the Image list page, click the resource (Image) to apply the deletion policy to. You will be taken to the Image Details page.
On the Details tab of the Image Details page, click the Edit icon of the Image Tag Deletion item. The Edit Image Tag Deletion popup window opens.
In the Edit Image Tag Deletion popup window, enter and select the necessary information and the activation status, and click the Confirm button.
If deletion policy activation is set to Enabled, the image tag (digest) is automatically deleted according to the configured deletion policy.
Select the deletion policy to apply and enter the period after which an image is automatically deleted, counted from when it was first pushed to the repository.
When the update notification popup window opens, click the Confirm button.
When the modification is complete, the message Image tag deletion modification was successful is displayed.
Reference
You can also set a deletion policy in the Repository that plays the role of a template for the Image. When setting a deletion policy in the Repository, the set deletion policy is applied equally to all Images stored inside.
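As an illustration of how the Old Image period works (the timestamps and retention value below are made up for the sketch, not platform defaults), an image tag becomes a deletion target once the time since its first push exceeds the configured period:

```shell
# Hypothetical sketch of the "Old Image" rule — all values are made up.
FIRST_PUSH_EPOCH=1735689600   # assumed first-push time (Unix seconds)
NOW_EPOCH=1740787200          # assumed evaluation time, 59 days later
RETENTION_DAYS=30             # configured automatic-deletion period

# Compute the tag's age in whole days since the first push.
AGE_DAYS=$(( (NOW_EPOCH - FIRST_PUSH_EPOCH) / 86400 ))
if [ "$AGE_DAYS" -gt "$RETENTION_DAYS" ]; then
  VERDICT="delete"   # past the period: subject to automatic deletion
else
  VERDICT="keep"
fi
echo "$AGE_DAYS days old -> $VERDICT"
```

With these sample values the tag is 59 days old, past the 30-day period, so it would be deleted on the next policy run.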
Image tag (digest) deletion policy test
To test the image tag (digest) deletion policy, follow these steps.
Click the All Services > Container > Container Registry menu. You will be taken to the Service Home page of Container Registry.
On the Service Home page, click the Image menu. You will be taken to the Image list page.
Click the gear button at the top of the Image list page. The Registry/Repository Settings popup window opens.
In the Registry/Repository Settings popup window, select the Registry name and Repository name where the Image to test the deletion policy on is stored, and click the Confirm button.
On the Image list page, click the resource (Image) to test the deletion policy on. You will be taken to the Image Details page.
On the Image Details page, click the Deletion Policy Test tab. You will be taken to the Deletion Policy Test tab page.
On the Deletion Policy Test tab page, click the Policy Test button at the bottom of the Deletion Target Tags item to test the configured deletion policy.
When the deletion policy test notification popup window opens, click the Confirm button.
When the test execution request is completed, the message The deletion policy test execution request has been completed is displayed.
After the test is completed, the image tags (digest) that are the target of the deletion policy will be displayed in the Deletion Target Tags section.
4.2.2.5 - Using Container Registry with CLI
This explains how to log in to the Container Registry using the CLI command and manage Container images and Helm charts.
Managing Container Images with CLI
You can log in to the Container Registry and push or pull container images using the CLI command.
Logging in to Container Registry
The user can log in to the Container Registry using the authentication key.
Reference
To log in to Container Registry, you need LoginContainerRegistry permission for the registry you want to use. For more information on policy and permission settings, see Management > IAM > Policy.
Logging in with an authentication key
Logs in using the AccessKey and SecretKey of the authentication key and the registry endpoint.
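For example (the registry endpoint below is a hypothetical placeholder, and the sketch only assembles and prints the command since a real registry is needed to run it): Docker logs in with the AccessKey as the username and the SecretKey as the password.

```shell
# Hypothetical values — substitute your own authentication key and registry endpoint.
ACCESS_KEY="ACCESS_KEY_EXAMPLE"
ENDPOINT="example-registry.samsungsdscloud.com"

# --password-stdin reads the SecretKey from stdin, keeping it out of shell history.
LOGIN_CMD="docker login ${ENDPOINT} --username ${ACCESS_KEY} --password-stdin"
echo "$LOGIN_CMD"
```

In practice, pipe the SecretKey into the printed command, e.g. `echo "$SECRET_KEY" | docker login … --password-stdin`.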
Registry endpoint: can be found on the Container Registry details page.
To log in with an authentication key, you must create an authentication key on the IAM > Authentication Key Management page and set the authentication method to Authentication Key Authentication in the Security Settings.
Before modifying the security settings, check the notice about the authentication key authentication method at the top of the Modify Authentication Key Security Settings popup window.
For more information on how to create an authentication key and set up authentication key authentication, see Management > IAM > Managing Authentication Keys.
Pushing Images
To push an image to the registry, please refer to the following command.
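A typical Docker-based push (the endpoint, repository, image, and tag below are hypothetical placeholders, not values from this guide) tags the local image with the registry endpoint and repository path, then pushes it. The sketch only assembles and prints the commands, since pushing requires a live registry and login session:

```shell
# Hypothetical values — replace with your registry endpoint, repository, image, and tag.
ENDPOINT="example-registry.samsungsdscloud.com"
REPOSITORY="my-repo"
IMAGE="hello-world"
TAG="v1"
TARGET="${ENDPOINT}/${REPOSITORY}/${IMAGE}:${TAG}"

echo "docker tag ${IMAGE}:${TAG} ${TARGET}"   # retag the local image with the registry path
echo "docker push ${TARGET}"                  # upload the image to the repository
```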
To push an image to the registry, you need LoginContainerRegistry permission for the registry to be used and PushRepositoryImages permission for the repository.
For more information about policy and permission settings, see Management > IAM > Policy.
Image Pulling
To pull an image from the registry, please refer to the following command.
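A typical Docker-based pull mirrors the push naming (again, all names below are hypothetical placeholders); the sketch only assembles and prints the command:

```shell
# Hypothetical values — replace with your registry endpoint, repository, image, and tag.
ENDPOINT="example-registry.samsungsdscloud.com"
REPOSITORY="my-repo"
IMAGE="hello-world"
TAG="v1"

# Pull the image by its full registry path.
PULL_CMD="docker pull ${ENDPOINT}/${REPOSITORY}/${IMAGE}:${TAG}"
echo "$PULL_CMD"
```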
To pull an image from the registry, you need LoginContainerRegistry permission for the registry to be used and PullRepositoryImages permission for the repository.
For more information about policy and permission settings, see Management > IAM > Policy.
Managing Helm Charts with CLI
You can log in to the Container Registry using the CLI command and push or pull the Helm chart.
Reference
Container Registry supports Helm v3.8.1 and above.
Logging in to Container Registry
The user can log in to the Container Registry using the authentication key.
Reference
To log in to Container Registry, you need LoginContainerRegistry permission for the registry you want to use. For more information about policy and permission settings, see Management > IAM > Policy.
Logging in with an authentication key
Logs in using the AccessKey, SecretKey of the authentication key and the registry endpoint.
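For example (the endpoint below is a hypothetical placeholder; the sketch only prints the command): Helm v3.8+ logs in to an OCI registry with `helm registry login`, using the AccessKey as the username and the SecretKey as the password.

```shell
# Hypothetical values — substitute your own authentication key and registry endpoint.
ACCESS_KEY="ACCESS_KEY_EXAMPLE"
ENDPOINT="example-registry.samsungsdscloud.com"

# --password-stdin reads the SecretKey from stdin, keeping it out of shell history.
LOGIN_CMD="helm registry login ${ENDPOINT} --username ${ACCESS_KEY} --password-stdin"
echo "$LOGIN_CMD"
```

In practice, pipe the SecretKey into the printed command, e.g. `echo "$SECRET_KEY" | helm registry login … --password-stdin`.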
Registry endpoint: can be found on the Container Registry details page.
To log in with an authentication key, you must create an authentication key on the IAM > Authentication Key Management page and set the authentication method to Authentication Key Authentication in the Security Settings.
Before modifying the security settings, check the notice about the authentication key authentication method at the top of the Modify Authentication Key Security Settings popup window.
For more information on how to create an authentication key and set up authentication key authentication, see Management > IAM > Managing Authentication Keys.
Chart Push
To push a chart to the registry, please refer to the following command.
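A sketch of the flow described below (the registry endpoint and local chart directory are assumptions; the mychart repository, hello-world chart, and 0.1.0 tag come from this section): package the chart, then push the archive to the repository path — the chart version in Chart.yaml becomes the tag. The sketch only assembles and prints the commands:

```shell
# Hypothetical endpoint — replace with your registry endpoint.
ENDPOINT="example-registry.samsungsdscloud.com"

# Packaging a chart whose Chart.yaml has version 0.1.0 produces hello-world-0.1.0.tgz;
# pushing it to the mychart repository stores it as mychart/hello-world:0.1.0.
echo "helm package ./hello-world"
PUSH_CMD="helm push hello-world-0.1.0.tgz oci://${ENDPOINT}/mychart"
echo "$PUSH_CMD"
```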
As shown in the example, writing and executing the command saves (uploads) the hello-world chart to the mychart repository and applies the 0.1.0 tag.
To push charts to a registry, you need the LoginContainerRegistry permission for the registry you want to use and the PushRepositoryImages permission for the repository.
For more information about policy and permission settings, see Management > IAM > Policy.
Chart Pulling
To pull charts from the registry, please refer to the following command.
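A sketch of the pull described below (the registry endpoint is an assumption; the mychart repository, hello-world chart, and 0.1.0 tag come from this section); the sketch only assembles and prints the command:

```shell
# Hypothetical endpoint — replace with your registry endpoint.
ENDPOINT="example-registry.samsungsdscloud.com"

# Pull the chart stored as mychart/hello-world with tag (chart version) 0.1.0.
PULL_CMD="helm pull oci://${ENDPOINT}/mychart/hello-world --version 0.1.0"
echo "$PULL_CMD"
```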
As shown in the example, writing and executing the command downloads the hello-world chart saved with the 0.1.0 tag from the mychart repository.
To pull charts from a registry, you need the LoginContainerRegistry permission for the registry you want to use and the PullRepositoryImages permission for the repository.
For more information about policy and permission settings, see Management > IAM > Policy.
4.2.3 - API Reference
API Reference
4.2.4 - CLI Reference
CLI Reference
4.2.5 - Release Note
Container Registry
2026.03.19
FEATURE OCI Distribution Spec. compatibility secured, image vulnerability check feature expanded
Container Registry feature changes
Improved user registry by securing OCI(Open Container Initiative) Distribution Spec. v1.1.1 compatibility for registry.
Expanded provision by adding OS and Language types subject to container image vulnerability checks.
2025.12.18
FEATURE Image tag deletion policy added, Public Endpoint access control IP Validation improved
Container Registry feature changes and improvements
Added image tag deletion policy feature based on count criteria.
Improved Public Endpoint access control IP input value Validation according to Firewall product IP range constraints.
2025.10.23
FEATURE Image tag deletion policy activation item added, ServiceWatch integration supported
Container Registry feature changes
Provided deletion policy activation setting feature for image tag deletion items.
Provided log collection feature based on ServiceWatch integration.
2025.07.01
FEATURE Self-encryption / S3 API compatible bucket-based Container Registry, public endpoint provision, private endpoint access control target added, Image Life Cycle Policy supported
Container Registry feature changes
Provided Container Registry service based on Object Storage with self-encryption / S3 API compatibility issue patches applied.
Provided public endpoint and access control features for the registry.
Added Multi-Node GPU Cluster product to private endpoint access control targets for the registry.
Provided automatic deletion policy setting feature for Repository and stored Images and their tags(digests).
2025.02.27
FEATURE Image Lock feature and monitoring, VPC Endpoint integration added
Container Registry feature changes
Provided Lock feature for Images stored in the registry.
Provided monitoring feature for the registry through integration with Cloud Monitoring product.
Provided integration with VPC Endpoint.
Samsung Cloud Platform common feature changes
Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.11.28
NEW Container Registry service temporary version release
Container Registry is a service that provides a registry and repository for easy storage, management, and sharing of container images and OCI(Open Container Initiative) standard artifacts.
Released as a temporary version, and migration to the official version is planned when the encryption scheme is updated.
5 - Networking
Provides a stable and user-friendly network operation environment optimized for various cloud environments of customers.
5.1 - VPC
5.1.1 - Overview
Service Overview
Samsung Cloud Platform provides VPC service to support the use of logically isolated customer-dedicated private network spaces in the cloud environment. VPC (Virtual Private Cloud) is a service that provides logically isolated customer-dedicated private network spaces in the cloud environment. You can create General Subnets for public or private use, and Local Subnets for server-to-server communication according to your purpose. You can freely choose NAT Gateway and Internet Gateway to configure various networks. You can create multiple VPCs and operate them independently. You can configure connections between VPCs through VPC Peering.
Service Architecture
Figure. VPC Architecture
Components
Subnet
A Subnet is an IP address range within a VPC. It allows you to subdivide the VPC network according to purpose and scale: General Subnets can be configured for public or private use, and Local Subnets are provided for server-to-server communication only.
General Subnet Create/View/Delete: This is the subnet created by default when creating a VPC, and you use the subnet according to your purpose. For example, you can distinguish and use it as a Public Subnet that can access the internet and a Private Subnet that cannot access the internet.
VPC Endpoint Subnet Create/View/Delete: You can create an entry point to the VPC that allows access to Samsung Cloud Platform through a private connection from an external network connected to the VPC.
Local Subnet Create/View/Delete: A subnet that allows only direct connections between Virtual Servers or between Bare Metal Servers, without connecting to other subnets or allowing external access. Only Virtual Server-to-Virtual Server settings within the VPC are possible.
Subnet Types
A subnet (sub-network) refers to a subdivided IP address area used within an IP network.
Subnet types are divided according to how routing for the subnet is configured.
Type
Description
Public Subnet
Can configure a subnet that can access the internet as a General Subnet
Private Subnet
Can configure a subnet that cannot access the internet as a General Subnet
VPC Endpoint Subnet
Can configure a subnet that can be used as a VPC Endpoint
Local Subnet
Can configure a subnet that cannot connect to other subnets or external access
Table. Subnet Types
Internet Gateway
You can create an Internet Gateway and connect it to a VPC, view detailed information, or delete unused Internet Gateways. You can connect VPC resources to the internet using the Internet Gateway. You can assign a Public IP to instances and load balancers that can be connected from the outside by connecting to the internet.
NAT Gateway
You can create a NAT Gateway and connect it to a subnet, view detailed information, or delete unused NAT Gateways. To create a NAT Gateway for a subnet, you must first create an Internet Gateway and connect it to the VPC. When you create a NAT Gateway, internet access is allowed for all resources belonging to the subnet; apply firewall rules to restrict internet access. A NAT Gateway can be created for General subnets and maps one representative public IP so that Virtual Servers without a public IP of their own can use outbound internet access.
Public IP
Public IP is a service that allocates a desired public IP from Samsung Cloud Platform's available Public IP Pool and assigns it to Compute resources. Reserve and assign a Public IP if you want to keep the same IP address across instance stops and starts; the assigned IP does not change even when the Compute resource is rebooted.
Port
Provides a connection point to connect a single device, such as a server’s NIC, to a network. This allows additional devices beyond the default NIC.
VPC Endpoint
Provides an entry point to the VPC that allows access to Samsung Cloud Platform through a private connection from an external network connected to the VPC.
VPC Peering
You can communicate via IP through a 1:1 private route between VPCs. By default, peering between VPCs of the same account is provided, and only one connection between different accounts is allowed.
Private NAT
Allows Compute resources within a VPC to communicate with customer networks over Direct Connect by mapping customer network IPs.
Transit Gateway
Transit Gateway is a gateway service that easily connects customer networks and Samsung Cloud Platform’s networks and acts as a connection hub for multiple VPCs within the cloud environment. Through Transit Gateway, you can configure various network topologies as desired. In addition, you can thoroughly manage security by providing independent firewall configuration and routing functions for each connected network section.
PrivateLink
It is a service that connects a private path between the VPC and SCP services without exposing internal data of Samsung Cloud Platform to the internet.
PrivateLink Service is for service providers, and PrivateLink Endpoint is for service users.
Constraints
Samsung Cloud Platform’s VPC limits the number of VPCs and Subnets created as follows.
Category
Default Quota
Description
VPC
5
Default VPC creation limit per account
VPC IP Range
6
IP range creation limit per VPC (default 1 + additional 5)
VPC Peering
5
VPC Peering creation limit per account
Subnet
3
Default Subnet creation limit per VPC
Private NAT
3
Default Private NAT creation limit per VPC
Transit Gateway
3
Transit Gateway creation limit per account
VPC to Transit Gateway Connection
5
VPC connection limit per Transit Gateway (only same account can be connected)
Table. VPC Constraints
Prerequisites
VPC has no prerequisites.
5.1.1.1 - ServiceWatch Metrics
VPC - Internet Gateway sends metrics to ServiceWatch. Basic monitoring metrics are collected at 5-minute intervals.
Note
For how to view metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
Internet Gateway
The following are the basic metrics for the namespace Internet Gateway.
The metrics with bold metric names below are metrics selected as key metrics among the basic metrics provided by Internet Gateway.
Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch.
For each metric, the user guide informs which statistical value is meaningful when querying that metric, and the statistical value marked in bold among meaningful statistics is the key statistical value. In the service dashboard, you can view key metrics through key statistical values.
Performance Item
Description
Unit
Meaningful Statistics
Network In Total Bytes_Internet
Cumulative traffic volume from Internet Gateway → VPC
Bytes
Sum
Average
Maximum
Minimum
Network Out Total Bytes_Internet
Cumulative traffic volume from VPC → Internet Gateway
Bytes
Sum
Average
Maximum
Minimum
Network In Total Bytes_Internet_Delta
Traffic volume from Internet Gateway → VPC over the last 5 minutes (Internet)
Bytes
Sum
Average
Maximum
Network Out Total Bytes_Internet_Delta
Traffic volume from VPC → Internet Gateway over the last 5 minutes (Internet)
Bytes
Sum
Average
Maximum
Minimum
Table. VPC - Internet Gateway Basic Metrics
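The `_Delta` metrics above report traffic per 5-minute window, while the base metrics are cumulative counters. If you export the cumulative values and want the per-interval figures yourself, the derivation is a simple difference between consecutive samples. The following is an illustrative sketch (the `deltas` helper is not an SCP or ServiceWatch API):

```python
def deltas(cumulative: list[int]) -> list[int]:
    """Derive per-interval traffic (e.g. per 5 minutes) from cumulative
    byte counters, as the _Delta metrics report."""
    return [curr - prev for prev, curr in zip(cumulative, cumulative[1:])]

# Cumulative Network In Total Bytes sampled every 5 minutes:
samples = [1_000, 1_500, 2_600, 2_600]
print(deltas(samples))  # [500, 1100, 0]
```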
5.1.2 - How-to guides
Users can create VPC services by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Create VPC
You can create and use VPC services in the Samsung Cloud Platform Console.
To create a VPC, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Create VPC button on the Service Home page. You will be redirected to the Create VPC page.
Enter or select the required information in the Service Information section.
Category
Required
Description
VPC Name
Required
Name of the VPC to create
Enter 3-20 characters using uppercase/lowercase letters and numbers
IP Range
Required
IP range to use
Enter in CIDR format with a prefix length between /16 and /28
Example: 192.168.0.0/24
Description
Optional
Enter description for VPC
Table. VPC Service Information Input Items
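The IP Range rule above (CIDR notation with a prefix length between /16 and /28) can be checked locally before submitting the form. The sketch below uses Python's standard `ipaddress` module; the `is_valid_vpc_range` helper is illustrative, not an SCP API:

```python
import ipaddress

def is_valid_vpc_range(cidr: str) -> bool:
    """Check a candidate VPC IP range against the console rules:
    well-formed CIDR with a prefix length between /16 and /28."""
    try:
        net = ipaddress.ip_network(cidr, strict=True)
    except ValueError:
        return False  # not a valid CIDR (e.g. host bits set)
    return 16 <= net.prefixlen <= 28

print(is_valid_vpc_range("192.168.0.0/24"))  # True: within /16 ~ /28
print(is_valid_vpc_range("10.0.0.0/8"))      # False: broader than /16
```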
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. VPC Additional Information Input Items
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the VPC List page.
View VPC Details
VPC services allow you to view and modify the entire resource list and detailed information. The VPC Details page consists of Details, IP Range Management, Tags, Operation History tabs.
To view VPC details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC menu on the Service Home page. You will be redirected to the VPC List page.
Click the resource for which you want to view detailed information on the VPC List page. You will be redirected to the VPC Details page.
The VPC Details page displays status information and additional feature information, and consists of Details, IP Range Management, Tags, Operation History tabs.
Category
Description
Status
VPC status
Active: Operating normally
Deleting: Deletion in progress
Creating: Creation in progress
Error: Current status cannot be confirmed
If it occurs continuously, contact the registered administrator
Terminate Service
Button to terminate the service
Since terminating the service may immediately stop the operating service, proceed with the termination operation after fully considering the impact caused by service interruption
Table. VPC Status Information and Additional Features
Details
On the VPC List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In VPC, it means VPC SRN
Resource Name
VPC name
Resource ID
Unique resource ID of VPC
Creator
User who created the VPC
Created At
Date and time when VPC was created
Modifier
User who modified VPC information
Modified At
Date and time when VPC information was modified
VPC Name
VPC resource name
VPC ID
VPC unique ID
External Connection
Information about resources connected externally
IP Range
VPC IP range
Description
VPC description
Can modify description by clicking Edit icon
Table. VPC Details Tab Items
IP Range Management
On the VPC List page, you can view the IP range information connected to the selected resource and add IP ranges.
Category
Description
IP Range
Added IP range information
Created At
Date and time when IP range was added
Add IP Range
Can add IP range
Enter a CIDR range with a prefix length between /16 and /28
Example: 192.168.0.0/16
Table. VPC IP Range Management Tab Items
Note
When adding an IP range to a VPC, you cannot add it if it falls under the following conditions:
IP range currently in use in the VPC
Range added with destination as peer VPC in VPC Peering rules connected to the current VPC
Range added with destination as remote in Direct Connect rules connected to the current VPC
Range added with destination as remote in Transit Gateway rules connected to the current VPC
NAT IP range in use in Private NAT connected to the current VPC
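All of the conditions above reduce to one rule: the new range must not overlap any range already reserved by the VPC or its connections. A local pre-check can be sketched with the standard `ipaddress` module (the `can_add_ip_range` helper and the sample ranges are illustrative, not an SCP API):

```python
import ipaddress

def can_add_ip_range(candidate: str, ranges_in_use: list[str]) -> bool:
    """Return False if the candidate range overlaps any range already in
    use (existing VPC ranges, VPC Peering / Direct Connect / Transit
    Gateway destinations, or Private NAT IP ranges)."""
    new = ipaddress.ip_network(candidate)
    return not any(new.overlaps(ipaddress.ip_network(r)) for r in ranges_in_use)

in_use = ["192.168.0.0/16", "10.1.0.0/24"]
print(can_add_ip_range("192.168.10.0/24", in_use))  # False: inside 192.168.0.0/16
print(can_add_ip_range("10.2.0.0/24", in_use))      # True: no overlap
```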
Tags
On the VPC List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. VPC Tags Tab Items
Operation History
On the VPC List page, you can view the operation history of the selected resource.
Table. VPC Operation History Tab Detailed Information Items
Terminate VPC
You can reduce operating costs by terminating unused VPCs.
Warning
VPC cannot be terminated if there are connected Subnet, Internet Gateway, or Direct Connect resources.
VPC service can only be terminated when the status is Active or Error.
Terminating the service may immediately stop the operating service. Proceed with the termination operation after fully considering the impact caused by service interruption.
To terminate a VPC, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC menu on the Service Home page. You will be redirected to the VPC List page.
Select the resource to terminate on the VPC List page and click the Terminate Service button.
After termination is complete, verify that the resource has been terminated on the VPC List page.
5.1.2.1 - Subnet
Create Subnet
You can create and use VPC Subnet services in the Samsung Cloud Platform Console.
To create a Subnet, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Create Subnet button on the Service Home page. You will be redirected to the Create Subnet page.
Enter or select the required information in the Service Information section.
Category
Required
Description
Subnet Type
Required
Select subnet type
General: Can configure Public and Private
Local: Can specify by selecting between Virtual Server and Bare Metal Server
Local Subnet is a subnet for server-to-server communication only and cannot communicate externally
VPC Endpoint: Can configure VPC Endpoint
VPC Name
Required
Select the VPC to connect the subnet from the list of currently created VPCs
Click + Create New to create a VPC and then select
VPC IP Range
Optional
Automatically enters the CIDR range of the selected VPC
Subnet Name
Required
Name of the Subnet to create
Enter 3-20 characters using uppercase/lowercase letters and numbers
IP Range
Required
IP range to use
Enter in CIDR format with a prefix length between /16 and /28
Example: 192.168.0.0/24
The IP range must not overlap with IP ranges already in use in the VPC (other subnets)
Gateway IP
Required
Displays the Gateway IP address of the Subnet
The first IP of the entered IP range is automatically entered
Cannot be modified after service creation
Table. Subnet Service Information Input Items
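The table above states that the Gateway IP is auto-filled with the first IP of the entered range, and the Details tab later lists the DHCP IP as the second IP. Assuming "first IP" means the first usable host address (an interpretation, not confirmed by the guide), the derivation looks like this with the standard `ipaddress` module:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.0.0/24")
hosts = list(subnet.hosts())  # usable host addresses, excluding network/broadcast

# Assumption: Gateway IP = first usable host, DHCP IP = second usable host.
gateway_ip = hosts[0]
dhcp_ip = hosts[1]
print(gateway_ip)  # 192.168.0.1
print(dhcp_ip)     # 192.168.0.2
```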
Enter or select the required information in the Additional Information section.
Category
Required
Description
Description
Optional
Enter description for Subnet
IP Allocation Range
Optional
Can set range within the IP range to use
Select from entire IP range or individual specification
Subnet child resources are assigned IPs within the entered entire IP range or the range individually specified by the user
When selecting individual specification, enter the start IP address and end IP address
DNS Name Server
Optional
Enter DNS Name Server IP after selecting Enable
Host Route
Optional
Enter host route after selecting Enable
Enter destination IP range and Next Hop IP address
Destination IP ranges must not overlap with each other
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. Subnet Additional Information Input Items
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the Subnet List page.
View Subnet Details
Subnet services allow you to view and modify the entire resource list and detailed information. The Subnet Details page consists of Details, Virtual IP Management, Tags, Operation History tabs.
To view Subnet details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Subnet button on the Service Home page. You will be redirected to the Subnet List page.
Click the resource for which you want to view detailed information on the Subnet List page. You will be redirected to the Subnet Details page.
The Subnet Details page displays status information and additional feature information, and consists of Details, Virtual IP Management, Tags, Operation History tabs.
Category
Description
Status
Subnet status
Creating: Creation in progress
Active: Operating normally
Editing: Modification in progress
Deleting: Deletion in progress
Failed: Failed to create
Error: Current status unknown
If it occurs continuously, contact the registered administrator
Delete Subnet
Subnet deletion button
Table. Subnet Status Information and Additional Features
Details
On the Subnet List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In Subnet, it means Subnet SRN
Resource Name
Subnet resource name
Resource ID
Unique resource ID in the service
Creator
User who created the Subnet
Created At
Date and time when Subnet was created
Modifier
User who modified Subnet information
Modified At
Date and time when Subnet information was modified
Subnet Type
Subnet type
VPC Name
VPC to which the subnet belongs
Subnet Name
Subnet name
Subnet ID
Subnet unique ID
IP Range
IP range in use
Gateway IP
Gateway IP address of the Subnet
DHCP IP
The second IP address of the IP range in use
Can modify by clicking Edit icon
Description
Subnet additional description
Can modify by clicking Edit icon
IP Allocation Range
IP allocation range
DNS Name Server
Whether DNS Name Server is used
Host Route
Host route (destination IP range, Next Hop IP address) information
Table. Subnet Details Tab Items
Virtual IP Management
On the Subnet List page, you can view the virtual IP information of the selected resource, reserve, or delete it.
Category
Description
Reserve Virtual IP
Reserve Virtual IP for use
Virtual IP
Virtual IP information
Click the IP to go to the Virtual IP details page
Public NAT IP
Public NAT IP information
Connected Port Count
Number of ports connected to the IP
Reserved At
Date and time when Virtual IP was reserved
Release
Virtual IP release button
Select multiple items and click the Release button at the top of the list to release in batch
Table. Subnet Virtual IP Management Tab Items
Warning
Cannot release if Port or NAT IP is connected to Virtual IP. Delete the connected resource first.
Can only release Virtual IP when Subnet status is Active or Error.
Tags
On the Subnet List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Subnet Tags Tab Items
Operation History
On the Subnet List page, you can view the operation history of the selected resource.
Table. Subnet Operation History Tab Detailed Information Items
Manage Virtual IP
You can reserve or manage Virtual IPs to use in the Subnet.
Reserve Virtual IP
You can reserve a Virtual IP to use in the Subnet.
To reserve a Virtual IP, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Subnet button on the Service Home page. You will be redirected to the Subnet List page.
Click the resource for which you want to reserve a Virtual IP on the Subnet List page. You will be redirected to the Subnet Details page.
Click the Virtual IP Management tab on the Subnet Details page. You will be redirected to the Virtual IP Management tab page.
Click the Reserve Virtual IP button on the Virtual IP Management tab page. The Virtual IP reservation window opens.
Set the detailed items in the Reserve Virtual IP window and click Confirm.
Virtual IP: If you select Auto Generate, the automatically generated IP is reserved. If you select Input, you can reserve the IP you entered directly.
Description: Enter additional description for the Virtual IP.
When the reservation confirmation window appears, click Confirm.
View Virtual IP Details
You can view the detailed information of a Virtual IP.
To view the detailed information of a Virtual IP, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Subnet button on the Service Home page. You will be redirected to the Subnet List page.
Click the resource whose Virtual IP details you want to view on the Subnet List page. You will be redirected to the Subnet Details page.
Click the Virtual IP Management tab on the Subnet Details page. You will be redirected to the Virtual IP Management tab page.
Click the resource you want to view on the Virtual IP Management tab page. You will be redirected to the Virtual IP Details page.
The Virtual IP Details page displays connected ports and detailed information.
Category
Description
Virtual IP
Virtual IP address
Public NAT IP
Public NAT IP address and status
Can modify by clicking Edit icon
After setting Enable, can select existing IP or create and add
The Public NAT IP cannot be modified after it is set; to change it, release the setting and set it again
Connected Port
Port information connected to Virtual IP
Click Add button to add connected port, can connect existing port or create and add
Click Delete button to delete connected port
Description
Virtual IP description
Can modify by clicking Edit icon
Creator
User who reserved the Virtual IP
Created At
Date and time when Virtual IP was reserved
Modifier
User who modified Virtual IP information
Modified At
Date and time when Virtual IP information was modified
Table. Virtual IP Details Items
Delete Subnet
You can delete unused Subnets.
Warning
Cannot terminate the service if there are connected resources. Delete the connected resources first.
Can only delete the service when the service status is Active or Error.
Data cannot be recovered after service deletion, so proceed with the deletion operation after fully considering the impact caused by Subnet deletion.
To delete a Subnet, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Subnet menu on the Service Home page. You will be redirected to the Subnet List page.
Click the resource to delete on the Subnet List page. You will be redirected to the Subnet Details page.
Click the Delete button on the Subnet Details page.
After deletion is complete, verify that the resource has been deleted on the Subnet List.
Prerequisites
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service for more details.
VPC
Service that provides independent virtual networks in the cloud environment
Table. Subnet Prerequisites
5.1.2.2 - Port
Create Port
You can create and use Port services in the Samsung Cloud Platform Console.
To create a Port, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Create Port button on the Service Home page. You will be redirected to the Create Port page.
Enter or select the required information in the Service Information section.
Category
Required
Description
VPC Name
Required
Select the VPC to create the Port
Click + Create New to create a VPC and then select
Subnet Name
Required
Select the Subnet to create the Port
Click + Create New to create a Subnet and then select
Port Name
Required
Name that can easily identify the Port
Enter 3-20 characters using letters, numbers, -
IP Allocation Method
Required
Select IP allocation method
Automatic Allocation: IP is automatically allocated within the Subnet’s IP allocation range
Manual Input: The entered IP within the Subnet’s range is allocated
When selecting Manual Input, enter the IP address to use for the Port in Fixed IP Address
Description
Optional
Enter description for Port
Security Group
Optional
When selecting Enable, can select up to 5 Security Groups
Table. Port Service Information Input Items
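When Manual Input is selected for the IP allocation method, the fixed IP you enter must fall within the subnet's range. A local pre-check can be sketched as follows with the standard `ipaddress` module (the `fixed_ip_ok` helper is illustrative, not an SCP API; excluding the network and broadcast addresses is an assumption):

```python
import ipaddress

def fixed_ip_ok(fixed_ip: str, subnet_cidr: str) -> bool:
    """Check that a manually entered fixed IP falls inside the subnet's
    range and is a usable host address (assumption: the network and
    broadcast addresses are not assignable)."""
    subnet = ipaddress.ip_network(subnet_cidr)
    addr = ipaddress.ip_address(fixed_ip)
    return addr in subnet and addr not in (
        subnet.network_address, subnet.broadcast_address
    )

print(fixed_ip_ok("192.168.0.10", "192.168.0.0/24"))  # True
print(fixed_ip_ok("192.168.1.10", "192.168.0.0/24"))  # False: outside the range
```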
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. Port Additional Information Input Items
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the Port List page.
View Port Details
Port services allow you to view and modify the entire resource list and detailed information. The Port Details page consists of Details, Tags, Operation History tabs.
To view Port details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Port menu on the Service Home page. You will be redirected to the Port List page.
Click the resource (Port name) for which you want to view detailed information on the Port List page. You will be redirected to the Port Details page.
The Port Details page displays status information and additional feature information, and consists of Details, Tags, Operation History tabs.
Category
Description
Status
Port status
Active: Operating normally
Down: Not connected to resource, or connected but not operating
Error: Current status cannot be confirmed
If it occurs continuously, contact the registered administrator
Delete Port
Button to delete Port
Table. Port Status Information and Additional Features
Details
On the Port List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In Port, it means Port SRN
Resource Name
Port resource name
Resource ID
Unique resource ID in the service
Creator
User who created the Port
Created At
Date and time when Port was created
Modifier
User who modified Port information
Modified At
Date and time when Port information was modified
Port Name
Port resource name
Port ID
Port resource ID
Subnet Name
Connected Subnet name, click Subnet item to go to details page
Connected Resource
Connected device information
Fixed IP
Fixed IP information
MAC Address
MAC address information
Description
Description for Port
Can modify by clicking Edit icon
Security Group
Connected Security Group information
Can change Security Group by clicking Edit icon
Virtual IP
Connected Virtual IP information
Table. Port Details Tab Items
Tags
On the Port List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Port Tags Tab Items
Operation History
On the Port List page, you can view the operation history of the selected resource.
Table. Port Operation History Tab Detailed Information Items
Delete Port
You can reduce operating costs by deleting unused Ports.
Warning
Cannot delete the service if there are connected resources such as Virtual Server, PrivateLink, etc. Delete the connected resources first.
After service deletion, the operating service may be stopped immediately. Proceed with the deletion operation after fully considering the impact caused by service deletion.
To delete a Port, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Port menu on the Service Home page. You will be redirected to the Port List page.
Click the resource (Port name) to delete on the Port List page. You will be redirected to the Port Details page.
Click the Delete Port button on the Port Details page.
After deletion is complete, verify that the resource has been deleted on the Port List.
Prerequisites
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service for more details.
Table. Internet Gateway Service Information Input Items
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. Internet Gateway Additional Information Input Items
Warning
Cannot connect Internet Gateway and Group Gateway simultaneously in one VPC.
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the Internet Gateway List page.
View Internet Gateway Details
Internet Gateway services allow you to view and modify the entire resource list and detailed information. The Internet Gateway Details page consists of Details, Tags, Operation History tabs.
To view Internet Gateway details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Internet Gateway menu on the Service Home page. You will be redirected to the Internet Gateway List page.
Click the resource for which you want to view detailed information on the Internet Gateway List page. You will be redirected to the Internet Gateway Details page.
The Internet Gateway Details page displays status information and additional feature information, and consists of Details, Tags, Operation History tabs.
Category
Description
Status
Internet Gateway status
Creating: Resource creation in progress
Active: Normal connection status
Deleting: Deletion in progress
Error: Current status cannot be confirmed
If it occurs continuously, contact the registered administrator
Delete Internet Gateway
Internet Gateway deletion button
Table. Internet Gateway Status Information and Additional Features
Details
On the Internet Gateway List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In Internet Gateway, it means Internet Gateway SRN
Resource Name
Internet Gateway resource name
Resource ID
Unique resource ID in the service
Creator
User who created the Internet Gateway
Created At
Date and time when Internet Gateway was created
Modifier
User who modified Internet Gateway information
Modified At
Date and time when Internet Gateway information was modified
Internet Gateway Name
Internet Gateway name
Internet Gateway ID
Internet Gateway resource ID
VPC Name
VPC name
VPC ID
VPC ID
Type
Internet Gateway type
Description
Description for Internet Gateway
Can modify by clicking Edit icon
Firewall Name
Connected Firewall name; click to go to the Firewall details page
Use Firewall
Whether to use Firewall
NAT Gateway
Connected NAT Gateway; click to go to the NAT Gateway details page
Store NAT Logs
Whether to store NAT logs
Can modify by clicking Edit icon
Enable: Store logs
Disable: Do not store logs
Table. Internet Gateway Details Tab Items
Tags
On the Internet Gateway List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Internet Gateway Tags Tab Items
Operation History
On the Internet Gateway List page, you can view the operation history of the selected resource.
Table. Internet Gateway Operation History Tab Detailed Information Items
Manage Internet Gateway Resources
You can manage Internet Gateway resources, such as enabling NAT log storage.
Use NAT Log Storage
Note
To use NAT log storage, you must first create a bucket in Object Storage to store logs and set that bucket as the log storage in NAT Logging. Then, when you enable log storage in NAT details view, NAT logs will start being stored in the Object Storage bucket. You can check the log storage settings in NAT Logging. For more information, refer to NAT Logging.
Object Storage fees for log storage are charged once you configure a log storage location.
To use NAT log storage, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Internet Gateway menu on the Service Home page. You will be redirected to the Internet Gateway List page.
Click the resource for which you want to view detailed information on the Internet Gateway List page. You will be redirected to the Internet Gateway Details page.
Click the Modify NAT Log Storage button. The Modify NAT Log Storage popup window opens.
Select Enable for log storage in the Modify NAT Log Storage popup window and click the Confirm button.
Warning
Cannot set log storage to Enable if log storage is not configured in NAT Logging.
Disable NAT Log Storage
To disable NAT log storage, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Internet Gateway menu on the Service Home page. You will be redirected to the Internet Gateway List page.
Click the resource for which you want to view detailed information on the Internet Gateway List page. You will be redirected to the Internet Gateway Details page.
Click the Modify NAT Log Storage button. You will be redirected to the Modify NAT Log Storage popup window.
Deselect Enable for log storage in the Modify NAT Log Storage popup window and click the Confirm button.
Verify the message in the Notification popup window and click the Confirm button.
Warning
If you disable log storage, log storage for that service will be stopped and tracking management through log analysis will not be possible in case of security incidents.
Delete Internet Gateway
Warning
Cannot terminate the service if there are connected resources such as NAT Gateway, Firewall rules, VPN, etc. Delete the connected resources first.
After service deletion, internet communication of VPC child resources will be stopped. Proceed with the deletion operation after fully considering the impact caused by Internet Gateway deletion.
To delete an Internet Gateway, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Internet Gateway menu on the Service Home page. You will be redirected to the Internet Gateway List page.
Click the resource to delete on the Internet Gateway List page. You will be redirected to the Internet Gateway Details page.
Click the Delete button on the Internet Gateway Details page.
After deletion is complete, verify that the resource has been deleted on the Internet Gateway List.
Prerequisites
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service for more details.
VPC: Service that provides independent virtual networks in the cloud environment
Table. Internet Gateway Prerequisites
5.1.2.4 - NAT Gateway
Create NAT Gateway
You can create and use NAT Gateway services in the Samsung Cloud Platform Console.
To create a NAT Gateway, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Create NAT Gateway button on the Service Home page. You will be redirected to the Create NAT Gateway page.
Enter or select the required information in the Service Information section.
Category
Required
Description
VPC Name
Required
Select the VPC to connect
Click + Create New to create a VPC and then select
Subnet Name
Required
Select the connected Subnet
Click + Create New to create a Subnet and then select
NAT Gateway Name
Optional
If not entered, created as NAT_GW_{Subnet Name}
IP for NAT Gateway
Required
Select Public IP for NAT Gateway
Click + Create New to create an IP and then select
Description
Optional
Enter description for NAT Gateway
Table. NAT Gateway Service Information Input Items
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. NAT Gateway Additional Information Input Items
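The 50-tag limit above can be checked on the client side before submitting a request. A minimal sketch in Python (illustrative only; `validate_tags` is not an SCP tool, and the non-empty-Key rule is an assumption, while the 50-tag limit comes from this guide):

```python
MAX_TAGS_PER_RESOURCE = 50  # limit stated in this guide

def validate_tags(tags):
    """Pre-check a tag mapping before attaching it to a resource.

    `tags` is a dict of Key -> Value. The 50-tag limit is from the guide;
    rejecting empty Keys is an assumption made for illustration.
    """
    if len(tags) > MAX_TAGS_PER_RESOURCE:
        raise ValueError(f"at most {MAX_TAGS_PER_RESOURCE} tags per resource")
    for key in tags:
        if not key:
            raise ValueError("tag Key must not be empty")
    return tags

validate_tags({"env": "prod", "team": "network"})  # passes
```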
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the NAT Gateway List page.
View NAT Gateway Details
NAT Gateway services allow you to view and modify the entire resource list and detailed information. The NAT Gateway Details page consists of Details, Tags, Operation History tabs.
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the NAT Gateway menu on the Service Home page. You will be redirected to the NAT Gateway List page.
Click the resource for which you want to view detailed information on the NAT Gateway List page. You will be redirected to the NAT Gateway Details page.
The NAT Gateway Details page displays status information and additional feature information, and consists of Details, Tags, Operation History tabs.
Category
Description
Status
NAT Gateway status
Creating: Creation in progress
Active: Operating normally
Deleting: Deletion in progress
Delete NAT Gateway
Button to terminate the service
Terminate NAT Gateway if there are no connected services
Since terminating the service may immediately stop the operating service, proceed with the termination operation after fully considering the impact caused by service interruption
Table. NAT Gateway Status Information and Additional Features
Details
On the NAT Gateway List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In NAT Gateway, it means NAT Gateway SRN
Resource Name
NAT Gateway resource name
Resource ID
Unique resource ID in the service
Creator
User who created the NAT Gateway
Created At
Date and time when NAT Gateway was created
Modifier
User who modified NAT Gateway information
Modified At
Date and time when NAT Gateway information was modified
NAT Gateway Name
NAT Gateway name
NAT Gateway ID
NAT Gateway resource ID
VPC Name
VPC name connected to NAT Gateway
Go to details page when clicking VPC
VPC ID
VPC resource ID connected to NAT Gateway
Subnet Name
Subnet name connected to NAT Gateway
Go to details page when clicking Subnet
Subnet ID
Subnet resource ID connected to NAT Gateway
Subnet IP Range
Subnet IP range information
IP for NAT Gateway
NAT Gateway IP information
Description
Description for NAT Gateway
Can modify by clicking Edit icon
Table. NAT Gateway Details Tab Items
Tags
On the NAT Gateway List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. NAT Gateway Tags Tab Items
Operation History
On the NAT Gateway List page, you can view the operation history of the selected resource.
Table. NAT Gateway Operation History Tab Detailed Information Items
Delete NAT Gateway
Warning
If you delete a NAT Gateway, all resources in that Subnet except resources with 1:1 NAT configured will not be able to communicate with the internet.
To delete a NAT Gateway, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the NAT Gateway menu on the Service Home page. You will be redirected to the NAT Gateway List page.
Click the resource for which you want to view detailed information on the NAT Gateway List page. You will be redirected to the NAT Gateway Details page.
Click the Delete button on the NAT Gateway Details page.
After deletion is complete, verify that the resource has been deleted on the NAT Gateway List.
Prerequisites
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service for more details.
VPC: Service that provides independent virtual networks in the cloud environment
Table. NAT Gateway Prerequisites
5.1.2.5 - Public IP
Create Public IP
You can create and use Public IP services in the Samsung Cloud Platform Console.
To create a Public IP, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Reserve Public IP button on the Service Home page. You will be redirected to the Reserve Public IP page.
Enter or select the required information in the Service Information section.
Category
Required
Description
Type
Required
Select the Gateway to reserve Public IP
Default: Internet Gateway
Description
Optional
Enter description for Public IP
Table. Public IP Service Information Input Items
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. Public IP Additional Information Input Items
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the Public IP List page.
View Public IP Details
Public IP services allow you to view and modify the entire resource list and detailed information. The Public IP Details page consists of Details, Tags, Operation History tabs.
To view Public IP details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Public IP menu on the Service Home page. You will be redirected to the Public IP List page.
Click the resource for which you want to view detailed information on the Public IP List page. You will be redirected to the Public IP Details page.
The Public IP Details page displays status information and additional feature information, and consists of Details, Tags, Operation History tabs.
Category
Description
Status
Public IP status
Attached: Connected state
Reserved: Reserved state
Error: Current status unknown
If it occurs continuously, contact the registered administrator
Release Public IP
Public IP release button
Table. Public IP Status Information and Additional Features
Details
On the Public IP List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In Public IP, it means Public IP SRN
Resource Name
Public IP resource name
Resource ID
Unique resource ID in the service
Creator
User who created the Public IP
Created At
Date and time when Public IP was created
Modifier
User who modified Public IP information
Modified At
Date and time when Public IP information was modified
IP Address
Assigned (reserved) IP address
Type
Gateway information where Public IP is reserved
Public IP ID
Public IP resource ID
Description
Description for Public IP
Can modify description by clicking Edit icon
Connected Resource Type
Resource information connected to the assigned (reserved) IP address
Connected Resource Name
Resource name connected to the assigned (reserved) IP address
Table. Public IP Details Tab Items
Tags
On the Public IP List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Public IP Tags Tab Items
Operation History
On the Public IP List page, you can view the operation history of the selected resource.
Table. Public IP Operation History Tab Detailed Information Items
Prerequisites
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service for more details.
Service that provides independent virtual networks in the cloud environment
Table. Public IP Prerequisites
5.1.2.6 - Private NAT
Users can create the Private NAT service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating Private NAT
You can create and use the Private NAT service in the Samsung Cloud Platform Console.
Follow these steps to create a Private NAT.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Create Private NAT button on the Service Home page. You will be navigated to the Create Private NAT page.
Enter or select the required information in the Service Information section.
Category
Required
Description
Private NAT Name
Required
Enter the Private NAT name
Enter 3 to 20 characters using English letters and numbers
Connected Resource Type
Required
Select the connected resource to connect to Private NAT
Can select from Direct Connect, Transit Gateway
Connected Resource Name
Required
Display the name of the selected connected resource
Click + Create New in the list to create a connected resource
NAT IP Range
Required
Enter the NAT IP range to use
Enter in CIDR format such as 192.168.2.0/23
Cannot overlap with connected VPC IP or other Private NAT IP ranges
Description
Optional
Enter a description for the Private NAT
Table. Private NAT Service Information Input Items
Note
Must not overlap with the IP range of the VPC connected to the selected Direct Connect or Transit Gateway.
Must not overlap with other Private NAT ranges connected to the selected Direct Connect or Transit Gateway.
Must not overlap with the IP range of the On-Premise Network connected to the selected Direct Connect or Transit Gateway.
Some IP ranges are for management purposes and cannot be used.
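The overlap constraints in this note can be checked ahead of time with Python's standard `ipaddress` module. A minimal sketch (function and variable names are illustrative, not part of any SCP tooling; the console performs the authoritative validation):

```python
import ipaddress

def nat_range_is_valid(nat_cidr, existing_cidrs):
    """Return True if nat_cidr does not overlap any CIDR in existing_cidrs.

    existing_cidrs would hold the connected VPC range, other Private NAT
    ranges, and on-premise network ranges, as listed in the note above.
    """
    candidate = ipaddress.ip_network(nat_cidr)
    return not any(
        candidate.overlaps(ipaddress.ip_network(c)) for c in existing_cidrs
    )

# The 192.168.2.0/23 example range overlaps 192.168.3.0/24:
print(nat_range_is_valid("192.168.2.0/23", ["10.0.0.0/16", "192.168.3.0/24"]))  # False
print(nat_range_is_valid("192.168.2.0/23", ["10.0.0.0/16", "172.16.0.0/12"]))   # True
```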
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Can add up to 50 tags per resource
Click the Add Tag button and then enter or select Key, Value values
Table. Private NAT Additional Information Input Items
Review the detailed information and estimated billing cost in the Summary panel, and click the Create button.
When creation is complete, verify the created resource in the Private NAT List page.
Viewing Private NAT Detail Information
You can view and modify the entire resource list and detailed information of the Private NAT service. The Private NAT Detail page consists of Detail Information, IP Management, Tags, Task History tabs.
Follow these steps to view Private NAT detail information.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Private NAT menu on the Service Home page. You will be navigated to the Private NAT List page.
Click the resource for which you want to view detailed information on the Private NAT List page. You will be navigated to the Private NAT Detail page.
The Private NAT Detail page displays status information and additional feature information, and consists of Detail Information, IP Management, Tags, Task History tabs.
Category
Description
Status
Private NAT status
Active: Running
Creating: Creating
Deleting: Deleting
Error: Error occurred
Delete Private NAT
Button to delete Private NAT
Table. Private NAT Status Information and Additional Features
Detail Information
You can view the detailed information of the resource selected on the Private NAT List page, and modify the information if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Private NAT resource name
Resource ID
Unique resource ID in the service
Creator
User who created the Private NAT
Created At
Date and time when the Private NAT was created
Modifier
User who modified the Private NAT information
Modified At
Date and time when the Private NAT information was modified
Private NAT Name
Private NAT resource name
Connected Resource Type
Resource information connected to Private NAT
NAT IP Range
NAT IP range information in use
Connected Resource Name
Resource information connected to Private NAT, clicking the resource name navigates to the detail information page
Description
Description of Private NAT
Can modify description by clicking the Edit icon
Table. Private NAT Detail Information Tab Items
IP Management
You can view Private NAT IPs on the Private NAT List page, and reserve or release them.
Category
Description
Private NAT IP List
List of Private NAT IPs in use
Can view Private NAT IP, connected resource, and status
Click the Reserve Private NAT IP button to add an IP
Click the Release button to delete the selected IP
Table. Private NAT IP Management Tab Items
Tags
You can view the tag information of the resource selected on the Private NAT List page, and add, change, or delete tags.
Category
Description
Tag List
Tag list
Can view tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Private NAT Tag Tab Items
Task History
You can view the task history of the resource selected on the Private NAT List page.
Category
Description
Task History List
Resource change history
View task date and time, resource name, task details, task result, task operator information
Table. Private NAT Task History Tab Detail Information Items
Managing Private NAT IP
You can reserve or release Private NAT IPs.
Reserving Private NAT IP
Follow these steps to reserve a Private NAT IP.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Private NAT menu on the Service Home page. You will be navigated to the Private NAT List page.
Click the resource for which you want to reserve an IP on the Private NAT List page. You will be navigated to the Private NAT Detail page.
Click the IP Management tab on the Private NAT Detail page. You will be navigated to the IP Management tab page.
Click the Reserve Private NAT IP button on the IP Management tab page. The Private NAT IP reservation window appears.
Enter the Private NAT IP to use in the Private NAT IP reservation window and click the OK button. A notification confirmation window appears.
Click the OK button in the notification confirmation window. Verify that the resource item has been added to the IP list.
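Before reserving an IP, it can be useful to confirm that it falls inside the Private NAT's configured NAT IP Range. A small sketch using Python's `ipaddress` module (a client-side sanity check only; the function name is illustrative and the console's own validation is authoritative):

```python
import ipaddress

def ip_in_nat_range(ip, nat_cidr):
    """Check that a Private NAT IP to reserve lies inside the NAT IP Range
    configured for the Private NAT resource."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(nat_cidr)

print(ip_in_nat_range("192.168.2.10", "192.168.2.0/23"))  # True
print(ip_in_nat_range("192.168.4.10", "192.168.2.0/23"))  # False
```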
Releasing Private NAT IP
Caution
You can only release Private NAT IPs when the status is Reserved.
Follow these steps to release a Private NAT IP.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Private NAT menu on the Service Home page. You will be navigated to the Private NAT List page.
Click the resource for which you want to release an IP on the Private NAT List page. You will be navigated to the Private NAT Detail page.
Click the IP Management tab on the Private NAT Detail page. You will be navigated to the IP Management tab page.
Click the Release button for the IP item you want to release on the IP Management tab page. A notification confirmation window appears.
Click the OK button in the notification confirmation window. Verify that the selected resource has been deleted from the IP list.
Deleting Private NAT
You can reduce operating costs by terminating unused Private NATs.
Caution
You cannot terminate the Private NAT service when the service status is Creating, Editing, or Deleting.
Follow these steps to terminate a Private NAT.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Private NAT menu on the Service Home page. You will be navigated to the Private NAT List page.
Click the resource you want to delete on the Private NAT List page. You will be navigated to the Private NAT Detail page.
Click the Delete Private NAT button on the Private NAT Detail page.
When termination is complete, verify that the resource has been deleted in the Private NAT List.
Prerequisite Services
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service for more details.
Service that safely and quickly connects the customer network to the Samsung Cloud Platform environment
Table. Private NAT Prerequisite Services
5.1.2.7 - VPC Endpoint
Create VPC Endpoint
You can create and use VPC Endpoint services in the Samsung Cloud Platform Console.
To create a VPC Endpoint, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Create VPC Endpoint button on the Service Home page. You will be redirected to the Create VPC Endpoint page.
Enter or select the required information in the Service Information section.
Category
Required
Description
VPC Name
Required
Select the VPC to create the Endpoint
Click + Create New to create a VPC and then select
Usage > Target Service
Required
Select the target service to create the VPC Endpoint
Usage > Connected Resource
Required
Select the resource to create the VPC Endpoint
VPC Endpoint Name
Required
Enter the VPC Endpoint name
Enter 3-20 characters using letters and numbers
VPC Endpoint IP > Subnet Name
Required
Select the VPC Endpoint Subnet
Click + Create New to create a Subnet and then select
VPC Endpoint IP > IP
Required
Enter the IP to use as VPC Endpoint
Example: 192.168.x.x
Description
Optional
Enter description for VPC Endpoint
Table. VPC Endpoint Service Information Input Items
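The VPC Endpoint IP from the table above can be sanity-checked before it is entered. A minimal Python sketch, assuming (the guide does not state this explicitly) that the Endpoint IP should fall inside the selected Subnet and not be its network or broadcast address:

```python
import ipaddress

def endpoint_ip_usable(ip, subnet_cidr):
    """Pre-check a VPC Endpoint IP against the selected Subnet.

    Assumption for illustration: the IP must lie inside the Subnet and
    must not be the network or broadcast address.
    """
    net = ipaddress.ip_network(subnet_cidr)
    addr = ipaddress.ip_address(ip)
    return addr in net and addr not in (net.network_address, net.broadcast_address)

print(endpoint_ip_usable("192.168.1.50", "192.168.1.0/24"))  # True
print(endpoint_ip_usable("192.168.1.0", "192.168.1.0/24"))   # False (network address)
```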
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. VPC Endpoint Additional Information Input Items
Note
After registering a VPC Endpoint, you must configure Direct Connect firewall settings to integrate with Samsung Cloud Platform internal services.
Refer to the port information for each service to register firewall rules.
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
After creation is complete, verify the created resource on the VPC Endpoint List page.
View VPC Endpoint Details
VPC Endpoint services allow you to view and modify the entire resource list and detailed information. The VPC Endpoint Details page consists of Details, Tags, Operation History tabs.
To view Endpoint details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Endpoint menu on the Service Home page. You will be redirected to the VPC Endpoint List page.
Click the resource for which you want to view detailed information on the VPC Endpoint List page. You will be redirected to the VPC Endpoint Details page.
The VPC Endpoint Details page displays status information and additional feature information, and consists of Details, Tags, Operation History tabs.
Category
Description
Status
VPC Endpoint status
Active: Operating normally
Creating: Creation in progress
Deleting: Deleting resource connection
Deleted: Resource connection deleted
Delete VPC Endpoint
Button to delete VPC Endpoint connection resource
Table. VPC Endpoint Status Information and Additional Features
Details
On the VPC Endpoint List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In VPC Endpoint, it means VPC Endpoint SRN
Resource Name
VPC Endpoint resource name
Resource ID
Unique resource ID in the service
Creator
User who created the VPC Endpoint
Created At
Date and time when VPC Endpoint was created
Modifier
User who modified VPC Endpoint information
Modified At
Date and time when VPC Endpoint information was modified
VPC Endpoint Name
VPC Endpoint name
VPC Name
Connected VPC name, click VPC item to go to details page
VPC ID
Connected VPC ID
Target Service
Connected target information
Connected Resource Information
Connected resource information
Subnet Name
Endpoint subnet information, click subnet item to go to details page
VPC Endpoint IP
VPC Endpoint IP information
Description
Description for VPC Endpoint
Can modify by clicking Edit icon
Table. VPC Endpoint Details Tab Items
Tags
On the VPC Endpoint List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. VPC Endpoint Tags Tab Items
Operation History
On the VPC Endpoint List page, you can view the operation history of the selected resource.
Table. VPC Endpoint Operation History Tab Detailed Information Items
Delete VPC Endpoint
You can reduce operating costs by terminating unused Endpoints.
Warning
Cannot terminate the service if there are connected resources such as Object Storage, Container Registry, etc. Delete the connected resources first.
Deleting a VPC Endpoint may immediately stop the operating service. Proceed with the deletion operation after fully considering the impact caused by service deletion.
To terminate a VPC Endpoint, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Endpoint menu on the Service Home page. You will be redirected to the VPC Endpoint List page.
Click the resource to delete on the VPC Endpoint List page. You will be redirected to the VPC Endpoint Details page.
Click the Delete Endpoint button on the VPC Endpoint Details page.
After termination is complete, verify that the resource has been deleted on the VPC Endpoint List.
Prerequisites
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guides provided for each service.
Service that securely and quickly connects customer networks and Samsung Cloud Platform
Table. VPC Endpoint Prerequisites
5.1.2.8 - VPC Peering
Users can create VPC Peering services by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Create VPC Peering
You can create and use VPC Peering services in the Samsung Cloud Platform Console.
To create a VPC Peering, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the Create VPC Peering button on the Service Home page. You will be redirected to the Create VPC Peering page.
Enter or select the required information in the Service Information section.
Category
Required
Description
VPC Peering Name
Required
Enter the VPC Peering name
Enter 3-20 characters using letters and numbers
Request VPC Name
Required
Select the VPC to request VPC Peering
Click + Create New in the list to create a VPC
Approval Account
Required
Select the Account of the VPC that will approve the VPC Peering, then select that VPC or enter its information
When selecting Same account, select the approval VPC name
Click + Create New in the list to create a VPC
When selecting Different account, enter the approval Account ID and approval VPC ID
Description
Optional
Enter description for VPC Peering
Table. VPC Peering Service Information Input Items
Enter or select the required information in the Additional Information section.
Category
Required
Description
Tags
Optional
Add tags
Up to 50 tags per resource
Click Add Tag button and enter or select Key, Value values
Table. VPC Peering Additional Information Input Items
On the Summary panel, verify the detailed information and estimated billing amount, then click the Create button.
If connecting to a VPC in a different Account, the connection may take time because Peering proceeds only after the approval process is complete.
After creation is complete, verify the created resource on the VPC Peering List page.
View VPC Peering Details
VPC Peering services allow you to view and modify the entire resource list and detailed information. The VPC Peering Details page consists of Details, Rules, Tags, Operation History tabs.
To view VPC Peering details, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Peering menu on the Service Home page. You will be redirected to the VPC Peering List page.
Click the resource for which you want to view detailed information on the VPC Peering List page. You will be redirected to the VPC Peering Details page.
The VPC Peering Details page displays status information and additional feature information, and consists of Details, Rules, Tags, Operation History tabs.
Category
Description
Status
VPC Peering status
Active: Operating
Requesting: Connection or deletion request in progress
Creating: Connecting
Creating Requesting: Connection request in progress
Deleting Requesting: Deletion request in progress
Editing: Modification in progress
Rejected: Approval rejected
Canceled: Request canceled
Error: Error occurred
If it occurs continuously, contact the registered administrator
Delete VPC Peering/Request VPC Peering Deletion
Button to request deletion of VPC Peering resource
Cancel Connection Request: Can cancel if VPC Peering connection was requested
Approve Connection: Can approve if VPC Peering connection request was received
Can reject connection by clicking Reject Connection
Cancel Deletion Request: Can cancel if VPC Peering deletion was requested
Approve Deletion: Can approve if VPC Peering deletion request was received
Can reject deletion by clicking Reject Deletion
Request Reapproval: Request reapproval if VPC approval was rejected
Table. VPC Peering Status Information and Additional Features
Details
On the VPC Peering List page, you can view the detailed information of the selected resource and modify it if necessary.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
VPC Peering resource name
Resource ID
Unique resource ID in the service
Creator
User who created the VPC Peering
Created At
Date and time when VPC Peering was created
Modifier
User who modified VPC Peering information
Modified At
Date and time when VPC Peering information was modified
VPC Peering Name
VPC Peering name
Request Information
VPC name and VPC ID information that requested VPC Peering, click VPC name to go to details page
If connecting to VPC of a different Account, that VPC name is not displayed
Approval Information
VPC name and VPC ID information that approved VPC Peering, click VPC name to go to details page
If connecting to VPC of a different Account, that VPC name is not displayed
Description
Description for VPC Peering
Can modify description by clicking Edit icon
Table. VPC Peering Details Items
Rules
On the VPC Peering List page, you can view the rules connected to the selected resource, and add or delete them.
Category
Description
Rule List
List of connected rules
Can verify source, destination, destination IP range, and status of connected rules
Click Add Rule button to add rules
Click Delete button to delete selected rules
Table. VPC Peering Rules Tab Items
Tags
On the VPC Peering List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can verify Key, Value information of tags
Up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. VPC Peering Tags Tab Items
Operation History
On the VPC Peering List page, you can view the operation history of the selected resource.
Table. VPC Peering Operation History Tab Detailed Information Items
Manage VPC Peering Rules
You can add rules to or delete rules from a VPC Peering.
Add Rules
Warning
Can only add rules when VPC Peering status is Active.
If you enter the destination IP incorrectly in routing settings, communication failure may occur. Verify the destination IP information again before creating rules.
To add rules to VPC Peering, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Peering menu on the Service Home page. You will be redirected to the VPC Peering List page.
Click the resource to which you want to add rules on the VPC Peering List page. You will be redirected to the VPC Peering Details page.
Click the Rules tab on the VPC Peering Details page. You will be redirected to the Rules tab page.
Click the Add Rule button on the Rules tab page. The add rule window appears.
Enter the source and destination in the add rule window and click the Confirm button. The notification confirmation window appears.
Must not duplicate an already entered rule.
Can enter within the IP range of the destination VPC.
Must match the Subnet range exactly.
Cannot use 0.0.0.0/0 as the destination IP range.
Click the Confirm button in the notification confirmation window. Verify that the resource item has been added to the rule list.
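The destination IP range constraints above can be pre-checked before submitting a rule. The following is a minimal sketch using Python's standard `ipaddress` module; the function and parameter names are illustrative assumptions, and the console performs the actual validation.

```python
import ipaddress

def validate_peering_destination(dest_cidr: str, peer_vpc_cidr: str,
                                 subnet_cidrs: list[str]) -> bool:
    """Illustrative pre-check of a VPC Peering rule's destination IP range."""
    dest = ipaddress.ip_network(dest_cidr, strict=True)
    # 0.0.0.0/0 cannot be used as the destination IP range
    if dest == ipaddress.ip_network("0.0.0.0/0"):
        return False
    # Must fall within the IP range of the destination VPC
    if not dest.subnet_of(ipaddress.ip_network(peer_vpc_cidr)):
        return False
    # Must match one of the destination VPC's Subnet ranges exactly
    return any(dest == ipaddress.ip_network(s) for s in subnet_cidrs)
```

A range that sits inside the peer VPC but does not match any Subnet exactly is rejected, mirroring the "must match the Subnet range" rule.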
Delete Rules
Warning
Can only delete connected rules when VPC Peering service status is Active or Error.
Cannot delete when the status of connected rules is Creating or Deleting.
To delete rules of VPC Peering, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Peering menu on the Service Home page. You will be redirected to the VPC Peering List page.
Click the resource whose rules you want to delete on the VPC Peering List page. You will be redirected to the VPC Peering Details page.
Click the Rules tab on the VPC Peering Details page. You will be redirected to the Rules tab page.
Click the Delete button of the item to delete on the Rules tab page. The notification confirmation window appears.
Click the Confirm button in the notification confirmation window. Verify that the selected resource has been deleted in the rule list.
Terminate VPC Peering
You can reduce operating costs by terminating unused VPC Peering.
Warning
Cannot terminate the service if rules are connected to VPC Peering. Delete all connected rules before terminating the service.
Can only terminate when VPC Peering service status is Active, Rejected, Canceled, or Error.
Terminate VPC Peering in Same Account
To terminate VPC Peering within the same Account, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Peering menu on the Service Home page. You will be redirected to the VPC Peering List page.
Click the resource to delete on the VPC Peering List page. You will be redirected to the VPC Peering Details page.
Click the Delete VPC Peering button on the VPC Peering Details page.
After termination is complete, verify that the resource has been deleted on the VPC Peering List.
Terminate VPC Peering Connected to Different Account
To terminate VPC Peering connected to a different Account, follow these steps:
Click All Services > Networking > VPC menu. You will be redirected to the VPC Service Home page.
Click the VPC Peering menu on the Service Home page. You will be redirected to the VPC Peering List page.
Click the resource to delete on the VPC Peering List page. You will be redirected to the VPC Peering Details page.
Click the Request VPC Peering Deletion button on the VPC Peering Details page.
After termination is complete, verify that the resource has been deleted on the VPC Peering List.
The deletion request must be approved by the peer Account for normal termination.
Prerequisites
The following service must be set up before you create this service. Prepare it by referring to its user guide.
VPC: Service that provides independent virtual networks in the cloud environment
Table. VPC Peering Prerequisites
5.1.2.9 - Transit Gateway
Users can create the Transit Gateway service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating Transit Gateway
You can create and use the Transit Gateway service in the Samsung Cloud Platform Console.
Follow these steps to create a Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Create Transit Gateway button on the Service Home page. You will be navigated to the Create Transit Gateway page.
Enter or select the required information in the Enter Service Information section.
Division
Required
Detailed Description
Transit Gateway Name
Required
Enter the Transit Gateway name
Enter 3 to 20 characters using English letters and numbers
Description
Optional
Enter a description for the Transit Gateway
Table. Transit Gateway Service Information Input Items
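The name rule above (3 to 20 characters, English letters and numbers) can be sketched as a simple regex check. This assumes only unaccented ASCII letters and digits are allowed; the function name is illustrative, not a real API.

```python
import re

# Assumed interpretation of "3 to 20 characters using English letters and numbers"
NAME_RE = re.compile(r"^[A-Za-z0-9]{3,20}$")

def is_valid_tgw_name(name: str) -> bool:
    """Illustrative pre-check of a Transit Gateway name."""
    return bool(NAME_RE.match(name))
```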
Enter or select the required information in the Enter Additional Information section.
Division
Required
Detailed Description
Tags
Optional
Add tags
Can add up to 50 tags per resource
Click the Add Tag button and then enter or select Key, Value values
Table. Transit Gateway Additional Information Input Items
Review the detailed information and estimated billing cost in the Summary panel, and click the Create button.
When creation is complete, verify the created resource in the Transit Gateway List page.
Viewing Transit Gateway Detail Information
You can view and modify the entire resource list and detailed information of the Transit Gateway service. The Transit Gateway Detail page consists of Detail Information, Connected VPC Management, Rules, Tags, Task History tabs.
Follow these steps to view Transit Gateway detail information.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource for which you want to view detailed information on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
The Transit Gateway Detail page displays status information and additional feature information, and consists of Detail Information, Connected VPC Management, Rules, Tags, Task History tabs.
Division
Detailed Description
Status
Transit Gateway status
Active: Running
Creating: Creating
Editing: Modifying
Deleting: Deleting
Error: Error occurred
Delete Transit Gateway
Button to delete Transit Gateway resource
Table. Transit Gateway Status Information and Additional Features
Detail Information
You can view the detailed information of the resource selected on the Transit Gateway List page, and modify the information if necessary.
Division
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Transit Gateway resource name
Resource ID
Unique resource ID in the service
Creator
User who created the Transit Gateway
Created At
Date and time when the Transit Gateway was created
Modifier
User who modified the Transit Gateway information
Modified At
Date and time when the Transit Gateway information was modified
Transit Gateway Name
Transit Gateway resource name
Uplink Usage
Uplink information connected to Transit Gateway
Clicking the Linked Service (IGW, FW) Line Application/Modification/Termination Request shortcut navigates to the service application page
Description
Description of Transit Gateway
Can modify description by clicking the Edit icon
Firewall Connection Status
Firewall connection management and status display
Clicking the Firewall Connection button requests connection
After connection, can add or delete Firewalls from the list
Table. Transit Gateway Detail Information Tab Items
Connected VPC Management
You can view VPCs connected to the resource selected on the Transit Gateway List page, and add or delete them.
Division
Detailed Description
VPC List
List of connected VPCs
Can view connected VPC information and status
Click the Add VPC Connection button to add a VPC
Click the Delete button to delete the selected VPC
Table. Transit Gateway Connected VPC Management Tab Items
Rules
You can view rules connected to the resource selected on the Transit Gateway List page, and add or delete them.
Division
Detailed Description
Rule List
List of connected rules
Can view source, destination, destination IP range, and status of connected rules
Click the Add Rule button to add a rule
Click the Delete button to delete the selected rule
Table. Transit Gateway Rules Tab Items
Tags
You can view the tag information of the resource selected on the Transit Gateway List page, and add, change, or delete tags.
Division
Detailed Description
Tag List
Tag list
Can view tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Transit Gateway Tags Tab Items
Task History
You can view the task history of the resource selected on the Transit Gateway List page.
Division
Detailed Description
Task History List
Resource change history
View task date and time, resource name, task details, task result, task operator information
Table. Transit Gateway Task History Tab Detail Information Items
Managing Transit Gateway Linked Services
You can apply for, modify, and terminate Uplink and Firewall connection services required for using the Transit Gateway service.
Follow these steps to apply for Transit Gateway linked services.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource for which you want to apply for a linked service on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click Linked Service (IGW,FW) Line Application/Modification/Termination Request Shortcut on the Transit Gateway Detail page. You will be navigated to the service request page.
Enter or select the corresponding information in the required input field on the Service Request page.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: TGW Uplink Line Application
Region
Select the location of Samsung Cloud Platform
Automatically entered with the region corresponding to the Account
Service
Select service category and service
Service Category: Networking
Service: Transit Gateway
Task Type
Select the type you want to request
TGW Uplink Line Application/Modification/Termination: After selecting task type, enter detailed information in the service request type item
Content
Fill in detailed items of the service application form
Service Request Type: Enter directly among Application / Modification / Termination
Account Name/ID: Enter Account name and ID
Transit Gateway Name/ID: Enter created Transit Gateway name and ID
Applicant Information: Enter applicant email, phone number, etc.
Service Request Task Type: Select and enter among Uplink Line Connection / BM VPC Firewall Connection
Firewall Usage: Enter whether to use firewall
Attachment
Upload files if you want to share additional files
Can attach up to 5 files, each within 5MB
Can only attach doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files
Table. Linked Service Creation Request Items
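The attachment limits above (up to 5 files, 5 MB each, a fixed extension list) can be sketched as a client-side pre-check. All names below are assumptions, and the extension set follows the table (reading "ppts" as the common "pptx" extension); the request form enforces the real limits.

```python
# Assumed allowed extensions from the table above
ALLOWED_EXT = {"doc", "docx", "xls", "xlsx", "ppt", "pptx", "hwp",
               "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif"}
MAX_FILES, MAX_BYTES = 5, 5 * 1024 * 1024

def attachments_ok(files: list[tuple[str, int]]) -> bool:
    """files: (filename, size_in_bytes) pairs. Illustrative pre-check only."""
    if len(files) > MAX_FILES:
        return False
    return all(name.rsplit(".", 1)[-1].lower() in ALLOWED_EXT
               and size <= MAX_BYTES for name, size in files)
```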
Click the Request button on the service request page.
When application is complete, verify the applied content on the Support Center > Service Request List page.
When the service request task is complete, you can verify the applied resource on the Transit Gateway Detail page.
Managing VPC Connection for Transit Gateway
You can add or delete VPCs to the Transit Gateway.
Adding VPC Connection
Follow these steps to add a VPC connection to the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource to which you want to add a VPC connection on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Connected VPC Management tab on the Transit Gateway Detail page. You will be navigated to the Connected VPC Management tab page.
Click the Add VPC Connection button on the Connected VPC Management tab page. The VPC connection addition window appears.
Select a VPC in the VPC connection addition window and click the OK button. A notification confirmation window appears.
Clicking + Create New in the list allows you to create a VPC and select it.
Click the OK button in the notification confirmation window. Verify that the resource item has been added to the VPC connection list.
Deleting VPC Connection
Follow these steps to delete a VPC connection from the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource from which you want to delete a VPC connection on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Connected VPC Management tab on the Transit Gateway Detail page. You will be navigated to the Connected VPC Management tab page.
Click the Delete button for the item you want to delete on the Connected VPC Management tab page. A notification confirmation window appears.
Click the OK button in the notification confirmation window. Verify that the selected resource has been deleted from the VPC connection list.
Managing Rules for Transit Gateway
You can add or delete rules to the Transit Gateway.
Adding Rules
Caution
You can only add rules when the Transit Gateway service status is Active.
If you enter the destination IP incorrectly in routing settings, communication failures may occur. Please verify the destination IP information again before creating a rule.
Follow these steps to add a rule to the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource to which you want to add a rule on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Rules tab on the Transit Gateway Detail page. You will be navigated to the Rules tab page.
Click the Add Rule button on the Rules tab page. The rule addition window appears.
Enter the source and destination in the rule addition window and click the OK button. A notification confirmation window appears.
Division
Detailed Description
Rule Type
Select Transit Gateway rule addition type
Select from VPC-TGW Rule, TGW-Uplink Rule
Connected VPC Name
Select connected VPC when selecting VPC-TGW Rule
Source
Automatically set according to the destination when VPC-TGW Rule is selected
Destination
Select destination of rule
Set to VPC, TGW when selecting VPC-TGW Rule
Set to TGW, Remote when selecting TGW-Uplink Rule
Cannot register a rule that duplicates an existing one; ranges up to x.x.x.x/28 can be entered
Destination IP Range
Enter the destination IP range to use
Table. Rule Addition Input Items
Caution
When entering VPC-TGW Rule, verify the following items:
When destination is VPC
Can enter within VPC IP range.
Must enter the same as Subnet range.
Cannot use 0.0.0.0/0 as destination IP range.
When destination is Transit Gateway
Some IP ranges are for management purposes and cannot be used.
Cannot enter VPC IP range.
Can enter 0.0.0.0/0 as destination IP range only when VPC’s Internet Gateway is not connected.
When entering TGW-Uplink Rule, verify the following items:
When destination is Transit Gateway
Can enter within VPC IP range connected to Transit Gateway.
Cannot use 0.0.0.0/0 as destination IP range.
When destination is Remote
Cannot enter VPC IP range connected to Transit Gateway.
Can enter 0.0.0.0/0 as destination IP range only when Internet Gateway is not connected to Transit Gateway.
Cannot enter Class D or Class E IP ranges.
Click the OK button in the notification confirmation window. Verify that the resource item has been added to the rule list.
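The TGW-Uplink Rule cautions above can be sketched as a validation routine using the standard `ipaddress` module. The function and parameter names are illustrative assumptions; the console performs the actual validation, and Class D/E ranges are modeled here as the 224.0.0.0/3 block that covers both.

```python
import ipaddress

# 224.0.0.0/3 spans 224.0.0.0-255.255.255.255, i.e. Class D (224/4) and E (240/4)
CLASS_D_AND_E = ipaddress.ip_network("224.0.0.0/3")

def validate_tgw_uplink_destination(dest_cidr: str, destination: str,
                                    connected_vpc_cidrs: list[str],
                                    igw_connected: bool) -> bool:
    """Illustrative pre-check of a TGW-Uplink Rule's destination IP range."""
    dest = ipaddress.ip_network(dest_cidr, strict=True)
    default = ipaddress.ip_network("0.0.0.0/0")
    vpcs = [ipaddress.ip_network(c) for c in connected_vpc_cidrs]
    if destination == "Transit Gateway":
        # Must fall within a VPC IP range connected to the Transit Gateway
        if dest == default:
            return False
        return any(dest.subnet_of(v) for v in vpcs)
    # destination == "Remote"
    if dest == default:
        # 0.0.0.0/0 only when no Internet Gateway is connected to the TGW
        return not igw_connected
    if any(dest.subnet_of(v) for v in vpcs):
        return False  # cannot enter a connected VPC's IP range
    return not dest.subnet_of(CLASS_D_AND_E)  # no Class D/E ranges
```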
Deleting Rules
Caution
You can only delete rules when the Transit Gateway service status is Active.
You cannot delete rules when the rule status is Creating or Deleting.
Follow these steps to delete a rule from the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource from which you want to delete a rule on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Rules tab on the Transit Gateway Detail page. You will be navigated to the Rules tab page.
Click the Delete button for the item you want to delete on the Rules tab page. A notification confirmation window appears.
Click the OK button in the notification confirmation window. Verify that the selected resource has been deleted from the rule list.
Managing Firewall Connection
You can connect or disconnect Firewalls to use with the Transit Gateway.
Connecting Firewall
Follow these steps to add a Firewall connection to the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource to which you want to connect a Firewall on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Detail Information tab on the Transit Gateway Detail page. You will be navigated to the Detail Information tab page.
Click the Firewall Connection button on the Detail Information tab page. The Firewall connection confirmation window appears.
Click the OK button in the Firewall connection confirmation window. Verify the connection status in the Firewall connection status item.
Adding Firewall
After Firewall connection is complete, you can add Firewalls.
Follow these steps to add a Firewall to the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource to which you want to add a Firewall on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Detail Information tab on the Transit Gateway Detail page. You will be navigated to the Detail Information tab page.
Click the Add button in the Firewall list on the Detail Information tab page. The Firewall addition window appears.
Select the purpose in the Firewall addition window and click the OK button. Verify that the resource item has been added to the Firewall list.
Deleting Firewall
After Firewall connection is complete, you can delete Firewalls.
Follow these steps to delete a Firewall from the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource from which you want to delete a Firewall on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Detail Information tab on the Transit Gateway Detail page. You will be navigated to the Detail Information tab page.
Click the Delete button in the Firewall list on the Detail Information tab page. A notification confirmation window appears.
Click the OK button in the notification confirmation window. Verify that the resource item has been deleted from the Firewall list.
Disconnecting Firewall
You can disconnect unused Firewall connections.
Caution
You can only disconnect connections when the Firewall service status is Active or Error.
Follow these steps to disconnect a Firewall connection from the Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource whose Firewall connection you want to disconnect on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Detail Information tab on the Transit Gateway Detail page. You will be navigated to the Detail Information tab page.
Click the Disconnect Firewall Connection button on the Detail Information tab page. A notification confirmation window appears.
Click the OK button in the notification confirmation window. Verify the disconnection status in the Firewall connection status item.
Deleting Transit Gateway
You can reduce operating costs by terminating unused Transit Gateways.
Caution
You cannot terminate the service when Uplink connected to Transit Gateway is in use or Firewall is connected. Complete the termination request for connected resources before terminating the service.
You cannot terminate the service when VPC resources or rules are connected to Transit Gateway. Delete all connected resources and rules before terminating the service.
You cannot terminate the service when the Transit Gateway service status is Creating or Deleting.
Follow these steps to terminate a Transit Gateway.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Transit Gateway menu on the Service Home page. You will be navigated to the Transit Gateway List page.
Click the resource you want to delete on the Transit Gateway List page. You will be navigated to the Transit Gateway Detail page.
Click the Delete Transit Gateway button on the Transit Gateway Detail page.
When termination is complete, verify that the resource has been deleted in the Transit Gateway List.
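The termination preconditions in the caution above can be summarized as a single check. This is a minimal sketch; the function and field names are assumptions, and the console enforces the real preconditions.

```python
def can_terminate_tgw(status: str, uplink_in_use: bool, firewall_connected: bool,
                      connected_vpcs: int, rules: int) -> bool:
    """Illustrative summary of the Transit Gateway termination preconditions."""
    if status in ("Creating", "Deleting"):
        return False          # cannot terminate in transitional states
    if uplink_in_use or firewall_connected:
        return False          # complete termination requests for linked resources first
    # All connected VPCs and rules must be deleted before termination
    return connected_vpcs == 0 and rules == 0
```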
Prerequisite Services
The following service must be set up before you create this service. Prepare it by referring to its user guide.
VPC: Service that provides an independent virtual network in the cloud environment.
Table. Transit Gateway Prerequisite Services
5.1.2.10 - PrivateLink Service
Users can create the PrivateLink Service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating PrivateLink Service
You can create and use the PrivateLink Service in the Samsung Cloud Platform Console.
Follow these steps to create a PrivateLink Service.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Create PrivateLink Service button on the Service Home page. You will be navigated to the Create PrivateLink Service page.
Enter or select the required information in the Enter Service Information section.
Division
Required
Detailed Description
PrivateLink Service Name
Required
Enter the PrivateLink Service name
Approval Method
Required
Select and enter PrivateLink Service approval method
Automatic: Automatically approve when PrivateLink Service connection request is received
Manual: Manually approve after verification when PrivateLink Service connection request is received
Approval method cannot be modified after creation
High-Speed Data Transfer
Optional
Default setting is disabled and not displayed in Samsung Cloud Platform Console
To use high-speed data transfer, apply for service usage at Support Center > Contact Us, and when processing is complete, you can select it on the screen
VPC Name
Required
Select the VPC to connect
Clicking + Create New allows you to create a VPC and then select it
Subnet Name
Required
Select the Subnet of the VPC to connect
Clicking + Create New allows you to create a Subnet and then select it
PrivateLink Service IP
Required
Enter PrivateLink Service IP after selecting the Subnet to connect
Cannot enter an IP already in use within the Subnet; the first and last IPs of the Subnet IP range cannot be used
Connected Resource
Required
Select the resource to connect to the selected VPC
Load Balancer: Select Load Balancer to connect (cannot select LB if using Local subnet)
IP: Enter Compute resource IP of the selected VPC
Security Group
Optional
Click the Select button to select the Security Group to connect
Can select up to 5
If Security Group is not selected, all access is blocked
Description
Optional
Enter a description for the PrivateLink Service
Table. PrivateLink Service Service Information Input Items
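The PrivateLink Service IP constraints above (within the selected Subnet, not already in use, not the first or last IP of the range) can be sketched with the standard `ipaddress` module. The function and parameter names are illustrative assumptions; the console performs the actual validation.

```python
import ipaddress

def is_valid_privatelink_ip(ip: str, subnet_cidr: str,
                            ips_in_use: set[str]) -> bool:
    """Illustrative pre-check of a PrivateLink Service IP."""
    net = ipaddress.ip_network(subnet_cidr)
    addr = ipaddress.ip_address(ip)
    if addr not in net:
        return False  # must belong to the selected Subnet
    # The first and last IPs of the Subnet IP range cannot be used
    if addr in (net.network_address, net.broadcast_address):
        return False
    return ip not in ips_in_use  # must not already be in use
```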
Enter or select the required information in the Enter Additional Information section.
Division
Required
Detailed Description
Tags
Optional
Add tags
Can add up to 50 tags per resource
Click the Add Tag button and then enter or select Key, Value values
Table. PrivateLink Service Additional Information Input Items
Review the detailed information and estimated billing cost in the Summary panel, and click the Create button.
When creation is complete, verify the created resource in the PrivateLink Service List page.
Note
The PrivateLink product provides a one-way private path (a type of tunnel). It is used by creating a PrivateLink Service (exit) in the service provider account and a PrivateLink Endpoint (entrance) in the user account, then connecting the Endpoint to the Service.
The connection conditions for PrivateLink product are as follows:
One PrivateLink Endpoint can only be connected to the single PrivateLink Service specified at the time of creation. (Only one pair of entrance and exit exists)
Cannot attempt session connection to PrivateLink Endpoint through PrivateLink Service. (One-way)
In the provider account, the PrivateLink Service is created to target a single IP, selected via one Load Balancer or entered directly.
In the user account, any client that the user account has allowed to access the PrivateLink Endpoint can use it.
Can be used in both General / Local Subnet.
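The one-to-one, one-way pairing described above can be modeled as a tiny sketch: an Endpoint is bound to exactly one Service at creation, and sessions only flow from Endpoint to Service. The class and method names below are illustrative assumptions, not a real SDK.

```python
class PrivateLinkService:
    """Exit side, created in the service provider account."""
    def __init__(self, service_id: str, target_ip: str):
        self.service_id = service_id
        self.target_ip = target_ip  # one LB or one directly entered IP

class PrivateLinkEndpoint:
    """Entrance side, created in the user account."""
    def __init__(self, endpoint_id: str, service: PrivateLinkService):
        # Bound to the single Service specified at creation (one pair)
        self.endpoint_id = endpoint_id
        self._service = service

    def connect(self) -> str:
        # Sessions flow only Endpoint -> Service, never the reverse
        return self._service.target_ip
```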
Viewing PrivateLink Service Detail Information
You can view and modify the entire resource list and detailed information of the PrivateLink Service. The PrivateLink Service Detail page consists of Detail Information, Connection Management, Tags, Task History tabs.
Follow these steps to view PrivateLink Service detail information.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the PrivateLink Service menu on the Service Home page. You will be navigated to the PrivateLink Service List page.
Click the resource for which you want to view detailed information on the PrivateLink Service List page. You will be navigated to the PrivateLink Service Detail page.
The PrivateLink Service Detail page displays status information and additional feature information, and consists of Detail Information, Connection Management, Tags, Task History tabs.
Division
Detailed Description
Status
PrivateLink Service status
Active: Running
Creating: Creating
Deleting: Deleting
Error: Error occurred
Delete PrivateLink Service
Button to delete PrivateLink Service resource
Table. PrivateLink Service Status Information and Additional Features
Detail Information
You can view the detailed information of the resource selected on the PrivateLink Service List page, and modify the information if necessary.
Division
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
PrivateLink Service resource name
Resource ID
Unique resource ID in the service
Creator
User who created the PrivateLink Service
Created At
Date and time when the PrivateLink Service was created
Modifier
User who modified the PrivateLink Service information
Modified At
Date and time when the PrivateLink Service information was modified
PrivateLink Service Name
PrivateLink Service resource name
PrivateLink Service ID
PrivateLink Service ID information
Connected Resource
Connected resource of PrivateLink Service
Clicking connected resource name navigates to detail page
PrivateLink Service IP
PrivateLink Service IP address
VPC Name
Connected VPC information
Clicking VPC name navigates to detail page
Subnet Name
Connected Subnet information
Clicking Subnet name navigates to detail page
Port Name
Port information of PrivateLink Service
Clicking Port name navigates to detail page
Security Group
Configured Security Group information
Clicking Security Group name navigates to detail page
Approval Method
Configured PrivateLink Service approval method
High-Speed Data Transfer
Whether high-speed data transfer is enabled for the PrivateLink Service
Description
Description of PrivateLink Service
Can modify description by clicking the Edit icon
Table. PrivateLink Service Detail Information Tab Items
Connection Management
You can view the connection information of the resource selected on the PrivateLink Service List page. You can verify connection requests and approve or reject them.
Division
Detailed Description
PrivateLink Service List
PrivateLink Service connection list
Can verify connection information and status, manage connections
Approve: Approve the connection request
Reject: Reject the connection request
Block: Block the connected PrivateLink Endpoint
Reconnect: Reconnect the blocked PrivateLink Endpoint
Cannot perform actions such as approval or rejection when the connection status is Rejected or Error
Table. PrivateLink Service Connection Management Tab Items
Tags
You can view the tag information of the resource selected on the PrivateLink Service List page, and add, change, or delete tags.
Division
Detailed Description
Tag List
Tag list
Can view tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. PrivateLink Service Tags Tab Items
Task History
You can view the task history of the resource selected on the PrivateLink Service List page.
Division
Detailed Description
Task History List
Resource change history
View task date and time, resource name, task details, task result, task operator information
Table. PrivateLink Service Task History Tab Detail Information Items
Deleting PrivateLink Service
You can reduce operating costs by terminating unused PrivateLink Services.
Caution
You cannot terminate the service when the status of a PrivateLink Endpoint connected to the PrivateLink Service is Active, Requesting, Creating, Deleting, or Error. Block or reject the PrivateLink Endpoint connection before deleting the PrivateLink Service.
Follow these steps to terminate a PrivateLink Service.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the PrivateLink Service menu on the Service Home page. You will be navigated to the PrivateLink Service List page.
Click the resource you want to delete on the PrivateLink Service List page. You will be navigated to the PrivateLink Service Detail page.
Click the Delete PrivateLink Service button on the PrivateLink Service Detail page.
When termination is complete, verify that the resource has been deleted in the PrivateLink Service List.
Prerequisite Services
The following service must be set up before you create this service. Prepare it by referring to its user guide.
Load Balancer: Service that distributes server traffic load in the cloud environment.
Table. PrivateLink Service Prerequisite Services
5.1.2.11 - PrivateLink Endpoint
Users can create the PrivateLink Endpoint service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating PrivateLink Endpoint
You can create and use the PrivateLink Endpoint service in the Samsung Cloud Platform Console.
Follow these steps to create a PrivateLink Endpoint.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the Create PrivateLink Endpoint button on the Service Home page. You will be navigated to the Create PrivateLink Endpoint page.
Enter or select the required information in the Enter Service Information section.
Division
Required
Detailed Description
PrivateLink Endpoint Name
Required
Enter the PrivateLink Endpoint name
VPC Name
Required
Select the VPC to connect
Clicking + Create New allows you to create a VPC and then select it
Subnet Name
Required
Select the Subnet of the VPC to connect
Clicking + Create New allows you to create a Subnet and then select it
PrivateLink Endpoint IP
Required
Enter PrivateLink Endpoint IP after selecting the Subnet to connect
Cannot enter IPs already in use within the Subnet, cannot use the first/last IP of Subnet IP range
PrivateLink Endpoint ID
Required
Enter the PrivateLink Service ID to connect
Enter within 3 to 60 characters using English letters and numbers
You must verify the Service ID of the target PrivateLink Service before applying for the service, and deliver the Endpoint ID to the service provider after the Endpoint is created
Security Group
Optional
Click the Select button to select the Security Group to connect
Can select up to 5
If Security Group is not selected, all access is blocked
Description
Optional
Enter a description for the PrivateLink Endpoint
Table. PrivateLink Endpoint Service Information Input Items
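The IP and ID constraints above can be pre-checked programmatically. The sketch below is illustrative only (the function names and checks are assumptions, not a Samsung Cloud Platform API): it uses Python's `ipaddress` module to reject IPs outside the Subnet or at the first/last position of its range, and a regular expression for the 3-to-60-character alphanumeric Service ID rule.

```python
import ipaddress
import re

def validate_endpoint_ip(subnet_cidr: str, endpoint_ip: str, in_use: set[str]) -> bool:
    """Check that an Endpoint IP is usable within the Subnet:
    it must fall inside the range, must not be the first or last
    address of the range, and must not already be in use.
    (Hypothetical helper for illustration.)"""
    net = ipaddress.ip_network(subnet_cidr)
    ip = ipaddress.ip_address(endpoint_ip)
    if ip not in net:
        return False
    if ip in (net.network_address, net.broadcast_address):
        return False  # first/last IP of the Subnet IP range cannot be used
    return endpoint_ip not in in_use

def validate_service_id(service_id: str) -> bool:
    """PrivateLink Service ID: 3 to 60 characters, English letters and numbers."""
    return re.fullmatch(r"[A-Za-z0-9]{3,60}", service_id) is not None
```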
Enter or select the required information in the Enter Additional Information section.
Division
Required
Detailed Description
Tags
Optional
Add tags
Can add up to 50 tags per resource
Click the Add Tag button and then enter or select Key, Value values
Table. PrivateLink Endpoint Additional Information Input Items
Review the detailed information and estimated billing cost in the Summary panel, and click the Create button.
When creation is complete, verify the created resource in the PrivateLink Endpoint List page.
Note
To request a connection to the service provider through PrivateLink, you must go through an approval process.
When applying for service connection, you must verify the PrivateLink Service ID that is the connection target in advance.
Usage agreement with the service provider must be completed before service application.
After the user creates the PrivateLink Endpoint, they must deliver the Endpoint ID to the service provider. The service provider can verify the user’s Endpoint ID and proceed with usage approval quickly.
Viewing PrivateLink Endpoint Detail Information
You can view and modify the entire resource list and detailed information of the PrivateLink Endpoint service. The PrivateLink Endpoint Detail page consists of Detail Information, Connection Management, Tags, Task History tabs.
Follow these steps to view PrivateLink Endpoint detail information.
Click the All Services > Networking > VPC menu. You will be navigated to the VPC’s Service Home page.
Click the PrivateLink Endpoint menu on the Service Home page. You will be navigated to the PrivateLink Endpoint List page.
Click the resource for which you want to view detailed information on the PrivateLink Endpoint List page. You will be navigated to the PrivateLink Endpoint Detail page.
The PrivateLink Endpoint Detail page displays status information and additional feature information, and consists of Detail Information, Connection Management, Tags, Task History tabs.
VPC: Service that provides an independent virtual network in the cloud environment.
Table. PrivateLink Endpoint Prerequisite Services
5.1.2.12 - NAT Logging
To save NAT logs, first create a bucket in Object Storage to store the logs, then set that bucket as the log storage in NAT Logging. After that, enable log saving in the NAT detail view, and NAT logs will be saved to the Object Storage bucket.
NAT log saving requires settings in the following order.
To save NAT logs, you can create a bucket in Object Storage or use an existing bucket. To create a bucket, refer to Creating Object Storage.
To enable the log storage in the NAT detail view, refer to Using NAT Log Storage.
Using the NAT Logging Log Storage
To use the NAT log storage, you must first configure the log storage in NAT Logging.
Reference
To set up NAT Logging log storage, an Object Storage bucket for log storage is required. First, create a bucket in the Object Storage service.
For more detailed information, refer to Creating Object Storage.
Click the All Services > Management > Network Logging > NAT Logging menu. You will be navigated to the NAT Logging List page.
On the NAT Logging List page, click the Log Storage Settings button at the top. The Log Storage Settings popup window will open.
In the Log Storage Settings popup window, select the Log Storage Bucket. When you select a bucket, the Log Storage Path will be displayed.
In the Log Storage Settings popup window, check the Log Storage Bucket and Log Storage Path, then click the Confirm button.
Confirm the message in the Notification popup window, then click the Confirm button.
Notice
After setting up NAT Logging log storage, you must enable log saving in the NAT detail view for log storage to start.
For more detailed information, refer to Using NAT Log Storage.
NAT Logging List
After the NAT Logging log storage bucket is set, you can view the NAT Logging list.
Click the All Services > Management > Network Logging > NAT Logging menu. You will be navigated to the NAT Logging List page.
Division
Required
Detailed Description
Resource ID
Required
NAT Resource ID
Storage Target
Required
NAT resource name
Storage Registration Date
Required
NAT Log Storage Registration Date
Table. NAT Logging list items
Reference
After setting up NAT Logging log storage, you must enable log saving in the NAT detail view for log storage to start.
For more detailed information, refer to Using NAT Log Storage.
Checking NAT Logging Content
Refer to the following content to check the stored log content.
Category
Description
2024-10-11T11:19:03
Date and time when the log occurred (2024-10-11, 11:19:03)
accept
Action (deny / accept)
259
Firewall Rule ID (Policy ID) that generated the log
17
IP Protocol ID
1: ICMP
6: TCP
17: UDP
192.168.2.173
Source IP
46937
Source Port
192.168.0.53
Destination IP
53
Destination Port
100.100.14.52
NAT translated IP
26937
NAT translated Port
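As a sketch, the fields above can be parsed from a stored log line. The comma-separated layout and field order are assumptions modeled on the Security Group log examples later in this guide; the actual NAT log format may differ.

```python
from typing import NamedTuple

class NatLogRecord(NamedTuple):
    timestamp: str
    action: str     # deny / accept
    rule_id: str    # firewall Rule ID (Policy ID)
    protocol: str   # IP protocol number: 1=ICMP, 6=TCP, 17=UDP
    src_ip: str
    src_port: str
    dst_ip: str
    dst_port: str
    nat_ip: str     # NAT translated IP
    nat_port: str   # NAT translated Port

def parse_nat_log(line: str) -> NatLogRecord:
    """Split one comma-separated NAT log line into its fields,
    in the order listed in the table above (assumed format)."""
    return NatLogRecord(*line.strip().split(","))
```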
Disabling NAT Logging Log Storage
You can set the NAT Logging log storage to not be used.
Click the All Services > Management > Network Logging > NAT Logging menu. You will be navigated to the NAT Logging List page.
On the NAT Logging List page, click the Log Storage Settings button at the top. The Log Storage Settings popup window will open.
In the Log Storage Settings popup window, select Do Not Use for the Log Storage Bucket, and click the Confirm button.
Reference
The log storage setting can be changed only when there is no log storage target.
To change the log storage bucket, select Do Not Use, confirm, and then set it again.
5.1.3 - API Reference
API Reference
5.1.4 - CLI Reference
CLI Reference
5.1.5 - Release Note
VPC
2026.03.19
FEATURE: VPC New Features Added
VPC IP Range Addition Feature
You can add and use a new IP range to the VPC.
Virtual IP Feature
You can reserve and use a Virtual IP in a Subnet.
Private NAT Feature Improvement
You can now use Private NAT in Transit Gateway as well.
2025.10.23
FEATURE: PrivateLink Feature Added
You can connect via a private path between the VPC and SCP services without exposing internal Samsung Cloud Platform data to the internet.
2025.07.01
FEATURE: Transit Gateway and Other New Services Added
Transit Gateway Feature
Easily connects customer networks and Samsung Cloud Platform’s networks and acts as a connection hub for multiple VPCs within the cloud environment.
VPC Peering Feature
Allows IP communication via 1:1 private routes between VPCs.
Private NAT Feature
Allows compute resources within a VPC to connect to customer networks over Direct Connect by mapping customer network IPs.
2025.02.27
FEATURE: VPC Endpoint Service Added
VPC Feature
Provides an endpoint (entry point) that allows access to Samsung Cloud Platform through a private connection from an external network connected to the VPC.
Samsung Cloud Platform Common Feature Changes
Reflected common CX changes such as Account, IAM, Service Home, and tags.
2024.12.23
FEATURE: NAT Log Storage Feature Added
Added the ability to store NAT logs.
You can decide whether to store NAT logs and store logs in Object Storage.
2024.10.01
NEW: VPC Service Official Version Release
VPC service providing independent virtual network spaces has been released.
2024.07.02
NEW: Beta Version Release
VPC service providing independent virtual network spaces has been released.
5.2 - Security Group
5.2.1 - Overview
Service Overview
Security Group is a virtual logical firewall that controls Inbound/Outbound traffic occurring in the virtual server of Samsung Cloud Platform.
The target resources that can apply Security Group are Virtual Server, Database, Kubernetes Engine, etc. Security Group is applied to the port of the target resource, and multiple Security Groups can be applied according to the characteristics of each resource.
When the Security Group is created for the first time, it blocks all Inbound/Outbound traffic according to the default rules (Any/Deny).
The user can create Inbound/Outbound rules by specifying the IP address, port, and protocol, and only traffic allowed by the created rules can reach the target resource.
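The default-deny behavior described above can be sketched in a minimal illustrative model (this is not the platform's implementation): traffic passes only if some rule's direction, protocol, port range, and remote CIDR all match; with no matching rule, it is blocked (Any/Deny).

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    direction: str  # "inbound" or "outbound"
    cidr: str       # remote address range, e.g. "10.0.0.0/8"
    protocol: str   # "tcp", "udp", "icmp"
    ports: range    # allowed port range

def is_allowed(rules: list[Rule], direction: str, remote_ip: str,
               protocol: str, port: int) -> bool:
    """Default deny: traffic passes only if some rule explicitly allows it."""
    for r in rules:
        if (r.direction == direction
                and r.protocol == protocol
                and port in r.ports
                and ipaddress.ip_address(remote_ip) in ipaddress.ip_network(r.cidr)):
            return True
    return False  # no matching rule -> blocked (Any/Deny)
```

With an empty rule list, every call returns False, which mirrors the initial state of a newly created Security Group.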
Figure. Security Group Configuration Diagram
Component
The elements that make up the Security Group are as follows.
Component
Detailed Description
Applicable Target
The target resource to which the Security Group is applied
Apply Security Group to Virtual Server, Database, Kubernetes Engine, Load Balancer
Security Group is applied to the port of the target resource, and multiple Security Groups can be applied according to the characteristics of each resource
Security Group rules
When a Security Group is first created, it follows the default rules (Any/Deny) and blocks all Inbound/Outbound traffic
Even Ping and SSH communication between servers in the same subnet is blocked until the user sets the necessary rules
Inbound/Outbound allowance rules can be added by setting the target address, protocol, and port
Block rules cannot be set
Bulk creation of rules is provided through form creation
Fig. Security Group Components
Constraints
The Security Group of Samsung Cloud Platform has default quotas (limits): there is a maximum number of Security Groups and Security Group rules that can be created. In the Samsung Cloud Platform Console, you can check and manage quotas for resources related to Samsung Cloud Platform services and request quota increases.
Classification
Basic Quota
Detailed Description
Security Group
100
The default number of Security Groups created per Account
Number of Security Group rules
100
Default rule creation limit per Security Group
Number of Security Group rules > Per Account
1,000
Default number of Security Group rules that can be created per Account
Table. Security Group Restrictions
Prerequisite Services
Security Group has no prerequisite services.
5.2.2 - How-to guides
You can create the Security Group service by entering essential information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Security Group
You can create and use the Security Group service through the Samsung Cloud Platform Console.
Follow these steps to create a Security Group:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Create Security Group button. You will be navigated to the Create Security Group page.
In the Service Information area, enter the required information.
Item
Required
Detailed Description
Security Group Name
Required
Security Group name to create
Can use uppercase/lowercase English letters, numbers, and the special character (-), and can enter up to 255 characters
Duplicate Security Group names can be used within a project
Log Storage
Optional
Select whether to store Security Group logs
Use: Store logs
Do Not Use: Do not store logs
Clicking Go to Security Group Logging List navigates to the Security Group Logging list page
Table. Security Group service information input items
Note
To store Security Group logs, you must first create a bucket in Object Storage to store logs, and set that bucket in the Security Group Logging’s log storage.
You can check the log storage settings in Security Group Logging. For details, refer to Security Group Logging.
If you set up log storage, Object Storage charges for log storage will be applied.
In the Additional Information area, enter or select the required information.
Item
Required
Detailed Description
Tag
Optional
Add tag
Can add up to 50 tags per resource
After clicking the Add Tag button, enter or select Key, Value values
Description
Optional
User additional description
Can enter up to 255 characters
Table. Security Group additional information input items
Review the entered information and click the Create button.
When creation is complete, verify the created resource on the Security Group List page.
Viewing Security Group Detailed Information
On the Security Group List page of the Security Group menu, you can view and modify the entire resource list and detailed information.
Follow these steps to view detailed information of the Security Group:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group list page.
On the Security Group List page, click the resource for which you want to view detailed information. You will be navigated to the Security Group Detail page.
The Security Group Detail page displays status information and additional feature information, and consists of Detailed Information, Rules, Tags, Task History tabs.
Item
Detailed Description
Service Status
Status of Security Group
Creating: Creating
Active: Normally operating
Editing: Changing settings
Deploying: Deployment in progress
Deleting: Terminating
Error: Error occurred
Service Termination
Button to terminate the service
Table. Security Group status information and additional features
Detailed Information
You can view detailed information of the resource selected from the Security Group List and modify information if necessary.
Item
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date when the service was created
Modifier
User who modified the service information
Modification Date
Date when the service information was modified
Security Group Name
Resource name
Security Group ID
Unique resource ID in the service
Security Group Rule Count
Rule quota for the Security Group and number of rules in use
Security Group Rule Count/Account
Security Group rule quota for the Account and sum of rules in use in all Security Groups of the Account
Description
Additional description written by the user
Can modify by clicking the Edit icon
Log Storage
Whether to store Security Group logs
Use: Store logs
Do Not Use: Do not store logs
Can modify settings by clicking the Edit icon
Applied Services
Service type, service name, and status value of services where the Security Group is applied
Table. Security Group detailed information tab items
Rules
You can view the rule list of the resource selected from the Security Group List page and add or delete rules.
Item
Detailed Description
Excel Download
Download button for rule batch input Excel file
More
Additional feature button
Batch Rule Input: Upload Excel file for batch rule input
Delete: Delete selected rules
Advanced Search
Rule advanced search button
Add Rule
Add rule button
Direction
Traffic access direction based on the server where Security Group is applied
Inbound: External → Server
Outbound: Server → External
Rule ID
Unique ID value for the rule
Destination Address
Destination address to communicate with the server where Security Group is applied
Remote Security Group Name
Security Group resource name displayed when specifying the destination as a Security Group
Remote Security Group ID
Security Group ID displayed when specifying the destination as a Security Group
Service
Protocol and port
Description
Additional description written by the user
Delete
Delete rule
Table. Security Group rules tab items
Tags
You can view, add, modify, or delete tag information for the resource selected from the Security Group List page.
Item
Detailed Description
Tag List
Tag list
Can view Key, Value information of the tag
Can add up to 50 tags per resource
When entering a tag, search and select from the list of previously created Keys and Values
Table. Security Group tags tab items
Task History
You can view the task history of the resource selected from the Security Group List page.
Item
Detailed Description
Task History List
Resource change history
View task date, resource name, task details, task result, task user information
Table. Task history tab items
Managing Security Group Resources
You can manage Security Group resources such as log storage settings, adding rules, etc.
Using Log Storage
Note
To store Security Group logs, you must first create a bucket in Object Storage to store logs, and set that bucket in the Security Group Logging’s log storage.
You can check the log storage settings in Security Group Logging. For details, refer to Security Group Logging.
If you set up log storage, Object Storage charges for log storage will be applied.
Follow these steps to store Security Group logs:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group list page.
On the Security Group List page, click the resource (Security Group name) to store logs. You will be navigated to the Security Group Detail page.
Click the Edit icon on Log Storage. You will be navigated to the Modify Log Storage popup window.
In the Modify Log Storage popup window, select Use for log storage and click the OK button.
Caution
If the log storage is not set up in Security Group Logging, you cannot set log storage to Use.
Setting Log Storage to Do Not Use
Follow these steps to stop storing Security Group logs:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group list page.
On the Security Group List page, click the resource (Security Group name) to not store logs. You will be navigated to the Security Group Detail page.
Click the Edit icon on Log Storage. You will be navigated to the Modify Log Storage popup window.
In the Modify Log Storage popup window, deselect Use for log storage and click the OK button.
Review the message in the Notification popup window and click the OK button.
Caution
If you disable log storage, log storage for that service will stop, and you cannot track and manage through log analysis in case of a security incident.
Adding a Rule
Follow these steps to add a Security Group rule:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group list page.
On the Security Group List page, click the resource (Security Group name) to add a rule. You will be navigated to the Security Group Detail page.
On the Security Group Detail page, click the Rules tab. You will be navigated to the Rules tab page.
On the Rules tab, click the Add Rule button. You will be navigated to the Add Rule popup window.
Item
Required
Detailed Description
Destination Input Method
Required
Set rule remote type
CIDR: Set destination address by entering IP directly
Security Group: Set created Security Group as destination
Remote > Destination Address
Required
When CIDR is selected, need to enter destination IP address
Enter in CIDR (IP address/subnet mask) format
Can enter up to 100 addresses at once, separated by commas (,) and ranges (-)
To use the entire IP range (ANY), enter ‘0.0.0.0/0’
Remote > Security Group
Required
When Security Group is selected, need to select Security Group
Type
Required
Select protocol type to apply the rule
Select Destination Port/Type: Select protocol type
Internet Protocol: Enter protocol number, can enter up to 100
All: Sets the destination port/Type and protocol to the full range, meaning all ports for all protocols
Type > Protocol
Required
Select detailed protocol for type
Select desired protocol from TCP, UDP, ICMP, input items vary depending on the selected protocol
When selecting ICMP in protocol, can set ICMP Type
Select frequently used Type items such as Echo from values defined in ICMP Type
Click the Add button to add input value
When selecting TCP/UDP in protocol, can select allowed ports such as SSH, HTTP
When entering directly, values from 1 to 65,535 can be entered, and up to 100 can be entered at once using commas (,) and ranges (-)
Click the Add button to add the input value
When selecting Internet Protocol as the type, enter a protocol number from 1 to 254
Direction
Required
Set traffic access direction based on the application target
Inbound Rule: External → Server
Outbound Rule: Server → External
Description
Optional
Additional description written by the user
Table. Security Group rule addition detailed items
Review the rule to add and click the OK button.
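The destination-address constraints above (CIDR format, up to 100 entries, ‘0.0.0.0/0’ for the entire range) can be pre-checked before entering them in the console. A minimal sketch, assuming comma-separated entries; range input with ‘-’ is not handled here.

```python
import ipaddress

MAX_ENTRIES = 100  # console limit for one Add Rule input

def parse_destinations(text: str) -> list[str]:
    """Split a comma-separated destination field into CIDR entries
    and validate each one; '0.0.0.0/0' means the entire range (ANY).
    Raises ValueError on a malformed CIDR or too many entries."""
    entries = [e.strip() for e in text.split(",") if e.strip()]
    if len(entries) > MAX_ENTRIES:
        raise ValueError(f"at most {MAX_ENTRIES} entries per rule")
    for e in entries:
        ipaddress.ip_network(e, strict=False)  # raises ValueError if malformed
    return entries
```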
Batch Creating Rules
Follow these steps to add multiple Security Group rules at once:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group list page.
On the Security Group List page, click the resource (Security Group name) to add rules. You will be navigated to the Security Group Detail page.
On the Security Group Detail page, click the Rules tab. You will be navigated to the Rules tab page.
On the Rules tab, click the Excel Download button. The rule batch input Excel file will be downloaded.
Enter rule information in the rule batch input Excel file and save it.
Click the More > Batch Rule Input button. The Batch Rule Input popup window will open.
In the Batch Rule Input popup window, click Attach File to attach the created Excel file and click Upload File.
If the attached Excel file format differs from the registration form or the file is encrypted, it cannot be uploaded.
The maximum number of batch registration rules that can be uploaded at once is 100. If the maximum registration rule count is exceeded, it cannot be uploaded.
If the maximum number of rules that can be registered in the Account is exceeded, the file cannot be uploaded.
Review the details in the Rule Confirmation popup window and click the OK button.
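The upload limit above (100 rules per batch file) can be checked before attaching the file. This sketch assumes, for illustration, that the batch form has been exported to CSV with a single header row; the console itself accepts the Excel form.

```python
import csv
import io

MAX_BATCH_RULES = 100  # maximum rules per batch upload

def count_batch_rules(csv_text: str) -> int:
    """Count data rows in a batch-rule file (CSV export assumed here)
    and raise if the per-upload limit is exceeded."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    n = len(rows) - 1  # first row is the header
    if n > MAX_BATCH_RULES:
        raise ValueError(f"at most {MAX_BATCH_RULES} rules per upload")
    return n
```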
Deleting a Rule
Follow these steps to delete a Security Group rule:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group list page.
On the Security Group List page, click the resource (Security Group name) from which to delete a rule. You will be navigated to the Security Group Detail page.
On the Security Group Detail page, click the Rules tab. You will be navigated to the Rules tab page.
On the Rules tab, click the Delete button of the rule to delete.
Terminating Security Group
You can delete a Security Group that is not in use.
Caution
If there are resources connected to the Security Group, you cannot terminate the Security Group service. Delete all connected resources and then terminate the service.
Follow these steps to terminate the Security Group:
Click the All Services > Networking > Security Group menu. You will be navigated to the Security Group’s Service Home page.
On the Service Home page, click the Security Group menu. You will be navigated to the Security Group List page.
On the Security Group List page, select the resource (Security Group name) to terminate the service and click the Terminate Service button.
When termination is complete, verify that the resource has been deleted on the Security Group List page.
5.2.2.1 - Security Group Logging
To store Security Group logs, you must first create a bucket in Object Storage to store the logs and then set the bucket as the log storage for Security Group Logging. After that, you can enable log storage in the Security Group details, and Security Group logs will start being stored in the Object Storage bucket.
To store Security Group logs, you need to follow these steps:
You can create a new bucket in Object Storage for storing Security Group logs or use an existing bucket. To create a bucket, refer to Creating Object Storage.
To enable Security Group log storage, you must first set up the log storage in Security Group Logging.
Note
To set up Security Group Logging log storage, you need an Object Storage bucket for log storage. First, create a bucket in the Object Storage service.
For more information, refer to Creating Object Storage.
Click All Services > Management > Network Logging > Security Group Logging. You will be taken to the Security Group Logging List page.
On the Security Group Logging List page, click the Log Storage Settings button at the top. You will be taken to the Log Storage Settings popup window.
In the Log Storage Settings popup window, select the Log Storage Bucket. After selecting the bucket, the Log Storage Path will be displayed.
In the Log Storage Settings popup window, confirm the Log Storage Bucket and Log Storage Path, and then click the Confirm button.
Confirm the message in the Notification popup window and click the Confirm button.
Guide
After setting up Security Group Logging log storage, you must enable log storage in the Security Group details for log storage to start.
For more information, refer to Enabling Security Group Log Storage.
Security Group Logging List
After setting up the Security Group Logging log storage bucket, you can view the Security Group Logging list.
Click All Services > Management > Network Logging > Security Group Logging. You will be taken to the Security Group Logging List page.
Category
Required
Description
Resource ID
Required
Security Group ID
Storage Target
Required
Security Group Name
Storage Registration Date
Required
Security Group Log Storage Registration Date
Table. Security Group Logging List Items
Note
After setting up Security Group Logging log storage, you must enable log storage in the Security Group details for log storage to start.
For more information, refer to Enabling Security Group Log Storage.
Checking Security Group Logging Content
Refer to the following content to check the stored log content.
TCP / UDP
Example of stored log: 2024-10-11T02:18:39,drop,to-lport: tcp,198.19.65.2,6443,192.168.22.131,20427
Category
Description
2024-10-11T02:18:39
Date and time when the log occurred (2024-10-11, 02:18:39)
drop
Action (drop / allow)
to-lport
Direction
to-lport: inbound
from-lport: outbound
tcp
Protocol (tcp / udp / icmp / ip)
198.19.65.2
Source IP
6443
Source Port
192.168.22.131
Destination IP
20427
Destination Port
ICMP
Example of stored log: 2024-10-11T02:18:39,allow,to-lport: icmp,192.168.65.2,192.168.22.131,8
Category
Description
2024-10-11T02:18:39
Date and time when the log occurred (2024-10-11, 02:18:39)
allow
Action (drop / allow)
to-lport
Direction
to-lport: inbound
from-lport: outbound
icmp
Protocol (tcp / udp / icmp / ip)
192.168.65.2
Source IP
192.168.22.131
Destination IP
8
ICMP Type ID
IP
Example of stored log: 2024-10-11T02:18:39,drop,ip,192.168.65.2,192.168.22.131,103
Category
Description
2024-10-11T02:18:39
Date and time when the log occurred (2024-10-11, 02:18:39)
drop
Action (drop / allow)
ip
Protocol
192.168.65.2
Source IP
192.168.22.131
Destination IP
103
IP Protocol ID
1: ICMP
6: TCP
17: UDP
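A sketch for parsing the stored log lines shown above. Splitting the combined "direction: protocol" field on ": " is inferred from the documented examples; treat the exact field layout as an assumption.

```python
def parse_sg_log(line: str) -> dict:
    """Parse one Security Group log line in the TCP/UDP, ICMP, or IP
    formats shown in the examples above (assumed layout)."""
    fields = [f.strip() for f in line.strip().split(",")]
    ts, action = fields[0], fields[1]
    if ": " in fields[2]:
        direction, proto = fields[2].split(": ")
    else:
        direction, proto = None, fields[2]  # IP example carries no direction
    rec = {"timestamp": ts, "action": action,
           "direction": direction, "protocol": proto}
    if proto in ("tcp", "udp"):
        rec.update(src_ip=fields[3], src_port=fields[4],
                   dst_ip=fields[5], dst_port=fields[6])
    elif proto == "icmp":
        rec.update(src_ip=fields[3], dst_ip=fields[4], icmp_type=fields[5])
    else:  # "ip"
        rec.update(src_ip=fields[3], dst_ip=fields[4], ip_protocol=fields[5])
    return rec
```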
Disabling Security Group Logging Log Storage
You can disable Security Group Logging log storage.
Click All Services > Management > Network Logging > Security Group Logging. You will be taken to the Security Group Logging List page.
On the Security Group Logging List page, click the Log Storage Settings button at the top. You will be taken to the Log Storage Settings popup window.
In the Log Storage Settings popup window, select Do not use for the Log Storage Bucket, and then click the Confirm button.
Note
Log storage settings can be changed only when there is no log storage target.
To change the log storage bucket, select Do not use, confirm, and then set it again.
5.2.3 - API Reference
API Reference
5.2.4 - CLI Reference
CLI Reference
5.2.5 - Release Note
Security Group
2026.03.19
FEATURE: Security Group Feature Improvement
Can select multiple service ports when adding Security Group rules
Improved to allow selecting multiple service ports when adding rules in the Console.
2025.07.01
FEATURE: Security Group Rule Input Method Addition
Security Group rule input method added
Added the ability to enter IP protocol.
Added the ability to select well-known protocols.
2025.02.27
FEATURE: Common Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes such as Account, IAM and Service Home, and tags.
2025.02.27
CHANGED: Security Group Feature Improvement
Improved to allow entering multiple IPs when adding Security Group rules.
2024.12.23
FEATURE: Security Group Log Storage Feature Added
Added the ability to store Security Group logs.
Can determine whether to store Security Group logs and store logs in Object Storage.
2024.10.01
NEW: Security Group Service Official Version Release
Released the Security Group service that provides virtual firewall functionality for instance resources.
Can control inbound and outbound traffic occurring in instance resources through the Security Group service.
2024.07.02
NEW: Beta Version Release
Released the Security Group service that provides virtual firewall functionality for instance resources.
Can control inbound and outbound traffic occurring in instance resources through the Security Group service.
5.3 - Load Balancer
5.3.1 - Overview
Service Overview
The Load Balancer (LB) service of Samsung Cloud Platform automatically distributes traffic to available servers when traffic increases unpredictably or server failures occur, ensuring the stability and continuity of customer services.
The Load Balancer is deployed in the VPC Subnet according to the service type (L4 / L7) as a service access point provided to clients, and multiple services can be configured by adding Listeners to the created Load Balancer.
The Listener receives client requests through the service port and processes traffic according to routing rules. L4 supports TCP / UDP / TLS protocols, and L7 supports HTTP / HTTPS protocols. In L7, you can specify LB server groups according to routing conditions or set redirect responses for request URLs.
The LB server group delivers requests received by the Listener to specific servers according to load balancing and health checks. Servers receive client requests from the Load Balancer’s Source NAT IP through the port set on the member, and the server status is periodically monitored by the Load Balancer’s health check IP.
The LB health check defines the member health check method registered in the LB server group. You can select the LB health check resource provided by default in the LB server group, or create a new one to configure monitoring suitable for your application.
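The L7 routing behavior described above (forwarding to an LB server group by routing condition, or returning a redirect response for a request URL) can be sketched as follows. The rule shape and prefix matching here are hypothetical simplifications, not the Load Balancer's actual rule model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoutingRule:
    path_prefix: str                    # condition on the request URL path
    server_group: Optional[str] = None  # forward to this LB server group
    redirect_to: Optional[str] = None   # or return a redirect response

def route(rules: list[RoutingRule], path: str) -> tuple:
    """Return ('forward', group) or ('redirect', url) for the first
    matching rule; fall back to a default group (illustrative only)."""
    for r in rules:
        if path.startswith(r.path_prefix):
            if r.redirect_to is not None:
                return ("redirect", r.redirect_to)
            return ("forward", r.server_group)
    return ("forward", "default")
```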
Features
Various Load Balancing Methods: Provides various load balancing methods such as Round Robin, Least Connection, and IP Hash.
SSL Certificate Encryption and Offloading: Supports SSL offloading and allows selection of encryption levels.
Enhanced Security: Manage Load Balancer communication using Firewall and view access logs through log storage.
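The three load balancing methods named above can be sketched as follows. This is an illustrative model, not the Load Balancer's implementation; the IP Hash variant uses MD5 purely as an example hash function.

```python
import itertools
from hashlib import md5

class RoundRobin:
    """Cycle through servers in order."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
    def pick(self, client_ip=None):
        return next(self._cycle)

class LeastConnection:
    """Send each request to the server with the fewest active connections."""
    def __init__(self, servers):
        self.conns = {s: 0 for s in servers}
    def pick(self, client_ip=None):
        server = min(self.conns, key=self.conns.get)
        self.conns[server] += 1
        return server

class IpHash:
    """Hash the client IP so the same client lands on the same server."""
    def __init__(self, servers):
        self.servers = servers
    def pick(self, client_ip):
        h = int(md5(client_ip.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]
```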
Service Configuration Diagram
Figure. Load Balancer Configuration Diagram
Provided Functions
Load Balancer: Select the service type and set the IP to use in the Load Balancer.
Listener: Set the protocol, port, and routing rules. You can add multiple Listeners to a single Load Balancer.
LB Server Group: Set the load balancing method. The LB server group can be connected to a single Load Balancer.
Member: Select the server to add to the LB server group. You can select Virtual Server and Bare Metal Server resources created in the same VPC as the Load Balancer, or directly enter an IP.
LB Health Check: Set the member health check method. The LB health check can be registered and used in multiple LB server groups.
Components
The Load Balancer consists of Load Balancer (Listener), LB server group (member), and LB health check.
Load Balancer
The components that make up the Load Balancer are as follows. According to the settings for each component, you can configure load balancing suitable for customer workloads.
Component
Description
Service Type
Load Balancer service type
Listener protocol distinction according to L4 / L7
Service Subnet
VPC Subnet where the Load Balancer will be deployed
Allocate Service IP, Source NAT IP, and Health Check IP required for Load Balancer in the Subnet range
Service IP
Service IP that clients access
Source NAT IP
IP used by Load Balancer to deliver server traffic
Health Check IP
IP used by Load Balancer for health checks
Listener
Resources connected to Load Balancer
Set protocol, port, and LB server group
Table. Load Balancer Components
LB Server Group
The components that make up the LB server group are as follows. According to the settings for each component, traffic is delivered to members of the LB server group.
Component
Description
Protocol
LB server group delivery protocol
Load Balancing
Traffic distribution method
Deliver traffic to specific members according to load balancing method
LB Health Check
Member health check method
Select from the list of resources created in LB health check
Member
Server that processes client requests
Set weight according to load balancing or modify activation status
Table. LB Server Group Components
LB Health Check
The components that make up the LB health check are as follows. According to the settings for each component, member health checks are performed.
Component
Description
Protocol
Health check protocol
Health Check Port
Port used for health check
Interval
Health check execution interval
Timeout
Server response wait time for health check
Detection Count
Criteria for determining member health check status (Healthy / Unhealthy)
Table. LB Health Check Components
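Taken together, the components above describe a probe loop: every Interval a probe is sent, a probe fails if no response arrives within Timeout, and a member flips between Healthy and Unhealthy after Detection Count consecutive opposite results. A minimal sketch of that logic (the platform's exact algorithm is not specified here):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float) -> bool:
    """One TCP probe: succeeds if the connection opens within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class HealthCheck:
    """Tracks a member's health from probe results (simplified sketch)."""
    def __init__(self, detection_count: int = 3):
        self.detection_count = detection_count
        self.healthy = True
        self._streak = 0  # consecutive probes disagreeing with current state

    def record(self, probe_ok: bool) -> bool:
        # Flip the state only after detection_count consecutive opposite results.
        if probe_ok == self.healthy:
            self._streak = 0
        else:
            self._streak += 1
            if self._streak >= self.detection_count:
                self.healthy = probe_ok
                self._streak = 0
        return self.healthy

hc = HealthCheck(detection_count=3)
results = [hc.record(ok) for ok in (False, False, False, True, True, True)]
# the member goes Unhealthy after 3 failures, Healthy again after 3 successes
```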
Constraints
Samsung Cloud Platform’s Load Balancer applies basic quotas, so there are constraints on the number of Load Balancers, Listeners, LB server groups, and members that can be created. You can manage current usage through the Console and request additional quotas for items that can be expanded.
Item
Basic Quota
Description
LOAD_BALANCER.SERVICE_SUBNET.DEFAULT.COUNT
3
Number of Service Subnets where Load Balancers can be created per VPC
LOAD_BALANCER.DEFAULT.COUNT
50
Number of Load Balancers created per Region
LOAD_BALANCER.LISTENER.DEFAULT.COUNT
1000
Number of Listeners created per Region
LOAD_BALANCER.SERVER_GROUP.DEFAULT.COUNT
1000
Number of LB server groups created per Region
LOAD_BALANCER.MEMBER.DEFAULT.COUNT
1000
Number of members that can be registered in all LB server groups per Region
LOAD_BALANCER.HEALTH_CHECK.DEFAULT.COUNT
500
Number of LB health checks created per Region
Table. Load Balancer Constraints
Prerequisite Services
This is a list of services that must be pre-configured before creating the Load Balancer service. Please prepare in advance by referring to the guides provided for each service.
VPC
Service that provides independent virtual networks in the cloud environment
Table. Load Balancer Prerequisite Services
5.3.2 - How-to guides
You can create a Load Balancer service by entering essential information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Load Balancer
You can create and use a Load Balancer service through the Samsung Cloud Platform Console.
Follow these steps to create a Load Balancer:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Create Load Balancer button. You will be navigated to the Create Load Balancer page.
On the Create Load Balancer page, enter the information required for service creation and select detailed options.
In the Service Information area, enter or select the required information.
Item
Required
Detailed Description
Load Balancer Name
Required
Load Balancer resource name
Enter 3 to 63 characters using uppercase/lowercase English letters, numbers, and special characters (-, _)
Service Type
Required
Load Balancer service type
Select L4 or L7
VPC Name
Required
VPC where the Load Balancer will be created
Select from the VPC list
If you select + Create New, you can create and then select
Service Subnet Name
Required
VPC Subnet where the Load Balancer will be created
Select from the list of Subnets created in the selected VPC
If you select + Create New, you can create and then select
Service IP
Optional
Service IP of the Load Balancer
Enter 1 IP in the Service Subnet range in IP address format
If not entered, automatically assigned from the IP allocation range of the selected Subnet
Public NAT IP
Optional
Public NAT IP to use in the Load Balancer when allowing service access from outside (the internet)
Can set to Use when selecting a VPC and Service Subnet or when an Internet Gateway is connected to the selected VPC
Select from the list of Public IPs created in the selected VPC
If you select + Create New, you can create and then select
Source NAT IP
Optional
IP to use for member communication in the Load Balancer
Enter 1 IP in the Service Subnet range in IP address format
If not entered, automatically assigned from the IP allocation range of the selected Subnet
If a Load Balancer already created exists in the selected Subnet, the previously assigned IP information is displayed
Cannot modify IP after creating Load Balancer
Health Check IP
Optional
IP to use for health check in the Load Balancer
Enter 2 IPs in the Service Subnet range in IP address format respectively
If not entered, automatically assigned from the IP allocation range of the selected Subnet (if only 1 IP is entered, the remaining 1 IP is automatically assigned)
If a Load Balancer already created exists in the selected Subnet, the previously assigned IP information is displayed
Cannot modify IP after creating Load Balancer
Use Firewall
Optional
Set whether to use Firewall
Select whether to activate Firewall for Load Balancer access control
If set to Use, Firewall resource is created
If not checked, Firewall resource is created in unused state
If a Firewall already in use exists in the selected Subnet, the Firewall resource information is displayed
Save Firewall Log
Optional
Select whether to save Firewall log
If set to Use, save Firewall log to the bucket set in the log storage
Table. Load Balancer service information input items
In the Additional Information area, enter or select the required information.
Item
Required
Detailed Description
Description
Optional
Enter resource description
Tag
Optional
Add tag
Can add up to 50 tags per resource
Table. Load Balancer additional information input items
Review the created service information and estimated charges, then click the Create button.
When creation is complete, verify the created resource on the Load Balancer List page.
Notice
The Load Balancer service does not provide access control functionality for Service IP and service ports.
When creating a Load Balancer, we recommend enabling Use Firewall so that Firewall rules manage communication between the client and the Load Balancer and between the Load Balancer and its members, and enabling Save Firewall Log to store access logs.
If you set the Firewall log storage feature when creating a service, you must set up the log storage first. If the log storage setup is not complete, you cannot create the Load Balancer service.
Caution
If using Firewall, you must add rules required for Load Balancer communication. Pay attention to the direction for each purpose when registering rules.
If you do not add rules, the Load Balancer service will not function properly.
| Purpose | Source IP | Destination IP | Protocol | Destination Port/Type | Direction |
| --- | --- | --- | --- | --- | --- |
| Client → LB Connection | Client IP | LB Service IP | Listener Protocol | Listener Service Port | Outbound |
| LB → Member Connection | LB Source NAT IP | LB Server Group Member IP | LB Server Group Protocol | Member Port | Inbound |
| LB → Member Health Check | LB Health Check IP | LB Server Group Member IP | Health Check Protocol | Health Check Port (if it differs from the member port, register the member port) | Inbound |
Figure and Table. Adding Load Balancer Firewall Rules
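The required rules above can also be kept as structured data and checked programmatically, for example when auditing a Firewall configuration. All IP addresses and ports below are hypothetical, and the Direction values are taken verbatim from the table:

```python
# Required Firewall rules from the table above (example IPs/ports are hypothetical).
# "client" and "member" act as wildcards for any client or member IP.
RULES = [
    {"purpose": "Client -> LB Connection", "src": "client", "dst": "198.51.100.10",
     "proto": "TCP", "port": 443, "direction": "Outbound"},
    {"purpose": "LB -> Member Connection", "src": "10.0.0.5", "dst": "member",
     "proto": "TCP", "port": 8080, "direction": "Inbound"},
    {"purpose": "LB -> Member Health Check", "src": "10.0.0.6", "dst": "member",
     "proto": "TCP", "port": 8080, "direction": "Inbound"},
]

def allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Return True if any rule permits this flow (wildcard-aware, simplified)."""
    for r in RULES:
        if (r["src"] in (src, "client", "member")
                and r["dst"] in (dst, "client", "member")
                and r["proto"] == proto and r["port"] == port):
            return True
    return False
```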
Viewing Load Balancer Detailed Information
For Load Balancer services, you can view and modify resource lists and detailed information from the Load Balancer menu. The Load Balancer Detail page consists of Detailed Information, Connected Resources, Tags, and Task History tabs.
Follow these steps to view detailed information about the Load Balancer service:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the resource for which you want to view detailed information. You will be navigated to the Load Balancer Detail page.
The Load Balancer Detail page displays status information and additional feature information, and consists of Detailed Information, Connected Resources, Tags, Task History tabs.
Item
Detailed Description
Status
Load Balancer resource status
Active: Service is normally activated
Deleting: Processing service termination request
Creating: Processing service creation request
Error: Cannot check current status due to internal error
Editing: Processing service modification request
Service Termination
Delete Load Balancer resource
Table. Load Balancer status information and additional feature items
Detailed Information
On the Detailed Information tab, you can view the detailed information of the resource selected from the Load Balancer List and modify it as needed.
Item
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creation Date
Service creation date
Modification Date
Service modification date
Creator
User who requested service creation
Modifier
User who requested service modification
Load Balancer Name
Load Balancer name
Service IP
Load Balancer’s Service IP (used during communication between client and Load Balancer)
Uses 1 IP from Service Subnet
Service Type
Load Balancer service type
Source NAT IP
Load Balancer’s Source NAT IP (used during communication between Load Balancer and server)
Uses 1 IP from Service Subnet
VPC Name
VPC resource name where the Load Balancer is created
Clicking the resource name navigates to the detail page
Service Subnet Name
Subnet resource name where the Load Balancer is created
Clicking the resource name navigates to the detail page
Public NAT IP
Load Balancer’s Public NAT IP (used when configuring internet service)
Can modify settings by clicking the Edit icon
Private NAT IP
Load Balancer’s Private NAT IP
Can modify settings by clicking the Edit icon
Health Check IP
Load Balancer Health Check IP (used when checking health of LB server group members)
Uses 2 IPs from Service Subnet
Description
Additional information or description about the Load Balancer
Can modify by clicking the Edit icon
Firewall Name
Firewall resource name connected to the Load Balancer
Clicking the resource name navigates to the detail page
Table. Load Balancer detailed information tab items
Connected Resources
On the Connected Resources tab, you can view the list of Listeners connected to the Load Balancer, and create or terminate Listeners.
By selecting a Listener item on the Connected Resources tab, you can navigate to the Listener Detail page to view detailed information and modify or delete it.
By clicking the Edit icon on the Listener Detail page items, you can modify the information.
Item
Detailed Description
Create Listener
Create Listener button
Listener Name
Listener resource name
Routing Rules
Routing rules connected to the Listener
Routing Action: Traffic routing method
Setting Value: Setting value for routing action
Protocol
Protocol to which the Listener will listen
Port
Port to which the Listener will listen
Creation Date
Listener creation date
Delete
Delete Listener button
Table. Load Balancer connected resources list items
Tags
You can view, add, modify, or delete tag information for the resource selected from the Load Balancer List page.
Item
Detailed Description
Tag List
Tag list
Can view Key, Value information of the tag
Can add up to 50 tags per resource
When entering a tag, search and select from the list of previously created Keys and Values
Table. Load Balancer tags tab items
Task History
On the Task History tab, you can view the task history of the selected resource.
Item
Detailed Description
Task Details
Task execution content
Task Date
Task execution date
Resource Type
Resource type
Resource Name
Load Balancer name
Task Result
Task execution result (Success/Failure)
Task User Information
Information about the user who performed the task
Table. Load Balancer task history list items
Managing Load Balancer Resources
You can manage Load Balancer resources such as creating and deleting Listeners.
Creating a Listener
Create a Listener on the Load Balancer to receive client requests and process traffic according to Listener settings.
Notice
The protocol for receiving client requests varies depending on the Load Balancer service type.
For L4 Load Balancer: TLS, TCP, UDP protocols
For L7 Load Balancer: HTTP, HTTPS protocols
Creating a Listener in L4 Load Balancer
Follow these steps to create a Listener in an L4 Load Balancer:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the Load Balancer resource where you want to create a Listener. You will be navigated to the Load Balancer Detail page.
On the Load Balancer Detail page, click the Connected Resources tab. You will be navigated to the Connected Resources tab page.
On the Connected Resources tab page, click the Create Listener button in the upper right corner.
In the Service Information area, enter or select the required information.
The information you can enter varies depending on the Protocol.
Item
Required
Detailed Description
Load Balancer
Required
Load Balancer resource name where the Listener will be created
Listener Name
Required
Listener resource name
Protocol
Required
Select Listener listening protocol
Select from TCP, UDP, TLS, TCP_Proxy
Service Port
Required
Enter Listener listening port
Enter a value between 1 and 65,534
Routing Rules
Required
Set routing rules
Routing Action: Fixed to LB Server Group Forward for L4 Load Balancer
LB Server Group: Select LB Server Group to process client requests
Can select from LB Server Groups created in the same Service Subnet as the Load Balancer
Cannot select LB Server Groups already in use by other Load Balancers
Server SSL Security Level
Required
Select security level when configuring End-to-End SSL (when using TLS protocol)
Normal: Supports Cipher Suites including TLS 1.2
Low (Not recommended): Supports Cipher Suites including TLS 1.1
If not encrypting the server connection, select Do Not Use
Table. Listener service information input - When using L4 Load Balancer
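The Normal and Low security levels correspond to minimum TLS protocol versions. On a backend you could express the same distinction with Python's `ssl` module; this is a sketch of the version floor only, since the exact Cipher Suite lists the platform uses are not specified here:

```python
import ssl

def make_context(level: str) -> ssl.SSLContext:
    """Build a server-side TLS context for a given security level (sketch)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if level == "Normal":
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.2 and above
    elif level == "Low":
        ctx.minimum_version = ssl.TLSVersion.TLSv1_1  # legacy TLS 1.1 (not recommended)
    else:
        raise ValueError(f"unknown security level: {level}")
    return ctx
```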
In the Additional Information area, enter or select the required information.
Item
Required
Detailed Description
Description
Optional
Enter resource description
Tag
Optional
Add tag
Can add up to 50 tags per resource
Table. Listener additional information input items
Review the created service information and click the Create button.
When creation is complete, verify the created resource on the Connected Resources tab of the Load Balancer Detail page.
Creating a Listener in L7 Load Balancer
Follow these steps to create a Listener in an L7 Load Balancer:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the Load Balancer resource where you want to create a Listener. You will be navigated to the Load Balancer Detail page.
On the Load Balancer Detail page, click the Connected Resources tab. You will be navigated to the Connected Resources tab page.
On the Connected Resources tab page, click the Create Listener button in the upper right corner.
In the Service Information area, enter or select the required information.
The information you can enter varies depending on the Protocol.
Item
Required
Detailed Description
Load Balancer
Required
Load Balancer resource name where the Listener is created
Listener Name
Required
Listener resource name
Protocol
Required
Select Listener listening protocol
Select from HTTP, HTTPS
Service Port
Required
Enter Listener listening port
Enter a value between 1 and 65,534
Routing Rules > Routing Action
Required
Select routing processing method
LB Server Group Forward: Forward traffic to LB Server Group
URL Redirection: Load Balancer responds with redirection
Routing Rules > Routing Condition
Required
When routing action is LB Server Group Forward, set LB Server Group by routing condition
URL Path: Set LB Server Group by URL path
Host Header: Set LB Server Group based on Host value
Routing Rules > Redirection Target
Required
When routing action is URL Redirection, set redirection response
Change URL Path: Enter URL path to redirect
Change Host: Enter Host value to redirect
Protocol/Port: Set protocol and port to redirect (when using HTTP protocol)
Table. Listener service information input - When using L7 Load Balancer
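The L7 routing actions above, forwarding to an LB Server Group by URL path or Host header or responding with a redirection, can be sketched as a first-match rule evaluator. The rule set below is hypothetical and does not reflect the platform's matching semantics:

```python
# Hypothetical L7 routing rules: (condition kind, value) -> (action, target)
ROUTING_RULES = [
    {"condition": ("url_path", "/api"),            "action": ("forward", "api-server-group")},
    {"condition": ("host", "img.example.com"),     "action": ("forward", "image-server-group")},
    {"condition": ("url_path", "/old"),            "action": ("redirect", "/new")},
]

def route(host: str, path: str):
    """Return the first matching action for a request (simplified first-match sketch)."""
    for rule in ROUTING_RULES:
        kind, value = rule["condition"]
        if kind == "url_path" and path.startswith(value):
            return rule["action"]
        if kind == "host" and host == value:
            return rule["action"]
    return ("forward", "default-server-group")  # fallback when nothing matches
```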
Viewing Listener Detailed Information
You can view and modify detailed information of a Listener by selecting it from the Connected Resources tab on the Load Balancer Detail page.
Follow these steps to view detailed information of the Listener:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the resource for which you want to view detailed information. You will be navigated to the Load Balancer Detail page.
On the Load Balancer Detail page, click the Connected Resources tab.
From the connected resources list, click the Listener for which you want to view detailed information. You will be navigated to the Listener Detail page.
The Listener Detail page displays status information and additional feature information, and consists of Detailed Information, Tags, Task History tabs.
Item
Detailed Description
Status
Listener status
Active: Service is normally activated
Deleting: Processing service termination request
Creating: Processing service creation request
Error: Cannot check current status due to internal error
Editing: Processing service modification request
Delete Listener
Delete Listener
Table. Listener status information and additional feature items
Detailed Information
On the Detailed Information tab, you can view the detailed information of the Listener and modify necessary information. The detailed information varies depending on the Load Balancer in use.
L4 Load Balancer Detailed Information
Item
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who requested Listener creation
Creation Date
Listener creation date
Modifier
User who requested Listener modification
Modification Date
Listener modification date
Listener Name
Listener name
Protocol
Protocol used by Listener
Port
Port used by Listener
Session Persistence Time
Client session persistence time
Can modify by clicking the Edit icon
Proxy Protocol
Whether to insert client IP information
Can modify by clicking the Edit icon
Persistence
Whether to use session persistence (Sticky Session)
Can modify by clicking the Edit icon
Routing Rules
Routing action and LB Server Group information
Can modify LB Server Group by clicking the Edit icon
SSL Certificate
Default certificate, SSL security level, and expiration date information
Can modify by clicking the Edit icon
If registered SNI certificates exist, cannot modify default certificate (can modify after deleting SNI certificates)
SNI Certificate
SNI certificate detailed information
Can modify referenced SNI information and register additional certificates by clicking the Edit icon
Server SSL Security Level
Whether to encrypt server connection
Can modify by clicking the Edit icon
Description
Additional information about the Listener
Can modify by clicking the Edit icon
Table. Listener detailed information tab - When using L4 Load Balancer
L7 Load Balancer Detailed Information
Item
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who requested Listener creation
Creation Date
Listener creation date
Modifier
User who requested Listener modification
Modification Date
Listener modification date
Listener Name
Listener name
Protocol
Protocol used by Listener
Port
Port used by Listener
Session Persistence Time
HTTP connection persistence time
Can change Not Use to Use and modify the input value by clicking the Edit icon
Client Connection Persistence Time
HTTP client connection persistence timeout
Can change Not Use to Use and modify the input value by clicking the Edit icon
Server Response Wait Time
HTTP server response wait timeout
Can change Not Use to Use and modify the input value by clicking the Edit icon
X-Forwarded-For
Whether to insert client IP information
Can modify by clicking the Edit icon
X-Forwarded-Proto
Whether to insert client request protocol information
Can modify by clicking the Edit icon
X-Forwarded-Port
Whether to insert client request port information
Can modify by clicking the Edit icon
Persistence
Whether to use session persistence (Sticky Session)
Can modify by clicking the Edit icon
HTTP 2.0
Whether to use HTTP/2 when connecting client and server
Can modify by clicking the Edit icon
Routing Rules
Routing action and routing condition/redirection target information
Can modify Routing Condition or Redirection Target by clicking the Edit icon
SSL Certificate
Default certificate, SSL security level, and expiration date information
Can modify by clicking the Edit icon
If registered SNI certificates exist, cannot modify default certificate (can modify after deleting SNI certificates)
SNI Certificate
SNI certificate detailed information
Can modify referenced SNI information and register additional certificates by clicking the Edit icon
Server SSL Security Level
Whether to encrypt server connection
Can modify by clicking the Edit icon
Description
Additional information about the Listener
Can modify by clicking the Edit icon
Table. Listener detailed information tab - When using L7 Load Balancer
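When the X-Forwarded-* options above are enabled, a member behind the Load Balancer sees the original client details only in those headers. A backend might recover them as follows (header values are hypothetical):

```python
def original_client(headers: dict) -> dict:
    """Recover client connection details from X-Forwarded-* headers (sketch).

    X-Forwarded-For may carry a comma-separated chain of addresses when the
    request passed through several proxies; the first entry is the client.
    """
    xff = headers.get("X-Forwarded-For", "")
    return {
        "ip": xff.split(",")[0].strip() if xff else None,
        "proto": headers.get("X-Forwarded-Proto"),
        "port": headers.get("X-Forwarded-Port"),
    }

info = original_client({
    "X-Forwarded-For": "203.0.113.7, 10.0.1.5",
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Port": "443",
})
```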
Tags
You can view, add, modify, or delete tag information for the Listener.
Item
Detailed Description
Tag List
Tag list
Can view Key, Value information of the tag
Can add up to 50 tags per resource
When entering a tag, search and select from the list of previously created Keys and Values
Table. Listener tags tab items
Task History
You can view the task history of the Listener.
Item
Detailed Description
Task Details
Task execution content
Task Date
Task execution date
Resource Type
Resource type
Resource Name
Listener name
Task Result
Task execution result (Success/Failure)
Task User Information
Information about the user who performed the task
Table. Listener task history tab items
Modifying Routing Rules
You can modify routing rules of a Listener from the Connected Resources tab on the Load Balancer Detail page.
Follow these steps to modify routing rules of the Listener:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the resource for which you want to view detailed information. You will be navigated to the Load Balancer Detail page.
On the Load Balancer Detail page, click the Connected Resources tab.
From the connected resources list, click the Listener for which you want to add routing conditions. You will be navigated to the Listener Detail page.
On the Listener Detail page, click the Edit icon on the Routing Rules item. The Modify Routing Rules popup window will open.
Modify routing rules according to the routing action, then click the OK button.
Item
Required
Detailed Description
Routing Action
-
Currently set routing method (cannot modify)
Routing Condition
Required
Can modify routing conditions when routing action is LB Server Group Forward
URL Path: Modify request URL path and LB Server Group (can add up to 20)
Host Header: Modify request host and LB Server Group (can add up to 20)
Redirection Target
Required
Can modify redirection target when routing action is URL Redirection
Deleting a Listener
Follow these steps to delete a Listener that is not in use:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the Load Balancer resource from which you want to delete a Listener. You will be navigated to the Load Balancer Detail page.
On the Load Balancer Detail page, click the Connected Resources tab. You will be navigated to the Connected Resources tab page.
On the Connected Resources tab page, click the Listener you want to delete. You will be navigated to the Listener Detail page.
On the Listener Detail page, click the Delete Listener button.
Terminating Load Balancer
You can terminate a Load Balancer that is not in use to reduce costs. However, since it may affect application services, request termination after sufficient prior review.
Caution
You cannot terminate a Load Balancer in the following cases:
If there are Listeners connected to the Load Balancer: Delete the connected Listeners on the Connected Resources tab of the Load Balancer Detail page.
If using Public NAT IP on the Load Balancer: Release the Public NAT IP in use on the Detailed Information tab of the Load Balancer Detail page.
If using Private NAT IP on the Load Balancer: Release the Private NAT IP in use on the Detailed Information tab of the Load Balancer Detail page.
If there are rules registered in the Firewall: Delete the rules of the Firewall in use on the Detailed Information tab of the Load Balancer Detail page.
If connected to PrivateLink Service: Check the connected Load Balancer on the PrivateLink Service Detail page.
Follow these steps to terminate a Load Balancer:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the Load Balancer menu. You will be navigated to the Load Balancer List page.
On the Load Balancer List page, click the resource you want to terminate. You will be navigated to the Load Balancer Detail page.
On the Load Balancer Detail page, click the Terminate Service button.
When termination is complete, verify resource termination on the Load Balancer List.
5.3.2.1 - LB Server Group
You can create an LB Server Group through the Samsung Cloud Platform Console and connect it to a Load Balancer’s Listener.
Creating LB Server Group
Note
You can create up to 1,000 LB Server Groups per Region (default quota).
Follow these steps to create an LB Server Group:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Server Group menu. You will be navigated to the LB Server Group List page.
On the LB Server Group List page, click the Create LB Server Group button. You will be navigated to the Create LB Server Group page.
On the Create LB Server Group page, enter the information required for service creation and select detailed options.
In the Service Information area, enter or select the required information.
Item
Required
Detailed Description
LB Server Group Name
Required
LB Server Group resource name
Enter 3 to 63 characters using uppercase/lowercase English letters, numbers, and special characters (-, _)
LB Server Group name cannot be duplicated within an Account
VPC Name
Required
Select VPC to create LB Server Group
Select VPC where the Load Balancer to which the LB Server Group will be connected is created
Service Subnet Name
Required
Select VPC Subnet to create LB Server Group
Select Subnet where the Load Balancer to which the LB Server Group will be connected is created
Load Balancing
Required
Select load balancing algorithm
Round Robin: Distribute sequentially to registered members
Weighted Round Robin: Distribute sequentially in proportion to the weight assigned to each member
Least Connection: Distribute to the member with the fewest connections
Weighted Least Connection: Distribute to the member with the highest priority, considering both the weight assigned to each member and its number of connections
IP Hash: Distribute to a specific member according to the client IP address hash value
Protocol
Required
Select LB Server Group listening protocol
Select protocol to forward to members of LB Server Group
LB Health Check
Required
Select LB Health Check
Select from LB Health Checks created in the same Service Subnet as the LB Server Group
Table. LB Server Group service information input items
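The weighted algorithms above take each member's weight into account. A minimal weighted round robin sketch with hypothetical members and weights (real schedulers typically use a smoother interleaving; this simple expansion shows only the proportion):

```python
import itertools

# Hypothetical members with weights: higher weight -> proportionally more traffic
weights = {"10.0.1.11": 3, "10.0.1.12": 1}

# Expand each member by its weight, then cycle through the expanded list
schedule = itertools.cycle(
    [m for m, w in weights.items() for _ in range(w)]
)

picks = [next(schedule) for _ in range(8)]
# over two full cycles, 10.0.1.11 is chosen 6 times and 10.0.1.12 twice (3:1)
```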
In the Additional Information area, enter or select the required information.
Item
Required
Detailed Description
Description
Optional
Enter resource description
Tag
Optional
Add tag
Can add up to 50 tags per resource
Table. LB Server Group additional information input items
Review the created service information and estimated charges, then click the Create button.
When creation is complete, verify the created resource on the LB Server Group List page.
Viewing LB Server Group Detailed Information
You can view and modify resource lists and detailed information from the LB Server Group menu. The LB Server Group Detail page consists of Detailed Information, Connected Resources, Tags, and Task History tabs.
Follow these steps to view detailed information of the LB Server Group:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Server Group menu. You will be navigated to the LB Server Group List page.
You can modify column display by clicking the Settings button in the upper right of the table.
Item
Display
Detailed Description
LB Server Group Name
Default
LB Server Group resource name
Protocol
Default
LB Server Group protocol
Load Balancer Name
Default
Load Balancer resource name connected to LB Server Group
LB Health Check ID
Default
LB Health Check resource name used by LB Server Group
Member Count
Default
Number of members registered in LB Server Group
Creation Date
Default
LB Server Group creation date
Status
Default
LB Server Group resource status
Table. LB Server Group list items
On the LB Server Group List page, click the resource for which you want to view detailed information. You will be navigated to the LB Server Group Detail page.
At the top of the LB Server Group Detail page, status information and description of additional features are displayed.
Item
Detailed Description
Status
LB Server Group resource status
Active: Service is normally activated
Deleting: Processing service termination request
Creating: Processing service creation request
Error: Cannot check current status due to internal error
If this status persists, contact Support Center
Editing: Processing service modification request
Delete LB Server Group
Delete LB Server Group resource
Table. LB Server Group status information and additional feature items
Detailed Information
On the Detailed Information tab, you can view detailed information of the resource from the LB Server Group List and modify information if necessary.
Item
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who requested service creation
Creation Date
Service creation date
Modifier
User who requested service modification
Modification Date
Service modification date
Load Balancer Name
Load Balancer resource name connected to LB Server Group
Clicking the resource name navigates to the detail page
LB Server Group Name
LB Server Group name
VPC Name
VPC resource name where the LB Server Group is created
Clicking the resource name navigates to the detail page
Service Subnet Name
Subnet resource name where the LB Server Group is created
Clicking the resource name navigates to the detail page
Port
LB Server Group forwarding port
Protocol
LB Server Group forwarding protocol
Load Balancing
LB Server Group traffic distribution method
Can modify by clicking the Edit icon
LB Health Check
LB Health Check resource name
Can modify by clicking the Edit icon
Description
LB Server Group additional description
Can modify by clicking the Edit icon
Table. LB Server Group detailed information tab items
Connected Resources
On the Connected Resources tab, you can view the list of members connected to the LB Server Group and add or delete members.
Item
Detailed Description
Add Member
Add LB Server Group member button
Member Name
Member name (server name) added to LB Server Group
IP Address
Member IP address
Port
Member listening port
Weight
Load balancing weight
Default value 1
When using weighted load balancing (Weighted Round Robin, Weighted Least Connection) in LB Server Group, can enter 1 to 1000
Enabled
Whether member is enabled
Enable: Receiving client requests
Disable: Excluded from receiving client requests
Creation Date
Member addition date
Health Check Status
Health check status information
Healthy: Health check normal
Unhealthy: Health check abnormal
Unknown: Cannot check health check status
Status
Member resource status
Table. LB Server Group connected resources list items
Tags
You can view, add, modify, or delete tag information for the resource selected from the LB Server Group List page.
Item
Detailed Description
Tag List
Tag list
Can view Key, Value information of the tag
Can add up to 50 tags per resource
When entering a tag, search and select from the list of previously created Keys and Values
Table. LB Server Group tags tab items
Task History
On the Task History tab, you can view the task history of the selected resource.
Item
Detailed Description
Task Details
Task execution content
Task Date
Task execution date
Resource Type
Resource type
Resource Name
LB Server Group name
Task Result
Task execution result (Success/Failure)
Task User Information
Information about the user who performed the task
Table. LB Server Group task history list items
Managing LB Server Group Resources
You can view the member list of the LB Server Group and add or delete members.
Adding Member
You can add a member to the LB Server Group to register server resources that will process client requests.
Follow these steps to add a member to the LB Server Group:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Server Group menu. You will be navigated to the LB Server Group List page.
On the LB Server Group List page, click the resource for which you want to modify detailed information. You will be navigated to the LB Server Group Detail page.
On the LB Server Group Detail page, click the Connected Resources tab. You will be navigated to the Connected Resources tab page.
On the Connected Resources tab page, click the Add Member button in the upper right corner.
In the Add Member popup window, enter the required information, then click the OK button.
Item
Required
Detailed Description
LB Server Group Name
Default
LB Server Group name to add member to
Target Server
Required
Server information to add as member
Virtual Server/Bare Metal Server: Select from the list of servers created in the same VPC as the LB Server Group
Direct IP Input: Enter server IP directly
Can add target server by clicking the Add button
Member Information
Required
Set member port and weight
Member Name: Display server name and IP to be added as member
Port: Port that the member will listen to
Weight: Weight to be applied to load balancing
When using Weighted Round Robin, Weighted Least Connection load balancing, must enter a value between 1 and 1000
Table. LB Server Group member addition items
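The Weight field above only takes effect when a weighted algorithm is selected. As a rough illustration (not the platform's actual implementation), a Weighted Round Robin schedule over members with weights in the 1–1000 range can be sketched like this:

```python
from itertools import islice


def weighted_round_robin(members):
    """Yield member names in proportion to their weights (1-1000).

    `members` is a list of (name, weight) tuples, mirroring the
    Member Name / Weight fields in the Add Member popup. Illustrative
    sketch only; the Load Balancer's real scheduler may differ.
    """
    for name, weight in members:
        if not 1 <= weight <= 1000:
            raise ValueError(f"weight for {name} must be 1-1000")
    # Interleaved schedule: sweep the member list repeatedly, emitting
    # each member once per pass until its weight budget is exhausted.
    remaining = {name: weight for name, weight in members}
    while any(remaining.values()):
        for name, _ in members:
            if remaining[name] > 0:
                remaining[name] -= 1
                yield name


# Example: vm-1 receives twice as many requests as vm-2.
schedule = list(islice(weighted_round_robin([("vm-1", 2), ("vm-2", 1)]), 3))
print(schedule)  # ['vm-1', 'vm-2', 'vm-1']
```

A member with weight 2 therefore appears twice per cycle for every single appearance of a weight-1 member, which is the intuition behind both Weighted Round Robin and Weighted Least Connection.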
In the notification window, click the OK button.
Verify member addition on the Connected Resources tab.
Notice
For communication between Load Balancer and LB Server Group members, add the following rules to the Security Group of the server added as a member.
(Direction) Inbound rule, (Target Address) Load Balancer’s Source NAT IP, (Protocol) LB Server Group protocol, (Allowed Port) Member port
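The required rule can be read as plain data. The helper below is a hypothetical sketch (the field names are illustrative, not the actual Security Group API) showing how each part of the rule is derived from the Load Balancer and LB Server Group settings:

```python
def member_inbound_rule(snat_ip, protocol, member_port):
    """Build the inbound rule a member's Security Group needs so that
    the Load Balancer can reach it. Field names are illustrative only,
    not the platform's real Security Group schema.
    """
    return {
        "direction": "inbound",
        "target_address": snat_ip,    # Load Balancer's Source NAT IP
        "protocol": protocol,         # LB Server Group protocol
        "allowed_port": member_port,  # port the member listens on
    }


# Hypothetical values for illustration.
rule = member_inbound_rule("198.51.100.10", "TCP", 8080)
print(rule["allowed_port"])  # 8080
```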
Note
If the LB Server Group is in Creating, Editing, Deleting, Error status, you cannot add members.
You cannot add members if doing so would exceed the member limit of the Account to which the LB Server Group belongs. The maximum number of members that can be created in one Account is 1,000.
Note
You can add a server created in a different VPC as a member through VPC Peering. After adding the target server by Direct IP Input, check the Health Check Status of the added member on the Connected Resources tab. For details, refer to VPC > VPC Peering.
Modifying Member
Clicking the member name in the member list navigates you to the Member Detail page. You can view detailed information of the member and change information by clicking the Edit icon.
Follow these steps to modify member detailed information:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Server Group menu. You will be navigated to the LB Server Group List page.
On the LB Server Group List page, click the resource for which you want to modify detailed information. You will be navigated to the LB Server Group Detail page.
On the LB Server Group Detail page, click the Connected Resources tab. You will be navigated to the Connected Resources tab page.
On the Connected Resources tab page, click the member you want to modify. You will be navigated to the Member Detail page.
On the Member Detail page, modify the desired member information.
Modifying Weight
The weight can be modified only when using weighted load balancing (Weighted Round Robin, Weighted Least Connection).
Click the Edit icon on the Weight item. Enter the weight to modify in the modification window and click the OK button.
Modifying Port
To modify the member port, click the Edit icon on the Port item. Enter the port to modify in the modification window and click the OK button.
Modifying Enabled
To modify member enabled status, click the Edit icon on the Enabled item. Set the enabled status in the modification window and click the OK button.
Note
If you modify Enabled to Disable, the member will continue to handle existing connections but stop accepting new ones.
Deleting Member
Follow these steps to delete a member that is not in use:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Server Group menu. You will be navigated to the LB Server Group List page.
On the LB Server Group List page, click the resource for which you want to modify detailed information. You will be navigated to the LB Server Group Detail page.
On the LB Server Group Detail page, click the Connected Resources tab. You will be navigated to the Connected Resources tab page.
On the Connected Resources tab page, click the member you want to delete. You will be navigated to the Member Detail page.
On the Member Detail page, click the Delete Member button.
Verify member deletion on the Connected Resources tab.
Terminating LB Server Group
You can terminate an LB Server Group that is not in use. However, since it may affect application services, request termination after sufficient prior review.
Notice
You cannot terminate an LB Server Group in the following cases:
If the LB Server Group is in use by a Listener: Modify the LB Server Group of the Listener before terminating the LB Server Group.
If there are registered members in the LB Server Group: Delete all resources connected to the LB Server Group before terminating the LB Server Group.
If the LB Server Group is used in an Auto-Scaling Group: Set Load Balancer to not use in the Auto-Scaling Group or modify to not use that LB Server Group. For details, refer to Auto-Scaling Group > Using Load Balancer.
Follow these steps to terminate the LB Server Group:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Server Group menu. You will be navigated to the LB Server Group List page.
On the LB Server Group List, click the resource you want to terminate. You will be navigated to the LB Server Group Detail page.
On the LB Server Group Detail page, click the Delete LB Server Group button.
When termination is complete, verify resource termination on the LB Server Group List.
5.3.2.2 - LB Health Check
You can create an LB Health Check through the Samsung Cloud Platform Console and use it for LB Server Groups.
Creating LB Health Check
Note
You can create up to 500 LB Health Checks per Account.
Follow these steps to create an LB Health Check:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Health Check menu. You will be navigated to the LB Health Check List page.
On the LB Health Check List page, click the Create LB Health Check button. You will be navigated to the Create LB Health Check page.
On the Create LB Health Check page, enter the information required for service creation and select detailed options.
In the Service Information area, enter or select the required information.
Item
Required
Detailed Description
LB Health Check Name
Required
LB Health Check resource name
Enter 3 to 63 characters using uppercase/lowercase English letters, numbers, and special characters(-_)
LB Health Check name cannot be duplicated within an Account
VPC Name
Required
Select VPC to create LB Health Check
Select VPC where the LB Server Group to use the LB Health Check is created
Service Subnet Name
Required
Select VPC Subnet to create LB Health Check
Select Subnet where the LB Server Group to use the LB Health Check is created
Health Check Method > Protocol
Required
Health check protocol
Select from TCP, HTTP to use for member health check
Health Check Method > Health Check Port
Required
Health check port
Enter a value between 1 and 65,534 to use for member health check
Health Check Method > Interval
Required
Health check interval
Default value 5 seconds, can enter between 1 and 180 seconds
Health Check Method > Timeout
Required
Health check response wait time
Default value 5 seconds, can enter between 1 and 180 seconds
Cannot set to a value greater than the interval
Health Check Method > Healthy Threshold
Required
Number of consecutive health check results used to determine member status
Default value 3 times, can enter between 1 and 10
Health Check Method > HTTP Method
Required
Set HTTP request method (when using HTTP protocol)
Select from GET, POST
Health Check Method > URL Path
Required
Enter health check URL path (when using HTTP protocol)
Enter within 50 characters using English letters, numbers, and special characters(/.-_?&=)
Health Check Method > Response Code
Required
Enter HTTP response code to receive from server (when using HTTP protocol)
Enter response codes in the 200 ~ 500 range
Health Check Method > Request String
Required
Enter health check request string (when using HTTP protocol POST method)
Enter content to include in Request Body within 255 bytes using English letters, numbers, and special characters(/.-_?&=)
Table. LB Health Check service information input items
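The Interval, Timeout, and Healthy Threshold fields interact: checks run every Interval seconds, each check waits at most Timeout seconds (which cannot exceed the Interval), and the threshold decides when a member's status flips. The sketch below illustrates that logic under the assumption that the threshold counts consecutive results; the platform's exact evaluation rules may differ:

```python
def validate_timing(interval_s, timeout_s):
    """Both values must be 1-180 s, and Timeout cannot exceed Interval."""
    if not (1 <= interval_s <= 180 and 1 <= timeout_s <= 180):
        raise ValueError("interval and timeout must be 1-180 seconds")
    if timeout_s > interval_s:
        raise ValueError("timeout cannot be greater than the interval")


def evaluate_health(results, healthy_threshold=3):
    """Classify a member from its most recent check results.

    `results` is a list of booleans (True = check passed), newest last.
    Assumed semantics: Healthy once the last `healthy_threshold` checks
    all passed, Unhealthy once they all failed, Unknown otherwise
    (including before enough results exist). Default threshold is 3,
    matching the console's default; range is 1-10.
    """
    recent = results[-healthy_threshold:]
    if len(recent) < healthy_threshold:
        return "Unknown"
    if all(recent):
        return "Healthy"
    if not any(recent):
        return "Unhealthy"
    return "Unknown"


print(evaluate_health([True, True, True]))     # Healthy
print(evaluate_health([False, False, False]))  # Unhealthy
print(evaluate_health([True, False, True]))    # Unknown
```

This also shows why a mixed run of passes and failures surfaces as Unknown on the Connected Resources tab until results stabilize.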
In the Additional Information area, enter or select the required information.
Item
Required
Detailed Description
Description
Optional
Enter resource description
Tag
Optional
Add tag
Can add up to 50 tags per resource
Table. LB Health Check additional information input items
In the Summary panel, review the created service information and estimated charges, then click the Create button.
When creation is complete, verify the created resource on the LB Health Check List page.
Notice
For member health check in Load Balancer, add the following rules to the Security Group of the server added as a member.
(Direction) Inbound rule, (Target Address) Load Balancer’s health check IP, (Protocol) Health check protocol, (Allowed Port) Health check port
We recommend setting the health check port to be the same as the member port.
If the health check port and member port are different, health check is performed based on the member port.
Notice
Set the LB Health Check to a value that can respond from members to be added to the LB Server Group.
Since Load Balancer determines member status based on health check response, the LB Health Check result may differ from the actual service status.
Viewing LB Health Check Detailed Information
You can view and modify resource lists and detailed information from the LB Health Check menu. The LB Health Check Detail page consists of Detailed Information, Connected Resources, Tags, and Task History tabs.
Follow these steps to view detailed information of the LB Health Check:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Health Check menu. You will be navigated to the LB Health Check List page.
You can modify column display by clicking the Settings button in the upper right of the table.
Item
Display
Detailed Description
LB Health Check Name
Default
LB Health Check resource name
Service Subnet ID
Default
ID of the VPC Subnet where the LB Health Check is created
LB Server Group Count
Default
Number of LB Server Groups using the LB Health Check
Type
Default
LB Health Check type
Protocol
Default
LB Health Check protocol
Creation Date
Default
LB Health Check creation date
Status
Default
LB Health Check resource status
Table. LB Health Check list items
On the LB Health Check List page, click the resource for which you want to view detailed information. You will be navigated to the LB Health Check Detail page.
At the top of the LB Health Check Detail page, status information and description of additional features are displayed.
Item
Detailed Description
Status
LB Health Check resource status
Active: Service is normally activated
Deleting: Processing service termination request
Creating: Processing service creation request
Error: Cannot check current status due to internal error
If this status persists, contact Support Center
Editing: Processing service modification request
Delete LB Health Check
Delete LB Health Check resource
Table. LB Health Check status information and additional feature items
Detailed Information
On the Detailed Information tab, you can view detailed information of the resource from the LB Health Check List and modify information if necessary.
Item
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who requested service creation
Creation Date
Service creation date
Modifier
User who requested service modification
Modification Date
Service modification date
LB Health Check Name
LB Health Check name
Type
LB Health Check type
VPC Name
VPC to use the LB Health Check
Clicking the resource name navigates to the detail page
Service Subnet Name
VPC Subnet to use the LB Health Check
Clicking the resource name navigates to the detail page
Health Check Method
LB Health Check method setting information
Can modify by clicking the Edit icon
Description
Additional information about the LB Health Check
Can modify by clicking the Edit icon
Table. LB Health Check detailed information tab items
Connected Resources
On the Connected Resources tab, you can view detailed information of the LB Server Group connected to the LB Health Check.
Item
Detailed Description
LB Server Group Name
LB Server Group resource name
Clicking the resource name navigates to the LB Server Group detail page
Protocol
LB Health Check protocol
Load Balancer Name
Load Balancer resource name connected to LB Server Group
Member Count
Number of members added to LB Server Group
Creation Date
LB Server Group creation date
Status
LB Server Group resource status
Active: Service is normally activated
Deleting: Processing service termination request
Creating: Processing service creation request
Error: Cannot check current status due to internal error
If this status persists, contact Support Center
Editing: Processing service modification request
Table. LB Health Check connected resources list items
Tags
You can view, add, modify, or delete tag information for the resource selected from the LB Health Check List page.
Item
Detailed Description
Tag List
Tag list
Can view Key, Value information of the tag
Can add up to 50 tags per resource
When entering a tag, search and select from the list of previously created Keys and Values
Table. LB Health Check tags tab items
Task History
On the Task History tab, you can view the task history of the selected resource.
Item
Detailed Description
Task Details
Task execution content
Task Date
Task execution date
Resource Type
Resource type
Resource Name
LB Health Check name
Task Result
Task execution result (Success/Failure)
Task User Information
Information about the user who performed the task
Table. LB Health Check task history list items
Modifying LB Health Check Method
You can modify the health check method on the LB Health Check Detail page.
Follow these steps to modify the LB Health Check method:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Health Check menu. You will be navigated to the LB Health Check List page.
On the LB Health Check List page, click the resource for which you want to modify detailed information. You will be navigated to the LB Health Check Detail page.
On the LB Health Check Detail page, click the Edit icon on Health Check Method. You will be navigated to the Modify Health Check Method popup window.
In the Modify Health Check Method popup window, modify the required information, then click the OK button.
Terminating LB Health Check
You can terminate an LB Health Check service that is not in use.
Caution
You cannot delete LB Health Check resources of Default type.
You cannot delete LB Health Check resources in use by LB Server Groups.
Follow these steps to terminate the LB Health Check:
Click the All Services > Networking > Load Balancer menu. You will be navigated to the Load Balancer’s Service Home page.
On the Service Home page, click the LB Health Check menu. You will be navigated to the LB Health Check List page.
On the LB Health Check List, click the resource you want to terminate. You will be navigated to the LB Health Check Detail page.
On the LB Health Check Detail page, click the Delete LB Health Check button.
When termination is complete, verify resource termination on the LB Health Check List.
5.3.3 - API Reference
API Reference
5.3.4 - CLI Reference
CLI Reference
5.3.5 - Release Note
Load Balancer
2025.12.16
FEATURE: LB health check setting change and addition of LB health check and LB server group options
LB health check port configuration method has been changed.
You can choose between member port/direct input, and if you select direct input, specify the port to use.
Existing LB health checks are switched to the member port (the same behavior as the previous health check method).
HTTPS option has been added to the LB health check protocol.
You can monitor the server TLS connection status.
When using URL redirection on the HTTP Listener, you can specify the target port for the redirection.
You can add Multi-node GPU Cluster resources to LB server group members.
2025.10.23
FEATURE: Load Balancer features added
You can set the Source NAT IP and health check IP when creating a Load Balancer.
TLS protocol has been added to L4 Listener.
You can configure TLS services based on TCP.
Routing rule option has been added to L7 Listener.
Routing conditions allow setting URL path or host-specific branching.
Supports multiple SSL certificates.
Supports SNI, allowing multiple certificates to be registered on a single Listener.
2025.07.01
FEATURE: LB health check and LB server group features added
Add LB health check management feature
Create an LB health check to define the required health check method and connect it to an LB server group for use.
LB server group weighted load balancing support
Weighted Round Robin and Weighted Least Connection have been added to the load balancing options.
By setting per-member weights, you can distribute server load.
Add LB server group member activation feature
You can select whether to enable or disable members belonging to the LB server group.
2025.02.27
NEW: New Load Balancer Service Launch
A Load Balancer service that provides more stable and enhanced features has been launched.
Provides an L7 Load Balancer that supports HTTP, HTTPS protocols.
Provides an L4 Load Balancer that supports TCP, UDP protocols.
5.4 - DNS
5.4.1 - Overview
Service Overview
The DNS service converts human-readable domain names into the numeric IP addresses that systems use to identify each other, enabling access to services. Through the DNS service, users can easily register desired domains and manage domain records themselves.
Features
Easy Domain Registration: You can register new domains and manage changes through a web-based console, without building separate DNS infrastructure or installing a DNS solution.
Various Record Support: You can set various resource record types such as A, AAAA, CNAME, TXT, MX, SPF, etc., and automatically scale to handle large query volumes without user intervention.
Convenient Hosting Environment Management: You can select and use Public domain names that provide web services exposed to the Internet and Private domain names that can only be used by designated internal users without Internet connection according to the environment and purpose.
Configuration Diagram
Figure. DNS Configuration Diagram
Provided Functions
The DNS service provides the following functions.
Hosted Zone Creation/Management: You can create and manage Public Hosted Zones that can be accessed from anywhere through the Internet and Private Hosted Zones that can only be accessed in designated network environments without exposure to the Internet.
Public Domain Name Application: You can apply for a Public Domain Name that allows access from anywhere through the Internet.
Various Resource Record Support: You can select and use record types according to the usage environment and purpose.
Record Type
Description
A
Specify the IPv4 address corresponding to the domain name so that the IP address can be found through the domain name
AAAA
Specify the IPv6 address corresponding to the domain name so that the IP address can be found through the domain name
TXT
Set text information about the domain
CNAME
Specify an alias for the domain name
MX
Specify the mail server for the domain and subdomains owned by the user
SPF
Verify the IP address or domain name of the mail sending server to prevent spam emails (Sender Policy Framework)
NS
Name server responsible for the domain (automatically generated)
SOA
Define the start information of the domain (start point of authority) (automatically generated)
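The record types above combine at query time: a CNAME aliases one name to another, which is then resolved to an A or AAAA record. A toy in-memory resolver (purely illustrative, not the DNS service's implementation) shows that chain for records a user might register in a Hosted Zone:

```python
def resolve(zone, name, max_hops=5):
    """Follow CNAME aliases until an A/AAAA record is found.

    `zone` maps names to (type, value) tuples, standing in for the
    records registered in a Hosted Zone. Illustrative sketch only.
    """
    for _ in range(max_hops):
        rtype, value = zone[name]
        if rtype in ("A", "AAAA"):
            return value
        if rtype == "CNAME":
            name = value  # restart the lookup at the alias target
            continue
        raise LookupError(f"{rtype} records do not resolve to an address")
    raise LookupError("CNAME chain too long")


# Hypothetical zone: www is an alias for the load balancer's A record.
zone = {
    "www.example.com": ("CNAME", "lb.example.com"),
    "lb.example.com": ("A", "203.0.113.7"),
}
print(resolve(zone, "www.example.com"))  # 203.0.113.7
```

Record types such as TXT, MX, and SPF carry metadata rather than addresses, which is why the sketch rejects them in an address lookup.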
Components
Private DNS
To manage Private domain names for use only in designated network environments without exposure to the Internet, you must first create a Private DNS.
A Private DNS name is shared across all regions within an Account. It can be created for the first time in any region within the Account, and thereafter activated under the same Private DNS name in other regions from the Private DNS list.
You can select the VPC to connect to Private DNS for each region. By using a common Private DNS name, you can share and manage Private Hosted Zone information across all regions.
Hosted Zone
Private Hosted Zone allows you to create and manage domain names that can only be used in designated network environments targeting VPCs connected to Private DNS.
Public Hosted Zone allows you to manage Public Domain Names created through Samsung Cloud Platform.
Through Hosted Zone, you can register and modify records according to your purpose.
Public Domain Name
You can apply for a Public Domain Name in conjunction with Whois, a Public Domain Name management company.
Public Domain Name can be purchased in one-year units, and you can set or change whether to automatically renew (in one-year units) up to 7 days before the purchase period ends.
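Because the auto-renew setting can only be changed up to 7 days before the one-year purchase period ends, the last day on which the setting can still be changed works out as follows (a sketch using Python's standard library):

```python
from datetime import date, timedelta


def renewal_change_deadline(purchase_date):
    """Last day the auto-renew setting can be changed: 7 days before
    the one-year purchase period ends. One year is approximated by
    advancing the year component, so a Feb 29 purchase date would
    need special handling not shown here.
    """
    period_end = purchase_date.replace(year=purchase_date.year + 1)
    return period_end - timedelta(days=7)


print(renewal_change_deadline(date(2025, 3, 10)))  # 2026-03-03
```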
Constraints
The constraints of the DNS service are as follows.
Item
Description
Number of Private DNS that can be created within an Account
1
Number of Hosted Zones that can be created within an Account
20
Number of records that can be registered per Hosted Zone
100
Note
Application for Public Domain Name and Public Hosted Zone use in the Korea South (kr-south) region is restricted.
Prerequisite Services
The DNS service has no prerequisite services.
5.4.1.1 - TLD List
TLD (Top-Level Domain) List
You can use the following TLDs. Annual usage fees differ by TLD type when applying for a Public Domain Name.
TLD Type
Public Domain Name Registration Cost (KRW/year, excluding VAT)
.COM
20,000
.NET
20,000
.ORG
20,000
.KR
24,000
.PE.KR
16,000
.BIZ
20,000
.INFO
20,000
.CN
65,000
.TV
90,000
.IN
65,000
.EU
80,000
.AC
286,000
.TW
100,000
.MOBI
44,000
.NAME
30,000
.CC
90,000
.JP
198,000
.ASIA
55,000
.ME
44,000
.TEL
44,000
.PRO
44,000
.SO
103,000
.SX
90,000
.CO
100,000
.XXX
200,000
.PW
44,000
.PH
100,000
.io
91,000
.app
42,500
.co.kr
24,000
5.4.1.2 - ServiceWatch Metrics
DNS sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at a 1‑minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Indicators
The following are the basic metrics for the DNS namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. DNS Basic Metrics
5.4.2 - How-to guides
This explains what to check before creating a DNS service through the Samsung Cloud Platform Console.
Before Using Private Domain Name Management
Verify the following before using Private Domain Name management.
To manage Private domain names for use only in specified network environments without exposing to the internet, you must first create a Private DNS.
The defined Private DNS name is shared across all regions within the Account. It can be created initially in any region within the Account; thereafter, in other regions, you activate and use it under the same Private DNS name from the Private DNS list.
You can selectively set VPCs to connect to Private DNS per region. There are no connected VPCs at the time of initial creation or activation.
A Private DNS name may already be in use elsewhere within Samsung Cloud Platform; a duplicate check runs when you enter the domain name so you can confirm availability.
Hosted Zone information will be shared across all regions. However, some detailed information (SRN, creator, modifier information) can only be verified in the region where it was initially created.
A general usage example is as follows. For detailed usage instructions, refer to the How-to guides of the corresponding sub-service.
Step
Sub-service
Main Procedure
STEP 1
Private DNS
Create Private DNS (Region A) → Connect VPC within Region A → Activate Private DNS (Region B) → Connect VPC within Region B
STEP 2
Hosted Zone
Create Private Hosted Zone → Register records
STEP 3
-
View detailed information, modify, terminate
Table. Private Domain Name Management General Usage Procedure
Before Using Public Domain Name Management
Verify the following before using Public Domain Name management.
For Public Domain Names to be used in internet environment, management through Hosted Zone is only possible for domain names applied for through Samsung Cloud Platform.
The list of available top-level domains may change.
An example of general usage procedure is as follows. For detailed usage instructions, refer to the How-to guides of the corresponding sub-service.
Division
Sub-service
Main Procedure
STEP 1
Public Domain Name
Check availability and apply for the Public Domain Name you want to use
STEP 2
Hosted Zone
Create Hosted Zone for the applied Public Domain Name → Register records
STEP 3
-
View detailed information, modify, terminate
Table. Public Domain Name Management General Usage Procedure
5.4.2.1 - Private DNS
Users can create the Private DNS service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating Private DNS
You can create and use the Private DNS service in the Samsung Cloud Platform Console.
Note
Only one Private DNS service can be created per Account.
Follow these steps to request Private DNS service creation.
Click the All Services > Networking > DNS menu. You will be navigated to the Service Home page.
Click the Create Private DNS button in the dropdown on the Service Home page. You will be navigated to the Create Private DNS page.
Enter the information required for service creation and select detailed options on the Create Private DNS page.
Enter or select the required information in the Enter Service Information section.
Division
Required
Detailed Description
Private DNS Name
Required
Enter the Private DNS name to use
Enter within 3 to 20 characters including lowercase letters, numbers, and special characters (-)
Cannot use the same as an already used name
VPC Connection
Optional
Register VPCs to connect to Private DNS
Click the Select button to select VPCs
Can register up to 5 VPCs
Table. Private DNS Service Information Input Items
Enter or select the required information in the Enter Additional Information section.
Division
Required
Detailed Description
Description
Optional
Enter additional information and description for Private DNS
Tags
Optional
Add tags
Can add up to 50 tags per resource
Click the Add Tag button and then enter or select Key, Value values
Table. Private DNS Additional Information Input Items
Review the creation details and click the Create button.
When creation is complete, verify the created resource in the Private DNS List page.
Viewing Private DNS Detail Information
You can view and modify the entire resource list and detailed information of the Private DNS service. The Private DNS Detail page consists of Detail Information, Tags, Task History tabs.
Follow these steps to view Private DNS detail information.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Private DNS menu on the Service Home page. You will be navigated to the Private DNS List page.
Click the resource for which you want to view detailed information on the Private DNS List page. You will be navigated to the Private DNS Detail page.
The Private DNS Detail page displays the status information and detailed information of Private DNS, and consists of Detail Information, Tags, Task History tabs.
Division
Detailed Description
Service Status
Status of Private DNS
Creating: Creating
Activing: Activating
Active: Running
Inactive: Stopped
Editing: Changing settings
Deleting: Terminating
Error: Error occurred
Service Termination
Button to terminate Private DNS
Table. Private DNS Status Information and Additional Features
Detail Information
You can view the detailed information of the resource selected on the Private DNS List page, and modify the information if necessary.
Division
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Created At
Date and time when the service was created
Modifier
User who modified the service information
Modified At
Date and time when the service information was modified
Initial Creation Location
Initial creation location of Private DNS
VPC Connection
VPC information connected to Private DNS
Can change VPC by clicking the Edit icon
Clicking connected VPC name navigates to detail page
Description
Private DNS description
Can modify description by clicking the Edit icon
Table. Private DNS Detail Information Tab Items
Tags
You can view the tag information of the resource selected on the Private DNS List page, and add, change, or delete tags.
Division
Detailed Description
Tag List
Tag list
Can view tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Private DNS Tags Tab Items
Task History
You can view the task history of the resource selected on the Private DNS List page.
Division
Detailed Description
Task History List
Resource change history
Can view task details, task date and time, resource type, resource name, task result, task operator information
Clicking the corresponding resource in the Task History List opens the Task History Detail popup window
Table. Private DNS Task History Tab Items
Activating Private DNS in a Location Other Than Initial Creation Location
You can activate and use Private DNS in a location (region) other than the location (region) where Private DNS was initially created.
Follow these steps to activate the Private DNS service.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Private DNS menu on the Service Home page. You will be navigated to the Private DNS List page.
Click the More > Activate button for the resource you want to activate in the Private DNS List. A notification window is displayed.
The activate button is only displayed for Private DNS items in Inactive status.
Click OK in the notification window.
Setting VPC Connection for Private DNS
You can set VPC information connected to the Private DNS service.
Follow these steps to set VPC connection for Private DNS.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Private DNS menu on the Service Home page. You will be navigated to the Private DNS List page.
Click the resource for which you want to view detailed information on the Private DNS List page. You will be navigated to the Private DNS Detail page.
Click the Edit icon for the VPC Connection item on the Private DNS Detail page. The VPC Connection Selection Popup window opens.
Select the VPC item to connect in the VPC Connection Selection Popup window and click OK.
Verify that the selected VPC is displayed in the VPC Connection item.
Deleting Private DNS
You can apply for Private DNS service termination in the Samsung Cloud Platform Console.
Caution
You cannot terminate if Hosted Zone resources are connected to the Private DNS service. To terminate the service, delete the connected resources first.
Follow these steps to request Private DNS service termination.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Private DNS menu on the Service Home page. You will be navigated to the Private DNS List page.
Click the resource for which you want to view detailed information on the Private DNS List page. You will be navigated to the Private DNS Detail page.
Click the Service Termination button on the Private DNS Detail page.
When termination is complete, verify the service termination in the Private DNS list.
5.4.2.2 - Hosted Zone
Users can create the Hosted Zone service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating Hosted Zone
You can create and use the Hosted Zone service in the Samsung Cloud Platform Console.
Follow these steps to request Hosted Zone service creation.
Click the All Services > Networking > DNS menu. You will be navigated to the Service Home page.
Click the Create Hosted Zone button in the dropdown on the Service Home page. You will be navigated to the Create Hosted Zone page.
Enter the information required for service creation and select detailed options on the Create Hosted Zone page.
Enter or select the required information in the Enter Service Information section.
Division
Required
Detailed Description
Purpose Division
Required
Select a domain that matches the purpose of Hosted Zone
Private: Domain that can only be used within Samsung Cloud Platform
Public: Domain that can be accessed from outside (internet)
Private DNS Name to Register
Required
Select from among Private DNS created in advance
Can only select when Private is selected in Purpose Division
Hosted Zone Name to Register
Required
Enter the Hosted Zone name to use
Enter within 2 to 63 characters including lowercase letters, numbers, and special characters (-)
When applying for a new domain, click the Check Availability button to verify duplicates
Table. Hosted Zone Service Information Input Items
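The inputs above have a conditional dependency: a Private zone requires a pre-created Private DNS, and the zone name must follow the stated 2-63 character rule. A minimal validation sketch, assuming hypothetical function and field names:

```python
import re

# Naming rule stated in the guide: 2-63 characters,
# lowercase letters, digits, and the hyphen (-).
_ZONE_NAME = re.compile(r"^[a-z0-9-]{2,63}$")

def validate_hosted_zone_request(purpose: str, zone_name: str,
                                 private_dns_name: str = "") -> list:
    """Return a list of validation errors; an empty list means the input is valid."""
    errors = []
    if purpose not in ("Private", "Public"):
        errors.append("Purpose must be Private or Public")
    if purpose == "Private" and not private_dns_name:
        errors.append("A Private zone must reference a pre-created Private DNS")
    if not _ZONE_NAME.fullmatch(zone_name):
        errors.append("Zone name must be 2-63 chars of lowercase letters, digits, or '-'")
    return errors
```

As with the Check Availability button, duplicate checking is only possible on the server side; this sketch covers local shape checks only.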
Enter or select the required information in the Enter Additional Information section.
Division
Required
Detailed Description
Description
Optional
Enter additional information and description for Hosted Zone
Tags
Optional
Add tags
Can add up to 50 tags per resource
Click the Add Tag button and then enter or select Key, Value values
Table. Hosted Zone Additional Information Input Items
Review the creation details and click the Create button.
When creation is complete, verify the created resource in the Hosted Zone List page.
Viewing Hosted Zone Detail Information
You can view and modify the entire resource list and detailed information of the Hosted Zone service. The Hosted Zone Detail page consists of Detail Information, Records, Tags, Task History tabs.
Follow these steps to view Hosted Zone detail information.
Click the All Services > Networking > DNS menu. You will be navigated to the Service Home page.
Click the Hosted Zone menu on the Service Home page. You will be navigated to the Hosted Zone List page.
Click the resource for which you want to view detailed information on the Hosted Zone List page. You will be navigated to the Hosted Zone Detail page.
The Hosted Zone Detail page displays the status information and detailed information of Hosted Zone, and consists of Detail Information, Records, Tags, Task History tabs.
Division
Detailed Description
Service Status
Status of Hosted Zone
Creating: Creating
Active: Running
Editing: Changing settings
Deleting: Terminating
Error: Error occurred
Delete Hosted Zone
Button to delete Hosted Zone
Table. Hosted Zone Status Information and Additional Features
Detail Information
You can view the detailed information of the resource selected on the Hosted Zone List page, and modify the information if necessary.
Division
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Created At
Date and time when the service was created
Modifier
User who modified the service information
Modified At
Date and time when the service information was modified
Hosted Zone Name
Hosted Zone domain name
Purpose Division
Displays the selected purpose
Private DNS Name
Selected Private DNS name
Description
Hosted Zone description
Can modify description by clicking the Edit icon
Table. Hosted Zone Detail Information Tab Items
Records
You can view the registered record information on the Hosted Zone List page, and add, change, or delete records. Records define how DNS servers answer queries for the domain, specifying the IP address mapped to the domain name and how requests sent to the domain are handled.
Division
Detailed Description
Detailed Search
Button to set detailed record search
Add Record
Button to add record
Name
Registered record name
Type
Record type
A: Record that specifies IPv4 format IP address to domain name
AAAA: Record that specifies IPv6 format IP address to domain name
SPF: Record that lists the mail server IPs authorized to send mail for the domain, used to prevent spam and sender spoofing
CNAME: Record that specifies alias of domain name
MX: Record that specifies mail server of domain
TXT: Record that enters text information (description) for domain
NS: Name server record responsible for the domain (automatically created)
SOA: Record that defines the start of authority information for the domain (automatically created)
Value
Value of the record (IP address, domain name, or text, depending on the record type)
TTL
Time (in seconds) for which DNS resolvers temporarily cache the record
Auto Create
Displays whether automatically created
Status
Displays service status
More Menu
Can modify, delete record
Table. Hosted Zone Records Tab Items
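The value formats differ by record type, as the table above shows. A rough per-type shape check can be written with the standard `ipaddress` module; the function name, the loose CNAME check, and the decision to accept other types unchanged are all illustrative assumptions:

```python
import ipaddress

def is_valid_record_value(rtype: str, value: str) -> bool:
    """Rough value-shape check for A, AAAA, and CNAME records."""
    if rtype == "A":
        try:
            return isinstance(ipaddress.ip_address(value), ipaddress.IPv4Address)
        except ValueError:
            return False
    if rtype == "AAAA":
        try:
            return isinstance(ipaddress.ip_address(value), ipaddress.IPv6Address)
        except ValueError:
            return False
    if rtype == "CNAME":
        # Very loose domain-name shape check: dot-separated labels of 1-63 chars.
        labels = value.rstrip(".").split(".")
        return all(0 < len(lbl) <= 63 for lbl in labels) and len(labels) >= 2
    return True  # other types (TXT, MX, SPF, ...) have their own formats
```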
Tags
You can view the tag information of the resource selected on the Hosted Zone List page, and add, change, or delete tags.
Division
Detailed Description
Tag List
Tag list
Can view tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Hosted Zone Tags Tab Items
Task History
You can view the task history of the resource selected on the Hosted Zone List page.
Division
Detailed Description
Task History List
Resource change history
Can view task details, task date and time, resource type, resource name, task result, task operator information
Clicking the corresponding resource in the Task History List opens the Task History Detail popup window
Table. Hosted Zone Task History Tab Items
Managing Hosted Zone Records
You can add, modify, or delete records in the Hosted Zone service.
Adding Records
Follow these steps to add records to Hosted Zone.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Hosted Zone menu on the Service Home page. You will be navigated to the Hosted Zone List page.
Click the resource for which you want to view detailed information on the Hosted Zone List page. You will be navigated to the Hosted Zone Detail page.
Click the Records tab on the Hosted Zone Detail page. You will be navigated to the Records tab page.
Click the Add Record button on the Records tab page. The Add Record window opens.
Select the Type, Name, Value, TTL items in the Add Record window and click OK. The notification confirmation window opens.
Division
Detailed Description
A
Enter IPv4 format IP address
Click the Add button to add IP address, can register up to 8
AAAA
Enter IPv6 format IP address
Click the Add button to add IP address, can register up to 8
SPF
Enter the IP addresses of mail servers authorized to send mail for the domain
When registering multiple servers, enter in format v=spf1 ip4:211.214.160.28 ip4:211.214.16.29 ~all
CNAME
Enter record alias in domain name format
Cannot register if entered the same as other type of record value
MX
Enter priority and mail server address
Click the Add button to add server address, can register up to 8
When entering priority, enter within 0 - 65,535 range, the smaller the value, the higher the priority
TXT
Enter string
Enter within 250 characters
Table. Detailed Items by Record Type
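Some of the per-type rules above are easy to get wrong when typed by hand, so they can be checked or generated programmatically. A sketch under the rules stated in the table (the SPF value follows the format shown, MX priority is 0-65535 with lower meaning higher priority, TXT is limited to 250 characters; function names are assumptions):

```python
def build_spf_value(ipv4_list):
    """Build an SPF record value in the multi-server format shown in the guide."""
    mechanisms = " ".join(f"ip4:{ip}" for ip in ipv4_list)
    return f"v=spf1 {mechanisms} ~all"

def check_mx_priority(priority: int) -> bool:
    """MX priority must be in the 0-65535 range; smaller values are preferred."""
    return 0 <= priority <= 65535

def check_txt_value(text: str) -> bool:
    """TXT values are limited to 250 characters per the guide."""
    return len(text) <= 250
```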
Click OK in the notification confirmation window.
Verify that the added item is displayed in the record list.
Modifying Records
Caution
Records created by the system or records in Error status cannot be modified.
Follow these steps to modify records in Hosted Zone.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Hosted Zone menu on the Service Home page. You will be navigated to the Hosted Zone List page.
Click the resource for which you want to view detailed information on the Hosted Zone List page. You will be navigated to the Hosted Zone Detail page.
Click the Records tab on the Hosted Zone Detail page. You will be navigated to the Records tab page.
Click the more menu in the list on the Records tab page and click Modify. The Modify Record window opens.
Modify the desired items in the modify record window and click OK.
Click OK in the notification confirmation window.
Deleting Records
Caution
Records created by the system cannot be deleted.
Follow these steps to delete records from Hosted Zone.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Hosted Zone menu on the Service Home page. You will be navigated to the Hosted Zone List page.
Click the resource for which you want to view detailed information on the Hosted Zone List page. You will be navigated to the Hosted Zone Detail page.
Click the Records tab on the Hosted Zone Detail page. You will be navigated to the Records tab page.
Click the more menu in the list on the Records tab page and click Delete. The notification confirmation window opens.
Click OK in the notification confirmation window.
Deleting Hosted Zone
You can apply for Hosted Zone service termination in the Samsung Cloud Platform Console.
Caution
You cannot terminate if records are registered to the Hosted Zone service. To terminate the service, delete the registered records first.
Follow these steps to request Hosted Zone service termination.
Click the All Services > Networking > DNS menu. You will be navigated to the DNS’s Service Home page.
Click the Hosted Zone menu on the Service Home page. You will be navigated to the Hosted Zone List page.
Click the resource for which you want to view detailed information on the Hosted Zone List page. You will be navigated to the Hosted Zone Detail page.
Click the Delete Hosted Zone button on the Hosted Zone Detail page.
When termination is complete, verify the service termination in the Hosted Zone list.
5.4.2.3 - Public Domain Name
Users can create a Public Domain Name service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Public Domain Name
You can create and use a Public Domain Name service through the Samsung Cloud Platform Console.
To request the creation of a Public Domain Name service, follow these steps:
Click the All Services > Networking > DNS menu. You will be redirected to the Service Home page.
On the Service Home page, click the Create Public Domain Name button from the dropdown. You will be redirected to the Create Public Domain Name page.
On the Create Public Domain Name page, enter the information required to create the service and select detailed options.
Enter or select the required information in the Service Information section.
Division
Required
Description
Domain Name to Register
Required
Enter the Public Domain Name to use
Enter 2-63 characters including lowercase letters, numbers, and special characters (-)
When applying for a new domain, click the Check Availability button to check for duplicates
Purchase Period
Required
Automatically selected as 1 year
Auto Renew
Required
Set whether to automatically renew when the domain usage period expires
Select Use to enter detailed information
Registrant Name (Company Name): Enter the registrant name or company name within 30 characters
Registrant Email: Enter the registrant’s email address
Registrant Address: Enter the registrant’s company address, click the Find Zip Code button to search for the address and enter it
Phone Number: Enter the registrant’s phone number
Table. Public Domain Name service information input items
Enter or select the required information in the Additional Information section.
Division
Required
Description
Description
Optional
Enter additional information and description for the Public Domain Name
Tags
Optional
Add tags
Up to 50 tags can be added per resource
Click the Add Tag button and enter or select the Key, Value values
Table. Public Domain Name additional information input items
Review the creation details and click the Create button.
When creation is complete, you can verify the created resource on the Public Domain Name List page.
Caution
The domain auto-renewal feature can be changed up to one week before the domain usage expiration date. If the auto-renewal feature is not used, the domain information will be deleted on the domain usage expiration date.
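The one-week cutoff above can be expressed as a date check. This is a sketch only: it takes "one week" literally as 7 calendar days and assumes the cutoff day itself still allows the change, which the guide does not specify.

```python
from datetime import date, timedelta

def can_change_auto_renew(today: date, expiration: date) -> bool:
    """Auto-renew can be changed up to one week before the domain expiration date.

    Assumption: the last allowed day is exactly 7 days before expiration.
    """
    return today <= expiration - timedelta(weeks=1)
```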
Checking Public Domain Name Detailed Information
For the Public Domain Name service, you can view and modify the entire resource list and detailed information. The Public Domain Name Details page consists of tabs for Detailed Information, Registration Information, Tags, Operation History.
To check Public Domain Name detailed information, follow these steps:
Click the All Services > Networking > DNS menu. You will be redirected to the Service Home page.
On the Service Home page, click the Public Domain Name menu. You will be redirected to the Public Domain Name List page.
On the Public Domain Name List page, click the resource for which you want to check detailed information. You will be redirected to the Public Domain Name Details page.
The Public Domain Name Details page displays the status information and detailed information of the Public Domain Name, and consists of tabs for Detailed Information, Registration Information, Tags, Operation History.
Division
Description
Service Status
Status of the Public Domain Name
Creating: Creating
Active: Operating
Editing: Changing settings
Registered: Period renewal registered
Transfer Requested: Domain transfer request completed
Expired: Usage period expired
Transfer Domain Between Accounts
Transfer domain between accounts request button
Cancel Transfer Request: Can cancel domain transfer request after transfer request completion
Approve Transfer Request: Can approve transfer request when receiving a domain transfer request
Reject Transfer Request: Can reject transfer request when receiving a domain transfer request
Table. Public Domain Name status information and additional features
Detailed Information
On the Public Domain Name List page, you can check the detailed information of the selected resource and modify the information if necessary.
Division
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
Domain Name
Public Domain Name domain name
Registration Date
Public Domain Name domain registration date
Purpose Classification
Displays the selected purpose
Expiration Date
Public Domain Name domain usage expiration date
Auto Renew
Displays whether auto-renewal feature is used
Click the Edit icon to change settings
Description
Public Domain Name description
Click the Edit icon to modify the description
Table. Public Domain Name detailed information items
Registration Information
On the Public Domain Name List page, you can check and modify the domain registration information.
Division
Description
Registrant Name (Company Name)
Registrant name or company name entered when applying for the service
Registrant Email
Registrant email address entered when applying for the service
Registrant Address
Registrant company address entered when applying for the service
Phone Number
Registrant phone number entered when applying for the service
Table. Public Domain Name registration information tab items
Tags
On the Public Domain Name List page, you can check the tag information of the selected resource, and add, change, or delete tags.
Division
Description
Tag List
Tag list
You can check the Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the list of previously created Keys and Values
Table. Public Domain Name tag tab items
Operation History
On the Public Domain Name List page, you can check the operation history of the selected resource.
Division
Description
Operation History List
Resource change history
You can check operation details, operation date and time, resource type, resource name, operation result, and operator information
Click the corresponding resource in the Operation History List to open the Operation History Details popup window
Table. Public Domain Name operation history tab detailed information items
Requesting Public Domain Transfer Between Accounts
You can transfer a registered Public Domain to another account user.
Note
If the domain you want to transfer is being used as a Hosted Zone, you cannot request a transfer. First delete the Hosted Zone in use and then request a transfer.
The auto-renewal of the domain you want to transfer must be set to Not Used to request a transfer. After the domain transfer, you can set up auto-renewal in the account that received the transfer.
You can request a domain transfer only up to 1 month before the registration period expiration date of the domain you want to transfer.
To transfer Public Domain information to another account user, follow these steps:
Click the All Services > Networking > DNS menu. You will be redirected to the DNS Service Home page.
On the Service Home page, click the Public Domain Name menu. You will be redirected to the Public Domain Name List page.
On the Public Domain Name List page, click the resource for which you want to check detailed information. You will be redirected to the Public Domain Name Details page.
On the Public Domain Name Details page, click the Transfer Domain Between Accounts button. The Transfer Domain Between Accounts popup window will open.
In the Transfer Domain Between Accounts popup window, enter the account ID to transfer to and click the Confirm button.
When the domain transfer request is completed, the status changes to Transfer requested, and the applicant can click the Cancel Transfer Request button to cancel the transfer request.
After the domain transfer request, when another account user approves the transfer, the domain information is deleted from the transfer request account.
If the user who received the transfer request does not approve within 7 days after the approval request, the transfer request is automatically canceled.
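The transfer preconditions and the 7-day approval window described above can be summarized as a small eligibility check. The sketch below is an assumption-laden illustration: "1 month" is taken as 30 days, boundary days are treated as inclusive, and the function names are hypothetical.

```python
from datetime import date, timedelta

def can_request_transfer(today: date, expiration: date,
                         auto_renew: bool, hosted_zone_in_use: bool) -> bool:
    """Transfer preconditions stated in the guide (boundary handling assumed).

    - The domain must not be in use as a Hosted Zone.
    - Auto-renew must be set to Not Used.
    - Requests are allowed only up to 1 month (assumed 30 days) before expiration.
    """
    if hosted_zone_in_use or auto_renew:
        return False
    return today <= expiration - timedelta(days=30)

def approval_deadline(requested_on: date) -> date:
    """The request is auto-canceled if not approved within 7 days."""
    return requested_on + timedelta(days=7)
```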
Managing Public Domain Information Transfer Request
When another account user transfers a Public Domain, you can approve or reject the request.
Approving Public Domain Information Transfer Request
To approve a Public Domain transfer request, follow these steps:
Click the All Services > Networking > DNS menu. You will be redirected to the DNS Service Home page.
On the Service Home page, click the Public Domain Name menu. You will be redirected to the Public Domain Name List page.
On the Public Domain Name List page, click the resource for which you want to check detailed information. You will be redirected to the Public Domain Name Details page.
On the Public Domain Name Details page, click the Approve Transfer Request button. Click the Confirm button in the notification window.
Rejecting Public Domain Information Transfer Request
To reject a Public Domain transfer request, follow these steps:
Click the All Services > Networking > DNS menu. You will be redirected to the DNS Service Home page.
On the Service Home page, click the Public Domain Name menu. You will be redirected to the Public Domain Name List page.
On the Public Domain Name List page, click the resource for which you want to check detailed information. You will be redirected to the Public Domain Name Details page.
On the Public Domain Name Details page, click the Reject Transfer Request button. Click the Confirm button in the notification window.
Modifying Public Domain Name Registration Information
You can modify the registration information of the Public Domain Name.
To modify the registration information of the Public Domain Name, follow these steps:
Click the All Services > Networking > DNS menu. You will be redirected to the DNS Service Home page.
On the Service Home page, click the Public Domain Name menu. You will be redirected to the Public Domain Name List page.
On the Public Domain Name List page, click the resource for which you want to check detailed information. You will be redirected to the Public Domain Name Details page.
On the Public Domain Name Details page, click the Registration Information tab. You will be redirected to the Registration Information tab page.
On the Registration Information tab page, click the Edit button. You will be redirected to the Edit Registration Information page.
On the Edit Registration Information page, modify the desired items and click the Complete button.
5.4.3 - Release Note
DNS
2026.03.19
[FEATURE] DNS Feature Improvements
In conjunction with the Service Watch service, you can view measurements for the following 5 items.
Number of server error responses (unit: count)
Number of NXDOMAIN responses (unit: count)
Number of queries not answered within 1 second (unit: count)
Number of outgoing UDP queries (unit: count)
Number of UDP-based data requests processed (unit: count)
2025.12.16
[FEATURE] Added Public Domain Name Transfer Feature Between User Accounts
Public Domain Names registered through Samsung Cloud Platform can be transferred to other user accounts within the allowed period.
2025.07.01
[NEW] DNS Service Official Version Release
The DNS service, available in both private network and internet environments, has been officially released. You can manage Private DNS and Private Hosted Zones for restricted networks, and register Public Domain Names and manage Public Hosted Zones for the internet environment.
2024.07.02
[NEW] Beta Version Release
Released a beta version of the DNS service, providing new-domain registration application and management functions based on user requests.
5.5 - VPN
5.5.1 - Overview
Service Overview
VPN (Virtual Private Network) is a service that connects the customer network and Samsung Cloud Platform through an encrypted virtual private network.
Figure. VPN Configuration Diagram
Features
Rapid Service Provision
You can set up automated services through the web-based Console, and you can use the VPN service immediately without any waiting time after creating the service.
Secure Access
Through encrypted virtual tunneling based on IPsec VPN, whose performance and stability have been verified, you can securely access the internal network built on Samsung Cloud Platform from an external customer network.
Easy Operation Environment
You can easily and quickly manage web-based deployment, capacity provisioning, and service updates without complex network environment configuration.
Efficient Service Use
It is possible to manage costs efficiently because you can pay only for the amount of service used without any separate installation costs.
5.5.1.1 - ServiceWatch Metrics
VPN sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at a 1‑minute interval.
Reference
For how to check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are the basic metrics for the VPN namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. VPN Basic Metrics
5.5.2 - How-to guides
Creating a VPN
You can create and use VPN services in the Samsung Cloud Platform Console.
Caution
You can create up to 3 VPNs per Account. If you exceed the creation limit, you cannot create a new VPN.
To create a VPN, follow these steps:
Click the All Services > Networking > VPN menu. You will be redirected to the VPN Service Home page.
On the Service Home page, click the Create VPN button. You will be redirected to the Create VPN page.
On the Create VPN page, enter the required information for service creation and select detailed options.
Enter the required information in the Service Information section.
Item
Required
Description
VPN Gateway Name
Required
Enter the VPN Gateway name
Enter 3 to 20 characters using alphanumeric characters
Connected VPC Name
Required
Select the VPC connected to the VPN Gateway
Click + New Creation to create a VPC and then select it
Public IP
Required
Select the IP for the VPN Gateway to communicate with remote sites
Table. VPN Service Information Input Items
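The VPN Gateway naming rule above (3 to 20 alphanumeric characters) differs from the DNS naming rules in that the hyphen is not mentioned. A local pre-check sketch, assuming "alphanumeric" means ASCII letters and digits only and that the function name is hypothetical:

```python
import re

# Rule stated in the guide: 3 to 20 alphanumeric characters.
# Assumption: both upper- and lowercase ASCII letters are accepted.
_VPN_GW_NAME = re.compile(r"^[A-Za-z0-9]{3,20}$")

def is_valid_vpn_gateway_name(name: str) -> bool:
    """Return True if the name satisfies the documented VPN Gateway rule."""
    return bool(_VPN_GW_NAME.fullmatch(name))
```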
Enter or select the required information in the Additional Information section.
Item
Required
Description
Description
Optional
User additional description
Tags
Optional
Add tags
Add up to 50 tags per resource
Click the Add Tag button and then enter or select Key and Value values
Table. VPN Service Additional Information Input Items
On the Summary panel, review the detailed information of creation and estimated charges, then click the Create button.
After creation is complete, verify the created resource on the VPN List page.
Viewing VPN Detailed Information
For VPN services, you can view and modify the entire resource list and detailed information. The VPN Detail page consists of Detailed Information, Tags, and Task History tabs.
To view the detailed information of VPN services, follow these steps:
Click the All Services > Networking > VPN menu. You will be redirected to the VPN Service Home page.
On the Service Home page, click the VPN menu. You will be redirected to the VPN List page.
On the VPN List page, click the resource for which you want to view detailed information. You will be redirected to the VPN Detail page.
The VPN Detail page displays status information and additional feature information, and consists of Detailed Information, Tags, and Task History tabs.
Detailed Information
You can view the detailed information of the resource selected on the VPN List page, and modify the information if necessary.
Item
Description
Service Status
Current status
Active: Operating normally
Creating: Creation in progress
Editing: Configuration in progress
Deleting: Termination in progress
Error: Current status unknown
If this status persists, contact the registered administrator
Service Termination
VPN Service Termination
Table. VPN Status Information and Additional Features
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
VPN resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date/Time
Date/Time when the service was created
Modifier
User who modified the service
Modification Date/Time
Date/Time when the service information was modified
VPN Gateway Name
VPN Gateway name
Connected VPC Name
VPC name connected to VPN
Public IP
IP information for VPN Gateway to communicate with remote sites
Description
User-written additional description
Click the Modify icon to modify
Table. VPN Detailed Information Items
Tags
On the VPN List page, you can view the tag information of the selected resource, and add, modify, or delete tags.
Item
Description
Tag List
Tag list
View tag Key, Value information
Add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. VPN Tag Tab Items
Task History
You can view the task history of the resource selected on the VPN Detail page.
Item
Description
Task History List
Resource change history
View task date/time, resource name, task details, task results, and task performer information
Table. VPN Task History Tab Detailed Information Items
Terminating a VPN
You can terminate unused VPNs to reduce operating costs. However, since terminating the service can immediately stop running services, you must fully consider the impact of the service interruption before proceeding with termination.
Caution
You cannot terminate if there are resources connected to the VPN, such as VPN Tunnels.
You cannot terminate if the VPN service status is Creating or Editing.
To terminate a VPN, follow these steps:
Click the All Services > Networking > VPN menu. You will be redirected to the VPN Service Home page.
On the Service Home page, click the VPN menu. You will be redirected to the VPN List page.
On the VPN List page, select the resource to terminate. You will be redirected to the VPN Detail page.
On the VPN Detail page, click the Service Termination button.
After termination is complete, verify that the resource has been terminated on the VPN List page.
5.5.2.1 - VPN Tunnel
Creating a VPN Tunnel
You can configure IPSec Tunneling with remote sites in the VPN service using the Samsung Cloud Platform Console.
To create a VPN Tunnel, follow these steps:
Click the All Services > Networking > VPN menu. You will be redirected to the VPN Service Home page.
On the Service Home page, click the Create VPN Tunnel button. You will be redirected to the Create VPN Tunnel page.
On the Create VPN Tunnel page, enter the required information for service creation and select detailed options.
Enter the required information in the Service Information section.
Item
Required
Description
VPN Tunnel Name
Required
Enter the VPN Tunnel name
Enter 3 to 20 characters using alphanumeric characters
VPN Gateway Name
Required
Select the VPN Gateway to connect
VPC Name
Default
VPC information connected to VPN Gateway is automatically entered
Public IP
Default
IP information for VPN Gateway to communicate with remote sites is automatically entered
Peer VPN GW IP
Required
Enter the IP information of the remote VPN
Example: 192.168.10.0
Remote Subnet(CIDR)
Required
Enter the subnet address of the remote site to connect
After entering the subnet address, click the Add button; up to 10 can be added
Example: 20.0.0.0/24
Pre-shared Key
Required
Enter the shared key (PSK) to be used for IKE mutual authentication between VPN gateways
Enter 8 to 64 characters
Recommended to use a 32-character alphanumeric combination string
Description
Optional
User additional description
Table. VPN Tunnel Service Information Input Items
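The Pre-shared Key and Remote Subnet (CIDR) constraints above can be checked programmatically before filling in the form. This is a minimal sketch using only Python's standard library; it illustrates the input rules stated in the table (8 to 64 character PSK with a 32-character alphanumeric string recommended, up to 10 remote subnets), not any Samsung Cloud Platform API.

```python
import secrets
import string
import ipaddress

def generate_psk(length: int = 32) -> str:
    """Generate a random alphanumeric pre-shared key.

    The console accepts 8 to 64 characters; a 32-character
    alphanumeric string is the recommended strength."""
    if not 8 <= length <= 64:
        raise ValueError("PSK must be 8 to 64 characters")
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def validate_remote_subnets(cidrs: list[str]) -> list[str]:
    """Check remote-site subnets before entering them in the console.

    At most 10 Remote Subnet (CIDR) entries can be added per tunnel."""
    if len(cidrs) > 10:
        raise ValueError("at most 10 remote subnets per VPN Tunnel")
    return [str(ipaddress.ip_network(c)) for c in cidrs]

psk = generate_psk()
subnets = validate_remote_subnets(["20.0.0.0/24", "20.0.1.0/24"])
```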
Enter or select the required information in the Tunnel Configuration section.
Item
Required
Description
IKE Configuration > IKE Version
Required
Select IKE version
IKE Configuration > Algorithm Configuration
Required
Select Encryption Algorithm and Digest Algorithm, then click the Add button
IKE Configuration > Diffie-Hellman
Required
Select Diffie-Hellman group
IKE Configuration > SA LifeTime
Required
Enter the VPN session (Security Association) validity period
IPSec Configuration > Algorithm Configuration
Required
Select Encryption Algorithm and Digest Algorithm, then click the Add button
Enter or select the required information in the Additional Information section.
Item
Required
Description
Tags
Optional
Add tags
Add up to 50 tags per resource
Click the Add Tag button and then enter or select Key and Value values
Table. VPN Tunnel Additional Information Input Items
On the Summary panel, review the detailed information of creation and estimated charges, then click the Create button.
After creation is complete, verify the created resource on the VPN Tunnel List page.
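When scripting tunnel creation, the form inputs above can be collected into a single request body before submission. The field names and structure below are hypothetical illustrations of the console form, not the actual VPN API schema; refer to the API Reference section for the real interface.

```python
def build_vpn_tunnel_request(
    tunnel_name: str,
    vpn_gateway_name: str,
    peer_gw_ip: str,
    remote_subnets: list[str],
    pre_shared_key: str,
    ike_version: str = "v2",
) -> dict:
    """Assemble the console form inputs as one request body.

    All field names here are illustrative assumptions only;
    consult the VPN API Reference for the actual schema."""
    if not (3 <= len(tunnel_name) <= 20 and tunnel_name.isalnum()):
        raise ValueError("tunnel name: 3 to 20 alphanumeric characters")
    if len(remote_subnets) > 10:
        raise ValueError("at most 10 remote subnets")
    if not 8 <= len(pre_shared_key) <= 64:
        raise ValueError("PSK must be 8 to 64 characters")
    return {
        "vpnTunnelName": tunnel_name,
        "vpnGatewayName": vpn_gateway_name,
        "peerVpnGwIp": peer_gw_ip,
        "remoteSubnets": remote_subnets,
        "preSharedKey": pre_shared_key,
        "ike": {"version": ike_version},
    }

req = build_vpn_tunnel_request(
    "tunnel01", "vpngw01", "192.168.10.1",
    ["20.0.0.0/24"], "Abcdefgh12345678Abcdefgh12345678")
```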
Viewing VPN Tunnel Detailed Information
For VPN Tunnel services, you can view and modify the entire resource list and detailed information. The VPN Tunnel Detail page consists of Detailed Information, Tags, and Task History tabs.
To view VPN Tunnel detailed information, follow these steps:
Click the All Services > Networking > VPN menu. You will be redirected to the VPN Service Home page.
On the Service Home page, click the VPN Tunnel menu. You will be redirected to the VPN Tunnel List page.
On the VPN Tunnel List page, click the resource for which you want to view detailed information. You will be redirected to the VPN Tunnel Detail page.
The VPN Tunnel Detail page displays status information and additional feature information, and consists of Detailed Information, Tags, and Task History tabs.
Item
Description
Status
Current status
Active: Operating normally
Creating: Creation in progress
Editing: Configuration change in progress
Deleting: Deletion in progress
Error: Current status unknown
If this occurs continuously, contact the registered administrator
VPN Tunnel Deletion
VPN Tunnel delete button
Table. VPN Tunnel Status Information and Additional Features
Detailed Information
On the VPN Tunnel List page, you can view the detailed information of the selected resource and modify the information if necessary.
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
VPN resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date/Time
Date/Time when the service was created
Modifier
User who modified the service information
Modification Date/Time
Date/Time when the service information was modified
VPN Tunnel Name
VPN Tunnel name
VPN Gateway Name
VPN Gateway name
Public IP
Public IP information
Peer VPN GW IP
Peer VPN GW information
Click the Modify icon to modify
Remote Subnet (CIDR)
Remote Subnet information
Click the Modify icon to modify
Pre-shared Key
Pre-shared Key information
Click the Modify icon to modify
Status
Current service connection status
Description
VPN Tunnel additional description
Click the Modify icon to modify
IKE
Click the Modify button to modify configuration information in bulk
IKE Version
IKE Version information
Encryption Algorithm/Digest Algorithm
Algorithm information
Diffie-Hellman
Diffie-Hellman information
SA LifeTime
SA LifeTime information
IPSec
Click the Modify button to modify configuration information in bulk
Encryption Algorithm/Digest Algorithm
Algorithm information
Diffie-Hellman
Diffie-Hellman information
SA LifeTime
SA LifeTime information
Perfect Forward Secrecy(PFS)
PFS configuration information
DPD
DPD probe interval information
Click the Modify icon to modify
Table. VPN Tunnel Detailed Information Items
Tags
On the VPN Tunnel List page, you can view the tag information of the selected resource, and add, modify, or delete tags.
Item
Description
Tag List
Tag list
View tag Key, Value information
Add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. VPN Tunnel Tag Tab Items
Task History
You can view the task history of the resource selected on the VPN Tunnel List page.
Item
Description
Task History List
Resource change history
View task date/time, resource name, task details, task results, and task performer information
Table. VPN Tunnel Task History Tab Detailed Information Items
Deleting a VPN Tunnel
You can delete unused VPN Tunnels to reduce operating costs. However, since deleting a tunnel can immediately stop running services, you must fully consider the impact of the service interruption before proceeding with deletion.
To delete a VPN Tunnel, follow these steps:
Click the All Services > Networking > VPN menu. You will be redirected to the VPN Service Home page.
On the Service Home page, click the VPN Tunnel menu. You will be redirected to the VPN Tunnel List page.
On the VPN Tunnel List page, click the resource to delete. You will be redirected to the VPN Tunnel Detail page.
Click the VPN Tunnel Delete button.
After deletion is complete, verify that the resource has been deleted on the VPN Tunnel List page.
5.5.3 - API Reference
API Reference
5.5.4 - CLI Reference
CLI Reference
5.5.5 - Release Note
VPN
2025.10.23
FEATURE: Change in the number of additional remote site subnets for VPN Tunnel
You can enter up to 10 remote subnets (CIDR).
2024.02.27
NEW: Official Release of VPN Service
A VPN service has been released that connects the customer network and Samsung Cloud Platform through an encrypted (IPSec) virtual private network.
5.6 - Firewall
5.6.1 - Overview
Service Overview
Firewall is a virtual logical firewall service that controls traffic occurring from VPC and Load Balancer of Samsung Cloud Platform.
Firewall can be applied to Internet Gateway, Direct Connect, and Load Balancer resources, and allows you to manage a secure network by setting rules for communication between the VPC and the internet, and between the VPC and the customer network.
When the Firewall is first created, it blocks all Inbound/Outbound traffic according to the default rule (Any Deny).
Users can create Inbound/Outbound rules by specifying IP addresses, ports, and protocols, and only allowed traffic can communicate with the created rules.
Figure. Firewall Configuration Diagram
Component
The components that make up the Firewall are as follows.
Component
Detailed Description
Applied target
Firewall applied target resource
Apply Firewall to Internet Gateway, Direct Connect, Load Balancer as target
Firewall checks whether to use Firewall when creating the target resource and creates it together
Firewall size
Firewall is provided in 5 sizes according to the rule quota
Extra Small: 5
Small: 100
Medium: 200
Large: 500
Extra Large: 1,000
Firewall rules
When the Firewall is first created, it blocks all Inbound/Outbound traffic according to the default rule (Any Deny).
Allows Inbound/Outbound rules to be added by setting the target address, protocol, and port
Provides a batch creation function for rules through form creation
Fig. Firewall Service Components
Constraints
The Samsung Cloud Platform’s Firewall has a quota (limit) for the maximum number of rules that can be created by size. When creating a Firewall, it is created with Extra Small by default, and the Firewall size can be changed on the Firewall details page in the Samsung Cloud Platform Console.
Size
Rule Quota
Detailed Description
Extra Small
5
Up to 5 rules can be created
Small
100
Up to 100 rules can be created
Medium
200
Up to 200 rules can be created
Large
500
Up to 500 rules can be created
Extra Large
1,000
Up to 1,000 rules can be created
Table. Firewall Restrictions
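The quota check the console performs when adding rules can be sketched as follows; the size-to-quota mapping comes directly from the restrictions table above.

```python
# Rule quotas per Firewall size, from the restrictions table above.
FIREWALL_RULE_QUOTA = {
    "Extra Small": 5,
    "Small": 100,
    "Medium": 200,
    "Large": 500,
    "Extra Large": 1000,
}

def can_add_rules(size: str, current_rules: int, new_rules: int = 1) -> bool:
    """Return True if `new_rules` more rules fit within the size's quota."""
    return current_rules + new_rules <= FIREWALL_RULE_QUOTA[size]
```

If the check fails, the Firewall size can be changed on the Firewall Details page to raise the quota.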
Preceding Service
This is a list of services that must be pre-configured before creating the Firewall service. Please refer to the user guide (reference link) below for more information and prepare in advance.
Load Balancer
A service that distributes traffic to multiple servers to maintain a stable service
Fig. Preceding Firewall Service
5.6.2 - How-to guides
Users can create a Firewall service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Firewall
You can create and use a Firewall service through the Samsung Cloud Platform Console.
Notice
A Firewall is created by setting the Use Firewall item to Use in a prerequisite Networking service; Firewalls set to Use can be checked in the Firewall List.
Unlike other services, a Firewall cannot be created on its own in the Samsung Cloud Platform Console.
To set up Firewall use, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click the prerequisite service to create. You will be redirected to the service creation page.
Create VPC: Set up Firewall use for Internet Gateway and Transit Gateway of VPC service.
When creating VPC’s Internet Gateway service, set the Use Firewall item to Use. For detailed instructions, refer to Creating Internet Gateway.
Create VPC’s Transit Gateway service and apply for the Uplink Firewall associated service. For detailed instructions, refer to Creating Transit Gateway.
Create Direct Connect: Set the Use Firewall item to Use when creating the Direct Connect service. For detailed instructions, refer to Creating Direct Connect.
Create Load Balancer: Set the Use Firewall item to Use when creating the Load Balancer service. For detailed instructions, refer to Creating Load Balancer.
When the prerequisite service creation is complete, check whether the Firewall resource is displayed on the Firewall List.
Checking Firewall Detailed Information
For the Firewall service, you can view and modify the entire resource list and detailed information from the resource management menu.
To check Firewall detailed information, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click Firewall List. You will be redirected to the Firewall List page.
On the Firewall List page, you can check the following information.
Item
Description
Firewall Name
Automatically generated in the {Firewall prerequisite service type}_Firewall format
Firewall Division
Firewall prerequisite service type (Internet Gateway, Direct Connect, Load Balancer)
Size
Firewall size selected by user
VPC Name
VPC name connected to Firewall
Connection Name
Automatically generated in the {prerequisite service name}_Firewall format
Number of Rules
Number of rules in use in the Firewall
Use Status
Whether Firewall is used (activated) or not used (deactivated)
If set to Not Use, an Any Allow rule is applied and no Firewall fee is charged
Status
Displays Firewall status
Click the More button to set Use/Not Use
Table. Firewall resource list items
On the Firewall List page, click the resource for which you want to check detailed information. You will be redirected to the Firewall Details page.
The Firewall Details page displays status information and additional feature information, and consists of tabs for Detailed Information, Rules, Tags, Operation History.
Item
Description
Service Status
Displays Firewall status
Creating: Creation in progress
Active: Operating normally
Editing: Modification in progress
Deploying: Deployment in progress
Deleting: Deletion in progress
Error: Error occurred
Table. Firewall status information
Detailed Information
On the Firewall List page, you can check the detailed information of the selected resource and modify the information if necessary.
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
Firewall Name
Automatically generated as {resource name}_Firewall_{connection name}
Firewall ID
Unique resource ID in the service
Firewall Division
Firewall prerequisite service type (Internet Gateway, Direct Connect, Load Balancer)
Size
Firewall size selected by user
Click the Edit icon to change settings
Firewall Rule Count/Quota
Rule quota and number of rules in use for the Firewall
VPC Name
VPC name connected to Firewall
Click VPC name to move to details page
VPC ID
VPC ID connected to Firewall
Connection Name
Automatically created as {Firewall prerequisite service name_Firewall}
Click connection name to move to details page
Log Storage Status
Whether to store Firewall logs
Use: Store logs
Not Use: Do not store logs
Click the Edit icon to change settings
Table. Firewall detailed information
Rules
On the Firewall List page, you can check the rule list of the selected resource and add, modify, or delete rules.
Item
Description
Excel Download
Download the currently entered rule list as an Excel (*.xlsx) file
Detailed Search
Search for rules matching conditions set by the user
Supports string partial match (LIKE) search
Modify Rule
Modify and delete rules displayed in the rule list
Click the button to move to the rule modification page
Add Rule
Add a new Firewall rule
Click the button to move to the rule addition page
Order
Displays the rule order; rules are applied top-down in this order
Rule ID
Unique ID value for the rule
Click rule ID to check rule detailed information in a popup window
Rule Index
Unique Index value for the rule, used for log analysis
Source Address
Source address added to the rule
Destination Address
Destination address added to the rule, displayed as IP address according to the entered rule
Service
Protocol and destination port
Action
Traffic Allow/Deny distinction due to rule
Allow: Allow traffic if matches rule
Deny: Block traffic if matches rule
Direction
Access direction of traffic based on Firewall
Inbound: External → Internal
Outbound: Internal → External
Active Status
Displays whether the rule is active, rule does not operate if in inactive state
Status
Displays rule status
Table. Firewall rule list detailed information
Tags
On the Firewall List page, you can check the tag information of the selected resource, and add, change, or delete tags.
Item
Description
Tag List
Tag list
You can check Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the list of previously created Keys and Values
Table. Firewall tag tab items
Operation History
On the Firewall List page, you can check the operation history of the selected resource.
Item
Description
Operation History List
Resource change history
Check operation date and time, resource name, operation details, operation result, operator information
Click the button to perform detailed search
Table. Firewall operation history tab detailed information items
Managing Firewall Rules
You can add, modify, or delete Firewall rules.
Caution
Rules can be added or modified only when the Firewall status is Active.
Rules cannot be added if there is no status view permission for the prerequisite service.
Note
The firewall periodically resolves and caches Domain rules registered by the user, retaining the IP information for a certain period.
If the cached IP of a registered Domain rule does not match the destination's actual IP, communication may be restricted.
Creating Rules
You can add Firewall rule information by directly entering it on the Rules tab.
To add a Firewall rule, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click Firewall List. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource to which you want to add rules. You will be redirected to the Firewall Details page.
On the Firewall Details page, click the Rules tab. You will be redirected to the Rules tab page.
On the Rules tab, click the Add Rule button. You will be redirected to the Add Rule page.
Enter the required information on the Direct Input tab page.
Check the added rule and click the Complete button.
Caution
If you move to another page without clicking the Confirm button after entering content on the Add Rule page, all entered items will be reset, so proceed with caution.
Item
Required
Description
Rule Position
Required
Specify the position of the rule to create
Rule ID to Copy
Optional
Enter the Firewall rule ID to copy and click the Search button to select
Source Address
Required
Source address to add to the rule
Up to 128 addresses can be entered at once in CIDR (IP/Subnet mask) format, separated by commas (,); a range can be entered with a hyphen (-)
Destination Address
Required
Select the type of destination address to add to the rule
Select IP: Up to 128 addresses can be entered at once in CIDR (IP/Subnet mask) format, separated by commas (,); a range can be entered with a hyphen (-)
Select Domain: Up to 128 fully qualified domain names (FQDN) can be entered at once, separated by commas (,)
Type items vary depending on the selected destination address format
Type
Required
Select the protocol type to apply the rule
Select Destination Port/Type: Select the protocol type
Internet Protocol: Enter a protocol number; up to 128 can be entered
All: Applies the rule to the entire range of protocols and ports (all ports for all protocols)
Type > Protocol
Required
Select the detailed protocol of the type
Select the protocol desired by the user among TCP, UDP, ICMP, input items vary depending on the selected protocol
When selecting ICMP in protocol, can set ICMP Type
Select frequently used Type items such as Echo among values defined as ICMP Type
Click the Add button to add input value
When selecting TCP/UDP in protocol, can select allowed ports such as SSH, HTTP, TELNET
When entering directly, can enter values from 1 to 65,535, can enter up to 128 at once using Comma (,), range (-)
Click the Add button to add input value
When selecting Internet Protocol in type, enter a protocol number from 1 to 254
Action
Required
Distinguish traffic allow/block due to rule
Allow: Allow traffic if matches rule
Deny: Block traffic if matches rule
Direction
Required
Access direction of traffic based on Firewall
Inbound: External → Internal
Outbound: Internal → External
Description
Optional
Additional description written by the user
Added Rule
-
Check list of entered rules
Move Up: Move selected rule up
Move Down: Move selected rule down
Delete: Delete selected rule
Table. Firewall rule add > direct input tab items
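The address-field format described above (comma-separated entries, hyphen ranges, CIDR notation, at most 128 entries) can be validated locally before pasting values into the console. A sketch using Python's ipaddress module, assuming IPv4 inputs:

```python
import ipaddress

MAX_ENTRIES = 128  # input limit stated in the table above

def parse_address_field(value: str) -> list[str]:
    """Normalize a source/destination address field.

    Entries are comma-separated; each entry is either a CIDR
    (IP/Subnet mask) or an IP range written as start-end."""
    entries = [e.strip() for e in value.split(",") if e.strip()]
    if len(entries) > MAX_ENTRIES:
        raise ValueError("up to 128 addresses can be entered at once")
    normalized = []
    for entry in entries:
        if "-" in entry:
            start, end = (p.strip() for p in entry.split("-", 1))
            lo, hi = ipaddress.ip_address(start), ipaddress.ip_address(end)
            if lo > hi:
                raise ValueError(f"invalid range: {entry}")
            normalized.append(f"{lo}-{hi}")
        else:
            # a bare IP is treated as a /32 host network
            normalized.append(str(ipaddress.ip_network(entry, strict=False)))
    return normalized

field = parse_address_field("10.0.0.0/24, 192.168.1.5-192.168.1.10")
```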
Creating Rules in Batch
To add multiple Firewall rules at once, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click Firewall List. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource to which you want to add rules. You will be redirected to the Firewall Details page.
On the Firewall Details page, click the Rules tab. You will be redirected to the Rules tab page.
On the Rules tab, click the Add Rule button. You will be redirected to the Add Rule page.
On the Add Rule page, click the Batch Input Rules tab.
Select Rule Position. If you do not select a position, it will be added to the last order of the rules.
On Select File, click the Download Form button. The batch input rule Excel file will be downloaded.
Enter rule information in the batch input rule Excel file and save it.
On Select File, click Attach File to attach the created Excel file and click Add.
If the attached Excel file format is different from the registration form or the file is encrypted, it cannot be uploaded.
The maximum number of batch registration rules that can be uploaded at once is 100. If the maximum registration rule count is exceeded, it cannot be uploaded.
If the maximum rule count set according to the firewall size is exceeded, the file cannot be uploaded.
Check whether the entered rules are displayed on the Added Rules list and adjust the order.
Check the added rules and click the Complete button.
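The upload limits above (at most 100 rules per batch, and the total must stay within the size's rule quota) can be checked before attaching the file. A minimal sketch; the rule dictionaries stand in for rows of the Excel form:

```python
def validate_batch(rules: list[dict], size_quota: int, existing: int) -> None:
    """Validate a batch of rules before uploading the form.

    A single upload accepts at most 100 rules, and the total rule
    count must stay within the quota for the Firewall size."""
    if len(rules) > 100:
        raise ValueError("a batch upload accepts at most 100 rules")
    if existing + len(rules) > size_quota:
        raise ValueError("rule quota for this Firewall size exceeded")
```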
Modifying Rules
You can select a Firewall rule to check and modify rule information.
To modify a Firewall rule, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click Firewall List. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource for which you want to modify rules. You will be redirected to the Firewall Details page.
On the Firewall Details page, click the Rules tab. You will be redirected to the Rules tab page.
On the Rules tab, click the Modify Rule button. You will be redirected to the Modify Rule page.
On the rule modification page, you can set the following items:
Activate: Activates the selected rule.
Deactivate: Deactivates the selected rule. Deactivated rules are not applied to the prerequisite service.
Delete: Deletes the selected rule. When you click delete, it is displayed as Delete Scheduled status in the changes.
Cancel Delete: If in delete scheduled status, you can cancel the rule deletion.
On the Modify Rule page, click the Edit button for the item to modify. The Modify Rule popup window will open.
In the Modify Rule popup window, enter the item to modify and click the Confirm button.
Item
Required
Description
Order
-
Order of the rule, order can be changed by clicking Move Up/Move Down in the added rule list
Rule ID
-
Unique ID value for the rule, cannot be changed
Rule Index
-
Unique Index value for the rule, can be used for log analysis
Source Address
Required
Source address registered in the rule
Can be changed; up to 128 addresses can be entered at once in CIDR (IP/Subnet mask) format, separated by commas (,); a range can be entered with a hyphen (-)
Destination Address
Required
Destination address to add to the rule
Can be changed; up to 128 addresses can be entered at once in CIDR (IP/Subnet mask) format, separated by commas (,); a range can be entered with a hyphen (-)
Type
Required
Set protocol type according to the selected destination address item
Action
Required
Can change traffic Allow/Deny distinction due to rule
Allow: Allow traffic if matches rule
Deny: Block traffic if matches rule
Direction
Required
Can change access direction of traffic based on Firewall registered in the rule
Inbound: External → Internal
Outbound: Internal → External
Rule Position
Required
Can change rule position
Active Status
Required
Whether the rule is active, rule does not operate if in inactive state
Status
-
Status value for the rule
Description
Optional
Additional description written by the user
Table. Firewall rule modification detailed items
Check the modified rule and click the Complete button.
Deleting Rules
Caution
Rules can be deleted only when the Firewall is in Active status and the rule is in Active or Error status.
To delete a Firewall rule, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click Firewall List. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource for which you want to modify rules. You will be redirected to the Firewall Details page.
On the Firewall Details page, click the Rules tab. You will be redirected to the Rules tab page.
On the Rules tab, click the Modify Rule button. You will be redirected to the Modify Rule page.
On the Modify Rule page, select the rule to delete and click the Delete button.
When the deletion request is completed, it is displayed as Delete Scheduled in the changes item.
You can cancel rule deletion by clicking Cancel Delete.
On the Modify Rule page, click the Complete button.
Managing Firewall Resources
You can modify the Firewall size and change the log use settings.
Modifying Firewall Size
To modify the Firewall size, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Firewall Service Home page.
On the Service Home page, click Firewall List. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource to modify. You will be redirected to the Firewall Details page.
On the Firewall Details page, click the Edit icon for Size. You will be redirected to the Modify Size popup window.
In the Modify Size popup window, select the size to modify and click the Confirm button.
Note
Firewall size defaults to Extra Small (rule quota: 5); to add more Firewall rules, change the Firewall size.
Firewall fees are charged based on Firewall service size and traffic throughput.
Using Log Storage
Note
To store Firewall logs, you must first create a bucket in Object Storage to store logs, set the bucket as the log storage in Firewall Logging, and then set log storage in Firewall details to store Firewall logs in the Object Storage bucket.
Log storage settings can be checked in Firewall Logging. For more information, refer to Firewall Logging.
If log storage is set, Object Storage fees for log storage are charged.
To use Firewall log storage, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Service Home page.
On the Service Home page, click the Firewall menu. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource (Firewall) to use log storage. You will be redirected to the Firewall Details page.
On the Firewall Details page, click the Edit icon for Log Storage Status. You will be redirected to the Modify Log Storage Status popup window.
In the Modify Log Storage Status popup window, select Use for log storage and click the Confirm button.
Caution
If log storage is not set in Firewall Logging, you cannot set log storage Use.
Setting Log Storage to Not Use
To set Firewall log storage to not use, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Service Home page.
On the Service Home page, click the Firewall menu. You will be redirected to the Firewall List page.
On the Firewall List page, click the resource (Firewall) to set log storage to not use. You will be redirected to the Firewall Details page.
Click the Modify Log Storage Status button. You will be redirected to the Modify Log Storage Status popup window.
In the Modify Log Storage Status popup window, deselect Use for log storage and click the Confirm button.
Check the message in the Notification popup window and click the Confirm button.
Caution
If you disable log storage, log storage for the service is stopped and tracking management through log analysis is not possible in case of a security incident.
Setting Firewall to Not Use
The Firewall service cannot be deleted separately. When you delete the prerequisite service, the connected Firewall is also deleted.
If you want to maintain the prerequisite service and not use the Firewall, you can change the Firewall to not use status on the Firewall list page.
Caution
If you change the Firewall to not use status, all previously registered rules will be deleted.
If the connected Firewall has rules when deleting the prerequisite service, you cannot delete it. Delete the Firewall rules before deleting the prerequisite service.
To set Firewall to not use, follow these steps:
Click the All Services > Networking > Firewall menu. You will be redirected to the Service Home page.
On the Service Home page, click the Firewall menu. You will be redirected to the Firewall List page.
On the Firewall List page, click More > Not Use for the resource to switch to not use.
When the use status change is complete, check whether the resource’s use status has changed to not use on the Firewall List page.
5.6.2.1 - Firewall Logging
To store Firewall logs, you must first create a bucket in Object Storage to store the logs, set the bucket as the log storage in Firewall Logging, and then set log storage on the Firewall Details page to store Firewall logs in the Object Storage bucket.
To store Firewall logs, set up according to the following order:
To store firewall logs, you can create a bucket in Object Storage or use an already created bucket. To create a bucket, refer to Creating Object Storage.
To set the Firewall log storage status to Use, you must first set the log storage in Firewall Logging.
Note
To set Firewall Logging log storage, you need an Object Storage bucket for log storage. First, create a bucket in the Object Storage service.
For more information, refer to Creating Object Storage.
To set up Firewall Logging log storage, follow these steps:
Click the All Services > Management > Network Logging > Firewall Logging menu. You will be redirected to the Firewall Logging List page.
On the Firewall Logging List page, click the Log Storage Settings button at the top. You will be redirected to the Log Storage Settings popup window.
In the Log Storage Settings popup window, select the Log Storage Bucket. When you select a bucket, the Log Storage Path is displayed.
In the Log Storage Settings popup window, check the Log Storage Bucket and Log Storage Path, and then click the Confirm button.
Check the message in the Notification popup window and click the Confirm button.
Notice
After setting up the Firewall Logging log storage, you must set log storage status to Use on the Firewall Details page to start log storage.
For more information, refer to Using Firewall Log Storage.
Viewing Firewall Logging List
When you set the Firewall Logging log storage bucket, you can view the Firewall Logging list.
To view the Firewall Logging list, follow these steps:
Click the All Services > Management > Network Logging > Firewall Logging menu. You will be redirected to the Firewall Logging List page.
On the Firewall Logging List page, check the resources in use and log storage targets.
Item
Description
Resource ID
Firewall ID
Storage Target
Firewall name
Storage Registration Date
Firewall log storage registration date
Table. Firewall Logging list items
Checking Firewall Logging Detailed Content
Refer to the following to check the detailed content of stored logs.
Example
Description
2024-10-11, 11:23:43
Date and time when the log occurred
deny
Action (deny / accept)
0
Firewall Rule ID (Policy ID) where the log occurred
17
IP Protocol ID
1: ICMP
6: TCP
17: UDP
4.1.1.100
Source IP
45499
Source Port
192.168.10.10
Destination IP
53
Destination Port
Table. Log detailed information items
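As an illustration, a record with the fields listed above could be parsed as follows. This is a minimal sketch: the comma-separated layout and the parser itself are assumptions for demonstration, since the guide does not specify the exact on-disk log format.

```python
from dataclasses import dataclass

# IP protocol IDs listed in the table above
PROTOCOLS = {1: "ICMP", 6: "TCP", 17: "UDP"}

@dataclass
class FirewallLogRecord:
    timestamp: str
    action: str          # "deny" or "accept"
    rule_id: int         # Firewall Rule ID (Policy ID) where the log occurred
    protocol: str        # resolved from the IP protocol ID
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

def parse_record(line: str) -> FirewallLogRecord:
    # Hypothetical comma-separated layout, in the field order of the table
    ts, action, rule_id, proto, src_ip, src_port, dst_ip, dst_port = \
        [f.strip() for f in line.split(",")]
    return FirewallLogRecord(
        timestamp=ts,
        action=action,
        rule_id=int(rule_id),
        protocol=PROTOCOLS.get(int(proto), proto),
        src_ip=src_ip,
        src_port=int(src_port),
        dst_ip=dst_ip,
        dst_port=int(dst_port),
    )

record = parse_record("2024-10-11 11:23:43,deny,0,17,4.1.1.100,45499,192.168.10.10,53")
print(record.action, record.protocol, record.dst_port)  # deny UDP 53
```

The sample values match the example row above: a deny on rule 0 for UDP (protocol 17) traffic from 4.1.1.100:45499 to 192.168.10.10:53.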
Setting Firewall Logging Log Storage to Not Use
You can set the log storage in Firewall Logging to not use.
To set Firewall Logging log storage to not use, follow these steps:
Click the All Services > Management > Network Logging > Firewall Logging menu. You will be redirected to the Firewall Logging List page.
On the Firewall Logging List page, click the Log Storage Settings button at the top. You will be redirected to the Log Storage Settings popup window.
In the Log Storage Settings popup window, select Not Use for the Log Storage Bucket and click the Confirm button.
Note
Log storage settings can be changed only when there are no log storage targets.
To change the log storage bucket, first set log storage to not use, and then set it to use again with the new bucket.
5.6.3 - API Reference
API Reference
5.6.4 - CLI Reference
CLI Reference
5.6.5 - Release Note
Firewall
2026.03.19
FEATURE Firewall rule management structure change
For user convenience, pages for Firewall rule input and modification/deletion have been added. You can perform desired operations by moving to a separate page when managing Firewall rules.
2025.10.23
FEATURE Firewall rule input method added
In KR WEST and KR EAST regions, you can enter the destination address in FQDN (Fully Qualified Domain Name) format.
2025.07.01
FEATURE Firewall rule input method added
A function to enter the IP protocol has been added.
2025.02.27
FEATURE Load Balancer Firewall feature added
You can use Firewall in the Load Balancer service.
Samsung Cloud Platform common feature changes
Common CX changes for Account, IAM, Service Home, tags, etc. have been reflected.
2024.12.23
FEATURE Firewall log storage feature added
A function to store Firewall logs has been added.
You can decide whether to store Firewall logs and store logs in Object Storage.
2024.10.01
NEW Firewall service official version release
You can control inbound and outbound traffic occurring in VPC through the Firewall service.
2024.07.02
NEW Beta version release
The Firewall service has been released.
5.7 - Direct Connect
5.7.1 - Overview
Service Overview
Samsung Cloud Platform provides the Direct Connect service to support secure and fast connections between customer networks and the Samsung Cloud Platform environment.
Through Direct Connect, you can assign the private network ranges of your existing systems to Samsung Cloud Platform resources and use them. You can place backend systems such as application servers in private network ranges without Internet access, and enhance security by applying Samsung Cloud Platform network services such as Security Groups.
Through Direct Connect, customers’ existing systems can migrate to Samsung Cloud Platform naturally, even if IPs are hardcoded on devices or the architecture depends on specific IPs.
Figure. Direct Connect Configuration Diagram
Creating a Direct Connect Connection
A connection is made by selecting a single VPC to connect to the customer network. The Direct Connect Firewall provides access blocking, and Route configuration provides a secure connection path.
Constraints
Item
Basic Quota
Description
Direct Connect
5
Can be created per VPC within a service zone (1:1) on an Account basis.
Table. Direct Connect Constraints
Prerequisite Services
This is a list of services that must be pre-configured before creating this service. Please prepare in advance by referring to the guides provided for each service.
Direct Connect sends metrics to ServiceWatch. The metrics provided as basic monitoring are data collected at 5-minute intervals.
Note
For how to view metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are basic metrics for the namespace Direct Connect.
In the table below, metrics displayed in bold are key metrics selected from among the basic metrics provided by Direct Connect.
Key metrics are used to configure service dashboards that are automatically built per service in ServiceWatch.
For each metric, the guide indicates which statistics are meaningful to use when querying that metric, and the statistic displayed in bold among the meaningful statistics is the key statistic. In the service dashboard, key metrics are queried using their key statistics.
Metric Name
Detailed Description
Unit
Meaningful Statistics
DirectConnect Network In Bytes
Cumulative traffic volume from Direct Connect to VPC
Bytes
Sum
Average
Maximum
Minimum
DirectConnect Network Out Bytes
Cumulative traffic volume from VPC to Direct Connect
Bytes
Sum
Average
Maximum
Minimum
DirectConnect Network In Bytes_Delta
Traffic volume from Direct Connect to VPC accumulated over the last 5-minute interval
Bytes
Sum
Average
Maximum
Minimum
DirectConnect Network Out Bytes_Delta
Traffic volume from VPC to Direct Connect accumulated over the last 5-minute interval
Bytes
Sum
Average
Maximum
Minimum
Table. Direct Connect Basic Metrics
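Since the Bytes_Delta metrics report the bytes accumulated per 5-minute collection interval, average throughput over an interval can be derived directly. A small sketch with illustrative numbers (not real ServiceWatch output):

```python
# Convert 5-minute Bytes_Delta samples into average throughput (Mbps).
INTERVAL_SECONDS = 5 * 60  # basic metrics are collected at 5-minute intervals

def delta_to_mbps(delta_bytes: int, interval_s: int = INTERVAL_SECONDS) -> float:
    """Average throughput over the interval, in megabits per second."""
    return delta_bytes * 8 / interval_s / 1_000_000

# e.g. three illustrative DirectConnect Network In Bytes_Delta samples
samples = [150_000_000, 300_000_000, 225_000_000]
for mbps in (delta_to_mbps(b) for b in samples):
    print(f"{mbps:.1f} Mbps")  # 4.0 Mbps, 8.0 Mbps, 6.0 Mbps
```

The same conversion applies to the Out direction; the cumulative (non-delta) metrics would need to be differenced first.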
5.7.2 - How-to guides
Users can enter the required information for Direct Connect service through the Samsung Cloud Platform Console and select detailed options to create the service.
Creating a Direct Connect
You can create and use Direct Connect services in the Samsung Cloud Platform Console.
To create a Direct Connect, follow these steps:
Click the All Services > Networking > Direct Connect menu. You will be redirected to the Direct Connect Service Home page.
On the Service Home page, click the Create Direct Connect button. You will be redirected to the Create Direct Connect page.
Enter or select the required information in the Service Information section.
Item
Required
Description
Direct Connect Name
Required
A name that makes it easy to identify Direct Connect
Enter 3 to 20 characters using uppercase/lowercase letters and numbers
Use Uplink
Required
Bandwidth of the communication port for communicating with remote sites
Select port capacity 1G or port capacity 10G
VPC
Required
Select the VPC for communicating with remote sites
Table. Direct Connect Service Information Input Items
Enter or select the required information in the Additional Information section.
Item
Required
Description
Tags
Optional
Add tags
Add up to 50 tags per resource
Click the Add Tag button and then enter or select Key and Value values
Table. Direct Connect Additional Information Input Items
On the Summary panel, review the detailed information of creation and estimated charges, then click the Complete button.
After creation is complete, verify the created resource on the Direct Connect List page.
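The naming rule from the Service Information table (3 to 20 characters, upper/lowercase letters and numbers) can be checked client-side before submitting the form. A minimal sketch; the helper name is ours and not part of any official SDK:

```python
import re

# Mirrors the naming rule stated in the Service Information table:
# 3 to 20 characters, uppercase/lowercase letters and numbers only.
NAME_RULE = re.compile(r"[A-Za-z0-9]{3,20}")

def is_valid_direct_connect_name(name: str) -> bool:
    return NAME_RULE.fullmatch(name) is not None

print(is_valid_direct_connect_name("dcProd01"))    # True
print(is_valid_direct_connect_name("dc"))          # False: shorter than 3
print(is_valid_direct_connect_name("dc_prod-01"))  # False: special characters
```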
Viewing Direct Connect Detailed Information
For Direct Connect services, you can view and modify the entire resource list and detailed information in the Resource Management menu. The Direct Connect Detail page consists of Detailed Information, Rules, Tags, and Task History tabs.
To view Direct Connect detailed information, follow these steps:
Click the All Services > Networking > Direct Connect menu. You will be redirected to the Direct Connect Service Home page.
On the Service Home page, click the Direct Connect menu. You will be redirected to the Direct Connect List page.
On the Direct Connect List page, click the resource for which you want to view detailed information. You will be redirected to the Direct Connect Detail page.
The Direct Connect Detail page displays status information and additional feature information, and consists of Detailed Information, Rules, Tags, and Task History tabs.
Item
Description
Status
Current status
Active: Operating normally
Deleting: Deletion in progress
Creating: Creation in progress
Failed: Failed
Error: Cannot confirm current status
If this occurs continuously, contact the registered administrator
Service Termination
Button to terminate the service
If there are no connected services, terminate Direct Connect
If you terminate the service, operating services may stop immediately, so fully consider the impact of service interruption before proceeding with termination
Table. Direct Connect Status Information and Additional Features
Detailed Information
On the Direct Connect List page, you can view the detailed information of the selected resource and modify the information if necessary.
Item
Description
Service
Service name
Resource Type
Direct Connect resource type
SRN
Unique resource ID in Samsung Cloud Platform
For Direct Connect, this means the Direct Connect SRN
Resource Name
Direct Connect resource name
Resource ID
Unique resource ID in Direct Connect
Creator
User who created Direct Connect
Creation Date/Time
Date/Time when Direct Connect was created
Modifier
User who modified Direct Connect information
Modification Date/Time
Date/Time when Direct Connect information was modified
Direct Connect Name
Direct Connect VPC resource name
Use Uplink
Port bandwidth allocated for line connection
Line Request/Termination SR Shortcut
Line connection service for the Samsung Cloud Platform local section connected to the customer line
Click the Line Request/Termination SR Shortcut button to move to the Service Request tab in the Support Center popup
For external line connection to the customer site, create the Network Line Service through your SDS sales representative
Connected VPC Name
VPC name connected to Direct Connect
Firewall Name
Firewall name
Use Firewall
Whether to use Firewall
Table. Direct Connect Detailed Information Tab Items
Rules
You can register or modify communication rules between remote sites and VPC.
Item
Description
Destination IP
Destination IP information
Destination
Routing direction
Creation Date/Time
Creation date/time information
Status
Connection status
Active: Operating normally
Deleting: Deletion in progress
Creating: Creation in progress
Error: Cannot confirm current status
If this occurs continuously, contact the registered administrator
Delete
You can delete the rule.
Table. Direct Connect Rules Tab Items
Tags
On the Direct Connect List page, you can view the tag information of the selected resource, and add, modify, or delete tags.
Item
Description
Tag List
Tag list
View tag Key, Value information
Add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Direct Connect Tags Tab Items
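The 50-tags-per-resource limit noted above can be enforced client-side before a change is submitted. A minimal illustrative sketch; the helper below is ours, not part of any Samsung Cloud Platform SDK:

```python
MAX_TAGS_PER_RESOURCE = 50  # limit stated in the table above

def can_add_tags(existing: dict, new: dict) -> bool:
    """True if the merged Key/Value tag set stays within the per-resource limit."""
    return len({**existing, **new}) <= MAX_TAGS_PER_RESOURCE

tags = {f"team-{i}": "network" for i in range(49)}   # 49 existing tags
print(can_add_tags(tags, {"env": "prod"}))                # True: 50 total
print(can_add_tags(tags, {"env": "prod", "tier": "dc"}))  # False: 51 total
```

Merging as dictionaries also reflects that adding a Value under an existing Key replaces it rather than consuming another slot.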
Task History
You can view the task history of the resource selected on the Direct Connect List page.
Item
Description
Task History List
Resource change history
View task date/time, resource name, task details, task results, and task performer information
Table. Direct Connect Task History Tab Detailed Information Items
Adding Direct Connect Rules
To add a Direct Connect rule, follow these steps:
Click the All Services > Networking > Direct Connect menu. You will be redirected to the Direct Connect Service Home page.
On the Service Home page, click the Direct Connect menu. You will be redirected to the Direct Connect List page.
On the Direct Connect List page, click the resource to which you want to add a rule. You will be redirected to the Direct Connect Detail page for that resource.
On the Direct Connect Detail page, click the Rules tab.
On the Rules tab, click the Add Rule button. You will be redirected to the Add Rule popup.
In the Add Rule popup, enter the required information and click the Confirm button.
Item
Description
Destination IP
Enter the destination IP range
Example: 192.168.25.0/24
Destination
Select between VPC and remote site according to the routing direction.
Table. Direct Connect Rule Addition Input Items
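The Destination IP range can be validated client-side before submitting the Add Rule popup. A minimal sketch using the Python standard library; the helper name is ours:

```python
import ipaddress

# Validate a Destination IP range (e.g. 192.168.25.0/24) before submitting
# the Add Rule form; the stdlib 'ipaddress' module does the parsing.
def is_valid_destination(cidr: str) -> bool:
    try:
        ipaddress.ip_network(cidr, strict=True)  # strict=True rejects host bits
        return True
    except ValueError:
        return False

print(is_valid_destination("192.168.25.0/24"))  # True
print(is_valid_destination("192.168.25.5/24"))  # False: host bits set
print(is_valid_destination("300.1.1.0/24"))     # False: invalid address
```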
Terminating a Direct Connect
You can terminate unused Direct Connect services to reduce operating costs. However, terminating the service can immediately stop operating services, so fully consider the impact of service interruption before proceeding with termination.
Caution
Direct Connect cannot be terminated if there are connected resources.
To terminate a Direct Connect, follow these steps:
Click the All Services > Networking > Direct Connect menu. You will be redirected to the Direct Connect Service Home page.
On the Service Home page, click the Direct Connect menu. You will be redirected to the Direct Connect List page.
On the Direct Connect List page, click the resource to terminate. You will be redirected to the Direct Connect Detail page for that resource.
On the Direct Connect Detail page, click the Service Termination button.
After termination is complete, verify that the resource has been terminated on the Direct Connect List page.
5.7.3 - API Reference
API Reference
5.7.4 - CLI Reference
CLI Reference
5.7.5 - Release Note
Direct Connect
2025.02.27
NEW Common Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.10.01
NEW Direct Connect Service Official Release
Launching Direct Connect service, which quickly and securely connects customer networks and Samsung Cloud Platform networks.
5.8 - Cloud LAN-Campus
5.8.1 - Overview
Service Overview
Cloud LAN-Campus is a service that provides a wired and wireless integrated network environment based on user authentication within the customer’s business site. Based on SDN (Software-Defined Network), it provides a variety of wired and wireless network access environments that can be used freely anywhere within the business site through simple user/device authentication.
It minimizes existing physical network equipment and enables easy connection of multiple geographically distributed locations using the cloud. This allows companies to reduce the complexity of infrastructure construction and operation and to build a flexible, expandable network environment.
Additionally, optimized network design and configuration for the customer environment, professional operating systems, and enhanced security management enable more stable and efficient operation of business site networks.
Provided Features
Cloud LAN-Campus provides the following functions.
Campus Network: Provides a wireless network usage environment and integrated authentication service for the workplace
NW Access: Infrastructure for business network usage (AP, NW Switch, etc.) and SDN system services
NW Authentication: User/device authentication-based network separation, integrated management of authentication/security policies across multiple offices, support for various authentication methods (AD, certificates, etc.), and policy operation/management through the service portal (user/administrator)
Figure. Cloud LAN-Campus Configuration Diagram
Features
Rapid Business Network Work Environment: Provides a wired and wireless integrated network usage environment through a user authentication-based SDN (Software Defined Network) solution. IP Mobility and separate networks according to terminal purpose are applied immediately, and users can easily change the network through the service portal.
Network Security Enhancement: Logical network separation and authentication-based wired/wireless integrated security management system enable consistent security policy operation for users/devices. Even in environments where users access multiple headquarters and business sites, the same network access environment and security policy application are possible, and authentication information is safely managed under the Samsung Cloud Platform security system.
Multi-vendor support and total network service: Multi-vendor network integrated authentication makes the configuration of SDN equipment at the business site more flexible. In addition, instead of customers designing, building, operating, and managing their own network infrastructure, a total service system is provided, improving operational and management efficiency. A dedicated team provides optimized network design for each business site along with fast and stable network services.
Service-based integrated billing system: The service billing system can reduce initial investment costs and enable network infrastructure expansion and capacity increase when needed. It provides usage-based authentication services, and no separate operating personnel or maintenance contract is required.
Various authentication methods and extensibility: We provide optimal authentication solutions with various authentication methods. Additionally, functional extension and differential policy management according to the security level of each business site are possible through linkage with customer systems (groupware, security systems, etc.).
Component
Cloud LAN-Campus provides services across the entire network within the workplace. The components are as follows, and related service creation is possible.
Item
Description
Network Authentication
Network access authentication and network separation, security policy management
Headquarters/branch integrated policy application, roaming support
Providing various authentication methods (certificate, AD, account/MAC, etc.) and extensibility
Service Portal
Wired/Wireless Integrated Authentication Service Portal Provided
User Portal: User Policy Creation/Change/Management
Admin Portal: Authentication Policy Management and Monitoring
Wired/Wireless Network
SDN-based Wired/Wireless Network Design and Integrated Configuration/Operation/Management
WIPS
Wireless Intrusion Prevention System configuration/operation/management
Network Solution
DHCP, NMS etc. network solution configuration/operation/management
Table. Cloud LAN-Campus Components
Constraints
When using the Cloud LAN-Campus service, there are the following restrictions.
Network connectivity between the customer’s business site and the Samsung Cloud Platform region is required to use CLAN authentication.
Cloud Last Mile, a dedicated line, VPN, etc. can be used for the connection.
When using network equipment from a specific vendor, prior consultation is required.
The start and end points of the service created for equipment configuration at the business site are determined after consultation with the AM representative.
When using AD integration as the authentication method, the authentication-related policy rules must be deployed normally on the user’s PC in advance.
AD functionality issues must be managed by the customer’s AD administrator.
The certificate-based network separation method is supported only for specified OS types (currently limited to Windows) and is charged in addition to the authentication fee.
Regional Provision Status
Cloud LAN-Campus is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Cloud LAN-Campus Region-based Provisioning Status
Prerequisite Services
Cloud LAN-Campus has no prerequisite services.
5.8.2 - How-to guides
Users can enter the required information for the Cloud LAN-Campus service through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Request to Create Campus Network Service
You can create and use the Campus Network service in the Samsung Cloud Platform Console.
To request the creation of a Campus Network service, follow these steps:
Click the All Services > Networking > Cloud LAN-Campus menu. You will be redirected to the Service Home page.
On the Service Home page, click the Cloud LAN-Campus service request button. You will be redirected to the Support Center > Service Request page.
On the Service Request page, enter or select the corresponding information in the required input area.
Select Campus Network service application under Work Classification.
Input Item
Detailed Description
Title
Title of the service being requested
Region
Location selection of Samsung Cloud Platform
Automatically entered as the region of the account
Service
Select the service category and service for the corresponding service (automatic selection)
Service category: Networking
Service: Cloud LAN-Campus
Work Classification
Select the type of service you want to perform
Campus Network service application: Select if you are newly requesting the service
Content
Detailed information required to create Campus Network service
SCP account name: Enter the account name of Samsung Cloud Platform
SCP project name: Enter the project name of Samsung Cloud Platform
Company/Corporation name: Enter the company/corporation name
Customer information (Name/E-mail/Phone number): Enter user information
Desired service start date: Enter the service start date
NW network separation: Enter Yes / No
Wired Network usage: Enter Yes / No
Wireless Network usage: Enter Yes / No
Wireless WIPS usage: Enter Yes / No
Network solution usage (NMS, WAN accelerator, DHCP, etc.): Enter Yes / No
Expected contract period: Enter 4 years / 5 years / 6 years
Operation service: Enter Yes / No
Attachments
If you have additional files you want to share, upload them
Attached files can be up to 5 files, each 5MB or less
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Detailed contents of Campus Network service request items
Check the required information entered on the Service Request page and click the Request button.
Once the application is complete, check the contents of the application on the Support Center > Service Request List page.
The requested task will take around 5 to 7 business days.
Note
Once the service request is completed, the customer manager will contact you separately for business consulting and architecture optimization design.
Please contact the Samsung SDS person in charge of AM for progress and service-related inquiries.
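The attachment limits above (up to 5 files, 5MB each, restricted extensions) can be checked before uploading. A minimal illustrative sketch, assuming the “ppts” extension listed in the table is a typo for “pptx”; the helper is ours, not part of any official tooling:

```python
from pathlib import Path

# Allowed extensions from the table above ("pptx" assumed for "ppts")
ALLOWED_EXTENSIONS = {"doc", "docx", "xls", "xlsx", "ppt", "pptx",
                      "hwp", "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif"}
MAX_FILES = 5
MAX_SIZE_BYTES = 5 * 1024 * 1024  # 5MB per file

def check_attachments(files: list) -> list:
    """Return a list of problems for (filename, size_bytes) pairs."""
    problems = []
    if len(files) > MAX_FILES:
        problems.append(f"too many files: {len(files)} > {MAX_FILES}")
    for name, size in files:
        ext = Path(name).suffix.lstrip(".").lower()
        if ext not in ALLOWED_EXTENSIONS:
            problems.append(f"{name}: extension '{ext}' not allowed")
        if size > MAX_SIZE_BYTES:
            problems.append(f"{name}: exceeds 5MB")
    return problems

print(check_attachments([("diagram.png", 1_000_000)]))  # []
print(check_attachments([("archive.zip", 1_000_000)]))  # extension not allowed
```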
Request to Cancel Campus Network Service
You can cancel the Campus Network service on the Samsung Cloud Platform Console.
To request the cancellation of the Campus Network service, follow these steps:
Click the All Services > Networking > Cloud LAN-Campus menu. You will be redirected to the Service Home page.
On the Service Home page, click the Cloud LAN-Campus service request button. You will be redirected to the Support Center > Service Request page.
On the Service Request page, enter or select the corresponding information in the required input area.
Select Campus Network service cancellation under Work Classification.
Input Item
Detailed Description
Title
Title of the service being requested
Region
Location selection of Samsung Cloud Platform
Automatically entered as the region of the account
Service
Select the service category and service for the corresponding service (auto-select)
Service category: Networking
Service: Cloud LAN-Campus
Work Classification
Select the type of service you want to perform
Campus Network service cancellation: Select if you want to request service cancellation
Content
Detailed information required for Campus Network service cancellation
SCP account name: Enter the account name of Samsung Cloud Platform
SCP project name: Enter the project name of Samsung Cloud Platform
Customer information (name/company/department/E-mail/phone number): Enter user information
Service cancellation request date: Enter the service cancellation date
Content: Enter additional content
Attachments
If you have additional files you want to share, upload them
Attached files can be up to 5MB each, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Detailed contents of Campus Network service cancellation request items
Check the required information entered on the Service Request page and click the Request button.
Once the application is complete, check the contents of the application on the Support Center > Service Request List page.
The requested task will take around 5 to 7 business days.
Note
When the service request is completed, the customer manager will contact you separately for service cancellation.
Please contact the Samsung SDS person in charge of AM for progress and service-related inquiries.
5.8.3 - Release Note
Cloud LAN Campus
2025.07.01
NEW Cloud LAN Campus Enterprise Service Official Version Release
We have launched the Cloud LAN Campus service, which provides authentication-based wired and wireless integrated network services within the customer’s business site.
5.9 - Cloud LAN-Data Center
5.9.1 - Overview
Service Overview
Cloud LAN-Data Center is an SDDC (Software-Defined Data Center)-based shared network infrastructure in the data center of a Samsung Cloud Platform region or the customer’s on-premises data center, allowing connections among various networks such as servers, WAN Edge, and CX (Cloud eXchange).
Features
Cloud LAN-Data Center provides the following functions.
Rapid Network Access: When building a network environment in a Samsung Cloud Platform region or the customer’s on-premises data center, the SDDC-based infrastructure enables fast and secure configuration of an enterprise-customized data center network.
Cost Optimization: Logical configuration of virtualized infrastructure and optimization design by experts make it possible to build a customer-dedicated network with the same effect as building expensive physical network infrastructure on your own. The virtual network environment reduces the cost of building a physical environment, such as network equipment, data center facilities, and cabling.
Operational Continuity: Provides a customized operating environment for each company by maintaining existing settings such as network security policies, IP schemes, and network protocols required in the various on-premises environments of enterprises.
Flexible network environment: In the SDDC-based infrastructure, the separation of edge nodes (external network connection), service nodes (built-in equipment connection), and computing nodes (server connection) accommodates not only physical security devices and network solution devices that require physical installation in the data center, but also virtualized devices.
Figure. Cloud LAN-Data Center Configuration Diagram
Provided Features
Cloud LAN-Data Center provides the following functions.
Various network connection virtualization: Provides virtualization resources for flexible N/W configuration and allows customers to configure a dedicated network through various types of vDevices.
Network/Security Solution Integration: It provides virtualization solutions in the form of NFV, and can configure a network by connecting various types of appliances.
Component
Cloud LAN-Data Center is a service that provides connections between various networks through virtual network configuration within the data center. The components are as follows, and related service creation is possible.
Item
Description
Cloud LAN Network
A logically separated virtual network space configured within the Cloud LAN-Data Center infrastructure
vRouter
Virtual resource for external line (L2, L3) connection
vSwitch
Virtual resource for customer-dedicated H/W connection and VLAN provision
vFirewall
Virtual firewall for protecting infrastructure created within Cloud LAN-Data Center
vL4/L7
Virtual L4/L7 switch for load balancing of internal Cloud LAN-Data Center traffic
vCore
Virtual resource for Full Mesh routing connection
vCable
Virtual cable for routing connections between virtual resources
Interface
Physical interface where H/W devices and lines can be connected to a vDevice
Table. Cloud LAN-Data Center Components
Constraints
When using the Cloud LAN-Data Center service, there are the following restrictions.
For the available creation capacity by region, please contact us through a 1:1 inquiry.
Regional Provision Status
Cloud LAN-Data Center is available in the following environment.
Region
Availability
Korea West (kr-west1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Cloud LAN-Data Center Region-based Provision Status
Prerequisite Services
Cloud LAN-Data Center has no prerequisite services.
5.9.2 - How-to guides
Users can enter the required information for the Cloud LAN Network service through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Create Cloud LAN Network
You can create and use the Cloud LAN Network service in the Samsung Cloud Platform Console.
Note
Up to 5 Cloud LAN Networks can be requested.
To request the creation of a Cloud LAN Network service, follow these steps:
Click the All Services > Networking > Cloud LAN-Data Center menu. You will be redirected to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Create Cloud LAN Network button. You will be redirected to the Create Cloud LAN Network page.
On the Create Cloud LAN Network page, enter the corresponding information in the service information input area and click the Complete button.
Enter or select the required information in the service information input area.
Item
Required
Description
Cloud LAN Network Name
Required
Enter the name of the Cloud LAN Network to be created
Enter 3 to 21 characters using English letters, numbers, and special characters
Cloud LAN Network Location
Required
Select the Cloud LAN Network location
Description
Optional
Enter additional information or a description for the Cloud LAN Network service
Table. Cloud LAN Network Service Information Input Items
In the Additional Information input area, enter or select the necessary information.
Item
Required
Description
Tags
Optional
Add tags
Add up to 50 tags per resource
Click the Add Tag button and then enter or select Key and Value values
Table. Cloud LAN Network Additional Information Input Items
Once the creation is complete, check the created resource on the Cloud LAN Network list page.
Viewing Cloud LAN Network Detailed Information
For the Cloud LAN Network service, you can view and modify the list of connected resources and detailed information. The Cloud LAN Network Detail page consists of Detailed Information, Connected Resources, Tags, and Task History tabs.
To view Cloud LAN Network detailed information, follow these steps:
Click the All Services > Networking > Cloud LAN-Data Center menu. You will be redirected to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Cloud LAN Network menu. You will be redirected to the Cloud LAN Network List page.
On the Cloud LAN Network List page, click the resource for which you want to view detailed information. You will be redirected to the Cloud LAN Network Detail page.
The Cloud LAN Network Detail page displays status information and additional feature information, and consists of Detailed Information, Connected Resources, Tags, and Task History tabs.
Classification
Detailed Description
Service Status
Service Status Display
Creating: Being created
Active: In operation
Deleting: Being deleted
Failed: Creation/deletion failed
Service Cancellation
Service Cancellation Button
Table. Cloud LAN Network status information and additional features
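The service states above (Creating, Active, Deleting, Failed) also apply to the other resources in this guide. As a sketch, a caller automating against them might poll until the resource leaves the transitional state; `get_status` below is a stand-in for whatever call returns the current state, not a real Samsung Cloud Platform client function:

```python
import time

# Terminal states from the status table above.
TERMINAL_STATES = {"Active", "Failed"}

def wait_until_ready(get_status, interval_sec: float = 5.0, max_polls: int = 60) -> str:
    """Poll until the resource leaves Creating, then return the final state."""
    for _ in range(max_polls):
        state = get_status()  # hypothetical status lookup
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval_sec)
    raise TimeoutError("resource did not reach a terminal state")
```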
Detailed Information
You can check the detailed information of the resource selected on the Cloud LAN Network list page and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Name of the resource
Resource ID
Service’s unique resource ID
Creator
The user who created the service
Creation Time
The time when the service was created
Modifier
User who modified the service
Modified Time
Time the service was modified
Service Information
Detailed information of the created service
Click the Edit icon of the description to modify
Table. Cloud LAN Network Detailed Information Tab Items
Connected Resources
You can check the vDevice information assigned to the selected resource on the Cloud LAN Network list page.
Classification
Detailed Description
vDevice list
Displays vDevice information and status assigned to the created service
Table. Cloud LAN Network Connected Resources Tab Detailed Information Items
Tag
On the Cloud LAN Network list page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag List
Tag List
Check Key, Value information of the tag
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of created Key and Value
Table. Cloud LAN Network Tags Tab Items
Work History
You can check the operation history of the selected resource on the Cloud LAN Network list page.
Classification
Detailed Description
Work history list
Resource change history
Check work date, resource ID, resource name, work details, event topic, work result, and worker information
Table. Cloud LAN Network Work History Tab Detailed Information Items
Cloud LAN Network Cancellation
To cancel the Cloud LAN Network, follow the procedure below.
Caution
If other resources are connected to Cloud LAN Network, the service cannot be terminated. Please delete all connected resources and then terminate the service.
If the Cloud LAN Network service status is Creating or Deleting, the service cannot be cancelled.
If you cancel the Cloud LAN Network service, it is deleted immediately and cannot be recovered. Because cancellation may immediately stop a service in operation, proceed with the cancellation only after fully considering the impact of stopping the service.
Click All Services > Networking > Cloud LAN-Data Center menu. It moves to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Cloud LAN Network menu. You are moved to the Cloud LAN Network list page.
On the Cloud LAN Network list page, click the resource whose detailed information you want to check. You are moved to the Cloud LAN Network details page.
On the Cloud LAN Network details page, click the Cancel Service button.
When the cancellation is complete, check if the resource has been deleted from the Cloud LAN Network list.
5.9.2.1 - vDevice
The user can enter the necessary information for the vDevice service through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Create vDevice
You can create and use the vDevice service on the Samsung Cloud Platform Console.
Note
To apply for a vDevice, a Cloud LAN Network must be created. Please check the Cloud LAN Network information before applying for a vDevice.
The detailed settings of a created vDevice are managed on a separate operation platform (NiO). For inquiries about NiO, please contact us through Support Center > Inquiry.
To request the creation of a vDevice service, follow the procedure below.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Create vDevice button. You are moved to the Create vDevice page.
On the Create vDevice page, enter the corresponding information in the service information input area.
Please enter or select the required information in the service information input area.
Classification
Required
Detailed Description
Cloud LAN Network name
Required
Select the Cloud LAN Network to assign to vDevice
vDevice Type
Required
Select the type of vDevice to create
vRouter: virtual resource for external line (L2, L3) connection
vSwitch: virtual resource for customer dedicated hardware connection and VLAN provision
vFirewall: virtual firewall for protecting infrastructure created in Data Center
vCore: resource connection service for full-mesh communication between virtual resources
vL4/L7: virtual L4/L7 switch for traffic load balancing in Cloud LAN-Data Center
vDevice Type > vRouter
Required
Enter the name to be created when selecting vRouter
Enter 3-21 characters using English, numbers, and special characters
vDevice Type > vSwitch
Required
Enter the name to be created when selecting vSwitch
Enter 3-21 characters using English, numbers, and special characters
vDevice Type > vFirewall
Required
When selecting vFirewall, select the creation information
vFirewall: enter the name to be created
Vendor: select vendor
Type: select the rate plan of the selected vendor
Redundancy: select whether to use redundancy; if used, the fee for 2 firewalls is charged, and if not used, a single configuration is applied
Log storage option: select whether to use the log storage option; logs are stored on 1 server, and even if redundancy is selected, only the fee for 1 server is charged
Contract period: select the contract period
vDevice Type > vCore
Required
Enter the name to be created when selecting vCore
Enter 3-21 characters using English, numbers, and special characters
vDevice Type > vL4/L7
Required
When selecting vL4/L7, select creation information
vL4/L7 name: Enter the name to be created
Unit: Enter the number of units to be used within 1-20
Redundancy: Select whether to use redundancy
Contract period: Select the contract period
Table. vDevice Service Information Input Items
Note
When applying for a vFirewall, the Firewall Interfaces are automatically created. The detailed firewall information by vendor is as follows.
Vendor
Firewall type
Number of Interfaces
Created vFirewall Interface
SECUI
6 Gbps, 5,000 Rules
3
int / ext / dmz.1
SECUI
12 Gbps, 15,000 Rules
3
int / ext / dmz.1
SECUI
30 Gbps, 30,000 Rules
4
int / ext / dmz.1 / dmz.2
SECUI
60 Gbps, 100,000 Rules
5
int / ext / dmz.1 / dmz.2 / dmz.3
Fortinet
1 Gbps, 1,000 Rules
3
int / ext / dmz.1
Table. Detailed Firewall Information by Vendor
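The vendor table above can be restated as a simple lookup, for example when estimating how many interfaces a requested vFirewall will come with. The data is taken directly from the table; the function name is illustrative:

```python
# "Detailed Firewall Information by Vendor", restated:
# (vendor, firewall type) -> auto-created vFirewall interfaces.
VFIREWALL_INTERFACES = {
    ("SECUI", "6 Gbps, 5,000 Rules"): ["int", "ext", "dmz.1"],
    ("SECUI", "12 Gbps, 15,000 Rules"): ["int", "ext", "dmz.1"],
    ("SECUI", "30 Gbps, 30,000 Rules"): ["int", "ext", "dmz.1", "dmz.2"],
    ("SECUI", "60 Gbps, 100,000 Rules"): ["int", "ext", "dmz.1", "dmz.2", "dmz.3"],
    ("Fortinet", "1 Gbps, 1,000 Rules"): ["int", "ext", "dmz.1"],
}

def interface_count(vendor: str, fw_type: str) -> int:
    """Return how many interfaces are auto-created for this firewall type."""
    return len(VFIREWALL_INTERFACES[(vendor, fw_type)])
```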
In the Additional Information input area, enter or select the necessary information.
Classification
Required
Detailed Description
Tag
Optional
Add Tag
Up to 50 can be added per resource
Click the Add Tag button and enter or select Key, Value
Table. Input items for adding vDevice information
In the Summary panel, review the detailed information and estimated charges, then click the Complete button.
After creation is complete, check the created resource on the vDevice list page.
vDevice detailed information check
The vDevice service allows you to check and modify the list of connected resources and detailed information. The vDevice details page consists of detailed information, connected resources, tags, and operation history tabs.
To check the vDevice details, follow the next procedure.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the vDevice menu. You are moved to the vDevice list page.
On the vDevice list page, click the resource whose detailed information you want to check. You are moved to the vDevice details page.
The vDevice details page displays status information and additional feature information, and consists of Details, Connected Resources, Tags, and Work History tabs.
Classification
Detailed Description
Service Status
Service Status Display
Creating: Being created
Active: In operation
Deleting: Being deleted
Failed: Creation/deletion failed
vDevice deletion
service deletion button
Table. vDevice Status Information and Additional Functions
Detailed Information
On the vDevice list page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Name of the resource
Resource ID
Service’s unique resource ID
Creator
The user who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service
Modified Date
Date the service was modified
Service Information
Detailed service information created
Items displayed vary depending on the creation type
Table. vDevice detailed information tab items
Connected Resources
You can check the resources assigned to the selected resource on the vDevice list page.
Classification
Detailed Description
Connected Resource List
Detailed information and status of resources assigned to the created service
Items displayed vary depending on the creation type
Table. vDevice connected resource tab detailed information items
Tag
On the vDevice list page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Table. vDevice tag tab items
Work History
You can check the operation history of the resource selected on the vDevice list page.
Classification
Detailed Description
Work history list
Resource change history
Check work time, resource ID, resource name, work details, event topic, work result, and worker information
Table. vDevice task history tab detailed information items
vDevice cancellation
To cancel the vDevice, follow the procedure below.
Caution
If other resources are connected to the vDevice, the service cannot be terminated. Please delete all connected resources and then terminate the service.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the vDevice menu. You are moved to the vDevice list page.
On the vDevice list page, click the resource whose detailed information you want to check. You are moved to the vDevice details page.
On the vDevice details page, click the Delete vDevice button.
When the cancellation is complete, check if the resource has been deleted from the vDevice list.
5.9.2.2 - Interface
The user can enter the required information of the Interface service through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Create Interface
You can create and use the Interface service in the Samsung Cloud Platform Console.
Note
To apply for the Interface, Cloud LAN Network and vDevice must be created. Please check the Cloud LAN Network and vDevice information before applying for the Interface.
The Interface is a function that assigns a physical port to a previously created vDevice. An Interface can only be requested for a vRouter or vSwitch.
When a vFirewall is created, its interfaces are automatically created according to the number specified for the selected specification.
To request the creation of an Interface service, follow the next procedure.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Create Interface button. You are moved to the Create Interface page.
On the Create Interface page, enter the corresponding information in the service information input area.
Please enter or select the necessary information in the service information input area.
Classification
Required
Detailed Description
Cloud LAN Network name
Required
Select the Cloud LAN Network to assign the Interface to
vDevice Type
Required
Select the type of vDevice to use
vRouter: virtual resource for connecting external lines (L2, L3)
vSwitch: virtual resource for customer dedicated hardware connection and VLAN provision
vDevice Type details
Required
Select detailed information of vDevice type
vDevice name: Select vDevice
Interface Type: Select the type of Interface to use
Interface name: Enter the Interface name
Up to 5 Interface items can be added; click the (+) button to add an item and the (x) button to delete an item
Interface redundancy: set whether to use Interface redundancy; if redundancy is selected, the fee for 2 ports is charged
Contract period: Select the desired contract period
Table. Interface service information input items
In the Additional Information input area, enter or select the required information.
Classification
Required
Detailed Description
Tag
Optional
Add Tag
Up to 50 can be added per resource
Click the Add Tag button and enter or select Key, Value
Table. Input items for additional interface information
In the Summary panel, check the detailed information generated and the expected billing amount, and click the Complete button.
Once the creation is complete, check the created resource on the Interface list page.
Interface detailed information check
The Interface service allows you to check and modify the list of connected resources and detailed information. The Interface details page consists of details, tags, and work history tabs.
To check the interface details, follow the next procedure.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Interface menu. You are moved to the Interface list page.
On the Interface list page, click the resource whose detailed information you want to check. You are moved to the Interface details page.
The Interface details page displays status information and additional feature information, and consists of Details, Tags, and Work History tabs.
Classification
Detailed Description
Service Status
Service Status Display
Creating: Being created
Active: In operation
Deleting: Being deleted
Failed: Creation/deletion failed
Interface deletion
Service deletion button
Table. Interface Status Information and Additional Functions
Detailed Information
On the Interface List page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Name of the resource
Resource ID
Service’s unique resource ID
Creator
The user who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service
Modified Date
Date the service was modified
vDevice Type
vDevice Type information
Virtual Device Name
Virtual Device Name
Interface Type
Interface Type Information
Port Redundancy
Whether port redundancy is used
Contract Period
Selected Contract Period
Table. Interface detailed information tab items
Tag
On the Interface List page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Check Key, Value information of the tag
Up to 50 tags can be added per resource
Search and select from existing Key and Value lists when entering tags
Table. Interface tag tab items
Work History
You can check the work history of the resource selected on the Interface list page.
Classification
Detailed Description
Work history list
Resource change history
Check work date, resource ID, resource name, work details, event topic, work result, and worker information
Table. Interface work history tab detailed information items
Interface cancellation
To cancel the interface, follow the next procedure.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Interface menu. You are moved to the Interface list page.
On the Interface list page, click the resource whose detailed information you want to check. You are moved to the Interface details page.
On the Interface details page, click the Delete Interface button.
When the cancellation is complete, check if the resource has been deleted from the Interface list.
5.9.2.3 - vCable
The user can enter the necessary information for the vCable service through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Create vCable
You can create and use the vCable service in the Samsung Cloud Platform Console.
Note
To apply for vCable, Cloud LAN Network and vDevice must be created. Please check the Cloud LAN Network and vDevice information before applying for vCable.
Only vCable configuration between vDevices created in the same Cloud LAN Network is possible.
To request the creation of a vCable service, follow these steps.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the Create vCable button. You are moved to the Create vCable page.
On the Create vCable page, enter the corresponding information in the service information input area.
Please enter or select the necessary information in the service information input area.
Classification
Required
Detailed Description
Cloud LAN Network name
Required
Select the Cloud LAN Network to assign the vCable to
vCable Type
Required
Select the type of vCable to be created
Static: provides a 1:1 connection between vDevices; when setting vDevice A and vDevice B, different virtual resources must be selected
vCore: provides multi-peering between vDevices by connecting multiple vDevices to one another
vCable Type > details
Required
Enter detailed information according to vCable Type
vCable name: Enter the name of the vCable to be created
vDevice A: Select vDevice A
vDevice B: Select vDevice B
Select vDevice A and B in sequence; a vFirewall Interface selected in the A list is not displayed in the B list
If the vCable Type is Static, a vCore cannot be selected for vDevice A or vDevice B
If the vCable Type is vCore, a vCore can only be selected for vDevice A
A vDevice can only be connected to 1 vCable
A vFirewall can be connected to a vCable through its vFirewall Interface
Table. vCable Service Information Input Items
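The endpoint rules in the table above can be expressed as a small check over the two vDevice kinds. This is a hypothetical sketch: it only covers the type rules (not the "different resources" and "1 vCable per vDevice" rules, which depend on specific resources), and it assumes a vCore-type cable has its vCore at endpoint A:

```python
# vCable endpoint type rules from the table above:
# - Static: neither endpoint may be a vCore
# - vCore: a vCore can only be selected for vDevice A
def vcable_pair_allowed(cable_type: str, kind_a: str, kind_b: str) -> bool:
    """Check whether two vDevice kinds may be linked by this vCable type."""
    if cable_type == "Static":
        return "vCore" not in (kind_a, kind_b)
    if cable_type == "vCore":
        # Assumption: a vCore-type cable requires the vCore at endpoint A.
        return kind_a == "vCore" and kind_b != "vCore"
    return False
```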
In the Additional Information input area, enter or select the required information.
Classification
Necessity
Detailed Description
Tag
Optional
Add Tag
Up to 50 can be added per resource
Click the Add Tag button and enter or select Key, Value
Table. Additional information input items for vCable
In the Summary panel, review the detailed information and estimated charges, and click the Complete button.
Once the creation is complete, check the created resource on the vCable list page.
Check vCable detailed information
The vCable service allows you to check and modify the list of connected resources and detailed information. The vCable details page consists of detailed information, tags, and work history tabs.
To check the vCable details, follow the next procedure.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the vCable menu. You are moved to the vCable list page.
On the vCable list page, click the resource whose detailed information you want to check. You are moved to the vCable details page.
The vCable details page displays status information and additional feature information, and consists of Details, Tags, and Work History tabs.
Classification
Detailed Description
Service Status
Service Status Display
Creating: Being created
Active: In operation
Deleting: Being deleted
Failed: Creation/deletion failed
vCable delete
service delete button
Table. vCable Status Information and Additional Functions
Detailed Information
On the vCable List page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Name of the resource
Resource ID
Service’s unique resource ID
Creator
The user who created the service
Creation Time
The time when the service was created
Modifier
User who modified the service
Modified Time
Time the service was modified
vDevice Type
vDevice Type Category
vDevice A name
vDevice A name
vDevice B name
vDevice B name
Table. vCable detailed information tab items
Tag
On the vCable List page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Check Key, Value information of the tag
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Table. vCable tag tab items
Work History
You can check the operation history of the selected resource on the vCable List page.
Classification
Detailed Description
Work history list
Resource change history
Check work time, resource ID, resource name, work details, event topic, work result, and worker information
Table. vCable job history tab detailed information items
Canceling vCable
To cancel vCable, follow the procedure below.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Cloud LAN-Data Center Service Home page, click the vCable menu. You are moved to the vCable list page.
On the vCable list page, click the resource whose detailed information you want to check. You are moved to the vCable details page.
On the vCable details page, click the Delete vCable button.
Once the cancellation is complete, please check if the resource has been deleted from the vCable list.
5.9.2.4 - vEdge
Users can apply for the vEdge service by entering the necessary information for using the service through the Samsung Cloud Platform Console.
Create vEdge
You can apply for and use the vEdge service on the Samsung Cloud Platform Console.
To request the creation of a vEdge service, follow these steps.
Click the All Services > Networking > Cloud LAN-Data Center menu. You are moved to the Cloud LAN-Data Center Service Home page.
On the Service Home page, click the vEdge service request button. You are moved to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the corresponding information in the required input area.
Select vEdge creation in the work classification.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: vEdge service creation request
Region
Select the location of Samsung Cloud Platform
Automatically entered as the region corresponding to the Account
Service
Select the service category and service; if you used the vEdge service request button, they are entered automatically
Service category: Networking
Service: vEdge
Work classification
Select the type you want to request
vEdge creation: Select if you are newly requesting a service
Content
Guidance on the service application process and notes
Attachments
If you have additional files to share, upload them
Attached files can be up to 5 MB each, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, and tif files can be attached
Table. vEdge Service Creation Request Items
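The attachment limits above can be pre-checked before submitting a request. This is a minimal sketch under two assumptions: "5 MB" is read as 5 MiB, and the slide-format extension is taken to be "pptx":

```python
# Attachment rules from the request form above: at most 5 files,
# each at most 5 MB, restricted to the listed extensions.
ALLOWED_EXTENSIONS = {"doc", "docx", "xls", "xlsx", "ppt", "pptx", "hwp",
                      "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif"}
MAX_FILES = 5
MAX_SIZE_BYTES = 5 * 1024 * 1024  # assumption: 5 MB means 5 MiB

def attachments_ok(files: list[tuple[str, int]]) -> bool:
    """files: (filename, size_in_bytes) pairs; True if all limits hold."""
    if len(files) > MAX_FILES:
        return False
    for name, size in files:
        ext = name.rsplit(".", 1)[-1].lower()
        if ext not in ALLOWED_EXTENSIONS or size > MAX_SIZE_BYTES:
            return False
    return True
```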
Check the required information entered on the Service Request page and click the Request button.
When the application is complete, check the contents of the application on the Support Center > Service Request List page.
Check vEdge Application History
You can check the application and cancellation history of the vEdge service in the Samsung Cloud Platform Console.
To check the vEdge service application history, follow the procedure below.
Click the All Services > Management > Support Center menu. You are moved to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request menu. You are moved to the Service Request List page.
On the Service Request List page, click the title of the service request you submitted. You are moved to the Service Request Details page.
On the Service Request Details page, check the application status and information.
Notice
When a service request is received, the sales/operations manager checks the service application details and proceeds with the vEdge service based on the entered information.
vEdge cancellation
To request the cancellation of the vEdge service, follow the procedure below.
Click the All Services > Management > Support Center menu. You are moved to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request menu. You are moved to the Service Request List page.
On the Service Request List page, click the Service Request button. You are moved to the Service Request page.
On the Service Request page, enter or select the corresponding information in the required input fields.
Select vEdge cancellation in the work classification.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: vEdge service cancellation request
Region
Select the location of Samsung Cloud Platform
Automatically entered as the region corresponding to the account
Service
Select service category and service
Service Category: Networking
Service: vEdge
Work classification
Select the type you want to request
vEdge cancellation: Select if you want to cancel the service
Content
Guide to service application process and notes
Attachment
If you have additional files you want to share, upload them
Up to 5 files can be attached, each within 5 MB
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, and tif files can be attached
Table. vEdge Service Cancellation Request Items
Check the required information entered on the Service Request page and click the Request button.
Once the application is complete, check the contents of the application on the Support Center > Service Request List page.
Service cancellation takes 5-7 business days, including the date the cancellation request is submitted.
5.9.3 - Release Note
Cloud LAN-Data Center
2025.07.01
NEW Cloud LAN-Data Center common feature changes
Samsung Cloud Platform common feature changes
Common CX changes have been applied to Account, IAM, Service Home, tags, and more.
2025.02.27
NEW Cloud LAN-Data Center Service Official Launch
We have launched the Cloud LAN-Data Center service, which provides connections between various networks through virtual network configuration within the data center.
5.10 - Cloud WAN
5.10.1 - Overview
Service Overview
Cloud WAN is a service that provides network connections between Samsung Cloud Platform global regions and customer bases. This product provides services based on network traffic usage, and provides differentiated operation management services according to the selected service level.
The Cloud WAN service consists of the Cloud WAN Network, which is the customer's virtual backbone; Segments, which provide logical network separation by purpose; and Attachments, which connect Samsung Cloud Platform Compute resources, or a dedicated line from the customer's business site, to a Segment.
For example, to configure a backbone network connection from a system in the Samsung Cloud Platform region to a customer’s overseas base, the following settings are required in the user console. First, create a customer virtual backbone Cloud WAN Network. Next, select the access location, service level, and contract period to create a segment that suits the purpose. Then, by connecting the attachment to the segment in the relevant region or customer base, the backbone network between the relevant SCP region and the customer base is connected, allowing communication between them.
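The hierarchy described above (one customer backbone holding purpose-specific Segments, each connecting Attachments) can be sketched as a minimal data model. Class and field names are illustrative only, not part of any Samsung Cloud Platform API:

```python
from dataclasses import dataclass, field

# Illustrative model of the Cloud WAN hierarchy described above.
@dataclass
class Attachment:
    name: str
    edge_type: str  # "Transit Gateway" (SCP Compute) or "Site Connect" (customer line)

@dataclass
class Segment:
    name: str
    access_location: str
    service_level: str  # e.g. transmission path option "Gold" or "Silver"
    attachments: list[Attachment] = field(default_factory=list)

@dataclass
class CloudWanNetwork:
    name: str
    segments: list[Segment] = field(default_factory=list)
```

Following the example flow in the text: create the network, create a segment with an access location and service level, then attach the region and customer site endpoints to that segment.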
Service Composition Diagram
Figure. Cloud WAN Configuration Diagram
Provided Features
Cloud WAN provides the following features.
Rapid Backbone Network Configuration: Samsung Cloud Platform customers can select their desired hub location and create a virtual global backbone network to quickly and securely configure cloud networks between Samsung Cloud Platform regions and customer hubs, and between customer hubs.
Various Network Edge Connection Types Provided: various Edge types can connect to Cloud WAN, so Samsung Cloud Platform Compute resources can be connected as a Transit Gateway, and local lines at the customer's business site can be connected via Site Connect.
Multi-path transmission selection function for cost optimization: Unlike existing circuit bandwidth-based backbone network line services, customers are only charged for the actual usage in the desired section, and traffic characteristic-based transmission path options (Gold/Silver) are provided to optimize line costs.
Service Level-Based Operation Management: Customers can receive differentiated network operation management services according to the selected service level, including the form of Cloud WAN backbone transmission network utilization, provided functions, monitoring, fault management, and technical support levels.
Component
Cloud WAN service provides a global customer virtual backbone network. The components are as follows, and users can create resources directly through the user Console.
Classification
Detailed Description
Cloud WAN Network
Customer-specific virtual backbone network
Segment
A virtual routing domain in the Cloud WAN Network, logically separated by use case
Select the access location, service level, contract period, and multi-path option
Access Location
Location of physical points to form a Segment
Attachment
Connects Samsung Cloud Platform resources or the customer's dedicated-line Edge resources
Transit Gateway
Samsung Cloud Platform Compute resources connection type for Edge connection
Site Connect
Edge connection type for connecting customer business site dedicated line resources (CE equipment)
CE equipment
Network equipment that terminates the dedicated line at the customer's business site (Customer Edge)
Segment Sharing
Provides routing exchange settings to enable mutual communication between resources connected to different segments
Table. Cloud WAN Configuration Components
Constraints
The Cloud WAN service has the following restrictions.
You can create one Cloud WAN Network per Account.
You can create up to 5 segments in a single Cloud WAN Network.
You can create up to 50 attachments in one segment.
You can create up to 10 Segment Sharings in one Segment.
Connection between Segment and Attachment is only allowed within the same project through request/approval.
However, Segment Sharing can also be connected between different projects through requests and approvals.
Preceding Service
This is a list of services that must be pre-configured before creating this service. Please refer to the guide provided for each service and prepare in advance.
A service that safely and quickly connects Samsung Cloud Platform’s Compute resources to Cloud WAN Segment
Table. Cloud WAN Preceding Services
5.10.1.1 - Monitoring Metrics
Cloud WAN Monitoring Metrics
The following table shows the monitoring metrics of Cloud WAN that can be viewed through Cloud Monitoring.
For detailed Cloud Monitoring usage, please refer to the Cloud Monitoring guide.
Performance Item
Detailed Description
Unit
Instance Status
Attachment connection status
status
Network In Bytes
In bytes (inbound traffic usage per cycle)
bytes
Network In Error Packets
In Error Packet count (number of received error packets per cycle)
Cnt
Network In Packets [Broadcast]
In Broadcast Packet count (number of broadcast packets per cycle)
Cnt
Network In Packets [Dropped]
In Dropped Packet count (number of Dropped packets per cycle)
Cnt
Network In Packets [Multicast]
In Multicast Packet count (number of Multicast packets per cycle)
Cnt
Network In Packets [Unicast]
In Unicast Packet count (number of Unicast packets per cycle)
Cnt
Network Out Bytes
Out bytes (outbound traffic usage per cycle)
bytes
Network Out Error Packets
Out Error Packet count (number of transmission error packets per cycle)
Cnt
Network Out Packets [Broadcast]
Out Broadcast Packet count (number of broadcast packets per cycle)
Cnt
Network Out Packets [Dropped]
Out Dropped Packet count (number of dropped packets per cycle)
Cnt
Network Out Packets [Multicast]
Out Multicast Packet count (number of Multicast packets per cycle)
Cnt
Network Out Packets [Unicast]
Out Unicast Packet count (number of Unicast packets per cycle)
Cnt
Fig. Cloud WAN Basic Monitoring Metrics
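Since these metrics are per-cycle counters, derived ratios such as an inbound error rate can be computed from a single sampling cycle. A minimal sketch, assuming the counter values have already been retrieved from Cloud Monitoring (the retrieval itself is not shown and the function name is an assumption):

```python
def inbound_error_rate(in_errors: int, unicast: int,
                       broadcast: int, multicast: int) -> float:
    """Fraction of packets received in one cycle that were errors.

    Inputs correspond to the per-cycle packet counters in the table above:
    Network In Error Packets and the Unicast/Broadcast/Multicast counts.
    """
    total = unicast + broadcast + multicast
    return in_errors / total if total else 0.0
```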
5.10.2 - How-to guides
The user can create a service by entering the essential information of Cloud WAN and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Cloud WAN Network
You can create a Cloud WAN Network through the Samsung Cloud Platform Console.
Note
Only one Cloud WAN Network can be applied per account.
To create a Cloud WAN Network, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Create Cloud WAN Network button on the Service Home page. The Create Cloud WAN Network page will be displayed.
Enter the necessary information and select detailed options on the Create Cloud WAN Network page.
Enter the necessary information in the Service Information section.
Category
Required
Detailed Description
Cloud WAN Network Name
Required
Enter the name of the Cloud WAN Network to be created
Enter 3-20 characters using English letters and numbers
Table. Cloud WAN Network Service Information Input Items
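The naming rule above (3-20 characters, English letters and numbers) maps to a simple pattern check. A sketch of hypothetical client-side validation before the form is submitted; the function name is illustrative:

```python
import re

# 3-20 characters, English letters and digits only, per the rule above.
_NAME_RE = re.compile(r"[A-Za-z0-9]{3,20}")

def is_valid_network_name(name: str) -> bool:
    """True if the name satisfies the Cloud WAN Network naming rule."""
    return _NAME_RE.fullmatch(name) is not None
```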
Enter additional information in the Additional Information section.
Category
Required
Detailed Description
Description
Optional
Enter a description of the resource
Tag
Optional
Add a tag
Up to 50 tags can be added per resource
Table. Cloud WAN Network Additional Information Input Items
Confirm the service information and estimated billing amount in the summary panel, and click the Complete button.
After creation is complete, confirm the created resource on the Cloud WAN Network List page.
Checking Cloud WAN Network Details
The Cloud WAN Network service can be checked and modified on the Cloud WAN Network menu. The Cloud WAN Network Details page consists of Details, Connected Resources, Tags, and Operation History tabs.
To check the details of the Cloud WAN Network, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Cloud WAN Network menu on the Service Home page. The Cloud WAN Network List page will be displayed.
On the Cloud WAN Network List page, click the resource to check the details. The Cloud WAN Network Details page will be displayed.
The Cloud WAN Network Details page displays status information and additional feature information, and consists of Details, Connected Resources, Tags, and Operation History tabs.
Category
Detailed Description
Status
Current service status
Creating: Service application in progress
Active: Service operating normally
Deleting: Service cancellation request in progress
Failed: Service failure status
Error: Service status cannot be checked
Service Cancellation
Service cancellation button
The Cloud WAN Network can be cancelled if there are no connected services
Table. Cloud WAN Network Status Information and Additional Features
Details
The Details tab displays detailed information about the selected Cloud WAN Network.
Category
Detailed Description
Service
Service category
Resource Type
Service name (Cloud WAN Network)
SRN
Unique ID of the resource in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique ID of the resource in the service
Creator
User who requested service creation
Creation Time
Service creation time
Modifier
User who requested service modification
Modification Time
Service modification time
Cloud WAN Network Name
Cloud WAN Network name
Number of Segments
Number of segments used
Description
Description of the service
Table. Cloud WAN Network Details Tab Items
Connected Resources
The Connected Resources tab displays the Segment connection status information.
Category
Detailed Description
Segment Name
Segment resource name
Segment ID
Segment ID Information
Status
Service Resource Status Information
Table. Cloud WAN Network Connected Resource Tab Items
Tags
In the Tags tab, you can view, add, modify, or delete tag information for the selected resource.
Classification
Detailed Description
Tag List
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, you can search and select from the existing Key and Value list
Table. Cloud WAN Network Tag Tab Items
Operation History
In the Operation History tab, you can view the operation history of the selected resource.
Classification
Detailed Description
Operation History List
Resource change history
Work time, resource type, resource name, work details, work result, worker name, and path information can be checked
To perform a detailed search, click the Detailed Search button
Table. Cloud WAN Network Operation History Tab Detailed Information Items
Canceling Cloud WAN Network
Canceling an unused Cloud WAN Network can help reduce operating costs.
Note
If there are resources connected to the Cloud WAN Network, the service cannot be canceled. Delete the connected resources first and then cancel the service.
If the service status of the Cloud WAN Network is Creating or Deleting, the service cannot be canceled.
To cancel a Cloud WAN Network, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
On the Service Home page, click Cloud WAN Network. The Cloud WAN Network List page will be displayed.
On the Cloud WAN Network List page, click the resource to be canceled. The Cloud WAN Network Details page will be displayed.
On the Cloud WAN Network Details page, click the Cancel Service button.
After cancellation is complete, check the resource cancellation status on the Cloud WAN Network List.
Creating a Segment
You can create a Segment on the Samsung Cloud Platform Console and use it.
Note
A maximum of 5 Segments can be applied per Cloud WAN Network.
To create a Segment, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
On the Service Home page, click the Create Segment button in the drop-down menu. The Create Segment page will be displayed.
On the Create Segment page, enter the necessary information for service creation and select detailed options.
In the Service Information Input section, enter or select the necessary information.
Classification
Mandatory
Detailed Description
Cloud WAN Network Name
Mandatory
Select a Cloud WAN Network
Click +New Creation to create a Cloud WAN Network and select it
Segment Name
Optional
Enter a Segment name and click the Duplicate Check button
Access Location
Mandatory
Select a location to connect the Segment
Only one Access Location can be selected
In the Detailed Information > Connected Resources tab, one Access Location can be added
Access Locations can be added up to the number of Cloud WAN service provision points
Service Type
Mandatory
Select the Segment service type
Select the usage area (Global)
※ Global is for connection between domestic and overseas regions
Select the service level (PremiumPlusG)
※ Professional TAM designation and advanced technical support services are provided
Select the contract period (None, 3 years, 5 years, 7 years)
※ The contract discount rate is automatically applied according to the contract period
Multi-Path
Optional
Select multiple transmission paths (available from December 2025)
Default path: Gold (3-path structure, for critical workloads)
Optional path: Silver (2-path structure, for general workloads)
Table. Segment Service Information Input Items
In the Additional Information Input section, enter or select the necessary information.
Classification
Mandatory
Detailed Description
Description
Optional
Enter a description of the Segment
Tag
Optional
Add a tag
Up to 50 tags can be added per resource
Table. Segment Additional Information Input Items
In the summary panel, check the service information and estimated billing amount, and click the Complete button.
After creation is complete, check the created resource on the Segment List page.
Note
After creating a Segment, set the following in the Detailed Information > Connected Resources tab:
Connect an Attachment created in the same Account to the Segment.
To connect between different Accounts, set Segment Sharing.
Checking Segment Details
A Segment can be checked in the Segment menu, where you can view the entire resource list and detailed information and modify them. The Segment Details page consists of Detailed Information, Connected Resources, Multi-Path, Tags, and Operation History tabs.
To check the detailed information of a Segment, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
On the Service Home page, click Segment. The Segment List page will be displayed.
On the Segment List page, click the resource to check the detailed information. The Segment Details page will be displayed.
The Segment Details page displays status information and additional feature information, and consists of Detailed Information, Connected Resources, Multi-Path, Tags, and Operation History tabs.
Category
Detailed Description
Status
Current service status
Creating: Service creation in progress
Active: Service operating normally
Deleting: Service deletion request in progress
Failed: Service creation failed status
Error: Unknown error occurred in the service
Service Deletion
Service deletion button
If there are no connected services, the Segment can be deleted
Table. Segment Status Information and Additional Function Items
Detailed Information
The Detailed Information tab allows you to view detailed information about the selected Segment.
Category
Detailed Description
Service
Service category
Resource Type
Service name (Segment name)
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who requested service creation
Creation Time
Service creation time
Modifier
User who requested service modification
Modification Time
Service modification time
Segment Name
Segment name
Access Location Count
Number of Access Locations connected to the Segment
Region
Selected usage region (global, domestic)
Domestic is currently not provided
Service Level
Selected service level (PremiumPlusG, LiteG)
LiteG is currently not provided
Contract Period
Service usage contract period
Discount rate applied according to contract period
Attachment Count
Number of Attachments connected to the Segment
Multi-Path
Transmission path option (Gold/Silver) selected for the Segment
Available from December 2025
Description
Description of the Segment
Table. Segment Detailed Information Tab Items
Connected Resources
The Connected Resources tab allows you to view the connection status of Access Locations, Segment Sharing, and Attachments.
Category
Detailed Description
Access Location
View location information connected to the Segment
Click Add to add an Access Location item
Click Delete to delete the selected Access Location item
If an Attachment or multi-path rule is connected to the selected Access Location, it cannot be deleted; delete the connected resource first
If only one Access Location is set for the Segment, it cannot be deleted; at least one Access Location must be set
Segment Sharing
Request Segment Sharing connection between projects
Click Create Sharing to add a Segment sharing item
Sharing creation is only possible between the same service levels
Click Approve in the list to approve the connection request
Click Delete to delete the selected item
Attachment Connection
Request Attachment connection from the same project to the Segment
Click Approve in the list to approve the connection request
Samsung Cloud Platform’s Transit Gateway must be created in advance in the Transit Gateway menu and connected to the Attachment (*Transit Gateway Attachment will be available from December 2025)
Table. Segment Connected Resources Tab Items
Multi-Path
The Multi-Path tab allows you to add or delete multi-path rules.
Note
The multi-path feature will be available from December 2025.
Adding Multi-Path Rules
To add a multi-path rule, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Segment menu on the Service Home page. The Segment List page will be displayed.
Click the resource you want to view detailed information about on the Segment List page. The Segment Details page will be displayed.
Click the Multi-Path tab on the Segment Details page.
Click the Add Rule button on the Multi-Path tab page. A rule addition popup window will appear.
Enter detailed information in the popup window and click Confirm.
Category
Required
Detailed Description
Source Access Location
Required
Select the source location information for the multi-path rule
Source IP Range
Required
Enter the source IP range
Enter the IP address in CIDR format (e.g., 192.168.10.0/24)
Destination IP Range
Required
Enter the destination IP range
Enter the IP address in CIDR format (e.g., 192.168.10.0/24)
Both source and destination IP ranges cannot be set to 0.0.0.0/0
Protocol
Optional
Select the protocol
Port Direction
Optional
Select the port direction for the selected protocol
Port Number
Optional
Enter the port number if TCP or UDP protocol is selected
Allowed range: 1 - 65,535
Enter up to 5 port numbers separated by commas (e.g., 80, 443)
Description
Optional
Enter a description for the multi-path rule
Table. Multi-Path Rule Addition Input Items
Caution
If you enter the same information as an existing rule, you cannot register it as a new multi-path rule.
You can apply for up to 20 multi-path rules.
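The input constraints above (CIDR-formatted ranges, no 0.0.0.0/0 on either side, ports between 1 and 65,535, at most 5 comma-separated port numbers) can be collected into one pre-submission check. A minimal sketch; the function is an illustrative assumption, not a platform API:

```python
import ipaddress

def validate_multipath_rule(src_cidr: str, dst_cidr: str, ports: str = "") -> list:
    """Return a list of problems with the rule input; an empty list means valid."""
    problems = []
    for label, cidr in (("source", src_cidr), ("destination", dst_cidr)):
        try:
            net = ipaddress.ip_network(cidr)  # rejects non-CIDR input
        except ValueError:
            problems.append(f"{label} IP range must be in CIDR format")
            continue
        if str(net) == "0.0.0.0/0":
            problems.append(f"{label} IP range cannot be 0.0.0.0/0")
    if ports:
        numbers = [p.strip() for p in ports.split(",")]
        if len(numbers) > 5:
            problems.append("enter up to 5 port numbers")
        for p in numbers:
            if not p.isdigit() or not 1 <= int(p) <= 65535:
                problems.append(f"port {p!r} is outside the 1-65535 range")
    return problems
```

A rule such as `validate_multipath_rule("192.168.10.0/24", "10.0.0.0/8", "80, 443")` passes, while a destination of 0.0.0.0/0 or a port above 65,535 is reported as a problem.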
Viewing Multi-Path Rules
To view multi-path rules, follow these steps:
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Segment menu on the Service Home page. The Segment List page will be displayed.
Click the resource you want to view detailed information about on the Segment List page. The Segment Details page will be displayed.
Click the Multi-Path tab on the Segment Details page.
View the detailed information on the Multi-Path tab page.
Category
Detailed Description
Source Access Location
Source location information for the multi-path rule
Source IP Range
Source IP range
Destination IP Range
Destination IP range
Protocol
Protocol information
Port Direction
Port direction of the protocol
Port Number
Port number for TCP, UDP protocols
Description
Description of multi-path rules
Table. Detailed information items for multi-path rules
Note
You can search by setting search filters by clicking the Detailed Search button on the right side of the rule list.
You can quickly find multi-path rules by filtering on source Access Location, source IP, destination IP, or description.
Deleting Multi-Path Rules
To delete a multi-path rule, follow these steps.
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Segment menu on the Service Home page. The Segment List page will be displayed.
On the Segment List page, click the resource to check in detail. The Segment Details page will be displayed.
Click the Multi-Path tab on the Segment Detail page.
Click the Delete button on the Multi-Path tab page. The rule will be deleted.
Tags
In the Tags tab, you can check the tag information of the selected resource and add, change, or delete it.
Category
Detailed Description
Tag List
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, you can search and select from the existing Key and Value list
Table. Segment tag tab items
Operation History
In the Operation History tab, you can check the operation history of the selected resource.
Category
Detailed Description
Operation History List
Resource change history
Work time, resource type, resource name, work details, work result, worker name, and path information can be checked
To perform a detailed search, click the Detailed Search button
Table. Segment Operation History Tab Detailed Information Items
Deleting a Segment
Deleting an unused Segment can reduce operating costs.
Caution
If an Attachment, Segment Sharing, or multi-path rule is connected to the Segment, it cannot be deleted. Delete the connected resources first, then delete the Segment.
The service cannot be deleted if the service status of the Segment is Creating, Deleting, Inactive, or Failed.
To delete a Segment, follow these steps.
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Segment menu on the Service Home page. The Segment List page will be displayed.
On the Segment List page, click the resource to be deleted. The Segment Details page will be displayed.
On the Segment Details page, click the Service Delete button.
After deletion is complete, check that the resource has been deleted on the Segment List page.
Creating an Attachment
You can create an Attachment service using the Samsung Cloud Platform Console.
Caution
Up to 50 Attachments can be applied per Segment.
To create an Attachment, follow these steps.
Click All Services > Networking > Cloud WAN. The Cloud WAN Service Home page will be displayed.
Click the Create Attachment button on the Service Home page. The Create Attachment page will be displayed.
On the Create Attachment page, enter the information required to create the service and select detailed options.
In the Service Information Input section, enter or select the necessary information.
Category
Required
Detailed Description
Cloud WAN Network Name
Required
Select the Cloud WAN Network to apply for the Attachment
Click +New Creation to create and select a Cloud WAN Network
Segment Name
Optional
Select the Segment to connect the Attachment
Click +New Creation to create and select a Segment
Access Location
Required
Select the location connected to the Segment
Connection Type
Required
Set detailed connection information for Site Connect
Attachment Name: Enter the Attachment name and click Duplicate Check
ASN Information: Enter ASN information within the range of 1-65,534
Note that 65,001 cannot be used
Port Capacity: Select the port capacity
When connecting via Site Connect, additional work is performed on the customer’s CE router and the SR, so the final connection takes several days
Connection Type
Required
Select a connectable Transit Gateway (available from December 2025)
If you select an Access Location with Multi-AZ set, only Transit Gateway can be set as the connection type
Only Transit Gateway items within the same project are displayed
TGW items that already have a TGW Peering connection or an Attachment connection are not displayed in the list
If you select a TGW item, the Attachment name is automatically generated
Tag
Optional
Add a tag
Up to 50 tags can be added per resource
Table. Attachment additional information input items
In the summary panel, check the service information and the expected billing amount, and click the Complete button.
Once created, check the created resource on the Attachment List page.
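The Site Connect ASN rule above (a value within 1-65,534, with 65,001 not available) is easy to get wrong by hand. A minimal sketch of a hypothetical pre-submission check; the function name is illustrative:

```python
RESERVED_ASNS = {65001}  # not available for Site Connect per this guide

def is_valid_site_connect_asn(asn: int) -> bool:
    """True if the ASN is within 1-65,534 and not a reserved value."""
    return 1 <= asn <= 65534 and asn not in RESERVED_ASNS
```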
Checking Attachment Details
Attachments can be checked and modified in the Attachment menu, which includes a list of all resources and detailed information. The Attachment Details page consists of Details, Tags, and Work History tabs.
To check the details of an attachment, follow these steps:
Click the All Services > Networking > Cloud WAN menu. This will move to the Cloud WAN Service Home page.
On the Service Home page, click the Attachment menu. This will move to the Attachment List page.
On the Attachment List page, click the resource you want to check the details for. This will move to the Attachment Details page.
The Attachment Details page displays status information and additional feature information, and consists of Details, Tags, and Work History tabs.
Category
Detailed Description
Status
Current service status
Creating: Service creation in progress
Active: Service operating normally
Deleting: Service deletion requested
Failed: Service creation failed
Error: Unknown error occurred in the service
Service Deletion
Service deletion button
Table. Attachment Status Information and Additional Function Items
Details
The Details tab allows you to check the detailed information of the selected attachment.
Category
Detailed Description
Service
Service category
Resource Type
Service name (Attachment name)
SRN
Unique ID of the resource in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique ID of the resource in the service
Creator
User who requested service creation
Creation Time
Service creation time
Modifier
User who requested service modification
Modification Time
Service modification time
Attachment Name
Attachment name
Segment Name
Name of the segment connected to the attachment
Access Location
Access location to be connected to the attachment
Connection Type
Attachment connection type (Site Connect or Transit Gateway)
ASN Information
If Site Connect is selected, the AS Number entered by the user
Set to a value within the range of 1 to 65,534, and 65,001 is not available
Description
Description of the attachment
Table. Attachment Details Tab Items
Tags
In the Tags tab, you can check the tag information of the selected resource and add, change, or delete it.
Category
Detailed Description
Tag List
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, you can search and select from the list of existing keys and values
Table. Attachment Tag Tab Items
Work History
The Work History tab allows you to check the work history of the selected resource.
Category
Detailed Description
Work History List
Resource change history
Work time, resource type, resource name, work details, work result, worker name, and path information can be checked
To perform a detailed search, click the Detailed Search button
Table. Attachment Work History Tab Detailed Information Items
Deleting an Attachment
Deleting an unused attachment can help reduce operating costs.
To delete an attachment, follow these steps:
Click the All Services > Networking > Cloud WAN menu. This will move to the Cloud WAN Service Home page.
On the Service Home page, click the Attachment menu. This will move to the Attachment List page.
On the Attachment List page, click the resource you want to delete. This will move to the Attachment Details page.
On the Attachment Details page, click the Service Deletion button.
Once deleted, check if the resource has been deleted on the Attachment List page.
5.10.3 - Release Note
Cloud WAN
2025.07.01
[NEW] Cloud WAN Service Official Version Release
Samsung Cloud Platform launched the Cloud WAN service, providing network connections between global regions and customer sites.
5.11 - SASE
5.11.1 - Overview
Service Overview
SASE is a service that integrates network and security functions into the cloud to allow users to safely access internal assets and applications from anywhere. It transmits traffic through the optimal route and provides consistent security services inside and outside the company through SASE hubs located in Samsung Cloud Platform global regions.
Features
Global SASE Fabric: Utilizing the systematic Samsung SDS Global communication network infrastructure, SASE points and vPOPs prepared in all regions are linked to continuously expand service coverage whenever customer demands arise.
All in One Security: Covers a security layer that includes advanced SSL/TLS analysis, sophisticated application recognition/policy, and AI/ML-based real-time behavior analysis in one solution to optimize operational complexity and performance.
Network/Security Unification: Provides network and security in a single operating system based on a single architecture, allowing for rapid traffic processing.
End-to-End Fully Managed: Provides the infrastructure required for customer site connection as a package through a single contract, and offers comprehensive operation services from monitoring to failure notification and reporting.
Service Composition Diagram
Figure. SASE Configuration Diagram
SASE Hub: Composed of a Gateway and control plane in Samsung SDS Global POPs and CSP vPOPs to provide network connection and security functions
SASE Circuit: Physical circuit for connection between customer site and SASE hub, based on internet/MPLS/dedicated line SD-WAN or VPN configuration
SASE Edge: Customer-side endpoints connected to the SASE circuit, such as in-house routers, SD-WAN equipment, and VPN equipment, as well as off-site PCs and mobile devices
Provided Features
The SASE service provides the following functions.
WAN Edge network
Provides intra- and inter-region communication between various Edge devices (SD-WAN devices, routers, VPN devices, PCs, mobile devices, etc.)
Provides an optimal route per application using SD-WAN
Provides traffic control (QoS) and TCP acceleration features for high-quality networks
SSE(Secure Service Edge) Security
ZTNA: Provides least-privilege, secure, private connections to internal applications
SWG: Secure Web Gateway that protects internal users from insecure traffic such as the internet
CASB: Applies corporate security policies between users and cloud applications
FWaaS: Cloud-based firewall that provides traffic inspection and control for all services
Provides additional advanced security features such as RBI, DLP, and Sandbox
Unified Orchestrator and DEM(Digital Experience Monitoring)
Integrated network and security management for cloud, on-premises, and Edge devices
Monitoring of user experience (recognition and identification of causes of problems such as network performance degradation, app suspension, etc.)
Constraints
The limitations of the SASE service are as follows.
The service is not available in China and will be provided later.
Regional Provision Status
SASE can be provided in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Not provided
Table. SASE Regional Provision Status
Preceding Service
SASE has no preceding service.
5.11.2 - How-to guides
The user can enter required information for the SASE service through the Samsung Cloud Platform Console, select detailed options, and create the service.
SASE Create
You can create and use SASE services in the Samsung Cloud Platform Console.
To request SASE service creation, follow the steps below.
Click All Services > Networking > SASE. The SASE Service Home page will be displayed.
Click the Create SASE button on the Service Home page. The Create SASE page will be displayed.
On the Create SASE page, enter the information required to create the service.
Enter the required information in the Service Information Input section.
Category
Required or not
Detailed description
SASE name
Required
SASE name to be used by the user
Enter using English letters and numbers, 3-20 characters
Service Level
Required
Select SASE Service Level
Service Type
Required
Select SASE Service Type
Agent type: Enter the number of agents to use in increments of 10 within 1-10,000
Edge type: Choose whether to use inter-region connections, and select the site's upstream country and connection bandwidth
Click + to add up to 10 items; click X to delete an item
Contract period
Required
Select SASE contract period
Other requests
Option
Enter request when applying for SASE service
Table. SASE Service Information Input Items
Review the detailed information and estimated billing amount generated in the summary panel, and click the Create button.
When creation is complete, check the created resource on the Resource List page.
Checking SASE Details
In the SASE menu, you can view and edit the full resource list and detailed information of the SASE service. The SASE Details page consists of Detailed Information and Work History tabs.
To view detailed information about SASE, follow the steps below.
Click All Services > Networking > SASE. The SASE Service Home page will be displayed.
Click the SASE menu on the Service Home page. The SASE List page will be displayed.
On the SASE List page, click the resource to view detailed information. The SASE Details page will be displayed.
The SASE Details page displays status information and additional feature information, and consists of Detailed Information and Work History tabs.
Category
Detailed description
Status
Current Service Status
Request: Service request in progress
Creating: Service registration completed
Active: Service approved and successfully created
Deleting: Service termination request in progress
Previous State Change
Button to change the service back to its previous state
Available in the Creating, Active, and Deleting states
Service termination
Service termination button
Table. SASE status information and additional function items
Detailed Information
In the Detailed Information tab, you can view the detailed information of the selected SASE.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type (SASE)
SRN
Resource unique ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
Service creation request user
Creation Date/Time
Service Creation Date/Time
Editor
Service modification request user
Edit Date/Time
Service Edit Date/Time
Service Details
SASE Service Selection Items
Click the edit icon to modify each service detail item
Service Level
SASE Service Level
Click the edit icon to modify the service level
Contract period
SASE service contract period
Other requests
SASE service request
Click the edit icon to modify the request
Table. SASE detailed information tab items
Work History
Work History tab allows you to view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work date and time, resource type, resource name, work details, work result, and operator name can be checked
To perform detailed search, click the Detailed Search button
Table. SASE Work History Tab Detailed Information Items
Canceling SASE
Canceling unused SASE can reduce operating costs.
Caution
If a SASE Lastmile resource is connected, you cannot cancel the SASE service. Please delete the connected SASE Lastmile first.
To cancel SASE, follow the steps below.
Click All Services > Networking > SASE. The SASE Service Home page will be displayed.
Click the SASE menu on the Service Home page. The SASE List page will be displayed.
On the SASE List page, click the resource to be terminated. The SASE Details page will be displayed.
Click the Service Termination button on the SASE Details page.
Once the termination is complete, check the resource termination status in the SASE list.
5.11.2.1 - SASE Lastmile
The user can enter required information for the SASE Lastmile service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Creating SASE Lastmile
You can create and use the SASE Lastmile service from the Samsung Cloud Platform Console.
To request SASE Lastmile service creation, follow the steps below.
Click All Services > Networking > SASE. The SASE Service Home page will be displayed.
Click the Create SASE Lastmile button on the Service Home page. The Create SASE Lastmile page will be displayed.
On the Create SASE Lastmile page, enter the information required to create the service.
Enter the required information in the Service Information Input section.
Category
Required
Detailed description
SASE name
Required
Select SASE service to use
Click + New creation to create a SASE service and then select it
Site
Required
Select detailed items of SASE Site to use
Site name: Select site to use
Connection bandwidth, Upstream connection: Filled in automatically from the selected SASE information
Line: If used, select Line1 and Line2
Customer Edge: If used, select Customer Edge1 and Customer Edge2
Table. SASE Lastmile Service Information Input Items
Check the detailed information and estimated billing amount in the summary panel, and click the Create button.
When creation is complete, check the created resource on the Resource List page.
SASE Lastmile Check detailed information
In the SASE Lastmile menu, you can view and edit the full resource list and detailed information for the SASE Lastmile service. The SASE Lastmile Details page consists of the Detailed Information and Work History tabs.
To view detailed information of SASE Lastmile, follow the steps below.
Click the All Services > Networking > SASE menu. You will be taken to the SASE Service Home page.
Click the SASE Lastmile menu on the Service Home page. You will be taken to the SASE Lastmile List page.
On the SASE Lastmile List page, click the resource whose details you want to view. You will be taken to the SASE Lastmile Details page.
The SASE Lastmile Details page displays status information and additional feature information, and consists of the Detailed Information and Work History tabs.
Category
Detailed description
Status
Current Service Status
Request: Service request in progress
Creating: Service registration completed
Active: Service approved and successfully created
Deleting: Service termination request in progress
Previous state change
Previous state change button
Available in the Creating, Active, and Deleting states to revert to the previous state
SASE Lastmile Delete
Service termination button
Table. SASE Lastmile status information and additional function items
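The lifecycle in the table above (Request → Creating → Active → Deleting, with Previous state change offered in the last three states) can be sketched as a small state machine. This is an illustration only: the state names come from the table, but the transition helpers below are assumptions, and the console, not code, drives real state changes.

```python
# Illustrative sketch of the SASE Lastmile status lifecycle (assumed
# transition rules; the console drives the actual transitions).

# Normal forward progression of the service status.
FORWARD = {
    "Request": "Creating",
    "Creating": "Active",
    "Active": "Deleting",
}
# "Previous state change" is offered in Creating, Active, and Deleting.
PREVIOUS = {new: old for old, new in FORWARD.items()}

def next_status(current: str) -> str:
    """Advance one step in the normal lifecycle (no-op past the end)."""
    return FORWARD.get(current, current)

def previous_status(current: str) -> str:
    """Revert to the previous state where the console allows it."""
    if current not in PREVIOUS:
        raise ValueError(f"cannot revert from {current!r}")
    return PREVIOUS[current]

status = next_status("Request")          # -> "Creating"
assert previous_status("Deleting") == "Active"
```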
Detailed Information
In the Detailed Information tab, you can view the detailed information of the selected SASE Lastmile.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type (SASE Lastmile)
SRN
Resource unique ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
Service creation request user
Creation Date/Time
Service Creation Date/Time
Editor
Service modification request user
Edit Date/Time
Service Edit Date/Time
Site
Site configuration information
Click the edit icon to modify Site settings
Table. SASE Lastmile detailed information tab items
Work History
In the Work History tab, you can view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work date/time, resource type, resource name, work details, work result, operator name, and path information can be checked
To perform detailed search, click the Detailed Search button
Table. SASE Lastmile Work History Tab Detailed Information Items
SASE Lastmile Cancel
If you cancel the unused SASE Lastmile, you can reduce operating costs.
To cancel SASE Lastmile, follow the steps below.
Click the All Services > Networking > SASE menu. You will be taken to the SASE Service Home page.
Click the SASE Lastmile menu on the Service Home page. You will be taken to the SASE Lastmile List page.
On the SASE Lastmile List page, click the resource to terminate. You will be taken to the SASE Lastmile Details page.
On the SASE Lastmile Details page, click the SASE Lastmile Delete button.
When termination is complete, check the resource termination status in the SASE Lastmile list.
5.11.3 - Release Note
SASE
2026.03.19
FEATURE: SASE service ledger creation automation
The automatic ledger creation feature has been added through the Samsung Cloud Platform user console.
2025.07.01
NEW: SASE Service Official Version Release
We have launched a SASE service that combines network and security functions into a single cloud-based service platform.
5.12 - Cloud Last Mile
5.12.1 - Overview
Service Overview
Cloud Last Mile is a service that provides Last Mile lines for network connection from the customer’s site to the Samsung Cloud Platform region, together with Customer Edge resources within the customer’s site. Resources installed and operated at the customer’s site can be requested easily through a service request in the Samsung Cloud Platform user console.
Features
Provision of Lines and Edge Packages: The Last Mile lines and Edge resources for connecting the customer’s site to external networks are provided in package form by combining optimal equipment suited to the application types the customer primarily uses.
Various Edge connection types provided: You can select virtual resources or physical equipment types, and you can choose and use various functions needed for network connections such as routers/SD-WAN/WAN accelerators/Firewall, etc.
Last Mile line monitoring service provision: The connection status and traffic usage information of the Last Mile line connected to network equipment within the Samsung Cloud Platform region can be conveniently checked using the monitoring service. The monitoring service is provided using NiO tool, a platform developed in-house by Samsung SDS.
Service Architecture Diagram
Figure. Cloud Last Mile Diagram
Provided Features
Cloud Last Mile service provides the following features.
Last Mile line
Line provision type: dedicated line or Internet
Upstream connection type: Cloud LAN - Data Center, Samsung SDS Data Center On-Premise equipment
Customer Edge Resource Provisioning Type
uCPE(VNF: Virtual Network Function): router, SD-WAN, WAN accelerator, firewall
Physical equipment: SD-WAN
Last Mile Line Monitoring Service
Last Mile line up/down status and traffic usage monitoring
Constraints
The constraints of the Cloud Last Mile service are as follows.
Only the line and Edge equipment package form is provided, so providing the line or equipment alone is not possible.
Depending on the upstream connection method, customer-dedicated equipment may need to be built within the Samsung Cloud Platform region.
When connecting through shared upstream equipment, port fees may be charged depending on the associated product.
Region-wise Provision Status
Cloud Last Mile is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Cloud Last Mile Regional Provision Status
Prerequisite Service
Cloud Last Mile has no prerequisite service.
5.12.2 - How-to guides
The user can enter required information for the Cloud Last Mile service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Cloud Last Mile Create
You can create and use the Cloud Last Mile service from the Samsung Cloud Platform Console.
To request the creation of Cloud Last Mile service, follow the steps below.
Click the All Services > Networking > Cloud Last Mile menu. You will be taken to the Cloud Last Mile Service Home page.
On the Service Home page, click the Create Cloud Last Mile button. You will be taken to the Create Cloud Last Mile page.
On the Create Cloud Last Mile page, enter the information required to create the service.
Enter the required information in the Service Information area.
Category
Required
Detailed description
Cloud Last Mile name
Required
Cloud Last Mile name to be used by the user
Enter using English letters and numbers, 3-20 characters
Installation Area
Required
Select Cloud Last Mile Installation Area
Installation address
Required
Enter Cloud Last Mile installation address
Contract Period
Required
Select Cloud Last Mile Service Contract Period
Installation Request Date
Required
Cloud Last Mile Installation Request Date Selection
Select a date at least 2 months after today’s date from the calendar.
Other requests
Option
Enter request when applying for Cloud Last Mile service
Table. Cloud Last Mile Service Information Input Items
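Before submitting a request, the input rules above can be pre-checked. The sketch below is illustrative only: it assumes the name rule is 3-20 English letters and digits as stated, and it approximates "at least 2 months after today" as 60 days; the console remains the authority on the actual validation.

```python
import re
from datetime import date, timedelta

NAME_RE = re.compile(r"^[A-Za-z0-9]{3,20}$")  # 3-20 English letters/digits

def validate_request(name, install_date, today):
    """Return a list of validation errors (an empty list means the input
    looks valid). 'At least 2 months' is approximated here as 60 days."""
    errors = []
    if not NAME_RE.match(name):
        errors.append("name must be 3-20 English letters and numbers")
    if install_date < today + timedelta(days=60):
        errors.append("installation request date must be at least 2 months ahead")
    return errors

# A valid name with a date comfortably past the 2-month mark:
assert validate_request("clm01", date(2025, 9, 15), date(2025, 7, 1)) == []
```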
Check the detailed information generated in the summary panel and click the Create button.
When creation is complete, check the created resource on the Resource List page.
Cloud Last Mile Check detailed information
In the Cloud Last Mile menu, you can view and edit the full resource list and detailed information for the Cloud Last Mile service. The Cloud Last Mile Details page consists of the Detailed Information, Connected Resources, and Work History tabs.
To view detailed information of Cloud Last Mile, follow the steps below.
Click the All Services > Networking > Cloud Last Mile menu. You will be taken to the Cloud Last Mile Service Home page.
Click the Cloud Last Mile menu on the Service Home page. You will be taken to the Cloud Last Mile List page.
On the Cloud Last Mile List page, click the resource whose details you want to view. You will be taken to the Cloud Last Mile Details page.
The Cloud Last Mile Details page displays status information and additional feature information, and consists of the Detailed Information, Connected Resources, and Work History tabs.
Category
Detailed description
Status
Current Service Status
Request: Service request in progress
Creating: Service registration completed
Active: Service approved and successfully created
Service termination
Service termination button
Table. Cloud Last Mile status information and additional feature items
Detailed Information
In the Detailed Information tab, you can view the detailed information of the selected Cloud Last Mile.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type(Cloud Last Mile)
SRN
Resource unique ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
Service creation request user
Creation Date/Time
Service Creation Date/Time
Editor
Service modification request user
Edit Date/Time
Service Edit Date/Time
Service Details
Service Detail Settings Information
Click the edit icon to modify the service detail settings
Table. Cloud Last Mile Detailed Information Tab Items
Connected Resources
You can view the Circuit and Edge information connected to the selected Cloud Last Mile in the Connected Resources tab.
Category
Detailed description
Circuit and Edge ID
Circuit and Edge ID information
Click the ID to go to the Circuit and Edge Details page
Resource Type
Circuit and Edge Resource Type
Connection Type
Circuit and Edge Connection Type
Resource Details
Circuit and Edge Resource Detailed Configuration Information
Table. Cloud Last Mile Connected Resources Tab Items
Work History
In the Work History tab, you can view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work date and time, resource type, resource name, work details, work result, operator name, path information can be checked
To perform detailed search, click the Detailed Search button
Table. Cloud Last Mile Work History Tab Detailed Information Items
Cloud Last Mile Cancel
If you cancel the unused Cloud Last Mile, you can reduce operating costs.
Caution
When Circuit and Edge resources are connected, you cannot cancel the Cloud Last Mile service. Please delete the connected Circuit and Edge first.
To cancel Cloud Last Mile, follow the steps below.
Click the All Services > Networking > Cloud Last Mile menu. You will be taken to the Cloud Last Mile Service Home page.
Click the Cloud Last Mile menu on the Service Home page. You will be taken to the Cloud Last Mile List page.
On the Cloud Last Mile List page, click the resource to cancel. You will be taken to the Cloud Last Mile Details page.
Click the Service Termination button on the Cloud Last Mile Details page.
When termination is complete, check the resource termination status in the Cloud Last Mile list.
5.12.2.1 - Circuit and Edge
Users can create the service by entering the required information for the Circuit and Edge service through the Samsung Cloud Platform Console.
Circuit and Edge Creation
You can create and use the Circuit and Edge service from the Samsung Cloud Platform Console.
To request the creation of Circuit and Edge services, follow the steps below.
Click the All Services > Networking > Cloud Last Mile menu. You will be taken to the Cloud Last Mile Service Home page.
Click the Circuit and Edge Create button on the Service Home page. You will be taken to the Circuit and Edge Create page.
On the Circuit and Edge Create page, enter the information required to create the service.
Enter the required information in the Service Information area.
Category
Required
Detailed description
Cloud Last Mile Name
Required
Select Cloud Last Mile service to use
If you click + New Creation, you can create a Cloud Last Mile service and then select it
Resource Type
Required
Select Resource Type to Use
Resource Type > Circuit
Required
Select Circuit connection type
SD-WAN: Select license to use
VPN: Choose line type and enter line bandwidth
Enter line bandwidth within 1-1,000
Resource Type > Customer Edge
Required
Select usage type of Customer Edge
Physical Equipment: Select manufacturer and performance of the physical equipment to use
Virtual Resource: Enter Customer Edge name and select type
Select cCPE specification
When Use is selected, choose up to 3 VNF functions and select the manufacturer and performance for each
Table. Circuit and Edge Service Information Input Items
Check the detailed information generated in the summary panel and click the Create button.
When creation is complete, check the created resource on the Resource List page.
Circuit and Edge Detailed Information Check
In the Circuit and Edge menu, you can view and edit the full resource list and detailed information for the Circuit and Edge service. The Circuit and Edge Details page consists of the Detailed Information and Work History tabs.
To view detailed information of Circuit and Edge, follow the steps below.
Click the All Services > Networking > Cloud Last Mile menu. You will be taken to the Cloud Last Mile Service Home page.
Click the Circuit and Edge menu on the Service Home page. You will be taken to the Circuit and Edge List page.
On the Circuit and Edge List page, click the resource whose details you want to view. You will be taken to the Circuit and Edge Details page.
The Circuit and Edge Details page displays status information and additional feature information, and consists of the Detailed Information and Work History tabs.
Category
Detailed description
Status
Current Service Status
Request: Service application in progress
Creating: Service request completed
Active: Service approved and successfully created
Deleting: Service termination request in progress
Previous state change
Previous state change button
Available in the Creating, Active, and Deleting states to revert to the previous state
Circuit and Edge Delete
Service termination button
Table. Circuit and Edge status information and additional function items
Detailed Information
In the Detailed Information tab, you can view the detailed information of the selected Circuit and Edge.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type(Circuit and Edge)
SRN
Resource unique ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
Service creation request user
Creation Date/Time
Service Creation Date/Time
Editor
Service modification request user
Edit Date/Time
Service Edit Date/Time
Service Details
Service Details Settings Information
Click the edit icon to modify service detail settings
Table. Circuit and Edge detailed information tab items
Work History
In the Work History tab, you can view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work date and time, resource type, resource name, work details, work result, operator name, path information can be checked
To perform detailed search, click the Detailed Search button
Table. Circuit and Edge Work History Tab Detailed Information Items
Circuit and Edge Cancel
If you cancel unused Circuit and Edge, you can reduce operating costs.
To cancel Circuit and Edge, follow the steps below.
Click the All Services > Networking > Cloud Last Mile menu. You will be taken to the Cloud Last Mile Service Home page.
Click the Circuit and Edge menu on the Service Home page. Navigate to the Circuit and Edge list page.
Click the resource to be terminated on the Circuit and Edge List page. You will be taken to the Circuit and Edge Details page.
Click the Circuit and Edge Delete button on the Circuit and Edge Details page.
When termination is completed, check the resource termination status in the Circuit and Edge list.
5.12.3 - Release Note
Cloud Last Mile
2026.03.19
FEATURE: Cloud Last Mile service ledger creation automation
The automatic ledger creation feature has been added through the Samsung Cloud Platform user console.
2025.07.01
NEW: Cloud Last Mile Service Official Version Release
We have launched the Cloud Last Mile service that provides Last Mile lines for network connection from the customer’s site to the Samsung Cloud Platform region and Customer Edge resources within the customer’s site.
5.13 - Global CDN
5.13.1 - Overview
Service Overview
Global CDN is a service that delivers static content stored in web servers or object storage to users more quickly and securely through numerous edge servers distributed across the global network.
When traffic surges, it distributes the load of the origin server to protect the origin server, and by downloading content from adjacent edge servers, it can provide users with fast and stable web services.
Guide
Samsung Cloud Platform’s Global CDN service is provided through the services and infrastructure of the global CDN provider Akamai. Akamai notes that, in accordance with the Information and Communications Network Act, if it receives a list of URLs suspected of containing illegal information from the Korea Communications Standards Commission, it may restrict user access to those URLs.
Features
Easy CDN Service Use: You can conveniently apply for Global CDN services through the web-based console of Samsung Cloud Platform. You can easily set the origin server settings of Samsung Cloud Platform and the caching policy settings of Global CDN edge servers, enabling rapid content delivery service usage.
Improved Service Availability: Even when many users request content simultaneously and traffic surges, edge servers distributed across multiple locations let users access content quickly without degraded usability, ensuring availability for workloads that require stable global service.
Safe content usage: HTTP, HTTPS, HTTP/2 protocols are supported, allowing content integration with various origin servers. If the cached content’s validity period expires or changes to the origin content are confirmed through validation, the edge server’s existing cache is deleted. Then, when a user requests content, the new content from the origin server is cached, so the user always receives valid, up-to-date content.
Efficient Cost Management: Even in work environments that require large-scale traffic such as large file downloads, stable service is possible without the need for massive resource usage. Also, Global CDN usage fees are charged only for content usage, allowing efficient cost management.
Service Diagram
Figure. Global CDN Diagram
Provided Features
Global CDN provides the following features.
Origin Settings: Set the location and path of the origin server; basic compression of origin content reduces traffic and improves response speed.
Caching Settings: Set the cached content delivery policy and cache expiration time; when a content item’s validity period expires (TTL expiration), you can delete (Purge) the expired cached content from the edge server.
Content Protection: By communicating with the origin server via the HTTPS protocol, the security of the content transmission path is strengthened, and with the powerful security features of the Global CDN network, you can protect content and users from DDoS attacks and web-based attacks.
Components
Connection between the origin and the Global CDN network
Category
Description
Origin location and path setting
Set the origin server’s location, protocol, port number, and file path, based on the domain name or IP address, to connect the origin to the Global CDN network
Forward host header
Set the Host header value that Global CDN sends when requesting the origin server
Cache key hostname
Set cache key information to identify content on Global CDN Edge server
Custom header(request)
Custom header usage setting
Table. Connection settings between the origin and the Global CDN network
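The effect of the Cache key hostname setting can be illustrated with a toy cache-key function. This is not Akamai’s implementation, just a sketch: keying on the incoming host header gives each service domain its own cache entries, while keying on the origin hostname shares one entry across service domains that front the same origin.

```python
# Illustrative sketch (not Akamai's implementation) of how the "Cache key
# hostname" option changes which cached object a request maps to.

def cache_key(mode, incoming_host, origin_host, path):
    if mode == "incoming_host_header":
        return (incoming_host, path)   # key on the domain the user accessed
    if mode == "origin_hostname":
        return (origin_host, path)     # key on the configured origin domain
    raise ValueError(mode)

# Two service domains fronting the same origin:
k1 = cache_key("incoming_host_header", "img.example.com", "origin.example.com", "/logo.png")
k2 = cache_key("incoming_host_header", "cdn.example.com", "origin.example.com", "/logo.png")
assert k1 != k2   # separate cache entries per service domain

k3 = cache_key("origin_hostname", "img.example.com", "origin.example.com", "/logo.png")
k4 = cache_key("origin_hostname", "cdn.example.com", "origin.example.com", "/logo.png")
assert k3 == k4   # one shared cache entry keyed on the origin
```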
Caching in Global CDN Network
Category
Description
Caching Options
Setting caching options on the Global CDN network using the origin server’s Cache-control and expiration time
Content Delivery Policy
Transmission policy setting based on validity according to TTL expiration
Cache expiration time
Set expiration time of cached content
Detailed Policy
Ignore query string, Range request, Custom header usage setting
Table. Caching Settings in Global CDN Network
Constraints
The constraints of Global CDN service are as follows.
Category
Description
Maximum number of domains that can be created per Account
20
Table. Global CDN constraints
Region-specific provision status
Global CDN is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Global CDN regional provision status
Prerequisite Service
The Global CDN service has no prerequisite service.
5.13.1.1 - ServiceWatch Metrics
Global CDN sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at a 1‑minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are the basic metrics for the Global CDN namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. Global CDN Basic Metrics
5.13.2 - How-to guides
Users can create a Global CDN service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Global CDN
You can create and use a Global CDN service through the Samsung Cloud Platform Console.
Note
To use the Global CDN service, you must add allow rules to the Firewall and Security Group for the origin server.
To request the creation of a Global CDN service, follow these steps:
Click the All Services > Networking > Global CDN menu. You will be redirected to the Service Home page.
On the Service Home page, click the Create Global CDN button. You will be redirected to the Create Global CDN page.
On the Create Global CDN page, enter the information required to create the service and select detailed options.
Enter or select the required information in the Service Information section.
Division
Required
Description
CDN Name
Required
Enter the Global CDN name to use
Cannot be the same as a name already in use
CDN Domain
Required
Enter the domain name of the Global CDN to use
Table. Global CDN service information input items
Enter or select the required information in the Origin Settings section.
Division
Required
Description
Origin Location > Domain or IP
Required
Enter the location of the origin server
Enter domain name (recommended) or directly enter the public IP of the origin server
Origin Location > Protocol
Required
Select the protocol to use
Service protocol and origin protocol must be set the same
Forward host header
Required
Set the Host header value that Global CDN sends when requesting the origin server
Incoming host header: Service domain name
Origin host name: Origin domain name
Custom Value: Directly enter the domain name in standard domain format such as www.abc.com
Cache key hostname
Required
Set cache key information to identify content from the Global CDN Edge server
Incoming host header: Use the domain the user accesses as the cache key
Origin hostname: Use the configured origin domain as the cache key
Custom header (request)
Optional
Change a specific header in requests sent from the Global CDN Edge server to the origin server
When selecting Use, enter Header name and Header value
Add items with (+) button, delete with (X) button
Can enter up to 10
Table. Global CDN origin settings input items
Note
You can apply for multiple Global CDN services in one Account.
Only one origin location can be set in the Global CDN service.
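One common use of the custom request header above is to let the origin tell CDN traffic apart from direct access. The header name and value below are hypothetical examples, not Samsung Cloud Platform defaults: you would configure your own pair in the console and check it on the origin side, roughly like this:

```python
# Hypothetical header name and shared secret -- both are assumptions for
# illustration, configured by you, not provided by the platform.
CDN_HEADER = "X-CDN-Auth"
CDN_SECRET = "example-shared-secret"

def origin_allows(request_headers):
    """Origin-side check: only serve requests carrying the header the CDN
    was configured to add, so direct requests that bypass the CDN are
    rejected."""
    return request_headers.get(CDN_HEADER) == CDN_SECRET

assert origin_allows({"X-CDN-Auth": "example-shared-secret", "Host": "origin.example.com"})
assert not origin_allows({"Host": "origin.example.com"})  # bypassed the CDN
```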
Enter or select the required information in the Cache Settings section. This determines how to handle Cache headers transmitted to the Global CDN Edge server.
Division
Required
Description
Cache Option
Required
Set the caching policy applied to all content transmitted to the Global CDN Edge server (Honor origin cache-control and expires recommended)
Honor origin cache-control and expires: Follow all origin’s cache-control and expiration policies
Cache: Follow the Global CDN provider’s policy
Honor origin expires: Follow the origin’s expiration time policy
Honor origin cache-control: Follow the origin server’s cache control policy
Content Delivery Policy
Required
Set whether the Global CDN Edge server validates content freshness with the origin server
Provide only valid content: Set not to send when TTL expires (recommended)
Provide all cached content: Provide all cached content regardless of TTL expiration
Cache Expiration Time
Required
Enter the time when cached content expires in the Global CDN Edge
Enter within 3,600 – 2,592,000 seconds
Ignore query string
Optional
Set whether to use query string when applying caching policy
When setting Use, ignore query string
Allow Range request
Optional
Provide large file optimization function for objects over 100 MB
When setting Use, support optimization up to 1.8 GB
Custom header (response)
Optional
Change a specific header in the response sent from the Global CDN Edge server to the user
When setting Use, enter Header name and Header value
Add items with (+) button, delete with (X) button
Can enter up to 10
Table. Global CDN cache settings input items
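As a rough illustration of how the cache options above interact, the sketch below mimics the "Honor origin cache-control" idea: an origin max-age wins when present, otherwise the configured expiration time applies, clamped to the 3,600-2,592,000 second range the console accepts. This is a simplification for illustration, not Akamai’s actual algorithm.

```python
import re

TTL_MIN, TTL_MAX = 3_600, 2_592_000   # allowed cache expiration range (seconds)

def effective_ttl(cache_control, configured_ttl):
    """Simplified illustration of the 'Honor origin cache-control' idea:
    use the origin's max-age when present, otherwise fall back to the
    configured expiration time, clamped to the console's accepted range."""
    if cache_control:
        m = re.search(r"max-age=(\d+)", cache_control)
        if m:
            return int(m.group(1))
    return max(TTL_MIN, min(configured_ttl, TTL_MAX))

assert effective_ttl("public, max-age=600", 86_400) == 600   # origin wins
assert effective_ttl(None, 100) == TTL_MIN                   # clamped up
```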
Enter or select the required information in the Additional Information section.
Division
Required
Description
Tags
Optional
Add tags
Up to 50 tags can be added per resource
Click the Add Tag button and enter or select the Key, Value values
Table. Global CDN additional information input items
Review the application details and click the Create button.
When creation is complete, check the created resource on the Global CDN List page.
Checking Global CDN Detailed Information
For the Global CDN service, you can view and modify the entire resource list and detailed information. The Global CDN Details page consists of tabs for Detailed Information, Tags, Operation History.
To check Global CDN detailed information, follow these steps:
Click the All Services > Networking > Global CDN menu. You will be redirected to the Global CDN Service Home page.
On the Service Home page, click the Global CDN menu. You will be redirected to the Global CDN List page.
On the Global CDN List page, click the resource for which you want to check detailed information. You will be redirected to the Global CDN Details page.
The Global CDN Details page displays the status information and detailed information of the Global CDN, and consists of tabs for Detailed Information, Tags, Operation History.
Division
Description
Service Status
Status of the Global CDN
Creating: Being created, or the Global CDN is starting
Active: Creation complete and operating; information can be modified
Inactive/Pending: Stopped
Aborted: Failed to activate after creating the Property
Stopped/Stopping: Stopped or in the process of stopping
Editing: Changing settings
Starting: Starting
Deleting: Terminating
Mismatching: The Console and Global CDN partner versions differ
Error: An error occurred
Start
Service start button
Stop
Service stop button
Apply Purge
Button to apply the Purge function
Terminate Service
Button to terminate Global CDN
Table. Global CDN status information and additional features
Detailed Information
In the Detailed Information tab, you can check the detailed information of the selected resource and modify the information if necessary.
Division
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
CDN Name
CDN name
CDN Domain
CDN domain information
CDN Configuration Version
Configuration (Property) information applied to the Global CDN service
If the Property version differs from the active version on the Global CDN partner side, the service cannot be controlled from the Console
Once the version shown in the Console is activated, all functions can be used
Description
Additional description entered by the user
Click the Edit icon to modify
Origin Settings
Entered CDN origin information
Can check origin location, protocol, Port number, origin path, Forward host header, Cache key hostname, Custom header(request) details
Cache Settings
Entered CDN cache settings
Can check cache option, content delivery policy, Cache expiration time, Ignore query string, Allow Range request, Custom header(response) details
Table. Global CDN detailed information tab items
Tags
In the Tags tab, you can check the tag information of the selected resource, and add, change, or delete tags.
Division
Description
Tag List
Tag list
You can check Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the list of previously created Keys and Values
Table. Global CDN tag tab items
Operation History
In the Operation History tab, you can check the operation history of the selected resource.
Division
Description
Operation History List
Resource change history
You can check operation details, operation date and time, resource type, resource name, operation result, operator information
Click the corresponding resource in the Operation History List to open the Operation History Details popup window
Table. Global CDN operation history tab items
Changing Global CDN Settings
You can change and apply Global CDN service settings.
To change Global CDN settings, follow these steps:
Click the All Services > Networking > Global CDN menu. You will be redirected to the Global CDN Service Home page.
On the Service Home page, click the Global CDN menu. You will be redirected to the Global CDN List page.
On the Global CDN List page, click the resource whose settings you want to change. You will be redirected to the Global CDN Details page.
On the Global CDN Details page, click the Edit button. You will be redirected to the Edit Global CDN page.
On the Edit Global CDN page, modify the desired information and click Complete. The modification notification window will open.
Click Confirm in the notification window. The service information modification is complete.
Controlling Global CDN Operation
You can stop or re-run the Global CDN service.
To control the operation of Global CDN, follow these steps:
Click the All Services > Networking > Global CDN menu. You will be redirected to the Global CDN Service Home page.
On the Service Home page, click the Global CDN menu. You will be redirected to the Global CDN List page.
On the Global CDN List page, click the resource you want to control. You will be redirected to the Global CDN Details page.
On the Global CDN Details page, click the control button. This controls the service operation.
Start: Runs the Global CDN service.
Stop: Stops the Global CDN service operation.
Caution
When starting or stopping the service, it can take more than 1 hour for the change to apply worldwide.
When stopping the service, service domain provision is stopped. Please be careful when using the service stop function.
Applying Global CDN Purge
Purge is a function that forcibly deletes content cached in the CDN Edge server. When content is modified before the object expires, you can delete the existing content from the CDN Edge through Purge and set it to update with new content.
Caution
When applying Purge, all content stored in the CDN Edge is deleted, and content requests to the origin may occur simultaneously from the CDN Edge.
When running Purge, requests to the origin server increase and load may occur. Please be careful when applying Purge.
To apply Purge to Global CDN, follow these steps:
Click the All Services > Networking > Global CDN menu. You will be redirected to the Global CDN Service Home page.
On the Service Home page, click the Global CDN menu. You will be redirected to the Global CDN List page.
On the Global CDN List page, click the resource for which you want to check detailed information. You will be redirected to the Global CDN Details page.
On the Global CDN Details page, click the Apply Purge button. The Purge application window will open.
Set detailed items in the Purge application window and click Confirm. The modification notification window will open.
Select Content: Select the content type to apply Purge to.
Enter Path Information: When selecting Entire Domain, the set domain information is displayed, and when selecting Enter Path, you can directly enter the path excluding the domain.
Click Confirm in the notification window. Purge is applied.
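The purge behavior described above (cached copies are deleted, so the next requests fall through to the origin) can be illustrated with a toy cache model. This is a sketch only; the class, names, and TTL value below are illustrative and are not part of the Global CDN API.

```python
import time

class EdgeCache:
    """Minimal model of a CDN edge cache (illustrative only)."""
    def __init__(self, origin, ttl=3600):
        self.origin = origin          # callable: path -> content
        self.ttl = ttl                # object expiry in seconds
        self.store = {}               # path -> (content, cached_at)

    def get(self, path, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(path)
        if entry and now - entry[1] < self.ttl:
            return entry[0]           # cache hit: origin is not contacted
        content = self.origin(path)   # cache miss: fetch from origin
        self.store[path] = (content, now)
        return content

    def purge(self, path=None):
        """Purge one path, or all cached content when path is None."""
        if path is None:
            self.store.clear()        # entire domain: all content deleted
        else:
            self.store.pop(path, None)

origin_hits = []
def origin(path):
    origin_hits.append(path)
    return f"content of {path}"

edge = EdgeCache(origin)
edge.get("/index.html")   # miss: origin contacted
edge.get("/index.html")   # hit: served from the edge
edge.purge()              # forcibly delete cached content
edge.get("/index.html")   # origin contacted again after purge
```

This is why the caution above applies: after a full purge, every cached object must be refetched, so requests to the origin spike until the edge is warm again.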
Terminating Global CDN
You can apply for the termination of the Global CDN service from the Samsung Cloud Platform Console.
Caution
Global CDN can only be terminated in the stopped state. To terminate the product, first click the Stop button to change the status.
To request the termination of the Global CDN service, follow these steps:
Click the All Services > Networking > Global CDN menu. You will be redirected to the Global CDN Service Home page.
On the Service Home page, click the Global CDN menu. You will be redirected to the Global CDN List page.
On the Global CDN List page, click the resource for which you want to check detailed information. You will be redirected to the Global CDN Details page.
On the Global CDN Details page, click the Terminate Service button.
When termination is complete, check the service termination status on the Global CDN list.
5.13.3 - API Reference
API Reference
5.13.4 - CLI Reference
CLI Reference
5.13.5 - Release Note
Global CDN
2026.03.19
FEATURE Global CDN feature improvement
You can check measurement values for the following 2 items in conjunction with the Service Watch service.
Check Global CDN status
Check Global CDN processed data volume
Data from 30 minutes ago is retrieved due to external CDN network traffic processing time.
2025.07.01
NEW Global CDN service official version release
We have released the Global CDN service, which transmits static content stored on web servers or object storage to users faster and more securely through edge servers distributed across the global network.
5.14 - GSLB
5.14.1 - Overview
Service Overview
GSLB (Global Server Load Balancing) automatically distributes network traffic to available adjacent regions on a DNS basis when traffic increases in a specific global region.
In case of failure on a specific server, it load balances network traffic to available new resources, allowing the service to continue stably.
Features
Stable Service Provision: Using the Health Check function, which verifies whether connected resources are operating normally, GSLB immediately performs a failover when a failure occurs on a specific server and removes that resource from domain responses, diverting traffic to other resources to provide stable service.
Easy Service Port Configuration: You can conveniently create GSLB through a web-based console and set/manage service ports. For L4-level load balancing, multi-port configuration is possible (80, 443, 8080-8090, etc.), and multiple load balancing rules can be applied and managed simultaneously.
Efficient Cost Management: Charges are determined by applying a detailed billing method based on the number of configured domains, the number of added Health Check resources, and the number of queries, allowing efficient cost management.
Service Configuration Diagram
Figure. GSLB Configuration Diagram
Provided Functions
The GSLB service provides the following functions.
GSLB Creation/Management: You can register multiple resources to a single GSLB.
Distribution Algorithm Selection: Provides the Ratio method, which distributes traffic in proportion to the weight (Weight) for each connection target, and the Round Robin method, which distributes traffic evenly while circulating.
Health Check Configuration: You can set the check cycle (Interval), service down recognition time (Timeout), response wait time (Probe Timeout), protocol (ICMP, TCP, HTTP, HTTPS), and service port.
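As a rough illustration of how the health check parameters interact (Probe Timeout bounds a single probe; probes repeat every Interval; a target is declared DOWN after failures persist), here is a minimal TCP probe sketch. The consecutive-failure threshold is an assumption for illustration, not the service's exact algorithm.

```python
import socket

def tcp_probe(host, port, probe_timeout=3.0):
    """One TCP health probe: success if a connection opens within probe_timeout seconds."""
    try:
        with socket.create_connection((host, port), timeout=probe_timeout):
            return True
    except OSError:
        return False

def health_state(probe_results, fail_threshold=3):
    """Illustrative DOWN decision: a target is DOWN after `fail_threshold`
    consecutive failed probes; any successful probe resets the counter."""
    consecutive_failures = 0
    for ok in probe_results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
    return "DOWN" if consecutive_failures >= fail_threshold else "UP"
```

In the actual service, such probes would run from each monitoring location at the configured Interval, and targets evaluated as DOWN would be removed from DNS answers until they recover.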
Constraints
The constraints of the GSLB service are as follows.
Item
Description
Maximum number of domains that can be created per Account
20
Maximum number of resources that can be connected per domain
8
Table. GSLB Constraints
Note
For GSLB to monitor connection targets, allow rules must be added to the Firewall and Security Group of the connection target resources.
Regional Availability
The GSLB service can be provided in the following environments.
Region
Availability
Korea West (kr-west1)
Available
Korea East (kr-east1)
Available
Korea South1 (kr-south1)
Unavailable
Korea South2 (kr-south2)
Unavailable
Korea South3 (kr-south3)
Unavailable
Table. GSLB Regional Availability
Prerequisite Services
The GSLB service has no prerequisite services.
5.14.2 - How-to guides
Users can create a GSLB service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a GSLB
You can create and use a GSLB service through the Samsung Cloud Platform Console.
To request the creation of a GSLB service, follow these steps:
Click the All Services > Networking > GSLB menu. You will be redirected to the Service Home page.
On the Service Home page, click the Create GSLB button. You will be redirected to the Create GSLB page.
On the Create GSLB page, enter the information required to create the service and select detailed options.
Enter or select the required information in the Service Information section.
Division
Required
Description
Purpose
Required
PUBLIC is entered automatically when creating a GSLB
Domain Name
Required
Enter the GSLB domain name to use
Enter within 4-40 characters using lowercase English letters and numbers
Cannot be the same as a name already in use
Add Connection Target > IP
Required
Enter the connection target IP address
Add Connection Target > Location
Required
Select the location to perform monitoring for the connection target
It is recommended to specify a location close to the target server
Add Connection Target > Description
Optional
Enter additional information or description for the connection target
Add Connection Target > Connection Target List
Required
Displays the added connection target's IP, location, and description
After entering connection target IP, location, description, click the Add button to add the item
Up to 8 connection targets can be added to one GSLB service
Click x to delete the item from the list, or click the Delete All button to delete all items in the list
Table. GSLB service information input items
Enter or select the required information in the Connection Target Monitoring Settings section.
Division
Required
Description
Health Check
Required
Select the protocol type to perform health check
Can select from ICMP, TCP, HTTP, HTTPS (use of HTTPS recommended for security)
Interval
Required
Enter the time interval (seconds) to perform health check
Timeout
Required
Enter the waiting time (seconds) to determine the server status (UP or DOWN) during health check
Probe Timeout
Required
Enter the response waiting time (seconds)
Service Port
Required
When using TCP/HTTP/HTTPS protocol, enter the port to use for health check
Enter domain name (recommended) or directly enter the public IP of the origin server
User Name
Optional
When using HTTP/HTTPS protocol, enter the user name to use when authentication is required for health check communication
Password
Optional
When using HTTP/HTTPS protocol, enter the password to use when authentication is required for health check communication
Enter 8-20 characters combining English letters, numbers, and special characters (@$!%*#?&)
Send String
Optional
When using HTTP/HTTPS protocol, enter the string to send when checking a specific web page
Example) GET /www/example/index.html
For HTTP 1.0/1.1, enter line breaks as \r\n; special characters (<, >, #) cannot be used in the string
Receive String
Required
When using HTTP/HTTPS protocol, enter the string to receive as health check response
Enter only English uppercase/lowercase letters and numbers in the string
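The Send String and Receive String above describe an HTTP-level probe: the send string is the request sent to the target (HTTP/1.0 and 1.1 line breaks are the CRLF sequence \r\n), and the check passes when the receive string appears in the response. A minimal sketch; the path and hostname are placeholder examples:

```python
def build_send_string(path="/www/example/index.html", host="www.example.com"):
    """Compose the HTTP request a Send String describes; HTTP line breaks are CRLF."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"                         # blank line terminates the request headers
    )

def receive_string_matches(response: bytes, receive_string: str) -> bool:
    """Health check passes when the Receive String occurs anywhere in the response."""
    return receive_string.encode("ascii") in response
```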
Enter or select the required information in the Load Balancing Policy Settings section.
Division
Required
Description
Algorithm
Required
Select the load balancing method
Ratio: Distribute traffic proportionally to the weight (Weight) of each connection target
Round robin: Distribute traffic equally based on round-robin method
Connection Target
Required
When selecting Ratio, enter Weight for each connection target
Weight is the weight applied to the connection target when distributing service requests; enter a value from 0 to 100
Click the detailed view icon in the description item to check connection target information
Table. GSLB load balancing policy input items
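The two algorithms can be sketched as follows; the target IPs and weights are made-up examples, not values from the service.

```python
import itertools
import random

def ratio_pick(targets, rng=random):
    """Ratio: choose a target with probability proportional to its Weight (0-100)."""
    total = sum(weight for _, weight in targets)
    r = rng.uniform(0, total)
    for ip, weight in targets:
        r -= weight
        if r <= 0:
            return ip
    return targets[-1][0]  # guard against floating-point edge cases

def round_robin(targets):
    """Round Robin: cycle through targets evenly, ignoring weights."""
    return itertools.cycle(ip for ip, _ in targets)

targets = [("203.0.113.10", 70), ("203.0.113.20", 30)]  # example IPs and weights
picker = round_robin(targets)
# successive next(picker) calls alternate between the two targets
```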
Enter or select the required information in the Additional Information section.
Division
Required
Description
Description
Optional
Enter additional information or description for the GSLB service
Tags
Optional
Add tags
Up to 50 tags can be added per resource
Click the Add Tag button and enter or select the Key, Value values
Table. GSLB additional information input items
Review the creation details and click the Create button.
When creation is complete, check the created resource on the GSLB List page.
Note
For GSLB to monitor connection targets, allow rules must be added to the Firewall and Security Group of the connection target resources.
Checking GSLB Detailed Information
For the GSLB service, you can view and modify the entire resource list and detailed information. The GSLB Details page consists of tabs for Detailed Information, Connection Targets, Tags, Operation History.
To check GSLB detailed information, follow these steps:
Click the All Services > Networking > GSLB menu. You will be redirected to the GSLB Service Home page.
On the Service Home page, click the GSLB menu. You will be redirected to the GSLB List page.
On the GSLB List page, click the resource for which you want to check detailed information. You will be redirected to the GSLB Details page.
The GSLB Details page displays the status information and detailed information of the GSLB, and consists of tabs for Detailed Information, Connection Targets, Tags, Operation History.
Division
Description
Service Status
Status of the GSLB
Creating: Creating
Active: Operating
Editing: Modifying
Deleting: Terminating
Error: Error occurred
Terminate Service
Button to terminate GSLB
Table. GSLB status information and additional features
Detailed Information
On the GSLB List page, you can check the detailed information of the selected resource and modify the information if necessary.
Division
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
Domain Name
GSLB domain information
Purpose
GSLB purpose
Algorithm
Set GSLB algorithm information
Click the Edit icon to change settings
Health Check
Set GSLB health check information
Click the Edit icon to change settings
Description
Entered GSLB description
Click the Edit icon to modify description
Table. GSLB detailed information tab items
Connection Targets
On the GSLB List page, you can check the connection target information of the selected resource and modify the information if necessary.
Division
Description
IP
Connection target IP address
Resource ID
GSLB resource ID
Location
Location to perform monitoring for the connection target
Description
Additional information or description entered for the connection target
Tags
On the GSLB List page, you can check the tag information of the selected resource, and add, change, or delete tags.
Division
Description
Tag List
Tag list
You can check Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the list of previously created Keys and Values
Table. GSLB tag tab items
Operation History
On the GSLB List page, you can check the operation history of the selected resource.
Division
Description
Operation History List
Resource change history
You can check operation details, operation date and time, resource type, resource name, operation result, operator information
Click the corresponding resource in the Operation History List to open the Operation History Details popup window
Table. GSLB operation history tab items
Modifying Connection Target Information
You can add, modify, or delete GSLB connection target information.
To change GSLB connection target information, follow these steps:
Click the All Services > Networking > GSLB menu. You will be redirected to the GSLB Service Home page.
On the Service Home page, click the GSLB menu. You will be redirected to the GSLB List page.
On the GSLB List page, click the resource for which you want to check detailed information. You will be redirected to the GSLB Details page.
On the GSLB Details page, click the Connection Targets tab. You will be redirected to the Connection Targets tab page.
On the Connection Targets tab page, click the Modify Connection Target button. You will be redirected to the Modify Connection Target page.
On the Modify Connection Target page, modify the desired information.
Add: Enter connection target IP, select location, enter description, and click the Add button to add the item.
Delete: Click the Delete button to delete the connection target item.
You can modify Weight for each item in the connection target list.
When modification is complete, click Complete. The modification notification window will open.
Click Confirm in the notification window. The service information modification is complete.
Note
You can add up to 8 connection targets to one GSLB service.
When adding a connection target, it is recommended to set it to a location close to the connection target server in the location item.
Setting Up Regional Routing Controller
You can check the Regional Routing Controller and change the use status.
To change the use status of Regional Routing Controller, follow these steps:
Click the All Services > Networking > GSLB menu. You will be redirected to the GSLB Service Home page.
On the Service Home page, click the Regional Routing Controller menu. You will be redirected to the Regional Routing Controller List page.
On the Regional Routing Controller List page, search for the resource for which you want to check detailed information.
Click the Detailed Search button to search by selecting domain, connection location, and use status.
On the Regional Routing Controller List page, check the resource information and change the use status.
Division
Description
Domain Name
Registered domain name
Click the domain name to move to the GSLB Details > Connection Targets tab page
Purpose
Domain purpose
Connection Location
Location to perform monitoring for the connection target
Connection Targets by Location
Number of connection targets by location
Use Status
Displays the connection target use setting status; click the More button to change it
Use: Set connection target to use
Stop: Stop connection target use
You can also set use by selecting the domain in the list and selecting Use or Stop at the top
Table. Regional Routing Controller list
Click Confirm in the notification window. The domain use status change is complete.
Terminating GSLB
You can apply for the termination of the GSLB service from the Samsung Cloud Platform Console.
To request the termination of the GSLB service, follow these steps:
Click the All Services > Networking > GSLB menu. You will be redirected to the GSLB Service Home page.
On the Service Home page, click the GSLB menu. You will be redirected to the GSLB List page.
On the GSLB List page, click the resource for which you want to check detailed information. You will be redirected to the GSLB Details page.
On the GSLB Details page, click the Terminate Service button.
When termination is complete, check the service termination status on the GSLB list.
5.14.3 - API Reference
API Reference
5.14.4 - CLI Reference
CLI Reference
5.14.5 - Release Note
GSLB
2025.12.16
FEATURE Regional Routing Controller Service Added
You can control whether to use traffic to be connected through GSLB by region.
2025.07.01
NEW GSLB Service Official Version Released
We have released the GSLB service that can automatically distribute network traffic to adjacent regions on a DNS basis when traffic increases in a specific global region, providing stable service.
5.15 - Cloud Virtual Circuit
Global Samsung Cloud Platform provides a 1:1 virtual circuit service based on line bandwidth between regions or customer sites.
5.15.1 - Overview
Service Overview
The Cloud Virtual Circuit service provides a 1:1 virtual circuit based on circuit bandwidth between global Samsung Cloud Platform regions or customer sites.
Key Features
Cloud Virtual Circuit provides the following functions and features.
Mesh-type one-to-one connection: The Samsung Cloud Platform infrastructure is connected between all global regions, so you can use one-to-one virtual circuit services from anywhere to anywhere.
Non-contract short-term line service: Unlike existing network line services, it provides a non-contract rate system, allowing for cost-effective use when short-term line service is needed.
Special feature provision: It provides a special feature that can divide a single virtual circuit into multiple logical circuits for different purposes and use them.
The maximum circuit bandwidth within the country is 10 Gbps, and the maximum circuit bandwidth between Korea-Global and Global-Global is 1 Gbps.
Logical circuit separation function (Multi VLAN): Up to 5 individual VLANs can be used with a single cloud virtual circuit.
Components
Cloud Virtual Circuit provides a 1:1 virtual backbone line between global bases.
The components are as follows, and you can create resources with related self-service through the user Console.
Division
Content
Cloud Virtual Circuit
A virtual resource that accommodates up to two Virtual Links for the same 1:1 endpoint pair
Starting Point Access Location
Access Location information for the starting point of the 1:1 virtual circuit
Destination Access Location
Access Location information for the destination of the 1:1 virtual circuit
Multi VLAN
A function that separates one Virtual Link into multiple logical lines
Virtual Link
A virtual circuit based on dedicated line bandwidth (selectable line bandwidth, contract period, and transmission path level options)
CE Equipment
Network equipment (Customer Edge) that terminates the dedicated line at the customer's business site
Fig. Cloud Virtual Circuit Components
Constraints
The Cloud Virtual Circuit service has the following constraints.
For one Cloud Virtual Circuit, you can create up to 2 Virtual Links.
Up to 5 Multi VLANs can be created per Cloud Virtual Circuit.
Regional Availability
The Cloud Virtual Circuit service is available in the following environments.
Region
Availability
Korea West 1 (kr-west1)
Provided
Korea East 1 (kr-east1)
Not provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Cloud Virtual Circuit Service Availability by Region
Prerequisite Services
There are no services that must be configured before creating this service.
5.15.2 - How-to guides
Users can apply for the Cloud Virtual Circuit service through a service request in the Samsung Cloud Platform Console.
Applying for Cloud Virtual Circuit
You can apply for Cloud Virtual Circuit through the Support Center of the Samsung Cloud Platform Console.
To apply for Cloud Virtual Circuit, follow these steps:
Click the All Services > Networking > Cloud Virtual Circuit menu. You will be redirected to the Cloud Virtual Circuit Service Home page.
On the Service Home page, click the Cloud Virtual Circuit service request button. You will be redirected to the Support Center Service Request page.
On the Service Request page, select or enter the information required for the Cloud Virtual Circuit application.
Division
Required
Description
Title
Required
Enter a title for the service request
Enter within 64 characters using Korean, English letters, numbers, and special characters (+=,.@-_)
Region
Required
Select the region to request the service
Service
Required
Select the Cloud Virtual Circuit service in the Networking service group
Task Classification
Required
Select Cloud Virtual Circuit new application
Content
Required
Enter the information required for the Cloud Virtual Circuit application
Table. Cloud Virtual Circuit Service Request Items
Review the entered information and click the Request button.
Note
After submitting the request, you cannot modify or delete its content.
After requesting a service, you can check the details of the request on the Service Request List page of the Support Center. Please refer to Checking Service Request Details for more information.
Canceling Cloud Virtual Circuit
You can request cancellation of Cloud Virtual Circuit in the Support Center of the Samsung Cloud Platform Console.
To request cancellation of Cloud Virtual Circuit, follow these steps:
Click the All Services > Management > Support Center menu. You will be redirected to the Service Home page.
On the Service Home page, click the Cloud Virtual Circuit service request menu. You will be redirected to the Service Request page.
On the Service Request page, select or enter the information required for Cloud Virtual Circuit cancellation.
Division
Required
Description
Title
Required
Enter a title for the service request
Enter within 64 characters using Korean, English letters, numbers, and special characters (+=,.@-_)
Region
Required
Select the region to request service cancellation
Service
Required
Select the Cloud Virtual Circuit service in the Networking service group
Enter the information required for the Cloud Virtual Circuit cancellation request
Table. Cloud Virtual Circuit service cancellation request items
Review the entered information and click the Request button.
Note
After submitting the request, you cannot modify or delete its content.
After requesting a service, you can check the details of the request on the Service Request List page in the Support Center. Please refer to Checking Service Request Details for more information.
5.15.3 - Release Note
Cloud Virtual Circuit
2025.09.08
NEW Cloud Virtual Circuit Service Official Version Release
Cloud Virtual Circuit service has been officially launched.
Users can apply for a 1:1 virtual circuit based on line bandwidth between global Samsung Cloud Platform regions or customer sites.
5.16 - Private 5G Cloud
5.16.1 - Overview
Service Overview
Private 5G Cloud is a service based on the Samsung Cloud Platform that provides Private 5G Core, Edge solutions for enterprise customers. By utilizing the cloud, it minimizes the construction of physical 5G network equipment, allowing for the creation of a flexible and expandable network environment optimized for the customer’s private environment, and easily connecting multiple geographically dispersed locations.
It provides an enterprise-dedicated 5G Core in a cloud environment, guarantees service availability with stable operation, and enables real-time processing of large amounts of data within the enterprise and secure protection of important data through Edge solutions.
Features
Stable Operation: Private 5G Cloud provides a combination of verified 5G Core quality and stability, and cloud security policies. Additionally, it offers 24-hour monitoring services by 5G professional operation personnel. This enables regular system diagnostics and prompt action in case of failures, allowing for stable service operation.
Efficient cost management: By configuring a Private 5G network on the Samsung Cloud Platform, you can reduce the initial investment cost for building a 5G system and minimize operating costs. Fast and secure cloud-based Private 5G network configuration is possible, as well as flexible operation and capacity expansion.
Private Edge solution provision: Provides application management and Edge Computing services based on Kubernetes applying 3GPP MEC standards. It configures the Edge Computing service environment within the customer’s company, enabling ultra-low latency data transmission, and since all data and services are located within the customer’s company, it can safely protect the company’s valuable information.
Various integration functions: Various solutions and software verified in the Private 5G Open Lab can be used in the marketplace. Customers can introduce new technologies such as AI, machine learning, and big data by utilizing already configured development environments and related ecosystems, and customized solutions can be used.
Service Configuration Diagram
Figure. Private 5G Cloud Configuration Diagram
Provided Functions
Private 5G Cloud provides the following functions.
Private 5G Cloud Core: provides cloud-based 5G wireless network and authentication services
Private 5G Core CP: cloud area where customer-specific 5G signaling control is processed
UPF: processes each customer's service data within the customer's business area
5G Network: dedicated network connectivity between the cloud and customer premises (VPN/Dedicated Line)
Components
Private 5G Cloud provides services across the entire 5G network within the customer’s business site, and the components are as follows.
5G Core network
User authentication, session management, data processing
User Portal: User Policy Creation/Change/Management
Administrator Portal: Authentication Policy Management and Monitoring
Network Solution
Cloud network solution configuration such as VPN and dedicated lines
Regional Provision Status
Private 5G Cloud can be provided in the following environments.
Region
Availability
Korea West 1 (kr-west1)
Provided
Korea East 1 (kr-east1)
Provided
Korea South 1 (kr-south1)
Provided
Korea South 2 (kr-south2)
Provided
Korea South 3 (kr-south3)
Provided
Table. Private 5G Cloud Provision Status by Region
Prerequisite Services
The following services must be configured before creating this service. For detailed information, refer to each service's guide and prepare in advance.
A service that connects the customer network and Samsung Cloud Platform through an encrypted virtual private network
Connects regions and customer sites with IPsec tunneling over the internet to provide security services
Table. Private 5G Cloud Prerequisite Services
5.16.2 - How-to guides
Users can create the Private 5G Cloud service by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating Private 5G Cloud
You can create and use the Private 5G Cloud service through the Samsung Cloud Platform Console.
To create a Private 5G Cloud, follow these steps:
Click the All Services > Networking > Private 5G Cloud menu. You will be redirected to the Private 5G Cloud Service Home page.
On the Service Home page, click the Private 5G Cloud service request button. You will be redirected to the Service Request page.
On the Service Request page, select or enter the required information for Private 5G Cloud.
Note
In Task Classification, select Private 5G Cloud service creation.
Input Item
Detailed Description
Title
Title of the service you want to request
Region
Select the Samsung Cloud Platform location
Automatically entered as the region of the project
Service
Select the service group and service
Service Group: Networking
Service: Private 5G Cloud
Task Classification
Select the task you want to perform
Private 5G Cloud service creation: Select if you want to create this service
Content
Enter the detailed information required for Private 5G Cloud creation [Basic Information]
Account Name: Enter account name
Customer Name/Affiliated Company/Department/E-mail/Phone Number: Enter user information
Service Start Date: Enter the desired service start date
[Application Information]
Usage Purpose: Enter the purpose of using Private 5G Cloud
Example: Manufacturing, Logistics, Robot, CCTV, Video Analysis
Usage Period (Default 3 years): Enter the service usage period
Attachment
Only upload when you have additional files to share
Attached files can be up to 5 files, each within 5MB
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Detailed contents of private 5G cloud service creation request items
Review the entered creation information and click the Request button.
Once creation is complete, check the Service Request List page to confirm that the resource has been created.
Creation involves procedures such as physical server purchase, delivery, configuration, and site construction, and takes at least 8 weeks based on business days.
Checking Private 5G Cloud Application History
You can check the application and cancellation details of the Private 5G Cloud service on the Samsung Cloud Platform Console.
To check the application history of the Private 5G Cloud service, follow these steps:
Click the All Services > Management > Support Center menu. You will be redirected to the Support Center Service Home page.
On the Support Center Service Home page, click the Service Request menu. You will be redirected to the Service Request List page.
On the Service Request List page, click the title of the service request you submitted. You will be redirected to the Service Request Details page.
On the Service Request Details page, check the application status and information.
Note
When a service request is received, the sales/operations manager reviews the application details and proceeds with the Private 5G Cloud service based on the entered information.
Private 5G Cloud Cancellation
You can cancel the Private 5G Cloud service whose contract period has expired to reduce operating costs.
Reference
If the service is canceled, the service in operation may be stopped immediately, so the cancellation work must be proceeded after fully considering the impact that occurs when the service is stopped.
To apply for service cancellation before the contract period expires, the user’s contract manager and SamsungSDS contract manager must complete the cancellation of the corresponding Private 5G Cloud contract through prior consultation before cancellation, and then proceed with the cancellation according to the following procedure.
To cancel Private 5G Cloud, follow these steps:
Click the All Services > Networking > Private 5G Cloud menu. You will be directed to the Private 5G Cloud Service Home page.
On the Service Home page, click the Private 5G Cloud service request button. You will be directed to the Service Request page.
On the Service Request page, select or enter the information required for Private 5G Cloud.
Notice
In Task Classification, select Private 5G Cloud service cancellation.
| Input Item | Detailed Description |
| Title | Title of the service request |
| Region | Location of the Samsung Cloud Platform; automatically entered as the region of the project |
| Service | Select the service group and service; Service group: Networking, Service: Private 5G Cloud |
| Task Classification | Select the task to perform; Private 5G Cloud service cancellation: select to cancel the service |
| Content | Enter detailed information for the cancellation; [Basic Information] Account Name, Customer Name/Company/Department/E-mail/Phone Number, Desired Cancellation Date |
| Attachment File | Upload only if you have additional files to share; up to 5 files, max 5 MB each; allowed file types: doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif |
Table. Private 5G Cloud service cancellation request item detailed content
Check the entered information and click the Request button.
After requesting cancellation, check the Service Request List page to confirm that the cancellation has been processed.
The cancellation is completed after the physical server is returned, and takes at least 3 to 4 weeks (business days).
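The attachment constraints in the request form above can be pre-checked before submission. Below is a minimal sketch, assuming the limits listed in the table; the function name and input shape are illustrative, not part of the Console.

```python
import os

# Assumed constraints from the service request form above (illustrative):
# up to 5 attachments, max 5 MB each, and a fixed set of allowed extensions.
ALLOWED_EXTENSIONS = {
    "doc", "docx", "xls", "xlsx", "ppt", "pptx",
    "hwp", "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif",
}
MAX_FILES = 5
MAX_SIZE_BYTES = 5 * 1024 * 1024  # 5 MB

def validate_attachments(files):
    """files: list of (filename, size_in_bytes). Returns a list of error strings."""
    errors = []
    if len(files) > MAX_FILES:
        errors.append(f"too many files: {len(files)} > {MAX_FILES}")
    for name, size in files:
        ext = os.path.splitext(name)[1].lstrip(".").lower()
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"{name}: extension '{ext}' not allowed")
        if size > MAX_SIZE_BYTES:
            errors.append(f"{name}: {size} bytes exceeds the 5 MB limit")
    return errors
```

For example, a single 2 MB PDF passes with no errors, while a sixth file or an unsupported extension produces an error entry.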
5.16.3 - Release Note
Private 5G Cloud
2025.09.08
NEW Private 5G Cloud Service Release
A Private 5G Cloud product that provides 5G services to customers based on the Samsung Cloud Platform has been launched.
6 - Database
It provides a service to easily create and manage relational, unstructured, and data analysis databases in a web environment.
6.1 - EPAS(DBaaS)
6.1.1 - Overview
EPAS (EnterpriseDB Postgres Advanced Server) is an Oracle-compatible RDBMS based on PostgreSQL. Samsung Cloud Platform provides EPAS as a Database as a Service (DBaaS), allowing you to use EPAS without separate installation and management.
EPAS(DBaaS) Service
Service Overview
EPAS(DBaaS) is a fully managed database service that provides an easy-to-use environment for creating, configuring, and managing EPAS databases. You can focus on application development and business logic without worrying about database installation, patching, backup, and recovery.
Service Architecture
EPAS(DBaaS) consists of the following components:
Database Cluster: A cluster of one or more database servers that provides high availability and load balancing
Storage: Block storage for storing database data, archives, and backups
Network: VPC-based network configuration for secure data transmission
Monitoring: Integrated monitoring and alarming through ServiceWatch
Service Features
EPAS(DBaaS) provides the following features:
Auto Provisioning: Automatically provision database servers with just a few clicks without manual installation
Operation Control: Easily start, stop, and restart database servers
Backup and Recovery: Automated backup and point-in-time recovery for data protection
Version Management: Support for multiple EPAS versions with easy upgrades
Replica: Create read replicas for scaling read operations
Audit: Database audit logging for compliance and security
Parameter: Database parameter management for performance tuning
Monitoring: Real-time monitoring and alerting for database health
User Management: Database user and permission management
Access Control: IP-based access control for enhanced security
Archive: Archive management for long-term data retention
Log Export: Database log export for analysis and troubleshooting
Migration: Easy migration from on-premises or other cloud databases
OS Kernel Upgrade: Automated OS kernel upgrades for security and performance
Engine Versions
EPAS(DBaaS) supports the following EPAS versions:
EPAS 14.17
EPAS 15.11
EPAS 16.4
EPAS 17.6
Server Types
EPAS(DBaaS) provides the following server types:
Standard: Standard specifications for general workloads
High Capacity: Large capacity servers with 24+ vCPU for high-performance workloads
Prerequisites
To use EPAS(DBaaS), you must have a VPC configured in your project. For more information on creating a VPC, see VPC Creation Guide.
6.1.1.1 - Server Type
EPAS(DBaaS) Server Type
EPAS(DBaaS) provides server types with various combinations of CPU, Memory, Network Bandwidth, and more.
When creating an EPAS(DBaaS), the Database Engine is installed according to the selected server type based on the intended use.
The server types supported by EPAS(DBaaS) are as follows:
For example, the server type db1v2m4 in the Standard classification is interpreted as follows:
| Classification | Example | Detailed Description |
| Server Type | Standard | Classification of provided server types; Standard: composed of standard specifications (vCPU, Memory) commonly used; High Capacity: server with higher capacity than Standard |
| Server Specification | db1 | Classification of provided server specifications and generations; db1: general specifications, where 1 is the generation; dbh2: h means high-capacity server specifications, where 2 is the generation |
| Server Specification | v2 | Number of vCores; v2: 2 virtual cores |
| Server Specification | m4 | Memory capacity; m4: 4 GB Memory |
Table. EPAS(DBaaS) Server Type Format
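The naming convention in the table above can be decoded programmatically. A minimal sketch follows; the regex and returned field names are our own, not an official API.

```python
import re

# Decodes server type names such as "db1v2m4" or "dbh2v24m48" following the
# convention above: db[h]<generation> + v<vCores> + m<memory in GB>.
SERVER_TYPE_RE = re.compile(r"^db(?P<high>h?)(?P<gen>\d+)v(?P<vcpu>\d+)m(?P<mem>\d+)$")

def parse_server_type(name):
    m = SERVER_TYPE_RE.match(name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    return {
        "classification": "High Capacity" if m.group("high") else "Standard",
        "generation": int(m.group("gen")),
        "vcpu": int(m.group("vcpu")),
        "memory_gb": int(m.group("mem")),
    }
```

For example, `parse_server_type("db1v2m4")` yields a Standard, generation-1 type with 2 vCores and 4 GB of memory.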
db1 Server Type
The db1 server type of EPAS(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
| Standard | db1v2m4 | 2 vCore | 4 GB | Up to 10 Gbps |
| Standard | db1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps |
| Standard | db1v2m16 | 2 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v2m24 | 2 vCore | 24 GB | Up to 10 Gbps |
| Standard | db1v2m32 | 2 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps |
| Standard | db1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v4m32 | 4 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v4m48 | 4 vCore | 48 GB | Up to 10 Gbps |
| Standard | db1v4m64 | 4 vCore | 64 GB | Up to 10 Gbps |
| Standard | db1v6m12 | 6 vCore | 12 GB | Up to 10 Gbps |
| Standard | db1v6m24 | 6 vCore | 24 GB | Up to 10 Gbps |
| Standard | db1v6m48 | 6 vCore | 48 GB | Up to 10 Gbps |
| Standard | db1v6m72 | 6 vCore | 72 GB | Up to 10 Gbps |
| Standard | db1v6m96 | 6 vCore | 96 GB | Up to 10 Gbps |
| Standard | db1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v8m64 | 8 vCore | 64 GB | Up to 10 Gbps |
| Standard | db1v8m96 | 8 vCore | 96 GB | Up to 10 Gbps |
| Standard | db1v8m128 | 8 vCore | 128 GB | Up to 10 Gbps |
| Standard | db1v10m20 | 10 vCore | 20 GB | Up to 10 Gbps |
| Standard | db1v10m40 | 10 vCore | 40 GB | Up to 10 Gbps |
| Standard | db1v10m80 | 10 vCore | 80 GB | Up to 10 Gbps |
| Standard | db1v10m120 | 10 vCore | 120 GB | Up to 10 Gbps |
| Standard | db1v10m160 | 10 vCore | 160 GB | Up to 10 Gbps |
| Standard | db1v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps |
| Standard | db1v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps |
| Standard | db1v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps |
| Standard | db1v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps |
| Standard | db1v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps |
| Standard | db1v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps |
| Standard | db1v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps |
| Standard | db1v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps |
| Standard | db1v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps |
| Standard | db1v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps |
| Standard | db1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps |
| Standard | db1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps |
| Standard | db1v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps |
| Standard | db1v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps |
| Standard | db1v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps |
Table. EPAS(DBaaS) Server Type Specifications - db1 Server Type
dbh2 Server Type
The dbh2 server type of EPAS(DBaaS) is provided with high-capacity server specifications and is suitable for large-scale data processing database workloads.
Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
| High Capacity | dbh2v24m48 | 24 vCore | 48 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m96 | 24 vCore | 96 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m192 | 24 vCore | 192 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m288 | 24 vCore | 288 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m256 | 32 vCore | 256 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m384 | 32 vCore | 384 GB | Up to 25 Gbps |
| High Capacity | dbh2v48m192 | 48 vCore | 192 GB | Up to 25 Gbps |
| High Capacity | dbh2v48m576 | 48 vCore | 576 GB | Up to 25 Gbps |
| High Capacity | dbh2v64m256 | 64 vCore | 256 GB | Up to 25 Gbps |
| High Capacity | dbh2v64m768 | 64 vCore | 768 GB | Up to 25 Gbps |
| High Capacity | dbh2v72m288 | 72 vCore | 288 GB | Up to 25 Gbps |
| High Capacity | dbh2v72m864 | 72 vCore | 864 GB | Up to 25 Gbps |
| High Capacity | dbh2v96m384 | 96 vCore | 384 GB | Up to 25 Gbps |
| High Capacity | dbh2v96m1152 | 96 vCore | 1152 GB | Up to 25 Gbps |
| High Capacity | dbh2v128m512 | 128 vCore | 512 GB | Up to 25 Gbps |
| High Capacity | dbh2v128m1536 | 128 vCore | 1536 GB | Up to 25 Gbps |
Table. EPAS(DBaaS) Server Type Specifications - dbh2 Server Type
6.1.1.2 - Monitoring Metrics
EPAS(DBaaS) Monitoring Metrics
The table below shows the performance monitoring metrics of EPAS (DBaaS) that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, refer to the Cloud Monitoring guide.
Create VPC
Create a VPC to host the EPAS(DBaaS) and Bastion host.
Create Subnet
Create a subnet within the VPC for the EPAS(DBaaS) and Bastion host.
Allocate Public IP
Allocate a public IP for the Bastion host to allow external access.
Create and Attach Internet Gateway
Create an Internet Gateway and attach it to the VPC to enable internet access.
Configure Security Group
Configure the security group to allow access to the EPAS(DBaaS) from the Bastion host.
Create Virtual Server (Bastion Host)
Create a Virtual Server to serve as the Bastion host.
Connect to EPAS(DBaaS) using pgAdmin
Use pgAdmin to connect to the EPAS(DBaaS) through the Bastion host.
Detailed Connection Steps
1. Create VPC
Click All Services > Network > VPC menu.
Click the Create button.
Enter the VPC name and CIDR block.
Click the Confirm button.
2. Create Subnet
Click All Services > Network > Subnet menu.
Click the Create button.
Select the VPC created in step 1.
Enter the subnet name and CIDR block.
Click the Confirm button.
3. Allocate Public IP
Click All Services > Network > Public IP menu.
Click the Allocate button.
Select the VPC and subnet created in steps 1 and 2.
Click the Confirm button.
4. Create and Attach Internet Gateway
Click All Services > Network > Internet Gateway menu.
Click the Create button.
Select the VPC created in step 1.
Click the Confirm button.
5. Configure Security Group
Click All Services > Network > Security Group menu.
Click the Create button.
Enter the security group name and description.
Add inbound rules to allow access from the Bastion host to the EPAS(DBaaS) port.
Click the Confirm button.
6. Create Virtual Server (Bastion Host)
Click All Services > Compute > Virtual Server menu.
Click the Create button.
Select the VPC and subnet created in steps 1 and 2.
Select the public IP allocated in step 3.
Select the security group configured in step 5.
Click the Confirm button.
7. Connect to EPAS(DBaaS) using pgAdmin
Install pgAdmin on your local machine.
Open pgAdmin and create a new server connection.
Enter the connection information:
Host: EPAS(DBaaS) private IP address
Port: EPAS(DBaaS) port number (default: 5432)
Database: Database name
Username: Database username
Password: Database password
Click the Connect button to connect to the EPAS(DBaaS).
Note
To connect through the Bastion host, configure SSH tunneling in pgAdmin or use an SSH client to establish a tunnel to the EPAS(DBaaS).
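The SSH tunneling setup described in the note above can be sketched as follows. All hostnames, usernames, key paths, and the local forwarding port are hypothetical placeholders; only the 5432 default DB port comes from this guide.

```python
# Sketch of connecting to EPAS(DBaaS) through the Bastion host via SSH
# local port forwarding. Hostnames, user, key path, and local port below
# are hypothetical placeholders.
def build_tunnel_command(bastion_ip, db_private_ip, db_port=5432,
                         local_port=15432, user="ubuntu",
                         key_path="~/.ssh/bastion.pem"):
    """Returns the ssh command that forwards local_port to the DB through the bastion."""
    return (f"ssh -i {key_path} -N -L {local_port}:{db_private_ip}:{db_port} "
            f"{user}@{bastion_ip}")

def build_dsn(dbname, user, host="127.0.0.1", port=15432):
    """libpq-style keyword/value connection string pgAdmin or psql would use
    once the tunnel is up (pointing at the local forwarded port)."""
    return f"host={host} port={port} dbname={dbname} user={user}"
```

With the tunnel running, pgAdmin connects to 127.0.0.1 on the forwarded local port instead of the private IP directly.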
6.1.2.2 - Managing
This section explains how to manage EPAS(DBaaS) resources.
DB User Management
You can manage database users for EPAS(DBaaS).
Creating DB User
To create a database user, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the DB User Management button. The DB User Management popup will appear.
Click the Create button. The Create DB User popup will appear.
Enter the user name, password, and privileges, then click the Confirm button.
Modifying DB User
To modify a database user, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the DB User Management button. The DB User Management popup will appear.
Click the Modify button next to the user you want to modify. The Modify DB User popup will appear.
Modify the user information and click the Confirm button.
Deleting DB User
To delete a database user, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the DB User Management button. The DB User Management popup will appear.
Click the Delete button next to the user you want to delete. The Delete DB User popup will appear.
Click the Confirm button to delete the user.
DB Access Control Management
You can manage access control for EPAS(DBaaS).
Setting IP Access Control
To set IP access control, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the IP Access Control button. The IP Access Control popup will appear.
Enter the IP address or CIDR block and click the Add button.
Click the Confirm button to save the settings.
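The effect of an allow-list of IP addresses and CIDR blocks like the one registered above can be illustrated with Python's stdlib ipaddress module. This is a sketch of the matching concept, not the service's actual access-control logic.

```python
import ipaddress

# Sketch: would a client IP be admitted by a list of allowed
# IP addresses / CIDR blocks such as those registered in the popup?
def is_allowed(client_ip, allowed_entries):
    client = ipaddress.ip_address(client_ip)
    for entry in allowed_entries:
        # strict=False lets host addresses like "10.0.0.5" act as /32 networks
        if client in ipaddress.ip_network(entry, strict=False):
            return True
    return False
```

For example, registering 10.0.1.0/24 admits any client in that subnet while rejecting addresses outside it.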
Modifying IP Access Control
To modify IP access control, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the IP Access Control button. The IP Access Control popup will appear.
Click the x button next to the IP address you want to delete.
Click the Confirm button to save the settings.
Archive Management
You can manage archive settings for EPAS(DBaaS).
Setting Archive
To set archive, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the Archive button. The Archive popup will appear.
Enable or disable archive and configure the settings.
Click the Confirm button to save the settings.
DB Log Export
You can export database logs for EPAS(DBaaS).
Exporting DB Log
To export database logs, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to manage. You will be directed to the EPAS(DBaaS) Detail page.
Click the DB Log Export button. The DB Log Export popup will appear.
Select the log type and date range, then click the Export button.
Minor Version Upgrade
You can upgrade the minor version of EPAS(DBaaS).
Note
Minor version upgrade will restart the database instance. Service interruption will occur during the upgrade.
Upgrading Minor Version
To upgrade the minor version, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to upgrade. You will be directed to the EPAS(DBaaS) Detail page.
Click the Minor Version Upgrade button. The Minor Version Upgrade popup will appear.
Select the target version and click the Confirm button.
Migration Configuration
You can configure migration for EPAS(DBaaS).
Setting Migration
To set migration, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to migrate. You will be directed to the EPAS(DBaaS) Detail page.
Click the Migration button. The Migration popup will appear.
Enter the source database connection information and migration settings.
Click the Confirm button to start the migration.
OS Kernel Upgrade
You can upgrade the OS kernel for EPAS(DBaaS).
Note
OS kernel upgrade will restart the database instance. Service interruption will occur during the upgrade.
Upgrading OS Kernel
To upgrade the OS kernel, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to upgrade. You will be directed to the EPAS(DBaaS) Detail page.
Click the OS Kernel Upgrade button. The OS Kernel Upgrade popup will appear.
Review the upgrade information and click the Confirm button.
6.1.2.3 - Read Replica
This section explains how to configure read replicas for EPAS(DBaaS).
Creating Read Replica
You can create read replicas to scale read operations and improve performance.
Note
Read replicas are asynchronously replicated from the primary database. There may be a slight delay in data synchronization.
To create a read replica, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the primary database resource you want to create a replica for. You will be directed to the EPAS(DBaaS) Detail page.
Click the Create Read Replica button. The Create Read Replica popup will appear.
Enter the following information:
Replica Name: Name of the read replica
Server Type: Server type for the replica
Storage: Storage configuration for the replica
Click the Confirm button to create the read replica.
Configuring Read Replica
You can configure the settings of a read replica.
Modifying Read Replica
To modify a read replica, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the read replica resource you want to modify. You will be directed to the EPAS(DBaaS) Detail page.
Click the Modify button. The Modify Read Replica popup will appear.
Modify the settings and click the Confirm button.
Promoting Read Replica
You can promote a read replica to a standalone primary database.
Caution
Promoting a read replica will break the replication relationship with the primary database. The promoted replica will become a standalone database and will no longer receive updates from the primary.
To promote a read replica, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the read replica resource you want to promote. You will be directed to the EPAS(DBaaS) Detail page.
Click the Promote button. The Promote Read Replica popup will appear.
Review the promotion information and click the Confirm button to promote the replica.
Deleting Read Replica
You can delete a read replica.
Caution
Deleting a read replica will permanently delete all data and cannot be recovered.
To delete a read replica, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the read replica resource you want to delete. You will be directed to the EPAS(DBaaS) Detail page.
Click the Delete button. The Delete Read Replica popup will appear.
Click the Confirm button to delete the read replica.
6.1.2.4 - DB Backup and Recovery
Users can configure backups for EPAS(DBaaS) through the Samsung Cloud Platform Console and recover using the backed up files.
EPAS(DBaaS) Backup
To ensure users’ data is safely stored, EPAS(DBaaS) provides data backup functionality based on its own backup commands. Additionally, you can verify whether backups were performed normally through the backup history feature and delete backed up files.
Caution
For stable backup, it is recommended to add separate BACKUP storage or sufficiently expand the storage capacity. In particular, if the backup target data exceeds 100 GB and there are many data changes, please secure additional storage equivalent to approximately 60% of the data capacity. For information on adding and expanding storage, see EPAS(DBaaS) Storage Addition and EPAS(DBaaS) Storage Expansion guides.
If backup is enabled, backups are performed at the specified time, and additional charges are incurred depending on the backup capacity.
If the backup setting is changed to Not Set, backup execution stops immediately, and stored backup data is deleted and can no longer be used.
To set up backup, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource for which you want to set up backup. You will be directed to the EPAS(DBaaS) Detail page.
Click the Modify button in the Backup section. The Backup Settings popup will appear.
To set up backup, click Enable in the Backup Settings popup, select the retention period, backup start time, and archive backup cycle, then click the Confirm button.
To stop backup settings, uncheck Enable in the Backup Settings popup and click the Confirm button.
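The storage-sizing rule of thumb from the caution above can be sketched as follows. The function name and the behavior below the 100 GB threshold are our own assumptions; the 60% ratio comes from the guide.

```python
# Sketch of the storage-sizing guidance above: for backup targets over
# 100 GB with many data changes, reserve roughly an additional 60% of the
# data capacity for backup storage.
def recommended_backup_storage_gb(data_gb, heavy_changes=True):
    if data_gb > 100 and heavy_changes:
        return round(data_gb * 0.6)
    # Below the threshold, the guide states no specific ratio; returning 0
    # here just means "no additional reservation mandated".
    return 0
```

For a 500 GB database with frequent changes, this suggests reserving about 300 GB of additional backup storage.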
Checking Backup History
Note
To set up notifications for backup success and failure, you can configure them through the Notification Manager service. For detailed usage guide on notification policy settings, see Creating Notification Policy.
To view backup history, follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource for which you want to check backup history. You will be directed to the EPAS(DBaaS) Detail page.
Click the Backup History button. The Backup History popup will appear.
In the Backup History popup, you can check the backup status, version, backup start date/time, backup completion date/time, and capacity.
Deleting Backup Files
To delete backup history, follow these steps:
Caution
Backup files cannot be restored after deletion. Please make sure to verify that the data is unnecessary before deleting.
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource for which you want to check backup history. You will be directed to the EPAS(DBaaS) Detail page.
Click the Backup History button. The Backup History popup will appear.
In the Backup History popup, check the file you want to delete and click the Delete button.
EPAS(DBaaS) Recovery
If recovery from backup files is required due to failure or data loss, you can recover to a specific point in time through the recovery function. When EPAS(DBaaS) recovery is performed, a new server is created with the OS image from the time of initial provisioning, the DB is installed with the version from that backup point, and recovery proceeds using the DB configuration information and data.
Caution
To perform recovery, the target must have at least as much Disk capacity as the data being recovered. If the Disk capacity is insufficient, recovery may fail.
To recover EPAS(DBaaS), follow these steps:
Click All Services > Database > EPAS(DBaaS) menu. You will be directed to the Service Home page for EPAS(DBaaS).
On the Service Home page, click the EPAS(DBaaS) menu. You will be directed to the EPAS(DBaaS) List page.
On the EPAS(DBaaS) List page, click the resource you want to recover. You will be directed to the EPAS(DBaaS) Detail page.
Click the Database Recovery button. You will be directed to the Database Recovery page.
After entering the corresponding information in the Database Recovery Configuration section, click the Complete button.
| Item | Required | Detailed Description |
| Recovery Type | Required | Set the point in time to recover; Backup Point (recommended): recovery based on backup files, selected from the displayed list of backup points; User-Specified Point: recover to any point within the recoverable range, which extends from the first backup start time to 1 hour/30 minutes/10 minutes/5 minutes before the current time depending on the Archive backup cycle setting; select the desired date and time |
| Server Name Prefix | Required | Name of the recovered DB server; enter 3~16 characters starting with a lowercase English letter, using lowercase letters, numbers, and the special character (-); the actual server name is created with a postfix such as 001, 002 |
| Cluster Name | Required | Name of the recovered DB cluster; enter 3~20 characters using English letters; a cluster is a unit that groups multiple servers |
| Service Type > Server Type | Required | Server type of the recovered DB; Standard: standard specifications generally used; High Capacity: large capacity servers with 24 vCore or more |
| Service Type > Planned Compute | Optional | Status of resources with Planned Compute set; In Use: number of such resources in use; Set: number of resources with Planned Compute set; Coverage Preview: amount applied by Planned Compute for each resource; Create Planned Compute Service: moves to the Planned Compute service application page |
If additional Extension installation is required beyond the items above, submit an inquiry through Support Center > Inquiry.
Once the inquiry is received, it is reviewed and the installation is carried out. Note that some Extensions may not work properly during Replica configuration and recovery.
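The user-specified recovery window described in the table above can be sketched as follows. The cycle labels and function name are illustrative; the delays (1 hour / 30 / 10 / 5 minutes) come from the guide.

```python
from datetime import datetime, timedelta

# Sketch: the latest selectable user-specified recovery point depends on the
# Archive backup cycle setting (1 hour / 30 / 10 / 5 minutes before now).
CYCLE_DELAY = {
    "1h": timedelta(hours=1),
    "30m": timedelta(minutes=30),
    "10m": timedelta(minutes=10),
    "5m": timedelta(minutes=5),
}

def recoverable_window(first_backup_at, now, archive_cycle):
    """Returns (earliest, latest) datetimes selectable for a user-specified point."""
    latest = now - CYCLE_DELAY[archive_cycle]
    return first_backup_at, latest
```

For example, with a 30-minute archive cycle at 12:00, the latest selectable point is 11:30, while the earliest remains the first backup start time.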
6.1.3 - API Reference
API
6.1.4 - CLI Reference
CLI Reference
6.1.5 - Release Notes
NEW
Added disaster recovery replica function
Added OS upgrade function
NEW
Added DB user management function
Added Archive management function
Added audit log export function
Added backup notification function
Added migration function
Added 2nd generation server
NEW
Launched EPAS(DBaaS) service
6.2 - PostgreSQL(DBaaS)
6.2.1 - Overview
Service Overview
PostgreSQL(DBaaS) is an open-source relational database management system (RDBMS). Samsung Cloud Platform provides an environment that automates PostgreSQL installation and performs management functions for operation through a web-based Console.
PostgreSQL(DBaaS) is designed as a high-availability architecture based on storage-level data replication, minimizing failover time. To prevent data loss, changes on the Active server are synchronously replicated to the Standby server. Up to 5 read-only Replica servers are provided for read load distribution and disaster recovery (DR). In addition, to prepare for problems with the DB server or data, automatic backup at a user-specified time is provided, supporting data recovery to a desired point in time.
Figure. PostgreSQL(DBaaS) Architecture
Provided Features
PostgreSQL(DBaaS) provides the following features.
Auto Provisioning: Database (DB) installation and configuration via UI, providing Active-Standby redundancy configuration based on storage replication. Automatic failover to Standby when Active server fails.
Operation Control Management: Controls the status of a running server. You can start, stop, or restart a server to resolve DB issues or to apply configuration changes. In an HA configuration, users can directly switch the Active and Standby nodes through Switch-over.
Backup and Recovery: Provides a data backup function based on the engine's own backup commands. The backup time and retention period can be set by the user, and additional charges are incurred depending on backup capacity. A recovery function for backed-up data is also provided: when the user performs a recovery, a separate DB is created, and recovery proceeds to the point in time selected by the user (a backup storage point or a user-specified point). When recovering to a user-specified point, the recovery point can be set up to 5 minutes, 10 minutes, 30 minutes, or 1 hour before the current time, based on the stored backup files and archive files.
Version Management: Provides a version upgrade (Minor) function for feature improvements and security patches. The user can choose whether to perform a backup as part of the upgrade; if a backup is selected, data is backed up before the patch is executed and the DB engine is then updated.
Replica Configuration: Up to 5 Read Replicas can be configured in the same or a different region for read load distribution and disaster recovery (DR).
Audit Settings: Supports an audit function for major activities in the database.
Parameter Management: DB configuration parameters can be modified for performance improvement and security.
Service Status Check: Checks the latest status of the current DB service.
Monitoring: CPU, memory, and DB performance monitoring information can be checked through the Cloud Monitoring service.
DB User Management: Manages DB account (user) information registered in the DB.
DB Access Control Management: Access-allowed IPs can be registered and canceled for the DB accounts registered in the DB.
Archive Management: The Archive file retention period (1~35 days) and Archive mode (On/Off) can be set on the DB server.
DB Log Export: Logs stored through Audit settings can be exported to the user's Object Storage.
Migration: Supports migration using a replication method that synchronizes data with the operating database in real time, without service interruption.
OS Kernel Upgrade: The OS Kernel can be upgraded for feature improvements and security patch application.
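The user-specified recovery point described above is restricted to a fixed set of offsets before the current time (5 minutes, 10 minutes, 30 minutes, or 1 hour). The arithmetic can be sketched with stdlib date handling; this is an illustration only, not an API of the service:

```python
from datetime import datetime, timedelta

# Offsets the console offers for a user-specified recovery point
# (5 min / 10 min / 30 min / 1 hour before the current time).
RECOVERY_OFFSETS = {
    "5min": timedelta(minutes=5),
    "10min": timedelta(minutes=10),
    "30min": timedelta(minutes=30),
    "1hour": timedelta(hours=1),
}

def recovery_point(now: datetime, offset: str) -> datetime:
    """Return the recovery target time for the chosen offset."""
    if offset not in RECOVERY_OFFSETS:
        raise ValueError(f"unsupported offset: {offset}")
    return now - RECOVERY_OFFSETS[offset]

now = datetime(2025, 1, 15, 12, 0, 0)
print(recovery_point(now, "30min"))  # 2025-01-15 11:30:00
```

Recovery to the computed point still depends on the stored backup and archive files covering that time, as noted above.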
Components
PostgreSQL(DBaaS) provides pre-verified engine versions and various server types according to the open source support policy. Users can select the option that matches the scale of the service they want to configure.
Engine Versions
Engine versions supported by PostgreSQL(DBaaS) are as follows.
Technical support is available until the supplier's EoTS (End of Technical Service) date, and the EOS date, after which new instances can no longer be created, is set to 6 months before the EoTS date.
EOS and EoTS dates may change according to supplier policy, so refer to the supplier's license management policy page for details.
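The six-month relation between EOS and EoTS above can be sketched with stdlib date arithmetic. The date used below is a placeholder, not an actual support date, and the sketch assumes the day of the month exists six months earlier:

```python
from datetime import date

def eos_from_eots(eots: date) -> date:
    """Derive the EOS date as 6 calendar months before the EoTS date.

    Assumes the same day-of-month exists in the target month; for
    month-end edge cases a real implementation would need to clamp.
    """
    month = eots.month - 6
    year = eots.year
    if month < 1:
        month += 12
        year -= 1
    return eots.replace(year=year, month=month)

print(eos_from_eots(date(2026, 11, 14)))  # 2026-05-14
```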
Standard: Standard specifications (vCPU, Memory) generally used
High Capacity: Large capacity server specifications with 24vCore or more
Server Specification
db1
Provided server specifications
db1: Standard specifications (vCPU, Memory) generally used
dbh2: Large-capacity server specifications with 24 vCore or more
Server Specification
v2
vCore count
v2: 2 virtual cores
Server Specification
m4
Memory capacity
m4: 4GB Memory
Table. PostgreSQL(DBaaS) Server Type Components
Prerequisite Services
The following services must be configured in advance before creating this service. For details, refer to the guide provided for each service and prepare in advance.
Service
Detailed Description
VPC
Service that provides an independent virtual network in a cloud environment
Table. PostgreSQL(DBaaS) Prerequisite Services
6.2.1.1 - Server Type
PostgreSQL(DBaaS) server type
PostgreSQL(DBaaS) provides server types composed of various combinations of CPU, Memory, and Network Bandwidth.
When creating PostgreSQL(DBaaS), the Database Engine is installed according to the server type selected for the purpose of use.
The server types supported by PostgreSQL(DBaaS) are as follows.
Standard db1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type distinction
Standard: Composed of standard specifications (vCPU, Memory) commonly used
High Capacity: Server specifications with higher capacity than Standard
Server specification
db1
Provided server type distinction and generation
db: "db" means a general specification, and 1 means the generation
dbh: "h" means a large-capacity server specification, and 2 means the generation
Server specification
v2
Number of vCores
v2: 2 virtual cores
Server specification
m4
Memory capacity
m4: 4GB Memory
Fig. PostgreSQL(DBaaS) server type format
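The naming format explained in the table above (family and generation, vCore count, memory capacity) can be illustrated with a small parser. This is a sketch for understanding the format only; the console does not expose such a function:

```python
import re

# Server type format: <family><generation>v<vCores>m<memoryGB>
# e.g. "db1v2m4"    -> Standard,      gen 1,  2 vCores,  4 GB
#      "dbh2v24m48" -> High Capacity, gen 2, 24 vCores, 48 GB
PATTERN = re.compile(r"^(dbh|db)(\d+)v(\d+)m(\d+)$")

def parse_server_type(name: str) -> dict:
    """Split a server type string into its documented components."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    family, gen, vcores, mem = m.groups()
    return {
        "classification": "High Capacity" if family == "dbh" else "Standard",
        "generation": int(gen),
        "vcores": int(vcores),
        "memory_gb": int(mem),
    }

print(parse_server_type("db1v2m4"))
```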
db1 server type
The db1 server type of PostgreSQL(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps of networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
db1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
db1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
db1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
db1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
db1v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
db1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
db1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
db1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
db1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
db1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
db1v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
db1v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
db1v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
db1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
db1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
db1v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
db1v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
db1v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
db1v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
db1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
db1v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
db1v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
db1v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
db1v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
db1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
db1v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
db1v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
db1v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
db1v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
db1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
db1v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
db1v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
db1v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
db1v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
db1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
db1v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
db1v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
db1v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
db1v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
db1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. PostgreSQL(DBaaS) server type specifications - db1 server type
dbh2 server type
The dbh2 server type of PostgreSQL(DBaaS) is provided with large-capacity server specifications and is suitable for database workloads for large-scale data processing.
Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps of networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
High Capacity
dbh2v24m48
24 vCore
48 GB
Up to 25 Gbps
High Capacity
dbh2v24m96
24 vCore
96 GB
Up to 25 Gbps
High Capacity
dbh2v24m192
24 vCore
192 GB
Up to 25 Gbps
High Capacity
dbh2v24m288
24 vCore
288 GB
Up to 25 Gbps
High Capacity
dbh2v32m64
32 vCore
64 GB
Up to 25 Gbps
High Capacity
dbh2v32m128
32 vCore
128 GB
Up to 25 Gbps
High Capacity
dbh2v32m256
32 vCore
256 GB
Up to 25 Gbps
High Capacity
dbh2v32m384
32 vCore
384 GB
Up to 25 Gbps
High Capacity
dbh2v48m192
48 vCore
192 GB
Up to 25 Gbps
High Capacity
dbh2v48m576
48 vCore
576 GB
Up to 25 Gbps
High Capacity
dbh2v64m256
64 vCore
256 GB
Up to 25 Gbps
High Capacity
dbh2v64m768
64 vCore
768 GB
Up to 25 Gbps
High Capacity
dbh2v72m288
72 vCore
288 GB
Up to 25 Gbps
High Capacity
dbh2v72m864
72 vCore
864 GB
Up to 25 Gbps
High Capacity
dbh2v96m384
96 vCore
384 GB
Up to 25 Gbps
High Capacity
dbh2v96m1152
96 vCore
1152 GB
Up to 25 Gbps
High Capacity
dbh2v128m512
128 vCore
512 GB
Up to 25 Gbps
High Capacity
dbh2v128m1536
128 vCore
1536 GB
Up to 25 Gbps
Table. PostgreSQL(DBaaS) server type specifications - dbh2 server type
6.2.1.2 - Monitoring Metrics
PostgreSQL(DBaaS) Monitoring Metrics
The following table shows the performance monitoring metrics of PostgreSQL(DBaaS) that can be checked through Cloud Monitoring. For detailed instructions on using Cloud Monitoring, refer to the Cloud Monitoring guide.
Number of Long-Running SQL Queries (over 5 minutes)
cnt
Tablespace Used
Table Space Size
bytes
Tablespace Used [Total]
Table Space Size
bytes
Tablespace Used Bytes [MB]
File System Directory Usage (MB)
MB
Tablespaces [Total]
File System Directory Usage (MB)
MB
Transaction Time Max [Long]
Longest Running Transaction Time (minutes)
min
Transaction Time Max Total [Long]
Longest Running Transaction Time (minutes)
min
Wait Locks
Number of Sessions Waiting for Locks (per DB)
cnt
Wait Locks [Long Total]
Number of Sessions Waiting for Locks for a Long Time (over 300 seconds)
cnt
Wait Locks [Long]
Number of Sessions Waiting for Locks
cnt
Wait Locks [Total]
Total Number of Sessions Waiting for Locks
cnt
Waiting Sessions
Number of Waiting Sessions
cnt
Waiting Sessions [Total]
Total Number of Waiting Sessions
cnt
Table. PostgreSQL(DBaaS) Monitoring Metrics
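Several of the metrics above are threshold counts — queries running over 5 minutes, or sessions waiting on locks for over 300 seconds. How such a count is derived can be sketched as follows; the session durations are made up for illustration:

```python
LONG_QUERY_THRESHOLD_SEC = 300  # "over 5 minutes" in the metrics above

def count_long_running(durations_sec, threshold=LONG_QUERY_THRESHOLD_SEC):
    """Count sessions whose running time exceeds the threshold."""
    return sum(1 for d in durations_sec if d > threshold)

# Hypothetical sample of query runtimes in seconds
samples = [12.5, 301.0, 45.0, 900.0, 299.9]
print(count_long_running(samples))  # 2
```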
6.2.2 - How-to guides
Users can create the PostgreSQL(DBaaS) service by entering the required information and selecting detailed options in the Samsung Cloud Platform Console.
Creating PostgreSQL(DBaaS)
You can create and use PostgreSQL(DBaaS) service through Samsung Cloud Platform Console.
Note
Configure the VPC's Subnet type as General before creating the service.
If the Subnet type is Local, this Database service cannot be created.
If storing large data of 2 TB or more, backups may take a long time or DB performance itself may degrade. To prevent this, operational measures are needed, such as cleaning up unnecessary data or moving old data to a statistics collection environment.
To create PostgreSQL(DBaaS), follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the Create PostgreSQL(DBaaS) button. You will be directed to the Create PostgreSQL(DBaaS) page.
On the Create PostgreSQL(DBaaS) page, enter information required for service creation and select detailed options.
Select required information in Image and Version Selection section.
Category
Required
Detailed Description
Image Version
Required
Provides PostgreSQL(DBaaS) version list
Table. PostgreSQL(DBaaS) Image and Version Selection Items
Enter or select required information in Service Information Entry section.
Category
Required
Detailed Description
Server Name Prefix
Required
Name of the server where the DB will be installed
Enter 3~13 characters starting with a lowercase English letter, using lowercase letters, numbers, and the special character (-)
The actual server name is created by appending a postfix such as 001 or 002 to this prefix
Cluster Name
Required
Name of the cluster in which the DB servers are configured
Enter 3~20 characters using English letters
A cluster is a unit that groups multiple servers
Service Type > Server Type
Required
Server type on which the DB will be installed
Standard: Standard specifications generally used
High Capacity: Large-capacity servers with 24 vCore or more
Service Type > Storage
Required
Storage where the DB data is stored
The set Storage type is applied identically to additional storage
Enter the capacity in multiples of 8, in the range of 16~5,120
A separate TEMP storage must be allocated for large sorts; otherwise, SQL executions or monthly batch jobs may cause a service interruption
Additional: Storage area for DATA, Archive, TEMP, or Backup data
Select Use, then enter the storage purpose and capacity
The Storage type is applied identically to the type set for DATA; the capacity can be entered in multiples of 8, in the range of 16~5,120
Click the + button to add storage and the x button to delete it. Up to 9 can be added
Backup data is temporarily saved in the BACKUP storage before being transmitted
If backup data exceeds 100 GB and data changes are frequent, adding a separate BACKUP storage is recommended for stable backups, with a backup capacity of about 60% of the DATA capacity
If BACKUP storage is not added, the /tmp area is used, and the backup fails if its capacity is insufficient
Only one Block Storage is allocated per service for each of the Archive, TEMP, and BACKUP storages
Redundancy Configuration
Optional
Whether to use redundancy
If redundancy is used, the DB instance is configured as an Active DB and a Standby DB
Network > Common Settings
Required
Network settings for the servers created in the service
Select this when applying the same settings to all installed servers
Select a pre-created VPC and Subnet, IP, and Public NAT
IPs can only be auto-assigned
The Public NAT function can be used only if the VPC is connected to an Internet Gateway; if Use is checked, you can select from the IPs reserved in the VPC product's Public IP. For details, see Creating Public IP
Network > Per-Server Settings
Required
Network settings for the servers created in the service
Select this when applying different settings per installed server
Select a pre-created VPC and Subnet, IP, and Public NAT
Enter an IP for each server
The Public NAT function can be used only if the VPC is connected to an Internet Gateway. If Use is checked, you can select from the IPs reserved in the VPC product's Public IP. For details, see Creating Public IP
IP Access Control
Optional
Service access policy settings
Sets the access policy for the IPs entered on this page, so separate Security Group policy settings are not required
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete an entered IP, click the x button next to it
Maintenance Window
Optional
DB maintenance window
If Use is selected, set the day of the week, start time, and duration
Setting a maintenance window is recommended for stable DB management. Patch operations proceed at the set time, and a service interruption occurs during that window
If set to Not Use, Samsung SDS is not responsible for problems caused by patches not being applied
Table. PostgreSQL(DBaaS) Service Information Entry Items
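The storage sizing rules in the table above — capacities in multiples of 8 within 16~5,120 GB, and the recommended BACKUP size of about 60% of the DATA capacity — can be collected into a validation sketch. This is an illustration only, not part of the service API:

```python
def valid_storage_size(gb: int) -> bool:
    """Capacity must be a multiple of 8 in the range 16~5,120 GB."""
    return 16 <= gb <= 5120 and gb % 8 == 0

def recommended_backup_size(data_gb: int) -> int:
    """About 60% of DATA capacity, rounded up to the next multiple of 8."""
    raw = data_gb * 0.6
    size = int(-(-raw // 8) * 8)  # ceiling to a multiple of 8
    return max(size, 16)          # never below the 16 GB minimum

assert valid_storage_size(16) and valid_storage_size(5120)
assert not valid_storage_size(20)      # not a multiple of 8
print(recommended_backup_size(1000))   # 600
```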
Enter or select required information in Database Configuration Required Information Entry section.
Category
Required
Detailed Description
Database Name
Required
Database name applied when installing the DB
Enter 3~20 characters starting with an English letter, using English letters and numbers
Database Username
Required
DB user name
An account with this name is also created in the OS
Enter 2~20 characters using lowercase English letters
Database usernames with restricted use can be checked in the Console
Database Password
Required
Password to use when accessing DB
Enter 8~30 characters including English letters, numbers, and special characters (excluding "’)
Database Password Confirm
Required
Re-enter password to use when accessing DB identically
Database Port Number
Required
Port number required for DB connection
Enter DB port in range of 1200~65535
Backup > Use
Optional
Whether to use backup
Select Use to set backup file retention period, backup start time, Archive backup cycle
Backup > Retention Period
Optional
Backup retention period
Select the backup retention period; the file retention period can be set from 7 to 35 days
Additional charges are applied to backup files depending on capacity
Backup > Backup Start Period
Optional
Backup start time
Select backup start time
The minute at which the backup is performed is set randomly, and a backup end time cannot be set
Backup > Archive Backup Cycle
Optional
Archive backup cycle
Select Archive backup cycle
An Archive backup cycle of 1 hour is recommended. Selecting 5, 10, or 30 minutes may affect DB performance.
Audit Log Settings
Optional
Whether to save Audit Log
Select Use to enable the Audit Log function
DDL, DML, and user connection information records are saved
The user can specify the SQL statement types to audit through the log_statement parameter, which can be modified on the Parameter screen
Table. PostgreSQL(DBaaS) Database Configuration Required Information Entry Items
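The input rules above — database name (3~20 characters, starting with a letter), username (2~20 lowercase letters), and port range (1200~65535) — can be gathered into a quick client-side validation sketch. Illustrative only; the console performs its own validation:

```python
import re

def valid_database_name(name: str) -> bool:
    """3~20 characters, starts with a letter, letters and digits only."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9]{2,19}", name))

def valid_username(user: str) -> bool:
    """2~20 lowercase English letters."""
    return bool(re.fullmatch(r"[a-z]{2,20}", user))

def valid_port(port: int) -> bool:
    """DB port must be in the range 1200~65535."""
    return 1200 <= port <= 65535

assert valid_database_name("salesdb01")
assert not valid_database_name("1db")     # must start with a letter
assert valid_username("appuser") and not valid_username("Admin")
assert valid_port(5432) and not valid_port(1024)
```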
Enter or select required information in Additional Information Entry section.
Category
Required
Detailed Description
Tags
Optional
Add tags
Up to 50 can be added per resource
Click the Add Tag button, then enter or select the Key and Value values
Table. PostgreSQL(DBaaS) Additional Information Entry Items
On Summary panel, check created detailed information and estimated billing amount, then click Create button.
When creation is complete, check created resource on Resource List page.
Checking PostgreSQL(DBaaS) Detailed Information
In the PostgreSQL(DBaaS) service, you can check and modify the overall resource list and detailed information. The PostgreSQL(DBaaS) Detail page is composed of Detailed Information, Tags, and Operation History tabs; for a DB with a Replica configured, a Replica Information tab is additionally shown.
To check detailed information of PostgreSQL(DBaaS) service, follow these steps.
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to check detailed information. You will be directed to the PostgreSQL(DBaaS) Detail page.
At the top of PostgreSQL(DBaaS) Detail page, status information and additional feature information are displayed.
Category
Detailed Description
Cluster Status
Status of cluster where DB is installed
Creating: Cluster creation in progress
Editing: Cluster is being modified by an operation in progress
Error: An error occurred on the cluster during task execution
If this occurs continuously, contact the administrator
Failed: Cluster failed during creation process
Restarting: Restarting cluster
Running: Cluster operating normally
Starting: Starting cluster
Stopped: Cluster stopped
Stopping: Stopping cluster
Synchronizing: Synchronizing cluster
Terminating: Deleting cluster
Unknown: Cluster status unknown
If this occurs continuously, contact the administrator
Upgrading: Cluster upgrade in progress
Cluster Control
Buttons that can change cluster status
Start: Start stopped cluster
Stop: Stop running cluster
Restart: Restart running cluster
Switch-Over: Switch Standby cluster to Active
Additional Features More
Cluster-related management buttons
Service Status Sync: Check real-time DB service status
Backup History: If backup is set, check backup normal execution and history
Database Recovery: Recover DB based on specific point in time
Parameter Management: Can check and modify DB configuration parameters
Replica Configuration: Configure Replica which is read-only cluster
Replica Configuration (Other Region): Configures a disaster recovery Replica in another region; the button is disabled if no other region is configured in the Account
DB User Management: Check and manage DB account (user) information registered in DB
DB Access Control Management: Can register and cancel access-allowed IPs based on DB accounts registered in DB
Archive Settings Management: Can set Archive file retention period and Archive mode
DB Log Export: Can export logs saved through Audit settings to user’s Object Storage
Migration Configuration: Provides Migration function of replication method
OS(Kernel) Upgrade: Upgrade OS Kernel version
Service Termination
Button to terminate service
Table. PostgreSQL(DBaaS) Status Information and Additional Features
Detailed Information
On the Detailed Information tab of the PostgreSQL(DBaaS) Detail page, check the detailed information of the selected resource and modify it if necessary.
Category
Detailed Description
Server Information
Server information configured in that cluster
Category: Server type (Active, Standby, Replica)
Server Name: Server name
IP:Port: Server IP and port
Status: Server status
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In DB service, means cluster SRN
Resource Name
Resource name
In DB service, means cluster name
Resource ID
Unique resource ID in service
Creator
User who created service
Creation Date/Time
Date/Time when service was created
Modifier
User who modified service information
Modification Date/Time
Date/Time when service information was modified
Image Version
Installed DB image and version information
If version upgrade is needed, click Modify icon to set
Network
Network information where the DB is installed (VPC, Subnet, VIP, NAT IP (VIP))
IP Access Control
Service access policy settings
If IP addition and deletion is needed, click Modify icon to set
Active & Standby
Active/Standby server type, basic OS, and additional Disk information
If the server type needs to be modified, click the Modify icon next to the server type. For the server type modification procedure, see Changing Server Type
A server restart is required when modifying the server type
If storage expansion is needed, click the Modify icon next to the storage capacity. For the storage expansion procedure, see Expanding Storage
If storage addition is needed, click the Add Disk button next to the additional Disk. For the storage addition procedure, see Adding Storage
Table. PostgreSQL(DBaaS) Database Detailed Information Items
Replica Information
The Replica Information tab is activated only when a Replica is configured in the cluster. Through this tab, you can check the Master cluster name, the replica count, and the Replica's replication status.
Category
Detailed Description
Master Information
Name of Master cluster
Replica Count
Number of Replicas created in Master cluster
Replica Status
Status of the Replica servers created in the Master cluster
You can check the server name, status check, status details, and status check time
To check the Replica status, click the Status Check button
The cluster remains in Synchronizing status while checking, and changes to Running status when the check is complete
Table. Replica Information Tab Detailed Information Items
Tags
On the Tags tab of the PostgreSQL(DBaaS) Detail page, check the tag information of the selected resource and add, change, or delete tags as needed.
Category
Detailed Description
Tag List
Tag list
Can check Key, Value information of tags
Can add up to 50 tags per resource
When entering tags, search and select from list of previously created Keys and Values
Table. PostgreSQL(DBaaS) Tags Tab Items
Operation History
On the Operation History tab of the PostgreSQL(DBaaS) Detail page, you can check the operation history of the selected resource.
Table. Operation History Tab Detailed Information Items
Managing PostgreSQL(DBaaS) Resources
If the existing configuration options of a created PostgreSQL(DBaaS) resource need to be changed, or if recovery or Replica configuration is needed, you can perform these tasks on the PostgreSQL(DBaaS) Detail page.
Controlling Operation
You can start, stop, or restart a running PostgreSQL(DBaaS) resource when changes are required. Also, if HA is configured, you can switch the Active and Standby servers through Switch-over.
To control PostgreSQL(DBaaS) operation, follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to control operation. You will be directed to the PostgreSQL(DBaaS) Detail page.
Check PostgreSQL(DBaaS) status and complete change through control buttons below.
Start: Runs the server where the DB service is installed and the DB service (Running).
Stop: Stops the server where the DB service is installed and the DB service (Stopped).
Restart: Restarts only the DB service.
Switch Over: Swaps the Active and Standby servers of the DB.
Synchronizing Service Status
You can synchronize the real-time service status of PostgreSQL(DBaaS).
To check PostgreSQL(DBaaS) service status, follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to check service status. You will be directed to the PostgreSQL(DBaaS) Detail page.
Click the Service Status Sync button. The cluster changes to Synchronizing status while checking.
When the check is complete, the status is updated in the server information item, and the cluster changes to Running status.
Changing Server Type
You can change the configured server type.
Caution
A server restart is required when modifying the server type. Please separately check whether SW licenses or SW settings need to be updated to reflect the server specification change.
To change server type, follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to change server type. You will be directed to the PostgreSQL(DBaaS) Detail page.
Click Modify icon of server type to change at bottom of detailed information. Server Type Modify popup opens.
Select server type in Server Type Modify popup, then click Confirm button.
Adding Storage
If data storage space of 5 TB or more is needed, you can add storage. For a DB configured with redundancy, storage is added simultaneously to both servers.
Caution
The Storage type selected when creating the service is applied identically.
For a DB configured with redundancy, added storage is applied simultaneously to the storage of the Active DB and Standby DB.
If a Replica exists, the Master cluster's storage cannot be smaller than the Replica's storage. Expand the Replica's storage first, then expand the Master cluster's storage.
When adding Archive/TEMP storage, the DB restarts and is temporarily unavailable.
To add storage, follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to add storage. You will be directed to the PostgreSQL(DBaaS) Detail page.
Click Add Disk button at bottom of detailed information. Additional Storage Request popup opens.
In Additional Storage Request popup, enter purpose and capacity, then click Confirm button.
Expanding Storage
Storage added as a data area can be expanded up to a maximum of 5 TB from the initially allocated capacity. For a DB configured with redundancy, storage is expanded simultaneously on both servers.
To expand storage capacity, follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource whose storage you want to expand. You will be directed to the PostgreSQL(DBaaS) Detail page.
Click Modify icon of additional Disk to expand at bottom of detailed information. Additional Storage Modify popup opens.
In Additional Storage Modify popup, enter expansion capacity, then click Confirm button.
Terminating PostgreSQL(DBaaS)
You can reduce operating costs by terminating unused PostgreSQL(DBaaS) resources. However, since terminating the service may immediately stop a running service, proceed with the termination only after fully considering the impact of the service interruption.
Caution
For a DB with a Replica configured, the Replica is not deleted when the Master DB is terminated. If the Replica also needs to be deleted, terminate it separately from the resource list.
When the DB is terminated, stored data and, if backup is set, all backup data are deleted.
To terminate PostgreSQL(DBaaS), follow these steps:
Click All Services > Database > PostgreSQL(DBaaS) menu. You will be directed to the Service Home page for PostgreSQL(DBaaS).
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will be directed to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, select resource to terminate and click Service Termination button.
When termination is complete, check if resource is terminated on PostgreSQL(DBaaS) list page.
6.2.2.1 - Managing DB Service
Users can manage PostgreSQL(DBaaS) through the Samsung Cloud Platform Console.
Managing Parameters
Provides functionality to easily view and modify database configuration parameters.
Viewing Parameters
Follow these steps to view configuration parameters.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to view and modify parameters. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the Parameter Management button. The Parameter Management popup window will open.
In the Parameter Management popup window, click the View button. The View Notification popup window will open.
When the View Notification popup window opens, click the Confirm button. Viewing may take some time.
Modifying Parameters
Follow these steps to modify configuration parameters.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to view and modify parameters. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the Parameter Management button. The Parameter Management popup window will open.
In the Parameter Management popup window, click the View button. The View Notification popup window will open.
When the View Notification popup window opens, click the Confirm button. Viewing may take some time.
If modification is needed, click the Modify button and enter the modification in the custom value area of the Parameter to be modified.
When input is complete, click the Complete button.
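As an example of a parameter a user might modify here, PostgreSQL's log_statement (mentioned in the Audit Log settings) accepts a fixed set of values: none, ddl, mod, and all. A client-side check of a proposed custom value could be sketched like this; the console's own validation may differ:

```python
# Allowed values of PostgreSQL's log_statement parameter.
LOG_STATEMENT_VALUES = {"none", "ddl", "mod", "all"}

def validate_log_statement(value: str) -> str:
    """Return the normalized value, or raise if it is not allowed."""
    v = value.strip().lower()
    if v not in LOG_STATEMENT_VALUES:
        raise ValueError(
            f"log_statement must be one of {sorted(LOG_STATEMENT_VALUES)}, got {value!r}"
        )
    return v

print(validate_log_statement("DDL"))  # ddl
```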
Managing DB Users
Provides management functionality to view DB user information and change status information.
Viewing DB Users
Follow these steps to view DB users.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to view DB users. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB User Management button. You will move to the DB User Management page.
On the DB User Management page, click the View button. Viewing may take some time.
Changing DB User Status
Follow these steps to change the status of viewed DB users.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to modify DB users. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB User Management button. You will move to the DB User Management page.
On the DB User Management page, click the View button. Viewing may take some time.
If modification is needed, click the Modify button and change the status area value or enter remarks.
When input is complete, click the Complete button.
Managing DB Access Control
Provides IP-based DB user access control management functionality. Users can directly specify IPs that can access the database, allowing access only from permitted IPs.
Notice
Perform DB user viewing before setting DB access control. For DB user viewing, please refer to Managing DB Users.
Viewing DB Access Control
Follow these steps to view DB users with IP access control set.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to manage access control. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB Access Control Management button. You will move to the DB Access Control Management page.
On the DB Access Control Management page, click the View button. Viewing may take some time.
Adding DB Access Control
Follow these steps to add IP access control.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to add IP access control. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB Access Control Management button. You will move to the DB Access Control Management page.
On the DB Access Control Management page, click the View button. Viewing may take some time.
When viewing is complete, click the Add button. The Add DB Access Control popup window will open.
In the Add DB Access Control popup window, select the DB user name and enter the IP Address.
When input is complete, click the Complete button.
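The IP Address entered above follows the same IP/CIDR formats used elsewhere in the console (e.g., 192.168.10.1 or 192.168.10.0/24). Python's stdlib ipaddress module can illustrate the validation; a sketch only, not the console's actual logic:

```python
import ipaddress

def valid_access_entry(entry: str) -> bool:
    """Accept a single IPv4 address or an IPv4 CIDR block."""
    try:
        # strict=False also accepts host addresses written with a prefix,
        # e.g. "192.168.10.1/32".
        ipaddress.IPv4Network(entry, strict=False)
        return True
    except ValueError:
        return False

for entry in ("192.168.10.1", "192.168.10.0/24", "192.168.10.1/32", "not-an-ip"):
    print(entry, valid_access_entry(entry))
```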
Deleting DB Access Control
Follow these steps to delete IP access control.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to delete IP access control. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB Access Control Management button. You will move to the DB Access Control Management page.
On the DB Access Control Management page, click the View button. Viewing may take some time.
When viewing is complete, click the Delete button. The Delete popup window will open.
In the Delete popup window, click the Confirm button.
Managing Archive
Archive management lets you set Archive mode and the Archive log retention period, so you can flexibly configure an Archive log management policy to suit your operating environment.
It also lets you manually delete Archive logs, so you can clean up unnecessary log data and manage system resources effectively.
Notice
When creating a service, the default setting is Archive mode enabled with a retention period of 3 days.
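On a standard PostgreSQL server, the settings this page manages correspond to parameters you can inspect from any SQL client. The commands below are standard PostgreSQL; whether the DBaaS grants your DB user permission to run them is an assumption.

```sql
-- Check whether WAL archiving is enabled (managed here via Archive mode).
SHOW archive_mode;

-- Check how often a WAL segment is forced out to the archive.
SHOW archive_timeout;
```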
Setting Archive Mode
Follow these steps to set Archive mode.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to set Archive mode. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the Archive Settings Management button. You will move to the Archive Settings Management page.
On the Archive Settings Management page, click the View button. Viewing may take some time.
Click the Modify button and select usage and retention period.
When modification is complete, click the Complete button.
Deleting Archive Files
Follow these steps to delete Archive files.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource whose Archive files you want to delete. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the Archive Settings Management button. You will move to the Archive Settings Management page.
On the Archive Settings Management page, click Delete All Archive if you want to delete all Archive files, or click Delete Backed Up Archive if you want to delete only backed up Archive files.
Modifying Audit Settings
You can change the Audit log storage settings for PostgreSQL(DBaaS).
Follow these steps to change the Audit log storage settings for PostgreSQL(DBaaS).
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource whose Audit settings you want to modify. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Modify icon in Audit Settings at the bottom of the detailed information. The Modify Audit Settings popup window will open.
In the Modify Audit Settings popup window, modify the usage and then click the Confirm button.
Selecting Use enables the Audit log function. Note that enabling Audit logs may degrade DB performance.
Clearing Use deletes the stored Audit log file, so back up the Audit log file separately before disabling it.
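If the Audit function is backed by the pgaudit extension (listed among the installable extensions for this service, though this mapping is an assumption), its effective settings can be inspected from a SQL client using standard pgaudit parameters:

```sql
-- Classes of statements being audited (e.g. 'ddl', 'write', 'all').
SHOW pgaudit.log;

-- Verify the extension is installed in the current database.
SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit';
```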
Exporting DB Log
Supports exporting Audit log data that requires long-term retention to Object Storage. You can set the log type to save, the destination Bucket, and the export cycle; logs are copied to the specified Object Storage according to these settings.
To manage disk space efficiently, an option is also provided to automatically delete the original log files while exporting them to Object Storage. This lets you secure storage capacity while safely retaining the log data you need for long-term storage.
Notice
To use the DB Log Export function, Object Storage creation is required. For Object Storage creation, please refer to the Object Storage User Guide.
Please check the expiration date of the authentication key. If the authentication key expires, logs will not be saved to the Bucket.
Please be careful not to expose authentication key information externally.
Setting DB Log Export Mode
Follow these steps to set DB Log export mode.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to export DB Log. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB Log Export button. You will move to the DB Log Export page.
On the DB Log Export page, click the Register button. You will move to the Register DB Log Export page.
On the Register DB Log Export page, enter the corresponding information and then click the Complete button.
Log Type (Required): Log type to save
Storage Bucket Name (Required): Name of the Object Storage Bucket in which to save logs
Authentication Key > Access key (Required): Access key for accessing the target Object Storage
Authentication Key > Secret key (Required): Secret key for accessing the target Object Storage
File Creation Cycle (Required): Cycle for creating log files in Object Storage
Delete Original Log (Optional): Whether to delete the original logs while exporting to Object Storage
Table. PostgreSQL(DBaaS) DB Log Export Configuration Items
Managing DB Log Export
Follow these steps to modify, cancel, or immediately export DB Log export settings.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource for which you want to manage DB Log export. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the DB Log Export button. You will move to the DB Log Export page.
On the DB Log Export page, click the More button according to the log type you want to manage and click the Immediate Export, Modify, or Cancel button.
Immediate Export: The selected log is exported to the Bucket of the previously set Object Storage.
Modify: Modifies the DB Log export mode settings.
Cancel: Cancels the DB Log export mode settings.
Upgrading Minor Version
Provides version upgrade functionality for some feature improvements and security patches. Only Minor version upgrades within the same Major version are supported.
Caution
Please check the service status first through service status synchronization before performing version upgrade.
Please proceed with version upgrade after setting up backup. If backup is not set, some data may not be recoverable when problems occur during update.
In DBs with Replica configured, the Master DB version cannot be higher than the Replica version. Please check the Replica version first and perform version upgrade if necessary.
Backed up data is automatically deleted after version upgrade is complete.
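Before upgrading, you can confirm the current engine version of the Master and each Replica from a SQL client; these are standard PostgreSQL commands:

```sql
-- Full version string of the server you are connected to.
SELECT version();

-- Major.minor version only, e.g. 15.8.
SHOW server_version;
```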
Follow these steps to upgrade Minor Version.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to upgrade the version. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Modify button in the Image version item. The Version Upgrade popup window will open.
In the Version Upgrade popup window, select the modified version and backup setting, then click the Confirm button.
In the Version Upgrade Notification popup window, click the Confirm button.
Configuring Migration
Provides a Migration function that replicates data in real time, without service interruption, by synchronizing with the operating database using a Replication method.
You can promote a configured Migration Cluster to Master Cluster.
Caution
When promoting to Master, synchronization with the Source DB to be migrated is stopped.
Follow these steps to promote Migration Cluster to Master.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to promote to Master. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the Master Promotion button. The Master Promotion Notification popup window will open.
In the Master Promotion Notification popup window, click the Confirm button.
Upgrading OS Kernel
You can upgrade the OS Kernel to improve operating database functionality and apply security patches.
Caution
Service is interrupted during OS upgrade.
Upgrade time may vary depending on the version, and if upgrade fails, it will revert to the previous configuration.
Cannot recover to the previous OS after upgrade is complete.
Follow these steps to upgrade OS Kernel.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to upgrade OS Kernel. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the OS(Kernel) Upgrade button. The OS(Kernel) Upgrade Notification popup window will open.
In the OS(Kernel) Upgrade Notification popup window, check the instructions and click the Confirm button.
6.2.2.2 - DB Backup and Restore
Users can set up PostgreSQL(DBaaS) backup through the Samsung Cloud Platform Console and restore using backed up files.
Backing Up PostgreSQL(DBaaS)
PostgreSQL(DBaaS) provides data backup functionality based on its own backup commands. Additionally, through backup history viewing and backup file deletion functionality, it provides a backup environment optimized for data protection and management.
Follow these steps to modify backup settings for created resources.
Caution
For stable backup, it is recommended to add a separate BACKUP storage or sufficiently increase storage capacity. Especially when backup target data exceeds 100 GB and there are many data changes, please secure additional storage corresponding to approximately 60% of data capacity. For adding and increasing storage, please refer to Adding PostgreSQL(DBaaS) Storage, Expanding PostgreSQL(DBaaS) Storage guides.
Once backup is set, backups are performed at the specified time, and additional charges apply depending on backup capacity.
When backup setting is changed to Not Set, backup execution stops immediately, and stored backup data is deleted and can no longer be used.
Follow these steps to set up backup.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to set backup. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Modify button in the backup item. The Backup Settings popup window will open.
To set backup, click Use in the Backup Settings popup window, select retention period, backup start time, and Archive backup cycle, then click the Confirm button.
To stop backup settings, uncheck Use in the Backup Settings popup window and click the Confirm button.
Viewing Backup History
Notice
You can set notifications for backup success and failure through the Notification Manager product. For detailed user guide on notification policy settings, please refer to Creating Notification Policy.
Follow these steps to view backup history.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to view backup history. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Backup History button. The Backup History popup window will open.
In the Backup History popup window, you can check backup status, version, backup start datetime, backup completion datetime, and capacity.
Deleting Backup Files
Caution
Backup files cannot be restored after deletion. Please verify that it is unnecessary data before deletion.
Follow these steps to delete backup history.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource whose backup files you want to delete. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Backup History button. The Backup History popup window will open.
In the Backup History popup window, check the file to delete and then click the Delete button.
Restoring PostgreSQL(DBaaS)
When a failure or data loss requires restoring from backup files, you can restore to a specific point in time through the restore function. When a PostgreSQL(DBaaS) restore is performed, a new server is created from the OS image used at initial provisioning, the DB is installed with the version from the backup time, and the DB configuration and data are restored.
Caution
To perform restore, at least the same capacity as the data type Disk capacity is required. If Disk capacity is insufficient, restore may fail.
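To estimate whether the target disk capacity is sufficient, you can check the current database size before starting the restore with a standard PostgreSQL query:

```sql
-- Human-readable size of the database you are connected to.
SELECT pg_size_pretty(pg_database_size(current_database()));
```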
Follow these steps to restore PostgreSQL(DBaaS).
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to restore. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Database Restore button. You will move to the Database Restore page.
Enter the corresponding information in the Database Restore Configuration area and then click the Complete button.
Restore Type (Required): The point in time to restore to
  Backup Point (Recommended): Restore based on a backup file; select a time point from the displayed list of backup files
  User-Specified Point: Restore to any point within the recoverable range. Depending on the Archive backup cycle setting, the recoverable period runs from the initial backup start time up to 1 hour/30 minutes/10 minutes/5 minutes before the current time. Select the date and time to restore
Server Name Prefix (Required): Server name of the restored DB
  Enter 3~16 characters starting with a lowercase English letter, using lowercase letters, numbers, and the special character (-)
  The actual server name is created with a postfix such as 001, 002 appended to the server name
Cluster Name (Required): Cluster name of the restored DB
  Enter 3~20 characters using English letters
  A cluster is a unit that groups multiple servers
Service Type > Server Type (Required): Server type on which the restored DB will be installed
  Standard: Commonly used standard specifications
  High Capacity: High-capacity servers with 24 vCore or more
Service Type > Planned Compute (Optional): Status of resources with Planned Compute set
  In Use: Number of resources with Planned Compute set that are currently in use
  Set: Number of resources with Planned Compute set
  Coverage Preview: Amount covered by Planned Compute per resource
  Create Planned Compute Service: Moves to the Planned Compute service application page
Users can enter the required information for a Read Replica through the Samsung Cloud Platform Console and create the service with detailed options.
Configuring Replica
Through Replica configuration, you can create replica servers for read-only or disaster recovery purposes. You can create up to 5 Replicas per Database.
Notice
To configure Replica for disaster recovery, please create it through Replica Configuration(Other Region).
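After a Replica is created, replication status can be verified from the Master using PostgreSQL's standard statistics view (assuming your DB user is allowed to read it):

```sql
-- One row per connected replica: its address, state, and WAL positions.
SELECT client_addr, state, sent_lsn, replay_lsn
FROM pg_stat_replication;
```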
Follow these steps to configure Replica.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to configure Replica. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Configure Replica button. You will move to the Configure Replica page.
Enter information in the Configure Replica area and then click the Complete button.
Region (Required): Region in which to configure the Replica
  Exposed only when Replica Configuration(Other Region) is selected
Replica Count (Required): Number of Replicas to configure
  Up to 5 can be configured per cluster
  When selecting a value of 2 or more, Replica name and service type information must be entered additionally
Replica Name (Required): Replica server name
  Enter 3~19 characters starting with a lowercase English letter, using lowercase letters, numbers, and the special character (-)
  The entered Replica name is displayed as the cluster name in the list
Service Type > Server Type (Required): Replica server type
  Standard: Commonly used standard specifications
  High Capacity: High-capacity servers with 24 vCore or more
Service Type > Planned Compute (Optional): Status of resources with Planned Compute set
  In Use: Number of resources with Planned Compute set that are currently in use
  Set: Number of resources with Planned Compute set
  Coverage Preview: Amount covered by Planned Compute per resource
  Create Planned Compute Service: Moves to the Planned Compute service application page
If a network failure occurs or Replication falls behind the Master Cluster, you can replicate the Master Cluster data again using the Replica reconfiguration function.
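To judge whether reconfiguration is needed, replication delay can be measured on the Replica itself with a standard PostgreSQL query:

```sql
-- Approximate delay on a replica: time since the last replayed transaction.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;
```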
Follow these steps to reconfigure Replica.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to reconfigure Replica. You will move to the PostgreSQL(DBaaS) Detail page.
Click the Reconfigure Replica button. The Reconfigure Replica Notification popup window will open.
In the Reconfigure Replica Notification popup window, click the Confirm button.
Promoting Replica Cluster to Master Cluster
You can promote a configured Replica Cluster to Master Cluster.
Caution
When promoting to Master, synchronization with the existing Master Cluster is stopped.
Follow these steps to promote Replica Cluster to Master.
Click the All Services > Database > PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS)’s Service Home page.
On the Service Home page, click the PostgreSQL(DBaaS) menu. You will move to the PostgreSQL(DBaaS) List page.
On the PostgreSQL(DBaaS) List page, click the resource to promote to Master. You will move to the PostgreSQL(DBaaS) Detail page.
Click the More button and click the Master Promotion button. The Master Promotion Notification popup window will open.
In the Master Promotion Notification popup window, click the Confirm button.
6.2.2.4 - Connecting to DB Server
Scenario Overview
In the PostgreSQL(DBaaS) server connection scenario, you create a Bastion host (Virtual Server) and a Database service, then access the DB service through the Bastion host. To access PostgreSQL(DBaaS) reliably in the Samsung Cloud Platform environment, you need to create a Bastion host and configure network connections through it. To maintain a stable and high security level, it is recommended to place the Database service in a Private Subnet and the Bastion host in a restricted Public Subnet.
This scenario explains the process of creating a Bastion host and Database service, configuring the network environment for Bastion host and Database access, and connecting through a DB connection client.
Figure. PostgreSQL(DBaaS) Server Connection Architecture
Scenario Components
You can configure this scenario using the following services.
Service Group
Service
Detailed Description
Networking
VPC
Service that provides independent virtual networks in cloud environments
Networking
VPC > Subnet
Service that subdivides networks according to user needs/scale within VPC
Networking
VPC > Public IP
Service that reserves public IPs and assigns/releases them to Compute resources
Networking
VPC > Internet Gateway
Service that connects VPC resources to the internet
Networking
Security Group
Virtual firewall that controls server traffic
Database
PostgreSQL(DBaaS)
Service that easily creates and manages PostgreSQL in a web environment
Compute
Virtual Server
Virtual server optimized for cloud computing
Compute
Virtual Server > Keypair
Encrypted file used to connect to Virtual Server
Table. List of Scenario Components
Note
The default policy of Security Group is Deny All, so you must register only allowed IPs.
The All Open(Any IP, Any Port) policy for In/Outbound can expose cloud resources directly to external threats.
Setting policies with specific IPs and Ports can enhance security.
Scenario Configuration Method
Create the necessary services to configure the scenario through the following procedure.
1. Configuring Network
This explains the process of configuring the network environment for accessing Bastion Host and Database service.
On the Summary panel, check the detailed creation information and estimated billing amount, then click the Complete button.
When creation is complete, check the created resource on the Virtual Server List page.
2-3. Checking Bastion Host Connection ID and Password
Click the All Services > Compute > Virtual Server menu. You will move to the Virtual Server’s Service Home page.
On the Service Home page, click the Virtual Server menu. You will move to the Virtual Server List page.
On the Virtual Server List page, click the resource created in 2-2. Creating Bastion Host. You will move to the detailed information page of that resource.
On the detailed information page, click the RDP password lookup button in the Keypair name item. The RDP password lookup popup window will open.
Click the All Services > Networking > Firewall menu. You will move to the Firewall’s Service Home page.
On the Service Home page, click the Firewall menu. You will move to the Firewall List page.
On the Firewall List page, select the Internet Gateway resource name created in 1-3. Creating Internet Gateway. You will move to the detailed information page of that resource.
On the detailed information page, click the Rules tab. You will move to the Rules tab.
On the Rules tab, click the Add Rule button. You will move to the Add Rule popup window.
In the Add Rule popup window, enter the following rules and click the Confirm button.
Source Address: IP of the PC connecting to the Bastion host
Destination Address: Bastion host IP
Protocol: TCP
Port: 3389 (RDP)
Action: Allow
Direction: Inbound
Description: User PC → Bastion host
Table. Internet Gateway Firewall Rules to be Added
5. Connecting to Database
This explains the process of connecting to the Database through a DB connection client program.
This guide explains how to connect using pgAdmin. There are various Database client programs and CLI utilities, so users can install and use the appropriate tool.
5-1. Connecting to Bastion Host
Run Remote Desktop Connection in the Windows environment of the PC that wants to connect to the Bastion host, enter the NAT IP of the Bastion Host, and click the Connect button.
Connect the user PC’s hard drive to upload the file to the Bastion host.
On the Local Resources tab of Remote Desktop Connection, click the More button in the local devices and resources item.
Select the local disk of the location where the file was downloaded and click the Confirm button.
Copy the downloaded file to upload it to the Bastion Host, then click the pgAdmin installation file to install it.
5-3. Connecting to Database Using DB Connection Client Program (pgAdmin)
Run pgAdmin and click the Add New Server button.
In the Register - Server popup window, enter the Database server information created in 3-1. Creating PostgreSQL(DBaaS) Service in the General tab and Connection tab, then click the Save button.
General tab > Name: User-defined (e.g., service name)
Connection tab > Host name/address: Database server IP
Connection tab > Port: Database port
Connection tab > Maintenance database: Database name
Connection tab > Password: Database password
Table. DB Connection Client Program Input Items
Click the database name created in pgAdmin to perform the connection.
After connection, you can perform simple queries, etc.
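For example, a minimal query to confirm the connection works in pgAdmin's query tool:

```sql
-- Shows which database and user you are connected as, and the server version.
SELECT current_database(), current_user, version();
```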
6.2.2.5 - Using Extensions
PostgreSQL(DBaaS) Extension usage
Note
The list of extensions included in each version of PostgreSQL can be found on the PostgreSQL official page.
The list of extensions that can be installed in the current database can be checked with the following SQL statement.
SQL> select * from pg_available_extensions;
In addition to the default extensions included with each version, PostgreSQL(DBaaS) can install the following extensions:
pgaudit: Provides detailed audit logging at the session and object level
pg_cron: A scheduler that runs jobs inside the database using cron syntax
pg_hint_plan: Applies hint clauses to SQL execution plans
pgvector: Provides a vector data type and AI vector operations such as similarity search
postgis: Provides spatial object storage and spatial query functionality for GIS (Geographic Information System)
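Installing one of the available extensions uses the standard PostgreSQL DDL, assuming your DB user has the required privilege; pg_hint_plan is used here only as an example:

```sql
-- Install an extension if it is available and not yet installed.
CREATE EXTENSION IF NOT EXISTS pg_hint_plan;

-- Confirm what is installed in the current database.
SELECT extname, extversion FROM pg_extension;
```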
If you need an extension beyond the items above, please submit a request through Support Center > Inquiry.
Once the inquiry is received, we will review it and proceed with the installation. Please note that some extensions may not work normally during Replica configuration and recovery.
6.2.3 - API Reference
API Reference
6.2.4 - CLI Reference
CLI Reference
6.2.5 - Release Note
PostgreSQL(DBaaS)
2026.03.19
FEATURE Added OS(Kernel) Upgrade Function
Improves stability and applies the latest security patches through the OS(Kernel) upgrade function.
2025.12.16
FEATURE Added Disaster Recovery Replica Configuration Function
Disaster recovery replicas can be configured through the Replica Configuration(Other Region) function.
2025.07.01
FEATURE Added User (Access Control) Management, Archive Setting Function, DB Audit Log Export Function, Backup Notification Function, Migration Function
PostgreSQL(DBaaS) feature additions
2nd Generation Server Type added
Added 2nd generation (db2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, see PostgreSQL(DBaaS) Server Type
DB User and Access Control Management and Archive Setting Function added
Provides replication-based zero-downtime data migration function. For details, see Configuring Migration
HDD, HDD_KMS types added to Block Storage type
2025.02.27
FEATURE Added New Version, Server Type, Per-Server IP Setting, Block Storage Capacity Expansion Function
PostgreSQL(DBaaS) feature changes
PostgreSQL new versions added: 13.16, 14.13, 15.8
2nd generation server type added
Added 2nd generation (dbh2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, see PostgreSQL(DBaaS) Server Type
Block Storage capacity expansion is possible after service creation.
Per-server network IP setting function added allowing common settings or per-server settings depending on usage purpose.
Samsung Cloud Platform common feature changes
Reflected common CX changes such as Account, IAM and Service Home, tags.
2024.10.01
NEW PostgreSQL(DBaaS) Service Official Version Release
Volume encrypted storage selection option added to Block Storage type.
Role Switch (Active ↔ Standby) function added for Active DB and Standby DB configured in redundancy.
DB instance performance and log monitoring possible through integration with cloud monitoring service.
Planned Compute policy setting possible according to server type selected by customer.
2024.07.02
NEW Beta Version Release
PostgreSQL(DBaaS) service that allows easy creation and management of PostgreSQL in web environment has been released.
6.3 - MariaDB(DBaaS)
6.3.1 - Overview
Service Overview
MariaDB(DBaaS) is an open source relational database (RDBMS) with high compatibility with MySQL. Samsung Cloud Platform provides an environment where MariaDB installation is automated through a web-based Console and management functions for operation can be performed.
MariaDB(DBaaS) is designed with a high-availability architecture that uses storage-based data replication and minimizes failover time. To prevent data loss, changes on the Active server are synchronously replicated to the Standby server, and up to 5 read-only Replica servers are provided for read load distribution and disaster recovery (DR). In addition, to prepare for problems with the DB server or its data, automatic backups can be taken at a user-specified time so that data can be recovered to a desired point in time.
Figure. MariaDB(DBaaS) Architecture
Provided Features
MariaDB(DBaaS) provides the following features.
Auto Provisioning: Database (DB) installation and settings are possible through UI, and provides Active-standby redundancy configuration based on storage replication. When Active server fails, automatic Failover to Standby occurs.
Operation Control Management: Provides function to control running server status. In addition to start and stop, restart is possible when there is a problem with the DB or to reflect settings. When high availability (HA) is configured, the user can directly perform node switching of Active-Standby through Switch-over.
Backup and Recovery: Provides a data backup function based on the engine's own backup commands. The backup time and retention period can be set by the user, and additional fees are charged depending on backup capacity. A recovery function for backed-up data is also provided: when the user performs recovery, a separate DB is created and recovery proceeds to the point selected by the user (backup storage point or user-specified point). When recovering to a user-specified point, the recovery point can be set up to 5 minutes/10 minutes/30 minutes/1 hour before the current time, based on stored backup and archive files.
Version Management: Provides version upgrade (Minor) function for some function improvements and security patches. Whether to perform backup according to version upgrade can be selected by the user, and if performing backup, data is backed up before patch execution and then DB engine update is performed.
Replica Configuration: Up to 5 Read Replicas can be configured in the same/different region for read load distribution and disaster recovery (DR).
Audit Setting: Provides Audit setting function to monitor user DB access and DDL (Data Definition Language)/DML (Data Manipulation Language) execution results.
Parameter Management: DB configuration parameter modification for performance improvement and security is possible.
Service Status Inquiry: Inquires the final status of current DB service.
Monitoring: CPU, Memory, performance monitoring information can be checked through Cloud Monitoring and Servicewatch.
DB User Management: Manages by inquiring DB account (user) information registered in DB.
DB Access Control Management: Allows registration and termination of access allowed IP based on DB accounts registered in DB.
Archive Management: Archive file retention period (1 day~35 days) setting and Archive mode (On/Off) setting are possible in DB server.
DB Log Export: Logs stored through Audit settings can be exported to user’s Object Storage.
Migration: Supports migration using Replication method by synchronizing data in real time with operating database without service interruption.
OS Kernel Upgrade: OS Kernel can be upgraded for some function improvements and security patch application.
Components
MariaDB(DBaaS) provides engine versions pre-verified according to the open-source support policy, along with various server types. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by MariaDB(DBaaS) are as follows.
Technical support is available until the supplier's EoTS (End of Technical Service) date, and the EOS date, after which new creation stops, is set to 6 months before the EoTS date.
According to supplier policy, EOS and EoTS dates may change, so please check the supplier’s license management policy page for details.
Standard: Standard specification (vCPU, Memory) configuration commonly used
High Capacity: Large-capacity server specification with 24 vCores or more

| Classification | Example | Detailed Description |
| --- | --- | --- |
| Server Specification | db1 | Provided server specifications<br>db1: Standard specification (vCPU, Memory) configuration commonly used<br>dbh2: Large-capacity server specification, providing servers with 24 vCores or more |
| Server Specification | v2 | Number of vCores<br>v2: 2 virtual cores |
| Server Specification | m4 | Memory capacity<br>m4: 4 GB memory |

Table. MariaDB(DBaaS) Server Type Components
Prerequisite Services
This is a list of services that must be pre-configured before creating this service. Please prepare in advance by referring to the guide provided for each service.
| Service | Detailed Description |
| --- | --- |
| VPC | A service that provides an independent virtual network in a cloud environment |

Table. MariaDB(DBaaS) Prerequisite Services
6.3.1.1 - Server Type
MariaDB(DBaaS) Server Type
MariaDB(DBaaS) provides server types with various combinations of CPU, Memory, and Network Bandwidth. When creating a MariaDB(DBaaS), the database engine is installed according to the selected server type, which is chosen based on the intended use.
The server types supported by MariaDB(DBaaS) are as follows:
For example, the server type name Standard db1v2m4 is composed as follows:

| Classification | Example | Detailed Description |
| --- | --- | --- |
| Server Type | Standard | Classification of provided server types<br>Standard: Standard specification (vCPU, Memory) composition<br>High Capacity: High-capacity server specification above Standard |
| Server Specification | db1 | Classification of provided server specifications and generation<br>db1: General specification; 1 represents the generation<br>dbh2: h represents a high-capacity server specification; 2 represents the generation |
| Server Specification | v2 | Number of vCores<br>v2: 2 virtual cores |
| Server Specification | m4 | Memory capacity<br>m4: 4 GB memory |

Table. MariaDB(DBaaS) Server Type Format
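The naming format above can be decoded mechanically. The following is a minimal sketch (a hypothetical helper, not part of any Samsung Cloud Platform SDK) that splits a server type name such as `db1v2m4` or `dbh2v128m1536` into its classification, generation, vCore count, and memory size:

```python
import re

# Pattern for the documented format: db[h]<generation>v<vCores>m<memory GB>
SERVER_TYPE_RE = re.compile(r"^db(h?)(\d+)v(\d+)m(\d+)$")

def parse_server_type(name: str) -> dict:
    """Decode a MariaDB(DBaaS) server type name per the format table above."""
    m = SERVER_TYPE_RE.match(name)
    if m is None:
        raise ValueError(f"not a recognized server type: {name!r}")
    high_capacity, gen, vcores, mem = m.groups()
    return {
        # 'h' marks a high-capacity specification; absence means Standard
        "classification": "High Capacity" if high_capacity else "Standard",
        "generation": int(gen),
        "vcores": int(vcores),
        "memory_gb": int(mem),
    }

print(parse_server_type("db1v2m4"))
# {'classification': 'Standard', 'generation': 1, 'vcores': 2, 'memory_gb': 4}
print(parse_server_type("dbh2v128m1536"))
```

This is only an illustration of the naming convention; always select server types from the list offered in the console.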
db1 Server Type
The db1 server type of MariaDB(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
| --- | --- | --- | --- | --- |
| Standard | db1v1m2 | 1 vCore | 2 GB | Up to 10 Gbps |
| Standard | db1v2m4 | 2 vCore | 4 GB | Up to 10 Gbps |
| Standard | db1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps |
| Standard | db1v2m16 | 2 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v2m24 | 2 vCore | 24 GB | Up to 10 Gbps |
| Standard | db1v2m32 | 2 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps |
| Standard | db1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v4m32 | 4 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v4m48 | 4 vCore | 48 GB | Up to 10 Gbps |
| Standard | db1v4m64 | 4 vCore | 64 GB | Up to 10 Gbps |
| Standard | db1v6m12 | 6 vCore | 12 GB | Up to 10 Gbps |
| Standard | db1v6m24 | 6 vCore | 24 GB | Up to 10 Gbps |
| Standard | db1v6m48 | 6 vCore | 48 GB | Up to 10 Gbps |
| Standard | db1v6m72 | 6 vCore | 72 GB | Up to 10 Gbps |
| Standard | db1v6m96 | 6 vCore | 96 GB | Up to 10 Gbps |
| Standard | db1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v8m64 | 8 vCore | 64 GB | Up to 10 Gbps |
| Standard | db1v8m96 | 8 vCore | 96 GB | Up to 10 Gbps |
| Standard | db1v8m128 | 8 vCore | 128 GB | Up to 10 Gbps |
| Standard | db1v10m20 | 10 vCore | 20 GB | Up to 10 Gbps |
| Standard | db1v10m40 | 10 vCore | 40 GB | Up to 10 Gbps |
| Standard | db1v10m80 | 10 vCore | 80 GB | Up to 10 Gbps |
| Standard | db1v10m120 | 10 vCore | 120 GB | Up to 10 Gbps |
| Standard | db1v10m160 | 10 vCore | 160 GB | Up to 10 Gbps |
| Standard | db1v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps |
| Standard | db1v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps |
| Standard | db1v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps |
| Standard | db1v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps |
| Standard | db1v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps |
| Standard | db1v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps |
| Standard | db1v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps |
| Standard | db1v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps |
| Standard | db1v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps |
| Standard | db1v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps |
| Standard | db1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps |
| Standard | db1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps |
| Standard | db1v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps |
| Standard | db1v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps |
| Standard | db1v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps |

Table. MariaDB(DBaaS) Server Type Specifications - db1 Server Type
dbh2 Server Type
The dbh2 server type of MariaDB(DBaaS) is provided with high-capacity server specifications and is suitable for large-scale data processing database workloads.
Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
| --- | --- | --- | --- | --- |
| High Capacity | dbh2v24m48 | 24 vCore | 48 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m96 | 24 vCore | 96 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m192 | 24 vCore | 192 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m288 | 24 vCore | 288 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m256 | 32 vCore | 256 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m384 | 32 vCore | 384 GB | Up to 25 Gbps |
| High Capacity | dbh2v48m192 | 48 vCore | 192 GB | Up to 25 Gbps |
| High Capacity | dbh2v48m576 | 48 vCore | 576 GB | Up to 25 Gbps |
| High Capacity | dbh2v64m256 | 64 vCore | 256 GB | Up to 25 Gbps |
| High Capacity | dbh2v64m768 | 64 vCore | 768 GB | Up to 25 Gbps |
| High Capacity | dbh2v72m288 | 72 vCore | 288 GB | Up to 25 Gbps |
| High Capacity | dbh2v72m864 | 72 vCore | 864 GB | Up to 25 Gbps |
| High Capacity | dbh2v96m384 | 96 vCore | 384 GB | Up to 25 Gbps |
| High Capacity | dbh2v96m1152 | 96 vCore | 1152 GB | Up to 25 Gbps |
| High Capacity | dbh2v128m512 | 128 vCore | 512 GB | Up to 25 Gbps |
| High Capacity | dbh2v128m1536 | 128 vCore | 1536 GB | Up to 25 Gbps |

Table. MariaDB(DBaaS) Server Type Specifications - dbh2 Server Type
6.3.1.2 - Monitoring Metrics
MariaDB(DBaaS) Monitoring Metrics
The following table shows the performance monitoring metrics of MariaDB(DBaaS) that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, refer to the Cloud Monitoring guide.
| Performance Item | Detailed Description | Unit |
| --- | --- | --- |
|  | Time difference between Master and Slave data (only executed on slave) | sec |
| Tablespace Used | Tablespace usage | MB |
| Tablespace Used [Total] | Total Tablespace usage | MB |
| Running Threads | Number of running threads | cnt |
| Slowqueries | Number of sessions that execute SQL for more than 10 seconds | cnt |
| Slowqueries [Total] | Total number of sessions that execute SQL for more than 10 seconds | cnt |
| Transaction Time [Long] | Longest transaction execution time | sec |
| Wait Locks | Number of sessions blocked by lock for more than 60 seconds | cnt |

Table. MariaDB(DBaaS) Monitoring Metrics
6.3.1.3 - ServiceWatch Metrics
MariaDB sends metrics to ServiceWatch. The metrics provided by default monitoring are collected at 1-minute intervals.
Reference
For how to check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are the basic metrics for the MariaDB namespace.
OS Basic Metrics
| Category | Performance Item | Detailed Description | Unit | Meaningful Statistics |
| --- | --- | --- | --- | --- |
| CPU | CPU Usage | CPU usage | Percent |  |
| Disk | Disk Usage | Disk usage rate | Percent |  |
| Disk | Disk Write Bytes | Write capacity on block device (bytes/second) | Bytes/Second |  |
| Disk | Disk Read Bytes | Amount read from block device (bytes/second) | Bytes/Second |  |
| Disk | Disk Write Request | Number of write requests on block device (requests/second) | Count/Second |  |
| Disk | Disk Read Requests | Number of read requests on block device (requests/second) | Count/Second |  |
| Disk | Average Disk I/O Queue Size | Average queue length of requests issued to the block device | None |  |
| Disk | Disk I/O Utilization | Block device's actual time spent processing I/O operations | Percent |  |
| Memory | Memory Usage | Memory usage rate | Percent |  |
| Network | Network In Bytes | Received capacity on the network interface (bytes/second) | Bytes/Second |  |
| Network | Network Out Bytes | Transmitted capacity from network interface (bytes/second) | Bytes/Second |  |
| Network | TCP Connections | Total number of TCP connections currently established correctly | Count/Second |  |
| Network | Network In Packets | Number of packets received on the network interface | Count |  |
| Network | Network Out Packets | Number of packets transmitted from the network interface | Count |  |
| Network | Network In Dropped | Number of packet drops received on the network interface | Count |  |
| Network | Network Out Dropped | Number of packet drops transmitted from the network interface | Count |  |
| Network | Network In Errors | Number of packet errors received on the network interface | Count |  |
| Network | Network Out Errors | Number of packet errors transmitted from the network interface | Count |  |

Table. OS Basic Metrics
MariaDB Basic Metrics
| Category | Performance Item | Detailed Description | Unit | Meaningful Statistics |
| --- | --- | --- | --- | --- |
| Activelock | Active locks | Number of active locks | Count |  |
| Activesession | Active sessions | Number of active sessions | Count |  |
| Activesession | Connection usage | DB connection session usage rate | Percent |  |
| Activesession | Connections | DB connection sessions | Count |  |
| Activesession | Connections(MAX) | Maximum number of connections that can be attached to the DB | Count |  |
| Datafile | Binary log used | Binary log usage (MB) | Megabytes |  |
| Datafile | Open files | Number of DB files in open state | Count |  |
| Datafile | Open files(MAX) | Number of DB files that can be opened | Count |  |
| Datafile | Open files usage | DB file maximum count usage rate | Percent |  |
| Datafile | Relay log used | Relay log usage (MB) | Megabytes |  |
| InnoDB | InnoDB buffer pool hit ratio |  | Percent |  |
| InnoDB | InnoDB row lock waits | Number of InnoDB transactions currently waiting for a lock (Lock-wait) | Count |  |
| InnoDB | InnoDB row lock time | Total time waited due to InnoDB row lock (in milliseconds) | Count |  |
| InnoDB | InnoDB deadlocks | Number of transactions rolled back due to deadlock occurrence (cumulative) | Count |  |
| InnoDB | InnoDB table locks waits | Number of times waiting occurred to acquire table lock (cumulative) | Count |  |
| State | Instance state | MariaDB process status up/down check | Count |  |
| State | Slave behind master seconds (Replica Only) | Replica delay (unit: seconds) | Seconds |  |
| State | Replica Thread running (Replica Only) |  |  |  |
| State | Replica io thread running (Replica Only) |  |  |  |
| State | Replica SQL thread running (Replica Only) |  |  |  |
| Tablespace | Tablespace used | Tablespace usage | Megabytes |  |
| Tablespace | Tablespace used (TOTAL) | Tablespace usage (total) | Megabytes |  |
| Transactions | Slow queries | Number of slow queries | Count |  |
| Transactions | Transaction time | Long transaction time | Seconds |  |
| Transactions | Wait locks | Number of sessions waiting for lock | Count |  |
| Transactions | SQL Queries/Sec | Total number (cumulative) of all queries (statements) received from clients since the server started | Count |  |

Table. MariaDB Basic Metrics
6.3.2 - How-to guides
Users can create MariaDB(DBaaS) by entering required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating MariaDB(DBaaS)
You can create and use MariaDB(DBaaS) service through the Samsung Cloud Platform Console.
Notice
Before creating a service, configure the VPC Subnet type as General.
If the Subnet type is Local, the Database service cannot be created.
When loading large amounts of data (2 TB or more), backups may take a long time or DB performance itself may degrade. To prevent this, from an operational perspective, consider cleaning up unnecessary data or migrating old data to a separate statistics-collection environment.
To create MariaDB(DBaaS), follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the Create MariaDB(DBaaS) button. It moves to the Create MariaDB(DBaaS) page.
On the Create MariaDB(DBaaS) page, enter the information required for service creation and select detailed options.
Select the required information in the Image and Version Selection area.
Classification
Required
Detailed Description
Image Version
Required
Provides version list of MariaDB(DBaaS)
Table. MariaDB(DBaaS) Image and Version Selection Items
Enter or select the required information in the Service Information Entry area.
Classification
Required
Detailed Description
Server Name Prefix
Required
Server name where DB will be installed
Start with lowercase English letters, enter 3 to 13 characters using lowercase letters, numbers, and special characters (-)
Actual server name is created with a postfix like 001, 002 based on the server name
Cluster Name
Required
Cluster name where DB servers are configured
Enter 3 to 20 characters using English
Cluster is a unit that bundles multiple servers
Service Type > Server Type
Required
Server type where DB will be installed
Standard: Standard specification commonly used
High Capacity: Large capacity server with 24 vCore or more
Set Storage type is applied equally to additional storage
Enter capacity in multiples of 8 within the range of 16 to 5,120
Since service interruption may occur due to large Sort such as SQL execution or monthly batch, separate TEMP storage must be allocated and used
Additional: DATA, Archive, TEMP, Backup data storage area
Select Use then enter Purpose and Capacity of storage
Storage type is applied equally to the type set in DATA, and capacity can be entered in multiples of 8 within the range of 16 to 5,120
To add storage, click the + button, and to delete, click the x button. Can add up to 9
Before transferring backup data, temporarily store backup data in BACKUP storage
If backup data exceeds 100 GB and data changes are frequent, it is recommended to add separate BACKUP storage for stable backup. It is recommended to set backup capacity to about 60% of DATA capacity
If BACKUP storage is not added, /tmp area is used, and backup fails if capacity is insufficient
Only one Block Storage is allocated per service for Archive, TEMP, BACKUP storage
Redundancy Configuration
Optional
Redundancy configuration
If using redundancy configuration, DB instances are configured as Active DB and Standby DB
Network > Common Settings
Required
Network settings where servers created in the service are installed
Select if you want to apply the same settings to all installed servers
Select pre-created VPC and Subnet, IP, Public NAT
Only automatic IP generation is possible
Public NAT function is only available when VPC is connected to Internet Gateway. If you check Use, you can select from IPs reserved in Public IP of VPC product. For more information, see Creating Public IP
Network > Per Server Settings
Required
Network settings where servers created in the service are installed
Select if you want to apply different settings for each installed server
Select pre-created VPC and Subnet, IP, Public NAT
Enter IP for each server
Public NAT function is only available when VPC is connected to Internet Gateway. If you check Use, you can select from IPs reserved in Public IP of VPC product. For more information, see Creating Public IP
IP Access Control
Optional
Service access policy setting
Since access policy is set for the IP entered on the page, you don’t need to set Security Group policy separately
Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete the entered IP, click the x button next to the entered IP
Maintenance Period
Optional
DB maintenance period
If selecting Use, set day of week, start time, and duration
It is recommended to set the maintenance period for stable DB management. Patch work is performed at the set time and service interruption occurs
If set to not used, Samsung SDS is not responsible for problems caused by not applying patches.
Table. MariaDB(DBaaS) Service Information Entry Items
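The BACKUP storage guidance above (about 60% of DATA capacity, entered in multiples of 8 GB within the 16-5,120 GB range) can be turned into a simple sizing calculation. The following is a hypothetical helper illustrating that arithmetic; it is not a platform API, and the 60% figure is the document's recommendation, not a hard rule:

```python
import math

MIN_GB, MAX_GB, STEP_GB = 16, 5120, 8  # documented capacity range and step

def recommended_backup_gb(data_gb: int) -> int:
    """Suggest a BACKUP storage size (GB) for a given DATA storage size,
    per the ~60%-of-DATA recommendation, rounded up to a multiple of 8 GB
    and clamped to the allowed 16-5,120 GB range."""
    raw = data_gb * 0.6
    rounded = math.ceil(raw / STEP_GB) * STEP_GB
    return max(MIN_GB, min(MAX_GB, rounded))

print(recommended_backup_gb(1000))  # -> 600
print(recommended_backup_gb(500))   # -> 304
```

Actual backup capacity needs depend on data change rates; adjust the figure based on observed backup sizes.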
Enter or select the required information in the Database Configuration Required Information Entry area.
Classification
Required
Detailed Description
Database Name
Required
Server name applied when DB is installed
Start with English, enter 3 to 20 characters using English and numbers
Database Username
Required
DB user name
Account with that name is also created in OS
Enter 2 to 20 characters using lowercase English letters
Limited Database usernames can be checked in Console
Database Password
Required
Password to use when accessing DB
Enter 8 to 30 characters including English letters, numbers, and special characters (double quotation marks (") and single quotation marks (') are not allowed)
Database Password Confirmation
Required
Re-enter password to use when accessing DB identically
Database Port Number
Required
Port number required for DB connection
Enter DB port within the range of 1200 to 65535
Backup > Use
Optional
Backup usage
Select Use to set backup file retention period, backup start time, and Archive backup cycle
Backup > Retention Period
Optional
Backup retention period
Select backup retention period. File retention period can be set from 7 days to 35 days
Separate fees are charged for backup files depending on capacity
Backup > Backup Start Period
Optional
Backup start time
Select backup start time
The minutes when backup is performed are set randomly, and backup end time cannot be set
Backup > Archive Backup Cycle
Optional
Archive backup cycle
Select Archive backup cycle
1 hour is recommended for Archive backup cycle. When selecting 5 minutes, 10 minutes, 30 minutes, it may affect DB performance
Audit Log Setting
Optional
Audit Log storage
Select Use to set Audit Log function
User access information recording is stored
Users can specify event types to Audit through server_audit_events parameter, and can modify through Parameter screen
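The input rules above (database name, username, and port number) can be pre-checked before filling in the console. The following is a minimal sketch of hypothetical client-side validators mirroring those stated rules; the console performs its own authoritative validation:

```python
import re

def valid_db_name(name: str) -> bool:
    # Start with an English letter, 3 to 20 characters of letters and digits
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9]{2,19}", name) is not None

def valid_db_username(user: str) -> bool:
    # 2 to 20 lowercase English letters
    return re.fullmatch(r"[a-z]{2,20}", user) is not None

def valid_db_port(port: int) -> bool:
    # Documented allowed range for the DB port
    return 1200 <= port <= 65535

print(valid_db_name("salesdb"), valid_db_username("appuser"), valid_db_port(3306))
# True True True
```

Note that reserved (limited) database usernames can only be checked in the console itself.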
Enter or select the required information in the Additional Information Entry area.
Classification
Required
Detailed Description
Tags
Optional
Add tags
Can add up to 50 per resource
Click the Add Tag button then enter or select Key, Value values
Table. MariaDB(DBaaS) Additional Information Entry Items
In the Summary panel, review the detailed information and estimated charges, and click the Create button.
Once creation is complete, check the created resource on the Resource List page.
Checking MariaDB(DBaaS) Detailed Information
MariaDB(DBaaS) service allows you to check and modify the entire resource list and detailed information. The MariaDB(DBaaS) Details page consists of Detailed Information, Tags, Operation History tabs, and for DBs with Replica configured, Replica Information tab is additionally configured.
To check the detailed information of MariaDB(DBaaS) service, follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, click the resource to check detailed information. It moves to the MariaDB(DBaaS) Details page.
At the top of the MariaDB(DBaaS) Details page, status information and additional function information are displayed.
Classification
Detailed Description
Cluster Status
Cluster status where DB is installed
Creating: Cluster is being created
Editing: Cluster is changing to Operation execution state
Error: State where error occurred while cluster is performing operation
If it occurs continuously, contact administrator
Failed: State where cluster failed during creation process
Restarting: State where cluster is being restarted
Running: State where cluster is operating normally
Starting: State where cluster is starting
Stopped: State where cluster is stopped
Stopping: State where cluster is stopping
Synchronizing: State where cluster is synchronizing
Terminating: State where cluster is being deleted
Unknown: State where cluster status is unknown
If it occurs continuously, contact administrator
Upgrading: State where cluster is changing to upgrade execution state
Cluster Control
Buttons to change cluster status
Start: Start stopped cluster
Stop: Stop running cluster
Restart: Restart running cluster
Switch-Over: Switch Standby cluster to Active
More Additional Functions
Cluster-related management buttons
Sync Service Status: Check real-time DB service status
Backup History: If backup is set, check backup normal execution status and history
Database Recovery: Recover DB based on specific point in time
Parameter Management: Check and modify DB configuration parameters
Replica Configuration: Configure Replica which is read-only cluster
Replica Configuration (Other-Region): Configure a disaster recovery replica in a different region; the button is disabled if no other region is configured in the Account
DB User Management: Check and manage DB account (user) information registered in DB
DB Access Control Management: Allows registration and termination of access allowed IP based on DB accounts registered in DB
Archive Setting Management: Archive file retention period setting and Archive mode setting are possible
DB Log Export: Logs stored through Audit settings can be exported to user’s Object Storage
Migration Configuration: Provides Migration function of Replication method
OS(Kernel) Upgrade: OS Kernel version upgrade
Service Termination
Button to terminate service
Table. MariaDB(DBaaS) Status Information and Additional Functions
Detailed Information
On the MariaDB(DBaaS) List page, you can check the detailed information of the selected resource and modify information if necessary.
Classification
Detailed Description
Server Information
Server information configured in the cluster
Category: Server type (Active, Standby, Replica)
Server Name: Server name
IP:Port: Server IP and port
Status: Server status
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
For DB service, means cluster SRN
Resource Name
Resource name
For DB service, means cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
Image Version
Installed DB image and version information
If version upgrade is needed, click the Edit icon to set
If log collection setting is needed, click the Edit icon next to log collection to set
DB Character Set
Character encoding method to use for DB
Time Zone
Standard time zone where Database will be used
VIP
Virtual IP information
Only available when high availability is set
Network
Network information where DB is installed (VPC, Subnet, VIP, NAT IP(VIP))
IP Access Control
Service access policy setting
If IP addition and deletion are needed, click the Edit icon to set
Active & Standby
Active/Standby server type, Basic OS, Additional Disk information
If server type modification is needed, click the Edit icon next to server type to set. For server type modification procedure, see Changing Server Type
If server type is modified, server restart is needed
If storage expansion is needed, click the Edit icon next to storage capacity to expand. For storage expansion procedure, see Expanding Storage
If storage addition is needed, click the Disk Add button next to additional Disk to add. For storage addition procedure, see Adding Storage
Table. MariaDB(DBaaS) Database Detailed Information Items
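IP Access Control entries accept either a single IP (e.g. 192.168.10.1) or CIDR notation (e.g. 192.168.10.0/24). A simple pre-check of candidate entries can be sketched with Python's standard `ipaddress` module; this is a hypothetical helper for illustration, not part of the console:

```python
import ipaddress

def valid_access_entry(entry: str) -> bool:
    """Return True if the entry is a valid IPv4 address or CIDR block,
    matching the format accepted by IP Access Control."""
    try:
        if "/" in entry:
            ipaddress.IPv4Network(entry, strict=False)  # e.g. 192.168.10.0/24
        else:
            ipaddress.IPv4Address(entry)                # e.g. 192.168.10.1
        return True
    except ValueError:
        return False

print(valid_access_entry("192.168.10.0/24"))  # True
print(valid_access_entry("192.168.10.300"))   # False
```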
Replica Information
The Replica Information tab is activated only when Replica is configured in the cluster. Through the Replica Information tab, you can check Master cluster name, replica count, and Replica status.
Classification
Detailed Description
Master Information
Name of Master cluster
Replica Count
Number of Replicas created in Master cluster
Replica Status
Replica server status created in Master cluster
You can check the server name, status, status details, and status check time
To inquire Replica status, click the Status Inquiry button
While inquiring, cluster maintains Synchronizing status, and when inquiry is complete, cluster changes to Running status
Table. Replica Information Tab Detailed Information Items
Tags
On the MariaDB(DBaaS) List page, you can check the tag information of the selected resource and add, change, or delete it.
Classification
Detailed Description
Tag List
Tag list
Can check Key, Value information of tags
Can add up to 50 tags per resource
When entering tags, search and select from existing Key and Value lists
Table. MariaDB(DBaaS) Tag Tab Items
Operation History
You can check the operation history of the selected resource on the MariaDB(DBaaS) List page.
Table. Operation History Tab Detailed Information Items
Managing MariaDB(DBaaS) Resources
If you need to change existing configuration options of created MariaDB(DBaaS) resources or perform recovery, Replica configuration, you can perform operations on the MariaDB(DBaaS) Details page.
Controlling Operation
If changes occur in running MariaDB(DBaaS) resources, you can start, stop, or restart. Also, if configured with HA, you can switch Active-Standby servers through Switch-over.
To control MariaDB(DBaaS) operation, follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, click the resource to control operation. It moves to the MariaDB(DBaaS) Details page.
Check MariaDB(DBaaS) status and complete changes through the control buttons below.
Start: DB service installed server and DB service will run (Running).
Stop: DB service installed server and DB service will stop (Stopped).
Restart: Only DB service will restart.
Switch Over: Can switch Active server and Standby server of DB.
Syncing Service Status
You can sync the real-time service status of MariaDB(DBaaS).
To sync the MariaDB(DBaaS) service status, follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, click the resource to check service status. It moves to the MariaDB(DBaaS) Details page.
Click the Sync Service Status button. While retrieving, the cluster changes to Synchronizing status.
When retrieval is complete, the status is updated in the server information item, and the cluster changes to Running status.
Changing Server Type
You can change the configured server type.
Caution
If you modify the server type, a server restart is needed. Due to the server specification change, please separately check whether SW licenses need to be modified and whether SW settings need to be adjusted and reapplied.
To change the server type, follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, click the resource to change server type. It moves to the MariaDB(DBaaS) Details page.
Click the Edit icon of the server type you want to change at the bottom of detailed information. The Edit Server Type popup window opens.
In the Edit Server Type popup window, select the server type and click the OK button.
Adding Storage
If data storage space of 5 TB or more is needed, you can add storage. For a DB configured with redundancy, storage is added to both redundant servers simultaneously.
Caution
It is applied equally to the Storage type selected when creating the service.
For DB configured with redundancy, when adding storage, it is applied simultaneously to storage of Active DB and Standby DB.
When Replica exists, storage of Master cluster cannot be smaller than storage of Replica. Expand Replica storage first then expand Master cluster storage.
When adding Archive/Temp storage, DB restarts and is temporarily unavailable.
To add storage, follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, click the resource to add storage. It moves to the MariaDB(DBaaS) Details page.
Click the Disk Add button at the bottom of detailed information. The Additional Storage Request popup window opens.
In the Additional Storage Request popup window, after entering purpose and capacity, click the OK button.
Expanding Storage
Storage added as data area can be expanded up to maximum 5 TB based on the initially allocated capacity. For DB configured with redundancy, both redundancy servers are expanded simultaneously.
To expand storage capacity, follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, click the resource whose storage you want to expand. It moves to the MariaDB(DBaaS) Details page.
Click the Edit icon of the additional Disk you want to expand at the bottom of detailed information. The Edit Additional Storage popup window opens.
In the Edit Additional Storage popup window, after entering expansion capacity, click the OK button.
Terminating MariaDB(DBaaS)
You can reduce operating costs by terminating unused MariaDB(DBaaS). However, if you terminate the service, the running service may stop immediately, so you should proceed with termination after fully considering the impact caused by service interruption.
Caution
For DB configured with Replica, even if Master DB is terminated, Replica is not deleted together. If you also want to delete Replica, terminate separately from the resource list.
If you terminate DB, stored data and backup data are all deleted even if backup was set.
To terminate MariaDB(DBaaS), follow these steps:
Click the All Services > Database > MariaDB(DBaaS) menu. It moves to the Service Home page of MariaDB(DBaaS).
On the Service Home page, click the MariaDB(DBaaS) menu. It moves to the MariaDB(DBaaS) List page.
On the MariaDB(DBaaS) List page, select the resource to terminate and click the Terminate Service button.
When termination is complete, check if the resource is terminated on the MariaDB(DBaaS) List page.
6.3.2.1 - Managing DB Service
Users can manage MariaDB(DBaaS) through Samsung Cloud Platform Console.
Managing Parameter
Provides functionality to easily view and modify database configuration parameters.
Viewing Parameters
Follow these steps to view configuration parameters.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to view or modify parameters on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Parameter Management button. Parameter Management popup window will open.
Click View button in the Parameter Management popup window. View Notification popup window will open.
Click Confirm button when the View Notification popup window opens. It will take some time to view.
Modifying Parameters
Follow these steps to modify configuration parameters.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to view or modify parameters on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Parameter Management button. Parameter Management popup window will open.
Click View button in the Parameter Management popup window. View Notification popup window will open.
Click Confirm button when the View Notification popup window opens. The lookup may take a moment to complete.
If modification is needed, click Modify button and enter the new value in the custom value area of the parameter to be modified.
When input is complete, click Complete button.
Note
If you change the character_set_server value, first check the collation that matches the character set with the following command:
SQL> SHOW COLLATION WHERE Charset = 'character set name';
Set the parameter values character-set-server, collation-server, and init_connect with the confirmed collation.
Item
Description
Restart Required
character-set-server
Specify default character set
Restart required
collation-server
Specify default collation
Restart required
init_connect
SQL statement executed when Client connects to database
Restart not required
Table. Parameter Setting Items
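The three parameters above must be changed together. As a minimal sketch (not part of the console), the following shows how the values relate; the charset-to-collation mapping is an assumption for illustration, so always confirm the actual collation with the SHOW COLLATION command first:

```python
# Illustrative sketch: compose the three related parameter values for a target
# character set. The mapping below is an assumption for illustration only;
# confirm the real collation with SHOW COLLATION before changing parameters.
DEFAULT_COLLATIONS = {
    "utf8mb4": "utf8mb4_general_ci",
    "latin1": "latin1_swedish_ci",
}

def charset_parameters(charset: str) -> dict:
    """Return the parameter values that should be changed as a set."""
    collation = DEFAULT_COLLATIONS[charset]
    return {
        "character-set-server": charset,      # restart required
        "collation-server": collation,        # restart required
        "init_connect": f"SET NAMES {charset} COLLATE {collation}",  # no restart
    }

print(charset_parameters("utf8mb4"))
```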
Managing DB Users
Provides management functionality to view DB user information and change status information.
Viewing DB Users
Follow these steps to view DB users.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to view DB users on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB User Management button. You will be taken to the DB User Management page.
Click View button on the DB User Management page. The lookup may take a moment to complete.
Changing DB User Status
Follow these steps to change the status of viewed DB users.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to modify DB users on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB User Management button. You will be taken to the DB User Management page.
Click View button on the DB User Management page. The lookup may take a moment to complete.
If modification is needed, click Modify button and change the status area value or enter note content.
When input is complete, click Complete button.
Managing DB Access Control
Provides IP-based DB user access control management functionality. Users can directly specify the IPs that may access the database so that only allowed IPs can connect.
Notice
Before setting DB access control, perform DB user view. For DB user view, please refer to Managing DB Users.
Viewing DB Access Control
Follow these steps to view DB users with IP access control set.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to manage access control on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB Access Control Management button. You will be taken to the DB Access Control Management page.
Click View button on the DB Access Control Management page. The lookup may take a moment to complete.
Adding DB Access Control
Follow these steps to add IP access control.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to add IP access control on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB Access Control Management button. You will be taken to the DB Access Control Management page.
Click View button on the DB Access Control Management page. The lookup may take a moment to complete.
When viewing is complete, click Add button. DB Access Control Add popup window will open.
Select the DB username and enter the IP address in the DB Access Control Add popup window.
When input is complete, click Complete button.
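The popup accepts both single-IP and CIDR entries, as described later in this guide. As a minimal sketch (not part of the console), entries can be checked with Python's standard ipaddress module before adding them:

```python
# Validation sketch for DB access control entries: a plain IP (192.168.10.1)
# or a CIDR block (192.168.10.0/24) is accepted; anything else is rejected.
import ipaddress

def is_valid_access_entry(entry: str) -> bool:
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)  # CIDR form
        else:
            ipaddress.ip_address(entry)                # single IP form
        return True
    except ValueError:
        return False

print(is_valid_access_entry("192.168.10.1"))      # True
print(is_valid_access_entry("192.168.10.0/24"))   # True
print(is_valid_access_entry("not-an-ip"))         # False
```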
Deleting DB Access Control
Follow these steps to delete IP access control.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to delete IP access control on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB Access Control Management button. You will be taken to the DB Access Control Management page.
Click View button on the DB Access Control Management page. The lookup may take a moment to complete.
When viewing is complete, click Delete button. Delete popup window will open.
Click Confirm button in the Delete popup window.
Managing Archive
Provides Archive mode setting and Archive Log retention period setting functionality, allowing users to flexibly set Archive log management policies according to their operating environment. Additionally, it provides functionality to manually delete Archive logs, enabling effective management of system resources by cleaning up unnecessary log data.
Notice
When creating a service, the default setting is Archive mode enabled and retention period is 3 days.
Setting Archive Mode
Follow these steps to set Archive mode.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to set Archive mode on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Archive Setting Management button. You will be taken to the Archive Setting Management page.
Click View button on the Archive Setting Management page. The lookup may take a moment to complete.
Click Modify button and select whether to use Archive mode and the retention period.
When modification is complete, click Complete button.
Deleting Archive Files
Follow these steps to delete Archive files.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to delete Archive files on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Archive Setting Management button. You will be taken to the Archive Setting Management page.
On the Archive Setting Management page, click Delete All Archives button to delete all Archive files, or click Delete Backed Up Archives button to delete only Archive files that have already been backed up.
Modifying Audit Settings
You can change MariaDB(DBaaS)’s Audit log storage settings.
Follow these steps to change MariaDB(DBaaS)’s Audit log storage settings.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to modify Audit settings on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Modify icon in Audit Settings at the bottom of the detail information. Audit Settings Modify popup window will open.
Modify usage in the Audit Settings Modify popup window and click Confirm button.
When Use is selected, the Audit log functionality is enabled. Note that enabling the Audit log may degrade DB performance.
When Use is disabled, the Audit log storage file is deleted. Back up the Audit log file separately before disabling use.
Exporting DB Log
Supports exporting Audit logs that require long-term preservation to Object Storage. Users can directly set the log type to store, the target Bucket to export to, and the cycle for exporting logs. Logs are copied to the specified Object Storage according to the set criteria.
Additionally, for efficient management of disk space, it also provides an option to automatically delete the original log file while exporting the log to Object Storage. By using this option, you can effectively secure storage capacity while safely storing necessary log data for the long term.
Notice
To use the DB Log Export functionality, Object Storage creation is required. For Object Storage creation, please refer to Object Storage User Guide.
Please be sure to check the expiration date of the authentication key. If the authentication key expires, logs will not be stored in the Bucket.
Please be careful not to expose authentication key information to the outside.
Setting DB Log Export Mode
Follow these steps to set DB Log Export mode.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to export DB Log on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB Log Export button. You will be taken to the DB Log Export page.
Click Register button on the DB Log Export page. You will be taken to the DB Log Export Register page.
Enter the corresponding information on the DB Log Export Register page and click Complete button.
Category
Required
Description
Log Type
Required
Log type to store
Storage Bucket Name
Required
Object Storage Bucket name to store
Authentication Key > Access key
Required
Access key to access the Object Storage to store
Authentication Key > Secret key
Required
Secret key to access the Object Storage to store
File Creation Cycle
Required
Cycle for creating files in Object Storage
Original Log Deletion
Optional
Whether to delete the original log while exporting to Object Storage
Table. MariaDB(DBaaS) DB Log Export Configuration Items
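To illustrate how the File Creation Cycle setting drives exports, the sketch below computes the next export time and a Bucket object key. The cycle value and the object key layout are hypothetical assumptions for illustration, not the console's actual behavior:

```python
# Hypothetical sketch of file-creation-cycle timing for DB Log Export.
# The 60-minute cycle and the <log-type>/<YYYY/MM/DD>/<HHMM>.log key layout
# are assumptions for illustration only.
from datetime import datetime, timedelta

def next_export_time(last_export: datetime, cycle_minutes: int) -> datetime:
    return last_export + timedelta(minutes=cycle_minutes)

def object_key(log_type: str, when: datetime) -> str:
    # Hypothetical Bucket object key layout.
    return f"{log_type}/{when:%Y/%m/%d/%H%M}.log"

last = datetime(2025, 3, 1, 9, 0)
nxt = next_export_time(last, 60)
print(nxt.isoformat())           # 2025-03-01T10:00:00
print(object_key("audit", nxt))  # audit/2025/03/01/1000.log
```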
Managing DB Log Export
Follow these steps to modify, cancel, or immediately export DB Log Export settings.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to manage DB Log Export on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click DB Log Export button. You will be taken to the DB Log Export page.
On the DB Log Export page, click More button according to the log type you want to manage and click Immediate Export, Modify, or Cancel button.
Immediate Export: Selected logs are exported to the previously set Object Storage’s Bucket.
Modify: Modifies DB Log Export mode settings.
Cancel: Cancels DB Log Export mode settings.
Upgrading Minor Version
Provides version upgrade functionality for feature improvements and security patches. Only Minor version upgrades within the same Major version are supported.
Warning
Please check the service status first through service status synchronization, then perform version upgrade.
Please set backup before proceeding with version upgrade. If backup is not set, some data may not be recoverable if a problem occurs during upgrade.
In a DB where a Replica is configured, the Master DB version cannot be higher than the Replica version. Check the Replica version first and upgrade the Replica if necessary.
Backed up data is automatically deleted after version upgrade is complete.
Follow these steps to upgrade Minor Version.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to upgrade version on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Modify icon in the Image Version item. Version Upgrade popup window will open.
Select the target version and backup setting in the Version Upgrade popup window, then click Confirm button.
Click Confirm button in the Version Upgrade Notification popup window.
Configuring Migration
Provides Migration functionality that replicates data in real time from a running database using Replication, without service interruption.
You can promote a configured Migration Cluster to Master Cluster.
Warning
When promoting to Master, synchronization with the Source DB to be migrated is stopped.
Follow these steps to promote Migration Cluster to Master.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to promote to Master on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Master Promotion button. Master Promotion Notification popup window will open.
Click Confirm button in the Master Promotion Notification popup window.
Upgrading OS Kernel
You can upgrade the OS Kernel to improve running database functionality and apply security patches.
Warning
Service is interrupted while OS upgrade is in progress.
Upgrade time may vary depending on the version, and if upgrade fails, it will revert to the previous configuration.
You cannot revert to the previous OS after the upgrade is complete.
Follow these steps to upgrade OS Kernel.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to upgrade OS Kernel on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click OS(Kernel) Upgrade button. OS(Kernel) Upgrade Notification popup window will open.
Confirm the guidelines in the OS(Kernel) Upgrade Notification popup window and click Confirm button.
6.3.2.2 - DB Backup and Restore
Users can set up MariaDB(DBaaS) backup and restore using backed up files through Samsung Cloud Platform Console.
Backing up MariaDB(DBaaS)
MariaDB(DBaaS) provides data backup functionality based on its own backup command. Additionally, it provides an optimized backup environment for data protection and management through backup history verification and backup file deletion functionality.
Follow these steps to modify the backup settings of the created resource.
Warning
For stable backup, it is recommended to add a separate BACKUP storage or sufficiently increase storage capacity. Especially if backup target data exceeds 100 GB and there is a lot of data change, please secure additional storage corresponding to approximately 60% of the data capacity. For storage addition and capacity increase methods, please refer to Adding MariaDB(DBaaS) Storage, Expanding MariaDB(DBaaS) Storage guides.
If backup is enabled, backups run at the specified start time, and additional charges apply according to backup capacity.
If the backup setting is changed to Not Set, backup execution stops immediately, and stored backup data is deleted and can no longer be used.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to set up backup on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Modify button in the backup item. Backup Setting popup window will open.
To set up backup, click Use in the Backup Setting popup window, select retention period, backup start time, Archive backup cycle, and click Confirm button.
To stop backup setting, uncheck Use in the Backup Setting popup window and click Confirm button.
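The sizing guideline in the warning above (for backup targets over 100 GB with heavy data change, secure additional storage of roughly 60% of the data capacity) can be sketched as a small helper. The threshold and ratio come from this guide; rounding up to a multiple of 8 GB is an assumption that mirrors the storage capacity entry rule used elsewhere in it:

```python
# Rough backup-storage sizing helper based on this guide's recommendation:
# over 100 GB with heavy data change -> reserve ~60% of data capacity extra.
def recommended_extra_backup_gb(data_gb: float, heavy_change: bool) -> int:
    if data_gb <= 100 or not heavy_change:
        return 0
    raw = data_gb * 0.6
    # Round up to a multiple of 8 GB (assumed, mirroring the capacity rule).
    return int(-(-raw // 8) * 8)

print(recommended_extra_backup_gb(500, heavy_change=True))   # 304
print(recommended_extra_backup_gb(80, heavy_change=True))    # 0
```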
Viewing Backup History
Notice
To set notifications for backup success and failure, you can set through Notification Manager product. For detailed user guide on notification policy setting, please refer to Creating Notification Policy.
Follow these steps to view backup history.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to view backup history on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Backup History button. Backup History popup window will open.
In the Backup History popup window, you can view backup status, version, backup start date/time, backup completion date/time, and capacity.
Deleting Backup Files
Follow these steps to delete backup files.
Warning
Backup files cannot be restored after deletion. Please be sure to confirm that it is unnecessary data before deletion.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to delete backup files on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Backup History button. Backup History popup window will open.
Check the file you want to delete in the Backup History popup window and click Delete button.
Restoring MariaDB(DBaaS)
In case of failure or data loss, you can restore based on a specific point in time using the restore functionality. When MariaDB(DBaaS) restore is performed, a new server is created with the OS image at the time of initial provisioning, DB is installed with the version of that backup point, and restore is performed with the DB’s configuration information and data.
Warning
At least twice the capacity of the DATA disk is required to perform a restore. If disk capacity is insufficient, the restore may fail.
Follow these steps to restore MariaDB(DBaaS).
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to restore on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Database Restore button. You will be taken to the Database Restore page.
Enter information in the Database restore configuration area and click Complete button.
Category
Required
Description
Restore Type
Required
Set the point in time the user wants to restore
Backup Point (Recommended): Restore based on backup file. Select from the list of backup file points displayed
User-Specified Point: Restore to a point in time within the range where backup is available. The restorable period depends on the Archive backup cycle setting, and you can restore from the initial backup start point up to 1 hour/30 minutes/10 minutes/5 minutes before the current time. Select the date and time to restore to
Server Name Prefix
Required
Server name of restore DB
Enter 3-16 characters starting with English lowercase letters, using lowercase letters, numbers, and special characters (-)
The actual server name is created by appending a suffix such as 001 or 002 to the entered name
Cluster Name
Required
Cluster name of restore DB
Enter 3-20 characters using English
Cluster is a unit that groups multiple servers
Service Type > Server Type
Required
Server type where restore DB will be installed
Standard: Standard specifications commonly used
High Capacity: Large capacity server with 24vCore or more (to be provided later)
Service Type > Planned Compute
Optional
Resource status with Planned Compute set
In Use: Number of resources in use among those with Planned Compute set
Set: Number of resources with Planned Compute set
Coverage Preview: Amount applied with Planned Compute per resource
Planned Compute Service Create: Move to Planned Compute service application page
DATA: Storage area for table data, archive files, etc.
Applied identically with the Storage type set in the original cluster
Capacity is entered in multiples of 8 within the 16-5,120 range
Additional: DATA, Archive, TEMP, Backup data storage area
Applied identically with the Storage type set in the original cluster
In restore DB, only DATA, TEMP, Archive purposes can be added
Select Use and enter storage purpose and capacity
To add storage, click + button, to delete, click x button
Capacity can be entered in multiples of 8 within the 16-5,120 range, and can create up to 9
Database Username
Required
Database username set in the original DB
Database Port Number
Required
Database Port number set in the original DB
IP Access Control
Optional
IP address to access restore DB
Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click Add button
To delete entered IP, click x button next to the entered IP
Maintenance Window
Optional
DB maintenance window
If Use is selected, set day of week, start time, duration
For stable DB management, setting a maintenance window is recommended. Patching is performed during the set window and service interruption occurs
If set to Not Used, problems caused by not applying patching are not the responsibility of Samsung SDS.
Tag
Optional
Add tag
Click Add Tag button and enter or select Key, Value values
Table. MariaDB(DBaaS) Restore Configuration Items
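The user-specified restore window described above can be sketched as follows: depending on the Archive backup cycle, the latest restorable point is 1 hour, 30 minutes, 10 minutes, or 5 minutes before the current time. This is an illustration of the rule stated in the table, not console code:

```python
# Illustrative sketch of the user-specified restore window: the latest
# restorable point lags the current time by an offset determined by the
# Archive backup cycle setting.
from datetime import datetime, timedelta

OFFSET_BY_CYCLE = {
    "1 hour": timedelta(hours=1),
    "30 minutes": timedelta(minutes=30),
    "10 minutes": timedelta(minutes=10),
    "5 minutes": timedelta(minutes=5),
}

def latest_restorable_point(now: datetime, archive_cycle: str) -> datetime:
    return now - OFFSET_BY_CYCLE[archive_cycle]

now = datetime(2025, 3, 1, 12, 0)
print(latest_restorable_point(now, "5 minutes").isoformat())  # 2025-03-01T11:55:00
```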
6.3.2.3 - Configuring Read Replica
Users can enter required information for Read Replica through Samsung Cloud Platform Console and create the service through detailed options.
Configuring Replica
Through Replica configuration, you can create replica servers for read-only or disaster recovery purposes. You can create up to 5 Replicas per Database.
Notice
To configure a Replica for disaster recovery, please create it through Replica Configuration (Other Region).
Follow these steps to configure Replica.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to configure Replica on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click Replica Configuration button. You will be taken to the Replica Configuration page.
Enter information in the Replica configuration area and click Complete button.
Category
Required
Description
Region
Required
Region to configure Replica
Only exposed when Replica Configuration (Other Region) is selected
Replica Count
Required
Number of Replicas to configure
Can configure up to 5 per cluster
If selecting a value of 2 or more, additionally enter Replica name and service type information
Replica Name
Required
Replica server name
Enter 3-19 characters starting with English lowercase letters, using lowercase letters, numbers, and special characters (-)
The entered Replica name is exposed as cluster name in the list
Service Type > Server Type
Required
Replica server type
Standard: Standard specifications commonly used
High Capacity: Large capacity server with 24vCore or more (to be provided later)
Service Type > Planned Compute
Optional
Resource status with Planned Compute set
In Use: Number of resources in use among those with Planned Compute set
Set: Number of resources with Planned Compute set
Coverage Preview: Amount applied with Planned Compute per resource
Planned Compute Service Create: Move to Planned Compute service application page
DATA: Storage area for table data, archive files, etc.
Applied identically with the Storage type set in the original cluster
Capacity setting not possible
Additional: DATA, Archive, TEMP, Backup data storage area
Applied identically with the Storage type set in the original cluster
In Replica, only DATA, TEMP purposes can be added
Select Use and enter storage purpose and capacity
To add storage, click + button, to delete, click x button
Capacity can be entered in multiples of 8 within the 16-5,120 range, and can create up to 9 including the number set in the original cluster
IP Access Control
Optional
Service access policy setting
Since access policy is set for IPs entered on the page, separate Security Group policy setting is not required
Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click Add button
To delete entered IP, click x button next to the entered IP
Maintenance Window
Optional
DB maintenance window
If Use is selected, set day of week, start time, duration
For stable DB management, setting a maintenance window is recommended. Patching is performed during the set window and service interruption occurs
If set to Not Used, problems caused by not applying patches are not the responsibility of Samsung SDS.
Tag
Optional
Add tag
Click Add Tag button and enter or select Key, Value values
Table. MariaDB(DBaaS) Replica Configuration Items
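The additional-storage capacity rule used in the table above (multiples of 8 within the 16-5,120 range) can be checked with a one-line validator; this is a sketch of the stated rule, not console code:

```python
# Validation sketch for the additional-storage capacity rule in this guide:
# capacity must be a multiple of 8 GB within the 16-5,120 GB range.
def is_valid_storage_capacity(gb: int) -> bool:
    return 16 <= gb <= 5120 and gb % 8 == 0

print(is_valid_storage_capacity(16))    # True  (minimum)
print(is_valid_storage_capacity(5120))  # True  (maximum)
print(is_valid_storage_capacity(20))    # False (not a multiple of 8)
print(is_valid_storage_capacity(8))     # False (below the minimum)
```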
Reconfiguring Replica
If a network failure or Replication delay occurs with the Master Cluster, you can replicate the Master Cluster's data again using the Replica reconfiguration functionality.
Follow these steps to reconfigure Replica.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to reconfigure Replica on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Replica Reconfiguration button. Replica Reconfiguration Notification popup window will open.
Click Confirm button in the Replica Reconfiguration Notification popup window.
Promoting Replica Cluster to Master Cluster
You can promote a configured Replica Cluster to Master Cluster.
Warning
When promoting to Master, synchronization with the existing Master Cluster is stopped.
Follow these steps to promote Replica Cluster to Master.
Click All Services > Database > MariaDB(DBaaS) menu. You will be taken to MariaDB(DBaaS)’s Service Home page.
Click MariaDB(DBaaS) menu on the Service Home page. You will be taken to the MariaDB(DBaaS) List page.
Click the resource for which you want to promote to Master on the MariaDB(DBaaS) List page. You will be taken to the MariaDB(DBaaS) Detail page.
Click More button and click Master Promotion button. Master Promotion Notification popup window will open.
Click Confirm button in the Master Promotion Notification popup window.
6.3.2.4 - MariaDB(DBaaS) Server Connection
Scenario Overview
The MariaDB(DBaaS) Server Connection scenario involves creating a Bastion host (Virtual Server) and Database service, and accessing the DB service through the Bastion host. To securely connect to MariaDB(DBaaS) in the Samsung Cloud Platform environment, you need to create a Bastion host and configure network connections through it. We recommend configuring the Database service in a Private Subnet environment and the Bastion host in a restricted Public Subnet environment to maintain stability and high security levels.
This scenario explains the process of creating a Bastion host and Database service, configuring the network environment for Bastion host and Database access, and connecting through a DB access client.
Figure. MariaDB(DBaaS) Server Connection Architecture
Scenario Components
You can configure this scenario using the following services:
Service Group
Service
Description
Networking
VPC
Service that provides an independent virtual network in the cloud environment
Networking
VPC > Subnet
Service that subdivides the network according to user’s purpose/scale within VPC
Networking
VPC > Public IP
Service that reserves a public IP to assign to and release from Compute resources
Networking
VPC > Internet Gateway
Service that connects VPC resources to the internet
Networking
Security Group
Virtual firewall that controls server traffic
Database
MariaDB(DBaaS)
Service that allows easy creation and management of MariaDB in a web environment
Compute
Virtual Server
Virtual server optimized for cloud computing
Compute
Virtual Server > Keypair
Encrypted file used to connect to Virtual Server
Table. Scenario Component List
Note
The default policy of Security Group is Deny All, so you must register only allowed IPs.
The All Open (Any IP, Any Port) policy for In/Outbound can expose cloud resources to external threats.
Setting policies by specifying necessary IPs and Ports can enhance security.
Scenario Configuration Method
Create the services required to configure the scenario through the following procedure.
1. Configure Network
This section explains the process of configuring the network environment for Bastion Host and Database service access.
Click All Services > Networking > Firewall menu. You will be taken to Firewall’s Service Home page.
Click Firewall menu on the Service Home page. You will be taken to the Firewall List page.
Select the Internet Gateway Resource Name created in 1-3. Create Internet Gateway on the Firewall List page. You will be taken to the resource’s detail information page.
Click Rules tab on the detail information page. You will be taken to the Rules tab.
Click Add Rule button on the Rules tab. You will be taken to the Add Rule popup window.
Enter the following rules in the Add Rule popup window and click Confirm button.
Source Address
Destination Address
Protocol
Port
Action
Direction
Description
Bastion Access PC IP
Bastion host IP
TCP
3389(RDP)
Allow
Inbound
User PC → Bastion host
Table. Internet Gateway Firewall Rules to Add
5. Access Database
This section explains the process of users accessing Database through a DB access client program.
This guide explains how to connect using MySQL Workbench. Since various Database client programs and CLI utilities are available, users can install and use the tool that suits them best.
5-1. Access Bastion Host
Run Remote Desktop Connection in the Windows environment of the PC from which you want to access the Bastion host, enter the Bastion Host’s NAT IP, and click Connect button.
When Remote Desktop Connection is successful, User Credential Input Window will open. Enter the ID and Password verified in 2-3. Verify Bastion Host Access ID and PW and click Confirm button.
5-2. Install DB Access Client Program (MySQL Workbench) Inside Bastion Host
Go to the MySQL official page and download the MySQL Workbench program.
Share your PC's local drive through Remote Desktop Connection to upload the file to the Bastion host.
Click Details button in the local devices and resources item on the Local Resources tab of Remote Desktop Connection.
Select the local disk of the location where the file was downloaded and click Confirm button.
Copy the downloaded file and upload it to the Bastion Host, then click the MySQL Workbench installation file to install it.
5-3. Access Database Using DB Access Client Program (MySQL Workbench)
Run MySQL Workbench and click Database > Manage connections. Manage Server Connection popup window will appear.
Click New button at the bottom left of the Manage Server Connection popup window, enter the Database server information created in 3-1. Create MariaDB(DBaaS) Service, and click Test Connection button. Password popup window will appear.
| Required Input Item | Input Value |
|---|---|
| Connection Name | User specified (e.g., Service Name) |
| Host name | Database Server IP |
| Port | Database Port |
| Username | Database Username |

Table. DB Access Client Program Input Items
Enter the password set in 3-1. Create MariaDB(DBaaS) Service in the Password popup window and click the OK button. When the connection test succeeds, click the OK button in the Manage Server Connection popup window.
Click Database > Connect to Database. Connect to Database popup window will appear.
Select the Connection Name registered in Stored Connection to perform Database access. After connection, you can perform simple queries, etc.
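Workbench is one client option; the same connection parameters work from any MySQL-compatible client or from code. The sketch below uses the mysql-connector-python driver (an assumption; any MySQL driver works), and the placeholder values must be replaced with the Database server information entered in 3-1. Create MariaDB(DBaaS) Service:

```python
# Placeholder connection parameters; replace with the Database server
# information from 3-1. Create MariaDB(DBaaS) Service. The driver shown
# (mysql-connector-python) is an assumption; any MySQL-compatible client works.
config = {
    "host": "<database-server-ip>",    # Host name: Database Server IP
    "port": 3306,                      # Database Port set at creation
    "user": "<database-username>",     # Database Username
    "password": "<database-password>",
}

def connect(cfg: dict):
    """Open a connection with the given parameters (requires
    `pip install mysql-connector-python` and network access to the DB)."""
    import mysql.connector
    return mysql.connector.connect(**cfg)

# Usage (not run here; requires a reachable Database server):
#   conn = connect(config)
#   cur = conn.cursor()
#   cur.execute("SELECT VERSION()")
#   print(cur.fetchone())
#   conn.close()
```

Because the Database is reachable only from inside the VPC, code like this would normally run on the Bastion host or another server in the same network, just like Workbench does.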
Provides notification feature for backup success and failure. For more information, see Creating Notification Policy
Migration feature added
Provides non-disruptive data migration feature based on Replication. For more information, see Configuring Migration
Added HDD, HDD_KMS types to Block Storage type
2025.02.27
FEATURE Server Type Added and Per Server IP Setting, Block Storage Capacity Expansion Feature Added
MariaDB(DBaaS) feature changes
Added 2nd generation server type
Added 2nd generation (dbh2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For more information, see MariaDB(DBaaS) Server Type
After service creation, Block Storage capacity expansion is possible.
Added a per-server network IP setting feature, which allows common settings or per-server settings according to the usage purpose.
Samsung Cloud Platform common feature changes
Reflected common CX changes such as Account, IAM and Service Home, Tags, etc.
2024.10.01
NEW MariaDB(DBaaS) Service Official Version Released
Added volume encrypted storage selection option to Block Storage type.
Added function to Switch Role (Active ↔ Standby) of Active DB and Standby DB configured in redundancy.
Integrated with Cloud Monitoring Service to enable DB instance performance and log monitoring.
Planned Compute policy setting is available according to the server type selected by the customer.
2024.07.02
NEW Beta Version Released
Released MariaDB(DBaaS) service that allows easy creation and management of MariaDB in a web environment.
6.4 - MySQL(DBaaS)
6.4.1 - Overview
Service Overview
MySQL(DBaaS) is an open source relational database management system (RDBMS). Samsung Cloud Platform provides an environment where MySQL installation is automated through a web-based Console and management functions for operation can be performed.
MySQL(DBaaS) is designed with a high availability architecture that considers storage-based data replication and minimization of failover time. To prevent data loss, when content in the Active server is changed, it is synchronously replicated to the Standby server, and up to 5 read-only servers called Replicas are provided for read load distribution and disaster recovery (DR). Additionally, to prepare for problems with the DB server or data, it provides automatic backup at a user-specified time, enabling data recovery at a desired point in time.
Figure. MySQL(DBaaS) Architecture
Provided Features
MySQL(DBaaS) provides the following features.
Auto Provisioning: Database (DB) installation and configuration is possible through UI, and Active-standby redundancy configuration based on storage replication is provided. When the Active server fails, it automatically fails over to Standby.
Operation Control Management: Provides functionality to control the status of running servers. In addition to start and stop, restart is possible when there is an issue with the DB or to reflect configuration values. When configured with high availability (HA), users can directly switch between Active-Standby nodes through Switch-over.
Backup and Recovery: Provides data backup functionality based on its own backup commands. Backup time and retention period can be set by the user, and additional charges occur according to backup capacity. Additionally, it provides recovery functionality for backed-up data, and when the user performs recovery, a separate DB is created and recovery proceeds to the point in time selected by the user (backup storage point, user-specified point). When recovering to a user-specified point, the recovery point can be set up to 5 minutes/10 minutes/30 minutes/1 hour ago based on stored backup files and archive files.
Version Management: Provides version upgrade (Minor) functionality for some feature improvements and security patches. Users can select whether to perform backup according to version upgrade, and if backup is performed, the data is backed up before patching and then the DB engine is updated.
Replica Configuration: Up to 5 Read Replicas can be configured in the same/different region for read load distribution and disaster recovery (DR).
Parameter Management: DB configuration parameters for performance improvement and security can be modified.
Service Status Check: Checks the latest status of the current DB service.
Monitoring: CPU, memory, and performance monitoring information can be checked through Cloud Monitoring and Servicewatch.
DB User Management: Manages by checking DB account (user) information registered in the DB.
DB Access Control Management: Access allowed IP registration and cancellation based on DB accounts registered in the DB is possible.
Archive Management: Archive file retention period (1 day ~ 35 days) in the DB server and Archive mode (On/Off) can be set.
DB Log Export: Logs stored through Audit settings can be exported to the user’s Object Storage.
Migration: Supports migration using Replication method by synchronizing data in real-time with the operating database without service interruption.
OS Kernel Upgrade: OS Kernel can be upgraded for some feature improvements and security patch application.
Components
MySQL(DBaaS) provides pre-verified engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by MySQL(DBaaS) are as follows.
Technical support is available until the supplier's EoTS (End of Technical Service) date, and the EOS date, after which new creation stops, is set to 6 months before the EoTS date.
EOS and EoTS dates may change according to the supplier's policy, so refer to the supplier's license management policy page for details.
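As an illustration of the EOS rule above (new creation stops 6 months before the EoTS date), a small Python helper can derive one date from the other; the sample dates below are hypothetical:

```python
import calendar
from datetime import date

def months_before(d: date, months: int) -> date:
    """Return the same day-of-month `months` earlier, clamping the day
    when the target month is shorter (e.g., stepping back from the 31st)."""
    total = d.year * 12 + (d.month - 1) - months
    year, month = divmod(total, 12)
    month += 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def eos_from_eots(eots: date) -> date:
    """EOS (new creation stops) is set 6 months before the supplier's EoTS date."""
    return months_before(eots, 6)

print(eos_from_eots(date(2026, 10, 31)))  # 2026-04-30
```

The actual EOS/EoTS dates come from the supplier's policy pages; this only encodes the 6-month offset described above.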
| Classification | Example | Detailed Description |
|---|---|---|
| Server Type | Standard | Standard: standard specifications (vCPU, Memory) generally used. High Capacity: large-capacity server specifications with 24 vCore or more |
| Server Specification | db1 | Provided server specifications. db1: standard specifications generally used. dbh2: large-capacity server specifications with 24 vCore or more |
| Server Specification | v2 | Number of vCores. v2: 2 virtual cores |
| Server Specification | m4 | Memory capacity. m4: 4 GB memory |

Table. MySQL(DBaaS) Server Type Components
Prerequisite Services
This is a list of services that must be configured in advance before creating this service. Please prepare in advance by referring to the guide provided for each service.
| Service | Detailed Description |
|---|---|
| VPC | Service that provides an independent virtual network in a cloud environment |

Table. MySQL(DBaaS) Prerequisite Services
6.4.1.1 - Server Type
MySQL(DBaaS) Server Type
MySQL(DBaaS) provides server types with various combinations of CPU, Memory, and Network Bandwidth. When creating a MySQL(DBaaS), the database engine is installed according to the selected server type, which is chosen based on the intended use.
The server types supported by MySQL(DBaaS) are as follows:
Example: Standard db1v2m4

| Classification | Example | Detailed Description |
|---|---|---|
| Server Type | Standard | Classification of provided server types. Standard: composed of standard specifications (vCPU, Memory) for general use. High Capacity: composed of high-capacity server specifications above Standard |
| Server Specification | db1 | Classification of provided server specifications and generations. db1: standard specifications, where 1 is the generation. dbh2: h indicates high-capacity server specifications, and 2 is the generation |
| Server Specification | v2 | Number of vCores. v2: 2 virtual cores |
| Server Specification | m4 | Memory capacity. m4: 4 GB memory |

Table. MySQL(DBaaS) Server Type Format
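The naming format above can be parsed mechanically. A minimal Python sketch, assuming the pattern `db[h]<generation>v<vCPU>m<memory>` holds for all types:

```python
import re

# Illustrative parser for the server type naming format described above;
# the regular expression is an assumption based on the format table, not
# an official specification.
TYPE_RE = re.compile(r"^db(?P<cap>h?)(?P<gen>\d+)v(?P<vcpu>\d+)m(?P<mem>\d+)$")

def parse_server_type(name: str) -> dict:
    m = TYPE_RE.match(name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    return {
        "classification": "High Capacity" if m["cap"] == "h" else "Standard",
        "generation": int(m["gen"]),
        "vcpu": int(m["vcpu"]),
        "memory_gb": int(m["mem"]),
    }

print(parse_server_type("db1v2m4"))
print(parse_server_type("dbh2v24m48"))
```

For example, `db1v2m4` decodes to a Standard, generation-1 type with 2 vCores and 4 GB of memory, matching the table.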
db1 Server Type
The db1 server type of MySQL(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
|---|---|---|---|---|
| Standard | db1v2m4 | 2 vCore | 4 GB | Up to 10 Gbps |
| Standard | db1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps |
| Standard | db1v2m16 | 2 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v2m24 | 2 vCore | 24 GB | Up to 10 Gbps |
| Standard | db1v2m32 | 2 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps |
| Standard | db1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v4m32 | 4 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v4m48 | 4 vCore | 48 GB | Up to 10 Gbps |
| Standard | db1v4m64 | 4 vCore | 64 GB | Up to 10 Gbps |
| Standard | db1v6m12 | 6 vCore | 12 GB | Up to 10 Gbps |
| Standard | db1v6m24 | 6 vCore | 24 GB | Up to 10 Gbps |
| Standard | db1v6m48 | 6 vCore | 48 GB | Up to 10 Gbps |
| Standard | db1v6m72 | 6 vCore | 72 GB | Up to 10 Gbps |
| Standard | db1v6m96 | 6 vCore | 96 GB | Up to 10 Gbps |
| Standard | db1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps |
| Standard | db1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps |
| Standard | db1v8m64 | 8 vCore | 64 GB | Up to 10 Gbps |
| Standard | db1v8m96 | 8 vCore | 96 GB | Up to 10 Gbps |
| Standard | db1v8m128 | 8 vCore | 128 GB | Up to 10 Gbps |
| Standard | db1v10m20 | 10 vCore | 20 GB | Up to 10 Gbps |
| Standard | db1v10m40 | 10 vCore | 40 GB | Up to 10 Gbps |
| Standard | db1v10m80 | 10 vCore | 80 GB | Up to 10 Gbps |
| Standard | db1v10m120 | 10 vCore | 120 GB | Up to 10 Gbps |
| Standard | db1v10m160 | 10 vCore | 160 GB | Up to 10 Gbps |
| Standard | db1v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps |
| Standard | db1v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps |
| Standard | db1v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps |
| Standard | db1v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps |
| Standard | db1v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps |
| Standard | db1v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps |
| Standard | db1v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps |
| Standard | db1v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps |
| Standard | db1v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps |
| Standard | db1v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps |
| Standard | db1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps |
| Standard | db1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps |
| Standard | db1v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps |
| Standard | db1v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps |
| Standard | db1v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps |

Table. MySQL(DBaaS) Server Type Specifications - db1 Server Type
dbh2 Server Type
The dbh2 server type of MySQL(DBaaS) is provided with high-capacity server specifications and is suitable for large-scale data processing database workloads.
Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
|---|---|---|---|---|
| High Capacity | dbh2v24m48 | 24 vCore | 48 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m96 | 24 vCore | 96 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m192 | 24 vCore | 192 GB | Up to 25 Gbps |
| High Capacity | dbh2v24m288 | 24 vCore | 288 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m256 | 32 vCore | 256 GB | Up to 25 Gbps |
| High Capacity | dbh2v32m384 | 32 vCore | 384 GB | Up to 25 Gbps |
| High Capacity | dbh2v48m192 | 48 vCore | 192 GB | Up to 25 Gbps |
| High Capacity | dbh2v48m576 | 48 vCore | 576 GB | Up to 25 Gbps |
| High Capacity | dbh2v64m256 | 64 vCore | 256 GB | Up to 25 Gbps |
| High Capacity | dbh2v64m768 | 64 vCore | 768 GB | Up to 25 Gbps |
| High Capacity | dbh2v72m288 | 72 vCore | 288 GB | Up to 25 Gbps |
| High Capacity | dbh2v72m864 | 72 vCore | 864 GB | Up to 25 Gbps |
| High Capacity | dbh2v96m384 | 96 vCore | 384 GB | Up to 25 Gbps |
| High Capacity | dbh2v96m1152 | 96 vCore | 1152 GB | Up to 25 Gbps |
| High Capacity | dbh2v128m512 | 128 vCore | 512 GB | Up to 25 Gbps |
| High Capacity | dbh2v128m1536 | 128 vCore | 1536 GB | Up to 25 Gbps |

Table. MySQL(DBaaS) Server Type Specifications - dbh2 Server Type
6.4.1.2 - Monitoring Metrics
MySQL(DBaaS) Monitoring Metrics
The table below shows the performance monitoring metrics of MySQL (DBaaS) that can be viewed through Cloud Monitoring. For detailed usage of Cloud Monitoring, refer to the Cloud Monitoring guide.
| Performance Item | Detailed Description | Unit |
|---|---|---|
| Slowqueries | Number of SQL queries running for a long time (5 minutes or more) (by DB) | cnt |
| Slowqueries [Total] | Number of SQL queries running for a long time (5 minutes or more) (total) | cnt |
| Tablespace Used | Tablespace usage | MB |
| Tablespace Used [Total] | Tablespace total usage | MB |
| Transaction Time [Long] | Longest transaction execution time | sec |
| Wait Locks | Number of sessions blocked by a lock for 60 seconds or more | cnt |

Table. MySQL(DBaaS) Monitoring Metrics
6.4.1.3 - ServiceWatch Metrics
MySQL(DBaaS) sends metrics to ServiceWatch. The metrics provided by basic monitoring are collected at 1-minute intervals.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Metrics
The following are the basic metrics for the MySQL namespace.
OS Basic Metrics
| Category | Performance Item | Detailed Description | Unit | Meaningful Statistics |
|---|---|---|---|---|
| CPU | CPU Usage | CPU usage | Percent | |
| Disk | Disk Usage | Disk usage rate | Percent | |
| Disk | Disk Write Bytes | Write capacity on block device (bytes/second) | Bytes/Second | |
| Disk | Disk Read Bytes | Amount read from block device (bytes/second) | Bytes/Second | |
| Disk | Disk Write Requests | Number of write requests on block device (requests/second) | Count/Second | |
| Disk | Disk Read Requests | Number of read requests on block device (requests/second) | Count/Second | |
| Disk | Average Disk I/O Queue Size | Average queue length of requests issued to the block device | None | |
| Disk | Disk I/O Utilization | Percentage of time the block device actually processes I/O operations | Percent | |
| Memory | Memory Usage | Memory usage rate | Percent | |
| Network | Network In Bytes | Amount received on network interface (bytes/second) | Bytes/Second | |
| Network | Network Out Bytes | Amount transmitted from the network interface (bytes/second) | Bytes/Second | |
| Network | TCP Connections | Total number of TCP connections currently properly established | Count/Second | |
| Network | Network In Packets | Number of packets received on the network interface | Count | |
| Network | Network Out Packets | Number of packets transmitted from the network interface | Count | |
| Network | Network In Dropped | Number of received packets dropped on the network interface | Count | |
| Network | Network Out Dropped | Number of transmitted packets dropped on the network interface | Count | |
| Network | Network In Errors | Number of received packet errors on the network interface | Count | |
| Network | Network Out Errors | Number of transmitted packet errors on the network interface | Count | |

Table. OS Basic Metrics
MySQL Basic Metrics
| Category | Performance Item | Detailed Description | Unit | Meaningful Statistics |
|---|---|---|---|---|
| Activelock | Active locks | Number of active locks | Count | |
| Activesession | Active sessions | Number of active sessions | Count | |
| Activesession | Connection usage | DB connection session usage rate | Percent | |
| Activesession | Connections | Number of DB connection sessions | Count | |
| Activesession | Connections(MAX) | Maximum number of connections that can be attached to the DB | Count | |
| Datafile | Binary log used | Binary log usage (MB) | Megabytes | |
| Datafile | Open files | Number of DB files in open state | Count | |
| Datafile | Open files(MAX) | Number of DB files that can be opened | Count | |
| Datafile | Open files usage | DB file maximum count usage rate | Percent | |
| Datafile | Relay log used | Relay log usage (MB) | Megabytes | |
| InnoDB | InnoDB buffer pool hit ratio | InnoDB buffer pool hit ratio | Percent | |
| InnoDB | InnoDB row lock waits | Number of InnoDB transactions currently waiting for a lock (Lock-wait) | Count | |
| InnoDB | InnoDB row lock time | Total time waited due to InnoDB row locks (in milliseconds) | Count | |
| InnoDB | InnoDB table locks waits | Number of times a wait occurred to acquire a table lock (cumulative) | Count | |
| State | Instance state | MySQL process status up/down check | Count | |
| State | Slave behind master seconds (Replica Only) | Replica's delay (unit: seconds) | Seconds | |
| State | Replica Thread running (Replica Only) | | | |
| State | Replica io thread running (Replica Only) | | | |
| State | Replica SQL thread running (Replica Only) | | | |
| Tablespace | Tablespace used | Tablespace usage | Megabytes | |
| Tablespace | Tablespace used(TOTAL) | Tablespace usage (total) | Megabytes | |
| Transactions | Slow queries | Number of slow queries | Count | |
| Transactions | Transaction time | Longest transaction time | Seconds | |
| Transactions | Wait locks | Number of sessions waiting for a lock | Count | |
| Transactions | SQL Queries/Sec | Total number (cumulative) of all queries (statements) received from clients since the server started | Count | |

Table. MySQL Basic Metrics
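As an example of how a derived metric in the table relates to the raw counters, Connection usage can be computed from Connections and Connections(MAX); the formula below is an assumption based on the metric descriptions:

```python
def connection_usage(connections: int, max_connections: int) -> float:
    """Connection usage rate (Percent), assumed to be the current number of
    DB connection sessions relative to Connections(MAX), as described in
    the Activesession rows of the table above."""
    if max_connections <= 0:
        raise ValueError("max_connections must be positive")
    return round(100.0 * connections / max_connections, 2)

print(connection_usage(150, 500))  # 30.0
```

Watching this ratio is useful for alerting: as it approaches 100%, new clients start failing to connect until sessions are freed or the connection limit is raised.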
6.4.2 - How-to guides
Users can create the MySQL(DBaaS) service by entering required information through Samsung Cloud Platform Console and selecting detailed options.
Create MySQL(DBaaS)
You can create and use the MySQL(DBaaS) service in Samsung Cloud Platform Console.
Notice
Before creating the service, please configure the VPC’s Subnet type as General.
If the Subnet type is Local, the corresponding Database service cannot be created.
If loading large amounts of data (2 TB or more), backup may take a long time or DB performance may degrade. To prevent this, operational measures such as cleaning up unnecessary data or moving old data to a statistics collection environment should be considered.
Follow these steps to create MySQL(DBaaS).
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the Create MySQL(DBaaS) button. You will move to the Create MySQL(DBaaS) page.
On the Create MySQL(DBaaS) page, enter the information required for service creation and select detailed options.
Select the required information in the Image and Version Selection area.
| Classification | Required | Detailed Description |
|---|---|---|
| Image Version | Required | List of provided MySQL(DBaaS) versions |

Table. MySQL(DBaaS) Image and Version Selection Items
Enter or select the required information in the Service Information Entry area.
Classification
Required
Detailed Description
Server Name Prefix
Required
Server name where DB will be installed
Starts with lowercase English letters, enter 3 to 13 characters using lowercase letters, numbers, and special characters (-)
Actual server name is created with a postfix like 001, 002 based on the server name
Cluster Name
Required
Cluster name where DB servers are configured
Enter 3 to 20 characters using English letters
Cluster is a unit that bundles multiple servers
Service Type > Server Type
Required
Server type where DB will be installed
Standard: Standard specifications generally used
High Capacity: Large capacity server with 24vCore or more
Configured Storage type is applied identically to additional storage as well
Enter capacity as a multiple of 8 in the range of 16 ~ 5,120
Because SQL execution or large sorts caused by monthly batch jobs, etc. may cause service interruption, separate TEMP storage must be allocated and used
Add: DATA, Archive, TEMP, Backup data storage areas
Select Use and enter the purpose and capacity of the storage
Storage type is applied identically to the type set for DATA, and capacity can be entered as a multiple of 8 in the range of 16 ~ 5,120
To add storage, click the + button; to delete, click the x button. A maximum of 9 can be added
Backup data is temporarily stored in BACKUP storage before being transferred
If backup data exceeds 100 GB and there are many data changes, adding separate BACKUP storage is recommended for stable backup. It is recommended to set backup capacity to about 60% of DATA capacity
If BACKUP storage is not added, the /tmp area is used, and backup fails if its capacity is insufficient
Per service, only 1 Block Storage is allocated each for Archive, TEMP, and BACKUP storage
Redundancy Configuration
Optional
Whether to configure redundancy
If redundancy configuration is used, DB instance is configured as Active DB and Standby DB
Network > Common Settings
Required
Network settings where servers created in the service are installed
Select if you want to apply the same settings to all servers being installed
Select pre-created VPC and Subnet, IP, Public NAT
Only automatic creation is possible for IP
Public NAT function can be used only if VPC is connected to Internet Gateway. If you check Use, you can select from IPs reserved in Public IP of VPC product. For details, refer to Create Public IP
Network > Per Server Settings
Required
Network settings where servers created in the service are installed
Select if you want to apply different settings to each server being installed
Select pre-created VPC and Subnet, IP, Public NAT
Enter IP for each server
Public NAT function can be used only if VPC is connected to Internet Gateway. If you check Use, you can select from IPs reserved in Public IP of VPC product. For details, refer to Create Public IP
IP Access Control
Optional
Service access policy setting
Since access policy is set for IPs entered on the page, separate Security Group policy setting is not required
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32), and click the Add button
To delete entered IP, click the x button next to the entered IP
Maintenance Window
Optional
DB maintenance window
If you select Use, set day of week, start time, and duration
It is recommended to set a maintenance window for stable DB management. Patch work is performed at the set time and service interruption occurs
If set to Not Used, Samsung SDS is not responsible for problems caused by not applying patches.
Table. MySQL(DBaaS) Service Configuration Items
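Several of the input rules above (server name prefix format, storage capacity bounds, the 60% backup sizing guideline) can be pre-checked before filling in the form. A minimal Python sketch of those rules as stated; the Console performs its own authoritative validation:

```python
import re

# 3-13 chars, starting with a lowercase letter; lowercase letters,
# numbers, and hyphens allowed, per the Server Name Prefix rule above.
PREFIX_RE = re.compile(r"^[a-z][a-z0-9-]{2,12}$")

def valid_server_name_prefix(prefix: str) -> bool:
    return PREFIX_RE.fullmatch(prefix) is not None

def valid_storage_gb(capacity: int) -> bool:
    """Capacity must be a multiple of 8 in the range 16 ~ 5,120 GB."""
    return 16 <= capacity <= 5120 and capacity % 8 == 0

def recommended_backup_gb(data_gb: int) -> int:
    """About 60% of DATA capacity, rounded up to the next multiple of 8
    (the rounding step is an assumption to keep the result enterable)."""
    raw = data_gb * 6 // 10
    return max(16, -(-raw // 8) * 8)  # ceiling to a multiple of 8

print(valid_server_name_prefix("mysql-prod"))  # True
print(valid_storage_gb(100))                   # False: not a multiple of 8
print(recommended_backup_gb(512))              # 312
```

For example, a 512 GB DATA volume suggests roughly 312 GB of BACKUP storage, which stays within the 16 ~ 5,120 GB, multiple-of-8 constraint.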
Enter or select the required information in the Database Configuration Required Information Entry area.
Classification
Required
Detailed Description
Database Name
Required
Server name applied when installing DB
Start with English letters, enter 3 to 20 characters using English letters and numbers
Database Username
Required
DB user name
Account with that name is also created in OS
Enter 2 to 20 characters using lowercase English letters
Restricted Database usernames can be checked in Console
Database Password
Required
Password to use when accessing DB
Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
Database Password Confirm
Required
Re-enter the password to use when accessing DB identically
Database Port Number
Required
Port number required for DB connection
Enter DB port in the range of 1200 ~ 65535
Backup > Use
Optional
Whether to use backup
Select Use to set backup file retention period, backup start time, and Archive backup cycle
Backup > Retention Period
Optional
Backup retention period
Select backup retention period. File retention period can be set from 7 days to 35 days
Separate charges occur for backup files according to capacity
Backup > Backup Start Period
Optional
Backup start time
Select backup start time
Minutes when backup is performed are set randomly, and backup end time cannot be set
Backup > Archive Backup Cycle
Optional
Archive backup cycle
Select Archive backup cycle
Archive backup cycle of 1 hour is recommended. If you select 5 minutes, 10 minutes, 30 minutes, it may affect DB performance
Parameter
Required
Parameters to use in DB
Click the View button to check detailed information of parameters
Parameters can be modified after DB creation is completed, and after modification, DB must be restarted
DB Character Set
Required
Character encoding method to use in DB
Table Case Sensitivity
Optional
Whether DB Table is case-sensitive
Time Zone
Required
Standard time zone where Database will be used
ServiceWatch Log Collection
Optional
Whether to collect ServiceWatch logs
Select Use to set ServiceWatch log collection function
Provided free up to 5 GB for all services in Account, and if exceeding 5 GB, charges are incurred according to storage capacity
When collecting, log group and log stream are automatically created and cannot be deleted until resource is deleted
To prevent exceeding 5 GB, direct deletion of log data or shortening of retention period is recommended
Table. MySQL(DBaaS) Database Configuration Items
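The input constraints above can likewise be pre-checked. The following Python sketch mirrors the stated rules for database name, username, password, and port; it is illustrative only, and the Console's own validation is authoritative:

```python
import re

# Database Name: starts with a letter, 3-20 chars of letters and numbers.
DB_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,19}$")
# Database Username: 2-20 lowercase English letters.
USERNAME_RE = re.compile(r"^[a-z]{2,20}$")

def valid_password(pw: str) -> bool:
    """8-30 chars with letters, numbers, and special characters;
    " and ' are not allowed, per the rules above."""
    if not (8 <= len(pw) <= 30) or '"' in pw or "'" in pw:
        return False
    has_alpha = any(c.isalpha() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_special = any(not c.isalnum() for c in pw)
    return has_alpha and has_digit and has_special

def valid_port(port: int) -> bool:
    """DB port must fall in the range 1200 ~ 65535."""
    return 1200 <= port <= 65535

print(bool(DB_NAME_RE.match("salesdb01")))  # True
print(valid_password("Passw0rd!"))          # True
print(valid_port(3306))                     # True
```

Note that restricted database usernames (reserved names) can only be checked in the Console, so passing these local checks does not guarantee the name is accepted.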
Enter or select the required information in the Additional Information Entry area.
Classification
Required
Detailed Description
Tags
Optional
Add tags
Can add up to 50 per resource
Click the Add Tag button and then enter or select Key, Value values
Table. MySQL(DBaaS) Additional Information Entry Items
On the Summary panel, check the created detailed information and estimated billing amount, then click the Create button.
When creation is completed, check the created resource on the Resource List page.
Check MySQL(DBaaS) Detailed Information
The MySQL(DBaaS) service allows you to check and modify the entire resource list and detailed information. The MySQL(DBaaS) Details page is composed of the Detailed Information, Tags, and Operation History tabs; for DBs where a Replica is configured, the Replica Information tab is additionally displayed.
Follow these steps to check the detailed information of MySQL(DBaaS) service.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose detailed information you want to check. You will move to the MySQL(DBaaS) Details page.
At the top of the MySQL(DBaaS) Details page, status information and additional feature information are displayed.
Classification
Detailed Description
Cluster Status
Status of cluster where DB is installed
Creating: Creating cluster
Editing: Changing cluster to Operation execution status
Error: Status where failure occurred while cluster was performing task
If it occurs continuously, contact administrator
Failed: Status where cluster failed during creation process
Restarting: Restarting cluster
Running: Status where cluster is operating normally
Starting: Starting cluster
Stopped: Status where cluster is stopped
Stopping: Stopping cluster
Synchronizing: Synchronizing cluster
Terminating: Deleting cluster
Unknown: Status where cluster status cannot be known
If it occurs continuously, contact administrator
Upgrading: Changing cluster to upgrade execution status
Cluster Control
Buttons to change cluster status
Start: Start stopped cluster
Stop: Stop running cluster
Restart: Restart running cluster
Switch-Over: Switch Standby cluster to Active
More Features
Cluster-related management buttons
Sync Service Status: Check real-time DB service status
Backup History: If backup is set, check whether backup was executed normally and history
Database Recovery: Recover DB based on specific point in time
Parameter Management: View and modify DB configuration parameters
Replica Configuration: Configure Replica which is read-only cluster
Replica Configuration (Other-Region): Configure Replica for disaster recovery in another region, button is deactivated if there is no region configured in that Account
DB User Management: View and manage DB account (user) information registered in DB
DB Access Control Management: Register and cancel access allowed IP based on DB accounts registered in DB
Archive Management: Set the Archive file retention period and Archive mode
DB Log Export: Logs stored through Audit settings can be exported to user’s Object Storage
Migration Configuration: Provide Migration function using Replication method
OS (Kernel) Upgrade: Upgrade OS Kernel version
Service Termination
Button to terminate service
Table. MySQL(DBaaS) Status Information and Additional Features
Detailed Information
On the MySQL(DBaaS) Details page, you can check the detailed information of the selected resource and modify information if necessary.
Classification
Detailed Description
Server Information
Server information configured in that cluster
Category: Server type (Active, Standby, Replica)
Server Name: Server name
IP:Port: Server IP and port
Status: Server status
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In DB service, it means cluster SRN
Resource Name
Resource name
In DB service, it means cluster name
Resource ID
Unique resource ID in service
Creator
User who created the service
Creation Date
Date when service was created
Modifier
User who modified service information
Modification Date
Date when service information was modified
Image Version
Installed DB image and version information
If version upgrade is needed, click the Edit icon to set
If log collection setting is needed, click the Edit icon next to log collection to set
DB Character Set
Encoding method to use in DB
Table Case Sensitivity
Whether DB Table is case-sensitive
Time Zone
Standard time zone where Database will be used
VIP
Virtual IP information
Can be checked only if high availability is set
Network
Network information where DB is installed (VPC, Subnet, VIP, NAT IP (VIP))
IP Access Control
Service access policy setting
If IP addition or deletion is needed, click the Edit icon to set
Active & Standby
Active/Standby server type, basic OS, additional Disk information
If server type modification is needed, click the Edit icon next to server type to set. For server type modification procedure, refer to Change Server Type
Server restart is required when server type is modified
If storage expansion is needed, click the Edit icon next to storage capacity to expand. For storage expansion procedure, refer to Expand Storage
If storage addition is needed, click the Add Disk button next to additional Disk to add. For storage addition procedure, refer to Add Storage
Table. MySQL(DBaaS) Database Detailed Information Items
Replica Information
The Replica Information tab is activated only if Replica is configured in the cluster. Through the Replica Information tab, you can check the Master cluster name, number of replicas, and Replica status.
Classification
Detailed Description
Master Information
Name of Master cluster
Replica Count
Number of Replicas created in Master cluster
Replica Status
Replica server status created in Master cluster
Can check server name, status check, status details, status check time
To check Replica status, click the Check Status button
While checking, cluster maintains Synchronizing status, and when check is completed, cluster changes to Running status
Table. Replica Information Tab Detailed Information Items
Tags
On the MySQL(DBaaS) Details page, you can check the tag information of the selected resource and add, modify, or delete tags.
Classification
Detailed Description
Tag List
Tag list
Can check tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select pre-created Key and Value lists
Table. MySQL(DBaaS) Tags Tab Items
Operation History
On the MySQL(DBaaS) Details page, you can check the operation history of the selected resource.
Table. Operation History Tab Detailed Information Items
Manage MySQL(DBaaS) Resources
If you need to change existing configuration options of created MySQL(DBaaS) resources, or need recovery or Replica configuration, you can perform tasks on the MySQL(DBaaS) Details page.
Control Operation
If changes occur to running MySQL(DBaaS) resources, you can start, stop, or restart. Additionally, if HA is configured, you can switch Active-Standby servers through Switch-over.
Follow these steps to control operation of MySQL(DBaaS).
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to control operation. You will move to the MySQL(DBaaS) Details page.
Check MySQL(DBaaS) status and complete the change through the control buttons below.
Start: Starts the server where the DB is installed and the DB service (status: Running).
Stop: Stops the server where the DB is installed and the DB service (status: Stopped).
Restart: Restarts only the DB service.
Switch Over: Swaps the Active server and Standby server of the DB.
Sync Service Status
You can synchronize the real-time service status of MySQL(DBaaS).
Follow these steps to check the service status of MySQL(DBaaS).
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to check service status. You will move to the MySQL(DBaaS) Details page.
Click the Sync Service Status button. While the check is in progress, the cluster changes to Synchronizing status.
When the check is completed, the status is updated in the server information item and the cluster changes to Running status.
Change Server Type
You can change the configured server type.
Follow these steps to change server type.
Caution
A server restart is required when modifying the server type. Please separately check SW license changes and any SW settings that must be updated to reflect the spec change.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to change server type. You will move to the MySQL(DBaaS) Details page.
Click the Edit icon of the server type you want to change at the bottom of detailed information. The Edit Server Type popup window opens.
On the Edit Server Type popup window, select server type and click the Confirm button.
Add Storage
If you need more than 5 TB of data storage space, you can add storage. For DB configured with redundancy, it is added to both redundancy servers simultaneously.
Caution
The same storage type selected when creating the service is applied.
For DBs with high availability configured, added storage is applied to both the Active DB and Standby DB simultaneously.
If a Replica exists, the Master cluster storage cannot be larger than the Replica storage. Please expand the Replica storage first, and then expand the Master cluster storage.
When adding Archive/Temp storage, the DB restarts and is temporarily unavailable.
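The Replica-first ordering rule in the caution above can be sketched as follows. This is an illustrative check only, assuming GB units and that a Master cluster's storage may not exceed its Replica's; the function name is not part of the console.

```python
# Sketch of the storage-expansion ordering rule for clusters with a Replica:
# expand the Replica first, then the Master cluster. Sizes in GB; illustrative.

def expansion_order(master_gb: int, replica_gb: int, target_gb: int) -> list:
    """Return the expansion steps needed to bring both clusters to target_gb."""
    if target_gb < master_gb:
        raise ValueError("storage capacity cannot be reduced")
    steps = []
    if replica_gb < target_gb:
        steps.append(("replica", target_gb))  # expand the Replica first
    if master_gb < target_gb:
        steps.append(("master", target_gb))   # then the Master cluster
    return steps
```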
Follow these steps to add storage.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to add storage. You will move to the MySQL(DBaaS) Details page.
Click the Add Disk button at the bottom of detailed information. The Additional Storage Request popup window opens.
On the Additional Storage Request popup window, enter purpose and capacity, then click the Confirm button.
Expand Storage
Storage added as data area can be expanded up to 5 TB based on initially allocated capacity. For DB configured with redundancy, it is expanded to both redundancy servers simultaneously.
Follow these steps to expand storage capacity.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose storage you want to expand. You will move to the MySQL(DBaaS) Details page.
Click the Edit icon of the additional Disk you want to expand at the bottom of detailed information. The Edit Additional Storage popup window opens.
On the Edit Additional Storage popup window, enter expansion capacity and click the Confirm button.
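The 5 TB expansion limit stated above can be expressed as a simple validation. This is a hedged sketch assuming GB units; the function is illustrative, not a console API.

```python
# Sketch of the data-storage expansion limit above: storage added as a data
# area can be expanded up to 5 TB (5,120 GB). GB units assumed; illustrative.

MAX_DATA_GB = 5 * 1024  # 5 TB

def validate_expansion(current_gb: int, new_gb: int) -> int:
    """Validate a requested expansion and return the new capacity in GB."""
    if new_gb <= current_gb:
        raise ValueError("expansion must increase capacity")
    if new_gb > MAX_DATA_GB:
        raise ValueError(f"capacity cannot exceed {MAX_DATA_GB} GB (5 TB)")
    return new_gb
```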
Terminate MySQL(DBaaS)
You can reduce operating costs by terminating unused MySQL(DBaaS). However, when terminating the service, the running service may be immediately interrupted, so you should fully consider the impact of service interruption before proceeding with termination.
Caution
For DBs with a Replica configured, the Replica is not deleted when the Master DB is terminated. If you want to delete the Replica as well, please terminate it separately from the resource list.
When terminating a DB, all stored data is deleted, and if backup is set, all backup data is deleted as well.
Follow these steps to terminate MySQL(DBaaS).
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, select the resource to terminate and click the Terminate Service button.
When termination is completed, check whether the resource is terminated on the MySQL(DBaaS) list page.
6.4.2.1 - MySQL(DBaaS) server connection
Scenario Overview
The MySQL(DBaaS) server connection scenario is a scenario where a Bastion host (Virtual Server) and Database service are created, and the DB service is accessed through the Bastion host. To connect to MySQL(DBaaS) stably in the Samsung Cloud Platform environment, it is necessary to create a Bastion host and use it for network connection. To maintain a stable and high level of security, it is recommended to configure the Database service in a Private Subnet environment and configure the Bastion host in a limited Public Subnet environment.
This scenario largely describes the process of creating a Bastion host and Database service, and configuring the network environment for Bastion host and Database connection, so that it can be accessed through a DB connection client.
Figure. MySQL(DBaaS) server connection architecture
Scenario Components
You can configure the scenario using the following services.
Service Group
Service
Detailed Description
Networking
VPC
A service that provides an isolated virtual network in a cloud environment
Networking
VPC > Subnet
A service that allows users to subdivide the network into smaller sections for specific purposes/sizes within the VPC
Networking
VPC > Public IP
A service that reserves a public IP and assigns and returns it to Compute resources
Networking
VPC > Internet Gateway
A service that connects VPC resources to the internet
Networking
Security Group
A virtual firewall that controls the server’s traffic
Database
MySQL(DBaaS)
A service that easily creates and manages MySQL in a web environment
Compute
Virtual Server
Virtual server optimized for cloud computing
Compute
Virtual Server > Keypair
Encryption file used to connect to Virtual Server
Table. List of scenario components
Reference
The default policy of the Security Group is Deny All, so only allowed IPs should be registered.
An All Open In/Outbound policy (Any IP, Any Port) can expose cloud resources to external threats.
You can enhance security by specifying only the necessary IPs and Ports in the policy.
Scenario composition method
To configure the scenario, create the necessary services through the following procedure.
1. Configuring the Network
This explains the process of configuring the network environment for Bastion Host and Database service connection.
On the Summary panel, check the generated detailed information and the expected billing amount, and click the Complete button.
Once creation is complete, check the created resource on the Virtual Server list page.
2-3. Check Bastion host connection ID and PW
Click All Services > Compute > Virtual Server menu. You will move to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. You will move to the Virtual Server list page.
On the Virtual Server list page, click the resource created in 2-2. Creating a Bastion host. You will move to the detailed information page of the corresponding resource.
Click the RDP password inquiry button in the Keypair item on the detailed information page. The RDP password inquiry popup window opens.
Click All Services > Networking > Security Group menu. You will move to the Service Home page of Security Group.
On the Service Home page, click the Security Group menu. You will move to the Security Group list page.
On the Security Group list page, select the resource created in 1-5. Creating a Security Group. You will move to the detailed information page of the corresponding resource.
Click the Rules tab on the detailed information page. You will move to the Rules tab.
On the Rules tab, click the Add Rule button. The Add Rule popup window opens.
On the Add Rule popup window, enter the rules below and click the OK button.
Click All Services > Networking > Firewall menu. You will move to the Service Home page of Firewall.
On the Service Home page, click the Firewall menu. You will move to the Firewall list page.
On the Firewall list page, select the Internet Gateway resource created in 1-3. Creating Internet Gateway. You will move to the detailed information page of the corresponding resource.
Click the Rules tab on the detailed information page. You will move to the Rules tab.
On the Rules tab, click the Add Rule button. The Add Rule popup window opens.
On the Add Rule popup window, enter the rules below and click the OK button.
Source Address
Destination Address
Protocol
Port
Action
Direction
Description
Bastion connection PC IP
Bastion host IP
TCP
3389(RDP)
Allow
Inbound
User PC → Bastion host
Fig. Internet Gateway Firewall rules to be added
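The rule in the figure above can also be written as structured data for review or automation. The following sketch only mirrors the table columns; the class and field names are illustrative and not the console's rule schema.

```python
# Illustrative representation of the Internet Gateway Firewall rule above.
# Field names mirror the table columns; this is not the console's API schema.

from dataclasses import dataclass

@dataclass
class FirewallRule:
    source: str        # Source Address
    destination: str   # Destination Address
    protocol: str
    port: int
    action: str
    direction: str
    description: str

rdp_rule = FirewallRule(
    source="Bastion connection PC IP",
    destination="Bastion host IP",
    protocol="TCP",
    port=3389,          # RDP
    action="Allow",
    direction="Inbound",
    description="User PC -> Bastion host",
)
```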
5. Connect to Database
This describes the process of a user accessing the Database through a DB connection client program.
This guide provides instructions on how to connect using MySQL Workbench. There are various database client programs and CLI utilities, so you can install and use the tools that suit you.
5-1. Connecting to the Bastion host
Run Remote Desktop Connection in the Windows environment of the PC you want to access the Bastion host, enter the NAT IP of the Bastion Host, and click the Connect button.
After a successful remote desktop connection, the User Credential Input Window opens. Enter the ID and Password confirmed in 2-3. Checking Bastion Host Access ID and PW and click the Confirm button.
5-2. Install DB connection client program (MySQL Workbench) on the Bastion host
Go to the official MySQL page and download the MySQL Workbench program.
Share the local drive of the user's PC with the remote session to upload the file to the Bastion host.
Click the Details button under Local devices and resources on the Local Resources tab of Remote Desktop Connection.
Select the local disk where the file was downloaded and click the Confirm button.
Copy the downloaded file to the Bastion host, then run the MySQL Workbench installation file to install it.
5-3. Using DB connection client program (MySQL Workbench) to connect to the Database
Run MySQL Workbench and click Database > Manage connections. The Manage Server Connection popup window will appear.
On the Manage Server Connection popup window, click the New button at the bottom left, enter the database server information created in 3-1. MySQL(DBaaS) service creation, and click the Test Connection button. A Password popup window will appear.
Required Input Element Items
Input Value
Connection Name
Custom (ex. Service Name)
Host name
Database server IP
Port
Database Port
Username
Database username
Table. DB connection client program input items
In the Password popup window, enter the password set in 3-1. MySQL(DBaaS) service creation and click the OK button. When the connection is successful, click the OK button in the Manage Server Connection popup window.
Click Database > Connect to Database. The Connect to Database popup window will appear.
Select the Connection Name registered in Stored Connection to perform database connection. After connection, you can try simple queries, etc.
6.4.2.2 - Manage DB Service
Users can manage MySQL(DBaaS) through Samsung Cloud Platform Console.
Manage Parameters
Provides functionality to easily view and modify database configuration parameters.
View Parameters
Follow these steps to view configuration parameters.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose parameters you want to view and modify. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Parameter Management button. The Parameter Management popup window opens.
On the Parameter Management popup window, click the View button. The View Notification popup window opens.
When the View Notification popup window opens, click the Confirm button. It takes some time to view.
Modify Parameters
Follow these steps to modify configuration parameters.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose parameters you want to view and modify. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Parameter Management button. The Parameter Management popup window opens.
On the Parameter Management popup window, click the View button. The View Notification popup window opens.
When the View Notification popup window opens, click the Confirm button. It takes some time to view.
If modification is needed, click the Edit button and enter modification content in the user-defined value area of the Parameter to modify.
When input is completed, click the Complete button.
Reference
When changing character_set_server value, first check the collation matching that character set with the following command.
SQL> SHOW COLLATION WHERE Charset = 'character set name';
Set the character-set-server, collation-server, and init_connect parameter values using the confirmed collation.
Item
Detailed Description
Restart Required
character-set-server
Specify default character set
Restart Required
collation-server
Specify default collation
Restart Required
init_connect
SQL statement executed when Client connects to database
No Restart Required
Table. Parameter Setting Items
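The restart requirements in the table above can be checked before applying a change. A minimal sketch; the lookup table simply encodes the table rows, and the function name is illustrative.

```python
# Sketch of the restart requirement per parameter, following the table above.
# The mapping encodes the table rows; the helper is illustrative only.

RESTART_REQUIRED = {
    "character-set-server": True,
    "collation-server": True,
    "init_connect": False,
}

def needs_restart(changed: list) -> bool:
    """Return True if any changed parameter requires a DB restart."""
    return any(RESTART_REQUIRED.get(p, False) for p in changed)
```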
Manage DB Users
Provides management functionality to view DB user information and change status information.
View DB Users
Follow these steps to view DB users.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose DB users you want to view. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB User Management button. You will move to the DB User Management page.
On the DB User Management page, click the View button. It takes some time to view.
Change DB User Status
Follow these steps to change the status of viewed DB users.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose DB users you want to modify. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB User Management button. You will move to the DB User Management page.
On the DB User Management page, click the View button. It takes some time to view.
If modification is needed, click the Edit button and change the status area value or enter remarks content.
When input is completed, click the Complete button.
Manage DB Access Control
Provides IP-based DB user access control management functionality. Users can directly specify IPs that can access the database and set it so that only allowed IPs can access.
Notice
Please perform DB user viewing before setting DB access control. For DB user viewing, refer to Manage DB Users.
View DB Access Control
Follow these steps to view DB users where IP access control is set.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose access control you want to manage. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB Access Control Management button. You will move to the DB Access Control Management page.
On the DB Access Control Management page, click the View button. It takes some time to view.
Add DB Access Control
Follow these steps to add IP access control.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose IP access control you want to add. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB Access Control Management button. You will move to the DB Access Control Management page.
On the DB Access Control Management page, click the View button. It takes some time to view.
When viewing is completed, click the Add button. The Add DB Access Control popup window opens.
On the Add DB Access Control popup window, select DB username and enter IP Address.
When input is completed, click the Complete button.
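The IP entered in the Add DB Access Control popup must be a well-formed address. The following sketch validates it with the standard library; the allow-list structure and function name are illustrative, not how the console stores rules.

```python
# Sketch of validating an IP entered in the Add DB Access Control popup.
# Uses the stdlib ipaddress module; the allow-list structure is illustrative.

import ipaddress

def add_access_control(allow_list: dict, username: str, ip: str) -> dict:
    """Add an allowed IP for a DB user after validating the address."""
    ipaddress.ip_address(ip)  # raises ValueError for a malformed address
    allow_list.setdefault(username, set()).add(ip)
    return allow_list

acl = add_access_control({}, "appuser", "192.168.10.25")
```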
Delete DB Access Control
Follow these steps to delete IP access control.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose IP access control you want to delete. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB Access Control Management button. You will move to the DB Access Control Management page.
On the DB Access Control Management page, click the View button. It takes some time to view.
When viewing is completed, click the Delete button. The Delete popup window opens.
On the Delete popup window, click the Confirm button.
Manage Archive
Provides Archive mode setting and Archive Log retention period setting functionality so users can flexibly set Archive log management policies according to their operating environment.
Additionally, it provides functionality to manually delete Archive logs together, enabling efficient management of system resources by cleaning unnecessary log data.
Notice
When creating the service, default settings are Archive mode use and retention period of 3 days.
Set Archive Mode
Follow these steps to set Archive mode.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose Archive mode you want to set. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Archive Settings Management button. You will move to the Archive Settings Management page.
On the Archive Settings Management page, click the View button. It takes some time to view.
Click the Edit button and select whether to use and retention period.
When modification is completed, click the Complete button.
Delete Archive Files
Follow these steps to delete Archive files.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose Archive mode you want to set. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Archive Settings Management button. You will move to the Archive Settings Management page.
On the Archive Settings Management page, if you want to delete all Archive files, click the Delete All Archives button, and if you want to delete only backed-up Archive files, click the Delete Backed-up Archives button.
Export DB Log
Supports exporting log data that requires long-term retention, such as audit (Audit) logs, to Object Storage. Users can directly set the log type to store, the target Bucket, and the export cycle. According to the set criteria, logs are copied and stored to the specified Object Storage.
Additionally, for efficient management of disk space, it provides an option to automatically delete original log files while exporting logs to Object Storage. By using this option, you can effectively secure storage capacity while safely storing log data that requires long-term retention.
Notice
Object Storage creation is required to use DB Log Export functionality. For Object Storage creation, refer to Object Storage User Guide.
Please make sure to check the expiration date of the authentication key. If the authentication key expires, logs are not stored in the Bucket.
Please be careful not to expose authentication key information to the outside.
Set DB Log Export Mode
Follow these steps to set DB Log export mode.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose DB logs you want to export. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB Log Export button. You will move to the DB Log Export page.
On the DB Log Export page, click the Register button. You will move to the DB Log Export Register page.
On the DB Log Export Register page, enter the information and click the Complete button.
Classification
Required
Detailed Description
Log Type
Required
Log type to store
Storage Bucket Name
Required
Object Storage Bucket name to store
Authentication Key > Access key
Required
Access key to access Object Storage to store
Authentication Key > Secret key
Required
Secret key to access Object Storage to store
File Creation Cycle
Required
Cycle to create files in Object Storage
Original Log Deletion
Optional
Whether to delete original log while exporting to Object Storage
Table. MySQL(DBaaS) DB Log Export Configuration Items
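The required and optional items in the table above can be validated before registration. A hedged sketch only; the field names are illustrative and the console form defines the real ones.

```python
# Sketch of the DB Log Export registration items from the table above.
# Field names are illustrative; the console form defines the real ones.

REQUIRED = ("log_type", "bucket_name", "access_key", "secret_key", "file_cycle")

def validate_export_config(cfg: dict) -> dict:
    """Check required DB Log Export fields; original-log deletion is optional."""
    missing = [f for f in REQUIRED if not cfg.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    cfg.setdefault("delete_original", False)  # Original Log Deletion is optional
    return cfg

cfg = validate_export_config({
    "log_type": "audit",          # log type to store
    "bucket_name": "my-bucket",   # Object Storage Bucket name (placeholder)
    "access_key": "AK...",        # placeholder credential
    "secret_key": "SK...",        # placeholder credential
    "file_cycle": "1h",           # file creation cycle (placeholder)
})
```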
Manage DB Log Export
Follow these steps to modify, terminate, or immediately export DB Log export settings.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose DB Log export you want to manage. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the DB Log Export button. You will move to the DB Log Export page.
On the DB Log Export page, click the More button next to the log type you want to manage and click the Immediate Export, Edit, or Terminate button.
Immediate Export: Selected logs are exported to the Bucket of Object Storage previously set.
Edit: Modifies DB Log export mode settings.
Terminate: Terminates DB Log export mode settings.
Minor Version Upgrade
Provides version upgrade functionality for some feature improvements and security patches. Only Minor version upgrade functionality within the same Major version is supported.
Caution
Please check service status through service status synchronization first, then perform version upgrade.
Please proceed with version upgrade after setting backup. If backup is not set, some data may not be recoverable when problems occur during update.
In DBs where a Replica is configured, the Master DB version cannot be higher than the Replica version. Please check the Replica version first and upgrade it if needed.
Backed-up data is automatically deleted after version upgrade is completed.
Follow these steps to upgrade version.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose version you want to upgrade. You will move to the MySQL(DBaaS) Details page.
Click the Edit button in the Image Version item. The Version Upgrade popup window opens.
On the Version Upgrade popup window, select modified version and whether to set backup, then click the Confirm button.
On the Version Upgrade Notification popup window, click the Confirm button.
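The upgrade preconditions above (same Major version only, and the Master may not end up newer than its Replica) can be sketched as a version check. This assumes a three-part "X.Y.Z" scheme where "X.Y" is the Major version; the function is illustrative.

```python
# Sketch of the minor-upgrade constraints above. Assumes "X.Y.Z" versions
# where "X.Y" is the Major version; illustrative only, not a console API.

def can_upgrade(current, target, replica=None):
    """Return True if target is a valid minor upgrade from current."""
    def parse(v):
        x, y, z = map(int, v.split("."))
        return (x, y), z  # (major, patch)

    cur_major, cur_patch = parse(current)
    tgt_major, tgt_patch = parse(target)
    if tgt_major != cur_major or tgt_patch <= cur_patch:
        return False  # only newer minor versions within the same Major
    if replica is not None:
        rep_major, rep_patch = parse(replica)
        if (tgt_major, tgt_patch) > (rep_major, rep_patch):
            return False  # Master cannot become newer than the Replica
    return True
```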
Configure Migration
Provides Migration functionality that replicates data in real time, synchronizing with the operating database using the Replication method without service interruption.
Follow these steps to configure Migration.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to migrate. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Migration Configuration button. The Migration Configuration popup window opens.
On the Migration Configuration popup window, check the notice and click the Confirm button. You will move to the Migration Configuration page.
On the Migration Configuration page, enter the information and click the Connection Check button.
When connection is completed, click the Complete button.
Classification
Required
Detailed Description
Source DB Database Name
Required
Database name of Source DB to be Migration target
Source DB IP
Required
IP of Source DB to be Migration target
Source DB Port
Required
Port of Source DB to be Migration target
Source DB Username
Required
Username of Source DB to be Migration target
Source DB Password
Required
Password of Source DB to be Migration target
Table. MySQL(DBaaS) Migration Configuration Items
Promote Migration Cluster to Master Cluster
You can promote the configured Migration Cluster to Master Cluster.
Caution
When promoting to Master, synchronization with Source DB which is Migration target stops.
Follow these steps to promote Migration Cluster to Master.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to promote to Master. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Master Promotion button. The Master Promotion Notification popup window opens.
On the Master Promotion Notification popup window, click the Confirm button.
Upgrade OS Kernel
You can upgrade OS Kernel for operating database feature improvements and security patch application.
Caution
Service is interrupted while OS upgrade is in progress.
Upgrade time may vary depending on version, and if upgrade fails, it reverts to previous configuration.
Cannot recover to previous OS after upgrade is completed.
Follow these steps to upgrade OS Kernel.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource whose OS Kernel you want to upgrade. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the OS (Kernel) Upgrade button. The OS (Kernel) Upgrade Notification popup window opens.
On the OS (Kernel) Upgrade Notification popup window, check the notice and click the Confirm button.
6.4.2.3 - DB Backup and Recovery
The user can set up a backup of MySQL(DBaaS) through the Samsung Cloud Platform Console and restore it with the backed-up file.
MySQL(DBaaS) Backup
MySQL(DBaaS) provides a data backup feature based on its own backup command. It also provides an optimized backup environment for data protection and management through backup history checking and backup file deletion functions.
You can set or modify the backup of a created resource.
Caution
For stable backup, it is recommended to add a separate BACKUP storage or to sufficiently expand the storage capacity. Especially when the backup target data exceeds 100 GB and the data change is frequent, please secure additional storage equivalent to about 60% of the data capacity. For storage addition and expansion methods, please refer to the MySQL(DBaaS) Add Storage, MySQL(DBaaS) Expand Storage guides.
If backup is set, backups are performed at the specified time, and additional fees are incurred depending on the backup capacity.
If the backup setting is changed to unset, backup operations stop immediately, and the saved backup data is deleted and can no longer be used.
To set up backup, follow these steps.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) list page.
On the MySQL(DBaaS) list page, click the resource to set the backup. You will move to the MySQL(DBaaS) details page.
Click the Edit button of the backup item. The Backup Settings popup window opens.
If you set up a backup, click Use in the Backup Settings popup window, select the retention period, backup start time, and Archive backup cycle, and then click the Confirm button.
If you want to stop the backup settings, uncheck Use in the Backup Settings popup window and click the OK button.
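The storage-sizing guidance in the caution above (secure roughly 60% additional storage when the backup target exceeds 100 GB and changes frequently) can be sketched as a small calculation. The thresholds simply restate the caution; the function is illustrative.

```python
# Sketch of the backup-storage sizing guidance above: when backup target data
# exceeds 100 GB and changes frequently, secure additional storage of about
# 60% of the data capacity. GB units assumed; illustrative only.

def recommended_extra_backup_gb(data_gb: float, frequent_changes: bool) -> float:
    """Return the recommended additional backup storage in GB."""
    if data_gb > 100 and frequent_changes:
        return round(data_gb * 0.6, 1)
    return 0.0
```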
Check Backup History
Notice
To set up notifications for backup success and failure, you can set it up through the Notification Manager product. For detailed usage guidelines on notification policy settings, please refer to Creating a Notification Policy.
To view the backup history, follow these steps.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) list page.
On the MySQL(DBaaS) list page, click the resource whose backup history you want to check. You will move to the MySQL(DBaaS) details page.
Click the Backup History button. The Backup History popup window opens.
In the Backup History popup window, you can check the backup status, version, backup start time, backup completion time, and capacity.
Delete backup files
Caution
Backup files cannot be restored after deletion. Please make sure to confirm that the data is unnecessary before deleting it.
To delete the backup history, follow these steps.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) list page.
On the MySQL(DBaaS) list page, click the resource whose backup history you want to check. You will move to the MySQL(DBaaS) details page.
Click the Backup History button. The Backup History popup window opens.
In the Backup History popup window, check the file you want to delete and click the Delete button.
Recovering MySQL(DBaaS)
In the event of a failure or data loss that requires restoration from a backup file, recovery is possible based on a specific point in time through the recovery function. When performing MySQL (DBaaS) recovery, a new server is created with the OS image at the initial provisioning time, the DB is installed with the version at the backup point in time, and the recovery proceeds with the DB configuration information and data.
Caution
To perform recovery, free capacity of at least 2 times the data-type Disk capacity is required. If the disk capacity is insufficient, recovery may fail.
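The disk precondition in the caution above can be expressed as a one-line check. A minimal sketch assuming GB units; the function name is illustrative.

```python
# Sketch of the recovery precondition above: at least twice the data-type
# disk capacity must be available, or recovery may fail. GB units assumed.

def recovery_disk_ok(data_gb: int, free_gb: int) -> bool:
    """Return True if free disk space meets the 2x data-capacity requirement."""
    return free_gb >= 2 * data_gb
```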
To restore MySQL(DBaaS), follow these steps.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) list page.
On the MySQL(DBaaS) list page, click the resource you want to restore. You will move to the MySQL(DBaaS) details page.
Click the Database Recovery button. You will move to the Database Recovery page.
Enter the corresponding information in the Database Recovery Configuration area, and then click the Complete button.
Classification
Necessity
Detailed Description
Recovery Type
Required
Set the point in time you want to recover
Backup Point (Recommended): Recover based on a backup file. Select from the list of backup file points displayed in the list
Custom Point: Recover to a desired point within the recoverable range. The recoverable period runs from the initial backup start time to the current time; depending on the Archive backup cycle setting, recovery is possible up to 1 hour/30 minutes/10 minutes/5 minutes before the present. Select the date and time you want to recover to
Server name prefix
Required
Server name of the recovery DB
Must start with a lowercase English letter; enter 3 to 16 characters using lowercase letters, numbers, and the special character (-)
A postfix such as 001 or 002 is appended to the server name prefix to create the actual server name
Cluster Name
Required
Cluster name of the recovery DB
Enter 3 to 20 characters using English letters
A cluster is a unit that groups multiple servers
Service Type > Server Type
Required
Server type where the recovery DB will be installed
Standard: Standard specification commonly used
High Capacity: High-capacity server with 24 vCore or more (to be provided later)
Service Type > Planned Compute
Optional
Current status of resources with Planned Compute set
In Use: Number of resources with Planned Compute set that are currently in use
Setting: Number of resources with Planned Compute set
Coverage Preview: Amount applied by Planned Compute for each resource
Create Planned Compute Service: Move to the Planned Compute service application page
DATA: Storage area for table data, archive files, etc.
The storage type set in the original cluster is applied in the same way
Capacity can be entered in multiples of 8 in the range of 16 to 5,120
Additional: DATA, Archive, TEMP, Backup data storage area
The storage type set in the original cluster is applied in the same way
Only DATA, TEMP, and Archive purposes can be added in the recovery DB
After selecting Use, enter the purpose and capacity of the storage
To add storage, click the + button, and to delete, click the x button
Capacity can be entered in multiples of 8 in the range of 16 to 5,120, and up to 9 can be created
Database username
Required
Database username set in the original DB
Database Port number
Required
Database Port number set in the original DB
IP Access Control
Optional
IP address to access the recovery DB
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete the entered IP, click the x button next to the entered IP
Maintenance period
Optional
DB maintenance period
Select Use to set the day of week, start time, and duration
Setting a maintenance period is recommended for stable DB management; patch work is performed at the set time and causes a service interruption
If set to not use, Samsung SDS is not responsible for problems caused by patches not being applied.
Tag
Optional
Add tags
Click the Add Tag button, then enter or select Key and Value values
Table. MySQL(DBaaS) Recovery Configuration Items
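The naming and capacity rules in the table above can be sketched as a small validation helper. This is an illustrative interpretation of the documented rules (the regexes and function names are assumptions, not an official SDK); the console performs the authoritative validation.

```python
import re

# Interpretations of the documented input rules (assumptions, not an official SDK):
# - server name prefix: starts lowercase, 3-16 chars of lowercase letters, digits, hyphen
# - cluster name: English letters, 3-20 chars
SERVER_PREFIX_RE = re.compile(r"^[a-z][a-z0-9-]{2,15}$")
CLUSTER_NAME_RE = re.compile(r"^[A-Za-z]{3,20}$")

def valid_storage_capacity(gb: int) -> bool:
    """Capacity must be a multiple of 8 between 16 and 5,120 GB."""
    return 16 <= gb <= 5120 and gb % 8 == 0

def validate_recovery_config(prefix: str, cluster: str, capacity_gb: int) -> list[str]:
    """Return a list of rule violations (empty list means the inputs look valid)."""
    errors = []
    if not SERVER_PREFIX_RE.fullmatch(prefix):
        errors.append("server name prefix: start lowercase, 3-16 chars of [a-z0-9-]")
    if not CLUSTER_NAME_RE.fullmatch(cluster):
        errors.append("cluster name: English letters only, 3-20 chars")
    if not valid_storage_capacity(capacity_gb):
        errors.append("capacity: multiple of 8 in range 16-5120")
    return errors
```

For example, `validate_recovery_config("mysql-rec", "RecoveryCluster", 128)` passes all three checks, while a prefix starting with a digit or a capacity of 20 GB would be rejected.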
6.4.2.4 - Configure Read Replica
Users can create the service by entering required information for Read Replica through Samsung Cloud Platform Console and selecting detailed options.
Configure Replica
Through Replica configuration, you can create replica servers for read-only or disaster recovery purposes. You can create up to 5 Replicas per Database.
Notice
To configure Replica for disaster recovery, please create through Replica Configuration (Other Region).
Follow these steps to configure Replica.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to configure Replica. You will move to the MySQL(DBaaS) Details page.
Click the Replica Configuration button. You will move to the Replica Configuration page.
After entering information in the Replica Configuration area, click the Complete button.
Classification
Required
Detailed Description
Region
Required
Region to configure Replica
Displayed only when Replica Configuration (Other Region) is selected
Replica Count
Required
Number of Replicas to configure
Can configure up to 5 per cluster
If you select a value of 2 or more, you need to additionally enter Replica name and service type information
Replica Name
Required
Replica server name
Start with lowercase English letters and enter 3 to 19 characters using lowercase letters, numbers, and special characters (-)
The entered Replica name is displayed as the cluster name in the list
Service Type > Server Type
Required
Replica server type
Standard: Standard specifications generally used
High Capacity: Large capacity server with 24vCore or more
Service Type > Planned Compute
Optional
Resource status where Planned Compute is set
In Use: Number of resources in use among resources where Planned Compute is set
Set: Number of resources where Planned Compute is set
Coverage Preview: Amount applied as Planned Compute per resource
Create Planned Compute Service: Move to Planned Compute service application page
IP Access Control
Optional
IP address allowed to access the Replica
Since the access policy is set for the IPs entered on the page, a separate Security Group policy setting is not required
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32), and click the Add button
To delete entered IP, click the x button next to the entered IP
Maintenance Window
Optional
DB maintenance window
If you select Use, set day of week, start time, and duration
It is recommended to set a maintenance window for stable DB management. Patch work is performed at the set time and service interruption occurs
If set to Not Used, Samsung SDS is not responsible for problems caused by not applying patches.
Tags
Optional
Add tags
Click the Add Tag button and then enter or select Key, Value values
Table. MySQL(DBaaS) Replica Configuration Items
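The Replica limits described above (at most 5 Replicas per cluster; names starting with a lowercase letter, 3 to 19 characters of lowercase letters, digits, and hyphens) can be sketched as a pre-flight check. The helper below is a hypothetical illustration of the documented constraints, not a real API.

```python
import re

MAX_REPLICAS_PER_CLUSTER = 5  # documented limit: up to 5 Replicas per cluster
# Interpretation of the documented name rule: lowercase start, 3-19 chars
REPLICA_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,18}$")

def plan_replicas(existing: int, names: list[str]) -> list[str]:
    """Check a requested Replica configuration against the documented limits.

    Raises ValueError if the cluster limit would be exceeded or a name is invalid.
    """
    if existing + len(names) > MAX_REPLICAS_PER_CLUSTER:
        raise ValueError(f"at most {MAX_REPLICAS_PER_CLUSTER} replicas per cluster")
    for name in names:
        if not REPLICA_NAME_RE.fullmatch(name):
            raise ValueError(f"invalid replica name: {name!r}")
    return names
```

With 3 Replicas already configured, requesting 2 more succeeds, but requesting 2 on top of 4 exceeds the per-cluster limit.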
Reconfigure Replica
If a network failure occurs or replication from the Master Cluster is delayed, you can replicate the Master Cluster's data again through the Replica reconfiguration function.
Follow these steps to reconfigure Replica.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to reconfigure Replica. You will move to the MySQL(DBaaS) Details page.
Click the Replica Reconfiguration button. The Replica Reconfiguration Notification popup window opens.
On the Replica Reconfiguration Notification popup window, click the Confirm button.
Promote Replica Cluster to Master Cluster
You can promote the configured Replica Cluster to Master Cluster.
Caution
When promoting to Master, synchronization with the existing Master Cluster stops.
Follow these steps to promote Replica Cluster to Master.
Click All Services > Database > MySQL(DBaaS) menu. You will move to the Service Home page of MySQL(DBaaS).
On the Service Home page, click the MySQL(DBaaS) menu. You will move to the MySQL(DBaaS) List page.
On the MySQL(DBaaS) List page, click the resource to promote to Master. You will move to the MySQL(DBaaS) Details page.
Click the More button and click the Master Promotion button. The Master Promotion Notification popup window opens.
On the Master Promotion Notification popup window, click the Confirm button.
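The routing consequence of a Master promotion can be illustrated with a toy endpoint tracker. Everything here (class and method names, endpoint strings) is hypothetical; it only mirrors the documented behavior that synchronization with the old Master stops and the promoted Replica becomes the new write target, so applications must switch their write endpoint.

```python
class ClusterEndpoints:
    """Toy model of write routing around a Replica-to-Master promotion.

    Hypothetical helper for illustration only; it is not part of any
    Samsung Cloud Platform SDK or API.
    """

    def __init__(self, master: str, replicas: list[str]):
        self.master = master
        self.replicas = list(replicas)

    def promote(self, replica: str) -> None:
        # Promotion detaches the old Master (synchronization stops) and the
        # promoted Replica becomes the new write target.
        if replica not in self.replicas:
            raise ValueError("not a configured replica")
        self.replicas.remove(replica)
        self.master = replica

    def write_endpoint(self) -> str:
        return self.master
```

After `promote("rep-01")`, writes must go to `rep-01`; the remaining Replicas are unaffected.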
6.4.3 - API Reference
API Reference
6.4.4 - CLI Reference
CLI Reference
6.4.5 - Release Note
MySQL(DBaaS)
2026.03.19
FEATURE: Disaster recovery Replica configuration and OS(Kernel) upgrade functions added, Servicewatch integration function provided
You can configure a disaster recovery Replica through the Replica configuration (Other Region) function.
The OS(Kernel) upgrade function applies the latest security patches and enhances stability.
You can monitor metrics and logs through integration with Servicewatch.
2025.07.01
FEATURE: User (access control) management and Archive setting functions added, DB Audit Log export function added, backup notification function provided, Migration function added
MySQL(DBaaS) function additions
2nd generation server type added
Added 2nd generation (db2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, refer to MySQL(DBaaS) Server Type
DB user and access control management and Archive setting function added
Provides notification function for backup success and failure. For details, refer to Create Notification Policy
Migration function added
Provides non-stop data migration function based on Replication. For details, refer to Configure Migration
Added HDD, HDD_KMS types to Block Storage type
2025.02.27
FEATURE: Server type added, per-server IP setting and Block Storage capacity expansion functions added
MySQL(DBaaS) function changes
2nd generation server type added
Added 2nd generation (dbh2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, refer to MySQL(DBaaS) Server Type
After service creation, Block Storage capacity expansion is possible.
Per-server network IP setting function added to allow common settings or per-server settings depending on usage purpose.
Samsung Cloud Platform common function changes
Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.10.01
NEW: MySQL(DBaaS) service official version release
The MySQL(DBaaS) service has been released, allowing easy creation and management of MySQL in a web environment.
6.5 - Microsoft SQL Server(DBaaS)
6.5.1 - Overview
Service Overview
Microsoft SQL Server (DBaaS) is a representative relational database management system (RDBMS) used in various applications. Samsung Cloud Platform provides an environment that can automate the installation of Microsoft SQL Server through a web-based console and perform management functions for operation.
Microsoft SQL Server(DBaaS) is designed with an Always On based availability architecture, and when the content of the Primary server changes, it is synchronously replicated to the Secondary server. Additionally, it provides an automatic backup function at user-specified times to prepare for issues with the DB server or data, supporting data recovery at the desired point in time.
Provided Features
Microsoft SQL Server(DBaaS) provides the following features.
Auto Provisioning: Allows installation and configuration of Database (DB) via UI, provides a Primary-Secondary redundancy configuration built on Always On. In case of Primary server failure, it automatically fails over to Secondary.
Operation Control Management: Provides a function to control the status of running servers. In addition to start and stop, restart is possible if there is an issue with the DB or to apply configuration values. When configured for high availability (HA), the user can directly perform node switching between Primary-Secondary via Switch-over.
Backup and Recovery: Provides data backup based on native backup commands. The backup time window, retention period, and Full backup days can be set by the user, and additional fees may apply based on backup volume. A recovery function for backed-up data is also provided: when the user performs a recovery, a separate database is created and restored to the point in time selected by the user (a backup storage point or a user-specified point). When restoring to a user-specified point, the restore point can be set to 5 minutes/10 minutes/30 minutes/1 hour before, based on the stored backup and archive files.
Version Management: Provides version upgrade (Minor) functionality due to some feature improvements and security patches. Whether to perform backup for the version upgrade can be selected by the user, and if backup is performed, data is backed up before applying the patch, then the DB engine is updated.
Secondary configuration: You can additionally configure a read-only Read Replica (Secondary Replica), enabling read performance scaling and load balancing.
Audit setting: Provides an Audit setting feature that can monitor the user’s DB access and the results of DDL (Data Definition Language)/DML (Data Manipulation Language) execution.
Parameter Management: Performance improvement and security-related DB configuration parameter modifications are possible.
Service status query: Retrieves the final status of the current DB service.
Monitoring: CPU, Memory, DB performance monitoring information can be checked through the Cloud Monitoring service.
DB User Management: View and manage DB account (user) information registered in the DB.
DB Log Export: Through Audit settings, you can export the stored logs to the user’s Object Storage.
Components
Microsoft SQL Server(DBaaS) provides pre-validated engine versions and various server types. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by Microsoft SQL Server (DBaaS) are as follows.
Technical support is available until the supplier's EoTS (End of Technical Service) date; the EOS date, after which new creation stops, is set six months before the EoTS date.
According to the supplier’s policy, the EOS and EoTS dates may change, so please refer to the supplier’s license management policy page for details.
Standard: Standard specifications (vCPU, Memory) configuration commonly used
High Capacity: Large server specifications of 24 vCores or more
Server specifications
db1
Provided server specifications
db1: Standard specifications (vCPU, Memory) configuration commonly used
dbh2: Large-scale server specifications
Provides servers with 24 vCores or more
Server specifications
v2
Number of vCores
v2: 2 virtual cores
Server specifications
m4
Memory capacity
m4: 4GB Memory
Table. Microsoft SQL Server (DBaaS) server type components
Preliminary Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
A service that provides an independent virtual network in a cloud environment
Table. Microsoft SQL Server(DBaaS) Preliminary Service
6.5.1.1 - Server Type
Microsoft SQL Server(DBaaS) server type
Microsoft SQL Server(DBaaS) provides a server type composed of various combinations such as CPU, Memory, Network Bandwidth, etc.
When creating Microsoft SQL Server (DBaaS), the Database Engine is installed according to the server type selected for the purpose of use.
The server types supported by Microsoft SQL Server (DBaaS) are as follows.
Standard db1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type distinction
Standard: Composed of standard specifications (vCPU, Memory) commonly used
High Capacity: Server specifications with high capacity over Standard
Server Specifications
db1
Classification of provided server type and generation
db1: db means general specifications, and 1 means the generation
dbh2: h means large-capacity server specifications, and 2 means the generation
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory Capacity
m4: 4GB Memory
Table. Microsoft SQL Server(DBaaS) server type format
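The naming format in the table above can be decoded mechanically: `db`, an optional `h` for High Capacity, the generation number, then `v<vCores>` and `m<memory GB>`. The parser below is a sketch based on that format (the function name and return shape are illustrative assumptions).

```python
import re

# Server type format from the table above: db[h]<generation>v<vCores>m<memoryGB>,
# e.g. "db1v2m4" (Standard, gen 1) or "dbh2v24m48" (High Capacity, gen 2).
TYPE_RE = re.compile(r"^db(h?)(\d+)v(\d+)m(\d+)$")

def parse_server_type(name: str) -> dict:
    """Decode a server type string into its documented components."""
    m = TYPE_RE.fullmatch(name)
    if not m:
        raise ValueError(f"unrecognized server type: {name!r}")
    return {
        "classification": "High Capacity" if m.group(1) else "Standard",
        "generation": int(m.group(2)),
        "vcores": int(m.group(3)),
        "memory_gb": int(m.group(4)),
    }
```

For example, `parse_server_type("dbh2v24m48")` reports a High Capacity, 2nd generation server with 24 vCores and 48 GB of memory, matching the dbh2 specification table below.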
db1 server type
The db1 server type of Microsoft SQL Server (DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
db1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
db1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
db1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
db1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
db1v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
db1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
db1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
db1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
db1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
db1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
db1v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
db1v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
db1v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
db1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
db1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
db1v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
db1v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
db1v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
db1v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
db1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
db1v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
db1v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
db1v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
db1v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
db1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
db1v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
db1v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
db1v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
db1v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
db1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
db1v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
db1v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
db1v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
db1v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
db1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
db1v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
db1v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
db1v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
db1v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
db1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Microsoft SQL Server(DBaaS) server type specifications - db1 server type
db2 server type
The db2 server type of Microsoft SQL Server(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
db2v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
db2v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
db2v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
db2v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
db2v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
db2v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
db2v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
db2v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
db2v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
db2v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
db2v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
db2v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
db2v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
db2v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
db2v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
db2v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
db2v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
db2v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
db2v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
db2v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
db2v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
db2v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
db2v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
db2v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
db2v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
db2v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
db2v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
db2v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
db2v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
db2v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
db2v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
db2v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
db2v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
db2v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
db2v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
db2v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
db2v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
db2v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
db2v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
db2v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Microsoft SQL Server(DBaaS) server type specifications - db2 server type
dbh2 server type
The dbh2 server type of Microsoft SQL Server (DBaaS) is provided with large-capacity server specifications and is suitable for database workloads for large-scale data processing.
Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
High Capacity
dbh2v24m48
24 vCore
48 GB
Up to 25 Gbps
High Capacity
dbh2v24m96
24 vCore
96 GB
Up to 25 Gbps
High Capacity
dbh2v24m192
24 vCore
192 GB
Up to 25 Gbps
High Capacity
dbh2v24m288
24 vCore
288 GB
Up to 25 Gbps
High Capacity
dbh2v32m64
32 vCore
64 GB
Up to 25 Gbps
High Capacity
dbh2v32m128
32 vCore
128 GB
Up to 25 Gbps
High Capacity
dbh2v32m256
32 vCore
256 GB
Up to 25 Gbps
High Capacity
dbh2v32m384
32 vCore
384 GB
Up to 25 Gbps
High Capacity
dbh2v48m192
48 vCore
192 GB
Up to 25 Gbps
High Capacity
dbh2v48m576
48 vCore
576 GB
Up to 25 Gbps
High Capacity
dbh2v64m256
64 vCore
256 GB
Up to 25 Gbps
High Capacity
dbh2v64m768
64 vCore
768 GB
Up to 25 Gbps
High Capacity
dbh2v72m288
72 vCore
288 GB
Up to 25 Gbps
High Capacity
dbh2v72m864
72 vCore
864 GB
Up to 25 Gbps
High Capacity
dbh2v96m384
96 vCore
384 GB
Up to 25 Gbps
High Capacity
dbh2v96m1152
96 vCore
1152 GB
Up to 25 Gbps
High Capacity
dbh2v128m512
128 vCore
512 GB
Up to 25 Gbps
High Capacity
dbh2v128m1536
128 vCore
1536 GB
Up to 25 Gbps
Table. Microsoft SQL Server(DBaaS) server type specifications - dbh2 server type
6.5.1.2 - Monitoring Metrics
Microsoft SQL Server(DBaaS) Monitoring Metrics
The following table shows the performance monitoring metrics of Microsoft SQL Server(DBaaS) that can be checked through Cloud Monitoring. For detailed usage of Cloud Monitoring, please refer to the Cloud Monitoring guide.
Number of SQL processes blocked by other processes
cnt
Lock Waits [Per Second]
Average number of lock waits per second
cnt
Page IO Latch Wait Time
Average wait time for Page IO latch waits
ms
Slowqueries
Long-running query (slow query)
cnt
Slowquery CPU Time
Long-running query (slow query)
ms
Slowquery Execute Context ID
Long-running query (slow query)
ID
Slowquery Memory Usage
Long-running query (slow query)
bytes
Slowquery Session ID
Long-running query (slow query)
ID
Slowquery Wait Duration Time
Long-running query (slow query)
ms
Tablespace Used
Data volume size
bytes
Transaction Time [MAX]
Long-running transaction
cnt
Table. Microsoft SQL Server(DBaaS) Monitoring Metrics
6.5.2 - How-to guides
The user can enter the required information for Microsoft SQL Server (DBaaS) through the Samsung Cloud Platform Console, select detailed options, and create the service.
Microsoft SQL Server(DBaaS) Create
You can create and use the Microsoft SQL Server (DBaaS) service from the Samsung Cloud Platform Console.
Notice
Before creating the service, please configure the VPC’s Subnet type as General.
If the Subnet type is Local, the creation of the corresponding Database service is not possible.
To create Microsoft SQL Server (DBaaS), follow the steps below.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) Create button on the Service Home page. You will be taken to the Microsoft SQL Server(DBaaS) Create page.
On the Microsoft SQL Server(DBaaS) Create page, enter the information required to create the service and select detailed options.
Select the required information in the Image and version selection area.
Category
Required or not
Detailed description
Image Version
Required
Provides the version list of Microsoft SQL Server(DBaaS)
Table. Microsoft SQL Server (DBaaS) Image and version selection items
Enter or select the required information in the Service Information Input area.
Category
Required or not
Detailed description
Server Name Prefix
Required
Server name where DB will be installed
Must start with a lowercase English letter; enter 3 to 13 characters using lowercase letters, numbers, and the special character (-)
A postfix such as 001 or 002 is appended to the server name prefix to create the actual server name
Cluster Name
Required
Cluster name composed of DB servers
Enter using English letters, 3 to 20 characters
A cluster is a unit that groups multiple servers
Service Type > Server Type
Required
Server type where DB will be installed
Standard: Standard specifications commonly used
High Capacity: Large-capacity server with 24 vCore or more
The configured storage type is applied identically to additional storage
Capacity must be entered as a multiple of 8 within the range 16 ~ 5,120
Additional: Data storage area
After selecting Use, enter the storage’s purpose and capacity
Click the + button to add storage, and the x button to delete
Capacity can be entered as a multiple of 8 within the range 16 ~ 5,120, and up to 9 can be created
Redundancy Configuration
Optional
Redundancy Configuration Status
If redundancy configuration is used, the DB instance is configured as Active DB and Standby DB
Network > Common Settings
Required
Network settings where servers generated by the service are installed
Select if you want to apply the same settings to all servers being installed
Select a pre‑created VPC, Subnet, IP, and Public NAT
IP can only be auto‑generated
The Public NAT feature is available only when the VPC is connected to an Internet Gateway. If you check Use, you can select from reserved IPs in the VPC product’s Public IP. For more information, see Create Public IP
Network > Server-specific Settings
Required
Network settings where servers generated by the service are deployed
Select if you want to apply different settings per installed server
Select a pre‑created VPC, Subnet, IP, and Public NAT
Enter each server’s IP
The Public NAT feature is available only when the VPC is connected to an Internet Gateway. If Use is checked, you can select from reserved IPs in the VPC product’s Public IP. For more information, see Create Public IP.
IP Access Control
Optional
Service Access Policy Settings
Since the access policy is set for the IP entered on the page, a separate Security Group policy setting is not required
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete an entered IP, click the x button next to the entered IP
Maintenance Period
Optional
DB Maintenance Period
Select Use to set the day of week, start time, and duration
Setting a maintenance period is recommended for stable DB management; patch work is performed at the set time and causes a service interruption
If set to not use, Samsung SDS is not responsible for issues arising from patches not being applied.
Table. Microsoft SQL Server (DBaaS) Service Configuration Items
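The IP Access Control field above accepts either a single IP (e.g., 192.168.10.1) or a CIDR block (e.g., 192.168.10.0/24). Python's standard `ipaddress` module can check entries before you paste them into the console; treating a bare IP as a /32 network matches the examples in the table. This is a convenience sketch, not part of the platform.

```python
import ipaddress

def normalize_access_entry(entry: str) -> str:
    """Validate an IP Access Control entry in IP or CIDR form.

    A single IP (e.g. "192.168.10.1") is treated as a /32 network; a CIDR
    (e.g. "192.168.10.0/24") is kept as entered. Raises ValueError for
    malformed entries or CIDRs with host bits set.
    """
    net = ipaddress.ip_network(entry, strict=True)
    return str(net)
```

Note that `strict=True` rejects entries like `192.168.10.1/24`, where host bits are set inside the prefix, which is a common source of access-control typos.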
Database configuration required information input Enter or select the required information in the area.
Category
Required or not
Detailed description
Database Service Name
Required
Database Management Unit Name
Must start with an uppercase English letter; enter 1 to 15 characters using English letters
Database name > Default
Required
Database name applied when installing the DB
Must start with an English letter; enter 3 to 20 characters using English letters, numbers, and special characters (., _)
Database name > Add
Required
Database name applied when installing the DB
Select Use, then enter the name of the Database to install. It must start with an English letter and be 3 to 20 characters using English letters, numbers, and special characters (., _)
For each Database, the drive where data is stored can be selected from the drives added in Service Type > Block Storage
Adding Databases is only possible in the Enterprise version, up to a maximum of 100
Database username
Required
DB user name
An account with the same name is also created on the OS
Enter using lowercase English letters, 2 to 20 characters
The following names cannot be used as a Database username
Database password
Required
DB connection password
Enter 8 to 30 characters including letters, numbers, and special characters (" and ' excluded)
Database password verification
Required
DB connection password verification
Re-enter the DB connection password identically
Database Port number
Required
DB connection port number
Enter DB port within the range 1200 - 65535
License
Required
SQL Server License Key
Enter the issued license key
If the entered license key is not valid, the service may not be created
Backup > Use
Optional
Backup usage status
Select Use to set the backup file retention period, backup start time, Full backup schedule (day of week), and Archive backup cycle
Backup > Retention Period
Optional
Backup Retention Period
Select the backup retention period. File retention period can be set from 7 days to 35 days.
Backup files incur additional charges based on size.
Backup > Backup Start Time
Optional
Backup Start Time
Select backup start time
The minutes during which the backup is performed are set randomly, and the backup end time cannot be set
Backup > Full Backup Schedule(Day of Week)
Optional
Full Backup Schedule
Select the day of week for Full backup execution
Full backup is performed every week
Backup > Archive backup frequency
Optional
Archive backup frequency
Select the Archive backup frequency
Archive backup frequency is recommended at 1 hour. Selecting 5 minutes, 10 minutes, or 30 minutes may affect DB performance
Audit Log Settings
Optional
Whether to save Audit Log
Select Use to configure the Audit Log feature
DDL and user connection information records are saved
Enabling Audit may degrade DB performance
Parameter
Required
DB configuration parameters
Click the Search button to view detailed parameter information
Parameters can be modified after service creation is complete; a DB restart is required after modification
DB Collation
Optional
Data sorting method
Specifies how data is sorted and compared; operation results may differ depending on this setting.
Time zone
Required
Standard time zone to be used by the Database
Table. Microsoft SQL Server(DBaaS) Database configuration items
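Two of the rules in the table above are easy to get wrong: the DB port must fall within 1200-65535, and the password must be 8 to 30 characters with double and single quotes excluded. The checks below are an interpretation of those documented rules (function names are illustrative; the console remains authoritative).

```python
def valid_port(port: int) -> bool:
    """DB connection port must be within the documented range 1200-65535."""
    return 1200 <= port <= 65535

# Documented exclusions: double quote (") and single quote (')
FORBIDDEN_PASSWORD_CHARS = {'"', "'"}

def valid_password(pw: str) -> bool:
    """8-30 characters of letters, numbers, and special characters, quotes excluded.

    Interpretation of the documented rule, not an exhaustive policy check.
    """
    if not 8 <= len(pw) <= 30:
        return False
    return not (set(pw) & FORBIDDEN_PASSWORD_CHARS)
```

For instance, SQL Server's conventional port 1433 is acceptable, while 1000 falls below the allowed range.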
Enter or select the required information in the Additional Information Input area.
Category
Required or not
Detailed description
Tag
Optional
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Microsoft SQL Server (DBaaS) Additional Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, then click the Create button.
When creation is complete, check the created resource on the Resource List page.
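Once the resource is Running, an application connects using the IP:Port shown on the resource's Details page along with the Database username and password set at creation. The sketch below only builds an ODBC-style connection string; the driver name and all host/port/user values are illustrative assumptions, not values supplied by the platform.

```python
def sqlserver_conn_string(host: str, port: int, database: str, user: str) -> str:
    """Build an ODBC-style connection string for a created SQL Server instance.

    Illustrative only: substitute the IP:Port from the Details page, your
    database name and username, and the real password for <password>.
    """
    return (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={host},{port};DATABASE={database};UID={user};PWD=<password>"
    )
```

The resulting string can be passed to an ODBC client such as pyodbc; the `<password>` placeholder is deliberately left unfilled.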
Microsoft SQL Server(DBaaS) Check detailed information
The Microsoft SQL Server(DBaaS) service allows you to view and edit the full resource list and detailed information. The Microsoft SQL Server(DBaaS) Details page consists of Details, Tags, and Operation History tabs; for databases with a configured Replica, a Replica Information tab is additionally provided.
To view Microsoft SQL Server(DBaaS) detailed information, follow the steps below.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. You will be taken to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource whose details you want to view. You will move to the Microsoft SQL Server(DBaaS) Details page.
At the top of the Microsoft SQL Server(DBaaS) Details page, status information and information about additional features are displayed.
Category
Detailed description
Cluster Status
Cluster status with DB installed
Creating: Cluster is being created
Editing: Cluster is changing to operation execution state
Error: Cluster encountered a failure while performing tasks
If it occurs continuously, contact the administrator
Failed: Cluster failed during creation
Restarting: Cluster is being restarted
Running: Cluster is operating normally
Starting: Cluster is being started
Stopped: Cluster is stopped
Stopping: Cluster is being stopped
Synchronizing: Cluster is being synchronized
Terminating: Cluster is being deleted
Unknown: Cluster status is unknown
If this state persists, contact the administrator
Upgrading: Cluster is performing an upgrade
Cluster Control
Button to change cluster state
Start: Start a stopped cluster
Stop: Stop a running cluster
Restart: Restart a running cluster
Switch-Over: Switch a standby cluster to Active
More (additional features)
Cluster-related management button
Service status synchronization: Query real-time DB service status
Backup history: If backup is configured, check whether backup runs correctly and view history
Database recovery: Recover DB based on a specific point in time
Parameter management: View and modify DB configuration parameters
Add secondary: Configure a read-only cluster Replica
DB user management: View and manage DB accounts (users) registered in the DB
Export DB Log: Logs stored via Audit settings can be exported to the user’s Object Storage
Service termination
Button to cancel the service
Table. Microsoft SQL Server (DBaaS) status information and additional features
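The states in the table above fall into three groups: transitional states that resolve on their own, stable states, and states that call for administrator attention. A minimal sketch of how an operator script might group them (the helper function and its return values are illustrative, not part of any official SDK):

```python
# State names are taken from the status table above; the grouping helper is illustrative.
TRANSITIONAL = {"Creating", "Editing", "Restarting", "Starting", "Stopping",
                "Synchronizing", "Terminating", "Upgrading"}
STABLE = {"Running", "Stopped"}
NEEDS_ATTENTION = {"Error", "Failed", "Unknown"}  # contact the administrator if persistent

def classify(status: str) -> str:
    """Map a cluster status string to a coarse category."""
    if status in TRANSITIONAL:
        return "wait"   # operation in progress; poll again later
    if status in STABLE:
        return "ok"
    if status in NEEDS_ATTENTION:
        return "alert"
    raise ValueError(f"unrecognized cluster status: {status}")
```

A polling script would typically loop while `classify(...)` returns `"wait"` and raise an alarm on `"alert"`.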
Detailed Information
On the Microsoft SQL Server(DBaaS) List page, you can view detailed information of the selected resource and, if necessary, edit it.
Category
Detailed description
Server Information
Server information configured in the respective cluster
Category: Server type (Primary, Secondary)
Server name: Server name
IP:Port: Server IP and port
Status: Server status
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In DB service, it means cluster SRN
Resource
Resource Name
In DB service, it means the cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date/Time
Service creation date/time
Editor
User who modified the service information
Modification Date
Date and time when service information was modified
Image Version
Installed DB Image and Version Information
If a version upgrade is needed, click the Edit icon to set
Cluster Name
Name of the cluster where the servers are configured
Database service name
Database management unit name
Database username
DB user name
Database name > default Database
Name of the default Database applied when installing the DB
Database name > Add Database
Names of Databases added after installation
To add a Database, click the **Add** button. Refer to [Add Database](/userguide/database/mssql/how_to_guides/managing.md#database-추가하기)
To delete an added Database, click the **Edit** icon. Refer to [Delete Database](/userguide/database/mssql/how_to_guides/managing.md#database-삭제하기)
Planned Compute
Status of resources with Planned Compute set
For details, refer to [Planned Compute Apply](/userguide/financial_management/planned_compute/how_to_guides/)
Maintenance Period
DB maintenance period status
If a maintenance period needs to be set, click the **Edit** icon to set it
Backup
Backup configuration status
If backup needs to be configured, click the **Edit** icon to set it
For details, refer to [Microsoft SQL Server(DBaaS) Backup](/userguide/database/mssql/how_to_guides/backupandrestore.md#microsoft-sql-serverdbaas-백업하기)
Audit Log Settings
Audit log settings status
If Audit log settings are needed, click the **Edit** icon to configure them
For details, refer to [Edit Audit Settings](/userguide/database/mssql/how_to_guides/managing.md#audit-설정-수정하기)
Time zone
Standard time zone used by the Database
DB Collation
Data sorting method
VIP
Virtual IP information
Can be checked only when high availability is configured
Network
Network information where the DB is installed (VPC, Subnet)
IP Access Control
Service access policy settings
If you need to add or delete an IP, click the **Edit** icon to set it
Primary & Secondary
Primary/Secondary server type, default OS, and additional Disk information
If you need to modify the server type, click the **Edit** icon next to the server type. Refer to [Change Server Type](#서버-타입-변경하기) for the procedure
Modifying the server type requires a server restart
If you need to expand storage, click the **Edit** icon next to the storage capacity. Refer to [Expand Storage](#스토리지-증설하기) for the procedure
If you need to add storage, click the **Add Disk** button next to Additional Disk. Refer to [Add Storage](#스토리지-추가하기) for the procedure
Table. Microsoft SQL Server(DBaaS) Database detailed information items
Tag
On the Microsoft SQL Server(DBaaS) List page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
Can check the tag’s Key and Value information
Up to 50 tags can be added per resource
When entering tags, search and select from the existing list of Keys and Values
Table. Microsoft SQL Server(DBaaS) Tag Tab Items
Operation History
On the Microsoft SQL Server(DBaaS) List page, you can view the operation history of the selected resource.
Category
Detailed description
Operation History List
Resource change history
You can check the operation date/time, Resource ID, Resource name, operation details, event topic, operation result, and operator information
Table. Microsoft SQL Server(DBaaS) Operation History Tab Detailed Information Items
Microsoft SQL Server(DBaaS) Managing Resources
If you need to change the configuration options of a created Microsoft SQL Server(DBaaS) resource, recover it, or manage its parameters, you can perform the work on the Microsoft SQL Server(DBaaS) Details page.
Operating Control
If changes occur to a running Microsoft SQL Server(DBaaS) resource, you can start, stop, or restart it. If HA is configured, you can also switch the Primary and Secondary servers via a switch-over.
To control the operation of Microsoft SQL Server (DBaaS), follow the steps below.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. Navigate to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. Navigate to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource whose operation you want to control. You will move to the Microsoft SQL Server(DBaaS) Details page.
Check the status of the Microsoft SQL Server(DBaaS) and complete the change using the control buttons below.
Start: Starts the server where the DB service is installed and the DB service.
Stop: Stops the server where the DB service is installed and the DB service (Stopped).
Restart: Restarts only the DB service.
Switch Over: Switches the DB's Primary server and Secondary server.
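Each control button applies only from certain states (Start from a stopped cluster, Stop and Restart from a running cluster, Switch-Over only when HA is configured). These preconditions can be sketched as a lookup; the helper below is purely illustrative, not an official API:

```python
# Preconditions for each cluster control button, per the descriptions above.
ALLOWED_FROM = {
    "Start": {"Stopped"},        # start a stopped cluster
    "Stop": {"Running"},         # stop a running cluster
    "Restart": {"Running"},      # restart only the DB service of a running cluster
    "Switch-Over": {"Running"},  # requires HA; swaps Primary and Secondary
}

def can_apply(action: str, cluster_status: str, ha_configured: bool = True) -> bool:
    """Return True if the control action is valid for the current cluster state."""
    if action == "Switch-Over" and not ha_configured:
        return False
    return cluster_status in ALLOWED_FROM.get(action, set())
```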
Synchronize Service Status
You can synchronize the real-time service status of Microsoft SQL Server (DBaaS).
To check the service status of Microsoft SQL Server (DBaaS), follow the steps below.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. Navigate to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. Navigate to the Microsoft SQL Server(DBaaS) List page.
Click the resource to view the service status on the Microsoft SQL Server(DBaaS) List page. It moves to the Microsoft SQL Server(DBaaS) Details page.
Click the Service Status Synchronization button. While the status is being queried, the cluster changes to Synchronizing state.
When the query is completed, the status is updated in the server information item, and the cluster changes to Running status.
Change Server Type
You can change the configured server type.
To change the server type, follow the steps below.
Caution
If you modify the server type, a server restart is required. Separately verify any SW license changes or SW setting adjustments resulting from the change in server specifications.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. Go to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. Go to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource whose server type you want to change. You will move to the Microsoft SQL Server(DBaaS) Details page.
Click the Edit icon of the server type you want to change at the bottom of the detailed information. The Edit Server Type popup window opens.
In the Edit Server Type popup window, select the server type and click the Confirm button.
Add Storage
If you need more than 5 TB of data storage space, you can add storage. In a redundant DB configuration, storage is added to all redundant servers simultaneously.
To add storage capacity, follow the steps below.
Caution
The storage type selected when the service was created is applied identically to added storage.
For a high‑availability configured DB, adding storage is applied simultaneously to the storage of the Primary DB and the Secondary DB.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. Go to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. Navigate to the Microsoft SQL Server(DBaaS) list page.
On the Microsoft SQL Server(DBaaS) List page, click the resource to which you want to add storage. You will move to the Microsoft SQL Server(DBaaS) Details page.
Click the Add Disk button at the bottom of the detailed information. The Add Storage Request popup window opens.
In the Add Storage Request popup window, enter the purpose and capacity, then click the Confirm button.
Expand Storage
You can expand the storage added to the data area up to a maximum of 5 TB based on the initially allocated capacity. In the case of a redundant DB configuration, all redundant servers are expanded simultaneously.
To increase storage capacity, follow the steps below.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. Navigate to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. Go to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource whose storage you want to expand. You will move to the Microsoft SQL Server(DBaaS) Details page.
Click the Edit icon of the additional Disk you want to expand at the bottom of the detailed information. The Edit Additional Storage popup window opens.
In the Edit Additional Storage popup window, enter the expansion capacity and click the Confirm button.
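The documented capacity rule for data storage (16 to 5,120 GB, i.e. up to 5 TB, entered in multiples of 8) can be pre-checked before submitting a request. A minimal sketch; the function name is ours, not part of the platform:

```python
def valid_storage_gb(capacity_gb: int) -> bool:
    """Check a requested data-storage capacity against the documented rule:
    16-5,120 GB (the 5 TB ceiling), entered in multiples of 8."""
    return 16 <= capacity_gb <= 5120 and capacity_gb % 8 == 0
```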
Canceling Microsoft SQL Server(DBaaS)
You can cancel an unused Microsoft SQL Server(DBaaS) to reduce operating costs. However, canceling the service immediately stops any running service, so consider the impact of the service interruption carefully before proceeding.
Caution
If you terminate the DB, all stored data and any backup data will be deleted.
To cancel Microsoft SQL Server (DBaaS), follow the steps below.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. Go to the Service Home page of Microsoft SQL Server(DBaaS).
Click the Microsoft SQL Server(DBaaS) menu on the Service Home page. Navigate to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, select the resource to cancel and click the Cancel Service button.
Once the termination is complete, check whether the resource has been terminated on the Microsoft SQL Server(DBaaS) list page.
6.5.2.1 - Managing DB Service
Users can manage Microsoft SQL Server(DBaaS) through the Samsung Cloud Platform Console.
Managing Database
For the Microsoft SQL Server Enterprise edition, you can add new Databases, up to 100 per cluster.
Adding Database
Follow these steps to add Database.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource to which you want to add a Database. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the Add button in the Database name item. The Add Database popup window will open.
In the Add Database popup window, click the + button to enter Database name and select drive, then click the Confirm button.
Deleting Database
Follow these steps to delete Database.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource whose Database you want to delete. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the Modify button in the Database name item. The Delete Added Database popup window will open.
In the Delete Added Database popup window, check the Database to delete and click the Delete button. The Delete Database popup window will open.
In the Delete Database popup window, enter the Database name to delete and then click the Confirm button.
Managing Parameters
Provides functionality to easily view and modify database configuration parameters.
Viewing Parameters
Follow these steps to view configuration parameters.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource for which you want to view and modify parameters. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the More button and click the Parameter Management button. The Parameter Management popup window will open.
In the Parameter Management popup window, click the View button. The View Notification popup window will open.
When the View Notification popup window opens, click the Confirm button. Viewing may take some time.
Modifying Parameters
Follow these steps to modify configuration parameters.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource for which you want to view and modify parameters. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the More button and click the Parameter Management button. The Parameter Management popup window will open.
In the Parameter Management popup window, click the View button. The View Notification popup window will open.
When the View Notification popup window opens, click the Confirm button. Viewing may take some time.
If modification is needed, click the Modify button and enter the modification in the custom value area of the Parameter to be modified.
When input is complete, click the Complete button.
Managing DB Users
Provides functionality to view and manage DB user information.
Viewing DB Users
Follow these steps to view DB users.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource for which you want to view DB users. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the More button and click the DB User Management button. You will move to the DB User Management page.
On the DB User Management page, click the View button. Viewing may take some time.
Changing DB User Status
Follow these steps to change the status of viewed DB users.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource for which you want to modify DB users. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the More button and click the DB User Management button. You will move to the DB User Management page.
On the DB User Management page, click the View button. Viewing may take some time.
If modification is needed, click the Modify button and change the status area value or enter remarks.
When input is complete, click the Complete button.
Modifying Audit Settings
You can change the Audit log storage settings for Microsoft SQL Server(DBaaS).
Follow these steps to change the Audit log storage settings for Microsoft SQL Server(DBaaS).
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource whose Audit settings you want to modify. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the Modify icon in Audit Settings at the bottom of the detailed information. The Modify Audit Settings popup window will open.
In the Modify Audit Settings popup window, modify the usage and then click the Confirm button.
Selecting Use enables the Audit log function. Enabling Audit logs may degrade DB performance.
Disabling Use deletes the stored Audit log files. Back up the Audit log files separately before disabling.
Exporting DB Log
Supports exporting Audit log data that requires long-term retention to Object Storage. You can directly set the log type to save, the destination Bucket to export to, and the cycle for exporting logs. Logs are copied to the specified Object Storage according to the configured criteria.
Additionally, to manage disk space efficiently, an option is provided to automatically delete the original log files while exporting them to Object Storage. This option lets you secure storage capacity while safely retaining the log data you need long term.
Notice
To use the DB Log Export function, Object Storage creation is required. For Object Storage creation, please refer to the Object Storage User Guide.
Please check the expiration date of the authentication key. If the authentication key expires, logs will not be saved to the Bucket.
Please be careful not to expose authentication key information externally.
Setting DB Log Export Mode
Follow these steps to set DB Log export mode.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource for which you want to export DB Log. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the More button and click the DB Log Export button. You will move to the DB Log Export page.
On the DB Log Export page, click the Register button. You will move to the Register DB Log Export page.
On the Register DB Log Export page, enter the corresponding information and then click the Complete button.
Category
Required
Detailed Description
Log Type
Required
Log type to save
Storage Bucket Name
Required
Object Storage Bucket name to save
Authentication Key > Access key
Required
Access key to access the Object Storage to save
Authentication Key > Secret key
Required
Secret key to access the Object Storage to save
File Creation Cycle
Required
Cycle for creating files in Object Storage
Delete Original Log
Optional
Whether to delete original logs while exporting to Object Storage
Table. Microsoft SQL Server(DBaaS) DB Log Export Configuration Items
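As a sketch of how exported files might be organized by the File Creation Cycle setting, the key layout below is hypothetical (the console defines the actual Bucket layout and file names); it only illustrates cycle-based partitioning of log objects:

```python
from datetime import datetime, timezone

def export_object_key(log_type: str, cycle_hours: int, now: datetime) -> str:
    """Build a hypothetical Object Storage key for an exported log file,
    partitioned into time slots of `cycle_hours`. Illustrative naming only."""
    slot = (now.hour // cycle_hours) * cycle_hours  # start hour of the current slot
    return f"db-logs/{log_type}/{now:%Y/%m/%d}/{slot:02d}.log"
```

For example, with a 6-hour cycle, a log written at 14:00 UTC lands in the 12:00 slot for that day.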
Managing DB Log Export
Follow these steps to modify, cancel, or immediately export DB Log export settings.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource for which you want to manage DB Log export. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the More button and click the DB Log Export button. You will move to the DB Log Export page.
On the DB Log Export page, click the More button according to the log type you want to manage and click the Immediate Export, Modify, or Cancel button.
Immediate Export: The selected log is exported to the Bucket of the previously set Object Storage.
Modify: Modifies the DB Log export mode settings.
Cancel: Cancels the DB Log export mode settings.
Upgrading Minor Version
Provides version upgrade functionality for some feature improvements and security patches. Only Minor version upgrades within the same Major version are supported.
Caution
Please check the service status first through service status synchronization before performing version upgrade.
Please proceed with version upgrade after setting up backup. If backup is not set, some data may not be recoverable when problems occur during upgrade.
Backed up data is automatically deleted after version upgrade is complete.
Follow these steps to upgrade Minor Version.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource to upgrade the version. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the Modify button in the Image version item. The Version Upgrade popup window will open.
In the Version Upgrade popup window, select the modified version and backup setting, then click the Confirm button.
In the Version Upgrade Notification popup window, click the Confirm button.
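Since only Minor version upgrades within the same Major version are supported, a client-side pre-check might look like the following sketch (it assumes "major.minor[.patch]" version strings; the helper is ours, not part of the platform):

```python
def is_allowed_upgrade(current: str, target: str) -> bool:
    """True only for an upgrade within the same Major version,
    per the constraint stated above. Versions are dot-separated integers."""
    cur = [int(x) for x in current.split(".")]
    tgt = [int(x) for x in target.split(".")]
    # Same Major version, and the remaining components must strictly increase.
    return cur[0] == tgt[0] and tgt[1:] > cur[1:]
```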
6.5.2.2 - DB Backup and Recovery
The user can set up a backup of Microsoft SQL Server(DBaaS) through the Samsung Cloud Platform Console and restore it with the backed-up file.
Microsoft SQL Server(DBaaS) backup
Microsoft SQL Server(DBaaS) provides a data backup feature based on its own backup command, and also provides an optimized backup environment for data protection and management through backup history checking and backup file deletion functions.
To modify the backup settings of the generated resource, follow these steps.
Caution
Once backup is configured, backups are performed at the specified time, and additional fees are incurred according to the backup capacity.
If the backup setting is disabled, the backup operation stops immediately, and the saved backup data is deleted and can no longer be used.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. It moves to the Service Home page of Microsoft SQL Server(DBaaS).
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. It moves to the Microsoft SQL Server(DBaaS) list page.
Click the resource to set up backup on the Microsoft SQL Server(DBaaS) list page. It moves to the Microsoft SQL Server(DBaaS) details page.
Click the Edit button of the backup item. The Backup Settings popup window opens.
To set up a backup, select Use in the Backup Settings popup window, select the retention period, backup start time, and Archive backup cycle, and then click the Confirm button.
To stop the backup setting, deselect Use in the Backup Settings popup window and click the Confirm button.
Check Backup History
Notice
To set up notifications for backup success and failure, you can set them up through the Notification Manager product. For a detailed usage guide on setting up notification policies, please refer to Creating a Notification Policy.
To view the backup history, follow these steps.
Click All Services > Database > Microsoft SQL Server(DBaaS) menu. It moves to the Service Home page of Microsoft SQL Server(DBaaS).
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. It moves to the Microsoft SQL Server(DBaaS) list page.
On the Microsoft SQL Server(DBaaS) list page, click the resource whose backup history you want to check. It moves to the Microsoft SQL Server(DBaaS) details page.
Click the Backup History button. The Backup History popup window opens.
In the Backup History popup window, you can check the backup status, version, backup start time, backup completion time, and capacity.
Deleting backup files
To delete the backup history, follow these steps.
Caution
Backup files cannot be restored after deletion. Please make sure to check if the data is unnecessary before deleting it.
Click All Services > Database > Microsoft SQL Server(DBaaS) menu. It moves to the Service Home page of Microsoft SQL Server(DBaaS).
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. It moves to the Microsoft SQL Server(DBaaS) list page.
On the Microsoft SQL Server(DBaaS) list page, click the resource whose backup files you want to delete. It moves to the Microsoft SQL Server(DBaaS) details page.
Click the Backup History button. The Backup History popup window opens.
In the Backup History popup window, check the files you want to delete, then click the Delete button.
Microsoft SQL Server(DBaaS) Recovery
In the event of a failure or data loss that requires restoration from a backup file, you can recover to a specific point in time through the recovery function. When performing Microsoft SQL Server(DBaaS) recovery, a new server is created with the OS image from the initial provisioning, the DB is installed with the version from the backup point, and recovery proceeds with the DB configuration information and data.
To restore Microsoft SQL Server(DBaaS), follow these procedures.
Click All Services > Database > Microsoft SQL Server(DBaaS) menu. It moves to the Service Home page of Microsoft SQL Server(DBaaS).
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. Move to the Microsoft SQL Server(DBaaS) list page.
On the Microsoft SQL Server(DBaaS) list page, click the resource you want to restore. Move to the Microsoft SQL Server(DBaaS) details page.
Click the Database Recovery button. Go to the Database Recovery page.
In the Database Recovery Configuration area, enter the corresponding information and click the Complete button.
Category
Required
Detailed Description
Recovery Type
Required
Set the point in time to recover
Backup Point (Recommended): Recover based on the backup file. Select from the list of backup file timestamps displayed in the list
Custom Point: Recover to the desired point in time within the range where backup is possible. The recoverable period is from the initial backup start time to 1 hour/30 minutes/10 minutes/5 minutes before the current time, based on the Archive backup cycle setting value. Select the date and time you want to back up
Server name prefix
Required
Recovery DB server name
Start with lowercase English letters, using lowercase letters, numbers, and special characters (-) to input 3-16 characters
A postfix such as 001, 002 is attached based on the server name, and the actual server name is created
Cluster Name
Required
Recovery DB Cluster Name
Enter in English, 3-20 characters
A cluster is a unit that bundles multiple servers
Service Type > Server Type
Required
Recovery DB Server Type
Standard: Generally used standard specification
High Capacity: High-capacity server with 24vCore or more
Service Type > Planned Compute
Optional
Current status of resources with Planned Compute set
In Use: Number of resources with Planned Compute set that are in use
Setting: Number of resources with Planned Compute set
Coverage Preview: Amount applied by resource-based Planned Compute
Create Planned Compute Service: Move to the Planned Compute service application page
The storage type set in the original cluster is applied in the same way
Capacity can be entered in multiples of 8 in the range of 16 to 5,120
Additional: Data storage area
The storage type set in the original cluster is applied in the same way
Only DATA and TEMP purposes can be added in the recovery DB
Select Use and enter the purpose and capacity of the storage
To add storage, click the + button, and to delete, click the x button
Capacity can be entered in multiples of 8 in the range of 16 to 5,120, and up to 9 can be created
Database username
Required
Database username
Applied identically to the username set in the original cluster
Database Port number
Required
Database Port number
Apply the same port number set in the original cluster
IP Access Control
Optional
Set service access policy
An access policy is set for the IPs entered on this page, so separate Security Group policy settings are not required
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete the entered IP, click the x button next to the entered IP
Maintenance period
Optional
DB maintenance period
When Use is selected, set the day of the week, start time, and duration
Setting a maintenance period is recommended for stable management of the DB. Patch work is performed at the set time, and a service interruption occurs
If not used, Samsung SDS is not responsible for problems caused by unapplied patches
Tag
Optional
Add Tag
Click the Add Tag button and enter or select Key, Value
Table. Microsoft SQL Server(DBaaS) Recovery Configuration Items
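Two of the input rules above — the server name prefix format (3 to 16 characters, starting with a lowercase letter, using lowercase letters, digits, and '-') and the IP Access Control entries (single IP or CIDR) — can be validated client-side before submitting the form. A sketch using Python's standard library; the helper names are ours:

```python
import ipaddress
import re

# Server name prefix rule from the table: lowercase first letter,
# then lowercase letters, digits, and '-', 3-16 characters total.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,15}$")

def valid_server_name_prefix(name: str) -> bool:
    return bool(NAME_RE.match(name))

def valid_ip_entry(entry: str) -> bool:
    """Accept a single IP (e.g. 192.168.10.1) or CIDR (e.g. 192.168.10.0/24)."""
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=True)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```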
6.5.2.3 - Adding Secondary
Users can enter required information for Secondary through the Samsung Cloud Platform Console and create the service through detailed options.
Adding Secondary
Through Secondary configuration, you can create read-only replica servers. To add a Secondary, the DB must be created with HA (High Availability) and the Enterprise edition, and the backup function must be enabled.
Follow these steps to configure Secondary.
Click the All Services > Database > Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS)’s Service Home page.
On the Service Home page, click the Microsoft SQL Server(DBaaS) menu. You will move to the Microsoft SQL Server(DBaaS) List page.
On the Microsoft SQL Server(DBaaS) List page, click the resource to configure Secondary. You will move to the Microsoft SQL Server(DBaaS) Detail page.
Click the Add Secondary button. You will move to the Add Secondary page.
Enter information in the Secondary Configuration area and then click the Create button.
Category
Required
Detailed Description
Secondary Count
Required
Number of Secondaries to configure
Can configure only 1 per cluster
Secondary Name
Required
Secondary server name
Enter 3 ~ 15 characters starting with lowercase English letters, using lowercase letters, numbers, and special characters(-)
The entered Secondary name is displayed as cluster name in the list
Service Type > Server Type
Required
Secondary server type
Applied identically according to the server type set in the original DB
Service Type > Planned Compute
Optional
Status of resources with Planned Compute set
In Use: Number of resources with Planned Compute set that are currently in use
Set: Number of resources with Planned Compute set
Coverage Preview: Amount applied by Planned Compute per resource
Create Planned Compute Service: Moves to Planned Compute service application page
Network > Common Settings
Required
Network settings where servers created in the service are installed
Applied identically with the network settings set in the original DB
Network > Per-Server Settings
Required
Network settings where servers created in the service are installed
Select when applying different settings to each server being installed
Applied identically with the network settings set in the original DB
When setting per server, enter an IP address within the 10.10.10.0/24 range
License
Required
SQL Server License Key
Enter the issued license key
If the entered license key is invalid, the service may not be created
Table. Microsoft SQL Server(DBaaS) Secondary Configuration Items
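The Secondary name constraint above can be expressed as a simple regular expression. The following Python sketch is illustrative only; the helper name `is_valid_secondary_name` is hypothetical and not part of the platform:

```python
import re

# Documented rule: 3-15 characters, starting with a lowercase English letter,
# using only lowercase letters, numbers, and the hyphen (-).
SECONDARY_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,14}$")

def is_valid_secondary_name(name: str) -> bool:
    """Hypothetical client-side check mirroring the console's naming rule."""
    return SECONDARY_NAME_RE.fullmatch(name) is not None
```

For example, `db-secondary1` passes, while a name starting with a digit or a name shorter than 3 characters does not.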
6.5.2.4 - Microsoft SQL Server(DBaaS) server connection
Scenario Overview
The Microsoft SQL Server(DBaaS) server connection scenario is a scenario where a Bastion host (Virtual Server) and a Database service are created, and the DB service is accessed through the Bastion host. To securely access Microsoft SQL Server (DBaaS) in the Samsung Cloud Platform environment, it is necessary to create a Bastion host and use it for network connection. To maintain a stable and high level of security, it is recommended to configure the Database service in a Private Subnet environment and configure the Bastion host in a limited Public Subnet environment.
This scenario describes the process of creating a Bastion host and Database service, configuring the network environment to connect the Bastion host and Database, and accessing the database through a DB connection client.
Figure. Microsoft SQL Server(DBaaS) server connection architecture
Scenario Components
You can configure the scenario using the following services.
Service Group
Service
Detailed Description
Networking
VPC
A service that provides an independent virtual network in a cloud environment
Networking
VPC > Subnet
A service that allows users to subdivide the network into smaller segments according to purpose/size within the VPC
Networking
VPC > Public IP
A service that reserves public IP and assigns and returns it to Compute resources
Networking
VPC > Internet Gateway
A service that connects VPC resources to the internet
Networking
Security Group
A virtual firewall that controls the server’s traffic
Database
Microsoft SQL Server(DBaaS)
A service that easily creates and manages Microsoft SQL Server in a web environment
Compute
Virtual Server
Virtual server optimized for cloud computing
Compute
Virtual Server > Keypair
Encryption file used to connect to the Virtual Server
Table. Scenario Component List
Note
The default policy of Security Group is Deny All, so only allowed IPs must be registered.
In/Outbound’s All Open(Any IP, Any Port) policy can expose cloud resources to external threats.
By specifying the necessary IP and Port to set the policy, you can enhance security.
Scenario composition method
To configure the scenario, create the necessary services through the following procedure.
1. Configuring the Network
This describes the process of configuring the network environment for connecting to the Bastion Host and Database services.
In the Summary panel, check the generated detailed information and the expected billing amount, and click the Complete button.
After creation is complete, check the created resource on the Virtual Server list page.
2-3. Check Bastion host connection ID and PW
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. It moves to the Virtual Server list page.
On the Virtual Server list page, click the resource created in 2-2. Bastion host creation. It moves to the detailed information page of the corresponding resource.
In the detailed information page, click the RDP password inquiry button in the Keypair name item. The RDP password inquiry pop-up window opens.
Click All Services > Networking > Security Group menu. It moves to the Service Home page of Security Group.
On the Service Home page, click the Security Group menu. It moves to the Security Group list page.
On the Security Group list page, select the Security Group resource created in 1-5. Creating a Security Group. It moves to the detailed information page of the corresponding resource.
Click the Rules tab on the detailed information page. It moves to the Rules tab.
On the Rules tab, click the Add Rule button. The Add Rule popup window opens.
In the Add Rule popup window, enter the rules below and click the OK button.
Click All services > Networking > Firewall menu. It moves to the Service Home page of Firewall.
On the Service Home page, click the Firewall menu. It moves to the Firewall list page.
On the Firewall list page, select the Internet Gateway resource name created in 1-3. Creating Internet Gateway. It moves to the detailed information page of the corresponding resource.
Click the Rules tab on the detailed information page. It moves to the Rules tab.
On the Rules tab, click the Add Rule button. The Add Rule popup window opens.
In the Add Rule popup window, enter the following rules and click the OK button.
Source Address
Destination Address
Protocol
Port
Action
Direction
Description
Bastion connection PC IP
Bastion host IP
TCP
3389(RDP)
Allow
Inbound
User PC → Bastion host
Table. Internet Gateway Firewall rules to be added
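Because the Security Group and Firewall default to Deny All, only traffic that matches an explicit Allow rule passes. The sketch below models that evaluation in Python; the `FirewallRule` type and `evaluate` function are illustrative assumptions, not platform APIs:

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    source: str        # e.g., the Bastion-connecting PC's IP
    destination: str   # e.g., the Bastion host's IP
    protocol: str      # e.g., "TCP"
    port: int          # e.g., 3389 for RDP
    action: str        # "Allow" or "Deny"
    direction: str     # "Inbound" or "Outbound"

def evaluate(rules, source, destination, protocol, port, direction):
    """Return the action of the first matching rule; the default policy is Deny."""
    for r in rules:
        if (r.source == source and r.destination == destination
                and r.protocol == protocol and r.port == port
                and r.direction == direction):
            return r.action
    return "Deny"
```

With only the RDP rule from the table registered, any other source IP or port falls through to the default Deny, which is why each required IP and port must be registered explicitly.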
5. Connect to Database
This describes the process of a user accessing the Database through a DB connection client program.
This guide provides instructions on how to connect using SSMS (Microsoft SQL Server Management Studio). Since there are various database client programs and CLI utilities, you can also install and use the tools that are suitable for you.
5-1. Connect to the Bastion host
On the PC from which you want to access the Bastion host, run Remote Desktop Connection in the Windows environment, enter the NAT IP of the Bastion host, and click the Connect button.
When the remote desktop connection succeeds, the User Credential Input window opens. Enter the ID and password confirmed in 2-3. Check Bastion host connection ID and PW and click the Confirm button.
5-2. Install DB connection client program (SSMS) on Bastion host
Go to the official Microsoft SQL Server page and download the SSMS program.
To upload the file to the Bastion host, share the local drive of the user PC through Remote Desktop Connection.
In the Local Resources tab of Remote Desktop Connection, click the Details button under the Local devices and resources entry.
Under Drives, select the local disk where the file was downloaded and click the Confirm button.
Copy the downloaded file to the Bastion host, then run the SSMS (Microsoft SQL Server Management Studio) installation file to install it.
5-3. Using DB Connection Client Program (SSMS) to Connect to Database
Run SSMS (Microsoft SQL Server Management Studio). The Connect to Server popup window will appear.
Server Name
Database server IP, Database Port (ex. 192.168.10.1,2866)
Authentication
SQL Server Authentication
Login
Database username
Password
Database password
Encryption
Optional
Table. DB Connection Client Program Input Items
Once the connection is complete, the Database will be connected. After connection, you can try performing simple queries, etc.
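Note that SSMS expects the server field as host and port separated by a comma, as in the example `192.168.10.1,2866`. A small helper (hypothetical, for illustration only) can build that string:

```python
def ssms_server_name(host: str, port: int) -> str:
    """Format the SSMS 'Server name' field as '<host>,<port>'."""
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return f"{host},{port}"
```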
6.5.3 - API Reference
API Reference
6.5.4 - CLI Reference
CLI Reference
6.5.5 - Release Note
Microsoft SQL Server(DBaaS)
2025.07.01
FEATURE User (Access Control) Management, DB Audit Log Export Function Added, Backup Notification Function Provided
Microsoft SQL Server(DBaaS) feature added
2nd generation server type added
Intel 4th generation (Sapphire Rapids) processor-based 2nd generation (db2) server type added. For more information, see Microsoft SQL Server (DBaaS) server type
Backup Notification Feature provided
Provides notification features for backup success and failure. For more information, see Creating a Notification Policy.
Block Storage type added: HDD, HDD_KMS types
2025.02.27
NEW Microsoft SQL Server(DBaaS) Service Official Version Release
A Microsoft SQL Server (DBaaS) service that allows you to easily create and manage Microsoft SQL Server in a web environment has been released.
6.6 - CacheStore(DBaaS)
6.6.1 - Overview
Service Overview
CacheStore(DBaaS) is a service that provides the in‑memory based data stores Redis OSS and Valkey. Samsung Cloud Platform provides an environment that can automate the installation of Redis OSS and Valkey through a web‑based console and perform management functions for operation.
CacheStore (DBaaS) provides a Sentinel architecture consisting of a Master server that performs read/write operations and read‑only Replica servers that replicate the Master data. Sentinel checks the status of DB servers where the engine is installed and automatically fails over the Replica servers to become the Master server if a failure occurs on the Master server. Additionally, it provides an automatic backup feature at user‑specified times to prepare for issues with the DB server or data, allowing data recovery based on the backup point.
Figure. CacheStore(DBaaS) Architecture
Provided Features
CacheStore(DBaaS) provides the following features.
Auto Provisioning: It is possible to install and configure the Database (DB) via UI, and a redundant configuration with a Sentinel-based Single Master server and Replica server(s) (1 or 2) is possible.
Operation Control Function: Provides a function to control the status of running servers. In addition to start and stop, restart is possible if there is an issue with the DB or to apply configuration values. When configured for high availability (HA), you can switch Active-Standby servers via Switch-over.
Backup and Recovery: Provides a data backup function based on its own backup commands. The backup time window and retention period can be set by the user, and additional fees are incurred based on backup size. It also provides a recovery function for backed-up data; when the user performs a recovery, a separate DB is created and the recovery proceeds to the point in time selected by the user.
Parameter management: It is possible to modify DB configuration parameters related to performance improvement and security.
Service Status Query: Retrieves the final status of the current DB service.
Monitoring: CPU, memory, DB performance monitoring information can be checked through the Cloud Monitoring service.
Minor version upgrade: Minor version upgrades can be performed within the same Major version to apply some feature improvements and security patches.
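As a rough illustration of the Switch-over behavior described above, the toy model below shows the resulting role change when a Replica is promoted to Master. This is not the Redis Sentinel election algorithm, only a sketch of the outcome:

```python
def switch_over(roles: dict) -> dict:
    """Swap the Master role with the first Replica (illustrative only)."""
    new_roles = dict(roles)
    master = next(name for name, role in roles.items() if role == "Master")
    replica = next(name for name, role in roles.items() if role == "Replica")
    new_roles[master], new_roles[replica] = "Replica", "Master"
    return new_roles
```

In the real service, Sentinel monitors the DB servers and performs this promotion automatically when the Master fails, or on demand via the Switch-Over control button.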
Components
CacheStore (DBaaS) provides pre-validated engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by CacheStore (DBaaS) are as follows.
Technical support is available until the supplier’s EoTS (End of Technical Service) date, and the EOS date, after which new creation stops, is set to 6 months before the EoTS date.
Since the EOS and EoTS dates may vary according to the supplier’s policy, refer to the supplier’s license management policy page for details.
Classification
Example
Detailed Description
Server Type
Standard
Provided server type distinction
Standard: Standard specifications (vCPU, Memory) configuration commonly used
Server Specification
redis1
Provided server specification and generation
redis1: redis means general specification, and 1 means the generation
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory Capacity
m4: 4GB Memory
Table. CacheStore(DBaaS) Server Type Components
Preliminary Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for detailed information and prepare in advance.
VPC
A service that provides an independent virtual network in a cloud environment
Table. CacheStore(DBaaS) Preliminary Service
6.6.1.1 - Server Type
CacheStore(DBaaS) server type
CacheStore(DBaaS) provides a server type composed of various combinations such as CPU, Memory, Network Bandwidth, etc.
When creating CacheStore(DBaaS), Redis is installed according to the selected server type suitable for the purpose of use.
The server types supported by CacheStore(DBaaS) are as follows.
Standard redis1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type distinction
Standard: Configured with standard specifications (vCPU, Memory) commonly used
Server Specification
redis1
Provided server specification and generation
redis1: redis means general specification, and 1 means the generation
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory capacity
m4: 4GB Memory
Fig. CacheStore(DBaaS) server type format
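The naming format in the figure can be decomposed mechanically. The parser below is a hypothetical sketch of that convention (spec name, generation, vCores, memory in GB), not a platform utility:

```python
import re

# Assumed format: <spec><generation>v<vCores>m<memory GB>, e.g., redis1v2m4
TYPE_RE = re.compile(r"^([a-z]+)(\d+)v(\d+)m(\d+)$")

def parse_server_type(server_type: str) -> dict:
    """Split a server type string into its documented components."""
    match = TYPE_RE.fullmatch(server_type)
    if match is None:
        raise ValueError(f"unrecognized server type: {server_type}")
    spec, generation, vcores, memory_gb = match.groups()
    return {"spec": spec, "generation": int(generation),
            "vcores": int(vcores), "memory_gb": int(memory_gb)}
```

For example, `redis1v2m4` yields spec `redis`, generation 1, 2 vCores, and 4 GB of memory.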
redis1 server type
The redis1 server type of CacheStore(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps of networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
redis1v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
redis1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
redis1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
redis1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
redis1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
redis1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
redis1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
redis1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
redis1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
redis1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
redis1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
redis1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
redis1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
redis1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
redis1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
redis1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
redis1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. CacheStore(DBaaS) server type specification - redis1 server type
css1 server type
The css1 server type of CacheStore(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps of networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
css1v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
css1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
css1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
css1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
css1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
css1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
css1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
css1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
css1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
css1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
css1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
css1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
css1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
css1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
css1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
css1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
css1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. CacheStore(DBaaS) server type specification - css1 server type
6.6.1.2 - Monitoring Metrics
CacheStore(DBaaS) Monitoring Metrics
The following table shows the performance monitoring metrics of CacheStore(DBaaS) that can be checked through Cloud Monitoring. For detailed instructions on using Cloud Monitoring, refer to the Cloud Monitoring guide.
The set Storage type is applied equally to additional storage
Enter capacity in multiples of 8 within the range of 16 to 5,120
High Availability
Optional
High Availability (HA) configuration
If using HA configuration, provided as Master-Replica configuration with 1 or 2 replicas
Sentinel Port Number: Port number used when connecting to Sentinel
Port for Master-Replica communication, enter within the range of 1200 to 65535
Replica Count: Number of replicas to configure
If selecting 1, configured as Master-Replica-Sentinel
If selecting 2, configured as Master-Replica-Replica, and Sentinel is automatically installed on the server where Redis is installed
Sentinel server type is set to minimum specification
Network
Required
Network where CacheStore(DBaaS) is installed
Select and connect to pre-created VPC and Subnet
Only automatic IP generation is possible
Network > Common Settings
Required
Network settings where servers created in the service are installed
Select if you want to apply the same settings to all installed servers
Select pre-created VPC and Subnet
Only automatic IP generation is possible
Network > Per Server Settings
Required
Network settings where servers created in the service are installed
Select if you want to apply different settings for each installed server
Select pre-created VPC and Subnet, IP, Public NAT
Enter IP for each server
Public NAT function is only available when VPC is connected to Internet Gateway. If you check Use, you can select from IPs reserved in Public IP of VPC product. For more information, see Creating Public IP
IP Access Control
Optional
Service access policy setting
Since access policy is set for the IP entered on the page, you don’t need to set Security Group policy separately
Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete the entered IP, click the x button next to the entered IP
Maintenance Period
Optional
CacheStore(DBaaS) maintenance period
If selecting Use, set day of week, start time, and duration
It is recommended to set a maintenance period for stable DB management. Patching is performed at the set time, during which a service interruption occurs
If set to Not Used, Samsung SDS is not responsible for problems caused by patches not being applied.
Table. CacheStore(DBaaS) Service Configuration Items
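Several of the constraints in the table above (storage capacity, port ranges, IP Access Control entry formats) are easy to pre-check before submitting the form. The helpers below are illustrative sketches, not platform APIs:

```python
import ipaddress

def validate_storage_gb(size: int) -> bool:
    """Capacity must be a multiple of 8 within 16 to 5,120."""
    return 16 <= size <= 5120 and size % 8 == 0

def validate_port(port: int) -> bool:
    """Redis/Valkey and Sentinel ports must fall within 1200 to 65535."""
    return 1200 <= port <= 65535

def validate_access_entry(entry: str) -> bool:
    """IP Access Control accepts IP (192.168.10.1) or CIDR (192.168.10.0/24) format."""
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```

For instance, 20 GB fails the storage check (not a multiple of 8), and port 80 fails the port check (below 1200).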
Enter or select the required information in the Database Configuration Required Information Entry area.
Classification
Required
Detailed Description
Backup
Optional
Backup usage
If selecting Use, set backup file retention period and backup start time
Separate fees are charged for backup files depending on capacity
File retention period can be set from 7 days to 35 days
The minute at which the backup runs is set randomly, and the backup end time cannot be set
Redis/Valkey Port Number
Required
Port number required for Redis/Valkey connection
Enter port within the range of 1200 to 65535
Redis/Valkey Password
Required
Password required for Redis/Valkey connection
Enter 8 to 30 characters including English letters, numbers, and special characters (excluding $"’)
Redis/Valkey Password Confirmation
Required
Re-enter password identically
Parameter
Required
Parameters to use for Redis/Valkey
Click the View button to check detailed information of parameters
After creation is complete, parameter modification is possible, and DB restart is required after modification
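The password rule above can likewise be sketched in Python. Whether all three character classes are mandatory is an assumption here; the helper is illustrative only:

```python
import string

# Documented rule: 8-30 characters including English letters, numbers, and
# special characters, excluding the dollar sign, double quote, and single quote.
FORBIDDEN = set("$\"'")
SPECIALS = set(string.punctuation) - FORBIDDEN

def is_valid_db_password(password: str) -> bool:
    """Assumes each character class (letter, digit, special) is required."""
    if not 8 <= len(password) <= 30:
        return False
    if any(ch in FORBIDDEN for ch in password):
        return False
    return (any(ch.isalpha() for ch in password)
            and any(ch.isdigit() for ch in password)
            and any(ch in SPECIALS for ch in password))
```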
Enter or select the required information in the Additional Information Entry area.
Classification
Required
Detailed Description
Tags
Optional
Add tags
Can add up to 50 per resource
Click the Add Tag button then enter or select Key, Value values
Table. CacheStore(DBaaS) Additional Information Entry Items
In the Summary panel, review the detailed information and estimated charges, and click the Create button.
Once creation is complete, check the created resource on the Resource List page.
Checking CacheStore(DBaaS) Detailed Information
The CacheStore(DBaaS) service allows you to check and modify the entire resource list and detailed information. The CacheStore(DBaaS) Details page consists of the Detailed Information, Tags, and Operation History tabs.
To check the detailed information of CacheStore(DBaaS) service, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to check detailed information. It moves to the CacheStore(DBaaS) Details page.
At the top of the CacheStore(DBaaS) Details page, status information and additional function information are displayed.
Classification
Detailed Description
Cluster Status
Cluster status where Redis is installed
Creating: Cluster is being created
Editing: State where the cluster is performing a change operation
Error: State where error occurred while cluster is performing operation
If it occurs continuously, contact administrator
Failed: State where cluster failed during creation process
Restarting: State where cluster is being restarted
Running: State where cluster is operating normally
Starting: State where cluster is starting
Stopped: State where cluster is stopped
Stopping: State where the cluster is stopping
Synchronizing: State where cluster is synchronizing
Terminating: State where cluster is being deleted
Unknown: State where cluster status is unknown
If it occurs continuously, contact administrator
Cluster Control
Buttons to change cluster status
Start: Start stopped cluster
Stop: Stop running cluster
Restart: Restart running cluster
Switch-Over: Switch Replica cluster to Master
More Additional Functions
Cluster-related management buttons
Sync Service Status: Check real-time Redis/Valkey service status
Backup History: If backup is set, check backup normal execution status and history
Database Recovery: Recover DB based on specific point in time
Parameter Management: Check and modify Redis/Valkey configuration parameters
Rename-Command: Change name of Redis/Valkey Command
Service Termination
Button to terminate service
Table. CacheStore(DBaaS) Status Information and Additional Functions
Detailed Information
On the CacheStore(DBaaS) Details page, you can check the detailed information of the selected resource and modify information if necessary.
Classification
Detailed Description
Server Information
Server information configured in the cluster
Category: Server type (Master, Replica, Sentinel)
Server Name: Server name
IP:Port: Server IP and port
Status: Server status
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
For DB service, means cluster SRN
Resource Name
Resource name
For DB service, means cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
Image Version
Installed Redis/Valkey image and version information
If minor version upgrade is needed, click the Edit icon to set
Network
Network information where CacheStore(DBaaS) is installed (VPC, Subnet)
IP Access Control
Service access policy setting
If IP addition and deletion are needed, click the Edit icon to set
Master & Replica
Master, Replica server type, Basic OS, Additional Disk information
If server type modification is needed, click the Edit icon next to server type to set
If the server type is modified, a server restart is required. Due to the specification change, separately check whether SW license changes or SW setting changes need to be applied.
Sentinel
Sentinel server type, Basic OS information
Available when selecting 1 replica during HA configuration
Table. CacheStore(DBaaS) Database Detailed Information Items
Tags
On the CacheStore(DBaaS) Details page, you can check the tag information of the selected resource and add, change, or delete it.
Classification
Detailed Description
Tag List
Tag list
Can check Key, Value information of tags
Can add up to 50 tags per resource
When entering tags, search and select from existing Key and Value lists
Table. CacheStore(DBaaS) Tag Tab Items
Operation History
You can check the operation history of the selected resource on the CacheStore(DBaaS) Details page.
Classification
Detailed Description
Operation History List
Resource change history
Can check operation date, resource ID, resource name, operation details, event topic, operation result, operator information
To perform detailed search, click the Detailed Search button
Table. CacheStore(DBaaS) Operation History Tab Detailed Information Items
Managing CacheStore(DBaaS) Resources
If you need to change existing configuration options of created CacheStore(DBaaS) resources or perform recovery, command changes, etc., you can perform operations on the CacheStore(DBaaS) Details page.
Controlling Operation
If changes occur in running CacheStore(DBaaS) resources, you can start, stop, or restart. Also, if configured with HA, you can switch Master-Replica servers through Switch-over.
To control CacheStore(DBaaS) operation, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to control operation. It moves to the CacheStore(DBaaS) Details page.
Check CacheStore status and complete changes through the control buttons below.
Start: Starts the server where the CacheStore service is installed and the CacheStore service (Running).
Stop: Stops the server where the CacheStore service is installed and the CacheStore service (Stopped).
Restart: Restarts only the CacheStore service.
Switch Over: Switches the Master server and the Replica server.
Syncing Service Status
You can sync the real-time service status of CacheStore(DBaaS).
To check CacheStore(DBaaS) service status, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to check service status. It moves to the CacheStore(DBaaS) Details page.
Click the Sync Service Status button. While retrieving, the cluster changes to Synchronizing status.
When retrieval is complete, the status is updated in the server information item, and the cluster changes to Running status.
Changing Server Type
You can change the configured server type.
Caution
If you modify the server type, a server restart is required. Due to the server specification change, separately check whether SW license changes or SW setting changes need to be applied.
To change the server type, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to change server type. It moves to the CacheStore(DBaaS) Details page.
Click the Edit icon of the server type you want to change at the bottom of detailed information. The Edit Server Type popup window opens.
In the Edit Server Type popup window, select the server type and click the OK button.
Terminating CacheStore(DBaaS)
You can reduce operating costs by terminating unused CacheStore(DBaaS). However, if you terminate the service, the running service may stop immediately, so you should proceed with termination after fully considering the impact caused by service interruption.
To terminate CacheStore(DBaaS), follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, select the resource to terminate and click the Terminate Service button.
When termination is complete, check if the resource is terminated on the CacheStore(DBaaS) List page.
6.6.2.1 - Managing CacheStore Service
Users can manage CacheStore(DBaaS) through the Samsung Cloud Platform Console.
Managing Parameters
Provides a feature to easily view and modify database configuration parameters.
Viewing Parameters
To view configuration parameters, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to view and modify parameters. It moves to the CacheStore(DBaaS) Details page.
Click the More button and click the Parameter Management button. The Parameter Management popup window opens.
Click the View button in the Parameter Management popup window. The View Notification popup window opens.
When the View Notification popup window opens, click the OK button. It takes some time to view.
Modifying Parameters
To modify configuration parameters, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to view and modify parameters. It moves to the CacheStore(DBaaS) Details page.
Click the More button and click the Parameter Management button. The Parameter Management popup window opens.
Click the View button in the Parameter Management popup window. The View Notification popup window opens.
When the View Notification popup window opens, click the OK button. It takes some time to view.
If modification is needed, click the Edit button and enter modification contents in the user-defined value area of the parameter to be modified.
When input is complete, click the Complete button.
Changing Command Name
Caution
When Rename-Command is applied, the service is interrupted due to CacheStore(DBaaS) restart.
Provides Redis OSS/Valkey Command viewing and Command name modification features. To view and modify Command names, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to modify Command name. It moves to the CacheStore(DBaaS) Details page.
Click the More button and click the Rename-Command button. It moves to the Rename-Command page.
Click the View button on the Rename-Command page. The View Notification popup window opens.
When the View Notification popup window opens, click the OK button. It takes some time to view.
If modification is needed, click the Edit button and enter modification contents in the user-defined value area of the Command to be modified.
When input is complete, click the Complete button.
Upgrading Minor Version
Provides version upgrade feature for some function improvements and security patches. Only minor version upgrade function within the same Major version is supported.
Caution
Please check the service status first through service status sync, then perform version upgrade.
To upgrade Minor Version, follow these steps:
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) List page.
On the CacheStore(DBaaS) List page, click the resource to upgrade version. It moves to the CacheStore(DBaaS) Details page.
Click the Edit icon of the Image Version item. The Version Upgrade popup window opens.
Select the upgrade version in the Version Upgrade popup window and click the OK button.
Click the OK button in the Version Upgrade Notification popup window.
6.6.2.2 - CacheStore Backup and Recovery
The user can set up backups of CacheStore(DBaaS) through the Samsung Cloud Platform Console and recover with the backed-up files.
CacheStore(DBaaS) backup
CacheStore(DBaaS) provides a data backup function based on its own backup command. It also provides an optimized backup environment for data protection and management through backup history check and backup file deletion functions.
To modify the backup settings of the generated resource, follow these steps.
Caution
If backup is enabled, backups are performed at the specified start time, and additional fees are incurred according to the backup capacity.
If the backup setting is changed to unset, the backup operation is stopped immediately, and the stored backup data is deleted and can no longer be used.
Click the All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) list page.
On the CacheStore(DBaaS) list page, click the resource for which to set the backup. It moves to the CacheStore(DBaaS) detail page.
Click the Edit icon of the backup item. The Backup Settings popup window opens.
If you set up a backup, click Use in the Backup Settings popup window, select the retention period, backup start time, and Archive backup cycle, and then click the OK button.
If you want to stop the backup setting, uncheck Use in the Backup Settings popup window and click the OK button.
Check Backup History
Notice
To set up notifications for backup success and failure, you can set them up through the Notification Manager product. For detailed usage guidelines on setting up notification policies, please refer to Creating a Notification Policy.
To check the backup history, follow these steps.
Click All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) list page.
On the CacheStore(DBaaS) list page, click the resource whose backup history you want to check. It moves to the CacheStore(DBaaS) details page.
Click the Backup History button. The Backup History popup window opens.
In the Backup History popup window, you can check the backup status, version, backup start time, backup completion time, and capacity.
Deleting Backup Files
To delete the backup history, follow these steps.
Caution
Backup files cannot be restored after deletion. Please make sure to check if the data is unnecessary before deleting it.
Click All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) list page.
On the CacheStore(DBaaS) list page, click the resource whose backup history you want to check. It moves to the CacheStore(DBaaS) detail page.
Click the Backup History button. The Backup History popup window opens.
In the Backup History popup window, check the file you want to delete and click the Delete button.
CacheStore(DBaaS) recovery
In the event of a failure or data loss, where recovery from a backup file is necessary, the recovery function allows recovery based on a specific point in time.
To recover CacheStore(DBaaS), follow these steps.
Click All Services > Database > CacheStore(DBaaS) menu. It moves to the Service Home page of CacheStore(DBaaS).
On the Service Home page, click the CacheStore(DBaaS) menu. It moves to the CacheStore(DBaaS) list page.
On the CacheStore(DBaaS) list page, click the resource you want to restore. It moves to the CacheStore(DBaaS) details page.
Click the Database Recovery button. It moves to the Database Recovery page.
After entering information in the Database recovery configuration area, click the Complete button.
Classification
Necessity
Detailed Description
Recovery Type
Required
User sets the point in time to recover
Recover based on backup files, and select from the list of backup file timestamps displayed
Server name prefix
Required
Server name of the recovery DB
Start with a lowercase English letter; use lowercase letters, numbers, and the special character (-) to enter 3 to 16 characters
A postfix such as 001, 002 is attached based on the server name to create the actual server name
Cluster Name
Required
Cluster name of the recovery DB
Enter in English, 3-20 characters
A cluster is a unit that bundles multiple servers
Service Type > Server Type
Required
Server type where the recovery DB will be installed
Standard: Standard specification commonly used
High Capacity: High-capacity server with 24vCore or more (to be provided later)
Service Type > Planned Compute
Selection
Current status of resources with Planned Compute set
In Use: Number of resources with Planned Compute set that are in use
Settings: Number of resources with Planned Compute set
Coverage Preview: Amount applied by resource-based Planned Compute
Create Planned Compute Service: Move to the Planned Compute service application page
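The Recovery Type rule above recovers from a selected backup timestamp, so the effective recovery point is the most recent backup taken at or before the desired moment. A minimal sketch of that selection logic (the function name and data are illustrative, not part of the platform):

```python
from datetime import datetime

def pick_recovery_backup(backup_times, target):
    """Return the most recent backup taken at or before the target
    recovery point, or None if no backup qualifies."""
    candidates = [t for t in backup_times if t <= target]
    return max(candidates) if candidates else None

backups = [
    datetime(2025, 3, 1, 2, 0),
    datetime(2025, 3, 2, 2, 0),
    datetime(2025, 3, 3, 2, 0),
]
# Recovering "as of" March 2, 14:30 uses the March 2, 02:00 backup.
print(pick_recovery_backup(backups, datetime(2025, 3, 2, 14, 30)))
```

Data written after the chosen backup timestamp is not included in the recovered database.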
CacheStore(DBaaS) connection scenario is a scenario where Bastion host (Virtual Server) and Database service are created, and the DB service is accessed through the Bastion host. To connect to CacheStore (DBaaS) stably in the Samsung Cloud Platform environment, it is necessary to create a Bastion host and use it for network connection. To maintain a stable and high level of security, it is recommended to configure the Database service in a Private Subnet environment and configure the Bastion host in a limited Public Subnet environment.
This scenario largely describes the process of creating a Bastion host and Database service, and configuring the network environment for Bastion host and Database connection, allowing access through a DB connection client.
Figure. CacheStore(DBaaS) server connection architecture
Scenario Components
You can configure the scenario using the following services.
Service Group
Service
Detailed Description
Networking
VPC
A service that provides an independent virtual network in a cloud environment
Networking
VPC > Subnet
A service that allows users to subdivide the network according to purpose and size within a VPC
Networking
VPC > Public IP
A service that reserves public IP and assigns and returns it to Compute resources
Networking
VPC > Internet Gateway
A service that connects VPC resources to the internet
Networking
Security Group
A virtual firewall that controls the server’s traffic
Database
CacheStore(DBaaS)
A service that easily creates and manages CacheStore in a web environment
Compute
Virtual Server
Cloud computing optimized virtual server
Compute
Virtual Server > Keypair
Encryption file used to connect to the Virtual Server
Table. Scenario Component List
Note
The default policy of Security Group is Deny All, so only allowed IPs should be registered.
An In/Outbound All Open (Any IP, Any Port) policy can expose cloud resources to external threats.
By specifying the necessary IP and Port to set the policy, you can enhance security.
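The Deny All default described in the note means traffic is permitted only when an explicit allow rule matches both the source and the port. A rough sketch of that matching semantics (the function and rule format are illustrative, not a platform API):

```python
import ipaddress

def is_allowed(src_ip, port, allow_rules):
    """Security Group semantics sketch: default Deny All; traffic passes
    only if some rule explicitly allows the source CIDR and port."""
    ip = ipaddress.ip_address(src_ip)
    for cidr, rule_port in allow_rules:
        if ip in ipaddress.ip_network(cidr) and port == rule_port:
            return True
    return False  # no matching allow rule, so the traffic is denied

# Allow RDP (3389) only from a specific office range.
rules = [("203.0.113.0/24", 3389)]
print(is_allowed("203.0.113.10", 3389, rules))  # True
print(is_allowed("198.51.100.5", 3389, rules))  # False
```

Narrow CIDR ranges and single ports, as in the rule above, keep the exposed surface minimal.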
Scenario composition method
To configure the scenario, create the necessary services through the following procedure.
1. Configuring the Network
This describes the process of configuring the network environment for connecting to the Bastion Host and Database services.
In the Summary panel, review the detailed information and estimated billing amount, and click the Complete button.
Once the creation is complete, check the created resource on the Virtual Server list page.
2-3. Check Bastion host connection ID and PW
Click All Services > Compute > Virtual Server menu. It moves to the Service Home page of Virtual Server.
On the Service Home page, click the Virtual Server menu. It moves to the Virtual Server list page.
On the Virtual Server list page, click the resource created in 2-2. Creating a Bastion Host. It moves to the detailed information page of the corresponding resource.
Click the RDP password inquiry button in the Keypair item on the detailed information page. The RDP password inquiry popup window opens.
Click All services > Networking > Firewall menu. It moves to the Service Home page of Firewall.
Service Home page, click the Firewall menu. It moves to the Firewall list page.
Firewall list page, select the Internet Gateway resource name created in 1-3. Creating an Internet Gateway. It moves to the detailed information page of the corresponding resource.
Click the Rules tab on the detailed information page. It moves to the Rules tab.
Rule tab, click the Add Rule button. Move to the Add Rule popup window.
In the Add Rule popup window, enter the following rules and click the OK button.
Source Address
Destination Address
Protocol
Port
Action
Direction
Description
Bastion connection PC IP
Bastion host IP
TCP
3389(RDP)
Allow
Inbound
User PC → Bastion host
Fig. Internet Gateway Firewall rules to be added
5. Connecting to the Database
This explains the process of a user accessing the Database through a DB connection client program.
This guide provides instructions on how to connect using Another Redis Desktop Manager. Various database client programs and CLI utilities are available, so you may also install and use whichever tool suits you.
5-1. Connecting to the Bastion host
Run Remote Desktop Connection in the Windows environment of the PC you want to connect to the Bastion host, enter the NAT IP of the Bastion Host, and click the Connect button.
When the remote desktop connection is successful, the User Credential Input Window opens. Enter the ID and Password confirmed in 2-3. Check Bastion host connection ID and PW and click the Confirm button.
5-2. Install DB connection client program (Another Redis Desktop Manager) on the Bastion host
Support for open source Valkey image developed by forking Redis OSS
2nd Generation Server Type added
Added 2nd generation (db2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For more information, see CacheStore(DBaaS) Server Type
Backup notification feature provided
Provides notification feature for backup success and failure. For more information, see Creating Notification Policy
Added HDD, HDD_KMS types to Block Storage type
2025.02.27
FEATURE Common Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes such as Account, IAM and Service Home, Tags, etc.
2024.10.01
NEW CacheStore(DBaaS) Service Official Version Released
Changed the service name to CacheStore(DBaaS).
Added volume encrypted storage selection option to Block Storage type.
Added Role Switch (Active ↔ Standby) function for Active DB and Standby DB configured in redundancy.
Integrated with Cloud Monitoring Service to enable DB instance performance and log monitoring.
Planned Compute policy setting is available according to the server type selected by the customer.
2024.07.02
NEW Beta Version Released
Released Redis(DBaaS) service that allows easy creation and management of Redis OSS in a web environment.
7 - Data Analytics
Provides an analysis service that can process big data easily and quickly.
7.1 - Event Streams
7.1.1 - Overview
Service Overview
Event Streams provides fully managed creation and configuration of the open source Apache Kafka for large-scale, massive message data processing. Samsung Cloud Platform automates the creation and configuration of Apache Kafka through a web-based Console, and users can configure the main components of Apache Kafka, such as Broker, Zookeeper, and AKHQ, in a single or cluster form.
Event Streams cluster is composed of multiple Broker nodes, and Brokers can be installed from a minimum of 1 up to a maximum of 10, typically installed with 3 or more. Zookeeper can be installed separately to manage the distributed Brokers, and if not installed separately, it is installed together on the Broker node. Additionally, a tool for managing Kafka called AKHQ (Apache Kafka HQ) is provided, allowing users to manage cluster operations through it.
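The broker-count bounds above (1 to 10 nodes, with 3 or more being typical) can be expressed as a small validation sketch; the function name is illustrative, not a platform API:

```python
def validate_broker_count(count):
    """Event Streams permits 1 to 10 Broker nodes; 3 or more is the
    typical choice for availability (per the overview above)."""
    if not 1 <= count <= 10:
        raise ValueError("Broker node count must be between 1 and 10")
    if count < 3:
        print(f"note: {count} broker(s) selected; 3 or more is typical")
    return count

print(validate_broker_count(3))  # 3
```

With fewer than three brokers, replicated topics lose headroom for node failure, which is why three is the usual minimum in practice.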
Provided Features
Event Streams provides the following features.
Auto Provisioning: You can configure and set up an Apache Kafka cluster via the UI.
Operation Control Management: Provides a function to control the status of running servers. In addition to starting and stopping the cluster, restarting is possible to apply configuration values.
AKHQ Provision: AKHQ, a tool that can manage Kafka, is provided, allowing users to manage and monitor clusters through it.
Add Broker node: If expansion is required to improve the cluster’s performance and stability, you can add nodes with the same specifications as the existing Broker nodes.
Parameter management: Performance improvement and security-related configuration parameter setting and modification are possible.
Monitoring: CPU, memory, and performance monitoring information can be checked via Cloud Monitoring and ServiceWatch.
Components
Event Streams provides pre-validated engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by Event Streams are as follows.
Technical support is available until the supplier’s EoTS (End of Technical Service) date, and the EOS date, when new creation stops, is set to six months before the EoTS date.
According to the supplier’s policy, the EOS and EoTS dates may change, so please refer to the supplier’s license management policy page for details.
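The EOS-date rule above (six months before the supplier's EoTS date) is simple date arithmetic; a sketch with the day-of-month clamped to the target month's length (the function name is illustrative):

```python
import calendar
from datetime import date

def eos_from_eots(eots):
    """EOS (new creation stops) is six months before the supplier's
    EoTS date; clamp the day to the target month's last day."""
    month = eots.month - 6
    year = eots.year
    if month < 1:
        month += 12
        year -= 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(eots.day, last_day))

print(eos_from_eots(date(2026, 12, 31)))  # 2026-06-30
```

Since the supplier may move the EoTS date, the derived EOS date should be recomputed whenever the supplier's policy changes.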
The server types supported by Event Streams are as follows.
For detailed information about the server types provided by Event Streams, see Event Streams Server Types.
Standard ess1v2m4
Category
Example
Detailed description
Server Type
Standard
Provided Server Types
Standard: Standard specifications (vCPU, Memory) commonly used
High Capacity: Large server specifications of 24 vCore or more
Server Specifications
ess1
Provided server specifications
ess1, ess2: Standard specifications (vCPU, Memory) configuration commonly used
esh2: Large-capacity server specifications
Provides servers with 24 vCores or more
Server specifications
v2
Number of vCores
v2: 2 virtual cores
Server Specifications
m4
Memory Capacity
m4: 4GB Memory
Table. Event Streams server type components
Preceding Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
A service that provides an independent virtual network in a cloud environment
Table. Event Streams Preceding Service
7.1.1.1 - Server Type
Event Streams server type
Event Streams provides a server type composed of various combinations such as CPU, Memory, Network Bandwidth, etc.
When creating Event Streams, Apache Kafka is installed according to the selected server type suitable for the purpose of use.
The server types supported in Event Streams are as follows.
Standard ess1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type distinction
Standard: Composed of standard specifications (vCPU, Memory) commonly used
High Capacity: Server specifications with higher capacity than Standard
Server Specifications
ess1
Classification of provided server type and generation
ess1: s means general specifications, and 1 means generation
esh2: h means large-capacity server specifications, and 2 means generation
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory Capacity
m4: 4GB Memory
Table. Event Streams server type formats
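The naming scheme in the table above (family letter, generation digit, vCore count, memory size) can be decoded mechanically; a sketch with an illustrative parser, not a platform API:

```python
import re

def parse_server_type(name):
    """Decode an Event Streams server type such as 'ess1v2m4':
    es + family (s standard / h high-capacity) + generation digit,
    then v<vCores> and m<memory GB>, per the table above."""
    m = re.fullmatch(r"es([sh])(\d+)v(\d+)m(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    family, gen, vcpu, mem = m.groups()
    return {
        "family": "standard" if family == "s" else "high-capacity",
        "generation": int(gen),
        "vcpu": int(vcpu),
        "memory_gb": int(mem),
    }

print(parse_server_type("esh2v32m128"))
```

For example, ess1v2m4 decodes to a standard 1st-generation type with 2 vCores and 4 GB of memory, matching the sample row above.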
Reference
Please select the server type by checking the node’s minimum specifications as follows.
Division
vCPU
Memory
Broker
2 vCore
4 GB
Zookeeper
1 vCore
2 GB
ess1 server type
The ess1 server type of Event Streams is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 64 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
ess1v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
ess1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
ess1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
ess1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
ess1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
ess1v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
ess1v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
ess1v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
ess1v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Table. Event Streams server type specification - ess1 server type
ess2 server type
The ess2 server type of Event Streams is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 16 vCPUs and 64 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
CPU vCore
Memory
Network Bandwidth(Gbps)
Standard
ess2v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
ess2v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
ess2v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
ess2v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
ess2v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
ess2v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
ess2v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
ess2v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
ess2v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Table. Event Streams Server Type Specifications - ess2 Server Type
esh2 server type
The esh2 server type of Event Streams is provided with high-capacity server specifications and is suitable for database workloads for large-scale data processing.
Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 32 vCPUs and 128 GB of memory
Up to 25 Gbps networking speed
Division
Server Type
vCPU
Memory
Network Bandwidth
High Capacity
esh2v32m64
32 vCore
64 GB
Up to 25 Gbps
High Capacity
esh2v32m128
32 vCore
128 GB
Up to 25 Gbps
Table. Event Streams server type specification - esh2 server type
7.1.1.2 - Monitoring Metrics
Event Streams Monitoring Metrics
The table below shows the performance monitoring metrics of Event Streams that can be checked through Cloud Monitoring. For detailed Cloud Monitoring usage instructions, refer to Cloud Monitoring guide.
Event Streams sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at a 1‑minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Indicators
The following are the basic metrics for the namespace Event Streams.
OS Basic Metrics
Category
Performance Item
Detailed Description
Unit
Meaningful Statistics
CPU
CPU Usage
CPU Usage Rate
Percent
Disk
Disk Usage
Disk Usage Rate
Percent
Disk
Disk Write Bytes
Write capacity on block device (bytes/second)
Bytes/Second
Disk
Disk Read Bytes
Amount read from block device (bytes/second)
Bytes/Second
Disk
Disk Write Request
Number of write requests on block device (requests/second)
Count/Second
Disk
Disk Read Requests
Number of read requests on block device (requests/second)
Count/Second
Disk
Average Disk I/O Queue Size
Average queue length of requests issued to the block device
None
Disk
Disk I/O Utilization
Proportion of time the block device actually processes I/O operations
Percent
Memory
Memory Usage
Memory Usage Rate
Percent
Network
Network In Bytes
Received capacity on the network interface (bytes/second)
Bytes/Second
Network
Network Out Bytes
Data transmitted from network interface (bytes/second)
Bytes/Second
Network
TCP Connections
Total number of TCP connections currently established correctly
Count/Second
Network
Network In Packets
Number of packets received on the network interface
Count
Network
Network Out Packets
Number of packets transmitted from the network interface
Count
Network
Network In Dropped
Number of packet drops received on the network interface
Count
Network
Network Out Dropped
Number of packet drops transmitted from the network interface
Count
Network
Network In Errors
Number of packet errors received on the network interface
Count
Network
Network Out Errors
Number of packet errors transmitted from the network interface
Count
Table. OS Basic Metrics
Event Streams Basic Metrics
Category
Performance Item
Detailed Description
Unit
Meaningful Statistics
Activelock
Active locks
Number of active locks
Count
Activesession
Active sessions
Number of active sessions
Count
Activesession
Connection usage
DB connection session usage rate
Percent
Activesession
Connections
DB connection session
Count
Activesession
Connections(MAX)
Maximum number of connections that can be attached to the DB
Count
ProxySQL
Proxy Uptime
The proxy’s uptime, expressed in seconds
Seconds
ProxySQL
Backend connections(CONNECTED)
Number of sessions connected to the Proxy server
Count
ProxySQL
Client connections connected
Number of client sessions currently connected to the proxy
Count
ProxySQL
Queries routed
Number of queries routed to backend server
Count
ProxySQL
Backend connections(ACTIVE, IDLE)
Number of Active / idle connections per Endpoint
Count
ProxySQL
Backend server status
Backend server status
1 - ONLINE
2 - SHUNNED
3 - OFFLINE_SOFT
4 - OFFLINE_HARD
5 - SHUNNED_REPLICATION_LAG
None
ProxySQL
Backend connection check
Backend server’s connection success/failure check
Count
State
Instance state
Scalable DB status up/down check
Count
State
Slave behind master seconds
Replica’s delay amount (unit: seconds)
Seconds
Tablespace
Tablespace used
Tablespace usage
Megabytes
Tablespace
Tablespace used(TOTAL)
Tablespace usage (total)
Megabytes
Transactions
Slow queries
Number of slow queries
Count
Transactions
Transaction time
Long Transaction time
Seconds
Transactions
Wait locks
Number of waiting sessions
Count
Table. Event Streams basic metrics
7.1.2 - How-to guides
The user can enter the required information for Event Streams through the Samsung Cloud Platform Console, select detailed options, and create the service.
Event Streams Create
You can create and use the Event Streams service from the Samsung Cloud Platform Console.
Notice
Before creating the service, please configure the VPC’s Subnet type as General.
If the Subnet type is Local, the creation of the corresponding Database service is not possible.
To create Event Streams, follow these steps.
Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
On the Service Home page, click the Create Event Streams button. You will be taken to the Create Event Streams page.
On the Create Event Streams page, enter the information required to create the service and select detailed options.
In the Image and version selection area, select the required information.
Category
Required or not
Detailed description
Image version
Required
Provide version list of Event Streams
Table. Event Streams Service Information Input Items
In the Service Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Server Name Prefix
Required
Server name where Apache Kafka will be installed
Start with a lowercase English letter, and use lowercase letters, numbers, and the special character (-) to input 3 to 13 characters
A postfix such as 001, 002 is attached based on the server name to create the actual server name
Cluster Name
Required
Cluster name of the servers
Enter using English letters, 3 ~ 20 characters
A cluster is a unit that groups multiple servers
Broker > Broker Node count
Required
Broker Node count
Broker > Server Type
Required
Server type where the Broker will be installed
Standard: Standard specifications commonly used
High Capacity: Large-capacity server with 24 vCore or more
For detailed information about server types provided by Event Streams, refer to Event Streams Server Type
Broker > Planned Compute
Select
Status of resources with Planned Compute set
In Use: Number of resources with Planned Compute set that are currently in use
Configured: Number of resources with Planned Compute set
Coverage Preview: Amount applied by Planned Compute per resource
Apply for Planned Compute Service: Go to the Planned Compute service application page
Block Storage type to be used on the server where AKHQ is installed
Base OS: Area where the engine is installed
AKHQ > AKHQ account
Required
AKHQ account
Enter using lowercase English letters, 2 to 20 characters
AKHQ > AKHQ password
Required
AKHQ account password
Enter 8 ~ 30 characters including English letters, numbers and special characters (excluding “ ‘)
AKHQ > AKHQ Password Confirmation
Required
AKHQ Account Password Confirmation
Re-enter the same AKHQ account password
AKHQ > AKHQ Port Number
Required
AKHQ connection port number
Port number is automatically set to 8080 and cannot be modified
Network > Common Settings
Required
Network settings where servers generated by the service are installed
Choose if you want to apply the same settings to all installed servers
Select a pre‑created VPC and Subnet
IP: Only automatic generation is possible
For Public NAT settings, it is only possible in per‑server settings
Network > Per-Server Settings
Required
Network settings where servers generated by the service are installed
Select if you want to apply different settings per installed server
Select a pre‑created VPC and Subnet
IP: Enter each server’s IP
Public NAT feature is available only when the VPC is connected to an Internet Gateway; if you check Use, you can select from reserved IPs in the VPC product’s Public IP. For details, see Create Public IP
IP Access Control
Select
Service Access Policy Settings
Since the access policy is set for the IP entered on the page, you do not need to separately configure Security Group policies.
Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
To delete an entered IP, click the x button next to the entered IP
Maintenance Period
Select
Event Streams Maintenance Period
Select Use to set day of week, start time, and duration
It is recommended to set a maintenance period for stable service management. Patch work will be performed at the set time, and service interruption may occur
The provider is not responsible for issues arising from patches not being applied when the maintenance period is set to not used
Table. Event Streams service configuration items
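The input rules in the table above (Server Name Prefix format, the 001/002 postfix, and the IP/CIDR format for IP Access Control) can be sketched as client-side checks; function names are illustrative, and the Console performs the authoritative validation:

```python
import ipaddress
import re

def validate_prefix(prefix):
    """Server Name Prefix rule from the table above: start with a
    lowercase letter; lowercase letters, digits, and '-'; 3-13 chars."""
    return bool(re.fullmatch(r"[a-z][a-z0-9-]{2,12}", prefix))

def server_names(prefix, count):
    """Actual server names attach a 001, 002, ... postfix to the prefix."""
    return [f"{prefix}{i:03d}" for i in range(1, count + 1)]

def valid_access_entry(entry):
    """IP Access Control accepts a plain IP or CIDR notation."""
    try:
        ipaddress.ip_network(entry, strict=False)
        return True
    except ValueError:
        return False

print(validate_prefix("es-prod"))             # True
print(server_names("es-prod", 2))             # ['es-prod001', 'es-prod002']
print(valid_access_entry("192.168.10.0/24"))  # True
```

Pre-validating inputs this way avoids round-trips to the Console for names or access entries that would be rejected anyway.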
In the Database configuration required information input area, enter or select the required information.
Category
Required or not
Detailed description
Zookeeper SASL account
Required
Zookeeper account
Enter using lowercase English letters, 2 ~ 20 characters
Zookeeper SASL password
Required
Zookeeper account password
Enter 8 to 30 characters including letters, numbers, and special characters (excluding “‘)
Zookeeper SASL password verification
Required
Zookeeper account password verification
Re-enter the Zookeeper SASL account password identically
Zookeeper Port number
Required
Zookeeper port number
1200 ~ 65535 can be entered, but the Broker port or 2888, 3888 cannot be used
Broker SASL Account
Required
Kafka connection account
Enter using lowercase English letters, 2 to 20 characters
Broker SASL password
Required
Kafka connection account password
Enter 8 to 30 characters including English letters, numbers, and special characters (excluding “ and ‘)
Broker SASL password verification
Required
Check Kafka connection account password
Re-enter the Broker SASL account password identically
Broker Port number
Required
Kafka port number
1200 ~ 65535 can be entered, and Broker port or 2888, 3888 cannot be used
Parameter
Required
Event Streams configuration parameters
Click the View button to view detailed parameter information
Parameters can be modified after the service creation is completed, and a restart is required when modified
Time zone
Selection
Standard time zone used by the service
ServiceWatch log collection
Select
Whether to collect ServiceWatch logs
Select Use to set up the ServiceWatch log collection feature
Provided free up to 5 GB for all services within the account, and charges apply based on storage size if exceeding 5 GB
When collecting, log groups and log streams are automatically created and cannot be deleted until the resources are removed
To prevent exceeding 5 GB, direct deletion of log data or shortening the retention period is recommended
Table. Required information input items for Event Streams Database configuration
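The port rules in the table above (1200 to 65535, the Zookeeper and Broker ports must differ, and 2888/3888 are reserved) can be sketched as a single check; the function name is illustrative, and the Console enforces the real constraints:

```python
RESERVED_PORTS = {2888, 3888}  # not assignable per the table above

def validate_ports(zookeeper_port, broker_port):
    """Port rules from the table above: both ports in 1200-65535,
    neither 2888 nor 3888, and the two ports must differ."""
    for port in (zookeeper_port, broker_port):
        if not 1200 <= port <= 65535:
            raise ValueError(f"port {port} outside 1200-65535")
        if port in RESERVED_PORTS:
            raise ValueError(f"port {port} is reserved")
    if zookeeper_port == broker_port:
        raise ValueError("Zookeeper and Broker ports must differ")
    return True

print(validate_ports(2181, 9092))  # True
```

The conventional Zookeeper (2181) and Kafka (9092) ports both fall inside the allowed range and pass the check.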
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Add Tag button can be clicked to create and add a tag, or add an existing tag
Up to 50 tags can be added
Added new tags are applied after the service creation is completed
Table. Event Streams Service Additional Information Input Items
In the Summary panel, check the detailed information and estimated billing amount, and click the Create button.
Once creation is complete, check the created resource on the Resource List page.
Event Streams Check Detailed Information
Event Streams service can view and edit the full resource list and detailed information. Event Streams Details page consists of Details, Tags, Activity History tabs.
To view detailed information about the Event Streams service, follow these steps.
Click the All Services > Data Analytics > Event Streams menu. It navigates to the Service Home page of Event Streams.
On the Service Home page, click the Event Streams menu. It navigates to the Event Streams List page.
Click the resource to view detailed information on the Event Streams List page. It navigates to the Event Streams Details page.
The top of the Event Streams Details page displays status information and information about additional features.
Category
Detailed description
Cluster Status
Cluster Status
Creating: Cluster is being created
Editing: Cluster is performing a change operation
Error: Cluster is in a state where a failure occurred while performing a task
If it occurs continuously, contact the administrator
Failed: Cluster is in a failed state during creation
Restarting: Cluster is restarting
Running: Cluster is operating normally
Starting: Cluster is starting
Stopped: Cluster is stopped
Stopping: Cluster is being stopped
Synchronizing: Cluster is synchronizing
Terminating: Cluster is terminating
Unknown: Cluster status is unknown
If it occurs continuously, contact the administrator
Upgrading: Cluster is performing an upgrade
Cluster Control
Button to change cluster state
Start: Start a stopped cluster
Stop: Stop a running cluster
Restart: Restart a running cluster
More additional features
Cluster-related management button
Service status synchronization: Can query current server status and synchronize to the Console
Parameter management: Can view and modify service configuration parameters
Add Broker Node: Add a Broker Node
If configured as a cluster, the Add Broker Node button is displayed
Service termination
Button to terminate the service
Table. Event Streams status information and additional features
Detailed Information
On the Event Streams list page, you can view the detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Server Information
Server information configured in the respective cluster
Category: Server type (Zookeeper&Broker, Broker, Zookeeper, AKHQ)
Modifying the server type requires a server restart
Table. Event Streams detailed information items
Tag
On the Event Streams List page, you can view the tag information of the selected resource, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
You can view the Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Event Streams Tag Tab Items
Work History
You can view the operation history of the selected resource on the Event Streams list page.
Category
Detailed description
Work History List
Resource Change History
Work details, work date/time, resource type, resource ID, resource name, event topic, work result, worker information verification
Detailed Search button provides detailed search function
Table. Event Streams Job History Tab Detailed Information Items
Event Streams Resource Management
If you need to change the existing configuration options of a created Event Streams resource, manage parameters, or add broker node configurations, you can perform the tasks on the Event Streams Details page.
Operating Control
If changes occur to the running Event Streams resources, you can start, stop, or restart.
To control the operation of Event Streams, follow the steps below.
Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
On the Event Streams List page, click the resource to control operation. It navigates to the Event Streams Details page.
Check the Event Streams status and complete the changes using the control button below.
Start: Starts the server where the Event Streams service is installed, and the Event Streams service enters the Running state.
Stop: Stops the server where the Event Streams service is installed, and the Event Streams service enters the Stopped state.
Restart: Only the Event Streams service is restarted.
Synchronize Service Status
You can query the current server status and synchronize it to the Console.
To synchronize the service status of Event Streams, follow the steps below.
Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
On the Event Streams list page, click the resource whose service status you want to query. You will be taken to the Event Streams details page.
Click the Service Status Synchronization button. It takes a little time to retrieve, and while retrieving, the cluster changes to Synchronizing state.
When the query is completed, the status in the server information item is updated, and the cluster changes to Running state.
Parameter Management
Provides parameter query and modification functions.
To view and modify configuration parameters, follow the steps below.
Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
On the Event Streams List page, click the resource whose parameters you want to view and edit. You will be taken to the Event Streams Details page.
Click the Parameter Management button. You will be taken to the Parameter Management page.
On the Parameter Management page, click the Search button. The Database Search popup window opens.
To view the Parameter information, click the Confirm button. It takes a little time to retrieve.
You can modify the Parameter information after performing a query.
To edit the Parameter information, click the Edit button and then enter the changes in the Custom Value area of the Parameter to be edited.
When the application type is dynamic, it is applied immediately, and when it is static, a service restart is required, causing service interruption.
When input is complete, click the Save button.
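The dynamic/static distinction in the steps above can be modeled as follows. This is a minimal illustrative sketch: the parameter names and the `apply_parameter` helper are examples, not the console's actual parameter list or API.

```python
# Illustrative model of how parameter changes are applied: "dynamic"
# parameters take effect immediately, while "static" parameters are staged
# until the next service restart (which interrupts the service).
# The parameter names and apply types below are examples, not an official list.

PARAMETER_APPLY_TYPES = {
    "log.retention.hours": "dynamic",
    "num.network.threads": "static",
}

def apply_parameter(name: str, value: str, state: dict) -> str:
    """Record a parameter change and report when it takes effect."""
    apply_type = PARAMETER_APPLY_TYPES.get(name)
    if apply_type is None:
        raise ValueError(f"unknown parameter: {name}")
    if apply_type == "dynamic":
        state.setdefault("active", {})[name] = value
        return "applied immediately"
    # Static parameters require a restart, causing service interruption.
    state.setdefault("pending_restart", {})[name] = value
    return "pending service restart"

state = {}
print(apply_parameter("log.retention.hours", "72", state))
print(apply_parameter("num.network.threads", "8", state))
```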
Change Server Type
You can change the configured server type.
To change the server type, follow the steps below.
Caution
If the server type is configured as Standard, it cannot be changed to High Capacity. If you want to change to High Capacity, create a new service.
If you modify the server type, a server reboot is required. Please separately check for SW license changes or SW setting adjustments resulting from the specification change.
Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
Click the Event Streams menu on the Service Home page. Navigate to the Event Streams list page.
On the Event Streams list page, click the resource to change the server type. You will be taken to the Event Streams details page.
Click the Edit button of the server type you want to change at the bottom of the detailed information. The Edit Server Type popup window opens.
After selecting the server type in the Edit Server Type popup window, click the Confirm button.
Expanding storage
You can expand the storage added to the data area up to a maximum of 5TB based on the initially allocated capacity. You can expand the storage without stopping Event Streams, and if configured as a cluster, all nodes are expanded simultaneously.
Notice
If encryption is set on the existing Block Storage, encryption will also be applied to the additional Disk.
Disk size modification is only possible to increase by at least 16GB over the current disk size.
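The expansion constraints in the notice above can be sketched as a simple validation. The 16 GB minimum increment and 5 TB ceiling come from this guide; the helper itself is hypothetical, not a console API.

```python
# Sketch of the disk-expansion rules: the new size must exceed the current
# size by at least 16 GB, and the data disk cannot grow beyond 5 TB.

MIN_INCREMENT_GB = 16
MAX_DISK_GB = 5 * 1024  # 5 TB expressed in GB

def validate_disk_expansion(current_gb: int, new_gb: int) -> None:
    """Raise ValueError if the requested expansion violates the limits."""
    if new_gb < current_gb + MIN_INCREMENT_GB:
        raise ValueError(f"new size must be at least {current_gb + MIN_INCREMENT_GB} GB")
    if new_gb > MAX_DISK_GB:
        raise ValueError(f"new size must not exceed {MAX_DISK_GB} GB")

validate_disk_expansion(100, 116)  # grows by exactly the 16 GB minimum
```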
To increase storage capacity, follow the steps below.
All Services > Data Analytics > Event Streams Click the menu. Navigate to the Service Home page of Event Streams.
Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
On the Event Streams list page, click the resource whose storage you want to expand. You will be taken to the Event Streams details page.
Click the Edit button of the added Disk you want to expand at the bottom of the detailed information. The Disk Edit popup window opens.
In the Disk Edit popup window, enter the expanded capacity and click the Confirm button.
Add Broker Node
If Event Streams cluster expansion is required, you can add nodes with the same specifications as the Broker Node you are using. The added nodes are added to the existing cluster without server downtime, and the existing data is automatically distributed.
Notice
Up to 10 nodes can be used within the cluster. Please note that additional charges apply for created nodes.
Adding nodes may degrade cluster performance.
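The 10-node cluster limit stated above bounds how many broker nodes can still be added. A small sketch with a hypothetical helper, not a console API:

```python
# Sketch of the broker-node limit: a cluster may hold at most 10 nodes,
# so the number that can still be added depends on the current count.

MAX_NODES_PER_CLUSTER = 10

def addable_broker_nodes(current_nodes: int) -> int:
    """Return how many broker nodes can still be added to the cluster."""
    if not 1 <= current_nodes <= MAX_NODES_PER_CLUSTER:
        raise ValueError("current node count out of range")
    return MAX_NODES_PER_CLUSTER - current_nodes

print(addable_broker_nodes(3))  # a 3-node cluster can grow by 7 more nodes
```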
To add a Broker node, follow the steps below.
All Services > Data Analytics > Event Streams Click the menu. Navigate to the Service Home page of Event Streams.
On the Service Home page, click the Event Streams menu. Navigate to the Event Streams list page.
On the Event Streams list page, click the resource to which you want to add a Broker Node. You will be taken to the Event Streams details page.
Click the Broker Node Add button. Navigate to the Broker Node Add page.
Enter the required information in each area, then click the Complete button.
Category
Required
Detailed description
Server Name
Required
Server name where Broker is installed
It is set to the server name configured in the original cluster.
Cluster Name
Required
Cluster Name
It will be set to the cluster name set in the original cluster.
Number of additional Nodes
Required
Number of Nodes to add
Use up to 10 nodes per cluster
Service Type > Server Type
Required
Server type where the Broker will be installed
It is set to be the same as the server type set in the original cluster.
Service Type > Planned Compute
Select
Status of resources with Planned Compute set
In Use: Number of resources with Planned Compute that are currently in use
Configured: Number of resources with Planned Compute set
Coverage Preview: Amount applied per resource by Planned Compute
Planned Compute Service Application: Go to the Planned Compute service application page
The Storage type and capacity set in the original cluster are applied identically
Network
Required
Network where servers are installed
Apply the same network as set in the original cluster
Table. Event Streams Broker Node Additional Items
Event Streams Cancel
You can cancel unused Event Streams to reduce operating costs. However, if you cancel the service, the running service may be stopped immediately, so you should consider the impact of service interruption sufficiently before proceeding with the cancellation.
To cancel Event Streams, follow the steps below.
Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
On the Event Streams list page, select the resource to cancel, and click the Cancel Service button.
Once the termination is complete, check on the Event Streams list page whether the resource has been terminated.
7.1.3 - API Reference
API Reference
7.1.4 - CLI Reference
CLI Reference
7.1.5 - Release Note
Event Streams
2025.07.01
FEATURE Terraform and Disk Type Addition
Terraform support is now provided.
HDD and HDD_KMS disk types are now also provided.
2025.02.27
NEW Event Streams Service Official Version Release
An Event Streams service that easily creates and manages Apache Kafka clusters in a web environment has been released.
7.2 - Search Engine
7.2.1 - Overview
Service Overview
Search Engine provides automated creation and configuration of the distributed search and analytics engines Elasticsearch and OpenSearch through a web-based console. Users can select a server type that fits the system configuration to set up a cluster, and it supports the data analysis and visualization tools Kibana and the OpenSearch dashboard.
Notice
Search Engine provides Elasticsearch Enterprise version and OpenSearch version.
Elasticsearch Enterprise’s software license uses a Bring Your Own License (BYOL), and the software license policy in cloud environments must follow the supplier’s policy.
Search Engine Cluster consists of multiple master nodes and data nodes. Data nodes can be installed from a minimum of 1 up to a maximum of 10, and are usually installed with 3 or more. If a master node is not installed separately, the data node also performs the role of the master node and can be installed up to a maximum of 10. When a master node is installed separately, data nodes can be up to 50.
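The node-count rules above can be sketched as a simple check: without dedicated master nodes, the data nodes double as masters and are limited to 10; with dedicated master nodes, up to 50 data nodes are allowed. The functions are illustrative, not a console API.

```python
# Sketch of the Search Engine cluster sizing rules described above.

def max_data_nodes(has_dedicated_master: bool) -> int:
    """Maximum number of data nodes allowed for the given topology."""
    return 50 if has_dedicated_master else 10

def validate_cluster(data_nodes: int, has_dedicated_master: bool) -> bool:
    """A cluster needs at least 1 data node, up to the topology's maximum."""
    return 1 <= data_nodes <= max_data_nodes(has_dedicated_master)

print(validate_cluster(3, False))   # typical 3-node cluster
print(validate_cluster(30, False))  # too many without dedicated masters
print(validate_cluster(30, True))   # allowed with dedicated masters
```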
Provided Features
Search Engine provides the following functions.
Auto Provisioning: You can configure and set up Elasticsearch and OpenSearch clusters via the UI.
Operation Control Management: Provides functionality to control the status of running servers. Restart is possible for reflecting configuration values, along with starting and stopping the cluster.
Backup and Recovery: Backup is possible using the built-in backup feature, and recovery can be performed to the point in time of the backup file.
Add Data Node: If cluster expansion is required, you can add nodes with the same specifications as the data nodes in use. Up to 10 nodes can be added within the cluster.
Visualization tool support: Provides data analysis and visualization tools, and supports Elasticsearch Kibana or OpenSearch dashboards.
Monitoring: CPU, memory, cluster performance monitoring information can be checked through the Cloud Monitoring service.
Components
Search Engine provides pre-validated engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.
Engine Version
Search Engine supported engine versions are as follows.
Technical support is available until the supplier’s EoTS (End of Technical Service) date, and the EOS date, after which new creation stops, is set to six months before the EoTS date.
Since the EOS and EoTS dates may change according to the supplier’s policy, please refer to the supplier’s license management policy page for details.
The server types supported by Search Engine are as follows.
For detailed information about the server types provided by Search Engine, please refer to Search Engine Server Type.
Standard ses1v2m4
Category
Example
Detailed description
Server Type
Standard
Provided Server Types
Standard: Standard specifications (vCPU, Memory) configuration commonly used
High Capacity: High-capacity server specifications of 24 vCore or more
Server specifications
ses1
Provided server specifications
ses1: Standard specifications (vCPU, Memory) configuration commonly used
seh2: Large-capacity server specifications
Provides servers with 24 vCore or more
Server specifications
v2
Number of vCores
v2: 2 virtual cores
Server specifications
m4
Memory capacity
m4: 4GB Memory
Table. Search Engine Server Type Components
Preliminary Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
A service that provides an independent virtual network in a cloud environment
Table. Search Engine Pre-service
7.2.1.1 - Server Type
Search Engine server type
Search Engine provides a server type composed of various combinations such as CPU, Memory, Network Bandwidth, etc.
When creating a Search Engine, Elastic Search is installed according to the server type selected to match the purpose of use.
The server types supported by the Search Engine are as follows.
Standard ses1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type distinction
Standard: Composed of standard specifications (vCPU, Memory) commonly used
High Capacity: Server specifications with higher capacity than Standard
Server Specification
ses1
Classification of provided server type and generation
ses1: s means general specification, and 1 means generation
seh2: h means large-capacity server specification, and 2 means generation
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory Capacity
m4: 4GB Memory
Table. Search Engine server type format
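The naming scheme in the table above (family and generation, then "v" plus the vCore count, then "m" plus the memory in GB) can be decoded mechanically. A sketch with a regex inferred from the examples in the table, not an official grammar:

```python
import re

# Parse a server-type code such as "ses1v2m4" or "seh2v24m48" into its parts.
SERVER_TYPE_RE = re.compile(r"^(?P<family>[a-z]+?)(?P<gen>\d+)v(?P<vcpu>\d+)m(?P<mem>\d+)$")

def parse_server_type(code: str) -> dict:
    m = SERVER_TYPE_RE.match(code)
    if m is None:
        raise ValueError(f"unrecognized server type: {code}")
    return {
        "family": m["family"],       # e.g. "ses" (standard) or "seh" (high capacity)
        "generation": int(m["gen"]),
        "vcpu": int(m["vcpu"]),
        "memory_gb": int(m["mem"]),
    }

print(parse_server_type("ses1v2m4"))
print(parse_server_type("seh2v24m48"))
```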
ses1 server type
The ses1 server type of Search Engine is provided with standard specifications (vCPU, Memory) and is suitable for a variety of search and analytics workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
ses1v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
ses1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
ses1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
ses1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
ses1v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
ses1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
ses1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
ses1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
ses1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
ses1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
ses1v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
ses1v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
ses1v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
ses1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
ses1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
ses1v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
ses1v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
ses1v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
ses1v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
ses1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
ses1v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
ses1v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
ses1v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
ses1v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
ses1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
ses1v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
ses1v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
ses1v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
ses1v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
ses1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
ses1v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
ses1v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
ses1v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
ses1v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
ses1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
ses1v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
ses1v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
ses1v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
ses1v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
ses1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Search Engine server type specification - ses1 server type
ses2 server type
The ses2 server type of Search Engine is provided with standard specifications (vCPU, Memory) and is suitable for a variety of search and analytics workloads.
Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
ses2v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
ses2v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
ses2v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
ses2v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
ses2v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
ses2v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
ses2v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
ses2v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
ses2v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
ses2v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
ses2v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
ses2v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
ses2v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
ses2v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
ses2v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
ses2v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
ses2v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
ses2v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
ses2v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
ses2v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
ses2v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
ses2v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
ses2v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
ses2v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
ses2v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
ses2v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
ses2v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
ses2v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
ses2v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
ses2v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
ses2v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
ses2v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
ses2v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
ses2v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
ses2v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
ses2v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
ses2v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
ses2v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
ses2v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
ses2v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Search Engine server type specification - ses2 server type
seh2 server type
The seh2 server type of Search Engine is provided with large-capacity server specifications and is suitable for large-scale data processing workloads.
Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 72 vCPUs and 288 GB of memory
Up to 25 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
High Capacity
seh2v24m48
24 vCore
48 GB
Up to 25 Gbps
High Capacity
seh2v24m96
24 vCore
96 GB
Up to 25 Gbps
High Capacity
seh2v24m192
24 vCore
192 GB
Up to 25 Gbps
High Capacity
seh2v24m288
24 vCore
288 GB
Up to 25 Gbps
High Capacity
seh2v32m64
32 vCore
64 GB
Up to 25 Gbps
High Capacity
seh2v32m128
32 vCore
128 GB
Up to 25 Gbps
High Capacity
seh2v32m256
32 vCore
256 GB
Up to 25 Gbps
High Capacity
seh2v48m96
48 vCore
96 GB
Up to 25 Gbps
High Capacity
seh2v48m192
48 vCore
192 GB
Up to 25 Gbps
High Capacity
seh2v64m128
64 vCore
128 GB
Up to 25 Gbps
High Capacity
seh2v64m256
64 vCore
256 GB
Up to 25 Gbps
High Capacity
seh2v72m144
72 vCore
144 GB
Up to 25 Gbps
High Capacity
seh2v72m288
72 vCore
288 GB
Up to 25 Gbps
Table. Search Engine server type specification - seh2 server type
7.2.1.2 - Monitoring Metrics
Search Engine Monitoring Metrics
The following table shows the performance monitoring metrics of Search Engine that can be checked through Cloud Monitoring. For detailed Cloud Monitoring usage, please refer to the Cloud Monitoring guide.
Block Storage type to be used for server where Kibana is installed
Basic OS: Area where engine is installed
Network > Common Settings
Required
Network settings where servers created in the service are installed
Select to apply the same settings to all servers being installed
Select previously created VPC and Subnet
IP: Only automatic creation is possible
Public NAT settings are only possible with per-server settings
Network > Per-Server Settings
Required
Network settings where servers created in the service are installed
Select to apply different settings for each server being installed
Select previously created VPC and Subnet
IP: Enter IP for each server
Public NAT function can be used only when VPC is connected to Internet Gateway. If Use is checked, you can select from reserved IPs in Public IP of VPC product. For more information, refer to Create Public IP
IP Access Control
Optional
Service access policy settings
Access policy is set for IPs entered on the page, so separate Security Group policy settings are not required
Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click Add button
To delete entered IP, click x button next to the entered IP
Maintenance Window
Optional
Search Engine maintenance window
If Use is selected, set day of week, start time, and duration
It is recommended to set a maintenance window for stable service management. Patch work proceeds at the set time and service interruption occurs
If set to Not Used, the service provider is not responsible for problems caused by unapplied patches
Table. Search Engine Service Information Input Items
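The IP Access Control entry formats described in the table above, a single IP such as 192.168.10.1 or a CIDR block such as 192.168.10.0/24, can be validated with Python's standard `ipaddress` module. A small sketch, checking one entry at a time:

```python
import ipaddress

def validate_access_entry(entry: str) -> bool:
    """Accept a single IP or a CIDR block; reject anything else.

    ip_network() treats a bare IP as a /32 (or /128) network, and with
    strict=True it rejects CIDR blocks whose host bits are set.
    """
    try:
        ipaddress.ip_network(entry, strict=True)
        return True
    except ValueError:
        return False

print(validate_access_entry("192.168.10.1"))     # single IP
print(validate_access_entry("192.168.10.0/24"))  # CIDR block
print(validate_access_entry("192.168.10.1/32"))  # single-host CIDR
print(validate_access_entry("192.168.10.1/24"))  # host bits set: invalid
```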
Enter or select the required information in the Database Configuration Required Information Input area.
Division
Required
Description
Backup > Use
Optional
Whether to use node backup
If node backup is selected, select retention period and backup start time
Backup > Retention Period
Optional
Backup retention period
Select backup retention period. File retention period can be set from 7 days to 35 days
Separate charges occur for backup files depending on capacity
Backup > Backup Start Time
Optional
Backup start time
Select backup start time
Backup execution minutes are set randomly, and backup end time cannot be set
Cluster Port Number
Required
Elasticsearch connection port number
Enter a value from 1200 to 65535; 9300 (the Elasticsearch internal port) and 5301 (the Kibana port) cannot be used
Elastic Username
Required
Elasticsearch username
Enter within 2 to 20 characters using lowercase English letters
Elastic Password
Required
Elasticsearch connection password
Enter 8 to 30 characters including English letters, numbers, and special characters (excluding ", ’, \)
Elastic Password Confirmation
Required
Elasticsearch connection password confirmation
Re-enter the Elasticsearch connection password identically
License Key
Required
Elasticsearch License Key
Enter the entire content in the issued license file (.json)
If the entered license key is invalid, service creation may not be possible
OpenSearch does not require License Key
Time Zone
Optional
Standard time zone where the service is used
Table. Search Engine Database Configuration Required Information Input Items
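The input rules in the table above can be expressed as simple checks: the cluster port must fall in 1200 to 65535 while avoiding 9300 (Elasticsearch internal) and 5301 (Kibana), and the username must be 2 to 20 lowercase English letters. The helpers are illustrative, not the console's own validation:

```python
import re

RESERVED_PORTS = {9300, 5301}  # Elasticsearch internal port, Kibana port

def validate_cluster_port(port: int) -> bool:
    """Port must be in 1200-65535 and not one of the reserved ports."""
    return 1200 <= port <= 65535 and port not in RESERVED_PORTS

def validate_username(name: str) -> bool:
    """Username: 2 to 20 lowercase English letters."""
    return re.fullmatch(r"[a-z]{2,20}", name) is not None

print(validate_cluster_port(9200))    # allowed
print(validate_cluster_port(9300))    # rejected: Elasticsearch internal port
print(validate_username("elastic"))   # allowed
print(validate_username("Elastic1"))  # rejected: uppercase and digits
```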
Enter or select the required information in the Additional Information Input area.
Division
Required
Description
Tags
Optional
Add tags
Create and add tags by clicking Add Tag button or add existing tags
Can add up to 50 tags
Added new tags are applied after service creation is completed
Table. Search Engine Service Additional Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When creation is completed, check the created resource on the Resource List page.
Check Search Engine Detailed Information
Search Engine service can check and modify the entire resource list and detailed information. The Search Engine Details page consists of Details, Tags, Operation History tabs.
Follow the procedure below to check the detailed information of Search Engine service.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to check detailed information on the Search Engine List page. You will be moved to the Search Engine Details page.
Status information and additional feature information are displayed at the top of the Search Engine Details page.
Division
Description
Cluster Status
Cluster status
Creating: Cluster is being created
Editing: A change operation is being performed on the cluster
Error: Cluster failed while performing operation
If it occurs continuously, contact administrator
Failed: Cluster failed during creation process
Restarting: Cluster is being restarted
Running: Cluster is operating normally
Starting: Cluster is being started
Stopped: Cluster is stopped
Stopping: Cluster is in stopping state
Synchronizing: Cluster is being synchronized
Terminating: Cluster is being deleted
Unknown: Cluster status is unknown
If it occurs continuously, contact administrator
Upgrading: The cluster is being upgraded
Cluster Control
Buttons to change cluster status
Start: Starts the stopped cluster
Stop: Stops the running cluster
Restart: Restarts the running cluster
Additional Features More
Cluster-related management buttons
Synchronize Service Status: Can synchronize to Console by checking current server status
Backup History: If backup is set, check whether backup is executed normally and history
Cluster Recovery: Recovers cluster based on specific time point
Add Node: Adds data nodes
Service Termination
Button to terminate service
Table. Search Engine Status Information and Additional Features
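The control buttons in the table above map onto cluster states: Start applies to a Stopped cluster, while Stop and Restart apply to a Running one. A sketch of that mapping, inferred from the button descriptions rather than any documented state machine:

```python
# Which control action is valid in which cluster status, per the table above.
ALLOWED_ACTIONS = {
    "Stopped": {"Start"},
    "Running": {"Stop", "Restart"},
}

def can_perform(status: str, action: str) -> bool:
    """True if the control button applies to a cluster in this status."""
    return action in ALLOWED_ACTIONS.get(status, set())

print(can_perform("Stopped", "Start"))    # valid
print(can_perform("Running", "Start"))    # invalid: already running
print(can_perform("Running", "Restart"))  # valid
```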
Details
You can check the detailed information of the resource selected on the Search Engine List page and modify information if necessary.
Division
Description
Server Information
Server information configured in the cluster
Category: Server type (Master&Data, Master, Data, Kibana)
Server Name: Server name
IP:Port: Server IP and port
NAT IP: NAT IP
Status: Server status
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Means cluster SRN
Resource Name
Resource name
Means cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Created At
Date and time when the service was created
Modifier
User who modified the service information
Modified At
Date and time when the service information was modified
If server type is modified, server restart is required
Table. Search Engine Details Information Items
Tags
You can check the tag information of the resource selected on the Search Engine List page and add, change, or delete tags.
Division
Description
Tag List
Tag list
Can check tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Search Engine Tags Tab Items
Operation History
You can check the operation history of the resource selected on the Search Engine List page.
Division
Description
Operation History List
Resource change history
Check operation details, operation date and time, resource type, resource ID, resource name, event topic, operation result, operator information
Table. Search Engine Operation History Tab Detailed Information Items
Manage Search Engine Resources
If you need to change existing configuration options of created Search Engine resources, manage parameters, or add Node configuration, you can perform tasks on the Search Engine Details page.
Control Operation
If there are changes to running Search Engine resources, you can start, stop, or restart.
Follow the procedure below to control the operation of Search Engine.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to control operation on the Search Engine List page. You will be moved to the Search Engine Details page.
Check Search Engine status and complete changes through the following control buttons.
Start: The server where the Search Engine service is installed starts, and the Search Engine service enters the Running state.
Stop: The server where the Search Engine service is installed stops, and the Search Engine service enters the Stopped state.
Restart: Only the Search Engine service is restarted.
Synchronize Service Status
You can check the current server status and synchronize it to Console.
Follow the procedure below to synchronize the service status of Search Engine.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to check service status on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Synchronize Service Status button. It takes some time to check, and cluster changes to Synchronizing status during checking.
When checking is completed, status is updated in the server information item, and cluster changes to Running status.
Change Server Type
You can change the configured server type.
Follow the procedure below to change the server type.
Caution
If server type is configured as Standard, it cannot be changed to High Capacity. If you want to change to High Capacity, create a new service.
If the server type is modified, a server restart is required. Please separately check for SW license changes or SW setting adjustments resulting from the specification change.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to change server type on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Modify button of the Server Type you want to change at the bottom of detailed information. Modify Server Type popup window opens.
Select server type in the Modify Server Type popup window, and click Confirm button.
Expand Storage
You can expand storage added as data area up to 5TB based on initially allocated capacity. You can expand storage without stopping Search Engine, and if configured as a cluster, all nodes are expanded simultaneously.
Notice
If existing Block Storage has encryption setting, encryption is also applied to additional Disk.
Disk size can only be increased, by at least 16 GB over the current disk size.
Follow the procedure below to expand storage capacity.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource whose storage you want to expand on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Modify button of the Additional Disk you want to expand at the bottom of detailed information. Modify Disk popup window opens.
Enter expansion capacity in the Modify Disk popup window, and click Confirm button.
Add Storage
If you need more than 5TB of data storage space, you can add storage.
Notice
If existing Block Storage has encryption setting, encryption is also applied to additional Disk.
Follow the procedure below to add storage capacity.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to add storage on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Add Disk button at the bottom of detailed information. Add Disk popup window opens.
Enter purpose and capacity in the Add Disk popup window, and click Confirm button.
Backup Search Engine
Through backup setting functionality, users can set data retention period and start cycle, and can perform backup history lookup and deletion through backup history functionality.
Set Backup
For the procedure of setting backup while creating Search Engine, refer to Create Search Engine guide, and follow the procedure below to modify backup settings of created resources.
Caution
If backup is set, backups are performed at the configured time, and additional charges apply depending on backup capacity.
If backup setting is changed to Not Set, backup execution stops immediately, and stored backup data is deleted and can no longer be used.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to set backup on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Modify button in the backup item. Modify Backup popup window opens.
To enable backup, select Use in the Modify Backup popup window, set the retention period, backup start time, and Archive backup cycle, and click Confirm button.
To stop backup, uncheck Use in the Modify Backup popup window, and click Confirm button.
Check Backup History
Notice
Notifications for backup success and failure can be set through the Notification Manager product. For a detailed guide on setting notification policies, refer to Create Notification Policy.
Follow the procedure below to check backup history.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource for which you want to check backup history on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Backup History button. Backup History popup window opens.
In the Backup History popup window, you can check backup status, version, backup start date and time, backup completion date and time, and capacity.
Delete Backup File
Follow the procedure below to delete backup history.
Caution
Deleted backup files cannot be restored, so please make sure to check if it is unnecessary data before deleting.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource whose backup files you want to delete on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Backup History button. Backup History popup window opens.
Check the file you want to delete in the Backup History popup window, and click Delete button.
Recover Search Engine
If recovery from a backup file is needed due to a failure or data loss, you can recover to a specific point in time through the cluster recovery feature.
Caution
Recovery requires disk capacity at least equal to that of the original data disk. If disk capacity is insufficient, recovery may fail.
Notice
Cluster recovery restores the same configuration as the original. For example, a cluster configured with 3 Master nodes and 2 Data nodes is restored with that same configuration.
Follow the procedure below to recover Search Engine.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Click the resource you want to recover on the Search Engine List page. You will be moved to the Search Engine Details page.
Click Cluster Recovery button. You will be moved to the Cluster Recovery page.
Enter the corresponding information in the Cluster Recovery Configuration area, and click Complete button.
Division
Required
Description
Recovery Time Point
Required
Set the point in time to recover to
Select from the list of backup file time points displayed
Server Name Prefix
Required
Recovery server name
Start with a lowercase English letter and enter 3 to 16 characters using lowercase letters, numbers, and the special character (-)
Actual server names are created by appending a postfix such as 001, 002 to the server name
Cluster Name
Required
Recovery server cluster name
Enter 3 to 20 characters using English letters
Cluster is a unit that bundles multiple servers
Node Count
Required
Number of data nodes
Set to the same as the number of nodes set in the original cluster
Service Type > Server Type
Required
Data node server type
Set to the same server type as in the original cluster
Service Type > Planned Compute
Optional
Resource status where Planned Compute is set
In Use: Number of resources in use among resources where Planned Compute is set
Set: Number of resources where Planned Compute is set
Coverage Preview: Amount applied with Planned Compute for each resource
Apply for Planned Compute Service: Move to Planned Compute service application page
Storage type and capacity set in the original cluster are applied identically
Network
Required
Network where servers are installed
Applied identically to the network set in the original cluster
Table. Search Engine Cluster Recovery Configuration Items
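The naming rules above can be expressed as a small validation helper. This is an illustrative sketch, assuming the rules exactly as stated in the table; the function names are not part of the product.

```python
import re

# Rules from the configuration table (illustrative, not a product API):
# Server Name Prefix: lowercase start, 3-16 chars of lowercase/digits/hyphen.
# Cluster Name: 3-20 English letters.
PREFIX_RE = re.compile(r"^[a-z][a-z0-9-]{2,15}$")
CLUSTER_RE = re.compile(r"^[A-Za-z]{3,20}$")

def validate_recovery_names(prefix: str, cluster: str) -> bool:
    """Check Server Name Prefix and Cluster Name against the console rules."""
    return bool(PREFIX_RE.fullmatch(prefix) and CLUSTER_RE.fullmatch(cluster))

def server_names(prefix: str, node_count: int) -> list:
    """Actual server names are created with a numeric postfix (001, 002, ...)."""
    return ["%s%03d" % (prefix, i) for i in range(1, node_count + 1)]
```

For example, a prefix of `se` with 2 nodes yields `se001` and `se002`, matching the postfix rule described above.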
Terminate Search Engine
You can reduce operating costs by terminating an unused Search Engine. However, terminating the service may stop the running service immediately, so fully consider the impact of the service interruption before proceeding.
Follow the procedure below to terminate Search Engine.
Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
Select the resource to terminate on the Search Engine List page, and click Terminate Service button.
When termination is completed, check if the resource is terminated on the Search Engine list page.
7.2.3 - API Reference
API Reference
7.2.4 - CLI Reference
CLI Reference
7.2.5 - Release Note
Search Engine
2025.07.01
FEATURE New feature: Terraform support and disk types added
OpenSearch 2.17.1 is newly provided.
Terraform support is provided.
HDD and HDD_KMS disk types are also provided.
2025.02.27
NEW Search Engine Service Official Version Release
A Search Engine service that lets you easily create and manage Elasticsearch Enterprise in a web environment has been released.
7.3 - Vertica(DBaaS)
7.3.1 - Overview
Service Overview
Vertica(DBaaS) is a high-availability enterprise Data Warehouse database for large-scale data analysis and processing.
It is a data analysis platform that, through a single engine, can perform basic analyses such as queries on data coming from various sources without moving them, as well as AI analyses like machine learning. In Samsung Cloud Platform, DB management functions such as high‑availability configuration, backup/recovery, patching, parameter management, and monitoring are added to ensure stable management of single instances or critical data, enabling automation of tasks throughout the database lifecycle. Additionally, to prepare for issues with DB servers or data, it provides an automatic backup function at user‑specified times, supporting data recovery at the desired point in time.
Service Architecture Diagram
Figure. Vertica diagram
Provided Features
Vertica (DBaaS) provides the following features.
Auto Provisioning: Automatically installs the Samsung Cloud Platform standard version of the DB on Virtual Servers of various specifications.
Cluster configuration: Provides its own high-availability architecture in a Masterless form.
Operation Control Management: Provides a function to control the status of running servers. Servers can be started and stopped, and can be restarted if there is a problem with the DB or to apply configuration values.
Backup and Recovery: Provides a data backup function based on its own backup commands. The backup retention period and backup start time can be set by the user, and additional charges apply based on backup size. It also provides a recovery function for backed-up data; when the user performs a recovery, a separate DB is created and recovery proceeds to the point selected by the user (backup save point, user-specified point). When recovering a Database, you can choose to install the Management Console for use.
Service status query: You can view the final status of the current DB service.
Monitoring: CPU, memory, DB performance monitoring information can be checked through the Cloud Monitoring service.
High-performance processing of large-scale data: Guarantees stable performance in massively parallel processing (MPP) environments and mixed SQL query workloads. Vertica processes queries in a distributed manner, and since a query can be started from any node, there is no single point of failure that would prevent query execution when a specific node fails.
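The masterless, no-single-point-of-failure behavior described above can be illustrated with a toy query router that skips unhealthy nodes. This is a pure-Python illustration of the idea, not the product's client library.

```python
import itertools

def make_router(nodes):
    """Round-robin across cluster nodes. In a masterless cluster any healthy
    node can accept a query, so a client can simply skip unhealthy ones."""
    ring = itertools.cycle(nodes)

    def next_node(healthy):
        # Try at most one full cycle before giving up.
        for _ in range(len(nodes)):
            node = next(ring)
            if node in healthy:
                return node
        raise RuntimeError("no healthy node available")

    return next_node
```

If `n2` goes down, the router transparently routes the next query to `n3`; queries keep executing as long as any node is healthy.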
Components
Vertica(DBaaS) provides pre-validated engine versions and various server types. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by Vertica(DBaaS) are as follows.
Technical support is available until the supplier's EoTS (End of Technical Service) date, and the EOS date, after which new creation stops, is set to six months before the EoTS date.
According to the supplier’s policy, the EOS and EoTS dates may change, so please refer to the supplier’s license management policy page for details.
Provided version
EOS date (Samsung Cloud Platform new creation stop date)
EoTS date (supplier technical support end date)
24.2.0-2
2026-09 (planned)
2027-04-30
Table. Vertica (DBaaS) Service Provision Engine Version
Server Type
The server types supported by Vertica (DBaaS) are as follows.
For detailed information about the server types provided by Vertica (DBaaS), please refer to Vertica server types.
Category
Example
Detailed Description
Server Type
Standard
Provided Server Types
Standard: Standard specifications (vCPU, Memory) configuration commonly used
High Capacity: Large-capacity server specifications with 24 vCores or more
Server specifications
db1
Provided server specifications
db1: Standard specifications (vCPU, Memory) configuration commonly used
dbh2: Large-capacity server specifications
Provides servers with 24 vCores or more
Server specifications
v2
vCore count
v2: 2 virtual cores
Server specifications
m4
Memory capacity
m4: 4GB Memory
Table. Vertica (DBaaS) server type components
Preliminary Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
A service that provides an independent virtual network in a cloud environment
Table. Vertica (DBaaS) Preliminary Service
7.3.1.1 - Server Type
Vertica(DBaaS) server type
Vertica(DBaaS) provides a server type composed of various combinations such as CPU, Memory, Network Bandwidth, etc.
When creating Vertica(DBaaS), the Database Engine is installed according to the server type selected for the purpose of use.
The server types supported by Vertica(DBaaS) are as follows.
Standard db1v2m4
Classification
Example
Detailed Description
Server Type
Standard
Provided server type classification
Standard: Composed of standard specifications (vCPU, Memory) commonly used
High Capacity: Server specifications with higher capacity than Standard
Server Specification
db1
Classification of provided server type and generation
db: indicates standard specifications, and the trailing number (1) indicates the generation
dbh: the h indicates large-capacity server specifications, and the trailing number (2) indicates the generation
Server Specification
v2
Number of vCores
v2: 2 virtual cores
Server Specification
m4
Memory Capacity
m4: 4GB Memory
Table. Vertica(DBaaS) server type format
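The format in the table above is mechanical, so a server type string can be split into its components with a small parser. This is an illustrative sketch based on the naming scheme as documented; the function name is not a product API.

```python
import re

# Pattern derived from the format table: family (db|dbh), generation digit(s),
# vCore count after 'v', memory in GB after 'm' -- e.g. db1v2m4, dbh2v24m96.
TYPE_RE = re.compile(r"^(db|dbh)(\d+)v(\d+)m(\d+)$")

def parse_server_type(name: str) -> dict:
    """Split a Vertica(DBaaS) server type string into its components."""
    m = TYPE_RE.fullmatch(name)
    if not m:
        raise ValueError("unrecognized server type: %s" % name)
    family, gen, vcore, mem = m.groups()
    return {
        "category": "High Capacity" if family == "dbh" else "Standard",
        "generation": int(gen),
        "vcpu": int(vcore),
        "memory_gb": int(mem),
    }
```

For example, `parse_server_type("db1v2m4")` reports a Standard generation-1 type with 2 vCPUs and 4 GB of memory, matching the table rows below.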
db1 server type
The db1 server type of Vertica(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Division
Server Type
vCPU
Memory
Network Bandwidth
Standard
db1v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
db1v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
db1v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
db1v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
db1v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
db1v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
db1v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
db1v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
db1v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
db1v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
db1v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
db1v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
db1v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
db1v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
db1v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
db1v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
db1v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
db1v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
db1v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
db1v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
db1v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
db1v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
db1v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
db1v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
db1v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
db1v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
db1v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
db1v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
db1v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
db1v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
db1v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
db1v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
db1v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
db1v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
db1v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
db1v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
db1v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
db1v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
db1v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
db1v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
db1v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Vertica(DBaaS) server type specifications - db1 server type
db2 server type
The db2 server type of Vertica(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.
Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 16 vCPUs and 256 GB of memory
Up to 12.5 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
Standard
db2v1m2
1 vCore
2 GB
Up to 10 Gbps
Standard
db2v2m4
2 vCore
4 GB
Up to 10 Gbps
Standard
db2v2m8
2 vCore
8 GB
Up to 10 Gbps
Standard
db2v2m16
2 vCore
16 GB
Up to 10 Gbps
Standard
db2v2m24
2 vCore
24 GB
Up to 10 Gbps
Standard
db2v2m32
2 vCore
32 GB
Up to 10 Gbps
Standard
db2v4m8
4 vCore
8 GB
Up to 10 Gbps
Standard
db2v4m16
4 vCore
16 GB
Up to 10 Gbps
Standard
db2v4m32
4 vCore
32 GB
Up to 10 Gbps
Standard
db2v4m48
4 vCore
48 GB
Up to 10 Gbps
Standard
db2v4m64
4 vCore
64 GB
Up to 10 Gbps
Standard
db2v6m12
6 vCore
12 GB
Up to 10 Gbps
Standard
db2v6m24
6 vCore
24 GB
Up to 10 Gbps
Standard
db2v6m48
6 vCore
48 GB
Up to 10 Gbps
Standard
db2v6m72
6 vCore
72 GB
Up to 10 Gbps
Standard
db2v6m96
6 vCore
96 GB
Up to 10 Gbps
Standard
db2v8m16
8 vCore
16 GB
Up to 10 Gbps
Standard
db2v8m32
8 vCore
32 GB
Up to 10 Gbps
Standard
db2v8m64
8 vCore
64 GB
Up to 10 Gbps
Standard
db2v8m96
8 vCore
96 GB
Up to 10 Gbps
Standard
db2v8m128
8 vCore
128 GB
Up to 10 Gbps
Standard
db2v10m20
10 vCore
20 GB
Up to 10 Gbps
Standard
db2v10m40
10 vCore
40 GB
Up to 10 Gbps
Standard
db2v10m80
10 vCore
80 GB
Up to 10 Gbps
Standard
db2v10m120
10 vCore
120 GB
Up to 10 Gbps
Standard
db2v10m160
10 vCore
160 GB
Up to 10 Gbps
Standard
db2v12m24
12 vCore
24 GB
Up to 12.5 Gbps
Standard
db2v12m48
12 vCore
48 GB
Up to 12.5 Gbps
Standard
db2v12m96
12 vCore
96 GB
Up to 12.5 Gbps
Standard
db2v12m144
12 vCore
144 GB
Up to 12.5 Gbps
Standard
db2v12m192
12 vCore
192 GB
Up to 12.5 Gbps
Standard
db2v14m28
14 vCore
28 GB
Up to 12.5 Gbps
Standard
db2v14m56
14 vCore
56 GB
Up to 12.5 Gbps
Standard
db2v14m112
14 vCore
112 GB
Up to 12.5 Gbps
Standard
db2v14m168
14 vCore
168 GB
Up to 12.5 Gbps
Standard
db2v14m224
14 vCore
224 GB
Up to 12.5 Gbps
Standard
db2v16m32
16 vCore
32 GB
Up to 12.5 Gbps
Standard
db2v16m64
16 vCore
64 GB
Up to 12.5 Gbps
Standard
db2v16m128
16 vCore
128 GB
Up to 12.5 Gbps
Standard
db2v16m192
16 vCore
192 GB
Up to 12.5 Gbps
Standard
db2v16m256
16 vCore
256 GB
Up to 12.5 Gbps
Table. Vertica(DBaaS) server type specifications - db2 server type
dbh2 server type
The dbh2 server type of Vertica(DBaaS) is provided with large-capacity server specifications and is suitable for database workloads for large-scale data processing.
Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
Supports up to 128 vCPUs and 1,536 GB of memory
Up to 25 Gbps networking speed
Classification
Server Type
vCPU
Memory
Network Bandwidth
High Capacity
dbh2v24m48
24 vCore
48 GB
Up to 25 Gbps
High Capacity
dbh2v24m96
24 vCore
96 GB
Up to 25 Gbps
High Capacity
dbh2v24m192
24 vCore
192 GB
Up to 25 Gbps
High Capacity
dbh2v24m288
24 vCore
288 GB
Up to 25 Gbps
High Capacity
dbh2v32m64
32 vCore
64 GB
Up to 25 Gbps
High Capacity
dbh2v32m128
32 vCore
128 GB
Up to 25 Gbps
High Capacity
dbh2v32m256
32 vCore
256 GB
Up to 25 Gbps
High Capacity
dbh2v32m384
32 vCore
384 GB
Up to 25 Gbps
High Capacity
dbh2v48m192
48 vCore
192 GB
Up to 25 Gbps
High Capacity
dbh2v48m576
48 vCore
576 GB
Up to 25 Gbps
High Capacity
dbh2v64m256
64 vCore
256 GB
Up to 25 Gbps
High Capacity
dbh2v64m768
64 vCore
768 GB
Up to 25 Gbps
High Capacity
dbh2v72m288
72 vCore
288 GB
Up to 25 Gbps
High Capacity
dbh2v72m864
72 vCore
864 GB
Up to 25 Gbps
High Capacity
dbh2v96m384
96 vCore
384 GB
Up to 25 Gbps
High Capacity
dbh2v96m1152
96 vCore
1,152 GB
Up to 25 Gbps
High Capacity
dbh2v128m512
128 vCore
512 GB
Up to 25 Gbps
High Capacity
dbh2v128m1536
128 vCore
1,536 GB
Up to 25 Gbps
Table. Vertica(DBaaS) server type specifications - dbh2 server type
7.3.1.2 - Monitoring Metrics
Vertica(DBaaS) monitoring metrics
The following table shows the performance monitoring metrics of Vertica (DBaaS) that can be checked through Cloud Monitoring. For detailed instructions on how to use Cloud Monitoring, please refer to the Cloud Monitoring guide.
Select storage type and enter capacity (for detailed information about each Block Storage type, refer to Create Block Storage)
SSD: General Block Storage
SSD_KMS: Additional encrypted volume using KMS(Key Management System) encryption key
The selected storage type is applied identically to additional storage
Enter capacity in multiples of 8 in the range of 16 ~ 5,120
Additional: DATA, Backup data storage area
Select Use and enter storage Purpose, Capacity
Click the + button to add storage and the x button to delete
Enter capacity in multiples of 8 in the range of 16 ~ 5,120; up to 9 can be added
Management Console
Optional
If Use is selected, set the server type and Block Storage for the node used for cluster management and monitoring
Management Console > Server Type
Required
Select the server type of the node for cluster management and monitoring
Management Console > Block Storage
Required
Select the Block Storage type to be used by the node for cluster management and monitoring
Network > Common Settings
Required
Network settings where servers created in the service are installed
Select to apply the same settings to all servers being installed
Select previously created VPC and Subnet
IP: Enter IP for each server
Public NAT settings are only possible with per-server settings
Network > Per-Server Settings
Required
Network settings where servers created in the service are installed
Select to apply different settings for each server being installed
Select previously created VPC and Subnet
IP: Enter IP for each server
Public NAT function can be used only when VPC is connected to Internet Gateway. If Use is checked, you can select from reserved IPs in Public IP of VPC product. For more information, refer to Create Public IP
IP Access Control
Optional
Service access policy settings
Access policy is set for IPs entered on the page, so separate Security Group policy settings are not required
Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click Add button
To delete entered IP, click x button next to the entered IP
Maintenance Window
Optional
DB maintenance window
If Use is selected, set day of week, start time, and duration
Setting a maintenance window is recommended for stable DB management. Patches are applied at the set time, and a service interruption occurs
If set to Not Used, Samsung SDS is not responsible for problems caused by unapplied patches
Table. Vertica(DBaaS) Service Configuration Items
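Two of the input rules above are easy to get wrong by hand: the storage capacity must be a multiple of 8 within 16 ~ 5,120 (GB is assumed here from the 5 TB limit mentioned elsewhere), and IP Access Control entries must be a plain IP or CIDR. A minimal pre-check sketch, with illustrative function names:

```python
import ipaddress

def valid_storage_gb(capacity_gb: int) -> bool:
    """Capacity must be a multiple of 8 in the range 16 ~ 5,120 (GB assumed)."""
    return 16 <= capacity_gb <= 5120 and capacity_gb % 8 == 0

def valid_access_entry(entry: str) -> bool:
    """Accept a plain IP (e.g. 192.168.10.1) or CIDR (e.g. 192.168.10.0/24)."""
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```

Running such checks before submitting the form avoids round-trips on console validation errors.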
Enter or select the required information in the Database Configuration Required Information Input area.
Division
Required
Description
Database Name
Required
Server name applied when DB is installed
Start with English letters, and enter 3 to 20 characters using English letters and numbers
Database Username
Required
DB username
An account with the same name is also created in the OS
Enter 2 to 20 characters using lowercase English letters
Restricted database usernames can be checked in the Console
Database Password
Required
Password to use when accessing DB
Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
Database Password Confirmation
Required
Re-enter the password to use when accessing DB identically
Database Port Number
Required
Port number required for DB connection
Enter DB port in the range of 1200 ~ 65535
Backup > Use
Optional
Whether to use node backup
Select Use and select node backup retention period and backup start time
Backup > Retention Period
Optional
Backup retention period
Select backup retention period. File retention period can be set from 7 days to 35 days
Separate fees are charged for backup files depending on capacity
Backup > Backup Start Time
Optional
Backup start time
Select backup start time
The backup start minute is assigned randomly, and the backup end time cannot be set
License Key
Required
Enter Vertica License Key held by customer
If the entered license key is invalid, service creation is not possible
DB Locale
Required
Settings related to string processing, number/currency/date/time display format, etc. to use in Vertica(DBaaS)
DB is created with default settings to the selected Locale
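The database configuration rules above can be pre-checked before submission. The sketch below assumes the rules exactly as stated (name: letter start, 3-20 letters/numbers; username: 2-20 lowercase letters; password: 8-30 chars with letters, numbers, and special characters, where double and single quotes are read here as the excluded characters; port: 1200 ~ 65535); the helper names are illustrative.

```python
import re

DB_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,19}$")  # letter start, 3-20 chars
DB_USER_RE = re.compile(r"^[a-z]{2,20}$")                # 2-20 lowercase letters

def valid_db_password(pw: str) -> bool:
    """8-30 chars with letters, numbers, and special characters; quotes
    (" and ') are treated as the excluded characters (assumption)."""
    if not (8 <= len(pw) <= 30) or '"' in pw or "'" in pw:
        return False
    return (any(c.isalpha() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(not c.isalnum() for c in pw))

def valid_db_port(port: int) -> bool:
    """DB port must fall in the range 1200 ~ 65535."""
    return 1200 <= port <= 65535
```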
Enter or select the required information in the Additional Information Input area.
Division
Required
Description
Tags
Optional
Add tags
Can add up to 50 per resource
After clicking Add Tag button, enter or select Key, Value values
Table. Vertica(DBaaS) Additional Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, and click Complete button.
When creation is completed, check the created resource on the Resource List page.
Check Vertica(DBaaS) Detailed Information
Vertica(DBaaS) service can check and modify the entire resource list and detailed information. The Vertica(DBaaS) Details page consists of Details, Tags, Operation History tabs.
Follow the procedure below to check the detailed information of Vertica(DBaaS) service.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to check detailed information on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Status information and additional feature information are displayed at the top of the Vertica(DBaaS) Details page.
Division
Description
Cluster Status
Cluster status
Creating: Cluster is being created
Editing: A change operation is being performed on the cluster
Error: The cluster failed while performing an operation
If this occurs repeatedly, contact the administrator
Failed: Cluster failed during creation process
Restarting: Cluster is being restarted
Running: Cluster is operating normally
Starting: Cluster is being started
Stopped: Cluster is stopped
Stopping: Cluster is being stopped
Synchronizing: Cluster is being synchronized
Terminating: Cluster is being deleted
Unknown: Cluster status is unknown
If this occurs repeatedly, contact the administrator
Upgrading: The cluster is being upgraded
Cluster Control
Buttons to change cluster status
Start: Start the stopped cluster
Stop: Stop the running cluster
Restart: Restart the running cluster
Additional Features More
Cluster-related management buttons
Synchronize Service Status: Check real-time DB service status
Backup History: If backup is set, check whether backup is executed normally and history
Database Recovery: Recover DB based on specific time point
Service Termination
Button to terminate service
Table. Vertica(DBaaS) Status Information and Additional Features
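Automation that waits for a cluster operation to finish can poll the status values listed above until a terminal state is reached. The sketch below is generic: `get_status` stands in for whatever lookup you use (console check or a hypothetical API call), and the set of failure states is an assumption based on the table.

```python
import time

TERMINAL_OK = {"Running"}
TERMINAL_FAIL = {"Error", "Failed", "Unknown"}  # assumed failure states

def wait_until_running(get_status, timeout_s=600, interval_s=5, sleep=time.sleep):
    """Poll a status callable until the cluster reaches Running, raising on
    failure states or timeout. get_status returns one of the status strings
    from the table above."""
    waited = 0
    while waited <= timeout_s:
        status = get_status()
        if status in TERMINAL_OK:
            return status
        if status in TERMINAL_FAIL:
            raise RuntimeError("cluster entered %s state" % status)
        sleep(interval_s)
        waited += interval_s
    raise TimeoutError("cluster did not reach Running within timeout")
```

Transient states such as Creating, Starting, or Synchronizing simply cause another polling round.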
Details
You can check the detailed information of the resource selected on the Vertica(DBaaS) List page and modify information if necessary.
Division
Description
Server Information
Server information configured in the cluster
Category: Server type (Vertica cluster configuration nodes are displayed as Data, Management Console is displayed as Console)
Server Name: Server name
IP:Port: Server IP and port
Status: Server status
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Means cluster SRN
Resource Name
Resource name
Means cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Created At
Date and time when the service was created
Modifier
User who modified the service information
Modified At
Date and time when the service information was modified
Management Console
Management Console resource status set when the DB is installed
Network
Installed network information (VPC, Subnet)
IP Access Control
Service access policy settings
If IP addition or deletion is needed, click Modify icon to set
Time Zone
Standard time zone where Vertica(DBaaS) DB is used
License
Vertica(DBaaS) license information
Server Information
Data/Console server type, basic OS, additional Disk information
If server type modification is needed, click Modify icon next to server type to set. For server type modification procedure, refer to Change Server Type
If server type is modified, server restart is required
If storage expansion is needed, click Modify icon next to storage capacity to expand. For storage expansion procedure, refer to Expand Storage
If storage addition is needed, click Add Disk button next to additional Disk to add. For storage addition procedure, refer to Add Storage
Table. Vertica(DBaaS) Details Information Items
Tags
You can check the tag information of the resource selected on the Vertica(DBaaS) List page and add, change, or delete tags.
Division
Description
Tag List
Tag list
Can check tag Key, Value information
Can add up to 50 tags per resource
When entering tags, search and select from previously created Key and Value lists
Table. Vertica(DBaaS) Tags Tab Items
Operation History
You can check the operation history of the resource selected on the Vertica(DBaaS) List page.
Division
Description
Operation History List
Resource change history
Check operation date and time, resource ID, resource name, operation details, event topic, operation result, operator information
Table. Vertica(DBaaS) Operation History Tab Detailed Information Items
Manage Vertica(DBaaS) Resources
If you need to change existing configuration options of created Vertica(DBaaS) resources or add storage configuration, you can perform tasks on the Vertica(DBaaS) Details page.
Control Operation
If there are changes to running Vertica(DBaaS) resources, you can start, stop, or restart.
Follow the procedure below to control the operation of Vertica(DBaaS).
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to control operation on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Check Vertica(DBaaS) status and complete changes through the following control buttons.
Start: The server where the Vertica(DBaaS) service is installed and the Vertica(DBaaS) service both start running.
Stop: The server where the Vertica(DBaaS) service is installed and the Vertica(DBaaS) service both stop.
Restart: Only the Vertica(DBaaS) service is restarted.
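The effect of the three control operations on server and service state can be modeled in a few lines. This is a toy state model to make the distinction explicit (Start/Stop act on both; Restart cycles only the DB service), not product code.

```python
def apply_control(state, action):
    """Model of the three control operations: Start/Stop act on both the
    server and the DB service; Restart cycles only the DB service."""
    server, service = state["server"], state["service"]
    if action == "start":
        server = service = "running"
    elif action == "stop":
        server = service = "stopped"
    elif action == "restart":
        service = "running"  # only the Vertica(DBaaS) service is restarted
    else:
        raise ValueError("unknown action: %s" % action)
    return {"server": server, "service": service}
```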
Synchronize Service Status
You can synchronize the real-time service status of Vertica(DBaaS).
Follow the procedure below to check the service status of Vertica(DBaaS).
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to check service status on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Synchronize Service Status button. The cluster changes to Synchronizing status while the check is in progress.
When checking is completed, status is updated in the server information item, and cluster changes to Running status.
Change Server Type
You can change the configured server type.
Caution
If server type is configured as Standard, it cannot be changed to High Capacity. If you want to change to High Capacity, create a new service.
If the server type is modified, a server restart is required. Please separately check SW license changes or SW settings that must be applied according to the server specification change.
Follow the procedure below to change the server type.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to change server type on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Modify icon of the server type you want to change at the bottom of detailed information. Modify Server Type popup window opens.
Select server type in the Modify Server Type popup window, and click Confirm button.
Add Storage
If you need more than 5 TB of data storage space, you can add storage. In a High Availability (HA cluster) configuration, storage expansion or addition is applied to all DBs simultaneously.
Follow the procedure below to add storage.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to add storage on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Add Disk button at the bottom of detailed information. Request Additional Storage popup window opens.
Enter purpose and capacity in the Request Additional Storage popup window, and click Confirm button.
Expand Storage
Storage added as a data area can be expanded up to 5 TB based on the initially allocated capacity. Storage can be expanded without stopping Vertica(DBaaS), and in a cluster configuration all nodes are expanded simultaneously.
Follow the procedure below to expand storage capacity.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource whose storage you want to expand on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Modify button of the additional Disk you want to expand at the bottom of detailed information. Modify Additional Storage popup window opens.
Enter expansion capacity in the Modify Additional Storage popup window, and click Confirm button.
Change Recovery DB Instance Type
After DB recovery is completed, you can change the instance type in the Recovery detailed information screen.
Follow the procedure below to change the Recovery DB instance type.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to change Recovery DB instance type on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Change Instance Type button. Change Instance Type confirmation dialog is displayed.
When you confirm, the DB instance type is changed from Recovery to Active and performs the same function as a single DB.
Terminate Vertica(DBaaS)
You can reduce operating costs by terminating unused Vertica(DBaaS). However, if you terminate the service, the running service may stop immediately, so you should fully consider the impact of service interruption before proceeding with termination work.
Follow the procedure below to terminate Vertica(DBaaS).
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Select the resource to terminate on the Vertica(DBaaS) List page, and click Terminate Service button.
When termination is completed, check if the resource is terminated on the Vertica(DBaaS) list page.
7.3.2.1 - Vertica Backup and Recovery
Users can set up backups of Vertica (DBaaS) through the Samsung Cloud Platform Console and restore from the backed-up files.
Vertica(DBaaS) Backup
You can set up a backup function so that the user’s data can be stored safely. Also, through the backup history function, you can verify whether the backup was performed correctly and you can also delete backed-up files.
To modify the backup settings of Vertica (DBaaS), follow the steps below.
Caution
If backup is set, backups are performed at the designated time, and additional charges are incurred depending on the backup size.
If you disable the backup setting, backup execution stops immediately, and the stored backup data is deleted and can no longer be used.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource for which you want to set backup on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Edit button of the backup item. The Backup Settings popup window opens.
To enable backup, select Use in the Backup Settings popup window, select the retention period and backup start time, and click Confirm button.
To stop backup, deselect Use in the Backup Settings popup window and click Confirm button.
Check Backup History
Guide
To set notifications for backup success and failure, you can configure them via the Notification Manager product. For a detailed usage guide on setting notification policies, refer to Create Notification Policy.
To view the backup history, follow these steps.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource whose backup history you want to view on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Backup History button. The Backup History popup window opens.
In the Backup History popup window, you can check the backup status, version, backup start date and time, backup completion date and time, and size.
Delete Backup File
To delete the backup history, follow the steps below.
Caution
Backup files cannot be restored after deletion. Please be sure to confirm whether the data is unnecessary before deleting.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource whose backup files you want to delete on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Backup History button. The Backup History popup window opens.
In the Backup History popup window, check the file you want to delete, and click Delete button.
Recover Vertica(DBaaS)
If restoration from a backup file is required due to a failure or data loss, you can use the cluster recovery feature to recover based on a specific point in time.
Caution
To perform recovery, free capacity at least equal to the data disk capacity is required. If disk capacity is insufficient, recovery may fail.
To recover Vertica (DBaaS), follow the steps below.
Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
Click the resource you want to recover on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
Click Database Recovery button. You will be moved to the Database Recovery page.
Enter the relevant information in the Database Recovery area, and click Complete button.
Classification
Necessity
Detailed Description
Recovery Type
Required
Set the point in time the user wants to recover
Backup point (recommended): Recover based on backup file. Select from the list of backup file timestamps displayed in the list
Recovery point: Choose the date and time to recover. Can be selected from the start time of the backup history
Server Name Prefix
Required
Recovery DB Server Name
Enter 3~16 characters starting with a lowercase English letter, using lowercase letters, numbers, and the special character (-)
A postfix such as 001, 002 is appended based on the server name to create the actual server name
Cluster Name
Required
Recovery DB Cluster Name
Enter using English, 3 to 20 characters
A cluster is a unit that groups multiple servers
Number of nodes
Optional
Number of data nodes
Set to be the same as the number of nodes configured in the original cluster.
Service Type > Server Type
Required
Recovery DB Server Type
Standard: Standard specifications commonly used
High Capacity: Large-capacity server of 24 vCore or more
Service Type > Planned Compute
Optional
Status of resources with Planned Compute set
In Use: Number of resources with Planned Compute that are currently in use
Configured: Number of resources with Planned Compute set
Coverage Preview: Amount applied per resource by Planned Compute
Planned Compute Service Application: Go to the Planned Compute service application page
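The naming rules in the table above (Server Name Prefix and Cluster Name) can be pre-checked before submitting the recovery form. The following is a minimal sketch, not an official validator; the Console performs the authoritative validation, and the example names are hypothetical.

```python
import re

# Server Name Prefix: 3-16 chars, starts with a lowercase English letter,
# allows lowercase letters, numbers, and the special character (-).
SERVER_NAME_PREFIX = re.compile(r"^[a-z][a-z0-9-]{2,15}$")

# Cluster Name: English letters only, 3 to 20 characters.
CLUSTER_NAME = re.compile(r"^[A-Za-z]{3,20}$")

def valid_server_prefix(name: str) -> bool:
    return SERVER_NAME_PREFIX.fullmatch(name) is not None

def valid_cluster_name(name: str) -> bool:
    return CLUSTER_NAME.fullmatch(name) is not None

# A postfix such as 001, 002 is appended to the prefix to form the
# actual server name, e.g. "vertica-rec" -> "vertica-rec001".
```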
NEW Vertica(DBaaS) Service Official Version Release
The Vertica(DBaaS) service has been released; it can efficiently store data and improve query performance with columnar storage-based compression and encoding features.
7.4 - Data Flow
7.4.1 - Overview
Service Overview
Data Flow is a data processing flow tool that extracts large amounts of data from various data sources and visually creates a processing flow for transformation/transmission of stream/batch data, providing open-source Apache NiFi. Data Flow can be used independently in the Kubernetes Engine cluster environment of the Samsung Cloud Platform or with other application software.
Figure. Data Flow architecture diagram
Provided Features
Data Flow provides the following functions.
Easy installation and management: Data Flow can be easily installed through the web-based Samsung Cloud Platform Console in a standard Kubernetes cluster environment. Based on open-source Apache NiFi, it automatically configures the architecture required for extensible clustering, and automatically installs ZooKeeper, Registry, and management modules. Through Data Flow, you can set up and deploy the setting files, NiFi templates, etc. required for service connection.
Easy Data Flow Management: The processing flow of stream/batch data can be easily written in a GUI-based manner tailored to the user environment, and efficient data extraction/transmission/processing between systems is possible with GUI-based data flow writing.
NiFi Template Gallery: You can share/distribute reference NiFi templates. Data Flow provides a gallery of work files for data processing flows frequently used in the field, and users can share their own data processing flow tasks.
Component
Data Flow is composed of Manager and Service modules, and provides Apache NiFi as a package.
Data Flow Manager
Data Flow Manager provides various managing functions to utilize NiFi more efficiently.
Through Data Flow Manager, customers can upload the Nar File they created and use it in the Processor, and upload setting files to share them.
Among NiFi templates, high-frequency templates are assetized and provided as a gallery, and can be used immediately with just one click.
Provides real-time monitoring and resource status monitoring for multiple services configured for Native NiFi Service.
You can easily provision setting information for NiFi configuration components within the cluster.
Data Flow Service
It provides a data flow management service based on Apache NiFi.
It automatically configures the architecture required for extensible clustering based on Apache NiFi, and the NiFi, ZooKeeper, and NiFi Registry modules are installed automatically.
When creating NiFi, you can set Description, resource size, access ID/PW, and Host Alias.
After creating the service, you can modify Description, required resource size, access password, Host Alias, etc. and apply them to the service.
Server spec type
When creating a Data Flow service, please check the following contents.
Recommended Service Installation Specifications: CPU 21 core, Memory 57 GB, storage 100 GB or more
Reference
The Ingress Controller must be installed before creating the Data Flow service.
In a Kubernetes cluster, only 1 Ingress Controller can be installed.
Data Flow is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Data Flow Provision Status by Region
Preceding Service
This is a list of services that must be pre-configured before creating this service. Please refer to the guide provided for each service and prepare in advance.
7.4.2 - How-to guides
The user can enter the essential information of Data Flow through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Creating Data Flow
You can create and use the Data Flow service in the Samsung Cloud Platform Console.
To create a Data Flow, follow the next procedure.
Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.
Click Create Data Flow button on the Service Home page. You will be moved to the Create Data Flow page.
Enter the information required for service creation and select detailed options on the Create Data Flow page.
Select the necessary information in the Version Selection area.
Classification
Necessity
Detailed Description
Data Flow version
Required
Select the version of the server image
A list of provided server image versions is displayed
Fig. Data Flow version selection items
Enter or select the required information in the Cluster Selection area.
To install Data Flow, you must first create the Kubernetes cluster nodes and the workspace.
Classification
Necessity
Detailed Description
Cluster Name
Required
Select Cluster to Use
Ingress Controller
Required
Select the Ingress Controller installed in the cluster
In the Details tab of the installed Ingress Controller, add the following information to the ConfigMap item:
Key: allow-snippet-annotations
Value: true
Fig. Data Flow cluster selection items
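The ConfigMap change described above might look like the following fragment. This is an illustrative sketch only: the ConfigMap name and namespace depend on how the Ingress Controller was installed and are assumptions here.

```yaml
# Hypothetical ingress-nginx ConfigMap; the name and namespace
# vary by installation - check your Ingress Controller's Details tab.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"   # value required for Data Flow
```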
Enter or select the necessary information in the Service Information Input area.
Classification
Necessity
Detailed Description
Data Flow name
Required
Enter Data Flow name
Enter 3 to 30 characters using lowercase English letters, numbers, and the special character (-); it must start with a lowercase English letter and must not end with (-)
Storage Class
Required
Select the storage class used by the chosen cluster
Description
Optional
Enter additional information or description about the Data Flow within 150 characters
Domain setting
Required
Enter Data Flow domain
Enter 3 to 50 characters using lowercase English letters, numbers, and the special character (-); it must start with a lowercase English letter and must not end with (-)
{Data Flow name}.{set domain} will be the Data Flow access address.
Node Selector
Required
To install on a specific node, enter a distinguishable label from the node’s labels
If the node label is entered incorrectly, an installation error may occur, so check the node label in advance
The node label can be checked in the yaml file of the corresponding node
Account
Required
Enter Data Flow Manager account
ID: Enter 6 to 30 characters using lowercase English letters and numbers, starting with a lowercase English letter
Password: Enter 8 to 50 characters including uppercase English letters, lowercase English letters, numbers, and special characters (!@#$%^&*)
Password Confirmation: Enter the same password once more
Host Alias
Optional
Add host information to be connected to Data Flow (up to 20 can be created, including default)
Select “Use”, then click the + button
Hostname: Enter in hostname or domain format, using lowercase, numbers, and special characters (-) with 3-63 characters
IP: Enter in IP format
To delete, click the X button
The firewall between the cluster and the server must be open to use the added host information
Fig. Data Flow service information input items
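As an illustration of how the domain setting and Host Alias entries above come together, the host information copied from the "Hosts file setting information" button could look like the following. All hostnames and IP addresses here are made-up examples, not values the Console actually produces.

```
# Hypothetical /etc/hosts entries; hostnames and IPs are examples only.
# {Data Flow name}.{set domain} becomes the Data Flow access address.
203.0.113.10   mydataflow.dataflow.internal    # Data Flow access address
198.51.100.21  source-db01                     # Host Alias entry added above
```

Remember that the firewall between the cluster and any server added via Host Alias must be open for these entries to be usable.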
Enter or select the necessary information in the Additional Information area.
Classification
Necessity
Detailed Description
Tag
Optional
Add tags
Click Add Tag button to create and add new tags or add existing tags
Up to 50 tags can be added
Newly added tags are applied after service creation is complete
Fig. Data Flow Additional Information Input Items
In the Summary panel, review the detailed information and estimated charges, then click the Complete button.
Once creation is complete, check the created resource on the Data Flow list page.
Check Data Flow Detailed Information
You can check and modify the list of all resources and detailed information of Data Flow. The Data Flow details page consists of detailed information, tags, and work history tabs.
To check the detailed information of Data Flow, follow the next procedure.
Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.
Click the Data Flow menu on the Service Home page. You will be moved to the Data Flow list page.
Click the resource whose details you want to check on the Data Flow list page. You will be moved to the Data Flow details page.
The top of the Data Flow details page shows status information and additional function buttons.
Classification
Detailed Description
Status Display
Data Flow Status
Creating: being created
Running: operating, Data Flow Services can be created
Updating: settings are being updated
Terminating: service is being terminated
Error: error occurred during creation or service is in an abnormal state
Hosts file setting information
Button to check and copy host file information to access Data Flow
Service Cancellation
Button to cancel the service
Fig. Data Flow status information and additional functions
Detailed Information
On the Data Flow List page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Category
Resource Type
Service Name
SRN
Unique resource ID on Samsung Cloud Platform
Means cluster SRN
Resource Name
Resource Name
Means cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Revision Time
Time when service information was revised
Cluster Name
Server cluster name composed of servers
Storage Class
Storage class used by the selected cluster
Description
Additional information or description about Data Flow
Domain Setting
Data Flow Domain Name
Node Selector
Node Label
Web URL
Data Flow URL
Account
Data Flow Manager account
Host Alias
Host information to be connected to Data Flow
Fig. Data Flow detailed information tab items
Tag
On the Data Flow List page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Check Key, Value information of the tag
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Fig. Data Flow tag tab items
Work History
You can check the work history of the selected resource on the Data Flow list page.
Classification
Detailed Description
Work history list
Resource change history
Check work time, resource ID, resource name, work details, event topic, work result, and worker information
Fig. Data Flow job history tab detailed information items
Data Flow cancellation
You can cancel unused Data Flow to reduce operating costs. However, canceling the service may immediately stop the running service, so you should fully consider the impact of service interruption before proceeding with cancellation.
To cancel Data Flow, follow the next procedure.
Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.
Click the Data Flow menu on the Service Home page. You will be moved to the Data Flow list page.
Select the resource to cancel on the Data Flow list page, and click Service Cancellation button.
Once the cancellation is complete, check the Data Flow list page to see if the resource has been cancelled.
Notice
To cancel Data Flow, you must first delete the connected Data Flow Services.
When Data Flow is cancelled, the created namespace is also deleted.
7.4.2.1 - Data Flow Services
The user can enter the essential information of Data Flow Services in the Data Flow service through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Create Data Flow Services
The user can add a service by selecting the detailed options of the Data Flow service or entering the setting value.
Notice
When applying for Data Flow Services, the Kubernetes cluster must have available capacity greater than the scale of the requested resources.
To create Data Flow Services, follow these steps.
Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.
Click Data Flow Services on the Service Home page. You will be moved to the Data Flow Services list page.
Click Create Data Flow Services button on the Data Flow Services list page. You will be moved to the Create Data Flow Services page.
Enter the information required for service creation and select detailed options on the Create Data Flow Services page.
Enter or select the required information in the Service Information Input area.
Classification
Necessity
Detailed Description
Data Flow Name
Required
Select the Data Flow to use
Data Flow Services Name
Required
Enter Data Flow Services name
Enter 3 to 30 characters using lowercase English letters, numbers, and the special character (-); it must start with a lowercase English letter and must not end with (-)
Storage Class
Required
Select the storage class used by the selected cluster
Description
Optional
Enter additional information or description about Data Flow Services within 150 characters
Domain Setting
Required
Enter the Data Flow Services domain
Enter 3 to 50 characters using lowercase English letters, numbers, and the special character (-); it must start with a lowercase English letter and must not end with (-)
{Data Flow Services name}.{set domain} will be the Data Flow Services access address.
Node Selector
Required
To install on a specific node, enter a distinguishable Label from the node’s Labels
If the node Label is entered incorrectly, an installation error may occur, so check the node Label in advance
The node Label can be checked in the yaml file of the corresponding node
Service Workload
Required
NiFi: A module that provides the Apache NiFi service and UI
NiFi Registry: A module for setting and deploying NiFi templates
ZooKeeper: A module that supports distributed processing of NiFi across multiple nodes
Account
Required
Enter NiFi account
ID: Enter 6 to 30 characters using lowercase English letters and numbers, starting with a lowercase English letter
Password: Enter 8 to 50 characters including uppercase English letters, lowercase English letters, numbers, and special characters (!@#$%^&*)
Password Confirmation: Enter the same password once more
Fig. Data Flow Services Service Information Input Items
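The account rules above (and the equivalent rules elsewhere in this guide) can be pre-checked before submitting the form. This is a sketch mirroring the stated rules, not the Console's validator; in particular, restricting the password to only the listed character classes is an assumption.

```python
import re

# ID: 6-30 chars, starts with a lowercase English letter,
# lowercase letters and numbers only.
ID_RE = re.compile(r"^[a-z][a-z0-9]{5,29}$")

def valid_password(pw: str) -> bool:
    # 8-50 chars, must include uppercase, lowercase, a digit,
    # and one of the special characters !@#$%^&*.
    # Assumption: no other characters are allowed.
    if not 8 <= len(pw) <= 50:
        return False
    if not re.fullmatch(r"[A-Za-z0-9!@#$%^&*]+", pw):
        return False
    return (any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in "!@#$%^&*" for c in pw))
```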
Enter or select the required information in the Additional Information Input area.
Classification
Necessity
Detailed Description
Host Alias
Optional
Add host information to be connected to Data Flow Services (up to 20 can be created, including default)
Select Use, then click the + button
Hostname: Enter in hostname or domain format, using lowercase English letters, numbers, and special characters (-) in 3 to 63 characters
IP: Enter in IP format
To delete, click the X button
The firewall between the cluster and the corresponding server must be open to use the added host information
Tag
Optional
Add tag
Click Add Tag button to create and add new tags or add existing tags
Up to 50 tags can be added
Newly added tags are applied after service creation is completed
Fig. Data Flow Services Additional Information Input Items
In the Summary panel, review the detailed information and estimated charges, and click the Complete button.
Once creation is complete, check the created resource on the Data Flow Services list page.
Check Data Flow Services Detailed Information
You can check and modify the list of all resources and detailed information of Data Flow Services. The Data Flow Services details page consists of details, tags, and operation history tabs.
To check the detailed information of Data Flow Services, follow the next procedure.
Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.
Click the Data Flow Services menu on the Service Home page. You will be moved to the Data Flow Services list page.
Click the resource whose details you want to check on the Data Flow Services list page. You will be moved to the Data Flow Services details page.
The top of the Data Flow Services details page displays status information and additional function buttons.
Classification
Detailed Description
Status Display
Data Flow Services status
Creating: being created
Running: in operation
Updating: updating settings
Terminating: service termination in progress
Error: creation failed or service unavailable
Hosts file setting information
A button to check and copy host file information to access Data Flow Services
Data Flow Services deletion
Button to cancel the service
Fig. Data Flow Services Status Information and Additional Functions
Detailed Information
On the Data Flow Services list page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Means cluster SRN
Resource Name
Resource Name
Means cluster name
Resource ID
Unique resource ID in the service
Creator
Service creator user
Creation Time
The time when the service was created
Modifier
User who modified the service information
Modified Time
Time when service information was modified
Data Flow Name
Data Flow Name
Storage Class
Storage class used by the selected cluster
Description
Additional information or description about Data Flow Services
Domain Setting
Data Flow Services domain name
Node Selector
Node Label
Web URL
Data Flow Services URL
Account
NiFi account
Host Alias
Host information to be connected to Data Flow Services
Fig. Data Flow Services detailed information tab items
Tag
On the Data Flow Services List page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Fig. Data Flow Services Tag Tab Items
Work History
You can check the operation history of the selected resource on the Data Flow Services list page.
Classification
Detailed Description
Work history list
Resource change history
Check work date, resource ID, resource name, work details, event topic, work result, and worker information
Fig. Data Flow Services job history tab detailed information items
Cancel Data Flow Services
You can cancel unused Data Flow Services to reduce operating costs. However, canceling a service may immediately stop the running service, so you should fully consider the impact of service interruption before proceeding with cancellation.
To cancel Data Flow Services, follow the procedure below.
Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.
Click the Data Flow Services menu on the Service Home page. You will be moved to the Data Flow Services list page.
Select the resource to cancel on the Data Flow Services list page, and click Data Flow Services delete button.
Once the cancellation is complete, check if the resource has been cancelled on the Data Flow Services list page.
Notice
When Data Flow Services is cancelled, the created namespace is also deleted.
7.4.2.2 - Install Ingress Controller
You must install an Ingress Controller before creating the Data Flow service. Only one Ingress Controller can be installed in a Kubernetes cluster.
Install Ingress Controller using Container Registry
To install the Ingress Controller using Container Registry, follow the steps below.
The Data Flow service, which extracts/transforms/transfers data from various sources and automates data processing flows, has been released.
It provides open-source Apache NiFi.
7.5 - Data Ops
7.5.1 - Overview
Service Overview
Data Ops is a managed workflow orchestration service based on Apache Airflow that writes workflows for periodic or repetitive data processing tasks and automates task scheduling. Users can automate the process of bringing useful data to the right place at the right time, and monitor the configuration and progress of data pipelines.
Figure. Data Ops Architecture Diagram
Provided Features
Data Ops provides the following functions.
Easy installation and management: Data Ops can be easily installed through a web-based Console in a standard Kubernetes cluster environment. Apache Airflow and management modules are automatically installed, and integrated monitoring of the execution status of web servers and schedulers is possible through an integrated dashboard.
Dynamic Pipeline Composition: Pipeline composition for data tasks is possible based on Python code. Since it dynamically generates tasks in conjunction with data task scheduling, you can freely compose the desired workflow form and scheduling.
Convenient workflow management: DAG (Directed Acyclic Graph) configuration is visualized and managed through a web-based UI, making it easy to understand the preceding and parallel relationships of the data flow. In addition, each task's timeout, retry count, priority definition, etc. can be easily managed.
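The DAG model described above can be illustrated with a small standard-library sketch. This is not the Airflow API, and the task names are hypothetical; it only shows how a valid run order falls out of declared dependencies.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: extract -> (transform, audit) -> load.
# Each key lists the tasks that must complete before it runs.
dag = {
    "transform": {"extract"},
    "audit": {"extract"},
    "load": {"transform", "audit"},
}

# Resolve an execution order that respects every dependency.
order = list(TopologicalSorter(dag).static_order())
# "extract" always comes first and "load" always last; "transform"
# and "audit" have no mutual dependency, so they could run in parallel.
```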
Component
Data Ops consists of Manager and Service modules, and provides Apache Airflow by packaging it.
Data Ops Manager
Data Ops Manager provides various managing functions to use Airflow more efficiently.
You can upload Plugin Files, Shared Files, and Python Library Files to be used in the Data Ops Service through the Data Ops Manager.
You can easily provision setting information for Airflow configuration components within the cluster.
You can manage and easily provision different service settings within the Airflow cluster.
Data Ops Service
Provides a managed workflow orchestration service based on Apache Airflow.
When creating Airflow, you can set Description, required resource size, DAGs GitSync, and Host Alias.
After creating the service, you can modify Description, resource size, DAGs GitSync, Host Alias, etc. and apply them to the service.
Server Spec Type
When creating a Data Ops service, please check the following contents.
Recommended Service Installation Specifications: CPU 43 cores (KubernetesExecutor) or 25 cores (CeleryExecutor), Memory 50 GB, storage 100 GB or more
Note
It is necessary to install Ingress Controller before creating Data Ops service.
In a Kubernetes cluster, only 1 Ingress Controller can be installed.
Data Ops is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Provided
Korea Central (kr-central)
Provided
Korea South 3 (kr-south3)
Provided
Table. Data Ops Regional Provision Status
Preceding Service
This is a list of services that must be pre-configured before creating this service. Please refer to the guide provided for each service and prepare in advance.
Container Registry: A service that easily stores, manages, and shares container images
Fig. Data Ops Preceding Service
7.5.2 - How-to guides
The user can enter the essential information of Data Ops through the Samsung Cloud Platform Console and create the service by selecting detailed options.
Create Data Ops
You can create and use the Data Ops service on the Samsung Cloud Platform Console.
To create Data Ops, follow the procedure below.
Click All Services > Data Analytics > Data Ops menu. You will be moved to the Service Home page of Data Ops.
Click Create Data Ops button on the Service Home page. You will be moved to the Create Data Ops page.
Enter the information required for service creation and select detailed options on the Create Data Ops page.
Select the necessary information in the Version Selection area.
Classification
Necessity
Detailed Description
Data Ops version
Required
Select the version of the server image
A list of provided server image versions is displayed
Table. Data Ops version selection items
Enter or select the required information in the Cluster Selection area.
To install Data Ops, you must first create the Kubernetes cluster nodes and the working environment.
Classification
Necessity
Detailed Description
Cluster Name
Required
Select Cluster to Use
Ingress Controller
Required
Select the Ingress Controller installed in the cluster
Fig. Data Ops Cluster Selection Items
Enter or select the necessary information in the Service Information Input area.
Classification
Necessity
Detailed Description
Data Ops name
Required
Enter Data Ops name
Enter 3 to 30 characters using lowercase English letters, numbers, and the special character (-); it must start with a lowercase English letter and must not end with (-)
Storage Class
Required
Select the storage class used by the selected cluster
Description
Optional
Enter additional information or description about Data Ops within 150 characters
Domain Setting
Required
Enter Data Ops domain
Enter 3 to 50 characters using lowercase English letters, numbers, and the special character (-); it must start with a lowercase English letter and must not end with (-)
{Data Ops name}.{set domain} will be the Data Ops access address.
Node Selector
Required
To install on a specific node, enter a distinguishable Label from the node’s Labels
If the node Label is entered incorrectly, an installation error may occur, so check the node Label in advance
The node Label can be checked in the yaml file of the corresponding node
Account
Required
Enter the Data Ops manager account
ID: 6 to 30 characters, starting with a lowercase English letter and using only lowercase letters and numbers
Password: 8 to 50 characters, including uppercase English letters, lowercase English letters, numbers, and special characters (!@#$%^&*)
Password Confirmation: Re-enter the same password
Host Alias
Optional
Add host information to be connected to Data Ops (up to 20 entries can be created, including the default)
Select Use and click the + button
Hostname: Enter a hostname or domain, using lowercase letters, numbers, and hyphens (-); 3 to 63 characters
IP: Enter an IP address
To delete an entry, click the X button
The firewall between the cluster and the corresponding server must be open for the added host information to be usable
Table. Data Ops service information input items
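The naming rules above can be pre-checked locally with a regular expression. The sketch below is only an illustration of the stated rule (3 to 30 characters, starts with a lowercase letter, does not end with a hyphen, lowercase letters/digits/hyphens only); the function name is hypothetical and the console's own validation is authoritative.

```python
import re

# Illustrative validator for the Data Ops name rule described above:
# 3-30 characters, starts with a lowercase English letter, must not
# end with a hyphen, and may contain only lowercase letters, digits,
# and hyphens. Not a platform API -- a local sanity check only.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{1,28}[a-z0-9]$")

def is_valid_data_ops_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None
```

The same pattern with a maximum length of 50 would apply to the domain field described above.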
In the Additional Information Input area, enter or select the required information.
Classification
Required
Detailed Description
Tag
Optional
Add tags
Click the Add Tag button to create and add new tags or to add existing tags
Up to 50 tags can be added
Newly added tags will be applied after service creation is complete
Table. Data Ops additional information input items
In the Summary panel, review the detailed information and estimated charges, and then click the Complete button.
Once creation is complete, check the created resource on the Data Ops list page.
Data Ops detailed information check
You can check and modify the full list of Data Ops resources and detailed information. The Data Ops details page consists of detailed information, tags, and work history tabs.
To check the detailed information of Data Ops, follow the procedure below.
Click the All Services > Data Analytics > Data Ops menu. You will be taken to the Service Home page of Data Ops.
On the Service Home page, click the Data Ops menu. You will be taken to the Data Ops list page.
On the Data Ops list page, click the resource whose details you want to check. You will be taken to the Data Ops details page.
The top of the Data Ops details page shows status information and additional features.
Classification
Detailed Description
Status Display
Data Ops Status
Creating: being created
Running: operating, Data Ops Services can be created
Updating: settings update in progress
Terminating: service termination in progress
Error: error occurred during creation or service abnormal status
Hosts file setting information
Button to check and copy host file information to access Data Ops
Service Cancellation
Button to cancel the service
Table. Data Ops status information and additional features
Detailed Information
On the Data Ops list page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Indicates the cluster SRN
Resource Name
Resource Name
Indicates the cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Modified Date
Date when service information was modified
Cluster Name
Name of the cluster composed of servers
Storage Class
Storage class used by the selected cluster
Description
Additional information or description about Data Ops
Domain Setting
Data Ops Domain Name
Node Selector
Node Label
Web URL
Data Ops URL
Account
Data Ops Manager account
Host Alias
Host information to be connected to Data Ops
Table. Data Ops detailed information tab items
Tag
On the Data Ops list page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Check Key, Value information of the tag
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of created Key and Value
Table. Data Ops tags tab items
Work History
You can check the work history of the selected resource on the Data Ops list page.
Classification
Detailed Description
Work history list
Resource change history
Check work date, resource ID, resource name, work details, event topic, work result, and worker information
Table. Data Ops work history tab detailed information items
Cancel Data Ops
You can cancel unused Data Ops to reduce operating costs. Note, however, that canceling may stop a running service immediately, so carefully consider the impact before proceeding with the cancellation.
To cancel Data Ops, follow the procedure below.
Click the All Services > Data Analytics > Data Ops menu. You will be taken to the Service Home page of Data Ops.
On the Service Home page, click the Data Ops menu. You will be taken to the Data Ops list page.
On the Data Ops list page, select the resource to be canceled and click the Service Cancellation button.
Once cancellation is complete, confirm on the Data Ops list page that the resource has been canceled.
Notice
Data Ops cannot be deleted until you delete the connected Data Ops Services.
7.5.2.1 - Data Ops Services
Users can enter essential information for Data Ops Services within the Data Ops service and create the service by selecting detailed options through the Samsung Cloud Platform Console.
Create Data Ops Services
The user can add a service by selecting detailed options for Data Ops or entering setting values.
Notice
When applying for Data Ops Services, make sure the Kubernetes cluster has enough available capacity for the requested resource scale.
To create Data Ops Services, follow the procedure below.
Click the All Services > Data Analytics > Data Ops menu. You will be taken to the Service Home page of Data Ops.
On the Service Home page, click Data Ops Services. You will be taken to the Data Ops Services list page.
On the Data Ops Services list page, click the Create Data Ops Services button. You will be taken to the Create Data Ops Services page.
On the Create Data Ops Services page, enter the information required for service creation and select detailed options.
In the Service Information Input area, enter or select the required information.
Classification
Required
Detailed Description
Data Ops Name
Required
Select the Data Ops to use
Data Ops Services Name
Required
Enter the Data Ops Services name
Must start with a lowercase English letter and must not end with a hyphen (-); only lowercase letters, numbers, and hyphens (-) are allowed; 3 to 30 characters
Storage Class
Required
Select the storage class used by the chosen cluster
Description
Optional
Enter additional information or description about Data Ops Services within 150 characters
Domain Setting
Required
Enter the Data Ops Services domain
Must start with a lowercase English letter and must not end with a hyphen (-); only lowercase letters, numbers, and hyphens (-) are allowed; 3 to 50 characters
{Data Ops Services name}.{set domain} will be the Data Ops Services access address.
Node Selector
Required
To install on a specific node, enter a label that distinguishes the node from among the node's labels
If the node label is entered incorrectly, an installation error may occur, so check the node label in advance
Node labels can be checked in the YAML file of the corresponding node
Service Workload
Required
Web Server: Provides visualization of DAG components and status, and Airflow configuration management module
Scheduler: Manages scheduling and execution of various DAGs and tasks for orchestration
Worker: Performs actual orchestration and data processing tasks
Worker(Kubernetes): Dynamically creates and runs pods when worker conditions are met, allowing for efficient resource usage. The Replica text box is disabled when Kubernetes is selected.
Worker(Celery): Creates and maintains static pods when worker conditions are met, allowing for faster performance with large requests. The Replica text box is enabled and user input is allowed when Celery is selected.
The executor type cannot be changed once selected
Account
Required
Enter the Airflow account
ID: 6 to 30 characters, starting with a lowercase English letter and using only lowercase letters and numbers
Password: 8 to 50 characters, including uppercase English letters, lowercase English letters, numbers, and special characters (!@#$%^&*)
Password Confirmation: Re-enter the same password
Table. Data Ops Services service information input items
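The account password rule above can also be pre-checked locally. The sketch below assumes "including" means at least one character from each listed class; the console's own validation is authoritative, and the function name is hypothetical.

```python
# Illustrative check for the account password rule described above:
# 8-50 characters including uppercase letters, lowercase letters,
# numbers, and special characters (!@#$%^&*). Assumes at least one
# character of each class is required; the console validation is
# authoritative.
SPECIALS = set("!@#$%^&*")

def is_valid_password(pw: str) -> bool:
    return (
        8 <= len(pw) <= 50
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in SPECIALS for c in pw)
    )
```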
In the Additional Information Input area, enter or select the required information.
Classification
Required
Detailed Description
Host Alias
Optional
Add host information to be connected to Data Ops (up to 20 entries can be created, including the default)
Select Use and click the + button
Hostname: Enter a hostname or domain, using lowercase letters, numbers, and hyphens (-); 3 to 63 characters
IP: Enter an IP address
To delete an entry, click the X button
The firewall between the cluster and the server must be open for the added host information to be usable
Tag
Optional
Add tags
Click the Add Tag button to create and add new tags or to add existing tags
Up to 50 tags can be added
Newly added tags are applied after service creation is complete
Table. Data Ops Services additional information input items
In the Summary panel, review the detailed information and estimated charges, then click the Complete button.
Once creation is complete, check the created resource on the Data Ops Services list page.
Data Ops Services detailed information check
You can check and modify the full list of Data Ops Services resources and detailed information. The Data Ops Services details page consists of details, tags, and work history tabs.
To check the details of Data Ops Services, follow the procedure below.
Click the All Services > Data Analytics > Data Ops menu. You will be taken to the Service Home page of Data Ops.
On the Service Home page, click the Data Ops Services menu. You will be taken to the Data Ops Services list page.
On the Data Ops Services list page, click the resource whose details you want to check. You will be taken to the Data Ops Services details page.
The top of the Data Ops Services details page shows status information and additional features.
Classification
Detailed Description
Status Indicator
Data Ops Services status
Creating: being created
Running: operating
Updating: updating settings
Terminating: service termination in progress
Error: creation failed or service unavailable
Hosts file setting information
Button to check and copy host file information to access Data Ops Services
Data Ops Services deletion
Button to cancel the service
Table. Data Ops Services status information and additional features
Detailed Information
On the Data Ops Services list page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Indicates the cluster SRN
Resource Name
Resource Name
Indicates the cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Modification Time
Time when the service information was modified
Data Ops Name
Data Ops Full Name
Storage Class
Storage class used by the selected cluster
Description
Additional information or description about Data Ops Services
Domain Setting
Data Ops Services domain name
Node Selector
Node Label
Web URL
Data Ops Services URL
Account
Airflow Account
Host Alias
Host information to be connected to Data Ops Services
Table. Data Ops Services detailed information tab items
Tag
On the Data Ops Services list page, you can check the tag information of the selected resource and add, change, or delete it.
Classification
Detailed Description
Tag list
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Table. Data Ops Services tags tab items
Work History
You can check the operation history of the selected resource on the Data Ops Services list page.
Classification
Detailed Description
Work history list
Resource change history
Check work time, resource ID, resource name, work details, event topic, work result, and worker information
Table. Data Ops Services work history tab detailed information items
Data Ops Services cancellation
You can cancel unused Data Ops Services to reduce operating costs. Note, however, that canceling may stop a running service immediately, so carefully consider the impact before proceeding with the cancellation.
To cancel Data Ops Services, follow the procedure below.
Click the All Services > Data Analytics > Data Ops menu. You will be taken to the Service Home page of Data Ops.
On the Service Home page, click the Data Ops Services menu. You will be taken to the Data Ops Services list page.
On the Data Ops Services list page, select the resource to be canceled and click the Data Ops Services delete button.
Once cancellation is complete, confirm on the Data Ops Services list page that the resource has been canceled.
7.5.2.2 - Ingress Controller Install
You must install an Ingress Controller before creating the Data Ops service. Only one Ingress Controller should be installed per Kubernetes cluster.
Install Ingress Controller using Container Registry
To install the Ingress Controller using Container Registry, follow the steps below.
Data Ops is a managed workflow orchestration service based on Apache Airflow. With the release of the Data Ops service, you can create workflows and automate job scheduling for periodic or repetitive data processing tasks.
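The workflow orchestration idea can be illustrated with a small DAG: each task declares the tasks it depends on, and the orchestrator runs them in dependency order. This is a conceptual sketch with hypothetical task names, not how DAGs are actually defined in Airflow or Data Ops.

```python
from graphlib import TopologicalSorter

# Conceptual sketch of DAG-based workflow orchestration (as in
# Apache Airflow): each task lists its upstream dependencies, and
# the orchestrator runs tasks in dependency order. Task names are
# hypothetical.
dag = {
    "extract": set(),            # no upstream dependencies
    "transform": {"extract"},    # runs after extract
    "load": {"transform"},       # runs after transform
    "report": {"load"},          # runs after load
}

order = list(TopologicalSorter(dag).static_order())
```

In a real deployment, the equivalent DAG would be written with Airflow operators and executed by the Scheduler and Worker workloads.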
7.6 - Quick Query
7.6.1 - Overview
Service Overview
Quick Query is an interactive query service that allows you to analyze large amounts of data quickly and easily using standard SQL.
It is automatically installed on a standard Kubernetes cluster and provides easy and fast access to various data sources such as Cloud Hadoop, Object Storage, and RDB, enabling data retrieval and processing.
Key Features
Easy and Fast Data Retrieval: After defining a schema for data stored in Object Storage, you can easily and quickly retrieve data using standard SQL. Any user who can handle SQL can easily analyze large datasets without being a professional analyst.
Rapid Parallel Distributed Processing: Using the Trino engine, which supports parallel distributed processing, queries are automatically divided and processed in parallel on multiple nodes, allowing you to quickly retrieve query results even for large amounts of data.
Various Service Structures: It provides a shared fixed resource mode, a shared resource expansion mode, and a personal resource expansion mode. The shared fixed resource mode supports a stable response speed for large data queries, while the shared resource expansion mode allows for more affordable use in cases of irregular usage. Additionally, the personal resource expansion mode supports each user’s independent analysis work, enabling the use of Quick Query with a structure that meets user demands.
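The parallel distributed processing described above can be shown in miniature: partition the input, compute partial results concurrently, and merge them. This is only a conceptual sketch of the split-and-merge pattern, not Quick Query or Trino internals.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual split-and-merge sketch: a parallel engine like Trino
# partitions work across workers, computes partial results in
# parallel, and merges them. Here we sum 1..1000 in four chunks.
data = list(range(1, 1001))
chunks = [data[i::4] for i in range(4)]  # partition across 4 "workers"

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

total = sum(partial_sums)  # merge step
```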
Service Composition Diagram
Figure. Quick Query Composition Diagram
Provided Functions
Quick Query provides the following functions:
Single Access Support for Various Data Sources (Supporting 11 Data Sources)
Automatic Storage Function for Result Data in Object Storage
Reuse Function for Query Results
Access Control Function through Ranger Integration
Data Usage Control Function
Category
Type
Note
Cloud Hadoop
hive_on_cloud_hadoop iceberg_on_cloud_hadoop
Using Cloud Hadoop’s Hive Metastore
Object Storage
hive_on_object_storage iceberg_on_object_storage
Deploying Hive Metastore in Quick Query
RDB
postgresql mariadb sqlserver oracle mysql
JDBC Driver Upload required (licensed)
TPCDS
tpcds
Built-in Data Source provided by Quick Query
TPCH
tpch
Built-in Data Source provided by Quick Query
Table. Supported Data Sources
Type
select
insert
update
delete
create
drop
alter
analyze
call
hive_on_cloud_hadoop
O
O
O
O
O
O
O
O
O
iceberg_on_cloud_hadoop
O
O
O
O
O
O
O
O
O
hive_on_object_storage
O
O
O
O
O
O
O
O
O
iceberg_on_object_storage
O
O
O
O
O
O
O
O
O
postgresql
O
O
O
O
O
O
mariadb
O
O
O
O
O
O
sqlserver
O
O
O
O
O
O
greenplum
O
O
O
O
O
O
oracle
O
O
O
O
O
O
mysql
O
O
O
O
O
O
tpcds
O
tpch
O
Table. Supported SQL
Components
Query Engine Type: Shared
In the shared type, a single running query engine is shared by multiple users.
Fixed Resource Mode (No Auto Scaling): When Auto Scaling is not used, the query engine runs with fixed resources according to the user’s selection. Since the query engine always runs with the same resources, it can guarantee consistent query performance.
Figure. Fixed Resource Mode (No Auto Scaling)
Resource Expansion Mode (Using Auto Scaling): When Auto Scaling is used, the query engine’s worker nodes automatically scale in/out according to the processing volume. When the processing volume is low, the worker nodes decrease to one, and when the processing volume increases, the worker nodes increase. Additionally, resources can be adjusted according to the cluster size.
Figure. Resource Expansion Mode (Using Auto Scaling)
Query Engine Type: Personal
Resource Expansion Mode (Using Auto Scaling): The personal query engine type is a structure where the query engine runs separately for each user. Each query engine supports Auto Scale in/out and automatically stops when not used for an extended period. When used again, the query engine automatically restarts.
The worker nodes decrease to one when the processing volume is low and increase when the processing volume increases. Additionally, resources can be adjusted according to the cluster size.
Figure. Resource Expansion Mode (Using Auto Scaling)
Server Type
The server types supported by Quick Query are as follows:
Classification
Example
Detailed Description
Server Type
Standard
Provided server types
Standard: Commonly used standard specifications (vCPU, Memory)
High Capacity: Server specifications with 24 cores or more
Server Size
s1v2m4
Provided server specifications
vCPU 2, Memory 4G
Table. Quick Query Supported Server Types
The minimum specifications required to use Quick Query are as follows:
Classification
Details
Cluster Size (User Input Value)
Fixed Node Pool
Auto-Scaling Node Pool
Shared
Fixed Resource Mode (No Auto Scaling)
Replica: 1, CPU: 4 Core, Memory: 8GB
8 Core, 16GB * 4
N/A
Shared
Resource Expansion Mode (Using Auto Scaling)
Small(1 Core, 4GB)
8 Core, 16GB * 3
8 Core, 16GB * 1
Personal
Resource Expansion Mode (Using Auto Scaling)
Small(1 Core, 4GB)
8 Core, 16GB * 3
8 Core, 32GB * 2
Table. Quick Query Minimum Specifications
Region-Based Provisioning Status
Quick Query is available in the following environments:
Region
Availability
Korea West (kr-west1)
Available
Korea East (kr-east1)
Available
Korea South 1 (kr-south1)
Not Available
Korea South 2 (kr-south2)
Not Available
Korea South 3 (kr-south3)
Not Available
Table. Quick Query Region-Based Provisioning Status
Preceding Services
The following services must be configured before creating Quick Query. Please refer to the guides provided for each service to prepare them in advance.
File Storage
A storage service that allows multiple client servers to share files through network connections
Table. Quick Query Preceding Services
7.6.2 - How-to guides
Users can create Quick Query services by entering the required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating Quick Query
You can create Quick Query services through the Samsung Cloud Platform Console.
To create Quick Query, follow these steps:
Click All Services > Data Analytics > Quick Query. This will take you to the Service Home page of Quick Query.
On the Service Home page, click the Create Quick Query button. This will take you to the Create Quick Query page.
On the Create Quick Query page, enter the required information and select the detailed options.
In the Version Selection section, select the required information.
Category
Required
Description
Quick Query
Required
Select the Quick Query service version
Provides a list of available versions
Table. Quick Query Service Version Selection Items
In the Service Information Input section, enter or select the required information.
Category
Required
Description
Quick Query Name
Required
Enter the Quick Query name
Must start with a lowercase letter and must not end with a hyphen (-); only lowercase letters, numbers, and hyphens (-) are allowed; 3 to 30 characters
Description
Optional
Enter additional information or description of Quick Query within 150 characters
Domain Setting
Required
Enter the Quick Query domain
Must start with a lowercase letter and must not end with a special character (-, .); only lowercase letters, numbers, and special characters (-, .) are allowed; 3 to 50 characters
{Quick Query Name}.{Set Domain} will be the Quick Query access address.
Query Engine Type
Required
Select the query engine type
Shared: Multiple users share a single query engine
Personal: A separate query engine runs for each user
Cluster Size
Required
Select the resource capacity for cluster configuration
If the engine type is Shared:
When Auto Scaling is selected, choose the cluster capacity (Small, Medium, Large, Extra Large).
When Auto Scaling is not selected, set the cluster capacity by entering Replica, CPU, and Memory.
If the engine type is Personal:
Select the cluster capacity (Small, Medium, Large, Extra Large).
Maximum Concurrent Query Execution
Select the maximum number of queries to execute concurrently in Quick Query
Available values: 32, 64, 96, 128
Data Service Console Connection
Required
Enter the Data Service Console domain
Must start with a lowercase letter and must not end with a special character (-, .); only lowercase letters, numbers, and special characters (-, .) are allowed; 3 to 50 characters
Host Alias
Optional
Add host information to be connected to Quick Query (up to 20 can be created, including the default)
Select Use and click the + button
Hostname: Enter a hostname or domain, using lowercase letters, numbers, and special characters (-, .); 3 to 63 characters
IP: Enter an IP address
To delete an entry, click the X button
The firewall between the cluster and the corresponding server must be open to use the added host information
Table. Quick Query Service Information Input Items
In the Cluster Information Input section, enter or select the required information.
Category
Required
Description
Cluster Name
Required
Enter the cluster name
Must start with a lowercase letter and must not end with a hyphen (-); only lowercase letters, numbers, and hyphens (-) are allowed; 3 to 30 characters
Control Area Setting
Required/Optional
Kubernetes Version: Displays the Kubernetes version
The Kubernetes version can be upgraded after provisioning.
Public Endpoint Access: To access the Kubernetes API server endpoint from outside, select Use and enter the Access Control IP Range (cannot be changed after service application).
Control Area Logging: Select whether to use control area logging
If Use is selected, the cluster control area’s Audit/event log can be checked in Management > Cloud Monitoring > Log Analysis.
1GB of log storage is provided free of charge for all services in the project, and logs exceeding 1GB will be deleted sequentially.
Network Setting
Required
Set the network connection
VPC: Use the same VPC as Data Service Console
Subnet: Select a subnet from the selected VPC
Security Group: Click Search and select a security group in the Security Group Selection popup window
File Storage Setting
Required
Select the file storage volume to be used by the cluster
Default Volume (NFS): Click Search and select a file storage in the File Storage Selection popup window
Table. Quick Query Service Cluster Information Input Items
In the Node Pool Information Input area, enter or select the required information.
Classification
Required
Detailed Description
Node Pool Configuration
Required/Optional
Enter detailed information about the node pool to be added
* marked items are required input items
If the Query Engine Type is Shared and Auto Scaling is set to Not Used, only the Node Pool Configuration (Fixed) item can be set.
Keypair: Select the authentication method to use when connecting to the Virtual Server
Table. Quick Query Service Node Pool Information Input Items
In the Additional Information Input area, enter or select the required information.
Classification
Required
Detailed Description
Tags
Optional
Add tags
Tag Add button to create and add tags or add existing tags
Up to 50 tags can be added
Newly added tags are applied after service creation is complete
Table. Quick Query Service Additional Information Input Items
In the Summary panel, check the detailed information created and the estimated billing amount, and click the Complete button.
After creation is complete, check the created resource in the Quick Query List page.
Check Quick Query Details
You can check the entire resource list and detailed information of the Quick Query service and modify it. The Quick Query Details page consists of Details, Tags, and Work History tabs.
To check the detailed information of the Quick Query service, follow these steps:
Click the All Services > Data Analytics > Quick Query menu. You will be taken to the Quick Query Service Home page.
On the Service Home page, click the Quick Query menu. You will be taken to the Quick Query List page.
On the Quick Query List page, click the resource whose details you want to check. You will be taken to the Quick Query Details page.
At the top of the Quick Query Details page, status information and additional feature information are displayed.
Classification
Detailed Description
Status Display
Status of the Quick Query created by the user
Creating: Being created
Running: Creation complete, service available
Updating: Setting update in progress
Terminating: Service termination in progress
Error: Error occurred during creation or service abnormal state
Hosts File Setting Information
Button to check and copy host file information for accessing Quick Query and Data Service Console
Service Termination
Button to terminate the service
Table. Quick Query Status Information and Additional Features
Details
You can check the detailed information of the resource selected on the Quick Query List page and modify it if necessary.
Classification
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Indicates the cluster SRN
Resource Name
Resource name
Indicates the cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Modification Time
Time when the service information was modified
Quick Query Name
Quick Query name
Description
Additional information or description of Quick Query
Version
Quick Query version
Service Type
Quick Query service type
Query Engine Type
Quick Query engine type
Engine Spec
Whether Auto Scaling is used
Resource capacity for cluster configuration
Maximum Concurrent Query Execution
Maximum number of queries that can be executed concurrently in Quick Query
Domain Setting
Quick Query domain
Data Service Console
Data Service Console domain
Host Alias
Host information to be connected to Quick Query
Web URL
Web URL of Data Service Console and Quick Query
Cluster Name
Name of the cluster composed of servers
Installation Node Information
Detailed information of the installed node pool
Table. Quick Query Details Tab Items
Tags
You can check the tag information of the resource selected on the Quick Query List page and add, change, or delete it.
Classification
Detailed Description
Tag List
Tag list
Key, Value information of tags can be checked
Up to 50 tags can be added per resource
When entering tags, existing Key and Value lists can be searched and selected
Table. Quick Query Tag Tab Items
Work History
You can check the work history of the resource selected on the Quick Query List page.
Classification
Detailed Description
Work History List
Resource change history
Work time, resource type, resource name, work details, work result, and worker information can be checked
Click the corresponding resource in the Work History List. The Work History Details popup window opens.
The Detailed Search button provides a detailed search function
Table. Quick Query Work History Tab Detailed Information Items
Connecting to Quick Query
To connect to Quick Query, follow these steps:
Check the IP of the Windows system (PC) that you want to connect to Quick Query.
You need to check the public IP of the system since external access is required.
Check if the IGW connection is set to use in the VPC where Quick Query is installed.
The Internet Gateway setting must be enabled for external access.
Add the following contents to the hosts file of the Windows system:
Domain address of Data Service Console
Domain address of Data Service Console IAM
Domain address of Quick Query
You can check the hosts file setting information by clicking Hosts file setting information in the Quick Query detailed screen.
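As an illustration, the added entries take the standard hosts-file form of an IP address followed by a hostname. The addresses and domain names below are placeholders, not real values; copy the actual entries from the Hosts file setting information button in the console.

```
# Hypothetical example entries -- replace with the values shown in
# the console's "Hosts file setting information".
198.51.100.10   dsconsole.example.internal
198.51.100.10   iam.dsconsole.example.internal
198.51.100.10   quickquery.example.internal
```

On Windows, the file is located at C:\Windows\System32\drivers\etc\hosts, and editing it requires administrator privileges.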
Add the following rules to the VPC IGW Firewall that you selected when applying for the Quick Query service:
Source IP: IP of the Windows system (PC)
Destination IP: Subnet range of the Kubernetes where Quick Query is installed
Protocol: TCP
Port: 443
Add the following rules to the Load Balancer Firewall that you selected when applying for the Quick Query service:
Source IP: IP of the Windows system (PC)
Destination IP: Subnet range of the Kubernetes where Quick Query is installed
Protocol: TCP
Port: 443
Add the following rules to the Security Group that you selected when applying for the Quick Query service:
Type: Inbound rule
Source address: IP of the Windows system (PC)
Protocol: TCP
Port: 443, 30000 ~ 32767
Run the Chrome browser on the Windows system (PC) that you want to connect to and access the Quick Query URL.
Quick Query Target IP/Port Information
To access Quick Query, add the target IP and port for each service to the Security Group as follows:
Item
Protocol
Source
Target IP
Port
Note
Quick Query
TCP
User IP
Quick Query
443, 30000 ~ 32767
Quick Query web https
Table. Quick Query Target IP/Port Information
Canceling Quick Query
You can cancel the service to reduce operating costs. However, canceling the service may immediately stop the operating service, so you should carefully consider the impact of service cancellation before proceeding.
To cancel Quick Query, follow these steps:
Click the All Services > Data Analytics > Quick Query menu. You will be taken to the Service Home page of Quick Query.
Click the Quick Query menu on the Service Home page. You will be taken to the Quick Query List page.
On the Quick Query List page, select the resource you want to cancel and click the Cancel Service button.
After cancellation is complete, check if the resource has been canceled on the Quick Query List page.
7.6.3 - API Reference
API Reference
7.6.4 - CLI Reference
CLI Reference
7.6.5 - Release Note
Quick Query
2025.07.01
NEW Quick Query Official Version Release
A Quick Query service has been released, allowing for easy analysis of large-scale data using standard SQL.
8 - Application Service
Easily manage and monitor APIs for application integration, and efficiently manage corporate assets such as data, software, and applications in an integrated manner.
8.1 - API Gateway
8.1.1 - Overview
Service Overview
API Gateway is a service that easily creates, manages, and monitors APIs. It defines resources and methods related to APIs in a consistent manner, and can apply built-in security access. Additionally, it can easily and conveniently monitor API usage status and performance metrics.
Features
Convenient API Management: Through the console, you can conveniently register and manage APIs, and JWT (JSON Web Token) is provided for access permission management. It also integrates with SCP Cloud Functions, allowing Cloud Functions calls via API Gateway.
Stable Traffic Handling: API Gateway can manage backend system traffic through usage plans. A usage plan can set the maximum number of calls per period (hour/day/month), which prevents excessive traffic from reaching the backend and enables stable service usage.
Easy and Convenient Monitoring: Provides a dashboard for managing features such as API version management, which links a different deployment version to each stage, and for monitoring API usage. Through this, you can quickly identify performance metrics such as API call counts, response times, and error counts.
Service Architecture Diagram
Figure. API Gateway Diagram
Developers (3rd-party developers) can access various backend services via a single endpoint (API Gateway) using REST APIs.
API Gateway can route the request to an appropriate backend service or Cloud Function.
If authentication and authorization are required, the user is verified with JWT.
Request data is transformed as needed, or responses from multiple services are aggregated into one through the API Gateway.
When traffic is high, you can apply load balancing and rate limiting to improve service stability.
Supports web clients to call APIs from other domains through CORS settings.
All requests and responses are logged and monitored in the API Gateway service, allowing rapid detection of failures and anomalies.
By separating stages for each environment, such as development, testing, and production, you can manage API versions and use the required version. API management, security policy application, and similar tasks can be handled centrally and consistently through the API Gateway service.
Provided Features
API Gateway provides the following features.
API Management and Operations
Custom Domain Name: Connect a custom domain to the API to provide a unique URL for the user
REST API creation and management: Define resources and methods (GET, POST, etc.) and set authentication method
API version and stage management: Operate the same API in multiple versions simultaneously and manage changes
Routing: Routing requests to various backend services based on the URI path or request headers
Monitoring and Logging: API performance monitoring and logging possible (available December 2025)
API security
IP ACL setting: Control to allow only specific IPs to access, enhancing security
Cloud Functions integration: Execute business logic in response to external requests by integrating with serverless computing
CORS support: Set Cross-Origin Resource Sharing (CORS) to allow resource access from other domains
Components
API
An API is a collection of resources and methods integrated with backend HTTP endpoints, Cloud Functions, or other SCP services. APIs provide a logical interface to the actual service and are deployed across multiple stages, allowing use in various environments (development, production, etc.).
Resources
Resources are logical units that represent specific endpoints (URI paths) within an API. Each resource can be organized in a tree structure and can have multiple HTTP methods. For example, paths such as /users and /orders become individual resources.
Method
The method defines the HTTP actions (e.g., GET, POST, PUT, DELETE, etc.) that can be performed on each resource. Each method is integrated with a specific backend to process actual data or execute functionality.
Stage
The stage is a named reference to a specific point in time (snapshot) of an API deployment, distinguishing environments in the API lifecycle such as development (dev), testing (test), and production (prod). Each stage has its own unique URL, and separate settings per environment are possible for caching, logging, throttling, stage variables, etc. Stages support various operational scenarios such as environment-specific configurations and traffic segregation.
Endpoint
The endpoint is a unique URL address used by the client to access the API. A separate endpoint is created for each stage.
Integration
Integration defines how API methods connect to the actual backend (HTTP endpoints, Functions). Through request and response data transformation, authentication, mapping templates, etc., you can finely control the integration with the backend.
JWT (JSON Web Token)
JWT is a token-based web standard (RFC 7519) used for authentication and authorization. A JWT encodes a JSON object composed of three parts (Header, Payload, Signature) in Base64 URL-safe format, and prevents tampering by digitally signing it with a secret key or public key.
When securely exchanging authentication information and permissions between a server and a client, or between services, the token is placed in an HTTP header, allowing stateless authentication without session storage.
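The three-part structure and signature check described above can be sketched in Python using only the standard library. This is a minimal HS256 example with a made-up payload and secret; real tokens should be handled with a vetted JWT library.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # Base64 URL-safe decoding; restore the '=' padding that JWTs strip
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    # Header.Payload.Signature, each part Base64 URL-safe encoded
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    expected = b64url_encode(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch: token was tampered with")
    return json.loads(b64url_decode(body))

token = make_jwt({"sub": "user-1"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))  # {'sub': 'user-1'}
```

Because the signature covers the header and payload, any change to either part (or verification with the wrong secret) fails, which is what makes stateless authentication safe.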
CORS (Cross-Origin Resource Sharing)
It is a mechanism that relaxes the Same-Origin Policy applied in web browsers for security reasons, allowing resource sharing between servers of different origins (when the protocol, domain, or port differ).
The server specifies which origins are allowed through HTTP response headers (e.g., Access-Control-Allow-Origin), enabling the client (browser) to safely perform cross-origin requests.
If CORS is not properly configured, the browser blocks requests for resources from other origins. CORS is a web standard security policy that must be considered when using external resources such as API calls, fonts, images, and videos.
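The server-side decision described above can be illustrated with a minimal sketch: the server echoes the requesting origin back only if it is on an allowlist, and returns no CORS headers otherwise (the origin values below are hypothetical).

```python
def cors_headers(origin: str, allowed_origins: set) -> dict:
    # Echo the origin back only if it is allowed; an empty dict means
    # the browser will block the cross-origin response.
    if origin in allowed_origins:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type, Authorization",
        }
    return {}

allowed = {"https://app.example.com"}
print(cors_headers("https://app.example.com", allowed)["Access-Control-Allow-Origin"])
# https://app.example.com
```

In API Gateway this decision is configured per stage on the CORS tab rather than coded by hand, but the allow/deny logic the browser sees is the same.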
Regional Provision Status
API Gateway can be provided in the environments below.
Region
Availability
Korea West 1 (kr-west1)
Provided
Korea East 1 (kr-east1)
Provided
South Korea 1 (kr-south1)
Not provided
South Korea 2 (kr-south2)
Not provided
South Korea 3 (kr-south3)
Not provided
Table. API Gateway Regional Availability
Preliminary Service
This is a list of services that can be optionally configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
Cloud Functions
A service that runs application code in a serverless computing environment
When you connect a Cloud Functions function as the integration target of an endpoint in API Gateway, the client’s HTTP request is passed to the function and you can receive the execution result. This allows you to easily implement an API backend in a serverless manner.
Table. API Gateway preliminary services
8.1.2 - How-to guides
Users can create the API Gateway service by entering required information through the Samsung Cloud Platform Console and selecting detailed options.
Creating an API
An API is a collection of resources and methods integrated with backend HTTP endpoints, Cloud Functions, or other SCP services. An API provides a logical interface to the actual service and can be deployed to multiple stages for use in different environments (development, production, etc.).
You can create and use APIs through the Samsung Cloud Platform Console.
To create an API, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the Create API button on the Service Home page. This will take you to the Create API page.
Enter the required information for creating the service and select detailed options on the Create API page.
Select the required information in the Service Information section.
Item
Required
Description
API Name
Required
Enter API name
Start with lowercase English letters, do not end with special characters (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and special characters (-)
API Creation Method
Required
Select API creation method
Select from Create New, Clone Existing API
API to Clone
Required
When selecting Clone Existing API as the API creation method, select from already created APIs
Description
Optional
Enter additional information or description about the API within 50 characters
API Endpoint Type
Required
Path to access the API
Region: Process requests within the region where the API is deployed
Private: Expose to receive API requests privately from other VPCs
When Private is selected, JWT activation is applied
Table. API service information input items
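The API Name rule above can be checked client-side with a simple regular expression. This is an illustrative sketch; the console performs its own validation, and the exact server-side rule may differ.

```python
import re

# Mirrors the stated rule: start with a lowercase letter, 3 ~ 50 characters,
# lowercase letters / digits / hyphens only, and no trailing hyphen.
API_NAME_RE = re.compile(r"[a-z][a-z0-9-]{1,48}[a-z0-9]")

def is_valid_api_name(name: str) -> bool:
    return bool(API_NAME_RE.fullmatch(name))

print(is_valid_api_name("my-api-01"))  # True
print(is_valid_api_name("api-"))       # False (ends with a hyphen)
```

Pre-validating names this way avoids a round trip to the console form only to be rejected on submit.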
Enter or select the required information in the Additional Information section.
Item
Required
Description
Tags
Optional
Add tags
Click the Add Tag button to create and add a new tag or add an existing tag
Up to 50 tags can be added
Newly added tags are applied after service creation is complete
Table. API additional information input items
Review the detailed information and estimated charges in the Summary panel, then click the Complete button.
Once creation is complete, verify the created resource on the API List page.
Viewing API Details
You can view and modify the complete resource list and detailed information of API services. The API Details page consists of Details, Tags, and Operation History tabs.
To view detailed information of an API service, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API menu on the Service Home page. This will take you to the API List page.
Click the resource for which you want to view detailed information on the API List page. This will take you to the API Details page.
The API Details page displays status information and additional feature information, and consists of Details, Tags, and Operation History tabs.
Item
Description
Status Display
Status of the API created by the user
Creating: API being created
Active: API operating normally
Deleting: API being deleted
Error: Service unavailable due to API internal error
Service Termination
Button to terminate the service
Table. API status information and additional features
Details
On the API Details page, you can view detailed information of the selected resource and modify information if necessary.
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
API Name
API name
API Endpoint Type
API endpoint type
DNS Status
DNS status
Displays Creating, Active, Inactive, Error
Description
Additional information or description about the API
Table. API details tab items
Connection Management
On the Connection Management page, you can manage connection requests for PrivateLink Service for API Gateway.
Item
Description
Request Endpoint ID
Requested endpoint ID
Creation Date
Date and time when the service was created
Status
Resource status value
Reject
Reject PrivateLink Service connection request
Approve
Approve PrivateLink Service connection request
Block
Block connected PrivateLink Endpoint
Reconnect
Reconnect blocked PrivateLink Endpoint
Table. API connection management tab items
Note
If the connection status is Rejected or Error, requests such as approval/rejection are not possible.
Tags
On the API Details page, you can view tag information of the selected resource, and add, modify, or delete tags.
Item
Description
Tag List
Tag list
Can view Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from existing Key and Value lists
Table. API tag tab detailed information items
Operation History
On the API Details page, you can view the operation history of the selected resource.
Item
Description
Operation History List
Resource change history
Can view operation details, operation date/time, resource type, resource name, operation result, operator information
Click the corresponding resource in the Operation History List. The Operation History Details popup window opens.
Provides detailed search functionality through the Detailed Search button
Table. API operation history tab detailed information items
Integrating with PrivateLink Service
By integrating API Gateway service with PrivateLink service, you can connect ‘API Gateway and VPC’ or ‘API Gateway and other SCP services’ without external internet. Data uses only the internal network, providing high security, and no public IP, NAT, VPN, or internet gateway is required.
Creating PrivateLink Service for API Gateway Service
When creating an API, select Private as the endpoint type to expose the API for private access from other VPCs or services.
Note
You can use the internal network by specifying it as a target of Private Endpoint. For instructions on creating a PrivateLink Endpoint in your VPC, see Creating a PrivateLink Endpoint.
You can create an entry point to access other PrivateLinks in API Gateway service.
To create a PrivateLink Endpoint, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the PrivateLink Endpoint menu on the Service Home page. This will take you to the PrivateLink Endpoint List page.
Click the Create PrivateLink Endpoint button on the PrivateLink Endpoint List page. This will take you to the Create PrivateLink Endpoint page.
Enter or select the required information.
Item
Required
Description
PrivateLink Endpoint Name
Required
Enter PrivateLink Endpoint name
Enter 3 ~ 20 characters using English letters and numbers
Description
Optional
Enter additional information or description within 50 characters
PrivateLink Service ID
Required
Enter the ID of the PrivateLink Service to connect
Check the Service ID with the PrivateLink Service provider in advance, and after creating the Endpoint, provide the Endpoint ID to the provider
Enter 3 ~ 60 characters using English letters and numbers
Table. PrivateLink Endpoint creation information input items
When information entry and selection is complete, click the Confirm button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the PrivateLink Endpoint list.
To delete a PrivateLink Endpoint, select the resource to delete from the list and click the Delete button.
Note
To request a connection to a service provider through PrivateLink, you must go through an approval process.
When applying for a service connection, you must check the PrivateLink Service ID to be connected in advance.
Usage agreement with the service provider must be completed before applying for the service.
After the user creates a PrivateLink Endpoint, they must provide the Endpoint ID to the service provider. The service provider can check the user’s Endpoint ID and proceed with usage approval quickly.
Viewing PrivateLink Endpoint Details
You can view and modify the complete resource list and detailed information of PrivateLink Endpoint. The PrivateLink Endpoint Details page consists of Details and Operation History tabs.
To view detailed information of an API service, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the PrivateLink Endpoint menu on the Service Home page. This will take you to the PrivateLink Endpoint List page.
Click the resource for which you want to view detailed information on the PrivateLink Endpoint List page. This will take you to the PrivateLink Endpoint Details page.
The PrivateLink Endpoint Details page displays status information and additional feature information, and consists of Details and Operation History tabs.
Rejected: Connection rejected, Request Approval Again button displayed
Error: Error occurred
Canceled: Connection request canceled, Request Approval Again button displayed
Cancel Request
Request connection cancellation
Request Approval Again
Request connection again when connection request is in canceled status
Table. PrivateLink Endpoint status information and additional features
Details
On the PrivateLink Endpoint Details page, you can view detailed information of the selected resource.
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
PrivateLink Endpoint Name
PrivateLink Endpoint name
PrivateLink Endpoint ID
PrivateLink Endpoint ID
PrivateLink Service ID
Connected PrivateLink Service ID
API Endpoint Type
API endpoint type
Description
Additional information or description about the PrivateLink Endpoint
Table. PrivateLink Endpoint details tab items
Operation History
On the PrivateLink Endpoint Details page, you can view the operation history of the selected resource.
Item
Description
Operation History List
Resource change history
Can view operation details, operation date/time, resource type, resource name, operation result, operator information
Click the corresponding resource in the Operation History List. The Operation History Details popup window opens.
Provides detailed search functionality through the Detailed Search button
Table. PrivateLink Endpoint operation history tab detailed information items
Creating a Resource
A resource is a logical unit representing a specific endpoint (URI path) within an API. Each resource can be organized in a tree structure and can have multiple HTTP methods.
To create a resource, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Resource menu on the Service Home page. This will take you to the Resource page.
Click the Create Resource button on the Resource page. This will take you to the Create Resource popup window.
Enter or select the required information.
Item
Required
Description
Resource Name
Required
Enter resource name
Start with lowercase English letters and enter 3 ~ 50 characters using lowercase letters, numbers, and special characters (-{})
When using braces, only the format {character} is allowed and cannot be empty
Resource Path
Required
Select the parent path from the resource menu tree
Table. Resource creation information input items
When information entry and selection is complete, click the Confirm button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the resource list.
To delete a resource, select the resource to delete from the list and click the Delete button.
Note
Up to 300 resources can be created.
The depth of resources is up to 30 including Root.
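Resources with brace segments act as path parameters. The routing idea can be illustrated with a small matching sketch that turns a template like /users/{userId} into a pattern and extracts the parameter values; the template and paths here are hypothetical.

```python
import re

def match_path(template: str, path: str):
    # Convert a resource template such as "/users/{userId}" into a regex
    # with a named group per brace segment, then match a concrete path.
    pattern = re.sub(r"\{([^/{}]+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, path)
    return m.groupdict() if m else None

print(match_path("/users/{userId}/orders/{orderId}", "/users/42/orders/7"))
# {'userId': '42', 'orderId': '7'}
print(match_path("/users/{userId}", "/orders/42"))
# None
```

This is why empty braces are disallowed: each brace segment must name the parameter that the matched path segment is bound to.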
Creating a Method
A method defines HTTP actions (e.g., GET, POST, PUT, DELETE, etc.) that can be performed on each resource. Each method is integrated with a specific backend to process actual data or execute functions.
To create a method, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Resource menu on the Service Home page. This will take you to the Resource page.
Click the Create Method button on the Resource page. This will take you to the Create Method popup window.
Enter or select the required information.
Item
Required
Description
Method Type
Required
Select method type
Already created values are not displayed in the list.
When ANY is selected, all types of methods are created
Integration Type
Required
Select endpoint type
Select from HTTP, Cloud Function, PrivateLink
Endpoint URL
Required
Enter endpoint URL when selecting HTTP type
An endpoint is a unique URL used by clients to access the API. Separate endpoints are created for each stage, and various types such as Regional, Edge-Optimized, and Private are possible.
Must be a valid URL starting with http:// or https://, and enter within 500 characters using English letters and special characters ($-_.+!*’:(){}/)
Endpoint
Required
Select endpoint when selecting Cloud Function type
Region is provided as the current region and cannot be changed
URL Query String Parameters
Optional
Check Use and then enter Name
Enter using English letters, numbers, and special characters (_)
HTTP Request Headers
Optional
Check Use and then enter Name
Enter using English letters, numbers, and special characters (-)
API Key Usage
Optional
Check Use to limit usage through usage policy
Table. Method creation information input items
When information entry and selection is complete, click the Save button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the method list.
To delete a method, select the resource to delete from the list and click the Delete button.
Note
Up to 7 methods can be created, one of each type. When ANY is selected, methods of all types are created.
Item
Description
GET
Retrieve (read) resource
POST
Create (register) resource
PUT
Modify (update) entire resource
PATCH
Partially modify only part of resource
DELETE
Delete resource
OPTIONS
Retrieve list of HTTP methods supported by the endpoint
HEAD
Retrieve only headers without body (return only metadata without response body)
Table. Method types
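As a sketch of how a client might issue these methods, the following builds requests with Python's standard library. The base URL is a placeholder, not a real API Gateway endpoint, and no request is actually sent.

```python
import json
from urllib import request

BASE = "https://example-apigw.invalid/v1"  # placeholder endpoint

def build_request(method, path, body=None):
    # GET/DELETE usually carry no body; POST/PUT/PATCH send JSON
    data = json.dumps(body).encode() if body is not None else None
    req = request.Request(BASE + path, data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

get_users = build_request("GET", "/users")           # retrieve resource
create_user = build_request("POST", "/users", {"name": "kim"})  # create resource
print(get_users.get_method(), create_user.get_method())  # GET POST
```

Calling request.urlopen(req) would dispatch the call; each method a resource defines maps one-to-one onto the HTTP verb the client sends.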
Deploying an API
To reflect an API under development to the actual service environment, API deployment is required.
To deploy a created API, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Resource menu on the Service Home page. This will take you to the Resource page.
Click the Deploy API button on the Resource page. This will take you to the Deploy API popup window.
Enter or select the required information.
Item
Required
Description
Stage
Required
Select stage to deploy API
New Stage: Deploy by creating a new stage
None Stage: Deploy without selecting a stage
Stage Name
Required
When selecting New Stage, enter new stage name
Start with lowercase English letters, do not end with special characters (-), and enter 3 ~ 30 characters using lowercase letters, numbers, and special characters (-)
Deployment Description
Optional
Enter additional information or description about API deployment within 50 characters
Table. API deployment information input items
When information entry and selection is complete, click the Deploy button.
Check the message in the notification popup window, then click the Confirm button.
Creating a Stage
A stage is a named reference to a specific point in time (snapshot) of an API deployment, distinguishing environments for each lifecycle of the API such as development (dev), test (test), production (prod), etc. Each stage has a unique URL, and separate settings can be made per environment such as caching, logging, throttling, and stage variables. Through stages, various operational scenarios such as Canary release, environment-specific settings, and traffic separation are supported.
To create a stage to deploy an API, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Stage menu on the Service Home page. This will take you to the Stage page.
Click the Create Stage button on the Stage page. This will take you to the Create Stage popup window.
Enter or select the required information.
Item
Required
Description
Stage Name
Required
When selecting New Stage, enter new stage name
Start with lowercase English letters, do not end with special characters (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and special characters (-)
Stage Description
Optional
Enter additional information or description about the stage within 100 characters
API Deployment Version
Required
Select API version to deploy
Start with lowercase English letters, do not end with special characters (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and special characters (-)
Table. Stage creation information input items
When information entry and selection is complete, click the Confirm button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the stage list.
Note
Up to 10 stages can be created.
Viewing Stage Details
You can view and modify the stage list and detailed information. The details page consists of Stage Details information and the API Deployment Version Management, CORS, and Usage Policy tabs.
To view detailed information of a stage, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Stage menu on the Service Home page. This will take you to the Stage page.
Click the resource for which you want to view detailed information in the stage list.
The Stage Details displays status information and additional feature information, and consists of API Deployment Version Management, CORS, and Usage Policy tabs.
To delete a stage, select the resource to delete from the list and click the Delete button.
To modify a stage, select the resource to modify from the list and click the Modify button.
Stage Details
On the Stage Details page, you can view detailed information of the selected resource.
Item
Description
Stage Name
Stage name
CORS
CORS operation status
Stage Description
Stage information
JWT
JSON Web Token usage status
API Key
API Key usage status
Invoke URL
URL for API invocation
Activation Date
Stage activation date/time
Deployment ID
API deployment ID
Table. Stage details items
API Deployment Version Management
On the API Deployment Version Management tab, you can view API deployment history.
Item
Description
API Deployment Version Management List
API deployment history
Can view deployment date/time, status, description, deployment ID
Change Deployment
Select the resource to change deployment from the list and click the Change Deployment button. When you click the Confirm button in the notification popup window, the active deployment ID is immediately updated.
Table. API deployment version management tab detailed information items
CORS (Cross-Origin Resource Sharing)
Note
For details on CORS (Cross-Origin Resource Sharing), see Components > CORS.
On the CORS tab, you can view the CORS list.
Item
Description
Name
CORS name
Mapping Value
Mapping value applied to CORS
Table. CORS tab detailed information items
Usage Policy
On the Usage Policy tab, you can view the usage policy connected to the stage.
Item
Description
Usage Policy Name
Usage policy name
Usage Policy ID
Usage policy ID
Quota
Quota set in the usage policy
Connected API Key Name
API Key name connected to the usage policy
Table. Usage policy tab detailed information items
Note
When calling an API, you must call with the Key value of the API Key connected to the stage in the ‘x-scp-apikey’ header.
Usage policies are connected at the stage level, but quotas are calculated per method checked for API Key usage.
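A call carrying the API Key described in the note above might look like the following sketch. The invoke URL and key value are placeholders; the real values come from the console.

```python
from urllib import request

# Hypothetical values; use the stage's Invoke URL and the Key value of
# the API Key connected to the stage.
INVOKE_URL = "https://example-stage.invalid/prod/users"
API_KEY = "my-api-key-value"

req = request.Request(INVOKE_URL, method="GET")
# API Gateway expects the key in the 'x-scp-apikey' header
req.add_header("x-scp-apikey", API_KEY)
# request.urlopen(req) would send the call; each call counts against the
# usage policy quota of any method with API Key usage enabled.
```

Omitting the header on a method with API Key usage enabled causes the call to be rejected before it reaches the backend.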
Creating Authentication
JWT (JSON Web Token) is an open standard (RFC 7519) used for user authentication. A JWT is a claim-based web token that stores information about the user in a digitally signed token using JSON format.
To create a JWT, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Authentication menu on the Service Home page. This will take you to the Authentication List page.
Click the Create JSON Web Token button on the Authentication List page. This will take you to the Create JSON Web Token popup window.
Enter or select the required information.
Item
Required
Description
JWT Name
Required
Enter token name
Start with lowercase English letters, do not end with special characters (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and special characters (-)
Stage to Connect
Optional
Check Use and then select a stage
Table. Authentication creation information input items
When information entry and selection is complete, click the Confirm button.
Check the message in the notification popup window, then click the Confirm button. This will take you to the Access Token notification popup window.
Tokens can only be viewed in the Access Token notification popup window. If necessary, download the Access Token file.
Check the message in the Access Token notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the authentication list.
To delete a token, select the resource to delete from the list and click the Delete button.
To modify a token, select Modify from the context menu of the resource to be modified.
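Once issued, the access token is typically sent with each API call. Token-based authentication is commonly done via an Authorization: Bearer header; the exact header API Gateway expects should be confirmed in the service documentation. A sketch with placeholder values:

```python
from urllib import request

# Hypothetical values; the real access token is shown once in the
# console popup (download it if needed, it cannot be viewed again).
INVOKE_URL = "https://example-stage.invalid/prod/orders"
ACCESS_TOKEN = "example-access-token"

req = request.Request(INVOKE_URL, method="GET")
# Assumed bearer-token convention; verify the header name for your stage
req.add_header("Authorization", "Bearer " + ACCESS_TOKEN)
# request.urlopen(req) would send the authenticated call.
```

Because the token itself carries the signed claims, no session state needs to be kept on the gateway between calls.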
Creating Access Control
You can add access allowed IPs so that API calls are made only from specific IPs when calling an API.
Note
A stage is connected to one access control. When a stage is initially created, the Default access control is applied by default to block access from all IPs (All deny). By creating a new access control and connecting it to the stage, you can configure it to be called only from specific IPs.
Access control cannot be created in the following cases:
When the available service quota limit is exceeded: Check the current allocated value and additional possible value in Quota Service.
When there is no available API: Create an API first.
When the API endpoint type is Private: Access control is not supported, but JWT activation is mandatorily applied to the stage of that API.
To create an access control, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Access Control menu on the Service Home page. This will take you to the Access Control List page.
Click the Create Access Control button on the Access Control List page. This will take you to the Create Access Control popup window.
Enter or select the required information.
Item
Required
Description
Access Control Name
Required
Enter access control name
Start with lowercase English letters, do not end with special characters (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and special characters (-)
Public Access Allowed IP
Required
Enter IP to allow access
Enter up to 100 IPs, separated by commas (,)
Stage to Connect
Optional
Check Use and then select a stage
Description
Optional
Enter additional information or description about access control within 50 characters
Table. Access control creation information input items
When information entry and selection is complete, click the Confirm button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the access control list.
To delete an access control, select the resource to delete from the list and click the Delete button. The Default access control cannot be deleted.
To modify an access control, select Modify from the context menu of the resource to be modified.
Terminating an API
You can reduce operating costs by terminating services that are not in use. However, terminating a service may immediately stop a running service, so you should fully consider the impact of the interruption before proceeding.
To terminate an API, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API menu on the Service Home page. This will take you to the API List page.
Select the resource to terminate on the API List page and click the Terminate Service button.
When termination is complete, verify that the resource has been terminated on the API List page.
Using Report
You can check API traffic, performance, and error status.
To use Report, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Report menu on the Service Home page. This will take you to the Report page.
Enter or select the required information.
Item
Description
Query Period
Select the date range to query (default: 1 week from the current date; up to one month can be queried)
Stage Name
Stage name under API
Table. Report information input items
When information entry and selection is complete, you can view Report information.
Item
Description
Top 5 Resources
Top 5 most called resources among resources called by the user with API status code 2XX (if identical, not shown as duplicate rank)
API Call Count
Number of calls with API status code 2XX
Latency
Time from when the user sends a request to API Gateway to when they receive a response
Integration Latency
Time from when API Gateway sends a request to the backend server to when it receives a response from the backend
4XX Error
Number of calls with API status code 4XX
5XX Error
Number of calls with API status code 5XX
Table. Report detailed information items
Note
When a stage is deleted, it cannot be queried in Report.
Report shows data up to 1 hour before the current time.
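The relationship between the two latency metrics in the Report table can be illustrated with a small helper. This assumes, as a simplification, that overall Latency consists of Integration Latency plus API Gateway's own processing time; the guide does not specify the exact accounting.

```python
def gateway_overhead_ms(latency_ms: float, integration_latency_ms: float) -> float:
    # Time spent inside API Gateway itself (request handling, auth,
    # routing), under the simplifying assumption stated above.
    return latency_ms - integration_latency_ms
```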
Creating a Usage Policy
Usage policies are established to ensure efficient distribution of server resources, secure service stability, and prevent unnecessary traffic and abuse.
To create a usage policy, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Usage Policy menu on the Service Home page. This will take you to the Usage Policy page.
Click the Create Usage Policy button on the Usage Policy page. This will take you to the Create Usage Policy page.
Enter or select the required information.
Item
Required
Description
API Name to Connect
Required
Select from created APIs
Usage Policy Name
Required
Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and hyphens (-)
Quota
Required
Enter a value between 1 ~ 2,000,000,000 on a monthly, daily, or hourly basis
Description
Optional
Enter description of the usage policy within 50 characters
Table. Usage policy information input items
When information entry and selection is complete, click the Complete button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource in the usage policy list.
Creating an API Key
API Keys are used to identify which user or application is calling an API. They are mainly used to limit usage through usage policies.
To create an API Key, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Usage Policy menu on the Service Home page. This will take you to the Usage Policy page.
Click the usage policy in the list. This will take you to the Usage Policy Details page.
Click the Create API Key button on the Usage Policy Details page. This will take you to the Add API Key popup window.
Enter or select the required information.
Item
Required
Description
API Key Name
Required
Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 ~ 50 characters using lowercase letters, numbers, and hyphens (-)
Description
Optional
Enter description of the API Key within 50 characters
Table. API Key information input items
When information entry and selection is complete, click the Confirm button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, verify the created resource on the Usage Policy Details page.
Note
Up to 10 usage policies and 5 API Keys can be created.
Quotas are calculated per API Key.
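Since quotas are calculated per API Key, enforcement conceptually looks like the sketch below. This is a purely illustrative in-memory model, not the service's actual implementation; the `QuotaPolicy` name is hypothetical.

```python
from collections import defaultdict

class QuotaPolicy:
    """Illustrative per-API-Key quota counter (not the real service logic)."""

    def __init__(self, limit: int):
        # Mirrors the console rule: quota between 1 and 2,000,000,000
        if not 1 <= limit <= 2_000_000_000:
            raise ValueError("quota must be between 1 and 2,000,000,000")
        self.limit = limit
        self.calls = defaultdict(int)   # call count tracked per API Key

    def allow(self, api_key: str) -> bool:
        # Each API Key is counted independently against the same limit
        if self.calls[api_key] >= self.limit:
            return False
        self.calls[api_key] += 1
        return True
```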
Creating a Resource Policy
You can block unauthorized access from the source through resource-based policies and enhance the security level of the service.
To create a resource policy, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Resource Policy menu on the Service Home page. This will take you to the Resource Policy page.
Click the Create Resource Policy button on the Resource Policy page. This will take you to the Create Resource Policy page.
Enter or select the required information in the Service Information section.
Item
Required
Description
Policy Template
Required
Select policy template
Default Policy: Policy automatically registered when creating an API
Account Allow List: Policy that allows only users of specific SCP accounts (Root user or IAM Role) to call the API
IP Range Deny List: Policy that allows or blocks only specific IP addresses or CIDR ranges
Table. Resource policy information input items
When information entry and selection is complete, click the Complete button.
Check the message in the notification popup window, then click the Confirm button.
Once creation is complete, you can view, modify, or delete the resource policy.
8.1.2.1 - Resource-based Policy
Resource-based Policy Overview
API Gateway’s Resource-based Policy is a policy granted to a resource that allows you to decide whether to allow or deny (Effect) actions on specific resources to principals. Using resource-based policies, you can directly define the principals that can call the API.
Note
While general IAM policies (Identity-based) grant permissions to users, resource-based policies are applied to the API itself to allow external access.
Through resource-based policies, you can allow secure API calls by defining the following:
Users of specific Samsung Cloud Platform accounts
Specific source IP address ranges or CIDR blocks
Resource policies are defined as JSON policy documents attached to an API to control whether a specified security principal (usually an IAM role or group) can call the API.
Item
Description
Example
Principal
Specify the principal that will call the API
-
Action
Define the functions to allow
-
Condition
Restrict to allow only in specific situations
Allow only requests from specific SRN
Table. Entities that control API call permission
Note
API Gateway’s resource-based policy utilizes the rules of IAM’s resource-based policy.
For instructions on creating or modifying policies using JSON, see JSON Writing Guide.
Resource-based Policy Usage Scenarios
The main usage scenarios for resource-based policies are as follows:
Resource-based Policy Scenarios
The resource-based policy scenarios used when specific features of API Gateway operate are as follows:
Item
Description
Reference Example
Default Policy
This is the DEFAULT resource policy that is automatically created when an API is created.
While not automatically registered by API Gateway’s resource-based policy, users can add and utilize it as needed.
Scenarios that users can add and utilize are as follows:
Cross-account access
When an IAM user of account A wants to execute Lambda of account B, register account A in the function policy of account B.
Hybrid access control
Instead of simply limiting accounts or IPs, you can configure it so that both specific users and specific IP bands must be satisfied simultaneously to allow access.
Managing API Gateway’s Resource-based Policy
To view and set API Gateway’s resource-based policy, follow these steps:
Click the All Services > Application Service > API Gateway menu. This will take you to the API Gateway Service Home page.
Click the API Gateway > Resource Policy menu on the Service Home page. This will take you to the Resource Policy page.
Click the Modify button in the Policy Details item. The Modify Resource Policy popup window opens.
* When you click the Delete button, the registered policy is deleted.
In the Modify Resource Policy popup window, select a Policy Template and then write the policy.
* For policy examples by policy template, see Resource-based Policy Examples.
When writing is complete, click the Complete button.
Resource-based Policy Examples
Users can additionally define resource-based policies or modify existing policies as needed.
Note
For some features, a resource-based policy (or credentials) must be registered before they can be used in API Gateway.
For the resource-based policy examples described in this guide, API Gateway registers them automatically when each feature is activated or connected.
Default Policy
This is a policy that is automatically registered when an API is created.
This is a policy that allows UserId2 belonging to accountId2 to call API apiId1 belonging to accountId1.
You can add conditions to simultaneously validate the User ID (Principal) and resource Condition (Condition). Below is an example that additionally defines inaccessible IPs.
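A policy of that shape might look like the sketch below: allowing UserId2 of accountId2 to call API apiId1 of accountId1, with an IP-range condition attached. All field values (SRN formats, action names) are illustrative assumptions, not the platform's exact syntax; see the JSON Writing Guide for the authoritative format.

```python
import json

# Hypothetical resource-based policy document, following the general
# Principal / Action / Condition structure described above.
policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"SRN": "srn:scp:iam::accountId2:user/UserId2"},
            "Action": "apigateway:Invoke",
            "Resource": "srn:scp:apigateway::accountId1:api/apiId1",
            # Condition restricting the allowed source IP range
            "Condition": {"NotIpAddress": {"SourceIp": "203.0.113.0/24"}},
        }
    ]
}
print(json.dumps(policy, indent=2))
```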
A resource-based policy is a policy that is applied to the API itself to allow external access.
Using resource-based policies, you can allow or deny actions on specific resources to specific principals.
2025.07.01
NEW Official release of API Gateway service
API Gateway service that allows easy management and monitoring of APIs has been released.
You can easily define resources and methods related to APIs, and conveniently monitor API usage status and performance metrics.
8.2 - Queue Service
8.2.1 - Message API reference
Overview
The Queue Service provided by Samsung Cloud Platform can send, receive, and delete messages. In this guide, we provide an explanation of the Queue Service API and how to call it.
Queue Service Call Procedure
Queue Service API URL address must be changed according to the operating environment and region. Please check the operating environment and region information in the table below.
Scp-Accesskey : Access Key issued from the Samsung Cloud Platform portal
Scp-Signature : Signature that encrypts the called API request with the Access Secret Key mapped to the Access Key. The HMAC encryption algorithm uses HmacSHA256.
Scp-Target : Action requested of the Queue Service. One of ScpQS.SendMessage, ScpQS.SendMessageBatch, ScpQS.ReceiveMessage, ScpQS.DeleteMessage, or ScpQS.DeleteMessageBatch
Scp-Timestamp : Elapsed time in milliseconds since January 1, 1970 00:00:00 Coordinated Universal Time (UTC)
Scp-ClientType : Specify user-api
Create Signature
Generate the string to be signed from the request, encrypt it with the HmacSHA256 algorithm using the Access Secret Key, then encode it in Base64.
Use this value as Scp-Signature.
The generated Signature is valid for 15 minutes.
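The signing steps above can be sketched in Python. The exact composition of the string to be signed is not specified in this guide, so the request string below is a placeholder assumption; only the HmacSHA256 + Base64 mechanics follow the description.

```python
import base64
import hashlib
import hmac
import time

def make_signature(access_secret_key: str, string_to_sign: str) -> str:
    # Sign with HmacSHA256 using the Access Secret Key, then Base64-encode
    digest = hmac.new(access_secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Scp-Timestamp: milliseconds elapsed since 1970-01-01 00:00:00 UTC
timestamp = str(int(time.time() * 1000))
```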
Queue Service API Call Example
Curl
```commandline
curl -i -X GET \
  -H "Scp-Accesskey:2sd2gg=2agbdSD26svcD" \
  -H "Scp-Signature:fsfsdf235f9U35sdgf35Xsf/qgsdgsdg326=sfsdr23rsef=" \
  -H "Scp-Timestamp:1605290625682" \
  -H "Scp-ClientType:user-api" \
  -H "Scp-Target:ScpQS.SendMessage" \
  --data '{"MessageBody": "sample message", "QueueUrl": "https://queueservice.kr-west1.e.samsungsdscloud.com/33ff0000a8a345d78cdf163673f3da11/samplequeue"}' \
  'https://queueservice.service.kr-west1.e.samsungsdscloud.com'
```
Python
```python
import requests

url = "https://queueservice.service.kr-west1.e.samsungsdscloud.com"
payload = {
    'MessageBody': 'sample message',
    'QueueUrl': 'https://queueservice.kr-west1.e.samsungsdscloud.com/33ff0000a8a345d78cdf163673f3da11/samplequeue'
}
headers = {
    'Scp-Accesskey': '2sd2gg=2agbdSD26svcD',
    'Scp-Signature': 'fsfsdf235f9U35sdgf35Xsf/qgsdgsdg326=sfsdr23rsef=',
    'Scp-Timestamp': '1605290625682',
    'Scp-ClientType': 'user-api',
    'Scp-Target': 'ScpQS.SendMessage'
}

# Send the request and keep the response body, raising on failure
response = requests.request("GET", url, headers=headers, data=payload)
if response.status_code == 200:
    contents = response.text
else:
    raise Exception(f"Failed to GET API: {response.status_code}, {response.text}")
```
Java
```java
String apiUrl = "https://queueservice.service.kr-west1.e.samsungsdscloud.com";
String accessKey = "2sd2gg=2agbdSD26svcD";
String signature = "fsfsdf235f9U35sdgf35Xsf/qgsdgsdg326=sfsdr23rsef=";
String timestamp = "1605290625682";
String clientType = "user-api";
String scpTarget = "ScpQS.SendMessage";

public static String getAPI(String token, String apiUrl) throws IOException {
    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpGet getRequest = new HttpGet(apiUrl);
    getRequest.addHeader("Scp-Accesskey", accessKey);
    getRequest.addHeader("Scp-Signature", signature);
    getRequest.addHeader("Scp-Timestamp", timestamp);
    getRequest.addHeader("Scp-ClientType", clientType);
    getRequest.addHeader("Scp-Target", scpTarget);
    HttpResponse response = httpClient.execute(getRequest);
    int statusCode = response.getStatusLine().getStatusCode();
    String responseBody = EntityUtils.toString(response.getEntity());
    httpClient.close();
    if (statusCode == 200) {
        return responseBody;
    } else {
        throw new RuntimeException("Failed to Request: " + statusCode + ", " + responseBody);
    }
}
```
Queue Service API
SendMessage
POST https://queueservice.service.kr-west1.e.samsungsdscloud.com
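Composed as a raw HTTP request, a SendMessage call might look like the sketch below. The request is built but not sent; the credential values are placeholders, and the JSON body mirrors the earlier call example.

```python
import json
import urllib.request

# JSON body for SendMessage, mirroring the earlier curl example
payload = json.dumps({
    "MessageBody": "sample message",
    "QueueUrl": "https://queueservice.kr-west1.e.samsungsdscloud.com/33ff0000a8a345d78cdf163673f3da11/samplequeue",
}).encode("utf-8")

req = urllib.request.Request(
    "https://queueservice.service.kr-west1.e.samsungsdscloud.com",
    data=payload,
    headers={
        "Scp-Accesskey": "<your-access-key>",
        "Scp-Signature": "<computed-signature>",
        "Scp-Timestamp": "1605290625682",
        "Scp-ClientType": "user-api",
        "Scp-Target": "ScpQS.SendMessage",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; it is only built here.
```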
You can create and use a Queue Service from the Samsung Cloud Platform Console. To create a Queue Service, follow these steps.
Click the All Services > Application > Queue Service menu. Go to the Service Home page of Queue Service.
Click the Create Queue button on the Service Home page. It navigates to the Create Queue page.
After entering the information required to create the service on the Queue creation page, click the Confirm button.
Category
Required
Detailed description
Type
Required
Select service type
Select Basic or FIFO
Queue name
Required
Enter queue name
Start with a lowercase English letter and include lowercase letters, numbers, and hyphens (-) within 3 ~ 64 characters
Basic type: cannot include ‘.fifo’ in the name
FIFO type: use the ‘name.fifo’ format
Description
Select
Enter service description within 100 characters
Message Size
Required
Enter the message size value (KB) between 1 and 256
Up to 50 can be added per resource
Message retention period
Required
Enter message retention period
Select a unit period and then enter the desired value
seconds: 60 ~ 1,209,600
minutes: 1 ~ 20,160
hours: 1 ~ 336
days: 1 ~ 14
Encryption
Required
Choose whether to use encryption
Create new: Go to the KMS page and create a new KMS encryption
Do not use: Do not use encryption
KMS encryption: Select when using KMS
Data Key reuse period: After selecting the unit period, enter the desired value
Minutes: 5 ~ 1,440
Hours: 1 ~ 24
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Queue creation information input items
When the popup notifying creation opens, click the Confirm button.
Queue is charged based on usage.
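The queue-naming rule above can be sketched as a client-side check. This is illustrative only; whether the ‘.fifo’ suffix counts toward the 64-character limit is an assumption here, and the console remains the authoritative validator.

```python
import re

# Base rule: starts with a lowercase letter; lowercase letters, digits,
# and hyphens; 3-64 characters.
BASE_RE = re.compile(r"^[a-z][a-z0-9-]{2,63}$")

def is_valid_queue_name(name: str, fifo: bool) -> bool:
    if fifo:
        # FIFO queues use the 'name.fifo' format (suffix assumed to be
        # excluded from the 64-character count).
        return name.endswith(".fifo") and bool(BASE_RE.fullmatch(name[:-5]))
    # Basic queues cannot include '.fifo' in the name
    return ".fifo" not in name and bool(BASE_RE.fullmatch(name))
```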
Checking Queue Service Details
You can view detailed information and messages about the Queue Service. To view detailed information of the Queue Service, follow the steps below.
Click the All Services > Application > Queue Service menu. Go to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. It moves to the Queue List page.
Click the resource to view detailed information on the Queue list page. It moves to the Queue details page.
Queue Details page displays status information and additional feature information, and consists of Details, Message Management, Tags, Task History tabs.
Category
Detailed description
Queue Service status
Describes the status of Queue Service
Creating: Creation in progress
Available: Creation completed, server connection possible
Deleting: Service termination in progress
Error Deleting: Abnormal state during deletion
Inactive: Abnormal state
Error: Abnormal state during creation
Service termination
Service termination button
Table. Queue Service status information and additional features
Detailed Information
On the Queue list page, you can view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Queue Service, it means resource SRN
Resource Name
Resource Name
In Queue Service, it means the Queue name
Resource ID
Unique resource ID of the service
Creator
User who created the service
Creation DateTime
Date and time the service was created
Editor
User who modified the service
Modification Date/Time
Date/Time when the service was modified
Volume name
Volume name
Type
Queue type
Configuration Settings
Queue configuration information
Displays message size, retention period, and whether encryption is used
Click the Edit button to modify
For more details, see Configure Queue Settings
Description
Queue description
Click the Edit button to edit
IP Access Allow List
List of IPs allowed to access the Queue service
Click the Add IP Address button to register a new IP
Click the Delete button of a registered IP to remove it
You can manage IPs that can access the Queue Service.
Add Accessible IP
To add an accessible IP, follow the steps below.
Click the All Services > Application > Queue Service menu. Go to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. It navigates to the Queue List page.
On the Queue list page, click the resource to add an accessible IP. It navigates to the Queue details page.
Click the Add IP Address button of the IP Access Allow List item. The Add IP Address popup opens.
After entering the IP to add to the IP Access Allow List, click the Confirm button.
Click the + button to add multiple IPs at once (up to 10).
When the popup notifying the addition of IP opens, click the Confirm button.
Exclude accessible IP
To exclude IPs registered in the IP Access Allow List, follow the steps below.
Click the All Services > Application > Queue Service menu. Navigate to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. Navigate to the Queue list page.
Click the resource whose accessible IP you want to exclude on the Queue List page. You will be taken to the Queue Details page.
Check the IP to exclude in the IP Access Allow List item, then click the Delete button at the top of the list.
You can also exclude IPs individually by clicking the Delete button next to the IP you want to remove in the IP Access Allow List.
When the popup notifying IP deletion opens, click the Confirm button.
Message Management
You can send or manage queue messages.
Send Message
To send a Queue message, follow the steps below.
Click the All Services > Application > Queue Service menu. Go to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. Move to the Queue list page.
Click the resource to send a Queue message on the Queue List page. You will be taken to the Queue Details page.
Click the Message Management tab on the Queue Details page.
Click the More > Send Message button at the top of the message list. The Send Message popup window will open.
After entering the message information to send in the Send Message popup window, click the Confirm button.
Category
Required
Detailed description
Message body
Required
Enter the message to send
Up to 262,144 bytes can be entered
Meta Information
Select
Select whether to use meta information to add to the message
If used, up to 10 Key, Value entries can be entered
Encryption
Required
Choose whether to use encryption
Create new: Go to the KMS page and create a new KMS encryption
Do not use: Do not use encryption
KMS encryption: Select when using KMS
Data Key reuse period: After selecting the unit period, enter the desired value
Minutes: 5 ~ 1,440
Hours: 1 ~ 24
Table. Message Sending Input Items
Delete individual messages
You can delete Queue messages individually.
To delete a message, follow the steps below.
Click the All Services > Application > Queue Service menu. Go to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. Navigate to the Queue List page.
On the Queue List page, click the resource to delete the Queue message. You will be taken to the Queue Details page.
Click the Message Management tab on the Queue Details page.
After selecting all messages to delete from the message list, click the More > Delete button at the top of the list.
You can also delete individually by clicking the Delete button at the far right of the message you want to delete in the message list.
If a popup window notifying message deletion opens, click the Confirm button.
Remove all messages
You can delete all messages in the queue.
Caution
Removed messages cannot be recovered.
If an identical removal request is already in progress, the messages will not be deleted. Try removing them again after a moment.
To delete all messages, follow the steps below.
Click the All Services > Application > Queue Service menu. Navigate to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. Go to the Queue List page.
On the Queue List page, click the resource whose messages you want to remove. Navigate to the Queue Details page.
Click the Message Management tab on the Queue Details page.
Click the More > Remove Message button at the top of the message list.
When the popup notifying message deletion opens, click the Confirm button.
Terminating the Queue Service
You can reduce operating costs by terminating an unused Queue Service. However, termination may immediately stop the currently operating service, so proceed only after fully considering the impact of a service interruption.
Caution
All messages are deleted upon termination and cannot be recovered.
To cancel the Queue Service, follow the steps below.
Click the All Services > Application > Queue Service menu. Navigate to the Service Home page of Queue Service.
Click the Queue menu on the Service Home page. It moves to the Queue List page.
After selecting the resource to terminate on the Queue List page, click the Cancel Service button.
You can also terminate a resource individually by navigating to its Queue Details page and clicking the Terminate Service button.
If a popup notifying service termination opens, click the Confirm button.
PrivateLink Service Integration
Queue Service can be used by integrating with PrivateLink Service, allowing direct communication with Queue Service from the user’s VPC instead of internet communication, thereby enhancing security.
PrivateLink Endpoint Create and Connect
Follow the steps below to integrate the Queue Service with the PrivateLink Service.
Check the PrivateLink Service ID of the Queue Service for creating a PrivateLink Endpoint.
The PrivateLink Service ID of Queue Service can be obtained by contacting us.
PrivateLink Service usage approval is automatically processed when connected.
Check the Security Group of the PrivateLink Endpoint to verify whether the target VM IP is registered.
Caution
When connecting via PrivateLink Endpoint, IAM policies and IP access control for authentication keys cannot be used.
8.2.3 - Overview
Service Overview
Queue Service is a service that efficiently manages and delivers messages or tasks, supporting message transmission between systems. This service smooths the data flow between the Producer that generates messages and the Consumer that receives messages, and provides a FIFO (First-In-First-Out) function that guarantees message order. Through this, it distributes system load caused by messages, allowing efficient message management in microservice architectures or event-driven systems.
Features
Efficient message processing : By processing and managing the simultaneous sending and receiving of a large number of messages, you can efficiently handle the message processing tasks of the user system.
Fast Service Processing : Producer and Consumer operate independently of each other, allowing for improved responsiveness and processing speed.
Message Order Guarantee : Ensures the order of received messages to maintain data consistency.
Strong security and reliability : Protects sensitive information through encryption during message transmission and storage, and provides reliable message management.
Service Diagram
Figure. Queue Service Diagram
Provided Features
Queue Service provides the following features.
Queue creation: Create a Queue of type basic or FIFO that guarantees message order, depending on the message reception handling method.
If using FIFO type, the Queue Service sorts messages in order of receipt time.
Message Transmission: The Producer sends the message to be delivered to the Consumer to the Queue.
Message Reception: Consumer receives the Producer’s message from the Queue.
Message Management: Check and manage messages stored in the Queue.
Message Encryption: Encrypt messages within the Queue by integrating with the KMS service.
We support preventing message exposure by configuring message encryption.
Components
Producer
Create and send messages using Queue Service.
Consumer
Receive and process messages from the Queue Service.
Message Manager
You can check the loaded messages in the Queue Service and manage them, such as deleting them.
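The Producer/Consumer flow above can be modeled with a minimal in-memory FIFO sketch. This is illustrative only; the actual Queue Service is a managed, networked queue.

```python
from collections import deque

queue = deque()          # stands in for a FIFO-type Queue

def produce(message):
    # Producer: send a message to the queue
    queue.append(message)

def consume():
    # Consumer: receive the oldest message first (FIFO order guarantee)
    return queue.popleft() if queue else None

produce("first")
produce("second")
```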
Region-specific provision status
Queue Service can be provided in the environment below.
Region
Availability
Korea West 1 (kr-west1)
Provided
Korea East 1 (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Queue Service regional availability status
Pre-service
Queue Service has no preceding service.
8.2.3.1 - ServiceWatch Metrics
Queue Service sends metrics to ServiceWatch. The metrics provided by default monitoring are data collected at a 1‑minute interval.
Reference
Refer to the ServiceWatch guide for how to check metrics in ServiceWatch.
Basic Indicators
The following are the default metrics for the Queue Service namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. Queue Service basic metrics
8.2.4 - CLI Reference
CLI Reference
8.2.5 - API Reference
API Reference
8.2.6 - Release Note
Queue Service
2025.12.16
NEW Official Service Version Release
Queue Service has been officially released.
Through Queue Service, you can distribute system load caused by messages and efficiently manage messages in microservice architectures or event-driven systems.
Message transmission and reception operate independently, improving responsiveness and processing speed.
9 - Security
Based on the largest and longest accumulated security service operation experience, we provide automated security services tailored to various customer environments.
9.1 - Key Management Service
9.1.1 - Overview
Service Overview
Key Management Service(KMS) is a service that easily creates encryption keys and safely stores/manages them to securely protect important application data.
The user encrypts and decrypts data with an encryption key, and the encryption keys themselves are managed reliably through a centralized, hierarchically encrypted key scheme.
Provided Features
Key Management Service provides the following functions.
Key Management: KMS can create/delete and manage customer-managed keys. Users directly generate data keys that encrypt data using the master key created by KMS.
Key Permission Management: You can control and manage usage permissions for the master key based on custom policies.
Key Lifecycle Management: Through key rotation, you can generate new encrypted data for the master key without creating a new key, and the key rotation interval can be set according to customer policy. By lifecycle management, encryption keys that are no longer used can be deactivated or deleted, safely protecting data from cryptographic threats.
Platform Managed Key: Check item??
Components
Master Key
The master key is used to generate data keys that are used to encrypt data, and depending on the purpose, you can generate symmetric keys (encryption/decryption (AES), generation/verification (HMAC)) and asymmetric keys (encryption/decryption and signing/verification (RSA), signing/verification (ECDSA)). With proper master key management, you can encrypt data keys to protect frequently used data keys during operation.
Master key is a key generated through KMS product service creation in the Samsung Cloud Platform Console.
Data Key
Data keys are used to encrypt actual data and are generated for each target service that performs encryption. This ensures that even if one data key is compromised, services encrypted with other data keys are not affected.
HSM (Hardware Security Module)
Stores the root key of the KMS system domain. The master key is generated through the root key stored in an HSM (Hardware Security Module) that complies with the FIPS 140-2 Lv3 standard, and is safely distributed and stored in the KMS for protection.
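The key hierarchy above (HSM root key protects the master key, which protects per-service data keys) can be illustrated with a toy wrapping scheme. Real KMS uses proper AES key wrapping inside the HSM boundary; the XOR-keystream transform below is a stand-in for illustration only.

```python
import hashlib
import os

master_key = os.urandom(32)   # held by KMS; protected by the HSM root key
data_key = os.urandom(32)     # generated per target service

def wrap(master: bytes, key: bytes) -> bytes:
    # Toy stand-in for key wrapping: XOR with a keystream derived from
    # the master key. Applying the same transform twice unwraps the key.
    stream = hashlib.sha256(master).digest()
    return bytes(a ^ b for a, b in zip(key, stream))

wrapped_data_key = wrap(master_key, data_key)   # safe to store alongside the data
```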
Constraints
Samsung Cloud Platform’s Key Management Service limits the number of keys generated as follows.
Item
Detailed description
Quota
KMS Key
Number of KMS Keys created per region
10000
KMS Validation Password Key
Number of public authentication algorithm keys that can be generated per Account
100
Table. Key Management Service constraints
Reference
KMS keys generated as a regional service can only be used within the region.
The restrictions on the public authentication algorithm key apply only to the KR SOUTH region.
Preceding Service
Key Management Service has no preceding service.
9.1.2 - How-to guides
Users can enter the required information for the Key Management Service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Reference
Key Management Service provides the following two key services.
Customer Managed Key: Add content.
Platform Managed Key: Add content.
Create a customer-managed key
You can create and use a customer-managed key in the Samsung Cloud Platform Console.
To create a customer-managed key, follow the steps below.
Click the All Services > Security > Key Management Service menu. Go to the Service Home page of Key Management Service.
Click the Customer Managed Key Creation button on the Service Home page. You will be taken to the Customer Managed Key Creation page.
On the Customer Managed Key Creation page, enter the information required to create the service along with any additional information.
Enter or select the required information in the Service Information Input area.
Category
Required
Detailed description
Key name
Required
Enter key name
Public Authentication Algorithm
Select
When Use is selected, you can generate encryption keys that meet public encryption standards
The Public Authentication Algorithm option is available only in the KR SOUTH region
The Public Authentication Algorithm provides the ARIA algorithm, which has passed security verification through the Korean cryptographic module verification system
Purpose
Required
Select the key’s purpose and encryption method
If you do not select the use of public authentication algorithms, choose among encryption/decryption (AES-256), encryption/decryption and signing/verification (RSA-2048), signing/verification (ECDSA), generation/verification (HMAC)
Automatic rotation
Select
Select whether to use automatic rotation of the key
If Use is selected, the internal algorithm of the generated key is converted to a different value and applied at each set rotation period
The rotation period can be set to a value between 1 and 730 days. If no rotation period is entered, it defaults to 90 days
Description
Selection
Enter additional key information
Table. Customer Managed Key Service Information Input Items
Enter or select the required information in the Additional Information Input area.
Category
Required
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Customer Managed Key Additional Information Input Items
Check the detailed information and estimated billing amount in the Summary panel, then click the Create button.
When creation is complete, check the created resource on the Customer Managed Key List page.
Reference
When selecting a public authentication algorithm, you can create up to 100 customer-managed keys.
Check detailed information of customer-managed key
Customers can view and edit the full resource list and detailed information of customer-managed keys. The Customer Managed Key Details page consists of the Detailed Information, Tag, and Work History tabs.
Reference
If the status of the customer-managed key service is Creating, you cannot navigate to the detail page because the service is being created.
If it remains in Creating state after a certain amount of time has passed, delete the key and recreate it.
To view detailed information about a customer-managed key, follow these steps.
Click the All Services > Security > Key Management Service menu. Navigate to the Service Home page of Key Management Service.
Click the Customer Managed Key menu on the Service Home page. Navigate to the Customer Managed Key List page.
Click the resource to view detailed information on the Customer Managed Key List page. It navigates to the Customer Managed Key Details page.
At the top of the Customer Managed Key Details page, status information and descriptions of additional features are displayed.
Category
Detailed description
Status
Displays the status of the customer-managed key
Active: available/activated
Stop: disabled/deactivated
To be terminated: scheduled for deletion
Creating: in progress/creation error (immediate retry possible)
Key Rotation
Button to manually rotate the generated key
Key Deactivation
Button to deactivate the created key
Service Termination
Button to terminate the service
When in the To be terminated state, a Cancel Termination button is displayed
Table. Customer-managed key status information and additional functions
Detailed Information
On the Customer Managed Key Details page, you can view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Key name
Name of the generated key
Public authentication algorithm
Whether to use public authentication algorithm
Purpose
Purpose and encryption method of keys such as encryption/decryption and signing/verification
Current version
Current version of the generated key
The version increments by 1 when the key is rotated
Auto rotation
Key auto rotation usage
Click the Edit icon to edit
Next rotation date
Displays the next rotation date of the key according to the rotation period
The key is automatically rotated on that date
Rotation Period
Rotation period applied when auto rotation is used
Description
Show additional description for the key
Click the Edit icon to edit
Table. Customer Managed Key Detailed Information Tab Items
Tag
On the Customer Managed Key Details page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
Can view each tag's Key and Value information
Up to 50 tags can be added per resource
When entering tags, search and select from the existing list of Keys and Values
Table. Customer Managed Key Tag Tab Items
Work History
You can view the operation history of the selected resource on the Customer Managed Key Details page.
Category
Detailed description
Work History
Task Execution Details
Displays log entries for the encryption, decryption, signing, verification, data key generation, and rewrap APIs
Task Date/Time
Task Execution Date/Time
Resource Type
Resource Type
Resource Name
Resource Name
Work Result
Task Execution Result (Success/Failure)
Operator Information
Information of the user who performed the task
Table. Customer Managed Key Operation History Tab Detailed Information Items
Managing customer-managed keys
You can create a new version of a registered key or change its usage status.
Setting up customer-managed key rotation
Key rotation is a function that converts the internal algorithm of a generated key to a different value.
Reference
When rotating the key, only the master key value changes; the ciphertext and plaintext values of previously generated data keys do not change.
Even after key rotation, the master key retains its previous version data, so decryption performed via the master key is unaffected and the values of data keys in use do not change.
Note that to wrap data with the changed master key (decrypt, then re-encrypt), call the rewrapData API, which applies the rotated key.
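The re-wrap step can be sketched as follows. This is a minimal illustration: `call_api` and `rewrap_endpoint` are hypothetical placeholders, so supply your own HTTP helper and the rewrapData endpoint from the OpenAPI guide.

```python
def rewrap_data_key(call_api, rewrap_endpoint, key_id, sealed_key):
    """Re-wrap an encrypted data key under the current master key version.

    call_api and rewrap_endpoint are placeholders: supply your own HTTP
    helper and the rewrapData endpoint from the OpenAPI guide.
    """
    # KMS decrypts the data key with the previous master key version and
    # re-encrypts it with the current one; the plaintext never leaves KMS.
    data = {"cipherText": sealed_key}
    resp = call_api(f"{rewrap_endpoint}{key_id}", data)
    return resp.get("cipherText")
```

The stored ciphertext of the data key is then replaced with the returned value; the data encrypted with that data key does not need to be re-encrypted.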
To create a new version of the generated customer-managed key (key rotation), follow the steps below.
Click the All Services > Security > Key Management Service menu. Navigate to the Service Home page of Key Management Service.
Click the Customer Managed Key menu on the Service Home page. You will be taken to the Customer Managed Key List page.
On the Customer Managed Key List page, click the resource to view detailed information. You will be taken to the Customer Managed Key Details page.
On the Customer Managed Key Details page, click the Key Rotation button. The Key Rotation alert window will open.
Click the Confirm button in the Key Rotation alert window.
Enabling Customer Managed Key
You can set whether the selected key is used.
Reference
If you change the key to a disabled state, users who use that key will no longer be able to use the key.
To set the activation/deactivation status of the generated customer-managed key, follow the steps below.
Click the All Services > Security > Key Management Service menu. Navigate to the Service Home page of Key Management Service.
Click the Customer Managed Key menu on the Service Home page. Navigate to the Customer Managed Key List page.
Click the resource to view detailed information on the Customer Managed Key List page. You will be taken to the Customer Managed Key Details page.
On the Customer Managed Key Details page, click the Key Activation/Key Deactivation button. The Key Activation/Key Deactivation alert window will open.
Click the Confirm button in the Key Activation/Key Deactivation alert window.
Encryption case using Key Management Service
The example procedure for encrypting and storing important user application data by issuing a data key from KMS is as follows.
When the application starts, obtain a data key using the KMS master key information, then encrypt the secure data on the client side with the plaintext data key and store it.
The data key is stored in the database in an encrypted form with the master key.
When performing secure data decryption, retrieve the data key stored in the database and request decryption using the KMS master key information.
The encryption/decryption procedure using the Key Management Service key is explained with the following diagram.
Encryption
Figure. KMS Encryption Procedure Example
Decryption
Figure. KMS Decryption Procedure Example
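The procedure above can be sketched end to end with a stand-in KMS client. Everything here is illustrative: `ToyKMS` is a hypothetical mock of the KMS role, and the XOR cipher merely stands in for the AES encryption shown in the example code section that follows.

```python
import os

class ToyKMS:
    """Hypothetical stand-in for KMS; the master key never leaves it."""
    def __init__(self):
        self._master = os.urandom(16)

    def _xor(self, key, data):
        # Toy XOR cipher standing in for real wrapping/unwrapping
        return bytes(a ^ b for a, b in zip(data, key * (len(data) // len(key) + 1)))

    def create_data_key(self):
        # Returns (plaintext data key, data key wrapped with the master key)
        plaintext_key = os.urandom(16)
        return plaintext_key, self._xor(self._master, plaintext_key)

    def decrypt(self, wrapped_key):
        return self._xor(self._master, wrapped_key)

def local_encrypt(key, data):
    # Client-side toy cipher (symmetric XOR; use AES in practice)
    return bytes(a ^ b for a, b in zip(data, key * (len(data) // len(key) + 1)))

kms = ToyKMS()
plain_key, wrapped_key = kms.create_data_key()    # 1. issue a data key at startup
ciphertext = local_encrypt(plain_key, b"SECRET")  # 2. encrypt client-side
stored = (ciphertext, wrapped_key)                # 3. store the wrapped key with the data

recovered_key = kms.decrypt(stored[1])            # 4. recover the data key via KMS
assert local_encrypt(recovered_key, stored[0]) == b"SECRET"  # 5. decrypt locally
```

The point of the pattern is the same as in the diagram: only the wrapped (encrypted) data key is persisted, and the master key stays inside KMS for both wrapping and unwrapping.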
Terminating customer-managed keys
You can terminate unused customer-managed keys.
Caution
If you terminate the key, no requests or functions of the customer-managed key can be used, and the key is permanently deleted either immediately (immediate termination) or after 72 hours (scheduled termination).
To cancel a customer-managed key, follow the steps below.
Click the All Services > Security > Key Management Service menu. Navigate to the Service Home page of Key Management Service.
Click the Customer Managed Key menu on the Service Home page. Navigate to the Customer Managed Key List page.
Click the resource to view detailed information on the Customer Managed Key List page. Navigate to the Customer Managed Key Details page.
On the Customer Managed Key Details page, click the Service Termination button. The Service Termination alert window will appear.
In the Service Termination alert window, select Immediate termination or Scheduled termination, confirm the details, then click the Confirm button.
When termination is complete, check on the Customer Managed Key List page whether the resource has been terminated.
When key deletion is completed, notifications are sent to both the user who created the key and the user who deleted it.
Reference
You can also terminate the selected key by clicking the Service Termination button in the More Options menu at the far right of the customer-managed key list.
To cancel the termination of a key scheduled for termination, click the Cancel Termination button on the Customer Managed Key List page or Details page.
When you click Confirm in the Cancel Termination alert window, the selected key is not deleted but is restored in a disabled state.
To reuse the key, click the Key Activation button on the Customer Managed Key Details page.
9.1.2.1 - Encryption Examples Using Key Management Service Keys
Encryption Examples Using Key Management Service Keys
These are code examples for implementing envelope encryption and data signing/verification using keys generated by KMS.
Reference
The code below is a simple reference example to help understand the Samsung Cloud Platform KMS.
Since only the functions required for KMS operation are described, executing it as is will cause an error. Be sure to modify and use it according to the user’s actual scenario.
Envelope Encryption
It presents an envelope encryption scenario, and you can view the Java, Go, and Python example code and results written according to the scenario.
Scenario
Obtain a Data Key to encrypt password information using envelope encryption.
Use the issued Data Key information to encrypt the password.
Encrypt the password and encrypted Data Key information using envelope encryption and save it as a JSON file.
Java Example Code
This is a Java example code written according to the presented scenario.
// URI
static String KMS_API_BASE_URI = {{ Reference the OpenAPI guide URL }};
// END POINT
static String KMS_API_DECRYPT = "/v1/kms/openapi/decrypt/%s";
static String KMS_API_CREATE_DATAKEY = "/v1/kms/openapi/datakey/%s";
// KEY ID
static String KEY_ID = {{Master Key ID}};
static void createEnvelop() throws Exception {
// Request creation of a new data key
String encryptedDataKey = getDataKey();
// Data to be encrypted
String example_json_data = "{\"PASSWORD\":\"SECRET_CREDENTIAL\"}";
// Encrypted data envelope (Envelope encryption)
String envelope = encryptData(example_json_data, encryptedDataKey);
// In this example code, the encrypted data envelope is saved to a file
File envelopeFile = new File("envelope.json");
OutputStream os = new BufferedOutputStream(new FileOutputStream(envelopeFile));
try {
os.write(envelope.getBytes());
} finally {
os.close();
}
}
static String getDataKey() {
String endPoint = String.format(KMS_API_CREATE_DATAKEY, KEY_ID);
String url = KMS_API_BASE_URI + endPoint;
JSONObject data = new JSONObject();
data.put("key_type", "plaintext");
JSONObject respJsonObject = callApi(endPoint, data.toJSONString());
return respJsonObject.get("ciphertext").toString();
}
static String encryptData(String obj, String encryptedDataKey) throws Exception {
Map<String, String> envelope = new HashMap<>();
// Data key decryption
String dataKey = decryptDataKey(encryptedDataKey);
// Encrypt the data with the decrypted data key (AES-CBC)
// Cipher class usage (users can use the encryption algorithm they are already using)
SecretKey secretKey = new SecretKeySpec(decodeBase64(dataKey), "AES");
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.ENCRYPT_MODE, secretKey);
byte[] iv = cipher.getParameters().getParameterSpec(IvParameterSpec.class).getIV();
byte[] cipherText = cipher.doFinal(obj.toString().getBytes());
envelope.put("encryptedKey", encryptedDataKey);
envelope.put("cipherText", encodeBase64(cipherText));
envelope.put("iv", encodeBase64(iv));
return JSONValue.toJSONString(envelope);
}
static String decryptDataKey(String sealedKey) {
String endPoint = String.format(KMS_API_DECRYPT, KEY_ID);
JSONObject data = new JSONObject();
data.put("cipherText", sealedKey);
JSONObject respJsonObject = callApi(endPoint, data.toJSONString());
String plaintext = (respJsonObject.get("plaintext")).toString();
return plaintext;
}
Go example code
This is a Go example code written according to the presented scenario.
// URI
const KMS_API_BASE_URI = {{ Reference the OpenAPI guide URL }}
// END POINT
const KMS_API_DECRYPT = "/v1/kms/openapi/decrypt/%s"
const KMS_API_CREATE_DATAKEY = "/v1/kms/openapi/datakey/%s"
// KEY ID
const KEY_ID = {{Master Key ID}}
func createEnvelop() {
// Request new data key creation
encryptedDataKey := getDataKey()
// data to be encrypted
example_json_data := "{\"PASSWORD\":\"SECRET_CREDENTIAL\"}"
// encrypted data envelope (Envelope encryption)
envelope := encryptData(example_json_data, encryptedDataKey)
// In this example code, the encrypted data envelope is saved to a file
file, _ := os.Create("envelope.json")
defer file.Close()
file.WriteString(envelope)
}
func getDataKey() string {
endPoint := fmt.Sprintf(KMS_API_CREATE_DATAKEY, KEY_ID)
data := map[string]interface{}{
"key_type": "plaintext",
}
jsonData, _ := json.Marshal(data)
respJsonObject := callApi(endPoint, jsonData)
info := &KMSDatakeyInfo{}
json.Unmarshal([]byte(respJsonObject), info)
return info.DataKey
}
func encryptData(example_json_data string, encryptedDataKey string) string {
envelope := make(map[string]string)
// Data key decryption
dataKey := decryptDataKey(encryptedDataKey)
secretKey, _ := base64.StdEncoding.DecodeString(dataKey)
// Encrypt the data with the decrypted data key (AES in CFB mode in this example)
// Users can use the encryption algorithm they are already using
block, _ := aes.NewCipher(secretKey)
cipherText := make([]byte, aes.BlockSize+len(example_json_data))
iv := cipherText[:aes.BlockSize]
if _, err := io.ReadFull(rand.Reader, iv); err != nil {
panic(err)
}
mode := cipher.NewCFBEncrypter(block, iv)
mode.XORKeyStream(cipherText[aes.BlockSize:], []byte(example_json_data))
envelope["encryptedKey"] = encryptedDataKey
envelope["cipherText"] = base64.StdEncoding.EncodeToString(cipherText)
envelope["iv"] = base64.StdEncoding.EncodeToString(iv)
jsonString, _ := json.Marshal(envelope)
return string(jsonString)
}
func decryptDataKey(sealedKey string) string {
endPoint := fmt.Sprintf(KMS_API_DECRYPT, KEY_ID)
data := map[string]interface{}{
"cipherText": sealedKey,
}
jsonData, _ := json.Marshal(data)
respJsonObject := callApi(endPoint, jsonData)
info := &KMSDecryptInfo{}
json.Unmarshal([]byte(respJsonObject), info)
return info.DecryptedData
}
Python example code
This is a Python example code written according to the presented scenario.
# URI
KMS_API_BASE_URI = {{ Refer to the URL of the OpenAPI guide }}
# END POINT
KMS_API_DECRYPT = "/v1/kms/openapi/decrypt/"
KMS_API_CREATE_DATAKEY = "/v1/kms/openapi/datakey/"
# KEY ID
KEY_ID = {{Master Key ID}}
def create_envelop():
# Request new data key creation
encrypted_data_key = get_dataKey()
# Data to be encrypted
example_json_data = {"PASSWORD":"SECRET_CREDENTIAL"}
json_data_str = json.dumps(example_json_data)
# Encrypted Data Envelope (Envelope encryption)
envelope = encrypt_data(json_data_str,encrypted_data_key)
# In this example code, the encrypted data envelope is saved to a file
with open("envelope.json", "w") as file:
file.write(envelope)
def get_dataKey():
end_point = f"{KMS_API_CREATE_DATAKEY}{KEY_ID}"
data = {
"key_type": "plaintext"
}
response_object = call_api(end_point, data)
data_key = response_object.get("ciphertext", "")
return data_key
def encrypt_data(obj, encrypted_data_key):
envelope = {}
# Data key decryption
dataKey = decrypt_data_key(encrypted_data_key)
decoded_data_key = base64.b64decode(dataKey)
# Encrypt the data with the decrypted data key (AES-CBC)
# Users can use the encryption algorithm they are already using
iv = get_random_bytes(16)
cipher = AES.new(decoded_data_key, AES.MODE_CBC, iv)
data_to_encrypt = obj
data_bytes = data_to_encrypt.encode()
padded_data = pad(data_bytes, AES.block_size)
cipher_text = cipher.encrypt(padded_data).hex()
envelope["encryptedKey"] = encrypted_data_key
envelope["cipherText"] = cipher_text
envelope["iv"] = base64.b64encode(iv).decode()
return json.dumps(envelope)
def decrypt_data_key(sealed_key):
end_point = f"{KMS_API_DECRYPT}{KEY_ID}"
data = {}
data["cipherText"] = sealed_key
resp_json_object = call_api(end_point,data)
plaintext = resp_json_object.get("decryptedData")
return plaintext
Envelope Decryption
It presents a scenario for decrypting envelope-encrypted data, and you can view the Java, Go, and Python example code and results written according to the scenario.
Scenario
Decrypt the Data Key of the encrypted envelope file.
Decrypt the encrypted data of the envelope file using the decrypted Data Key.
Java Example Code
This is a Java example code written according to the presented scenario.
// URI
static String KMS_API_BASE_URI = {{ Refer to the OpenAPI guide URL }};
// END POINT
static String KMS_API_DECRYPT = "/v1/kms/openapi/decrypt/%s";
// KEY ID
static String KEY_ID = {{Master Key ID}};
static String getData() throws Exception {
// Encrypted data envelope (Envelope encryption)
String envelope = new String(Files.readAllBytes(Paths.get("envelope.json")));
JSONParser parser = new JSONParser();
JSONObject envelopeJson = (JSONObject) parser.parse(envelope);
String encryptedDataKey = envelopeJson.get("encryptedKey").toString();
String cipherText = envelopeJson.get("cipherText").toString();
String iv = envelopeJson.get("iv").toString();
return decryptData(cipherText, encryptedDataKey, iv);
}
Python Example Code
This is a Python example code written according to the presented scenario.
# URI
KMS_API_BASE_URI = {{ Refer to the OpenAPI guide URL }}
# END POINT
KMS_API_DECRYPT = "/v1/kms/openapi/decrypt/"
# KEY ID
KEY_ID = {{Master Key ID}}
def get_data():
# Open Encrypted Data Envelope (Envelope encryption)
with open("envelope.json", "r") as file:
envelope = file.read()
envelope_json = json.loads(envelope)
encrypted_data_key = envelope_json["encryptedKey"]
cipher_text = envelope_json["cipherText"]
iv = envelope_json["iv"]
return decrypt_data(cipher_text, encrypted_data_key, iv)
def decrypt_data(cipher_text, encrypted_data_key, iv):
data_key = decrypt_data_key(encrypted_data_key)
iv_bytes = base64.b64decode(iv)
decoded_data_key = base64.b64decode(data_key)
cipher_txt = bytes.fromhex(cipher_text)
cipher = AES.new(decoded_data_key, AES.MODE_CBC, iv_bytes)
plain_text_bytes = unpad(cipher.decrypt(cipher_txt), AES.block_size)
plain_text = plain_text_bytes.decode('utf-8')
return plain_text
def decrypt_data_key(sealed_key):
end_point = f"{KMS_API_DECRYPT}{KEY_ID}"
data = {}
data["cipherText"] = sealed_key
resp_json_object = call_api(end_point,data)
plaintext = resp_json_object.get("decryptedData")
return plaintext
Example code output
Displays the result value of the example code.
{"PASSWORD":"SECRET_CREDENTIAL"}
Use Data Signature
It presents a data signature usage scenario to ensure data integrity, and you can check the Java, Go, Python example code and results written according to the scenario.
Scenario
Call OpenAPI with the data to be signed and sign it.
The signed data is enveloped and saved as a json file.
Java Example Code
This is a Java example code written according to the presented scenario.
// URI
static String KMS_API_BASE_URI = {{ Refer to the OpenAPI guide URL }};
// END POINT
static String KMS_API_SIGN = "/v1/kms/openapi/sign/%s";
// KEY ID
static String KEY_ID = {{master key ID}};
static void signEnvelop() throws Exception {
// signature data envelope (Envelope encryption)
String envelope = sign();
// In this example code, the signature data envelope is saved to a file
File envelopeFile = new File("signEnvelope.json");
OutputStream os = new BufferedOutputStream(new FileOutputStream(envelopeFile));
try {
os.write(envelope.getBytes());
} finally {
os.close();
}
}
static String sign() {
Map<String, String> envelope = new HashMap<>();
String example_credential = "SCP KMS Sign Test!!!";
String endPoint = String.format(KMS_API_SIGN, KEY_ID);
JSONObject data = new JSONObject();
data.put("input", encodeToBase64(example_credential));
JSONObject respJsonObject = callApi(endPoint, data.toJSONString());
envelope.put("signature", respJsonObject.get("signature").toString());
if(respJsonObject.get("batch_results") != null) {
envelope.put("batch_results", respJsonObject.get("batch_results").toString());
}
return JSONValue.toJSONString(envelope);
}
Go example code
This is a Go example code written according to the given scenario.
// URI
const KMS_API_BASE_URI = {{ Reference the OpenAPI guide URL }}
// END POINT
const KMS_API_SIGN = "/v1/kms/openapi/sign/%s"
// KEY ID
const KEY_ID = {{Master Key ID}}
func signEnvelop() {
// signature data envelope (Envelope encryption)
envelope := sign()
// In this example code, the signature data envelope is saved to a file
file, _ := os.Create("signEnvelope.json")
defer file.Close()
file.WriteString(envelope)
}
func sign() string {
envelope := make(map[string]string)
example_credential := "SCP KMS Sign Test!!!"
endPoint := fmt.Sprintf(KMS_API_SIGN, KEY_ID)
data := map[string]interface{}{
"input": base64.StdEncoding.EncodeToString([]byte(example_credential)),
}
jsonData, _ := json.Marshal(data)
respJsonObject := callApi(endPoint, jsonData)
info := &KMSSignInfo{}
json.Unmarshal([]byte(respJsonObject), info)
envelope["signature"] = info.Signature
jsonString, _ := json.Marshal(envelope)
return string(jsonString)
}
Python Example Code
This is a Python example code written according to the given scenario.
# URI
KMS_API_BASE_URI = {{ Refer to the URL of the OpenAPI guide }}
# END POINT
KMS_API_SIGN = "/v1/kms/openapi/sign/"
# KEY ID
KEY_ID = {{Master Key ID}}
def sign_envelop():
# Signature Data Envelope (Envelope encryption)
envelope = sign()
# This example code saves the signature data envelope to a file
with open("signEnvelope.json", "w") as file:
file.write(envelope)
def sign():
envelope = {}
example_credential = "SCP KMS Sign Test!!!"
end_point = f"{KMS_API_SIGN}{KEY_ID}"
credential_bytes = example_credential.encode('utf-8')
data = {
"input": base64.b64encode(credential_bytes).decode('utf-8')
}
resp_json_object = call_api(end_point,data)
envelope["signature"] = resp_json_object.get("signature")
return json.dumps(envelope)
Use Data Verification
It presents a verification usage scenario for validating data integrity, and you can view the Java, Go, and Python example code and results written according to the scenario.
Scenario
Retrieve the signature value of the signed envelope file.
Verify the signed data and output the result.
Java example code
This is a Java example code written according to the presented scenario.
// URI
static String KMS_API_BASE_URI = {{ Reference the OpenAPI guide URL }};
// END POINT
static String KMS_API_VERIFY = "/v1/kms/openapi/verify/%s";
// KEY ID
static String KEY_ID = {{Master Key ID}};
static String getSign() throws Exception {
// signature data envelope (Envelope encryption)
String envelope = new String(Files.readAllBytes(Paths.get("signEnvelope.json")));
JSONParser parser = new JSONParser();
JSONObject envelopeJson = (JSONObject) parser.parse(envelope);
String signature = envelopeJson.get("signature").toString();
return verify(signature);
}
static String verify(String signature) {
String endPoint = String.format(KMS_API_VERIFY, KEY_ID);
JSONObject data = new JSONObject();
data.put("input", "U0NQIEtNUyBTaWduIFRlc3QhISE="); // Base64 of "SCP KMS Sign Test!!!"
data.put("signature", signature);
JSONObject respJsonObject = callApi(endPoint, data.toJSONString());
String valid = (respJsonObject.get("valid")).toString();
return valid;
}
Go example code
This is a Go example code written according to the presented scenario.
// URI
const KMS_API_BASE_URI = {{ Reference the OpenAPI guide URL }}
// END POINT
const KMS_API_VERIFY = "/v1/kms/openapi/verify/%s"
Example code output
Displays the result value of the example code.
{
"valid": true
}
9.1.2.2 - Platform Managed Key
Users can view detailed information of the platform-managed key automatically generated for service provision on Samsung Cloud Platform.
Reference
Platform-managed keys are created and managed directly by the CSP (Cloud Service Provider), so users cannot change or delete key attributes.
If other products within Samsung Cloud Platform encrypt using KMS keys, the CSP generates a platform-managed key and performs the encryption even when the user has not created a key directly in KMS.
Check detailed information of platform managed key
You can view the full resource list and detailed information of platform-managed keys. The Platform Managed Key Details page consists of the Detailed Information and Work History tabs.
To view detailed information about a platform-managed key, follow these steps.
Click the All Services > Security > Key Management Service menu. Navigate to the Service Home page of Key Management Service.
Click the Platform Managed Key menu on the Service Home page. Navigate to the Platform Managed Key List page.
Click the resource to view detailed information on the Platform Managed Key List page. You will be taken to the Platform Managed Key Details page.
At the top of the Platform Managed Key Details page, status information and descriptions of additional features are displayed.
Category
Detailed description
Status
Displays the status of the platform-managed key
Active: Available/Enabled
Table. Platform Managed Key Status Information
Detailed Information
You can view detailed information of the selected resource on the Platform Managed Key Details page.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creation time
Service creation time
Key name
Name of the generated key
Description
Display additional description for the key
Click the Edit icon to edit
Table. Platform Managed Key Detailed Information Tab Items
Work History
You can view the operation history of the selected resource on the Platform Managed Key Details page.
Category
Detailed description
Work History
Task Execution Details
Displays log entries for the encryption, decryption, signing, verification, data key generation, and rewrap APIs
Work date and time
Task execution date and time
Resource Type
Resource Type
Resource Name
Resource Name
Work Result
Task Execution Result (Success/Failure)
Operator Information
Information of the user who performed the task
Table. Platform Managed Key Operation History Tab Detailed Information Items
9.1.3 - API Reference
API Reference
9.1.4 - CLI Reference
CLI Reference
9.1.5 - Release Note
Key Management Service
2026.03.19
FEATURE Platform Managed Key Service Provision
In addition to the ‘customer-managed key’ that the user creates directly, a ‘platform-managed key’ service that the CSP (Cloud Service Provider) creates and manages directly is also provided.
If other products within Samsung Cloud Platform encrypt using KMS keys, the user can encrypt with a platform-managed key generated directly by CSP without creating a key directly in KMS.
2025.10.23
FEATURE Log expansion provision and notification feature improvement
Improved by segmenting the work history of API calls such as encryption and decryption into individual API units and logging them, making tracking management of API calls easier.
When an encryption key is deleted, it provides a notification not only to the user who deleted the key but also to the key creator, and additionally includes the region name where the encryption key is located in the notification.
2025.07.01
FEATURE Additional encryption method provided
Provides additional generation/verification (HMAC) encryption method used for creating and verifying hash-based message authentication codes.
2025.02.27
NEW Key Management Service Official Version Release
Launched an encryption key management service (Key Management Service) to securely protect important data of customer applications.
You can generate, provide, and manage encryption keys for various purposes (encryption/decryption, signing/verification).
9.2 - Config Inspection
9.2.1 - Overview
Service Overview
Config Inspection is a service that diagnoses the security level of console settings for each service of Samsung Cloud Platform. It provides a security checklist organized by areas such as IAM, Networking, Database, Logging, and checks the current status via API calls to see whether the recommended security settings for each diagnostic item are applied.
Users can create a diagnostic target through service creation and then request a diagnosis, and the diagnosis request results can be checked via the Report. The Report provides the diagnosis request history and item-specific diagnosis results, and for diagnostic items that require the user’s final confirmation or action, detailed results including the resource information corresponding to each item and a remedial guide can be viewed.
Figure. Config Inspection Diagram
Provided Features
Config Inspection provides the following features.
Console Diagnosis: You can diagnose the security level by calling the Console API using the authentication key method.
Diagnosis Target Management: Through service creation, you can create and manage the user’s Samsung Cloud Platform account as a diagnosis target.
Diagnosis Request: In the resource detail screen, you can request a diagnosis by clicking the Diagnosis Request button.
Diagnostic Result Management: In Report, you can view the list of diagnosis requests and detailed diagnosis results, and download them as an Excel file.
Components
Checklist
The checklist is a collection of diagnostic items that serve as the basis for diagnostic results, and the checklist currently provided by Config Inspection is as follows.
Cloud
Checklist Name
Number of Items
Samsung Cloud Platform
Best Practice
18
Table. Config Inspection checklist
The detailed diagnostic items of the Best Practice checklist provided by Samsung Cloud Platform are as follows.
Area
Diagnostic Item
Networking
Private subnets that do not require internet access should not use a NAT Gateway.
Network integration services must use a Firewall.
Security Groups should register only the necessary rules per IP and port.
Remote access ports for each protocol must allow connections by specifying the IPs that need access.
The Firewall of network integration products should register only the necessary rules per IP/port.
Container
You must use private endpoint access control for the Kubernetes cluster and allow access only to authorized resources.
You must use private endpoint access control for the Container Registry and allow access only to authorized resources.
Enable automatic vulnerability scanning for Container Registry images.
Do not use a vulnerability scan exclusion policy for Container Registry images.
Restrict pulling of unscanned images from the Container Registry.
Restrict pulling of vulnerable images from the Container Registry.
Database
SQL-level audit logs must be stored.
Logging
Activate the Trail service of Logging&Audit and set the scope to all regions/resource types/users.
Set the log file verification of Logging&Audit Trail to enabled.
Security Group must have logging enabled.
Network integration products must enable Firewall logging.
Enable NAT logging for the Internet Gateway.
Enable control plane logging for the Kubernetes Engine cluster.
Table. Samsung Cloud Platform Best Practice checklist composition items
Report
In the Config Inspection Report, you can view the diagnostic results in the order of result list, result details, and item details.
Category
Detailed description
Diagnosis Result List
All diagnosis request history within Account
Completed: Diagnosis request has been successfully completed
Click the instance to view detailed diagnosis result
Error: Diagnosis request was not successfully completed
If the diagnosis result is an error, a detailed diagnosis result is not provided.
The cause of the error can be found in Config Inspection detailed information
Diagnosis Result Details
Result of a successfully completed diagnosis request (diagnosis item list)
PASS: No vulnerable resources exist in the diagnosis item.
FAIL: Vulnerable resources exist in the diagnosis item.
CHECK: Final user confirmation is required regarding vulnerability.
ERROR: There is an error with user/authentication key permissions or API call.
N/A: No resources correspond to the diagnosis item.
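For illustration, item-level results such as those above can be tallied once exported. The record structure below is a hypothetical example for the sketch, not the actual report or Excel format:

```python
from collections import Counter

# Hypothetical item-level results as they might appear in an exported report
results = [
    {"area": "Networking", "item": "Security Group rules", "result": "PASS"},
    {"area": "Logging", "item": "Trail enabled", "result": "FAIL"},
    {"area": "Database", "item": "SQL audit logs", "result": "CHECK"},
]

# Count results per status, and collect items needing action (FAIL or CHECK)
summary = Counter(r["result"] for r in results)
needs_action = [r["item"] for r in results if r["result"] in ("FAIL", "CHECK")]
print(dict(summary))   # {'PASS': 1, 'FAIL': 1, 'CHECK': 1}
print(needs_action)    # ['Trail enabled', 'SQL audit logs']
```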
The user can enter the required information for the Config Inspection service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Create Authentication Key
To create and use the Config Inspection service on the Samsung Cloud Platform Console, a prior authentication key generation is required.
Authentication key creation can be done from My Menu > My Info. > Authentication Key Management > Create Authentication Key. For more details, refer to Authentication Key Management.
Reference
The expiration period of the authentication key is up to 365 days.
To use an authentication key without an expiration date, set it to permanent when creating it.
Config Inspection Create
You can create and use the Config Inspection service in the Samsung Cloud Platform Console.
Reference
The user must belong to the AdministratorGroup user group in order to use the services provided by the Config Inspection service properly.
To create a Config Inspection, follow these steps.
Click the All Services > Security > Config Inspection menu. You will be taken to the Config Inspection Service Home page.
On the Service Home page, click the Create Config Inspection button. You will be taken to the Create Config Inspection page.
On the Create Config Inspection page, enter the inputs required to create the service and select detailed options.
Enter or select the required information in the Service Information Input area.
Category
Required or not
Detailed description
Diagnosis Type
-
Automatically set with Console
Cloud
Required
Select cloud to diagnose
SCP: Samsung Cloud Platform
AWS: Amazon Web Services
Azure: Microsoft Azure
Detailed input fields vary depending on the selected cloud type
Diagnosis Target > Diagnosis Name
Required
Name to distinguish the diagnosis target
Use the entered value as the resource name
Enter within 25 characters using English letters, numbers, and special characters(-, _)
Diagnostic Target > Diagnostic Account
Required
Console information for the diagnostic target
Select the Account ID to diagnose from the list
If the same Account ID is selected, duplicate application occurs and additional charges will be incurred
If AWS is selected, enter the Account ID (12 digits) in the diagnostic account
If Azure is selected, enter the Subscription ID (36 characters including letters, numbers, and special characters) in the diagnostic account
Diagnosis Schedule > Checklist
Required
Automatically set when Use Diagnosis Schedule is selected
Diagnosis Schedule > Diagnosis Cycle
Required
Select Diagnosis Cycle
The diagnosis is executed on the selected date according to the specified cycle
If Monthly is selected, the diagnosis may not be performed on the selected date
e.g., if the 31st of each month is selected, the diagnosis is not performed in February, which has no such date
Diagnosis Schedule > Start Time
Required
Select Diagnosis Start Time
Set the hour and minute information to start the diagnosis
Authentication Key
Required
Select authentication key to use for Open API calls
Click the **Select** button and choose the appropriate authentication key from the list in the **Select Authentication Key** popup.
If there are no selectable authentication keys, click **Authentication Key Management** to create a new authentication key.
For detailed information about authentication keys, refer to [Manage Authentication Keys](/userguide/management/iam/how_to_guides/myinfo.md/#인증키-관리하기).
Plan
Select
Select the plan to use
Standard: charge based on the number of diagnoses
Monthly flat-rate: charge a fixed amount each month regardless of the number of diagnoses (based on up to 30 diagnoses per month)
The plan cannot be changed after service application
Table. Config Inspection Service Information Input Items
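The input constraints described in the table above can be sketched as a few validators. This is an illustrative sketch only; the names and helpers below are assumptions, not the console's actual validation logic.

```python
import calendar
import re

# Illustrative validators for the constraints listed above (assumptions).
DIAGNOSIS_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,25}$")   # letters, digits, -, _; max 25
AWS_ACCOUNT_ID_RE = re.compile(r"^\d{12}$")                # 12-digit AWS Account ID
AZURE_SUBSCRIPTION_RE = re.compile(                        # 36-character Subscription ID
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)

def monthly_run_occurs(year: int, month: int, day: int) -> bool:
    """A Monthly cycle scheduled for `day` is skipped in months without that date."""
    return day <= calendar.monthrange(year, month)[1]
```

For example, a diagnosis scheduled for the 31st of each month runs in January but is skipped in February, since `monthly_run_occurs(2025, 2, 31)` is `False`.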
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Config Inspection Additional Information Input Items
In the Summary panel, check the detailed information and estimated billing amount, and click the Create button.
When creation is complete, check the created resources on the Config Inspection List page.
Config Inspection Check detailed information
The Config Inspection service allows you to view and edit the full resource list and detailed information. The Config Inspection Details page consists of the Details, Tags, and Work History tabs.
To view detailed information of the Config Inspection service, follow the steps below.
Click the All Services > Security > Config Inspection menu. You will be taken to the Config Inspection Service Home page.
Click the Config Inspection menu on the Service Home page. You will be taken to the Config Inspection list page.
On the Config Inspection List page, click the resource to view detailed information. You will be taken to the Config Inspection Details page.
The Config Inspection Details page displays status information and additional feature information, and consists of the Details, Tags, and Work History tabs.
Category
Detailed description
Status
Displays the status of Config Inspection
Ready: When there is no diagnostic request after service creation (diagnostic request possible)
In Progress: When a diagnostic request is in progress (diagnostic request/service termination not possible)
Error: When an error occurs in the diagnostic request (diagnostic request possible)
Completed: When the diagnostic request is completed successfully (diagnostic request possible)
Diagnosis Request
Button that can perform Console diagnosis
Service Cancellation
Button to cancel the service
Table. Config Inspection status information and additional functions
Detailed Information
Config Inspection List page allows you to view detailed information of the selected resource and, if necessary, edit the information.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation date/time
Date/time the service was created
Editor
User who modified the service information
Modification Date/Time
Date/Time when service information was modified
Diagnosis Type
Diagnosis types provided by the service
Cloud
Diagnosis Target Types
Diagnosis Target
Console information of the diagnostic target
Provides the diagnostic name and diagnostic account information of the diagnostic target
If the diagnostic target is AWS or Azure, you can click the Edit icon to modify the diagnostic account
Plan
Selected plan type
Recent diagnosis date/time
Last executed diagnostic request date/time
Recent Diagnosis Result
Last executed diagnosis request result
Completed: The diagnosis request has been completed successfully
Error: The diagnosis request was not completed successfully
UNAUTHORIZED: Key permission used for the diagnosis request needs to be verified
INVALID_INPUT_VALUE: Input values such as diagnosis account need to be verified
CONNECTION_FAIL: Console access control settings need to be verified
ETC: Other errors such as diagnosis engine require inquiry through the service desk
※ Diagnosis results can be viewed in the Security > Config Inspection > Report menu
Authentication Key
User’s authentication key registered at service creation
Access Key, user, status information provided
Access Key information and edit icon are displayed only to the user who created the authentication key
Click the Edit icon to change the authentication key
If the authentication key is deleted, it is shown as - status; if expired, shown as Expired
Authentication key information (Access Key, status) of resources created by other users is displayed as -
Diagnosis Schedule
Display selected diagnosis schedule information
If the diagnosis target is SCP, you can click the Edit icon to change the diagnosis schedule.
Table. Config Inspection Detailed Information Tab Items
Tag
Config Inspection List page allows you to view the tag information of selected resources, and you can add, modify, or delete them.
Category
Detailed description
Tag List
Tag List
You can view the Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Config Inspection Tag Tab Items
Work History
On the Config Inspection List page, you can view the operation history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work date and time, Resource ID, Resource name, Work details, Event topic, Work result, Check worker information
Table. Config Inspection Work History Tab Items
Config Inspection Resource Management
If you need to view the status of Config Inspection resources and request a diagnosis, you can perform the task on the Config Inspection List or Config Inspection Details page.
Edit Authentication Key
You can select the authentication key to use for diagnosis for each diagnosis target.
To modify the service’s authentication key, follow the steps below.
Click the All Services > Security > Config Inspection menu. Go to Config Inspection’s Service Home page.
Click the Config Inspection menu on the Service Home page. You will be taken to the Config Inspection list page.
On the Config Inspection List page, click the resource whose authentication key you want to edit. You will be taken to the Config Inspection Details page.
Check the authentication key and click the Edit icon. The Edit Authentication Key popup window opens.
In the Edit Authentication Key popup window, select the authentication key to use and click the Confirm button.
Category
Detailed description
Authentication Key
Authentication Key Details
Creation Date/Time
Authentication Key Creation Date
Expiration Date and Time
Authentication Key Expiration Date
Status
Status of the authentication key
Use: Usable state
Expired: Expired usage period state
Table. Authentication Key Edit Popup Items
Reference
If the authentication key is deleted, it is displayed as - status.
The authentication key information (authentication key, status) of resources created by other users is displayed as -.
Request Diagnosis
You can request a console diagnosis based on the configured checklist.
To request a console diagnosis, follow the steps below.
Click the All Services > Security > Config Inspection menu. Go to Config Inspection’s Service Home page.
Click the Config Inspection menu on the Service Home page. You will be taken to the Config Inspection list page.
On the Config Inspection List page, click the resource for which to request a diagnosis. You will be taken to the Config Inspection Details page.
Click the Diagnostic Request button on the Config Inspection Details page. The Diagnostic Request popup will open.
In the Diagnosis Request popup window, enter the information required for diagnosis and click the Confirm button.
The items in the Diagnosis Request popup window vary depending on the selected cloud.
Category
Detailed description
Console Access Method
Fixed to authentication key method as the way to access the Console
Checklist
Fixed to Best Practice when SCP is selected
Authentication Key
If SCP is selected, choose the pre-generated authentication key
Access Key
Enter Access Key if AWS is selected
Secret Key
Enter Secret Key if AWS is selected
Client ID
Enter Client ID if Azure is selected
Client Secret
Enter Client Secret if Azure is selected
Tenant ID
Enter Tenant ID if Azure is selected
Table. Diagnosis Request Popup Items
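The per-cloud credential requirements in the table above can be summarized in a small lookup. This dictionary layout is an illustrative assumption, not the service's actual API schema.

```python
# Required credential fields per diagnosis target, as listed in the
# Diagnosis Request popup above (field names are assumptions).
REQUIRED_CREDENTIALS = {
    "SCP": ["authentication_key"],                        # pre-generated authentication key
    "AWS": ["access_key", "secret_key"],
    "Azure": ["client_id", "client_secret", "tenant_id"],
}

def missing_fields(cloud: str, provided: dict) -> list:
    """Return the credential fields still missing for the given cloud."""
    return [f for f in REQUIRED_CREDENTIALS[cloud] if not provided.get(f)]
```

For example, an AWS diagnosis request with only an Access Key entered would still be missing the Secret Key.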
On the Config Inspection List page, check the Status value.
When the diagnostic request is completed, the status value is displayed as Completed or Error.
Completed: You can view the diagnosis request results in the diagnosis results menu. For more details, see Report Management.
Reference
For detailed information on the prerequisite settings required to run diagnostics per console, refer to Set Up Prerequisites.
Config Inspection Cancel
You can cancel the unused Config Inspection service. However, if you cancel Config Inspection, all stored diagnostic data will be deleted.
Caution
If you cancel the resource, all diagnostic data will be deleted, and you will not be able to view the diagnostic results in the Report.
Config Inspection service cannot be terminated if its status is In Progress.
To cancel Config Inspection, follow the steps below.
Click the All Services > Security > Config Inspection menu. Go to Config Inspection’s Service Home page.
Click the Config Inspection menu on the Service Home page. Navigate to the Config Inspection List page.
On the Config Inspection List page, click the resource to be terminated. Navigate to the Config Inspection Details page.
Click the Service Termination button on the Config Inspection Details page.
When termination is complete, check on the Config Inspection List page whether the resource has been terminated.
9.2.2.1 - Dashboard Check
Users can view the diagnostic results of the Config Inspection service at a glance on the dashboard through the Samsung Cloud Platform Console.
Check Dashboard
On the dashboard page, you can view the status of Config Inspection’s diagnostic targets and diagnostic history, etc.
To check the dashboard, follow the steps below.
Click the All Services > Security > Config Inspection menu. Navigate to the Service Home page of Config Inspection.
Click the Dashboard menu on the Service Home page. Navigate to the Dashboard page.
Check the summary of diagnostic results on the Dashboard page.
At the top of the Dashboard page, you can view the dashboard information by period or diagnosis name.
Period: Based on the current month, you can set a period within 6 months to view summary information of the diagnosis results.
Diagnosis Name: If you select All, you can view a summary of the entire diagnostic history results, and if you select a diagnostic account, you can view the detailed information of that diagnostic result.
Click the Download button to download the information displayed on the dashboard page as a PDF file.
Category
Detailed description
Security Level (Overall)
Display average of latest diagnostic results for all diagnostic targets
Recent diagnostic results are displayed in the list
Diagnostic score calculation formula = (Total – (Fail + Error + Check)) / Total x 100
Periodic Diagnosis Status
Display diagnosis status by target during search period
Diagnosis Completed: Show recent completed diagnosis details
Diagnosis Error: Show recent diagnosis error details, when selecting diagnosis name go to detailed result page
Summary of Diagnosis Results by Period (All)
Display summary of diagnosis results (All) during the search period
If you select a diagnosis name from the list, you will be taken to the detailed diagnosis result page
Table. Detailed dashboard item description for overall diagnosis results
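The diagnostic score formula above can be sketched as a small calculation. The function name and the handling of zero total items are assumptions for this sketch, not the service's implementation.

```python
def security_score(total: int, fail: int, error: int, check: int) -> float:
    """Diagnostic score = (Total - (Fail + Error + Check)) / Total x 100."""
    if total == 0:
        return 0.0  # assumed behavior when there are no diagnosis items
    return (total - (fail + error + check)) / total * 100
```

For example, 40 diagnosis items with 3 FAIL, 1 ERROR, and 2 CHECK score (40 - 6) / 40 x 100 = 85.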
Category
Detailed description
Security Level
Display the last diagnostic result score of the selected diagnostic account
Recent diagnostic results are displayed in the list
Period-wise diagnostic result summary
Show summary of diagnostic results for the last diagnostic account within the search period
Vulnerability Status by Period
Display the vulnerability diagnosis results of the diagnostic account during the search period as a graph
When a graph is selected, display detailed information of the vulnerable items in the diagnosis results
Table. Detailed dashboard item description for diagnostic results by diagnostic account
9.2.2.2 - Diagnostic Result Management
You can view the Config Inspection diagnostic request results on the diagnostic results page and change the diagnostic results.
Reference
The diagnostic result is generated when a diagnostic request is made in the Config Inspection service, and it is deleted when the service is terminated.
Security standards recommended for service-specific settings
Result
Diagnosis item criteria check result
Diagnosis Criteria
Result Judgment Criteria
Diagnostic Method
Current Settings Check Method
Action Guide
Configuration method that meets security standards
Detailed Result
Resource information and settings corresponding to the diagnostic item
Diagnosis Result Change
Button to change diagnosis result
If the diagnosis result is changed, the Check Result button is displayed, and clicking the Delete button allows deletion of the changed result
Table. Config Inspection Diagnosis Item Details
Manage Diagnosis Results
On the diagnosis result page, you can change the results of items whose diagnosis result is in CHECK status.
Change Diagnosis Result
To change the diagnosis result, follow the steps below.
Click the All Services > Security > Config Inspection menu. Navigate to the Service Home page of Config Inspection.
Click the Diagnostic Results menu on the Service Home page. It navigates to the Diagnostic Results List page.
On the Diagnosis Result List page, click an item whose diagnosis result is Completed. You will be taken to the Diagnosis Result Details page.
Items with a diagnostic result in error state do not display detailed information.
Click the More > Diagnosis Result Management button at the top of the Diagnosis Result Details page. You will be taken to the Diagnosis Result Management page.
Click the Result Change button for the item whose diagnostic result you want to modify on the Diagnostic Result Management page. The Result Change popup window will open.
In the Result Change popup window, select or enter the information required to change the result.
Category
Required?
Detailed description
Registrant
-
Diagnosis result change registrant email
Validity Period
Required
Set the validity period of the diagnostic result
Change Result
Required
Select the diagnostic result to change among Pass, Check, Fail
Detailed Reason
Required
Enter the detailed reason for changing the result
Attachment File
Select
Upload files required for confirming result changes
Attach File button to upload files, up to 5 can be registered
Inspection Result
-
Detailed inspection result display
Table. Detailed Items of Diagnosis Result Change
Check the entered information and click the Register button. Verify whether the diagnostic results have changed in the Diagnostic Result Management list.
Delete diagnosis result change history
To delete the diagnostic result change history, follow the steps below.
Click the All Services > Security > Config Inspection menu. Navigate to Config Inspection’s Service Home page.
Click the Diagnostic Results menu on the Service Home page. Navigate to the Diagnostic Results List page.
Click an item with a completed diagnosis result on the Diagnosis Result List page. It moves to the Diagnosis Result Details page.
Items whose diagnostic result is in error state do not display detailed information.
Click the Diagnosis Result Management button at the top of the Diagnosis Result Details page. You will be taken to the Diagnosis Result Management page.
On the Diagnosis Result Management page, click the Check Result button for the item whose changed result you want to delete. The Check Result popup window opens.
In the Check Result popup window, click the Delete button.
9.2.2.3 - Pre-configuration
Users must perform cloud pre-configuration such as authentication key creation and access control IP addition through the Samsung Cloud Platform Console to use the Config Inspection service.
Note
Items to set vary depending on the type of cloud you use. Refer to the corresponding chapter and set the required items for each cloud.
Samsung Cloud Platform Console Settings
To diagnose Samsung Cloud Platform and external clouds in the Config Inspection service, set the following items.
Check Policies Linked to User Group
Notice
Config Inspection can diagnose Samsung Cloud Platform or external clouds. You can use it by granting appropriate policy requirements to the user group according to the diagnosis target.
Check if the user group policy matching your desired diagnosis target is set.
If policy creation is required, contact the Account administrator.
To check the policy of the user group you belong to, follow the procedure below.
Click All Services > Management > IAM menu. You will be redirected to the Service Home page of IAM.
Click User Groups menu on the Service Home page. You will be redirected to the User Group List page.
Click the user group you want to check on the User Group List page. You will be redirected to the User Group Details page.
Click Policies tab on the User Group Details page. You will be redirected to the Policies tab page.
Click the policy you want to check on the Policies tab page. You will be redirected to the Policy Details page.
Check the detailed information on the Policy Details page.
Check if the policy information in the table below is set. If necessary, contact the administrator to add the policy.
Item
Policy Requirement 1
Policy Requirement 2
Action
List, Read
Create, Delete, List, Read, Update
Resource
All resources
Individual resource (Config Inspection)
Auth Type
All authentication
Temporary key authentication, Console login
Allowed IP
123.37.11.42, User-defined IP
For diagnosis, you must add IP 123.37.11.42 and IP for user console access separately
User-defined IP
Table. Policy setting details for diagnosing all clouds
Create Authentication Key
You can check and create authentication keys to use in the Config Inspection service.
Notice
You can create only up to 2 authentication keys.
After creating a new authentication key, you must apply the changed API authentication key to the service you are using.
To create an authentication key in Samsung Cloud Platform Console, follow the procedure below.
Click My Menu > My info. menu in the Console. You will be redirected to the My info. details page.
Click Authentication Key Management tab on the My info. details page. You will be redirected to the Authentication Key Management tab page.
Click Create Authentication Key button on the Authentication Key Management tab page. You will be redirected to the Create Authentication Key page.
You can check the authentication key list on the authentication key management page.
Enter the expiration period on the Create Authentication Key page and click OK button.
Check if the created authentication key is displayed in the authentication key list.
Add Access Allowed IP
You can add access allowed IPs in Samsung Cloud Platform Console.
To add access allowed IPs in the Console, follow the procedure below.
Click My Menu > My info. menu in the Console. You will be redirected to the My info. details page.
Click Authentication Key Management tab on the My info. details page. You will be redirected to the Authentication Key Management tab page.
Click Edit icon in Security Settings item on the Authentication Key Management tab page. The Edit Authentication Key Security Settings popup will open.
Enter the authentication method and access allowed IP in the Edit Authentication Key Security Settings popup.
Select Authentication Key for authentication method.
Set access allowed IP to Enable, enter the IP address, and click Add button.
When adding access allowed IP is complete, click OK button. Check if the information is modified to the entered information in the Security Settings item.
AWS Settings
To diagnose AWS (Amazon Web Services) cloud in the Config Inspection service, set the following items.
Add Permission Policy
You can add permission policies for users/user groups in AWS Console.
Add User Permission
To add user access permission policy in AWS Console, follow the procedure below.
Click IAM > Users in AWS Console.
Select the diagnostic user name from the user list.
Click Permissions tab on the user information page.
Select Add permissions in the permission policy.
Select ReadOnlyAccess, ViewOnlyAccess when adding permissions.
Add User Group Permission
To add user group access permission policy in AWS Console, follow the procedure below.
Click IAM > User groups in AWS Console.
Select the group the user belongs to from the user group list.
Click Permissions tab on the user group page.
Select Add permissions in the permission policy.
Select ReadOnlyAccess, ViewOnlyAccess when adding permissions.
Add Access Control IP
If using IP access control policy, you must add block exception IPs to that policy.
Add User Access Control IP
To add user access control IP in AWS Console, follow the procedure below.
Click IAM > Users in AWS Console.
Select the diagnostic user name from the user list.
Click Permissions tab on the user information page.
Click Edit in IP Access Control Policy in the permission policy item.
Add 123.37.24.82 to block exception IP.
Add User Group Access Control IP
To add user group access control IP in AWS Console, follow the procedure below.
Click IAM > User groups in AWS Console.
Select the group the user belongs to from the user group list.
Click Permissions tab on the user group page.
Click Edit in IP Access Control Policy in the permission policy item.
Add 123.37.24.82 to block exception IP.
Generate Access Key
To generate Access Key in AWS Console, follow the procedure below.
Click IAM > Users in AWS Console.
Select the diagnostic user name from the user list.
Click Security credentials tab on the user information page.
Click Access keys on the Security credentials page.
On the Create access key page, create an access key for third-party services.
Make sure to save the created access key information.
Caution
Download the Secret Key as a csv file or record it separately.
Secret key information can only be checked when creating the access key and cannot be recovered later.
Azure Settings
To diagnose Azure cloud in the Config Inspection service, set the following items.
Register Entra ID Application
To register Entra ID Application in Azure Console, follow the procedure below.
Click Microsoft Entra ID > App registrations in Azure Console.
Click New registration on the App registrations page.
Register application (client) ID.
When app registration is complete, check App name, Application (client) ID, Directory (tenant) ID on the overview page.
Add API Permission
Note
To use Config Inspection service, you must pre-configure with an account granted the Global Administrator role among Azure AD roles.
To add API permission in Azure Console, follow the procedure below.
Click Microsoft Entra ID > App registrations > Entra ID Application registration > created App name > API permissions > Add a permission in Azure Console.
Select Microsoft Graph to add permissions from the API permissions list.
Click Application permissions on the Request API permissions page.
Select Application.Read.All, Device.Read.All, Group.Read.All, User.Read.All, DeviceManagementManagedDevices.Read.All, AuditLog.Read.All, Directory.Read.All, Domain.Read.All, GroupMember.Read.All, Policy.Read.All, Reports.Read.All from the permission list.
After adding permissions in App API permission registration, click Grant admin consent for account name.
Check if it changes to Granted for account name status for the account name.
Create Client Secret
To create Client Secret in Azure Console, follow the procedure below.
Click Microsoft Entra ID > App registrations > Entra ID Application registration > created App name > Certificates & secrets in Azure Console.
Click New client secret from the Certificates & secrets list.
When client secret is created, check the Client Secret in the Value item from the list.
Make sure to save the Client Secret value.
Caution
Client Secret value (Value) can only be checked at creation time. Make sure to record or save it separately.
Add Subscription Access Permission in Azure Console
You can add subscription access permissions in Azure Console from Tenant Root Group or individual Subscription. Choose your preferred method to add subscription access permissions.
Add Permission from Tenant Root Group
To add subscription access permission in Azure Console from Tenant Root Group, follow the procedure below.
Click Management groups > Overview in Azure Console.
Click Tenant Root Group > Access control (IAM).
If you cannot access the Tenant Root Group menu, change the setting below.
In Microsoft Entra ID > Properties, set ‘Account name’ can manage access to all Azure subscriptions and management groups in this tenant to Yes.
After adding permissions, you must change it back to No.
Click Add > Add role assignment on the Access control page.
Enter detailed information on the Add role assignment page and click Review+assign.
When entering role assignment information, select the information below from the Role and Member tabs to add the App created in Entra ID Application registration. You must add all three permissions below.
Category
Permission
Reader
Users, group, or service principal
Key Vault Reader
Users, group, or service principal
Reader and Data Access
Users, group, or service principal
Table. Additional permission items when entering role assignment information
Add Permission from Individual Subscription
To add subscription access permission in Azure Console from individual Subscription, follow the procedure below.
Click Subscription > Overview in Azure Console.
Check Subscription ID from the basic information on the overview page.
Click Subscription > Access control (IAM).
Click Add > Add role assignment on the Access control page.
Enter detailed information on the Add role assignment page and click Review+assign.
When entering role assignment information, select the information below from the Role and Member tabs to add the App created in Entra ID Application registration. You must add all three permissions below.
Category
Permission
Reader
Users, group, or service principal
Key Vault Reader
Users, group, or service principal
Reader and Data Access
Users, group, or service principal
Table. Additional permission items when entering role assignment information
Add Access Permission via PowerShell
To add subscription access permission in Azure Console using PowerShell, follow the procedure below.
Run the following command in Cloud shell > PowerShell in Azure Console.
New-AzRoleAssignment -ObjectId "App's Object ID confirmed in Enterprise Application" -Scope "/providers/Microsoft.aadiam" -RoleDefinitionName 'Reader' -ObjectType 'ServicePrincipal'
If the command does not run, change the setting below.
In Microsoft Entra ID > Properties, set ‘Account name’ can manage access to all Azure subscriptions and management groups in this tenant to Yes.
After adding permissions, you must change it back to No.
Run the following command to check if the setting is complete.
Get-AzRoleAssignment -ObjectId "App's Object ID confirmed in Enterprise Application" -Scope "/providers/Microsoft.aadiam"
If you need to delete permissions, run the following command.
Remove-AzRoleAssignment -ObjectId "App's Object ID confirmed in Enterprise Application" -Scope "/providers/Microsoft.aadiam" -RoleDefinitionName 'Reader'
9.2.3 - Release Note
Config Inspection
2025.07.01
FEATURE Service Offering Expansion
We have launched the Config Inspection product, which can comprehensively diagnose and manage security vulnerabilities in the customer’s multi-cloud console.
The account (or other cloud account) to be diagnosed is registered, allowing for continuous diagnosis, and the dashboard and detailed results can be checked in the report.
2025.02.27
FEATURE Common Feature Changes
Samsung Cloud Platform common feature changes
Common CX changes have been applied to Account, IAM, Service Home, tags, and more.
2024.12.23
NEW Beta version release
You can manage Samsung Cloud Platform Console setting vulnerabilities through console diagnostics.
It provides a Report that can view the security diagnosis results.
9.3 - Certificate Manager
9.3.1 - Overview
Service Overview
Certificate Manager is a service that supports certificate deployment and integrated management, allowing users to create and use SSL/TLS certificates issued by a Certificate Authority (CA) and self-signed certificates for development or testing purposes in Samsung Cloud Platform resources. It also enables management of the certificate lifecycle by checking expiring certificates through expiration notification emails.
Features
Easy creation: You can create a certificate with a simple task on the Samsung Cloud Platform Console. User certificates issued from outside undergo validity verification and only deployable certificates are distributed.
Service Integration: Connects certificates registered in Certificate Manager to Load Balancer to encrypt network connections and protect services.
Certificate Expiration Alert: Periodic notifications are sent until 1 day before the expiration date, so you can check and replace certificates that are about to expire.
Service Composition Diagram
Figure. Certificate Manager Configuration Diagram
Provided Features
Certificate Manager provides the following functions.
Certificate Creation: You can create a user certificate issued by a certificate authority or a self-signed certificate suitable for development/testing purposes.
Connected Resource Inquiry: You can view the Samsung Cloud Platform resources that are using certificates. Currently, a list of Load Balancer Listeners (HTTPS) is provided.
Expiration Notice: You can set the recipient of the expiration notice for each certificate. The notification recipient will receive an email from 45 days before expiration. (Sent 45/30/15/7/1 day before expiration)
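For reference, the notification schedule above can be computed locally. A minimal sketch using GNU date (the expiration date below is an example, not a real certificate):

```shell
# Print the dates on which expiration e-mails would be sent for a
# certificate expiring on an example date (GNU date syntax).
EXPIRY="2026-01-01"   # example expiration date
for d in 45 30 15 7 1; do
  date -d "$EXPIRY $d days ago" +%Y-%m-%d
done
```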
Components
The Certificate Manager's user certificate consists of a Private Key, Certificate Body, and Certificate Chain. For each item, enter the entire contents, including the BEGIN and END lines.
Private Key
Enter the private key in PEM format. The private key supports RSA, and the decrypted (unencrypted) value must be entered.
Certificate Body
Enter the Server (Leaf) certificate in PEM format. Only one certificate can be entered in the Certificate Body.
-----BEGIN CERTIFICATE-----
Server Certificate
-----END CERTIFICATE-----
Certificate Chain
Enter the upper certificate in PEM format. Enter in the order of Sub(Intermediate) CA → Root CA, and it can be omitted only when it is a self-signed/issued certificate.
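The required Sub (Intermediate) CA → Root CA order can be illustrated and checked locally with openssl. This is a reference sketch only, not a Samsung Cloud Platform command; all file and CN names are examples, and the commands build a throwaway test PKI.

```shell
# Root CA (self-signed) for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -subj "/CN=Example Root CA" -days 7
# Intermediate CA request, signed by the root.
openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
  -subj "/CN=Example Intermediate CA"
openssl x509 -req -in inter.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -out inter.pem -days 7
# Certificate Chain input order: Intermediate first, then Root.
cat inter.pem root.pem > chain.pem
# List the certificates in chain order to confirm.
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout
```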
Certificate Manager provides a service by Region unit. Please create and use the service in the required Region. The quota per Region is as follows.
Classification
Basic Quantity
Description
CERTIFICATE_MANAGER.USER_CERT_DEFAULT.COUNT
100
Number of user certificates per region
CERTIFICATE_MANAGER.SELFSIGNED_CERT_DEFAULT.COUNT
100
Number of self-issued certificates per Region
Table. Restrictions of Certificate Manager
Preceding Service
Certificate Manager has no preceding services.
9.3.2 - How-to guides
The user can enter the required information for the Certificate Manager service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Certificate Manager Create
You can create and use the Certificate Manager service from the Samsung Cloud Platform Console.
To request the creation of a Certificate Manager service, follow the steps below.
Click the All Services > Security > Certificate Manager menu. The Service Home page opens.
Click the Create Certificate Manager button on the Service Home page. You will be taken to the Create Certificate Manager page.
On the Create Certificate Manager page, enter the information required to create the service and select detailed options.
Service Information Input area: enter or select the required information.
Category
Required
Detailed description
Certificate Name
Required
Enter the name of the Certificate Manager to use
Enter within 3-30 characters, including English letters, numbers, and special characters (-, _, .)
Cannot be the same as an existing name in use
Type
Required
Select the Certificate Manager type to use
User Certificate: Public certificate issued by a Certificate Authority (CA)
Self-issued Certificate: Certificate self-issued (Self-signed) by Samsung Cloud Platform
Since it is relatively insecure, it is recommended for development/testing use.
User Certificate > Certificate Body
Required
Enter Server (Leaf) certificate information
Only one certificate can be entered in the certificate body
Enter the entire content including the lines from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----
User Certificate > Private Key
Required
Enter private key information
Private Key supports RSA encryption method
Private Key can be entered in unencrypted PEM format
Enter the entire content including the lines from -----BEGIN RSA PRIVATE KEY----- to -----END RSA PRIVATE KEY-----
User Certificate > Certificate Chain
Required
Enter Certificate Chain information
Can be omitted when using a private certificate
Enter the Certificate Chain in order: Intermediate (Subordinate) certificate → Root certificate
Public certificates must include Certificate Chain information; it may be omitted only when there is no intermediate (Chain CA) certificate
Enter the entire content including the lines from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----
If there are multiple Intermediate (Subordinate) certificates, enter each certificate’s content in order
User Certificate > Certificate Validity Check
Required
Validate the entered certificate’s validity
Self-issued certificate > Common Name
Required
Enter the domain name to be used for the certificate
Self-issued certificate > Organization Unit
Required
Enter the organization and department that will use the certificate
Self-issued Certificate > Start Date
Required
Enter the certificate usage start date (creation date)
Self-issued certificate > Expiration date
Required
Enter certificate expiration date
Expiration Alert
Select
Set whether to receive alerts before certificate expiration
Use can be selected to enable expiration alerts
If expiration alerts are set, an email is sent to recipients 45 days/30 days/15 days/7 days/1 day before certificate expiration
Expiration Alert > Notification Recipient
Required
Select notification recipient when using expiration alert
Enter user name in the search area to select notification recipient
Up to 100 can be registered
Table. Certificate Manager Service Information Input Items
Reference
If the entered certificate information is not valid, you cannot create the Certificate Manager service.
If the Private Key is encrypted, enter the decrypted value using the openssl command below.
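For reference, an encrypted RSA private key can be decrypted into the unencrypted PEM form expected here with the standard openssl rsa command. File names and the passphrase below are placeholders; the genrsa line only creates a test key so the example is self-contained.

```shell
# Create an encrypted test key (demonstration only; use your own key file).
openssl genrsa -aes256 -passout pass:example -out encrypted_private.key 2048
# Decrypt it into the unencrypted PEM form to paste into the Private Key field.
openssl rsa -in encrypted_private.key -passin pass:example -out decrypted_private.key
# Inspect the first line to confirm it is an unencrypted PRIVATE KEY header.
head -n 1 decrypted_private.key
```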
In the Additional Information Input area, enter or select the required information.
Category
Whether required
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Certificate Manager additional information input items
Verify the entered service information and additional information, and click the Complete button.
Once creation is complete, check the created resource on the Certificate Manager List page.
Reference
To create a Load Balancer to use in the Certificate Manager service, click Load Balancer creation in Service Home.
For detailed explanation about creating a Load Balancer, please refer to Creating a Load Balancer.
Certificate Manager View Detailed Information
In the Certificate Manager service, you can view and edit the full resource list and detailed information. The Certificate Manager Details page consists of Details, Connected Resources, Tags, and Activity History tabs.
To view detailed information of Certificate Manager, follow the steps below.
Click the All Services > Security > Certificate Manager menu. You will be taken to the Certificate Manager Service Home page.
On the Service Home page, click the Certificate Manager menu. You will be taken to the Certificate Manager list page.
On the Certificate Manager list page, click the resource whose details you want to view. You will be taken to the Certificate Manager Details page.
Certificate Manager Details page displays the status information and detailed information of Certificate Manager, and consists of Details, Connected Resources, Tags, Activity History tabs.
Category
Detailed description
Service Status
Certificate Manager Status
Creating: Resource is being created
Active/Valid: Certificate valid
Expired: Certificate expired
Editing: Settings are being edited
Terminating: Service termination in progress
Error: Certificate error
Service Termination
Button to terminate the Certificate Manager service
Table. Status Information and Additional Functions
Detailed Information
On the Certificate Manager list page, you can view the detailed information of the selected resource and edit it if necessary.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation DateTime
Date and time the service was created
Editor
User who modified the service information
Modification DateTime
Date and time when the service information was modified
Certificate Name
Certificate Manager Certificate Name
Type
Certificate type information
Certificate Information
Detailed information of the selected certificate type
When User Certificate is selected, the certificate information is displayed
When Self-issued Certificate is selected, the Common Name, Organization Unit, start date, and expiration date are displayed
Tags
On the Certificate Manager list page, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can check the Key and Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Certificate Manager tag tab items
Work History
On the Certificate Manager list page, you can view the operation history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
You can check the work details, work date and time, resource type, resource name, work result, and worker information
When you click a resource in the Work History list, the Work History Details popup opens
Table. Certificate Manager operation history tab detailed information items
Certificate Manager Cancel
You can apply for termination of the Certificate Manager service from the Samsung Cloud Platform Console.
Caution
If there are resources connected to the Certificate Manager service, you cannot cancel it. To cancel the service, first delete the connected resources.
To request termination of the Certificate Manager service, follow the steps below.
Click the All Services > Security > Certificate Manager menu. You will be taken to the Certificate Manager Service Home page.
On the Service Home page, click the Certificate Manager menu. You will be taken to the Certificate Manager list page.
On the Certificate Manager list page, click the resource whose details you want to view. You will be taken to the Certificate Manager Details page.
Click the Service Termination button on the Certificate Manager Details page.
Once termination is complete, check the service termination status in the Certificate Manager list.
9.3.2.1 - Chain Certificate Extraction
The user can extract and enter the Certificate Chain certificate to be used when creating the Certificate Manager service.
Extract Certificate Chain
You can extract the Certificate Chain certificate value required when creating a Certificate Manager.
Caution
The Certificate Chain consists of Intermediate (Subordinate) certificates issued by a public certification authority to the Root certificate.
Even if you have an existing Certificate Chain value, it is recommended to re-extract and register the Intermediate (Subordinate) certificate to the Root certificate through the Certificate Body file.
Intermediate (Subordinate) Certificate Value Extraction
You can extract the Intermediate (Subordinate) certificate of the Certificate Chain required when registering a user certificate.
Reference
If there are more than two Intermediate(Subordinate) certificates, extract the values for each certificate.
To extract the Intermediate(Subordinate) certificate value, follow these steps.
Open the crt-format certificate file on your PC. The Certificate window appears.
If the file is in PEM format, change the file extension to crt first.
Click the Certificate Path tab in the Certificate window.
Click the certificate under the Root certificate and click Certificate View.
Move to the Details tab and click Copy to File.
When the Certificate Export Wizard runs, click Next.
Select Base-64 encoded X.509 (.CER) as the export format and click Next.
Click Browse to select the path where you want to save the file, and then click Next.
Click Finish. The Certificate Export Wizard is complete.
Open the exported file in a text editor and check the value.
The extracted certificate value must begin with -----BEGIN CERTIFICATE----- and end with -----END CERTIFICATE-----.
Root Certificate Value Extraction
You can extract the Root certificate of the Certificate Chain required when registering a user certificate.
To extract the Root certificate value, follow these steps.
Open the crt-format certificate file on your PC. The Certificate window appears.
If the file is in PEM format, change the file extension to crt first.
Click the Certificate Path tab in the Certificate window.
Click the topmost Root certificate and click Certificate View.
Move to the Details tab and click Copy to File.
When the Certificate Export Wizard runs, click Next.
Select Base-64 encoded X.509 (.CER) as the export format and click Next.
Click Browse to select the path where you want to save the file, and then click Next.
Click Finish. The Certificate Export Wizard is complete.
Open the exported file in a text editor and check the value.
The extracted certificate value must begin with -----BEGIN CERTIFICATE----- and end with -----END CERTIFICATE-----.
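When a single exported file contains several certificates, the individual Intermediate/Root values can also be separated locally. The following sketch (all file and CN names are examples; the first two commands merely build a two-certificate bundle for demonstration) splits a PEM bundle into one file per certificate:

```shell
# Build a two-certificate bundle for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout a.key -out a.pem \
  -subj "/CN=Example Cert A" -days 7
openssl req -x509 -newkey rsa:2048 -nodes -keyout b.key -out b.pem \
  -subj "/CN=Example Cert B" -days 7
cat a.pem b.pem > bundle.pem
# Write each certificate to its own file: cert-1.pem, cert-2.pem, ...
awk '/BEGIN CERTIFICATE/{n++} {print > ("cert-" n ".pem")}' bundle.pem
```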
Input Certificate Chain value
This section explains how to enter the extracted Intermediate (Subordinate) certificate and Root certificate values into the Certificate Chain item when creating a Certificate Manager.
Copy the entire value of the Intermediate (Subordinate) certificate file and paste it into the Certificate Chain input area of the Certificate Manager Creation page. Include the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines at the beginning and end of the certificate value.
Copy the entire value of the Root certificate file and paste it on the line below the Intermediate (Subordinate) certificate. Again, include the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines at the beginning and end.
9.3.3 - API Reference
API Reference
9.3.4 - CLI Reference
CLI Reference
9.3.5 - Release Note
Certificate Manager
2025.07.01
NEW Certificate Manager Service Official Version Release
Released Certificate Manager service that supports SSL/TLS certificate deployment and integrated management.
You can register a certificate issued by a certification authority (CA) or create a self-signed certificate for development/test purposes.
Certificates can be connected to Samsung Cloud Platform resources to encrypt network communication and manage the certificate lifecycle.
9.4 - Secret Vault
9.4.1 - Overview
Service Overview
Secret Vault is a service that lets you access Samsung Cloud Platform services and resources through the Open API with a security-enhanced, token-based temporary key, instead of hard-coding security information in plain text. It also manages the lifecycle of the temporary key to maintain a security-enhanced environment when using the API.
Features
Enhanced Security Environment: Instead of entering hard-coded authentication information into the application source code, you can respond to security threats due to authentication information leakage by issuing a token-based temporary key.
Life-Cycle based key management: Users do not need to manage the life cycle of the key directly to meet security requirements. It provides automated key management and replacement functions according to the set life cycle.
Various resource utilization possible: Through the token issued by Secret Vault, not only resources within Samsung Cloud Platform but also external resources (other CSP, On-Premise, etc.) can be accessed through an enhanced security environment.
Service Composition Diagram
Figure. Secret Vault Configuration Diagram
Provided Features
Secret Vault provides the following features.
Token Authentication Addition and Encryption Storage: It provides token issuance and temporary key issuance functions using authentication keys, and safely stores authentication key information by encrypting it (AES-256).
Key Life-cycle Management: Provides key issuance and automatic replacement functions based on the life cycle, and allows setting the replacement cycle by time unit (up to 36 hours).
Access Control Function: The user application can control access to resources based on IP.
Component
Secret
Secret is a form of information that combines Token information and temporary key exchange cycle information, and is an object that can be applied by the user in the console.
Token
Token is a unique string used to authenticate the user’s identity and verify authority, and a temporary key can be issued to access the Samsung Cloud Platform through token-based authentication when requesting Open API.
Constraints
Secret Vault provides a region-based service. Therefore, when creating a Secret, you cannot select an authentication key being used in a Secret from a different region.
Preceding Service
Secret Vault does not require any separate prior service work.
9.4.2 - How-to guides
The user can enter the essential information of the Secret Vault service and create the service by selecting detailed options through the Samsung Cloud Platform Console.
Secret Vault creation
You can create and use the Secret Vault service on the Samsung Cloud Platform Console.
To create a Secret Vault, follow the procedure below.
Click the All services > Security > Secret Vault menu. You will be taken to the Secret Vault Service Home page.
On the Service Home page, click the Create Secret Vault button. You will be taken to the Create Secret Vault page.
On the Create Secret Vault page, enter the information required for service creation and select detailed options.
In the Service Information Input area, enter or select the required information.
Classification
Necessity
Detailed Description
Secret name
Required
Enter Secret name
Enter 3-63 characters using lowercase English letters and numbers
Type
Required
Select the type of encryption target
Authentication Key
Required
Select the authentication key to use for the Secret Vault service
Click the Use button to select from the pre-created authentication keys in the Authentication Key Management menu.
In the Authentication Key Management menu, you must select the security authentication method as Private Key Authentication.
Expired authentication keys will not be retrieved, and authentication keys with a remaining usage period of less than 30 days or already in use in the Secret Vault product cannot be used. (Only one Secret Vault product can be applied per authentication key.)
Token usage period
Required
The usage period of the Token provided by encrypting the authentication key
The Token usage period is automatically set to be the same as the validity period of the input authentication key by default.
If the authentication key validity period is set to permanent, the Token usage period can be set up to a maximum of 7300 days (20 years).
The Token usage period cannot be changed after the service application is completed.
For security enhancement, periodic replacement of the Token is recommended.
If the Token usage period expires, temporary keys can no longer be issued, and the period cannot be extended.
Before the Token usage period expires, issue a new Token through a new service application and apply the issued Token information to the source code.
Access key replacement cycle
Required
Select the replacement cycle of the access key to be used to access Samsung Cloud Platform resources
The access key usage time is applied from the time the service creation is completed.
For security enhancement, the access key usage period can only be set up to a maximum of 1.5 days (36 hours).
A new access key is issued before the access key usage period expires, and the same usage period is applied.
Access Allowed IP
Required
Enter the IP to allow access and click the Add button
The entered IP must also be set identically in Authentication Key Management > Security Settings > Access Allowed IP to allow access.
Even when entering a single IP, you must append '/32' after the IP.
Up to 10 IPs can be registered.
Description
Select
Enter additional information or a description of the Secret Vault service
Table. Secret Vault service information input items
In the Additional Information Input area, enter or select the required information.
Classification
Mandatory
Detailed Description
Tag
Select
Add Tag
Click the Add Tag button to create a new tag or add an existing tag
Up to 50 can be added per resource
Newly added tags are applied after service creation is completed
Table. Additional Information Input Items for Secret Vault
In the Summary panel, check the details of what will be created and the estimated billing amount, then click the Complete button.
Once creation is complete, check the created resource on the Secret Vault list page.
Secret Vault detailed information check
You can check and modify the entire resource list and detailed information of the Secret Vault service. The Secret Vault details page consists of details, tags, and work history tabs.
To view the detailed information of the Secret Vault service, follow the procedure below.
Click the All services > Security > Secret Vault menu. You will be taken to the Secret Vault Service Home page.
On the Service Home page, click the Secret Vault menu. You will be taken to the Secret Vault list page.
On the Secret Vault list page, click the resource whose details you want to view. You will be taken to the Secret Vault details page.
Secret Vault details page displays status information and additional feature information, and consists of details, tags, work history tabs.
Classification
Detailed Description
Secret Vault status
The status of the Secret Vault created by the user
Active: In operation
To be terminated: Waiting for cancellation after a service cancellation request
The scheduled cancellation time of the service is displayed, and the cancellation can be withdrawn.
Expired: Token expired
A Secret in the Expired status cannot perform any actions, such as information inquiry, and is automatically deleted after 7 days.
Replace Master Key
Delete the master key currently in use and create a new master key
Only the creator of the Secret Vault service can replace the master key.
Service Cancellation
Button to cancel the service
Table. Secret Vault Status Information and Additional Functions
Detailed Information
On the Secret Vault list page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Title
Resource ID
Unique resource ID in the service
Creator
The user who created the service
Creation Time
The time when the service was created
Editor
User who modified the service information
Revision Time
Time when service information was revised
Secret name
Name of the generated Secret
Type
Encryption Method
Description
Additional information or description of the Secret Vault service
Authentication Key
Authentication key used in Secret Vault service
Token usage period
The available period of the Token provided by encrypting the authentication key
Token Expiration Time
Token Usage Expiration Time
Token ID
Token’s unique ID
Token Secret
The Token Secret generated as a pair with the Token ID
Token replacement cycle
The replacement cycle of the token used to access Samsung Cloud Platform resources
Access Key Expiration Date
Expiration date of the access key usage period
Allowed IP
List of IPs that are allowed to access
Description
Additional information or description about Secret Vault
Table. Secret Vault detailed information tab items
Tag
On the Secret Vault list page, you can check the tag information of the selected resource, and add, change, or delete tags.
Classification
Detailed Description
Tag List
Tag List
Tag’s Key, Value information can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of created Key and Value
Table. Secret Vault tag tab items
Work History
On the Secret Vault list page, you can check the work history of the selected resource.
Classification
Detailed Description
Work History List
Resource Change History
You can check the work details, work time, resource type, resource name, work result, and worker information
Click a resource in the Work History list. The Work History Details popup window opens.
Table. Secret Vault work history tab detailed information items
Secret Vault Cancellation
You can cancel a service that is not in use to reduce operating costs. However, canceling the service may immediately stop the running service, so fully consider the impact before proceeding with the cancellation.
Caution
After the service is canceled, the data cannot be recovered, so please be careful.
To cancel the Secret Vault, follow the procedure below.
Click the All services > Security > Secret Vault menu. You will be taken to the Secret Vault Service Home page.
On the Service Home page, click the Secret Vault menu. You will be taken to the Secret Vault list page.
On the Secret Vault list page, select the resource to cancel and click the Service Cancellation button. The Service Cancellation popup window opens.
In the Service Cancellation popup window, enter the cancellation waiting period (7-30 days) and click the Confirm button. The service will be cancelled after the waiting period you entered.
Note
During the cancellation waiting period, the existing access key is deleted, and an additional access key for accessing Samsung Cloud Platform resources cannot be issued.
Secret Vault Cancellation Withdrawal
You can withdraw the cancellation of a service that is waiting for cancellation and use it again.
To withdraw a Secret Vault cancellation, follow the procedure below.
Click the All Services > Security > Secret Vault menu. You will be taken to the Secret Vault Service Home page.
On the Service Home page, click the Secret Vault menu. You will be taken to the Secret Vault list page.
On the Secret Vault list page, click the resource whose cancellation you want to withdraw. You will be taken to the Secret Vault details page.
On the Secret Vault details page, click the Cancel Cancellation button. The Service Cancellation Cancel popup window opens.
In the Service Cancellation Cancel popup window, check the contents and click the Confirm button. The status of the resource is restored to Active.
Reference
If the authentication key used in the Secret has been deleted, the service cancellation cannot be withdrawn. If the authentication key has been suspended, release the suspension first.
Only the creator of the Secret Vault service can cancel the service cancellation.
Application Token settings
The Token information issued when the Secret Vault service is created is required for Open API calls that request temporary key issuance. Configure the Token information to suit each application environment.
To set the Token information, follow the procedure below.
Apply the Token information to the application's environment variable configuration file.
Configure the Token information so that it can be referenced by the API call logic within the application.
Use the Open API: GET /v1/temporarykey/{secretvault_id}
For more detailed information, refer to the Open API Guide of the Samsung Cloud Platform Console.
This removes hard-coded credentials from existing source code; the Token information is used instead to call the Open API and issue temporary keys.
Notice
The following is an example for reference. Set the source code according to the application standard you want to use the Token.
application.yml or application.properties and other environment variable setting files
Apply the issued Token information to the environment variable setting file.
secretvault.id= {{ ID }}
secretvault.tokenId= {{ Token ID }}
secretvault.tokenSecret= {{ Token Secret }}
Java file
Apply to the class file for environment variable recognition.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class SecretVaultConfiguration {
@Value("${secretvault.id}")
private String id;
@Value("${secretvault.tokenId}")
private String tokenId;
@Value("${secretvault.tokenSecret}")
private String tokenSecret;
@Bean
public OpenApiClient openApiClient() {
// Create OpenApiClient or another API client and initialize it with the configured values
return new OpenApiClient(id, tokenId, tokenSecret);
}
}
9.4.3 - API Reference
API Reference
9.4.4 - CLI Reference
CLI Reference
9.4.5 - Release Note
Secret Vault
2025.07.01
NEW Official Release of Secret Vault Service
A Secret Vault service has been released that can manage token-based temporary key issuance and lifecycle.
9.5 - SingleID
9.5.1 - Overview
Service Overview
SingleID allows authorized users to access information assets easily with a single authentication, strengthens account security through policy-based authority management and real-time anomalous authentication detection, and provides an account management and access framework through various history management features.
Features
Easy and convenient login and app linking: Building an integrated authentication system that can log in from On-Premises to SaaS apps with one ID can improve work productivity. Administrators can automate linking to various global SaaS apps through prepared Pre-Built Connectors, allowing them to easily link various apps without domain knowledge of authentication.
Account Management Efficiency and Security Enhancement: It systematically manages the account lifecycle from creation to deletion for various users, including employees, partner companies, corporations, and subsidiaries. Additionally, it grants permissions to authorized users in a timely manner and revokes unnecessary permissions in a timely manner to prevent unauthorized access and strengthen account security.
Enhanced Anomaly Detection: Situation-based authentication anomaly detection through user type, login IP, device information, access time, etc. enables the application of security policies according to the situation, preventing account infringement accidents.
Cloud Access Management: Unifies the access path of operators/developers accessing the public cloud and executes role-based temporary token-based console/resource access control to further strengthen cloud security in a multi-cloud environment.
Service Composition Diagram
Figure. SingleID Configuration Diagram
Provided Features
SingleID provides the following functions.
Integrated Authentication and Account Management
Supports various authentication linkage protocols (SAML, OIDC, etc.)
Provide self-service features for app usage application and approval
Salesforce, Workday etc. account synchronization and role (group) synchronization/management within the account
Provides membership registration/withdrawal function that can issue accounts to non-employees, such as partners and customers
Passwordless and Multi-Factor Authentication
PC/Mobile passwordless authentication and multi-factor authentication (MFA)
Integration with an existing primary authentication environment to provide secondary multi-factor authentication (MFA-only use case)
Support for certificate-based authentication through Private CA (Certificate Authority), a private certificate issuance/management function (separate use case)
Automation of app connection through Pre-Built Connector
DIY integration template for simplified custom app integration
Anomaly Detection based on Risk-based Authentication
Context-based access control according to the situation of attempting authentication
Enhanced security through detailed login and authentication policy settings
Public Cloud Access Management for Cloud Operators/Developers
Role-based console access control through assigned accounts
Resource access permission request/approval and OTP-based credential method for resource access
Component
The components of the SingleID service are as follows. Users can use the service through the Samsung Cloud Platform SingleID Console.
Access Management
Supports various authentication linkage protocols (SAML, OIDC, etc.)
Provides integrated login to internal and external work systems through a single sign-on
Identity Management
Manage lifecycle from account creation to disposal
Directory integration and synchronization (Active Directory, LDAP, etc.)
Multi Factor Authentication
PC and mobile simple authentication
Provides various composite authentication methods such as SMS, email, mOTP, TOTP, PIN, biometrics, Knox Messenger, and Windows Hello
Anomaly Detection Management
Context-based access control according to the situation of attempting authentication
Providing adaptive access control through risk analysis
Cloud Access Management
Strengthens cloud security by unifying access paths for cloud operators/developers
Role-based temporary token method for console/resource access control
Regional Provision Status
SingleID can be provided in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. SingleID Region-based Service Status
Prerequisite Service
SingleID has no prerequisite service.
9.5.2 - How-to guides
The user can enter the required information for the SingleID service and select detailed options through the Samsung Cloud Platform Console to create the service.
Reference
Check the detailed features provided for each SingleID item before applying for the product. The features provided per item are as follows.
Service
Detailed Description
Access Management (AM)
Access Management (AM) is an integrated authentication service that allows users to log in to everything from On-Premises systems to SaaS apps with a single ID
Integrated Authentication (SSO)
DIY App Integration
Catalog Service
Self Service
Dashboard
Integrated Logout Service
Account Creation/Registration
Tenant Management
Agent Management
Identity Management (IM)
Identity Management (IM) enables systematic account lifecycle management, from creation to termination, for various users such as employees, partners, corporations, and subsidiaries
Permission management
Universal Directory
Account lifecycle management
Provisioning
Policy management
Multi-Factor Authentication (MFA)
Multi-factor authentication (MFA) provides secondary authentication services in various methods when accessing major systems, external systems, mobile, etc.
Passwordless authentication
Multi-factor authentication
MFA for Web apps
If MFA is applied alone, only secondary authentication functionality is provided
Anomaly Detection Management (ADM)
Anomaly Detection Management (ADM) is a service that detects authentication anomalies based on login context such as user type, login IP, device information, and access time
Authentication anomaly detection
Anomaly detection email notification service
Cloud Access Management (CAM)
Cloud Access Management (CAM) is a privileged account access management solution that strengthens cloud console/resource access control in public/multi-cloud environments
Cloud console/resource access control
Table. SingleID Service Provision Guide by Item
Create SingleID
You can create and use the SingleID service in the Samsung Cloud Platform Console.
Click the All Services > Security > SingleID menu. You will be taken to the SingleID Service Home page.
On the Service Home page, click the Create SingleID button. You will be taken to the Create SingleID page.
On the Create SingleID page, enter the required information in the service information input area and select the detailed options.
In the Service Configuration Selection area, enter the service information and select the detailed options.
Category
Required or not
Detailed description
Service Selection
Required
SingleID Service Selection
Multiple services can be selected and applied
When MFA is applied alone, the simple authentication function is not provided
When IM, MFA are selected, AM is automatically selected
Selecting ADM automatically selects AM, IM, MFA
Selecting CAM automatically selects AM, IM, MFA
When AM, IM, MFA or AM, IM, MFA, ADM are selected, a tenant is automatically created in the TAP/UP/MFA portal. If only the MFA item is selected, a tenant is created in the TAP/MFA portal
Tenant user count
Required
Enter the minimum number of Tenant users according to the selected service
Can be entered within the range of 50 - 999,999
Resource Unit Count
Select
Enter the number of Resource Units to register when selecting CAM service
Input possible within the range 20 - 99,999
Integration Support
Select
Enter number of integration support units
Can be entered within the range 1 - 9,999
AM: 1 unit
MFA: 1 unit
IM: 2 units
When using AM and MFA simultaneously, counted as 1 unit
Table. SingleID Service Configuration Selection Items
In the Service Information input area, enter the information required to create the service.
Category
Required or not
Detailed description
Tenant name
Required
Enter Tenant name
Tenant code
Required
Enter Tenant code
Table. SingleID Service Information Input Items
In the Member Selection area, select the tenant users who will use the service.
Category
Required
Detailed description
User
Required
Select members from user list
You must select at least one user to be able to create the service
Table. SingleID Service Member Selection Items
In the Additional Information input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. SingleID additional information input items
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When creation is complete, check the created resources on the SingleID List page.
SingleID Check Detailed Information
You can view and edit the full list of SingleID resources and their detailed information. The SingleID Details page consists of the Details, Tags, and Activity History tabs.
To view detailed SingleID information, follow the steps below.
Click the All Services > Security > SingleID menu. The Service Home page will be displayed.
On the Service Home page, click the SingleID menu. You will be taken to the SingleID List page.
On the SingleID List page, click the resource whose details you want to view. You will be taken to the SingleID Details page.
The SingleID Details page displays status information and additional features, and consists of the Details, Tags, and Activity History tabs.
Category
Detailed description
Service Status
Service Status Display
Creating: Creating tenant
Active: Tenant creation completed
Terminating: Terminating service
Failed: Tenant creation failed
CAM Portal
Cloud Access Management portal popup button
Displayed only when applying for CAM service
Admin Portal
Admin portal window popup button
Service termination
Service termination button
Table. SingleID status information and additional functions
Detailed Information
You can view the detailed information of the resource selected on the SingleID List page and, if needed, modify it.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Service’s unique resource ID
Creator
User who created the service
Creation time
Service creation time
Editor
User who modified the service
Modification Date and Time
Service Modification Date and Time
Tenant name
Entered Tenant name
Tenant code
Entered Tenant code information
Tenant user count
Entered Tenant user count
Click the edit icon to edit
Resource Unit Count
Entered Resource Unit Count
Only displayed when applying for CAM service
Click the edit icon to edit
Payment status
Payment status and first payment date information
Requested Service
Display of Requested Service
Integration Support
Click the Add Application button to apply for integration support
Table. SingleID detailed information tab items
Reference
If the service status is Failed, you can resolve the issue by checking the error details in the Support Center > Contact Us menu.
Tag
You can view the tag information of the resource selected on the SingleID List page, and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can check the Key and Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. SingleID tag tab items
Activity History
You can view the activity history of the resource selected on the SingleID List page.
Category
Detailed description
Activity History List
Resource change history
Work details, work date/time, resource type, resource name, work result, and worker information can be checked
Click an entry in the Activity History list. The Activity History Details popup will open.
Table. SingleID Activity History tab items
Using the SingleID Admin Portal
In the Admin Portal, you can configure and manage SSO authentication settings, account synchronization integration, multi-factor authentication, etc.
To go to SingleID’s Admin Portal, follow the steps below.
Click the All Services > Security > SingleID menu. The Service Home page will be displayed.
On the Service Home page, click the SingleID menu. You will be taken to the SingleID List page.
On the SingleID List page, click the resource whose details you want to view. You will be taken to the SingleID Details page.
On the SingleID Details page, click the Admin Portal button. The SingleID Admin Portal window appears.
For detailed description of the Admin Portal, please refer to Admin Portal.
Using the SingleID CAM Portal
In the CAM Portal, you can set and manage console and resource access control and security management of the CSP.
To go to SingleID’s CAM Portal, follow the steps below.
Click the All Services > Security > SingleID menu. You will be taken to the Service Home page.
On the Service Home page, click the SingleID menu. You will be taken to the SingleID List page.
On the SingleID List page, click the resource whose details you want to view. You will be taken to the SingleID Details page.
On the SingleID Details page, click the CAM Portal button. The SingleID Cloud Access Management portal window appears.
For detailed description of CAM Portal, please refer to CAM Portal.
Cancel SingleID
You can reduce operating costs by terminating unused services.
To cancel SingleID, follow the steps below.
Click the All Services > Security > SingleID menu. You will be taken to the SingleID Service Home page.
On the SingleID List page, click the resource to be terminated. You will be taken to the SingleID Details page.
Click the Service Cancellation button. A termination notice window appears.
In the notice window, enter the Tenant name and click the Confirm button.
9.5.2.1 - SingleID Manuals
SingleID enables only authorized users to easily access information assets with a single authentication, strengthens account security through policy-based permission management and real-time detection of abnormal authentication behavior, and provides an account management and access framework through comprehensive history management.
SingleID Provided Manual List
SingleID provides various manuals as shown in the table below.
Category
Description
User Portal
- SingleID User Portal is the user interface of the SingleID service, providing various security features such as access to company applications, SSO, and access permission requests. - For more details, see User Portal.
Admin Portal
- SingleID Admin Portal provides all configuration and management functions through the Admin Portal for all authentication services and account management services of organizations using the service, as well as the establishment and setting of security policies. - For more details, refer to Admin Portal.
MFA Portal
- While maintaining the authentication systems used by existing applications, SingleID can additionally require users to complete various second-factor authentications through system integration to enhance security. SingleID also provides the MFA Portal so that users can pre-register and manage their preferred authentication methods. - For more details, refer to MFA Portal.
CAM Portal
- CAM(Cloud Access Management) Portal is a service for cloud console and resource access management that provides users with an easy and convenient way to access cloud consoles and resources. Users can access the portal from a PC located on the internal network using multi-factor authentication (MFA). It issues one-time tokens instead of passwords to enable access to cloud consoles and resources, and allows monitoring of all access, operation history, and abnormal behavior. - For more details, see CAM Portal.
SingleID Authenticator
- SingleID Authenticator is a SingleID dedicated authentication tool that can conveniently and securely authenticate website users’ identity verification and multi-factor authentication using a mobile phone. - For more details, refer to SingleID Authenticator.
SingleID Open API Guides
- Provides various APIs such as applications, Idp, authentication, etc., for using SingleID. - For more details, refer to the Open API Guides.
Table. SingleID manual list
Reference
The features and configurations provided to the user may vary depending on the SingleID product configuration.
9.5.2.1.1 - User Portal
Overview
SingleID allows only authorized users to easily access information assets with a single authentication, strengthens account security through policy-based permission management and real-time detection of authentication anomalies, and provides an account management and access framework through comprehensive history management.
Provided Features
Integrated authentication and account management
Support various authentication integration protocols (SAML, OIDC, etc)
Provision of self-service function for app usage request and approval
Account synchronization with Salesforce, Workday, and other apps, and role (group) synchronization/management within the account
Provide sign-up/withdrawal functionality that can issue accounts to partners, customers, etc., who are not employees.
Passwordless and Multi-Factor Authentication
PC/Mobile passwordless authentication and multi-factor authentication (MFA)
Composite secondary authentication provided through integration with an existing primary authentication environment (MFA-only service use case)
Support for certificate-based authentication through Private CA (Certificate Authority), a private certificate issuance/management function (separate use case)
Authentication and Account Information Integration
Automation of app integration through Pre-Built Connector
Simplified custom app integration through DIY integration templates
Risk-based authentication anomaly detection
Context-based access control based on the situation of attempting authentication
Strengthening security through detailed login and authentication policy settings
Public cloud access management for cloud operators/developers
Console access control through role-based assigned accounts
Resource access permission request/approval and resource access using OTP-based credential verification method
Notice
Depending on the company’s SingleID usage plan, the features provided to users may vary.
Service Configuration Diagram
Figure. SingleID Diagram
Reference
Depending on the SingleID product configuration, the features and configurations provided to the user may differ.
What is the User Portal?
SingleID User Portal is the user interface of the SingleID service, providing various security features such as access to company applications, SSO, and access permission requests.
User Portal Screen Layout
User Portal is composed of the following menus.
My App
App Catalog
Notification
Approval Request
Manual composition
This manual is composed of the following contents.
Overview: Explains the concept and manual screen composition with the SingleID overview.
Announcements and Language Settings: Explains how to set the language in the SingleID solution and how to check urgent announcements that can be viewed before logging in.
Login and Authentication: It explains how to register and use various authentication methods for login.
Register authentication tool: Explains the enrollment process where the user registers an authentication tool.
Sign Up: Explains the two methods of sign up.
Find ID: Describes the procedure where the user finds their ID themselves through the Find ID function.
Privacy Policy and Terms of Use: Explains the privacy policy and terms of use that can be found via the link at the bottom of the screen.
PC SSO Agent: Describes the PC SSO Agent, which is a login/logout auxiliary function of SingleID.
My App: Describes the My App menu that can be accessed via SSO.
App Catalog: Describes the App Catalog menu that allows you to view the list of apps that can be requested.
Notification: Describes the Notification menu that can check emergency notices and general notices.
Approval Request: Describes the Approval Request menu, where you can request or approve app usage.
Personal Settings: Describes the personal settings menu, covering Personal Information Settings (photo, preferred language, system time zone), Authentication Settings, Login History/Environment, Logout, and more.
9.5.2.1.1.1 - Notice and Language Settings
Notice
You can check the notice notifications posted by the administrator on the user portal login screen and the screen after logging in to the user portal. Notices are divided into general notices and urgent notices.
General Notice: General notices posted by administrators, used to deliver information to users. It can be checked in the User Portal > Notification menu.
Urgent Notice: Urgent notices posted by the administrator, and can be checked on the User Portal > Login Screen and User Portal > Notification menu.
Language setting
To modify the language that appears on the screen, follow these steps.
At the top of the User Portal screen, click the language selection. A dropdown list for choosing between Korean and English appears.
Select your desired language. The screen switches to the selected language.
Note
At initial login, the portal is displayed in the language set in the user's browser. If that language is not Korean or English, English is used.
Guide
All SingleID portal sites provide services in Korean and English.
9.5.2.1.1.2 - Login using authentication method
Log in using authentication method
What is authentication method?
An authentication method, commonly called an Authenticator, refers to an authentication tool.
SingleID provides the following authentication methods for user authentication.
Password: Enter password on SingleID login screen
Email OTP: Send OTP via email and enter OTP on the SingleID login screen
SMS OTP: Send OTP via SMS and enter OTP on the SingleID login screen
Knox Messenger OTP: Send OTP via Knox Messenger and enter OTP on the SingleID login screen
Knox Identity: Authentication integration with Knox Portal user ID/Password
SingleID Authenticator Bio: Install the dedicated SingleID mobile app and link authentication with biometric verification
SingleID Authenticator PIN: Install the dedicated SingleID mobile app and link authentication with a PIN.
SingleID Authenticator mOTP: Install the SingleID dedicated mobile app and integrate authentication with mOTP (Mobile OTP)
SingleID Authenticator TOTP: Install the SingleID dedicated mobile app and integrate authentication with TOTP (Time-based OTP)
Passkey: Log in and authenticate without a password using biometrics (fingerprint, face), mobile, or a PIN code, based on Windows Hello
Reference
If you are using the SingleID Authenticator mobile app for the first time, please refer to SingleID Authenticator.
Enter user ID
The user attempts to log in by entering their ID on the login screen below.
To log in using the user ID, follow the steps below.
On the login screen, enter your ID in the Account ID input field and click the Next button.
Enter the password in the password field, and click the Next button.
Login is completed.
Passwordless Login
SingleID provides login service without a password.
To log in without using a password, follow the steps below.
On the login screen, click Do you want to log in without a password?.
The Select verification method screen appears. Click the desired authentication method.
Enter the authentication code according to the selected authentication method.
After login is completed, you will be taken to the User Portal main screen.
Reference
Authentication methods displayed as Registration Required require registration. Click Registration Required to register immediately, or check Register Authentication Tool.
Notice
Passwordless login may not be provided depending on whether it is set in the login policy settings. Please contact the administrator.
Set Preferred Authentication Method
SingleID users log in to the User Portal provided by SingleID and set up their preferred primary and secondary authentication methods.
If the user sets their preferred method, the Select verification method screen is omitted during login and authentication, allowing immediate authentication using primary and secondary methods.
If you want to set your preferred authentication method, follow the steps below.
Click User Portal > Personal Profile > Authentication settings.
The Authentication Settings screen appears.
Click ☆ 1st or ☆ 2nd in front of the desired authentication method.
Only one method can be selected for each of 1st and 2nd. Selection is completed when the star changes to ★.
Once the setup is complete, it will be configured in that manner for the next login, providing convenient login.
Reference
Even if a user sets a preferred authentication method for first and second factor authentication, the administrator can restrict it to a specific authentication method through login policy settings.
Register authentication method
All authentication methods can be set by the user. Registering an authentication method by the user is called enrollment. When a user account is first created, only email OTP is automatically enrolled using the email information from the user data. Other authentication methods can be directly enrolled by the user as needed.
There are two ways to register authentication methods (Enrollment).
Register from Authentication Settings: in User Portal > Profile > Authentication settings, click the + Add New button at the bottom to register.
Register from the Select verification method screen: during first or second authentication at login, on the Select verification method screen, select an authentication method marked with a gray check mark (V) and register it.
Reference
For detailed information about authentication method registration (Enrollment), refer to Register Authentication Tool.
First login
Password Reset
When logging in for the first time, the user must reset the password before logging in.
If you want to reset your password, follow the steps below.
On the login screen, enter your ID in the Account ID input field and click the Next button.
Click reset password under the Next button.
Consent for collection/use of personal information
When logging in for the first time, or periodically thereafter, SingleID requires consent for the collection/use of personal information. Follow the consent procedure and agree to the required and, if desired, optional items.
Required items must be selected to log in.
Password Authentication
Password is the most basic authentication method as the default authentication tool of SingleID.
Enter password
Follow the steps below to log in using your user ID.
On the login screen, enter your ID in the Account ID input field and click the Next button.
Enter your password in the Password input field and click the Next button to log in.
Reference
If you click the eye-shaped icon in the password input field, you can view the password you entered.
Caution
If the password is entered incorrectly
If the password is entered incorrectly, the message ID or password is incorrect. (1/3) appears and re-entry is required. The number of retries is limited to the value set by the administrator in the password policy.
If the account is locked after consecutive incorrect entries
If the account is locked after repeated incorrect password entries, it can be unlocked in two ways.
Automatic unlock after 1-5 minutes: When automatic unlock is set, the account is locked for 1-5 minutes, after which login is possible again.
Unlock with password reset: When the administrator's password policy requires a password reset, login is possible after resetting the password. See Find ID for details.
Email OTP Authentication
Authenticate
When you authenticate with email OTP, an OTP is sent to your registered email.
To authenticate with email OTP, follow the steps below.
Click Email on the Select verification method screen.
An OTP code will be sent to the registered email. Enter the OTP within the time set by the administrator (usually 3-5 minutes).
After entering the OTP, click the Confirm button, and the authentication will be completed.
Reference
Code Resend: If you exceed the input validity time, click the code resend button. The OTP code will be resent via email.
‘Would you like to authenticate in a different way?’: If the current authentication cannot be used, switch to a different authentication method.
‘If you have changed your email, please register.’: Depending on the administrator settings, you can register (Enrollment) a different email to authenticate. For registration, you can check the details at Email Authentication Tool Registration.
Guide
If you entered the code incorrectly
If the user enters the OTP code incorrectly, they can re-enter it as many times as the administrator specifies.
When locked due to exceeding the user input limit
If you enter the OTP code incorrectly more times than the administrator allows, the screen is locked for the duration set by the administrator. After that period, refresh the screen and enter the code again.
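The validity window and retry limit described above can be sketched server-side as follows. The 180-second TTL, three-attempt limit, and status strings are illustrative stand-ins for the administrator-configured values, not SingleID's actual implementation.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Challenge:
    code: str
    issued_at: float
    attempts: int = 0

class OtpStore:
    """Issue one-time codes and enforce an expiry window plus a retry limit."""

    def __init__(self, ttl_seconds: int = 180, max_attempts: int = 3):
        self.ttl = ttl_seconds           # stand-in for the admin-set validity time
        self.max_attempts = max_attempts
        self._pending: dict[str, Challenge] = {}

    def issue(self, user_id: str) -> str:
        code = f"{secrets.randbelow(10 ** 6):06d}"  # 6-digit code sent by email/SMS
        self._pending[user_id] = Challenge(code, time.time())
        return code

    def verify(self, user_id: str, code: str) -> str:
        challenge = self._pending.get(user_id)
        if challenge is None:
            return "no_challenge"
        if time.time() - challenge.issued_at > self.ttl:
            del self._pending[user_id]
            return "expired"             # user must request a code resend
        challenge.attempts += 1
        if challenge.attempts > self.max_attempts:
            del self._pending[user_id]
            return "locked"              # wait out the lockout period
        if secrets.compare_digest(code, challenge.code):
            del self._pending[user_id]   # a code is single-use
            return "ok"
        return "retry"
```

The "expired" branch corresponds to clicking code resend, and the "locked" branch corresponds to the lockout described in the guide above.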
SMS OTP authentication
Authenticate
When you authenticate with SMS OTP, an OTP is sent to your registered mobile phone.
To authenticate with SMS OTP, follow the steps below.
Click SMS on the Select verification method screen.
The OTP code is sent to the registered mobile phone. Enter the OTP within the time set by the administrator (usually 3-5 minutes).
After entering, click the Confirm button, and the authentication will be completed.
Reference
Resend Code: If you exceed the input validity time, click the resend code button. The OTP code will be resent to your mobile phone.
‘Would you like to authenticate in a different way?’: If the current authentication cannot be used, switch to a different authentication method.
‘If you have changed your mobile phone, please register.’: Clicking the link will take you to a screen for enrolling with the new mobile. You can check the details for registration at Register SMS authentication tool.
Notice
If you entered the code incorrectly
If the user enters the OTP code incorrectly, they can re-enter it as many times as the administrator specifies.
If locked due to exceeding the user input limit
If you enter the OTP code incorrectly more times than the administrator allows, the screen is locked for the duration set by the administrator. After that period, refresh the screen and enter the code again.
Knox Messenger OTP authentication
Authenticate
When you authenticate with Knox Messenger OTP, the OTP is sent to the Knox Messenger you are using.
To authenticate with Knox Messenger OTP, follow the steps below.
Click Knox Messenger on the Select verification method screen.
The OTP code is sent via the Knox Messenger you are using. Enter the OTP within the time set by the administrator (usually 3-5 minutes).
After entering, click the Confirm button, and the authentication will be completed.
Reference
Resend Code: If you exceed the input validity time, click the resend code button. The OTP code will be resent via Knox Messenger.
‘Would you like to authenticate in a different way?’: If the current authentication cannot be used, switch to a different authentication method.
‘Would you like to use a different Knox ID?’: Clicking the link takes you to a screen for enrolling a new Knox ID. For registration, you can see the details at Knox Messenger Authentication Tool Registration.
Guide
If you entered the code incorrectly
If the user enters the OTP code incorrectly, they can re-enter it as many times as the administrator specifies.
If locked due to exceeding the user input limit
If you enter the OTP code incorrectly more times than the administrator allows, the screen is locked for the duration set by the administrator. After that period, refresh the screen and enter the code again.
Knox Identity Password Authentication
Authenticate
To authenticate with Knox Identity, you need to enter the Knox Identity password you are using.
If you want to authenticate with Knox Identity, follow the steps below.
Click Knox Identity on the Select verification method screen.
Enter the password for your own Knox account.
After entering, click the Confirm button, and the authentication will be completed.
Reference
‘Would you like to authenticate in a different way?’: If the current authentication cannot be used, switch to a different authentication method.
Guide
If the password is entered incorrectly
If the user enters the password incorrectly, they can re-enter it as many times as the administrator specifies.
When locked due to exceeding the user input limit
If you enter the password incorrectly more times than the administrator allows, the screen is locked for the duration set by the administrator. After that period, refresh the screen and enter the password again.
SingleID Authenticator Authentication
SingleID service provides a mobile authentication app called SingleID Authenticator, and offers authentication in various ways.
Authentication method
Authentication method
Description
SingleID Authenticator Bio
Send a push via the installed SingleID Authenticator mobile app on the mobile to request biometric authentication.
SingleID Authenticator Pin
Send a push using the installed SingleID Authenticator mobile app on the mobile device to request authentication with a PIN code. Not provided
SingleID Authenticator TOTP
Send a push via the installed SingleID Authenticator mobile app on the mobile device to request authentication with TOTP.
SingleID Authenticator mOTP
Send a push via the installed SingleID Authenticator mobile app on the mobile device to request authentication with mOTP.
For how to install and configure SingleID Authenticator, refer to SingleID Authenticator.
Passkey authentication
The SingleID service provides simple authentication and multi-factor authentication through Windows-based Passkeys.
Authentication Method
Simple authentication: Provides easy login without an ID/password via Sign in with Passkey at the bottom of the login page.
Multi-factor authentication: Provides easy login without an ID/password during second-factor authentication.
Authentication Types
Mobile Passkey: Scan a QR code and log in using an Android or iOS mobile device
Security key: Log in using a Windows security key
PIN: Log in using a Windows PIN code
Reference
Passkey support environment
1. Operating system (laptop or desktop)
Windows 11, macOS Ventura, or ChromeOS 109 or higher
Mobile phone: iOS 16 or Android 9 or higher
Hardware security key: a hardware security key supporting the FIDO2 protocol
2. Browser version
Chrome 109 or higher
Safari 16 or higher
Edge 109
3. Device settings
Bluetooth enabled
Screen lock password set
PIN code registered
Fingerprint or facial recognition allowed
Reference
To use a Passkey, Windows Hello must be set up in advance. For details, see the Reference Link.
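Passkeys are built on the W3C WebAuthn standard: at registration, the server sends PublicKeyCredentialCreationOptions that the browser passes to navigator.credentials.create(). The Python sketch below shows what such an options payload might look like; the relying-party ID and other field values are hypothetical examples, not SingleID's actual configuration.

```python
import base64
import secrets


def registration_options(user_id: str, user_name: str) -> dict:
    """Build a minimal PublicKeyCredentialCreationOptions payload for
    navigator.credentials.create(); field values are illustrative."""
    challenge = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    return {
        "challenge": challenge,                                  # random, single-use
        "rp": {"id": "sso.example.com", "name": "Example SSO"},  # hypothetical relying party
        "user": {"id": user_id, "name": user_name, "displayName": user_name},
        "pubKeyCredParams": [
            {"type": "public-key", "alg": -7},                   # ES256
            {"type": "public-key", "alg": -257},                 # RS256
        ],
        "authenticatorSelection": {
            "residentKey": "required",        # discoverable credential, i.e. a passkey
            "userVerification": "required",   # PIN / fingerprint / face
        },
        "timeout": 60000,                     # milliseconds
    }
```

Requiring a resident (discoverable) key with user verification is what lets the passkey replace both the ID/password and the second factor.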
Admin Authentication
Authenticate
In the SingleID service, an administrator can complete identity verification on behalf of a user (delegated authentication).
If you want to perform administrator authentication, follow the steps below.
On the identity verification method selection screen, click the ‘If you cannot verify your identity, you can request verification from the administrator. Click here’ link at the bottom of the screen.
On the administrator selection screen, select the administrator to delegate and click the Request button.
After you click the Request button, authentication is completed once the selected administrator approves the request.
Guide
If the phrase ‘If you cannot verify your identity, you can request verification from the administrator. Click here’ does not appear at the bottom of the screen, the administrator has disabled the delegation feature by policy. Please contact the administrator.
9.5.2.1.1.3 - Register authentication tool
Register authentication tool
As a rule, all authentication tools should be registered by the users themselves; registering an authentication tool is called enrollment. When a user is first created, only Email OTP is registered automatically, using the email address in the user data. The remaining authentication tools can be registered directly by the user as needed.
There are three ways to register.
Login screen > enter ID/Password > register on the identity verification method selection screen
On the identity verification method selection screen, click an authentication tool marked Registration Required (V mark) to register it.
User Portal (after login) > Profile > Authentication Settings > click the + Add New button to register
Register through the registration message link at the bottom of all authentication screens
The screen below is an example of the SMS verification screen. At the bottom, click the ‘If you have changed your mobile phone, please register.’ message to register.
Every authentication code entry screen offers registration through a similar message at the bottom (message format: ‘~ please register.’).
Example of authentication code input screen
Figure. Authentication Screen
Register Email Verification Tool
Email registration consists of the following three steps.
Verification Stage: The identity verification step before registering the email authentication tool.
Registration Stage: The step of registering a new email address and checking that it is valid.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
This step verifies identity before using the authentication tool. To view the identity verification process, refer to Login and Authenticate.
Caution
In the verification stage, only the authentication tools configured by the administrator can be used.
Registration Stage
In this step, you register the desired email address and verify that it is valid.
The user proceeds as follows.
When you complete identity verification in the verification stage, you automatically move to the registration stage.
Enter the email address you want to register.
Click the Send verification code button.
Check the OTP code sent to the entered email address, and enter the OTP code on the screen.
If the verification code is entered correctly, it moves to the Complete stage.
Notice
According to company policy, for security reasons, a new email address that is not a company email address may not be registered.
Completion Stage
The registration complete screen appears, and on your next login you can perform first- and second-factor authentication using the email authentication tool.
Register SMS authentication tool
SMS registration consists of the following three steps.
Verification step: This is the identity verification step before registering the SMS authentication tool.
Registration Stage: This is the stage where you register a new mobile phone number and check whether the number is valid.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
This is the step of verifying your identity before using the authentication tool. To view the identity verification process, refer to Login and Authentication.
In the verification stage, only the authentication tools configured by the administrator can be used.
Registration Stage
In this step, you register the desired mobile phone number and verify that it is valid.
The user proceeds as follows.
When you complete identity verification in the verification stage, you automatically move to the registration stage.
Select the Country code, and enter the mobile phone number you want to register.
Click the Send verification code button.
Check the OTP code sent to the entered mobile phone number, and enter the OTP code on the screen.
If the verification code is entered correctly, it moves to the complete stage.
Completion Stage
The registration complete screen appears, and on your next login you can perform first- and second-factor authentication using the SMS authentication tool.
Register Knox Messenger authentication tool
Knox Messenger registration consists of the following three steps.
Verification Stage: The identity verification step before registering the Knox Messenger authentication tool.
Registration Stage: The step of entering the Knox ID to register and checking that it is valid.
Completion Stage: The final step confirming that registration has completed successfully.
Verification Stage
This is the identity verification step before using the authentication tool. To view the identity verification process, see Login and Authentication.
In the verification stage, only the authentication tools configured by the administrator can be used.
Registration Stage
In this step, you register the desired Knox ID and verify that it is valid.
The user proceeds as follows.
When you complete identity verification in the verification stage, you automatically move to the registration stage.
Enter the Knox ID to register.
Click the Send verification code button.
Check the OTP code sent to Knox Messenger of the entered Knox ID, and enter the OTP code on the screen.
If the authentication code is entered correctly, it moves to the complete stage.
Completion Stage
The registration complete screen appears, and on your next login you can perform first- and second-factor authentication using the Knox Messenger authentication tool.
Register Passkey authentication tool
Passkey enrollment consists of the following three steps.
Verification stage: This is the identity verification stage before registering the Passkey authentication tool.
Registration Stage: Passkey registration stage.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
This is the step to verify your identity before registering the authentication tool. To view the identity verification process, refer to Login and Authenticate.
Notice
In the verification stage, only the authentication tools configured by the administrator can be used.
Registration Stage
In this step, you confirm the mobile phone or PC environment where you want to register the Passkey.
Complete the registration process in the four steps below.
Activation: Guides you through the Passkey supported environments.
Verification: Complete identity verification using an authentication method.
Registration: The Passkey registration step. Click the Create on this device button to generate and register a Passkey on the PC, or click the Create on another device button to register with a mobile phone or hardware security key.
Complete: Confirms that registration has completed. Click the Continue button.
Reference
Passkey supported environment
1. Operating system (laptop or desktop)
Windows 11, macOS Ventura, or ChromeOS 109 or higher
Mobile phone: iOS 16 or Android 9 or higher
Hardware security key: a hardware security key supporting the FIDO2 protocol
2. Browser version
Chrome 109 or higher
Safari 16 or higher
Edge 109
3. Device settings
Bluetooth enabled
Screen lock password set
PIN code registered
Fingerprint or facial recognition allowed
Completion Stage
After Passkey registration is completed, the registration complete screen appears. On your next login, you can perform first- and second-factor authentication using the Windows Hello authentication tool.
Reference
To use a Passkey on a PC, Windows Hello must be set up in advance. For details, see the Reference Link.
A Passkey can be registered on mobile only in an environment where QR code scanning is possible.
Register SingleID Authenticator authentication tool
SingleID Authenticator is an authentication tool provided with the SingleID service.
SingleID Authenticator enrollment consists of the following four steps.
Verification Stage: The identity verification step before registering the SingleID Authenticator authentication tool.
Installation Stage: Guides the user through installing the SingleID Authenticator app.
Registration Stage: The step of registering the newly installed mobile app and completing service registration.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
Before using the authentication tool, you must verify your identity. To view the identity verification process, refer to Login and Authenticate.
Guide
In the verification stage, only the authentication tools configured by the administrator can be used.
Installation Steps
There are three main ways to install the SingleID mobile app.
Scan the QR code with your mobile phone, or search for “SingleID” on Google Play (Android) or the App Store (iOS) to install SingleID Authenticator.
Enter your mobile phone number and install via the download link sent by SMS.
Install via the manual download link.
After installing the SingleID Authenticator app, click the Next button to proceed to the registration stage.
Registration Stage
After installing the SingleID Authenticator mobile app on the mobile phone you want to register, please run SingleID Authenticator.
Please perform the registration process in the following three steps.
Service Registration: Click the ‘+’ at the top of the SingleID Authenticator app.
QR or authentication code input: Scan the QR code or enter the authentication code to register.
Service Registration Complete: Click the Confirm button to complete the registration.
Completion Stage
After registration is completed in SingleID Authenticator, the registration complete screen appears. On your next login, you can perform first- and second-factor authentication using the SingleID Authenticator authentication tool.
9.5.2.1.1.4 - Sign up
Sign up
According to the company’s internal policy, users who are not employees, such as partners, subsidiaries, and customers, can create an account through separate membership registration.
Sign up through the login page link
This is a method of signing up through the sign up link on the login page.
On the login page, if you don't have an account, click the Sign up link at the bottom.
Agreement
To sign up, you need to agree to the terms and conditions.
Information Input
Follow the procedure below.
Please enter the email you want to register.
After entering the email, click the OTP transmission button; the OTP code will be sent.
Enter the OTP code received at that email address and click the Confirm button.
If you enter the authentication code correctly, the sign-up button will be activated.
Click the Sign up button.
Information Input
Enter various personal information required for membership.
Division
Description
ID
Enter the ID to register
Korean Name
Enter Korean Name
English Name
Enter English Name
Phone Number
Enter registered country and mobile phone number
OTP Code
Enter the received OTP code
Department
Department Name Input
Language and Time Zone
User language and time zone settings
Table. Personal Information Input Items
Notice
The above information input items may vary depending on the company’s membership policy.
Membership
After entering your personal information and clicking the Join button, the approval request is submitted. You can proceed to the next step once approval is complete.
Once the administrator completes the approval, you can log in after resetting your password.
Notice
You may also be able to join without approval according to the membership policy.
Membership through invitation email
You can join through an invitation email from the administrator.
By clicking the sign up button in the received email, you can sign up for membership.
9.5.2.1.1.5 - Find ID
Find ID
If the user has forgotten their ID, click Find ID on the login screen.
Find ID using mobile phone number
The user can find their ID by entering their name and mobile phone number.
Follow the procedure below.
Click the Mobile tab.
Enter your first name.
Enter your last name.
Enter the country code and phone number.
Click the Send Authentication Code button.
On the authentication code input screen, enter the received authentication code and click the Confirm button.
Reference
If no ID with the corresponding information exists, an ‘ID not found’ message appears. To search again, click the Go back to find ID button.
Password Reset
Reset Password
If the user wants to reset their password, click Password Reset at the bottom of the login screen.
Perform self-authentication
To reset a password, the user must first complete identity verification. When the Password Reset button is clicked, a screen for selecting an authentication method according to the administrator's policy appears. For more information on authentication, refer to Logging in and Authenticating.
Password Reset
Once identity verification is complete, the user moves to the screen for setting a new password. The password must match the pattern and complexity rules set by the administrator as policy. As the user types, each rule is displayed in green if it is met and in red if it is not; set the password so that all items turn green.
Please follow the following procedure to reset your password.
Please enter a new password.
If the new password does not meet the complexity and pattern requirements set by the administrator, choose a more complex password.
To prevent input errors, enter the same password once more.
Click the Change Password button.
When the password setting is complete, clicking the Login with Password button will take you back to the login screen.
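The green/red per-rule feedback described above amounts to evaluating the candidate password against each policy rule independently. The Python sketch below shows the idea with five example rules; the actual rules are whatever the administrator configures, so this rule set is an illustrative assumption.

```python
import re

# Example rules; the real pattern/complexity requirements are set by the administrator.
RULES = {
    "length":    lambda pw: len(pw) >= 8,
    "uppercase": lambda pw: re.search(r"[A-Z]", pw) is not None,
    "lowercase": lambda pw: re.search(r"[a-z]", pw) is not None,
    "digit":     lambda pw: re.search(r"\d", pw) is not None,
    "special":   lambda pw: re.search(r"[^A-Za-z0-9]", pw) is not None,
}


def check_password(pw: str) -> dict[str, bool]:
    """Per-rule results, mirroring the green (True) / red (False) indicators."""
    return {name: rule(pw) for name, rule in RULES.items()}


def all_green(pw: str) -> bool:
    """The Change Password button would only be usable when every rule is met."""
    return all(check_password(pw).values())
```

Evaluating every rule (rather than stopping at the first failure) is what lets the screen color each requirement individually.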
9.5.2.1.1.6 - Privacy Policy, Terms of Service, Service Desk
Every screen has links to the Privacy Policy and Terms of Service at the bottom left, so users can check them at any time.
Privacy Policy
A link to the Privacy Policy is provided at the bottom left of every screen, allowing users to view the privacy policy for SingleID services at any time.
To check the privacy policy, please follow the following procedure.
Click the Privacy Policy at the bottom left of the screen. You can view the latest version of the Privacy Policy.
To check a previous version, select the desired version at the top to view it.
Terms of Service
There is a link to Terms of Service at the bottom left of every screen, so users can always check the terms of service for SingleID services.
To check the terms of use, please follow the following procedure.
Click the Terms of Service at the bottom left of the screen. You can view the latest version of the Terms of Service.
To check a previous version, select the desired version at the top to view it.
Service Desk Information
If the user has any inquiries about SingleID, they can contact us using the Service Desk phone number and the representative email account at the bottom of the screen.
9.5.2.1.1.7 - PC SSO Agent
PC SSO Agent
SingleID PC SSO Agent provides integrated SSO authentication services in the Windows desktop environment.
SingleID PC SSO Agent provides the following features.
Integrated SSO and login/logout across web browsers
PC device authentication
Checking for installation of required security software (per SingleID administrator settings)
Notice
SingleID PC SSO Agent may not be used, depending on the administrator's settings (agentless operation).
Reference
PC SSO Agent recommended installation environment
Windows Desktop 10 and 11 (x86 and x64 CPU Only)
Web Browser: Microsoft Edge 88.x or higher, Chrome 87.x or higher
.NET Framework 4.0 or higher
Disk Capacity 100MB or more
Check if PC SSO Agent is installed
If the administrator has enabled the PC SSO Agent policy, SingleID automatically checks whether the SingleID SSO Agent is installed on the user's PC as follows:
After the user logs in to SingleID, it automatically checks whether the PC SSO Agent is installed.
If the PC SSO Agent is installed on the user's PC, the user automatically moves to the next screen; if not, the installation guide screen appears.
If the installation guide screen does not appear automatically, click the Next button to install the PC SSO Agent.
Download PC SSO Agent
Click the Download button on the PC SSO Agent installation guide screen to download and install the PC SSO Agent program on your PC.
Installing PC SSO Agent
After you download and install the SingleID Agent.exe file on your PC, an ‘ID’ icon appears in the system tray at the bottom right of the screen, as shown below.
If the PC SSO Agent is installed normally and SSO authentication succeeds, right-click the tray icon and click Status View to confirm that it is working normally.
Notice
If the installation does not proceed smoothly, remove the ‘SingleIdAgent’ app from the list of existing installed apps and install again.
Re-authentication attempt
After installing the PC SSO Agent, you can either log in again from the beginning or click the Re-authentication button at the bottom of the screen to retry authentication using the Agent.
Notice
SingleID PC SSO Agent integrates logout processing for Chrome and Edge browsers when logging out.
9.5.2.1.1.8 - My App
Recently used apps
When the user logs in to the User Portal, they can see the My Apps menu first.
The left menu bar can be expanded or collapsed by clicking the arrow(→) icon at the bottom left.
When you click the My App menu, three default sub-menus that cannot be modified appear.
Recently used apps
Bookmark
Basic App
Among them, clicking Recently Used Apps displays the apps that the user has used recently. Up to 12 recently used apps can be displayed.
Bookmark
In the My App menu, clicking the Bookmark menu displays the apps that the user has bookmarked. You can bookmark frequently used apps to use them conveniently.
You can add a bookmark by clicking the Bookmark button at the bottom right of the app card, and clicking it again will remove the bookmark. Up to 12 bookmarks are possible.
Add/Delete Bookmark
Click the Bookmark button at the bottom right of the app you want to add, and it will be added to the Bookmark. If you click again, the bookmark will be deleted.
Basic App
The Basic App menu shows all apps available to the logged-in user. When the user clicks an app, it is authenticated via SSO and opens in a new browser window. Clicking a disabled app shows a popup indicating that it is disabled.
Add category
The user can click the Add Category button to create a category with the user’s desired category name and manage the app.
Click the Add Category button, then enter the category name and click the Check button.
After adding a category, the user can click the More button located to the right of the category to move, change, or delete the category.
If a category containing apps is deleted, the apps in it are moved to the Default App category.
9.5.2.1.1.9 - App Catalog
Using the App Catalog
When you click the App Catalog menu, by default, the list of apps that are Pending Approval is displayed.
The App Catalog shows apps in three states:
Not in use: available for request
Pending Approval: The request for use has been completed and is waiting for approval
In use: The request for use has been approved and is in the state of being used
If an unused app has no Request button, company policy does not allow users to request it themselves. Contact the administrator to use the app.
Requesting App Usage
To request the use of an unused app, the user must click the Request button, enter the purpose of using the app, and then click the Request button.
The app usage approval process may vary depending on the administrator’s settings.
By default, the list of approvers set by the administrator is displayed; if there are multiple approvers, the outcome is determined by whichever approver acts first (approve or reject).
When the app usage request is completed, you can check the request status in two menus.
App Catalog: the status is shown as Pending Approval.
App Usage Approval > My Request: the details can be checked.
Click an app in the My Request list to view its details; while the request is pending approval, you can cancel it with the Cancel Request button.
9.5.2.1.1.10 - Notification
Notification
If you click the notification menu, you can check the notification list. There are two types of notifications.
Urgent: A notification that the tenant administrator posts urgently (e.g., a system outage); users can see it on the login screen even before logging in.
General: All non-urgent notifications; users can check them in the Notifications menu after logging in.
When you click the Notification menu, the All filter is selected by default, so both urgent and general notifications are displayed.
Unread notifications are shown as a count next to the Notification menu and marked with a red dot in the list, so they are easy to recognize.
Click a notification to view its details.
Name
Description
Type
The type of notice, either Urgent or General.
Title
This is a notice title.
Start date and time
This is the start date and time of the notice posting.
End Date/Time
This is the end date and time of the notice posting.
Table. Notification List
Approval Request
When you click the Approval Request menu, the administrator can view and cancel all users' approval requests.
Approval requests consist of the Approval request list and Approval request queue tabs.
Approval Request List
Approval requests can be in several statuses. You can filter them easily using the Approval Request, Approve, Reject, and Cancel Submission buttons at the top; for a detailed search, use the search bar at the top right.
Approval Request: Shows all approval requests.
Approve: Shows all requests that have been approved.
Reject: Shows approval requests that have been rejected.
Cancel Submission: Shows approval requests whose submission has been cancelled.
The description of the approval request list items is as follows.
Name
Description
Approval System
Indicates the approval system according to the approval policy, so you can verify which approval system the request went through. For details, refer to Policy > Approval Policy.
Title
The title of the approval request.
Start date and time
The start date and time of the approval request.
End Date/Time
The end date and time of the approval request.
Table. Approval Request List
9.5.2.1.1.11 - Approval Request
Approval Request
The app usage approval menu provides two functions.
My Request Tab: Displays the list of apps I have requested to use.
Approval List Tab: Displays the list of app usage requests for which I am the approver.
Requesting App Usage
To request the use of an unused app, the user must click the request button, enter the purpose of using the app, and then click the request button. The app usage approval process may vary from company to company.
By default, the list of approvers set by the tenant administrator is displayed; if there are multiple approvers, the outcome is determined by whichever approver acts first (approve or reject).
When the app usage request is completed, you can check the request status in two menus.
App Catalog: the status is shown as Pending Approval.
Approval Request > My Request: the details can be checked and additional actions performed.
My Request
You can check the details by clicking the app in the My Request list, and when waiting for use approval, you can cancel the request through the Cancel Request button.
When the use approval is completed, the status item in my request list will be changed to Approved.
By clicking approved apps in the list, you can check the details of the approved use.
Approval List
If you are an app usage approver, click the Approval List tab.
If a user has requested approval to use an app, the status item in the list is shown as Pending Approval.
To check the details of the requested approval, click on the corresponding list.
After checking the details and leaving an approver's comment, click the Approve button to approve the request so that the requester can use the app.
In the Approval List tab, you can see that the status item has changed to Approved.
By clicking on the app in the list, you can also check the details of the history approved by the user as an approver.
9.5.2.1.1.12 - Personal Profile
Set up personal information
This is a menu for the user’s environment settings.
To set up your personal information, please follow the following procedure.
Click Personal Profile > Personal Information Settings at the top right corner of the screen.
You can check photos, names, emails, phone numbers, languages, and time zones.
Photo: Click the photo to change it and upload the icon image you want to display.
Language: Select your desired language, Korean or English.
Time Zone: Select the time zone where you are currently located. When you click the City Search button, a city search popup appears; search for the desired city in English and select it.
Click the Save button at the bottom of the screen to save.
Reference
Clicking the Withdraw button at the bottom left of the personal information screen deletes your current user account.
Withdrawal deletes your account, so proceed only if you are sure you want to delete it.
Set up authentication
You can register the user’s authentication tool and set the preferred authentication tool.
To set up authentication, please follow the following procedure.
Click the Personal Profile > Authentication setting on the top right corner of the screen.
Click the + Add New button to add the desired authentication tool.
Click the Delete button to remove an authentication tool you no longer use.
Click the ☆ icon to set your preferred authentication method.
Reference
Please refer to ‘How to register/delete authentication tools’(link insertion needed) for the user’s authentication tool registration/deletion method.
Change password
In Authentication Settings, click Change Password and complete identity verification to change your password.
Check login history
You can check the user’s login history/environment.
To view the user’s login history/environment, please follow the following procedure.
Click Personal Profile > Login History/Environment at the top right corner of the screen.
The Login History tab shows login time, location, country, city, IP address, OS type, browser type, detection, and result.
The Login Environment tab shows the details of any registered login environments; environments no longer in use can be deleted with the Delete button.
If the SingleID ADM (Anomaly Detection Management) feature is in use, the Detection item is displayed as Normal or Detected; Detected marks a login where an authentication anomaly was found.
Log out
Click the photo icon located at the top right of the screen and click Logout.
When you click the Logout button, you will be logged out of all applications you visited through SingleID, and if PC SSO Agent is set up for integrated logout, you will also be logged out of the associated browser.
9.5.2.1.2 - Admin Portal
SingleID provides SSO (Single Sign-On) authentication service and account management (Identity Management) service needed to access various business systems in the company’s on-premise and cloud environments.
All authentication services and account management services of organizations using SingleID, as well as the establishment and configuration of security policies, are managed through the Admin Portal.
Users who can access the Admin Portal to configure and manage the system are called administrators, and through the Admin Portal’s management functions, they can integrate the organization’s business systems without restriction and define security policies to access each business system.
The administrative functions provided by the Admin Portal are as follows.
Function
Description
Notification Management
Through the user portal, you can register posts to announce to the organization’s users and manage posting periods, etc. If there is urgent information related to system usage, you can post the content on the login screen so that even users who are not logged in can see it.
Application Integration Management
Connects the organization’s internal business systems or cloud environment business systems. You can configure authentication integration using standard protocols such as SAML, OIDC, or use the SCIM protocol to import information such as accounts and groups into SingleID or export them through SingleID.
Identity Provider Integration Management
If an integrated authentication environment is already set up within the organization, you can register the system as an Identity Provider so that you can use applications linked by SingleID without re-authenticating through SingleID. Authentication integration with all Identity Providers that use standard authentication protocols such as SAML and OIDC is possible.
Authenticator Management
You can add and manage Authenticators to configure user identity verification or multi-factor authentication. If you add a desktop Authenticator such as PC SSO Agent, you can use multi-browser SSO.
MFA Service Provider Integration Management
If you want to raise the security level of access to business systems while keeping an authentication system already configured in the organization, you can link a business system with an MFA Service Provider to add only the multi-factor authentication function to that system. Once linked, the authentication environment can be configured to perform second-factor authentication using the Authenticators added to SingleID.
User Management
You can view and edit all users registered in the organization, and you can delete users or directly register new users. You can also change a user’s group membership or assign permissions so that the user can use the application.
Group Management
You can view and edit all groups registered in the organization, delete groups, or register new groups. You can also change the group’s membership rules or assign permissions so that group members can use the application.
Login Policy Management
You can set detailed policies on which authentication methods users can use when logging in with SingleID, and, if necessary, create and manage condition-specific authentication policies for users authenticating in specific environments.
Authentication Policy Management
According to the organization’s security policy, detailed authentication settings can be configured by dividing them into the following four categories: Session Policy, Authenticator Policy, MFA Service Provider Policy, Password Policy
Abnormal behavior detection policy management
SingleID collects and analyzes user behavior information before and after authentication in real time to determine whether there is abnormal authentication behavior, and provides a function that immediately notifies the user of risk when identified as belonging to an abnormal authentication category. Tenant administrators can manage detailed settings of policies for abnormal behavior detection and decide whether each policy is enabled.
Terms and Conditions Management
Using the provided templates, register privacy policies, terms of use, and conditions tailored to the organization’s needs, then notify users and obtain their consent.
SMS Settings
SingleID issues OTPs via SMS for identity verification and authentication. In SMS Settings, you can configure the SMS messages that SingleID sends.
Table. Admin Portal provided features
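As described in SMS Settings above, SingleID issues OTPs via SMS for identity verification. As a rough illustration only (not SingleID's actual implementation; the code length and message template are assumptions), issuing a numeric OTP might look like:

```python
import secrets

def generate_sms_otp(digits: int = 6) -> str:
    """Generate a zero-padded numeric one-time password.

    Uses the cryptographically secure `secrets` module rather than
    `random`, since OTPs are security-sensitive.
    """
    return str(secrets.randbelow(10 ** digits)).zfill(digits)

def build_sms_message(otp: str,
                      template: str = "[SingleID] Verification code: {otp}") -> str:
    # The message template is an assumption; in SingleID the actual SMS
    # text is configured by the administrator in SMS Settings.
    return template.format(otp=otp)

otp = generate_sms_otp()
print(build_sms_message(otp))
```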
If you are using SingleID for the first time, you can set up the basic environment by configuring the functions in the following order.
The supported range and recommended specifications for the SingleID connection environment are as follows.
Support
Recommended
Windows: Windows Desktop 10 and 11 (x86 and x64 CPUs only); Web browser: Microsoft Edge, latest public version
Windows: Windows Desktop 10 and 11 (x86 and x64 CPUs only); Web browser: Microsoft Edge 88.x or later, Chrome 87.x or later
Android: 8 and later; Web browser: Samsung Internet, latest public version
Android: 8 and later, on Samsung Galaxy models released in 2018 or later (Galaxy S9 or later); Web browser: Samsung Internet 9.0 or later
iOS: 16 and 17; Web browser: Safari, latest public version
iOS: 16 and 17, on Apple iPhone models released in 2018 or later (iPhone Xs or later); Web browser: Safari 14.1 or later
Table. SingleID Connection Environment Support Scope and Recommended Specifications
9.5.2.1.2.1 - Dashboard
Notifications deliver and share important alerts about the use of SingleID with users.
Administrators register and manage notifications through the Notifications menu, selecting the notification type based on the content and its importance: urgent notifications are shown both before and after login, while general notifications are shown only after login.
The two notification types are distinguished as follows.
Type
Description
General
You can create and deliver a general notice to users. Users can view general notifications in the User Portal > Notifications menu.
Urgent
You can create and deliver urgent notices to users. Users can view urgent alerts in a popup window on the login page.
Table. Notification Type
Notification
List
To check the notification list, access the menu as follows.
Admin Portal > Dashboard > Notifications
Category
Description
Type
This is the type of notification.
General: If you register a notification as a general announcement, users can view the general notification in the User Portal > Notifications menu.
Urgent: If you register a notification as an urgent announcement, users can view the urgent notification via a popup on the login page.
Title
The title of the notification.
Period
The period during which the notification is posted.
Registrant
The name of the administrator who registered the notification.
Editor
The name of the administrator who last edited the notification.
Date Modified
The date of the last modification.
All button
Displays both general and urgent notifications in the list.
General button
Displays only general notifications in the list.
Urgent button
Displays only urgent notifications in the list.
Search term input field
You can search the notification list. After entering a search term, click the magnifying glass icon or press Enter to perform the search.
Searchable items: Title, Registrant, Editor
Detail button
Detailed search is possible. Search conditions can be combined with AND. After entering multiple fields, click the Search button to search according to the conditions.
Click the Reset button to reset all search fields.
Register button
You can register a new notification.
Table. List
Notification Registration
If you want to register a notification, follow the steps below.
Click the Admin Portal > Dashboard > Notifications menu.
Click the Register button to go to the notification registration page.
Fill in the input fields, referring to the table below.
Click the Save button.
Check the notifications registered in the list.
Category
Required?
Description
Type
Required
Select the notification type: General or Urgent.
Period
Required
Specify the posting period (Start Date ~ End Date).
Language
Required
Select the notification language (a language tab is activated for each selected language).
Title
Required
Enter the notification title.
Content
Required
Enter the notification content.
Table. Notification Registration
Reference
If you exceed the maximum number of characters that can be entered, an error message will be displayed.
All required fields must be filled in on every active language tab. Clicking the Cancel button discards the entered data and returns to the notification list screen.
Notification Edit
If you want to edit the notification, follow the steps below.
Click the Admin Portal > Dashboard > Notifications menu.
Select the notification that needs editing, and click the Edit button at the bottom of the screen.
After editing the field you want to modify, click the Save button.
Check the edited notification in the list.
Delete Notification
If you want to delete the notification, follow the steps below.
Click the Admin Portal > Dashboard > Notifications menu.
Select the notification that needs to be deleted, and click the Delete button at the top right of the screen.
The notification delete popup appears.
Click the Confirm button to delete the notification.
Approval Request
In the Approval Request menu, the administrator can view and cancel approval requests from all users.
The approval request consists of the Approval Request List and Approval Request Queue tabs.
Approval Request List
If you click the approval request list tab, you can view all approval request items.
An approval request can have one of four statuses. You can filter the list with the Approval Request, Approval, Rejection, and Submission Cancellation buttons at the top; for a detailed search, use the search bar at the top right.
Approval Request: Shows all approval request statuses.
Approval: Shows all completed approvals.
Rejection: Shows approval requests that have been rejected.
Submission Cancellation: Shows approval requests whose submission has been cancelled.
The description for the approval request list items is as follows.
Name
Description
Approval System
Indicates the approval system defined by the approval policy, so you can check which approval system a request went through. Refer to Policy > Approval Policy.
Type
The type of approval request. Three types are available:
- App Access: a request for access to an application
- Sign Up: a request submitted during sign-up
- Usage Period: a request to extend the account usage period before it expires
Title
The title of the approval request.
Requester
The user who submitted the approval request.
Recent update date
The date the approval request was last updated.
Request date/time
The date and time the approval was first requested.
Status
The status of the approval request, matching the filter buttons at the top.
Table. Approval Request List
Approval request lookup and cancellation
When you click an item in the approval request list, the details of that approval request appear in a popup.
Requests that have not yet been approved can be cancelled by the administrator using the Cancel Request button.
Approval Request Queue
Click the Approval Request Queue tab to view all ongoing approval requests and delete them, either all at once or individually.
Using the detailed search, the administrator can cancel (delete) an approval request directly, for example when the requester has resigned or the approver is absent.
Delete approval request
If you want to delete the approval request, follow the steps below.
Select the checkbox (✓) on the left of the list items you want to delete.
The Delete button at the top of the list becomes active. Click the Delete button.
When the deletion confirmation popup appears, click the Delete button.
The selected approval requests are deleted from the list.
Sign Up
When you click the sign-up menu, the list of sign-up requests appears.
Sign-up Request
When you click the sign-up request tab, the list of sign-up requests appears.
An approval request can have one of four statuses. You can filter the list with the Approval Request, Approval, Rejection, and Submission Cancellation buttons at the top; for a detailed search, use the search bar at the top right.
Approval Request: Shows all approval request statuses.
Approval: Shows all completed approvals.
Rejection: Shows approval requests that have been rejected.
Submission Cancellation: Shows approval requests whose submission has been cancelled.
Name
Description
Type
The type of sign-up request. Two types are available:
- General: the user signed up through the login page or a separate sign-up page
- IdP: the sign-up was requested through an Identity Provider
Approval System
Indicates the approval system defined by the approval policy, so you can check which approval system a request went through. Refer to Policy > Approval Policy.
Requester
The user who submitted the sign-up request.
Name
The requester's name.
Email
The requester's email address.
Phone
The requester's mobile phone number.
Status
The status of the approval request, matching the filter buttons at the top.
Registration Date
The date the sign-up was registered.
Modification Date
The date and time of the most recent modification.
Table. Sign-up Request List
Sign-up Email Invitation
The sign-up email invitation is a method where the administrator sends an invitation email to the desired user via their email address for them to register.
If you want to send an invitation email, follow the steps below.
Click the Dashboard > Sign Up > Sign-up Email Invitation tab.
Click the Send Invitation Email button at the top right.
The invitation email popup appears.
Enter the email address to invite in the email field, and click the Add button.
In the Group field, select the group that will be automatically assigned when the recipient signs up. (If not set, no group is assigned.)
Click the Invite button at the bottom right of the popup.
An invitation email will be sent to the email address you specified.
Reference
Please refer to the Policy > Sign-up Policy menu for detailed sign-up policies.
9.5.2.1.2.2 - Integration
Integration is a service that sets up and manages authentication services and account information for various applications.
SCP SingleID supports integrating new applications through customized authentication integration and account provisioning services, including a DIY (Do-It-Yourself) capability.
Application is a menu for registering and connecting various applications so that SCP SingleID's authentication service can be applied to them.
On the application list screen, the administrator can register or modify applications, and sort, search, and delete registered ones.
Application List
The administrator can select a registered application on the application list screen to edit/delete, sort, search, etc., and can navigate to a menu screen where a new application can be registered.
To check the application list, access the menu as follows.
Admin Portal > Integration > Application
Category
Description
Name
This is the name of the application. It can be entered when creating the application.
Type
The application's integration protocol: SAML, OIDC, or SCIM.
Display
Whether the application is displayed in the User Portal application list.
Displayed: shown to users in the User Portal, so they can request access permissions.
Blank: hidden in the User Portal, so users cannot request access directly.
Status
The application status: Active or Inactive.
Active: The state where the administrator has completed the settings so that the user can access the application
Inactive: The state where the user cannot access the application due to the administrator’s settings
All button
Displays all active and inactive applications in the list.
Active button
Only active applications are displayed in the list.
Inactive button
Only inactive applications are displayed in the list.
Search term input field
You can search the application list. After entering a search term, click the magnifying glass icon or press Enter to perform the search.
Searchable items: name, description
Detail button
Detailed search is possible. Search conditions can be combined with AND. After entering multiple fields, click the Search button to search according to the conditions.
Click the Reset button to reset all search fields.
Download button
SAML metadata download is available. You can download the SAML metadata files for the internal network and the internet network.
Register button
You can register a new application.
Table. Application List
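The SAML metadata mentioned above (downloadable via the Download button, and importable during application registration) is an XML document describing endpoints and certificates. A minimal sketch of reading the key fields from such a file; the sample metadata below is a hypothetical, truncated example, not a SingleID artifact:

```python
import xml.etree.ElementTree as ET

# Standard SAML 2.0 namespaces.
MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"
DS_NS = "http://www.w3.org/2000/09/xmldsig#"

# Hypothetical, truncated IdP metadata for illustration only.
METADATA = f"""<EntityDescriptor xmlns="{MD_NS}" entityID="https://idp.example.com">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="{DS_NS}">
        <X509Data><X509Certificate>MIIB...truncated</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
    <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
      Location="https://idp.example.com/sso"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

def summarize_metadata(xml_text: str) -> dict:
    """Extract the entity ID, SSO endpoint, and signing certificate
    from a SAML metadata document."""
    root = ET.fromstring(xml_text)
    sso = root.find(f".//{{{MD_NS}}}SingleSignOnService")
    cert = root.find(f".//{{{DS_NS}}}X509Certificate")
    return {
        "entity_id": root.get("entityID"),
        "sso_location": sso.get("Location"),
        "certificate": cert.text.strip(),
    }

print(summarize_metadata(METADATA))
```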
Application Registration
The administrator can register the application by clicking the Register button on the list screen.
Application registration is possible in two ways: Custom App Integration and Pre-Built App Integration.
To register an application, access the menu as follows.
Select the Custom App Integration or Pre-Built App Integration tab.
Custom App Integration
Custom App Integration is a connection menu for configuring authentication and account provisioning for the application you want to integrate.
Three connection types are provided, as follows.
For authentication integration, select one of the standard authentication protocols (SAML or OIDC).
For account provisioning integration, the standard online API method (SCIM) is provided.
Reference
The integration features provided by SingleID can be classified as follows, and the information input and configuration steps differ depending on the required integration scope. When setting up the standard authentication integration methods SAML and OIDC, if account provisioning is not selected, the attribute integration step is omitted, shortening the registration process.
Select Custom App Integration > Web Application (SAML), Web Application (OIDC), or Identity Provisioning (SCIM v2.0), then click the Next button.
Go to detailed settings
Applications using standard protocols (SAML, OIDC, SCIM) can be registered by entering the required integration information, policies, and attributes through a screen consisting of the following six steps.
Select SSO connection attribute information and set a unique value.
Required
‘Metadata File Import’ button
Provides SAML metadata file upload. (The metadata identifies the identity provider's endpoints and certificate.)
Select
Table. SSO Information
Reference
Single Sign-On Settings
If you select either Validation On Request or Encryption, you must register a certificate. (Register the certificate value exported as Plain Text)
Attributes to map during SSO can be added by clicking and selecting attribute information provided by SingleID. Among the selected attributes, one must be designated as the unique value for user identification.
To deliver SingleID attribute information to the connected target application, you can map each SingleID attribute name to the attribute name used by the application. The information exchanged during authentication is called claim information; the SP uses the received claims to assign permissions or as attribute information for operation and management.
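The claim mapping described above, aligning SingleID attribute names with the attribute names the application expects, can be sketched as follows. The attribute and claim names are hypothetical examples, not SingleID's actual schema:

```python
# Hypothetical mapping: SingleID attribute name -> claim name expected
# by the target application (SP). All names are illustrative only.
CLAIM_MAP = {
    "loginId": "urn:example:app:username",   # unique user identifier (Subject)
    "email": "urn:example:app:mail",
    "department": "urn:example:app:dept",
}

def build_claims(user_attributes: dict) -> dict:
    """Translate SingleID user attributes into the claim names the SP
    expects, dropping attributes that have no mapping."""
    return {
        claim_name: user_attributes[attr]
        for attr, claim_name in CLAIM_MAP.items()
        if attr in user_attributes
    }

print(build_claims({"loginId": "jdoe", "email": "jdoe@example.com"}))
```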
Provisioning
The Provisioning menu is an account management function that can distribute user information to applications for synchronization. In SingleID, we provide methods based on global standard API specifications such as SCIM and REST.
On the Provisioning information input screen, enter the configuration information for account information distribution.
Category
Description
Required?
Provisioning Configuration
If you want to use account information synchronization, please click the On button. If you select Off, you can skip account synchronization.
Required
Base Address
Enter the Base Address (URL) that defines the Endpoint of the target system supporting the SCIM API.
Required
Accept
Enter the Accept (e.g., application/json) information, which is the HTTP Accept Header value used in SCIM REQUEST.
Required
Content Type
Enter the Content Type (e.g., application/json), which is the HTTP Content Type header value used in SCIM REQUEST.
Required
User Name
Registers the User Name used for authentication to the target REST service.
Required
Password
Set the password used for authentication to the target REST service.
Required
Bearer Token
Register the Bearer Token used when calling the API (for authorization).
Optional
Client ID
Register the Client ID. The Client ID is an ID issued by the authentication server to a registered client, and because the Client ID itself is information disclosed to the resource owner, it should not be used alone for client authentication.
Optional
Client Secret
Register the Client Secret information. Client Secret is a secret information generated by the authentication server, a unique value known only to the authentication server.
Optional
Access Token Node ID
Register Access Token Node ID. Access Token Node ID is the Field ID of a JSON Object Node, returned from the target Access Token REST service, and includes the Token value. Access Token is used for the purpose of authorizing access to resources. It is important that the resource server only accepts Access Tokens from the Client.
Optional
Access Token Base Address
Register the Access Token Base Address (URL) required to receive an Access Token as the Base Address of the target REST service.
Optional
Access Token Content Type
Registers the Access Token Content Type (e.g., application/x-www-form-urlencoded), which is the HTTP Content-Type header value of the target Access Token REST service.
Required
Provisioning
Select one of user or group as the default target for provisioning, and if necessary, you can select both user and group.
Select
Inbound Provisioning Schedule
Click On to register a periodic (hourly, daily, monthly, yearly) Inbound Provisioning Schedule.
Select
Outbound Provisioning Schedule
Click On to register the Outbound Provisioning Schedule. Click Off to deploy in real time.
Select
Table. Provisioning information input
Reference
If you set Provisioning Configuration to Off, the Provisioning and Profile steps are skipped, and the application is registered to use only the authentication service.
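As a sketch of how the fields in the table above fit together, the following assembles a SCIM v2 user-creation request from the Base Address, Accept, Content Type, and Bearer Token values. The endpoint and values are illustrative assumptions, and no request is actually sent:

```python
import json

# Standard SCIM v2 core User schema URN (RFC 7643).
SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

def build_scim_create_user(base_address: str, bearer_token: str,
                           user_name: str, email: str):
    """Assemble the endpoint URL, HTTP headers, and JSON body for a
    SCIM v2 user-creation request. Mirrors the Base Address / Accept /
    Content Type / Bearer Token fields described above."""
    url = f"{base_address.rstrip('/')}/Users"
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {bearer_token}",
    }
    body = {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }
    return url, headers, json.dumps(body)

url, headers, body = build_scim_create_user(
    "https://app.example.com/scim/v2", "TOKEN", "jdoe", "jdoe@example.com")
print(url)
```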
Profile
On the profile information input screen, enter the user/group settings for deployment.
Category
Description
Required
Profile name
Enter the profile name.
Required
Description
Register a description for the profile.
Optional
Attribute
Click Add to select and enter attribute information.
Select
Table. Profile Information Input
Notice
Profile Mapping
In the tab menu for the selected provisioning targets, click User or Group to add attributes.
Click Profile Mapping to match and connect the required information in the target application based on the SCIM schema information.
Provides the ability to configure an execution script (a conversion script written in the standard JEXL scripting language) that performs real-time conversion when provisioning runs.
Note that there is no validation: the script is executed exactly as entered.
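SingleID evaluates the conversion script as a JEXL expression; the Python sketch below only illustrates the idea of a real-time, per-attribute conversion applied during provisioning. The attribute names and transforms are made-up examples:

```python
# Illustrative analogue of a provisioning-time conversion script.
# SingleID itself evaluates JEXL expressions; this sketch just shows
# transforming attribute values in-flight before they are provisioned.

def apply_transforms(record: dict, transforms: dict) -> dict:
    """Apply per-attribute conversion functions to a user record,
    leaving unmapped attributes untouched."""
    out = dict(record)
    for attr, fn in transforms.items():
        if attr in out:
            out[attr] = fn(out[attr])
    return out

# Example conversions of the kind a JEXL script might perform:
# normalize the login ID and tidy up the display name.
transforms = {
    "loginId": str.lower,
    "displayName": lambda v: v.strip().title(),
}

print(apply_transforms({"loginId": "JDoe", "displayName": "  john doe "},
                       transforms))
```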
After entering all items and clicking the Complete button, the basic application settings are completed.
When you complete registering a new application, it will be added to the application list and new tabs called Policy, Assignment will be created.
Policy
You can set login policy and access control information for application policy configuration.
Category
Description
Required?
Login Policy
Set the login policy applied when logging into the application. To apply a policy, assign the application in the Login Policy you want to use.
Select
Access Control
This is a setting that allows the user to control access to the app. When enabled, you can set whether to request access permission for the application and whether it is approved.
Select
Table. Policy Settings
Assignment
Register information for assigning application users based on users and groups. This menu assigns access permissions by setting the users and groups that can access the registered application.
If you want to assign a user, follow the steps below.
If you click the application, you will be taken to the detailed page of that application.
Click the Assignment tab, then the User tab > Assign button.
When the User Assignment popup appears, select the user you want to assign, and click the Assign button.
The selected user appears in the list on the Assignment tab.
Caution
Similarly, you can assign a predefined group via the Assign button on the Group tab.
Group Settings
When configuring groups that can access the application, include information that distinguishes the specific groups.
Define the groups and their membership rules in advance so that access permissions can be managed by member rules that distinguish the groups.
Reference
Application status
Active: The application is exposed in the User Portal; with Sign-On services, provisioning, policies, and other settings configured, users can access and use it.
Inactive: The application is not exposed in the User Portal; in this state it can be deleted.
Delete: Deleting a registered application requires caution, so a popup is displayed to let you verify the application information and status once more.
Pre-Built App Integration
The Pre-Built App Integration menu lets you connect the SaaS application you want quickly and easily, with settings such as connection information, name, and icon prepared in advance.
To integrate the application via Pre-Built App Integration, check the menu path below.
Like Custom App Integration, the Pre-Built App Integration menu registers an application through a screen consisting of the following six steps, in which you enter and configure the required integration information.
The input items and methods for each step are the same, except for information that is predefined for the Pre-Built application.
Enter the general application information, referring to the table below.
Category
Description
Required?
Name
Enter the name of the application.
Required
Description
Enter a description of the application (e.g., tasks, usage, etc.).
Optional
Logo Image
Register a logo that can intuitively identify the application. There are file upload and URL link methods.
Optional
Screen display
When selected, it is shown to the user in the User Portal.
Select
Access URL
Enter the application's Access URL: the login page URL used to access the application.
Required
Auto logout
When selected, it will be automatically logged out without re-confirmation according to the session policy.
Select
Automatic Redirection
When selected, it moves to the Service Provider without displaying the logout completion page.
Select
Logout URL
Enter the URL address to navigate to when the user logs out. If left blank, it will be set to the Access URL address.
Optional
Table. General
SSO
Enter Single Sign On setting information on the SSO information input screen.
Category
Description
Required
Issuer
Enter the Issuer, which is the unique identifier of the SP (Service Provider) and the value verified by the Response Issuer.
Required
Single Sign-On URL
Enter the Single Sign-On URL, which is the full URL required when logging into the system.
Required
Logout URL
Enter the Logout URL, which is the URL value for SLO (Single Logout) Return.
Optional
Logout Method
The logout methods for SLO (Single Logout) Return are provided in three ways as follows.
Back-Channel Logout: The user logs out safely from the application without interaction.
Front-Channel Logout (HTTP Redirect Binding): The user interacts to safely log out from the application using a browser-based logout (HTTP Redirect Binding) method.
Front-Channel Logout (HTTP POST Binding): The user interacts to safely log out from the application using a browser-based logout (HTTP POST Binding) method.
Required
Response Signing
If you want to sign the returned SAML Response after the authentication process, use Response Signing.
Select
Validation On-Request
Check to use Signature Validation.
Select
Encryption
Select whether to apply Encryption.
Select
Application Certificate
If you select one of Validation On Request or Encryption, you must register a “certificate”. Please enter a valid value according to the PEM (Privacy-Enhanced Mail) format.
Required
Attribute to map during SSO
Select the attribute information required for SSO connection and set a unique value for user identification. ※ The ‘Next’ button is activated only after selecting a Subject Attribute.
Required
‘Metadata file import’ button
The SAML metadata file contains information about various SAML identity providers that can be used for SAML 2.0 protocol message exchanges. This metadata identifies the IdP endpoints and certificates to secure SAML 2.0 message exchanges. When you click ‘Import metadata file’, you can upload a file.
Select
Table. SSO Information
Guide
Single Sign-On Settings
If you select either Validation On Request or Encryption, you must register the certificate. (Register the certificate value exported as Plain Text)
Attributes to map during SSO can be added by clicking and selecting attribute information provided by SingleID. Among the selected attributes, one must be designated as the unique value for user identification.
To deliver SingleID attribute information to the connected target application, you can map each SingleID attribute name to the attribute name used by the application. The information exchanged during authentication is called claim information; the SP uses the received claims to assign permissions or as attribute information for operation and management.
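For reference on the logout methods listed in the SSO table above: Front-Channel Logout with the HTTP Redirect Binding carries the SAML LogoutRequest in the query string, DEFLATE-compressed and base64-encoded per the SAML 2.0 bindings specification. A minimal sketch, in which the SLO endpoint and the XML are placeholders rather than SingleID values:

```python
import base64
import zlib
from urllib.parse import urlencode

def encode_redirect_binding(logout_request_xml: str) -> str:
    """DEFLATE-compress and base64-encode a SAML message for the
    HTTP Redirect Binding, as used by Front-Channel Logout."""
    # Raw DEFLATE (no zlib header), per the SAML 2.0 bindings spec.
    compressor = zlib.compressobj(wbits=-15)
    deflated = compressor.compress(logout_request_xml.encode()) + compressor.flush()
    return base64.b64encode(deflated).decode()

def build_logout_redirect(slo_url: str, logout_request_xml: str) -> str:
    """Build the redirect URL carrying the encoded LogoutRequest."""
    query = urlencode({"SAMLRequest": encode_redirect_binding(logout_request_xml)})
    return f"{slo_url}?{query}"

# Placeholder endpoint and LogoutRequest skeleton.
xml = '<samlp:LogoutRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ID="_1" Version="2.0"/>'
print(build_logout_redirect("https://idp.example.com/slo", xml))
```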
Provisioning
The Provisioning menu is an account management function that can distribute user information to applications for synchronization. In SingleID, we provide methods based on global standard API specifications such as SCIM and REST.
Enter the configuration information for account information distribution on the Provisioning information input screen.
Category
Description
Required
Provisioning Configuration
Click the On button to enable account information synchronization. Selecting Off skips account synchronization.
Required
Base Address
Enter the Base Address (URL) that defines the Endpoint of the target system supporting the SCIM API.
Required
Accept
Enter the Accept (e.g., application/json) information, which is the HTTP Accept Header value used in SCIM REQUEST.
Required
Content Type
Enter the Content Type (e.g., application/json), which is the HTTP Content Type header value used in SCIM REQUEST.
Required
User Name
Registers the User Name used for authentication to the target REST service.
Required
Password
Set the password used for authentication to the target REST service.
Required
Bearer Token
Register the Bearer Token used when calling the API (for authorization).
Optional
Client ID
Register the Client ID. The Client ID is an ID issued by the authentication server to a registered client, and because the Client ID itself is information disclosed to the resource owner, it should not be used alone for client authentication.
Optional
Client Secret
Register Client Secret information. Client Secret is a secret generated by the authentication server, a unique value known only to the authentication server.
Optional
Access Token Node ID
Register the Access Token Node ID. The Access Token Node ID is the Field ID of a JSON Object Node, which is returned from the target Access Token REST service and includes the token value. The Access Token is used for the purpose of authorizing access to resources. It is important that the resource server accepts only the Access Token from the client.
Optional
Access Token Base Address
Register the Access Token Base Address (URL) required to obtain an Access Token as the Base Address of the target REST service.
Optional
Access Token Content Type
Registers the Access Token Content Type (e.g., application/x-www-form-urlencoded), which is the HTTP Content-Type header value of the target Access Token REST service.
Required
Provisioning
Select one of user or group as the default target for provisioning, and if needed you can select both user and group.
Select
Inbound Provisioning Schedule
Click On to register a periodic (hourly, daily, monthly, yearly) Inbound Provisioning Schedule.
Select
Outbound Provisioning Schedule
Click On to register the Outbound Provisioning Schedule. Click Off to deploy in real time.
Select
Table. Provisioning information
Note
If you set Provisioning Configuration to Off, the Provisioning and Profile steps are skipped, and the application is registered to use only the authentication service.
Profile
Enter the user/group configuration information for deployment on the profile information input screen.
Category
Description
Required?
Profile name
Enter the profile name.
Required
Description
Register a description for the profile.
Required
Attribute
Click Add to select and enter attribute information.
Required
Table. Profile
Notice
Profile Mapping
In the tab menu for the selected Provisioning target, click User or Group to add attributes.
Click Profile Mapping to match the required information in the target application against the SCIM schema information.
An execution script (a conversion script following the JEXL standard) can be configured to perform real-time conversion when provisioning is executed.
Note, however, that there is no validation check: the script is executed exactly as entered.
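The profile mapping step above can be sketched as a table of (application attribute, SCIM attribute) pairs with optional per-attribute conversion functions standing in for the JEXL conversion scripts. The attribute names and conversions below are illustrative, not the product's actual mapping.

```python
# Illustrative profile mapping: application attributes are matched to
# SCIM core-schema attributes; the lambdas stand in for JEXL scripts.
SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"

MAPPING = {
    "login": ("userName", None),
    "mail": ("emails", lambda v: [{"value": v, "primary": True}]),
    "fullName": ("displayName", str.strip),
}

def to_scim_user(app_record: dict) -> dict:
    """Convert an application record to a SCIM User resource.
    As in the product, conversions run as configured, without validation."""
    user = {"schemas": [SCIM_USER_SCHEMA]}
    for src, (dst, convert) in MAPPING.items():
        if src in app_record:
            value = app_record[src]
            user[dst] = convert(value) if convert else value
    return user
```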
After entering all items and clicking the Complete button, the basic application settings are completed.
When you complete registering a new application, it is added to the application list and new tabs called Policy, Assignment are created.
Policy
You can set login policies and access control information for application policy settings.
Category
Description
Required
Login Policy
Set the login policy applied when logging into the application. To set it, please assign the application in the ‘Login Policy’ to be configured.
Select
Access Control
This is a setting that allows the user to control access to the app. When enabled, you can set whether to allow access requests to the application and whether they are approved.
Select
Table. Policy
Assignment Settings
Register information for assigning application users based on User and Group. This menu assigns access permissions by setting users and groups that can access the registered application.
To assign a user, follow the steps below.
When you click the application, you will be taken to the detailed page of that application.
Click the Assign tab and the User tab > Assign button.
When the User Assignment popup appears, select the user you want to assign and click the Assign button.
The Assignment tab shows the selected user in the list.
Caution
Similarly, you can assign a predefined group via the Assign button in the Group tab.
Group Settings
When configuring the groups that can access the application, include information that identifies the specific groups.
Rules and groups must be defined in advance so that access permissions can be managed with member rules that distinguish the groups.
Note
Application status
Activation (Active): Exposes the application on the User Portal, and by setting Sign-On services, provisioning, policies, etc., it is a state where users can access and use the application.
Inactive: Does not expose the application in the User Portal, and is a state where the application can be deleted.
Delete: When deleting a registered application, caution is required. Therefore, a popup is displayed so that the application information and status can be checked once more.
Application Modification
You can modify the settings by clicking the application on the list screen.
If you want to modify the application, follow the steps below.
Click the General, SSO, Provisioning, Policy, Assignment, Permission Items, Rebranding tab to edit the items.
Click the Save button.
Notice
If you want to deactivate the application, select the application and click the Deactivate button.
Permission Items
The permissions tab provides synchronization integration with the application’s permissions.
If you want to set permissions, follow the steps below.
If you click the application, you will be taken to the detailed page of that application.
Click the Assignment tab and the Permission Items tab > click the Register button.
When the Permission Item popup appears, register the permission item.
Enter Permission, key, display name, content and click Save to register the permission.
Rebranding
After an application is registered, an additional Rebranding tab appears that is not shown during registration. The application’s rebranding covers the login page displayed when accessing that application.
The included rebranding features are as follows.
Favicon: The favicon shown in the browser can be changed.
Header logo: The header logo on the login screen can be changed to the logo you want.
Key visual image: The key image set by default on the login page can be modified.
Sign-up page redirection: Registration can be performed on a separately operated sign-up page instead of SingleID’s sign-up page.
Privacy Policy Redirection: You can register the privacy policy URL used in the existing application.
Terms of Service redirection: You can register the Terms of Service URL used in the existing application.
Reference
Rebranding Tab Activation Conditions
The rebranding tab appears in SAML and OIDC target applications.
UI
By clicking the application on the list screen, and clicking the edit button on the rebranding tab, you can configure application-specific UI rebranding.
Guide
Clicking the temporary save at the bottom right allows you to save the settings midway.
Favicon Change
Favicon changes in the application can be set according to the characteristics of the corporate application.
If you want to edit the favicon, follow the steps below.
Click the pencil icon on the favicon item, then click the favicon image.
Upload an icon file or enter the icon image URL.
Click the Save button and verify through the preview screen that the upload was successful.
Enter the Korean page title in Korean.
Enter the English page title in English.
If the input is completed, check through the right preview whether it was entered correctly.
Click the Publish button at the lower right corner.
Notice
The recommended size for the favicon image is 256 x 256 px, only ICO files are allowed, and please upload files under 2MB.
Favicon images are applied only on PC screens.
Header Logo Change
In the application, separate header logo changes can be configured to suit the characteristics of the corporate application.
If you want to edit the header logo, follow the steps below.
Enter the Korean Redirect URL and the English Redirect URL.
If the input is completed, check through the right preview whether it was entered correctly.
Click the Publish button at the lower right corner.
Notice
The recommended size for the header logo image is 288 x 72 px. Only PNG, JPG, JPEG files are allowed, and please upload files under 1MB.
It is possible to set logo images separately for each language.
Key Visual Change
In the application, separate key visual changes can be configured to suit the characteristics of the corporate application.
If you want to edit the key visual, follow the steps below.
Choose whether to use a single key visual for all languages or a separate key visual per language.
If the image upload is complete, check through the right preview to see if it was entered correctly.
Click the Publish button at the lower right.
Guide
The recommended size for the key visual image is 600 x 612 px. Only PNG, JPG, JPEG files are allowed, and please upload files under 1MB.
Redirect
By clicking the application on the list screen, then clicking the edit button in the Rebranding tab, you can configure application-specific rebranding for redirection.
Guide
You can save the settings midway by clicking the temporary save at the lower right.
Category
Description
Sign Up
Enter the URL if you want to set a separate sign-up page.
Privacy Policy
Enter a separate privacy policy URL in the application.
Terms of Service
Enter a separate Terms of Service URL in the application.
Table. Redirection
Notice
With the default selection, the SingleID basic registration page, privacy policy, and terms are displayed.
Application Deletion
From the application list screen, select the application, deactivate it, then return to the list screen and delete it from the three‑dot menu. To register it again, click the Add button.
Identity Provider
This is a menu for registering and managing IdPs that provide authentication services and credentials to SCP SingleID. At this time, SCP SingleID acts as a Service Provider and receives authentication services from the IdP.
Identity Provider List
On the list screen, you can select a registered Identity Provider to edit/delete, sort, search, etc., and you can navigate to a menu screen where you can register a new Identity Provider.
To view the Identity Provider list, you can access the following menu.
Admin Portal > Integration > Identity Provider
Category
Description
Name
Identity Provider name.
Type
Displays the standard protocol registered by the Identity Provider. The Identity Provider type is distinguished by SAML2.0 and OIDC methods.
Status
Displays the status of the Identity Provider. It is distinguished as active and inactive.
Active button
Only active Identity Providers are displayed in the list.
Inactive button
Only inactive Identity Providers are displayed in the list.
Search term input field
You can search the Identity Provider list. After entering a search term, click the magnifying glass icon or press Enter to perform the search. Searchable items: name, description
Detail button
You can perform a detailed search. Search conditions can be combined with AND. After entering multiple fields, click the Search button, and the search will be performed according to the conditions. Click the Reset button to reset all search fields.
Download button
SAML metadata download is available. You can download the SAML metadata files for the internal network and the internet network.
Register button
You can register a new Identity Provider.
Table. Identity Provider List
Reference
Identity Provider Delete
If you want to delete, select the checkbox (V) and then click the Delete button at the top of the list.
Identity Provider Registration
You can register by clicking Register at the top of the Identity Provider list screen.
To register Identity Provider, follow the steps below.
Enter general information for IdP (Identity Provider).
Category
Description
Required
Name
Enter the name of the Identity Provider. Since it is identified by name, rules for distinction and management are required.
Required
Description
Enter a description of the Identity Provider (business, usage, etc.).
Optional
Logo Image
Register a logo that can intuitively identify the Identity Provider.
Optional
Login button
Displays the IdP as a button, link (text), etc.
Logo icon display: Choose whether to display the logo icon on the login button.
Button text: Enter the text to display on the login button.
Required
Table. Identity Provider General
SSO
Enter Single Sign On configuration information on the SSO information input screen.
When integrating with Web Application (OIDC)
Category
Description
Required
Client ID
Register the Client ID. The Client ID is an ID issued by the authentication server to a registered Client, and because the Client ID itself is information disclosed to the resource owner, it should not be used alone for Client authentication.
Required
Client Secret
Register the Client Secret information. The Client Secret is a secret piece of information used for authentication to the target REST service, a unique value known only to the authentication server.
Required
Authorization Endpoint URL
The Authorization Endpoint is used to obtain authorization from the Resource Owner. Enter the Authorization Endpoint URL used at this time.
Required
Token Endpoint URL
The Token Endpoint is used by the client to obtain an Access Token via an Authorization Grant or Refresh Token. Enter the Token Endpoint URL used at this time.
Required
Logout URL
Enter the Logout URL, which is the URL value for Return in SLO (Single Logout).
Optional
Userinfo Endpoint URL
Enter the Userinfo Endpoint URL, provided by the IdP (Identity Provider), which returns the user profile (username, name, etc.).
Optional
IdP Sign-In Key
Set the IdP Sign-In Key value and select the SingleID mapping attribute for the IdP Sign-In Key.
Required
Table. Web Application(OIDC) SSO
Guide
IdP Sign-In Key Settings
There are two ways for SCP SingleID to receive the key value that carries the user’s ID for login handling.
Receive the identifier ID value using a standard SAML keyword
Create and receive a custom identifier ID
You can map the value obtained by either method to the User ID, or map it to the CN value. This feature sets how authentication information is mapped to the value used for login handling.
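To make the OIDC fields in the table above concrete, the sketch below builds a standard authorization-code request URL from the Authorization Endpoint and Client ID, and the parameter set sent to the Token Endpoint to exchange the code for an Access Token. All URLs and values are hypothetical placeholders, not SingleID defaults.

```python
from urllib.parse import urlencode

def build_authorization_url(authorization_endpoint: str, client_id: str,
                            redirect_uri: str, state: str) -> str:
    """Compose the OIDC authorization-code request sent to the
    Authorization Endpoint URL configured above."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,  # CSRF protection; echoed back by the IdP
    }
    return f"{authorization_endpoint}?{urlencode(params)}"

def build_token_request(client_id: str, client_secret: str,
                        code: str, redirect_uri: str) -> dict:
    """Parameters POSTed to the Token Endpoint URL to exchange the
    authorization code for an Access Token."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }
```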
JIT provisioning
Identity Provider’s JIT provisioning feature tab has been added. This feature synchronizes accounts in real time when user changes occur. You can set items when synchronizing accounts in real time.
Category
Description
Required
JIT provisioning
JIT provisioning stands for Just-In-Time provisioning and is an identity and access management feature used to create user accounts quickly when a user logs into the system for the first time.
The feature can be set to On or Off.
Required
When there is no SingleID user mapped to the IdP user
Manage actions when the user accesses for the first time.
Go to the sign‑up page: create a new account. To prevent ID duplication, set a separate ID suffix for the logged‑in ID.
Automatically create a new SingleID user without user invitation: automatically generate an ID.
Go to the user registration website: if a separate user sign‑up page exists, navigate to that separate registration page.
Required
If there is a SingleID user mapped to the IdP user
If the user exists, update the user information.
Required
Table. JIT provisioning
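The JIT provisioning decision in the table above can be sketched as follows: if a mapped SingleID user exists, update it; otherwise act according to the configured option for unmapped users. The option names and in-memory user store are illustrative assumptions.

```python
def jit_provision(idp_user_id: str, singleid_users: dict,
                  on_missing: str = "signup_page", id_suffix: str = "") -> str:
    """Sketch of the JIT decision: update a mapped user, or handle a
    first-time user per the configured 'no mapped user' action."""
    if idp_user_id in singleid_users:
        # Mapped SingleID user exists: update the user information.
        singleid_users[idp_user_id]["updated"] = True
        return "updated"
    if on_missing == "auto_create":
        # Automatically create a new SingleID user without invitation;
        # a suffix may be appended to prevent ID duplication.
        new_id = idp_user_id + id_suffix
        singleid_users[new_id] = {"updated": False}
        return f"created:{new_id}"
    if on_missing == "signup_page":
        return "redirect:signup"       # go to the sign-up page
    return "redirect:external"         # separate user registration website
```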
After entering all items and clicking the Complete button, the basic application settings are completed.
Identity Provider Edit
If you click the Identity Provider in the list screen, you can modify the settings.
If you want to modify the Identity Provider, follow the steps below.
Click the General, SSO, Provisioning, Policy, Assignment tab to edit the items you want to modify.
If you want to add an Authenticator, follow the steps below.
Notice
If you want to deactivate the Identity Provider, select it and click the Deactivate button.
Identity Provider Delete
On the Identity Provider list screen, after selecting an Identity Provider and disabling it, you can return to the list screen and delete it from the three‑dot menu. To register again, click the Add button to register.
Authenticator
Configure by integrating the Authenticator provided by SCP SingleID. By default, password and Email are set to active state.
The Authenticator that is additionally configured and provided is as follows.
Knox Messenger: OTP can be sent via Knox Messenger.
PC SSO Agent: SingleID provides agentless SSO by default, but uses the SSO Agent for multi-browser SSO functionality.
SingleID Authenticator: A SingleID-dedicated mobile authentication app that supports biometrics (fingerprint, face), PIN, mOTP, and TOTP.
SMS: OTP can be sent via mobile SMS.
Active Directory: Performs authentication with an AD account.
Passkey: Mobile passkeys and security keys; a convenient authentication method that allows easy login with Windows biometrics or a PIN code.
Authenticator List
We support all authenticators of the six available types.
If you want to check the Authenticator, please check at the following path.
Admin Portal > Integration > Authenticator
Authenticator Add
When you click Register on the Authenticator list screen, it moves to the next screen and switches to a screen where you can add an Authenticator.
Select the authentication method, then click the Next button.
Enter the information required for authentication settings.
Click the Save button.
Notice
All nine types of Authenticators that a typical IdP service can provide, including those optimized for work environments, are already offered and registered/configured, so there are no new Authenticators to add until a new type of Authenticator is needed.
Notice
If you want to disable the Authenticator, select the application and click the Disable button.
Authenticator Edit
On the Authenticator list screen, after selecting an Authenticator and clicking edit, it switches to a screen where you can edit.
If you want to modify the Authenticator, follow the steps below.
Edit each item and click the Edit button to complete the modification.
Authenticator Delete
On the Authenticator list screen, select the Authenticator, deactivate it, then return to the list screen and you can delete it from the three‑dot menu. If you want to register again, click the Add button to register.
MFA Service Provider
MFA Service Provider menu provides a service that enhances user convenience by meeting the security requirements required by companies through multi-factor authentication, applying stronger authentication technologies along with biometric and simple authentication technologies.
MFA Service Provider List
To check the MFA Service Provider list, you can access the following menu.
Admin Portal > Integration > MFA Service Provider
Category
Description
Name
It is the name of the MFA Service Provider.
System Code
Displays system code information.
Project Code
Displays the project code information.
User Tag
Displays the User Tag.
Type
Displays the MFA Service Provider integration method. It is shown in the following three ways.
ADFS Plugin
MFA API
RADIUS
System Code Input Field
Enter system code information.
Project Code Input Field
Enter the project code information.
Search input field
You can search the MFA Service Provider list. After entering a search term, click the magnifying glass icon or press Enter to perform the search.
Searchable items: name, description, system code, project code
Detail button
Detailed search is possible. Search conditions can be combined with AND. After entering multiple fields and clicking the ‘Search’ button, the search is performed according to the conditions.
Reset button: when clicked, all search fields are reset.
Register button
You can register a new MFA Service Provider.
Table. MFA Service Provider List
MFA Service Provider Registration
To register the MFA Service Provider, follow the steps below.
Select ADFS Federated Application, Custom Application, or Network Equipment, then click the Next button.
Notice
MFA Service Provider has three types as follows.
ADFS Federated Application : Register an ADFS federated application that will be linked with SingleID MFA.
Custom Application : Register an application that uses the MFA API to be integrated with SingleID MFA.
Network Equipment : Register network equipment that will be linked with RADIUS-based MFA.
You can register an MFA Service Provider by entering and configuring the information required for MFA Service Provider integration through a three-step screen as follows.
Category
Description
Required
Name
Enter the name of the MFA Service Provider. Since it is identified by name, rules for distinction and management are required.
Required
Description
Enter description of MFA Service Provider (tasks, usage, etc.).
Optional
Logo Image
Register a logo that can intuitively identify the MFA Service Provider.
Optional
User Management using User Tag
If you enable the use of User Tag, when a new user is registered from the MFA Service Provider, “#” + User Tag is automatically appended to the user’s ID, preventing duplicate ID registration.
Select
User Tag
Only one User Tag can be registered per MFA Service Provider.
User Tag cannot be modified after registration, and it is a tag attached to the MFA Service Provider and the user.
Tenant administrators can define and use User Tags. Users provisioned JIT through the MFA Service Provider have the same User Tag set as a user attribute, allowing you to determine where the user was created.
Required
System Code
Enter system code information.
Optional
Project Code
Enter project code information.
Optional
Campaign
If only one authentication method is used, a popup page guiding the user to register a personal authentication method is displayed. It becomes active when the selection box is selected.
Select
Table. MFA Service Provider General
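The User Tag behavior described above — appending “#” + User Tag to IDs of users registered from the MFA Service Provider — can be sketched in a few lines. The function name is illustrative.

```python
from typing import Optional

def apply_user_tag(user_id: str, user_tag: Optional[str]) -> str:
    """When User Tag use is enabled, '#' + User Tag is appended to the
    ID of a user registered from the MFA Service Provider, preventing
    duplicate ID registration across providers."""
    if not user_tag:
        return user_id
    return f"{user_id}#{user_tag}"
```

Because the tag is also set as a user attribute on JIT-provisioned users, it doubles as a marker of where the user was created.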
MFA integration
Enter MFA integration information.
Category
Description
Required
Login
Select the provided Authenticator from the drop-down list.
Required
Identity verification at registration
Set the identity verification method that must be performed obligatorily during the registration process.
The user sets first and second Authenticator for identity verification.
Delegating authentication to an administrator allows a designated administrator to perform authentication on behalf of a user who has no mobile device or other authentication tool for identity verification. ※ This is not recommended except in special circumstances.
Required
ADFS Identifier
Please enter the ADFS Identifier URL information.
Required
Claim
Enter Claim name.
A Claim manages user authentication and permissions through a specific key value; add the data needed here for use.
It defines whether to map the value to verify that it is the same user. Up to 30 can be registered.
Required
Secret Key
The Secret Key is an encryption key for trusted communication between SingleID and the MFA Service Provider.
Click the Issue button to issue it.
Required
Table. MFA Integration
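The document does not specify exactly how the Secret Key is used on the wire. As one generic illustration of how a shared secret can authenticate messages between two parties, an HMAC-SHA256 signature scheme looks like the sketch below; this is an assumption for illustration, not SingleID’s actual protocol.

```python
import hashlib
import hmac

def sign(message: bytes, secret_key: bytes) -> str:
    """Sign a message with the shared secret (HMAC-SHA256)."""
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, secret_key: bytes, signature: str) -> bool:
    """Verify a received signature using a constant-time comparison."""
    return hmac.compare_digest(sign(message, secret_key), signature)
```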
Notice
The person who can verify identity on your behalf can be set in the Person in charge tab.
Person in charge
Select and register the person in charge of the newly registered MFA Service Provider.
Category
Description
Add button
You can add a person in charge of the MFA Service Provider.
Search
You can find the person in charge by search term (ID, name, email, status).
Select (Check Box)
Select the person in charge found in the list.
Add
You can add the selected assignee.
Complete
Complete assigning the person in charge.
Table. Person in charge registration
Click the Complete button to complete the registration.
MFA Service Provider Edit
On the MFA Service Provider list screen, after selecting the MFA Service Provider and clicking edit, it switches to a screen where you can modify.
If you want to modify the MFA Service Provider, follow the steps below.
Click the Edit button in Admin Portal > Integration > MFA Service Provider.
Modify each item and click the Edit button to complete the modification.
MFA Service Provider Delete
On the MFA Service Provider list screen, select the MFA Service Provider, deactivate it, then return to the list screen and delete it from the three‑dot menu. To register it again, click the Add button.
9.5.2.1.2.3 - Identity Store
The Identity Store provides a feature to manage users and groups registered in an organization.
There are several cases where users or groups are registered in an organization, such as being provisioned through registered applications or being directly registered by administrators.
The Identity Store integrates users and groups registered in various ways, allowing them to be searched and providing various management functions for administrators to configure detailed settings for each user or group.
Administrators can manage all users and groups registered in the organization through the Identity Store.
Users
Tenant administrators can use the features provided in the user menu to search and modify all users registered in the organization, delete users, or directly register new users.
Additionally, administrators can change a user’s group membership or assign usage permissions to allow users to use applications.
Users are registered in SingleID in the following ways:
Registered through account synchronization (Inbound Provisioning) from an application
Registered through Just-In-Time (JIT) provisioning from an Identity Provider
Registered from an MFA Service Provider
Manually registered by an administrator
Administrators can manage registered users in a unified manner using the user menu.
To access the user menu, go to the following menu:
Admin Portal > Identity Store > User
User List
You can view and search all users registered in SingleID in a list format.
Category
Description
ID
The user’s ID is displayed.
Name
The user’s name is displayed (in the order of last name and first name).
Email
The user’s email address is displayed.
Phone
The user’s mobile phone number is displayed.
Admin
Indicates whether the user is an administrator of the Admin Portal.
System Mapping ID
The application system mapping ID.
Status
Indicates whether the account is active.
Active: The current login-enabled user status.
Inactive: The user status that has been intentionally inactivated.
Pending: The account synchronization is complete, and the user is in a pending state until they log in.
Locked: The account locked due to password errors.
Dormant: The account status that has been dormant for a certain period.
Managing Entity
Indicates the managing entity of the account. You can see which system the account was automatically registered from or if it was manually registered.
SingleID: The account registered directly by the administrator
Others: The account synchronized automatically
Registration Date
The initial registration date of the account
Modification Date
The latest update date
Expiration Date
The account expiration date
Dormant User Button
You can view dormant users.
Search Input Field
You can search the user list. Enter a search term and click the magnifying glass icon or press Enter to perform the search.
Searchable items: Name, Email, ID
Detail Button
You can perform a detailed search. You can search with AND conditions. Enter multiple fields and click the ‘Search’ button to search according to the conditions.
Clicking the Reset button initializes all search fields.
Register Button
You can register a new user.
Table. User List
Guide
There are three methods to search for users.
Filter by user status
Keyword search
Advanced search
Filter by User Status
To filter users by status, follow these steps.
Click the button of the group that displays the status you want to filter by. (Only one button can be selected at a time)
After filtering, you can move to another page to view the list of users you want.
After filtering, you can use the keyword search to find the user you want. (However, if you perform an advanced search after filtering, the filter will be removed)
To remove the filter, click the All button.
Keyword Search
To perform a keyword search, follow these steps.
Click on the keyword search input box with your mouse.
Enter the word you want to search for. At this time, a dropdown menu will be displayed below the search input box. If you select one of ID, English Name, Email from the dropdown menu, the search will be executed for the corresponding field, and if you select All, the search will be executed for all ID, English Name, and Email fields.
After entering the search term, press the Enter key or click on the magnifying glass icon with your mouse to execute the search. At this time, the search will be executed for all ID, English Name, and Email fields.
The search results will be displayed in the user list.
If you want to cancel the search results and display the entire list, click the X icon on the right side of the keyword search input box.
Advanced Search
To perform an advanced search, follow the procedure below.
Click the Advanced button.
In the advanced search screen, enter the search term in the field you want to search.
In the advanced search screen, you can select the user’s registration date and modification date to limit the search range.
If you enter search terms in multiple fields, the search will be executed with AND conditions.
After entering the search term, press Enter or click the Search button to execute the search.
The search results will be displayed in the user list.
If you want to cancel the search results and display the entire list, click the Reset button in the advanced search screen.
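The AND-combination of advanced-search fields described above can be sketched as follows: every non-empty criterion must match for a user to appear in the results. Field names and the substring-match behavior are illustrative assumptions.

```python
def advanced_search(users: list, **criteria) -> list:
    """AND-combine all non-empty search fields: a user matches only if
    every given term appears (case-insensitively) in that field."""
    def matches(user: dict) -> bool:
        return all(term.lower() in str(user.get(field, "")).lower()
                   for field, term in criteria.items() if term)
    return [u for u in users if matches(u)]

users = [
    {"id": "jdoe", "name": "Jane Doe", "email": "jane@example.com"},
    {"id": "jroe", "name": "John Roe", "email": "john@other.com"},
]
# Two fields combine with AND: only users matching both are returned.
results = advanced_search(users, name="j", email="other.com")
```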
User Registration
The tenant administrator can register users manually on the screen without going through account synchronization.
To register a user, follow the procedure below.
Click the Admin Portal > Identity Store > User > Register button
The user can input and register information through a 3-step screen as follows:
Profile
User Group
Summary
Profile
In the profile screen, enter the user’s basic profile information.
The fields to be entered are as follows.
Classification
Description
Required
ID
Enter the user’s ID. A value that overlaps with the ID of an already registered user cannot be entered.
Required
Administrator
Specifies whether it is an administrator. Selecting “Allow” gives administrator privileges.
Required
Name (Korean name, surname)
Enter the Korean name and surname in order.
Required
Name (English name, surname)
Enter the English name and surname in order. If there is no English name, enter the Korean name and surname again.
Required
Email
Enter the email address. This information is used for identity verification, so accurate information must be entered.
Required
Phone
Enter the mobile phone number. This information is used for identity verification, so accurate information must be entered.
Required
Department
Enter the Korean department name and English department name.
Optional
Organization
Enter the Korean organization name and English organization name.
Optional
Language
Specifies the user’s preferred language. The screen is displayed in the specified language when the user logs in.
Required
Time zone
Specifies the user’s time zone. All times are displayed in the specified time zone when the user logs in.
Required
Expiration date
Sets the user’s expiration date. The default value is “Not set”.
When automatic account deletion is set after the setting date, select the date to be deleted.
Optional
Table. Profile Information
Click the Next button to move to the User Group screen.
User Group
In the User Group screen, specify the group to be registered for the user.
The entire group that can be assigned to the user is displayed on the left side of the screen.
Select the group to be assigned to the user and click the > button to move to the assigned group.
To cancel group assignment, select the group to be canceled in the assigned group and click the < button.
Click the Next button to move to the Summary screen.
Note
The reason for assigning a group to a user is to organically control access in login policies, authentication policies, application access policies, and more.
Summary
On the summary screen, confirm the registered information and register the user.
If you want to modify the entered information, click the Back button to return to the screen you want to modify.
To cancel the registration, click the Cancel button.
Clicking the Complete and Add Registration button registers the user and returns to the profile screen to register a new user.
Clicking the Complete button registers the user and moves to the detailed information screen of the registered user.
User Modification
To modify a user, follow the procedure below.
Click the user you want to modify in Admin Portal > Identity Store > User.
Profile, Group, Application, Multi-factor Authentication (MFA) method, Device, Active Session will be displayed.
Click the Modify button at the bottom and modify the data you want to change.
Click the Save button.
Changing the User’s Status
The status of users managed by SingleID is as follows.
Category
Description
Active
A user who has logged in to SingleID after initial registration, initialized their password, and is using it normally.
Inactive
A user whose use has been suspended by the administrator.
Pending
A user who has not logged in to SingleID even once after initial registration.
Locked
A user who has been locked due to repeated login failures, etc. (The user can unlock themselves through password reset)
Dormant User
An account status that has been dormant due to no access for a certain period.
Table. User Status
The tenant administrator can change the user’s status according to the user’s current status as follows.
Current
Change
Description
Active
Inactive
You can change the active user to inactive by clicking the inactive button.
Inactive
Active
You can change the inactive user to active by clicking the active button.
Pending
None
A pending user cannot be changed to active or inactive.
Locked
Active
A locked user can be changed to active by clicking the Password Reset button, which initializes the password at the same time.
Table. User Status Changes
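The status transitions in the table above can be modeled as a simple lookup. The sketch below is purely illustrative (SingleID's internal implementation is not exposed); it only encodes the transition rules the table describes.

```python
# Illustrative model of the administrator-driven status transitions above.
# Pending users have no allowed manual transition, so they are simply absent.
ALLOWED_TRANSITIONS = {
    ("Active", "Inactive"): "Click the Inactive button",
    ("Inactive", "Active"): "Click the Active button",
    ("Locked", "Active"): "Click the Password Reset button (password is initialized)",
}

def can_change_status(current: str, target: str) -> bool:
    """Return True if the tenant administrator can move a user from `current` to `target`."""
    return (current, target) in ALLOWED_TRANSITIONS

print(can_change_status("Active", "Inactive"))  # True
print(can_change_status("Pending", "Active"))   # False: pending users cannot be changed
```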
The button to change the user’s status is exposed as follows in the list and detail screens.
When one or more active or inactive users are selected in the list screen
When moving to the detail screen of an active or inactive user
Notice
If the tenant administrator attempts to deactivate a user, a confirmation popup will be displayed.
To deactivate a user, confirm the user’s information and then click the deactivate button again to change the user’s status from active to inactive.
In contrast, when changing a user from inactive to active, no separate confirmation popup is displayed.
Password Reset
The tenant administrator can reset a user’s password.
When the tenant administrator resets a user’s password, a guidance email is sent to the user.
Note
The reset password is not displayed to the administrator.
Also, the reset password is not directly delivered to the user through the guidance email.
The user must access SingleID directly after receiving the guidance email and use the password reset function to change their password after going through the identity verification process.
To change a user’s password, follow these steps:
Select and click the user to change the password from the user list.
Click the Password Reset button at the top right of the user details screen.
When the confirmation popup is displayed, click the Confirm button.
If the user’s password is reset while it is locked, the lock is released and the status is changed to active.
Group
The tenant administrator can view the groups to which a user belongs and add or delete group memberships.
To manage a user’s group, click the Group tab on the details screen.
Classification
Description
Group Tab
Displays the user’s group management screen.
All Groups
Displays a list of all groups that can be assigned to the user.
Assigned Groups
Displays a list of groups that have already been assigned to the user.
All Groups Search
Searches the groups that can be assigned to the user by group name or description. The search results are displayed in the list below. To display the entire list after searching, click the X button on the right side of the search input field.
Assigned Groups Search
Searches the groups already assigned to the user by group name or description. The search results are displayed in the list below. To display the entire list after searching, click the X button on the right side of the search input field.
Delete Assigned Groups
Deletes the selected group from the groups assigned to the user. The user is removed from the members of the deleted group.
Assign Group
Assigns the selected group to the user. The user becomes a member of the assigned group.
Group Tab
Delete Group
To delete a group assigned to a user, follow these steps:
Select the group to be deleted from the list of assigned groups. (Check the checkbox to the left of the group name)
Click the < button to delete the assigned group.
Notice
Groups assigned by group rules do not display a checkbox next to the group name. Membership established by rules cannot be manually removed by an administrator.
Assign Group
To assign a new group to a user, follow these steps:
Select the Group to be newly assigned from the list of all groups. (Check the checkbox to the left of the group name)
Click the > button to assign the group.
Notice
When assigning a group, user permissions are automatically granted for the applications assigned to the added group.
Application
The tenant administrator can view the applications that users can use, add or delete applications. To manage a user’s application, click the Application tab on the detailed screen.
Classification
Description
Application Tab
Displays the application management screen for the user.
Assigned Application List
The applications assigned to the user are displayed in a list.
Assign Button
Allows you to assign an application to the user.
Application Tab
Deleting an Application
To delete an application assigned to a user, follow these steps:
Select the application to be deleted from the assigned application list. (Check the checkbox to the left of the application name)
Click the Unassign button displayed above the application list.
Click the Confirm button in the confirmation popup.
Guide
If you delete an assigned application, it will no longer be displayed in the User Portal > My Apps menu.
Application Assignment
To assign a new application to a user, follow these steps:
Click the Assign button located at the top right of the application list.
In the Application Assignment popup, select the application (check the checkbox to the left of the application name).
Click the Assign button.
If you have assigned all applications, click the Cancel button to close the popup.
Note
Assigned applications can be found in the User Portal > My Apps menu. (If the “Screen Display” option for the assigned application is turned off, it will not be displayed in the user portal.)
Multi-Factor Authentication (MFA) Method Inquiry and Management
The tenant administrator can view the multi-factor authentication method registered by the user and modify or delete some of the registration information.
To manage a user’s multi-factor authentication (MFA) method, click the Multi-Factor Authentication (MFA) Method tab on the detailed screen.
Classification
Description
Multi-Factor Authentication (MFA) Method Tab
Displays the management screen for the user’s multi-factor authentication (MFA) method.
Multi-Factor Authentication (MFA) Method List
Displays a list of the user’s registered multi-factor authentication (MFA) methods.
Modify Button
Allows modification or deletion of the user’s registered multi-factor authentication (MFA) method.
To modify the MFA method registered by the user, follow the procedure below.
Click the Modify button at the bottom right of the screen.
Click the Registration Information column of the MFA list you want to modify.
After modifying the information, click the Save button at the bottom right of the screen.
Deleting Multi-Factor Authentication (MFA) Method
To delete the MFA method registered by the user, follow the procedure below.
Click the Modify button at the bottom right of the screen.
Click the Delete button to the right of the MFA method you want to delete.
Click the Confirm button in the warning popup.
Click the Save button at the bottom right of the screen.
Viewing User Device Information
The administrator can view the device information added when the user registers the MFA method.
To view the user’s device information, click the Device tab in the detailed screen.
Category
Description
Device Tab
Displays the user’s device management screen.
Device List
Displays a list of devices added when the user registers the MFA method.
Device Tab
Notice
Device information can only be viewed and cannot be added, modified, or deleted by the tenant administrator.
Active Sessions
When a user logs in to SingleID, SingleID manages the session information of the logged-in user.
The tenant administrator can view the user’s current active session and manage it to force the session to end and log out the user.
To manage a user’s session, click the Active Sessions tab on the detailed screen.
Classification
Description
Active Sessions Tab
Displays the user’s session management screen.
Active Sessions List
The user’s currently active sessions are displayed in a list.
Terminate Button
Forces the user’s active session to terminate.
Active Sessions Tab
Notice
If the user’s active session list is displayed as an empty list, it means that the user is not currently logged in to SingleID.
Session Forced Termination
To forcibly terminate a user’s session, follow these steps:
Click the Terminate button located at the top right of the session you want to terminate.
In the Terminate Confirmation popup, click the Terminate button.
Notice
The terminated user will be forcibly logged out of SingleID and must log in again to use the system.
However, the session of the application accessed using SingleID before the session termination will be maintained, and the session of each application will be managed by each application.
Forcible Termination of Multiple Sessions
If you want to terminate multiple sessions simultaneously, follow these steps:
Select the sessions you want to terminate in the list and check the checkbox (V) displayed on the left side of the session information.
Click the Terminate button displayed at the top of the list.
In the Terminate Confirmation popup, click the Terminate button.
User Deletion
The tenant administrator can delete user information from SingleID.
The delete user button is exposed in both the list and detail screens as follows:
When one or more users are selected in the list screen
After selecting a user, click the Delete button to display a Confirmation popup on the screen.
To delete a user, confirm the user’s information and enter the user’s ID, then click the Delete button.
When multiple users are selected and the Delete button is clicked, the following Confirmation popup is displayed on the screen.
To delete the selected users, use the <, > buttons to confirm all users’ information, enter Delete All, and then click the Delete button.
Notice
You must confirm all user information and enter Delete All to activate the delete button.
When moving to the user details screen
If the administrator wants to delete a user, a confirmation popup will be displayed.
To delete a user, check the user’s information, enter the user’s ID, and click the Delete button.
Note
Deleted user information cannot be recovered.
When user information is deleted, the groups, applications, and multi-factor authentication (MFA) methods assigned to the user are also deleted. Even if you re-register a user with the same ID, the deleted groups, applications, and MFA methods will not be recovered.
Users registered through an application’s inbound provisioning can be re-provisioned from the application even if they are deleted from SingleID.
To completely delete a user, you must delete the user’s information from the original system that manages the user’s information.
Even if a user with the same ID is re-registered after deletion, the deleted groups, applications, and MFA methods will not be automatically recovered.
Group
The administrator can use the functions provided in the Group menu to view and modify all groups registered in the organization, delete groups, or register new groups.
You can also change the group membership rules or assign usage permissions to group members so that they can use applications.
Groups are registered in SingleID in the following ways:
Registered through inbound provisioning from an application (Application)
Manually registered by the administrator (Create Group)
The tenant administrator can manage registered groups in various ways using the group menu.
To access the group menu, move as follows:
Admin Portal > Identity Store > Group
Group List
The tenant administrator can view and search all groups registered in the organization in a list format.
Classification
Description
Group List
The group list is displayed.
Keyword Search
Search by group name and description.
Detailed Search
Detailed options for searching groups are displayed on the screen.
Table. Group List
Create Group
The administrator can manually register a group on the screen without going through inbound provisioning.
To manually register a group, click the Register button on the group list screen.
When you click the Register button, the group registration popup is displayed on the screen.
The fields that must be entered for group registration are as follows:
Classification
Description
Required
Type
Select the group type.
Required
Name
Enter the name of the group. You cannot enter a name that duplicates an already registered group name.
Required
Description
Enter a description of the group.
Required
Table. Group Registration Fields
When the Complete button is clicked, the group is registered and the screen moves to the detailed information screen of the registered group.
Detailed Information Inquiry and Modification
The administrator can move to the group’s detailed information screen by clicking the group name in the group list.
If a new group is registered, it will also move to the group’s detailed screen immediately after registration.
At the top of the group detail screen, the group name, description, and management entity information are displayed, and below that, the group information is composed of multiple tabs.
Division
Description
Type
The type of group is displayed.
Name
The name of the group is displayed.
Management Entity
The system that manages the group is displayed. For groups directly registered by the tenant administrator in SingleID, it is displayed as SingleID, and for groups provisioned through an application, the application name is displayed.
Description
The description of the group is displayed.
Table. Detailed Information Inquiry
The tenant administrator can confirm the detailed information of the registered group through the Group Profile tab.
Category
Description
Group Profile
The group profile will be displayed.
List
A button to return to the list.
Edit
Edit the profile.
Table. Detailed Information Inquiry
To modify the group’s detailed information, follow the procedure below.
In the group detail screen, select the Profile tab.
Click the Edit button.
Modify the Group Information.
The fields that can be modified are as follows.
Category
Description
Required
Name
Enter the group name. You cannot enter a name that duplicates an already registered group name.
Required
Description
Enter a description of the group.
Required
Table. Edit Fields
Click the Save button.
To return to the inquiry state without saving the modified information, click the Cancel button.
Group Membership Rule Management
The administrator can set rules to automatically configure users who meet certain conditions as group members.
When a group rule is set, the tenant administrator does not need to manually manage members, and the group members are automatically configured and added or deleted according to the set condition.
To manage group membership rules, click the Rules tab on the detailed screen.
Category
Description
Rules Tab
The group rules are displayed.
Rules
The set group rules are displayed. (The default setting for membership policy after creating a group is Off) If the membership policy setting is Off, members are not automatically managed.
List
A button to return to the list.
Edit
Edits the rules.
Table. Rules Tab
To set a group rule, follow the procedure below.
Select the Rules tab on the group detailed screen.
Click the Edit button.
Click the On button for the membership policy setting.
Set the condition in the WHEN section.
Click the Save button.
To return to the inquiry state without saving the set rule, click the Cancel button.
Notice
When a group membership rule is set, if the administrator changes the user’s detailed information or the user’s detailed information is changed by inbound provisioning, the system automatically searches for users according to the set rule and manages each group member.
Members automatically added according to the group membership rule cannot be manually deleted by the tenant administrator.
WHEN area conditions are composed as follows.
Classification
Description
Conditional expression operation relationship
Defines the operation relationship between conditional expressions when there is more than one. You can choose either AND or OR, and the choice is applied commonly to all conditional expressions.
Conditional expression type
Sets the type of conditional expression. You can select User Attribute.
Condition item
Sets the condition item of the conditional expression. When the type of conditional expression is User Attribute, you can select the user’s attribute from the list.
Operator
Sets the operation method of the conditional expression.
Condition value
Sets the condition value of the conditional expression.
Add conditional expression
Adds a conditional expression.
Delete conditional expression
Deletes a conditional expression. When there is only one conditional expression, it cannot be deleted.
Table. WHEN Area Conditions
The user’s attributes that can be set in the condition item are as follows.
Property
Data Type
Description
Mandatory
key
String
Key
Mandatory
username
String
ID
Mandatory
password
GuardedString
Password
Mandatory
status
String
Status
Mandatory
mustChangePassword
Boolean
Forced password setting
Mandatory
suspended
Boolean
Waiting status
Mandatory
creator
String
Creator
Optional
creationDate
Date
Creation Date
Optional
lastModifier
String
Last Modifier
Optional
lastChangeDate
Date
Last Change Date
Optional
administrator
Boolean
Administrator
Optional
displayName
String
Display Name
Optional
cn
String
Common Name
Optional
local
String
Locale (Email Sending Standard)
Optional
userSource
String
User Source
Optional
syncDate
String
Last Sync Date
Optional
contractNumber
String
Contract Number
Optional
contractStartDate
String
Contract Start Date
Optional
contractEndDate
String
Contract End Date
Optional
agreementDate
String
Agreement Date
Optional
accountStartDate
String
Account Start Date
Optional
accountEndDate
String
Account End Date
Optional
partnerOrganizationCode
String
Partner Organization Code
Optional
approvalUser
String
Approval User ID
Optional
formattedName
String
Korean Display Name
Optional
familyName
String
Korean Last Name
Optional
givenName
String
Korean First Name
Optional
enFormattedName
String
English Display Name
Optional
enFamilyName
String
English Last Name
Optional
enGivenName
String
English First Name
Optional
adDomain
String
AD Domain
Optional
nickName
String
Nickname
Optional
employeeNumber
String
Employee Number
Optional
epId
String
EP ID
Optional
email
String
Email Address
Optional
phoneNumberWork
String
Work Phone Number
Optional
mobile
String
Mobile Phone Number
Optional
title
String
Title Name
Optional
executiveYn
String
Executive Status
Optional
timeZone
String
Time Zone
Optional
accountLocked
Boolean
Account Forced Lock
Optional
accountAutoLocked
Boolean
Account Auto Lock
Optional
accountDisabled
Boolean
Account Disabled
Optional
accountSuspended
Boolean
Dormant Account
Optional
accountSuspendedTime
Date
Dormant Processing Time
Optional
lastLoginTime
Date
Last Login Time
Optional
accountState
String
Account Status
Optional
Table. Condition Attributes
The operators that can be selected in the Operator field are as follows.
Operator
Description
Equals
Searches for users whose condition item value matches the condition value.
Not Equals
Searches for users whose condition item value does not match the condition value.
Starts with
Searches for users whose condition item value starts with the condition value string.
Ends with
Searches for users whose condition item value ends with the condition value string.
Contains
Searches for users whose condition item value contains the condition value string.
Table. Operator List
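Putting the pieces together, a WHEN area consists of one or more conditional expressions over user attributes, one of the five operators above, and a single AND/OR relationship applied across all expressions. The sketch below is an illustrative model of that evaluation, not SingleID's actual API; the attribute names come from the condition attributes table.

```python
# Illustrative evaluation of a group membership rule's WHEN area.
# Each expression is (attribute, operator, condition value); the AND/OR
# relationship is applied commonly to all expressions, as described above.
OPERATORS = {
    "Equals":      lambda value, cond: value == cond,
    "Not Equals":  lambda value, cond: value != cond,
    "Starts with": lambda value, cond: str(value).startswith(cond),
    "Ends with":   lambda value, cond: str(value).endswith(cond),
    "Contains":    lambda value, cond: cond in str(value),
}

def matches_rule(user: dict, expressions: list, relation: str = "AND") -> bool:
    """Check a user's attributes against the WHEN conditional expressions."""
    results = (OPERATORS[op](user.get(attr, ""), cond) for attr, op, cond in expressions)
    return all(results) if relation == "AND" else any(results)

user = {"email": "kim@example.com", "title": "Manager"}
rule = [("email", "Ends with", "@example.com"), ("title", "Equals", "Manager")]
print(matches_rule(user, rule, "AND"))  # True: both expressions match
```

When such a rule is saved with the membership policy On, users matching the rule would be added to the group automatically and non-matching users removed, mirroring the behavior described in the Notice above.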
Group Member Management
Tenant administrators can manually specify members of a group or delete users from group members.
To manage group members, click the Members tab on the detail screen.
Name
Description
Members Tab
Displays the group member management screen.
Member List
Displays group members in a list format.
Filter Button Group
Filters group members by status and displays the list.
Keyword Search
Searches for group members by entering keywords.
Advanced Search
Searches for group members by entering detailed search conditions.
Add Button
Adds members to the group.
Table. Member Tab
There are three ways to search for members within the group tab.
Member status filter
Keyword search
Advanced search
Notice
Member Status Classification
Active: A user who has logged in to SingleID after initial registration, initialized their password, and is currently using it normally
Inactive: A user whose use has been suspended by an administrator
Pending: A user who has not logged in to SingleID even once after initial registration
Locked: A user who has been locked out due to repeated login failures, etc. (in a state where the user can unlock themselves through password reset)
Member Status Filter
To filter members by status, follow these steps:
Click the button for the status of the member you want to filter (Active, Inactive, Pending, Locked button)
You can check the list of members in the filtered state by moving to another page
You can search for the desired member using keyword search in the filtered state (However, if you perform a detailed search in the filtered state, the filter will be removed)
To remove the filter, click the All button
Keyword Search
To perform a keyword search, follow these steps:
Click on the keyword search input box with your mouse
Enter the word you want to search for; a dropdown menu will be displayed below the search input box. If you select “ID”, “English Name”, or “Email” from the dropdown menu, the search is executed for the corresponding field; if you select “All”, the search is executed across all of the ID, English Name, and Email fields
Alternatively, after entering the search term, press the Enter key or click the magnifying glass icon to execute the search; in this case, the search is executed across all of the ID, English Name, and Email fields
The search results will be displayed in the member list
If you want to cancel the search results and display the entire list, click the X icon on the right side of the keyword search input box
Advanced Search
To perform an advanced search, follow these steps:
Click the Advanced button
Enter the search term in the field you want to search for on the advanced search screen
On the advanced search screen, you can limit the search range by selecting the member’s registration date
If you enter search terms in multiple fields, the search will be executed with an “AND” condition
Enter the search term and press the Enter key or click the Search button to execute the search.
The search results are displayed in the member list.
If you want to cancel the search results and display the entire list, click the Reset button on the detailed search screen.
Member Deletion
To delete a member from a group, follow these steps.
Select one or more members to delete from the member list. (Check the checkbox to the left of the member ID)
Click the Delete button displayed at the top of the list.
Click the Confirm button in the warning popup.
Guide
Deleting a member from a group does not delete the member’s user information.
The deleted member will lose the application usage rights assigned through the group.
Member Addition
To add a member to a group, follow these steps.
Click the Add button at the top right of the member list.
In the member addition popup, select one or more users to add as members. (Check the checkbox to the left of the user ID)
Click the Add button.
If you have added all the desired users as members, click the Cancel button in the popup to close the member addition popup.
Guide
Added members will immediately receive application usage permissions assigned through the group.
Application Management
The tenant administrator can view the applications assigned to a group and add or delete applications.
To manage a group’s applications, click the Application tab on the detail screen.
Name
Description
Application Tab
Displays the application management screen for the group.
Assigned Application List
The applications assigned to the group are displayed in a list.
Assign Button
Allows you to add and assign applications to the group.
Table. Application Management
Application Deletion
To delete an application assigned to a group, follow these steps.
Select the application to be deleted from the list of assigned applications. (Check the checkbox to the left of the application name)
Click the Unassign button displayed above the application list.
Click the Confirm button in the confirmation popup.
Notice
If an assigned application is deleted, it will no longer be displayed in the User Portal > My Apps menu for group members.
Application Assignment
To assign a new application to a group, follow the procedure below.
Click the Assign button displayed at the top right of the application list.
In the Application Assignment popup, select the application. (Check the checkbox to the left of the application name)
Click the Assign button.
If you have assigned all applications, click the Cancel button to close the Application Assignment popup.
Notice
Assigned applications can be found in the User Portal > My Apps menu for group members. (If the Screen Display option for the assigned application is turned off, it will not be displayed in the user portal)
Group Deletion
Tenant administrators can delete groups from SingleID.
The group deletion button is exposed as follows in the list and detail screens.
When one or more groups are selected in the list screen
After selecting the group, click the Delete button to display the following Confirmation Popup on the screen.
To delete the group, confirm the group information and enter the group name, then click the Delete button.
If you select multiple groups and click the Delete button, the following Confirmation Popup will be displayed on the screen.
To delete the selected groups, use the <, > buttons to confirm the information of all groups and enter the phrase Delete All, then click the Delete button.
Notice
You must confirm the information of all groups and enter the phrase Delete All to activate the Delete button.
When moving to the group detail screen
If the tenant administrator wants to delete a group, a confirmation popup will be displayed as follows.
To delete a group, check the group information, enter the group name, and click the Delete button.
Note
The information of a deleted group cannot be recovered.
When the group information is deleted, the group members and application information assigned to the group are also deleted, and even if the group is registered again with the same name, the member or application information is not recovered.
Groups registered through the application’s inbound provisioning can be reprovisioned from the application even if they are deleted from SingleID.
To completely delete a group, it must be deleted from the original system that manages the group information.
Even if the group is registered again with the same name after deletion, the deleted members or application information are not automatically recovered.
9.5.2.1.2.4 - Policy
When logging in to SingleID or logging in to an application registered with SingleID, various settings such as login method, authentication session, and password must be set according to the organization’s security policy.
SingleID provides a policy management feature that allows for detailed settings for login and authentication information. If you have purchased the anomaly detection feature (ADM), you can set it to analyze the user’s login behavior when logging in and alert the user to potential security threats when an unusual authentication is detected.
The policy features provided by SingleID are as follows:
Login policy
Authentication policy
Anomaly detection policy
Using SingleID’s policy feature, you can specify a detailed login method according to who, when, and under what environment logs in to which application, creating a secure authentication environment that meets the organization’s security requirements.
Login Policy
The administrator can set a detailed policy on which authentication means are used when a user logs in to SingleID and, if necessary, can create conditional authentication policies for users authenticating in specific environments.
Login policy can be configured using the following conditions:
Which application is logging in?
Who logs in?
In what environment do they log in?
To access the login policy menu, navigate as follows:
Admin Portal > Policies > Login Policy
Basic Login Policy
The Admin Portal has two default policies created as follows.
Admin Portal Policy: Policy to control Admin Portal access rights
Default Policy: Basic access control policy for users
The Admin Portal Policy is a login policy applied when logging in to the Admin Portal, and the Default Policy is a login policy applied when logging in to the user portal.
After integrating an application with SingleID, if no separate login policy is assigned, the Default Policy is automatically assigned as the basic login policy.
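The Default Policy fallback described above can be sketched as a simple lookup. The function and field names below are hypothetical, used only to illustrate the behavior; they are not SingleID's API.

```python
# Illustrative sketch of the default-policy fallback: an application with no
# separately assigned login policy is governed by the Default Policy.
def login_policy_for(app: dict, default_policy: str = "Default Policy") -> str:
    """Return the application's assigned login policy, falling back to the Default Policy."""
    return app.get("login_policy") or default_policy

print(login_policy_for({"name": "HR App"}))                              # Default Policy
print(login_policy_for({"name": "ERP", "login_policy": "ERP Policy"}))   # ERP Policy
```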
Notice
The above two basic policies cannot be deactivated or deleted.
Registering a Login Policy
A login policy defines how administrators and users log in. You can set login policies based on the access environment, application, and situation.
The login policy can be registered through a 4-step screen as follows:
General
Assignment
Initial Redirection
Rules
General
In the general screen, enter the name and description of the login policy.
The fields to be entered are as follows.
Name
Description
Required
Name
Enter the name of the login policy.
Required
Description
Enter the description of the login policy.
Required
Table. General
Click the Next button to move to the assignment screen.
Assignment
In the assignment screen, specify the application to which the login policy will be applied.
Name
Description
Filter
Filters applications by status.
Keyword Search
Searches by application name and description.
Detailed Search
Displays detailed search options for applications on the screen.
Assign Button
Displays the application assignment popup on the screen.
Assigned Application List
The assigned applications are displayed in a list format. The list starts empty.
Table. Assignment
Click the Assign button to display the application assignment popup on the screen.
In the Application Assignment popup, select one or more applications to assign to the login policy and click the Assign button.
If all applications have been assigned, click the Cancel button to close the Application Assignment popup.
Initial Redirection
The Initial Redirection screen specifies how the user enters the login screen and which login method is used:
Redirected to SingleID’s Sign-in page (login page)
Redirected to the external IdP
The explanations for the two methods are as follows:
If Redirected to SingleID’s Sign-in page is selected, the SingleID login page will be displayed to the user attempting to log in.
If Redirected to the external IdP is selected, the login page of the selected Identity Provider will be displayed to the user attempting to log in.
After selecting Redirected to the external IdP, you must select and specify the Identity Provider from the selection list.
If Redirected to SingleID’s Sign-in page is selected, you can optionally display a button at the bottom of the SingleID login screen that allows the user to log in through an Identity Provider.
To display external IdP buttons on the Sign-in page, click the text input box below and select one or more Identity Providers registered with SingleID.
Notice
For settings on registering an Identity Provider or displaying a registered Identity Provider on the login screen, refer to Identity Provider Registration.
Rules
On the Rules screen, you can modify or add login rules and set the priority between login rules.
Name
Description
Rule List
The login rules are displayed on the screen in a list format. The Default Rule is displayed by default, and the Default Rule cannot be deleted.
Keyword Search
Searches by the name or description of the login rule.
Register Button
Registers a new login rule.
Complete Button
Registers the login policy.
Table. Rule
Default Rule Setting
The login rule list on the rule screen displays the Default Rule by default.
The Default Rule cannot be deleted; it can only be modified. Even when one or more login rules are added, the Default Rule's priority cannot be changed. (It always has the lowest priority.)
To modify the Default Rule, follow these steps:
Click on the Default Rule in the rule list.
The WHEN condition of the Default Rule cannot be modified.
The THEN result of the Default Rule can be modified.
Name
Description
Access Permission Setting
Sets the access permission.
Mandatory Authentication Method
Sets the primary login method. Additional login methods can be displayed on the login screen besides the default login method.
MFA Authentication
Sets additional login to be required after the primary login is successful.
Terms and Conditions for Collecting Consent and Terms
Sets the terms and conditions to be displayed and consent to be obtained when the user logs in to SingleID for the first time.
Save Button
Saves the modified login rule.
Table. Default Rule
You can select one of the following two options in the access permission setting:
Deny Access
Allow Access
If you select Deny Access, all user logins will be denied.
If you select Allow Access in the access permission setting, you can set the user’s login method.
Notice
If you selected Redirected to the external IdP as the login method on the Initial Redirection screen, the primary login setting will not be displayed on the screen.
Essential authentication methods are performed by the external Identity Provider based on the Initial Redirection settings.
To allow users to log in through multi-factor authentication, check the MFA authentication checkbox and select one or more authenticators in the text input box.
If you want to set up the terms and conditions agreement for users logging in to SingleID for the first time, check the terms and conditions agreement setting checkbox and select one or more terms or conditions to be displayed on the screen in the text input box.
Add Rule
To add a login rule, follow these steps:
Click the Register button at the top right of the rule list.
Enter the name and description of the rule on the rule registration screen.
Refer to the following to enter the rule items:
Name
Description
Name
The name of the rule.
Description
Rule description.
User Group Assignment
Select the user group to which the rule will be applied.
Profile Attribute Assignment
Click the ‘Add’ button in the profile attribute assignment list to add attributes. For attribute descriptions and operator explanations, refer to the help below.
Group Settings
Specifies the group to which the logging-in user belongs.
User Attribute List
Specifies the attributes of the logging-in user and the conditions for each attribute.
Add User Attribute Button
Displays the “Add Attribute” popup on the screen.
Table. Rule Addition
Access Environment
Name
Description
Network
Specifies the IP or network range of the logging-in user. The default value is “IP address anywhere”.
Platform
Specifies the device information of the logging-in user. The default value is “Any platforms”. Options: Desktop, Mobile
Browser
Specifies the browser information of the logging-in user. The default value is “Any browsers”. Options: Edge, Chrome, Safari
OS
Specifies the OS information of the logging-in user. The default value is “Any OS”. Options: Windows 10, Windows 11, Android, iOS
AND Anomalies (Abnormal Behavior)
Sets the condition for whether an anomaly was detected during login.
Anomaly detection conditions can be set only by tenants who have purchased the Anomaly Detection Management (ADM) option.
To use the anomaly detection (ADM) feature, you must select the additional option when signing the SingleID usage contract.
If you want to use the anomaly detection feature, you can purchase it additionally on the SCP product purchase page.
After setting all the WHEN condition areas, set the login method to be applied when a user who meets the conditions logs in.
Table. Access Environment
Guide
The following are the attributes of the user that can be selected.
User Attribute Information
Attribute Name
Data Type
Required
Description
key
String
Required
Key
username
String
Required
ID
password
GuardedString
Required
Password
status
String
Required
Status
mustChangePassword
Boolean
Required
Password Forced Setting
suspended
Boolean
Required
Waiting Status
creator
String
-
Creator
creationDate
Date
-
Creation Date
lastModifier
String
-
Last Modifier
lastChangeDate
Date
-
Last Change Date
administrator
Boolean
-
Administrator
displayName
String
-
Display Name
cn
String
-
Common Name
local
String
-
Locale (Email Sending Standard)
userSource
String
-
User Source
syncDate
String
-
Last Sync Date
contractNumber
String
-
Contract Number
contractStartDate
String
-
Contract Start Date
contractEndDate
String
-
Contract End Date
agreementDate
String
-
Mandatory Agreement Date
accountStartDate
String
-
Account Usage Start Date
accountEndDate
String
-
Account Usage End Date
partnerOrganizationCode
String
-
Partner Company Code
approvalUser
String
-
Approval User ID
formattedName
String
-
Korean Display Name
familyName
String
-
Korean Last Name
givenName
String
-
Korean First Name
enFormattedName
String
-
English Display Name
enFamilyName
String
-
English Last Name
enGivenName
String
-
English First Name
adDomain
String
-
AD Domain
nickName
String
-
Nickname
employeeNumber
String
-
Employee Number
epId
String
-
EP ID
email
String
-
Email Address
phoneNumberWork
String
-
Phone Number
mobile
String
-
Mobile Phone Number
title
String
-
Title
enTitle
String
-
English Title
titleCode
String
-
Title Code
entitlement
String
-
Position
department
String
-
Department Name
enDepartment
String
-
English Department Name
departmentCode
String
-
Department Code
organization
String
-
Company Name
enOrganization
String
-
English Company Name
organizationCode
String
-
Company Code
region
String
-
Location
userStatus
String
-
Employee Status
userType
String
-
Employee Type
securityLevel
String
-
Security Level
preferredLanguage
String
-
Preferred Language
executiveYn
String
-
Executive Status
timeZone
String
-
Time Zone
accountLocked
Boolean
-
Account Lock
accountAutoLocked
Boolean
-
Account Auto Lock
accountDisabled
Boolean
-
Account Disabled
accountSuspended
Boolean
-
Dormant Account
accountSuspendedTime
Date
-
Dormant Account Time
lastLoginTime
Date
-
Last Login Time
accountState
String
-
Account State
Table. User Attributes
Operators are as follows.
Operator
Description
Equals
Searches for users whose attribute value matches the condition value.
Not Equals
Searches for users whose attribute value does not match the condition value.
Starts with
Searches for users whose attribute value starts with the condition string.
Ends with
Searches for users whose attribute value ends with the condition string.
Contains
Searches for users whose attribute value contains the condition string.
Table. Operators
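The five operators above can be read as simple string comparisons against a user's profile attribute. The following is an illustrative sketch only; the function name `matches` is hypothetical and not part of SingleID:

```python
# Illustrative sketch of how each rule operator might be evaluated
# against a user's profile attribute value.
def matches(operator: str, attribute_value: str, condition_value: str) -> bool:
    """Return True if the attribute value satisfies the condition."""
    if operator == "Equals":
        return attribute_value == condition_value
    if operator == "Not Equals":
        return attribute_value != condition_value
    if operator == "Starts with":
        return attribute_value.startswith(condition_value)
    if operator == "Ends with":
        return attribute_value.endswith(condition_value)
    if operator == "Contains":
        return condition_value in attribute_value
    raise ValueError(f"Unknown operator: {operator}")
```

For example, a rule condition of `email` / Ends with / `@example.com` would match any user whose email attribute ends with that string.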
THEN Settings
The THEN result area sets the login method and procedure.
In the access permission setting, you can select one of the following two options:
Deny Access
Allow Access
If Deny Access is selected, all user logins will be denied. (The default value of the access permission setting is Deny Access.)
To allow users to log in and set detailed login methods, select Allow Access.
Name
Description
Access Permission Setting
Sets the access permission.
Primary Login Setting
Sets the primary login method. In addition to the default login method, additional login methods can be displayed on the login screen.
Additional Login Setting
Sets to require additional login after the primary login is successful.
Terms and Conditions Agreement Setting
Sets to display the terms and conditions and request agreement when the user logs in to SingleID for the first time.
PC SSO Agent Setting
Sets to check if a security program (Endpoint Security) is installed on the user’s PC using the PC SSO Agent.
Save Button
Saves the modified login rules.
Table. THEN
In the selection list of the primary login setting, select the Authenticator to be used for login.
If you want to allow the user to log in with another Authenticator in addition to the selected primary login method, select the checkbox (V) of And allow another factors below: and select one or more Authenticators in the text input box.
Guide
If Redirected to the external IdP is selected as the login entry method on the Initial Redirection screen, the primary login setting will not be displayed.
The primary login is performed at the external Identity Provider according to the Initial Redirection setting.
To allow users to log in through multi-factor authentication, select the checkbox (V) of the additional login setting and select one or more Authenticators in the text input field.
To set the terms and conditions agreement when the user logs in to SingleID for the first time, select the checkbox of the terms and conditions agreement setting and select one or more terms or conditions to be displayed on the screen in the text input box.
To check if a security program (Endpoint Security) is installed on the user’s PC using the PC SSO Agent, select the checkbox (V) of the PC SSO Agent setting. If this setting is enabled, login will be blocked for users who do not have a security program installed on their PC.
If the PC SSO Agent is not registered, the PC SSO Agent setting item will not be displayed on the screen.
While the PC SSO Agent setting is enabled, instead of blocking the login of users who do not have a security program installed on their PC, you can require additional authentication by selecting the checkbox below and selecting one or more Authenticators in the text input box.
Click the Save button to register the login rule and return to the rule list.
Rule Priority Management
If one or more login rules have been added, the administrator can set the priority of the login rules. If a user meets the conditions set for multiple rules, the login method will be applied according to the rule with the higher priority.
To set the priority of the login rules, follow the procedure below.
Drag the ≡ area to the left of the rule name in the rule list with the mouse.
The priority of the login rules will be determined based on the position where they are dragged and dropped.
The higher the position in the rule list, the higher the priority.
Note
The Default Rule has the lowest priority, and its priority cannot be changed.
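The priority behaviour described above amounts to first-match evaluation: rules are checked from the top of the list down, and the Default Rule at the bottom matches every user. The sketch below is illustrative only; the rule representation and function name are hypothetical, not SingleID internals:

```python
# Hypothetical sketch of first-match rule selection. Rules are kept in
# display order (top of the list = highest priority); the Default Rule
# sits at the end and matches unconditionally, so some rule always applies.
def select_rule(rules, user):
    """Return the name of the first rule whose condition matches the user."""
    for rule in rules:
        if rule["condition"](user):
            return rule["name"]
    return None  # unreachable while the Default Rule is present

rules = [
    {"name": "Executives MFA", "condition": lambda u: u.get("executiveYn") == "Y"},
    {"name": "Default Rule",   "condition": lambda u: True},
]
```

With this ordering, an executive user is handled by "Executives MFA" even though the Default Rule would also match.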
Policy Status Change
The status of the login policy managed by SingleID is as follows.
Status
Description
Active
Login policy that is working normally
Inactive
Login policy that has been suspended by the administrator
Table. Policy Status
Administrators can change the status of the login policy according to the current status of the login policy as follows:
Current Status
Changeable Status
Description
Active
Inactive
You can change the active login policy to inactive by clicking the Deactivate button.
Inactive
Active
You can change the inactive login policy to active by clicking the Activate button. You can also delete the inactive login policy.
Table. Policy Status Change
Notice
Two login policies provided by default in SingleID, Admin Portal Policy and Default Policy, cannot be deactivated.
When a login policy is deactivated, the applications assigned to the deactivated login policy will be automatically changed to be assigned to the default policy (Default Policy).
Policy Deactivation
To deactivate an active login policy, follow these steps:
Click the policy you want to deactivate in the policy list to move to the policy details screen.
Click the Deactivate button.
Confirm the login policy information (the number of assigned applications, the number of rules included in the login policy) displayed in the Confirm popup, and then click the Deactivate button.
Notice
When a login policy is deactivated, the applications assigned to the deactivated login policy will be automatically changed to be assigned to the default policy (Default Policy).
Even if the deactivated login policy is changed back to active, the previously assigned applications will not be automatically reassigned.
Policy Activation
To change the login policy from inactive to active, follow these steps:
Click on the policy you want to activate in the policy list to move to the policy details screen.
Click the Activate button to change the status of the login policy to active.
Notice
When activating an inactive login policy, the status will be changed immediately without a separate confirmation popup.
Policy Deletion
The administrator can delete the login policy from SingleID.
To delete a login policy, follow these steps:
Click on the policy you want to delete in the policy list to move to the policy details screen.
If the login policy is activated, click the Deactivate button to deactivate the policy.
Click the Delete button displayed at the top right of the deactivated login policy.
A popup screen will appear to confirm the deletion of the login policy.
To delete the login policy, confirm the policy information, enter the name of the policy you want to delete, and click the Delete button.
Note
Deleted login policies cannot be recovered.
When a login policy is deleted, the rules included in the policy are also deleted. Even if you register a login policy with the same name, the deleted rules or settings will not be recovered.
Access Simulation
As the number of login policies and rules increases, it can be difficult to understand which user is subject to which policy for login methods.
SingleID provides an access simulation feature that allows administrators to quickly check the login policies and rules applied to users.
Using the access simulation feature, you can select a user and an application to access, and define the user’s login environment (network, device, browser, OS) to predict in advance what kind of login method the user will experience in different cases.
Additionally, if there are users who are having trouble logging in and need to review their requests, you can use the access simulation feature to quickly check and modify the policies or rules that are causing the problem.
To use the access simulation feature, click the Access Simulation button at the top right of the login policy list screen.
Name
Description
User ID Input
Enter the user ID to be simulated.
Network Settings
Specify the IP of the user to be simulated. The default value is “IP address anywhere”.
Platform Settings
Specify the device information of the user to be simulated. The default value is “Any platforms”.
Browser Settings
Specify the browser information of the user to be simulated. The default value is “Any browsers”.
OS Settings
Specify the OS information of the user to be simulated. The default value is “Any OS”.
Application Selection
Select the application to be simulated. Click the application selection button to display a popup.
Run Simulation Button
Run the access simulation.
Simulation Results
Display the access simulation results on the screen. The login policies and rules applied to the specified user are displayed.
List Button
Return to the login policy list.
Table. Access Simulation
To run the access simulation, follow these steps:
Enter the ID of the user to be simulated.
Specify the IP of the user to be simulated. You can select Specific IP Address and enter the IP directly. Enter the IP in the format 123.123.123.123.
Specify the device information of the user to be simulated. You can select Platform and choose a device from the selection list.
Specify the browser information of the user to be simulated. After selecting Browser, you can select a browser from the selection list.
Specify the OS information of the user to be simulated. After selecting OS, you can select an OS from the selection list.
Click the Application Selection button to select the target application to be simulated.
In the Application Selection popup, click the radio button to the left of the application name to select the application, and then click the Add button.
Note
If you want to reselect the application, click the X button to the right of the selected application name, and then click the Application Selection button again.
Click the Run Simulation button.
The access simulation is executed, and when it finishes, the applicable login policy and rules are displayed on the screen according to the simulation result.
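The Specific IP Address field in the simulation expects a dotted-quad IPv4 address (for example, 123.123.123.123). A quick sanity check for that format can be sketched with the Python standard library; this is an illustrative helper, not part of SingleID:

```python
# Hypothetical helper: validate that a string is a dotted-quad IPv4
# address, the format expected by the Specific IP Address input.
import ipaddress

def is_valid_ipv4(text: str) -> bool:
    try:
        ipaddress.IPv4Address(text)
        return True
    except ValueError:
        return False
```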
Authentication Policy
The administrator may need to change the detailed settings related to authentication according to the organization’s security policy.
SingleID manages the detailed settings related to authentication in the following four policies:
Session policy
Authenticator policy
MFA Service Provider policy
Password policy
To access the authentication policy menu, move as follows:
Admin Portal > Policy > Authentication Policy
To modify the authentication policy, click the Modify button at the bottom right of the authentication policy screen, change the settings, and then click the Save button.
Session Policy
To change the session policy, follow the procedure below:
Click the Modify button at the bottom right of the authentication policy screen.
Set the maximum number of sessions that a user can create at the same time in the maximum session limit setting.
The minimum value that can be set is 1, and the maximum value is 100. If set to 1, the user can only log in from one browser at a time and cannot log in from multiple PCs or browsers simultaneously.
In the session priority setting, you can set the priority of the session created by the user. The priority can be one of the following two options:
Old session
New session
If the maximum session limit is set to 1 and Old session is selected in the session priority setting, a logged-in user who attempts to log in from another PC or browser will have the new login blocked.
If the maximum session limit is set to 1 and New session is selected in the session priority setting, when a logged-in user attempts to log in from another PC or browser, the session of the previously logged-in browser will be forcibly expired, and the session of the new PC or browser will be maintained.
In the maximum session time setting, you can set the maximum time to maintain a session.
The maximum session time can be one of the following two options:
No time limit
Set time limit
If set to No time limit, once a session is created, it will not expire automatically until the user logs out.
If set to Set time limit and a time is set, the session will expire when the set time passes, and the user will be automatically logged out.
In the maximum idle session time setting, you can set the maximum idle session time.
If the maximum idle session time is set, the session will expire if the user does not make an authentication request within the set time, and the user will be automatically logged out.
To save the changed settings, click the Save button at the bottom right of the authentication policy screen.
To discard the changed settings without saving, click the Cancel button at the bottom right of the authentication policy screen.
Name
Description
Maximum session limit setting
Sets the maximum number of concurrent sessions for the user.
Session priority setting
Sets the priority between the old session and the new session when the number of concurrent sessions exceeds the maximum allowed.
Maximum Session Time Setting
Sets the maximum time to maintain a session after it is created. The session expires when the maximum session time elapses.
Maximum Idle Session Time Setting
Sets the time when a session expires if a user does not make an authentication request to the server for a certain period after the session is created.
Table. Session Policy
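The interaction between the maximum session limit and the session priority setting can be sketched as follows. This is an illustrative model only (the real server-side logic is not published); sessions are ordered oldest-first:

```python
# Illustrative sketch of the session-limit behaviour described above.
def attempt_login(sessions: list, max_sessions: int, priority: str, new_session: str):
    """Return the updated session list, or None if the new login is blocked."""
    if len(sessions) < max_sessions:
        return sessions + [new_session]
    if priority == "Old session":
        return None                        # keep existing sessions, block the new login
    # "New session": forcibly expire the oldest session to make room
    return sessions[1:] + [new_session]
```

With `max_sessions=1`, the Old session setting blocks a second browser, while the New session setting evicts the first browser's session.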
Authenticator Policy
To change the Authenticator policy, follow the procedure below.
Click the Edit button at the bottom right of the authentication policy screen.
Set each item as follows.
When the settings are complete, click the Save button.
Name
Description
Available Authenticator Settings (for login policy)
Sets the Authenticators available for authentication.
Registration Authentication Method
Sets the primary identity verification method for users when registering an Authenticator.
Additional Authentication
Sets the additional identity verification methods allowed for users when registering an Authenticator, in addition to the primary method.
Account Search
Sets the authentication method for ID search.
Password Reset
Sets the authentication method for password reset.
Unlock Setting
If a user fails to authenticate repeatedly using Authenticators, their ID will be locked. This setting allows you to specify a time after which the lock will be automatically released.
Table. Authenticator Policy
Notice
To remove an Authenticator specified in the available Authenticator settings, it must first be removed from all login policy rules.
Configurable Authenticators can be registered in the Authenticator addition menu. Disabled Authenticators cannot be set in the available Authenticator settings.
Notice
If you haven’t purchased the MFA product
Available Authenticator settings (for login policy) will not be displayed on this screen.
To purchase additional MFA products, please contact us through Support Center > Inquiry.
Notice
If a user fails to log in due to repeated incorrect password entries and is locked out, the lock will not be released even after a certain period of time. The password lock and release method should be set in the Password Policy.
If you reset a user’s password in the user menu, you can release the lock before the lock release waiting time. Please refer to the password reset.
MFA Service Provider Policy
To change the MFA Service Provider policy, follow the procedure below.
Click the Edit button at the bottom right of the authentication policy screen.
Refer to the table below and set each item accordingly.
When the settings are complete, click the Save button.
Name
Description
Available Authenticator settings (for MFA Service Provider)
Sets the Authenticator that users can use when an authentication request occurs from the MFA Service Provider.
Terms and Conditions option
Displays the terms and conditions and obtains the user’s consent when a user is registered from the MFA Service Provider.
Lock release settings
When an authentication request occurs from the MFA Service Provider and a user repeatedly fails to authenticate, the user’s ID will be locked. Sets the time after which the locked ID is automatically released.
Table. MFA Service Provider Policy
Notice
To remove the specified Authenticator from the available Authenticator settings, the Authenticator must be removed from all MFA Service Providers first.
The Authenticators that can be set are registered in the Authenticator addition menu. Disabled Authenticators cannot be set in the available Authenticator settings.
To set up the terms and conditions to be displayed to the user and to request the user’s consent when the user authenticates from the MFA Service Provider for the first time, check the checkbox in the terms and conditions option and select one or more terms or conditions to be displayed on the screen in the text input box.
If a user who authenticates from the MFA Service Provider repeatedly fails to authenticate, the user’s ID will be locked. To automatically unlock the lock after a certain period of time, set the lock release waiting time in the lock release settings.
Password Policy
To change the password policy, follow the procedure below.
Click the Edit button at the bottom right of the authentication policy screen.
Refer to the table below and set each item accordingly.
When the settings are complete, click the Save button.
Name
Description
Password History
Prevents the reuse of previously used passwords. Specify how many of the user’s most recently used passwords cannot be reused.
Password Expiration
Specify the password validity period. After the validity period has passed, you must change your password to log in. It can be set from 1 day to 365 days.
Password Lock
The user’s ID is locked when the password is repeatedly entered incorrectly. Specify the number of repeated input failures that triggers the lock, and choose how the lock is released:
- Automatic lock release after the set time (minutes) (1-1,440): an account that exceeds the set failure count is locked for the set time. Enter the automatic lock release time in minutes.
- Automatic lock release after password reset
Pattern and Complexity
Set the minimum length, minimum characters, numbers, etc. of the password.
Minimum Character Setting
Specify the minimum length of the password.
Minimum Alphabet Setting
Specify the minimum number of alphabetic characters to be included in the password.
Minimum Number Setting
Specify the minimum number of numbers to be included in the password.
Minimum special character setting
Specifies the minimum number of special characters to be included in the password.
Maximum character setting
Specifies the maximum length of the password.
Allow using user ID as password
Sets whether to allow the user’s ID to be included in the password.
Table. Password policy
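The pattern-and-complexity items in the table above can be read as a set of independent checks. The following is a hedged sketch of such a validator; the function and parameter names are illustrative, not SingleID’s own:

```python
# Illustrative sketch of a pattern-and-complexity validator matching the
# policy items above. Default limits here are assumptions, not product defaults.
import string

def password_ok(pw: str, user_id: str, *, min_len=8, max_len=64,
                min_alpha=1, min_digits=1, min_special=1,
                allow_user_id=False) -> bool:
    if not (min_len <= len(pw) <= max_len):
        return False                                   # minimum/maximum character setting
    if sum(c.isalpha() for c in pw) < min_alpha:
        return False                                   # minimum alphabet setting
    if sum(c.isdigit() for c in pw) < min_digits:
        return False                                   # minimum number setting
    if sum(c in string.punctuation for c in pw) < min_special:
        return False                                   # minimum special character setting
    if not allow_user_id and user_id.lower() in pw.lower():
        return False                                   # disallow user ID in password
    return True
```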
Notice
A user locked out due to repeated password input failure must reset their password themselves to be unlocked.
To change the status of a user locked out due to repeated password input failure, refer to Changing User Status.
Membership registration policy
To allow user membership registration, you must activate the membership registration policy, which allows registration of users other than those provisioned from the personnel system or IdP. It provides features to register, create, modify, and delete accounts through account synchronization, as well as invite users through the login screen or email.
To activate and use the membership registration policy, follow these steps:
Activate the membership registration policy. After activation, the Policy tab and User invitation tab will appear.
Refer to the explanations of the Policy tab and User invitation tab below and set the policy.
Once the settings are complete, click the Save button.
Policy
You can set general membership registration policies.
Name
Description
Display membership registration link on login screen
Displays the membership registration link on the SingleID login screen.
- Display SingleID membership registration screen as a link: select when using the default SingleID membership registration screen
- Display external membership registration screen as a link: select when you have a separate membership registration page
Terms and conditions option
Selects the terms and conditions agreement option during membership registration. During membership registration, you can apply terms and conditions separately.
Allow membership registration invitation
When activated, you can invite users by email. You can set it so that only invited users can join, without a separate membership registration page. In this case, joining through the SingleID membership registration link is not possible.
Registration Input Form
Sets the user attributes to be input when signing up. Can be added as required.
ID Duplication Prevention Setting
If activated, a suffix is added to the ID to prevent duplication. This prevents a self-registered ID from colliding with the ID of an existing auto-provisioned account; because such collisions are common, enabling this setting is recommended. When a user signs up, the PostFix value is appended to the end of the ID.
Maximum Usage Period
The maximum usage period is set after signing up. Can be set from 1 to 2000 days.
Approval when Signing up
When the sign-up approval setting is activated, the registered approval policy can be loaded and set.
Table. Policy Tab
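The ID duplication-prevention setting amounts to appending the configured PostFix to every self-registered ID so it can never shadow a provisioned account. A minimal sketch, with an illustrative postfix value (`.ext` is an assumption, not the product default):

```python
# Hypothetical sketch of the ID duplication-prevention behaviour: the
# configured PostFix is appended so a sign-up ID never collides with an
# auto-provisioned account that has the same base ID.
def signup_id(requested_id: str, provisioned_ids: set, postfix: str = ".ext") -> str:
    candidate = requested_id + postfix
    # Provisioned accounts never carry the postfix, so "jdoe" (provisioned)
    # and "jdoe.ext" (self-registered) can coexist without conflict.
    if candidate in provisioned_ids:
        raise ValueError("ID already registered")
    return candidate
```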
Approval Policy
The administrator can select the approval system and set the policy according to the type, such as sign-up policy and app access policy, with various approval lines. Various approval policies can be applied flexibly whenever the security policy changes.
Approvals can be processed either through the built-in self-approval system or through the Knox Portal approval system. If you need to integrate with another approval system, please request it through a 1:1 inquiry.
To check the approval policy, follow the path below.
Admin Portal > Policy > Approval Policy
Approval Policy List
Name
Description
ID
Automatically generated ID when creating an approval policy.
Approval System
Divided into SingleID and Knox Portal. If you need to register another approval system, please request it through 1:1 inquiry.
Type
Divided into app access and sign-up.
Status
Approval policy status. “Unavailable” means the approver and notifier need to be changed.
Approval Use
Divided into in use and not in use. Click the Details button to view the applications using the approval policy.
Table. Approval Policy List
Approval Policy Registration
By clicking the Register button, you can set the approval system, type, approver, notification method, and approval period.
Name
Description
Approval System
Two options are available.
SingleID: self-approval, processed through the user portal
Knox Portal: approval processed through the Samsung Knox Portal approval system
Select the notification method when an approval request is received by the approver and notifier.
Table. Approval Policy Registration
Anomaly Detection Policy
SingleID collects and analyzes user behavior information in real-time before and after authentication, determining whether the authentication is abnormal. If it is identified as an abnormal authentication category, it immediately notifies the user of the risk.
To access the anomaly detection policy menu, follow these steps:
Admin Portal > Policy > Anomaly Detection Policy
Notice
A detailed description of the anomaly detection policy menu is provided separately to ADM purchasing customers.
If you have not purchased the anomaly detection feature as an option, you will not be able to view the policy management menu in the Admin Portal.
If you want to use the anomaly detection feature, please contact us through 1:1 inquiry or sales representative.
9.5.2.1.2.5 - Terms and Conditions
The company using SingleID can manage the Personal Information Processing Policy and Terms of Use, etc. according to the situation and characteristics of each company.
The organization can write a personal information processing policy according to the requirements and notify the user or show the terms of use or terms and conditions to the user using SingleID before use and obtain consent.
Through the Terms and Conditions menu, you can notify users of the Personal Information Processing Policy, Terms of Use, and Terms and Conditions, and obtain consent.
SingleID provides a basic template to make it easy to write terms and conditions.
To access the Terms and Conditions menu, move as follows.
Tenant Admin Portal > Rebranding > Terms and Conditions
The functions provided by the Terms and Conditions menu are as follows.
Terms and Conditions Attribute Setting
Terms and Conditions Version Management
Terms and Conditions Publication
Terms and Conditions List
The tenant administrator can view the terms and conditions in a list format.
The basic template provided by SingleID is as follows.
Terms Type Template
Privacy
Terms of Use
Collection and Use of Personal Information
Marketing
Conditions Type Template
Are you over age 14?
Cookie Type Template
Cookie
By clicking on the terms and conditions to be modified in the list, you can move to the detailed screen of the terms and conditions.
Name
Description
Type
The type is displayed in the form of an icon.
Name
The name is displayed.
Description
The description is displayed.
Type Setting
The type can be changed.
Name
The name can be modified.
Mandatory Setting
Sets whether agreement with the terms and conditions is mandatory.
Email Notification Setting
Sets whether an email notification is sent when the terms and conditions are changed.
Description
The description can be modified.
Registration Date and Registrar
The registration date and registrar are displayed.
Last Modified Date
The last modified date and last modifier are displayed.
List Button
A button to return to the list.
Modify Button
Modifies the terms and conditions.
Table. Terms and Conditions List
In the detailed screen of the terms and conditions, select the General Settings tab.
Click the Modify button.
You can modify the Title.
You can modify the Mandatory setting. The available options are as follows.
Mandatory: If the user does not agree when these terms and conditions are presented, use is restricted and the user cannot log in.
Optional: Agreement is left to the user’s choice; the user can still log in without agreeing to the terms and conditions.
Reference: The user’s agreement is not checked.
You can modify the description of the terms and conditions. The description is for the administrator’s reference and is not displayed to the user.
After modifying all settings, click the Save button.
If you want to return to the inquiry state without saving the modified information, click the Cancel button.
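The effect of the three Mandatory options on login can be modeled in a few lines. This is an illustrative sketch; the function and setting names are hypothetical and do not reflect SingleID's API.

```python
# Hypothetical model of how the Mandatory setting governs login,
# per the three options described above. Names are assumptions.

def can_log_in(setting: str, user_agreed: bool) -> bool:
    """Decide whether login may proceed for a published terms item."""
    if setting == "mandatory":
        return user_agreed      # no agreement -> login is blocked
    if setting == "optional":
        return True             # agreement is the user's choice
    if setting == "reference":
        return True             # agreement is not checked at all
    raise ValueError(f"unknown setting: {setting}")
```

Only the Mandatory option ever blocks login; Optional and Reference differ in whether agreement is collected, not in whether access is granted.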
Terms and Conditions Version Management
The tenant administrator can view and manage the version list of terms and conditions.
The default version of terms and conditions is v1.0.0, and it is registered by default for each template when the tenant is created.
To check the version list, click the Version History tab in the detailed screen of the terms and conditions.
Version History
You can check the version history by clicking the Version item at the top of the Personal Information Processing Policy or Terms of Use.
By clicking List, you can check the history of previously published versions. Once a version is published, it cannot be modified.
Version Addition
By clicking the Add button on the Version History tab, you can create a new version of the terms and conditions.
To add a version, follow the procedure below.
Click the Add button on the Version History tab.
Click the desired Locale to select the writing language.
The selected language represents the region where the terms and conditions will be displayed. The terms and conditions must be written for each language.
Enter the Title and Content for each language.
Click the Save button and click the List button to return to the list.
After you finish writing, review the content.
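A per-locale version as described in the procedure above might be modeled like this. The class and field names are assumptions for illustration only, not SingleID's data model.

```python
# Hypothetical sketch of a per-locale terms version: a new version
# carries a title and content for each selected locale, and every
# required locale must be written before the version is complete.

from dataclasses import dataclass, field

@dataclass
class TermsVersion:
    version: str                               # e.g. "v1.1.0"
    texts: dict = field(default_factory=dict)  # locale -> {"title", "content"}

    def add_locale(self, locale: str, title: str, content: str) -> None:
        self.texts[locale] = {"title": title, "content": content}

    def is_complete(self, required_locales: list) -> bool:
        """True only when every required locale has been written."""
        return all(loc in self.texts for loc in required_locales)

v = TermsVersion("v1.1.0")
v.add_locale("en", "Terms of Use", "...")
# v.is_complete(["en", "ko"]) stays False until the "ko" text is added
```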
Republishing
A newly written version is published by setting a scheduled republishing date.
To publish a new version, follow the procedure below.
Click the Republishing Scheduled Date button on the Version History tab.
Set the Version.
Set the Republishing Date.
Set the Republishing Modification. If activated, the modified terms and conditions will be republished, and the user may need to agree based on the General Settings > Mandatory setting.
Enter a brief reason for the modification.
Click the Publishing Settings button to complete the settings.
Note
Before the republishing scheduled date, the title and content of the terms and conditions can be modified. After republishing, modification is not possible for version management.
On the Version History tab, clicking the Delete button in the version history list cancels the republishing.
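The republishing rules above (a version's title and content are editable only before the scheduled date, and deleting the schedule cancels the republishing) can be sketched as follows. All names are hypothetical, not SingleID's API.

```python
# Illustrative sketch of the republishing lifecycle described above.
# Class and method names are assumptions invented for the example.

from datetime import datetime

class ScheduledRepublish:
    def __init__(self, version: str, republish_at: datetime):
        self.version = version
        self.republish_at = republish_at
        self.cancelled = False

    def can_edit(self, now: datetime) -> bool:
        """Title/content are editable only before republishing takes effect."""
        return not self.cancelled and now < self.republish_at

    def cancel(self) -> None:
        """Equivalent to the Delete button in the version history list."""
        self.cancelled = True
```

After the scheduled date passes, `can_edit` returns False, mirroring the rule that a republished version can no longer be modified for version-management reasons.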
9.5.2.1.2.6 - Open Source License
The open source licenses used in the SingleID solution are as follows. Please refer to the details below.
SingleID_MobileApp_Client-APK
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com.
JDOM License Copyright (C) 2000-2004 Jason Hunter & Brett McLaughlin. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the disclaimer that follows these conditions in the documentation and/or other materials provided with the distribution. 3. The name “JDOM” must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact {request_AT_jdom_DOT_org}. 4. Products derived from this software may not be called “JDOM”, nor may “JDOM” appear in their name, without prior written permission from the JDOM Project Management {request_AT_jdom_DOT_org}.
In addition, we request (but do not require) that you include in the end-user documentation provided with the redistribution and/or in the software itself an acknowledgment equivalent to the following: “This product includes software developed by the JDOM Project (http://www.jdom.org/)." Alternatively, the acknowledgment may be graphical using the logos available at http://www.jdom.org/images/logos.
THIS SOFTWARE IS PROVIDED “AS IS” AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE JDOM AUTHORS OR THE PROJECT CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Checker Qual : Copyright 2004-present by the Checker Framework developers
Mocha: Copyright (c) 2011-2022 OpenJS Foundation and contributors, https://openjsf.org
Xamarin.Android.Support.ViewPager , Android - platform - hardware - intel - common - libva: Copyright (c) .NET Foundation Contributors
android-gif-drawable : Copyright (c) 2013 - present Karol Wrótniak, Droids on Roids LLC
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
SingleID_MobileApp_Client-IOS
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com.
License
Open Source Component
License Text
Apache License 2.0
Open Computer Vision Library (OpenCV), KA ProgressLabel:
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: a. You must give any other recipients of the Work or Derivative Works a copy of this License; and b. You must cause any modified files to carry prominent notices stating that You changed the files; and c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and d. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets “[]” replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Computer, Inc. (“Apple”) in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software.
In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple’s copyrights in this original Apple software (the “Apple Software”), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Computer, Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated.
The Apple Software is provided by Apple on an “AS IS” basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Copyright (c) 1998-2013, Brian Gladman, Worcester, UK. All rights reserved. The redistribution and use of this software (with or without changes) is allowed without the payment of fees or royalties provided that: source code distributions include the above copyright notice, this list of conditions and the following disclaimer; binary distributions include the above copyright notice, this list of conditions and the following disclaimer in their documentation. This software is provided ‘as is’ with no explicit or implied warranties in respect of its operation, including, but not limited to, correctness and fitness for purpose.
TPPropertyAnimation: Copyright 2010 A TASTY PIXEL. All rights Reserved
sqlcipher: Copyright (c) 2008-2023, ZETETIC LLC All rights reserved.
ASM All: Copyright (c) 2000-2011 INRIA, France Telecom All rights reserved.
Protocol Buffers [BOM]: Copyright 2008 Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The OpenSSL toolkit stays under a dual license, i.e. both the conditions of the OpenSSL License and the original SSLeay license apply to the toolkit. See below for the actual license texts. Actually both licenses are BSD-style Open Source licenses. In case of any license issues related to OpenSSL please contact openssl-core@openssl.org.
OpenSSL License —————
Copyright (c) 1998-2008 The OpenSSL Project. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. All advertising materials mentioning features or use of this software must display the following acknowledgment: “This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. (http://www.openssl.org/)" 4. The names “OpenSSL Toolkit” and “OpenSSL Project” must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact openssl-core@openssl.org. 5. Products derived from this software may not be called “OpenSSL” nor may “OpenSSL” appear in their names without prior written permission of the OpenSSL Project. 6. Redistributions of any form whatsoever must retain the following acknowledgment:
“This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/)"
THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT “AS IS” AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product includes software written by Tim Hudson (tjh@cryptsoft.com).
Original SSLeay License
Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com) All rights reserved.
This package is an SSL implementation written by Eric Young (eay@cryptsoft.com). The implementation was written so as to conform with Netscapes SSL.
This library is free for commercial and non-commercial use as long as the following conditions are aheared to. The following conditions apply to all code found in this distribution, be it the RC4, RSA, lhash, DES, etc., code; not just the SSL code. The SSL documentation included with this distribution is covered by the same copyright terms except that the holder is Tim Hudson (tjh@cryptsoft.com). Copyright remains Eric Young’s, and as such any Copyright notices in the code are not to be removed. If this package is used in a product, Eric Young should be given attribution as the author of the parts of the library used. This can be in the form of a textual message at program startup or in documentation (online or textual) provided with the package.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. All advertising materials mentioning features or use of this software must display the following acknowledgement:
“This product includes cryptographic software written by Eric Young (eay@cryptsoft.com)” The word ‘cryptographic’ can be left out if the rouines from the library being used are not cryptographic related :-). 4. If you include any Windows specific code (or a derivative thereof) from the apps directory (application code) you must include an acknowledgement: “This product includes software written by Tim Hudson (tjh@cryptsoft.com)”
THIS SOFTWARE IS PROVIDED BY ERIC YOUNG “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The licence and distribution terms for any publically available version or derivative of this code cannot be changed. i.e. this code cannot simply be copied and put under another distribution licence [including the GNU Public Licence.]
This software is provided ‘as-is’, without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution.
SingleID_MobileApp_Flutter-UMA
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com
License
Open Source Component
License Text
Apache License 2.0
Android Support Library media compat, Converter: Gson, Adapter: RxJava 2, Android Support Library core utils, Android Arch-Runtime, Guava (Google Common Libraries), Android Support AnimatedVectorDrawable, Android Support Library core UI, Android Support Library Custom View - androidx.customview:customview, Android Lifecycle LiveData, OkHttp, Gson, android.support.annotation, Android Support Library Custom View - androidx.swiperefreshlayout:swiperefreshlayout, Android Support Library v4, OkHttp, Android Lifecycle ViewModel, Commons Lang, rxjava, Android Support Library compat, Roboto Fonts, Apache Commons Collections, Android Support Library v4, Android Lifecycle LiveData Core, RxAndroid, joda-time, okio, Apache Commons IO, JetBrains/java-annotations, Android AppCompat Library v7, Android Support Library Collections, Android Support VectorDrawable, Kotlin Stdlib, Android Lifecycle-Common, Android Support Library loader, Retrofit
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
a. You must give any other recipients of the Work or Derivative Works a copy of this License; and b. You must cause any modified files to carry prominent notices stating that You changed the files; and c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and d. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets “[]” replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN “AS-IS” BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an “owner”) of an original work of authorship and/or a database (each, a “Work”).
Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works (“Commons”) that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.
For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the “Affirmer”), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights (“Copyright and Related Rights”). Copyright and Related Rights include, but are not limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person’s image or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer’s Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work
i. in all territories worldwide,
ii. for the maximum duration provided by applicable law or treaty (including future time extensions),
iii. in any current or future medium and for any number of copies, and
iv. for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the “Waiver”).
Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer’s heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer’s express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer’s express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer’s Copyright and Related Rights in the Work
i. in all territories worldwide,
ii. for the maximum duration provided by applicable law or treaty (including future time extensions),
iii. in any current or future medium and for any number of copies, and
iv. for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the “License”).
The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not i. exercise any of his or her remaining Copyright and Related Rights in the Work or ii. assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer’s express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person’s Copyright and Related Rights in the Work. Further,
Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Xamarin.Android.Support.ViewPager: Copyright (c) .NET Foundation Contributors All rights reserved.
secure-random: Copyright (C) 2011 by Anton Vodonosov (avodonosov@yandex.ru). All rights reserved.
Xamarin.Android.Support.CursorAdapter: Copyright (c) .NET Foundation Contributors All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.
The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
DEFINITIONS
“Font Software” refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.
“Reserved Font Name” refers to any names specified as such after the copyright statement(s).
“Original Version” refers to the collection of Font Software components as distributed by the Copyright Holder(s).
“Modified Version” refers to any derivative made by adding to, deleting, or substituting — in part or in whole — any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.
“Author” refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:
1. Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.
2. Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.
3. No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.
4. The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.
5. The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
SingleID_SSO-Agent-Windows
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor’s Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
a. under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party’s modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or
c. under Patent Claims infringed by Covered Software in the absence of its Contributions.
This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients’ rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients’ rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party’s negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party’s ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible With Secondary Licenses”, as defined by the Mozilla Public License, v. 2.0.
SingleID_SSO-Agent-Windows
SingleID_ADFS-Adapter
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
a. You must give any other recipients of the Work or Derivative Works a copy of this License; and
b. You must cause any modified files to carry prominent notices stating that You changed the files; and
c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
d. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License.
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets “[]” replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
MICROSOFT SOFTWARE LICENSE TERMS
MICROSOFT .NET LIBRARY
These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its affiliates) and you. Please read them. They apply to the software named above, which includes the media on which you received it, if any. The terms also apply to any Microsoft
* updates,
* supplements,
* Internet-based services, and
* support services
for this software, unless other terms accompany those items. If so, those terms apply.
BY USING THE SOFTWARE, YOU ACCEPT THESE TERMS. IF YOU DO NOT ACCEPT THEM, DO NOT USE THE SOFTWARE.
IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE PERPETUAL RIGHTS BELOW.
1. INSTALLATION AND USE RIGHTS.
a. Installation and Use. You may install and use any number of copies of the software to design, develop and test your programs. You may modify, copy, distribute or deploy any .js files contained in the software as part of your programs.
b. Third Party Programs. The software may include third party programs that Microsoft, not the third party, licenses to you under this agreement. Notices, if any, for the third party program are included for your information only.
2. ADDITIONAL LICENSING REQUIREMENTS AND/OR USE RIGHTS.
a. DISTRIBUTABLE CODE. In addition to the .js files described above, the software is comprised of Distributable Code. “Distributable Code” is code that you are permitted to distribute in programs you develop if you comply with the terms below.
i. Right to Use and Distribute.
* You may copy and distribute the object code form of the software.
* Third Party Distribution. You may permit distributors of your programs to copy and distribute the Distributable Code as part of those programs.
ii. Distribution Requirements. For any Distributable Code you distribute, you must
* use the Distributable Code in your programs and not as a standalone distribution;
* require distributors and external end users to agree to terms that protect it at least as much as this agreement;
* display your valid copyright notice on your programs; and
* indemnify, defend, and hold harmless Microsoft from any claims, including attorneys’ fees, related to the distribution or use of your programs.
iii. Distribution Restrictions. You may not
* alter any copyright, trademark or patent notice in the Distributable Code;
* use Microsoft’s trademarks in your programs’ names or in a way that suggests your programs come from or are endorsed by Microsoft;
* include Distributable Code in malicious, deceptive or unlawful programs; or
* modify or distribute the source code of any Distributable Code so that any part of it becomes subject to an Excluded License. An Excluded License is one that requires, as a condition of use, modification or distribution, that
* the code be disclosed or distributed in source code form; or
* others have the right to modify it.
3. SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways. You may not
* work around any technical limitations in the software;
* reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits, despite this limitation;
* publish the software for others to copy;
* rent, lease or lend the software; or
* transfer the software or this agreement to any third party.
BACKUP COPY.
You may make one backup copy of the software. You may use it only to reinstall the software.
DOCUMENTATION.
Any person that has valid access to your computer or internal network may copy and use the documentation for your internal, reference purposes.
EXPORT RESTRICTIONS.
The software is subject to United States export laws and regulations. You must comply with all domestic and international export laws and regulations that apply to the software. These laws include restrictions on destinations, end users and end use. For additional information, see www.microsoft.com/exporting
SUPPORT SERVICES.
Because this software is “as is,” we may not provide support services for it.
ENTIRE AGREEMENT.
This agreement, and the terms for supplements, updates, Internet-based services and support services that you use, are the entire agreement for the software and support services.
APPLICABLE LAW.
a. United States. If you acquired the software in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the software in any other country, the laws of that country apply.
LEGAL EFFECT.
This agreement describes certain legal rights. You may have other rights under the laws of your country. You may also have rights with respect to the party from whom you acquired the software. This agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so.
DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED “AS-IS.” YOU BEAR THE RISK OF USING IT. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES OR CONDITIONS. YOU MAY HAVE ADDITIONAL CONSUMER RIGHTS OR STATUTORY GUARANTEES UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT EXCLUDES THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
FOR AUSTRALIA – YOU HAVE STATUTORY GUARANTEES UNDER THE AUSTRALIAN CONSUMER LAW AND NOTHING IN THESE TERMS IS INTENDED TO AFFECT THOSE RIGHTS.
LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
This limitation applies to
* anything related to the software, services, content (including code) on third party Internet sites, or third party programs; and claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages.
Please note: As this software is distributed in Quebec, Canada, some of the clauses in this agreement are provided below in French.
Remarque : Ce logiciel étant distribué au Québec, Canada, certaines des clauses dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le logiciel visé par une licence est offert « tel quel ». Toute utilisation de ce logiciel est à vos seuls risques et périls. Microsoft n’accorde aucune autre garantie expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les autres dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne :
* tout ce qui est relié au logiciel, aux services ou au contenu (y compris le code) figurant sur des sites Internet tiers ou dans des programmes tiers ; et
* les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité stricte, de négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel dommage. Si votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus ne s’appliquera pas à votre égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre pays si celles-ci ne le permettent pas.
Copyright (c) Microsoft Corporation. All rights reserved.
Microsoft.Bcl.AsyncInterfaces
Copyright (c) Microsoft Corporation. All rights reserved.
Microsoft.IdentityModel.Abstractions
Copyright (c) Microsoft Corporation. All rights reserved
Microsoft.IdentityModel.Logging
Copyright (c) Microsoft Corporation. All rights reserved
Microsoft.IdentityModel.Tokens
Copyright (c) Microsoft Corporation. All rights reserved
System.Buffers
Copyright (c) Microsoft Corporation. All rights reserved
System.DirectoryServices
Copyright (c) Microsoft Corporation. All rights reserved
System.IdentityModel.Tokens.Jwt
Copyright (c) Microsoft Corporation. All rights reserved
System.IO.FileSystem.AccessControl
Copyright (c) Microsoft Corporation. All rights reserved
System.Memory
Copyright (c) Microsoft Corporation. All rights reserved
System.Numerics.Vectors
Copyright (c) Microsoft Corporation. All rights reserved
System.Runtime.CompilerServices.Unsafe
Copyright (c) Microsoft Corporation. All rights reserved
System.Security.AccessControl
Copyright (c) Microsoft Corporation. All rights reserved
System.Security.Principal.Windows
Copyright (c) Microsoft Corporation. All rights reserved
System.Text.Encodings.Web
Copyright (c) Microsoft Corporation. All rights reserved
System.Text.Json
Copyright (c) Microsoft Corporation. All rights reserved
System.Threading.Tasks.Extensions
Copyright (c) Microsoft Corporation. All rights reserved
System.ValueTuple
Copyright (c) Microsoft Corporation. All rights reserved
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
SingleID_ADFS-Adapter
9.5.2.1.3 - MFA Portal
Overview
SingleID’s MFA service adds two-factor authentication for users through system integration while preserving the authentication system already used by applications.
SingleID also provides an MFA Portal where users can pre-register and manage their preferred authentication methods, making setup easy.
This MFA Portal manual describes how users can self-register for two-factor authentication.
For more information, please refer to the following items:
At the top of the User Portal screen, select the language menu and choose ‘Korean’ or ‘English’.
The portal is then displayed in the selected language.
Note
At initial login, the portal is displayed in the language set in the user’s browser. If that language is neither Korean nor English, English is used.
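The fallback behavior described in the note above can be sketched as follows. This is an illustrative sketch only, not SingleID’s actual implementation; the function name and the parsing of the browser’s Accept-Language header are assumptions.

```python
def portal_language(accept_language: str) -> str:
    """Pick the portal display language from the browser's Accept-Language
    header, falling back to English when the preferred language is neither
    Korean nor English (illustrative sketch, not the real implementation)."""
    supported = {"ko": "Korean", "en": "English"}
    for part in accept_language.split(","):
        # Strip any quality value ("ko;q=0.9") and region subtag ("ko-KR").
        tag = part.split(";")[0].strip().lower()
        primary = tag.split("-")[0]
        if primary in supported:
            return supported[primary]
    return "English"  # default when no supported language matches
```

For example, a browser sending `ko-KR,ko;q=0.9` would get Korean, while `fr-FR,fr;q=0.9` would fall back to English.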
SingleID Access Environment and Support
Recommended
Windows: Windows Desktop 10 and 11 (x86 and x64 CPU only) / Web browser: Microsoft Edge, latest public version
Android: 8 and later versions / Web browser: Samsung Internet, latest public version
iOS: 16, 17 / Web browser: Safari, latest public version
Support
Windows: Windows Desktop 10 and 11 (x86 and x64 CPU only) / Web browser: Microsoft Edge 88.x ↑, Chrome 87.x ↑
Android: 8 and later versions, on Samsung Galaxy mobile models released in 2018 and beyond (Galaxy S9 ↑) / Web browser: Samsung Internet 9.0 ↑
iOS: 16, 17, on Apple iPhone models released in 2018 and beyond (iPhone Xs ↑) / Web browser: Safari 14.1 ↑
Table. SingleID Access Environment and Support
9.5.2.1.3.1 - Login using authentication method
Log in using authentication method
What is an authentication method?
An authentication method, commonly called an authenticator, is a tool used to verify a user’s identity.
SingleID provides the following authentication methods for user authentication.
Password: Enter password on SingleID login screen
Email OTP: Send OTP via email and enter OTP on the SingleID login screen
SMS OTP: Send OTP via SMS and enter OTP on the SingleID login screen
Knox Messenger OTP: Send OTP via Knox Messenger and enter OTP on the SingleID login screen.
Knox Identity: Knox Portal authentication integration using user ID and password
Windows Hello: When activated on a PC, links the activated Windows Hello authentication with the authentication result
SingleID Authenticator Bio: Install the SingleID dedicated mobile app and link authentication using biometric authentication (fingerprint, facial)
SingleID Authenticator PIN: Install the SingleID dedicated mobile app and link authentication with a PIN
SingleID Authenticator mOTP: Install the SingleID dedicated mobile app and integrate authentication with mOTP (Mobile OTP)
SingleID Authenticator TOTP: Install the SingleID dedicated mobile app and integrate authentication with TOTP (Time-based OTP)
Passkey: Passwordless login and authentication based on Windows Hello, using biometrics (fingerprint, face), a mobile device, or a PIN code
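Several of the methods above (SingleID Authenticator TOTP and mOTP in particular) are time-based one-time passwords. As an illustration only (this is the standard RFC 6238 scheme, not necessarily SingleID's exact implementation), a TOTP code is derived like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T=59s, 8 digits
print(totp(b"12345678901234567890", at=59, digits=8))  # → 94287082
```

The authentication app and the server share the secret at enrollment time and each compute the code independently, so a code is valid only within the current time step.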
Note
If you are using the SingleID Authenticator mobile app for the first time, refer to SingleID Authenticator.
Set Preferred Authentication Method
Users log in to the User Portal provided by SingleID and set their preferred first- and second-factor authentication methods.
When a preferred method is set, the verification method selection screen is skipped during login, and the first and second factors are authenticated immediately with the chosen methods.
If you want to set your preferred authentication method, follow the steps below.
Click User Portal > Personal Profile > Authentication settings.
Click the star (☆) next to your preferred first-factor (1st) and second-factor (2nd) authentication methods.
Once set, the chosen methods are used automatically at the next login for a more convenient sign-in.
Notice
Even if a user sets a preferred authentication method for first and second factor authentication, the administrator can restrict it to a specific authentication method through login policy settings.
Register authentication tool
All authentication methods can be set up by the user. Registering an authentication method is called enrollment. When a user account is first created, only Email OTP is enrolled automatically, using the email address from the user's profile. Users can enroll other authentication methods themselves as needed.
There are two ways to enroll an authentication method.
Register from authentication settings: Go to User Portal > Profile > Authentication settings and click the + Add New button at the bottom.
Register on the identity verification method selection screen: During first- or second-factor authentication at login, select an authentication method marked with a gray check mark (V) on the Identity Verification Method Selection screen and register it.
When you log in for the first time, or periodically thereafter, SingleID asks for consent to the collection and use of personal information. Follow the consent procedure and agree to the required items and, if desired, the optional items; the required items must be accepted in order to log in.
Password Authentication
The password is SingleID's most basic authentication method.
Enter password
To log in with your ID and password, follow the steps below.
On the login screen, enter your ID in the Account ID field and click the Next button.
Enter your password in the Password field and click the Next button to log in.
Reference
If you click the eye-shaped icon in the password input field, you can check the password you entered.
Notice
If the password is entered incorrectly
If the password is not correct, a message indicates that it is wrong and you can re-enter it.
The number of retries allowed is set by the administrator in the password policy.
If the account is locked after consecutive incorrect entries
If the account is locked, it can be unlocked in one of two ways.
Automatic unlock after 1 to 5 minutes: If automatic unlock is enabled, the account is locked for 1 to 5 minutes, after which login is possible again.
Unlock with password reset: If the administrator's password policy requires a password reset, you must reset your password before logging in again.
For details, see Find ID.
Email OTP Authentication
Authenticate
When you authenticate with Email OTP, an OTP is sent to your registered email address.
To authenticate with Email OTP, follow the steps below.
On the identity verification method selection screen, click Email.
An OTP code is sent to the registered email. Enter the OTP within the time set by the administrator (usually 3 to 5 minutes).
Click the Confirm button to complete authentication.
Reference
Resend code: If the input validity time expires, click the resend button; the OTP code is resent via email.
Do you want to authenticate in a different way?: If the current method cannot be used, switch to a different authentication method.
If you have changed your email, please register.: Depending on administrator settings, you can enroll a different email address for verification.
You can check the details of enrollment at Email authentication tool enrollment.
Notice
If the code is entered incorrectly
If you enter the OTP code incorrectly, you can re-enter it as many times as the administrator allows.
If input is locked after exceeding the limit
If you enter the OTP code incorrectly more times than the administrator allows, input is locked for the period set by the administrator. Wait for that period, then refresh and try again.
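The retry-and-lockout behavior described in the notices above can be sketched as a simple failure counter. The attempt limit and lock duration below are placeholder values; in SingleID they are configured by the administrator:

```python
import time

class OtpLockout:
    """Illustrative retry counter: lock input after too many wrong OTP codes."""

    def __init__(self, max_attempts: int = 5, lock_seconds: int = 300):
        self.max_attempts = max_attempts   # retry limit set by the administrator
        self.lock_seconds = lock_seconds   # lock duration set by the administrator
        self.failures = 0
        self.locked_until = 0.0

    def is_locked(self, now=None) -> bool:
        return (time.time() if now is None else now) < self.locked_until

    def verify(self, entered: str, expected: str, now=None) -> bool:
        now = time.time() if now is None else now
        if self.is_locked(now):
            return False                   # input is rejected while locked
        if entered == expected:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.max_attempts:
            self.locked_until = now + self.lock_seconds  # lock for the set period
            self.failures = 0
        return False
```

For example, with max_attempts=3 the third wrong code locks input, and the correct code is accepted again only after lock_seconds have elapsed.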
SMS OTP authentication
Authenticate
When you authenticate with SMS OTP, an OTP is sent to your registered mobile phone.
To authenticate with SMS OTP, follow the steps below.
On the identity verification method selection screen, click SMS.
The OTP code is sent to the registered mobile phone. Enter the OTP within the time set by the administrator (usually 3 to 5 minutes).
Click the Confirm button to complete authentication.
Reference
Resend code: If the input validity time expires, click the resend button; the OTP code is resent to your mobile phone.
Would you like to authenticate in a different way?: If the current method cannot be used, switch to a different authentication method.
If you have changed your mobile phone, please register.: Clicking the link takes you to a screen to enroll the new mobile phone.
You can check the detailed information about enrollment at SMS authentication tool enrollment.
Notice
If the code is entered incorrectly
If you enter the OTP code incorrectly, you can re-enter it as many times as the administrator allows.
If input is locked after exceeding the limit
If you enter the OTP code incorrectly more times than the administrator allows, input is locked for the period set by the administrator. Wait for that period, then refresh and try again.
Knox Messenger OTP authentication
Authenticate
When you authenticate with Knox Messenger OTP, an OTP is sent to the Knox Messenger you are using.
To authenticate with Knox Messenger OTP, follow the steps below.
On the identity verification method selection screen, click Knox Messenger.
The OTP code is sent via the Knox Messenger you are using. Enter the OTP within the time set by the administrator (usually 3 to 5 minutes).
Click the Confirm button to complete authentication.
Reference
Resend code: If the input validity time expires, click the resend button; the OTP code is resent to your mobile phone.
Would you like to authenticate in a different way?: If the current method cannot be used, switch to a different authentication method.
Would you like to use a different Knox ID?: Clicking the link takes you to a screen to enroll a new Knox ID.
You can view the detailed information for enrollment at Knox Messenger Authentication Tool Enrollment.
Notice
If the code is entered incorrectly
If you enter the OTP code incorrectly, you can re-enter it as many times as the administrator allows.
If input is locked after exceeding the limit
If you enter the OTP code incorrectly more times than the administrator allows, input is locked for the period set by the administrator. Wait for that period, then refresh and try again.
Knox Identity Password Authentication
Authenticate
When you authenticate with Knox Identity, you enter the Knox Identity password you are using.
To authenticate with Knox Identity, follow the steps below.
On the identity verification method selection screen, click Knox Identity.
Enter the password of your Knox account.
Click the Confirm button to complete authentication.
Reference
Would you like to authenticate in a different way?: If the current method cannot be used, switch to a different authentication method.
Notice
If the password is entered incorrectly
If you enter the password incorrectly, you can re-enter it as many times as the administrator allows.
If input is locked after exceeding the limit
If you enter the password incorrectly more times than the administrator allows, input is locked for the period set by the administrator. Wait for that period, then refresh and try again.
SingleID Authenticator authentication
SingleID service provides a mobile authentication app called SingleID Authenticator, and offers authentication in various ways.
Authentication Methods
SingleID Authenticator Bio: Sends a push to the SingleID Authenticator mobile app installed on the mobile device and requests biometric authentication.
SingleID Authenticator PIN: Sends a push to the SingleID Authenticator mobile app installed on the mobile device and requests authentication with a PIN code.
SingleID Authenticator TOTP: Sends a push to the SingleID Authenticator mobile app installed on the mobile device and requests authentication with TOTP.
SingleID Authenticator mOTP: Sends a push to the SingleID Authenticator mobile app installed on the mobile device and requests authentication with mOTP.
For SingleID Authenticator installation and configuration, refer to SingleID Authenticator.
For details on enrolling the SingleID Authenticator authentication tool, see Register Authentication Tool.
Passkey authentication
The SingleID service provides simple authentication and multi-factor authentication through a Windows-based Passkey.
Authentication Method
Convenient authentication: Provides easy login without ID/Password through Sign in with Passkey at the bottom of the login page.
Multi-factor authentication: Provides easy login without needing ID/Password during secondary multi-factor authentication.
Authentication Types
Mobile Passkey: Scan the QR code and log in using an Android or iOS mobile device
Security key: Log in using a Windows security key
PIN: Log in on Windows using a PIN code
Reference
Passkey supported environment
1. Operating system (laptop or desktop)
Windows 11, macOS Ventura, or ChromeOS 109 or higher
Mobile phone: iOS 16 or Android 9 or higher
Hardware security key: a hardware security key that supports the FIDO2 protocol
2. Browser version
Chrome 109 or higher
Safari 16 or higher
Edge 109
3. Device settings
Bluetooth enabled
Screen lock password set
PIN code registered
Fingerprint or facial recognition allowed
Reference
Windows Hello must be set up in advance to use Passkey. For details, see the reference link.
Admin Authentication
Authenticate
In the SingleID service, a user who cannot verify their identity can delegate verification to an administrator, who authenticates on the user's behalf.
If you want to perform administrator authentication, follow the steps below.
On the identity verification method selection screen, click the 'If you cannot perform identity verification, you can request verification from the administrator. Click here.' link at the bottom of the screen.
On the administrator selection screen, select the administrator to delegate to and click the Request button.
Once the selected administrator approves the request, authentication is completed.
Notice
If the 'If you cannot perform identity verification, you can request verification from the administrator. Click here.' text does not appear at the bottom
The administrator has disabled the authentication delegation feature by policy. Contact your administrator.
9.5.2.1.3.2 - Register authentication tool
Register authentication tool (Enrollment)
As a rule, all authentication tools are registered and used by the user themselves. The act of a user registering an authentication tool is called enrollment.
When a user is first created, only Email OTP is registered automatically, using the email address from the user's profile. The remaining tools can be registered by the user as needed.
There are three ways to register.
Register on the authentication method selection screen: On the login screen, after entering your ID/password, click an authentication tool marked 'Registration required' (gray check mark) on the authentication method selection screen.
Register in the User Portal (after login): Go to Profile > Authentication settings and click the + Add New button.
Register via the registration message link at the bottom of any authentication screen: The screen below shows an SMS verification example; click the 'If you have changed your mobile phone, please register.' message at the bottom. A similar message (ending in '~ please register.') appears below every authentication code input.
Figure. Authentication screen example
Register email verification tool
Email registration consists of the following three steps.
Verification Stage: This is the identity verification stage before registering the email authentication tool.
Registration Stage: This is the step where you register a new email address and check that it is valid.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
This is the step of verifying your identity before using the authentication tool. To view the identity verification process, refer to Login.
Caution
In the verification stage, the authentication method to be used can only be authenticated with the authentication tool configured by the administrator.
Registration Stage
This is the step of registering the email address you want to use and checking that it is valid.
Proceed as follows.
After completing identity verification in the verification stage, you move automatically to the registration stage.
Please enter the email address you want to register.
Click the Send verification code button.
Check the OTP code sent to the entered email address and enter the OTP code on the screen.
If the authentication code is entered correctly, it moves to the completion stage.
Completion Stage
The registration complete screen appears, and on the next login you can perform first- and second-factor authentication using the email authentication tool.
Register SMS authentication tool
SMS registration consists of the following three steps.
Verification step: This is the identity verification step before registering the SMS authentication tool.
Registration Stage: This is the stage where you register a new mobile phone number and check if the number is valid.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
It is the step of identity verification before using the authentication tool. To view the identity verification process, refer to Login.
In the verification stage, the authentication method to be used can only be authenticated using the authentication tool set by the administrator.
Registration Stage
This is the step of registering the mobile phone number you want to use and checking that it is valid.
Proceed as follows.
After completing identity verification in the verification stage, you move automatically to the registration stage.
Select the country code and enter the mobile phone number you want to register.
Click the Send verification code button.
Check the OTP code sent to the entered mobile phone number, and enter the OTP code on the screen.
If the authentication code is entered correctly, it moves to the completion stage.
Completion Stage
The registration complete screen appears, and on the next login you can perform first- and second-factor authentication using the SMS authentication tool.
Register Knox Messenger authentication tool
Knox Messenger registration consists of the following three steps.
Verification Stage: This is the identity verification stage before registering the Knox Messenger authentication tool.
Registration step: Enter the Knox ID to register. This is the step that checks whether the Knox ID to be registered is valid.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
It is the step of verifying your identity before using the authentication tool. To view the identity verification process, refer to Login.
In the verification stage, the authentication method to be used can only be authenticated using the authentication tool configured by the administrator.
Registration Stage
This is the step of registering the Knox ID you want to use and checking that it is valid.
Proceed as follows.
After completing identity verification in the verification stage, you move automatically to the registration stage.
Enter the Knox ID to register.
Click the Send verification code button.
Check the OTP code sent to the Knox Messenger of the entered Knox ID, and enter the OTP code on the screen.
If the authentication code is entered correctly, it moves to the completion stage.
Completion Stage
The registration complete screen appears, and on the next login you can perform first- and second-factor authentication using the Knox Messenger authentication tool.
Register Passkey authentication tool
Passkey enrollment consists of the following three steps.
Verification stage: It is the identity verification stage before registering the Passkey authentication tool.
Registration Stage: This is the Passkey registration stage.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
This step verifies your identity before registering the authentication tool. To view the identity verification process, refer to Login and Authenticate.
Notice
In the verification stage, the authentication method to be used can only be authenticated using the authentication tool configured by the administrator.
Registration Stage
This is the step to verify the mobile phone you want to register the Passkey on or the PC environment you are accessing.
Complete the registration process in the four steps below.
Activation: Introduces the Passkey supported environment.
Confirm: Complete identity verification using an authentication method.
Registration: Register the Passkey. Click the Create on this device button to register the passkey on the PC, or the Create on another device button to register with a mobile phone or hardware security key.
Complete: Confirm that registration is complete and click the Continue button.
Reference
Passkey support environment
1. Operating system (laptop or desktop)
Windows 11, macOS Ventura, or ChromeOS 109 or higher
Mobile phone: iOS 16 or Android 9 or higher
Hardware security key: a hardware security key that supports the FIDO2 protocol
2. Browser version
Chrome 109 or higher
Safari 16 or higher
Edge 109
3. Device settings
Bluetooth enabled
Screen lock password set
PIN code registered
Fingerprint or facial recognition allowed
Completion Stage
After the Passkey registration is completed, the registration complete screen appears. At the next login, you can perform first and second factor authentication using the Windows Hello authentication tool.
Reference
To use a Passkey on a PC, Windows Hello must be set up in advance. For details, see the reference link.
A passkey can be registered on mobile only in an environment where QR code scanning is possible.
SingleID Authenticator is an authentication tool provided to the SingleID service.
SingleID Authenticator enrollment consists of the following four steps.
Verification Stage: This is the identity verification stage before registering the SingleID Authenticator authentication tool.
Installation Stage: This step guides the user through installing the SingleID Authenticator mobile app.
Registration Stage: This is the stage to register a new mobile app and for service registration.
Completion Stage: This is the final step to confirm that the registration has been completed successfully.
Verification Stage
Before using the authentication tool, this is the step to verify your identity. To view the identity verification process, refer to Login.
Notice
In the verification stage, the authentication method to be used can only be authenticated using the authentication tool configured by the administrator.
Installation Steps
There are three main ways to install the SingleID Authenticator mobile app.
Scan a QR code with your mobile device, or search for 'SingleID' on Google Play (Android) or the App Store (iOS), and install 'SingleID Authenticator'.
Enter your mobile phone number and install via the download link sent by SMS.
Install via a manual download link.
After installing the SingleID Authenticator app, click the Next button to move to the registration stage.
Registration Stage
After installing the SingleID Authenticator mobile app on the mobile phone you want to register, run the SingleID Authenticator.
Please complete the registration process in the three steps below.
Service Registration: In the SingleID Authenticator app, click the ‘+’ at the top.
QR or authentication number input: Scan QR code or enter authentication code to register.
Service Registration Complete: Click the Confirm button to complete registration.
Completion Stage
After SingleID Authenticator registration is completed, the Registration Complete screen appears. On the next login, you can perform first- and second-factor authentication using the SingleID Authenticator authentication tool.
9.5.2.1.3.3 - Set Up Personal Information
Set Up Personal Information
This menu is for the user’s environment settings.
To set up your personal information, follow these steps:
Click on Personal Profile > Personal Information settings in the top right corner of the screen.
You can view your photo, name, email, phone number, language, and time zone.
Photo: Click on Photo > Change Photo to upload the icon image you want to display.
Language: Select your desired language, either Korean or English.
Time Zone: Select the time zone where you are currently located. Click the City Search button to open the city search popup window, search for your city in English, and select it.
Click the Save button at the bottom of the screen to save your changes.
Note
You can click the Withdrawal button at the bottom left of the personal information screen to withdraw from your current user account.
Please note that withdrawing will delete your account, so only do so if you intend to delete it.
Set Up Authentication
You can register your authentication tools and set your preferred authentication tool.
To set up authentication, follow these steps:
Click on Personal Profile > Authentication settings in the top right corner of the screen.
Click the + Add New button to add your desired authentication tool.
Click the Delete button to delete the authentication tool you no longer want to use.
Click the Star (☆) icon to set your preferred authentication method.
In the authentication settings, you can change your password by going through the self-verification authentication process.
Check Login History
You can check your login history/environment.
To view your login history/environment, follow these steps:
Click on Personal Profile > Login History/Environment in the top right corner of the screen.
In the Login History tab, you can view information such as login time, location, country, city, IP address, OS type, browser type, detection, and results.
In the Login Environment tab, you can view detailed information if you have registered login environments, and delete them if you no longer use them.
If you are using the SingleID ADM (Anomaly Detection Management) feature, the detection items are displayed as Normal or Detected; Detected items are login histories where authentication anomalies were found.
Log Out
Click on the photo icon in the top right corner of the screen and click Log Out.
The Log Out button will log you out of all applications you visited through SingleID, and if PC SSO Agent is set up for integrated logout, it will also log you out of associated browsers.
9.5.2.1.4 - CAM Portal
Overview
CAM (Cloud Access Management) is a service for managing cloud console and resource access, providing users with easy and convenient access to cloud consoles and resources.
Users can access the portal from a PC located on the company network through multi-factor authentication. Instead of using a password, a one-time token is issued to access the cloud consoles and resources, and all console access history, activity history, and approval history can be monitored.
Fig. CAM Concept
Service Scenario
In the past, users accessed the console and resources directly with their IAM personal accounts, but now CAM provides a unified access channel.
Step 1: During the transition period, the TO-BE access channel is newly configured and operated in parallel with the AS-IS access channel.
Step 2: After cut-over, the AS-IS access channel is blocked and access is switched to the TO-BE channel.
Fig. Service Scenario
Key Features
User Scenario
The user scenario proceeds in the following order:
Sign-In → Basic Information Setting → Console Access Control → Resource Access Control → Monitoring
Fig. User Scenario
Login & Home
Users log in with their SingleID or SSO account (e.g., Knox Portal) and proceed with multi-factor authentication. After entering the authentication code received via SMS or email, the login process is completed and access to CAM is granted.
Fig. SingleID Login
The home screen provides a personalized screen that allows users to access cloud consoles and resources with one click, making it easy for users to access consoles and resources.
Fig. Home
Configuration
After creating a project, users can easily register their CSP (Cloud Service Provider) account. Additionally, users can be added to the project to provide project-specific permissions.
Console Access
Roles and policies can be created to set and control access permissions to the cloud console. Roles can be mapped to specific accounts and users, defining which users can access the CSP console and their permission levels.
Resource Access
Cloud resource access permissions are managed. To manage cloud resource access, users first request permissions, download and install the PC client agent, and register their access IP address. Once set up, users can access their desired resources from their personalized resource list.
9.5.2.1.4.1 - Getting Started
This manual aims to help users quickly understand the essential features and processes required to effectively use CAM.
Network Environment
Access is only possible in a network environment allowed by each tenant.
CAM Portal, Console Access: Access is possible from a network environment allowed by each tenant.
DEV, STG, ETC Resource Access: Access is possible from a network environment allowed by each tenant.
PRD Resource Access: Access is possible only from a network environment with internet access blocked, i.e., from a specific IP range for each tenant.
Additional individual PC environment settings are required.
Pre-work
To use the CAM portal, some pre-work is necessary.
If you are a PM (Project Manager) or PL (Project Leader) group user, please check the cloud account and resource preparation below and prepare the environment in advance.
Cloud Account Preparation
To register and manage accounts in CAM, you need to create a role in the CSPs (AWS, Azure, SCP) and configure it with the policies required by CAM, and then assume the role in CAM.
To register and access resources in CAM, some setup work is required during resource configuration.
First, you must allow password-based connections. This configuration is necessary to access resources through CAM because a one-time password is issued for SSH connections when accessing resources in CAM.
Additionally, if the resource type is Compute, the following configuration must be added.
Add the following content to a file named /etc/sudoers.
Ubuntu: %sudo ALL=(ALL) NOPASSWD:ALL
Amazon Linux: %wheel ALL=(ALL) NOPASSWD: ALL
Restart the SSH daemon with systemctl restart sshd.service.
Network Settings
To access resources through CAM, you need to configure the firewall and security group registration in the tenant’s network environment so that CAM can access the resources. Please check the necessary information with the tenant administrator and proceed with the network settings.
Service Scope
CAM currently supports AWS, Azure, SCP CSPs and plans to expand to more CSPs sequentially.
After logging in, you can access your CSP console and personally assigned resources with one click from your personalized homepage. Operators and developers can access approved consoles and resources quickly and easily from one place, streamlining their work.
The CAM (Cloud Access Management) home screen is divided into two sections:
Top Resources
My CSP Consoles
Both sections provide access to assigned resources and CSP consoles.
Top Resources
This section displays a list of the top 30 accessible resources.
Card View and List View
Resources are provided in card view by default, and you can switch to list view according to your preference.
Search and Filter
You can use the search function to quickly find a specific resource, and filter resources based on the following items:
Project
CSP (Cloud Service Provider)
Environment (e.g., DEV, STG, PRD, ETC)
Resource Type (e.g., Compute, DB)
Favorites
You can set favorites using the Favorites (★) icon, and set a favorites filter to filter only resources that have been set as favorites.
Sorting
The resource list can be sorted by two criteria:
Recent (default sorting)
Creation Date
Resource Information
You can check the detailed information of resources in both card and list views.
Resource Name
Project
CSP (e.g., AWS, Azure, SCP)
Environment (e.g., DEV, STG, PRD, ETC)
Resource Type (e.g., Compute, DB)
Resource Connection/Disconnection
Each resource has the Connect button to connect or disconnect.
If you are already connected to a resource, the following details are displayed:
Last Connection Date/Time
Connection Status
Resource Connection
When you click the Connect button on a card or list, a connection popup opens.
To connect to a resource, enter the following details:
Local Port: Enter a port number between 1024 and 65535 that is not currently in use on your PC.
Remote Port: Enter the port number of the resource.
Launch PuTTY: Select ON to launch PuTTY automatically during the connection process.
Note
Before attempting to connect, ensure that the client agent is installed and the IP address is registered.
Refer to Resource Access > PC Settings for settings.
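The Local Port rule above (an unused port between 1024 and 65535) can be checked before opening the connection popup. The following is an illustrative sketch of that check, not part of CAM itself:

```python
import socket

def is_usable_local_port(port: int) -> bool:
    """Return True if the port is in the allowed range (1024-65535)
    and not currently in use on this PC (checked by trying to bind)."""
    if not 1024 <= port <= 65535:
        return False
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            # Another process already holds this port.
            return False
```

A port that fails this check (out of range, or already bound) cannot be entered as the Local Port.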
Connection Details
When connected to a resource, you can click the dropdown to view detailed connection information, such as user ID, password, and local IP. This information is provided through a popup as details for the user to connect to the resource via SSH.
User ID: Click the Copy icon on the right to copy the user ID.
Password: Click the Copy icon on the right to copy the password.
Local IP: Click the Copy icon on the right to copy the local IP address.
Client Server IP: Refer to the connected client server IP displayed on the screen.
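Putting the popup fields together: the SSH client targets the local tunnel endpoint rather than the cloud server directly. A sketch with placeholder values (the user ID, IP, and port below are samples, not real CAM output):

```python
# Placeholder values standing in for the popup's Copy buttons.
user_id = "cam-user01"     # sample User ID
local_ip = "127.0.0.1"     # sample Local IP
local_port = 10022         # the Local Port chosen when connecting

# CAM forwards local_port to the resource's remote SSH port, so a
# standard SSH client connects through the local endpoint.
ssh_command = f"ssh -p {local_port} {user_id}@{local_ip}"
print(ssh_command)
```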
Resource Disconnection
When connected to a resource, the Disconnect button appears. Click this button to start the disconnection process. A popup will be displayed for final confirmation before the connection is terminated.
My CSP Consoles
The page provides a CSP access link in a sticky footer at the bottom. It offers CSP console access via SAML SSO, allowing you to access it directly without a separate authentication process.
9.5.2.1.4.3 - Console Access
The console access feature allows PM and PL group users to manage access to the CSP console by assigning roles and policies to cloud accounts and users. Here, users can access the console with the appropriate permissions based on the settings.
The console access section consists of four main management areas.
Role Management: Defines and manages the level at which users can access the CSP console.
Policy Management: Defines new policies and manages the roles mapped to each policy.
Account Management: Manages cloud accounts and ensures each account is mapped to the correct role permissions.
User Management: Controls user console access by mapping users to the correct roles, giving them the necessary permissions to access the cloud console.
Role Management
In the role management menu, you can view and manage all roles registered to a project, and filter roles by CSP or project.
Create Role
To create a role, click the Create Role button.
To create a new role, you must fill in the required information in the popup window:
Project: Select a project from the user’s project list.
CSP: Select a CSP.
Role Name: Enter a unique role name and click the Validate button to confirm it is available.
Description: Add a brief description of the role.
View Role
To access detailed information about a role, go to the role management menu and click on the desired role. All project users can view role details, including policies, cloud accounts, and users mapped to the role.
The role view screen displays the following key details:
Role Information: Basic details related to the role.
Delete Role: Click the Delete button to remove this role.
Policies: Displays a list of policies currently mapped to the role.
Accounts: Displays a list of accounts related to the role.
Users: Displays a list of users connected to the role.
Note
To set up policy, account, and user mappings, you must first create policies in the policy management menu and ensure that cloud accounts and users are pre-registered to the project.
Note
Processing on the CSP side starts only after the user-addition approval is completed, so it may take up to 10 minutes for the status to change to Approved and appear in the user's CSP role list.
A maximum of 10 policies can be mapped to an AWS role.
Each account has a CSP-specific role limit: up to 800 roles for AWS and up to 5,000 for Azure.
Each user has a CSP-specific role limit: up to 10 AWS roles and up to 4,000 Azure roles can be mapped.
Delete Role
To delete a role and remove its mappings, select the role from the list and click the Delete button, or click the Delete button on the View Role page. Confirm the action to delete the role permanently. Deleting the role removes its relationships with the mapped policies.
Policy Management
PM and PL group users can add or delete policies mapped to a role by selecting or deselecting policies from the policy list.
Create Policy
To create a new policy, click Create Policy and fill in the required information:
Project: Select a project from your list of registered projects.
CSP: Choose the cloud service provider.
Policy Name: Enter a name for the policy and validate it.
JSON Code: Provide the JSON code that defines the policy.
Description: Include a brief description of the policy.
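For illustration, the JSON Code field for an AWS policy takes a standard IAM policy document. The statement below is a generic example, not a policy mandated by CAM, and the listed action is a sample:

```python
import json

# Generic AWS IAM-style policy document (example only; the required
# actions depend on what the role should be allowed to do).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances"],
            "Resource": "*",
        }
    ],
}

# The Create Policy form expects this as JSON text.
json_code = json.dumps(policy, indent=2)
```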
To map a policy to a role, click the Add button above the policy list to open a popup. In the popup, you can view and select policies defined within the same project. Click the Save button to complete the mapping process. You can map multiple policies at once.
Before mapping, make sure the desired policy has been created in the policy management menu.
View Policy
To access detailed information about a policy, navigate to the Policy Management section and click on the desired policy. All project users can view policy details, including the roles mapped to the policy.
Delete Policy
To remove a policy mapping from a role, select the policy from the list and click the Delete button. The deleted policy will reappear in the Add Policy popup list, allowing you to add it back if needed. Removing a policy mapping eliminates the relationship between the role and the related policy.
Account Management
PM and PL group users can map cloud accounts to a role or remove them.
View Account
To view account details:
Navigate to Account Management and click on the desired account.
All project users can access the account’s details, including a list of roles mapped to that account.
Project managers or PL group users can also edit or delete roles associated with the account.
Add Role to Account
To map roles to an account, click the “Add” button above the roles list to open the “Add Roles” pop-up.
In the pop-up, select roles from the list that belong to the same project as the account, and click the Save button to complete the mapping process.
Note
A maximum of 800 roles can be mapped to an AWS account, and 5,000 roles to an Azure account.
Delete Role from Account
To remove a role from an account, select the role from the list and click the Delete button. The deleted role will reappear in the Add Role popup, allowing you to add it back if needed. You can delete multiple roles at once.
User Management
Through the user management menu, users can view and manage all users registered to a project. Users can be searched by name.
View User
To view user details:
Go to the user management menu and click on the user.
All project users can view user details, including roles mapped to the user.
PM or PL group users can add or delete roles from the user.
Add Role to User
To map a role to a user, click the Add button above the role list to open the Add Role popup. In the popup, you can view all roles in the user’s project, select the role to add, and click the Create Approval button to proceed with the approval process.
Note
Each user has a CSP-specific role limit: up to 10 AWS roles and up to 4,000 Azure roles can be mapped.
Create Approval
Assigning a role to a user requires an approval process, which is done through the Create Approval popup and sent via Knox approval system or CAM’s own approval system.
Title: Automatically input by the system and cannot be modified.
Approver: Automatically added by the system, with the option to add approvers and consenters following the approval guide.
Content: Project and role information is automatically input by the system and cannot be modified.
Delete Role from User
To remove a role from a user, click the Delete button. After a final deletion confirmation, the user’s role mapping will be removed. The removed role will reappear in the Add Role popup, allowing you to add it back if needed. Role removal does not require approval, but re-adding a role does.
9.5.2.1.4.4 - Resource Access
You can check all resources with individual permissions and access them. To access resources, a PM or PL group user must register the resources of the cloud account registered in the project and go through the user’s permission request and approval process.
Resources
It shows all resources that have been approved for the user. The user can check the resource list and access the resources directly.
Access
You can access resources by clicking the Connect button, and after connection, it provides connection details.
Local Port: Enter a port number that is not used for other purposes on your PC.
Remote Port: Enter the port number of the resource.
PuTTY Execution: Set to ON to launch PuTTY automatically.
Note
Before accessing resources, please make sure to install the client agent and register the IP address.
For more information, refer to Resource Access > PC Settings.
Connection Information
It provides detailed connection information to access resources through SSH.
User ID: You can copy and use the user ID by clicking the copy icon.
Password: You can copy and use the password by clicking the copy icon.
Local IP: You can copy and use the local IP by clicking the copy icon.
Cloud Server IP: You can copy and use the cloud server IP by clicking the copy icon.
Disconnection
When connected to a resource, the Connect button changes to Disconnect. To disconnect from the resource, click the Disconnect button.
Resource Registration
This menu allows you to register resource information necessary for resource access and shows a list of registered resources.
Registration
To register a resource, cloud account registration must be completed in advance in the project menu. PM and PL group users can register resources created in the cloud account. Click the Enroll button to move to the resource registration screen and set the resource connection information.
Project: Select a project registered as a PM or PL group user.
Account: Select a cloud account registered in the selected project.
Region: Select the region information of the selected account.
Resource Type: Select one of Compute or DB.
Resource: Select a resource that matches the selected criteria.
Connection Type: Select one of Direct (connect directly to the server) or Bastion (connect through a proxy server).
Address: Enter the address information of the resource.
Root User: Provide the ID and password of the root user of the resource.
Note
Before registering a resource, please make sure that cloud account registration and resource creation are completed.
Cloud account registration can be done in Configuration > Project.
Guide
Supported OS/DB
Currently, the OSes and DBs that can be registered through Resource Registration are limited to the following; supported OSes and DBs will be added continuously.
OS
Version
Ubuntu
Ubuntu Server 24.04 LTS
Ubuntu
Ubuntu Server 22.04 LTS
Amazon Linux
Amazon Linux 2023 AMI
Red Hat
Red Hat Enterprise Linux 9.4
Table. Supported OS
DB Engine
Version
PostgreSQL
16.x
MySQL
8.0.x
Aurora PostgreSQL
15.x
Aurora MySQL
3.05.x
Aurora MySQL
3.04.x
Aurora MySQL
3.03.x
MariaDB
10.11.10x
Table. Supported DB
Network Settings
To register resources in CAM and access resources through CAM, network settings must be done in advance.
Please follow the guide from the tenant administrator and proceed with network settings suitable for each tenant environment, such as firewall registration and security group registration, before registering resources.
Withdrawal
Resources that are no longer used must be deleted from the registered resource list. Select the resource from the resource view or resource registration list and click the “Withdraw” button to prevent further access.
Request Permission
The permission request menu allows you to inquire about the resource permissions of project members and request user-specific CSP resource type permissions.
Request
Users can request permissions for each CSP resource type by selecting the period and permission level. All permissions require approval; however, when 'Emergency' is selected, the permission is granted at the same time as the approval request, and a related email is sent to the approver.
Resource Information
Project: Select a project that the user belongs to.
Account: Select an account registered in the selected project.
Resource Type: Select one of Compute or DB.
Permission
Period: Select a period (e.g., 4h, 8h, 24h, 10d, 30d, 12m).
Emergency: If checked, the permission is granted simultaneously with the approval request, and a related email is sent to the approver.
Permission Level: Select one of USER, ADMIN, or DBA.
Comment: Add a comment for approval.
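The selectable periods can be read as durations. The mapping below is an assumption (h = hours, d = days, and 12m taken to mean 12 months, approximated here as 365 days); CAM computes the actual expiry:

```python
from datetime import timedelta

# Assumed reading of the period codes; the real expiry is set by CAM.
PERIODS = {
    "4h": timedelta(hours=4),
    "8h": timedelta(hours=8),
    "24h": timedelta(hours=24),
    "10d": timedelta(days=10),
    "30d": timedelta(days=30),
    "12m": timedelta(days=365),  # 12 months, roughly
}

def expiry(requested_at, period_code):
    """Return the assumed expiry time for a permission request."""
    return requested_at + PERIODS[period_code]
```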
Create Approval
To obtain resource access permissions, an approval process is required. Complete the Create Approval popup and submit it; the request is then processed through the Knox Approval system.
Title: Automatically entered by the system and cannot be modified.
Approver: Add approvers and consenters according to the guide.
Content: Project and permission information are automatically entered by the system and cannot be modified.
Withdrawal
To remove permissions that are no longer needed, select the corresponding permission from the permission request list and click the Withdraw button.
PC Settings
To access cloud resources, you must install the client agent and register the IP address of the access environment.
Client Agent Download
Click Download Client Agent to start the download and install the client agent.
User Guide
Downloading and installing the client agent is required for resource access. If the installation is incomplete or the installed version is unsupported, you cannot connect to resources even when other preparations, such as permission and IP registration, are complete.
Installation Guide
To start the installation process, click the Download Client Agent button to download the installation file. After the download is complete, refer to the following information to proceed with the installation.
Download Location: Specify a folder in the local drive.
Execution: Right-click the downloaded file and select Run as Administrator.
IP Registration
Cloud resource access is only possible for registered IPs, and up to 5 IPs can be registered.
Refer to the following information to register an IP.
To add a new IP, click the Add button.
To remove an existing IP, select the corresponding IP from the list and click the Delete button.
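The registration rules above (only registered IPs may connect, at most five entries) can be sketched as a simple check. This is an illustration of the stated rules, not CAM's actual validation code:

```python
import ipaddress

MAX_REGISTERED_IPS = 5  # CAM allows at most five registered IPs

def can_register_ip(registered: list, candidate: str) -> bool:
    """Reject malformed addresses, duplicates, and additions that
    would exceed the five-entry limit."""
    try:
        ipaddress.ip_address(candidate)
    except ValueError:
        return False
    return len(registered) < MAX_REGISTERED_IPS and candidate not in registered
```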
9.5.2.1.4.5 - Monitoring
CAM’s monitoring menu provides essential features for tracking console access history, user activities, and approval history. This feature ensures transparency, security, and compliance by providing insights through detailed information.
Access Log
The Access Log section provides a record of user activities within the CAM console, allowing administrators to track and review access-related actions across projects and cloud environments. It helps ensure security compliance and offers visibility into how and when users interact with cloud resources through the CAM interface.
Console Access Log
The Console Access Log records all events related to console access performed through CAM. This log enables tenant administrators to monitor console connection attempts, view event results, and identify access patterns for AWS, Azure, and SCP accounts.
The Console Access Log page is available under Monitoring > Access Log > Console Access Log.
Console Access Log Features
Log Scope
Tenant (Company) Administrator: Can view logs for console access within the tenant.
User: Can only view logs for their own projects.
Log Details
The Console Access Log captures event data for all console-related activities initiated through CAM.
Logs are available for all configured tenants and cover access events across AWS, Azure, and SCP.
Log details display information such as event type, date/time, project, account ID, etc.
Click the Expand icon to view detailed information about all actions. This detailed view provides a deeper understanding of each access attempt. The detailed event information includes:
Event ID
Event Source
Event Result
Request Type
User Agent
Region
Source IP Address
User Information
Use filters such as project, CSP, environment, etc. to narrow down the results.
Select a period to filter logs. The default period is 30 days.
Logs are sorted in reverse chronological order, with the most recent actions at the top.
Each log entry serves as an audit trail to trace console access patterns and user activity across CAM.
Download all log data for the selected period as an Excel file for offline analysis or record-keeping.
Audit Log
Guide
Navigate to Monitoring > Audit Log from the menu.
Select the desired log type: audit log or approval log.
Use search and filter options to find logs based on criteria such as user, resource type, or period.
Check the details, including the timestamp of access, user information, and resource details.
The Audit Log section of the monitoring module provides a comprehensive history of user and system actions performed within the CAM portal, divided into two detailed items.
Audit Log
Approval Log
Audit Log
The Audit Log section displays the history of operations related to the creation, update, and deletion of data within the CAM portal.
Audit Log Features
Log Scope
Tenant (Company) Administrator: Can view logs for all projects within the tenant.
User: Can only view logs for their own projects.
Log Details
Log details display information such as event type, date/time, user, IP, etc.
Click the Expand icon to view detailed information about all actions.
Use filters such as project, event type, user, etc. to narrow down the results.
Select a period to filter logs. The default period is 30 days.
Logs are sorted in reverse chronological order, with the most recent actions at the top.
Download all log data for the selected period as an Excel file for offline analysis or record-keeping.
Approval Log
The Approval Log section provides a history of all approval requests and approval statuses within the CAM.
Approval Log Features
Log Scope
Tenant (Company) Administrator: Can view approval logs for all projects within the tenant.
User: Can view approval logs for their own projects.
Log Details
Log details display approval type, approval status, details, and approval history.
Check the approval status, such as pending, approved, or rejected.
Open a specific approval item to view its details.
Use filters such as project, approval type, approval status, user, etc. to narrow down the results.
Select a period to filter logs. The default period is 30 days.
Logs are sorted in reverse chronological order, with the most recent approvals at the top.
Download all log data for the selected period as an Excel file for offline analysis or record-keeping.
9.5.2.1.4.6 - Configuration
The configuration feature enables PM and PL group users to manage essential project settings and account configurations, and enables tenant administrators to set up approval lines and organizational charts.
Project
The project menu allows users to view all projects they belong to. Project details are initially registered by the PM, and can be modified by the PM or PL group users as needed.
Create Project
To create a project, click the Create Project button and enter the project information.
Project Name: Give a name to the project.
PM: Designate a project manager who can manage project-related information and permissions. Note that if you designate someone other than yourself, you will no longer be able to manage the project after creation.
Organization: Select the organization that will carry out the project.
Description: Enter a description of the project.
View Project
In the View Project screen, PM or PL group users can manage project information and add CSP accounts and users to the project.
General Information: Displays the project information registered in the Create Project screen.
Edit: Click the Edit button to modify the project’s general information.
Delete: Click the Delete button to delete the project.
Users: Displays a list of users registered to the project.
Accounts: Displays a list of cloud accounts registered to the project.
Cloud Account Management
PM and PL group users can add new accounts to the project or delete accounts that are no longer in use.
Adding AWS Account
CAM supports a keyless method to enhance security when connecting cloud accounts.
To register an account, you need to create a new role in the AWS IAM service with the policy required by CAM. Follow these steps to create a role in AWS:
Click the Create button in the Access management > Roles section to go to the Create Role screen.
Create Role > Step 1: Select a trusted entity.
This step is where you enter CAM account information.
Select AWS account and Another AWS account in order, and enter the CAM account ID 022499039571 in the account ID field.
Create Role > Step 2: Add permissions
Assign the CAM policy to the newly created role.
Guide
Search for and select the relevant policy, and proceed to the next step.
IAMFullAccess
AmazonEC2FullAccess
AmazonRDSFullAccess
AWSCloudTrail_FullAccess
AmazonS3FullAccess
AmazonEventBridgeFullAccess
Create Role > Step 3: Name, review, and create
Enter a role name and click the Create Role button to complete the role creation.
※ The role name created here will be used as the Role Name when registering the account in CAM.
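For reference, selecting "Another AWS account" in Step 1 makes the AWS console generate a trust policy equivalent to the following (the account ID 022499039571 is the CAM account ID from the step above; this is a sketch of the standard AWS trust-policy shape):

```python
import json

# Trust policy allowing the CAM account to assume the role
# (the basis of the keyless connection described above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::022499039571:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```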
Guide
After creating a role in IAM, go back to the Project View screen in CAM and register the account. Click the Add button above the account list and enter the account information to register the account to the project. To complete the account registration, an approval process is required. Click the Create Approval button to proceed with the approval, which will be sent to an approval system such as Knox for processing. Once the approval is complete, you can view the newly registered account in the account list.
CSP: Select the CSP.
Environment: Select the service environment.
Account Name: Give a name to the account.
Account ID: Enter the account ID registered in AWS and click the ‘Verify’ button to confirm.
AWS Type: Set to ON if the account is an AWS China account.
Role Name: Enter the role name created in the AWS IAM.
Note
Account registration policies may vary depending on the tenant. According to the tenant’s policy, accounts may be restricted to registration in only one project.
Title: Automatically entered by the system and cannot be modified.
Approver: The approval line is automatically added by the system, and approvers and consenters can be added according to the approval guide.
Content: Account information is automatically entered by the system and cannot be modified.
Adding SCP Account
PM and PL group users can add a new SCP account to a CAM project through the Add Account button on the View Project page. CAM supports a keyless connection method for enhanced security, so no credentials are exchanged directly during account registration. Before you begin, make sure that the required setup is completed in the SCP Console.
Note
SCP includes both SCP for Samsung and SCP for Enterprises environments. The prerequisites and steps for adding an account are identical for both; select whichever matches your CSP authority.
Step 1. Pre-requisite Setup (One-time Trust Configuration for CAM Account)
Before adding your SCP account in CAM, ensure the following configuration is completed on the SCP side.
This setup allows CAM to securely access the target project and verify account information.
First, set up the policy by following the steps described below, if it has not already been created. Then authorize the CAM account via a permission group, and finally add the CAM account as a project member.
Create a Policy for CAM Access
Go to SCP Console.
Login and navigate to the IAM > Policies section in SCP Console.
Create a Policy with the name ‘CAM_Linked_Policy’
Create a new policy that includes the necessary permissions required for CAM operation based on the following table:
ID
Action
Reason
[Platform] Permission Management
List, Read, Create, Delete, Update Permission
Create/Delete Policy, Assign Policy to Role
[Platform] Resource Management
List, Read
View List and Details of SCP
[Platform] Tag Management
List, Read
View Tag List/Information, etc.
[Platform] Project Management
List, Read
Assigned Project List/Information
Table. Policy for CAM Access list
Alternatively, you can define the policy requirements in JSON mode.
You can connect the permission group and role later, so complete the policy creation without selecting anything.
Authorize the CAM Account via Permission Groups
Once the policy is created, link it to the CAM system account using a permission group.
Step-by-step:
Navigate to IAM > Permission Groups
Create a new permission group with the name 'CAM_Linked_Group'.
Attach the CAM policy created above to this group
Users are connected to the permission group when they are added to the project, so you can complete the permission group creation now without selecting any users.
Assign CAM Service Account to the Permission Group
Navigate to the Project Members section in your SCP Console.
Add the required account as a member of your target project.
This account represents CAM and will be used for integration.
To add it, select the target project, then go to Identity Access Management > Add User > Add Project Member to add the SCP user to the target project.
Add the user from the list, or use the search function to find the user to add as a project member.
Select the Permission Group with the name ‘CAM_Linked_Group’ that you created above and complete the Add Project member operation.
After completing the above steps, return to the Project View screen in CAM to add your SCP account.
Step 2. Add Account in CAM console
In CAM, go to View Project > Manage Accounts.
Click the Add Account button.
In the pop-up that opens, fill in the following details:
Select CSP and Environment
CSP: Choose SCP for Enterprises or SCP for Samsung.
Environment: Select the environment this account will belong to (e.g., DEV, STG, PRD, or ETC).
Enter Account Information
Account Name:
Enter a name to identify this account within CAM.
This can be up to 50 characters long.
Only English letters and numbers are allowed.
Project ID (from SCP Console):
Enter the Project ID of the SCP project you prepared earlier.
Allowed: English letters, numbers, and hyphens only
Max: 30 characters
Click Verify after entering the Project ID. CAM checks the following:
The project exists in SCP.
The required roles (cam-Administrator, cam-Operator, cam-Developer) are present.
The project isn’t already registered in another CAM project or awaiting approval elsewhere.
If any of these conditions are not met, you’ll see a validation message.
Step 3. Create Approval
Once the Project ID is verified and other details are complete, the Create Approval button will become active.
Click it to send the account addition request for approval. Depending on your CAM setup, you can either select the approvers manually or let the system route it to the default approvers.
After approval, the SCP account will appear in the Project Accounts table in CAM.
Adding Azure Account
Before adding an Azure account in CAM, complete the following setup steps in the Microsoft Entra ID and Azure Portal. These steps must be performed by a Tenant Admin.
Step 1: Pre-requisite Setup (One-time Trust and Domain Configuration for CAM Account)
This step ensures that CAM is trusted within the target Azure tenant and has the required access permissions. The Tenant Admin must complete it before an Azure account can be added in CAM.
These pre-requisites are divided into two sections:
Trust Configuration
Domain Configuration
Trust Configuration for CAM Account
This step ensures that CAM is trusted within the target Azure tenant and has the required access permissions. It must be performed by a Tenant Administrator in the target Azure tenant. The purpose is to grant the CAM application the necessary permissions to access resources within Microsoft Entra ID.
To allow CAM to integrate with Azure, the Tenant Administrator must open the CAM Admin Consent URL. This URL triggers a Microsoft Entra Admin Center consent dialog, where the admin can approve the requested permissions for the CAM application.
Obtain the Tenant ID
The CAM Admin Consent URL includes an App Client ID linked to a specific tenant. Before using it, the Tenant ID of the target Azure tenant must be confirmed.
To find your Tenant ID:
Sign in to the Azure Portal.
In the left navigation menu, go to Microsoft Entra ID.
In the Overview tab (first screen), locate the Tenant ID field.
Copy the Tenant ID for use in the Admin Consent URL.
Replace the placeholder {Your_Tenant_ID} in the URL with the actual Tenant ID you copied earlier.
When prompted, select the Global Administrator account of the target tenant.
This account must have the highest administrative privileges in the tenant.
Review the Consent Agreement displayed. This agreement outlines the exact permissions CAM will be granted.
If you agree, click Accept to approve the integration.
By completing this step, CAM gains access to the tenant-level resources in Microsoft Entra ID.
No Subscription Access Yet: This step does not grant CAM access to Azure subscriptions. Subscription-level access will be configured separately in later steps (Management Group Role creation and Subscription Role assignment).
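The placeholder substitution described above is a plain string replacement. The URL shape below follows Microsoft's admin-consent endpoint and uses a sample tenant ID and a placeholder client ID; the actual URL is the one provided by CAM:

```python
# Hypothetical consent URL shape (the real URL comes from CAM); the
# client_id below is a placeholder, not the actual CAM App Client ID.
consent_url_template = (
    "https://login.microsoftonline.com/{Your_Tenant_ID}/adminconsent"
    "?client_id=00000000-0000-0000-0000-000000000000"
)

tenant_id = "11111111-2222-3333-4444-555555555555"  # sample Tenant ID
consent_url = consent_url_template.replace("{Your_Tenant_ID}", tenant_id)
```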
Verify CAM application registration after granting consent
In Azure Portal, navigate to Microsoft Entra ID → Enterprise Applications.
Search for the CAM application.
Confirm the CAM app appears in the list and is properly registered.
Note
When you grant Admin Consent, you are giving CAM tenant-level recognition.
Domain Configuration for CAM Account
In Azure, domain linkage is required so that users can authenticate with their email addresses and integrate with CAM's Keycloak authentication.
The process of Domain Configuration has two main phases:
Phase
Who Performs It
Frequency
Create a Domain
Tenant Admin or PM/PL
Once per tenant (may be repeated for new domains if required)
Register your domain in the Azure Tenant
Tenant Admin
Once per tenant (unless additional domains are added later)
Table. Domain Configuration for CAM Account list
Create a Domain
You can create a public domain using any DNS service that can generate TXT records (e.g., AWS Route 53, SCP DNS).
For this guide, we use SCP DNS as an example.
Pre-Domain Creation Operations
Log into SCP DNS.
Access the SCP console and navigate to the DNS menu.
Initiate Public Domain Purchase.
Click Product Request.
This opens the Purchase Form.
Fill in the details of Domain Purchase Form.
Usage Type: Select Public
Domain Name: Enter desired public domain name.
Registrant Details: Enter name, email, address, phone number.
Description and Designation: Fill in these fields as needed.
Billing Information will be displayed before purchase confirmation.
Confirm Purchase
Review the final billed amount.
Click Next to confirm.
Verify DNS Status
Once created, the domain will appear in the SCP DNS list.
Wait until the status shows Active, which indicates the domain is now publicly usable.
You now have an active public domain that can be linked to your Azure tenant for user authentication.
Register your Domain in the Azure tenant
Now that the public domain exists, it must be linked to Microsoft Entra ID for authentication.
Pre-Domain Setup Operations (Azure Tenant)
Sign in to the Azure Portal with a Tenant Administrator account.
Navigate to Microsoft Entra ID → Custom Domain Names.
Click +Add Custom Domain.
Enter your public domain name (the one you created in SCP).
Click Add Domain.
Generate a TXT Record of the Domain (Azure → SCP DNS)
Once you add the domain in Azure:
Azure will display a TXT record value that must be added to your domain’s DNS settings. This is required to verify domain ownership.
Copy the TXT record value from Azure.
Add TXT Record (To SCP / Domain Host)
Go to SCP DNS then select the Active public domain you created.
Click Add Record.
Record Type: Select TXT.
Value: Paste the TXT record value copied from Azure.
TTL (Time to Live): Choose according to preference.
Click Confirm.
Ensure the record appears in the domain’s DNS list.
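Before returning to Azure to verify, you can confirm that the TXT record has propagated, for example with `dig +short TXT yourdomain.example`. A minimal Python sketch (the record values below are made-up placeholders) that checks whether the expected value appears in that output:

```python
def txt_record_present(dig_output: str, expected: str) -> bool:
    """Check whether an expected TXT value appears in `dig +short TXT` output.

    dig prints one quoted string per TXT record, e.g.:
        "MS=ms12345678"
        "v=spf1 -all"
    """
    records = [line.strip().strip('"') for line in dig_output.splitlines()]
    return expected in records

# Illustrative output of `dig +short TXT yourdomain.example`
output = '"MS=ms12345678"\n"v=spf1 -all"\n'
print(txt_record_present(output, "MS=ms12345678"))  # True once the record has propagated
```

If this returns False, wait for DNS propagation before clicking Verify in Azure.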
Validate Domain in Azure
Return to the Azure Portal and navigate to Microsoft Entra ID → Custom Domain Names.
Initially, the domain status will be Unverified.
Click the domain, then click the Verify button.
Once Azure detects the TXT record (propagation may take several minutes), the status changes to Verified.
Your public domain is now officially linked to the Azure tenant.
Step 2. Add Account in CAM console
In CAM, go to View Project > Manage Accounts.
Click the Add Account button.
In the pop-up that opens, fill in the following details:
Select CSP and Environment
CSP: Choose Azure
Environment: Select the environment this account will belong to.
Enter Account Information
Account Name:
Enter a name to identify this account within CAM.
This can be up to 50 characters long.
Only English letters and numbers are allowed.
Tenant ID (from Azure Portal):
Enter the Tenant ID.
Only English letters, numbers, and hyphens are allowed.
Maximum 36 characters can be entered.
Click Verify and CAM will check the following:
Confirm if the Tenant ID format is correct.
Validate it against Azure to ensure it exists.
Only after Tenant ID is verified will the Subscription ID field be enabled.
Subscription ID (from Azure Portal):
Enter the Subscription ID.
Only English letters, numbers, and hyphens are allowed.
Maximum 36 characters can be entered.
Click Verify and CAM will check the following:
Confirm if the Subscription ID format is correct.
Check if the Subscription ID is already linked to another CAM project.
Check if it is already registered or has a pending approval request.
Only after Subscription ID is verified will the Federation Domain field be enabled.
Federation Domain (from Azure Portal):
Enter the Federation Domain.
Only English letters, numbers, hyphens, and dots are allowed.
Maximum 48 characters can be entered.
Click Verify and CAM will check the following:
Confirm that the Federation Domain format is correct.
Ensure it matches an existing verified domain from Azure Domain Configuration.
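The format rules above (allowed character sets and maximum lengths) can be sketched as simple client-side checks. This is illustrative only; CAM's Verify step additionally validates the values against Azure, which a local check cannot do. The sample Tenant ID below is a made-up GUID:

```python
import re

# Format rules as described in the guide:
ACCOUNT_NAME = re.compile(r"[A-Za-z0-9]{1,50}")            # letters and digits only, max 50
TENANT_OR_SUBSCRIPTION = re.compile(r"[A-Za-z0-9-]{1,36}") # letters, digits, hyphens, max 36
FEDERATION_DOMAIN = re.compile(r"[A-Za-z0-9.-]{1,48}")     # letters, digits, hyphens, dots, max 48

def is_valid(field: str, value: str) -> bool:
    """Return True if the value matches the stated format rule for the field."""
    patterns = {
        "account_name": ACCOUNT_NAME,
        "tenant_id": TENANT_OR_SUBSCRIPTION,
        "subscription_id": TENANT_OR_SUBSCRIPTION,
        "federation_domain": FEDERATION_DOMAIN,
    }
    return bool(patterns[field].fullmatch(value))

print(is_valid("tenant_id", "72f988bf-86f1-41af-91ab-2d7cd011db47"))  # True: 36-char GUID
print(is_valid("account_name", "my-account"))                         # False: hyphen not allowed
```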
Step 3. Create Approval
Once all the fields are verified and details are complete, the Create Approval button will become active.
Click it to send the account addition request for approval. Depending on your CAM setup, you can either select the approvers manually or let the system route it to the default approvers.
After approval, the Azure account will appear in the Project Accounts table in CAM.
Delete Account
Click the Delete button in the View Account section to delete an account that is no longer in use.
User Management
PM and PL group users can add or remove users from the project. Only users registered to the project can be granted console and resource access within the project, so users who need console or resource access must be registered as project users.
Add User
Click the Add button above the user list to add a user to the project.
Name: Search for the user name registered in CAM.
Group: Select the user’s group.
PL: Can manage project-related information and has the same permissions as the project manager.
Operator, Developer: Can view project-related information and request permissions for resources. These users are categorized for project role management but have the same permissions in the CAM portal.
Delete User
Select the user to delete from the user list and click the Delete button.
After deleting a user, the deleted user can no longer view project-related information.
Notice
The Notice section allows Tenant Admins to create and manage notices that are displayed in the GNB Notices panel for users within the tenant. Multiple notices can be active simultaneously. Each notice can include a title, detailed description, optional attachment(s), and a defined display period.
Create Notice
To create a notice, click the Create button on the List page. On the Create Notice page, enter the following details:
Title: Enter a title for the notice.
Description: Provide the content or message to be displayed.
Attachment (Optional): Upload supporting files (up to 5 files, with a combined maximum size of 50 MB). Empty files cannot be uploaded and supported file formats include images, documents, .mp4, and .zip.
Display: Toggle ON to enable the notice for display in the GNB. Once the toggle is turned ON, you can select the Display Period or the date range during which the notice should be visible to users.
Select Save to create the notice. The newly created notice will appear in the Notice list.
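The attachment rules above (at most 5 files, 50 MB combined, no empty files, limited formats) can be pre-checked before upload. The exact extension list below is an assumption for illustration, since the guide names only broad categories (images, documents, .mp4, .zip):

```python
# Assumed extension set; the console's actual accepted list may differ.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".gif", ".pdf", ".docx", ".xlsx", ".mp4", ".zip"}
MAX_FILES = 5
MAX_TOTAL_BYTES = 50 * 1024 * 1024  # 50 MB combined

def check_attachments(files: list[tuple[str, int]]) -> list[str]:
    """files: (filename, size_in_bytes) pairs. Returns a list of rule violations."""
    errors = []
    if len(files) > MAX_FILES:
        errors.append(f"too many files: {len(files)} > {MAX_FILES}")
    if sum(size for _, size in files) > MAX_TOTAL_BYTES:
        errors.append("combined size exceeds 50 MB")
    for name, size in files:
        if size == 0:
            errors.append(f"empty file not allowed: {name}")
        if not any(name.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
            errors.append(f"unsupported format: {name}")
    return errors

print(check_attachments([("guide.pdf", 1024), ("demo.mp4", 0)]))
# ['empty file not allowed: demo.mp4']
```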
View Notice Details
Select any notice title from the list to open the Notice Details page.
All notice information (Title, Description, Attachments, Display Period, Created By, and Created Date) is displayed in read-only mode.
From this view:
Use Edit to modify the notice.
Use Delete to permanently remove the notice.
Edit Notice
From the Notice List, select a notice to open its Detail View.
Select Edit.
Modify the required fields (Title, Description, Attachment, Display settings, or Date Range).
Select Save to update the notice.
Note
Changes made to an active notice take effect immediately.
Delete Notice
From the Notice Detail view page, select Delete.
Confirm the deletion when prompted.
The selected notice will be removed from the list and will no longer appear in GNB Notices.
Approval Line
Tenant administrators can predefine approval lines that users must specify when creating an approval.
Create Approval Line
To create an approval line, click the Create button and specify the approval case and organization to create.
Name: Enter a name for the approval line that will not be exposed to users.
Target: Select when and which organization to apply.
Approver Guide: Enter information about approvers who cannot be automatically designated by the system but must still be included in the approval line. If entered, it is shown to users as follows.
Approver: Search for and add approvers to be automatically designated by the system and shown to users.
View Approval Line
To view detailed information about an approval line, go to the Approval Line menu and click on the desired approval line. You can view information about all approval lines and modify or delete them.
Modify Approval Line
Click the Edit button in the View Approval Line screen to modify the information.
Delete Approval Line
Click the Delete button to delete an approval line that is no longer in use.
Organization
The organization menu allows tenant administrators to manually manage the tenant’s organization.
Tenant administrators can create organizations, which can be used to manage projects and approval lines by organization unit.
Add Organization
To add an organization, click the Add button and enter the following details in the Add Organization popup.
Parent (Upper Organization): Select the name of the upper organization. The default value is the tenant name.
Name: Enter the name of the organization to create.
Display: Set the toggle to ON to expose the organization in the Organization list to users.
View Organization
The View Organization page displays a list of all created organizations. Click on the organization name to view detailed organization information on the right.
You can expand the entire organization list to view all organizations at once, or collapse it to view only the top-level organizations.
Modify Organization
The data entered when creating the organization is displayed, and all data can be modified. Click the Save button after modifying.
Delete Organization
Click the Delete button in the View Organization screen to delete an organization that is no longer in use.
Note
Parent organizations and organizations with registered projects cannot be deleted.
Tenant Administrator
The tenant administrator menu allows you to add, specify, or delete administrators who manage the tenant.
Initially, the user who applied for the service is designated as the tenant administrator, and subsequent administrators can be directly added, deleted, and managed by tenant administrators with administrative privileges.
Tenant administrators can manage tenant-based information through dedicated menus (e.g., Approval Line, Organization, etc.) and view all content within the tenant.
Add Tenant Administrator
To add a tenant administrator, click the Add button and search for and register a user among those registered to the tenant.
Delete Tenant Administrator
Select the user to delete from the tenant administrator list and click the Delete button.
The CAM site can only be accessed from the allowed internal network of the tenant.
Please check if the network you are accessing is an accessible environment.
Refer to Getting Started > Network Environment.
Do I need to process the firewall in advance?
To access resources through the CAM site, firewall registration from CAM to the target resources (Jumphost) must be completed in advance.
For the contents required for firewall registration, please inquire with the tenant administrator.
I am unable to log in.
Membership registration and login follow the SingleID system, so you must go through SingleID’s sign-in process or your company’s SSO process (e.g., Knox SSO), followed by MFA (e.g., SMS, Email, etc.) to access the CAM portal.
When accessing for the first time, select an MFA method (SMS, Email, etc.). If SSO is configured for Knox, the OTP will be sent to the phone number or email stored in your Knox personal information, and you can log in by entering the OTP. If your phone number has changed, it may take some time for the updated Knox personal information to be reflected in SingleID, so please try again later.
An error message appears on the CSP console login screen.
The CAM site provides CSP login based on console roles, and if there is no role that the user can log in to, CSP recognizes it as an invalid request and displays an error message.
If you see the message “Your request included an invalid SAML response,” please request a CSP role from the PM or PL.
After the PM or PL registers the user to the role and the approval is completed, you can log in by selecting the corresponding role on the CSP console screen.
I am unable to access the CSP console login screen.
The CAM site has restrictions on accessing some services depending on the access environment.
While the CAM site can be accessed from the internal network environment, resource access may be allowed only for specific IP ranges for each tenant. Please check your access environment and inquire with the tenant administrator.
I created a role and policy, but the role is not visible on the CSP console login screen.
It may take a few minutes for the role and policy to be actually created and applied in CSP.
Or, the user registration and approval for the role must be completed for the registered user to select the role on the CSP console login screen, so please check the user registration and approval status.
I get an ‘Invalid CSP policy JSON.’ error message and policy creation fails.
This is a CSP error that occurs when the input JSON template is not supported by CSP.
Please check the input JSON and rewrite it in a valid format.
I get an ‘Unable to complete due to a CSP error response.’ error message and role mapping fails.
This is a CSP error that occurs when the JSON of the policy being mapped or the policy to be mapped is invalid.
Please delete the corresponding policy and create a new policy in a JSON format supported by CSP.
Is there anything I need to prepare in advance to access resources?
To access resources through the CAM site, you need to apply for access permissions by account and resource type and get approval.
Please apply and get approval through the authority request menu.
After approval, you need to install the client agent on the PC you want to access and register the IP of the access environment.
You can install the client agent by clicking the Download Client Agent button in the PC settings menu, and you can register up to 5 IPs.
Refer to Resource Access > PC Settings.
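The IP registration constraints above (well-formed addresses, at most 5 entries) can be sketched with the standard `ipaddress` module. This is illustrative only, not the CAM portal's actual validation logic:

```python
import ipaddress

MAX_REGISTERED_IPS = 5  # the PC settings menu accepts up to 5 IPs

def validate_ip_list(ips: list[str]) -> list[str]:
    """Return the well-formed IPv4/IPv6 addresses from the input, capped at 5 entries."""
    valid = []
    for ip in ips:
        try:
            ipaddress.ip_address(ip)  # raises ValueError on malformed addresses
        except ValueError:
            continue  # skip malformed entries such as out-of-range octets
        valid.append(ip)
    return valid[:MAX_REGISTERED_IPS]

print(validate_ip_list(["10.0.0.5", "not-an-ip", "192.168.1.300"]))  # ['10.0.0.5']
```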
I get an ‘Access to PRD server resources is restricted to ‘VDI for server connect’ environments where the Internet is blocked.’ error message and resource access fails.
Access to PRD resources is restricted in general internal network environments.
PRD resources can only be accessed from specific IP ranges where the Internet is blocked. For tenant-specific restricted environments, please inquire with the tenant administrator.
I get a ‘Resource connect is restricted due to unauthorized IP.’ error message and resource access fails.
This error occurs because the current access IP is not registered in the CAM site.
To access resources through the CAM site, you need to register the IP of the access environment in advance.
Please register the IP of the access environment in the PC settings menu and use it.
Refer to Resource Access > IP Registration.
I get an ‘Unable to connect to the resource since the local port you entered is already in use.’ error message and resource access fails.
This message appears when the local port you entered is already in use by another application on your PC.
Please enter an unused port between 1024 and 65,535 and try again.
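Rather than guessing at an unused port, you can ask the operating system for one; a minimal sketch:

```python
import socket

def find_free_local_port() -> int:
    """Ask the OS for an unused ephemeral port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))     # port 0 = let the OS pick a free port
        return s.getsockname()[1]    # the port the OS actually assigned

port = find_free_local_port()
print(port)  # an unused port, typically in the ephemeral range above 1024
```

The port is released when the socket closes, so enter it promptly in the connection dialog before another application claims it.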
What are the supported OS, DB vendors, and versions for enrolling resources?
Currently, OS supports Ubuntu and Amazon Linux, and DB supports some versions of PostgreSQL, MySQL, Aurora PostgreSQL, and Aurora MySQL.
For detailed version information of each OS/DB, please refer to Getting Started > Service Scope.
I get a ‘The address information you provided is not valid.’ error message and resource registration fails.
This message appears when the address information of the resource to be registered is invalid.
Please check the IP and port information of the address and try again.
I get an ‘Invalid authentication credentials. Please check your credentials, then try again.’ error message and resource registration fails.
This message appears when the root user information of the resource to be registered is invalid.
Please check the ID and password information of the root account and try again.
I get an ‘Unable to connect to the resource because the PC agent is not connected.’ error message and resource access fails.
This occurs when the Client Agent did not start automatically when the PC booted.
We recommend restarting the PC so that the Client Agent starts automatically, or reinstalling and running the Client Agent.
I get an ‘Unable to connect to the resource. Please try again after checking the resource status.’ error message and resource access fails.
This occurs when the resource to be accessed is not in an accessible state.
Please check if the resource can be accessed through the network and try again after taking necessary actions.
9.5.2.1.5 - SingleID Authenticator
Overview
SingleID Authenticator is a dedicated authentication tool that allows users to authenticate themselves on a website using their mobile phone in a convenient and secure manner.
SingleID Authenticator Authentication Methods
Biometric (fingerprint, facial recognition)
TOTP (Time-based One-Time Password)
mOTP (mobile One-Time Password)
PIN
Notice
The available authentication methods may vary depending on the services supported by the authentication method and the device support range.
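TOTP codes such as those displayed by an authenticator app follow RFC 6238 (an HMAC-SHA1 over the 30-second time-step counter, dynamically truncated); a minimal sketch, checked against the RFC's published test vector:

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)            # 8-byte big-endian time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, T = 59 seconds)
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

In practice, use `int(time.time())` as `for_time`; real authenticator apps also handle clock drift and secret provisioning, which this sketch omits.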
Mobile Environment Support
SingleID Authenticator supports the following mobile environments.
| Support | Recommended |
| --- | --- |
| Android: 8 and later versions. Web Browser: Samsung Internet, latest public version | Android: 8 and later versions, on Samsung Galaxy models released in 2018 and beyond (Galaxy S9 ↑). Web Browser: Samsung Internet 9.0 ↑ |
| iOS: 16, 17. Web Browser: Safari, latest public version | iOS: 16, 17, on Apple iPhone models released in 2018 and beyond (iPhone Xs ↑). Web Browser: Safari 14.1 ↑ |
Table. Mobile Environment Support
9.5.2.1.5.1 - Installing the App
SingleID Authenticator mobile app can be downloaded in various ways.
Scanning the QR Code to Download
During the SingleID Authenticator registration procedure on the SingleID User Portal (for example, on the authentication settings or authentication pages), you can scan the QR code to go directly to the app store and download the app.
Note
If you are a user in China or otherwise cannot access the app store, click For Chinese users or those who cannot access the App Store, click here below the QR code to receive the download URL via SMS.
Downloading from the Mobile App Store
If you cannot scan the QR code with your camera due to company internal security, you can download it directly by searching for it on the app store for Android and iOS operating systems.
Run Play Store (Android) or App Store on your smartphone.
Search for SingleID.
Confirm SingleID Authenticator and press the install button to install it.
Downloading via Smartphone Browser
If you have accessed the additional authentication page on your mobile device, follow the procedure below to download and install the app.
Click the button below on the service registration guide page.
Click the app download to download the installation file and install it.
Caution
For iOS, after installation, set SAMSUNG SDS to trusted in Device Settings > General > Device Management to use it.
9.5.2.1.5.2 - Authenticating Users
Authenticating with PUSH
Registered users will automatically receive a PUSH notification on their mobile app from the service for additional authentication. To authenticate using PUSH, follow the procedure below.
When additional authentication is requested, the SingleID Authenticator receives a PUSH notification. Tap the PUSH notification to launch the app.
Authenticate using your preferred method.
If authentication is successful, return to the browser to complete the authentication.
Note
For iOS, users must manually switch to the browser by clicking the top-left Safari button to complete the authentication. For Android OS devices, the browser will automatically switch.
Requesting Manual Authentication
If you did not tap the PUSH notification or did not receive one, you can request additional authentication directly from the app. To authenticate by requesting authentication from the app, follow the procedure below.
Launch the app and click the + button at the top right.
Scan the QR code or enter the manual code displayed on the web browser into the mobile app.
Once the input is complete, the authentication service will be registered.
Authenticating with OTP
For users registered with the OTP service, the additional authentication screen will automatically send OTP information to the user’s mobile app via PUSH. To confirm and authenticate OTP in the app, follow the procedure below.
When OTP authentication is requested, the SingleID Authenticator receives a PUSH notification. Tap the PUSH notification to launch the app.
Check if the OTP displayed in the app matches the OTP on the web screen. If they match, select Confirm in the app.
If authentication is successful, return to the browser to complete the authentication.
Note
If you are using an older version of the app that does not support OTP, you can update the app and use OTP authentication. Follow the guide on the authentication screen to update the app, register OTP, and use it.
9.5.2.1.5.3 - Manage Authentication Methods
To use SingleID Authenticator, you must set a PIN, and you can add other authentication methods supported by the service.
PIN Change
When you first register a service with SingleID Authenticator, you will register a PIN as a required authentication method. To change the PIN, follow the steps below.
Go to Main screen > Authentication Method.
Click Change on the PIN item, complete the identity verification process, and change the PIN to the desired number.
Reference
The authentication methods that can be registered may vary depending on the authentication methods and devices supported by the service.
Cancel authentication method
If you no longer use the registered authentication method or need to re‑register, you can cancel the authentication method. To cancel the authentication method, follow the steps below.
Go to Settings > Authentication Method Management.
Authentication with PIN is required when accessing the menu.
Select the icon on the right of the authentication method you want to cancel.
A delete confirmation popup appears.
Once the authentication method is deregistered, the icon on the right changes to the Off state.
Reference
After registering the service, the initial PIN cannot be cancelled with the default setting. If you do not want to authenticate with SingleID Authenticator, delete the service.
9.5.2.1.5.4 - Managing Service List
You can change the order of the list of registered services or delete services that are not in use.
Changing the List Order
If you want to change the order of the service list, follow the procedure below.
Select the icon on the home screen to open the service list edit screen.
Press and hold the icon of the service you want to change the order of, and drag it to the desired location.
After changing to the desired order, click Complete. The changed list will be saved.
Deleting Registered Services
There are two ways to delete registered services: deleting one service at a time and bulk deleting multiple services.
If you want to delete a service, you can delete it directly from the list. Follow the procedure below.
From the home screen, select the service you want to delete and slide it to the left.
When the trash can icon appears on the right, click Trash.
When the Do you want to delete the selected service? popup appears, click Confirm to delete.
Confirm that the service has been deleted from the list.
FAQ
The app does not open when using the Samsung browser.
If you are using the latest version of the Samsung browser, the app logo may not be displayed in the browser and the app may not open automatically, depending on your smartphone settings.
You can open the app by selecting the app icon next to the browser address bar. To set the app to open automatically, follow the procedure below.
Go to Samsung Browser > Internet Settings > Useful Features.
Set Open links in other apps to On.
Go back to the browser and run the app again, and it will work normally.
9.5.2.1.5.5 - Open Source License (Android)
The open source licenses used in the SingleID solution are as follows. For more details, see below.
SingleID_MobileApp_Client-APK
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com.
JDOM License Copyright (C) 2000-2004 Jason Hunter & Brett McLaughlin. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the disclaimer that follows these conditions in the documentation and/or other materials provided with the distribution. 3. The name “JDOM” must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact {request_AT_jdom_DOT_org}. 4. Products derived from this software may not be called “JDOM”, nor may “JDOM” appear in their name, without prior written permission from the JDOM Project Management {request_AT_jdom_DOT_org}.
In addition, we request (but do not require) that you include in the end-user documentation provided with the redistribution and/or in the software itself an acknowledgment equivalent to the following: “This product includes software developed by the JDOM Project (http://www.jdom.org/)." Alternatively, the acknowledgment may be graphical using the logos available at http://www.jdom.org/images/logos.
THIS SOFTWARE IS PROVIDED “AS IS” AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE JDOM AUTHORS OR THE PROJECT CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Checker Qual : Copyright 2004-present by the Checker Framework developers
Mocha: Copyright (c) 2011-2022 OpenJS Foundation and contributors, https://openjsf.org
Xamarin.Android.Support.ViewPager , Android - platform - hardware - intel - common - libva: Copyright (c) .NET Foundation Contributors
android-gif-drawable : Copyright (c) 2013 - present Karol Wrótniak, Droids on Roids LLC
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
SingleID_MobileApp_Flutter-UMA
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com
License
Open Source Component
License Text
Apache License 2.0
Android Support Library media compat, Converter: Gson, Adapter: RxJava 2, Android Support Library core utils, Android Arch-Runtime, Guava (Google Common Libraries), Android Support AnimatedVectorDrawable, Android Support Library core UI, Android Support Library Custom View - androidx.customview:customview, Android Lifecycle LiveData, OkHttp, Gson, android.support.annotation, Android Support Library Custom View - androidx.swiperefreshlayout:swiperefreshlayout, Android Support Library v4, OkHttp, Android Lifecycle ViewModel, Commons Lang, rxjava, Android Support Library compat, Roboto Fonts, Apache Commons Collections, Android Support Library v4, Android Lifecycle LiveData Core, RxAndroid, joda-time, okio, Apache Commons IO, JetBrains/java-annotations, Android AppCompat Library v7, Android Support Library Collections, Android Support VectorDrawable, Kotlin Stdlib, Android Lifecycle-Common, Android Support Library loader, Retrofit
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
a. You must give any other recipients of the Work or Derivative Works a copy of this License; and b. You must cause any modified files to carry prominent notices stating that You changed the files; and c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and d. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets “[]” replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN “AS-IS” BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an “owner”) of an original work of authorship and/or a database (each, a “Work”).
Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works (“Commons”) that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.
For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the “Affirmer”), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights (“Copyright and Related Rights”). Copyright and Related Rights include, but are not limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person’s image or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer’s Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work
i. in all territories worldwide,
ii. for the maximum duration provided by applicable law or treaty (including future time extensions),
iii. in any current or future medium and for any number of copies, and
iv. for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the “Waiver”).
Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer’s heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer’s express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer’s express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty‑free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer’s Copyright and Related Rights in the Work
i. in all territories worldwide,
ii. for the maximum duration provided by applicable law or treaty (including future time extensions),
iii. in any current or future medium and for any number of copies, and
iv. for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the “License”).
The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not i. exercise any of his or her remaining Copyright and Related Rights in the Work or ii. assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer’s express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person’s Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Xamarin.Android.Support.ViewPager: Copyright (c) .NET Foundation Contributors All rights reserved.
secure-random: Copyright (C) 2011 by Anton Vodonosov (avodonosov@yandex.ru). All rights reserved.
Xamarin.Android.Support.CursorAdapter: Copyright (c) .NET Foundation Contributors All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.
The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
DEFINITIONS
“Font Software” refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.
“Reserved Font Name” refers to any names specified as such after the copyright statement(s).
“Original Version” refers to the collection of Font Software components as distributed by the Copyright Holder(s).
“Modified Version” refers to any derivative made by adding to, deleting, or substituting — in part or in whole — any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.
“Author” refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:
1. Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.
2. Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.
3. No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.
4. The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.
5. The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
SingleID_MobileApp_Flutter-UMA
9.5.2.1.5.6 - Open Source License (iOS)
The open source licenses used in the SingleID solution are as follows. For more details, see below.
SingleID_MobileApp_Client-IOS
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com.
License
Open Source Component
License Text
Apache License 2.0
Open Computer Vision Library (OpenCV); KA ProgressLabel
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: a. You must give any other recipients of the Work or Derivative Works a copy of this License; and b. You must cause any modified files to carry prominent notices stating that You changed the files; and c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and d. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets “[]” replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Computer, Inc. (“Apple”) in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software.
In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple’s copyrights in this original Apple software (the “Apple Software”), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Computer, Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated.
The Apple Software is provided by Apple on an “AS IS” basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Copyright (c) 1998-2013, Brian Gladman, Worcester, UK. All rights reserved. The redistribution and use of this software (with or without changes) is allowed without the payment of fees or royalties provided that: source code distributions include the above copyright notice, this list of conditions and the following disclaimer; binary distributions include the above copyright notice, this list of conditions and the following disclaimer in their documentation.
This software is provided ‘as is’ with no explicit or implied warranties in respect of its operation, including, but not limited to, correctness and fitness for purpose.
TPPropertyAnimation: Copyright 2010 A TASTY PIXEL. All rights Reserved
sqlcipher: Copyright (c) 2008-2023, ZETETIC LLC All rights reserved.
ASM All: Copyright (c) 2000-2011 INRIA, France Telecom All rights reserved.
Protocol Buffers [BOM]: Copyright 2008 Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The OpenSSL toolkit stays under a dual license, i.e. both the conditions of the OpenSSL License and the original SSLeay license apply to the toolkit. See below for the actual license texts. Actually both licenses are BSD-style Open Source licenses. In case of any license issues related to OpenSSL please contact openssl-core@openssl.org.
OpenSSL License —————
Copyright (c) 1998-2008 The OpenSSL Project. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. All advertising materials mentioning features or use of this software must display the following acknowledgment: “This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. (http://www.openssl.org/)" 4. The names “OpenSSL Toolkit” and “OpenSSL Project” must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact openssl-core@openssl.org. 5. Products derived from this software may not be called “OpenSSL” nor may “OpenSSL” appear in their names without prior written permission of the OpenSSL Project. 6. Redistributions of any form whatsoever must retain the following acknowledgment:
“This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/)"
THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT “AS IS” AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product includes software written by Tim Hudson (tjh@cryptsoft.com).
Original SSLeay License
Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com) All rights reserved.
This package is an SSL implementation written by Eric Young (eay@cryptsoft.com). The implementation was written so as to conform with Netscapes SSL.
This library is free for commercial and non-commercial use as long as the following conditions are aheared to. The following conditions apply to all code found in this distribution, be it the RC4, RSA, lhash, DES, etc., code; not just the SSL code. The SSL documentation included with this distribution is covered by the same copyright terms except that the holder is Tim Hudson (tjh@cryptsoft.com). Copyright remains Eric Young’s, and as such any Copyright notices in the code are not to be removed. If this package is used in a product, Eric Young should be given attribution as the author of the parts of the library used. This can be in the form of a textual message at program startup or in documentation (online or textual) provided with the package.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. All advertising materials mentioning features or use of this software must display the following acknowledgement:
“This product includes cryptographic software written by Eric Young (eay@cryptsoft.com)” The word ‘cryptographic’ can be left out if the rouines from the library being used are not cryptographic related :-). 4. If you include any Windows specific code (or a derivative thereof) from the apps directory (application code) you must include an acknowledgement: “This product includes software written by Tim Hudson (tjh@cryptsoft.com)”
THIS SOFTWARE IS PROVIDED BY ERIC YOUNG “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The licence and distribution terms for any publically available version or derivative of this code cannot be changed. i.e. this code cannot simply be copied and put under another distribution licence [including the GNU Public Licence.]
This software is provided ‘as-is’, without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution.
SingleID_MobileApp_Client-APK
SingleID_MobileApp_Flutter-UMA
The following sets forth attribution notices for third party software that may be contained in portions of this product. If you have any questions, please contact global.cs@samsung.com
License
Open Source Component
License Text
Apache License 2.0
Android Support Library media compat, Converter: Gson, Adapter: RxJava 2, Android Support Library core utils, Android Arch-Runtime, Guava (Google Common Libraries), Android Support AnimatedVectorDrawable, Android Support Library core UI, Android Support Library Custom View - androidx.customview:customview, Android Lifecycle LiveData, OkHttp, Gson, android.support.annotation, Android Support Library Custom View - androidx.swiperefreshlayout:swiperefreshlayout, Android Support Library v4, OkHttp, Android Lifecycle ViewModel, Commons Lang, rxjava, Android Support Library compat, Roboto Fonts, Apache Commons Collections, Android Support Library v4, Android Lifecycle LiveData Core, RxAndroid, joda-time, okio, Apache Commons IO, JetBrains/java-annotations, Android AppCompat Library v7, Android Support Library Collections, Android Support VectorDrawable, Kotlin Stdlib, Android Lifecycle-Common, Android Support Library loader, Retrofit
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
a. You must give any other recipients of the Work or Derivative Works a copy of this License; and b. You must cause any modified files to carry prominent notices stating that You changed the files; and c. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and d. If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets “[]” replaced with your own identifying information. (Don’t include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same “printed page” as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN “AS-IS” BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an “owner”) of an original work of authorship and/or a database (each, a “Work”).
Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works (“Commons”) that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.
For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the “Affirmer”), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights (“Copyright and Related Rights”). Copyright and Related Rights include, but are not limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person’s image or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer’s Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work
i. in all territories worldwide,
ii. for the maximum duration provided by applicable law or treaty (including future time extensions),
iii. in any current or future medium and for any number of copies, and
iv. for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the “Waiver”).
Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer’s heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer’s express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer’s express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer’s Copyright and Related Rights in the Work
i. in all territories worldwide,
ii. for the maximum duration provided by applicable law or treaty (including future time extensions),
iii. in any current or future medium and for any number of copies, and
iv. for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the “License”).
The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not i. exercise any of his or her remaining Copyright and Related Rights in the Work or ii. assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer’s express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person’s Copyright and Related Rights in the Work. Further,
Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Xamarin.Android.Support.ViewPager: Copyright (c) .NET Foundation Contributors All rights reserved.
secure-random: Copyright (C) 2011 by Anton Vodonosov (avodonosov@yandex.ru). All rights reserved.
Xamarin.Android.Support.CursorAdapter: Copyright (c) .NET Foundation Contributors All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.
The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
DEFINITIONS
“Font Software” refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.
“Reserved Font Name” refers to any names specified as such after the copyright statement(s).
“Original Version” refers to the collection of Font Software components as distributed by the Copyright Holder(s).
“Modified Version” refers to any derivative made by adding to, deleting, or substituting — in part or in whole — any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.
“Author” refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:
1. Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.
2. Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.
3. No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.
4. The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.
5. The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
SingleID_MobileApp_Flutter-UMA
9.5.2.1.6 - Open API Guides
SingleID Open API Guides
To use the SCP SingleID Open API, the system must be registered as an Application first.
The registered system information is used to issue a JWT Token, which must be included in the HTTP header when calling the SCP SingleID Open API.
API Calling Method
Call with the token (JWT Token) value included in the HTTP header
Set the access token header name to Authorization and the access token type to Bearer; place the JWT Token value after the Bearer string.
Original token sample data
{
  "sys": "test-system",
  "req": "761efd52-97d0-451f-9cf9-cf86740e7ca3",
  "uid": "gildong.hong",
  "rtn": "https://test.com/mfa/response",
  "email": "gildong.hong@samsung.com",
  "mobile": "+82-1012345678",
  "nbf": 1698232068,
  "exp": 1698239268,
  "iat": 1698232068,
  "displayUid": "gildong.hong@samsung.com"
}
Table. Request Parameters
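The request token above is a standard HS256 JWT. The following is a minimal sketch of producing it and the Authorization header using only the Python standard library; the secret key here is a placeholder, not an issued key.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url-encode without padding, as JWT requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs256_jwt(claims: dict, secret: str) -> str:
    # Build header.payload.signature with an HMAC-SHA256 signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

claims = {
    "sys": "test-system",
    "req": "761efd52-97d0-451f-9cf9-cf86740e7ca3",
    "uid": "gildong.hong",
    "rtn": "https://test.com/mfa/response",
    "email": "gildong.hong@samsung.com",
    "mobile": "+82-1012345678",
    "nbf": 1698232068,
    "exp": 1698239268,
    "iat": 1698232068,
    "displayUid": "gildong.hong@samsung.com",
}
# "my-secret-key" is a placeholder; use the Secret Key issued for your Application
token = make_hs256_jwt(claims, "my-secret-key")
headers = {"Authorization": f"Bearer {token}"}
```

In practice a JWT library may be used instead; the point is that the claims above are signed with the Application's Secret Key and sent as `Authorization: Bearer <token>`.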
Response
The browser is redirected to the MFA authentication page.
By default, the response token is delivered via the POST method; to deliver it via the GET method (query string), add the following parameter to the request token:
returnMethod: get
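As the curl sample below shows, the request token itself travels in the jwtTokenRequest query parameter, while returnMethod is a claim inside the token rather than a URL parameter. A sketch of composing the call URL (host and token values are placeholders):

```python
from urllib.parse import urlencode

# Placeholder token; in practice this is the signed request JWT, whose claims
# may include "returnMethod": "get" to receive the response token via GET
jwt_token = "eyJhbGciOiJIUzI1NiJ9.sample-payload.sample-signature"
base = "https://stg2-cloud.singleid.samsung.net/test-tenant/common-api/open/v1.1/mfa/request"
url = base + "?" + urlencode({"jwtTokenRequest": jwt_token})
```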
Sample
Request
Response
curl -X GET "https://stg2-cloud.singleid.samsung.net/test-tenant/common-api/open/v1.1/mfa/request?jwtTokenRequest=eyJhbGciOiJIUzI1NiJ9.eyJzeXMiOiJ0ZXN0LXN5c3RlbSIsInJlcSI6Ijc2MWVmZDUyLTk3ZDAtNDUxZi05Y2Y5LWNmODY3NDBlN2NhMyIsInVpZCI6Imppbm9uZS5raW0iLCJydG4iOiJodHRwczovL3Rlc3QuY29tL21mYS9yZXNwb25zZSIsIm5iZiI6MTY5ODIzMjA2OCwiZXhwIjoxNjk4MjM5MjY4LCJpYXQiOjE2OTgyMzIwNjh9.cDgKMHIINaHhBiyAd_OIlVvQwmUs0QaXH_RfJ8B_KdY"
The page is redirected.
Table. Sample
Error Code
Http Response Code
Error Code
Error Message
Measures
400
N/A
N/A
Check the token data.
Table. Error Code
API Specification - MFA Consumer Request (Portal Common)
Original token sample data
{
  "sys": "test-system",
  "req": "761efd52-97d0-451f-9cf9-cf86740e7ca3",
  "uid": "gildong.hong",
  "rtn": "https://test.com/mfa/response",
  "email": "gildong.hong@samsung.com",
  "mobile": "+82-1012345678",
  "nbf": 1698232068,
  "exp": 1698239268,
  "iat": 1698232068,
  "displayUid": "gildong.hong@samsung.com"
}
MFA Consumer Home Redirect
registerFlag
Y
query
Boolean
true
Determines whether to move to MFA Consumer Home. If true, it moves to MFA Consumer Home.
Table. Request Parameters
Response
registerFlag = true: Redirects to MFA Consumer Home.
registerFlag = false: Redirects to MFA authentication page.
Sample
Request
Response
curl -X POST "https://stg2-cloud.singleid.samsung.net/test-tenant/common-api/open/v1.1/mfa/request?jwtTokenRequest=eyJhbGciOiJIUzI1NiJ9.eyJzeXMiOiJ0ZXN0LXN5c3RlbSIsInJlcSI6Ijc2MWVmZDUyLTk3ZDAtNDUxZi05Y2Y5LWNmODY3NDBlN2NhMyIsInVpZCI6Imppbm9uZS5raW0iLCJydG4iOiJodHRwczovL3Rlc3QuY29tL21mYS9yZXNwb25zZSIsIm5iZiI6MTY5ODIzMjA2OCwiZXhwIjoxNjk4MjM5MjY4LCJpYXQiOjE2OTgyMzIwNjh9.cDgKMHIINaHhBiyAd_OIlVvQwmUs0QaXH_RfJ8B_KdY&registerFlag=true"
The page is redirected.
Table. Sample
Error Code
Http Response Code
Error Code
Error Message
Measures
400
N/A
N/A
Check the token data.
Table. Error Code
API Specification - Send Email about Anomaly Detection (Tenant Admin Portal)
curl -X GET "https://stg1-cloud.singleid.samsung.net/test-tenant/common-api/open/v1.1/asis/test-tenant/user/mfa/token/authentication?userName=mkdir.kim&protocol=uma-uaf&sessionDataKey=sessionDataKey111&redirectUrl=redirectUrl1111&errorRedirectUrl=errorRedirectUrl1111&params=params111&language=en"
The request is not found. Please contact the administrator.
400
N/A
request.error.invalidStatus
The request status is incorrect. Please contact the administrator.
400
N/A
otp.error.notMatch
The OTP is incorrect. Please check the OTP.
400
N/A
otp.error.tooManyAttempts
Move to the security warning screen (the account is locked due to multiple authentication failures)
Table. Error Code
9.5.2.1.6.1 - ADFS Adapter Guide
ADFS Adapter Guide
Microsoft ADFS (Active Directory Federation Services) is a service that provides SAML/OAuth-based SSO (Single Sign-On) for web services based on AD accounts.
Microsoft supports MFA (multi-factor authentication) for SSO-linked web services through third-party solutions. To do this, an ADFS Adapter must be developed and installed.
There are two main ways to implement an ADFS Adapter:
Server-to-Server Call method
WebClient method
Of the two, the WebClient method minimizes the firewall openings required between the MFA server and AD FS and reuses the CX screens provided by the MFA provider, which keeps the ADFS Adapter lightweight.
Note
The SingleID ADFS Adapter was developed using the WebClient method.
Caution
The diagram in this document assumes a setting that stores the nonce value in LDAP. The nonce value is used to verify the MFA result, and the setting can be changed to store it in the MFA server instead of LDAP.
Please refer to the ADFS Adapter settings manual for more detailed information.
Server-to-Server Call Method
Figure. Server-to-Server Call Method
WebClient Method
Figure. WebClient Method
Internal Operation
Overall Flowchart of Adapter
Figure. Overall Flowchart of Adapter
Flowchart at First Run of Adapter
Figure. Flowchart at First Run of Adapter
Flowchart after MFA (MFA PASS Case)
Figure. Flowchart after MFA (MFA PASS Case)
Flowchart after MFA (MFA FAIL Case)
Figure. Flowchart after MFA (MFA FAIL Case)
Operation by Scenario
Figure. Scenario-based actions
Case #1
The Passcode input screen has timed out because the time limit was exceeded.
When it times out, the “Resend Code” button is activated; click it to retry the Passcode.
Case #2
Incorrect Passcode has been entered.
You can attempt to enter the Passcode up to 3 times.
Case #3
Passcode input has failed 3 times.
You cannot enter the Passcode for 1 minute.
Case #4
Normal MFA process.
Case #5
On the MFA selection screen, the Passcode was not entered, and MFA selection proceeded in a newly opened browser tab.
After that, MFA is successful on the initial tab.
After that, the new tab times out.
Case #6
On the MFA selection screen, the Passcode was not entered, and MFA selection proceeded in a newly opened browser tab.
After that, MFA is successful on the initial tab.
After that, an incorrect Passcode is entered on the new tab.
Case #7
On the MFA selection screen, the Passcode was not entered, and MFA selection proceeded in a newly opened browser tab.
After that, MFA is successful on the initial tab.
After a valid Passcode is entered:
- If both the 1st and 2nd tabs are waiting for Passcode input, then after the 1st tab authenticates, an authentication attempt on the 2nd tab gets no response (page freeze)
- If the 1st tab is waiting for Passcode input and the 2nd tab is waiting for MFA selection, then after the 1st tab authenticates, selecting an MFA type on the 2nd tab results in an error; the error message is displayed by AD before the adapter runs
Scenario-based actions
Adapter installation
Application method
Pre-check
Pre-check
Location
Check item
Note
ADFS server
MFA server connection availability (group network, TCP 80/443)
If nonce is stored in LDAP, MFA server communication is not required
.NET Framework 4.8 installation availability
User PC
MFA server connection availability (internet network, TCP 80/443)
Quality: ops-sopenapi.singleid.samsung.net
Operation: sopenapi.singleid.samsung.net
If connection is not available, check the following three items ① Firewall check ② Proxy check ③ Website block check
Table. Pre-check Items
Adapter deployment
Caution
If multiple ADFS servers are configured, steps 1-4 of the following 7 steps must be applied to all servers.
Upload adapter-related files to the ADFS server
Location: [drive]:\ADFSadapter\
ADFSadapter.dll: Adapter file
ADFSadapter.ini: Adapter configuration file
replace_dll.ps1: script used to replace the installed Adapter with an improved version
restart_adfs.ps1: AD FS service restart script
Assembly_netstandard2.0 folder: DLL files to pre-install before applying the Adapter
Grant full permissions to the ADFS service account for the corresponding folder
Right-click on the C:\ADFSadapter folder > Properties > Security > Add the ADFS service account and select all permissions
※ The ADFS service account can be checked via services.msc > AD FS service > “Log On As”
Registry addition
Create a registry to record Adapter-related events in the Windows event log
Create a key and two values under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog
- Created key: MFA_Adapter
- Create two values in MFA_Adapter
. Name: AutoBackupLogFiles
. Type: DWORD (32-bit) value (REG_DWORD)
. Data: 0
. Name: MaxSize
. Type: DWORD (32-bit) value (REG_DWORD)
. Data: hexadecimal 80000
Create a key and one value under MFA_Adapter
- Created key: AdapterDLL
- Create one value in AdapterDLL
. Name: EventMessageFile
. Type: expandable string value (REG_EXPAND_SZ)
. Data: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\EventLogMessages.dll
Adapter required dll pre-installation
The Assembly_netstandard2.0 folder in C:\ADFSadapter contains a collection of libraries that need to be pre-installed. Refer to the following for the installation work.
When the ADFS Adapter runs, it loads the necessary assemblies, so these DLLs must be installed in the Global Assembly Cache of the ADFS server beforehand.
DLL installation
# Pre-work
Unzip and copy the Assembly_netstandard2.0 folder to the C:\ADFSadapter folder.
# Run PowerShell with administrator privileges and move to the execution location
PS>cd C:\ADFSadapter
# DLL installation
PS>.\gacutil.exe /il .\Assembly_netstandard2.0\AssemblyList.txt
# DLL verification
PS>.\gacutil.exe /l
Notice
The Assembly_netstandard2.0.zip file can be requested separately via email (singleid.scp@samsung.com).
The necessary assembly files are stored in the Assembly_netstandard2.0 folder, and the files can be copied to the server and installed offline.
Assembly_netstandard2.0 folder: dll files for installing Microsoft.IdentityModel.Tokens v7.2, System.IdentityModel.Tokens.Jwt v7.2 assemblies (including all dependency files)
Installed dll list
Assembly Name
Installation Version
Package Version
Microsoft.Bcl.AsyncInterfaces
1.0.0.0
1.0.0
Microsoft.IdentityModel.Abstractions
7.2.0.0
7.2.0
Microsoft.IdentityModel.JsonWebTokens
7.2.0.0
7.2.0
Microsoft.IdentityModel.Logging
7.2.0.0
7.2.0
Microsoft.IdentityModel.Tokens
7.2.0.0
7.2.0
System.Buffers
4.0.3.0
4.5.1
System.IdentityModel.Tokens.Jwt
7.2.0.0
7.2.0
System.Memory
4.0.1.1
4.5.3
System.Numerics.Vectors
4.1.4.0
4.5.0
Microsoft.CSharp
4.0.4.0
4.5.0
System.Runtime.CompilerServices.Unsafe
4.0.4.1
4.5.3
System.Security.Cryptography.Cng
4.3.0.0
5.0.0
System.Text.Encodings.Web
4.0.5.1
4.7.2
System.Text.Json
4.0.1.2
4.7.2
System.Threading.Tasks.Extensions
4.2.0.1
4.5.4
Table. Installed DLL List
Adapter Deployment
Note that the DLLs were downloaded and installed as NuGet packages, so the NuGet package version may differ from the assembly version installed on the server.
The DLLs target .NET Standard 2.0, which is supported by .NET Framework 4.8.
Adapter Application
Run PowerShell in administrator mode and execute the following commands
# Move to execution location
PS>cd C:\ADFSadapter
# Register dll
PS>./gacutil.exe /if ADFSadapter.dll
# Check dll
PS>./gacutil.exe /l ADFSadapter
The following assembly is in the global assembly cache.
ADFSadapter, Version=1.2.1.0, Culture=neutral, PublicKeyToken=0e0fe476002e81b3, processorArchitecture=MSIL
# Register as authentication provider in ADFS
PS>$typename="ADFSadapter.AuthenticationAdapter, ADFSadapter, Version=1.2.1.0, Culture=neutral, PublicKeyToken=0e0fe476002e81b3, processorArchitecture=MSIL"
PS>Register-AdfsAuthenticationProvider -TypeName $typename -Name "ADFSadapter"
# Check authentication provider in ADFS
PS>Get-AdfsAuthenticationProvider
AdminName : ADFS MFA Adapter
AllowedForPrimaryExtranet : False
AllowedForPrimaryIntranet : False
AllowedForAdditionalAuthentication : True
AuthenticationMethods : {http://schemas.microsoft.com/ws/2012/12/authmethod/otp}
Descriptions : {[1033, ADFS MFA Adapter], [1042, ADFS MFA Adapter]}
DisplayNames : {[1033, ADFS MFA Adapter], [1042, ADFS MFA Adapter]}
Name : ADFSadapter
IdentityClaims : {http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn}
IsCustom : True
RequiresIdentity : True
# Restart ADFS service
PS>net stop adfssrv
PS>net start adfssrv
#Move to the execution location
PS>cd C:\ADFSadapter
#Register dll
PS>./gacutil.exe /if ADFSadapter.dll
#Check dll
PS>./gacutil.exe /l ADFSadapter
The following assembly is in the global assembly cache.
ADFSadapter, Version=1.2.1.0, Culture=neutral, PublicKeyToken=0e0fe476002e81b3, processorArchitecture=MSIL
#Restart ADFS service
PS>net stop adfssrv
PS>net start adfssrv
Set up ADFS multi-factor authentication method
AD FS Management > Services > Authentication Methods > Multi-factor Authentication Methods > Click Edit and select the created mfa (ADFS MFA Adapter) and apply (multiple selections are possible)
Apply MFA policy to relying party trust
AD FS Management > Relying Party Trusts > Select the relying party trust to apply > Edit Access Control Policy > Select ‘Allow all users and require MFA’ and apply
Adapter Upgrade and Change
This method is used when the ADFS MFA Adapter is already registered and needs to be upgraded or changed. The replacement can be completed simply by running this script.
#Move to the execution location and upload the changed Adapter.dll file
PS>cd C:\ADFSadapter
#Perform adapter replacement
PS>./replace_dll.ps1
When the confirmation prompt appears, select Yes (Y) or Yes to All (A)
- Yes (Y) or Yes to All (A): removes the existing Adapter from ADFS and proceeds with the replacement (normal procedure)
- No (N) or No to All (L): does not remove the Adapter and proceeds to the next step, resulting in an error
- Suspend (S): suspends the script
Note
※ Perform on both primary and secondary servers.
On secondary servers, an error occurs at the ADFS registration step, but the script must still be run to install the DLL.
Adapter Settings
Description of the Adapter environment setting file.
You must configure the environment before applying the ADFS Adapter.
Guide
Adapter installation location change
From adapter 1.2.0.6, installation is possible on drives other than C.
Existing: installed only on C:\ADFSadapter
Changed: can be installed on the root of any drive from C to Z
Example: C:\ADFSadapter, D:\ADFSadapter, E:\ADFSadapter, …, Z:\ADFSadapter
Precautions: The adapter can be installed on only one drive; if it is installed on multiple drives, the first directory discovered while scanning from C to Z is used
The following example is the case where the adapter is installed in the C:\ADFSadapter directory.
If installed on a drive other than C, only the drive name (drive letter) in the example below needs to be changed.
Example: If installed in D:\ADFSadapter, the ini path is → D:\ADFSadapter\ADFSadapter.ini
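The drive-scan rule above can be sketched as follows. This is a simulation of the described behavior, not the adapter's actual code; the isdir hook stands in for the file-system check so the scan can be exercised anywhere.

```python
import os
import string

def find_adapter_dir(isdir=os.path.isdir):
    r"""Return the first <drive>:\ADFSadapter directory found, scanning C through Z.

    Only the first directory discovered wins, matching the precaution above.
    """
    for letter in string.ascii_uppercase[2:]:  # "C" .. "Z"
        path = f"{letter}:\\ADFSadapter"
        if isdir(path):
            return path
    return None
```

For example, if the folder exists on both D: and E:, the D: installation is used because D is discovered first in the C-to-Z scan.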
File Name and Path
File Name → ADFSadapter.ini
Full Path → C:\ADFSadapter\ADFSadapter.ini
File Encoding → Must be saved in UTF-8 (otherwise, Korean characters will be corrupted)
Things to Keep in Mind
When expressing values, double quotes (") and single quotes (') can be used, and spaces can be entered on either side of =
Spaces before and after the Value are trimmed
The following Values are all the same
Example 1) MAIN_TITLE=DWP MFA Adapter
Example 2) MAIN_TITLE = DWP MFA Adapter
Example 3) MAIN_TITLE = “DWP MFA Adapter”
Example 4) MAIN_TITLE = " DWP MFA Adapter "
Section names with -1033, -1042 at the end represent locale
At least 1033 must exist.
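The value-normalization rules above imply handling like the following sketch, inferred from the four equivalent MAIN_TITLE examples; it is illustrative, not the adapter's actual parser.

```python
def parse_ini_value(raw: str) -> str:
    """Trim surrounding spaces, strip one matching pair of double or single
    quotes, then trim the spaces inside the quotes as well."""
    value = raw.strip()
    if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
        value = value[1:-1].strip()
    return value

# The four example forms from the text, all equivalent after normalization
examples = [
    "DWP MFA Adapter",        # Example 1
    " DWP MFA Adapter ",      # Example 2
    '"DWP MFA Adapter"',      # Example 3
    '" DWP MFA Adapter "',    # Example 4
]
```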
# ADFS MFA Adapter Environment Settings
# Installation location changes
# - Before v1.2.0.6: C:\ADFSadapter\ADFSadapter.ini
# - From v1.2.0.6: Can be installed on a drive other than C (same location as adapter resource installation)
# Example: C:\ADFSadapter\ADFSadapter.ini, D:\ADFSadapter\ADFSadapter.ini, E:\ADFSadapter\ADFSadapter.ini
# Note: The DLL file name is ADFSadapter.dll, which is different from the existing MFAadapter.dll linked to Nexsign
# When expressing values, " and ' can be used, and spaces can be entered on both sides of =
# Spaces before and after the value are trimmed.
# The following values are all the same.
# Example 1) MAIN_TITLE=ADFS MFA Adapter
# Example 2) MAIN_TITLE = ADFS MFA Adapter
# Example 3) MAIN_TITLE = "ADFS MFA Adapter"
# Example 4) MAIN_TITLE = " ADFS MFA Adapter "
# Among the section names, those with -1033, -1042 at the end mean locale
# At least 1033 must exist
# Locale number: 1033 (en-us), 1042 (ko)
# Locale section: MFA-1033, MFA-1042, TXT-1033, TXT-1042, MSG-1033, MSG-1042
# LOG_LEVEL (criteria for recording in Windows event log)
# 0: Error
# 1: Error + Warning
# 2: Error + Warning + Information + Debug
[MAIN]
MAIN_MFA_TITLE="ADFS MFA Adapter"
MAIN_CLAIM1=http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod
MAIN_CLAIM2=http://schemas.microsoft.com/ws/2012/12/authmethod/otp
# MFA API Information
# Do not add "/" at the end of the URL
#MFA_API_URL="https://stg2-cloud.singleid.samsung.net/test/common-api/open/v1.1/mfa/request"
MFA_API_URL="https://stg1-cloud.singleid.samsung.net/test/common-api/open/v1.1/mfa/request"
CONSUMER_KEY="**************************************"
SECRET_KEY="**************************************"
# Domain vs Consumer Key List
# If the Consumer Key is different for each domain, list it (in this case, leave the CONSUMER_KEY value above blank)
# Insert the sys value of the Request Token
# Format: DOMAIN_CONSUMER_KEY_##=domain;consumerKey
# Example: DOMAIN_CONSUMER_KEY_01=aaa.com;**************************************
# DOMAIN_CONSUMER_KEY_02=bbb.com;**************************************
# (Note) If both CONSUMER_KEY value and list value exist, only the CONSUMER_KEY value is used
DOMAIN_CONSUMER_KEY_01=aaa.com;**************************************
DOMAIN_CONSUMER_KEY_02=bbb.com;**************************************
# Domain vs Secret Key List
# If the Secret Key is different for each domain, list it (in this case, leave the SECRET_KEY value above blank)
# Format: DOMAIN_SECRET_KEY_##=domain;secretKey
# Example: DOMAIN_SECRET_KEY_01=aaa.com;**************************************
# DOMAIN_SECRET_KEY_02=bbb.com;**************************************
# (Note) If both SECRET_KEY value and list value exist, only the SECRET_KEY value is used
DOMAIN_SECRET_KEY_01=aaa.com;**************************************
DOMAIN_SECRET_KEY_02=bbb.com;**************************************
# LDAP Search result-based MFA progress
# 0 : Do not perform LDAP Search (do not use the information below, such as LDAP_SERVER, LDAP_USE_IDPW, etc. Insert an empty value into the token)
# 1 : Attempt LDAP Search, but failure is irrelevant (proceed with MFA even if server failure or no information occurs. Insert an empty value into the token)
# 2 : LDAP Search must be successful and user information must exist (proceed only if user information exists. However, proceed even if the result value is empty)
USE_LDAP_SEARCH=1
# LDAP address and ID/PW
# LDAP_SERVER can be domain, ipv4, ipv6, and must be prefixed with "LDAP://" in uppercase (must be uppercase)
# Example: LDAP://adpw5004.hw.dev , LDAP://70.2.180.218 , LDAP://fe80::644b:3c9f:c5ac:ce1c%10
# Set LDAP_USE_IDPW to 1 to use ID/PW, and set to 0 not to use
# Set LDAP_SSLTLS to 1 to use SSL/TLS, and set to 0 not to use (only applicable when LDAP_USE_IDPW=1)
LDAP_SERVER="LDAP://adpw5004.hw.dev"
LDAP_USE_IDPW=1
LDAP_SSLTLS=1
LDAP_ID="isadmin"
LDAP_PW="sds*****"
# Perform DNS Lookup to verify the IP address of the LDAP server (LDAP_SERVER) and connect based on the IP address
# Even if the LDAP_SERVER value is set to IP (ipv4, ipv6), DNS Lookup is performed and the IP is returned as is
# If DNS Lookup fails, connect using the LDAP_SERVER value as is
# 0 : Connect to the server using the LDAP_SERVER value as is (do not perform DNS lookup)
# 1 : Connect to the LDAP server using the IP address verified through DNS lookup (use the first IP in the DNS lookup result list)
# 2 : Verify the IP address through DNS lookup and use the first matching IP in the LDAP_WHITE_IP_## list (use the LDAP_SERVER if not found in the list)
# 3 : Verify the IP address through DNS lookup and use the first matching IP in the LDAP_WHITE_IP_## list (do not connect to the LDAP server if not found in the list)
LDAP_DNS_LOOKUP=1
# DNS Lookup result has multiple entries, try to connect to the next IP address if the first one fails
# Example: 4 lookup results: 1st IP connection fails -> try 2nd IP & fail -> try 3rd IP & fail -> try 4th IP
LDAP_DNS_IF_FAIL_USE_NEXT=1
# List of allowed LDAP server IP addresses to compare with DNS Lookup results (only applicable when LDAP_DNS_LOOKUP = 2 or 3)
# In the format of LDAP_WHITE_IP_##, recorded sequentially from 01 to 99
# Compare DNS Lookup results with the list sequentially
# Record in IPv4 or IPv6 format (if the same server has both IPv4 and IPv6, the one with higher priority in the list is applied)
# If the order of DNS Lookup results and White IP list is different, follow the order of White IP list
LDAP_WHITE_IP_01="70.2.180.218"
LDAP_WHITE_IP_02="fe80::644b:3c9f:c5ac:ce1c%10"
# Whether to encrypt user information (e.g., mobile, email, etc.)
# Target: USERINFO_## list
# Depending on the encryption, the claim name of the token sent to the API server is different
# 0: Do not encrypt -> token claim name is plainMobile, plainEmail
# 1: Encrypt -> token claim name is mobile, email
USERINFO_ENCRYPT=0
# LDAP Search user information attribute name and JWT token claim name (delimiter between two values = ";")
# Format: USERINFO_## = attribute;encryptedClaim;plainClaim
# Example: If "mail" attribute is read from LDAP and used as "email" claim in JWT, then "mail;email;plainEmail"
# Key name is in the format "USERINFO_##", starting with USERINFO_01
# Number of keys: 0 to a maximum of 99 (if 0, do not write anything in the ini file, and do not write USERINFO_00)
# Note) In USERINFO_##, the number corresponding to ## must start from 01 and not be interrupted if there are multiple
# USERINFO_01, USERINFO_02, USERINFO_03: OK (01, 02, 03 information is used)
# USERINFO_01, USERINFO_02, USERINFO_05: only read up to 02, and do not use the numbers after the interruption (01, 02 information is used)
USERINFO_01=mobile;mobile;plainMobile
USERINFO_02=mail;email;plainEmail
# MFA API server's callback result parameter key name
# Example: https://adpw5004.hw.dev/adfs/ls?client-request-id=xxxxxx&pullStatus=0&jwtTokenResponse=yyyyyy
KEY_NAME_IN_RESPONSE="jwtTokenResponse"
# JWT Token's exp additional value
# Format: dhms (day, hour, minute, second) string -> 1d=86400, 1h=3600, 1m=60 (simple number without dhms is considered as seconds)
# Example 1: 1d02h38m27s -> 95907 seconds
# Example 2: 12345 -> 12345 seconds
TOKEN_EXP_TIME=1d
# Whether to add client claim to the token when calling the API
# Client: issuer for SAML, client-id for OIDC
# 0: Do not include client in the token
# 1: Include client in the token
TOKEN_CLAIM_CLIENT=0
# MFA nonce (guid, request-id) verification method
# 0: Do not verify
# 1: Adapter generates guid and stores/compares it in LDAP (adapter verifies)
# -> Related settings: CACHE_ATTRIBUTE, CACHE_DELIMETER, SKEW_SECONDS, CACHE_LIFE_TIME
# 2: API server generates request-id and adapter uses it in the call URL (API server verifies)
# -> Related settings: MFA_VERIFY_URL
MFA_VERIFY_TYPE=2
# MFA result verification URL (server to server communication) : Appends the {request-id} received from the API server to the end of the URL
# The adapter checks if the return is 200 (OK) to process the MFA result
# Do not add a "/" at the end of the URL
MFA_VERIFY_URL="https://stg1-cloud.iam.samsung.net/test/common-api/open/v1.1/mfa/request/status"
# Security protocol used for MFA result verification
# Available protocols (case-insensitive) : TLS12, TLS13
# (Note) Do not use SSL3, TLS, TLS11
MFA_VERIFY_SECURE_PROTOCOL="TLS12"
# Name of the LDAP attribute to store the user's req guid value
# (Note) Write permission to LDAP is required
CACHE_ATTRIBUTE="otherPager"
# Delimiter used to combine req and time information stored in LDAP -> "req;time"
CACHE_DELIMETER=";"
# Allowed time difference (in seconds) between the time stored in LDAP and the time the JWT is received
# This is the time after AD login, not when the MFA selection screen is displayed (the time is already stored when the MFA selection screen is displayed)
# Therefore, the time should not be set too tightly; around 1 hour is a reasonable value
SKEW_SECONDS=3600
# Time to live for req stored in LDAP -> Check time on next access and delete old ones
# Format: String in dhms (day, hour, minute, second) format -> 1d=86400, 1h=3600, 1m=60 (numbers without dhms are considered seconds)
# Example 1: 1d02h38m27s -> 95907 seconds
# Example 2: 12345 -> 12345 seconds
CACHE_LIFE_TIME=1d
# Whether to bypass adapter functionality (0=normal use, 283901=disable, other values=normal use)
# For emergency situations where adapter functionality needs to be disabled due to MFA issues
# Do not modify this value under normal circumstances -> Normal value is 0
# Note: To disable, the exact value must be set (not just any non-zero number, exact number required to avoid noise)
BYPASS_ADAPTER=0
[API]
API_SYSTEMNAME=SingleID
[MSG-1033]
MSG_INTERNAL_ERROR="Internal error occurred. Contact administrator."
[MSG-1042]
MSG_INTERNAL_ERROR="Internal error occurred. Contact administrator."
[MANAGE]
LOG_LEVEL=2
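The dhms duration format used by TOKEN_EXP_TIME and CACHE_LIFE_TIME above can be converted to seconds with a sketch like this (illustrative, not the adapter's code):

```python
import re

def parse_dhms(value: str) -> int:
    """Convert a dhms duration string to seconds.
    1d = 86400, 1h = 3600, 1m = 60; a bare number is treated as seconds."""
    value = value.strip()
    if value.isdigit():
        return int(value)
    units = {"d": 86400, "h": 3600, "m": 60, "s": 1}
    return sum(int(n) * units[u] for n, u in re.findall(r"(\d+)([dhms])", value))
```

This reproduces the documented examples: "1d02h38m27s" → 95907 seconds and "12345" → 12345 seconds.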
Setting Value Description
Fixed: an “O” in the Fixed column means the value shown in the “Setting Value (Example)” column is used as is when installing on the ADFS server.
To add languages other than English and Korean, add locale sections following the pattern of the two sections MSG-1033 and MSG-1042.
Key
Setting Value (Example)
Fixed
Description
MAIN
MAIN_MFA_TITLE
ADFS MFA Adapter
O
HTML page title (does not affect MFA functionality)
MFA_API_URL
https://stg1-cloud.singleid.samsung.net/test/common-api/open/v1.1/mfa/request
SingleID MFA API address. The address may vary depending on the tenant, so check the exact address value
CONSUMER_KEY
4312a8b9-75c4-7897-89a7-89347f18943e
Consumer Key issued by SingleID
SECRET_KEY
gQgkyLVO6FR8vJkLtlgBiupsRM/ilgrbEfoKWRnhALd=
Secret Key issued by SingleID Used for JWT Signature verification Absolute prohibition on external disclosure
DOMAIN_CONSUMER_KEY_01
4312a8b9-75c4-7897-89a7-89347f18943e
Domain vs Consumer Key list If the Consumer Key is different for each domain, list it (in this case, leave the CONSUMER_KEY value blank) Format: DOMAIN_CONSUMER_KEY_##=domain;consumerKey Example: DOMAIN_CONSUMER_KEY_01=aaa.com;4312a8b9-75c4-7897-89a7-89347f18943e DOMAIN_CONSUMER_KEY_02=bbb.com;96567780-2b12-23da-637c-9375a6502d5a (Note) If both CONSUMER_KEY value and list value exist, only CONSUMER_KEY value is used
DOMAIN_CONSUMER_KEY_02
96567780-2b12-23da-637c-9375a6502d5a
DOMAIN_CONSUMER_KEY_##
367c89d5-88f7-978a-9739-8ed21748f36b
DOMAIN_SECRET_KEY_01
gQgkyLVO6FR8vJkLtlgBiupsRM/ilgrbEfoKWRnhALd=
Domain vs Secret Key list If the Secret Key is different for each domain, list it (in this case, leave the SECRET_KEY value blank) Format: DOMAIN_SECRET_KEY_##=domain;secretKey Example: DOMAIN_SECRET_KEY_01=aaa.com;gQgkyLVO6FR8vJkLtlgBiupsRM/ilgrbEfoKWRnhALd= DOMAIN_SECRET_KEY_02=bbb.com;kgkWRnLygQhsRgrLVbtKlO6FiLdABupEgoMR8v/ilfJ= (Note) If both SECRET_KEY value and list value exist, only SECRET_KEY value is used
DOMAIN_SECRET_KEY_02
kgkWRnLygQhsRgrLVbtKlO6FiLdABupEgoMR8v/ilfJ=
DOMAIN_SECRET_KEY_##
dABupkRnLygQhsrLgWVRbt8vRgkLilLKlO1FioMgfJE=
USE_LDAP_SEARCH
0 or 1 or 2
MFA progress based on LDAP Search result 0 : Do not perform LDAP Search (do not use the information below, such as LDAP_SERVER, LDAP_USE_IDPW, etc. and insert an empty value into the token) 1 : Try LDAP Search, but proceed with MFA even if it fails (proceed with MFA even if server failure or no information occurs, and insert an empty value into the token) 2 : Proceed with MFA only if LDAP Search is successful and user information exists (proceed only when user information exists, but proceed even if the result value is empty)
LDAP_SERVER
LDAP://adpw5004.hw.dev
LDAP address that can query AD user information Domain, IPv4, and IPv6 are all possible, and “LDAP://” must be attached to the beginning
LDAP_USE_IDPW
0 or 1
Whether to use id/pw when accessing LDAP The adapter operates with system privileges, so it is common to access LDAP without id/pw, but there are cases where it is not If there is an AD connection error in the event log while the id/pw is not used for connection, it is necessary to set it to use id/pw If this value is set to 1, LDAP_ID and LDAP_PW values must be set
LDAP_SSLTLS
0 or 1
Whether to use SSL/TLS when connecting to LDAP Generally, it is set to use
LDAP_ID
LDAP connection id
LDAP connection id (when LDAP_USE_IDPW=1)
LDAP_PW
LDAP connection pw
LDAP connection pw (when LDAP_USE_IDPW=1)
LDAP_DNS_LOOKUP
0 or 1 or 2 or 3
Whether to perform DNS Lookup to check the IP address of the LDAP server (LDAP_SERVER) and connect based on the IP address 0 : Connect to the server with the LDAP_SERVER value as is (do not perform DNS lookup) 1 : Perform DNS lookup to check the IP address and connect to the LDAP server (use the first IP in the DNS lookup result list) 2 : Perform DNS lookup to check the IP address and use the first IP that matches the LDAP_WHITE_IP_## list (if not in the list, use the LDAP_SERVER value) 3 : Perform DNS lookup to check the IP address and use the first IP that matches the LDAP_WHITE_IP_## list (if not in the list, do not connect to LDAP)
LDAP_DNS_IF_FAIL_USE_NEXT
0 or 1
Whether to try the next IP address when the first IP address fails to connect after performing DNS lookup Example: If there are 4 lookup results, try to connect to the first IP, and if it fails, try to connect to the second IP, and if it fails, try to connect to the third IP, and if it fails, try to connect to the fourth IP
LDAP_WHITE_IP_01
70.2.180.218
List of allowed LDAP server IP addresses for comparison with DNS lookup results (only applicable when LDAP_DNS_LOOKUP = 2 or 3) Format: LDAP_WHITE_IP_##, recorded sequentially from 01 to 99 Compare DNS lookup results with the list in sequence Recorded in IPv4 or IPv6 format (if the same server has both IPv4 and IPv6, the IP in the higher order of the list is applied) If the DNS lookup result order and the White IP list order are different, follow the White IP list order
LDAP_WHITE_IP_02
fe80::644b:3c9f:c5ac:ce1c%10
LDAP_WHITE_IP_##
## : 01 ~ 99, White IP address (IPv4 or IPv6)
USERINFO_ENCRYPT
0 or 1
Whether to encrypt user information (e.g., mobile, email, etc.) Target: USERINFO_## list The claim name of the token sent to the API server differs depending on the encryption 0: Not encrypted -> token claim name is plainMobile, plainEmail 1: Encrypted -> token claim name is mobile, email
USERINFO_01
mobile;mobile;plainMobile
O
LDAP Search user information attribute name and JWT token claim name (delimiter to separate three values = “;”) Format: USERINFO_## = attribute; encryptedClaim; plainClaim Example: If you read the “mail” attribute from LDAP and use the encrypted value as the “email” claim and the plain text value as the “plainEmail” claim in the JWT → “mail;email;plainEmail”
Value added to the exp of the JWT token
String in day/hour/minute/second (dhms) format: 1d = 86400, 1h = 3600, 1m = 60
If no d/h/m/s unit is present, the value is treated as seconds
Example 1: 1d02h38m27s → 95907 seconds
Example 2: 12345 → 12345 seconds
TOKEN_CLAIM_CLIENT
0 or 1
Whether to add the client claim to the token when calling the API Client: issuer in the case of SAML, client-id in the case of OIDC 0: Do not include the client in the token 1: Include the client in the token
MFA_VERIFY_TYPE
0 or 1 or 2
MFA nonce (guid, request-id) verification method
0: Do not verify
1: Store the guid created by the adapter in LDAP and compare it (verified by the adapter) → related settings: CACHE_ATTRIBUTE, CACHE_DELIMETER, SKEW_SECONDS, CACHE_LIFE_TIME
2: Use the request-id created by the API server in the verification call URL (verified by the API server) → related setting: MFA_VERIFY_URL
MFA result verification URL (server-to-server communication)
The {request-id} received from the API server is appended to the end of the URL and called → the adapter checks whether the return is 200 (OK) to process the MFA result
Do not add a “/” at the end of the URL
MFA_VERIFY_SECURE_PROTOCOL
TLS12 or TLS13
Secure protocol used for MFA result verification Selectable protocols (case-insensitive): TLS12, TLS13 (Note) Do not use SSL3, TLS, or TLS11
CACHE_ATTRIBUTE
otherPager
O
Name of the LDAP attribute to store the user’s req guid value
CACHE_DELIMETER
“;”
Delimiter used to combine the req and time information when storing in LDAP -> “req;time”
SKEW_SECONDS
3600
Allowed time difference (in seconds) between the time stored in LDAP and the time received in the JWT
The stored time is the time right after AD login, not the time when the MFA selection screen is displayed (the value has already been stored by the time that screen appears)
Therefore, do not set this too tightly; about 1 hour (3600 seconds) is a reasonable allowance
CACHE_LIFE_TIME
1d
Lifetime of the req stored in LDAP → old entries are deleted when the time is checked at the next access
String in day/hour/minute/second (dhms) format: 1d = 86400, 1h = 3600, 1m = 60
If no d/h/m/s unit is present, the value is treated as seconds
BYPASS_ADAPTER
0 or 283901
Whether to bypass the adapter function (0 = normal use, 283901 = disabled, any other value = normal use)
Used in emergency situations where the adapter function must be disabled due to an MFA malfunction
Do not modify this value in normal situations → the normal value is 0
Note: to disable the adapter, the exact value 283901 is required; any other non-zero number has no effect, so enter it carefully
API
API_SYSTEMNAME
SingleID
O
(No effect on MFA function)
MSG-1033
MSG_INTERNAL_ERROR
“Internal error occurred. Contact administrator.”
Message to display to the user when stopping due to authentication interruption, error occurrence, etc. (English)
MSG-1042
MSG_INTERNAL_ERROR
“Internal error occurred. Contact administrator.”
Message to display to the user when stopping due to authentication interruption, error occurrence, etc. (English)
Enter the message in English; entering it in Korean will cause an error
MANAGE
LOG_LEVEL
0, 1, or 2
Standard for recording in the Windows event log
0 = record only errors
1 = record errors and warnings
2 = record everything, including errors, warnings, and information
Table. Setting value description
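Several duration settings in the table above (for example TOKEN_EXP_TIME and CACHE_LIFE_TIME) use the dhms string format. The following is a minimal sketch of how such a value could be interpreted; `parse_dhms` is a hypothetical helper for illustration, not part of the adapter:

```python
import re

def parse_dhms(value: str) -> int:
    """Convert a dhms duration string to seconds.
    A bare number (no d/h/m/s unit) is treated as seconds."""
    if value.isdigit():
        return int(value)
    units = {"d": 86400, "h": 3600, "m": 60, "s": 1}
    total = 0
    # Collect every "<number><unit>" pair, e.g. "1d", "02h", "38m", "27s"
    for amount, unit in re.findall(r"(\d+)([dhms])", value):
        total += int(amount) * units[unit]
    return total

print(parse_dhms("1d02h38m27s"))  # 95907
print(parse_dhms("12345"))        # 12345
```

This matches the examples in the table: 1d02h38m27s = 86400 + 7200 + 2280 + 27 = 95907 seconds.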
INI Setting Method
LDAP Search related
Using DNS Lookup with the hostName of the LDAP server
Using only the first address among multiple DNS Lookup results
Attempting to connect to multiple DNS Lookup results in sequence
Using id/pw when connecting to the LDAP server
Using a specific LDAP attribute name and JWT token claim name (USERINFO_##)
Only allowed LDAP addresses can be accessed (White IP list)
Set user attributes to be retrieved from LDAP
API connection related
Whether to encrypt user information included in the token sent to the API server
MFA integrity verification method: Verified by the adapter
MFA integrity verification method: Verified by the API server
Others
Options that should never be changed
Options that must be issued and set by the SingleID operations department
Options that need to be set according to the installation environment
Caution
The consumer key and secret key used on this page are sample data. (fake value)
LDAP Search related
When using DNS Lookup with the hostName of the LDAP server
The beginning of the LDAP server address must be “LDAP://” in uppercase.
Testing on the development server confirmed that a lowercase prefix prevents the connection.
If DNS Lookup fails, the LDAP_SERVER value is used as the LDAP connection address.
When you want to use only the first address among multiple DNS Lookup results
LDAP_DNS_LOOKUP=1
LDAP_DNS_IF_FAIL_USE_NEXT=0
DNS lookup result is as follows,
IP1 = 10.10.10.10
IP2 = 10.10.10.20
IP3 = 10.10.10.30
Since LDAP_DNS_IF_FAIL_USE_NEXT=0,
it attempts to connect to IP1 only and stops regardless of success or failure.
Therefore, setting LDAP_DNS_IF_FAIL_USE_NEXT=0 requires caution.
When you want to attempt to connect to all DNS lookup results sequentially
LDAP_DNS_LOOKUP=1
LDAP_DNS_IF_FAIL_USE_NEXT=1
DNS lookup result is as follows,
IP1 = 10.10.10.10
IP2 = 10.10.10.20
IP3 = 10.10.10.30
Since LDAP_DNS_IF_FAIL_USE_NEXT=1,
it attempts to connect to IP1, IP2, and IP3 sequentially until it succeeds.
For example, if it succeeds in connecting to IP2, it will not attempt to connect to IP3.
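The two connection strategies above can be sketched as follows. This is an illustration only, assuming a `try_connect` callable that stands in for the real LDAP connection attempt:

```python
def choose_connection(addresses, try_connect, use_next: bool):
    """Try LDAP addresses in list order.

    use_next -- LDAP_DNS_IF_FAIL_USE_NEXT:
                False (=0) stops after the first address regardless of outcome,
                True  (=1) keeps trying until one connection succeeds.
    """
    for addr in addresses:
        if try_connect(addr):
            return addr       # first success wins; remaining IPs are skipped
        if not use_next:
            return None       # =0: give up after the first attempt
    return None               # all attempts failed

ips = ["10.10.10.10", "10.10.10.20", "10.10.10.30"]
reachable = lambda a: a == "10.10.10.20"   # pretend only IP2 is up
print(choose_connection(ips, reachable, use_next=False))  # None
print(choose_connection(ips, reachable, use_next=True))   # 10.10.10.20
```

As the doc notes, with use_next=True a success at IP2 means IP3 is never attempted.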
LDAP Server Connection using ID/PW
LDAP_ID="******"
LDAP_PW="******"
MFA operates with system privileges, so ID/PW may not be necessary.
If LDAP connection is not possible without ID/PW (can be checked in server logs),
please try setting it to use ID/PW.
Allowing only permitted LDAP addresses (White IP list)
LDAP_DNS_LOOKUP=2
or
LDAP_DNS_LOOKUP=3
LDAP_WHITE_IP_01="70.2.180.218"
LDAP_WHITE_IP_02="fe80::644b:3c9f:c5ac:ce1c%10"
This method does not use the DNS Lookup result directly,
but compares it with the White IP list and only uses addresses that belong to the list.
For example, if the DNS Lookup result is as follows,
IP1 = 10.10.10.10
IP2 = 10.10.10.20
IP3 = 10.10.10.30
And the White IP list is as follows,
WIP1 = 10.10.10.20
WIP2 = 10.10.10.40
The actual address used is IP2 = WIP1 = 10.10.10.20.
The order follows the White IP list order.
In the following case, the server attempts to connect in the order of 10.10.10.30, 10.10.10.20.
IP1 = 10.10.10.10
IP2 = 10.10.10.20
IP3 = 10.10.10.30
WIP1 = 10.10.10.30
WIP2 = 10.10.10.20
If there is no White IP list,
LDAP_DNS_LOOKUP=2 → The LDAP_SERVER value is used directly as the LDAP connection address.
LDAP_DNS_LOOKUP=3 → No connection to the LDAP server is made. (An option that requires caution when using)
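The White IP filtering rules above can be sketched like this; `build_address_list` is a hypothetical illustration of the described behavior, not the adapter's code:

```python
def build_address_list(dns_results, white_ips, ldap_server, mode):
    """Build the LDAP address list for LDAP_DNS_LOOKUP=2 or 3.

    Only addresses present in both the DNS lookup result and the White IP
    list are used, in White IP list order. If nothing matches:
    mode 2 falls back to LDAP_SERVER, mode 3 connects to nothing.
    """
    matched = [ip for ip in white_ips if ip in dns_results]
    if matched:
        return matched
    return [ldap_server] if mode == 2 else []

dns   = ["10.10.10.10", "10.10.10.20", "10.10.10.30"]
white = ["10.10.10.30", "10.10.10.20"]
print(build_address_list(dns, white, "LDAP://example", 2))
# ['10.10.10.30', '10.10.10.20'] -- White IP list order wins
```

This reproduces the example above: connection is attempted in the order 10.10.10.30, then 10.10.10.20.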
User information claim to be included in the Request Token to be sent to the MFA API server.
It retrieves a list of items from LDAP based on the number set in the ini file and includes the results in the token to be sent to the MFA API server.
The configuration rules can be found in the table on the page below, under the “USERINFO_##” description.
If the LDAP query results are as follows, like the sample above,
The Request Token will be composed as follows.
If the query results are empty, they will be included in the token as is (like plainCompany and plainDepartment below).
To avoid querying LDAP, you can leave the setting value empty or comment it out.
In this case, the token will not contain user information.
USERINFO_01=
or
#USERINFO_01=mobile;mobile;plainMobile
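A USERINFO_## value holds three fields joined by ";". The following sketch shows how such a value splits into the LDAP attribute and the two claim names, and how USERINFO_ENCRYPT selects between them; the helper names are hypothetical:

```python
def parse_userinfo(setting: str) -> dict:
    """Split "attribute;encryptedClaim;plainClaim" (delimiter ';')."""
    attribute, encrypted_claim, plain_claim = setting.split(";")
    return {"attribute": attribute,
            "encryptedClaim": encrypted_claim,
            "plainClaim": plain_claim}

def claim_name(entry: dict, encrypt: bool) -> str:
    """USERINFO_ENCRYPT=1 uses the encrypted claim name, =0 the plain one."""
    return entry["encryptedClaim"] if encrypt else entry["plainClaim"]

entry = parse_userinfo("mail;email;plainEmail")
print(entry["attribute"])          # mail
print(claim_name(entry, False))    # plainEmail  (USERINFO_ENCRYPT=0)
print(claim_name(entry, True))     # email       (USERINFO_ENCRYPT=1)
```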
API Connection Related
Whether to encrypt user information included in the token to be sent to the API server
USERINFO_ENCRYPT=0
As of adapter version 1.2.0.8, it is not possible to transmit encrypted data because the encryption logic of the API server is not implemented in the same way.
The server uses AES GCM encryption, but the adapter cannot use AES GCM due to its development environment characteristics.
Target information: mobile, email
Therefore, we use USERINFO_ENCRYPT=0.
Since the adapter and API server are connected via https, it is unlikely that there will be a man-in-the-middle interception issue.
MFA Integrity Verification Method: Verified by Adapter
The adapter must have write permission to LDAP → very important!
This method uses the “otherPager” attribute in LDAP user information as a temporary storage space.
The adapter has no session concept, so it cannot store or remember information on its own.
The LDAP server is the same as the LDAP Search address.
In other words, options such as LDAP SERVER and LDAP_DNS_LOOKUP are also applied.
The above settings are interpreted as follows:
MFA_VERIFY_TYPE=1 : A method of storing/comparing the guid created by the adapter in LDAP (verified by the adapter)
Use the “otherPager” attribute of LDAP user information
Multiple stored information is concatenated with “;” and stored as a string → Example: “aaa;bbb;ccc”
The allowed difference between the time of the request stored in LDAP and the time of receiving JWT is 3600 seconds
The lifespan of the request stored in LDAP is 1d (one day) → When accessing again, check the time and delete old ones
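The store/compare behavior described above can be sketched as follows. This is a simplified illustration under stated assumptions: the real adapter serializes entries into a single string in the otherPager attribute using CACHE_DELIMETER, while this sketch keeps them as (guid, time) pairs:

```python
SKEW_SECONDS = 3600       # allowed difference between stored time and JWT time
CACHE_LIFE_TIME = 86400   # 1d: old entries are pruned on the next access

def store_nonce(entries, guid, now):
    """Append a (guid, time) pair, dropping entries older than CACHE_LIFE_TIME."""
    entries = [(g, t) for g, t in entries if now - t <= CACHE_LIFE_TIME]
    entries.append((guid, now))
    return entries

def verify_nonce(entries, guid, jwt_time):
    """MFA_VERIFY_TYPE=1: the guid must be stored and its timestamp must be
    within SKEW_SECONDS of the time received in the JWT."""
    return any(g == guid and abs(jwt_time - t) <= SKEW_SECONDS
               for g, t in entries)

cache = store_nonce([], "aaa", now=1000)
print(verify_nonce(cache, "aaa", jwt_time=1500))   # True  (within skew)
print(verify_nonce(cache, "aaa", jwt_time=10000))  # False (beyond SKEW_SECONDS)
```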
MFA integrity verification method: Verified by API server
When the adapter receives the MFA result JWT token returned by the API server, it uses the req value contained in the token to query the API server again and checks whether the result is 200.
The above settings are interpreted as follows:
MFA_VERIFY_TYPE=2 : The method of using the request-id created by the API server and received by the adapter to call the URL (verified by the API server)
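A minimal sketch of this server-to-server check follows. The URL and request-id are placeholders, and the assumption that the adapter joins the configured URL and the request-id with a single "/" (which is why the INI value must not end in "/") is an interpretation of the description above:

```python
import urllib.error
import urllib.request

def build_verify_url(verify_url: str, request_id: str) -> str:
    # Assumption: the adapter inserts the '/' separator itself,
    # so the configured MFA_VERIFY_URL must not end with '/'.
    return f"{verify_url}/{request_id}"

def verify_mfa_result(verify_url: str, request_id: str) -> bool:
    """MFA_VERIFY_TYPE=2: HTTP 200 from the API server means the result is valid."""
    try:
        with urllib.request.urlopen(build_verify_url(verify_url, request_id),
                                    timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

print(build_verify_url("https://mfa.example.com/verify", "abc-123"))
# https://mfa.example.com/verify/abc-123
```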
The contents of the INI file provided at the initial installation must be maintained as is.
If changed arbitrarily, the adapter may not work at all.
Some values may need to be changed depending on the system situation, but the opinion of the administrator/responsible person must be gathered in advance.
Options that must be issued and set by the SingleID Operations Department
API-related addresses, keys, and bearer values are provided by the SingleID operations department.
The general setting for the security protocol (MFA_VERIFY_SECURE_PROTOCOL) is TLS 1.2.
Options that need to be set according to the installation environment
These options are determined after investigating the installation environment.
INI Settings and Results
USERINFO_ENCRYPT
USE_LDAP_SEARCH
LDAP_DNS_LOOKUP
LDAP_DNS_IF_FAIL_USE_NEXT
LDAP_USE_IDPW
MFA_VERIFY_TYPE
Note
The consumer key and secret key used on this page are sample data. (fake value)
USERINFO_ENCRYPT
USERINFO_ENCRYPT=0
Sets whether the user information included in the token sent by the adapter to the MFA API server is encrypted or in plain text. (For example, mobile, email)
As of adapter version v1.2.0.8 (April ‘24), since AES/GCM/NoPadding cannot be used, it is set to plain text.
In other words, USERINFO_ENCRYPT=0 is fixed.
Later, if the adapter supports AES/GCM/NoPadding, the setting can be changed.
USE_LDAP_SEARCH
USE_LDAP_SEARCH=0
LDAP_SERVER="LDAP://adpw5004.hw.dev"
Since USE_LDAP_SEARCH is 0, the LDAP_SERVER value is not used.
In other words, if USE_LDAP_SEARCH is 0, LDAP_SERVER can be set to an empty value or deleted.
USE_LDAP_SEARCH=1
What if the LDAP search fails?
The user information is treated as an empty value and proceeds to the next step.
The cause of the failure, whether it’s a server connection failure or no information, is irrelevant.
USE_LDAP_SEARCH=2
What if the LDAP search fails?
An error is displayed to the user and the process is stopped.
The server log will record the following (or similar content): → “Failed to retrieve user information from LDAP.”
This option should be used with caution and, if possible, set to USE_LDAP_SEARCH=1.
It is desirable to leave the handling of unavailable user information to the MFA API side.
The adapter may not be able to provide user guidance or other handling for these situations on its own.
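The three USE_LDAP_SEARCH modes can be summarized in a short sketch; `search` is a hypothetical stand-in for the real LDAP query:

```python
def ldap_user_info(use_ldap_search: int, search):
    """Sketch of the USE_LDAP_SEARCH modes.

    search -- callable returning a dict of user attributes, raising on failure
    0: skip LDAP entirely; 1: treat failure as empty info and continue;
    2: re-raise so the user sees an error and the flow stops.
    """
    if use_ldap_search == 0:
        return {}
    try:
        return search()
    except Exception:
        if use_ldap_search == 2:
            raise          # "Failed to retrieve user information from LDAP."
        return {}          # mode 1: proceed with empty user info

def failing(): raise ConnectionError("LDAP unreachable")
print(ldap_user_info(1, failing))  # {} -- the flow continues with empty info
```

Mode 1 is the recommended setting because the MFA API side is better placed to handle missing user information.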
The adapter remembers the DNS lookup result in memory as a list (an ordered list) → LDAP address list
Assuming the LDAP server is duplicated and each IP is as follows. (IP that can be looked up from DNS)
IP#1 : 10.10.10.10
IP#2 : 10.10.10.20
Since the DNS lookup result applies to both IPv4 and IPv6, the result comes out as follows. (The following is a sample and is different from the actual result)
IP#1 = fe80::644b:3c9f:c5ac:ce1c%10
IP#2 = fe80::f03d:b045:8dc3:f5ed%3
IP#3 = 10.10.10.10
IP#4 = 10.10.10.20
In this state, the following cases can be considered.
Case 1) If DNS lookup fails
The number of LDAP address lists is 1, and the LDAP_SERVER value is directly assigned.
That is, the 1st value of the LDAP address list = “LDAP://adpw5004.hw.dev”
Case 2) If DNS Lookup is successful and there is a White IP list setting value (LDAP_WHITE_IP_##=“x.x.x.x”)
The LDAP address list is created in the order of the White IP list.
In the case of the above sample, the value of the LDAP address list is as follows.
→ 1st value = 10.10.10.10
The 2nd White IP 10.10.10.30 is not reflected in the LDAP address list because it does not exist in the DNS Lookup result.
Case 3) If DNS Lookup is successful and there is no White IP list setting value (LDAP_WHITE_IP_##="" or no LDAP_WHITE_IP_##)
The DNS Lookup result is reflected in the LDAP address list.
In the case of the above sample, the value of the LDAP address list is as follows.
→ 1st value = fe80::644b:3c9f:c5ac:ce1c%10
→ 2nd value = fe80::f03d:b045:8dc3:f5ed%3
→ 3rd value = 10.10.10.10
→ 4th value = 10.10.10.20
LDAP_DNS_IF_FAIL_USE_NEXT
LDAP_DNS_IF_FAIL_USE_NEXT=0
Assuming the LDAP address list is as follows.
1st value = 10.10.10.10
2nd value = 10.10.10.20
If the connection attempt to the 1st address 10.10.10.10 fails, it will not proceed further.
The LDAP search result (user information) is set to an empty value.
LDAP_DNS_IF_FAIL_USE_NEXT=1
Assuming the LDAP address list is as follows.
1st value = 10.10.10.10
2nd value = 10.10.10.20
If the connection attempt to the 1st address 10.10.10.10 fails, it will attempt to connect to the 2nd address.
If the connection to the 2nd address also fails, the LDAP search result (user information) is set to an empty value.
LDAP_USE_IDPW
LDAP_USE_IDPW=0
LDAP_ID="******"
LDAP_PW="******"
If LDAP_USE_IDPW is 0, the LDAP_ID and LDAP_PW values are not used.
In other words, if LDAP_USE_IDPW is 0, LDAP_ID and LDAP_PW can be set to empty values or deleted.
LDAP_USE_IDPW=1
LDAP_ID=""
LDAP_PW=""
If LDAP_USE_IDPW is 1, LDAP_ID and LDAP_PW values are absolutely necessary.
Therefore, if you leave LDAP_ID and LDAP_PW values empty or delete them, as shown in the sample above, you will not be able to connect to the LDAP server.
LDAP_USE_IDPW=1
LDAP_ID="******"
LDAP_PW="******"
This means that id/pw is used for LDAP connection, and if the connection fails, check if the id/pw is correct.
Since the INI file is in plain text, there is a risk that the id/pw will be exposed.
Therefore, it is necessary to configure the server environment so that LDAP connection is possible without using id/pw as much as possible.
MFA_VERIFY_TYPE
MFA_VERIFY_TYPE=0
From the adapter’s perspective, MFA result verification means that the adapter re-confirms the result of the user’s MFA performance, which is done through the MFA API.
If the MFA_VERIFY_TYPE value is 0, it means that MFA result verification is not performed.
In normal operating conditions, it is not set to 0.
The adapter directly performs MFA result verification.
To do this, the LDAP server is utilized, and LDAP write permission is required.
The CACHE_ATTRIBUTE value is not allowed to be changed.
The user information included in the token sent to the API server is set to an empty string value. (e.g., mobile, email, etc.)
The reason LDAP_SERVER information exists even though LDAP search is not used is that MFA_VERIFY_TYPE=1 requires it.
DNS lookup for the LDAP server is not performed.
In other words, the LDAP_SERVER value is used directly as the LDAP address.
The adapter directly verifies the MFA result, using the LDAP server. Therefore, the LDAP server address value must exist.
The above setting means that the adapter stores the nonce it created in the “otherPager” attribute of the user information in the LDAP server and retrieves it for comparison when MFA is completed.
Retrieve user information from LDAP (e.g., mobile, email, etc.).
If the LDAP connection fails or there is no search result, the user information will be set to an empty string value.
Do not use id/pw for LDAP connection.
This applies to cases where the LDAP connection is possible without entering id/pw.
Use SSL/TLS to enhance security when connecting to LDAP.
Do not perform DNS lookup for the LDAP server.
In other words, use the LDAP_SERVER value directly as the LDAP address.
The adapter directly verifies the MFA result, using the LDAP server. Therefore, the LDAP server address value is required.
The above settings mean that the adapter stores the created nonce in the “otherPager” attribute of the user information in the LDAP server and retrieves it for comparison when MFA is completed.
Retrieve user information from LDAP (e.g., mobile, email, etc.).
If the LDAP connection fails or there is no search result, the user information will be set to an empty string value.
Do not use id/pw for LDAP connection.
This applies to cases where you can connect to LDAP without entering id/pw.
DNS lookup for the LDAP server is not performed.
In other words, the LDAP_SERVER value is used directly as the LDAP address.
The API server verifies the MFA result, and the security protocol uses TLS 1.2.
Extract the “req” value included in the MFA result response token received from the API server, and append it to the end of the result verification URL.
Retrieve user information from LDAP (e.g., mobile, email, etc.).
If the LDAP connection fails or there is no search result, the user information is set to an empty string value.
Use id/pw for LDAP connection.
This account must have write permission.
Use SSL/TLS for LDAP connection to enhance security.
Use DNS lookup for the LDAP server.
DNS Lookup results are directly inserted into the LDAP address table.
If DNS Lookup fails, only one LDAP_SERVER value is recorded in the LDAP address table.
Only the first one in the LDAP address table is attempted to connect.
Even if it fails, it does not attempt to connect to the next server in sequence.
The adapter directly performs MFA result verification, using the LDAP server. Therefore, the LDAP server address value must exist.
The above settings mean that the adapter stores the created nonce in the “otherPager” attribute of the user information in the LDAP server and compares it when MFA is completed.
User information is retrieved from LDAP (e.g., mobile, email, etc.).
If the LDAP connection fails or there are no search results, the user information is set to an empty string value.
ID/PW is used for LDAP connection.
This account must have write permissions.
SSL/TLS is used to enhance security when connecting to the LDAP server.
DNS Lookup is used for the LDAP server.
The DNS Lookup result is compared to the White IP list, and the LDAP address table is created in the order of the White IP list.
If the DNS Lookup is successful but the IP is not in the White IP list, only one LDAP_SERVER value is recorded in the LDAP address table.
If the DNS Lookup fails, only one LDAP_SERVER value is recorded in the LDAP address table.
The LDAP address table is attempted to connect in order from the beginning, and if it fails, it attempts to connect to the next server in sequence.
The API server verifies the MFA result, and the security protocol uses TLS 1.2.
The “req” value included in the MFA result response token received from the API server is extracted and appended to the end of the result verification URL.
The log of the adapter execution process is recorded in the Windows event log area.
By adjusting the LOG_LEVEL value in the ADFSadapter.ini setting, you can selectively record error, warning, and general logs.
LOG_LEVEL Setting in ADFSadapter.ini
Value Setting
Recorded Log
LOG_LEVEL=0
Error recording
LOG_LEVEL=1
Error, warning recording
LOG_LEVEL=2
Error, warning, general message all recording
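The LOG_LEVEL thresholds above amount to a simple severity gate; this sketch is illustrative only:

```python
# Severity ranks chosen to mirror the LOG_LEVEL values in the table
ERROR, WARNING, INFO = 0, 1, 2

def should_log(log_level: int, severity: int) -> bool:
    """LOG_LEVEL=0 records only errors, 1 adds warnings, 2 records everything."""
    return severity <= log_level

print(should_log(1, ERROR))    # True  (errors always recorded at level 1)
print(should_log(1, INFO))     # False (general messages need LOG_LEVEL=2)
```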
Windows Event Log Location
Computer Management (Local) > System Tools > Event Viewer > Application and Service Logs > MFA_Adapter
At the beginning of each account log, the MFA version and account name are displayed → useful for log analysis/tracing
During operation, focus on monitoring the parts displayed as “Error” or “Warning”
Windows Event Log Description and Handling Method
[#0000] Success
Err.Success
This is not an error, but a simple log.
This log is unnecessary; if you see it, you can ask the developer to remove it.
[#0001] Invalid Arguments
Err.InvalidArguments
Error: This means that an argument is missing when calling a function within the adapter program.
Action: This is a serious error, and it should be immediately reported to the developer for prompt action.
Note
Although the adapter may work without any symptoms, it has the potential for serious errors, so it should not be neglected.
[#1000] Cannot extract account information from identityClaim
Err.IdentityClaimHasNoAccount
Error: When the adapter is executed initially, it receives the current user's information from the AD server, but cannot find the account information.
Action: Check the status of the AD server.
Note
This is not an LDAP query, but an internally processed information flow within ADFS. If this situation occurs, it means that the adapter is in an environment where it cannot function normally.
[#1001] Cannot load INI file
Err.FailToLoadIni
Error: The server cannot read the MFA environment configuration INI file.
Action: Check if the file exists in the following path on the server
C:\ADFSadapter\ADFSadapter.ini
Note
If the file exists, check the file properties or permissions.
[#1002] HTML files cannot be loaded.
Err.FailToLoadHtml
Error: The server cannot read the HTML file.
Action: Check if the file exists in the following path on the server.
C:\ADFSadapter\Html_*.txt
Note
If the file exists, check the file properties or permissions. If any of them are missing, an error will occur. You can find out what is missing in the server event log.
[#1003] Cannot retrieve user information from LDAP.
Err.FailToLdapSearch
Error: The LDAP server was queried, but the AD user information could not be retrieved.
Action: Check the status of the AD server.
Note
The token configuration requires mobile and email information, but this information failed to be retrieved. If user information exists, even if mobile and email are empty, it will not be treated as an error. Therefore, this error means that the LDAP query itself failed.
[#1004] The BeginAuthentication function's request does not contain URL information.
Err.NoURLInRequest
Error: The BeginAuthentication function's argument request does not contain URL information when the adapter is first executed.
Action: Check if the SingleID MFA API server is sending a normal response.
Note
If there is no URL information, the response sent by the SingleID MFA API server in GET mode cannot be used.
[#1005] Cannot create a JWT token.
Err.FailToMakeJwtToken
Error: The GenerateRequestToken function failed to create a token.
Action: The exact cause can be found in the server event log, and the developer should be asked to analyze the cause.
[#1006] ADFS adapter directory or INI file not found. The [drive]:/ADFSadapter/ADFSadapter.ini file must exist on one of the drives from C to Z.
Err.CannotFindDirOrIni
Error: From adapter version 1.2.0.6, the adapter installation location is not fixed to the C drive, but can be installed on any drive from C to Z, and the adapter scans the drives to determine the installation location. The [drive]:/ADFSadapter/ADFSadapter.ini file must exist.
Action: Check if the adapter is installed correctly on the server, if the directory name and file name are correct, and if drive access is blocked.
[#2000] No account information in TryEndAuthentication().
Err.TryEndHasNoAccount
Error: The authentication process has moved to the TryEndAuthentication stage, but account information is unknown. (Adapter internal error)
Action: Immediately report the situation to the developer and request cause analysis.
Note: This case should never occur.
[#2001] No step information.
Err.NoStepInfo
Error: There is no information about the MFA progress stage (step). (Adapter internal error)
Action: Immediately report the situation to the developer and request cause analysis.
Note: This case should never occur.
[#2002] Invalid step information.
Err.InvalidStepInfo
Error: The MFA progress stage (step) information is incorrect. (Adapter internal error)
Action: Immediately report the situation to the developer and request cause analysis.
Note: This case should never occur.
[#3000] Retrieves the HTML string.
Err.SucceedInGetHtml
This is not an error, but a simple log. It displays the contents of the Html_*.txt file read from the server. It helps to check whether the adapter reads the file contents correctly after the Html_*.txt file has been modified.
[#3001] Unable to retrieve HTML.
Err.FailToGetHtml
Error: The server is unable to read the Html_*.txt file.
Action: Check if the file exists, has read permission, or is locked.
[#4000] HTML file not found.
Err.HtmlFileNotFound
Error: The server is unable to read the Html_*.txt file.
Action: Check if the file exists.
[#4001] HTML file exists but is empty.
Err.HtmlFileIsEmpty
Error: The server is unable to read the Html_*.txt file.
Action: Check if the file has read permission or is locked.
[#4002] Step not found in HtmlPrefix list.
Err.StepNotInHtmlPrefixList
Error: The adapter has a predefined keyword list for each processing step, and a keyword not in the list was found.
Action: Immediately report the situation to the developer and request cause analysis.
Note: This case should never occur.
[#4003] Empty prefix value in HtmlPrefix list.
Err.EmptyPrefixInHtmlPrefixList
Error: The adapter has a predefined keyword list for each processing step, and the list is empty.
Action: Immediately report the situation to the developer and request cause analysis.
Note: This case should never occur.
[#5000] Unable to read the ini file.
Err.FailToReadIniFile
Error: Unable to read the INI file.
Action: Check if the file exists at the following path on the server:
C:\ADFSadapter\ADFSadapter.ini
Note:
If the file exists, check the file properties or permissions.
[#5001] System name (API_SYSTEMNAME) is not in the ini file.
Err.NoSystemNameInIni
Error: The "API_SYSTEMNAME" setting value is not in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5002] Claim1 (MAIN_CLAIM1) is not in the ini file.
Err.NoClaim1InIni
Error: The "MAIN_CLAIM1" setting value is not in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5003] Claim2 (MAIN_CLAIM2) is not in the ini file.
Err.NoClaim2InIni
Error: The "MAIN_CLAIM2" setting value is not in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5004] The ini file does not exist.
Err.IniFileNotFound
Error: Unable to find the MFA environment configuration INI file on the server.
Action: Check if the file exists at the following path on the server:
C:\ADFSadapter\ADFSadapter.ini
[#5005] Failed to add to the ini list using AddToList().
Err.FailToAddIniList
Error: This is an internal adapter error.
Action: Immediately report the situation to the developer and request cause analysis.
Note
This case should never occur.
[#5006] No key or value was read from the ini file.
Err.NoKeyValueInIni
Error: The INI file was read, but no key-value combination was set.
Action: Check the contents of the INI file.
[#5007] LDAP server information is not in the ini file. (LDAP_SERVER)
Err.NoLdapServerValueInIni
Error: There is no "LDAP_SERVER" setting value in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5008] MFA API URL is not in the ini file. (MFA_API_URL)
Err.NoMfaApiUrlValueInIni
Error: There is no "MFA_API_URL" setting value in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5009] Consumer Key value is not in the ini file. (CONSUMER_KEY)
Err.NoConsumerKeyValueInIni
Error: There is no "CONSUMER_KEY" setting value in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5010] Secret Key value is not in the ini file. (SECRET_KEY)
Err.NoSecretKeyValueInIni
Error: There is no "SECRET_KEY" setting value in the INI file.
Action: Check if anything is missing in the INI file, or if the INI file is an old version.
[#5011] Cache Attribute value is not in the ini file. (CACHE_ATTRIBUTE)
Err.NoCacheAttributeValueInIni
Error: There is no "CACHE_ATTRIBUTE" setting value in the INI file.
Action: Check if there are any missing values in the INI file, or if the INI file is an old version.
[#5012] Cache Delimeter value is not in the ini file. (CACHE_DELIMETER)
Err.NoCacheDelimeterValueInIni
Error: There is no "CACHE_DELIMETER" setting value in the INI file.
Action: Check if there are any missing values in the INI file, or if the INI file is an old version.
[#5013] Skew Seconds value is not in the ini file. (SKEW_SECONDS)
Err.NoSkewSecondsValueInIni
Error: There is no "SKEW_SECONDS" setting value in the INI file.
Action: Check if there are any missing values in the INI file, or if the INI file is an old version.
[#5014] Token expiration time value is not in the ini file. (TOKEN_EXP_TIME)
Err.NoTokenExpTimeInIni
Error: There is no "TOKEN_EXP_TIME" setting value in the INI file.
Action: Check if there are any missing values in the INI file, or if the INI file is an old version.
[#5015] Cache life time value is not in the ini file. (CACHE_LIFE_TIME)
Err.NoCacheLifeTimeInIni
Error: There is no "CACHE_LIFE_TIME" setting value in the INI file.
Action: Check if there are any missing values in the INI file, or if the INI file is an old version.
[#5016] User information claim list is not in the ini file. (USERINFO_##)
Err.NoUserinfoListInIni
Reserved (This error code is reserved and will be used in the future.)
[#5017] LDAP connection is set to use id/pw (LDAP_USE_IDPW=1), but LDAP id or pw is not in the ini file (LDAP_ID, LDAP_PW)
Err.NoLdapIdPwInIni
Error: LDAP connection is set to use id/pw, but LDAP_ID and LDAP_PW settings are not found in the INI file.
Action: Check if there are any missing settings in the INI file and if the INI file is an old version.
[#6000] An exception occurred while searching for user information in AD (LDAP).
Err.ExceptionInAD
Error: An exception occurred while querying the LDAP server.
Action: Check if the AD server address set in the INI file is correct and check the status of the AD server.
Reference: Provide the detailed exception content in the event log to the developer.
[#6001] Unable to find user information in AD (LDAP).
Err.CannotFindUserInAD
Error: Unable to retrieve AD user information from the LDAP server.
Action: Check the status of the AD server.
Reference: The token configuration requires mobile and email information, but failed to retrieve this information. If user information exists, empty mobile and email values are not treated as errors. Therefore, this error occurs when the LDAP query itself fails.
User Error Message
If an error occurs during the MFA process, an error message is displayed on the user’s PC screen.
The error message is fixed as “Internal error occurred. Contact administrator.” and the error code is displayed on the next line.
Below is an explanation of the error code, its cause, and the measures to take.
※ For the server's internal processing details that are not shown in the user error messages, refer to the event log
ErrorCode : 0001
* Err.IdentityClaimHasNoAccount
* The function call arguments were incorrect
* This error is not shown to the user
+ → If it appears, contact the developer
+ → Check the server event log at this point
ErrorCode : 1000
* Err.IdentityClaimHasNoAccount
* "Cannot extract account information from identityClaim."
* At the adapter's initial execution, it receives the current user's information from the AD server but cannot find account information
* Since this is an information flow processed internally within ADFS, rather than an LDAP query,
+ if this situation occurs, consider the adapter unable to function normally
* Check the AD server status first
ErrorCode : 1001
* Err.FailToLoadIni
* The server cannot read the MFA environment configuration INI file
* Check if the file exists at the following path on the server
+ C:\ADFSadapter\ADFSadapter.ini
* If the file exists, check its properties or permissions
ErrorCode : 1002
* Err.FailToLoadHtml
* The server cannot read the HTML file
* Check if the file exists at the following path on the server
+ C:\ADFSadapter\Html_*.txt
* If the file exists, check the file properties or permissions
* If one or more files are missing, an error occurs → the server event log shows which file is missing
ErrorCode : 1003
* Err.FailToLdapSearch
* "Failed to retrieve user information from LDAP."
* The LDAP server was queried, but failed to retrieve AD user information
* The token configuration requires mobile and email information, but failed to retrieve this information
* Even if mobile and email are empty values, they are not processed as errors
* So, this error occurred because the LDAP query itself failed
ErrorCode : 1004
* Err.NoURLInRequest
* "There is no URL information in the request of the BeginAuthentication function."
* The request argument of the BeginAuthentication function, which runs at the initial execution of the adapter, has no URL information
* Without URL information, the response sent by the SingleID MFA API server in GET method cannot be used
* You need to check if the SingleID MFA API server is sending the response normally
ErrorCode : 1005
* Err.FailToMakeJwtToken
* "Failed to create a JWT token."
* The GenerateRequestToken function failed to create a token
* The exact cause can be found in the server event log
ErrorCode : 1006
* Err.CannotFindDirOrIni
* "Failed to create a JWT token."
* The ADFSadapter directory or INI file cannot be found
* From adapter version 1.2.0.6, the adapter can be installed on any drive from C to Z, not just the C drive, and
+ the adapter scans drives C to Z to find the installed location
* [drive]:/ADFSadapter/ADFSadapter.ini file must exist
* Check if the adapter is installed correctly on the server and if the directory name and file name are correct
* Check if drive access is blocked
ErrorCode : 2000
* Err.TryEndHasNoAccount
* "No account information in TryEndAuthentication()"
* Moved to the TryEndAuthentication step, but account information is unknown
* This case should never occur (if it occurs, contact the developer)
ErrorCode : 2001
* Err.NoStepInfo
* No MFA step information
* Check the server's event log for detailed information and cause
ErrorCode : 2002
* Err.InvalidStepInfo
* MFA step information is incorrect
* Check the server's event log for detailed information and cause
ADFS Login Page Modification
Editing onload.js
Background
If multiple MFAs are set, the user will see a selection screen like the one below.
In the initial screen display (MFA not yet completed), a selection is required on this screen.
The problem is that after completing the MFA, the selection screen is displayed again, and the user has to perform the selection action again. This can cause user inconvenience, and if a different MFA is selected the second time, it may lead to unintended results.
After completing the MFA, the above selection screen should be automatically submitted when it appears. To achieve this, the existing onload.js file in ADFS needs to be edited. If it’s not a multi-MFA case, editing the onload.js file is not necessary.
File Path
The file exists in the following directory on the AD server:
Directory = C:/default_WebTheme/script
File name = onload.js
File Editing
Add the following script to the end of the file contents:
Note
Do not use copy and paste with the example Script Text below, as multilingual messages may not be input correctly.
Prepare a separate file with the correct Script Text.
The onload.js file must be saved in UTF-8 format.
// ------------------------------------------ SingleID MFA : begin
function singleidMfa() {
    var authOptions = document.getElementById('authOptions');
    if (authOptions) {
        var noticeflag = document.getElementById('mfaGreeting');
        var url = document.location.href;
        var isToken = url.indexOf('jwtTokenResponse');
        if (noticeflag && isToken < 1) {
            var lang = navigator.language || navigator.userLanguage;
            // Multilingual handling
            // Korean
            if (lang == 'ko-KR' || lang == 'ko') {
                document.getElementById('footerPlaceholder').innerHTML = "<h3 style='font-weight: bold;'><br/> <br/> ※ 신규 복합인증솔루션 테스트 중 (13:00~15:00) <br/> 'My Authentication Provider' 메뉴를 이용해주세요. </h3>";
            }
            // Chinese
            else if (lang == "zh" || lang.indexOf("zh-") > -1) {
                document.getElementById('footerPlaceholder').innerHTML = "<h3 style='font-weight: bold;'><br/> <br/> ※ 正在测试新的复合认证解决方案 (13:00~15:00) <br/> 请登录 'My Authentication Provider' 菜单。 </h3>";
            }
            // Vietnamese
            else if (lang == "vi") {
                document.getElementById('footerPlaceholder').innerHTML = "<h3 style='font-weight: bold;'><br/> <br/> ※ Đang kiểm tra giải pháp xác thực kết hợp mới (13:00~15:00) <br/> Xin vui lòng đăng nhập vào trình đơn 'My Authentication Provider'. </h3>";
            }
            // Spanish
            else if (lang == "es" || lang.indexOf("es-") > -1) {
                document.getElementById('footerPlaceholder').innerHTML = "<h3 style='font-weight: bold;'><br/> <br/> ※ Prueba de una nueva solución de autenticación compleja (13:00~15:00) <br/> Inicie sesión en el menú 'My Authentication Provider'. </h3>";
            }
            // English (default)
            else {
                document.getElementById('footerPlaceholder').innerHTML = "<h3 style='font-weight: bold;'><br/> <br/> ※ Testing a new MFA solution (13:00~15:00) <br/> Please use 'My Authentication Provider' menu. </h3>";
            }
        }
        var opt = document.getElementById('optionSelection');
        if (opt && isToken > 0) {
            opt.value = 'ADFSadapter';
            document.forms['options'].submit();
        }
    }
}
window.addEventListener('load', function () {
    singleidMfa();
});
// ------------------------------------------ SingleID MFA : end
Script Function
This applies to the case where authOptions exist among the page controls.
It operates after the page load is completely finished (because an error occurs if it runs before that).
It uses window.addEventListener to add to the load event (same as the window.onload event).
Case 1: If mfaGreeting exists among the controls and jwtTokenResponse does not exist in the URL, it displays a user guide message according to the browser language setting (multilingual).
Case 2: If optionSelection exists among the controls and jwtTokenResponse exists in the URL, it assigns ADFSadapter to optionSelection and forcibly submits the options form.
Precautions when Adding Scripts
To manage without affecting existing scripts and for ease of management, it is safe to put the script at the end.
Applying onload.js
The ADFS Sign-in Page customization is possible by modifying and reflecting the onload.js file.
Note
The command contains a script parameter that is treated as potentially malicious when the page is rendered, so the command you need to enter is different from the displayed command.
The displayed -ON-LOADScriptPath is actually the -OnLoadScriptPath parameter, so please be aware of it to avoid confusion.
Application Method
Current Status Check
PS> Get-AdfsWebConfig ## Check the applied (activated) WebTheme
PS> Get-AdfsWebTheme ## Check the list of created WebThemes
Theme Application
Theme Application Method 1) Create a new theme from the default theme
PS> New-AdfsWebTheme -Name "custom_stg" -SourceName default ## Create a new WebTheme
PS> Set-AdfsWebTheme -TargetName "custom_stg" -Illustration @{Path="C:\adfs_Login_dev\illustration\image_0624\8.jpg"} -Logo @{Path="C:\adfs_Login_dev\images\logo.png"} -StyleSheet @{Path="C:\adfs_Login_dev\css\style.css"} -OnLoadScriptPath "C:\adfs_Login_dev\script\onload_new.js" ## Apply a custom js file
Theme Application Method 2) Update from an existing theme
PS> New-AdfsWebTheme -Name "custom_stg" -SourceName [existing theme] ## Create a new WebTheme
PS> Set-AdfsWebTheme -TargetName "custom_stg" -OnLoadScriptPath "C:\adfs_Login_dev\script\onload_stg.js" ## Apply a custom js file
Figure. Theme Application
※ Server command capture for theme application methods 1) and 2). The OnLoad command is automatically changed when organizing Confluence, so a capture is attached
PS> Set-AdfsWebConfig -ActiveThemeName "custom_stg" ## Activate the created WebTheme
Only one custom js file can be applied to a single WebTheme
We also inquired with MS, but officially, only one onload.js file can be applied, and the additional methods they provided do not work
“the onload.js is an integrated part of the HTML (the last script in the body) which always executes when the ADFS page is loaded
There can be only one named onload.js per Web theme.
What is possible though is that additional ('external') scripts can be loaded as part of the actual onload.js execution
let's say in a specific part of your onload.js you want to load a bootstrap.js which implements additional functionality
you would firstly import that additional JS to the webpage as AdditionalFileResource //it should not be named onload.js
eg
Set-AdfsWebTheme -TargetName custom -AdditionalFileResource @{Uri='/adfs/portal/script/bootstrap.js';path="c:\theme\script\bootstrap.js"}
then you implement a loading functionality in the onload.js which dynamically loads your additional script as needed”
In other words, as stated in the official response, only one onload.js file can be applied to a single ADFS theme page.
However, it is possible to apply an additional file, such as bootstrap.js, as an AdditionalFileResource to the same page.
Note
WebTheme settings allow additional options
Options can be used to apply illustrations, logos, stylesheets, etc.
When applying multiple Adapters for user selection, the Adapter’s display name can be set to show to the user (browser). By default, the name used when registering the Adapter is displayed.
Before application
The name used when registering the Adapter is displayed to the user (browser)
After application
The Adapter’s display name is shown to the user (browser)
Display names can be set differently for each language
Guide
Three locales (Korean, English, and a global default) are applied during testing
The Adapter’s display name changes according to the browser’s language settings (chrome://settings/languages, edge://settings/languages)
English (US), English (UK), and other languages starting with ’en-’ are all applied as English settings. If a language other than Korean or English is selected, the global setting is applied
Setup method
ADFS adapter (new adapter name) display name setting
Set to 3 locales: ko (Korean), en (English), and unset (global)
Set-AdfsAuthenticationProviderWebContent -Name "ADFSadapter" -Locale ko -DisplayName "New ADFS Plugin (ko)" -Description "New ADFS Plugin Description (ko)"
Set-AdfsAuthenticationProviderWebContent -Name "ADFSadapter" -locale en -DisplayName "New ADFS Plugin (en)" -Description "New ADFS Plugin Description (en)"
Set-AdfsAuthenticationProviderWebContent -Name "ADFSadapter" -DisplayName "New ADFS Plugin (global)" -Description "New ADFS Plugin Description (global)"
MyAuthenticationProvider (existing adapter name) display name setting
Set to 3 locales: ko (Korean), en (English), and unset (global)
Tools required for adapter management (gacutil.exe)
Tool for registering/deleting adapter DLL in AD FS server’s GAC area
Reference
What is GAC?
Abbreviation of Global Assembly Cache, a special machine-wide cache for sharing .NET assemblies. The GAC is located in the assembly directory under the Windows directory.
The Assembly installed in the GAC must be a Strongly-named assembly
The DLL must include the name, Version, Culture, and public key
When the DLL is installed in the GAC, it takes priority at runtime
Multiple versions of the same DLL can coexist even with the same name
Registering/Unregistering Adapter in GAC (using gacutil.exe)
Typically used in cmd, but used in PowerShell for convenience (must be used in the form .\gacutil.exe)
Unregister from GAC → The file C:\ADFSadapter\ADFSadapter.dll is not deleted
PS C:\ADFSadapter> .\gacutil.exe /u ADFSadapter
Check if registered in GAC
PS C:\ADFSadapter> .\gacutil.exe /l ADFSadapter
DLL replacement order in GAC
1) Delete using gacutil.exe /u
2) Replace the file C:\ADFSadapter\ADFSadapter.dll
3) Register using gacutil.exe /if
Registering/Unregistering GAC Assembly in ADFS (PowerShell command)
Register in ADFS
First, check the Version, Culture, and public key information using the gacutil.exe /l option
PS C:\ADFSadapter> .\gacutil.exe /l ADFSadapter
ADFSadapter, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3b3a799d949dc414, processorArchitecture=MSIL
Use the result string to construct TypeName and register it with AD FS
(The first part of TypeName is fixed as ADFSadapter.AuthenticationAdapter)
PS C:\ADFSadapter> $typename = "ADFSadapter.AuthenticationAdapter, ADFSadapter, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3b3a799d949dc414, processorArchitecture=MSIL"
PS C:\ADFSadapter> Register-AdfsAuthenticationProvider -TypeName $typename -Name "ADFSadapter" -Verbose
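The TypeName construction rule above can be sketched as a tiny helper (hypothetical function shown for illustration only; the fixed prefix ADFSadapter.AuthenticationAdapter and the gacutil output line come from the commands above):

```javascript
// Sketch of composing the AD FS TypeName from a "gacutil.exe /l" output
// line (hypothetical helper; the first part of the TypeName is fixed as
// ADFSadapter.AuthenticationAdapter per the rule above).
function buildTypeName(gacListLine) {
  return 'ADFSadapter.AuthenticationAdapter, ' + gacListLine.trim();
}

var line = 'ADFSadapter, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3b3a799d949dc414, processorArchitecture=MSIL';
console.log(buildTypeName(line));
// ADFSadapter.AuthenticationAdapter, ADFSadapter, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3b3a799d949dc414, processorArchitecture=MSIL
```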
1) Disable multi-factor authentication method in ADFS management
2) Unregister-AdfsAuthenticationProvider
3) Restart ADFS service
4) Replace DLL in GAC
5) Register-AdfsAuthenticationProvider
6) Restart ADFS service
7) Configure multi-factor authentication method in ADFS management
The above steps 1) to 7) can be automated using the replace_dll.ps1 script file
If MFA function does not work properly
AD account/password authentication not available → This occurs before the MFA step, so it is not related to the Adapter
Adapter registration status check → Check if ADFS MFA Adapter is displayed when running the Get-AdfsAuthenticationProvider command
AD FS setting check → Check if ADFS MFA Adapter is specified in Service > Authentication Method > Multi-factor Authentication Method → Check if Service > Device Registration is set → Check if Access Control Policy is set to require MFA
Adapter execution log check → Log location: Computer Management > System Tools > Event Viewer > Application and Service Logs > MFA_Adapter → Check if there are any logs marked as Error → If there are errors, send the log content to the developer for analysis
9.5.2.1.6.2 - Adapter Setup Guide
Adapter Setup Guide
This is a description of the Adapter environment setup file.
You must configure the environment before applying the ADFS Adapter.
Notice
Adapter Installation Location Change
From adapter 1.2.0.6, installation is possible on drives other than the C drive.
Precautions: The adapter can only be installed on one drive; if it is installed on multiple drives, the first directory discovered while scanning from C to Z is used
The following example is for the case where the adapter is installed in the C:\ADFSadapter directory. If installed on a drive other than C, only the drive name (drive letter) in the example below needs to be changed.
Example: If installed in D:\ADFSadapter, the ini path → D:\ADFSadapter\ADFSadapter.ini
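The C-to-Z scan described above can be sketched as follows (hypothetical helpers for illustration; the real adapter checks the file system, while this sketch just models the scan order):

```javascript
// Sketch of the C-to-Z installation scan described above (hypothetical;
// not the adapter's actual code).
function candidateIniPaths() {
  // All possible ini locations, in scan order (C first, Z last)
  var paths = [];
  for (var c = 'C'.charCodeAt(0); c <= 'Z'.charCodeAt(0); c++) {
    paths.push(String.fromCharCode(c) + ':\\ADFSadapter\\ADFSadapter.ini');
  }
  return paths;
}

function firstInstalledDrive(drivesWithAdapter) {
  // The first discovered directory wins when scanning from C to Z
  for (var c = 'C'.charCodeAt(0); c <= 'Z'.charCodeAt(0); c++) {
    var d = String.fromCharCode(c);
    if (drivesWithAdapter.indexOf(d) !== -1) return d;
  }
  return null;
}

console.log(candidateIniPaths()[0]);          // C:\ADFSadapter\ADFSadapter.ini
console.log(firstInstalledDrive(['E', 'D'])); // D (found before E in scan order)
```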
File Name and Path
File Name: ADFSadapter.ini
Full Path: C:\ADFSadapter\ADFSadapter.ini
File Encoding: Must be saved in UTF-8 (Korean characters may be broken if not)
Things to Keep in Mind
* When expressing values, " and ' can be used, and spaces can be entered on either side of the = sign.
+ Spaces before and after the Value are trimmed.
+ The following Values are all the same
+ Example 1) MAIN_TITLE=DWP MFA Adapter
+ Example 2) MAIN_TITLE = DWP MFA Adapter
+ Example 3) MAIN_TITLE = "DWP MFA Adapter"
+ Example 4) MAIN_TITLE = " DWP MFA Adapter "
* Some section names have -1033, -1042 appended to the end, which means locale.
+ At least 1033 must exist.
+ Locale number: 1033 (en-us), 1042 (ko)
+ Locale section: MFA-1033, MFA-1042, TXT-1033, TXT-1042, MSG-1033, MSG-1042
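The value rules above (trimming and quote stripping) can be sketched as a small helper (hypothetical function, not the adapter's actual parser):

```javascript
// Minimal sketch of the INI value rules described above (assumption:
// the adapter trims whitespace around the value, then strips one pair
// of matching quotes, " or ', and trims again).
function normalizeIniValue(raw) {
  var v = raw.trim();
  var first = v.charAt(0);
  var last = v.charAt(v.length - 1);
  if (v.length >= 2 && first === last && (first === '"' || first === "'")) {
    v = v.slice(1, -1).trim();
  }
  return v;
}

// Examples 1-4 from the list above all normalize to the same value:
console.log(normalizeIniValue('DWP MFA Adapter'));     // DWP MFA Adapter
console.log(normalizeIniValue('  DWP MFA Adapter  ')); // DWP MFA Adapter
console.log(normalizeIniValue('"DWP MFA Adapter"'));   // DWP MFA Adapter
console.log(normalizeIniValue('" DWP MFA Adapter "')); // DWP MFA Adapter
```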
Ini File Structure Example
Some values in the example settings below are masked for security purposes, and the actual values are not asterisks
# ADFS MFA Adapter Environment Settings
# Installation Location Changes
# - Before v1.2.0.6: C:\ADFSadapter\ADFSadapter.ini
# - From v1.2.0.6: Can be installed on a drive other than C (same location as adapter resource installation)
# Examples: C:\ADFSadapter\ADFSadapter.ini, D:\ADFSadapter\ADFSadapter.ini, E:\ADFSadapter\ADFSadapter.ini
# Note: The DLL file name is ADFSadapter.dll, which is different from the existing MFAadapter.dll linked to Nexsign
# When expressing values, " and ' can be used, and spaces can be entered on either side of =
# Leading and trailing spaces of the Value are trimmed.
# The following Values are all the same.
# Example 1) MAIN_TITLE=ADFS MFA Adapter
# Example 2) MAIN_TITLE = ADFS MFA Adapter
# Example 3) MAIN_TITLE = "ADFS MFA Adapter"
# Example 4) MAIN_TITLE = " ADFS MFA Adapter "
# Section names with -1033, -1042 at the end represent locale
# At least 1033 must exist
# Locale number: 1033 (en-us), 1042 (ko)
# Locale section: MFA-1033, MFA-1042, TXT-1033, TXT-1042, MSG-1033, MSG-1042
# LOG_LEVEL (Windows event log recording criteria)
# 0: Error
# 1: Error + Warning
# 2: Error + Warning + Information + Debug
# 0 : Do not perform LDAP Search (do not use the information below, such as LDAP_SERVER, LDAP_USE_IDPW, etc. Insert an empty value into the token)
# 1 : Attempt LDAP Search, but failure is irrelevant (proceed with MFA even if server failure or no information occurs. Insert an empty value into the token)
# 2 : LDAP Search must be successful and user information must exist (proceed only when user information exists. However, proceed even if the result value is empty)
USE_LDAP_SEARCH=1
# LDAP address and ID/PW
# LDAP_SERVER can be domain, ipv4, ipv6, etc., and "LDAP://" must be attached to the front (must be upper case)
# Perform DNS Lookup to check the IP address of the LDAP server (LDAP_SERVER) and connect based on the IP address
# Even if the LDAP_SERVER value is set to IP (ipv4, ipv6), DNS Lookup is performed, and the IP is returned as is
# If DNS Lookup fails, connect using the LDAP_SERVER value as is
# 0 : Connect to the server using the LDAP_SERVER value as is (do not perform DNS lookup)
# 1 : Connect to the LDAP server using the IP address confirmed by DNS lookup (use the first IP in the DNS lookup result list)
# 2 : Confirm the IP address using DNS lookup and use the IP that corresponds to the LDAP_WHITE_IP_## list first (use the LDAP_SERVER if not in the list)
# 3 : Confirm the IP address using DNS lookup and use the IP that corresponds to the LDAP_WHITE_IP_## list first (do not connect to the LDAP if not in the list)
LDAP_DNS_LOOKUP=1
# When the DNS Lookup result has multiple entries, try to connect to the next IP address if the first one fails
# Example: 4 lookup results: 1st IP connection fails -> try 2nd IP & fail -> try 3rd IP & fail -> try 4th IP
LDAP_DNS_IF_FAIL_USE_NEXT=1
# List of allowed LDAP server IP addresses to compare with DNS Lookup results (only applicable when LDAP_DNS_LOOKUP = 2 or 3)
# In the format of LDAP_WHITE_IP_##, recorded sequentially from 01 to 99
# Compare DNS Lookup results with the list in sequence
# Record in IPv4 or IPv6 format (if the same server has both IPv4 and IPv6, the one with higher priority in the list is applied)
# If the order of DNS Lookup results and the White IP list is different, follow the order of the White IP list
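The LDAP_DNS_LOOKUP mode selection described above can be sketched as follows (hypothetical helper for illustration, not the adapter's actual code):

```javascript
// Illustrative sketch of the LDAP_DNS_LOOKUP selection logic described
// above (hypothetical helper). mode 0: use LDAP_SERVER as is;
// mode 1: first DNS lookup result; modes 2/3: follow the order of the
// white IP list, not the DNS result order.
function pickLdapAddress(mode, ldapServer, dnsResults, whiteList) {
  if (mode === 0) return ldapServer;                  // no DNS lookup
  if (mode === 1) return dnsResults[0] || ldapServer; // first lookup result
  for (var i = 0; i < whiteList.length; i++) {
    if (dnsResults.indexOf(whiteList[i]) !== -1) return whiteList[i];
  }
  return mode === 2 ? ldapServer : null;              // mode 3: do not connect
}

var dns = ['10.0.0.1', '70.2.180.218'];
var white = ['70.2.180.218', '10.0.0.1'];
console.log(pickLdapAddress(2, 'LDAP://adpw5004.hw.dev', dns, white)); // 70.2.180.218
console.log(pickLdapAddress(3, 'LDAP://adpw5004.hw.dev', dns, []));    // null
```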
# Setting Value Description
* **Fixed Value** : It means that the value displayed in the **Setting Value** column of the table below is used as is when installing the ADFS server.
* If you want to add a language other than English and Korean, you can add up to 2 sections.
* MSG-1033, MSG-1042
(Note) If both SECRET_KEY value and list value exist, only SECRET_KEY value is used
DOMAIN_SECRET_KEY_02
kgkWRnLygQhsRgrLVbtKlO6FiLdABupEgoMR8v/ilfJ=
DOMAIN_SECRET_KEY_##
dABupkRnLygQhsrLgWVRbt8vRgkLilLKlO1FioMgfJE=
USE_LDAP_SEARCH
0 or 1 or 2
MFA progress based on LDAP Search result
0 : Do not perform LDAP Search (do not use the information below, such as LDAP_SERVER, LDAP_USE_IDPW, etc. and insert an empty value into the token)
1 : Try LDAP Search, but it doesn’t matter if it fails (proceed with MFA even if server failure, no information, etc. occurs, and insert an empty value into the token)
2 : LDAP Search must be successful and user information must exist (proceed only if user information exists, but proceed even if the result value is empty)
LDAP_SERVER
LDAP://adpw5004.hw.dev
LDAP address that can query AD user information
All three types, domain, ipv4, and ipv6, are possible, and “LDAP://” must be attached to the front
LDAP_USE_IDPW
0 or 1
Whether to use id/pw when accessing LDAP
The adapter operates with system privileges, so it is common to access LDAP without id/pw, but there are cases where it is not
If an AD connection error occurs in the event log in a state where id/pw is not used for connection, it is necessary to set it to use id/pw for connection
If this value is set to 1, LDAP_ID and LDAP_PW values must be set
LDAP_SSLTLS
0 or 1
Whether to use SSL/TLS when connecting to LDAP
Generally, set to use
LDAP_ID
LDAP connection id
LDAP connection id (when LDAP_USE_IDPW=1)
LDAP_PW
LDAP connection pw
LDAP connection pw (when LDAP_USE_IDPW=1)
LDAP_DNS_LOOKUP
0 or 1 or 2 or 3
Whether to perform DNS lookup to check the IP address of the LDAP server (LDAP_SERVER) and connect based on the IP address
0 : Connect to the server with the LDAP_SERVER value as is (do not perform DNS lookup)
1 : Perform DNS lookup to check the IP address and connect to the LDAP server (use the first IP in the DNS lookup result list)
2 : Perform DNS lookup to check the IP address and use the first corresponding IP in the LDAP_WHITE_IP_## list (if not in the list, use LDAP_SERVER)
3 : Perform DNS lookup to check the IP address and use the first corresponding IP in the LDAP_WHITE_IP_## list (if not in the list, do not connect to LDAP)
LDAP_DNS_IF_FAIL_USE_NEXT
0 or 1
When there are multiple DNS lookup results, whether to try the next IP address if the connection to the first IP address fails
Example: 4 lookup results: 1st IP connection fails -> try 2nd IP & fail -> try 3rd IP & fail -> try 4th IP
LDAP_WHITE_IP_01
70.2.180.218
List of allowed LDAP server IP addresses to compare with DNS lookup results (only applicable when LDAP_DNS_LOOKUP = 2 or 3)
In the format of LDAP_WHITE_IP_##, recorded sequentially from 01 to 99
Compared sequentially with DNS lookup results
Recorded in IPv4 or IPv6 format (if the same server has both IPv4 and IPv6, the one with higher priority in the list is applied)
If the order of DNS lookup results and the White IP list is different, the order of the White IP list is followed
LDAP_WHITE_IP_02
fe80::644b:3c9f:c5ac:ce1c%10
LDAP_WHITE_IP_##
## : 01 ~ 99
White IP address (IPv4 or IPv6)
USERINFO_ENCRYPT
0 or 1
Whether to encrypt user information (e.g., mobile, email, etc.)
Target: USERINFO_## list
Depending on the encryption, the claim name of the token sent to the API server is different
0: Not encrypted -> token's claim names are plainMobile, plainEmail
1: Encrypted -> token's claim names are mobile, email
USERINFO_01
mobile;mobile;plainMobile (Fixed Value)
LDAP search user information attribute name and JWT token claim names (3 values separated by ";")
Format: USERINFO_## = attribute; encryptedClaim; plainClaim
Example: If you read the "mail" attribute from LDAP and use the encrypted value as the "email" claim and the plain value as the "plainEmail" claim in the JWT, then "mail;email;plainEmail"
USERINFO_02
mail;email;plainEmail (Fixed Value)
USERINFO_##
## : 01 ~ 99
[LDAP attribute name];[encrypted token claim name];[plain token claim name]
KEY_NAME_IN_RESPONSE
jwtTokenResponse (Fixed Value)
Key name used in the result parameter when the MFA API server calls back
Example: https://adpw5004.hw.dev/adfs/ls?client-request-id=xxxxxx&pullStatus=0&jwtTokenResponse=yyyyyy
TOKEN_EXP_TIME
1d
Value to be added to the exp of the JWT token
String in the format of day, hour, minute, second (dhms): 1d=86400, 1h=3600, 1m=60
If dhms is not present, the value is treated as seconds
Example 1: 1d02h38m27s -> 95907 seconds
Example 2: 12345 -> 12345 seconds
TOKEN_CLAIM_CLIENT
0 or 1
Whether to add the client claim to the token when calling the API
Client: issuer in SAML, client-id in OIDC
0: Do not include client in the token
1: Include client in the token
MFA_VERIFY_TYPE
0 or 1 or 2
MFA nonce (guid, request-id) verification method
0: Do not verify
1: Store the guid created by the adapter in LDAP and compare (verified by the adapter) -> related settings: CACHE_ATTRIBUTE, CACHE_DELIMETER, SKEW_SECONDS, CACHE_LIFE_TIME
2: Use the request-id created by the API server and used by the adapter in the call URL (verified by the API server) -> related setting: MFA_VERIFY_URL
MFA_VERIFY_URL
https://stg1-cloud.iam.samsung.net/test/common-api/open/v1.1/mfa/request/status
MFA result verification URL (server-to-server communication): the {request-id} received from the API server is appended to the end of the URL and called -> the adapter checks whether the return is 200 (OK) to process the MFA result
Do not add "/" at the end of the URL
MFA_VERIFY_SECURE_PROTOCOL
TLS12 or TLS13
Secure protocol used for MFA result verification
Selectable protocols (case-insensitive): TLS12, TLS13
(Note) Do not use SSL3, TLS, TLS11
CACHE_ATTRIBUTE
otherPager (Fixed Value)
Name of the LDAP attribute that stores the user's req guid value
CACHE_DELIMETER
";"
Delimiter used to combine req + time information when storing in LDAP -> "req;time"
SKEW_SECONDS
3600
Allowed difference in seconds between the time stored in LDAP and the time received in the JWT
This is measured from when the user logs in to AD, not from when the MFA selection screen is displayed (the time is already stored when the MFA selection screen is displayed)
It is not the time it takes for the user to select MFA and enter the passcode
Therefore, do not set the time too tightly; about 1 hour is suitable
CACHE_LIFE_TIME
1d
Lifetime of the req stored in LDAP -> old entries are deleted when the time is checked at the next access
String in the format of day, hour, minute, second (dhms): 1d=86400, 1h=3600, 1m=60 (if dhms is not present, the value is treated as seconds)
BYPASS_ADAPTER
0 or 283901
Whether to bypass the adapter function (0 = normal use, 283901 = disable, other values = normal use)
Used in emergency situations where the adapter function needs to be disabled due to MFA issues
Do not modify it in normal situations -> the normal value is 0
Note: To disable, you must set the exact value (not just any number other than 0, but the exact number. Be careful of typos)
API_SYSTEMNAME (API section)
SingleID (Fixed Value)
(No effect on MFA function)
MSG_INTERNAL_ERROR (MSG-1033 section)
"Internal error occurred. Contact administrator."
Message displayed to the user when the authentication process is stopped due to an error (English)
MSG_INTERNAL_ERROR (MSG-1042 section)
"Internal error occurred. Contact administrator."
Message displayed to the user when the authentication process is stopped due to an error (Korean)
If you enter Korean, an error occurs, so please enter it in English.
LOG_LEVEL (MANAGE section)
0 or 1 or 2
Standard for recording in the Windows event log
0 = error only
1 = error + warning
2 = error + warning + notice, etc. all recorded
Table. Setting value description
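The dhms duration format used by TOKEN_EXP_TIME and CACHE_LIFE_TIME above can be parsed as in this sketch (hypothetical helper for illustration, not the adapter's actual code):

```javascript
// Sketch of parsing the dhms duration format described above
// (hypothetical helper): 1d=86400s, 1h=3600s, 1m=60s; a bare number
// is treated as seconds.
function parseDhms(value) {
  if (/^\d+$/.test(value)) return parseInt(value, 10); // plain seconds
  var units = { d: 86400, h: 3600, m: 60, s: 1 };
  var re = /(\d+)([dhms])/g;
  var total = 0;
  var match;
  while ((match = re.exec(value)) !== null) {
    total += parseInt(match[1], 10) * units[match[2]];
  }
  return total;
}

console.log(parseDhms('1d'));          // 86400
console.log(parseDhms('1d02h38m27s')); // 95907 (matches Example 1 above)
console.log(parseDhms('12345'));       // 12345 (matches Example 2 above)
```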
9.5.3 - Release Note
SingleID
2025.11.04
FEATURE: Add console access history log monitoring feature, Expand CSP support for console access control, Improve announcement feature, Improve approval system feature, Improve batch scheduler management feature, Improve CAM system user role management feature, Improve system global variable management feature
Console access history log monitoring feature added
Added the feature to view and download console access logs
Console access control support CSP expanded
Expanded support CSP for console access control from the existing AWS to Azure and Samsung Cloud Platform (KR EAST1 region, KR WEST1 region)
Notice feature improved
Improved the feature to register and manage notices per tenant
Approval system feature improved
Added a self-built approval system-based approval function to the existing Knox-based approval function
Batch scheduler management feature improved
The batch scheduler management function has been improved, allowing execution results and details to be viewed and enabling immediate execution.
CAM system user role management feature improved
Improved to allow creation/listing/viewing/detail of user roles for the CAM system itself.
System-wide variable management feature improved
Added system-wide variable management function for CAM Portal system itself
Other convenience improved
Improved so that users of PM/PL group can change the IP of already enrolled resources (no need to re-enroll the resource)
Improved to allow navigation to the detailed Role/Policy/Account page from Console Access menu
Changed manual, release note and FAQ URLs to SCP Documentation URL
2025.10.23
FEATURE: Add admin delegation feature, Add approval status menu to dashboard, Add sign-up status menu to dashboard, Add user campaign feature, Add dormant account policy feature, Add user lifecycle management feature, Add rebranding feature to login page, Improve simple authentication feature, Add user security enhancement feature, Improve user profile attribute setting feature, Add application entitlement management feature
Admin delegation feature added
A feature that allows delegating authentication for identity verification to an administrator has been added. This feature is only available for MFA products.
Approval status menu added to dashboard
A feature has been added that allows managing user approval requests and statuses from the dashboard.
Member registration status menu added to dashboard
A feature has been added that allows managing users’ sign-up status from the dashboard.
User campaign feature added
If only one user authentication method is registered, a campaign feature that recommends adding additional authentication registration has been added.
Dormant account policy feature added
Dormant user settings, alarm sending settings, exception user registration, long-term dormant user, dormant self-recovery settings have been added.
User lifecycle management feature added
When signing up and registering users, features for setting user defaults, setting user account usage period, and approval policy have been added.
Rebranding feature added to login page
A feature has been added to change the top and bottom logos, key visual images, text, etc. in the Admin Portal.
The redirection functions for member sign-up page settings, bottom privacy policy, terms of use, etc., have been added.
Passwordless authentication feature improved
Convenient authentication methods that allow easy login, such as Mobile Passkey, security keys, and Windows PIN, have been added.
User security feature enhanced
A conditional authentication policy feature has been added that requires additional identity verification when only one authentication method has been used for a long period.
User profile attribute setting feature improved
You can further expand and apply the user’s personal information attributes.
Added a feature to set a prefix text when sending SMS
Improved the image upload screen and process
2025.07.01
NEW SingleID Service Official Version Release
The SingleID service has been launched, allowing users to log into business systems with a single ID and enabling administrators to easily control access by integrating various access environments.
9.6 - WAF
9.6.1 - Overview
Service Overview
WAF (Web Application Firewall) is a service that monitors website traffic to safely protect web applications. It quickly detects and analyzes HTTP, HTTPS-based security threats that target website vulnerabilities.
Features
Powerful Detection/Blocking: Monitors the HTTP and HTTPS traffic of web pages registered by the customer, detecting attack attempts in real time. Classifies attacks such as SQL Injection, Cross-Site Scripting (XSS), and Web Scan, and provides the various defense functions needed for web security to immediately counter new types of web attacks.
Stable Web Service Operation Support: Responds to new security threats through web firewall signature pattern and firmware updates. Detects the OWASP (Open Web Application Security Project) Top 10 attacks, the National Intelligence Service’s 8 major vulnerability attacks, Zero-Day attacks and other new web threats, as well as Bad Bot attack attempts, to support efficient and stable web service operation.
Convenient Security Management: Provides monthly reports so you can conveniently check event history.
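The attack classification described above is, at its core, signature matching. As a rough illustration only (a toy sketch, not Pentasecurity's actual rule engine; the patterns and function names below are invented for the example), a WAF matches request fields against known attack patterns:

```python
import re

# Toy signature set; real WAF rule sets are far larger and continuously updated.
SIGNATURES = {
    "SQL Injection": re.compile(r"('|--|\bunion\b.*\bselect\b|\bor\b\s+1=1)", re.I),
    "XSS": re.compile(r"(<script\b|onerror\s*=|javascript:)", re.I),
}

def classify_request(query_string):
    """Return the names of attack classes whose pattern matches the input."""
    return [name for name, pat in SIGNATURES.items() if pat.search(query_string)]

print(classify_request("id=1 OR 1=1"))                  # flags SQL Injection
print(classify_request("q=<script>alert(1)</script>"))  # flags XSS
```

A production WAF applies thousands of such rules to headers, bodies, and URLs, which is why the signature updates mentioned above matter.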
Service Configuration Diagram
Figure. WAF concept diagram
Public-facing WAF service does not provide monitoring (Security Center).
Provided Features
We provide the following features.
Intrusion detection/analysis provided
24x365 event monitoring (alert issuance, monthly report provision); note that the public-facing WAF service does not include this.
Attack classification through web firewall event analysis (Injection, XSS, File Include, File Up/Download, Web Scan, etc)
Detection of latest attack patterns (including Apache Struts vulnerabilities)
Intrusion Response
Provide IP information of attack attempts on registered URL targets
Components
The service is provided by installing a WAF license on a Virtual Server within a Samsung Cloud Platform VPC.
Constraints
To use WAF, please check the following items in advance.
If WAF is configured as a single unit, service continuity cannot be guaranteed in case of a WAF installation VM or WAF application failure.
Samsung Cloud Platform’s Load Balancer and WAF do not support bypass.
Samsung Cloud Platform provided security monitoring service is offered only for Pentasecurity products. (operation + monitoring product)
The public-oriented WAF service does not provide security monitoring services.
The WAF service is directly installed with support from engineers, and it takes some time from application to deployment.
Provision status by region
WAF is available in the environment below.
Region
General (Enterprise)
Public
Korea West (kr-west1)
Provided
Not provided
Korea East (kr-east1)
Not provided
Not provided
Korea South 1 (kr-south1)
Not provided
Provided
Korea South 2 (kr-south2)
Not provided
Provided
Korea South 3 (kr-south3)
Not provided
Provided
Table. WAF regional availability status
Preceding Service
This is a list of services that must be pre-configured before applying for the service. For details, refer to the guide provided for each service and prepare in advance.
When using the WAF service, a WAF license is installed on a Virtual Server. First, create a Virtual Server that matches the desired service specifications.
A service that safely and quickly connects the customer’s network with Samsung Cloud Platform
Table. WAF Pre-service
Reference
Customers using Secured VPN do not need a separate Direct Connect application. (Direct Connect application required when applying for Secured VPN)
However, regular (enterprise) customers who do not use Secured VPN must apply for Direct Connect separately.
* Application path : Console > Support Center > Service request
* Service : Networking > Direct Connect
* Work classification : Uplink line request
9.6.2 - How-to guides
Users can apply for the service by entering the required information for using the WAF service through the Samsung Cloud Platform Console.
WAF Apply
You can apply for and use the WAF service from the Samsung Cloud Platform Console.
To request WAF service creation, follow the steps below.
Click the All Services > Security > WAF menu to navigate to the WAF Service Home page.
On the Service Home page, click the WAF Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
Select WAF creation in the task category.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: WAF service creation request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. These are filled in automatically if you clicked the WAF Service Request button
Service Category: Security
Service: WAF
Task Category
Select the type you want to request
WAF creation: select when requesting a new service
Content
Guidance on entering basic customer information and the application process
Content to be written: End customer/MSP information
Attachment
Upload the completed WAF service application (required) and any additional files you wish to share
Each attached file must be within 5MB, up to a maximum of 5 files can be attached
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. WAF Service Creation Request Items
After checking the application process and reference information, click the Form Download > Service Request Form Download button to download the WAF Service Application Form.
Fill out the WAF Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required fields.
Category
Detailed Content
Application Information
Write required items such as application type, usage period, throughput information, basic information, etc.
Control Information
Write required items such as WAF service application information, SSL certificate information, etc.
Public customers do not need to fill this out
Table. Main contents of WAF service creation application form
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the requested content on the Support Center > Service Request List page.
After the monitoring officer verifies the submitted service request, the process for using the service proceeds.
WAF service will be launched.
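The attachment limits in the request table above (at most 5 files, each within 5 MB, restricted extensions) can be sketched as a simple pre-submission check. This is an illustrative sketch under assumed names; it is not part of the SCP Console or its API.

```python
# Illustrative pre-submission check for service-request attachments.
# Limits mirror the request table; all names here are hypothetical.
ALLOWED_EXTENSIONS = {"doc", "docx", "xls", "xlsx", "ppt", "pptx",
                      "hwp", "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif"}
MAX_FILE_BYTES = 5 * 1024 * 1024   # each file must be within 5 MB
MAX_FILE_COUNT = 5                 # at most 5 files per request

def validate_attachments(files):
    """files: list of (filename, size_in_bytes). Returns a list of error strings."""
    errors = []
    if len(files) > MAX_FILE_COUNT:
        errors.append(f"too many files: {len(files)} > {MAX_FILE_COUNT}")
    for name, size in files:
        ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"{name}: extension '.{ext}' not allowed")
        if size > MAX_FILE_BYTES:
            errors.append(f"{name}: {size} bytes exceeds the 5 MB limit")
    return errors

# Example: the application form passes, an archive file is rejected
print(validate_attachments([("waf_application.docx", 120_000),
                            ("diagram.zip", 9_000_000)]))
```

Checking these limits before submitting avoids a rejected request and a second round trip with the Support Center.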
WAF Cancel
To request termination of the WAF service, follow the steps below.
Click the All Services > Management > Support Center menu to navigate to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request button to navigate to the Service Request List page.
On the Service Request List page, click the Service Request button to navigate to the Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
Select WAF termination in the task category.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: WAF service termination request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service category and service
Service Category: Security
Service: WAF
Task Category
Select the type you want to request
WAF termination: select if you are terminating the service
Content
Guidance on entering basic customer information and the application process
Content to be written: End customer/MSP information
Attachment
Upload the completed WAF service application (required) and any additional files you wish to share
Each attached file must be within 5 MB, up to a maximum of 5 files can be attached
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. WAF service termination request items
After checking the Application Process and Reference Information, click the Form Download > Service Request Form Download button to download the WAF Service Application Form.
Fill out the WAF Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Detailed Content
Application Information
Fill in required items such as application type, usage period, throughput information, basic information, etc.
Control Information
When terminating the entire service, no input is required
Table. Main contents of WAF service termination application form
Please attach the completed application form to the attachment area.
On the service request page, click the Request button.
When the application is completed, check the submitted details on the Support Center > Service Request list page.
After the monitoring officer verifies the submitted service request, if the monitored URL, Port, and IP are deleted, the termination process is completed.
Service termination takes 3 business days, counting from the cancellation request date.
9.6.2.1 - WAF Construction Process Guide
To initiate the WAF service, a license installation and control system connection check are required after applying for the service. If you apply for the WAF service, the person in charge will contact you after checking the service request details.
Refer to the process below to apply for the WAF service.
Notice
WAF installation is directly supported by SDS engineers, and it proceeds after discussing the configuration/specifications with the customer company.
Considering the overall process schedule, apply for the service at least one month (in business days) before the desired service opening date.
Figure. WAF construction process
1. Preparatory Work
The preliminary preparation work for using the WAF service will proceed according to the following procedure.
Apply for WAF installation as a service request. (MSP → SDS)
Request WAF SW installation. (SDS → Engineer)
Provide engineer information for the WAF installation work. (SDS → MSP)
2. Samsung Cloud Platform Console work (MSP performance)
To use the WAF service, the following work is done in the Samsung Cloud Platform Console.
Register the SSL certificate in the Certificate Manager service.
Create an L4 service when load balancing is needed for WAF redundancy.
Create an L4 service when load balancing is needed for WEB server redundancy.
Set the necessary Load Balancer/Firewall/Security Group.
Load Balancer’s communication path should have a corresponding Firewall and Security Group set as follows.
The starting point is where you enter your network information.
Classification
Common Security Zone FW
Internet Gateway FW
Load Balancer FW
Virtual Server SG
Inbound (Destination)
LB service Public IP
LB service Private IP
LB service Private IP
LB Link IP
IP (example)
123.43.8.xxx
10.10.0.xxx
10.10.0.xxx
192.168.254.xxx
Port
LB Service Port
LB Service Port
LB Service Port
Forward/Health Check Port
Table. FW/SG setting items according to the communication path of Load Balancer
Set the HTTP redirection of the LB service. (optional)
Load Balancer’s HTTP redirection item should be set as follows.
Load Balancer Service
L7 HTTP
L7 HTTPS
LB Profile > Profile Type
Application
Application
LB Profile > Service Classification
L7 HTTP
L7 HTTP
LB Profile > HTTP Redirection
Settings
Not Set
IP/NAT IP
Set the same way
Set the same way
Service Port
80
443
Transfer Port
80
80
Server Group > WAF in use
Not set
WAF Virtual Server
Server Group > WAF not used
Not set
WEB Virtual Server
Certificate Registration
Unregistered
Registered
Table. Load Balancer's HTTP redirection settings
Grant WAF engineers access permission to the WAF Virtual Server.
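The HTTP redirection settings in the table above amount to this: the port-80 (L7 HTTP) listener answers every request with a redirect to HTTPS, while the port-443 (L7 HTTPS) listener terminates TLS and forwards traffic to the server group on transfer port 80. A minimal sketch of that behavior follows; it is illustrative only, with invented function names, not the Load Balancer implementation.

```python
def handle_request(host, path, port):
    """Mimic the two L7 listeners from the redirection table (toy model)."""
    if port == 80:
        # L7 HTTP listener with HTTP Redirection set: 301 to the same
        # host/path over HTTPS
        return 301, f"https://{host}{path}"
    if port == 443:
        # L7 HTTPS listener: TLS terminated at the LB, traffic forwarded
        # to the server group on transfer port 80
        return 200, f"forward to server group on port 80 ({host}{path})"
    return 404, None

print(handle_request("www.example.com", "/login", 80))
```

In the real configuration the same effect is achieved declaratively via the LB Profile items listed in the table, with no code involved.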
3. WAF SW installation and testing (WAF engineer & MSP)
When the WAF specification is confirmed, the engineer installs the WAF software and proceeds with the test.
4. Policy request and reflection for WAF security monitoring
WAF security monitoring requires policies to be created and applied.
Request the necessary policy from the Samsung Cloud Platform Console. (SDS → MSP)
Deliver and apply the created policy. (SDS → MSP)
Check the details that require policy registration. (Direct Connect Firewall/Security Group/Routing)
SDS checks whether the WAF access path is secured for each customer company; if additional registration is required, request it by email.
SDS checks whether the log transmission path from WAF to SIEM is secured for each customer company; if additional registration is required, request it by email.
Limitations
Before installing WAF, check the following restrictions first.
When WAF is configured alone, service continuity cannot be guaranteed in case of WAF installation Virtual Server or WAF application failure (Samsung Cloud Platform LB and WAF do not support bypass)
If the service availability of the WAF-applied target website is important, WAF duplication application is required. If WAF duplication application is required, it must be requested separately.
Samsung Cloud Platform service provides security monitoring through Pentasecurity products only.
Other vendors’ products are available in the Marketplace, but the Samsung SDS security monitoring service is not provided for them.
9.6.3 - Release Note
WAF
2025.07.01
NEW WAF Service Official Version Release
We are launching a WAF service to protect web applications from web vulnerabilities and attacks.
9.7 - DDoS Protection
9.7.1 - Overview
Service Overview
DDoS Protection is a service that detects and defends against DDoS (Distributed Denial of Service) attacks that generate large amounts of traffic intensively and cause service disruptions. Through continuous monitoring, it detects and blocks external traffic attacks to protect the servers inside the Samsung Cloud Platform. When a DDoS attack occurs, by blocking the attack traffic, it minimizes the traffic load entering the internal servers of the Samsung Cloud Platform, ensuring the continuity of web services.
Features
Rapid Attack Detection: Detects DDoS attacks in real time when a large amount of traffic is incoming. Continuously updates DDoS defense items to effectively respond to the latest attack techniques.
Effective Attack Defense: When a DDoS attack occurs, it detects in real time and blocks attack traffic to ensure service availability, supporting regular users to access the website normally.
Stable web service operation: Based on large‑scale network operation experience, we can effectively respond to external security threats. Additionally, we provide monthly reports to check the details of events.
Diagram
Figure. DDoS Protection concept diagram
The public DDoS Protection service does not provide monitoring (Security Center).
Provided Features
We provide the following features.
Intrusion Detection and Analysis
24x365 event monitoring (However, the public DDoS Protection service does not provide this content.)
DDoS attack automatic detection
Intrusion Response
Provide learning-based detection and blocking for various L3/L4 level DDoS attacks
Monitoring Information Provision
Alarm on event detection
Monthly report provision
Components
DDoS Protection provides services based on public IP configured within the VPC.
We provide services targeting servers that can be accessed via the Internet, and blocking is possible based on attacker IP.
Constraints
When providing DDoS Protection service, a minimum one-month learning period is required to set the protection threshold, and we analyze the learned thresholds to provide optimal policy settings.
Provision status by region
DDoS Protection is available in the following environments.
Region
General (Enterprise)
Public
Korea West (kr-west1)
Provided
Not provided
Korea East (kr-east1)
Not provided
Not provided
Korea South 1 (kr-south1)
Not provided
Not provided
Korea South 2 (kr-south2)
Not provided
Not provided
Korea South 3 (kr-south3)
Not provided
Provided
Table. DDoS Protection regional provision status
Preliminary Service
This is a list of services that must be pre-configured before creating the DDoS Protection service. For details, refer to the guide provided for each service and prepare in advance.
When creating a VPC’s Internet Gateway, you must select SIGW (Secure Internet Gateway) in the category to be able to use DDoS Protection.
Caution
When creating a VPC’s Internet Gateway, if you select Internet Gateway in the ‘Category’, you cannot use the DDoS Protection service.
If you change to Secure Internet Gateway, you need to change the public IP you are using.
A service that provides an independent virtual network in a cloud environment
Table. DDoS Protection Preliminary Service
9.7.2 - How-to guides
The user can apply for the service by entering the required information for using the DDoS Protection service through the Samsung Cloud Platform Console.
DDoS Protection Create
You can apply for and use the DDoS Protection service on the Samsung Cloud Platform Console.
To request DDoS Protection service creation, follow the steps below.
Click the All Services > Security > DDoS Protection menu to navigate to the DDoS Protection Service Home page.
On the Service Home page, click the DDoS Protection Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
In the task category, select Create DDoS Protection.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: DDoS Protection service creation request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. These are filled in automatically if you clicked the DDoS Protection Service Request button
Service Category: Security
Service: DDoS Protection
Task Category
Select the type you want to request
Create DDoS Protection: select when requesting a new service
Content
Guidance on entering basic customer information and the application process
Content to be written: End customer/MSP information
Attachment
Upload the completed DDoS Protection service application form (required) and any additional files you wish to share
Each attachment can be up to 5MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files are allowed
Table. DDoS Protection Service Creation Request Items
After checking the application process and reference information, click the Form Download > Service Request Form Download button to download the DDoS Protection Service Application Form.
Fill out the DDoS Protection Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Detailed Content
Application Information
Fill in required items such as application type, usage period, basic information, etc.
Basic Information: Account name, Project name, recipient information input
Control Information
Write required items such as protected target IP, exception handling IP, etc. (need to specify purpose per IP)
Write application classification per IP
New: select when applying for a new service
Public customers do not need to fill this out
Table. DDoS Protection service creation application form main contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the applied content on the Support Center > Service Request List page.
After the monitoring officer verifies the submitted service request, the process for using the service proceeds.
DDoS Protection service will be launched.
DDoS Protection Cancel
If you want to request termination of DDoS Protection service, follow the steps below.
Click the All Services > Management > Support Center menu to navigate to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request button to navigate to the Service Request List page.
On the Service Request List page, click the Service Request button to navigate to the Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
Select DDoS Protection termination in the task category.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: DDoS Protection service termination request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service category and service
Service Category: Security
Service: DDoS Protection
Task Category
Select the type you want to request
DDoS Protection termination: select if you want to cancel the service
Content
Guidance on entering basic customer information and the application process
Content to write: End customer/MSP information
Attachment
Upload the completed DDoS Protection service application form (required) and any additional files you wish to share
Each attached file must be within 5 MB, and up to 5 files can be attached
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. DDoS Protection service termination request items
After checking the Application Process and Reference Information, click the Form Download > Service Request Form Download button to download the DDoS Protection Service Application Form.
Fill out the DDoS Protection Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required fields.
Category
Details
Application Information
Fill in required items such as application type, usage period, basic information, etc.
Application type: Select termination
Usage period: Enter desired termination date
Basic information: Account name, Project name, Recipient information
Control Information
No input required when terminating the entire service
Table. DDoS Protection Service Termination Application Form Main Contents
Attach the completed application form to the attachment section.
Click the request button on the service request page.
When the application is completed, check the applied content on the Support Center > Service Request list page.
After the monitoring staff verifies the submitted service request, if the monitored IP/policy is deleted, the termination process is completed.
Service termination takes 3 business days, including the cancellation request date.
9.7.3 - Release Note
DDoS Protection
2025.07.01
NEW DDoS Protection Service Official Version Release
We are launching a DDoS Protection service that provides detection and response to large-scale network traffic attacks.
9.8 - IPS
9.8.1 - Overview
Service Overview
IPS (Intrusion Prevention System) is a service that continuously updates intrusion detection policies reflecting the latest security threats to respond in real time. It also detects threats up to the application layer through packet monitoring.
Features
Latest Attack Type Detection: Generate detection patterns for new threats, and improve detection rate through continuous signature management. Apply the TI DB of security specialist companies and self-developed advanced detection policies, and provide services by correlational analysis of the relationship between attack patterns detected by IPS and patterns set in SIEM (Security Information and Event Management).
Cloud Optimized Operations: We provide detection services optimized for cloud environments. When a security threat occurs, we respond quickly through security professionals.
Efficient response and support: Monthly reports are provided to check the details of the event.
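The correlation analysis mentioned above can be pictured as cross-referencing IPS detections against SIEM escalation rules. The sketch below is purely illustrative: the event shapes, addresses, and escalation rule are invented for the example and do not reflect the actual SIEM logic.

```python
from collections import Counter

# Hypothetical IPS events: (source_ip, attack_category)
ips_events = [
    ("198.51.100.9", "web-scan"),
    ("198.51.100.9", "sql-injection"),
    ("203.0.113.20", "web-scan"),
    ("198.51.100.9", "sql-injection"),
]

def correlate(events, min_total=3, min_categories=2):
    """Toy SIEM-style rule: escalate a source seen with at least
    min_categories distinct attack categories or min_total events
    within the analysis window."""
    totals = Counter(src for src, _ in events)
    categories = {}
    for src, cat in events:
        categories.setdefault(src, set()).add(cat)
    return sorted(src for src in totals
                  if totals[src] >= min_total
                  or len(categories[src]) >= min_categories)

print(correlate(ips_events))
```

The value of the correlation is that a single IPS signature hit may be noise, while the same source triggering several distinct patterns is a much stronger signal.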
Configuration diagram
Figure. IPS concept diagram
The public IPS service does not provide monitoring (Security Center).
Provided Features
We provide the following features.
Intrusion Detection and Analysis
In-depth analysis through raw data
New threat detection pattern update reflecting external trend information
Periodic detection pattern optimization
Monitoring Information Provision
Monthly report provision
Intrusion response
Provide IP information of attack attempts on SCP client servers
Components
IPS provides services based on public IP configured within the VPC.
We provide services targeting servers that can be accessed via the Internet, and when a user requests a service, we refer to the server (Virtual Server) specifications listed in the service application form.
Constraints
IPS provides detection for unencrypted HTTP traffic only. It does not monitor traffic encrypted with HTTPS (SSL).
The public IPS service does not provide monitoring (Security Center).
Provision status by region
IPS can be provided in the environment below.
Region
General (Enterprise)
Public
Korea West (kr-west1)
Provided
Not provided
Korea East (kr-east1)
Not provided
Not provided
Korea South 1 (kr-south1)
Not provided
Provided
Korea South 2 (kr-south2)
Not provided
Provided
Korea South 3 (kr-south3)
Not provided
Provided
Table. IPS Region-wise Provision Status
Preceding Service
This is a list of services that must be pre-configured before creating the IPS service. For details, refer to the guide provided for each service and prepare in advance.
When creating a VPC’s Internet Gateway, you must select SIGW (Secure Internet Gateway) in the category to be able to use IPS.
Caution
When creating a VPC’s Internet Gateway, if you select Internet Gateway in the ‘Category’, you cannot use the IPS service.
If you change to Secure Internet Gateway, you need to change the public IP you are using.
A service that provides an independent virtual network in a cloud environment
Table. IPS Pre-service
9.8.2 - How-to guides
The user can apply for the service by entering the required information for using the IPS service through the Samsung Cloud Platform Console.
Create IPS
You can apply for the IPS service and use it from the Samsung Cloud Platform Console.
To request IPS service creation, follow the steps below.
Click the All Services > Security > IPS menu to navigate to the IPS Service Home page.
On the Service Home page, click the IPS Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
Select IPS creation in the task category.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: IPS service creation request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. These are filled in automatically if you clicked the IPS Service Request button
Service Category: Security
Service: IPS
Task Classification
Select the type you want to request
IPS creation: select when requesting a new service
Content
Guidance on entering basic customer information and the application process
Content to be written: End customer/MSP information
Attachment
Upload the completed IPS service application (required) and any additional files you wish to share
Each attached file can be up to 5 MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. IPS Service Creation Request Items
After checking the application process and reference information, click the Form Download > Service Request Form Download button to download the IPS Service Application Form.
Fill out the IPS Service Application Form.
Refer to the item-by-item description of the Application Information and Control Information tabs, and fill out the required fields.
Category
Details
Application Information
Fill in required items such as application type, usage period, basic information, etc.
Application type: select application
Usage period: enter desired start date, contract status, expected usage period
Basic information: enter Account name, Project name, recipient information
Control Information
Write required items such as protected target IP, exception handling IP, etc. (Purpose per IP required)
Write application classification per IP
New: select when applying for a new service
Public customers do not need to fill this out
Table. IPS Service Creation Application Form Main Contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the requested details on the Support Center > Service Request List page.
After the monitoring officer verifies the submitted service request, the process for using the service proceeds.
IPS service will be launched.
Cancel IPS
To request termination of the IPS service, follow the steps below.
Click the All Services > Management > Support Center menu to navigate to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request button to navigate to the Service Request List page.
On the Service Request List page, click the Service Request button to navigate to the Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
Select IPS termination in the task category.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: IPS service termination request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service category and service
Service Category: Security
Service: IPS
Task Category
Select the type you want to request
IPS termination: select if canceling the service
Content
Guidance on entering basic customer information and the application process
Content to be written: End customer/MSP information
Attachment
Upload the completed IPS service application (required) and any additional files you want to share
Each attached file can be up to 5 MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. IPS Service Termination Request Items
After checking the Application Process and Reference Information, click the Form Download > Service Request Form Download button to download the IPS Service Application Form.
Fill out the IPS Service Application Form.
Refer to the item-by-item description of the Application Information and Control Information tabs, and fill out the required fields.
Category
Detailed Content
Application Information
Fill in required items such as application type, usage period, basic information
Application type: select Termination
Usage period: Enter desired termination date
Basic information: Account name, Project name, recipient information
Control Information
When the entire service is cancelled, no input is required
Table. Main contents of IPS service termination application form
Attach the completed application form in the attachment area.
On the Service Request page, click the Request button.
When the application is completed, check the applied content on the Support Center > Service Request list page.
After the monitoring staff verifies the submitted service request, the termination process is completed once the monitored IP is deleted.
Service termination takes 3 business days, including the cancellation request date.
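The 3-business-day window above counts the request date as day 1. A simple sketch of the expected completion date that skips only weekends (public holidays are ignored; the helper is illustrative, not a platform API):

```python
from datetime import date, timedelta

def completion_date(request_date, business_days=3):
    """Expected completion date, counting the request date as day 1.
    Weekends are skipped; public holidays are ignored in this sketch."""
    d = request_date
    counted = 1 if d.weekday() < 5 else 0  # Mon-Fri count as business days
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:
            counted += 1
    return d

# A request made on Monday 2025-07-07 would complete on Wednesday 2025-07-09
print(completion_date(date(2025, 7, 7)))
```

A request submitted late in the week spills into the next week, e.g. a Friday request completes the following Tuesday.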
9.8.3 - Release Note
IPS
2025.07.01
NEW IPS Service Official Version Release
Launched an IPS service that continuously updates IPS intrusion detection policies reflecting the latest security threats and responds in real-time.
9.9 - Secured Firewall
9.9.1 - Overview
Service Overview
Secured Firewall is a next-generation firewall service for cloud network security provided by Samsung Cloud Platform. It manages network access to servers based on IP address/port policies, and supports detailed analysis in the event of a security incident.
Special Features
Diverse Network Environment Protection: Supports setting robust network security policies to safely protect cloud assets. Detects and blocks based on IP, protocol/port-based ACL policies, providing a secure network communication environment.
Easy and simple network management: You can easily establish firewall policies, and conveniently create and manage applied rules such as source/destination IP, protocol/port, inbound/outbound, etc.
Security expert-based firewall policy implementation: We support establishing firewall policies optimized for customers’ systems migrating to the cloud. We provide a service where we receive firewall policy requests from customers to more easily apply security policies in the cloud environment, and security professionals reflect the policies.
Security Authentication Product-Based Service: You can use firewall services that meet various networks and requirements for internet connectivity and ensure an optimized security environment. This safely protects the internal network from unauthorized access.
Diagram
Figure. Secured Firewall concept diagram
Public-oriented Secured Firewall service does not provide monitoring (Security Center).
Provided Features
We provide the following features.
Cloud Optimized Firewall
Apply domain policy considering cloud environment
Apply firewall rules and logging
Monitoring Information Provision
Monthly report provision
Components
Secured Firewall provides services based on public IP configured within the VPC.
We provide services targeting servers that can be accessed via the Internet, and when a user requests a service, we provide it based on the server (Virtual Server) specifications listed in the service application form.
Constraints
Firewall policy applications cannot be applied for in the Samsung Cloud Platform Console.
We will send the application form to the email you registered when applying for the service. Please refer to the form to proceed with the firewall application.
Access control management of the system built inside the Pool (Security Group, etc. firewall policy management) must be performed by the customer directly.
Provision status by region
Secured Firewall is available in the environment below.
Region
General (Enterprise)
Public
Korea West (kr-west1)
Provided
Not provided
Korea East (kr-east1)
Not provided
Not provided
Korea South 1 (kr-south1)
Not provided
Provided
Korea South 2 (kr-south2)
Not provided
Provided
Korea South 3 (kr-south3)
Not provided
Provided
Table. Secured Firewall Provision Status by Region
Preliminary Service
This is a list of services that must be pre-configured before creating the Secured Firewall service. For details, refer to the guide provided for each service and prepare in advance.
When creating a VPC’s Internet Gateway, you must select SIGW (Secure Internet Gateway) in the type to be able to use Secured Firewall.
Caution
When creating a VPC’s Internet Gateway, if you select Internet Gateway as the type, you cannot use the Secured Firewall service.
If you change to Secure Internet Gateway, you must change the public IP in use.
Service that protects web applications from web vulnerabilities and attacks
Table. Secured Firewall Preliminary Service
9.9.2 - How-to guides
The user can apply for the service by entering the required information for using the Secured Firewall service through the Samsung Cloud Platform Console.
Secured Firewall Create
You can apply for and use the Secured Firewall service on the Samsung Cloud Platform Console.
To request creation of the Secured Firewall service, follow the steps below.
Click the All Services > Security > Secured Firewall menu to navigate to the Secured Firewall Service Home page.
On the Service Home page, click the Secured Firewall Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
In the task category, select Secured Firewall creation.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: Secured Firewall service creation request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. If the Secured Firewall Service Request button was pressed, these are entered automatically
Service Category: Security
Service: Secured Firewall
Task Category
Select the type you want to request
Create Secured Firewall: select when requesting a new service
Content
Enter basic customer information for the request
Content to include: End customer/MSP information
Attachment
Upload the completed Secured Firewall service application (required) and any additional files you wish to share
Each attached file must be within 5MB, and up to 5 files can be attached
Only doc, docx, xls, xlsx, ppt, ppts, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Secured Firewall service creation request items
After checking the application process and reference information, click the Form Download > Service Request Form Download button to download the Secured Firewall Service Application Form.
Fill out the Secured Firewall Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Details
Application Information
Fill in required items such as application type, usage period, basic information
Basic information: enter Account name, Project name, and recipient information
Control Information
Write required items such as protected target IP, exception handling IP, etc. (need to specify purpose per IP)
Write application classification per IP
New: select when applying for a new service
Public customers do not need to fill out
Table. Secured Firewall Service Creation Application Form Main Contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the applied content on the Support Center > Service Request List page.
After the monitoring officer verifies the submitted service request, the process for using the service proceeds.
Once processing is complete, the Secured Firewall service is provided.
Secured Firewall Cancel
To request termination of the Secured Firewall service, follow the steps below.
Click the All Services > Management > Support Center menu to navigate to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request button to navigate to the Service Request List page.
On the Service Request List page, click the Service Request button to open the Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
In the task category, select Secured Firewall termination.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: Secured Firewall service termination request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service category and service
Service Category: Security
Service: Secured Firewall
Task Category
Select the type you want to request
Secured Firewall termination: select if you are terminating the service
Content
Enter basic customer information for the request
Content to include: End customer/MSP information
Attachment
Upload the completed Secured Firewall service application (required) and any additional files you wish to share
Each attached file must be within 5 MB, and up to 5 files can be attached
Only doc, docx, xls, xlsx, ppt, ppts, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Secured Firewall service termination request items
After checking the Application Process and Reference Information, click the Form Download > Service Request Form Download button to download the Secured Firewall Service Application Form.
Fill out the Secured Firewall Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Detailed Content
Application Information
Fill in required items such as application type, usage period, basic information, etc.
Application type: Select termination
Usage period: Enter desired termination date
Basic information: Account name, Project name, Recipient information
Control information
When the entire service is terminated, no input is required
Table. Secured Firewall Service Termination Application Form Key Contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the submitted details on the Support Center > Service Request list page.
After the monitoring staff verifies the submitted service request, the termination process is completed once the monitored IP is deleted.
Service termination takes 3 business days, including the cancellation request date.
9.9.3 - Release Note
Secured Firewall
2025.07.01
NEW Secured Firewall Service Official Version Released
Samsung Cloud Platform has released Secured Firewall, a next-generation firewall service for cloud network security.
9.10 - Secured VPN
9.10.1 - Overview
Service Overview
Secured VPN (Virtual Private Network) is a service that securely connects external customer networks and the Samsung Cloud Platform network through an encrypted virtual private network. Authenticated customer networks can securely access the Samsung Cloud Platform at any time via a secure channel.
Features
Rapid Service Provision: To ensure a secure VPN communication link between the customer’s network and the Samsung Cloud Platform, a dedicated VPN device must be deployed, and during deployment, we provide installation support services by security specialists.
Secure Access: Provides a virtual network tunnel equipped with certified authentication devices and nationally certified encryption modules that have been verified for performance and stability, allowing customers to safely connect from their external network to the internal network built on the Samsung Cloud Platform.
Convenient operating environment: Providing network configuration and VPN operation services optimized for the customer’s environment by security experts, we provide an operating environment that enables easier use of VPN services.
Configuration diagram
Figure. Secured VPN concept diagram
Provided Features
We provide the following features.
IPSec VPN provision
IPSec VPN provided with nationally validated cryptographic module
Virtual Private Gateway creation
Create a Virtual Private Gateway to connect the internal cloud network with the customer’s network
Select traffic bandwidth for bidirectional communication considering network scale
VPN Tunnel Creation
Redundant IPsec VPN Gateway configuration ensures service continuity in case of failure
Components
Secured VPN (Virtual Private Network) consists of a center VPN managed by SDS and a branch VPN installed within the customer’s internal network.
Constraints
The center VPN equipment is a shared device used by many customers, and it cannot be used if it overlaps with VPC ranges used by other client companies or ranges currently used in Samsung Cloud Platform. Customers who need to use the Secured VPN service, please check the available range in advance.
Example: If Customer A is already using the 10.0.0.1/24 range, Customer B cannot use 10.0.0.1/24 when newly applying for Secured VPN. Check the available ranges in advance and configure the VPC range accordingly.
Reference
To check the available range, go to Console > Support Center > Contact Us or inquire via mssp.scp@samsung.com.
After checking the available range, SDS changes the IP by applying NAT on the branch VPN (rental equipment). However, if the customer purchased the branch VPN equipment directly, the customer performs the NAT setting.
MSP adds the NATed IP to the VPC routing rule in the Samsung Cloud Platform Console.
Check if the branch VPN and Samsung Cloud Platform IP ranges overlap. If the destination IP range is included in the source IP range, the router will send traffic internally instead of externally, making communication impossible.
The branch VPN is provided as a rental of SECUI equipment, and a separate cost is incurred when renting the equipment. If the client has VPN equipment in use, it is necessary to verify whether non-SECUI vendor equipment is compatible with the center VPN equipment (SECUI).
For matters related to compatibility testing other than SECUI equipment, Console > Support Center > Contact Us or contact via mssp.scp@samsung.com.
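The range-overlap constraint above can be pre-checked with Python's standard ipaddress module before requesting a range; the sample CIDRs are illustrative, and strict=False accepts ranges written with host bits set (like 10.0.0.1/24 above):

```python
import ipaddress

def ranges_overlap(cidr_a, cidr_b):
    """True if the two CIDR ranges share at least one address."""
    a = ipaddress.ip_network(cidr_a, strict=False)
    b = ipaddress.ip_network(cidr_b, strict=False)
    return a.overlaps(b)

# e.g. a range already in use vs. a candidate VPC range
print(ranges_overlap("10.0.0.0/24", "10.0.0.128/25"))  # True: candidate lies inside the in-use range
print(ranges_overlap("10.0.0.0/24", "10.0.1.0/24"))    # False: no shared addresses
```

The overlaps() check also covers the containment case noted above, where the destination range sits inside the source range and traffic never leaves the local router.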
Provision status by region
Secured VPN is available in the following environment.
Region
General (Enterprise)
Public
Korea West (kr-west1)
Provided
Not provided
Korea East (kr-east1)
Not provided
Not provided
Korea South 1 (kr-south1)
Not provided
Provided
Korea South 2 (kr-south2)
Not provided
Provided
Korea South 3 (kr-south3)
Not provided
Provided
Table. Secured VPN Provision status by region
Preliminary Service
This is a list of services that must be pre-configured before creating the Secured VPN service. For details, refer to the guide provided for each service and prepare in advance.
When creating Direct Connect, create a connection to the target VPC and DCon-VPN.
A service that safely and quickly connects the customer’s network with Samsung Cloud Platform
Table. Secured VPN Preliminary Service
Using the Secured VPN service requires configuration work for communication between the customer’s Office (On-premise) and the customer’s VPC within Samsung Cloud Platform. Follow the process below, including external integration software and VPN settings, Direct Connect firewall opening, etc., and apply for an Uplink line.
Application path : Console > Support Center > Service request
Service : Networking > Direct Connect
Work classification : Uplink line request
Note
Direct Connect creation and Uplink line application must be completed to use Secured VPN service.
9.10.2 - How-to guides
The user can create the service by entering the required information for using the Secured VPN (Virtual Private Network) service through the Samsung Cloud Platform Console.
Secured VPN Create
You can apply for and use the Secured VPN service from the Samsung Cloud Platform Console.
To request the creation of a Secured VPN service, follow the steps below.
Click the All Services > Security > Secured VPN menu to navigate to the Secured VPN Service Home page.
On the Service Home page, click the Secured VPN Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
In the task category, select Secured VPN creation.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: Secured VPN service creation request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. If the Secured VPN service request button is pressed, it is entered automatically
Service Category: Security
Service: Secured VPN
Task Category
Select the type you want to request
Secured VPN creation: select when requesting a new service
Content
Enter basic customer information for the request
Content to include: End customer/MSP information
Attachment
Upload the completed Secured VPN service application form (required) and any additional files you wish to share
Each attached file can be up to 5 MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, ppts, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Secured VPN Service Creation Request Items
After checking the application process and reference information, click the Form Download > Service Request Form Download button to download the Secured VPN Service Application Form.
Fill out the Secured VPN Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Details
Application Information
Fill in required items such as application type, usage period, basic information, etc.
Application type: select application
Usage period: enter desired start date
Guaranteed bandwidth: select bandwidth
Basic information: enter Account name, Project name, recipient information
Control information
Fill in the required items such as common application information and same-model/different-model connection application information (specify the purpose of each IP)
Same-model connection application information: when connecting SECUI equipment
Different-model connection application information: when connecting equipment other than SECUI
Table. Secured VPN Service Creation Application Form Main Contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the submitted details on the Support Center > Service Request List page.
After the monitoring officer verifies the submitted service request, the process for using the service proceeds.
Once processing is complete, the Secured VPN service is provided.
Secured VPN Cancel
To request termination of the Secured VPN service, follow the steps below.
Click the All Services > Management > Support Center menu to navigate to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request button to navigate to the Service Request List page.
On the Service Request List page, click the Service Request button to open the Service Request page.
On the Service Request page, enter or select the relevant information in the required input fields.
In the task category, select Secured VPN termination.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: Secured VPN Service Termination Request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service category and service
Service Category: Security
Service: Secured VPN
Task Category
Select the type you want to request
Secured VPN termination: select if you want to terminate the service
Content
Enter basic customer information for the request
Content to include: End customer/MSP information
Attachment
Upload the completed Secured VPN service application form (required) and any additional files you wish to share
Each attached file can be up to 5 MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, ppts, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Secured VPN Service Termination Request Items
After checking the Application Process and Reference Information, click the Form Download > Service Request Form Download button to download the Secured VPN Service Application Form.
Fill out the Secured VPN Service Application Form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Detailed Content
Application Information
Fill in required items such as application type, usage period, basic information, etc.
Application type: select termination
Usage period: enter desired termination date
Guaranteed bandwidth: select the bandwidth applied for
Basic information: enter Account name, Project name, recipient information
Control Information
When terminating the entire service, no input is required
Table. Secured VPN Service Termination Application Form Main Contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the applied content on the Support Center > Service Request list page.
After the monitoring staff verifies the submitted service request, if the monitored target IP is deleted, the termination process is completed.
Service termination takes 3 business days, including the cancellation request date.
9.10.2.1 - Secured VPN Construction Process Guide
To start the Secured VPN service, the branch VPN must be installed in the customer’s network range, followed by connection inspection work. However, if you use your own directly operated VPN, the connection inspection work is not required.
Please refer to the process below to apply for the Secured VPN service.
Caution
When using the Secured VPN service, please check the restrictions.
Figure. Secured VPN Construction Process
1. Samsung Cloud Platform Console work (performed by MSP)
Apply for Direct Connect.
Create a connection target VPC and DCon-VPN connection.
Apply for Uplink line.
Application purpose: This is a setup work for communication between the customer’s Office (On-premise) and the customer VPC within the Samsung Cloud Platform.
Application path: select Console > Support Center > Service Request.
Service: Networking > Direct Connect
Work classification: Uplink line application
Please inquire about the construction period and Uplink line work schedule through Console > Support Center > Contact Us.
Set up routing, such as Firewall, Security Group, Direct Connect, etc.
2. Routing and Firewall Settings (performed by customer)
Set up routing between the customer’s Office internal subnet and branch VPN, and configure the customer’s firewall.
Prior consultation is required for routing and firewall settings. (SDS → MSP → Customer Company)
Set up the Samsung Cloud Platform bandwidth and the customer’s Office bandwidth to allow for two-way communication.
3. Installation of the customer’s VPN equipment and tunnel opening (performed by MSP/SDS)
When installing VPN equipment at the customer site, you can rent SDS equipment or use your own equipment. Follow the process that fits your situation.
Case 1) Using the branch VPN equipment as SECUI leased equipment provided by SDS
Check the specifications, quantity, schedule, and installation location of the leased VPN equipment. (MSP → SDS)
Request creation of a pre-installation environment survey for the VPN installation. (SDS → MSP)
Visit the customer’s site and install the SECUI leased VPN equipment. (SDS)
Open a tunnel between the branch VPN and the center VPN. (SDS)
Case 2) When using the branch VPN equipment as the customer’s own equipment
Check the specifications and schedule of the branch VPN equipment. (MSP → SDS)
Open a tunnel between the branch VPN and the center VPN. (Customer/SDS)
Reference
In case the customer requests a VPN installation work plan, please inquire through Console > Support Center > Contact Us or mssp.scp@samsung.com.
Please proceed with the work in compliance with the National Intelligence Service VPN installation guide and security review standards.
4. End-to-End test (performed by MSP/SDS)
Check and share the test schedule after installing the branch VPN equipment (or configuring existing equipment). (SDS → MSP)
Check the communication between the branch VPN device and VPC (both directions).
Caution
The End-to-End test may fail due to reasons such as not applying for an uplink line, customer routing and firewall setting errors, etc.
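The actual End-to-End test is performed by MSP/SDS, but a basic TCP reachability probe of the kind an operator might run from the office side can be sketched as follows (host and port are placeholders):

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Probe a server in the VPC from the office network (placeholder address):
# tcp_reachable("10.0.0.10", 22)
```

A failed probe in both directions is consistent with the causes listed in the caution above (missing Uplink line, routing or firewall misconfiguration).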
9.10.3 - Release Note
Secured VPN
2025.07.01
NEW Official Release of Secured VPN Service
Launched Secured VPN service that securely connects the customer network outside and the cloud network of Samsung Cloud Platform through an encrypted virtual private network.
9.11 - FPMS
9.11.1 - Overview
Service Overview
FPMS (Firewall Policy Management System) is a firewall operation automation service for efficient and safe firewall operation in various cloud environments. It automates processes that operators currently perform manually, eliminating human errors and failures and reducing the service lead time for users.
Features
Failure Prevention: Prevents human errors that may occur when registering firewall policies manually, and validates that the IP and port information in the application conforms to the expected syntax and structure, converting it to correct data to prevent failures in advance.
Improved Operational Convenience: Provides features such as automatic firewall policy application and replication of applied policies to another firewall for redundant configuration. The firewall policy expiration feature allows policies to be active only for a set period, and automatic deletion of inactive policies reduces the operational burden on personnel.
Firewall Policy Optimization: Optimizes the firewall policy being applied by utilizing optimization algorithms, and also checks for duplicate or permanent policies to prevent unnecessary rule applications.
Continuous Security Enhancement: Analyze and diagnose excessive open policies, expired or unmanaged policies, and quantify the scores by department to easily grasp the vulnerability status. Additionally, the vulnerability handling guide enables continuous security enhancement.
Service Composition Diagram
Figure. FPMS Configuration Diagram
Provided Features
FPMS provides the following functions.
Policy Management
Policy change history management and real-time monitoring
Policy search and policy expiration management
Policy Auto Registration
Check application information consistency and automatic conversion
Network operation/security standard inspection and conversion
Automatic creation/application of rules based on firewall vendor characteristics
Policy Optimization
Remove duplicates of policy address/port/protocol
Policy pattern analysis optimization
Analysis of unused/expired/duplicate policies
Policy Security Analysis
Provides security index results by firewall policy
Analyze the similarity between application information and policy, and report risks after analysis
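The duplicate and expired-policy analysis listed above can be pictured with a small sketch; the rule representation here is hypothetical and not the FPMS data model:

```python
from datetime import date

def analyze_policies(rules, today):
    """Split firewall rules into unique, duplicate, and expired lists.
    (src, dst, proto, port) identifies a duplicate, and an 'expires'
    date in the past marks the rule expired."""
    seen, unique, duplicates, expired = set(), [], [], []
    for r in rules:
        if r.get("expires") and r["expires"] < today:
            expired.append(r)
            continue
        key = (r["src"], r["dst"], r["proto"], r["port"])
        (duplicates if key in seen else unique).append(r)
        seen.add(key)
    return unique, duplicates, expired
```

A real optimizer would also merge overlapping address and port ranges; this sketch only shows the classification step.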
Component
Firewall
FPMS can register and manage firewalls in operation.
It is necessary to check if the firewall is connectable before registration. (Check manufacturer, model name, OS version)
FPMS uses APIs to access firewall devices in order to apply policies or retrieve information. To enable this access, the firewall operator must create a linked account on the firewall device and configure or verify the API settings.
Firewall Application System
To retrieve the firewall application data, FPMS and the application system must be linked.
Constraints
The limitations of the FPMS service are as follows. Please confirm the limitations below before use and reflect them in your service usage plan.
A separate infrastructure must be prepared for the installation and provision of FPMS services.
VM and DBMS configuration for Web/App services and data storage are required.
Regional Provision Status
FPMS can be provided in the following environment.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. FPMS Regional Provision Status
Preliminary Service
FPMS has no preliminary service.
9.11.2 - How-to guides
The user can create the service by entering the necessary information to receive the FPMS service through the Samsung Cloud Platform Console.
Create FPMS
You can create and use the FPMS service in the Samsung Cloud Platform Console.
To request creation of the FPMS service, follow the steps below.
Click the All Services > Security > FPMS menu to navigate to the FPMS Service Home page.
On the Service Home page, click the FPMS Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the corresponding information in the required input fields.
In the work division, select FPMS Service Creation.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: FPMS Service Creation Request
Region
Select the location of Samsung Cloud Platform
Service
Select service group and service. If the FPMS service request button is pressed, it is automatically entered
Service group: Security
Service: FPMS
Work Division
Select the work you want to request
FPMS Service Creation: Select if you are requesting a new service
Content
Check the service application process and notes, and enter the detailed application content
Attachments
If you have additional files you want to share for service application, you can upload them
Attached files can be up to 5 files, each within 5MB
Only doc, docx, xls, xlsx, ppt, ppts, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. FPMS Service Creation Request Items
Check the required information entered on the Service Request page and click the Request button.
Once the FPMS service application is completed, the FPMS dedicated technical support manager will contact you by email about FPMS installation and usage settings.
After confirming the details with the FPMS dedicated technical support manager, FPMS installation and related system linkage work will proceed.
FPMS Application History Check
After applying for the FPMS service, you can check the detailed history and processing process.
To check the FPMS service application history, follow the steps below.
Click the All Services > Support Center menu to navigate to the Support Center Service Home page.
On the Service Home page, click the Service Request menu to navigate to the Service Request List page.
On the Service Request List page, select the application item to navigate to the Service Request Details page.
On the Service Request Details page, check the details and processing status.
Guide
Detailed FPMS information can be found in the separate FPMS management portal.
The management portal address will be sent separately by email after the FPMS installation is completed by the person in charge.
Cancel FPMS
To request cancellation of the FPMS service, follow the steps below.
Click the All Services > Security > FPMS menu to navigate to the FPMS Service Home page.
On the Service Home page, click the FPMS Service Request button to navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the corresponding information in the required input fields.
In the work classification, select FPMS Service Cancellation.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: FPMS Service Cancellation Request
Region
Select the location of Samsung Cloud Platform
Service
Select service group and service. If the FPMS service request button is pressed, it is automatically entered
Service group: Security
Service: FPMS
Work Classification
Select the work you want to request
FPMS Service Cancellation: Select if you want to cancel the service
Content
Check the service cancellation process and notes, and enter the detailed application content
Attachments
If you have any additional files you would like to share for service cancellation, please upload them
Attached files can be up to 5 MB each, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. FPMS Service Cancellation Request Items
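The attachment constraints above (at most 5 files, 5 MB each, extension whitelist) can be checked client-side before submitting a request. A minimal sketch, assuming the list's "ppts" is meant to be "pptx":

```python
import os

# Attachment rules from the request form: up to 5 files, each at most 5 MB,
# only the listed extensions. "pptx" is assumed where the table says "ppts".
ALLOWED_EXTENSIONS = {
    "doc", "docx", "xls", "xlsx", "ppt", "pptx", "hwp",
    "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif",
}
MAX_FILES = 5
MAX_SIZE_BYTES = 5 * 1024 * 1024  # 5 MB

def validate_attachments(files):
    """files: list of (filename, size_in_bytes). Returns a list of error strings."""
    errors = []
    if len(files) > MAX_FILES:
        errors.append(f"too many files: {len(files)} > {MAX_FILES}")
    for name, size in files:
        ext = os.path.splitext(name)[1].lstrip(".").lower()
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"{name}: extension '{ext}' not allowed")
        if size > MAX_SIZE_BYTES:
            errors.append(f"{name}: {size} bytes exceeds 5 MB")
    return errors
```

An empty result means the attachment set satisfies all three rules.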
On the Service Request page, verify the information you entered and click the Request button.
Once the FPMS service cancellation application is completed, the FPMS dedicated technical support manager will confirm it and proceed with FPMS service cancellation and deletion.
9.11.3 - Release Note
FPMS
2025.12.16
FEATURE Add firewall and Security Group registration feature, improve SecuAI firewall support
A feature has been added that allows registering the firewall and Security Group of the Samsung Cloud Platform Console to FPMS for management.
SecuEye firewall v3.7 support (anyzone) feature has been improved.
2025.07.01
NEW FPMS Service Official Version Release
We have launched the Firewall Policy Management System (FPMS) service for automating firewall operation tasks to efficiently and safely operate firewalls in various cloud environments.
9.12 - ESS
Endpoint Security Suite (ESS) provides the existing On-Premise Endpoint Security solutions ESCORT, NASCA, SecuPrint as SaaS, enabling reduction of security solution adoption and operational costs. Also, through the One-View integrated management console, each point solution can be managed easily and conveniently, allowing efficient security solution operation.
Features
One-View Integrated Management: By integrating the management consoles previously provided separately for ESCORT, NASCA, and SecuPrint into One-View, solutions can be installed and uninstalled in an integrated manner, improving the efficiency of solution management and security operations.
PC Security Management Area Expansion: The existing On-Premise solutions managed only PCs inside the site over a local network, whereas Endpoint Security Suite, which uses the Internet, provides the same level of PC security management whether a PC is inside or outside the site.
Flexible Scalability Based on REST API: Common functions such as personnel information, administrator account information, and license management information are provided via REST API, making integration and expansion with point solutions easy.
Service Architecture Diagram
Figure. Endpoint Security Suite (ESS) concept diagram
Provided Features
ESS provides the following functions.
ESCORT
Control of information leakage through storage devices (USB, external HDD, etc.)
Control of information leakage through networks (WiFi, Bluetooth, etc.)
Program execution control and vulnerability removal
NASCA
Electronic document permission management and encryption/decryption
Ensuring business continuity by providing automatic decryption functionality
Screen watermark
SecuPrint
Output watermark
Output history management (log/statistics/tracking)
Personal information search and blocking (resident registration number/account number/card number, etc.)
Components
ESCORT Windows Client
ESCORT solution’s Windows PC client for preventing internal information leakage
ESCORT Linux Client
ESCORT solution’s Linux PC client for preventing internal information leakage
NASCA Client
NASCA solution’s Windows PC client for document encryption/decryption and document permission management
SecuPrint Client
SecuPrint solution’s Windows PC client providing output watermarks and output security
Base Plan
Annual license cost for each solution’s server software (ESCORT, NASCA, SecuPrint)
Region-wise Provision Status
ESS is available in the environment below.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. ESS Region-wise Provision Status
Prerequisite Service
ESS has no prerequisite service.
9.12.2 - How-to guides
The user can apply for the service by entering the required information for using the Endpoint Security Suite (ESS) service through the Samsung Cloud Platform Console.
Create ESS
You can apply for the ESS service from the Samsung Cloud Platform Console.
To request ESS service creation, follow the steps below.
Click the All Services > Security > ESS menu. You will be taken to the ESS Service Home page.
On the Service Home page, click the ESS Service Request button. You will be taken to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the required information in the required input fields.
Select ESS creation in the work classification.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: ESS Service Creation Request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. If the ESS service request button is pressed, it is entered automatically
Service Category: Security
Service: ESS
Task Classification
Select the type you want to request
ESS creation: select when requesting a new service
Content
Guide to the service application process and reference information
Attachment
Upload the completed ESS service application form (required) and any additional files you wish to share
Each attached file must be within 5MB, and up to 5 files can be attached
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. ESS Service Creation Request Items
On the Service Request page, verify the information you entered, and click the Request button.
When the application is completed, check the submitted request on the Support Center > Service Request List page.
After the service manager verifies the submitted service request, the process for using the service is carried out.
Check ESS application details
You can view the detailed information and processing steps after applying for the ESS service.
To check the ESS service application details, follow the steps below.
Click the All Services > Support Center menu. You will be taken to the Support Center Service Home page.
Click the Service Request menu on the Service Home page. You will be taken to the Service Request List page.
On the Service Request List page, select the request item. The Service Request Details page will be displayed.
On the Service Request Details page, check the detailed information and processing steps.
Cancel ESS
To request termination of the ESS service, follow the steps below.
Click the All Services > Security > ESS menu. Go to the ESS Service Home page.
Click the ESS Service Request button on the Service Home page. Navigate to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the required information in the mandatory input fields.
Select ESS service termination in the work category.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: ESS service termination request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service group and service. If the ESS service request button is pressed, it is entered automatically
Service Group: Security
Service: ESS
Work Category
Select the task you want to request
ESS Service Termination: Select if you want to terminate the service
Content
Check the service termination process and reference notes, and enter detailed application content
Attachment
If there are any additional files you want to share for service termination, upload them
Each attached file can be up to 5 MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. ESS Service Termination Request Items
On the Service Request page, verify the information you entered, and click the Request button.
If you complete the ESS service termination request, the service manager will verify it, and the ESS service termination and deletion process will proceed.
9.12.3 - Release Notes
ESS
NEW External facing product launch
NEW Official version launch
9.13 - Secrets Manager
9.13.1 - Overview
Service Overview
Secrets Manager is a service that encrypts customers’ sensitive information as Secrets (secure information) and stores and manages it safely. It removes hardcoding of important information in application source code, and allows you to call and retrieve Secrets stored safely in a Key-Value format. Secrets are encrypted with user-managed keys in conjunction with Key Management Service and stored securely.
Service Architecture Diagram
Figure. Secrets Manager diagram
Provided Features
Secrets Manager provides the following features.
Secret creation/deletion: Secrets Manager can create/delete and manage Secrets. Users store security (sensitive) information in Key/Value form in the created Secret.
Secret lookup: You can view the Secret value based on custom policies and permissions.
Label-based version control: You can set a label on the version, which is a snapshot of unique data generated each time a Secret is modified, allowing you to manage Secrets more efficiently.
Components
Secret
A logical unit that stores sensitive (important) information, encrypting security values in Key/Value form with a KMS key.
A Secret is an object created through the Secrets Manager service in the Samsung Cloud Platform Console.
Version
It is a snapshot of unique data that is newly created each time a Secret is modified (the unit that stores the actual value of the Secret).
Label
It is a name tag or label attached to a specific version of a Secret (a pointer for referencing a specific version).
Constraints
Secrets Manager service constraints are as follows. Before use, be sure to check the constraints below and reflect them in your service usage plan.
Reference
Secrets Manager is a regional service, and the created Secret can only be used within that region.
As of December 2025, Secrets Manager provides only public endpoints via Open API. In the future, we plan to provide private endpoints that can be connected based on Samsung Cloud Platform resources.
Item
Detailed Description
Quota
Secret Value Size
Size of encrypted Secret value
65,536 bytes (64 KB)
Secrets
Number of Secrets per region in an Account
500,000
Attached Labels for Secret
Number of Labels attached to all versions of Secret
20
Versions per Secret
Number of Secret versions
100
Table. Secrets Manager Constraints
Prerequisite Service
Secrets Manager has no prerequisite service.
9.13.2 - How-to guides
The user can enter the required information for the Secrets Manager service through the Samsung Cloud Platform Console, select detailed options, and create the service.
Secrets Manager Create
You can create and use Secrets Manager from the Samsung Cloud Platform Console.
To create Secrets Manager, follow the steps below.
Click the All Services > Security > Secrets Manager menu. You will be taken to the Secrets Manager Service Home page.
Click the Secrets Manager Create button on the Service Home page. You will be taken to the Secrets Manager Create page.
On the Secrets Manager Create page, enter the information required to create the service and any additional information.
In the Service Information input area, enter or select the required information.
Category
Required status
Detailed description
Secret name
Required
Enter Secret name
Type
Required
Select the type to manage encrypted with Secret from the list
Key/Value input
Required
Enter the Secret information’s Key/Value as a pair
Click the + icon to add up to 10 pairs
Click the X icon to delete an entry
Encryption Key
Required
Select the KMS key to use when encrypting the Secret from the list
Choose a key created in the KMS service from the list. Or click +Create New to create a KMS key
Only KMS keys for encryption/decryption can be selected. The selectable encryption/decryption KMS key types are encryption/decryption (AES-256), encryption/decryption and signing/verification (RSA-2048), encryption/decryption (ARIA) – three types
When entering Key/Value, the input must be within 64 KB; registration is not allowed if the size exceeds the limit
For detailed information on creating a KMS key, refer to Create KMS Key
Public Access Control
Required
Enter public access allowed IP
After entering IP address, click Add button to register up to 10
Click Delete All button to delete all IP entries in the list
0.0.0.0/24 - 0.0.0.0/32 ranges can be registered, but may be insecure
Private Access Control
Select
After selecting Use, select the resources to allow private access
Click the Add button to add an access-allowed resource
If not set to use, all subnet resources in the same region are allowed access
Description
Select
Enter description for Secrets Manager
Table. Secrets Manager service information input items
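The 64 KB limit on Key/Value input noted in the table above can be pre-checked before creating the Secret. A minimal sketch; the assumption that the limit applies to the JSON-serialized Key/Value payload is ours, since the exact serialization the console measures is not documented here:

```python
import json

MAX_SECRET_BYTES = 64 * 1024  # the 65,536-byte limit on a Secret value

def check_secret_size(pairs):
    """pairs: dict of Key/Value strings. Returns (size_in_bytes, within_limit)."""
    # Assumption for the sketch: the limit is measured against the serialized
    # Key/Value payload in UTF-8.
    payload = json.dumps(pairs, ensure_ascii=False).encode("utf-8")
    return len(payload), len(payload) <= MAX_SECRET_BYTES
```

A payload over the limit would be rejected at registration, so checking it early avoids a failed console submission.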
In the Additional Information input area, enter or select the required information.
Category
Required or not
Detailed description
Tag
Select
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Secrets Manager additional information input items
In the Summary panel, check the detailed information and estimated billing amount, and click the Create button.
When creation is complete, check the created resource on the Secrets Manager List page.
Secrets Manager View Detailed Information
You can view and edit the full list of Secrets Manager resources and their detailed information. The Secrets Manager Details page consists of the Detailed Information, Version, Tag, and Activity History tabs.
To view detailed information of Secrets Manager, follow the steps below.
Click the All Services > Security > Secrets Manager menu. You will be taken to the Secrets Manager Service Home page.
Click the Secrets Manager menu on the Service Home page. You will be taken to the Secrets Manager List page.
On the Secrets Manager List page, click the resource whose detailed information you want to view. You will be taken to the Secrets Manager Details page.
At the top of the Secrets Manager Details page, status information and descriptions of additional features are displayed.
Category
Detailed description
Status
Displays the status of Secrets Manager
Active: available/active
To be terminated: scheduled for deletion
Service cancellation
Button to cancel the service
Table. Secrets Manager status information and additional features
Detailed Information
On the Secrets Manager List page, you can view detailed information of the selected resource and, if necessary, edit it.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Editor
User who modified the service
Modification Date and Time
Service Modification Date and Time
Secret name
Name of the created Secret
Secret value
Entered Secret value
Click the View button; after entering your password, you can view and edit the information in the Secret value view window
Type
Type of the generated Secret
Last lookup date/time
Date and time the created Secret was last looked up
Encryption Key
Displays the KMS key name selected by the user
Clicking the key name navigates to the KMS key detail page
Clicking the edit icon allows changing the key in the encryption key edit window
Table. Secrets Manager detailed information tab items
Version
On the Secrets Manager List page, you can track the versions of a selected Secret using labels.
Reference
When checking the version information of Secret Manager, refer to the definition of each item.
Secret: Logical unit that stores sensitive (important) information
Version: a snapshot of unique data that is newly created each time the Secret is modified (the unit that stores the actual value of the Secret)
Label: a name tag or label attached to a specific version of a Secret (a pointer to reference a specific version)
Category
Detailed description
Version ID
Displays the ID of the current version, previous version, and the version with a custom label (Custom Label) set
Click the Copy icon to copy the version ID value
Label
Secret version display
CURRENT: current version
PREVIOUS: previous version
CUSTOM_LABEL: custom label
Last Access Time
Last access time of the Secret
Creation time
Creation time of Secret
Table. Secrets Manager version tab items
Caution
The constraints when using Secret’s version are as follows.
Up to 100 versions can be stored per Secret. Regardless of whether a custom label is set, if the number of versions exceeds 100, the oldest versions will be deleted.
For important versions with custom labels set, create a new Secret before the version is deleted due to quota exceedance, and configure the running application to reference the new Secret.
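The quota behavior described in this caution can be modeled with a small sketch. This is a toy in-memory stand-in, not the service's implementation: once the 101st version is written, the oldest is evicted regardless of labels, and the CURRENT/PREVIOUS labels move the way the Version tab describes.

```python
from collections import OrderedDict

MAX_VERSIONS = 100  # per-Secret version quota stated above

class SecretVersions:
    """Toy in-memory model of a Secret's version history and labels."""
    def __init__(self):
        self.versions = OrderedDict()  # version_id -> value, oldest first
        self.labels = {}               # label -> version_id
        self._counter = 0

    def put(self, value):
        vid = f"v{self._counter}"
        self._counter += 1
        self.versions[vid] = value
        # Move the built-in labels: the old CURRENT becomes PREVIOUS
        if "CURRENT" in self.labels:
            self.labels["PREVIOUS"] = self.labels["CURRENT"]
        self.labels["CURRENT"] = vid
        # Past the quota the oldest version is evicted, custom label or not
        while len(self.versions) > MAX_VERSIONS:
            oldest, _ = self.versions.popitem(last=False)
            self.labels = {k: v for k, v in self.labels.items() if v != oldest}
        return vid
```

The eviction loop is why a custom label alone does not protect a version: once it falls off the end of the history, the label disappears with it, which is the scenario the caution tells you to plan for.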
Tag
On the Secrets Manager List page, you can view the tag information of the selected resource, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
You can check the tag’s Key, Value information
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Table. Secrets Manager tag tab items
Work History
On the Secrets Manager List page, you can view the operation history of the selected resource.
Category
Detailed description
Work Details
Work Execution Content
Work date and time
Task execution date and time
Resource Type
Resource Type
Resource Name
Resource Name
Work result
Task execution result (success/failure)
Operator Information
Information of the user who performed the task
Table. Secrets Manager operation history tab detailed information items
Secrets Manager Cancel
You can cancel a Secrets Manager resource that is no longer in use.
Caution
If you cancel Secrets Manager, you cannot use any of its features, and it will be permanently deleted after the cancellation waiting period. During the cancellation waiting period, the Secret cannot be looked up.
To cancel Secrets Manager, follow the steps below.
Click the All Services > Security > Secrets Manager menu. You will be taken to the Secrets Manager Service Home page.
Click the Secrets Manager menu on the Service Home page. You will be taken to the Secrets Manager List page.
On the Secrets Manager List page, click the resource whose detailed information you want to view. You will be taken to the Secrets Manager Details page.
On the Secrets Manager Details page, click the Cancel Service button. The Cancel Service popup will open.
In the Cancel Service popup window, enter the cancellation waiting period and click the Confirm button.
The cancellation waiting period can be set between 7 and 30 days.
Once cancellation is complete, check on the Secrets Manager List page that the resource has been cancelled.
Guide
If you want to reuse the Secret during the cancellation waiting period, go to the Secrets Manager List page and, in the context menu of the desired Secret item, click Cancel Termination.
If the cancellation of termination succeeds, you can use the Secret again.
9.13.2.1 - Secret lookup API reference
This user guide explains how to use and call the Public / Private Endpoint of Secrets Manager.
Caution
Public Endpoint can be called in an environment where internet communication is possible.
Private Endpoint can only be called from Samsung Cloud Platform VMs.
Pre-setup for Endpoint call
Describes the prerequisite configuration items required when calling the Secrets Manager endpoint.
Register Security Group’s Outbound Rule
To call the endpoint, you need to register an outbound rule in the security group.
To register the Outbound Rule of the Security Group, follow the steps below.
Click the All Services > Security > Secrets Manager menu. Navigate to the Service Home page of Secrets Manager.
Click the Secrets Manager menu on the Service Home page. Navigate to the Secrets Manager List page.
On the Secrets Manager List page, click the resource to view detailed information. You will be taken to the Secrets Manager Details page.
Check the URL information on the Secrets Manager Details page.
You can copy the public / private URL information from the URL item.
Use the nslookup command to check the IP to register in the Security Group.
nslookup <endpoint URL to call>
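When registering the resolved IP in the Security Group, only the answer-section address matters, not the DNS server's own address that nslookup prints first. A minimal sketch that extracts it; note that nslookup output varies by OS and resolver, and the sample below (hostname included) is purely illustrative:

```python
import re

# Illustrative nslookup output; real output varies by platform.
sample = """Server:  10.0.0.2
Address: 10.0.0.2#53

Non-authoritative answer:
Name:   secretsmanager.example.com
Address: 198.51.100.7
"""

def answer_addresses(output):
    """Extract answer-section IPs, skipping the DNS server's own address."""
    ips, in_answer = [], False
    for line in output.splitlines():
        if line.startswith(("Non-authoritative answer", "Name:")):
            in_answer = True
        m = re.match(r"Address:\s*([0-9.]+)", line.strip())
        if in_answer and m:
            ips.append(m.group(1))
    return ips
```

The extracted address (198.51.100.7 in the sample) is the value to enter as the rule's target address in the next step.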
Go to Security Group > Security Group List and select the Security Group of the VM for which you want to set access control. The Security Group Details page will open.
In the Security Group Details > Rules tab, click the Add Rule button. When the Add Rule window appears, enter the information below to add a rule.
Item
Detailed description
Target Input Method
Select CIDR
Target address
Enter the IP address retrieved by nslookup
Type
Select the protocol, then enter the destination port/type
Select TCP as the protocol and enter 443 as the TCP destination port
Direction
Select Outbound rule
Description
Enter a description, e.g., Secrets Manager Public / Private Endpoint call rule
Table. Security Group rule addition input items
Check that the rule you entered has been added to the Security Group rules list.
Register access control for Secrets Manager
You can register public/private access controls for Secrets Manager.
To configure the access control items of Secrets Manager, follow the steps below.
Click the All Services > Security > Secrets Manager menu. Go to the Service Home page of Secrets Manager.
Click the Secrets Manager menu on the Service Home page. You will be taken to the Secrets Manager List page.
On the Secrets Manager List page, click the resource whose detailed information you want to view. You will be taken to the Secrets Manager Details page.
On the Secrets Manager Details page, click the edit icon of Public Access Control to add an allowed IP for Public Endpoint access.
In the Public Access Control Edit popup window, enter the IP and click the Add button. When the addition is complete, click the Confirm button.
For security, adding a single IP is recommended, and up to 10 can be registered.
0.0.0.0/24 - 0.0.0.0/32 can be registered, but be careful as it may be insecure.
On the Secrets Manager Details page, click the edit icon of Private Access Control to add a VM that allows Private Endpoint access.
In the Private Access Control Edit Popup window, select the resources to allow access and click the Add button. When the addition is complete, click the Confirm button.
If you do not set usage, you can access all subnet resources in the same region.
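The public allow-list semantics described above can be sketched with Python's `ipaddress` module. This models only the stated rule (a caller is admitted if its IP falls inside one of the registered entries), not the service's actual implementation:

```python
import ipaddress

def is_allowed(caller_ip, allow_list):
    """Return True if caller_ip falls inside any registered entry.

    allow_list entries may be single IPs or CIDR ranges (up to 10
    can be registered, per the guide above).
    """
    ip = ipaddress.ip_address(caller_ip)
    return any(ip in ipaddress.ip_network(entry, strict=False)
               for entry in allow_list)
```

A /32 entry admits exactly one address, which is why the guide recommends single IPs; broad ranges such as 0.0.0.0/24 are registrable but weaken the control.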
Secrets Manager API Call
Explains how to call the Secrets Manager API.
Check Secrets Manager URL information
On the All Services > Security > Secrets Manager > Secrets Manager Details page, check the URL information.
You can copy the public / private URL information from the URL item.
Secrets Manager Lookup API
get /v1/secret
## Description
Secret value lookup
## Parameters
Type
Name
Description
Schema
query
secretId (required)
Secret ID (Example : b3ed8b7637574255b83c274a6ed79426)
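A call to the lookup API above can be sketched in Python using only the standard library. The URL shape (GET /v1/secret with a required secretId query parameter) comes from the reference above; the JSON response body and the absence of authentication headers are assumptions of this sketch, since neither is specified here:

```python
import json
import urllib.request
from urllib.parse import urlencode

def build_lookup_url(endpoint, secret_id):
    """Build the documented lookup URL: GET /v1/secret?secretId=<id>."""
    return f"{endpoint.rstrip('/')}/v1/secret?{urlencode({'secretId': secret_id})}"

def lookup_secret(endpoint, secret_id):
    # The caller must be permitted by the Public/Private Access Control
    # configured earlier; any required authentication headers are omitted
    # here because they are not specified in this reference.
    with urllib.request.urlopen(build_lookup_url(endpoint, secret_id)) as resp:
        return json.loads(resp.read())  # assumes a JSON response body

# Example (endpoint copied from the URL item on the Details page):
# value = lookup_secret("https://<copied endpoint URL>",
#                       "b3ed8b7637574255b83c274a6ed79426")
```

The public URL is callable from any internet-connected environment on the allow-list; the private URL only from permitted Samsung Cloud Platform VMs.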
9.13.3 - Release Notes
Secrets Manager
Provides a Private Endpoint that allows Secrets to be called from VM resources in Samsung Cloud Platform.
You can select VM resources within Samsung Cloud Platform that are allowed to access the Secret storing security information, and set access control.
2025.12.16
NEWSecrets Manager Service Official Version Release
We have launched a service that encrypts customers’ sensitive information in the form of Secret (security information) and stores and manages it safely.
You can remove hardcoding of security information in the application source code and call securely stored Secrets to retrieve them.
9.14 - Log Transmission
9.14.1 - Overview
Service Overview
Log Transmission is a service that collects and stores logs in real time from firewalls, IPS, and DDoS security devices, and transmits them to the destination the customer needs. It provides a foundation for performing security monitoring of the user area using those logs.
Features
Security event log collection/transmission: Collects and stores logs from security devices in real time and transmits security events.
Secure log storage/transmission: Log data is stored securely, with backup and recovery available when needed. Collected logs are safely stored in redundant storage and transmitted securely using VPN services, etc.
Diagram
Figure. Log Transmission Concept Diagram
Provided Features
Log Transmission provides the following features.
Various security log source integration
Real-time log collection from various log sources such as firewalls, IPS, DDoS security devices, etc.
Log filtering and processing
Filter out unnecessary logs or extract only the logs requested by the customer
Components
Log Transmission sends service log sources from Samsung Cloud Platform to the equipment or system desired by the customer.
The service runs by connecting via VPN to the customer’s office (server room) where the device that receives the log source is located.
Constraints
To use Log Transmission, please check the following items in advance.
Collect and send logs targeting Security products provided by Samsung Cloud Platform.
To send logs, you must be connected via VPN to the device that will receive the logs.
Region-wise Provision Status
Log Transmission is available in the environment below.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Log Transmission Regional Provision Status
Prerequisite Service
The following services must be configured before creating the Log Transmission service. For details, refer to the guide provided for each service and prepare in advance.
Next-generation firewall service that meets high security requirement levels
Table. Log Transmission Prerequisite Service
When running Log Transmission, you must select the service whose logs are to be sent. Once the log transmission target is determined, a VPN connection is required for secure log transmission.
Using the Log Transmission service requires configuration for communication between the customer's office (on-premise) and the customer's VPC within Samsung Cloud Platform. Follow the process below, including external integration software and VPN settings, Direct Connect firewall opening, and applying for an uplink line.
Create a connection between the client company’s VPC and DCon-VPN.
Apply for an uplink line for communication between the client company’s on-premise and VPC.
Application path: Support Center > Service Request List > Service Request
Select Service: Networking > Direct Connect
Task type: Uplink line request
Configure routing for the VPN path.
Configure the necessary routing information for the Firewall, Security Group, Direct Connect, etc.
Reference
Direct Connect creation and Uplink line application must be completed to use the Log Transmission service.
9.14.2 - How-to guides
The user can create the service by entering the required information for using the Log Transmission service through the Samsung Cloud Platform Console.
Log Transmission Create
You can apply for and use the Log Transmission service from the Samsung Cloud Platform Console.
To request the creation of the Log Transmission service, follow the steps below.
Click the All Services > Security > Log Transmission menu. You will be taken to the Log Transmission Service Home page.
Click the Log Transmission Service Request button on the Service Home page. You will be taken to the Support Center > Service Request List > Service Request page.
On the Service Request page, enter or select the required information in the required input fields.
In the work classification, select Create Log Transmission.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: Log Transmission Service Creation Request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select the service category and service. If the Log Transmission service request button is pressed, it is entered automatically
Service Category: Security
Service: Log Transmission
Task Classification
Select the type you want to request
Create Log Transmission: select when requesting a new service
Content
Guidance on creating and applying basic customer information
Content to write: End customer/MSP information
Attachment
Upload the completed Log Transmission service application form (required) and any additional files you wish to share
Each attached file must be within 5MB, and up to 5 files can be attached
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Log Transmission Service Creation Request Items
After checking the application process and reference information, click the Form Download > Service Request Form Download button to download the Log Transmission Service Application Form.
Fill out the Log Transmission service application form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Detailed Content
Application Information
Write required items such as application type, usage period, recipient information, etc.
Application type: select application
Usage period: enter desired start date
Basic information: enter Account name, Project name, recipient information
Control Information
Write required items such as log transmission target, client company usage IP (range), etc.
Write all items except special cases
Table. Main contents of Log Transmission service creation application form
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the applied content on the Support Center > Service Request List page.
After the monitoring officer verifies the submitted service request, the process for using the service proceeds and the Log Transmission service is started.
Check Log Transmission detailed information
The Log Transmission service proceeds through its procedures after application via a service request (SR). Unlike other services, Log Transmission detailed information cannot be viewed in the Console.
To view detailed information, click the Inquiry button on the Support Center > Inquiry List page. On the Inquiry page, you can write and submit your questions.
Log Transmission Cancel
To request cancellation of the Log Transmission service, follow the steps below.
Click the All Services > Management > Support Center menu. You will be taken to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request button. You will be taken to the Service Request List page.
On the Service Request List page, click the Service Request button. The Service Request page will open.
On the Service Request page, enter or select the required information in the required input fields.
In the work classification, select Log Transmission cancellation.
Input Item
Detailed Description
Title
Enter the title of the service request content
Example: Log Transmission Service Termination Request
Region
Select the location of Samsung Cloud Platform
Automatically filled with the region corresponding to the Account
Service
Select service category and service
Service Category: Security
Service: Log Transmission
Task Classification
Select the type you want to request
Log Transmission termination: select if you are terminating the service
Content
Guidance on entering basic customer information for the application
Content to write: End customer/MSP information
Attachment
Upload the completed Log Transmission service application form (required) and any additional files you wish to share
Each attached file can be up to 5 MB, with a maximum of 5 files
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, and tif files can be attached
Table. Log Transmission Service Termination Request Items
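The attachment constraints in the table above (file count, size, and extension limits) can be checked before submission with a short sketch. This is a hypothetical helper for illustration only, not an official Samsung Cloud Platform tool.

```python
import os

# Constraints from the service request guidance above.
# 'pptx' is assumed for the PowerPoint XML format in the allowed list.
MAX_FILE_SIZE = 5 * 1024 * 1024   # each attachment up to 5 MB
MAX_FILE_COUNT = 5                # at most 5 files per request
ALLOWED_EXTENSIONS = {
    "doc", "docx", "xls", "xlsx", "ppt", "pptx",
    "hwp", "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif",
}

def validate_attachments(files):
    """Return a list of violation messages for (filename, size_bytes) pairs."""
    problems = []
    if len(files) > MAX_FILE_COUNT:
        problems.append(f"too many files: {len(files)} > {MAX_FILE_COUNT}")
    for name, size in files:
        ext = os.path.splitext(name)[1].lstrip(".").lower()
        if ext not in ALLOWED_EXTENSIONS:
            problems.append(f"{name}: extension '{ext}' not allowed")
        if size > MAX_FILE_SIZE:
            problems.append(f"{name}: {size} bytes exceeds 5 MB limit")
    return problems
```

For example, `validate_attachments([("form.pdf", 1024)])` returns an empty list, while an executable or an oversized file produces a violation message.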
After checking the Application Process and Reference Information, click the Form Download > Service Request Form Download button to download the Log Transmission Service Application Form.
Fill out the Log Transmission service application form.
Refer to the item descriptions in the Application Information and Control Information tabs, and fill out the required items.
Category
Details
Application Information
Fill in required items such as application type, usage period, recipient information, etc.
Application Type: Select Application
Usage Period: Enter desired service termination date
Basic Information: Enter Account name, Project name, recipient information
Usage amount does not need to be filled
Control Information
Write required items such as log transmission target, client company usage IP (range) etc.
Write all items except special cases
Table. Log Transmission Service Termination Application Form Main Contents
Attach the completed application form in the attachment area.
On the service request page, click the Request button.
When the application is completed, check the submitted details on the Support Center > Service Request list page.
After the monitoring officer confirms the submitted service request, if the log transmission target and the customer’s used IP (range) are deleted, the termination process is completed.
Service termination takes 2-3 business days, including the cancellation request date.
9.14.3 - Release Note
Log Transmission
2025.10.23
NEW Log Transmission Service Official Version Release
We have released a Log Transmission service that can execute security monitoring of the user area on Samsung Cloud Platform.
10 - Management
Provides services for easily and conveniently managing Samsung Cloud Platform, including user authentication and access rights management, activity history collection and analysis, and real-time resource status monitoring.
10.1 - Architecture Diagram
10.1.1 - Overview
Service Overview
Architecture Diagram presents resources in diagram form so that the relationships between them can be grasped at a glance. You can check the connections between resources and examine the relationships between components.
Provided Features
Architecture Diagram provides the following functions.
Provide resource composition status: Visualizes configured resources so that the relationships between them can be checked easily.
Check resource main information: You can view the resource’s main information from the Architecture Diagram without navigating to the resource detail screen.
Constraints
The constraints of the Architecture Diagram are as follows.
The service is provided only for some resources of the Samsung Cloud Platform.
Up to 50 items per resource can be viewed in the Architecture Diagram.
Pre-service
Architecture Diagram has no preceding service.
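The containment relationships that the diagram visualizes (a VPC containing Subnets, which in turn contain Virtual Servers) can be modeled as a simple tree. This is an illustrative sketch only; the class and field names are assumptions, not Samsung Cloud Platform API objects.

```python
# Illustrative model of the VPC > Subnet > Virtual Server containment
# that the Architecture Diagram renders. Not an SCP API.
from dataclasses import dataclass, field

@dataclass
class Resource:
    kind: str                       # "VPC", "Subnet", or "Virtual Server"
    name: str
    children: list = field(default_factory=list)

def flatten(resource, depth=0):
    """Walk the containment tree the way the diagram lays it out."""
    rows = [("  " * depth) + f"{resource.kind}: {resource.name}"]
    for child in resource.children:
        rows.extend(flatten(child, depth + 1))
    return rows

vpc = Resource("VPC", "vpc-main", [
    Resource("Subnet", "subnet-a", [Resource("Virtual Server", "vm-web")]),
])
```

Calling `flatten(vpc)` yields one indented row per resource, mirroring the hierarchy the console draws.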
10.1.2 - How-to Guides
Users can intuitively grasp the relationships and key information of the resource configuration generated through the Architecture Diagram.
Architecture Diagram Start
To start the Architecture Diagram of Samsung Cloud Platform, refer to the following.
Click the All Services > Management > Architecture Diagram menu. Navigate to the Architecture Diagram page.
You can also navigate by clicking the Architecture Diagram widget on the Console Home page of Samsung Cloud Platform.
Check the resource configuration information and relationships in the Architecture Diagram.
The relationships among the three resource types VPC, Subnet, and Virtual Server are represented in the diagram.
Notice
If you do not have view permission for the topmost hierarchical structure among resources, the Diagram will not be displayed.
Resources without diagram location information are displayed separately at the bottom of the diagram. Detailed status of the resource can be checked on each resource’s detail page.
Add Items to the Architecture Diagram
You can add items to the configuration diagram of the Architecture Diagram.
Click the All Services > Management > Architecture Diagram menu. Navigate to the Architecture Diagram page.
You can also navigate by clicking the Architecture Diagram widget on the Console Home page of Samsung Cloud Platform.
Click the Search Filter button at the top right of the Diagram. Configurable items open as a popup menu in the Architecture Diagram.
VPC, Subnet, Virtual Server items are provided by default.
You can select whether to add the Port, Security Group, and Load Balancer items to the diagram.
Resource
Basic status
VPC
Default
Displayed only if you have VPC permissions
Subnet
Default
Virtual Server
Default
Port
Select
Security Group
Select
Load Balancer
Select
Table. Architecture Diagram search filter items
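The filter behavior described in the table above can be sketched as follows: the three default items are always rendered, while the optional items appear only when selected. The item names mirror the table; the function itself is illustrative, not a console API.

```python
# Default items are always shown in the diagram; optional items are opt-in.
DEFAULT_ITEMS = ["VPC", "Subnet", "Virtual Server"]
OPTIONAL_ITEMS = ["Port", "Security Group", "Load Balancer"]

def diagram_items(selected_optional=()):
    """Return the resource types rendered for a given optional selection."""
    invalid = [s for s in selected_optional if s not in OPTIONAL_ITEMS]
    if invalid:
        raise ValueError(f"not selectable: {invalid}")
    return DEFAULT_ITEMS + [s for s in OPTIONAL_ITEMS if s in selected_optional]
```

For example, `diagram_items(["Port"])` adds Port to the three default items, while an unknown item raises an error.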
Check Resource Detailed Information in the Architecture Diagram
In the Architecture Diagram’s configuration diagram, you can view detailed information of each resource.
Notice
If user permission for the resource is not granted, you cannot view detailed information.
To check detailed information, follow the steps below.
Click the All Services > Management > Architecture Diagram menu. Go to the Architecture Diagram page.
You can also navigate by clicking the Architecture Diagram widget on the Console Home page of Samsung Cloud Platform.
Click the resource in the Architecture Diagram to view detailed information. A detailed information popup for each resource will open.
Click the control button at the top right of the Diagram to expand or collapse the Diagram, display it in a folded state, or zoom in/out.
If you place the mouse cursor over a resource, the status of that resource and the resource name are displayed.
Resource
Detailed Information
VPC
VPC name, Subnet name, IP range, Subnet type
Click the VPC name to view the corresponding VPC details page
Click the Create VPC button to create Virtual Server, Cloud Functions, VPC services via Copilot
Subnet
VPC name, Subnet name, IP range, Subnet type
Click the Subnet name to go to the corresponding Subnet details page
Virtual Server
VPC name, Subnet name, Server name, IP, Security Group
Click the server name to go to the corresponding Virtual Server details page
Port
VPC name, Subnet name, Port name, fixed IP address
Security Group
Security Group name, number of Security Group rules
Click the Security Group name to go to the corresponding Security Group details page
Load Balancer
VPC name, Subnet name, Load Balancer name, resource status, Service IP, Firewall name
Click the Load Balancer name to go to the corresponding Load Balancer details page
Table. Architecture Diagram Resource Detailed Information Items
Reference
If you select a specific Port or Security Group resource, the relationships between that resource and related resources are displayed as dotted lines.
Only the relationships of the six resource types (VPC, Subnet, Virtual Server, Port, Security Group, Load Balancer) are expressed.
Up to 100 of each Virtual Server, Port, Security Group, and Load Balancer resource can be displayed.
10.1.3 - Release Note
Architecture Diagram
2025.10.23
FEATURE Add supported resources
Load Balancer resources have been added.
2025.07.01
FEATURE Architecture Diagram Feature Added
The relationship between resources can be easily identified by marking it with a dotted line.
You can click the name of the resource in the resource detail information popup window to go to the detail page.
Through Copilot, you can create VPC and Security Group services or control Virtual Server.
You can easily add the Architecture Diagram item from the Configuration Diagram Filter.
2025.02.27
NEW Architecture Diagram Release
Architecture Diagram service has been newly launched.
Provides a service that can check the relationships between resources.
10.2 - Cloud Control
10.2.1 - Overview
Service Overview
Cloud Control service is a managed service that supports building, operating, and managing a multi-account environment easily and securely on the Samsung Cloud Platform. Cloud Control service automates an organization’s cloud governance (security, compliance, standardization, etc.) and provides consistent centralized account and resource management based on Samsung Cloud Platform best practices.
Features
Cloud Control service provides the following special features.
Landing Zone Automatic Setup: Samsung Cloud Platform accounts, organizational units (OU), guardrails, logging, etc. are automatically configured. In a standardized environment, new account creation and invitation of existing accounts are possible.
Centralized Governance and Policy Enforcement: Automatically applies security, compliance, and operational policies (guardrails) across the organization. Provides policy violation detection and monitoring capabilities.
Multi-Region and Scalability: You can apply the same governance and policies across multiple Samsung Cloud Platform regions.
Provided Features
Cloud Control service provides the following features.
Automated Landing Zone construction: Security, logging, and account structure based on Samsung Cloud Platform best practices are automatically set up.
Guardrail application
Preventive guardrail: blocks the creation of policy-violating resources outright
Detective guardrail: automatically detects policy-violating resources and sends notifications
Integration with Samsung Cloud Platform Organization’s ACP, Samsung Cloud Platform Config Inspection, etc.
Dashboard Provision: You can visually monitor the accounts, OUs, guardrail implementation status, and compliance status of the entire organization.
Centralized logging and auditing
Provides centralized log storage for all Accounts and an audit account through Logging&Audit, Object Storage, and Config Inspection
ID and Permission Management Integration: By integrating with Samsung Cloud Platform ID Center, you can manage account-based access control and permission groups.
Monitoring and Notification (Notification) Feature: Provides real-time alerts for policy violations, Cloud Control setting changes, etc.
Information
The Detective Guardrail and Config Inspection integration features are scheduled for March 2026, and the Monitoring and Notification feature is scheduled for July 2026.
Components
Landing Zone
The basic structure of the standardized Samsung Cloud Platform environment, covering governance, security, network, logging, etc., is as follows.
Category
Detailed description
Management Account
Organization and account structure management, policy (SCP) application, new account creation automation
Highest authority across the organization, governance-focused operation
Log Account
Central collection and storage of all account logs, log integrity and long-term retention
Independent account operation, strict access control and encryption
Audit Account
Organization-wide security and compliance monitoring and audit, automated security checks
Apply principle of least privilege, cross-account role assumption
Table. Cloud Control Landing Zone
Guardrails
Guardrails automatically apply policy violation detection and prevention rules along with security and compliance standards, as follows.
Category
Detailed description
Preventive Guardrail
Preemptively blocks actions to prevent policy violations
Implementation method: Uses Access Control Policy (ACP) to prohibit or limit the scope of actions on specific Samsung Cloud Platform services
Detective Guardrail
Continuously monitors for policy violations or abnormal configurations, and provides alerts when violations occur
Implementation method: Based on the Samsung Cloud Platform Config Inspection checklist, evaluates resource status and notifies via dashboard or alerts when violations are found
Example:
Detect S3 bucket encryption not applied
Detect CloudTrail disabled
Detect whether EBS volume encryption is enabled
Feature: Detects violating resources in real time and delivers results to the administrator
Table. Cloud Control Guardrail
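As a rough illustration of how a preventive guardrail blocks actions before a resource is created, the sketch below evaluates a requested action against an ACP-style deny rule. The policy shape and action names here are assumptions for illustration, not the actual Access Control Policy format of Samsung Cloud Platform.

```python
# Hypothetical ACP-style deny rule: the policy shape and action names
# are illustrative assumptions, not the real SCP policy format.
guardrail_policy = {
    "effect": "Deny",
    "actions": ["objectstorage:DeleteBucket", "logging:StopTrail"],
}

def is_action_allowed(action, policy=guardrail_policy):
    """A denied action is blocked outright; everything else passes through."""
    if policy["effect"] == "Deny" and action in policy["actions"]:
        return False
    return True
```

The key property is preemption: the check runs before the action executes, so a policy-violating request never reaches the resource layer, which is exactly what distinguishes a preventive guardrail from a detective one.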
Notice
Detective Guardrails will be provided in March 2026.
Baseline
The essential resources and configuration sets, such as security, logging, and network, automatically deployed per account, are as follows.
Category
Detailed description
AuditBaseline
Configure security and audit roles, policies on the central audit account
Check the security status and compliance status of all accounts centrally
LogArchiveBaseline
Aggregate logs of all accounts’ Trail to a central bucket
Used for log integrity, long-term storage, and audit tracing
IDCenterBaseline
Automatic configuration of resources linked with ID Center
Integrate user/group/permission management within the organization
Table. Cloud Control Baseline
Guide
AuditBaseline is scheduled to be provided in July 2026.
Region-specific provision status
Cloud Control service is available in the following environments.
Region
Availability
Korea West 1 (kr-west1)
Provided
Korea East 1 (kr-east1)
Provided
Korea South 1 (kr-south1)
Provided
Korea South 2 (kr-south2)
Provided
Korea South 3 (kr-south3)
Provided
Table. Cloud Control Region-wise Provision Status
Pre-service
This is a list of services that must be pre-configured before creating the service. For detailed information, please refer to the guide provided for each service and prepare in advance.
A service that allows you to easily manage access permissions for resources per account centrally.
Table. Cloud Control Preceding Service
10.2.2 - How-to guides
The user must first create a landing zone to use the Cloud Control service.
If a landing zone is created, you can use the management functions of Cloud Control.
Caution
Cloud Control services are not charged, but services such as Logging&Audit, Object Storage, Config Inspection used within Cloud Control may incur costs based on usage.
Create Landing Zone
To use Cloud Control in the Samsung Cloud Platform Console, you must first create a landing zone.
To create a landing zone, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control's Service Home page.
On the Service Home page, click the Landing Zone Creation button. Navigate to the Landing Zone Creation page.
In the Fee Review and Organizational Unit Configuration area, set the configuration items and click the Next button.
Category
Required
Detailed description
Home Region
-
Home Region of Cloud Control
Cloud Control designates the default region as the Home Region, which cannot be changed
All regions other than the default region are under Cloud Control’s management
Basic Organizational Unit
Required
Enter basic organizational unit within landing zone
Case-sensitive English letters, enter within 128 characters
Basic organizational unit includes shared Accounts (Log Account, Audit Account)
Security: Name of the basic organizational unit for shared Account
Can be modified after landing zone creation
Additional Organizational Unit
Required
Enter additional organizational unit within landing zone
Case-sensitive English letters, enter within 128 characters
Can be added after landing zone creation
Table. Landing Zone Creation - Fee Review and Organizational Unit Configuration Items
In the Shared Account Configuration area, set the configuration items and click the Next button.
Category
Required
Detailed description
Management Account
-
Management Account name is displayed and cannot be edited
Log Account
Required
Log Account information input
Account name: Use Korean, English, numbers, spaces, special characters(+=-_@[](),.) to input within 3 ~ 30 characters
Email, Confirm Email: Input within 60 characters according to email address format
Audit Account
Required
Enter Audit Account information
Account name: Use Korean, English, numbers, spaces, special characters(+=-_@[](),.) and enter within 3 to 30 characters
Email, Confirm Email: Enter within 60 characters following email address format
Cannot use the same email as Log Account
Table. Landing Zone Creation - Shared Account Configuration Items
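The input rules in the table above can be expressed as a small validation sketch. The regular expression is one interpretation of the stated rules (Korean, English, numbers, spaces, and the special characters +=-_@[](),. within 3 to 30 characters; emails within 60 characters in address format), not an official validator.

```python
import re

# One reading of the shared-Account input rules; hypothetical, not official.
ACCOUNT_NAME_RE = re.compile(r"^[가-힣A-Za-z0-9 +=\-_@\[\](),.]{3,30}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_shared_account(name, email):
    """Return a list of rule violations for a Log or Audit Account entry."""
    errors = []
    if not ACCOUNT_NAME_RE.fullmatch(name):
        errors.append("Account name violates the allowed-character/length rule")
    if len(email) > 60 or not EMAIL_RE.fullmatch(email):
        errors.append("Email must match address format within 60 characters")
    return errors
```

A name such as "log-account-01" with a well-formed email passes; a two-character name or a malformed email is reported.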
Note
Log Account is a repository of logs of API activity and resource configuration collected from all Accounts. Log Account cannot be changed.
Audit Account is a limited account, and the security and compliance team can obtain access rights to all accounts within the organization through the Audit Account.
In the Additional Configuration area, set the configuration items and click the Next button.
Category
Required
Detailed description
Account access configuration
Required
Select method to manage access to the Account
Account access via ID Center: Create pre-configured groups and permission sets to configure users who perform specific tasks in the Account
Automatically assign users when provisioning an Account with Account Factory or registering an existing Account
Self-managed Account access: Manage access to the Account via ID Center or other Account access methods
Cloud Control does not create directory groups or permission sets for the landing zone
No user creation when provisioning an Account with Account Factory or registering
Trail configuration
-
Automatic configuration in progress
Table. Landing Zone Creation - Additional Configuration Items
In the Input Information Check area, review the landing zone configuration information and Service Permissions, then check the agreement items for permissions and guidelines.
Click the Complete button. A popup window notifying the creation of the landing zone will open.
After checking the information about creating a landing zone, click the Confirm button. The landing zone creation request is completed.
Landing zone creation takes some time, and a notification will be sent when the task is completed.
When the landing zone creation is complete, you can check the full menu of Cloud Control and the organization status on the Service Home page.
Caution
You cannot cancel while creating a landing zone.
If you fail to create a landing zone, delete the landing zone and then create it again.
Reference
When a landing zone is created, you can check the following items in Cloud Control.
Two organizational units: one for shared Accounts and one for the Accounts that users will provision
Two shared Accounts: isolated Accounts for log archiving and security audit
The selected IAM management configuration
10 preventive guardrails with policy settings applied
Organization Service Control Policy activation
Check detailed landing zone information
The Landing Zone Settings page allows you to view detailed information about the landing zone.
To check the detailed information of the landing zone, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control's Service Home page.
On the Service Home page, click the Landing Zone Settings menu. Navigate to the Landing Zone Settings page.
Category
Detailed description
service
service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Cloud Control, it means the SRN of the resource type
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Modifier
User who edited the service information
Modification Date
Date Service Information Was Modified
Home Region
Home region information of the landing zone
Account Access Configuration
How to manage access for Account
Trail configuration
Trail configuration activation status
The active status is maintained
Landing Zone Delete
Delete landing zone
For detailed information about landing zone deletion, see Delete Landing Zone
Table. Landing zone configuration items
Delete landing zone
If you fail to create a landing zone or do not use it, you can delete the landing zone.
Caution
Deleted resources cannot be recovered.
Organization unit, Account, bucket, ID Center resources are not automatically deleted.
If you want to use the same name as an existing resource that hasn’t been deleted when recreating a landing zone, you must delete the existing resource directly before creating the landing zone.
Existing resources can be deleted individually from the Organization, Object Storage, and ID Center services.
To delete the landing zone, follow the steps below.
All Services > Management > Cloud Control Click the menu. Navigate to Cloud Control’s Service Home page.
Click the Landing Zone Settings menu on the Service Home page. You will be taken to the Landing Zone Settings page.
On the Landing Zone Settings page, click the Landing Zone Delete button. The Landing Zone Delete popup opens.
In the Landing Zone Deletion popup window, enter the Cloud Control ID in the deletion confirmation area, then click the Confirm button. The landing zone deletion request is completed.
While deleting the landing zone, a description about the landing zone deletion process is displayed on the Service Home page.
Managing Organization Units and Accounts
You can check the list of organization units and accounts, and register and manage them in Cloud Control.
To view and manage the organization unit and Account list, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control's Service Home page.
On the Service Home page, click the Organization menu. Navigate to the Organization Unit and Account Management page.
On the Organization Unit and Account Management page, select the view mode at the top right.
Category
Detailed description
View Hierarchy
Display organizational units in a hierarchical structure
Account List View
Display Account list within organization
Account creation
Create a new Account
When the Account creation button is clicked, navigate to the Account creation page
For detailed information about Account creation, see Create Account
Table. Cloud Control organization unit and Account management items
View Hierarchy
When you click the View Hierarchy button on the Organization Unit and Account Management page, you can view and manage organizational units and Accounts in a hierarchical structure.
Category
Detailed description
Create Sub-Organization Unit
Add a new organizational unit under the selected organizational unit
Enabled only when exactly one organizational unit is selected in the hierarchy
For detailed information, see Create Organization Unit
More
Manage organizational units or register a new Account
Click the button to select Delete Organizational Unit, Register Organizational Unit, Re-register Organizational Unit, or Account Registration
For detailed information on managing organizational units, see Managing Organizational Units
For detailed information on Account registration, see Register Account
Organization Unit/Account Name
Displays organizational unit and Account names in a hierarchical structure
Click the + and - buttons to expand or collapse the hierarchy
Click an organizational unit or Account name to go to its detail page
ID/Email
Organizational units show an ID; Accounts show an ID and email
Status
Cloud Control registration status of the organizational unit or Account
Registered Organization Unit
Cloud Control registration status of sub-organizational units
Displayed as number of registered organizational units / total organizational units
Registered Account
Cloud Control registration status of sub Accounts
Displayed as number of registered Accounts / total Accounts
Table. Hierarchy View Items
View Account List
When you click the View Account List button on the Organization Unit and Account Management page, you can view and manage the list of Accounts that constitute Cloud Control.
Category
Detailed description
Account Registration
Registers the selected Account from the Account list to Cloud Control
Activated when an Account in the Unregistered state is selected from the Account list
For detailed information about Account registration, see Register Account
Account Name
Account name
Account ID
Account's ID
Email
Account's user email
Status
Cloud Control registration status of the Account
Check Organization and Account Detailed Information
You can view and edit the detailed information of the organization unit and Account.
To view detailed information, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control's Service Home page.
On the Service Home page, click the Organization menu. Navigate to the Organization Unit and Account Management page.
On the Organization Unit and Account Management page, click the View Hierarchy button.
Click the name of the resource whose detailed information you want to view in the hierarchy list. Navigate to that resource's detail page.
Root: Navigate to the Root Details page. For more details, refer to Root Details.
The Root Details page allows you to view and manage the detailed information of the organization Root and the list of sub Accounts.
The Root Details page consists of Basic Information and Sub Account tabs.
Basic Information
You can check basic information about the organization Root and the number of organizational units and Accounts registered in Cloud Control.
Category
Detailed description
service
service name
Resource Type
Service Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Modifier
User who edited the service information
Modification Date
Date Service Information Was Modified
Registered organization unit
Cloud Control registration status of Root sub-organization units
displayed as Number of registered organization units / total organization units
Registered Account
Root sub Account’s Cloud Control registration status
Number of registered Accounts / total number of Accounts displayed
Table. Root Details - Basic Information Tab Items
Sub Account
You can view and manage the list of Accounts under Root and the registration status of Cloud Control.
Category
Detailed description
Account Registration
Registers the selected Account from the Account list to Cloud Control
Activated when an Account in the Unregistered state is selected from the Account list
For detailed information about Account registration, see Register Account
Account Name
Account name
Email
Account's user email
Status
Cloud Control registration status of the Account
The Organizational Unit Details page allows you to view and manage detailed information of the organizational unit, its sub Accounts, and the applied preventive guardrails.
The Organizational Unit Details page consists of Basic Information, Sub Account, and Preventive Guardrail tabs.
Basic Information
You can view basic and detailed information about the organization unit.
Category
Detailed description
service
service name
Resource Type
Service Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Modifier
User who edited the service information
Modification Date/Time
Date and time the service information was modified
Organizational unit name
Name of the organizational unit
Applied Guardrail
Number of guardrail types applied to the current organization unit
Registered organization unit
Current organization unit’s sub-unit Cloud Control registration status
displayed as number of registered organization units / total number of organization units
Registered Account
Current organization unit sub Account’s Cloud Control registration status
displayed as Number of registered Accounts / total Accounts
Higher organization unit
Hierarchy of higher organization units of the current organization unit
Re-registration
Re-registers the current organizational unit to Cloud Control
Table. Organization Unit Details - Basic Information Tab Items
Preventive Guardrail
You can view and manage the list of preventive guardrails applied at the organizational unit level.
Notice
For the Security organizational unit, guardrails cannot be applied or removed.
Category
Detailed description
Target Service Name
Guardrail applicable service name
Guardrail Name
Name of the guardrail
Clicking the guardrail name allows you to view detailed information about that guardrail
Type
Application method
Application method
Displays the guardrail's application method
For the inheritance method, you can click to view the detailed organizational unit name
Remove
Remove the selected guardrail from the guardrail list
Enabled when a guardrail is selected from the guardrail list
Apply Preventive Guardrail
New preventive guardrail can be applied at the organizational level
When the button is clicked, navigate to the Apply Preventive Guardrail page
Table. Organization Unit Details - Prevention Guardrail Tab Items
Check Account Detailed Information
On the Account Details page, you can view the detailed information of the Account and the list of applied preventive guardrails.
The Account Details page consists of Basic Information and Preventive Guardrail tabs.
Basic Information
You can view basic and detailed information about the Account.
Category
Detailed description
service
service name
Resource Type
Service Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Editor
User who edited the service information
Modification Date
Date Service Information Was Modified
Email
Account’s user email
Applied Guardrail
Number of guardrail types applied to the current Account
ID Center username
ID Center user email
Upper organizational unit
Current Account’s upper organization unit hierarchy
Register
The organization unit of the current account can be changed
For detailed information on changing the organization unit, see Move Account
Table. Account Details - Basic Information Tab Items
Preventive Guardrail
You can view the list of preventive guardrails applied to the Account.
Category
Detailed description
Target Service Name
Guardrail applicable target service name
Guardrail Name
Name of the guardrail
If you click the guardrail name, you can view detailed information about that guardrail
Type
Application Method
Application method
Displays the guardrail's application method
For the inheritance method, you can click to view the detailed organizational unit name
On the User and Access page, you can check the Access Portal connection URL and the password required for connection.
To check the Access Portal connection information, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control's Service Home page.
On the Service Home page, click the User and Access menu. Navigate to the User and Access page.
Check the information in the Integrated Access Management area of the User and Access page.
Category
Detailed description
Password
Password for Access Portal access
Access Portal URL
Access Portal access URL
When clicking the URL, Access Portal login page can be viewed in a new tab
Permission Set
A collection of admin policies used by ID Center to determine the valid permissions of users who can access a specific Account
Table. Integrated Access Management Items
Note
For detailed information about credential sources and ID Center, refer to ID Center.
Notice
If the landing zone is configured with self-managed Account access, note the following.
Cloud Control does not automatically create directory groups or permission sets.
When provisioning an Account via the Account factory or registering an existing Account, the user is automatically assigned.
You can manage access to the Account via ID Center or other Account access methods.
Check user credential information
On the User and Access page, you can check the user credential source type and ID Center ID.
To verify user credential information, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
Click the User and Access menu on the Service Home page. You will be taken to the User and Access page.
In the User Credential Management area of the User and Access page, check the information.
Category
Detailed description
Credential Source
Types of credential sources set in ID Center
ID Center’s own directory: Directory within ID Center
AD (Active Directory): Active Directory managed directly by the user
ID Center ID
ID Center’s ID
Click the ID to go to the ID Center Settings page
User Group
A group formed to classify workers who perform specific tasks in an organization
Table. User Credential Management Items
Reference
For detailed information about credential sources and ID Center, see ID Center.
You can add users and user groups in the Management > IAM service. For more details, refer to IAM.
Check shared Account
You can view the shared Account information of Cloud Control.
To check the shared Account information, follow the steps below.
Click the All Services > Management > Cloud Control menu. Go to the Service Home page of Cloud Control.
Click the Shared Account menu on the Service Home page. Navigate to the Shared Account page.
The Shared Account page is composed of the Management Account, Log Account, and Audit Account widgets.
Each widget displays the Account name, Account ID, and email information, and clicking the widget name navigates to that Account’s detail page.
Category
Detailed description
Management Account
Account that creates new accounts and manages billing and access for all accounts in the organization
Log Account
Account used as the repository for API activity and resource configuration logs collected from all Accounts
Audit Account
Limited account that allows the security and compliance team to obtain read and write access to all accounts
Table. Shared Account Items
10.2.2.1 - Managing Guardrails
Guardrails are rules for detecting and preventing policy violations (detective/preventive types), and are automatically applied according to security and compliance standards, as follows.
Prevention Guardrail
You can apply preventive guardrails to block in advance so that policy violations do not occur.
Applying preventive guardrails
Preventive guardrails can be applied at the organizational unit level.
To apply preventive guardrails at the organizational level, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
On the Service Home page, click the Guardrail > Preventive Guardrail menu. Navigate to the Preventive Guardrail List page.
After selecting the prevention guardrail to apply to the organizational unit from the Prevention Guardrail List, click the Apply to Organizational Unit button. You will be taken to the Apply to Organizational Unit page.
Multiple preventive guardrails can be selected and applied simultaneously.
After selecting the organizational unit to apply the preventive guardrail, click the Complete button.
Category
Required
Detailed description
Preventive guardrails to apply
-
List of preventive guardrails to apply to the organizational unit
Organization Unit Name
Required
Select the organizational unit to apply the preventive guardrails
Only organizational units in Registered or Registration Failed status can be selected
Click an organizational unit name or parent organizational unit name to view detailed information
Table. Prevention Guardrail Application Items
When a pop-up window that notifies the application of the preventive guardrail opens, click the Confirm button.
Check detailed guardrail information
You can view detailed information about the preventive guardrail, the organizational units applied to the preventive guardrail, and the list of Accounts.
To check the detailed guardrail information, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
On the Service Home page, click the Guardrail > Preventive Guardrail menu. Navigate to the Preventive Guardrail List page.
In the Preventive Guardrail List, click the name of a preventive guardrail to view its detailed information. Navigate to the Preventive Guardrail Details page.
The Preventive Guardrail Details page consists of the Basic Information, Applied Organization Unit, and Account tabs.
Basic Information
You can view basic and detailed information about preventive guardrails.
Category
Detailed description
Service
Service name
Resource Type
Service Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation time
Service creation time
Editor
User who edited the service information
Modification date/time
Date and time the service information was modified
Guardrail name
Guardrail’s name
Type
Guardrail type
Target Service Name
Guardrail’s target service name
Status
Guardrail application status
Description
Description of guardrails
Table. Guardrail Details - Basic Information Tab Items
Applied Organization Unit
You can view the list of organizational units with preventive guardrails applied.
Category
Detailed description
Organizational unit name
Name of the organizational unit
Click the organizational unit name to view detailed information
Parent Organization Unit Name
Name of the parent organization unit of the organization unit
Click the parent organization unit name to view detailed information
Status
Organization unit’s Cloud Control registration status
Preventive guardrails are inherited from all higher organizational units, so preventive guardrails may apply even to Accounts that do not appear in the Account list.
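The inheritance rule above can be sketched as follows. This is an illustrative model only (the organizational unit tree and guardrail names are hypothetical, not Samsung Cloud Platform's actual data model): the effective preventive guardrails for an Account are the union of the guardrails applied at every organizational unit on its path up to Root.

```python
# Hypothetical model: preventive guardrails applied to an organizational
# unit are inherited by every Account anywhere below it.

def effective_guardrails(account_ou, parents, applied):
    """Collect guardrails applied to account_ou and all of its ancestors.

    parents -- dict mapping an OU name to its parent OU name (Root has none)
    applied -- dict mapping an OU name to the set of guardrails applied there
    """
    guardrails = set()
    ou = account_ou
    while ou is not None:
        guardrails |= applied.get(ou, set())
        ou = parents.get(ou)  # walk up toward Root
    return guardrails

# Example hierarchy (hypothetical): Root -> HQ -> Dev
parents = {"HQ": "Root", "Dev": "HQ"}
applied = {"Root": {"deny-public-bucket"}, "HQ": {"require-encryption"}}

# An Account under Dev inherits guardrails from Dev, HQ, and Root.
print(sorted(effective_guardrails("Dev", parents, applied)))
# → ['deny-public-bucket', 'require-encryption']
```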
Disable Guardrail
The preventive guardrail applied to the organizational unit can be disabled.
To disable the preventive guardrail, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
On the Service Home page, click the Guardrail > Preventive Guardrail menu. Navigate to the Preventive Guardrail List page.
In the Preventive Guardrail List, select the preventive guardrail whose organizational unit application you want to remove, then click the More > Remove Organization Unit Application button. You will be taken to the Remove Organization Unit Application page.
Multiple preventive guardrails can be selected simultaneously and disabled.
After selecting the organizational unit to remove the preventive guardrail, click the Complete button.
Category
Required
Detailed description
Preventive guardrails to deactivate
-
List of preventive guardrails to deactivate
Organization Unit Name
Required
Select organization unit to disable preventive guardrail application
Only organizational units in Registered or Registration Failed status can be selected
Click an organizational unit name or parent organizational unit name to view detailed information
Table. Prevention Guardrail Deactivation Items
When the popup notifying the removal of the preventive guardrail is opened, click the Confirm button.
10.2.2.2 - Managing an Organization
The user must first create a landing zone in order to use the Cloud Control service.
When a landing zone is created, you can use Cloud Control’s management functions.
Caution
Cloud Control service is not charged, but services such as Logging&Audit, Object Storage, Config Inspection used within Cloud Control may incur costs based on usage.
Managing organizational units
You can register and manage the organizational units that constitute the Organization in Cloud Control.
Create Organizational Unit
You can create a new organizational unit and register it in Cloud Control.
To create an organizational unit and register it with Cloud Control, follow the steps below.
Click the All Services > Management > Cloud Control menu. Go to Cloud Control’s Service Home page.
Click the Organization menu on the Service Home page. Navigate to the Organization Unit and Account Management page.
In the top right corner of the Organizational Unit and Account Management page, click the View Hierarchy button.
After selecting the location to add an organizational unit in the hierarchy list, click the Create Sub-Organizational Unit button. The Create Organizational Unit popup opens.
Only Root or a single organizational unit can be selected.
Organizational units can be created up to 5 levels below Root.
In the Create Organizational Unit popup window, enter the information for the organizational unit to add, then click the Confirm button.
Category
Required
Detailed description
Parent organization unit name
-
Name of the parent organization unit for the organization unit to be created
Organization Unit Name
Required
Enter the name of the organizational unit to be created, within 128 characters
Organizational unit names are case-sensitive for English letters
Description
Select
Enter a description of the organizational unit within 1,000 characters
Table. Organization unit creation items
When the popup notifying the creation of an organizational unit opens, click the Confirm button.
It may take several tens of minutes depending on the number of Accounts under the organizational unit.
When organizational unit creation is complete, a notification will be sent.
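The creation constraints above can be collected into a minimal sketch, assuming the limits as stated (name within 128 characters and case-sensitive, description within 1,000 characters, at most 5 levels below Root). The function below is illustrative, not part of any Samsung Cloud Platform SDK:

```python
# Hedged sketch of the organizational unit creation constraints described
# above; the console enforces these, this standalone check is illustrative.

def validate_org_unit(name, description="", depth_below_root=1):
    """Return a list of violations for a new organizational unit.

    Note: organizational unit names are case-sensitive, so "Dev" and
    "dev" count as distinct names.
    """
    errors = []
    if not (1 <= len(name) <= 128):
        errors.append("name must be 1-128 characters")
    if len(description) > 1000:
        errors.append("description must be at most 1,000 characters")
    if not (1 <= depth_below_root <= 5):
        errors.append("units may only be created within 5 levels below Root")
    return errors

print(validate_org_unit("Dev-Team", "backend developers", depth_below_root=3))
# → []
print(validate_org_unit("x" * 129, depth_below_root=6))  # two violations
```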
Register Organizational Unit
You can register organizational units that are not registered in Cloud Control or that failed to register in Cloud Control.
Notice
When registering an organizational unit, all parent organizational units of the unit to be registered must be in a registered state.
If there is an organization unit in the registering state under the organization unit you want to register, you cannot register.
Subordinate organizational units of the organizational unit to be registered must be registered separately.
To register an organizational unit in Cloud Control, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
Click the Organization menu on the Service Home page. Navigate to the Organization Unit and Account Management page.
In the top right corner of the Organizational Unit and Account Management page, click the View Hierarchy button.
After selecting the organizational unit to register from the hierarchy list, click the More > Register Organizational Unit button. You will be taken to the Register Organizational Unit page.
On the Organization Unit Registration page, check the information of the organizational unit to be registered.
Category
Required
Detailed description
Sub Account
-
List of Accounts included under the sub-unit of the organization unit to be registered
Automatically registered in Cloud Control when registering the organization unit
Guardrails to be applied to the organizational unit
-
List of guardrails inherited from the upper organizational unit and guardrails directly applied to the organizational unit
Clicking the guardrail name allows you to view detailed information about that guardrail
To remove a guardrail applied to the organizational unit, remove its application from the upper organizational unit
Table. Organization Unit Registration Items
After reviewing the Terms Agreement content, check the checkbox and click the Complete button.
When the popup notifying the registration of the organizational unit opens, click the Confirm button. The organizational unit registration request will be completed.
Depending on the number of Accounts under the organizational unit, it may take several tens of minutes.
When organizational unit registration is complete, a notification will be sent.
Re-register Organization Unit
You can re-register the organizational unit registered in Cloud Control to Cloud Control.
Notice
If there is an organization unit in the registering state under the organization unit you want to register, you cannot register.
The subordinate organizational unit of the organization unit you want to register must be registered separately.
To re-register an organizational unit in Cloud Control, follow these steps.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
Service Home page, click the Organization menu. Navigate to the Organization Unit and Account Management page.
In the top right corner of the Organization Unit and Account Management page, click the View Hierarchy button.
After selecting the organizational unit to re-register from the hierarchy list, click the More > Re-register Organizational Unit button. You will be taken to the Re-register Organizational Unit page.
On the Organizational Unit Re-registration page, check the information of the organizational unit to be re-registered.
Category
Required
Detailed description
Sub Account
-
List of Accounts included under the sub-unit to be re-registered
Automatically registered in Cloud Control when registering the organizational unit
Guardrails to be applied to the organizational unit
-
List of guardrails inherited from the upper organizational unit and guardrails directly applied to the organizational unit
Clicking the guardrail name allows you to view detailed information about that guardrail
To remove a guardrail applied to the organizational unit, remove its application from the upper organizational unit
Table. Organization Unit Re-registration Items
After reviewing the Terms Agreement content, check the checkbox and click the Complete button.
When the popup notifying re-registration of the organizational unit opens, click the Confirm button. The organizational unit re-registration request will be completed.
Depending on the number of Accounts under the organizational unit, it may take several tens of minutes.
When organizational unit re-registration is complete, a notification will be sent.
Delete organization unit
You can delete the organization unit.
Notice
Only organization units that are in an unregistered state in Cloud Control can be deleted.
Before deleting the organization unit, remove all sub-elements of that organization unit.
To delete an organizational unit, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
Click the Organization menu on the Service Home page. Navigate to the Organization Unit and Account Management page.
In the top right corner of the Organizational Unit and Account Management page, click the View Hierarchy button.
After selecting the organizational unit to delete from the hierarchy list, click the More > Delete Organizational Unit button.
When the popup notifying the deletion of the organizational unit opens, click the Confirm button.
Account Management
You can register and manage the list of Accounts that constitute the Organization in Cloud Control.
Account Create
You can use the Account factory to create an Account and apply Cloud Control directly without separate work.
To create an Account, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
Click the Account Factory menu on the Service Home page. Navigate to the Account Factory page.
On the Account Factory page, click the Account Creation button. Navigate to the Account Creation page.
On the Account Creation page, enter the required information for creating an Account and select the organizational unit, then click the Complete button. A popup notifying the Account creation opens.
Category
Required
Detailed description
Account information
Required
Enter the name and email information of the Account
Account name: Enter 3 ~ 30 characters using Korean, English, numbers, spaces, and special characters(+=-_@[](),.)
Email, Confirm Email: Enter within 60 characters in a valid email address format
ID Center Information
Required
Enter ID Center user information that can access the Account to be created
Username: Enter using English letters, numbers, special characters(+=-_@,.) within 128 characters
User Real Name: Enter the user’s actual name (surname and given name)
If the Account access configuration uses Self-Managed Account Access, ID Center Information cannot be set
Organization unit selection
Required
Select the parent organization that will include the Account to be created
Only organizational units in Registered status can be selected
Clicking the organization unit name allows navigation to its detail page
Table. Account Creation Items
Caution
An Excel file containing Access Portal user login information will be sent to the email entered in the ID Center. Be sure to verify that the email information is correct.
Click the Confirm button. The Account creation request is completed.
Account creation takes some time, and a notification will be sent when the task is completed.
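The input rules in the table above can be approximated with a standalone validator. This is a hedged sketch: the Hangul character range and email pattern below are approximations for illustration, not the exact checks the console performs.

```python
import re

# Approximate character classes for the rules in the table above.
# The Hangul range covers only complete syllables (illustrative choice).
ACCOUNT_NAME = re.compile(r"^[\uac00-\ud7a3A-Za-z0-9 +=\-_@\[\](),.]{3,30}$")
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_account(name, email):
    """Return a list of rule violations for a new Account's name and email."""
    errors = []
    if not ACCOUNT_NAME.match(name):
        errors.append("Account name: 3-30 chars of Korean/English/digits/"
                      "spaces/+=-_@[](),.")
    if len(email) > 60 or not EMAIL.match(email):
        errors.append("Email: valid address format, at most 60 characters")
    return errors

print(validate_account("dev-account 01", "admin@example.com"))  # → []
print(validate_account("a!", "not-an-email"))  # two violations
```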
Account Register
You can register Accounts that are not registered in Cloud Control or that failed to register in Cloud Control.
Notice
Only accounts of organizational units registered in Cloud Control can be registered.
If there is an organizational unit or Account in the registering state under the organizational unit you want to register, you cannot register.
If you select and register a different organizational unit from the current one, the Account will be moved to the newly selected organizational unit.
To register an Account, follow the next steps.
Click the All Services > Management > Cloud Control menu. Go to Cloud Control’s Service Home page.
Click the Organization menu on the Service Home page. Navigate to the Organization Unit and Account Management page.
In the top right corner of the Organization Unit and Account Management page, click the View Account List button.
After selecting the Account to register in Cloud Control from the Account list, click the Account registration button. It navigates to the Account registration page.
Category
Required
Detailed description
Current organizational unit
-
Organizational unit that the Account belongs to
Register organization unit
-
Select the organization unit to register the Account
Only organizational units in Registered status can be selected
Current organization unit: Register as is in the current organization unit
Other organization unit: Directly select another organization unit
Table. Account registration items
When the popup notifying the Account registration opens, click the Confirm button.
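The registration preconditions above (the target organizational unit must be Registered, and no organizational unit or Account under it may still be Registering) can be sketched as a simple tree walk. The status values and tree structure below are illustrative assumptions, not the actual service model:

```python
# Hypothetical sketch of the Account registration preconditions.

def can_register_account(target_ou, status, children):
    """target_ou must be Registered and have no descendant still Registering.

    status   -- dict: name -> "Registered" | "Registering" | "Registration Failed"
    children -- dict: OU name -> list of child OU/Account names
    """
    if status.get(target_ou) != "Registered":
        return False
    stack = list(children.get(target_ou, []))
    while stack:
        node = stack.pop()
        if status.get(node) == "Registering":
            return False  # a descendant is still being registered
        stack.extend(children.get(node, []))
    return True

# Example: acct-1 under Dev is still Registering, so HQ cannot register.
status = {"HQ": "Registered", "Dev": "Registered", "acct-1": "Registering"}
children = {"HQ": ["Dev"], "Dev": ["acct-1"]}
print(can_register_account("HQ", status, children))  # → False
```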
Account Move
You can change the organizational unit of the Account registered in Cloud Control and move it.
Notice
If there is an organizational unit or Account in the Registering state under the organizational unit you want to register, you cannot move.
If you select and register a different organizational unit from the current one, the Account will be moved to the newly selected organizational unit.
To move the Account, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to Cloud Control’s Service Home page.
Click the Organization menu on the Service Home page. Navigate to the Organization Unit and Account Management page.
On the top right of the Organization Unit and Account Management page, click the View Account List button.
In the Account list, click the name of the Account whose organizational unit you want to change. Navigate to the Account Details page.
Click the Register button on the Account Details page. You will be taken to the Account Registration page.
From the Register Organization Unit list, select the organizational unit to move the Account to, then click the Complete button.
Only organizational units in Registered status can be selected.
When the popup notifying Account registration opens, click the Confirm button.
Account Exclude
You can exclude the Account from the Organization.
To exclude Account from Organization, follow the steps below.
Click the All Services > Management > Organization menu. Go to the Service Home page of Organization.
Click the Organization Configuration menu on the Service Home page. Navigate to the Organization Configuration page.
On the Organization Configuration page, click the View Account List button.
After selecting the Account to exclude from Organization, click the More > Exclude Account button.
When a popup that notifies the exclusion of the Account opens, click the Confirm button.
Notice
In the following cases, Account cannot be excluded.
Account with unregistered payment method
If there is credit assigned to the account
When the exclusion point is the cost settlement date (the 1st of each month, Asia/Seoul GMT +09:00)
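A hedged sketch of these exclusion conditions: the function and its parameters are hypothetical, and the real service evaluates these checks server-side.

```python
from datetime import date

# Illustrative check of the exclusion conditions listed above: an Account
# cannot be excluded if it has no registered payment method, holds
# assigned credit, or today is the cost settlement date (the 1st of each
# month, Asia/Seoul GMT +09:00).

def can_exclude_account(has_payment_method, credit_balance, today):
    if not has_payment_method:
        return False  # payment method not registered
    if credit_balance > 0:
        return False  # credit is still assigned to the Account
    if today.day == 1:
        return False  # monthly cost settlement date
    return True

print(can_exclude_account(True, 0, date(2025, 10, 23)))  # → True
print(can_exclude_account(True, 0, date(2025, 11, 1)))   # → False
```

Note that `today` would need to be evaluated in the Asia/Seoul timezone; the sketch takes a plain date for simplicity.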
Account Delete
You can delete the Account.
To delete the Account, follow the steps below.
All Services > Management > Organization Click the menu. Go to the Service Home page of Organization.
On the Service Home page, click the Organizational Structure menu. Navigate to the Organizational Structure page.
On the Organizational Structure page, click the View Account List button.
After selecting the Account to delete from the Account list, click the More > Delete Account button. The Delete Account popup window opens.
After clicking the Account name of the Account to be deleted, you can also delete by clicking the Delete Account button on the Account Details page.
After entering the Account name to delete, click the Confirm button.
Reference
When you delete an Account, an Account deletion notification email is sent to the following users:
Administrator who created Organization
Root user of the created Account
User with delegation for the created Account
Notice
When deleting from the Account list, you must select only one Account to delete.
Before deletion, all resources within the Account must be deleted.
Management Account and accounts that joined through invitation cannot be deleted.
10.2.2.3 - Managing Accounts
Create Account
You can create an Account with the Account factory and apply Cloud Control directly without any additional steps.
Caution
Accounts cannot be created beyond the maximum number of Accounts that can be added to the organization.
To create an Account, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to the Service Home page of Cloud Control.
On the Service Home page, click the Account Factory menu. Go to the Account Factory page.
On the Account Factory page, click the Create Account button. Go to the Account creation page.
On the Account creation page, enter the required information to create an Account, select the organizational unit, and then click the Create button.
Category
Required
Detailed description
Account information
Required
Enter the account name and email information
Account name: Use Korean, English, numbers, spaces, and special characters(+=-_@[](),.) to enter between 3 ~ 30 characters
Email: Enter up to 60 characters in a valid email address format
Cannot duplicate the root user’s email
Enter the same value for Email confirmation
ID Center information
Required
Enter ID Center user information for the Account to be created that can access it
Username: Enter using English letters, numbers, and special characters(+=-_@,.) within 128 characters
Full name: Enter the user's actual name (family name and given name)
If the Account access configuration uses self-managed Account access, ID Center information cannot be set
Select organization unit
Required
Select the parent organization unit that will contain the Account to be created
Only organization units with registered status can be selected
Clicking the organization unit name navigates to its detail page
Table. Account Creation Items
Caution
An Excel file containing the Access Portal user login information will be sent to the email entered in ID Center. Be sure to verify that the email information is correct.
When the popup indicating Account creation opens, click the Confirm button. The Account creation request is completed.
Account creation takes some time, and a notification is sent when the task is complete.
Manage Account
You can register and manage the accounts that comprise the organization in Cloud Control.
Register Account
You can register Accounts that are not registered in Cloud Control, or that failed to register, into Cloud Control.
Notice
Only accounts from the organization unit registered in Cloud Control can be added.
You cannot register if a subunit of the organizational unit you are trying to register contains an organizational unit or Account that is in the registration in progress state.
If you select a different organizational unit from the current one and register, the Account will be moved to the newly selected organizational unit.
To register an Account, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to the Service Home page of Cloud Control.
On the Service Home page, click the Organization menu. Navigate to the Organization Unit and Account Management page.
In the top right corner of the Organization Unit and Account Management page, click the View Account List button.
From the Account list, select the Account to register in Cloud Control, then click the Register Account button. Go to the Account Registration page.
After clicking the Account name of the Account to be registered, you can also register by clicking the Register button on the Account Details page.
Category
Required
Detailed description
Current organizational unit
-
Organizational unit to which the Account belongs
Register organization unit
-
Select the organizational unit to register the Account
Only organizational units that are in a registered state can be selected
Current organizational unit: Register as is in the current organizational unit
Other organizational unit: Directly select a different organizational unit
Table. Account Registration Items
When the popup notifying Account registration opens, click the Confirm button.
Move Account
You can move an Account registered in Cloud Control by changing its organizational unit.
Notice
You cannot move if there is an organizational unit or Account in the Registering state under the organizational unit you want to register.
If you select and register a different organizational unit from the current one, the Account will be moved to the newly selected organizational unit.
To move the account, follow these steps.
Click the All Services > Management > Cloud Control menu. Navigate to the Service Home page of Cloud Control.
On the Service Home page, click the Organization menu. Navigate to the Organization Unit and Account Management page.
On the top right of the Organization Unit and Account Management page, click the View Account List button.
In the Account list, click the name of the Account whose organizational unit you want to change. Navigate to the Account Details page.
On the Account Details page, click the Register button. Go to the Account Registration page.
From the Registered Organizational Unit list, select the organizational unit to which you want to move the Account, then click the Complete button.
Only organizational units with registered status can be selected.
When a popup notifying the Account registration opens, click the Confirm button.
Unregister Account
You can deactivate an account registered in the organization.
Notice
In the following cases, the Account cannot be deactivated.
Account with no registered payment method
If the account has assigned credit
When the exclusion point is the cost settlement date (the 1st of each month, Asia/Seoul GMT +09:00)
To remove an account registered in an organization, follow these steps.
Click the All Services > Management > Organization menu. Navigate to the Service Home page of the Organization.
On the Service Home page, click the Organizational Structure menu. Go to the Organizational Structure page.
On the Organizational Structure page, click the View Account List button.
After selecting the Account to exclude from the Organization, click the More > Unregister Account button.
After clicking the name of the Account to be unregistered, you can also unregister it by clicking the Unregister button on the Account Details page.
When a popup notifying the Account unregistration opens, click the Confirm button.
Delete Account
You can delete the account.
Notice
When deleting from the Account list, you must select only one Account to delete.
All resources within the Account must be deleted before deletion.
Management Accounts and Accounts that joined via invitation cannot be deleted.
To delete the Account, follow the steps below.
Click the All Services > Management > Organization menu. Go to the Service Home page of the Organization.
On the Service Home page, click the Organizational Structure menu. Go to the Organizational Structure page.
On the Organizational Structure page, click the View Account List button.
After selecting the Account to delete from the Account list, click the More > Delete Account button. The Account Delete popup window opens.
After clicking the name of the Account to be deleted, you can also delete it by clicking the Account Delete button on the Account Details page.
Enter the Account name to delete, then click the Confirm button.
Reference
When you delete an Account, an Account deletion notification email is sent to the following users:
Administrator who created the Organization
Root user of the created Account
User with delegation for the created Account
Check Shared Account
You can view the shared Account information in Cloud Control.
To check the shared account information, follow the steps below.
Click the All Services > Management > Cloud Control menu. Navigate to the Service Home page of Cloud Control.
On the Service Home page, click the Shared Account menu. Navigate to the Shared Account page.
Each widget displays the account name, account ID, and email information, and clicking the widget name navigates to the detailed page of that account.
Category
Detailed description
Management Account
Account that creates new Accounts and manages billing and access for all Accounts in the organization
Log Account
The account used as the repository for API activity and resource configuration logs collected from all accounts
Audit Account
A restricted account that enables the security and compliance team to have read and write access to all accounts.
Table. Shared Account Items
10.2.3 - API Reference
API Reference
10.2.4 - CLI Reference
CLI Reference
10.2.5 - Release Note
Cloud Control
2025.10.23
NEW Official Service Version Release
Cloud Control service official version has been released.
You can easily and safely build, operate, and manage a multi-account environment on Samsung Cloud Platform.
The organization’s cloud governance (security, compliance, standardization, etc.) can be automated and managed through policy violation detection and monitoring functions.
10.3 - Cloud Monitoring
10.3.1 - Overview
Service Overview
The Cloud Monitoring service collects usage, change information, and logs of operating infrastructure resources, and generates events to notify users when a set threshold is exceeded.
Through this, users can quickly respond to performance degradation and failures, and can easily establish resource capacity expansion plans for a stable computing environment.
Provided Functions
Cloud Monitoring provides the following functions.
Stable Computing Resource Management: You can easily check indicators such as CPU usage, disk usage, and memory usage.
Since notifications are automatically sent to designated personnel when events occur in resources being used, you can operate computing resources stably and quickly analyze and respond to failures.
Convenient Monitoring: Resource status information can be easily monitored by creating a dashboard.
Basic dashboards and user-defined dashboards are provided, and various types of widgets can be set up to easily and quickly create dashboards.
Event Metric Management: Event metrics can be easily set up with just a few clicks through the web-based console.
Event metric settings for monitoring targets (event patterns, occurrence conditions, occurrence cycles, performance metrics, operation status, etc.) can be changed in various ways to suit the usage environment, and threshold settings and alarm settings can be easily managed.
Resource Log Management: Log data of resources can be collected and stored, and searches can be performed on target logs as needed.
Additionally, events are quantified for major keywords, and when predefined conditions are met, notifications are automatically sent to designated personnel, providing a more stable usage environment.
Components
Dashboard
The monitoring dashboard allows you to check the operation status, event status, and usage rates of monitoring targets and services.
Item
Description
Region
Location of resources
Data Reference Time
Reference time of data displayed on the dashboard
Refresh
Refresh the dashboard based on the current time
Period Setting
Set the data query period and refresh cycle
Monitoring Status
Number and status of monitoring targets for each service in the account
Event History
Display recent 7-day events by risk level as a graph
Top 5 Performance Usage
Display the top 5 monitoring targets with the highest performance usage
Event Map
Display the number of events for each service by risk level
Event Status
Display a list of unprocessed events that have occurred
Table. Cloud Monitoring Dashboard Components
Performance Analysis
Performance analysis allows you to check the main performance items of monitoring targets and view current data and historical data for each performance item.
Users can check the performance status of monitoring targets by service or period and analyze the results by comparing specific performance.
Log Analysis
Log analysis collects and checks the logs of monitoring targets and converts them into quantifiable data for monitoring.
Basic logs are provided for each monitoring target, and users can create custom logs to collect and check additional logs.
Event Management
An event is a setting that notifies users when the performance value of a monitoring target meets certain conditions.
By setting events, users can grasp monitoring information that they must know without missing it.
For example, if an event is set to occur when a performance value related to overload exceeds a certain value, users will be notified whenever there is a risk of overload during resource operation, allowing them to respond before problems occur.
Event management allows users to create events and set them to notify designated users when specific values occur during monitoring.
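The event flow described above — a rule fires and designated users are notified when a performance value meets a condition — can be sketched as follows. This is a minimal illustration only; `EventRule`, `check`, and the field names are hypothetical and not Cloud Monitoring APIs.

```python
from dataclasses import dataclass, field

@dataclass
class EventRule:
    metric: str                       # e.g. "cpu_usage_percent"
    threshold: float                  # value that triggers the event
    recipients: list = field(default_factory=list)  # users to notify

def check(rule: EventRule, value: float) -> bool:
    """Return True and 'notify' recipients when the collected value meets the condition."""
    if value > rule.threshold:
        for user in rule.recipients:
            print(f"notify {user}: {rule.metric}={value} exceeds {rule.threshold}")
        return True
    return False

rule = EventRule(metric="cpu_usage_percent", threshold=80.0, recipients=["ops-team"])
check(rule, 91.5)   # fires: 91.5 > 80.0
check(rule, 42.0)   # no event
```

In practice the console manages rules, thresholds, and recipients for you; the sketch only shows the evaluate-then-notify logic the text describes.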
Preceding Services
Cloud Monitoring has no preceding services.
10.3.2 - How-to guides
Samsung Cloud Platform Monitoring is a resource management system that allows users to monitor and analyze the operation status of resources within an account on the Samsung Cloud Platform Console. Users can efficiently manage resources using the dashboard page, widgets, and chart features.
Note
Users can monitor resources created in the Samsung Cloud Platform Console with authorized accounts.
Users can log in to the Samsung Cloud Platform Console and navigate to Samsung Cloud Platform Monitoring to monitor resources.
Getting Started with Cloud Monitoring
To start using Samsung Cloud Platform Monitoring, follow these steps:
Click on All Services > Management > Cloud Monitoring menu. This will take you to the Service Home page of Cloud Monitoring.
Click on the Open Cloud Monitoring button on the Service Home page. This will take you to the Cloud Monitoring Console page.
Exploring the Cloud Monitoring Console
The top and left menus of the Cloud Monitoring Console are composed as follows:
Region
Displays the regions being monitored for the current account.
Allows selecting regions provided by the account.
User Information
View user information and log out of Samsung Cloud Platform Monitoring.
Side Menu
Displays the main features of Samsung Cloud Platform Monitoring. Each menu can be clicked to navigate to the corresponding page.
Monitoring Dashboard: View the operation status, event status, and usage of monitored services and resources. For more information, see Using the Monitoring Dashboard.
Performance Analysis: View key performance metrics and current data and history for each metric. For more information, see Analyzing Performance.
Log Analysis: Collect and view logs from monitored resources and convert them into metrics for monitoring. For more information, see Analyzing Logs.
Event Management: Set up notifications for specific conditions. For more information, see Managing Events.
Table. Exploring the Monitoring Page
Ending Monitoring
To exit the Cloud Monitoring Console, click the Log Out button in the top right corner of the User Information section.
Note
The session timeout for the Cloud Monitoring Console is set to 30 minutes.
Using Common Features
This section describes frequently used features when using the Cloud Monitoring Console.
Viewing Detailed Information
To view detailed information about a monitored resource, navigate to Cloud Monitoring Console > Performance Analysis or Cloud Monitoring Console > Log Analysis > Log Status. Then, click on the monitored resource for which you want to view detailed information.
Note
The detailed information for a monitored resource may vary depending on the service type.
If the operating system (OS) of the monitored resource is RHCOS (Red Hat Core OS), detailed information may not be available.
Item
Description
Basic Information
Displays basic information about the monitored resource
Example: Virtual Server - monitored resource, service type, service status, server type, OS information, IP
Performance
Displays key performance metrics as graphs
Logs
Displays the log collection volume as graphs
Events
Displays a list of events that occurred on the monitored resource
Agent
Provides Install, Start, Stop, Delete, and Update commands for the agent
Query Period Setting
Displays the query period for date/time data
Refreshes the data based on the current time.
Enables or disables automatic refresh.
Allows setting the data query period or changing the automatic refresh interval. For more information, see Setting the Query Period.
Monitoring Status Area
Displays the monitoring status for performance, logs, and events.
Table. Monitored Resource Detailed Information
Note
Agent management commands are available for Virtual Server, GPU Server, and Bare Metal Server services.
For more information on agent installation and management, see Managing Agents.
Sorting Data
You can sort event monitoring, performance analysis, and log analysis results in descending or ascending order. To sort data, follow these steps:
Display the information you want to sort on the page.
Click on the Sort button next to the category name. The sort order changes between descending and ascending each time you click.
Viewing Real-Time Data
You can set the dashboard or detailed information page to automatically refresh the data at a specified interval.
Note
The Cloud Monitoring Console allows you to set the monitoring page to refresh periodically.
You can refresh the data based on the current time by clicking the Refresh button.
To set the refresh interval, follow these steps:
Click the Settings button in the top right corner of the data representation area.
Select the refresh interval and click OK.
You can enable or disable the automatic refresh feature.
Setting the Query Period
You can set the query period to limit the scope of performance, log, and event data, making it easier to find the information you need. To set the query period, follow these steps:
Click the Settings button in the top right corner of the data representation area.
Select or enter the query period.
Caution
When entering the query period manually, it must be at least 30 minutes.
If the data query range is fixed for each widget, the widget’s query range takes priority.
10.3.2.1 - Using the Monitoring Dashboard
The monitoring dashboard allows you to view the operational status and event history of monitored services and resources, as well as the top usage items.
Getting Started with the Monitoring Dashboard
When you navigate to the Cloud Monitoring Console page in the Samsung Cloud Platform Console, the monitoring dashboard is displayed.
If you are on a different page, you can click Cloud Monitoring Console > Monitoring Dashboard to move to the monitoring dashboard page.
The monitoring dashboard is composed of the following elements.
Item
Description
Data Reference Time
Displays the reference time for the data shown on the dashboard
Refresh
Refreshes the dashboard based on the current time
Auto Refresh
Enables or disables the auto-refresh feature for the dashboard
Period Setting
Sets the data retrieval period or changes the refresh cycle
Monitoring Status
Displays the number of monitored targets and their monitoring status for each service
Event History
Displays the number of events that occurred in the last 7 days as a graph by risk level
Top 5 Performance Usage
Displays the top 5 monitored targets with the highest performance usage as a graph
Event Map
Displays the number of events that occurred for each service by risk level
Event Status
Displays a list of unprocessed events that have occurred
Table. Monitoring Dashboard Composition
Note
The monitoring dashboard is automatically created when you create an account in the Samsung Cloud Platform Console and cannot be deleted.
The widgets that make up the monitoring dashboard cannot be changed.
To create a dashboard with a specific widget, use a custom dashboard. For more information on custom dashboards, see Using Custom Dashboards.
Understanding Common Dashboard Features
This section describes the features that can be used in the dashboard.
Downloading Widget Images
You can download a widget as an image file (*.png) by clicking the download button in the top-right corner of the widget area.
Viewing Detailed Graph Information
When you hover over a graph with your mouse cursor, detailed information appears in a popup.
Monitoring Status
Displays the number of monitored targets and their monitoring status for each service in use.
Item
Description
Service Category
Displays the service category and the number of monitored targets for each service category
Clicking on a service category displays the list of services and the number of monitored targets included in the category
Service List
Displays the list of services and the number of monitored targets included in the service category
Clicking on the number of monitored targets for each service moves to the Performance Analysis page
Monitoring Status
Displays the number of monitored targets and their current status
Clicking on the Down or Unknown items displays the service name in a popup
Event Status
Displays the number of events that have occurred, classified by risk level (Fatal, Warning, Inform)
Note
The performance collection in the monitoring status displays the combined number of performance items for both Agent and Agentless methods.
Event History
Displays the number of events that occurred in the last 7 days as a graph by risk level.
When you hover over the graph with your mouse cursor, the event risk level and the number of occurrences for the selected date appear in a popup.
Occurrences: The total number of events that occurred
Active: The number of events that continue to occur because they meet the event occurrence conditions
Inactive: The number of events that no longer occur because they do not meet the event occurrence conditions
You can click on the risk level legend area to hide or show the corresponding graph.
Top 5 Performance Usage
Displays the top 5 monitored targets with the highest performance usage as a graph.
When you hover over the graph with your mouse cursor, the full name of the selected target and its current performance value appear in a popup.
Clicking on the graph opens the Monitored Target Details popup window for the corresponding target.
Item
Description
CPU Usage/Core [Basic]
The percentage of CPU time used, excluding Idle and IOWait states
Memory Used [Basic]
The current amount of used memory
Disk Read Bytes [Basic]
The number of disk read bytes
Disk Write Bytes [Basic]
The number of disk write bytes
Note
The monitoring dashboard only displays the performance of Virtual Servers. To display the top 5 performance of other service types, you must select and configure them in a custom dashboard.
Event Map
Displays the number of events that occurred for each service by risk level.
When you hover over a square with your mouse cursor, the name of the monitored target appears in a popup.
Clicking on a service item in the event map opens the Monitored Target Details popup window for the corresponding service.
Each item’s risk level is as follows.
Item
Description
No Rule
A state that cannot be determined as normal or abnormal because no threshold has been set.
NORMAL
A normal state. It means that the threshold was not exceeded, so no event occurred.
INFORM
The lowest level of risk. It includes simple notification-level information.
WARNING
A medium level of risk.
FATAL
The highest level of risk.
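The risk levels in the table above can be illustrated as a simple threshold ladder: a value is checked against per-level thresholds from lowest to highest severity, and a target with no thresholds configured reports No Rule. The function name and threshold values below are illustrative assumptions, not part of the service.

```python
def risk_level(value, thresholds):
    """thresholds: dict like {"INFORM": 60, "WARNING": 75, "FATAL": 90}, or None for No Rule."""
    if thresholds is None:
        return "No Rule"   # no threshold configured; state cannot be determined
    level = "NORMAL"
    # check from lowest to highest severity so the highest matching level wins
    for name in ("INFORM", "WARNING", "FATAL"):
        if name in thresholds and value >= thresholds[name]:
            level = name
    return level

limits = {"INFORM": 60, "WARNING": 75, "FATAL": 90}
print(risk_level(95, limits))   # FATAL
print(risk_level(50, limits))   # NORMAL
print(risk_level(70, None))     # No Rule
```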
Event Status
Displays a list of events that have occurred and are still active.
Events are displayed in the order they occurred most recently.
10.3.2.2 - Analyzing Performance
In Performance Analysis, you can check the main performance items of the monitoring target and view the current data and history of each performance item. Users can check the performance status of the monitoring target they manage by service or period and analyze the results by comparing specific performance.
Getting Started with Performance Analysis
You can start performance analysis by selecting a monitoring target directly or entering search conditions.
To analyze performance by searching for a monitoring target, follow these steps:
Click Cloud Monitoring Console > Performance Analysis. You will be moved to the Performance Analysis page.
Enter the search conditions for the monitoring target you want to analyze in the search area, and then click Search.
Item
Description
Search Area
Detailed search filters are displayed differently in the search area depending on the service type
Click the Detailed Search button to perform a detailed search.
Multiple condition items can be selected for each detailed search filter
Number of Monitoring Targets Displayed
Displays the number of search results and the number of performance items that can be viewed at a time in the list
The default value for the number of performance items displayed in the list is 20.
The number of items listed can be changed to 10, 20, 30, 40, 50, or 100
Search Information
Displays the search result values for the search condition items
Monitoring target, service status, event level
Clicking the risk icon displayed in the event risk will open a popup window with the most recent event details for that risk.
Performance Indicator Information
Displays the main performance indicators for the monitoring target based on the service type
Refer to the list of main performance indicators by service and the collection information by instance type and status for DB services
Detailed View
Check the detailed information of the corresponding monitoring target
Performance Comparison
Select a monitoring target to compare performance
Table. Performance Analysis
Checking Performance Details
To check the detailed performance information of a monitoring target, follow these steps:
Click the monitoring target you want to check in the performance analysis list. The Monitoring Details popup window will open.
Click the Performance tab.
When you place the mouse cursor over the graph, the values of each performance item will appear in a popup window.
You can set the query period or change the refresh cycle by clicking the icon in the top right corner.
You can select the graph display method by clicking the Detailed or Summary buttons in the top left corner of the performance chart.
Item
Description
Basic Information
Displays basic information about the monitoring target
Detailed
Displays the performance chart of the monitoring target in detail
Check one chart in detail
Summary
Displays the performance chart of the monitoring target in a checkerboard format
Check multiple charts at a glance
Query Period Setting
Date/Time: Displays the query base time of the data.
Refresh: Refreshes the data directly to the current time.
Start/Stop: Enables or disables the automatic refresh function.
Settings: Sets the data query period or changes the automatic refresh cycle
Performance Comparison
Creates a chart to compare the performance of the monitoring target and makes it possible to compare each performance
Performance Chart
Displays the performance chart of the monitoring target as a graph
If there is only one graph, the last collected value is displayed with the unit in the top right corner.
If there are multiple graphs, ⓘ is displayed in the top right corner, and when you place the mouse cursor over it, the last collected value of each graph appears in a popup window.
When you place the mouse cursor over the graph, the performance item value at the specified time appears in a popup window.
Table. Monitoring Target Details
Note
The collection cycle of performance values may vary depending on the service.
The chart data is plotted as 30 points, and the data collection interval for each data query range (time) is as follows. (The number of points may vary slightly due to collection timing differences.)
30 minutes: approximately 1-minute interval
60 minutes: approximately 2-minute interval
3 hours: approximately 6-minute interval
6 hours: approximately 12-minute interval
12 hours: approximately 24-minute interval
24 hours: approximately 48-minute interval
3 days: approximately 144-minute interval (2 hours 24 minutes)
7 days: approximately 336-minute interval (5 hours 36 minutes)
14 days: approximately 672-minute interval (11 hours 12 minutes)
Custom: The user-specified range (minutes) divided by 30
Each point’s data is expressed as the maximum value in the query range (time), and you can change the statistical type in the detailed chart.
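The bucketing in the note above can be sketched as follows: the query range is split into 30 equal buckets (so a 60-minute range yields roughly 2-minute intervals) and each point shows the maximum value in its bucket, matching the default statistical type. `downsample_max` is a hypothetical name for illustration, not a service API.

```python
def downsample_max(samples, range_minutes, points=30):
    """samples: list of (minute_offset, value) within the query range.
    Returns one value per bucket: the bucket maximum, or None if empty."""
    interval = range_minutes / points            # e.g. 60 min -> 2-minute buckets
    buckets = [[] for _ in range(points)]
    for t, v in samples:
        idx = min(int(t // interval), points - 1)
        buckets[idx].append(v)
    return [max(b) if b else None for b in buckets]

# 60-minute range with one sample per minute: each point covers ~2 minutes
samples = [(t, t % 7) for t in range(60)]
chart_points = downsample_max(samples, 60)
print(len(chart_points))   # 30
```

Changing `max` to `min`, `sum`, or a mean would correspond to the other statistical types selectable in the detailed chart.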
Comparing Performance
You can check the performance items of each monitoring target and select the desired performance items to compare.
Getting Started with Performance Comparison
You can create a chart to compare the performance of the monitoring target and compare each performance.
Note
Only performance items of the same service type can be compared.
Performance items may be added depending on the detailed attributes of the service type.
Windows OS performance of VM
Kibana-related performance of Search Engine
To start performance comparison, follow these steps:
Click Cloud Monitoring Console > Performance Analysis. You will be moved to the Performance Analysis page.
Enter the search conditions for the monitoring target you want to analyze in the search area, and then click Search.
Select all the monitoring targets you want to compare and click Performance Comparison. A popup window will open where you can compare performance.
Item
Description
Monitoring Target
Displays the service type of the monitoring target to be compared. Click to change the service
Changing the service will delete all charts created so far.
Click Add to search for and add the monitoring target of the currently selected service
The selected monitoring target is displayed on the page, and you can delete the monitoring target by clicking X or Delete All
Performance Item
Displays all performance items collected by the currently selected service
Check the performance items you want to compare, and they will be included in the chart.
Chart Display Method
Selects the display method for the performance comparison chart
Detailed: The performance comparison chart is displayed in detail (default)
Summary: The performance comparison chart is displayed briefly
Query Period Setting
Date/Time: Displays the query base time of the data
Refresh: Refreshes the data directly to the current time.
Start/Stop: Enables or disables the automatic refresh function.
Settings: Sets the data query period or changes the automatic refresh cycle
Chart Area
Displays a chart comparing the performance of the monitoring targets based on the selected performance items
Click Add. A popup window will open where you can add a monitoring target.
Select the monitoring target you want to compare and click OK.
If you select Kubernetes Engine, you must also select its subtype.
Check the performance items you want to compare. The corresponding performance items will be added to the chart.
Checking the Chart
The performance comparison result is displayed as a chart. You can change the shape of the created chart or download it as an image or Excel file.
When you place the mouse cursor over the graph, the performance item value at the specified time appears in a popup window.
You can hide or show the graph by clicking the target item in the legend area.
Item
Description
Statistical Method
Sets the statistical method to be displayed as a graph
Displays statistics from 5 minutes to 6 hours.
Basic, Maximum, Minimum, Average, Sum can be selected, and multiple methods can be selected at the same time. The selected items are displayed in the legend area
Chart Type
Selects the type of graph to be displayed in the chart
Line: Line graph
Stacked Area: Area graph
Scatter: Scatter graph
Chart Download
Checks and downloads the raw data of the chart
Chart PNG File: Downloads the chart as an image file (PNG).
Chart Excel File: Downloads the data of the performance items displayed in the chart as an Excel file. The chart display data is a set of data automatically collected according to the query range.
Raw Excel File: Downloads all the data of the performance items displayed in the chart for the query range period as an Excel file.
Time Series Graph Widget Addition
Adds the chart to a user-defined dashboard as a time series graph widget
Clicking will open a popup window to add a time series graph widget.
Delete
Deletes the performance comparison result chart
Performance Comparison Status
Displays the performance comparison result as a graph
When you place the mouse cursor over the graph, the performance comparison status at that time is displayed in a popup window.
10.3.2.3 - Analyzing Logs
In log analysis, logs from the monitoring target are collected and their contents checked, and they can be converted into metrics (structured data) for monitoring. Basic collection logs are provided for each monitoring target, and users can create custom logs to collect and check additional logs.
Note
To use log analysis, you must install and operate a log collection agent in advance. For more information on installing and operating log agents, please refer to Managing Agents.
To collect logs from Kubernetes Engine, you must set up log collection in the Samsung Cloud Platform Console.
Start log analysis
You can check the log status list or search for the monitoring target log to check. To check the log status list, follow the procedure below.
Click Cloud Monitoring Console > Log Analysis > Log Status. You will be taken to the Log Status page.
Enter the search conditions of the service to be analyzed in the search area, and then click Log Search.
A list of services that match the search criteria and search information will be displayed at the bottom.
Clicking the Detail View button for each service displays detailed log information for the service.
Item
Description
Search Area
The search filters displayed in the search area may vary depending on the service type
Advanced Search can be done by clicking the Advanced Search button.
One or more condition items can be selected for each detailed search filter
Number of items to display for monitoring targets
Displays the number of search results and the number of items that can be viewed at a time in the list
The default is to view 20 at a time.
The number of items listed can be changed to view 10, 20, 30, 40, 50, or 100 at a time
Search Information
Displays the search result value for the search condition item
Detailed View
Check the detailed information of the corresponding monitoring target
Log Search
Search logs by combining keywords and queries and check detailed history
Note
If a Virtual Server or Node connected to the monitoring target exists, the status diagram will also be displayed in the search information area.
The name of the monitoring target can use Korean, English uppercase and lowercase letters, numbers, and special symbols (-, _, .) and can be entered up to a maximum of 100 characters.
If the monitoring target does not have permission, information about the target without permission and a permission check message will be displayed as a pop-up.
Check log details
You can view the detailed log records and log graphs of the monitoring target.
Checking the log list
You can check the log details in the monitoring detail pop-up window. To check the monitoring details of the log, follow the next procedure.
Click Cloud Monitoring Console > Log Analysis > Log Status. You will be taken to the Log Status page.
Click on the log to check the detailed information on the Log Status page. The Monitoring Details popup window will open.
Click the log tab.
When you place the mouse cursor on the graph, the value of each log item appears in a popup window.
You can set the inquiry period or change the refresh cycle by clicking the icon at the top right.
You can select the graph display method by clicking the Detail, Summary buttons at the top left of the log chart.
Item
Description
Basic Information
Displays basic information about the monitoring target
Details
The chart for each log of the monitoring target is unfolded and displayed
Check one chart in detail
Summary
Performance charts of monitoring targets are displayed in a checkerboard format
Check multiple charts at a glance
Setting the inquiry period
Date/Time: Displays the standard time of the data inquiry.
Refresh: Refreshes directly to the current time.
Start/Stop: Turns the automatic refresh function on or off.
Settings: Sets the data inquiry period or changes the automatic refresh cycle
Log Search
Combine keywords and queries to search logs and check detailed history
Log Chart
The log chart of the monitoring target is displayed as a graph
If you place the mouse cursor over the graph, the log item value at the specified time will appear in a popup window.
Check by searching the log
You can search logs by combining keywords and queries, and check the details.
Note
The presence and frequency of keywords can be converted into indicators and displayed as charts on the dashboard page, or set up related events to receive notifications.
To search logs, follow the next procedure.
Click Cloud Monitoring Console > Log Analysis > Log Status. You will be taken to the Log Status page.
Click Log Search on the Log Status page. You will be taken to the Log Search page.
Item
Description
Monitoring target
Indicates the type of service for the monitoring target to be compared
Click the monitoring target list to change the service
If the service is changed, all charts created so far will disappear.
Click the Add button to search for and add the monitoring target of the currently selected service
The selected monitoring target is displayed on the page, and you can delete the monitoring target by clicking X or Delete all.
Search Condition
Set the condition for the log to be searched
Setting the inquiry period
Date/Time: Displays the standard time of the data inquiry.
Refresh: Refreshes directly to the current time.
Start/Stop: Turns the auto-refresh function on or off.
Settings: Sets the data inquiry period or changes the auto-refresh cycle
The graph of log occurrence
Log occurrence graph
Occurrence log message
Log messages that occurred from the monitoring target are displayed by time
Click the Add button. A popup window will open where you can add a monitoring target.
Click the monitoring target and select the log file you want to add.
Once the log file selection is complete, click the Confirm button.
Enter the search conditions and click the Search button. The search results will be displayed on the log volume graph and the occurred log message.
Item
Description
Add Metric
Add a metric to the log search results
Use after searching logs
Execution History
Check the list of search conditions that were recently executed for the search
Execution history displays up to 20 most recently executed search conditions
Select the desired execution history to input as the current search condition
Search field
Select search field
Condition
Select search condition
like, !like, =, !=, <=, >=, >, < can be selected
Search value
Enter the keyword to search
Log Search
Select an operator (AND, OR) for the newly added search condition
Only displayed when a new search condition is added
Add condition
Add new search condition
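The condition items above — a search field, an operator such as like or !=, a search value, and an AND/OR joiner for each added condition — can be sketched as a left-to-right evaluation over a log record. This is purely illustrative; the actual search is performed server-side, and `matches` and the field names are hypothetical.

```python
OPS = {
    "like":  lambda a, b: b in a,
    "!like": lambda a, b: b not in a,
    "=":  lambda a, b: a == b,
    "!=": lambda a, b: a != b,
    "<=": lambda a, b: a <= b,
    ">=": lambda a, b: a >= b,
    "<":  lambda a, b: a < b,
    ">":  lambda a, b: a > b,
}

def matches(log, conditions):
    """conditions: list of (joiner, field, op, value); joiner is None for the first."""
    result = True
    for joiner, field, op, value in conditions:
        ok = OPS[op](log.get(field, ""), value)
        if joiner is None:
            result = ok
        elif joiner == "AND":
            result = result and ok
        else:  # "OR"
            result = result or ok
    return result

log = {"message": "ERROR: disk full", "host": "vm-01"}
print(matches(log, [(None, "message", "like", "ERROR"),
                    ("AND", "host", "=", "vm-01")]))   # True
```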
When searching logs, the log history corresponding to the entered condition is displayed as a chart.
Log history is displayed in seconds.
Item
Description
Log occurrence graph
The log occurrence during the set period is displayed as a graph
If you place the mouse cursor over the graph, the value of each log item appears in a popup window.
Clicking on the bar graph of the graph displays the list of logs at that point in time.
Setting the inquiry period
Date/Time: Displays the reference time of the data
Refresh: Refreshes directly to the current time.
Start/Stop: Turns the automatic refresh function on or off.
Settings: Sets the data inquiry period or changes the automatic refresh cycle
Monitoring target
The monitoring target list is displayed
If you select a monitoring target to check the log message, the contents will be displayed in the log list
Log list
Log messages that occurred in the monitoring target are displayed by time
Clicking the button in the log list displays the full message of the log
Click download to download the currently displayed log message in Excel and TXT file formats
Check the status of log collection
You can check the collection information of major logs for the past 7 days in a chart.
When you place the mouse cursor on the graph, detailed information appears in a pop-up window.
Only collected logs are aggregated, and uncollected logs are not displayed in the current status.
Note
When you create an Account, 1GB of virtual capacity is provided by default to store the collected logs.
All logs can be stopped and restarted for collection as needed.
To check the log collection status, click Cloud Monitoring Console > Log Analysis > Log Collection Dashboard.
Item
Description
Accumulated log occurrence amount
Amount of logs collected from the 1st of each month, displayed in GB
Displays the cumulative usage of the allocated total virtual capacity so far, as a percentage.
Recent 7-day log collection amount
The amount of logs collected over the past 7 days is displayed in a graph by service type
The line graph shows the quantity (KB), and the bar graph shows the cumulative usage rate.
Click on the monitoring target in the legend area to display only the corresponding graph
Service-specific log occurrence rate
Displays the log collected over the past 7 days, classified by service
When you click on the bar graph representing each service, the monitoring target with the most collected logs within the service is displayed on the log collection TOP 10 chart.
Log Collection Top 10
Displays the top 10 monitoring targets with the most logs collected in the last 7 days within the selected service in the log occurrence rate by service as a graph
Click on each point on the graph to view the detailed log records
Click on the monitoring target in the legend area to display only the corresponding graph
Clicking on the graph of the target service moves to the Log Status page
Reference
To perform monitoring related to logs, you must install and operate the log collection agent in advance. For more information on installing and operating log agents, please refer to Managing Agents.
The accumulated logs are stored up to a maximum of 1GB. If 1GB is exceeded, the oldest logs are automatically deleted first.
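The retention rule above (a 1GB cap with the oldest logs dropped first) can be sketched as follows. This is an illustrative model only; the function and data structures are our own, not the platform's implementation.

```python
from collections import deque

CAP_BYTES = 1 * 1024 ** 3  # 1 GB virtual capacity per Account

def enforce_retention(logs: deque, total_bytes: int, cap: int = CAP_BYTES) -> int:
    """Drop logs from the oldest end until the total size fits within the cap."""
    while total_bytes > cap and logs:
        oldest = logs.popleft()          # entries are ordered oldest -> newest
        total_bytes -= oldest["size"]
    return total_bytes

# Three 400 MB log batches exceed the 1 GB cap, so the oldest is dropped.
logs = deque([{"ts": t, "size": 400 * 1024 ** 2} for t in range(3)])
total = enforce_retention(logs, sum(e["size"] for e in logs))
print(len(logs), total // 1024 ** 2)     # 2 entries, 800 MB remain
```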
Check the status of metric settings
You can create metrics to display the occurrence of log patterns over time as a time series.
To check the list of metrics, click Cloud Monitoring Console > Log Analysis > Current Metric Settings.
Reference
The metrics converted to time series data can be set as an event or registered on the dashboard for real-time monitoring.
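The idea of turning log-pattern occurrences into time series data, as described above, can be sketched like this. The bucketing interval and field layout are assumptions for the example, not the platform's actual behavior.

```python
from collections import Counter

def pattern_metric(entries, pattern, bucket_seconds=60):
    """Count log lines matching `pattern`, grouped into fixed time buckets."""
    counts = Counter()
    for ts, line in entries:
        if pattern in line:
            counts[ts - ts % bucket_seconds] += 1   # align timestamp to bucket start
    return sorted(counts.items())

entries = [(5, "ERROR disk full"), (30, "INFO ok"), (70, "ERROR disk full")]
print(pattern_metric(entries, "ERROR"))  # [(0, 1), (60, 1)]
```

Each (bucket, count) pair is one point of the time series that can then be charted or compared against an event threshold.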
Item
Description
Search area
The search filter displayed in the search area may vary depending on the service type
Advanced search can be done by clicking the Advanced search button.
One or more conditions can be selected for each detailed search filter
Number of items to display for monitoring targets
Display search results
The default is to display 20 at a time.
The number of items listed can be changed to display 10, 20, 30, 40, 50, or 100 at a time
Search Information
Displays the search result value for the search condition item
Add
Add a new metric
Delete
Select and delete metrics in the search information
Check the details of a metric
To view detailed information about a metric, follow these steps.
Click Cloud Monitoring Console > Log Analysis > Metric Setting Status. The Metric Setting Status page will be displayed.
On the Metric Setting Status page, click the metric name to check detailed information. The Metric Details popup window will open.
Adding Metrics
You can add new metrics to display the desired log data as a time series.
Reference
A log metric can only be set for a monitoring target where the log agent is installed and logs are collected. For more information on installing and operating log agents, see Managing Agents.
To add a new metric, follow the procedure below.
Click Cloud Monitoring Console > Log Analysis > Metric Setting Status. The Metric Setting Status page will be displayed.
On the Metric Setting Status page, click the Add button. The Add Metric popup window will open.
Enter the Metric Name.
Metric names can only use English uppercase and lowercase letters, underscores (_), periods (.), and hyphens (-).
To distinguish metrics from general performance items, the prefix metricfilter. is automatically added and cannot be deleted or changed.
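The naming rule above (allowed characters plus the fixed metricfilter. prefix) can be sketched as follows. The helper name is ours, not a platform API.

```python
import re

# Allowed: English letters, underscores, periods, and hyphens only,
# per the metric naming rule described in the text.
NAME_RE = re.compile(r"^[A-Za-z_.\-]+$")

def build_metric_name(name: str) -> str:
    """Validate a metric name and prepend the fixed 'metricfilter.' prefix."""
    if not NAME_RE.fullmatch(name):
        raise ValueError("only letters, '_', '.', and '-' are allowed")
    return "metricfilter." + name

print(build_metric_name("disk-errors"))  # metricfilter.disk-errors
```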
Item
Description
Metric Name
Enter the name of the new metric to create
Monitoring Target
Displays the service type of the monitoring target
Click the monitoring target list to change the service
If the service is changed, all charts created so far will disappear.
Click the add button to search for and add the monitoring target of the currently selected service
The selected monitoring target is displayed on the page and can be deleted by clicking X or delete all
Search Conditions
Set conditions for logs to be searched
Set query period
Date/Time: Displays the reference time for data query
Refresh: Refreshes directly to the current time.
Start/Stop: Turns automatic refresh on or off.
Settings: Allows setting the data query period or changing the automatic refresh cycle.
Log Volume Graph
When searching for logs, the log history that matches the entered conditions is displayed as a chart
Occurrence Log Message
Log messages that occurred from the monitoring target are displayed by time
Click the Add button. A popup window for adding monitoring targets will open.
Click the monitoring target and select the log file you want to add.
Once the log file selection is complete, click the Confirm button.
Enter the search conditions and click the Search button. The search results will be displayed in the log volume graph and occurrence log message.
Item
Description
Add Metric
Add metrics to log search results
Available after a log search has been performed
Execution History
Check the list of search conditions that were recently executed for searching
Execution history displays up to 20 most recently executed search conditions
A desired entry from the search history can be applied as the current search condition
Search Field
Select Search Field
Condition
Select search condition
like, !like, =, !=, <=, >=, >, < can be selected
Search value
Enter the keyword to search
Operator
Select an operator (AND, OR) for the newly added search condition
Only displayed when a new search condition is added
Add condition
Add new search condition
Click the Confirm button. A new metric will be added with a toast popup message.
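The search conditions described in the table above (field, operator, value, joined by AND/OR) could be evaluated roughly as follows. The operator names mirror the console's list; the evaluation code itself is an illustrative sketch.

```python
# Operators available in the console's search-condition dropdown.
OPS = {
    "like":  lambda field, v: v in field,
    "!like": lambda field, v: v not in field,
    "=":     lambda field, v: field == v,
    "!=":    lambda field, v: field != v,
    "<=":    lambda field, v: field <= v,
    ">=":    lambda field, v: field >= v,
    ">":     lambda field, v: field > v,
    "<":     lambda field, v: field < v,
}

def matches(record, conditions):
    """conditions: list of (combinator, field, op, value); first combinator is ignored."""
    result = None
    for combinator, field, op, value in conditions:
        hit = OPS[op](record[field], value)
        if result is None:
            result = hit
        elif combinator == "AND":
            result = result and hit
        else:  # "OR"
            result = result or hit
    return result

record = {"message": "ERROR timeout", "level": "ERROR"}
conds = [(None, "level", "=", "ERROR"), ("AND", "message", "like", "timeout")]
print(matches(record, conds))  # True
```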
Modifying Metric Search Conditions
To modify the search conditions of a metric, follow these steps.
Click Cloud Monitoring Console > Log Analysis > Metric Setting Status. The Metric Setting Status page will be displayed.
On the Metric Setting Status page, click the Metric Name of the metric you want to modify. The Metric Details popup window will open.
In the Metric Details popup window, click the Edit button. The Edit Metric popup window will open.
In the Edit Metric popup window, modify the search conditions and click the Confirm button. The metric will be modified, and a toast popup message will be displayed.
Deleting Metrics
To delete a metric, follow these steps.
Reference
If there are charts or event policies using the metric you want to delete, you cannot delete the metric.
Click Cloud Monitoring Console > Log Analysis > Metric Setting Status. The Metric Setting Status page will be displayed.
On the Metric Setting Status page, select the metric to delete and click the Delete button. The metric will be deleted, and a toast popup message will be displayed.
10.3.2.4 - Managing Events
An event is a setting that alerts users when the performance value of a monitored target meets certain conditions. By setting up events, users can grasp important monitoring information without missing it. For example, if an event is set to occur when a performance value related to overload exceeds a certain threshold, users will receive notifications whenever there is a risk of overload while operating the resource. Users can then take action before problems occur.
In event management, users can create events to alert designated users when specific values occur during monitoring.
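The core idea above, comparing a collected performance value against a user-set threshold and notifying before problems occur, can be sketched as follows. Function and parameter names are illustrative.

```python
def check_event(value: float, threshold: float, comparison: str) -> bool:
    """Return True when the performance value meets the event condition."""
    if comparison == "greater_than":
        return value > threshold
    if comparison == "less_than":
        return value < threshold
    raise ValueError(f"unknown comparison: {comparison}")

# e.g. an overload event: fire when CPU usage exceeds 90%
cpu_usage = 92.5
if check_event(cpu_usage, 90.0, "greater_than"):
    print("overload event: notify recipients")
```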
Checking Event Status
The event status section displays information about all occurred events, related performance items, and event notification history. To check the event status list, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Status. The Event Status page will be displayed.
On the Event Status page, enter the search conditions for the service you want to check in the search area, and then click the Search button.
Item
Description
Search Area
The search filter displayed in the search area varies depending on the service type.
Click the Detailed Search button to perform a detailed search.
Multiple conditions can be selected for each detailed search filter.
Number of Monitoring Targets Displayed
Displays the number of search results and the number of performance items that can be viewed at once in the list.
The default value for the number of performance items displayed in the list is 20 per page.
The number of performance items displayed in the list can be changed to 10, 20, 30, 40, 50, or 100 per page.
Search Information
Displays the search result values for the search condition items.
Clicking on the message content for each service allows you to check the detailed information of the event.
Detailed View
Displays detailed information about the corresponding monitoring target.
Table. Event List
Note
If a Virtual Server or Node is connected to the monitoring target, the status will also be displayed in the search information area.
The name of the monitoring target can include Korean, English (uppercase and lowercase), numbers, and special characters (-, _, .), with a maximum of 100 characters.
Viewing Event Status List
In the monitoring detail popup window, you can check the event information, occurrence time, and duration. To check the event occurrence status, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Status. The Event Status page will be displayed.
On the Event Status page, click the Event tab.
Item
Description
Event Status
Displays the event message and occurrence time.
Active
Displays only events that are currently active.
All
Displays all events.
Event Details
Displays detailed information about the selected event message.
Table. Event Tab
Checking Event Details
To check the event details, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Status. The Event Status page will be displayed.
On the Event Status page, click the Event tab.
On the Event Status page, select the event you want to check the details for, and then click Event Details. You can then check the event issuance conditions, performance items, and notification history.
Item
Description
Monitoring Target
Displays the name of the monitoring target.
Occurrence Condition
Displays the occurrence condition of the event.
Performance Item
Displays a chart for the performance item.
Placing the mouse cursor over the graph displays the detailed performance value for each time period.
Notification History
Displays the entire notification history.
Event Setting Details
Displays the setting information for the corresponding event.
Table. Event Details
Managing Event Settings
You can set up detailed event settings, such as the monitoring target, performance value that serves as the basis for event occurrence, event risk level, and event notification recipient. When the data collected from the monitoring target meets the conditions set in the event policy, notifications are sent to users via email, SMS, or messaging.
Note
Event policies can only be set when a monitoring target is specified, and policies for Auto-Scaling Groups can be set on a group-by-group basis.
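The notification behavior described above (fanning one event out to each recipient's configured channels of email, SMS, or messaging) can be sketched like this. The sender function is a placeholder, not a real API.

```python
def send(channel: str, recipient: str, message: str) -> str:
    """Placeholder for an actual email/SMS/messaging sender."""
    return f"{channel}->{recipient}: {message}"

def notify(recipients: dict, message: str) -> list:
    """recipients maps each user to the channels enabled for them."""
    sent = []
    for user, channels in recipients.items():
        for channel in channels:          # one delivery per enabled channel
            sent.append(send(channel, user, message))
    return sent

out = notify({"alice": ["email", "sms"], "bob": ["messaging"]},
             "CPU usage exceeded threshold")
print(len(out))  # 3 notifications sent in total
```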
Checking Event Settings
To check the event settings, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, enter the search conditions for the service you want to check in the search area, and then click the Search button.
Item
Description
Search Area
The search filter displayed in the search area varies depending on the service type.
Click the Detailed Search button to perform a detailed search.
Multiple conditions can be selected for each detailed search filter.
Number of Monitoring Targets Displayed
Displays the search results.
The default value is 20 per page.
The number of performance items displayed in the list can be changed to 10, 20, 30, 40, 50, or 100 per page.
Monitoring Target
Displays the name of the monitoring target.
Checking the box selects the monitoring target, and the Delete, Activate, and Notification Recipient buttons are activated.
Performance Item
Displays the performance item that is the target of the event setting.
Individual Item
Displays the individual performance item under the performance item.
If there are no individual items, they will not be displayed.
Type/Unit
Displays the value type and unit of the corresponding performance item.
Event Level
Displays the risk level of the corresponding event.
The risk level is set by the user when adding an event.
Fatal: The most critical level.
Warning: The middle level of risk.
Information: The lowest level of risk and reference level.
Threshold
Displays the reference value used to compare the performance value.
Notification Recipient
Displays the recipient of the event notification.
Placing the mouse cursor over the name displays the entire list.
Policy Status
Displays whether the event is activated or not.
Detailed View
Displays detailed event information and allows modification.
Clicking Detailed View opens a popup window with detailed information about the corresponding event.
Add
Adds an event.
Delete
Deletes an event.
Activate
Activates or deactivates an event.
Notification Recipient
Displays and manages event notification recipients.
Table. Event Settings
Note
The name of the monitoring target can include Korean, English (uppercase and lowercase), numbers, and special characters (-, _, .), with a maximum of 100 characters.
If you do not have permission for the monitoring target, a message will be displayed indicating that you do not have permission, along with the target information.
Checking Detailed Event Settings
You can check detailed information about the monitoring target and event conditions, and modify the event conditions and notification information.
Adding Event Settings
To add event settings, follow these steps:
Note
Event policies can only be set when a monitoring target is specified.
Policies for Auto-Scaling Groups can be applied on a group-by-group basis.
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, click the Add button. The Add Event Settings popup window will be displayed.
Item
Description
Target Name
Select the monitoring target to add event settings for.
Clicking the monitoring target list changes the service.
Changing the service will delete all event conditions created so far.
Click the Add button to search for and add the monitoring target of the currently selected service.
The selected monitoring target is displayed on the page, and you can delete the monitoring target by clicking the X or Delete All button.
Event Settings Area
Set the performance and occurrence conditions for the event.
Notification Information Area
Set the notification recipient and notification method for the event.
Table. Add Event Settings Popup Window
In the monitoring target area, select the service type and then click the Add button. The Add Monitoring Target popup window will be displayed.
Select the monitoring target and then click the Confirm button.
You can select multiple monitoring targets at the same time.
If there are multiple monitoring targets, the set event will be added to each monitoring target.
If you select Kubernetes, you must also select the subtype.
In the performance item area, click the performance item you want to add an event for, and then enter the event occurrence condition.
The number of times the performance item is added is displayed next to the performance item name.
If you select multiple performance items, you must enter the event occurrence condition for each performance item.
Item
Description
Event Policy Template
Select an existing event policy template to apply.
Performance Item
Click the performance item to set the event occurrence condition.
Event Level
Set the event level.
Fatal: The most critical level.
Warning: The middle level of risk.
Information: The lowest level of risk and reference level.
Performance Type
Select the reference value used to determine whether the event occurs.
Collected Value: Uses the current value.
Delta Value: Uses the difference between the previous value and the current value.
Threshold
Set the reference value used to compare the performance value.
This is the criterion for determining whether the event occurs.
Only numbers and decimal points can be entered.
Comparison Method
Select the method used to compare the performance value and the threshold.
Range: Checks if the performance value is within the specified range of the threshold.
Match: Checks if the performance value matches the threshold.
Mismatch: Checks if the performance value does not match the threshold.
Greater Than: Checks if the performance value is greater than the threshold.
Greater Than or Equal To: Checks if the performance value is greater than or equal to the threshold.
Less Than: Checks if the performance value is less than the threshold.
Less Than or Equal To: Checks if the performance value is less than or equal to the threshold.
Individual Item
Specifies the individual performance item under the performance item as the event condition.
This is only activated if the performance item can collect individual items.
Prefix
Adds a prefix to the event message.
This is used as a keyword to search for the event in the Event Status page.
Statistics
Sets the statistical method to apply to the collected performance values.
If a statistical method is set, the performance value to which the statistical method is applied is compared to the threshold to determine whether the event occurs. If not set, the most recent performance value is compared to the threshold.
Statistical Method: Selects one of the maximum, minimum, average, or sum to calculate the collected performance values.
Statistical Period: Sets the period for which the statistical method is applied. This is the period from the most recently collected performance value.
Continuous Occurrence Count
Sets the number of consecutive monitoring values that meet the event occurrence condition.
This value is used as sensitivity to determine whether the event is a momentary anomaly or an actual event.
Event Occurrence Notification Time
Sets the time zone for event policy settings.
Table. Add Event Settings - Event Settings Area
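Several of the settings in the table above, the statistical method over a period, the threshold comparison, and the continuous occurrence count used as sensitivity, can be combined in a sketch like this. All names and values are illustrative, not the platform's implementation.

```python
def breaches(values, threshold, stat="average", window=3, consecutive=2):
    """Return True when the windowed statistic exceeds the threshold
    `consecutive` times in a row (the continuous occurrence count)."""
    stats = {"average": lambda w: sum(w) / len(w),
             "maximum": max, "minimum": min, "sum": sum}
    streak = 0
    for i in range(window, len(values) + 1):
        windowed = stats[stat](values[i - window:i])   # stat over the period
        streak = streak + 1 if windowed > threshold else 0
        if streak >= consecutive:                      # sensitivity check
            return True
    return False

samples = [40, 50, 95, 96, 97, 98]   # rising CPU usage samples
print(breaches(samples, 90))         # True: two consecutive windowed averages > 90
```

A single momentary spike would reset the streak and not fire the event, which is exactly what the continuous occurrence count is for.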
In the Notification area, you can set up notifications.
Item
Description
Notification Recipient Selection Area
Select the notification recipient.
Clicking the Delete button after selecting the notification recipient deletes the recipient.
Notification Recipient/Group
Displays the list of recipients to whom the event notification will be sent.
Event Risk Level
Displays the risk level of the set event.
Notification Method
Displays the method used to send notifications to the recipient.
Add
Adds a new notification recipient from the address book.
Delete
Deletes the notification recipient from the list.
Table. Add Event Settings - Notification Information Area
Check the notification recipient and then click the Confirm button.
Note
Only the Root user or IAM user of an account can be added as a notification recipient.
Multiple recipients can be selected at the same time.
Set the notification method for each notification recipient based on the event risk level.
The notification method can be selected from email, SMS, or messaging, and multiple methods can be selected at the same time.
After setting the notification method, click the Confirm button.
Modifying Event Settings
To modify the event conditions and notification recipient information, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, enter the search conditions for the service you want to modify in the search area, and then click the Search button.
In the event policy list, click the Detailed View button for the event policy you want to modify. The Event Setting Details page will be displayed.
On the Event Setting Details page, click the Modify button. The Modify Event Settings page will be displayed.
On the Modify Event Settings page, enter the modified information and then click the Confirm button.
You can modify the event conditions and notification information.
Deleting Event Settings
To delete event settings, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, enter the search conditions for the service you want to delete in the search area, and then click the Search button.
In the event policy list, check the event policy you want to delete and then click the Delete button.
In the confirmation popup window, click the Confirm button.
Changing Event Setting Activation
You can easily change the activation status of event policies.
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, enter the search conditions for the service you want to change in the search area, and then click the Search button.
In the event policy list, check the event policy you want to change the activation status for and then click the Activate button. The Policy Activation popup window will be displayed.
Select the activation status and then click the Confirm button.
You can change the activation status in bulk by clicking the Activate All or Deactivate All button.
Note
Deactivating an event policy will deactivate all active events that occurred due to the selected event policy.
Changing Event Notification Recipients
You can check and change the notification recipients for event occurrences in bulk.
Note
The event notification recipient change function is intended to change the notification recipients in bulk. Therefore, existing notification recipients will be deleted and changed to the new notification recipient settings.
To check and change the notification recipients for each policy, click the Modify button on the policy details page.
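The bulk-change semantics noted above (the new recipient list replaces the existing one rather than being merged) can be sketched as follows. The data structures are assumptions for the example.

```python
def bulk_change_recipients(policies: dict, policy_names, new_recipients):
    """Replace, not merge: existing recipients of each policy are discarded."""
    for name in policy_names:
        policies[name]["recipients"] = list(new_recipients)
    return policies

policies = {"cpu-high": {"recipients": ["alice"]},
            "disk-full": {"recipients": ["bob", "carol"]}}
bulk_change_recipients(policies, ["cpu-high", "disk-full"], ["ops-team"])
print(policies["disk-full"]["recipients"])  # ['ops-team'] - old recipients removed
```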
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, enter the search conditions for the service you want to modify in the search area, and then click the Search button.
In the event policy list, check the event policy you want to modify and then click the Notification Recipient button. The Notification Recipient page will be displayed.
On the Notification Recipient page, select the user to add as a notification recipient and then click the Confirm button.
Item
Description
Event Policy List
Displays the list of event policies to change the notification recipients for.
Click Add to add policies to change.
Clicking the Delete button for a policy deletes the policy.
User Search Area
Enter the name, email, phone number, or company name to search for users.
Notification Address Book
Use the address book to check and add users.
Search User List
Displays the list of users included in the address book or search results.
Checking the user adds them to the notification recipient list.
Notification Recipient List
Displays the list of users to be added as notification recipients for the event policies displayed in the list.
Checking the user and clicking the Delete button removes the user from the list.
Table. Changing Event Notification Recipients
Managing Event Templates
You can create event templates by setting monitoring targets, performance values that serve as the basis for event occurrence, and event risk levels. When adding or modifying events, you can use event policy templates to easily enter event conditions.
Checking the Event Policy Template List
To check the event policy template list, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, click Event Policy Template. The Event Policy Template page will be displayed.
On the Event Policy Template page, enter the search conditions for the service you want to check in the search area, and then click Search.
Item
Description
Search Area
Enter the conditions for the event policy template to search
Add Event Policy Template
Add an event policy template
Template List
Displays the event policy templates that match the search conditions
Table. Event Policy Template List
Adding an Event Policy Template
To add an event policy template, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, click the Event Policy Template button. The Event Policy Template page will be displayed.
On the Event Policy Template page, click the Add Event Policy Template button. The Add Event Policy Template popup window will open.
In the Add Event Policy Template popup window, set the service type and template information to add the event policy template.
* indicates required input items.
Item
Description
Service Type
Select the service type to set the event policy
Template Name
Enter the name of the template to create
Template Description
Enter a description of the template to create
Table. Adding an Event Policy Template - Service Type and Template Name Settings
In the performance item section, click the performance item to add an event and enter the event occurrence conditions.
The number of times the performance item is added is displayed next to the performance name.
If multiple performance items are selected, event occurrence conditions must be entered for each performance item.
* indicates required input items.
Item
Description
Load Event Policy Template
Select an existing event policy template to apply
Performance Item
Click the performance item to set the event condition
Event Level
Set the event risk level
Performance Type
Select the performance value to use as the basis for event occurrence
Threshold
Set the threshold value to compare with the collected performance value
Comparison Method
Select the comparison method to determine event occurrence
Individual Item
Specify individual performance items as event conditions
Prefix
Add a prefix to the event message
Statistics
Set the statistical method to apply to the collected performance value
Continuous Occurrence Count
Set the number of consecutive monitoring values that meet the event occurrence conditions
Event Occurrence Notification Time
Set the time zone for event policy settings
Table. Adding an Event Policy Template - Performance Item
Set the notification target and method when an event occurs.
Item
Description
Add
Add a new notification target
Delete
Delete the selected notification target
Notification Target/Group
Displays the list of notification targets
Event Risk Level
Displays the event risk level to be notified
Notification Method
Displays the notification method
Table. Adding an Event Policy Template - Notification Target Settings
Note
Only account members and users registered in the account's address book can be added as notification targets.
Multiple targets can be selected at the same time.
Click the Confirm button. The event policy template will be added, and a toast popup message will be displayed.
Modifying and Deleting an Event Policy Template
To modify or delete an event policy template, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, click the Event Policy Template button. The Event Policy Template page will be displayed.
On the Event Policy Template page, enter the search conditions for the service you want to check in the search area, and then click the Search button.
Click the More button at the top right of the template you want to modify or delete, and then click Modify or Delete.
Modify: The template modification popup window will open. Modify the template and click the Confirm button.
Delete: Click the Confirm button in the confirmation popup window. The template will be deleted, and a toast popup message will be displayed.
Sharing an Event Policy Template
To share an event policy template, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Settings. The Event Settings page will be displayed.
On the Event Settings page, click the Event Policy Template button. The Event Policy Template page will be displayed.
On the Event Policy Template page, enter the search conditions for the service you want to check in the search area, and then click the Search button.
Click the More > Share button at the top right of the template you want to share.
Select the user to share with and click the > button. The selected user will be added to the shared target.
Click the Confirm button. The template will be shared, and a toast popup message will be displayed.
Event Filtering
You can filter event notifications for a specific period. During the event filtering period, events will occur, but notifications will not be delivered.
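The behavior above, where the event still occurs during the filtering period but its notification is suppressed, can be sketched like this. Times are plain integers (e.g. epoch seconds) for illustration.

```python
def handle_event(event_time, filter_start, filter_end):
    """Return (occurred, notify): the event is always recorded, but
    notification is suppressed inside the filtering period."""
    occurred = True
    notify = not (filter_start <= event_time <= filter_end)
    return occurred, notify

print(handle_event(150, 100, 200))  # (True, False): recorded, not notified
print(handle_event(250, 100, 200))  # (True, True):  recorded and notified
```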
To check the event filtering list, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Filtering. The Event Filtering page will be displayed.
Item
Description
Filtering Timeline
Displays the registered filtering timeline by date
Filtering List
Displays the registered filtering information and action status in a list
Add
Adds a new event filtering
Delete
Deletes the selected event filtering
Search Area
Searches for event filtering or monitoring targets
Table. Event Filtering List
Note
The filtering timeline chart is displayed based on the time zone set for the logged-in user’s account.
Adding Event Filtering
To add event filtering, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Filtering. The Event Filtering page will be displayed.
On the Event Filtering page, click the Add button. The Add Event Filtering popup window will open.
In the Add Event Filtering popup window, enter the filtering information.
Item
Description
Event Filtering
Enter the name of the event filtering
Usage
Set whether to use the event filtering
Time Zone
Set the time zone for the event filtering
Repeat Type
Set the repeat type of the event filtering
Period
Set the period for the event filtering
Event Filtering Target
Select the service type and monitoring target to apply the event filtering
Table. Adding Event Filtering
Click the Confirm button. The event filtering will be added, and a toast popup message will be displayed.
Note
Use the event filtering modification procedure to change whether an event filtering is in use.
Modifying Event Filtering
To modify event filtering, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Filtering. The Event Filtering page will be displayed.
On the Event Filtering page, click the name of the filtering you want to modify. The Event Filtering Details popup window will open.
In the Event Filtering Details popup window, click the Modify button. The Modify Event Filtering popup window will open.
In the Modify Event Filtering popup window, enter the modified contents and click the Confirm button. The event filtering will be modified, and a toast popup message will be displayed.
Deleting Event Filtering
To delete event filtering, follow these steps:
Click Cloud Monitoring Console > Event Management > Event Filtering. The Event Filtering page will be displayed.
On the Event Filtering page, select the event filtering you want to delete and click the Delete button. The event filtering will be deleted, and a toast popup message will be displayed.
Multiple event filterings can be selected at the same time.
Managing Notification Groups
You can manage notification targets as a group when an event occurs. Notification Groups can be used to efficiently manage notification targets and easily set up notifications.
To check the Notification Groups, follow these steps:
Click Cloud Monitoring Console > Event Management > Notification Groups. The Notification Groups page will be displayed.
On the Notification Groups page, you can check and manage the notification groups.
Item
Description
Add Notification Group
Adds a new notification group
Notification Group
Displays all notification groups created by the user
Detailed Search
Searches for notification groups by name
Keyword Search
Searches for notification groups, user names, creation dates, and last modification dates
Note
Notification Groups are only valid within the account, so they can only be composed of users with access permissions to the account. Users who have been deleted from the access permissions are automatically excluded from the address book.
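The automatic exclusion described in the note above, dropping users who no longer have access permissions from a group, amounts to a simple filter. The structures below are assumptions for the example.

```python
def prune_group(group_members, authorized_users):
    """Keep only members who still have access permissions to the account."""
    allowed = set(authorized_users)
    return [m for m in group_members if m in allowed]

members = ["alice", "bob", "carol"]
authorized = ["alice", "carol"]          # bob lost account access
print(prune_group(members, authorized))  # ['alice', 'carol']
```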
To delete a notification group, select the group to delete and click the Delete button.
Multiple notification groups can be selected at the same time.
Click the Confirm button. The notification group will be deleted, and a toast popup message will be displayed.
10.3.2.5 - Using Custom Dashboards
A custom dashboard is a user-defined dashboard that allows users to select and arrange widgets according to their preferences. Users can customize the monitoring information and share the created custom dashboard with other users. The following content is covered in Using Custom Dashboards.
Note
Custom dashboards are created separately from the Account dashboard and can display monitoring information from multiple Accounts at once.
Getting Started with Custom Dashboards
Users can create a custom dashboard and add desired widgets to view monitoring information.
Creating a Custom Dashboard
To create a custom dashboard, follow these steps:
Click Custom Dashboard Management in the top-right menu. The Custom Dashboard Management page will be displayed.
Click Add Dashboard. The Add Dashboard popup window will open.
Enter the name of the dashboard to be created and click the Save button.
The created custom dashboard will be displayed in the My Dashboards list.
Adding Widgets
Custom dashboards provide various types of widgets, such as performance statistics, comparison charts, and event lists. Users can add widgets to customize their dashboard according to their monitoring needs.
Note
Created widgets can be modified, copied, or deleted. For more information, see Managing Custom Widgets.
To add a widget, follow these steps:
Click Custom Dashboard Management in the top-right menu. The Custom Dashboard Management page will be displayed.
Select the custom dashboard to add a widget from the My Dashboards list.
Click the + button or Add Widget button in the top-right corner of the dashboard. The Add Widget popup window will open.
Select the widget to add to the dashboard from the Add Widget popup window.
When a widget is selected, detailed settings and previews will be displayed.
For descriptions and setup methods of each widget, see Custom Widgets.
Click the Confirm button.
Note
Widgets are added to the dashboard with a default size.
Custom Widgets
The following types of widgets can be added to a custom dashboard:
Widget Name
Description
Title Box
Displays a title box on the custom dashboard.
Event Status
Displays the status of occurred events.
Monitoring Status
Displays the number of monitoring targets and their status.
Top 5 Performance
Displays the top 5 monitoring targets with the highest performance usage rates.
Event Map
Displays the number of events occurred by service and risk level.
Event History
Displays the number of events occurred by date and risk level.
Time Series Graph
Displays the performance of a selected monitoring target as a time series graph.
Status Indicator
Displays the statistical values and risk levels of monitoring targets.
Instance Map
Displays the performance values of monitoring targets with different color densities.
Table. Custom Dashboard Widget Types
Title Box
Displays a title box on the custom dashboard.
Up to 10 title boxes can be created.
Multiple title boxes can be added at the same time.
Item
Description
Title
Enter the text to be displayed on the title box.
Add
Adds a new text box.
Delete
Deletes the corresponding text box.
Table. Custom Dashboard Title Box
Event Status
Displays the status of occurred events.
All occurred events can be displayed, or only active events can be displayed.
Item
Description
Widget Name
Enter the name of the widget.
Query Range
Select the range of events to be displayed on the widget.
Table. Event Status
Monitoring Status
Displays the number of monitoring targets and their status.
Item
Description
Widget Name
Enter the name of the widget.
Table. Monitoring Status
Top 5 Performance
Displays the top 5 monitoring targets with the highest performance usage rates.
Item
Description
Widget Name
Enter the name of the widget.
Service
Select the service to check performance.
Performance Item
Select the performance item to display.
Table. Top 5 Performance
Event Map
Displays the number of events occurred by service and risk level.
Item
Description
Widget Name
Enter the name of the widget.
Table. Event Map
Event History
Displays the number of events occurred by date and risk level.
Item
Description
Widget Name
Enter the name of the widget.
Table. Event History
Time Series Graph
Displays the performance of a selected monitoring target as a time series graph.
The time series graph can be changed using the dashboard’s query period setting feature.
When the mouse cursor is placed over the graph, the time and target performance values can be checked.
Item
Description
Widget Name
Enter the name of the widget.
Service
Select the service to check performance.
Monitoring Target
Select the monitoring target to display on the graph.
Performance Item
Select the performance item to display on the graph.
Add Option
Risk intervals can be displayed.
Table. Time Series Graph
Note
The graph type can be changed by clicking the icon in the top-right corner of the preview.
Line graph
Area graph
Cumulative bar graph
Scatter graph
Status Indicator
Displays the statistical values and risk levels of monitoring targets.
When the mouse cursor is placed over the status indicator on the monitoring dashboard, detailed information about the item can be checked.
Item
Description
Widget Name
Enter the name of the widget.
Service
Select the service to check performance.
Monitoring Target
Select the monitoring target to display on the graph.
Performance Item
Select the performance item to display on the graph.
Statistics
Select the statistical method to display the performance values of the monitoring target.
Add Option
Risk intervals can be displayed.
Table. Status Indicator
Instance Map
Displays the performance values of monitoring targets with different color densities.
When the mouse cursor is placed over each heatmap, detailed information about the item can be checked.
Item
Description
Widget Name
Enter the name of the widget.
Service
Select the service to check performance.
Monitoring Target
Select the monitoring target to display on the graph.
Performance Item
Select the performance item to display on the graph.
Table. Instance Map
Viewing Custom Dashboards
To view a custom dashboard, follow these steps:
Click Custom Dashboard Management in the top-right menu. The Custom Dashboard Management page will be displayed.
Select the custom dashboard to view from the My Dashboards list.
Item
Description
Dashboard List
Displays the list of custom dashboards. The list can be clicked to change the dashboard to be viewed.
Dashboard Name
Displays the name of the user-defined dashboard.
Dashboard Settings
Date/Time: Displays the reference time for analysis information.
Refresh: Refreshes to the current time.
Stop/Start: Turns the automatic refresh feature on or off.
Settings: Allows setting the data query period or changing the automatic refresh cycle. (See Setting Query Periods)
Add Widget
Adds a new widget to the dashboard.
Edit Dashboard
Allows editing the currently set custom dashboard.
Modify Dashboard: Modifies the name of the currently selected dashboard.
Copy Dashboard: Copies the currently selected dashboard and creates a new custom dashboard with the same widgets.
Delete Dashboard: Deletes the currently selected dashboard.
Share Dashboard: Shares the dashboard with specific users so they can view it. For more information, see Sharing Custom Dashboards.
Custom Widgets
Displays the widgets that make up the dashboard.
The position and size of widgets can be changed, or they can be modified or deleted. For more information, see Managing Custom Widgets.
Graphic widgets can be downloaded as image files.
Table. Custom Dashboard Information
Note
The star icon next to the dashboard name can be clicked to add the dashboard to favorites. Favorited dashboards are displayed at the top of the dashboard list.
Downloading Widgets
Graphic widgets can be downloaded as image files (*.png). When the mouse cursor is placed over a graph widget, a download button will be displayed in the top-right corner. Clicking the download button will download the widget as an image file.
Sharing Custom Dashboards
Custom dashboards can be shared with other users so they can view the dashboard.
Note
Shared dashboards will remain shared even if the user is later removed from the current Account.
To share a custom dashboard, follow these steps:
Click Custom Dashboard Management in the top-right menu. The Custom Dashboard Management page will be displayed.
Select the custom dashboard to share from the My Dashboards list.
Click the Share button next to the dashboard name.
Enter the user ID or email address of the user to share the dashboard with and click the Share button.
Click the Confirm button.
Managing Custom Dashboards
You can modify, copy, or delete custom dashboards.
Click Custom Dashboard Management in the top-right menu. The Custom Dashboard Management page will be displayed.
Select the custom dashboard to manage from the My Dashboards list.
Click the More button on the top right of the dashboard, then select the desired command.
Edit Dashboard: Modify the dashboard name.
Copy Dashboard: Copy the dashboard to create a new dashboard.
Share Dashboard: Share the dashboard with other users.
Delete Dashboard: Delete the dashboard.
Managing Custom Widgets
You can change the position and size of widgets or modify and copy them.
Changing Widget Position
You can change the position of a widget by clicking on its name and dragging it.
Changing Widget Size
To change the size of a widget, follow these steps:
Place the mouse cursor over the widget. The Resize button appears at the bottom right of the widget.
Click the Resize button and drag it to adjust the size as needed.
Modifying, Copying, and Deleting Widgets
To modify, copy, or delete a widget, follow these steps:
Place the mouse cursor over the widget. The More button appears at the top right of the widget.
Click the More button, then click the desired command.
Edit Widget: Modify the widget’s chart settings.
Copy Widget: Copy the widget to create a new widget with the same content.
Delete Widget: Delete the widget.
10.3.2.6 - Managing Agents
An agent is a module that collects performance values, logs, and Windows events from the monitoring target. To use the monitoring function, users must check the installation status of the agent and operate and manage it.
Note
If IP access control is set for the monitoring target, agent management cannot be used. If agent management is not available, check the IP access control setting status of the selected monitoring target.
The agent management function uses the sudo command, so the sudo package must be installed in advance.
Agent Management Overview
There are performance collection agents, log collection agents, and Windows event log collection agents.
Agents must be installed manually by the user on the monitoring target according to their needs.
Managing Agents
Managing Performance Agents
To install and manage performance agents, follow these steps:
Click Cloud Monitoring Console > Performance Analysis. The Performance Analysis page will be displayed.
On the Performance Analysis page, select the monitoring target and click the Details button. The Monitoring Target Details popup window opens.
In the Monitoring Target Details popup window, click the Agent tab.
Click the Performance button on the Agent tab.
Click the Copy icon to the right of the installation command to copy the command.
Paste the copied command into the monitoring target resource.
Run the copied command on the monitoring target resource.
Note
The command uses the sudo command, so the sudo package must be installed.
Item
Description
Installation
Downloads and runs the script file required for agent installation.
Start
Runs the agent start command.
Stop
Runs the agent stop command.
Delete
Runs the agent delete command.
Update
Downloads and runs the script file required for agent update.
Table. Managing Performance Agents
Note
To check the agent service status, use the following method:
Linux: $ sudo systemctl status metricbeat
Windows: Task Manager → service → metricbeat → Status(Running)
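The same status-check logic applies to each agent type. A minimal sketch of mapping agent types to their service names (the names metricbeat, filebeat, and winlogbeat come from the status-check notes in this guide; the mapping itself is illustrative):

```python
# Sketch: build the Linux status-check command for each agent type.
# Service names are taken from this guide's status-check notes; the event
# agent (winlogbeat) is Windows-only and is checked via Task Manager instead.
AGENT_SERVICES = {
    "performance": "metricbeat",
    "log": "filebeat",
    "event": "winlogbeat",
}

def status_command(agent_type: str) -> str:
    """Return the systemd command that checks the agent's service status."""
    return f"sudo systemctl status {AGENT_SERVICES[agent_type]}"

print(status_command("performance"))  # sudo systemctl status metricbeat
```
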
Managing Log Agents
To install and manage log agents, follow these steps:
Click Cloud Monitoring Console > Performance Analysis. The Performance Analysis page will be displayed.
On the Performance Analysis page, select the monitoring target and click the Details button. The Monitoring Target Details popup window opens.
In the Monitoring Target Details popup window, click the Agent tab.
Click the Log button.
Click the Copy icon to the right of the installation command to copy the command.
Paste the copied command into the monitoring target resource.
Run the copied command on the monitoring target resource.
Note
The command uses the sudo command, so the sudo package must be installed.
Item
Description
Installation
Downloads and runs the script file required for agent installation.
Start
Runs the agent start command.
Stop
Runs the agent stop command.
Delete
Runs the agent delete command.
Update
Downloads and runs the script file required for agent update.
Table. Managing Log Agents
Note
To check the agent service status, use the following method:
Linux: $ sudo systemctl status filebeat
Windows: Task Manager → service → filebeat → Status(Running)
To add logs to be monitored, select the log addition action, enter the log name and log path, and click the Generate Command button. Run the generated command on the monitoring target resource.
Managing Event Agents
To install and manage event agents, follow these steps:
Click Cloud Monitoring Console > Performance Analysis. The Performance Analysis page will be displayed.
On the Performance Analysis page, select the monitoring target and click the Details button. The Monitoring Target Details popup window opens.
In the Monitoring Target Details popup window, click the Agent tab.
Click the Event button.
Click the Copy icon to the right of the installation command to copy the command.
Paste the copied command into the monitoring target resource.
Run the copied command on the monitoring target resource.
Note
The event agent is available only for Windows instances.
Item
Description
Installation
Downloads and runs the script file required for agent installation.
Start
Runs the agent start command.
Stop
Runs the agent stop command.
Delete
Runs the agent delete command.
Update
Downloads and runs the script file required for agent update.
Table. Managing Event Agents
Note
To check the agent service status, use the following method:
Windows: Task Manager → service → winlogbeat → Status(Running)
Note
Agent commands are provided regardless of the instance status of the Virtual Server (or Bare Metal Server).
10.3.2.7 - Appendix A. Monitoring Targets by Service
Compute type
Virtual Server
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
OS
Agent, Agentless
1m
Log
OS
Agent
When a log occurs
Status
OS
Agentless
1m
Table. Virtual Server Monitoring Information
Note
If the server type of a Virtual Server is changed, performance metric data may not be collected normally for a short time. Normal performance metrics will be collected from the next collection cycle (1 minute).
GPU Server
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
OS
Agent, Agentless
1m
Log
OS
Agent
When a log occurs
Status
OS
Agentless
1m
Table. GPU Server Monitoring Information
Bare Metal Server
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
OS
Agent
1m
Log
OS
Agent
When a log occurs
Status
OS
N/A
-
Table. Bare Metal Server Monitoring Information
Multi-node GPU Cluster [Cluster Fabric]
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
OS
Agent
1m
Log
OS
Agent
When a log occurs
Status
OS
N/A
-
Table. Multi-node GPU Cluster [Cluster Fabric] Monitoring Information
Multi-node GPU Cluster [Node]
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
OS
Agent
1m
Log
OS
Agent
When a log occurs
Status
OS
N/A
-
Table. Multi-node GPU Cluster [Node] Monitoring Information
Storage type
All Storage type services have the same monitoring target, collection method, and collection cycle.
File Storage
Object Storage
Block Storage(BM)
Block Storage(VM)
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Storage
Agentless
1m
Log
Storage
N/A
-
Status
Storage
Agentless
1m
Table. Storage type monitoring information
Database type
All Database type services have the same monitoring target, collection method, and collection cycle.
PostgreSQL(DBaaS)
MariaDB(DBaaS)
MySQL(DBaaS)
Microsoft SQL Server
EPAS
CacheStore(DBaaS)
Redis
Valkey
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Database Process, OS
Agent
1m
Log
Database Process, OS
Agent
When a log occurs
Status
Database Process
Agent
1m
OS
Agentless
1m
Table. Database type monitoring information
Data Analytics type
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Data Analytics Process, OS
Agent
1m
Log
Data Analytics Process, OS
Agent
When a log occurs
Status
Data Analytics Process
Agent
1m
OS
Agentless
1m
Table. Data Analytics type monitoring information
Container type
Kubernetes Engine
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Cluster, Namespace, Node, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, Pod
Agentless
5m
Log
Cluster, Namespace, Node, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, Pod
Agentless
When a log occurs
Status
Cluster, Namespace, Node, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, Pod
Agentless
5m
Table. Kubernetes Engine monitoring information
Container Registry
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Container Registry
Agentless
5m
Log
Container Registry
Agentless
When a log occurs
Status
Container Registry
Agentless
5m
Table. Container Registry Monitoring Information
Networking type
VPC
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Internet Gateway
Agentless
5m
Log
Internet Gateway
N/A
-
Status
Internet Gateway
N/A
-
Table. Internet Gateway Monitoring Information
Caution
Performance monitoring is available only after an Internet Gateway has been created.
Load Balancer(OLD)
Load Balancer(OLD)
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Load Balancer
Agentless
5m
Log
Load Balancer
N/A
-
Status
Load Balancer
Agentless
5m
Table. Load Balancer Monitoring Information
Load Balancer Listener(OLD)
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Load Balancer Listener
Agentless
5m
Log
Load Balancer Listener
N/A
-
Status
Load Balancer Listener
Agentless
5m
Table. Load Balancer Listener Monitoring Information
Load Balancer
Load Balancer
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Load Balancer
Agentless
5m
Log
Load Balancer
N/A
-
Status
Load Balancer
Agentless
5m
Table. Load Balancer Monitoring Information
Load Balancer Listener
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Load Balancer Listener
Agentless
5m
Log
Load Balancer Listener
N/A
-
Status
Load Balancer Listener
Agentless
5m
Table. Load Balancer Listener Monitoring Information
Load Balancer Server Group
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Load Balancer Server Group
Agentless
5m
Log
Load Balancer Server Group
N/A
-
Status
Load Balancer Server Group
Agentless
5m
Table. Load Balancer Server Group Monitoring Information
Direct Connect
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Direct Connect
Agentless
5m
Log
Direct Connect
N/A
-
Status
Direct Connect
N/A
-
Table. Direct Connect Monitoring Information
Cloud WAN
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Cloud WAN
Agentless
10m
Log
Cloud WAN
N/A
-
Status
Cloud WAN
Agentless
10m
Table. Cloud WAN Monitoring Information
Global CDN
Category
Monitoring Target
Collection Method
Collection Cycle
Performance
Global CDN
Agentless
5m
Log
Global CDN
N/A
-
Status
Global CDN
Agentless
5m
Table. Global CDN Monitoring Information
10.3.2.8 - Appendix B. Performance Items by Service
Compute Type
Virtual Server
Agentless (Basic Metrics)
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Memory
Memory Total [Basic]
bytes
1m
Total memory in bytes
Memory
Memory Used [Basic]
bytes
1m
Currently used memory in bytes
Memory
Memory Swap In [Basic]
bytes
1m
Memory swapped in from disk, in bytes
Memory
Memory Swap Out [Basic]
bytes
1m
Memory swapped out to disk, in bytes
Memory
Memory Free [Basic]
bytes
1m
Unused memory in bytes
Disk
Disk Read Bytes [Basic]
bytes
1m
Read bytes
Disk
Disk Read Requests [Basic]
cnt
1m
Number of read requests
Disk
Disk Write Bytes [Basic]
bytes
1m
Write bytes
Disk
Disk Write Requests [Basic]
cnt
1m
Number of write requests
CPU
CPU Usage [Basic]
%
1m
Average system CPU usage over 1 minute
State
Instance State [Basic]
state
1m
Instance state
Network
Network In Bytes [Basic]
bytes
1m
Received bytes
Network
Network In Dropped [Basic]
cnt
1m
Dropped received packets
Network
Network In Packets [Basic]
cnt
1m
Number of received packets
Network
Network Out Bytes [Basic]
bytes
1m
Sent bytes
Network
Network Out Dropped [Basic]
cnt
1m
Dropped sent packets
Network
Network Out Packets [Basic]
cnt
1m
Number of sent packets
Table. Virtual Server (Agentless) Performance Items
Note
For Windows OS, the Balloon Driver or the performance monitoring agent must be installed for memory performance metrics to be provided.
Agent (Detailed Metrics)
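Several of the detailed metrics below are *_delta values, i.e. differences between successive samples of a cumulative counter. A minimal sketch of that derivation, using hypothetical counter samples:

```python
# Sketch: derive per-interval delta values from cumulative counter samples,
# as done for metrics such as system.diskio.read.bytes_delta.
# The sample values below are hypothetical.
def deltas(samples: list[int]) -> list[int]:
    """Difference between each pair of successive counter samples."""
    return [b - a for a, b in zip(samples, samples[1:])]

# Cumulative bytes read, sampled once per 1m collection cycle:
read_bytes = [10_000, 12_500, 12_500, 20_000]
print(deltas(read_bytes))  # [2500, 0, 7500]
```
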
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
CPU
Core Usage [IO Wait]
%
1m
CPU time spent in wait state (disk wait)
CPU
Core Usage [System]
%
1m
CPU time spent in kernel space
CPU
Core Usage [User]
%
1m
CPU time spent in user space
CPU
CPU Cores
cnt
1m
Number of CPU cores on the host. Unnormalized CPU usage percentages can reach 100% multiplied by the number of cores; percentages normalized per core have a maximum of 100%.
CPU
CPU Usage [Active]
%
1m
CPU time used, excluding idle and IOWait states (using all 4 cores at 100%: 400%)
CPU
CPU Usage [Idle]
%
1m
CPU time spent in idle state
CPU
CPU Usage [IO Wait]
%
1m
CPU time spent in wait state (disk wait)
CPU
CPU Usage [System]
%
1m
CPU time used by the kernel (using all 4 cores at 100%: 400%)
CPU
CPU Usage [User]
%
1m
CPU time used by the user (using all 4 cores at 100%: 400%)
CPU
CPU Usage/Core [Active]
%
1m
CPU time used, excluding idle and IOWait states (normalized by the number of cores, using all 4 cores at 100%: 100%)
CPU
CPU Usage/Core [Idle]
%
1m
CPU time spent in idle state
CPU
CPU Usage/Core [IO Wait]
%
1m
CPU time spent in wait state (disk wait)
CPU
CPU Usage/Core [System]
%
1m
CPU time used by the kernel (normalized by the number of cores, using all 4 cores at 100%: 100%)
CPU
CPU Usage/Core [User]
%
1m
CPU time used by the user (normalized by the number of cores, using all 4 cores at 100%: 100%)
Disk
Disk CPU Usage [IO Request]
%
1m
CPU time spent executing I/O requests to the device (device bandwidth utilization). If this value is close to 100%, the device is saturated.
Disk
Disk Queue Size [Avg]
num
1m
Average queue length of requests executed on the device
Disk
Disk Read Bytes
bytes
1m
Bytes read from the device per second
Disk
Disk Read Bytes [Delta Avg]
bytes
1m
Average of system.diskio.read.bytes_delta for individual disks
Disk
Disk Read Bytes [Delta Max]
bytes
1m
Maximum of system.diskio.read.bytes_delta for individual disks
Disk
Disk Read Bytes [Delta Min]
bytes
1m
Minimum of system.diskio.read.bytes_delta for individual disks
Disk
Disk Read Bytes [Delta Sum]
bytes
1m
Sum of system.diskio.read.bytes_delta for individual disks
Disk
Disk Read Bytes [Delta]
bytes
1m
Delta of system.diskio.read.bytes for individual disks
Disk
Disk Read Bytes [Success]
bytes
1m
Total bytes read successfully. For Linux, it is assumed that the sector size is 512 and the value is the number of sectors read multiplied by 512
Disk
Disk Read Requests
cnt
1m
Number of read requests to the disk device per second
Disk
Disk Read Requests [Delta Avg]
cnt
1m
Average of system.diskio.read.count_delta for individual disks
Disk
Disk Read Requests [Delta Max]
cnt
1m
Maximum of system.diskio.read.count_delta for individual disks
Disk
Disk Read Requests [Delta Min]
cnt
1m
Minimum of system.diskio.read.count_delta for individual disks
Disk
Disk Read Requests [Delta Sum]
cnt
1m
Sum of system.diskio.read.count_delta for individual disks
Disk
Disk Read Requests [Success Delta]
cnt
1m
Delta of system.diskio.read.count for individual disks
Disk
Disk Read Requests [Success]
cnt
1m
Total number of successful read completions
Disk
Disk Request Size [Avg]
num
1m
Average size of requests executed on the device (in sectors)
Disk
Disk Service Time [Avg]
ms
1m
Average service time of I/O requests executed on the device (in milliseconds)
Disk
Disk Wait Time [Avg]
ms
1m
Average time spent waiting for I/O requests to be executed on the device
Disk
Disk Wait Time [Read]
ms
1m
Average disk read wait time
Disk
Disk Wait Time [Write]
ms
1m
Average disk write wait time
Disk
Disk Write Bytes [Delta Avg]
bytes
1m
Average of system.diskio.write.bytes_delta for individual disks
Disk
Disk Write Bytes [Delta Max]
bytes
1m
Maximum of system.diskio.write.bytes_delta for individual disks
Disk
Disk Write Bytes [Delta Min]
bytes
1m
Minimum of system.diskio.write.bytes_delta for individual disks
Disk
Disk Write Bytes [Delta Sum]
bytes
1m
Sum of system.diskio.write.bytes_delta for individual disks
Disk
Disk Write Bytes [Delta]
bytes
1m
Delta of system.diskio.write.bytes for individual disks
Disk
Disk Write Bytes [Success]
bytes
1m
Total bytes written successfully. For Linux, it is assumed that the sector size is 512 and the value is the number of sectors written multiplied by 512
Disk
Disk Write Requests
cnt
1m
Number of write requests to the disk device per second
Disk
Disk Write Requests [Delta Avg]
cnt
1m
Average of system.diskio.write.count_delta for individual disks
Disk
Disk Write Requests [Delta Max]
cnt
1m
Maximum of system.diskio.write.count_delta for individual disks
Disk
Disk Write Requests [Delta Min]
cnt
1m
Minimum of system.diskio.write.count_delta for individual disks
Disk
Disk Write Requests [Delta Sum]
cnt
1m
Sum of system.diskio.write.count_delta for individual disks
Disk
Disk Write Requests [Success Delta]
cnt
1m
Delta of system.diskio.write.count for individual disks
Disk
Disk Write Requests [Success]
cnt
1m
Total number of successful writes
Disk
Disk Write Bytes
bytes
1m
Bytes written to the device per second
FileSystem
Filesystem Hang Check
state
1m
Filesystem (local/NFS) hang check (normal: 1, abnormal: 0)
FileSystem
Filesystem Nodes
cnt
1m
Total number of file nodes in the filesystem
FileSystem
Filesystem Nodes [Free]
cnt
1m
Total number of available file nodes in the filesystem
FileSystem
Filesystem Size [Available]
bytes
1m
Disk space available for non-privileged users (in bytes)
FileSystem
Filesystem Size [Free]
bytes
1m
Available disk space (in bytes)
FileSystem
Filesystem Size [Total]
bytes
1m
Total disk space (in bytes)
FileSystem
Filesystem Usage
%
1m
Percentage of used disk space
FileSystem
Filesystem Usage [Avg]
%
1m
Average of filesystem.used.pct for individual filesystems
FileSystem
Filesystem Usage [Inode]
%
1m
Inode usage rate
FileSystem
Filesystem Usage [Max]
%
1m
Maximum of filesystem.used.pct for individual filesystems
FileSystem
Filesystem Usage [Min]
%
1m
Minimum of filesystem.used.pct for individual filesystems
FileSystem
Filesystem Usage [Total]
%
1m
-
FileSystem
Filesystem Used
bytes
1m
Used disk space (in bytes)
FileSystem
Filesystem Used [Inode]
bytes
1m
Inode usage
Memory
Memory Free
bytes
1m
Total available memory (in bytes), excluding system cache and buffer memory (see system.memory.actual.free).
Memory
Memory Free [Actual]
bytes
1m
Actual available memory (in bytes). The calculation method varies depending on the OS. For Linux, it is the value of MemAvailable in /proc/meminfo, or the calculated value using available memory and cache/buffer memory if /proc/meminfo is not available. For OSX, it is the sum of available memory and inactive memory. For Windows, it is the same value as system.memory.free.
Memory
Memory Free [Swap]
bytes
1m
Available swap memory
Memory
Memory Total
bytes
1m
Total memory
Memory
Memory Total [Swap]
bytes
1m
Total swap memory
Memory
Memory Usage
%
1m
Percentage of used memory
Memory
Memory Usage [Actual]
%
1m
Percentage of actually used memory
Memory
Memory Usage [Cache Swap]
%
1m
Cache swap usage rate
Memory
Memory Usage [Swap]
%
1m
Percentage of used swap memory
Memory
Memory Used
bytes
1m
Used memory
Memory
Memory Used [Actual]
bytes
1m
Actually used memory (in bytes). The value of total memory minus used memory. The available memory is calculated differently depending on the OS (see system.actual.free).
Memory
Memory Used [Swap]
bytes
1m
Used swap memory
Network
Collisions
cnt
1m
Network collisions
Network
Network In Bytes
bytes
1m
Received bytes
Network
Network In Bytes [Delta Avg]
bytes
1m
Average of system.network.in.bytes_delta for individual networks
Network
Network In Bytes [Delta Max]
bytes
1m
Maximum of system.network.in.bytes_delta for individual networks
Network
Network In Bytes [Delta Min]
bytes
1m
Minimum of system.network.in.bytes_delta for individual networks
Network
Network In Bytes [Delta Sum]
bytes
1m
Sum of system.network.in.bytes_delta for individual networks
Network
Network In Bytes [Delta]
bytes
1m
Delta of received bytes
Network
Network In Dropped
cnt
1m
Dropped received packets
Network
Network In Errors
cnt
1m
Number of receive errors
Network
Network In Packets
cnt
1m
Number of received packets
Network
Network In Packets [Delta Avg]
cnt
1m
Average of system.network.in.packets_delta for individual networks
Network
Network In Packets [Delta Max]
cnt
1m
Maximum of system.network.in.packets_delta for individual networks
Network
Network In Packets [Delta Min]
cnt
1m
Minimum of system.network.in.packets_delta for individual networks
Network
Network In Packets [Delta Sum]
cnt
1m
Sum of system.network.in.packets_delta for individual networks
Network
Network In Packets [Delta]
cnt
1m
Delta of received packets
Network
Network Out Bytes
bytes
1m
Sent bytes
Network
Network Out Bytes [Delta Avg]
bytes
1m
Average of system.network.out.bytes_delta for individual networks
Network
Network Out Bytes [Delta Max]
bytes
1m
Maximum of system.network.out.bytes_delta for individual networks
Network
Network Out Bytes [Delta Min]
bytes
1m
Minimum of system.network.out.bytes_delta for individual networks
Network
Network Out Bytes [Delta Sum]
bytes
1m
Sum of system.network.out.bytes_delta for individual networks
Network
Network Out Bytes [Delta]
bytes
1m
Delta of sent bytes
Network
Network Out Dropped
cnt
1m
Dropped sent packets. This value is not reported by the OS, so it is always 0 on Darwin and BSD.
Network
Network Out Errors
cnt
1m
Number of transmit errors
Network
Network Out Packets
cnt
1m
Number of sent packets
Network
Network Out Packets [Delta Avg]
cnt
1m
Average of system.network.out.packets_delta for individual networks
Network
Network Out Packets [Delta Max]
cnt
1m
Maximum of system.network.out.packets_delta for individual networks
Network
Network Out Packets [Delta Min]
cnt
1m
Minimum of system.network.out.packets_delta for individual networks
Network
Network Out Packets [Delta Sum]
cnt
1m
Sum of system.network.out.packets_delta for individual networks
Network
Network Out Packets [Delta]
cnt
1m
Delta of sent packets
Network
Open Connections [TCP]
cnt
1m
Number of open TCP connections
Network
Open Connections [UDP]
cnt
1m
Number of open UDP connections
Network
Port Usage
%
1m
Port usage rate
Network
SYN Sent Sockets
cnt
1m
Number of sockets in the SYN_SENT state (when connecting to a remote host)
Process
Kernel PID Max
cnt
1m
Value of kernel.pid_max
Process
Kernel Thread Max
cnt
1m
Value of kernel.threads-max
Process
Process CPU Usage
%
1m
Percentage of CPU time consumed by the process since the last update. This value is similar to the %CPU value displayed by the top command on Unix systems.
Process
Process CPU Usage/Core
%
1m
Percentage of CPU time used by the process since the last event. This value is normalized by the number of cores and ranges from 0 to 100%.
Process
Process Memory Usage
%
1m
Percentage of main memory (RAM) used by the process
Process
Process Memory Used
bytes
1m
Resident Set size. The amount of memory used by the process in RAM. On Windows, this is the current working set size.
Process
Process PID
PID
1m
Process PID
Process
Process PPID
PID
1m
Parent process PID
Process
Processes [Dead]
cnt
1m
Number of dead processes
Process
Processes [Idle]
cnt
1m
Number of idle processes
Process
Processes [Running]
cnt
1m
Number of running processes
Process
Processes [Sleeping]
cnt
1m
Number of sleeping processes
Process
Processes [Stopped]
cnt
1m
Number of stopped processes
Process
Processes [Total]
cnt
1m
Total number of processes
Process
Processes [Unknown]
cnt
1m
Number of processes with unknown or unsearchable status
Process
Processes [Zombie]
cnt
1m
Number of zombie processes
Process
Running Process Usage
%
1m
Process usage rate
Process
Running Processes
cnt
1m
Number of running processes
Process
Running Thread Usage
%
1m
Thread usage rate
Process
Running Threads
cnt
1m
Total number of threads running in running processes
System
Context Switches
cnt
1m
Number of context switches (per second)
System
Load/Core [1 min]
cnt
1m
Load over the last 1 minute, normalized by the number of cores
System
Load/Core [15 min]
cnt
1m
Load over the last 15 minutes, normalized by the number of cores
System
Load/Core [5 min]
cnt
1m
Load over the last 5 minutes, normalized by the number of cores
System
Multipaths [Active]
cnt
1m
Number of active paths for external storage connections
System
Multipaths [Failed]
cnt
1m
Number of failed paths for external storage connections
System
Multipaths [Faulty]
cnt
1m
Number of faulty paths for external storage connections
System
NTP Offset
num
1m
Measured offset (time difference between the NTP server and the local environment) of the last sample
System
Run Queue Length
num
1m
Length of the run queue
System
Uptime
ms
1m
System uptime (in milliseconds)
Windows
Context Switches
cnt
1m
Number of CPU context switches (per second)
Windows
Disk Read Bytes [Sec]
cnt
1m
Number of bytes read from the Windows logical disk per second
Windows
Disk Read Time [Avg]
sec
1m
Average time spent reading data (in seconds)
Windows
Disk Transfer Time [Avg]
sec
1m
Average disk wait time
Windows
Disk Usage
%
1m
Disk usage rate
Windows
Disk Write Bytes [Sec]
cnt
1m
Number of bytes written to the Windows logical disk per second
Windows
Disk Write Time [Avg]
sec
1m
Average time spent writing data (in seconds)
Windows
Pagingfile Usage
%
1m
Paging file usage rate
Windows
Pool Used [Non Paged]
bytes
1m
Non-paged pool usage of kernel memory
Windows
Pool Used [Paged]
bytes
1m
Paged pool usage of kernel memory
Windows
Process [Running]
cnt
1m
Number of currently running processes
Windows
Threads [Running]
cnt
1m
Number of currently running threads
Windows
Threads [Waiting]
cnt
1m
Number of threads waiting for processor time
Table. Virtual Server (Agent) Performance Items
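The [Delta] and [Delta Avg/Max/Min/Sum] items in the table above are derived from cumulative OS counters sampled once per collection cycle (1m): the per-device delta is the difference between two successive samples, and the aggregate items summarize those per-device deltas. A minimal sketch of that derivation, using hypothetical interface names and counter values:

```python
# Derive [Delta] and [Delta Avg/Max/Min/Sum] values from two successive
# per-interface cumulative byte counters. The sample data is hypothetical.

def deltas(prev: dict, curr: dict) -> dict:
    """Per-interface delta of a cumulative counter between two samples."""
    return {nic: curr[nic] - prev[nic] for nic in curr if nic in prev}

def aggregate(per_nic: dict) -> dict:
    """Summarize individual-network deltas as the table's Avg/Max/Min/Sum items."""
    values = list(per_nic.values())
    return {
        "sum": sum(values),
        "avg": sum(values) / len(values),
        "max": max(values),
        "min": min(values),
    }

# Two samples of a cumulative sent-bytes counter, one collection cycle apart.
prev = {"eth0": 1_000_000, "eth1": 500_000}
curr = {"eth0": 1_600_000, "eth1": 700_000}

d = deltas(prev, curr)    # per-interface [Delta] values
agg = aggregate(d)        # [Delta Sum/Avg/Max/Min] across interfaces
```

The same pattern applies to the disk items (system.diskio.read.bytes_delta and friends); only the underlying counter changes.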
GPU Server
Agentless (Basic Metrics)
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Memory
Memory Total [Basic]
bytes
1m
Total memory available to the instance, in bytes
Memory
Memory Used [Basic]
bytes
1m
Currently used memory in bytes
Memory
Memory Swap In [Basic]
bytes
1m
Swapped-in memory in bytes
Memory
Memory Swap Out [Basic]
bytes
1m
Swapped-out memory in bytes
Memory
Memory Free [Basic]
bytes
1m
Unused memory in bytes
Disk
Disk Read Bytes [Basic]
bytes
1m
Read bytes
Disk
Disk Read Requests [Basic]
cnt
1m
Number of read requests
Disk
Disk Write Bytes [Basic]
bytes
1m
Write bytes
Disk
Disk Write Requests [Basic]
cnt
1m
Number of write requests
CPU
CPU Usage [Basic]
%
1m
Average system CPU usage over 1 minute
State
Instance State [Basic]
state
1m
Instance state
Network
Network In Bytes [Basic]
bytes
1m
Received bytes
Network
Network In Dropped [Basic]
cnt
1m
Dropped received packets
Network
Network In Packets [Basic]
cnt
1m
Number of received packets
Network
Network Out Bytes [Basic]
bytes
1m
Sent bytes
Network
Network Out Dropped [Basic]
cnt
1m
Dropped sent packets
Network
Network Out Packets [Basic]
cnt
1m
Number of sent packets
Table. GPU Server (Agentless) Performance Items
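CPU Usage [Basic] above is a 1-minute average rather than an instantaneous reading. A rough sketch of how such a value can be derived from two cumulative busy/total time counters taken one cycle apart (the counter values below are hypothetical, /proc/stat-style jiffies):

```python
# Sketch: deriving a 1-minute average CPU usage (%) from two cumulative
# busy/total time counters sampled one collection cycle apart.
# All counter values are hypothetical.

def cpu_usage_pct(prev_busy: int, prev_total: int,
                  curr_busy: int, curr_total: int) -> float:
    """Busy time as a percentage of total time over the sampling interval."""
    busy = curr_busy - prev_busy
    total = curr_total - prev_total
    return 100.0 * busy / total

# Hypothetical jiffy counters at the start and end of a 1m cycle.
usage = cpu_usage_pct(prev_busy=4_000, prev_total=10_000,
                      curr_busy=4_900, curr_total=16_000)
```

Averaging over the whole cycle smooths out short bursts, which is why a basic metric can look lower than a momentary reading taken inside the guest.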
Agent (Detailed Metrics)
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
GPU
GPU Count
cnt
1m
Number of GPUs
GPU
GPU Memory Usage
%
1m
GPU memory usage rate
GPU
GPU Memory Used
bytes
1m
GPU memory usage
GPU
GPU Temperature
℃
1m
GPU temperature
GPU
GPU Usage
%
1m
Total GPU usage rate (using all 8 GPUs at 100%: 800%)
GPU
GPU Usage [Avg]
%
1m
Average GPU usage rate (%)
GPU
GPU Power Cap
W
1m
Maximum power capacity of the GPU
GPU
GPU Power Usage
W
1m
Current power usage of the GPU
GPU
GPU Memory Usage [Avg]
%
1m
Average GPU memory usage rate
GPU
GPU Count in use
cnt
1m
Number of GPUs in use by jobs running on the node
GPU
Execution State for nvidia-smi
state
1m
Execution result of the nvidia-smi command
CPU
Core Usage [IO Wait]
%
1m
CPU time spent in wait state (disk wait)
CPU
Core Usage [System]
%
1m
CPU time spent in kernel space
CPU
Core Usage [User]
%
1m
CPU time spent in user space
CPU
CPU Cores
cnt
1m
Number of CPU cores on the host. The maximum value for unnormalized rates is 100% * the number of cores; for normalized rates, the maximum is 100%.
CPU
CPU Usage [Active]
%
1m
CPU time used, excluding idle and IOWait states (using all 4 cores at 100%: 400%)
CPU
CPU Usage [Idle]
%
1m
CPU time spent in idle state
CPU
CPU Usage [IO Wait]
%
1m
CPU time spent in wait state (disk wait)
CPU
CPU Usage [System]
%
1m
CPU time used by the kernel (using all 4 cores at 100%: 400%)
CPU
CPU Usage [User]
%
1m
CPU time used by the user (using all 4 cores at 100%: 400%)
CPU
CPU Usage/Core [Active]
%
1m
CPU time used, excluding idle and IOWait states (normalized by the number of cores, using all 4 cores at 100%: 100%)
CPU
CPU Usage/Core [Idle]
%
1m
CPU time spent in idle state
CPU
CPU Usage/Core [IO Wait]
%
1m
CPU time spent in wait state (disk wait)
CPU
CPU Usage/Core [System]
%
1m
CPU time used by the kernel (normalized by the number of cores, using all 4 cores at 100%: 100%)
CPU
CPU Usage/Core [User]
%
1m
CPU time used by the user (normalized by the number of cores, using all 4 cores at 100%: 100%)
Disk
Disk CPU Usage [IO Request]
%
1m
The ratio of CPU time spent executing I/O requests for the device (device bandwidth utilization). If this value is close to 100%, the device is in a saturated state.
Disk
Disk Queue Size [Avg]
num
1m
The average queue length of requests executed for the device.
Disk
Disk Read Bytes
bytes
1m
The number of bytes read from the device per second.
Disk
Disk Read Bytes [Delta Avg]
bytes
1m
The average of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta Max]
bytes
1m
The maximum of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta Min]
bytes
1m
The minimum of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta Sum]
bytes
1m
The sum of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta]
bytes
1m
The delta value of system.diskio.read.bytes for individual disks.
Disk
Disk Read Bytes [Success]
bytes
1m
The total number of bytes read successfully. On Linux, it is assumed that the sector size is 512 and the value is calculated by multiplying the number of sectors read by 512.
Disk
Disk Read Requests
cnt
1m
The number of read requests for the disk device per second.
Disk
Disk Read Requests [Delta Avg]
cnt
1m
The average of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Delta Max]
cnt
1m
The maximum of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Delta Min]
cnt
1m
The minimum of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Delta Sum]
cnt
1m
The sum of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Success Delta]
cnt
1m
The delta value of system.diskio.read.count for individual disks.
Disk
Disk Read Requests [Success]
cnt
1m
The total number of successful read requests.
Disk
Disk Request Size [Avg]
num
1m
The average size of requests executed for the device (in sectors).
Disk
Disk Service Time [Avg]
ms
1m
The average service time for input requests executed for the device (in milliseconds).
Disk
Disk Wait Time [Avg]
ms
1m
The average time spent executing requests for the device.
Disk
Disk Wait Time [Read]
ms
1m
The average disk wait time for read operations.
Disk
Disk Wait Time [Write]
ms
1m
The average disk wait time for write operations.
Disk
Disk Write Bytes [Delta Avg]
bytes
1m
The average of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta Max]
bytes
1m
The maximum of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta Min]
bytes
1m
The minimum of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta Sum]
bytes
1m
The sum of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta]
bytes
1m
The delta value of system.diskio.write.bytes for individual disks.
Disk
Disk Write Bytes [Success]
bytes
1m
The total number of bytes written successfully. On Linux, it is assumed that the sector size is 512 and the value is calculated by multiplying the number of sectors written by 512.
Disk
Disk Write Requests
cnt
1m
The number of write requests for the disk device per second.
Disk
Disk Write Requests [Delta Avg]
cnt
1m
The average of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Delta Max]
cnt
1m
The maximum of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Delta Min]
cnt
1m
The minimum of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Delta Sum]
cnt
1m
The sum of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Success Delta]
cnt
1m
The delta value of system.diskio.write.count for individual disks.
Disk
Disk Write Requests [Success]
cnt
1m
The total number of successful write requests.
Disk
Disk Write Bytes
bytes
1m
The number of bytes written to the device per second.
FileSystem
Filesystem Hang Check
state
1m
Filesystem (local/NFS) hang check (normal: 1, abnormal: 0).
FileSystem
Filesystem Nodes
cnt
1m
The total number of file nodes in the file system.
FileSystem
Filesystem Nodes [Free]
cnt
1m
The total number of available file nodes in the file system.
FileSystem
Filesystem Size [Available]
bytes
1m
The available disk space (in bytes) that can be used by non-privileged users.
FileSystem
Filesystem Size [Free]
bytes
1m
The available disk space (in bytes).
FileSystem
Filesystem Size [Total]
bytes
1m
The total disk space (in bytes).
FileSystem
Filesystem Usage
%
1m
The percentage of used disk space.
FileSystem
Filesystem Usage [Avg]
%
1m
The average of filesystem.used.pct for individual file systems.
FileSystem
Filesystem Usage [Inode]
%
1m
The inode usage rate.
FileSystem
Filesystem Usage [Max]
%
1m
The maximum of filesystem.used.pct for individual file systems.
FileSystem
Filesystem Usage [Min]
%
1m
The minimum of filesystem.used.pct for individual file systems.
FileSystem
Filesystem Usage [Total]
%
1m
-
FileSystem
Filesystem Used
bytes
1m
The used disk space (in bytes).
FileSystem
Filesystem Used [Inode]
bytes
1m
The inode usage.
Memory
Memory Free
bytes
1m
The total available memory (in bytes), excluding memory used by system cache and buffers (see system.memory.actual.free).
Memory
Memory Free [Actual]
bytes
1m
The actual available memory (in bytes), which varies depending on the OS. On Linux, it is calculated using /proc/meminfo, and on OSX, it is the sum of available and inactive memory. On Windows, it is the same as system.memory.free.
Memory
Memory Free [Swap]
bytes
1m
The available swap memory.
Memory
Memory Total
bytes
1m
The total memory.
Memory
Memory Total [Swap]
bytes
1m
The total swap memory.
Memory
Memory Usage
%
1m
The percentage of used memory.
Memory
Memory Usage [Actual]
%
1m
The percentage of actual used memory.
Memory
Memory Usage [Cache Swap]
%
1m
The cache swap usage rate.
Memory
Memory Usage [Swap]
%
1m
The percentage of used swap memory.
Memory
Memory Used
bytes
1m
The used memory.
Memory
Memory Used [Actual]
bytes
1m
The actual used memory (in bytes), calculated as the total memory minus the actual available memory. The available memory varies depending on the OS (see system.memory.actual.free).
Memory
Memory Used [Swap]
bytes
1m
The used swap memory.
Network
Collisions
cnt
1m
Network collisions.
Network
Network In Bytes
bytes
1m
The number of bytes received.
Network
Network In Bytes [Delta Avg]
bytes
1m
The average of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta Max]
bytes
1m
The maximum of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta Min]
bytes
1m
The minimum of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta Sum]
bytes
1m
The sum of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta]
bytes
1m
The delta value of the number of bytes received.
Network
Network In Dropped
cnt
1m
The number of packets dropped during reception.
Network
Network In Errors
cnt
1m
The number of errors during reception.
Network
Network In Packets
cnt
1m
The number of packets received.
Network
Network In Packets [Delta Avg]
cnt
1m
The average of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta Max]
cnt
1m
The maximum of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta Min]
cnt
1m
The minimum of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta Sum]
cnt
1m
The sum of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta]
cnt
1m
The delta value of the number of packets received.
Network
Network Out Bytes
bytes
1m
The number of bytes sent.
Network
Network Out Bytes [Delta Avg]
bytes
1m
The average of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta Max]
bytes
1m
The maximum of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta Min]
bytes
1m
The minimum of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta Sum]
bytes
1m
The sum of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta]
bytes
1m
The delta value of the number of bytes sent.
Network
Network Out Dropped
cnt
1m
The number of packets dropped during transmission. This value is not reported by the OS and is always 0 on Darwin and BSD.
Network
Network Out Errors
cnt
1m
The number of errors during transmission.
Network
Network Out Packets
cnt
1m
The number of packets sent.
Network
Network Out Packets [Delta Avg]
cnt
1m
The average of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta Max]
cnt
1m
The maximum of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta Min]
cnt
1m
The minimum of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta Sum]
cnt
1m
The sum of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta]
cnt
1m
The delta value of the number of packets sent.
Network
Open Connections [TCP]
cnt
1m
The number of open TCP connections.
Network
Open Connections [UDP]
cnt
1m
The number of open UDP connections.
Network
Port Usage
%
1m
The port usage rate.
Network
SYN Sent Sockets
cnt
1m
The number of sockets in the SYN_SENT state (when connecting to a remote host).
Process
Kernel PID Max
cnt
1m
The kernel.pid_max value.
Process
Kernel Thread Max
cnt
1m
The kernel.threads-max value.
Process
Process CPU Usage
%
1m
The percentage of CPU time consumed by the process since the last update. This value is similar to the %CPU value displayed by the top command on Unix systems.
Process
Process CPU Usage/Core
%
1m
The percentage of CPU time used by the process since the last event, normalized by the number of cores (0-100%).
Process
Process Memory Usage
%
1m
The percentage of main memory (RAM) used by the process.
Process
Process Memory Used
bytes
1m
The resident set size, which is the amount of memory used by the process in RAM. On Windows, it is the current working set size.
Process
Process PID
PID
1m
The process ID.
Process
Process PPID
PID
1m
The parent process ID.
Process
Processes [Dead]
cnt
1m
The number of dead processes.
Process
Processes [Idle]
cnt
1m
The number of idle processes.
Process
Processes [Running]
cnt
1m
The number of running processes.
Process
Processes [Sleeping]
cnt
1m
The number of sleeping processes.
Process
Processes [Stopped]
cnt
1m
The number of stopped processes.
Process
Processes [Total]
cnt
1m
The total number of processes.
Process
Processes [Unknown]
cnt
1m
The number of processes with unknown or unsearchable states.
Process
Processes [Zombie]
cnt
1m
The number of zombie processes.
Process
Running Process Usage
%
1m
The process usage rate.
Process
Running Processes
cnt
1m
The number of running processes.
Process
Running Thread Usage
%
1m
The thread usage rate.
Process
Running Threads
cnt
1m
The total number of threads running in running processes.
System
Context Switches
cnt
1m
The number of context switches per second.
System
Load/Core [1 min]
cnt
1m
The load average over the last 1 minute, normalized by the number of cores.
System
Load/Core [15 min]
cnt
1m
The load average over the last 15 minutes, normalized by the number of cores.
System
Load/Core [5 min]
cnt
1m
The load average over the last 5 minutes, normalized by the number of cores.
System
Multipaths [Active]
cnt
1m
The number of active paths for external storage connections.
System
Multipaths [Failed]
cnt
1m
The number of failed paths for external storage connections.
System
Multipaths [Faulty]
cnt
1m
The number of faulty paths for external storage connections.
System
NTP Offset
num
1m
The measured offset (time difference between the NTP server and the local environment) of the last sample.
System
Run Queue Length
num
1m
The length of the run queue.
System
Uptime
ms
1m
The OS uptime (in milliseconds).
Windows
Context Switches
cnt
1m
The number of CPU context switches per second.
Windows
Disk Read Bytes [Sec]
cnt
1m
The number of bytes read from the Windows logical disk per second.
Windows
Disk Read Time [Avg]
sec
1m
The average time spent reading data (in seconds).
Windows
Disk Transfer Time [Avg]
sec
1m
The average disk wait time.
Windows
Disk Usage
%
1m
The disk usage rate.
Windows
Disk Write Bytes [Sec]
cnt
1m
The number of bytes written to the Windows logical disk per second.
Windows
Disk Write Time [Avg]
sec
1m
The average time spent writing data (in seconds).
Windows
Pagingfile Usage
%
1m
The paging file usage rate.
Windows
Pool Used [Non Paged]
bytes
1m
The Nonpaged Pool usage of kernel memory.
Windows
Pool Used [Paged]
bytes
1m
The Paged Pool usage of kernel memory.
Windows
Process [Running]
cnt
1m
The number of currently running processes.
Windows
Threads [Running]
cnt
1m
The number of currently running threads.
Windows
Threads [Waiting]
cnt
1m
The number of threads waiting for processor time.
Table. Performance Items for GPU Server (Agent)
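The table above distinguishes GPU Usage (unnormalized: all 8 GPUs at 100% reports 800%) from GPU Usage [Avg]. A minimal sketch of that aggregation; in practice the per-GPU percentages would come from nvidia-smi (e.g., `nvidia-smi --query-gpu=utilization.gpu --format=csv`), but the values below are hypothetical:

```python
# Sketch: aggregate per-GPU utilization into the unnormalized "GPU Usage"
# (can exceed 100%) and the normalized "GPU Usage [Avg]".
# The per-GPU percentages are hypothetical stand-ins for nvidia-smi output.

per_gpu_util = [100.0, 100.0, 75.0, 25.0]  # % busy per device (4 GPUs)

gpu_usage_total = sum(per_gpu_util)                  # unnormalized total
gpu_usage_avg = gpu_usage_total / len(per_gpu_util)  # normalized average
```

The same total-versus-normalized split appears in the CPU items (CPU Usage [...] versus CPU Usage/Core [...]).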
Bare Metal Server
Agent (Detailed Metrics)
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
CPU
Core Usage [IO Wait]
%
1m
The ratio of CPU time spent in a waiting state (disk wait).
CPU
Core Usage [System]
%
1m
The percentage of CPU time used by the kernel.
CPU
Core Usage [User]
%
1m
The percentage of CPU time used in the user space.
CPU
CPU Cores
cnt
1m
The number of CPU cores on the host. The maximum value for unnormalized rates is 100% * the number of cores. The maximum value for normalized rates is 100%.
CPU
CPU Usage [Active]
%
1m
The percentage of CPU time used, excluding idle and IOWait states (all 4 cores using 100%: 400%).
CPU
CPU Usage [Idle]
%
1m
The ratio of CPU time spent in an idle state.
CPU
CPU Usage [IO Wait]
%
1m
The ratio of CPU time spent in a waiting state (disk wait).
CPU
CPU Usage [System]
%
1m
The percentage of CPU time used by the kernel (all 4 cores using 100%: 400%).
CPU
CPU Usage [User]
%
1m
The percentage of CPU time used in the user area (all 4 cores using 100%: 400%).
CPU
CPU Usage/Core [Active]
%
1m
The percentage of CPU time used, excluding idle and IOWait states (normalized by the number of cores, all 4 cores using 100%: 100%).
CPU
CPU Usage/Core [Idle]
%
1m
The ratio of CPU time spent in an idle state.
CPU
CPU Usage/Core [IO Wait]
%
1m
The ratio of CPU time spent in a waiting state (disk wait).
CPU
CPU Usage/Core [System]
%
1m
The percentage of CPU time used by the kernel (normalized by the number of cores, all 4 cores using 100%: 100%).
CPU
CPU Usage/Core [User]
%
1m
The percentage of CPU time used in the user area (normalized by the number of cores, all 4 cores using 100%: 100%).
Disk
Disk CPU Usage [IO Request]
%
1m
The ratio of CPU time spent executing I/O requests for the device (device bandwidth utilization). If this value is close to 100%, the device is in a saturated state.
Disk
Disk Queue Size [Avg]
num
1m
The average queue length of requests executed for the device.
Disk
Disk Read Bytes
bytes
1m
The number of bytes read from the device per second.
Disk
Disk Read Bytes [Delta Avg]
bytes
1m
The average of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta Max]
bytes
1m
The maximum of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta Min]
bytes
1m
The minimum of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta Sum]
bytes
1m
The sum of system.diskio.read.bytes_delta for individual disks.
Disk
Disk Read Bytes [Delta]
bytes
1m
The delta value of system.diskio.read.bytes for individual disks.
Disk
Disk Read Bytes [Success]
bytes
1m
The total number of bytes read successfully. On Linux, it is assumed that the sector size is 512 and the value is calculated by multiplying the number of sectors read by 512.
Disk
Disk Read Requests
cnt
1m
The number of read requests for the disk device per second.
Disk
Disk Read Requests [Delta Avg]
cnt
1m
The average of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Delta Max]
cnt
1m
The maximum of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Delta Min]
cnt
1m
The minimum of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Delta Sum]
cnt
1m
The sum of system.diskio.read.count_delta for individual disks.
Disk
Disk Read Requests [Success Delta]
cnt
1m
The delta value of system.diskio.read.count for individual disks.
Disk
Disk Read Requests [Success]
cnt
1m
The total number of successful read requests.
Disk
Disk Request Size [Avg]
num
1m
The average size of requests executed for the device (in sectors).
Disk
Disk Service Time [Avg]
ms
1m
The average service time for input requests executed for the device (in milliseconds).
Disk
Disk Wait Time [Avg]
ms
1m
The average time spent executing requests for the device.
Disk
Disk Wait Time [Read]
ms
1m
The average disk wait time for read operations.
Disk
Disk Wait Time [Write]
ms
1m
The average disk wait time for write operations.
Disk
Disk Write Bytes [Delta Avg]
bytes
1m
The average of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta Max]
bytes
1m
The maximum of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta Min]
bytes
1m
The minimum of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta Sum]
bytes
1m
The sum of system.diskio.write.bytes_delta for individual disks.
Disk
Disk Write Bytes [Delta]
bytes
1m
The delta value of system.diskio.write.bytes for individual disks.
Disk
Disk Write Bytes [Success]
bytes
1m
The total number of bytes written successfully. On Linux, it is assumed that the sector size is 512 and the value is calculated by multiplying the number of sectors written by 512.
Disk
Disk Write Requests
cnt
1m
The number of write requests for the disk device per second.
Disk
Disk Write Requests [Delta Avg]
cnt
1m
The average of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Delta Max]
cnt
1m
The maximum of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Delta Min]
cnt
1m
The minimum of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Delta Sum]
cnt
1m
The sum of system.diskio.write.count_delta for individual disks.
Disk
Disk Write Requests [Success Delta]
cnt
1m
The delta value of system.diskio.write.count for individual disks.
Disk
Disk Write Requests [Success]
cnt
1m
The total number of successful write requests.
Disk
Disk Write Bytes
bytes
1m
The number of bytes written to the device per second.
FileSystem
Filesystem Hang Check
state
1m
Filesystem (local/NFS) hang check (normal: 1, abnormal: 0).
FileSystem
Filesystem Nodes
cnt
1m
The total number of file nodes in the file system.
FileSystem
Filesystem Nodes [Free]
cnt
1m
The total number of available file nodes in the file system.
FileSystem
Filesystem Size [Available]
bytes
1m
The available disk space (in bytes) that can be used by non-privileged users.
FileSystem
Filesystem Size [Free]
bytes
1m
The available disk space (in bytes).
FileSystem
Filesystem Size [Total]
bytes
1m
The total disk space (in bytes).
FileSystem
Filesystem Usage
%
1m
The percentage of used disk space.
FileSystem
Filesystem Usage [Avg]
%
1m
The average of filesystem.used.pct for individual file systems.
FileSystem
Filesystem Usage [Inode]
%
1m
The inode usage rate.
FileSystem
Filesystem Usage [Max]
%
1m
The maximum of filesystem.used.pct for individual file systems.
FileSystem
Filesystem Usage [Min]
%
1m
The minimum of filesystem.used.pct for individual file systems.
FileSystem
Filesystem Usage [Total]
%
1m
-
FileSystem
Filesystem Used
bytes
1m
The used disk space (in bytes).
FileSystem
Filesystem Used [Inode]
bytes
1m
The inode usage.
Memory
Memory Free
bytes
1m
The total available memory (in bytes), excluding memory used by system cache and buffers (see system.memory.actual.free).
Memory
Memory Free [Actual]
bytes
1m
The actual available memory (in bytes), which varies depending on the OS. On Linux, it is calculated using /proc/meminfo, and on OSX, it is the sum of available and inactive memory. On Windows, it is the same as system.memory.free.
Memory
Memory Free [Swap]
bytes
1m
The available swap memory.
Memory
Memory Total
bytes
1m
The total memory.
Memory
Memory Total [Swap]
bytes
1m
The total swap memory.
Memory
Memory Usage
%
1m
The percentage of used memory.
Memory
Memory Usage [Actual]
%
1m
The percentage of actual used memory.
Memory
Memory Usage [Cache Swap]
%
1m
The cache swap usage rate.
Memory
Memory Usage [Swap]
%
1m
The percentage of used swap memory.
Memory
Memory Used
bytes
1m
The used memory.
Memory
Memory Used [Actual]
bytes
1m
The actual used memory (in bytes), calculated as the total memory minus the actual available memory. The available memory varies depending on the OS (see system.memory.actual.free).
Memory
Memory Used [Swap]
bytes
1m
The used swap memory.
Network
Collisions
cnt
1m
Network collisions.
Network
Network In Bytes
bytes
1m
The number of bytes received.
Network
Network In Bytes [Delta Avg]
bytes
1m
The average of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta Max]
bytes
1m
The maximum of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta Min]
bytes
1m
The minimum of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta Sum]
bytes
1m
The sum of system.network.in.bytes_delta for individual networks.
Network
Network In Bytes [Delta]
bytes
1m
The delta value of the number of bytes received.
Network
Network In Dropped
cnt
1m
The number of packets dropped during reception.
Network
Network In Errors
cnt
1m
The number of errors during reception.
Network
Network In Packets
cnt
1m
The number of packets received.
Network
Network In Packets [Delta Avg]
cnt
1m
The average of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta Max]
cnt
1m
The maximum of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta Min]
cnt
1m
The minimum of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta Sum]
cnt
1m
The sum of system.network.in.packets_delta for individual networks.
Network
Network In Packets [Delta]
cnt
1m
The delta value of the number of packets received.
Network
Network Out Bytes
bytes
1m
The number of bytes sent.
Network
Network Out Bytes [Delta Avg]
bytes
1m
The average of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta Max]
bytes
1m
The maximum of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta Min]
bytes
1m
The minimum of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta Sum]
bytes
1m
The sum of system.network.out.bytes_delta for individual networks.
Network
Network Out Bytes [Delta]
bytes
1m
The delta value of the number of bytes sent.
Network
Network Out Dropped
cnt
1m
The number of packets dropped during transmission. This value is not reported by the OS and is always 0 on Darwin and BSD.
Network
Network Out Errors
cnt
1m
The number of errors during transmission.
Network
Network Out Packets
cnt
1m
The number of packets sent.
Network
Network Out Packets [Delta Avg]
cnt
1m
The average of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta Max]
cnt
1m
The maximum of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta Min]
cnt
1m
The minimum of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta Sum]
cnt
1m
The sum of system.network.out.packets_delta for individual networks.
Network
Network Out Packets [Delta]
cnt
1m
The delta value of the number of packets sent.
Network
Open Connections [TCP]
cnt
1m
The number of open TCP connections.
Network
Open Connections [UDP]
cnt
1m
The number of open UDP connections.
Network
Port Usage
%
1m
The port usage rate.
Network
SYN Sent Sockets
cnt
1m
The number of sockets in the SYN_SENT state (when connecting to a remote host).
Process
Kernel PID Max
cnt
1m
The kernel.pid_max value.
Process
Kernel Thread Max
cnt
1m
The kernel.threads-max value.
Process
Process CPU Usage
%
1m
The percentage of CPU time consumed by the process since the last update. This value is similar to the %CPU value displayed by the top command on Unix systems.
Process
Process CPU Usage/Core
%
1m
The percentage of CPU time used by the process since the last event, normalized by the number of cores (0-100%).
Process
Process Memory Usage
%
1m
The percentage of main memory (RAM) used by the process.
Process
Process Memory Used
bytes
1m
The resident set size, which is the amount of memory used by the process in RAM. On Windows, it is the current working set size.
Process
Process PID
PID
1m
The process ID.
Process
Process PPID
PID
1m
The parent process ID.
Process
Processes [Dead]
cnt
1m
The number of dead processes.
Process
Processes [Idle]
cnt
1m
The number of idle processes.
Process
Processes [Running]
cnt
1m
The number of running processes.
Process
Processes [Sleeping]
cnt
1m
The number of sleeping processes.
Process
Processes [Stopped]
cnt
1m
The number of stopped processes.
Process
Processes [Total]
cnt
1m
The total number of processes.
Process
Processes [Unknown]
cnt
1m
The number of processes with unknown or unsearchable states.
Process
Processes [Zombie]
cnt
1m
The number of zombie processes.
Process
Running Process Usage
%
1m
The process usage rate.
Process
Running Processes
cnt
1m
The number of running processes.
Process
Running Thread Usage
%
1m
The thread usage rate.
Process
Running Threads
cnt
1m
The total number of threads running in running processes.
System
Context Switches
cnt
1m
The number of context switches per second.
System
Load/Core [1 min]
cnt
1m
The load average over the last 1 minute, normalized by the number of cores.
System
Load/Core [15 min]
cnt
1m
The load average over the last 15 minutes, normalized by the number of cores.
System
Load/Core [5 min]
cnt
1m
The load average over the last 5 minutes, normalized by the number of cores.
System
Multipaths [Active]
cnt
1m
The number of active paths for external storage connections.
System
Multipaths [Failed]
cnt
1m
The number of failed paths for external storage connections.
System
Multipaths [Faulty]
cnt
1m
The number of faulty paths for external storage connections.
System
NTP Offset
num
1m
The measured offset (time difference between the NTP server and the local environment) of the last sample.
System
Run Queue Length
num
1m
The length of the run queue.
System
Uptime
ms
1m
The OS uptime (in milliseconds).
Windows
Context Switches
cnt
1m
The number of CPU context switches per second.
Windows
Disk Read Bytes [Sec]
cnt
1m
The number of bytes read from the Windows logical disk per second.
Windows
Disk Read Time [Avg]
sec
1m
The average time spent reading data (in seconds).
Windows
Disk Transfer Time [Avg]
sec
1m
The average disk wait time.
Windows
Disk Usage
%
1m
The disk usage rate.
Windows
Disk Write Bytes [Sec]
cnt
1m
The number of bytes written to the Windows logical disk per second.
Windows
Disk Write Time [Avg]
sec
1m
The average time spent writing data (in seconds).
Windows
Pagingfile Usage
%
1m
The paging file usage rate.
Windows
Pool Used [Non Paged]
bytes
1m
The Nonpaged Pool usage of kernel memory.
Windows
Pool Used [Paged]
bytes
1m
The Paged Pool usage of kernel memory.
Windows
Process [Running]
cnt
1m
The number of currently running processes.
Windows
Threads [Running]
cnt
1m
The number of currently running threads.
Windows
Threads [Waiting]
cnt
1m
The number of threads waiting for processor time.
Table. Performance Items for Bare Metal Server
Note
To monitor the performance of Bare Metal Server, please install the Agent. Refer to Agent Management for the installation guide.
Storage type
File Storage
Performance item group name
Performance item name
Collection unit
Collection cycle
Description
Volume
Instance State
state
1m
File storage volume status
Volume
IOPS [Other]
iops
1m
IOPS (other)
Volume
IOPS [Read]
iops
1m
IOPS (read)
Volume
IOPS [Total]
iops
1m
IOPS (total)
Volume
IOPS [Write]
iops
1m
IOPS (write)
Volume
Latency Time [Other]
usec
1m
Latency time (other)
Volume
Latency Time [Read]
usec
1m
Latency time (read)
Volume
Latency Time [Total]
usec
1m
Latency time (total)
Volume
Latency Time [Write]
usec
1m
Latency time (write)
Volume
Throughput [Other]
MB/s
1m
Throughput (other)
Volume
Throughput [Read]
MB/s
1m
Throughput (read)
Volume
Throughput [Total]
MB/s
1m
Throughput (total)
Volume
Throughput [Write]
MB/s
1m
Throughput (write)
Volume
Volume Total
bytes
1m
Total bytes
Volume
Volume Usage
%
1m
Usage rate
Volume
Volume Used
bytes
1m
Used amount
Table. File Storage performance items
Object Storage
Performance item group name
Performance item name
Collection unit
Collection cycle
Description
Request
Requests [Delete]
cnt
1m
Number of HTTP DELETE requests executed on objects in the bucket
Request
Requests [Download Avg]
bytes
1m
Average download usage per bucket
Request
Requests [Get]
cnt
1m
Number of HTTP GET requests executed on objects in the bucket
Request
Requests [Head]
cnt
1m
Number of HTTP HEAD requests executed on objects in the bucket
Request
Requests [List]
cnt
1m
Number of LIST requests executed on objects in the bucket
Request
Requests [Post]
cnt
1m
Number of HTTP POST requests executed on objects in the bucket
Request
Requests [Put]
cnt
1m
Number of HTTP PUT requests executed on objects in the bucket
Request
Requests [Total]
cnt
1m
Total number of HTTP requests executed on the bucket
Request
Requests [Upload Avg]
bytes
1m
Average upload usage per bucket
Usage
Bucket Used
bytes
1m
Amount of data stored in the bucket (in bytes)
Usage
Objects
cnt
1m
Number of objects stored in the bucket
Table. Object Storage performance items
Database type
PostgreSQL(DBaaS)
Performance item group name
Performance item name
Collection unit
Collection cycle
Description
Activelock
Active Locks
cnt
1m
Number of active locks
Activelock
Active Locks [Access Exclusive]
cnt
1m
Number of access exclusive locks
Activelock
Active Locks [Access Share]
cnt
1m
Number of access share locks
Activelock
Active Locks [Total]
cnt
1m
Total number of active locks
Activelock
Exclusive Locks
cnt
1m
Number of exclusive locks
Activelock
Row Exclusive Locks
cnt
1m
Number of row exclusive locks
Activelock
Row Share Locks
cnt
1m
Number of row share locks
Activelock
Share Locks
cnt
1m
Number of share locks
Activelock
Share Row Exclusive Locks
cnt
1m
Number of share row exclusive locks
Activelock
Share Update Exclusive Locks
cnt
1m
Number of share update exclusive locks
ActiveSession
Active Sessions
cnt
1m
Number of active sessions
ActiveSession
Active Sessions [Total]
cnt
1m
Total number of active sessions
ActiveSession
Idle In Transaction Sessions
cnt
1m
Number of idle in transaction sessions
ActiveSession
Idle In Transaction Sessions [Total]
cnt
1m
Total number of idle in transaction sessions
ActiveSession
Idle Sessions
cnt
1m
Number of idle sessions
ActiveSession
Idle Sessions [Total]
cnt
1m
Total number of idle sessions
ActiveSession
Waiting Sessions
cnt
1m
Number of waiting sessions
ActiveSession
Waiting Sessions [Total]
cnt
1m
Total number of waiting sessions
Connection
Connection Usage
%
1m
DB connection usage rate
Connection
Connection Usage [Total]
%
1m
Total DB connection usage rate
DB Age
DB Age Max
age
1m
Database age (frozen XID) value
Lock
Wait Locks
cnt
1m
Number of sessions waiting for locks (per DB)
Lock
Wait Locks [Long Total]
cnt
1m
Total number of sessions waiting for locks for more than 300 seconds
Lock
Wait Locks [Long]
cnt
1m
Number of sessions waiting for locks for more than 300 seconds
Lock
Wait Locks [Total]
cnt
1m
Total number of sessions waiting for locks
Long Transaction
Transaction Time Max [Long]
sec
1m
Longest transaction time (in seconds)
Long Transaction
Transaction Time Max Total [Long]
sec
1m
Longest transaction time across all databases (in seconds)
Replica
Apply Lag Time
sec
1m
Apply lag time
Replica
Check No Replication
cnt
1m
Check no replication value
Replica
Check Replication
state
1m
Check replication state value
Slowquery
Slowqueries
cnt
1m
Number of slow queries (more than 5 minutes)
State
Instance State [PID]
PID
1m
Postgres process PID
Tablespace
Tablespace Used
bytes
1m
Tablespace usage
Tablespace
Tablespace Used [Total]
bytes
1m
Total tablespace usage
Tablespace
Tablespace Used Bytes [MB]
bytes
1m
Filesystem directory usage (in MB)
Tablespace
Tablespaces [Total]
cnt
1m
Total number of tablespaces
Table. PostgreSQL(DBaaS) performance items
Note
Refer to Virtual Server performance items for DB instance performance items.
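The ActiveSession and Connection metrics above correspond to session states that PostgreSQL reports per backend (for example via the `pg_stat_activity` view). As a minimal illustrative sketch — the sample state values and the bucketing below are assumptions for illustration, not the actual collector implementation — the counts and the connection usage rate can be derived like this:

```python
from collections import Counter

def session_counts(states):
    """Bucket session state strings into the ActiveSession metric groups."""
    c = Counter(states)
    return {
        "active": c["active"],
        "idle": c["idle"],
        "idle_in_transaction": c["idle in transaction"],
    }

def connection_usage(total_sessions, max_connections):
    """Connection Usage [Total]: share of max_connections in use, as a percent."""
    return round(total_sessions / max_connections * 100, 2)

# Hypothetical sample of per-session states
states = ["active", "idle", "idle", "idle in transaction", "active"]
counts = session_counts(states)                      # 2 active, 2 idle, 1 idle-in-tx
usage = connection_usage(len(states), max_connections=100)  # 5.0 (%)
```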
MariaDB(DBaaS)
Performance item group name
Performance item name
Collection unit
Collection cycle
Description
Activelock
Active Locks
cnt
1m
Number of active locks
Activesession
Active Sessions
cnt
1m
Number of connected threads
Activesession
Connection Usage [Total]
%
1m
DB connection usage rate
Activesession
Connections
cnt
1m
Number of connections
Activesession
Connections [MAX]
cnt
1m
Maximum number of connected threads
Datafile
Binary Log Used [MB]
bytes
1m
Binary log usage (in MB)
Datafile
Data Directory Used [MB]
bytes
1m
Datadir usage (in MB)
Datafile
Open Files
cnt
1m
Number of open files
Datafile
Open Files [MAX]
cnt
1m
Maximum number of open files
Datafile
Open Files Usage
%
1m
Open file usage rate
Datafile
Relay Log Used [MB]
bytes
1m
Relay log usage (in MB)
State
Instance State [PID]
PID
1m
Mariadbd process PID (or mysqld process PID for versions prior to 10.5.2)
State
Safe PID
PID
1m
Mariadbd_safe process PID (or mysqld_safe process PID for versions prior to 10.5.2)
State
Slave Behind Master Seconds
sec
1m
Time difference between master and slave (in seconds)
Tablespace
Tablespace Used
bytes
1m
Tablespace usage
Tablespace
Tablespace Used [Total]
bytes
1m
Total tablespace usage
Transaction
Running Threads
cnt
1m
Number of running threads
Transaction
Slowqueries
cnt
1m
Number of slow queries (more than 10 seconds)
Transaction
Slowqueries [Total]
cnt
1m
Total number of slow queries
Transaction
Transaction Time [Long]
sec
1m
Longest transaction time (in seconds)
Transaction
Wait Locks
cnt
1m
Number of sessions waiting for locks for more than 60 seconds
Table. MariaDB(DBaaS) performance items
Note
Refer to Virtual Server performance items for DB instance performance items.
MySQL(DBaaS)
Performance item group name
Performance item name
Collection unit
Collection cycle
Description
Activelock
Active Locks
cnt
1m
Number of active locks
Activesession
Active Sessions
cnt
1m
Number of connected threads
Activesession
Connection Usage [Total]
%
1m
DB connection usage rate
Activesession
Connections
cnt
1m
Number of connections
Activesession
Connections [MAX]
cnt
1m
Maximum number of connected threads
Datafile
Binary Log Used [MB]
bytes
1m
Binary log usage (in MB)
Datafile
Data Directory Used [MB]
bytes
1m
Datadir usage (in MB)
Datafile
Open Files
cnt
1m
Number of open files
Datafile
Open Files [MAX]
cnt
1m
Maximum number of open files
Datafile
Open Files Usage
%
1m
Open file usage rate
Datafile
Relay Log Used [MB]
bytes
1m
Relay log usage (in MB)
State
Instance State [PID]
PID
1m
Mysqld process PID
State
Safe PID
PID
1m
Safe program PID
State
Slave Behind Master Seconds
sec
1m
Time difference between master and slave (in seconds)
Tablespace
Tablespace Used
bytes
1m
Tablespace usage
Tablespace
Tablespace Used [Total]
bytes
1m
Total tablespace usage
Transaction
Running Threads
cnt
1m
Number of running threads
Transaction
Slowqueries
cnt
1m
Number of slow queries (more than 10 seconds)
Transaction
Slowqueries [Total]
cnt
1m
Total number of slow queries
Transaction
Transaction Time [Long]
sec
1m
Longest transaction time (in seconds)
Transaction
Wait Locks
cnt
1m
Number of sessions waiting for locks for more than 60 seconds
Table. MySQL(DBaaS) performance items
Note
Refer to Virtual Server performance items for DB instance performance items.
CacheStore(DBaaS)
Performance item group name
Performance item name
Collection unit
Collection cycle
Description
CacheStore
Active Defragmentation Keys [Hits]
cnt
1m
Number of keys defragmented
CacheStore
Active Defragmentation Keys [Miss]
cnt
1m
Number of keys skipped during defragmentation
CacheStore
Active Defragmentations [Hits]
cnt
1m
Number of values reassigned during defragmentation
CacheStore
Active Defragmentations [Miss]
cnt
1m
Number of defragmentation processes started and stopped
CacheStore
Allocated Bytes [OS]
bytes
1m
Bytes allocated by CacheStore and recognized by the operating system (resident set size)
CacheStore
Allocated Bytes [Redis]
bytes
1m
Total bytes allocated by CacheStore
CacheStore
AOF Buffer Size
bytes
1m
AOF buffer size
CacheStore
AOF File Size [Current]
bytes
1m
Current AOF file size
CacheStore
AOF File Size [Latest Startup]
bytes
1m
AOF file size at the last startup or rewrite
CacheStore
AOF Rewrite Buffer Size
bytes
1m
AOF rewrite buffer size
CacheStore
AOF Rewrite Current Time
sec
1m
Time spent on the current AOF rewrite process
CacheStore
AOF Rewrite Last Time
sec
1m
Time spent on the last AOF rewrite process
CacheStore
Calls
cnt
1m
Number of commands executed (not rejected)
CacheStore
Calls [Failed]
cnt
1m
Number of failed commands (CacheStore 6.2-rc2)
CacheStore
Calls [Rejected]
cnt
1m
Number of rejected commands (CacheStore 6.2-rc2)
CacheStore
Changes [Last Saved]
cnt
1m
Number of changes since the last dump
CacheStore
Client Output Buffer [MAX]
cnt
1m
Longest output list among current client connections
CacheStore
Client Input Buffer [MAX]
cnt
1m
Largest input buffer among current client connections (CacheStore 5.0)
CacheStore
Clients [Sentinel]
cnt
1m
Number of client connections (sentinel)
CacheStore
Connected Slaves
cnt
1m
Number of connected slaves
CacheStore
Connections [Blocked]
cnt
1m
Number of clients waiting for blocking calls (BLPOP, BRPOP, BRPOPLPUSH)
CacheStore
Connections [Current]
cnt
1m
Number of client connections (excluding slave connections)
CacheStore
Copy On Write Allocated Size [AOF]
bytes
1m
COW allocation size (in bytes) during the last AOF rewrite operation
CacheStore
Copy On Write Allocated Size [RDB]
bytes
1m
COW allocation size (in bytes) during the last RDB save operation
CacheStore
CPU Time [Average]
cnt
1m
Average CPU usage per command execution
CacheStore
CPU Time [Total]
usec
1m
Total CPU time used by these commands
CacheStore
CPU Usage [System Process]
%
1m
System CPU usage by background processes
CacheStore
CPU Usage [System]
%
1m
System CPU usage by the CacheStore server
CacheStore
CPU Usage [User Process]
%
1m
User CPU usage by background processes
CacheStore
CPU Usage [User]
%
1m
User CPU usage by the CacheStore server
CacheStore
Dataset Used
bytes
1m
Dataset size (in bytes)
CacheStore
Disk Used
bytes
1m
Datadir usage
CacheStore
Evicted Keys
cnt
1m
Number of evicted keys due to maxmemory limit
CacheStore
Fsyncs [Delayed]
cnt
1m
Delayed fsync counter
CacheStore
Fsyncs [Pending]
cnt
1m
Number of fsync operations pending in the background I/O queue
CacheStore
Full Resyncs
cnt
1m
Number of full resynchronizations with slaves
CacheStore
Keys [Expired]
cnt
1m
Total number of key expiration events
CacheStore
Keys [Keyspace]
cnt
1m
Number of keys in the keyspace
CacheStore
Latest Fork Duration Time
usec
1m
Time taken by the last fork operation (in microseconds)
CacheStore
Lookup Keys [Hit]
cnt
1m
Number of successful key lookups in the main dictionary
CacheStore
Lookup Keys [Miss]
cnt
1m
Number of failed key lookups in the main dictionary
CacheStore
Lua Engine Memory Used
bytes
1m
Memory used by the Lua engine
CacheStore
Master Last Interaction Time Ago
sec
1m
Time elapsed since the last interaction with the master (in seconds)
CacheStore
Master Last Interaction Time Ago [Sync]
sec
1m
Time elapsed since the last sync interaction with the master (in seconds)
CacheStore
Master Offset
pid
1m
Current replication offset of the server
CacheStore
Master Second Offset
pid
1m
Offset of the replication ID that will be accepted
CacheStore
Master Sync Left Bytes
bytes
1m
Number of bytes remaining to be synchronized
CacheStore
Memory Fragmentation Rate
%
1m
Ratio of used_memory_rss to used_memory
CacheStore
Memory Fragmentation Rate [Allocator]
%
1m
Fragmentation ratio
CacheStore
Memory Fragmentation Used
bytes
1m
Difference between used_memory_rss and used_memory (in bytes)
CacheStore
Memory Fragmentation Used [Allocator]
bytes
1m
Resident bytes
CacheStore
Memory Max Value
bytes
1m
Memory limit
CacheStore
Memory Resident [Allocator]
bytes
1m
Resident memory
CacheStore
Memory RSS Rate [Allocator]
%
1m
Resident ratio
CacheStore
Memory Used [Active]
bytes
1m
Active memory
CacheStore
Memory Used [Allocated]
bytes
1m
Allocated memory
CacheStore
Memory Used [Resident]
bytes
1m
Resident bytes
CacheStore
Network In Bytes [Total]
bytes
1m
Total network input (in bytes)
CacheStore
Network Out Bytes [Total]
bytes
1m
Total network output (in bytes)
CacheStore
Network Read Rate
cnt
1m
Network read rate (in KB/sec)
CacheStore
Network Write Rate
cnt
1m
Network write rate (in KB/sec)
CacheStore
Partial Resync Requests [Accepted]
cnt
1m
Number of accepted partial resynchronization requests
CacheStore
Partial Resync Requests [Denied]
cnt
1m
Number of denied partial resynchronization requests
CacheStore
Peak Memory Consumed
bytes
1m
Maximum memory consumed by CacheStore
CacheStore
Processed Commands
cnt
1m
Number of commands processed per second
CacheStore
Processed Commands [Total]
cnt
1m
Total number of commands processed
CacheStore
Pub/Sub Channels
cnt
1m
Global number of pub/sub channels with client subscriptions
CacheStore
Pub/Sub Patterns
cnt
1m
Global number of pub/sub patterns with client subscriptions
CacheStore
RDB Saved Duration Time [Current]
sec
1m
Time taken by the current RDB save operation (in seconds)
CacheStore
RDB Saved Duration Time [Last]
sec
1m
Time taken by the last RDB save operation (in seconds)
CacheStore
Received Connections [Total]
cnt
1m
Total number of connections received
CacheStore
Rejected Connections [Total]
cnt
1m
Total number of connections rejected
CacheStore
Replication Backlog Active Count
cnt
1m
Replication backlog active flag
CacheStore
Replication Backlog Master Offset
cnt
1m
Master offset of the replication backlog buffer
CacheStore
Replication Backlog Size
bytes
1m
Size of the replication backlog buffer (in bytes)
CacheStore
Replication Backlog Size [Total]
bytes
1m
Total size of the replication backlog buffer (in bytes)
CacheStore
Slave Priority
cnt
1m
Priority of the instance as a failover target
CacheStore
Slave Replication Offset
pid
1m
Replication offset of the slave instance
CacheStore
Slow Operations
cnt
1m
Number of slow operations
CacheStore
Sockets [MIGRATE]
cnt
1m
Number of sockets opened for migration
CacheStore
Tracked Keys [Expiry]
cnt
1m
Number of keys being tracked for expiry (only for writable slaves)
State
Instance State [PID]
PID
1m
PID of the redis-server process
State
Sentinel State [PID]
PID
1m
PID of the sentinel process
Table. CacheStore (DBaaS) Performance Items
Note
Refer to the performance items of the Virtual Server for the performance items of the DB instance.
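Several of the memory metrics above are defined as ratios or differences of fields that a Redis-compatible engine exposes through its INFO command (e.g. `used_memory` and `used_memory_rss`, as the Memory Fragmentation Rate description notes). A hedged sketch — the INFO excerpt below is fabricated for illustration:

```python
def parse_info(text):
    """Parse 'key:value' lines in the format produced by the Redis INFO command."""
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

# Fabricated INFO excerpt for illustration
info = parse_info("used_memory:1000000\nused_memory_rss:1500000")
used = int(info["used_memory"])
rss = int(info["used_memory_rss"])

frag_rate = rss / used * 100   # Memory Fragmentation Rate, as a percent -> 150.0
frag_used = rss - used         # Memory Fragmentation Used, in bytes -> 500000
```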
EPAS
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Activelock
Access Exclusive Locks
cnt
1m
Number of access exclusive locks
Activelock
Access Share Locks
cnt
1m
Number of access share locks
Activelock
Active Locks
cnt
1m
Number of active locks
Activelock
Active Locks [Total]
cnt
1m
Total number of active locks
Activelock
Exclusive Locks
cnt
1m
Number of exclusive locks
Activelock
Row Exclusive Locks
cnt
1m
Number of row exclusive locks
Activelock
Row Share Locks
cnt
1m
Number of row share locks
Activelock
Share Locks
cnt
1m
Number of share locks
Activelock
Share Row Exclusive Locks
cnt
1m
Number of share row exclusive locks
Activelock
Share Update Exclusive Locks
cnt
1m
Number of share update exclusive locks
Activesession
Active Sessions
cnt
1m
Number of active sessions
Activesession
Active Sessions [Total]
cnt
1m
Total number of active sessions
Activesession
Idle In Transaction Sessions
cnt
1m
Number of idle in transaction sessions
Activesession
Idle In Transaction Sessions [Total]
cnt
1m
Total number of idle in transaction sessions
Activesession
Idle Sessions
cnt
1m
Number of idle sessions
Activesession
Idle Sessions [Total]
cnt
1m
Total number of idle sessions
Activesession
Waiting Sessions
cnt
1m
Number of waiting sessions
Activesession
Waiting Sessions [Total]
cnt
1m
Total number of waiting sessions
Connection
Connection Usage
%
1m
DB connection usage rate (%)
Connection
Connection Usage [Total]
%
1m
Total DB connection usage rate (%)
Connection
Connection Usage Per DB
%
1m
DB connection usage rate per DB (%)
DB Age
DB Age Max
age
1m
Database age (frozen XID) value
Lock
Wait Locks
cnt
1m
Number of sessions waiting for locks
Lock
Wait Locks [Long Total]
cnt
1m
Total number of sessions waiting for locks for a long time
Lock
Wait Locks [Long]
cnt
1m
Number of sessions waiting for locks for a long time
Lock
Wait Locks [Total]
cnt
1m
Total number of sessions waiting for locks
Lock
Wait Locks Per DB [Total]
cnt
1m
Total number of sessions waiting for locks per DB
Long Transaction
Transaction Time Max [Long]
sec
1m
Maximum transaction time (in seconds)
Long Transaction
Transaction Time Max Total [Long]
sec
1m
Maximum transaction time (in seconds)
Replica
Apply Lag Time
sec
1m
Apply lag time
Replica
Check No Replication
cnt
1m
Check no replication value
Replica
Check Replication
state
1m
Check replication state value
Slowquery
Slowqueries
cnt
1m
Number of slow queries
State
Instance State [PID]
PID
1m
PID of the edb-postgres process
Tablespace
Tablespace Used Bytes [MB]
bytes
1m
Filesystem directory usage (in MB)
Tablespace
Tablespace [Total]
cnt
1m
Total number of tablespaces
Tablespace
Tablespace Used
bytes
1m
Used tablespace size
Tablespace
Tablespace Used [Total]
bytes
1m
Total used tablespace size
Table. EPAS Performance Items
Microsoft SQL Server
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Activelock
Active Locks
cnt
1m
Number of active locks
Activesession
Active Sessions
cnt
1m
Number of active sessions
Activetransaction
Active Transactions [Total]
cnt
1m
Total number of active transactions
Connection
Connected Users
cnt
1m
Number of users connected to the system
Datafile
Datavolume Size [Free]
bytes
1m
Available space
Datafile
DBFiles [Not Online]
cnt
1m
Number of data files that are not online
Datafile
Tablespace Used
bytes
1m
Used data volume size
Lock
Lock Processes [Blocked]
cnt
1m
Number of SQL processes blocked by other processes
Lock
Lock Waits [Per Second]
cnt
1m
Number of lock waits per second
Slowquery
Blocking Session ID
ID
1m
ID of the session blocking the query
Slowquery
Slowqueries
cnt
1m
Number of slow queries
Slowquery
Slowquery CPU Time
ms
1m
CPU time taken by slow queries
Slowquery
Slowquery Execute Context ID
ID
1m
ID of the execution context of slow queries
Slowquery
Slowquery Memory Usage
bytes
1m
Memory usage of slow queries
Slowquery
Slowquery Session ID
ID
1m
ID of the session executing slow queries
Slowquery
Slowquery Wait Duration Time
ms
1m
Wait duration time of slow queries
State
Instance State [Cluster]
state
1m
State of the MSSQL cluster
State
Instance State [PID]
PID
1m
PID of the sqlservr.exe process
State
Page IO Latch Wait Time
ms
1m
Average wait time for page IO latches
Transaction
Transaction Time [MAX]
cnt
1m
Maximum transaction time
Table. Microsoft SQL Server
Data Analytics type
Event Streams
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Broker
Connections [Zookeeper Client]
cnt
1m
Number of ZooKeeper connections
Broker
Failed [Client Fetch Request]
cnt
1m
Number of failed client fetch requests
Broker
Failed [Produce Request]
cnt
1m
Number of failed produce requests
Broker
Incoming Messages
cnt
1m
Number of incoming messages
Broker
Leader Elections
cnt
1m
Number of leader elections
Broker
Leader Elections [Unclean]
cnt
1m
Number of unclean leader elections
Broker
Log Flushes
cnt
1m
Number of log flushes
Broker
Network In Bytes
bytes
1m
Total network input (in bytes)
Broker
Network Out Bytes
bytes
1m
Total network output (in bytes)
Broker
Rejected Bytes
bytes
1m
Total rejected bytes
Broker
Request Queue Length
cnt
1m
Request queue length
Broker
Zookeeper Sessions [Closed]
cnt
1m
Number of closed ZooKeeper sessions
Broker
Zookeeper Sessions [Expired]
cnt
1m
Number of expired ZooKeeper sessions
Broker
Zookeeper Sessions [Readonly]
cnt
1m
Number of read-only ZooKeeper sessions
Broker
Incoming Messages Rate [Topic]
cnt
1m
Incoming message rate per topic
Broker
Incoming Byte Rate [Second]
bytes
1m
Incoming byte rate per second
Broker
Outgoing Byte Rate [Second]
bytes
1m
Outgoing byte rate per second
Broker
Rejected Byte Rate [Second]
bytes
1m
Rejected byte rate per second
Disk
Disk Used
bytes
1m
Datadir usage
State
AKHQ State [PID]
PID
1m
PID of the akhq process
State
Instance State [PID]
PID
1m
PID of the kafka process
State
Zookeeper State [PID]
PID
1m
PID of the zookeeper process
Table. Event Streams
Search Engine
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Cluster
Shards
cnt
1m
Number of shards in the cluster
Cluster
Shards [Primary]
cnt
1m
Number of primary shards in the cluster
Cluster
Index [Total]
cnt
1m
Total number of indices in the cluster
Cluster
License Expiry Date [ms]
ms
1m
License expiry date (in milliseconds)
Cluster
License Status
state
1m
License status
Cluster
License Type
type
1m
License type
FileSystem
Disk Usage
bytes
1m
Datadir usage
Node
Documents [Deleted]
cnt
1m
Total number of deleted documents
Node
Documents [Existing]
cnt
1m
Total number of existing documents
Node
Filesystem Bytes [Available]
bytes
1m
Available filesystem bytes
Node
Filesystem Bytes [Free]
bytes
1m
Free filesystem bytes
Node
Filesystem Bytes [Total]
bytes
1m
Total filesystem bytes
Node
JVM Heap Used [Init]
bytes
1m
Initial JVM heap usage (in bytes)
Node
JVM Heap Used [MAX]
bytes
1m
Maximum JVM heap usage (in bytes)
Node
JVM Non Heap Used [Init]
bytes
1m
Initial JVM non-heap usage (in bytes)
Node
JVM Non Heap Used [MAX]
bytes
1m
Maximum JVM non-heap usage (in bytes)
Node
Segments
cnt
1m
Total number of segments
Node
Segments Bytes
bytes
1m
Total size of segments (in bytes)
Node
Store Bytes
bytes
1m
Total size of the store (in bytes)
State
Instance State [PID]
PID
1m
PID of the Elasticsearch process
Task
Queue Time
ms
1m
Queue time
Kibana
Kibana State [PID]
PID
1m
PID of the Kibana process
Kibana
Kibana Connections
cnt
1m
Number of connections
Kibana
Kibana Memory Heap Allocated [Limit]
bytes
1m
Maximum allocated heap size (in bytes)
Kibana
Kibana Memory Heap Allocated [Total]
bytes
1m
Total allocated heap size (in bytes)
Kibana
Kibana Memory Heap Used
bytes
1m
Used heap size (in bytes)
Kibana
Kibana Process Uptime
ms
1m
Process uptime
Kibana
Kibana Requests [Disconnected]
cnt
1m
Number of disconnected requests
Kibana
Kibana Requests [Total]
cnt
1m
Total number of requests
Kibana
Kibana Response Time [Avg]
ms
1m
Average response time
Kibana
Kibana Response Time [MAX]
ms
1m
Maximum response time
Table. Search Engine
Container type
Kubernetes Engine
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Cluster
Cluster Namespaces [Active]
cnt
5m
Number of active namespaces
Cluster
Cluster Namespaces [Total]
cnt
5m
Total number of namespaces
Cluster
Cluster Nodes [Ready]
cnt
5m
Number of ready nodes
Cluster
Cluster Nodes [Total]
cnt
5m
Total number of nodes
Cluster
Cluster Pods [Failed]
cnt
5m
Number of failed pods
Cluster
Cluster Pods [Pending]
cnt
5m
Number of pending pods
Cluster
Cluster Pods [Running]
cnt
5m
Number of running pods
Cluster
Cluster Pods [Succeeded]
cnt
5m
Number of succeeded pods
Cluster
Cluster Pods [Unknown]
cnt
5m
Number of unknown pods
Cluster
Instance State
state
5m
Cluster state
Namespace
Namespace Pods [Failed]
cnt
5m
Number of failed pods in the namespace
Namespace
Namespace Pods [Pending]
cnt
5m
Number of pending pods in the namespace
Namespace
Namespace Pods [Running]
cnt
5m
Number of running pods in the namespace
Namespace
Namespace Pods [Succeeded]
cnt
5m
Number of succeeded pods in the namespace
Namespace
Namespace Pods [Unknown]
cnt
5m
Number of unknown pods in the namespace
Namespace
Namespace GPU Clock Frequency
MHz
5m
GPU clock frequency
Namespace
Namespace GPU Memory Usage
%
5m
GPU memory usage
Node
Node CPU Size [Allocatable]
cnt
5m
Allocatable CPU size
Node
Node CPU Size [Capacity]
cnt
5m
CPU capacity
Node
Node CPU Usage
%
5m
CPU usage
Node
Node CPU Usage [Request]
%
5m
CPU request ratio
Node
Node CPU Used
state
5m
CPU utilization
Node
Node Filesystem Usage
%
5m
Filesystem usage
Node
Node Memory Size [Allocatable]
bytes
5m
Allocatable memory size
Node
Node Memory Size [Capacity]
bytes
5m
Memory capacity
Node
Node Memory Usage
%
5m
Memory usage
Node
Node Memory Usage [Request]
%
5m
Memory request ratio
Node
Node Memory Workingset
bytes
5m
Node memory working set
Node
Node Network In Bytes
bytes
5m
Node network RX bytes
Node
Node Network Out Bytes
bytes
5m
Node network TX bytes
Node
Node Network Total Bytes
bytes
5m
Node network total bytes
Node
Node Pods [Failed]
cnt
5m
Number of failed pods in the node
Node
Node Pods [Pending]
cnt
5m
Number of pending pods in the node
Node
Node Pods [Running]
cnt
5m
Number of running pods in the node
Node
Node Pods [Succeeded]
cnt
5m
Number of succeeded pods in the node
Node
Node Pods [Unknown]
cnt
5m
Number of unknown pods in the node
Pod
Pod CPU Usage [Limit]
%
5m
Pod CPU usage limit ratio
Pod
Pod CPU Usage [Request]
%
5m
Pod CPU request ratio
Pod
Pod CPU Usage
mc
5m
Pod CPU usage
Pod
Pod Memory Usage [Limit]
%
5m
Pod memory usage limit ratio
Pod
Pod Memory Usage [Request]
%
5m
Pod memory request ratio
Pod
Pod Memory Usage
bytes
5m
Pod memory usage
Pod
Pod Network In Bytes
bytes
5m
Pod network RX bytes
Pod
Pod Network Out Bytes
bytes
5m
Pod network TX bytes
Pod
Pod Network Total Bytes
bytes
5m
Pod network total bytes
Pod
Pod Restart Containers
cnt
5m
Number of container restarts in the pod
Workload
Workload Pods [Running]
cnt
5m
Number of running pods in the workload
Table. Kubernetes Engine performance items
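The [Request] and [Limit] ratio metrics above express a pod's observed usage relative to its configured resource request or limit. As a hedged sketch of that calculation (the numbers and the handling of an unset request are illustrative assumptions, not the collector's actual logic):

```python
def request_ratio(usage, request):
    """Usage as a percentage of the configured request (e.g. CPU in millicores)."""
    if request == 0:
        return None  # no request configured; the ratio is undefined
    return usage / request * 100

# Hypothetical pod using 250m CPU against a 500m request -> 50.0 (%)
ratio = request_ratio(usage=250, request=500)
```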
Container Registry
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Container Registry
Image Pulls [Denied]
cnt
1m
Number of denied image tag (digest) pulls
Container Registry
Image Pushes [Allowed]
cnt
1m
Number of allowed image tag (digest) pushes
Container Registry
Image Pushes [Denied]
cnt
1m
Number of denied image tag (digest) pushes
Container Registry
Image Scans [Allowed]
cnt
1m
Number of allowed image tag (digest) scans
Container Registry
Image Scans [Denied]
cnt
1m
Number of denied image tag (digest) scans
Container Registry
Image Tags [Deleted]
cnt
1m
Number of deleted image tags (digests)
Container Registry
Images [Created]
cnt
1m
Number of created images
Container Registry
Images [Deleted]
cnt
1m
Number of deleted images
Container Registry
Logins [Allowed]
cnt
1m
Number of allowed registry logins
Container Registry
Logins [Denied]
cnt
1m
Number of denied registry logins
Container Registry
Repositories [Created]
cnt
1m
Number of created repositories
Container Registry
Repositories [Deleted]
cnt
1m
Number of deleted repositories
State
Instance State
state
1m
Status check
Table. Container Registry performance items
Networking Type
Internet Gateway
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Internet Gateway
Network In Total Bytes [Internet Delta]
bytes
5m
Cumulative traffic from Internet Gateway to VPC over the last 5 minutes (Internet). Average traffic in bps = cumulative traffic (bytes) / 300 (seconds) × 8 (bits)
Internet Gateway
Network In Total Bytes [Internet]
bytes
5m
RX bytes total
Internet Gateway
Network Out Total Bytes [Internet Delta]
bytes
5m
Cumulative traffic from VPC to Internet Gateway over the last 5 minutes (Internet). Average traffic in bps = cumulative traffic (bytes) / 300 (seconds) × 8 (bits)
Internet Gateway
Network Out Total Bytes [Internet]
bytes
5m
TX bytes total
Table. Internet Gateway performance items
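The [Delta] metrics above report cumulative bytes per 5-minute collection window, and their descriptions give the conversion to average bits per second. That formula can be expressed directly:

```python
def avg_bps(delta_bytes, window_seconds=300):
    """Average traffic in bits per second for a cumulative byte delta."""
    return delta_bytes / window_seconds * 8

# 750 MB accumulated over one 5-minute window -> 20,000,000.0 bps (20 Mbps)
rate = avg_bps(750_000_000)
```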
Load Balancer (OLD)
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Load Balancer
Current Connection
cnt
5m
Current number of connections
Load Balancer
Total Connection
cnt
5m
Total number of connections
Load Balancer
Total Connection [Delta]
cnt
5m
Total number of connections (delta value)
Load Balancer
Network In Bytes
bytes
5m
In bytes
Load Balancer
Network In Bytes [Delta]
bytes
5m
Cumulative traffic from client to Load Balancer over the last 5 minutes. Average traffic in bps = cumulative traffic (bytes) / 300 (seconds) × 8 (bits)
Load Balancer
Network Out Bytes
bytes
5m
Out bytes
Load Balancer
Network Out Bytes [Delta]
bytes
5m
Cumulative traffic from Load Balancer to client over the last 5 minutes. Average traffic in bps = cumulative traffic (bytes) / 300 (seconds) × 8 (bits)
Load Balancer
Instance State
state
5m
Load Balancer status
Table. Load Balancer performance items
Load Balancer Listener (OLD)
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Listener
Connections [Current]
cnt
5m
Current number of connections
Listener
Connections [Total Delta]
cnt
5m
Total number of connections (delta value)
Listener
Connections [Total]
cnt
5m
Total number of connections
Listener
Instance State
state
5m
LB Listener status
Listener
Network In Bytes
bytes
5m
In bytes
Listener
Network In Bytes [Delta]
bytes
5m
Cumulative traffic from client to Load Balancer over 5 minutes ※ Average traffic bps conversion formula: cumulative traffic (bytes) / 300 (seconds) * 8 (bits)
Listener
Network Out Bytes
bytes
5m
Out bytes
Listener
Network Out Bytes [Delta]
bytes
5m
Cumulative traffic from Load Balancer to client over 5 minutes ※ Average traffic bps conversion formula: cumulative traffic (bytes) / 300 (seconds) * 8 (bits)
Table. Load Balancer Listener performance items
Direct Connect
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Direct Connect
Network In Bytes
bytes
5m
Cumulative traffic from Direct Connect to VPC
Direct Connect
Network In Bytes [Delta]
bytes
5m
Cumulative traffic from Direct Connect to VPC over 5 minutes ※ Average traffic bps conversion formula: cumulative traffic (bytes) / 300 (seconds) * 8 (bits)
Direct Connect
Network Out Bytes
bytes
5m
Cumulative traffic from VPC to Direct Connect
Direct Connect
Network Out Bytes [Delta]
bytes
5m
Cumulative traffic from VPC to Direct Connect over 5 minutes ※ Average traffic bps conversion formula: cumulative traffic (bytes) / 300 (seconds) * 8 (bits)
Table. Direct Connect performance items
Load Balancer
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
State
Instance State
state
5m
LB status
Load Balancer
Current Connection
cnt
5m
Current number of connections
Load Balancer
Total L4 Connection
cnt
5m
Total number of L4 connections
Load Balancer
Total L7 Connection
cnt
5m
Total number of L7 connections
Load Balancer
Total TCP Connection
cnt
5m
Total number of TCP connections
Load Balancer
Total Connection
cnt
5m
Total number of connections
Load Balancer
Bytes processed in forward direction
bytes
5m
Forward network bytes
Load Balancer
Packets processed in forward direction
cnt
5m
Forward network packets
Load Balancer
Bytes processed in reverse direction
bytes
5m
Reverse network bytes
Load Balancer
Packets processed in reverse direction
cnt
5m
Reverse network packets
Load Balancer
Total failure actions
cnt
5m
Total number of failure actions
Load Balancer
Current Request
cnt
5m
Current number of requests
Load Balancer
Current response
cnt
5m
Current number of responses
Load Balancer
Total Request
cnt
5m
Total number of requests
Load Balancer
Total Request Success
cnt
5m
Total number of successful requests
Load Balancer
Peak Connection
cnt
5m
Peak number of connections
Load Balancer
Current Connection Rate
%
5m
Current SSL connection rate
Load Balancer
Last response time
ms
5m
Last response time
Load Balancer
Fastest response time
ms
5m
Fastest response time
Load Balancer
Slowest response time
ms
5m
Slowest response time
Load Balancer
Current SSL Connection
cnt
5m
Current number of SSL connections
Load Balancer
Total SSL Connection
cnt
5m
Total number of SSL connections
Table. Load Balancer performance items
Load Balancer Listener
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
State
Instance State
state
5m
LB status
Load Balancer
Current Connection
cnt
5m
Current number of connections
Load Balancer
Total L4 Connection
cnt
5m
Total number of L4 connections
Load Balancer
Total L7 Connection
cnt
5m
Total number of L7 connections
Load Balancer
Total TCP Connection
cnt
5m
Total number of TCP connections
Load Balancer
Total Connection
cnt
5m
Total number of connections
Load Balancer
Bytes processed in forward direction
bytes
5m
Forward network bytes
Load Balancer
Packets processed in forward direction
cnt
5m
Forward network packets
Load Balancer
Bytes processed in reverse direction
bytes
5m
Reverse network bytes
Load Balancer
Packets processed in reverse direction
cnt
5m
Reverse network packets
Load Balancer
Total failure actions
cnt
5m
Total number of failure actions
Load Balancer
Current Request
cnt
5m
Current number of requests
Load Balancer
Current response
cnt
5m
Current number of responses
Load Balancer
Total Request
cnt
5m
Total number of requests
Load Balancer
Total Request Success
cnt
5m
Total number of successful requests
Load Balancer
Peak Connection
cnt
5m
Peak number of connections
Load Balancer
Current Connection Rate
%
5m
Current SSL connection rate
Load Balancer
Last response time
ms
5m
Last response time
Load Balancer
Fastest response time
ms
5m
Fastest response time
Load Balancer
Slowest response time
ms
5m
Slowest response time
Load Balancer
Current SSL Connection
cnt
5m
Current number of SSL connections
Load Balancer
Total SSL Connection
cnt
5m
Total number of SSL connections
Table. Load Balancer Listener performance items
Load Balancer Server Group
Performance Item Group Name
Performance Item Name
Collection Unit
Collection Cycle
Description
Server Group
Instance State
state
5m
LB Server Group status
Server Group
Peak Connection
cnt
5m
Server group peak number of connections
Server Group
Healthy host
cnt
5m
Server group number of healthy hosts
Server Group
Unhealthy host
cnt
5m
Server group number of unhealthy hosts
Server Group
Request Count
cnt
5m
Number of requests
Server Group
Response Count
cnt
5m
Number of responses
Server Group
2xx Response Count
cnt
5m
Number of 2xx responses
Server Group
3xx Response Count
cnt
5m
Number of 3xx responses
Server Group
4xx Response Count
cnt
5m
Number of 4xx responses
Server Group
5xx Response Count
cnt
5m
Number of 5xx responses
Table. Load Balancer Server Group performance items
10.3.2.9 - Appendix C. Service-specific status check
Compute type
Virtual Server
Performance Item Name
Description
Value
Instance State [Basic]
Instance Status
NOSTATE, RUNNING, BLOCKED, PAUSED, SHUTDOWN, SHUTOFF, CRASHED, PMSUSPENDED, LAST
Fig. Virtual Server Status Check
GPU Server
Performance Item Name
Description
Value
Instance State [Basic]
Instance Status
NOSTATE, RUNNING, BLOCKED, PAUSED, SHUTDOWN, SHUTOFF, CRASHED, PMSUSPENDED, LAST
Fig. GPU Server Status Check
Bare Metal Server
Performance Item Name
Description
Value
N/A
N/A
N/A
Fig. Bare Metal Server Status Check
Caution
Bare Metal Server does not provide status information through Cloud Monitoring.
Multi-node GPU Cluster [Cluster Fabric]
Performance Item Name
Description
Value
N/A
N/A
N/A
Fig. Multi-node GPU Cluster [Cluster Fabric] status check
Caution
Multi-node GPU Cluster [Cluster Fabric] does not provide status information through Cloud Monitoring.
Multi-node GPU Cluster [Node]
Performance Item Name
Description
Value
N/A
N/A
N/A
Fig. Multi-node GPU Cluster [Node] Status Check
Caution
Multi-node GPU Cluster [Node] does not provide status information through Cloud Monitoring.
Storage type
File Storage
Performance Item Name
Description
Value
Instance State
File Storage volume status
* 1: Online * 0: other status values (Offline)
Fig. File Storage status check
Object Storage
Performance Item Name
Description
Value
N/A
N/A
N/A
Table. Object Storage status check
Caution
Object Storage does not provide status information through Cloud Monitoring.
Block Storage(BM)
Performance Item Name
Description
Value
Instance State
Blockstorage Volume Status
* 1: running (normal) * 0: down (abnormal)
Fig. Block Storage(BM) Status Check
Block Storage(VM)
Performance Item Name
Description
Value
Instance State
Blockstorage volume status
* 1: running (normal) * 0: down (abnormal)
Table. Block Storage(VM) Status Check
Database type
PostgreSQL(DBaaS)
Performance Item Name
Description
Value
Instance State [PID]
postgres process PID
* PID: when the postgres process exists * -1: when the process does not exist
Fig. PostgreSQL(DBaaS) Status Check
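The PID-or--1 convention used by the database status checks in this section can be sketched in pure Python. This is an illustrative, Linux-only sketch (scanning /proc), not the actual Cloud Monitoring agent implementation:

```python
import os

def process_pid_or_minus_one(name: str) -> int:
    """Return the PID of the first process whose command name matches `name`,
    or -1 when no such process exists (the convention used in these tables)."""
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries such as /proc/meminfo
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() == name:
                    return int(entry)
        except OSError:
            continue  # process exited while we were scanning
    return -1

print(process_pid_or_minus_one("postgres"))
```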
MariaDB(DBaaS)
Performance Item Name
Description
Value
Safe PID
mariadb_safe process PID
* PID: when the mariadb_safe process exists * -1: when the process does not exist
Instance State [PID]
mariadb process PID
* PID: when the mariadb process exists * -1: when the process does not exist
Fig. MariaDB(DBaaS) Status Check
MySQL(DBaaS)
Performance Item Name
Description
Value
Instance State [PID]
mysqld process PID
* PID: when the mysqld process exists * -1: when the process does not exist
Fig. MySQL(DBaaS) Status Check
Microsoft SQL Server(DBaaS)
Performance Item Name
Description
Value
Instance State [Cluster]
MSSQL cluster configuration status
* PID: when the mssql process exists * -1: when the process does not exist
Instance State [PID]
sqlservr.exe process PID
* For Microsoft SQL Server, the secondary server also runs with a PID, so the status cannot be determined from the PID alone
Fig. Microsoft SQL Server(DBaaS) status check
EPAS(DBaaS)
Performance Item Name
Description
Value
Instance State [PID]
Postgres process PID
* PID: When the postgres process exists * -1: When the process does not exist
Fig. EPAS(DBaaS) Status Check
CacheStore(DBaaS)
Redis
Performance Item Name
Description
Value
Instance State [PID]
Redis-server process PID
* -1: in case the process does not exist
Sentinel State [PID]
Sentinel process PID
* -1: in case the process does not exist
Fig. Redis Status Check
Valkey
Performance Item Name
Description
Value
Instance State [PID]
Valkey-server process PID
* -1: in case the process does not exist
Sentinel State [PID]
Sentinel process PID
* -1: in case the process does not exist
Table. Valkey Status Check
Data Analytics type
Event Streams
Performance Item Name
Description
Value
AKHQ State [PID]
akhq process PID
* PID: akhq process exists * -1: process does not exist
Instance State [PID]
kafka process PID
* PID: when the kafka process exists * -1: when the process does not exist
Zookeeper State [Pid]
zookeeper process PID
* PID: when the zookeeper process exists * -1: when the process does not exist
Fig. Event Streams Status Check
Search Engine
Performance Item Name
Description
Value
Instance State [PID]
Elasticsearch process PID
* PID: When the Elasticsearch process exists * -1: When the process does not exist
Kibana State [PID]
Kibana process PID
* PID: When the Kibana process exists * -1: When the process does not exist
Fig. Search Engine Status Check
Elasticsearch
Performance Item Name
Description
Value
Instance State [PID]
Elasticsearch process PID
* -1: in case the process does not exist
Kibana State [PID]
Dashboard process PID
* -1: in case the process does not exist
Fig. Elasticsearch Status Check
Opensearch
Performance Item Name
Description
Value
Instance State [PID]
Opensearch process PID
* -1: in case the process does not exist
Dashboard State [PID]
Dashboard process PID
* -1: in case the process does not exist
Table. Opensearch status check
Vertica(DBaaS)
Performance Item Name
Description
Value
Instance State [PID]
Vertica Process PID
* -1: when the process does not exist
Fig. Vertica(DBaaS) status check
Container type
Kubernetes Engine
Performance Item Name
Description
Value
Instance State
Cluster Status
* 1: Status check query sum(up{job="kubernetes-apiservers"}) returns a value greater than 0 * 0: Status check query sum(up{job="kubernetes-apiservers"}) returns a value less than or equal to 0
Fig. Kubernetes Engine status check
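The status check above can be evaluated against the Prometheus HTTP API. The endpoint URL below is a hypothetical placeholder; only the 1/0 decision rule comes from the table:

```python
import json
import urllib.parse
import urllib.request

QUERY = 'sum(up{job="kubernetes-apiservers"})'

def status_from_value(value: float) -> int:
    """1 if the query result is greater than 0, else 0 (the documented rule)."""
    return 1 if value > 0 else 0

def cluster_status(prom_url: str) -> int:
    """Evaluate the apiserver 'up' query via the Prometheus instant-query API.

    prom_url is a placeholder, e.g. "http://prometheus.example:9090".
    """
    url = f"{prom_url}/api/v1/query?query={urllib.parse.quote(QUERY)}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    results = data["data"]["result"]
    value = float(results[0]["value"][1]) if results else 0.0
    return status_from_value(value)
```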
Container Registry
Performance Item Name
Description
Value
Instance State
Container Registry Status
* 1: running (normal) * 0: down (abnormal)
Fig. Container Registry Status Check
Networking type
Internet Gateway
Performance Item Name
Description
Value
N/A
N/A
N/A
Fig. Internet Gateway Status Check
Caution
Internet Gateway does not provide status information through Cloud Monitoring.
Load Balancer(OLD)
Performance Item Name
Description
Value
Instance State
Load Balancer status
Determined by provisioning_status in API call results * 1: ACTIVE * 0: ETC
Fig. Load Balancer(OLD)
Load Balancer Listener(OLD)
Performance Item Name
Description
Value
Instance State
Load Balancer Listener status
Determined by provisioning_status in API call results * 1: ACTIVE * 0: ETC
Fig. Load Balancer Listener(OLD)
Load Balancer
Performance Item Name
Description
Value
Instance State
Load Balancer status
Determined by provisioning_status in API call results * 1: ACTIVE * 0: ETC
Fig. Load Balancer
Load Balancer Listener
Performance Item Name
Description
Value
Instance State
Load Balancer Listener status, determined by provisioning_status in API call results * 1: ACTIVE * 0: ETC
Fig. Load Balancer Listener
Load Balancer Server Group
Performance Item Name
Description
Value
Instance State
Status of Load Balancer Server Group, determined by provisioning_status in API call results * 1: ACTIVE * 0: ETC
Fig. Load Balancer Server Group
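All of the Load Balancer status checks above apply the same rule to the provisioning_status field of the API response. A minimal sketch of that mapping:

```python
def instance_state(provisioning_status: str) -> int:
    """Map provisioning_status to the Instance State metric:
    1 for ACTIVE, 0 for everything else (ETC)."""
    return 1 if provisioning_status == "ACTIVE" else 0

print(instance_state("ACTIVE"))  # 1
print(instance_state("ERROR"))   # 0
```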
Direct Connect
Performance Item Name
Description
Value
N/A
N/A
N/A
Fig. Direct Connect Status Check
Caution
Direct Connect does not provide status information through Cloud Monitoring.
Cloud WAN
Performance Item Name
Description
Value
Instance State
Attachment connection status
* 0: down * 1: up * 2: testing * 3: unknown
Fig. Cloud WAN Status Check
Global CDN
Performance Item Name
Description
Value
Instance State
Global CDN Status
* 1: running (normal) * 0: down (abnormal)
Fig. Global CDN Status Check
10.3.3 - API Reference
API Reference
10.3.4 - Release Note
Cloud Monitoring
2025.07.01
FEATURE Cloud Monitoring Integration Service Added
In July 2025, a linked service with Cloud Monitoring was added.
Additional linked services: Container(Container Registry), Database(EPAS, Microsoft SQL Server), Data Analytics(Event Streams, Search Engine), Networking(Load Balancer, Load Balancer Listener, Load Balancer Server Group, VPN)
2024.10.01
NEW Cloud Monitoring Service Official Version Release
Cloud Monitoring service has been released. It collects usage and change information of operating infrastructure resources, and supports a stable cloud operating environment through event occurrence/notification when exceeding the set threshold.
10.4 - IAM
10.4.1 - Overview
Service Overview
IAM (Identity and Access Management) is a service that controls the accessible range of services and resources by verifying the identity of registered users on the Samsung Cloud Platform and granting access rights. Administrators can create and manage user, permission group, policy, and role items in detail through IAM.
The user can create a new user if they are a Root user or a user who has been granted user registration authority from the Root user. Policies cannot be directly granted to users, but by adding users to a user group and linking policies to that user group, specific users can be granted access or management rights to resources. In other words, the tasks that can be performed within an Account vary depending on which user group the user belongs to and which policies are linked to that user group.
Provided Features
IAM provides the following features.
User Authentication: Provides multi-factor authentication (MFA) when accessing the console and API, and also blocks unauthorized access by only allowing access from permitted IP ranges.
Access Control: Users are added to user groups based on their tasks to limit their access rights to the parts necessary for their tasks. Administrators can manage and grant custom policies.
Role Management: You can switch from your own user to another role to access the Account with that role's permissions.
Credential Provider Support: The Account can be accessed and used in the Console through the credential provider.
Access Control Policy Management: Creates access control policies for each service, including control/action/resource type and authentication method/IP. This enables the application of the principle of least privilege when granting access rights to cloud resources, allowing for access control based on user.
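The elements such a policy combines (control type, actions, resource scope, and authentication method/IP conditions) can be illustrated with a small structure. All field names below are invented for this sketch and are not the actual Samsung Cloud Platform policy schema:

```python
# Hypothetical illustration of the elements an access control policy combines
# (control type, actions, resources, authentication method/IP conditions).
# Field names are invented for this sketch, not the real platform schema.
example_policy = {
    "control": "Allow",
    "actions": ["virtualserver:Start", "virtualserver:Stop"],
    "resources": ["virtualserver/*"],
    "conditions": {
        "auth_method": "MFA",          # require multi-factor authentication
        "source_ip": ["10.0.0.0/8"],   # permitted IP range
    },
}

def is_action_allowed(policy: dict, action: str) -> bool:
    """Least privilege: allow only actions the policy explicitly lists."""
    return policy["control"] == "Allow" and action in policy["actions"]
```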
Component
The user can create and manage user groups, users, and policies through Identity and Access Management(IAM).
User Group
In the user group, you can register users and add policies.
You can register users by forming a user group suitable for each task, and grant and manage the same authority to users by linking a policy suitable for the task.
User
The administrator can create users and add them to user groups. The administrator can automatically generate or directly create a user’s password and provide account access information to the user.
User Policy
You can create policies for the services provided. Permissions can be managed according to control type, applied resources, and authentication type.
Role
A role is a virtual user identity with its own separate permissions; it is not affected by the permissions of the original user account.
Preceding service
Identity and Access Management(IAM) has no preceding service.
10.4.2 - How-to Guides
Users can create and manage user groups, users, policies, and My Info. through Identity and Access Management (IAM).
Getting Started with IAM
Click on the All Services > Management > IAM menu. This will take you to the Service Home page of IAM.
On the Service Home page, My Info., Account information, Quick Link, and IAM status are provided as widgets.
Category
Detailed Description
My Info.
The username, email, and user group information of the user logged in to the Samsung Cloud Platform Console. Clicking the More button will take you to the My Info. page
Account Information
Provides the user’s Account ID, Account alias, and IAM user login URL if the user is an IAM user
Account ID: The user’s Account ID
Account Alias: A name assigned to the Account. An alias can be used to manage the Account more easily
Edit: If the Account alias is edited, the current alias can no longer be used for the IAM user login URL. See Editing Account Alias for more information
Delete: If the Account alias is deleted, IAM users can no longer log in using the Account alias. See Deleting Account Alias for more information
IAM User Login URL: A URL that allows IAM users to log in without entering Account information
Editing Account Alias
You can edit the Account alias in the Service Home > Account widget of IAM.
Click on the All Services > Management > IAM menu. This will take you to the Service Home page of IAM.
On the Service Home page, click the Edit button for the Account alias in the Account widget. This will take you to the Edit Account Alias popup window.
In the Edit Account Alias popup window, confirm the instructions and edit the Account alias, then click the OK button.
Note
When the Account alias is edited, the current alias can no longer be used in the Console login URL. After editing, if the alias is not in use by another Account, the previous alias can be used again.
Deleting Account Alias
You can delete the Account alias in the Service Home > Account widget of IAM.
Click on the All Services > Management > IAM menu. This will take you to the Service Home page of IAM.
On the Service Home page, click the Delete button for the Account alias in the Account widget. This will take you to the Delete Account Alias popup window.
In the Delete Account Alias popup window, confirm the instructions and click the OK button.
Warning
Deleting the Account alias will prevent IAM users from logging in using the Account alias.
The IAM login URL will also be unavailable.
10.4.2.1 - User Group
Users can enter required information for user groups and select detailed options through the Samsung Cloud Platform Console to create the corresponding service.
Creating a User Group
To create a user group, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the Create User Group button on the User Group List page. You will be navigated to the Create User Group page.
Enter the required information in the Enter Basic Information, Add User, Connect Policy, Enter Additional Information areas.
Category
Required
Description
User Group Name
Required
Enter user group name
Enter a value between 3-24 characters using Korean, English, numbers, and special characters (+=,.@-_)
Description
Optional
Description of the user group name
Can enter up to 1,000 characters as a detailed description of the user group name
Users
Optional
Users to add to the user group
A list of users registered in the Account is displayed, and when a checkbox is selected, the username of the selected user is displayed at the top of the screen
Click the X button for each user at the top of the screen or uncheck the checkbox in the user list to cancel the selection of the selected user
If there is no user to add, click Create User at the bottom of the user list to first register a new user
After user creation is complete, refresh the user list and select the user when the user is displayed
For details on creating a user, refer to Creating a User
Policies
Optional
Policies to connect to the user group
A list of policies registered in the Account is displayed, and when a checkbox is selected, the policy name of the selected policy is displayed at the top of the screen
Click the X button for each policy at the top of the screen or uncheck the checkbox in the policy list to cancel the selection of the selected policy
If there is no policy to connect, click Create Policy at the bottom of the policy list to first register a new policy
After policy creation is complete, refresh the policy list and select the policy when the policy is displayed
Table. User Group Creation Information Entry Items
Click the Create button.
When a popup window announcing creation opens, click the OK button. You will be navigated to the User Group List page.
Viewing User Group Details
In user groups, you can view the user group list and detailed information and modify them. The User Group Details page consists of Basic Information, Users, Policies, Tags tabs.
To view detailed information of the user group service, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the user group name for which you want to view detailed information on the User Group List page. You will be navigated to the User Group Details page.
The User Group Details page displays basic information and consists of Basic Information, Users, Policies, Tags tabs.
Basic Information
On the User Group List page, you can view the basic information of the selected user group and, if necessary, modify the user group name and description.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
In user groups, refers to user group name
Resource ID
Unique resource ID
Creator
User who created the service
Creation Date/Time
Date/Time when the service was created
Modifier
User who modified the service information
Modification Date/Time
Date/Time when the service information was modified
User Group Name
Name of the user group
Description
Description of the user group name
Table. User Group Basic Information Tab Items
Users
On the User Group List page, you can view the users included in the selected user group and, if necessary, add or delete users.
Clicking the item allows viewing the names of the user groups to which the user belongs
Creation Date/Time
Date/Time when the user was created
Table. User Group Details - Users Tab Items
Policies
On the User Group List page, you can view the policy connection information of the selected user group and, if necessary, modify the policy connection information for the user group.
Basic: Basic policy provided by Samsung Cloud Platform
Custom: Policy directly created by the user
Description
Description of the policy
Creation Date/Time
Date/Time when the policy was created
Modification Date/Time
Date/Time when the policy was modified
Table. User Group Details - Policies Tab Items
Tags
On the User Group List page, you can view the tag information of the selected user group and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can view Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from previously created Key and Value lists
Table. User Group Tags Tab Items
Managing User Groups
You can change the name of a user group or add users, connect policies, and modify tags.
If user group management is needed, you can perform tasks on the User Group List or User Group Details page.
Modifying Basic Information
You can modify the name and description of a user group.
To modify the name and description of a user group, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the user group name for which you want to modify basic information on the User Group List page. You will be navigated to the User Group Details page.
After viewing the basic information to modify on the User Group Details page, click the Edit button.
User Group Name: Can change the user group name. Clicking the Edit button opens the Edit User Group Name popup window.
Description: Can modify the description of the user group. Clicking the Edit button opens the Edit Description popup window.
Modify to the content you want to change in the popup window, then click the OK button.
Managing Users
You can add or exclude users from a user group.
Adding Users
To add users to a user group, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the user group name to which you want to add users on the User Group List page. You will be navigated to the User Group Details page.
Click the Users tab on the User Group Details page. You will be navigated to the Users tab.
Click the Add User button on the Users tab. You will be navigated to the Add User page.
Select the user you want to add from the Users list on the Add User page, then click the Complete button. A popup window announcing user addition opens.
Category
Description
Added Users
Display users included in the user group
Users
Select a user to add to the user group from the list of users registered in the Account
When a checkbox is selected, the selected user group name is displayed at the top of the list
Click the X button of the username added at the top of the list or uncheck the checkbox in the user list to cancel that user
If the desired user does not exist, click the Create User item at the bottom of the user list to first register a new user
After user creation is complete, refresh the user list and select the created user
Click the OK button in the popup window announcing policy connection. You can view the connected policy in the list on the Policies tab.
Disconnecting Policies
To disconnect connected policies from a user group, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the user group name for which you want to disconnect policy connections on the User Group List page. You will be navigated to the User Group Details page.
Click the Policies tab on the User Group Details page. You will be navigated to the Policies tab.
Select the policy to disconnect from the displayed policy list on the Policies tab, then click the Disconnect button.
The selected Policy is disconnected and the policy list is refreshed.
Managing Tags
You can modify the tags of a user group.
To modify tags in a user group, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the user group name for which you want to modify tag information on the User Group List page. You will be navigated to the User Group Details page.
Click the Tags tab on the User Group Details page. You will be navigated to the Tags tab.
Click the Edit Tags button on the Tags tab.
After adding or modifying tags, click the Save button. A popup window announcing tag modification opens.
You can modify the Key, Value of previously registered tags.
You can add a new tag by clicking the Add Tag button.
Clicking the X button in front of the added tag deletes that tag.
Click the OK button. You can view the modified tag information in the list.
Deleting a User Group
To delete a user group, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Click the user group name to delete on the User Group List page. You will be navigated to the User Group Details page.
Click the Delete User Group button on the User Group Details page.
The user group is deleted and you will be navigated to the User Group List page.
To delete multiple user groups simultaneously, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the User Group menu on the Service Home page. You will be navigated to the User Group List page.
Check the user groups to delete from the user group list.
After confirming the selected user groups, click the Delete button.
The selected user groups are deleted and the User Group List page is refreshed.
10.4.2.2 - User
Users can create services by entering required information for users and selecting detailed options through the Samsung Cloud Platform Console.
Creating a User
To create a user, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the Create User button. You will be taken to the Create User page.
Enter the required information in the Enter Basic Information, Permission Settings, Enter Additional Information areas on the Create User page, then click the Create button. A popup window announcing user creation opens.
Category
Required
Description
Username
Required
Name of the user
Enter a value within 64 characters using English, numbers, and special characters (+=,.@-_)
Description
Optional
Description of the username
Enter up to 1,000 characters as a detailed description of the username
Password
Required
Password for the user, there are 2 creation methods
Auto Generate: Password is automatically generated and can be checked at the time of user creation
Direct Input: Create password directly
Password Change Setting
Optional
Password change setting on first user login
If not set, the user is not prompted to change the password at first login; the password can later be reset through Reset Password
Click the Create button in the popup window announcing user creation. The IAM User Login Information popup window opens.
After checking the IAM user login information, click the Confirm button. You will be taken to the User List page.
Category
Description
Account ID
Account ID value
Username
Created user name
Password
Password of the created user
Click the View icon to check the password
IAM User Login URL
Login URL information of the IAM user
Excel Download
Download IAM user login information as an Excel file
Email Send
Send an Excel file containing IAM user login information via email
After clicking the button, enter the address to receive the email
Table. IAM User Login Information Items
Password Creation Rules
If you enter the wrong password 5 or more times, you are automatically logged out.
Must include at least 1 each of uppercase English, lowercase English, numbers, and special characters (!@#$%&*^).
Length is 9~20 characters.
Cannot use ID or username as password.
Cannot use the same character 3 or more times.
Cannot use easily guessable passwords.
Cannot use recently used passwords.
Cannot use 4 or more consecutive characters/numbers.
Password change cycle is 90 days.
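The password creation rules above can be sketched as a small validator. The interpretation of the repetition rule (read here as 3+ identical characters in a row) and of "consecutive characters/numbers" (read as runs like abcd or 1234), along with all function and message names, are illustrative assumptions, not the console's actual implementation.

```python
import re

SPECIALS = "!@#$%&*^"  # special characters listed in the rules above

def validate_password(password: str, username: str) -> list[str]:
    """Return a list of rule violations; an empty list means the
    password satisfies the creation rules sketched here."""
    errors = []
    if not 9 <= len(password) <= 20:
        errors.append("length must be 9-20 characters")
    if not re.search(r"[A-Z]", password):
        errors.append("needs an uppercase letter")
    if not re.search(r"[a-z]", password):
        errors.append("needs a lowercase letter")
    if not re.search(r"[0-9]", password):
        errors.append("needs a digit")
    if not any(c in SPECIALS for c in password):
        errors.append("needs a special character (!@#$%&*^)")
    if username and username.lower() in password.lower():
        errors.append("must not contain the username")
    if re.search(r"(.)\1\1", password):
        errors.append("same character 3 or more times in a row")
    lowered = password.lower()
    for i in range(len(lowered) - 3):
        chunk = lowered[i:i + 4]
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(3)):
            errors.append("4 or more consecutive characters/numbers")
            break
    return errors
```

The "recently used passwords" and 90-day change-cycle rules are server-side state and are omitted from this local check.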
Viewing User Details
In Users, you can view and modify the user list and detailed information. The User Details page is composed of Basic Information, User Group, Policy, Tags tabs.
To view detailed information of the user service, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to view detailed information. You will be taken to the User Details page.
The User Details page displays basic information and is composed of Basic Information, User Group, Policy, Tags tabs.
Basic Information
You can view the basic information of the user selected on the User List page, and if necessary, modify the user’s description and options.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
In Users, means username
Resource ID
Unique resource ID
Creator
User who created the service
Creation Date
Date and time when the service was created
Modifier
User who modified the service information
Modification Date
Date and time when the service information was modified
Table. User Details - Basic Information Tab Items
Policy
You can view the policies linked to the user selected on the User List page.
Category
Description
Policy Name
Name of the policy
Click the policy name to view the policy details page
Type
Type of the policy
Description
Description of the policy
Connection Method
Policy connection method
Direct: User directly linked to policy
Group: Linked to policy through group
Direct, Group: Both direct connection and group connection applied
Click the group name to go to that group details page
Modification Date
Date and time when the policy was last modified
Table. User Details - Policy Tab Items
Tags
You can view the tag information of the user selected on the User List page, and add, change, or delete it.
Category
Description
Tag List
Tag list
Can check Key, Value information of tags
Can add up to 50 tags per resource
When entering tags, search and select from the list of previously created Keys and Values
Table. User Details - Tags Tab Items
Managing Users
You can change the user’s basic information, add user groups, and modify tags.
If user management is required, you can perform tasks on the User List or User Details page.
Modifying Basic Information
You can modify the user’s basic information.
Warning
The username cannot be modified.
Modifying Description
To modify the user’s description, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to modify the description. You will be taken to the User Details page.
On the User Details page, check the description and click the description Modify button. The Modify Description popup window opens.
After changing the description in the Modify Description popup window, click the Confirm button.
Modifying Password
To modify the user’s password, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to modify the password. You will be taken to the User Details page.
On the User Details page, click the password Modify button. The Reset Password popup window opens.
After modifying the password, click the Confirm button. The IAM User Login Information popup window opens.
Password has the following 2 settings.
Auto Generate: A random password is generated.
Direct Input: Created with the password directly entered by the user. Must include at least 1 each of uppercase English, lowercase English, numbers, and special characters (!@#$%&*^). Refer to the password creation rules.
Password Change Setting: It is recommended to change the password on first login after resetting the password.
Password Creation Rules
Must include at least 1 each of uppercase English, lowercase English, numbers, and special characters (!@#$%&*^).
Length is 9~20 characters.
Cannot use ID or username as password.
Cannot use the same character 3 or more times.
Cannot use easily guessable passwords.
Cannot use recently used passwords.
Cannot use 4 or more consecutive characters/numbers.
Password change cycle is 90 days.
After checking the IAM user login information, click the Confirm button. The password change is complete.
Category
Description
Account ID
Account ID value
Username
Created user name
Password
Password of the created user
Click the View icon to check the password
IAM User Login URL
Login URL information of the IAM user
Excel Download
Download IAM user login information as an Excel file
Email Send
Send an Excel file containing IAM user login information via email
After clicking the button, enter the address to receive the email
Table. IAM User Login Information Items
Restricting Password Reuse
Specify the number of previous passwords to keep in history so that recently used passwords cannot be reused.
To restrict user password reuse, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to modify password reuse restriction. You will be taken to the User Details page.
On the User Details page, click the password reuse restriction Modify button. The Modify Password Reuse Restriction popup window opens.
Password Reuse Restriction: Select the number of recently used passwords to check, as a number from 1 to 24.
Click the Confirm button. You can check that the Password Reuse Restriction number has changed.
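The reuse-restriction behavior described above can be sketched as a bounded password history: the N most recent passwords (N selectable from 1 to 24, as in the console) are kept, and a new password matching any of them is rejected. The class and method names are illustrative assumptions; a real service would store salted hashes, not plaintext.

```python
from collections import deque

class PasswordHistory:
    """Illustrative sketch of password-reuse restriction."""

    def __init__(self, history_size: int):
        if not 1 <= history_size <= 24:
            raise ValueError("history size must be within 1-24")
        self._recent = deque(maxlen=history_size)  # oldest entries age out

    def set_password(self, password: str) -> bool:
        """Accept the new password unless it is still in the history."""
        if password in self._recent:
            return False  # reuse of a recent password: rejected
        self._recent.append(password)
        return True
```

With a history size of 2, a password becomes reusable again only after two newer passwords have displaced it from the history.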
Managing User Groups
You can add a user to a user group or exclude the user from a user group.
Adding User Group
To add a user to a user group, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to add a user group. You will be taken to the User Details page.
On the User Details page, click the User Group tab. You will be taken to the User Group tab.
On the User Group tab, click the Add User Group button. You will be taken to the Add User Group page.
On the Add User Group page, select the user group to add from the User Group list, then click the Complete button. A popup window announcing user group addition opens.
Category
Description
Added User Group
Display the user group to which the user belongs
Add to User Group
Select a user group to add the user from the list of user groups registered in the Account
When the checkbox is selected, the selected user group name is displayed at the top of the list
Click the X button of the user group name added at the top of the list, or uncheck the checkbox in the user group list to cancel that user group
If there is no desired user group, you can first register a new user group by clicking the Create User Group item at the bottom of the user group list
When user group creation is complete, refresh the user group list and then select the created user group
Click the Confirm button in the popup window announcing policy linking. You can check the linked policy in the list on the Policy tab.
Unlinking Policy
You can unlink a policy linked to the user.
To unlink a policy linked to the user, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to unlink the policy. You will be taken to the User Details page.
On the User Details page, click the Policy tab. You will be taken to the Policy tab.
After selecting the policy to unlink from the Policy list, click the Unlink button. A popup window announcing unlinking opens.
After clicking the More button, you can unlink a directly linked policy or exclude the user from only the user groups that contain the user.
After checking the policy information to be unlinked, click the Confirm button. The policy is unlinked.
Guide
Policies linked through user groups can be unlinked by excluding the user from the group. If you exclude the user from the user group, all policies linked only through that group are unlinked.
Managing Tags
You can modify the user’s tags.
To modify tags in Users, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username for which you want to modify tag information. You will be taken to the User Details page.
On the User Details page, click the Tags tab. You will be taken to the Tags tab.
On the Tags tab, click the Modify Tags button.
After adding or modifying tags, click the Save button. A popup window announcing tag modification opens.
You can modify the Key, Value of previously registered tags.
Click the Add Tag button to add a new tag.
Click the X button in front of the added tag to delete that tag.
Click the Confirm button. You can check the modified tag information in the list.
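The tag edit flow above (modify the Value of an existing Key, add new tags, enforce the 50-tag-per-resource limit) can be sketched as a merge of Key/Value maps. Treating tags as a string-to-string dictionary is an assumption for illustration.

```python
MAX_TAGS_PER_RESOURCE = 50  # limit stated in the tag tab description

def merge_tags(existing: dict[str, str], updates: dict[str, str]) -> dict[str, str]:
    """Apply tag edits: updates overwrite matching Keys, new Keys are
    added, and the per-resource tag limit is enforced."""
    merged = {**existing, **updates}
    if len(merged) > MAX_TAGS_PER_RESOURCE:
        raise ValueError(f"a resource can hold at most {MAX_TAGS_PER_RESOURCE} tags")
    return merged
```

Modifying the Value of an existing Key does not count against the limit, since the Key count is unchanged.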
Deleting a User
To delete a user, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username to delete. You will be taken to the User Details page.
On the User Details page, click the Delete User button.
The user is deleted and you are taken to the User List page.
To delete multiple users at the same time, follow the steps below.
Click the All Services > Management > IAM menu. You will be taken to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the User menu. You will be taken to the User List page.
Check the users to delete in the user list.
After checking the selected users, click the Delete button.
The selected users are deleted and the user list is refreshed.
10.4.2.3 - Policy
Users can enter required information for policies and select detailed options through the Samsung Cloud Platform Console to create the corresponding service.
Creating a Policy
To create a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the Create Policy button on the Policy List page. You will be navigated to the Create Policy page.
Enter the required information in the Enter Basic Information, Enter Additional Information areas, then click the Next button. You will be navigated to the Permission Settings area.
Category
Required
Description
Policy Name
Required
Enter policy name
Enter a value between 3-128 characters using Korean, English, numbers, and special characters (+=,.@-_)
Description
Optional
Description of the policy name
Enter up to 1,000 characters as a detailed description of the policy name
Tags
Optional
Tags to add to the policy
Up to 50 tags can be added per resource
Table. Policy Creation Information Entry Items - Basic Information and Additional Information
Select the service for which to set permissions. Permission setting items are displayed under the selected service name.
You can select the desired service or set it for all services.
Enter the required information in the Permission Settings area.
Category
Required
Description
Control Type
Required
Select policy control type
Allow Policy: Policy that allows defined permissions
Deny Policy: Policy that denies defined permissions
Deny policy takes precedence for the same target
Action
Required
Select actions provided by each service
Actions where individual resource selection is possible are displayed in purple
Actions targeting all resources are displayed in black
Add Action Directly: Can specify multiple actions at once using wildcard *
Applied Resource
Required
Resource to which the action is applied
All Resources: Apply to all resources for the selected action
Individual Resource: Apply only to specified resources for the selected action
Individual Resource can be selected only for actions displayed in purple, which support individual resource selection
Click the Add Resource button to specify target resources by resource type
Applied Authentication
Required
Authentication method of the target to which the policy is applied
All Authentication: Apply regardless of authentication method
Authentication Key Authentication: Apply to authentication key authentication users
Temporary Key Authentication, Console Login: Apply to temporary key authentication or Console login users
Applied IP
Required
IP that allows policy application
User-defined IP: User directly registers and manages IP
Applied IP: IP to which the policy is applied by user registration, can be registered in IP address or range format
Excluded IP: IP to exclude from Applied IP, can be registered in IP address or range format
All IP: Do not restrict IP access
Allow access for all IPs, but if an exception is needed, register Excluded IP to restrict access for registered IPs
Additional Conditions
Optional
Add conditions for Attribute-Based Access Control (ABAC)
Condition Key: Select from Global condition Key and service condition Key lists
Qualifier: Default, any value in request, all values in request
Operator: Bool, Null
Value: True, False
Table. Policy Creation Information Entry Items - Permission Settings
Caution
Permission settings provide Basic Mode and JSON Mode.
After writing in Basic Mode, switching to JSON Mode or moving to another screen merges services with identical conditions into one and removes services whose settings are incomplete.
If content written in JSON Mode does not match JSON format, you cannot switch to Basic Mode.
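The caution above can be mirrored with a simple local check: the console refuses to switch from JSON Mode back to Basic Mode unless the content parses as valid JSON. This sketch checks only JSON syntax; the console may apply further schema rules, and the `ControlType` key in the example is an illustrative assumption, not the documented policy schema.

```python
import json

def can_switch_to_basic_mode(json_mode_text: str) -> bool:
    """Return True only if the JSON Mode content is syntactically
    valid JSON, mimicking the mode-switch restriction."""
    try:
        json.loads(json_mode_text)
        return True
    except json.JSONDecodeError:
        return False
```

For example, `'{"ControlType": "Allow"}'` parses and would allow the switch, while `'{"ControlType": Allow}'` (missing quotes) would not.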
In the Permission Settings area, first select the Service for which to set permissions.
You can create a policy by loading an existing registered policy through Load Policy. For details on Load Policy, refer to Loading Policy.
Click the Next button. You will be navigated to the Confirm Entered Information page.
After confirming the entered information, click the Create button.
When a popup window announcing policy creation opens, click the OK button. You will be navigated to the Policy List page.
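The control-type precedence stated in the Permission Settings table (a Deny policy takes precedence over an Allow policy for the same target) can be sketched as a tiny evaluator. The policy dictionaries here are simplified stand-ins, not the console's actual policy schema, and the action/resource names are hypothetical.

```python
def evaluate(policies: list[dict], action: str, resource: str) -> bool:
    """Return True if the action on the resource is allowed.
    Deny overrides Allow for the same target; no matching policy
    means the action is not allowed."""
    allowed = False
    for p in policies:
        if p["action"] == action and p["resource"] == resource:
            if p["control"] == "Deny":
                return False  # Deny wins regardless of any Allows
            if p["control"] == "Allow":
                allowed = True
    return allowed
```

If both an Allow and a Deny policy match the same action and resource, the result is denied, matching the stated precedence rule.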
Loading Policy
You can load an existing policy to reference it for policy creation. To load an existing policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the Create Policy button on the Policy List page. You will be navigated to the Create Policy page.
Enter the required information in the Enter Basic Information, Enter Additional Information areas.
Click the Next button. You will be navigated to the Permission Settings area.
Click the Load Policy button. The Load Policy popup window will open.
A list of policies registered in the Account is displayed. Select the policy you want to load and click OK.
The loaded policy is entered in the Permission Settings area and can be edited.
Note
When you execute Load Policy, all previously entered content is deleted and replaced with the settings of the selected policy.
Registering Individual Resources as Applied Resources
You can register individual resources as applied resources in the Permission Settings area. To register individual resources as applied resources, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the Create Policy button on the Policy List page. You will be navigated to the Create Policy page.
Enter the required information in the Enter Basic Information, Enter Additional Information areas.
Click the Next button. You will be navigated to the Permission Settings area.
Select the Service for which to set permissions in the Permission Settings area.
In Action selection, select an Action where Individual Resource selection is possible.
Actions where individual resource selection is possible are displayed in purple.
Click Individual Resource in Applied Resource.
Click the Add Resource button. The Add Resource popup window will open.
Add resources to which the policy will be applied in the Add Resource tab. Adding resources is possible in two ways: Select Resource and Direct Input.
Select Resource: Check and select resources displayed by Resource Type.
Direct Input: Directly enter target resources by Resource Type to add them.
Wildcards *, ? can be used. If you check Select All, all resources of that resource type are added, and newly added resources thereafter are automatically included.
Note
When changing the addition method, entered content is deleted.
After confirming the entered information, click the OK button.
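The Direct Input wildcards above (`*` for any run of characters, `?` for a single character) behave like shell-style glob matching. Using `fnmatch` semantics is an assumption about how the console interprets the patterns, and the resource names are hypothetical.

```python
from fnmatch import fnmatchcase

def matches_resource(pattern: str, resource_name: str) -> bool:
    """Case-sensitive glob match of a resource name against a
    Direct Input pattern: '*' matches any run of characters,
    '?' matches exactly one character."""
    return fnmatchcase(resource_name, pattern)
```

For example, the pattern `web-*` would cover `web-01` and any later resource whose name starts with `web-`, which is consistent with newly added resources being included automatically.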
Viewing Policy Details
In policies, you can view the policy list and detailed information and modify them. The Policy Details page consists of Basic Information, Permissions, Connected Targets, Tags tabs.
To view detailed information of the policy service, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to view detailed information on the Policy List page. You will be navigated to the Policy Details page.
The Policy Details page displays basic information and consists of Basic Information, Permissions, Connected Targets, Tags tabs.
Basic Information
On the Policy List page, you can view the basic information of the selected policy and, if necessary, modify the policy name and description.
Category
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
In policies, refers to policy name
Resource ID
Unique resource ID
Creator
User who created the service
Creation Date/Time
Date/Time when the service was created
Modifier
User who modified the service information
Modification Date/Time
Date/Time when the service information was modified
Policy Name
Name of the policy
Policy Type
Type of the policy
Basic: Basic policy provided by Samsung Cloud Platform
Custom: Policy directly created by the user
Description
Description of the policy name
Table. Policy Details - Basic Information Tab Items
Permissions
On the Policy List page, you can view the permission information of the selected policy and, if necessary, modify permissions.
Click the Expand button of the service name for which you want to view permission information to display detailed policy information.
Note
Permission settings provide basic mode and JSON mode.
Category
Description
Edit Permissions
Permissions can be edited
Clicking the button navigates to the Edit Permissions page
Table. Policy Details - Permissions Tab Items
Tags
On the Policy List page, you can view the tag information of the selected policy and add, modify, or delete tags.
Category
Description
Tag List
Tag list
Can view Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from previously created Key and Value lists
Table. Policy Details - Tags Tab Items
Managing Policies
You can change the name of a policy or modify permissions, connected targets, and tags.
If policy management is needed, you can perform tasks on the Policy List or Policy Details page.
Modifying Basic Information
You can modify the name and description of a policy.
To modify the name and description of a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to modify basic information on the Policy List page. You will be navigated to the Policy Details page.
After viewing the basic information to modify on the Policy Details page, click the Edit button.
Policy Name: Can change the policy name. Clicking the Edit button opens the Edit Policy Name popup window.
Description: Can modify the description of the policy. Clicking the Edit button opens the Edit Description popup window.
Modify to the content you want to change in the popup window, then click the OK button.
Managing Permissions
You can modify the permissions of a policy. To modify the permissions of a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to modify policy permissions on the Policy List page. You will be navigated to the Policy Details page.
Click the Permissions tab on the Policy Details page. You will be navigated to the Permissions tab.
Click the Edit Permissions button on the Policy Details page. You will be navigated to the Edit Permissions page.
After modifying the necessary permissions on the Edit Permissions page, click the Next button. You will be navigated to the Confirm Entered Information page.
For detailed descriptions of each item in permission information, refer to Creating a Policy.
After confirming the modified permission information on the Confirm Entered Information page, click the Complete button. You will be navigated to the Permissions tab.
Managing User Connections
On the Policy > Connected Targets tab, you can view users registered to the policy and, if necessary, connect or disconnect users.
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name to which you want to connect users on the Policy List page. You will be navigated to the Policy Details page.
Click the Connected Targets tab on the Policy Details page. You will be navigated to the Connected Targets tab.
Click the Connect User button on the Connected Targets tab. You will be navigated to the Connect User page.
Select the user you want to connect from the Users list on the Connect User page, then click the Complete button. A popup window announcing user connection opens.
Category
Description
Connected Users
Display users connected to the policy
Users
Select a user to connect the policy to from the list of users registered in the Account
When a checkbox is selected, the selected username is displayed at the top of the list
Click the X button of the username added at the top of the list or uncheck the checkbox in the user list to cancel that user
If the desired user does not exist, click the Create User item at the bottom of the user list to first register a new user
After user creation is complete, refresh the user list and select the created user
Click the OK button in the popup window announcing user connection. You can view the connected user in the list on the Connected Targets tab.
Disconnecting Users
To disconnect users connected to a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to disconnect user connections on the Policy List page. You will be navigated to the Policy Details page.
Click the Connected Targets tab on the Policy Details page. You will be navigated to the Connected Targets tab.
Select the user to disconnect from the user list on the Connected Targets tab, then click the Disconnect button. A popup window announcing disconnection opens.
Click the OK button in the popup window announcing disconnection. The connection of the selected user is disconnected and the user list is refreshed.
Managing User Group Connections
On the Policy > Connected Targets tab, you can view user groups registered to the policy and, if necessary, connect or disconnect user groups.
To connect user groups to a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name to which you want to connect user groups on the Policy List page. You will be navigated to the Policy Details page.
Click the Connected Targets tab on the Policy Details page. You will be navigated to the Connected Targets tab.
Click the Connect User Group button on the Connected Targets tab. You will be navigated to the Connect User Group page.
Select the user group you want to connect from the User Groups list on the Connect User Group page, then click the Complete button. A popup window announcing user group connection opens.
Category
Description
Connected User Groups
Display user groups connected to the policy
User Groups
Select a user group to connect the policy from the list of user groups registered in the Account
When a checkbox is selected, the selected user group name is displayed at the top of the list
Click the X button of the user group name added at the top of the list or uncheck the checkbox in the user group list to cancel that user group
If the desired user group does not exist, click the Create User Group item at the bottom of the user group list to first register a new user group
After user group creation is complete, refresh the user group list and select the created user group
Click the OK button in the popup window announcing user group connection. You can view the connected user group in the list on the Connected Targets tab.
Disconnecting User Groups
To disconnect user groups connected to a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to disconnect user group connections on the Policy List page. You will be navigated to the Policy Details page.
Click the Connected Targets tab on the Policy Details page. You will be navigated to the Connected Targets tab.
Select the user group to disconnect from the user group list on the Connected Targets tab, then click the Disconnect button. A popup window announcing disconnection opens.
Click the OK button in the popup window announcing disconnection. The connection of the selected user group is disconnected and the user group list is refreshed.
Managing Role Connections
On the Policy > Connected Targets tab, you can view roles registered to the policy and, if necessary, connect or disconnect roles.
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name to which you want to connect roles on the Policy List page. You will be navigated to the Policy Details page.
Click the Connected Targets tab on the Policy Details page. You will be navigated to the Connected Targets tab.
Click the Connect Role button on the Connected Targets tab. You will be navigated to the Connect Role page.
Select the role you want to connect from the Roles list on the Connect Role page, then click the Complete button. A popup window announcing role connection opens.
Category
Description
Connected Roles
Display roles connected to the policy
Roles
Select a role to connect the policy from the list of roles registered in the Account
When a checkbox is selected, the selected role is displayed at the top of the list
Click the X button of the role name added at the top of the list or uncheck the checkbox in the role list to cancel that role
If the desired role does not exist, click the Create Role item at the bottom of the role list to first register a new role
After role creation is complete, refresh the role list and select the created role
Click the OK button in the popup window announcing role connection. You can view the connected role in the list on the Connected Targets tab.
Disconnecting Roles
To disconnect roles connected to a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to disconnect role connections on the Policy List page. You will be navigated to the Policy Details page.
Click the Connected Targets tab on the Policy Details page. You will be navigated to the Connected Targets tab.
Select the role to disconnect from the role list on the Connected Targets tab, then click the Disconnect button. A popup window announcing disconnection opens.
Click the OK button in the popup window announcing disconnection. The connection of the selected role is disconnected and the role list is refreshed.
Managing Tags
You can modify the tags of a policy.
To modify tags in a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name for which you want to modify tag information on the Policy List page. You will be navigated to the Policy Details page.
Click the Tags tab on the Policy Details page. You will be navigated to the Tags tab.
Click the Edit Tags button on the Tags tab.
After adding or modifying tags, click the Save button. A popup window announcing tag modification opens.
You can modify the Key, Value of previously registered tags.
You can add a new tag by clicking the Add Tag button.
Clicking the X button in front of the added tag deletes that tag.
Click the OK button. You can view the modified tag information in the list.
Deleting a Policy
To delete a policy, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Click the policy name to delete on the Policy List page. You will be navigated to the Policy Details page.
Click the Delete Policy button on the Policy Details page.
The policy is deleted and you will be navigated to the Policy List page.
To delete multiple policies simultaneously, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Policy menu on the Service Home page. You will be navigated to the Policy List page.
Select the policies to delete from the policy list.
After confirming the selected policies, click the Delete Policy button.
The selected policies are deleted and the Policy List page is refreshed.
10.4.2.4 - Role
You can create a role with separate permissions and switch from your own account to that role to access the Account.
Creating a role
To create a role, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the Create Role button on the Role List page. You will be navigated to the Create Role page.
Enter the information for role creation on the Create Role page, then click the Complete button.
Enter the basic information.
Classification
Required
Detailed Description
Role Name
Required
Enter the name of the role
Enter up to 64 characters using English letters, numbers, and special characters (+=-_@,.)
Description
Optional
Enter a description of the role within 1,000 characters
Maximum Session Duration
Required
Enter the session time allowed for the user when switching roles in the Console
Table. Role Creation Information Input Items
Policy Name
Click the policy name to view the Policy Details page
Type
Type of the policy
Description
Description of the policy
Modification Time
Time when the policy was last modified
Table. Role Details - Policy Tab Items
Tags
You can view, add, modify, or delete the tag information of the role.
Classification
Detailed Description
Tag List
Check the Key, Value information of the tags
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value lists
Table. Role Details - Tag Tab Items
Managing Roles
You can change the basic information of a role, or modify or delete the role's performing entity, connected policies, or tag information.
Modifying Basic Information
You can modify the maximum session duration and description in the role details.
To modify the basic information, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the role name whose basic information you want to modify on the Role List page. You will be navigated to the Role Details page.
On the Role Details page, check the basic information to modify, then click the Edit button.
Maximum Session Duration: You can set the role session duration allowed for an IAM user switching roles in the Console. When you click the Edit button, the Edit Maximum Session Duration popup window opens.
Description: You can modify the description of the role. When you click the Edit button, the Edit Description popup window opens.
Enter the changes in the popup window, then click the OK button.
Managing the Performing Entity
You can add, modify, or delete the role's performing entity.
To manage the performing entity of a role, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the role name whose performing entity you want to modify on the Role List page. You will be navigated to the Role Details page.
Click the Performing Entity tab on the Role Details page. You will be navigated to the Performing Entity tab.
Click the Edit Performing Entity button on the Performing Entity tab. You will be navigated to the Edit Performing Entity page.
Modify the performing entity on the Edit Performing Entity page, then click the Complete button. A popup window announcing the performing entity modification opens.
Classification
Required
Detailed Description
Type
Required
Select the type of performing entity
Current Account, Different Account, User SRN, Credential Provider, Service
Value
Required
Enter the value for the performing entity
Current Account: Displays the current Account ID
Different Account: Enter the Account ID that will use this role
User SRN: Enter the SRN of a user registered in the Console
Credential Provider: Select the credential provider name
Service: Select Virtual Server or Cloud Functions
Add
Optional
Button to add a performing entity
Up to 20 performing entities can be added
Click the X button of an added performing entity to delete it
Table. Performing Entity Modification Items
Click the OK button in the popup window announcing the performing entity modification. You can view the modified performing entity in the list on the Performing Entity tab.
Managing Policies
You can connect policies to a role or disconnect connected policies.
Connecting a Policy
You can connect policies to a role.
To connect a policy to a role, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the role name to connect a policy to on the Role List page. You will be navigated to the Role Details page.
Click the Policy tab on the Role Details page. You will be navigated to the Policy tab.
Click the Connect Policy button on the Policy tab. You will be navigated to the Connect Policy page.
After selecting the policy to connect to the role, click the Complete button. A popup window announcing the policy connection opens.
Classification
Detailed Description
Connected Policies
Displays the policies connected to the role
Policy
Select a policy to connect to the role from the list of policies registered in the Account
When you select a checkbox, the selected policy name is displayed at the top of the list
You can deselect a policy by clicking the X button at the top of the list or by unchecking its checkbox in the policy list
If there is no policy to connect, click the Create Policy item at the bottom of the policy list to register a new policy first
After the policy is created, refresh the policy list and select the created policy
For more information on creating a policy, see Create Policy
Table. Policy Connection Details
Click the OK button in the popup window announcing the policy connection. You can view the connected policy in the list on the Policy tab.
Disconnecting a Policy
You can disconnect the policies connected to a role.
To disconnect a policy from a role, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the role name to disconnect a policy from on the Role List page. You will be navigated to the Role Details page.
Click the Policy tab on the Role Details page. You will be navigated to the Policy tab.
Select the policy to disconnect from the policy list, then click the Disconnect button. A popup window announcing disconnection opens.
After checking the policy information to disconnect, click the OK button. The policy is disconnected.
Managing Tags
You can add, modify, or delete the role's tags.
To manage the role's tags, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the role name whose tag information you want to modify on the Role List page. You will be navigated to the Role Details page.
Click the Tags tab on the Role Details page. You will be navigated to the Tags tab.
Click the Edit Tags button on the Tags tab.
After adding or modifying tags, click the Save button. A popup window announcing tag modification opens.
You can modify the Key, Value of previously registered tags.
You can add a new tag by clicking the Add Tag button.
Clicking the X button in front of the added tag deletes that tag.
Click the OK button. You can view the modified tag information in the list.
Switching Roles
To switch roles in the Samsung Cloud Platform Console, follow these steps:
Click the profile-shaped button at the top right of the Console. The My Menu popup window opens.
Click the Switch Role button in the My Menu popup window. The Switch Role popup window opens.
Enter the role switching information in the Switch Role popup window, then click the OK button.
Classification
Required
Detailed Description
Account ID
Required
Enter the Account ID that the user wants to enter by switching roles
Role Name
Required
Enter the role name that the user wants to enter by switching roles
Alias
Optional
Name to be used when the user enters by switching roles
Color
Required
Select a color to use as the Account background when entering the role
Not selected: The existing Account background color is applied
Table. Role Switching Information Items
Click the OK button when the popup window announcing role switching opens.
Checking the Role
You can check the switched role information by clicking the profile-shaped button at the top right of the Console.
Provided Function
Description
Account ID
Account ID logged in to Samsung Cloud Platform Console
Role Name
Alias set when switching roles
If an ID Center user accesses with a role, the permission set name is displayed
The session expiration time is displayed at the bottom
Time Zone
Time zone set by the user
Example: Asia/Seoul (GMT +09:00)
Click Edit Time Zone to change it
Account
Account information
For more detailed information, please refer to Account
Cost Management
You can check the usage and billing details, payment history, and cost analysis, and manage Credits, budgets, Accounts, and payment methods
Deleting a Role
To delete a role, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Click the role name to delete on the Role List page. You will be navigated to the Role Details page.
Click the Delete Role button on the Role Details page.
The role is deleted and you will be navigated to the Role List page.
To delete multiple roles simultaneously, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Role menu on the Service Home page. You will be navigated to the Role List page.
Select the roles to delete from the role list.
After confirming the selected roles, click the Delete Role button.
The selected roles are deleted and the Role List page is refreshed.
10.4.2.5 - Credential Provider
You can access and use the Account resource through the credential provider.
Create Credential Provider
To create a credential provider, follow these steps:
Click the All Services > Management > IAM menu. You will be navigated to the Service Home page of Identity and Access Management (IAM).
Click the Credential Provider menu on the Service Home page. You will be navigated to the Credential Provider List page.
Click the Create Credential Provider button on the Credential Provider List page. You will be taken to the Create Credential Provider page.
After entering information in the Basic Information and Additional Information areas, click the Create button.
Category
Required
Detailed description
Credential Provider Name
Required
Name of the credential provider
Enter a value within 128 characters using English letters, numbers, and special characters (,-_)
Description
Optional
Enter a description of the credential provider within 1,000 characters
Type
Required
Select credential provider type
SAML: Establish trust between Samsung Cloud Platform account and SAML 2.0 compatible credential provider
Metadata
Optional
Attach metadata file provided by IdP
Click the Attach File button to upload a single file
Only UTF-8 XML documents up to 10 MB can be uploaded
Metadata must include issuer name, expiration information, and the key for verifying SAML authentication responses received from the IdP
Tags
Optional
Tags to add to the credential provider
Up to 50 tags can be added per resource
Table. Credential Provider Creation Information Input Items
Reference
The OIDC type of credential provider is scheduled to be provided in 2026.
When the popup window announcing the credential provider creation opens, click the Confirm button.
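Before uploading, the metadata constraints listed above (a UTF-8 XML document of at most 10 MB that includes the issuer) can be pre-checked locally. The sketch below uses only the Python standard library and assumes the issuer appears as the standard SAML entityID attribute on the metadata root; it is a convenience check, not a substitute for the console's own validation.

```python
import os
import xml.etree.ElementTree as ET

MAX_METADATA_BYTES = 10 * 1024 * 1024  # console limit: 10 MB

def check_saml_metadata(path):
    """Pre-check an IdP metadata file before uploading it to the console.

    Verifies the stated constraints: size <= 10 MB, UTF-8 XML, and the
    presence of an issuer (the standard SAML entityID attribute).
    Returns the entityID on success, raises on any violation.
    """
    if os.path.getsize(path) > MAX_METADATA_BYTES:
        raise ValueError("metadata exceeds 10 MB")
    with open(path, "rb") as f:
        raw = f.read()
    raw.decode("utf-8")          # raises UnicodeDecodeError if not UTF-8
    root = ET.fromstring(raw)    # raises ParseError if not well-formed XML
    entity_id = root.get("entityID")
    if not entity_id:
        raise ValueError("metadata has no entityID (issuer)")
    return entity_id
```

Expiration information and the SAML signing key are inside the document body and are validated by the IdP-specific tooling; this sketch only confirms the file is acceptable for upload.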
Checking Credential Provider Details
You can view and edit the detailed information of a credential provider.
To view detailed information of the credential provider, follow the steps below.
Click the All Services > Management > IAM menu. Navigate to the Service Home page of Identity and Access Management (IAM).
Click the Credential Provider menu on the Service Home page. You will be taken to the Credential Provider List page.
Click the credential provider to view on the Credential Provider List page. You will be taken to the Credential Provider Details page.
The Credential Provider Details page displays basic information and consists of the Basic Information and Tags tabs.
Basic Information
You can view and edit the basic information of the credential provider.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In a credential provider, it refers to the credential provider name
Resource ID
Unique Resource ID
Creator
User who created the service
Creation Time
Service Creation Time
Editor
User who modified the service information
Modification Date/Time
Date/Time when service information was edited
Credential Provider Name
Name of the Credential Provider
Click the Edit button to change the name
Type
Type of credential provider
Description
Credential provider description
Click the Edit button to change the description
Login URL
Login URL
Metadata
Metadata
Click the View Metadata button to open the currently applied metadata information in a popup window
Click the Edit button to upload a metadata file
Only UTF-8 XML documents up to 10 MB can be uploaded
Metadata must include the issuer name, expiration information, and a key for verifying SAML authentication responses received from the IdP
Table. Credential Provider Basic Information Tab Items
Reference
Credential provider information used in the ID Center cannot be modified.
Tag
You can view the tag information of the credential provider and add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
Check the Key and Value information of the tags
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing list of Keys and Values
Table. Credential Provider Tag Tab Items
Delete Credential Provider
Notice
Credential providers used in the ID Center cannot be deleted.
To delete a credential provider, follow the steps below.
Click the All Services > Management > IAM menu. Go to the Service Home page of Identity and Access Management (IAM).
Click the Credential Provider menu on the Service Home page. You will be taken to the Credential Provider List page.
Click the credential provider name to delete on the Credential Provider List page. You will be taken to the Credential Provider Details page.
Click the Delete Credential Provider button on the Credential Provider Details page.
The credential provider is deleted and you are taken to the Credential Provider List page.
To delete multiple credential providers simultaneously, follow the steps below.
Click the All Services > Management > IAM menu. Go to the Service Home page of Identity and Access Management (IAM).
Click the Credential Provider menu on the Service Home page. Navigate to the Credential Provider List page.
Check the credential providers to delete in the credential provider list.
Confirm the selected credential providers and click the Delete Credential Provider button.
The selected credential providers are deleted and the Credential Provider List page is refreshed.
10.4.2.6 - My Info.
My Info. provides basic user information and authentication key management functions.
Checking My Info.
Users can view and modify their basic information on the My Info. screen, and manage authentication keys.
To view My Info. information, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
The My Info. page displays basic information and consists of Basic Information, Users, Policies, Tags tabs.
Notice
My Info. page can also be accessed from My menu > My Info. at the top of the Console screen and from My Info. on the Console Home.
Basic Information
In the My Info. > Basic Information tab, you can view a user’s basic details and, if needed, edit the email, password, mobile phone number, password reuse limit, and time zone.
Item
Description
User Name
Name of the user
SRN
User’s SRN
Email
User’s email
Mobile Phone Number
User’s mobile phone number
Password
User’s password
Password Reuse Limit
Number of times a password cannot be reused for the user
Table: My Info. Basic Information Tab Items
Access IP Control
In the My Info. > Access IP Control tab, you can register and manage IPs that can access the Console.
Item
Description
Console Access IP Control
Whether the Access IP Control feature is enabled
Use the toggle button to switch between On and Off
When enabled, at least one IP must be registered
Access IP List
List of allowed IPs
Enter an IP to allow and click Add to register
A single IP or a CIDR block (e.g., 10.0.0.0/16) can be registered, up to 50 entries
Delete All: Removes all IPs from the list
Click the X button next to an IP to delete it
Table: Console Access IP Control Modification Items
Notice
The Access IP Control feature is available only to Root users and IAM users. ID Center users and role users cannot use it.
Even if the Access IP Control feature is not used, you can still add and manage IPs.
To use the Access IP Control feature, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
In the My Info. page, click the Access IP Control tab to go to the Access IP Control page.
On the Access IP Control page, click the Edit button of Console Access IP Control. The Password Confirmation popup appears.
Enter your password and click Confirm. The Console Access IP Control Edit popup opens.
Set the Access IP Control feature to On and register the IPs you want to allow.
After registration is complete, click Confirm.
Warning
If the password is entered incorrectly five or more times, you will be logged out automatically.
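The registration rules above (single IPs or CIDR blocks, at most 50 entries) can be checked locally before editing the list in the console. The sketch below uses Python's standard ipaddress module; the allow/deny helper mirrors the documented behaviour that On restricts access to the registered ranges (an empty list denies everything) and Off allows all IPs.

```python
import ipaddress

MAX_ACCESS_IPS = 50  # console limit per the table above

def validate_access_ip_list(entries):
    """Validate console access IP entries: single IPs or CIDR blocks,
    at most 50 in total. Returns the parsed networks."""
    if len(entries) > MAX_ACCESS_IPS:
        raise ValueError("at most 50 entries can be registered")
    # strict=False accepts a host address like 192.168.1.7 as a /32
    return [ipaddress.ip_network(e, strict=False) for e in entries]

def is_allowed(client_ip, networks, control_on):
    """Mirror the documented semantics: when control is Off, all IPs are
    allowed; when On, only registered ranges pass, and an empty list
    denies all access."""
    if not control_on:
        return True
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in networks)
```

For example, with ["10.0.0.0/16", "192.168.1.7"] registered and the control On, a client at 10.0.5.9 is allowed while 172.16.0.1 is denied.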
Modifying Basic Information
In the My Info. > Basic Information tab, you can edit email, password, mobile phone number, password reuse limit, and time zone.
Modifying Email
You can change the user’s email.
To modify the user’s email, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
In the Basic Information tab of the My Info. page, click Edit Email. The Edit Email popup appears.
In the Edit Email popup, enter the characters shown in the captcha and click Confirm.
Enter the Email and click Authenticate. An authentication code is sent to the entered Email.
Enter the authentication code sent to the entered Email and click Confirm.
In the Edit Email popup, click Confirm. The Password Confirmation popup appears.
In the Password Confirmation popup, enter the password and click Confirm. You return to the Basic Information tab.
Warning
If the password is entered incorrectly five or more times, you will be logged out automatically.
Enter your email information accurately. If the authentication code is not received, check your spam folder.
Modifying Password
You can change the user’s password.
To modify the user’s password, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
In the Basic Information tab of the My Info. page, click Edit Password. The Change Password popup appears.
In the Change Password popup, enter Current Password, New Password, and Confirm Password.
Click Confirm in the Change Password popup. You return to the Basic Information tab.
Warning
Password change precautions
If the current password is entered incorrectly five or more times, you will be logged out automatically.
Must include at least one uppercase letter, one lowercase letter, one number, and one special character (!@#$%&*^).
Length must be 9–20 characters.
Cannot use ID or username as password.
Cannot use the same character more than three times consecutively.
Cannot use easily guessable passwords.
Cannot reuse recent passwords.
Cannot have sequences of four or more consecutive characters/numbers.
Password change cycle is 90 days.
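The rules above can be checked locally before submitting a new password. This is a minimal sketch of the listed constraints; the reuse-history, 90-day cycle, and "easily guessable" checks are enforced server-side and are not reproduced here.

```python
import re

SPECIALS = "!@#$%&*^"  # allowed special characters from the policy above

def check_password(pw, user_id="", user_name=""):
    """Return a list of violated rules (empty means the password passes
    the locally checkable constraints)."""
    problems = []
    if not 9 <= len(pw) <= 20:
        problems.append("length must be 9-20 characters")
    if not re.search(r"[A-Z]", pw):
        problems.append("needs an uppercase letter")
    if not re.search(r"[a-z]", pw):
        problems.append("needs a lowercase letter")
    if not re.search(r"[0-9]", pw):
        problems.append("needs a number")
    if not any(c in SPECIALS for c in pw):
        problems.append("needs a special character (!@#$%&*^)")
    if user_id and user_id.lower() in pw.lower():
        problems.append("must not contain the ID")
    if user_name and user_name.lower() in pw.lower():
        problems.append("must not contain the username")
    if re.search(r"(.)\1{3}", pw):  # same character more than 3 times in a row
        problems.append("same character repeated too many times")
    # four or more sequential characters/numbers, e.g. abcd or 1234
    if any(all(ord(pw[i + j + 1]) - ord(pw[i + j]) == 1 for j in range(3))
           for i in range(len(pw) - 3)):
        problems.append("contains a sequence of four or more characters")
    return problems
```

For example, check_password("Ab3!xQ9z$") returns an empty list, while a password containing "abcd" or "aaaa" is reported.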
Modifying Mobile Phone Number
You can change the user’s mobile phone number.
To change the user’s mobile phone number, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
In the Basic Information tab of the My Info. page, click Change Mobile Phone Number. The Change Mobile Phone Number popup appears.
In the Change Mobile Phone Number popup, enter the captcha characters and click Confirm.
Choose a verification method for the mobile phone number:
Verify via SMS: Sends verification code via SMS.
Verify via Knox Teams: Sends verification code via Knox Teams.
Enter the new mobile phone number and click Verify.
Enter the verification code sent via SMS or Knox Teams and click Confirm.
In the Change Mobile Phone Number popup, click Confirm. The Password Confirmation popup appears.
In the Password Confirmation popup, enter the password and click Confirm. You return to the Basic Information tab.
Notice
Verify via Knox Teams is available only when using a Knox email account.
Warning
If the password is entered incorrectly five or more times, you will be logged out automatically.
Enter your mobile phone number accurately. If the verification code is not received, check your spam folder.
Modifying Password Reuse Limit
You can change the number of times a password cannot be reused for the user.
To modify the password reuse limit, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
In the Basic Information tab of the My Info. page, click Edit Password Reuse Limit. The Edit Password Reuse Limit popup appears.
In the Edit Password Reuse Limit popup, select the number of recent passwords that cannot be reused.
Click Confirm in the Edit Password Reuse Limit popup. You return to the Basic Information tab.
Modifying Time Zone
You can change the user’s time zone.
To modify the time zone, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
In the Basic Information tab of the My Info. page, click Edit Time Zone. The Edit Time Zone popup appears.
Select the desired time zone.
Click Confirm in the Edit Time Zone popup. You return to the Basic Information tab.
Managing Authentication Keys
In the My Info. > Authentication Key Management tab, you can create authentication keys and manage security settings.
Creating an Authentication Key
You can generate an authentication key for the user.
To create an authentication key, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
Click the Authentication Key Management tab on the My Info. page to go to the Authentication Key Management tab.
Click the Create Authentication Key button. You are taken to the Create Authentication Key page.
On the Create Authentication Key page, enter the Expiration Period and Usage.
The Expiration Period can be a number between 1 and 365.
Selecting Permanent for the Expiration Period makes the key usable indefinitely.
Review the authentication key creation details and click Create. You return to the Authentication Key Management tab.
Reference
You can create up to 2 authentication keys.
After creating a new authentication key, you must apply the updated API authentication key to any services you are using.
Security settings allow you to configure the authentication method and allowed access IP.
With a created authentication key, you can issue temporary keys via API, up to 5 per authentication key.
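An Access Key / Secret Key pair of this kind is typically used to sign API requests with an HMAC. The sketch below is purely illustrative: the Samsung Cloud Platform API's actual signing scheme, header names, and string-to-sign are defined in its API documentation, and everything below (the X-Cmp-* headers, the string-to-sign layout) is an assumption for demonstration only.

```python
import hashlib
import hmac
import time

def sign_request(access_key, secret_key, method, path, timestamp=None):
    """Illustrative HMAC-SHA256 request signature built from an access
    key / secret key pair. Header names and the string-to-sign are
    hypothetical, not the platform's real format."""
    if timestamp is None:
        timestamp = str(int(time.time() * 1000))
    string_to_sign = "\n".join([method, path, timestamp, access_key])
    signature = hmac.new(secret_key.encode("utf-8"),
                         string_to_sign.encode("utf-8"),
                         hashlib.sha256).hexdigest()
    return {
        "X-Cmp-AccessKey": access_key,  # hypothetical header name
        "X-Cmp-Timestamp": timestamp,   # hypothetical header name
        "X-Cmp-Signature": signature,   # hypothetical header name
    }
```

The Secret Key never leaves the client; only the derived signature is sent, which is why the console warns against exposing the key pair.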
Viewing Authentication Key Details
To view detailed information of an authentication key, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
Click the Authentication Key Management tab on the My Info. page to go to the Authentication Key Management tab.
In the Authentication Key Management tab, click the authentication key you want to view. You are taken to the Authentication Key Detail page.
The Authentication Key Detail page consists of Basic Information, User Temporary Keys, and Secret Vault Temporary Keys tabs.
Basic Information
In the Basic Information tab of the Authentication Key Detail, you can view the basic information of the selected authentication key.
Item
Description
Authentication Key Usage
Indicates whether the authentication key is in use
Click Use or Disable to set
Delete Authentication Key
Delete the authentication key
Authentication Key
Access Key and Secret Key information
Click the Authentication Key button, then enter your password in the Password Confirmation popup to view
Usage
Purpose of the authentication key
Creation Date
Date and time when the user created the authentication key
Expiration Date
Expiration date and time of the authentication key
Secret Vault
Whether Secret Vault service is used
If Secret Vault service is used, the authentication key cannot be disabled or deleted
Table: Authentication Key Management > Basic Information Items
Warning
If the password is entered incorrectly five or more times, you will be logged out automatically.
User Temporary Keys
The User Temporary Keys tab of the Authentication Key Detail displays a list of temporary keys for the selected authentication key.
Notice
Temporary keys can only be created via API; the User Temporary Keys tab allows only viewing and deletion.
Item
Description
Delete
Delete the selected temporary key from the list
Enabled when a temporary key is selected
More
View usage status of the selected temporary key
Enabled when a temporary key is selected
Access Key
Unique string for API calls
Secret Key
Security token used with the Access Key
Click View to open a Password Confirmation popup, then enter your password to view
Creation Date
Date and time when the user created the authentication key
Expiration Date
Expiration date and time of the authentication key
Status
Whether the authentication key is active
Table: Authentication Key Management > User Temporary Key Details
Warning
If the password is entered incorrectly five or more times, you will be logged out automatically.
Secret Vault Temporary Keys
The Secret Vault Temporary Keys tab of the Authentication Key Detail displays a list of Secret Vault temporary keys for the selected authentication key.
Notice
This tab is visible only when the Secret Vault service is in use.
Temporary keys can only be created via API; the Secret Vault tab allows only viewing and deletion.
Item
Description
Delete
Delete the selected temporary key from the list
Enabled when a temporary key is selected
More
View usage status of the selected temporary key
Enabled when a temporary key is selected
Access Key
Unique string for API calls
Secret Key
Security token used with the Access Key
Click View to open a Password Confirmation popup, then enter your password to view
Creation Date
Date and time when the user created the authentication key
Expiration Date
Expiration date and time of the authentication key
Configuring Security Settings
In the security settings, you can set the authentication method and Allowed Access IP used for API calls.
Authentication Method: Access is allowed only when the authentication method set for the API call matches.
Temporary key: Authentication using a temporary key issued with an authentication key and verification code.
Authentication key: Authentication using the authentication key created in the Console.
Allowed Access IP: IPs from which access is allowed
When On, only the specified IP ranges are allowed.
If On is set but no IPs are registered, all IPs are denied.
When Off, all IPs are allowed.
Up to 50 IPs can be registered.
An IP address or CIDR block can be entered.
Review the authentication key security settings and click Confirm. You return to the Authentication Key Management tab.
Reference
South Korea (kr-south) region limitation
When Allowed Access IP is set to On, only IP addresses can be entered. CIDR cannot be entered.
Warning
It is recommended to use temporary key authentication and enable Allowed Access IP.
When authenticating with an authentication key, email or SMS verification steps are omitted, which may pose security risks.
If Allowed Access IP is not used, any IP can connect, posing security risks.
When Allowed Access IP is used, if no IPs are registered, all access is blocked.
Authentication keys with Secret Vault temporary keys cannot be disabled or deleted until the Secret Vault service is terminated for each region within the account.
Deleting an Authentication Key
Notice
An authentication key can be deleted only when it is in the Disabled state. Disable the key before deletion.
If the Secret Vault service is used, the authentication key cannot be disabled. Terminate the Secret Vault service first.
To delete an authentication key, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
Click the Authentication Key Management tab on the My Info. page to go to the Authentication Key Management tab.
In the authentication key list on the Authentication Key Management tab, click the authentication key you want to delete. You are taken to the Authentication Key Detail page.
On the Authentication Key Detail page, click the Delete Authentication Key button.
The authentication key is deleted and you return to the Authentication Key Management tab.
To delete multiple keys at once, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu to go to the My Info. page.
Click the Authentication Key Management tab on the My Info. page to go to the Authentication Key Management tab.
In the authentication key list on the Authentication Key Management tab, check the authentication keys you want to delete.
Confirm the selected authentication keys and click the Delete Authentication Key button.
The selected authentication keys are deleted and the Authentication Key Management tab refreshes.
Managing Access IPs
In the My Info. > Access IP Control tab, you can register and manage IPs that can access the Console.
The Access IP Control feature allows you to restrict Console access to registered IP ranges only.
Notice
The Access IP Control feature is available only to Root users and IAM users. ID Center users and role users cannot use it.
Even if the Access IP Control feature is not used, you can still add and manage IPs.
To use the Access IP Control feature and manage IPs, follow these steps.
Click the All Services > Management > IAM menu. This navigates to the Service Home page of Identity and Access Management (IAM).
On the Service Home page, click the My Info. menu. The My Info. page opens.
In the My Info. page, click the Access IP Control tab. The Access IP Control page opens.
On the Access IP Control page, click the Edit button of Console Access IP Control. The Password Confirmation popup appears.
Enter your password and click Confirm. The Console Access IP Control Edit popup opens.
After setting the Access IP Control feature to On, register the IPs you want to allow.
After registration is complete, click Confirm.
Warning
If the password is entered incorrectly five or more times, you will be logged out automatically.
10.4.2.7 - JSON Writing Guide
Policies are divided into identity-based policies and resource-based policies.
Identity-based policy: Policy granted to a principal (subject) that performs actions such as users, groups, roles, etc.
Resource-based policy: Policy granted to a resource that determines whether to allow or deny (Effect) actions on a specific resource to a principal (subject)
Note
Generally, identity-based policies do not need to specify a separate Principal attribute, but resource-based policies must specify a Principal attribute.
Resource-based Policy
A resource-based policy is a policy that grants permission to a specified principal (requester) to perform specific operations on that resource.
Resource-based policies are therefore attached directly to resources; only the principals defined in the policy can exercise it, and the principal to whom the policy is granted becomes the security principal.
Warning
Resource-based policies specify the principal through the Principal attribute, so you must enter the Principal attribute when creating the policy.
Statement.Principal specifies the principal that is allowed or denied access to the resource in a resource-based policy.
The principals that can be specified in the Principal element are as follows:
Root user
IAM user
IAM role
Service account
Warning
Principal can have one or more values; when there are two or more, write them as an array.
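As an illustration, the shape of the Principal element can be sketched as a Python dict. Every identifier below (the SRN strings, account number, action, and resource names) is invented for illustration and is not taken from this guide.

```python
# A hypothetical resource-based policy, sketched as a Python dict purely to
# show the Principal element written as an array. All names are illustrative.
policy = {
    "Statement": [
        {
            "Effect": "Allow",
            # Two or more principals must be written as an array
            "Principal": {
                "SRN": [
                    "srn:scp:iam::100000000000:root",
                    "srn:scp:iam::100000000000:user/alice",
                ]
            },
            "Action": ["objectstorage:GetObject"],
            "Resource": ["srn:scp:objectstorage:::example-bucket/*"],
        }
    ],
}

principals = policy["Statement"][0]["Principal"]["SRN"]
# With two or more principals, the value is a JSON array (a Python list here)
assert isinstance(principals, list) and len(principals) == 2
```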
Statement.Action defines the action to be evaluated in the policy check.
Write with case sensitivity.
Write the action in the format of the action name defined in the action definition.
Warning
Only actions of the service providing the corresponding resource can be entered as the action (however, actions provided as common functions, such as adding tags and integrated resource lookup, can also be added).
Statement.Resource defines the SRN that specifies a specific resource or set of resources to which the policy applies.
Write with case sensitivity.
Write resource_expression in wildcard ("*") or SRN format.
Warning
The SRN of the resource to which the resource-based policy is granted must be included, and if there are sub-resources of that resource, they can be written including sub-resources.
Resources can be written in Resource only for resources described in the action definition defined in the policy; undefined resources are ignored during policy evaluation.
Example of a policy resource definition for a single resource
For multiple resources, the resources definition form of the action definition (action_definition) for user policy lookup
When defining multiple different resources, define each resource type written in the policy.
Warning
During policy evaluation, the policy is judged successful only if the content written in the policy satisfies the conditions for the resources defined in the action definition file.
If the policy does not cover all resources defined in the action definition file, it is judged as not meeting the policy condition.
Statement.Condition defines application conditions for a specific target to which the policy applies within the policy.
Write with case sensitivity.
Write a condition expression that compares the attribute condition key (or global condition key) and value defined in the policy against the actual request (or resource attribute) value, using condition operators.
If two or more condition keys are defined, they are combined with an AND operation.
condition-value
Required (depends on operator)
Policy condition value
qualifier
Optional
Qualifier applied when two or more condition values are extracted from the request context; defines how the operand and comparison condition are evaluated
Table. Description of Statement.Condition Option Items
Guide
When 2 or more values are defined for a Condition Key of the same Condition Operator, the judgment between Values operates as OR. However, if the Operator is of Negative Operator type, the operation operates as NOR, not OR.
Positive Operator type and example (when userName is “foo” or “bar” and company is “Samsung”)
"Condition": {
"StringEquals": {
"iam:userName": [ # When User's name is foo or bar
"foo", "bar"
],
"iam:userCompany": [ # When User's company is Samsung
"Samsung"
]
}
}
Negative Operator type and example (all IPs where IP is not in the 1.1.1.1/24 and 2.2.2.2/24 ranges)
"Condition": {
"NotIpAddress": {
"scp:SourceIp": [ # When request IP is neither 1.1.1.1 nor 2.2.2.2
"1.1.1.1/24", "2.2.2.2/24"
]
}
}
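The OR and NOR behavior shown in the two examples above can be sketched in Python. The helper functions below are not part of the platform; they only illustrate how multiple values under one condition key are combined for a positive operator (OR) versus a negative operator (NOR).

```python
import ipaddress

def eval_string_equals(values, actual):
    """Positive operator sketch: multiple values for one key combine as OR."""
    return any(actual == v for v in values)

def eval_not_ip_address(cidrs, actual_ip):
    """Negative operator sketch: multiple values combine as NOR
    (true only when the IP matches none of the listed ranges)."""
    ip = ipaddress.ip_address(actual_ip)
    return not any(ip in ipaddress.ip_network(c, strict=False) for c in cidrs)

# Positive: userName is "foo" or "bar"
assert eval_string_equals(["foo", "bar"], "foo") is True
assert eval_string_equals(["foo", "bar"], "baz") is False

# Negative: all IPs outside both ranges
assert eval_not_ip_address(["1.1.1.1/24", "2.2.2.2/24"], "3.3.3.3") is True
assert eval_not_ip_address(["1.1.1.1/24", "2.2.2.2/24"], "1.1.1.7") is False
```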
String operators
Condition Operator
Operator Type
Description
StringEquals
Positive Operator
Case sensitive match
StringNotEquals
Negative Operator
Case sensitive mismatch
StringLike
Positive Operator
Case sensitive match, wildcard with multi-character match (*) can be included in value
StringNotLike
Negative Operator
Case sensitive mismatch, wildcard with multi-character match (*) can be included in value
Table. String Operators
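A minimal sketch of the StringLike semantics described above, assuming that `*` matches any run of characters and all other characters are literal (the function name is illustrative, not a platform API):

```python
import re

def string_like(pattern, value):
    """Case-sensitive match where '*' matches any run of characters.
    Sketch only: characters other than '*' are treated literally."""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*")) + "$"
    return re.match(regex, value) is not None

assert string_like("srn:scp:iam::*:user/*", "srn:scp:iam::100:user/alice")
assert not string_like("Foo*", "foo-bar")      # case sensitive
assert not string_like("prod-*", "dev-01")     # StringNotLike is the negation
```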
Numeric operators
Condition Operator
Operator Type
Description
NumericEquals
Positive Operator
Match
NumericNotEquals
Negative Operator
Mismatch
NumericLessThan
Positive Operator
Less than match
NumericLessThanEquals
Positive Operator
Less than or equal match
NumericGreaterThan
Positive Operator
Greater than match
NumericGreaterThanEquals
Positive Operator
Greater than or equal match
Table. Numeric Operators
Date operators
Condition Operator
Operator Type
Description
DateEquals
Positive Operator
Match specific date
DateNotEquals
Negative Operator
Mismatch
DateLessThan
Positive Operator
Match before specific date/time
DateLessThanEquals
Positive Operator
Match on or before specific date/time
DateGreaterThan
Positive Operator
Match after specific date/time
DateGreaterThanEquals
Positive Operator
Match on or after specific date/time
Table. Date Operators
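The date operators compare against the UTC ISO 8601 format used elsewhere in this guide (e.g. 2025-11-06T16:10:38Z). A sketch of DateLessThan and DateGreaterThanEquals, with a hypothetical parser helper:

```python
from datetime import datetime, timezone

def parse_scp_time(s):
    """Parse the UTC ISO 8601 form used in policies, e.g. 2025-11-06T16:10:38Z."""
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

request_time = parse_scp_time("2025-11-06T16:10:38Z")
cutoff = parse_scp_time("2025-12-01T00:00:00Z")

# DateLessThan: true when the request time is before the condition value
assert (request_time < cutoff) is True
# DateGreaterThanEquals: true on or after the condition value
assert (request_time >= cutoff) is False
```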
Bool operators
Condition Operator
Operator Type
Description
Bool
Positive Operator
True, False match
Table. Bool Operators
IP operators
Condition Operator
Operator Type
Description
IpAddress
Positive Operator
Specified IP address or range
NotIpAddress
Negative Operator
All IP addresses except specified IP address or range
Table. IP Operators
SRN operators
Condition Operator
Operator Type
Description
SrnEquals, SrnLike
Positive Operator
SRN match
SrnNotEquals, SrnNotLike
Negative Operator
SRN mismatch
Table. SRN Operators
Null operators
Condition Operator
Operator Type
Description
Null
Positive Operator
When key is missing or value is null → True
When key exists and value is not null → False
Table. Null Operators
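The Null operator result described in the table above can be sketched directly (the helper and context dict are illustrative only):

```python
def is_null(context, key):
    """Sketch of the Null operator result described above:
    True when the key is missing or its value is null, else False."""
    return context.get(key) is None

ctx = {"scp:SourceIp": "1.1.1.1", "scp:RequestTag/env": None}
assert is_null(ctx, "scp:MultiFactorAuthPresent") is True   # key missing
assert is_null(ctx, "scp:RequestTag/env") is True           # value is null
assert is_null(ctx, "scp:SourceIp") is False                # key present, not null
```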
Condition Key
Condition keys are divided into global condition keys and resource attribute keys.
Note
Condition keys are not case sensitive.
Global Condition Key
A condition key predefined in Samsung Cloud Platform that defines data such as request information, common resource information (e.g., tags), and network information
scp:MultiFactorAuthPresent
Whether the request was made through MFA authentication
"scp:MultiFactorAuthPresent" : ["True"]
scp:RequestedRegion
string
single
Request region
"scp:RequestedRegion" : ["kr-west1"]
scp:RequestAttribute/{AttributeKey}
string
single
Request attribute value ({AttributeKey}: body, query, header)
"scp:RequestAttribute/body['foo']" : ["true"]
scp:TagKeys
string
single / multiple
Request tag key
"scp:TagKeys" : ["tag-key"]
scp:RequestTag
string
single
Request tag key value
"scp:RequestTag/tag-key" : ["tag-value"]
scp:ResourceTag/{TagKey}
string
single
Resource tag key value
"scp:ResourceTag/foo" : ["bab"]
scp:SourceIp
ip_address
single
IP of the subject currently requesting
"scp:SourceIp" : ["1.1.1.1/24"]
scp:CurrentTime
datetime
single
Request time (UTC based, ISO 8601 format)
"scp:CurrentTime" : ["2025-11-06T16:10:38Z"]
Table. Types and Formats of Supported Global Condition Keys
Resource Attribute Key
An attribute key for a specific resource, used when checking condition values based on resource attribute values.
"{service}:{resource_type}{attribute_name}"
Guide
Resource attributes can only be defined for attributes marked abac:true in the Resource definition; if an undefined attribute value is entered, that condition policy is ignored (Not found).
Resource attribute name usage example
"iam:userLastname" (O) # Attribute name defined in resource (service: iam, resource: user, attribute_name : lastname)
"iam:userLASTNAME" (O) # Attribute name defined in resource (case insensitive)
"iam:userLast_name" (X) # If not an attribute name defined in resource
"iam:userEmail" (X) # If abac is false
"iam:state" (X) # If abac field is not defined
Resource attribute names use attribute data defined in attributes defined in Resource definition.
For more information about Resource definition, see the Resource Definition guide.
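The "{service}:{resource_type}{attribute_name}" form and the case-insensitivity of condition keys can be sketched as follows (the helper function is illustrative, not a platform API):

```python
def attribute_key(service, resource_type, attribute_name):
    """Build a resource attribute key in the
    "{service}:{resource_type}{attribute_name}" form described above."""
    return f"{service}:{resource_type}{attribute_name}"

key = attribute_key("iam", "user", "Lastname")
assert key == "iam:userLastname"
# Condition keys are not case sensitive, so lookups should compare
# case-insensitively (e.g. "iam:userLASTNAME" refers to the same key)
assert key.lower() == "iam:userLASTNAME".lower()
```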
Condition Key Definition Example
Global condition key example: A policy that allows group detail lookup only when the value of the key (Environment) of a specific policy resource tag is “Local” or “Dev”
When multiple policy condition values are defined, each condition value operates as OR.
"Condition" : {
"StringEquals" : {
"scp:resourceTag/key1": ["value1", "value2", "value3"] # When the value of the resource tag key is key1 is value1 or value2 or value3
}
Qualifier
Defines the operation method when the request context value extracted from the Condition key has multiple values (omit when request context value is 1).
Qualifiers are divided into ForAnyValue and ForAllValues; if no qualifier is written, ForAnyValue is the default.
ForAnyValue: True when at least one of the values extracted from the request context matches the Operand defined in the Condition
ForAllValues: True when the values extracted from the request context are a subset of the Operand list defined in the Condition
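The two qualifier behaviors above can be sketched as set-style checks. The helper functions and sample tag values below are illustrative only:

```python
def for_any_value(request_values, operand, match):
    """ForAnyValue sketch: True when at least one request-context value
    matches some operand value defined in the Condition."""
    return any(match(v, o) for v in request_values for o in operand)

def for_all_values(request_values, operand, match):
    """ForAllValues sketch: True when every request-context value matches
    some operand value (the request values form a subset of the operand list)."""
    return all(any(match(v, o) for o in operand) for v in request_values)

def eq(a, b):
    return a == b

tags = ["env", "team"]                      # values extracted from the request
allowed = ["env", "team", "cost-center"]    # operand list written in the Condition

assert for_any_value(tags, allowed, eq) is True
assert for_all_values(tags, allowed, eq) is True
assert for_all_values(["env", "owner"], allowed, eq) is False  # "owner" not allowed
```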
The user can switch from their account to another role to access the Account.
Credential provider feature has been added.
You can create an identity provider and access the Account resource in the Console through the created identity provider.
You can directly connect users and policies.
When creating a policy, you can add conditions for attribute-based access control (ABAC).
2025.04.28
FEATURE My Info feature change
The mandatory conditions for creating a user password have been changed.
When modifying the authentication key, CIDR input is applied selectively.
When the user’s email or phone number is changed, a password re-confirmation procedure has been added.
2025.02.27
FEATURE IAM user group, user, policy, and authentication key feature changes
IAM (Identity and Access Management) function changes
Added user group, user, and policy creation functions.
The app authentication key and storage authentication key have been integrated into a single authentication key.
Samsung Cloud Platform common feature changes
Common CX changes to Account, Service Home, tags, and other shared features have been applied.
2024.07.02
NEW Official Release of IAM Service
The IAM (Identity and Access Management) service has been officially released.
Provides user authentication and authorization management
Provides access control policy management
2024.07.02
NEW IAM Beta Service Release
The IAM (Identity and Access Management) beta service has been released.
Provides user authentication and authorization management
Provides access control policy management
10.5 - ID Center
10.5.1 - Overview
Service Overview
ID Center is a service that allows you to easily manage access permissions for account-based resources on the Samsung Cloud Platform from a central location.
By creating permission policies for each service and assigning users the accounts and policies linked through the Organization service, you can ensure that tasks are performed according to each user's permissions.
Features
Easy Access Control: Through SAML (Security Assertion Markup Language)-based credential authentication, users can access the resources of multiple accounts within the organization, with authentication and authorization granted by the Samsung Cloud Platform.
Efficient Account Management: Integrated management of costs and resource usage from all accounts owned by the organization is possible by linking with the Organization service.
Account Security Enhancement: Security can be enhanced by allowing only authorized ID Center users to access through the Access Portal, which is provided separately from the Samsung Cloud Platform Console. Through the Access Portal, it is possible to prevent other users outside the customer organization from accessing the account in the first place.
Composition
Figure. ID Center Configuration Diagram
Provided Features
ID Center provides the following functions.
User and User Group Management: You can create users and user groups and configure service-specific permission management policies. Users must have MFA (Multi-Factor Authentication) applied to strengthen account access management.
Account Assignment Management: You can assign and manage accounts corresponding to each user’s task.
Permission Set Management: You can create and manage permission sets using default policies or custom policies for each account, or by configuring policies directly.
Access Portal Provided: An Access Portal is provided instead of Samsung Cloud Platform Console, allowing only ID Center users to access.
Components
User
The administrator can create users and add them to user groups. The administrator can automatically generate or manually create user passwords and provide users with Access Portal connection information. Additionally, administrators can assign users to accounts that match each task.
User Group
You can link users and accounts through user groups. Configure user groups suited to each task and register users to assign them to accounts.
ID Center is available in the following regions.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South (kr-south1)
Provided
Korea South 2 (kr-south2)
Provided
Korea South 3 (kr-south3)
Not provided
Table. ID Center Regional Provision Status
Preceding Service
This is a list of services that must be pre-configured before creating this service. Please refer to the guide provided for each service and prepare in advance.
You can change the credential source type or change and manage the setting values of the AD (Active Directory) type.
AD (Active Directory) Apply for Integration
To use a user-managed AD (Active Directory) integration, you must first prepare the VPC and Load Balancer, then submit a request via SR.
To apply for AD integration, follow the steps below.
Secure a VPC to integrate with the user’s AD.
If network connection is required, connect to the network where the user’s AD exists via the Direct Connect service.
Add the IP to be linked with AD as a member of the connected resources of the LB server group.
Create a Listener from the connected resources of the Load Balancer and connect the LB server group.
Guide
Through the Load Balancer service, call information for AD synchronization from ID Center can pass through the user’s VPC to call the user’s AD.
For detailed information on creating and using the Load Balancer service, please refer to Using Load Balancer Service.
Configure the PrivateLink Service in the user’s VPC.
Create the PrivateLink Service of the user VPC that will be called from ID Center.
When creating a PrivateLink Service, select the Load Balancer created in step 2 as the connection resource.
When the preparation work is finished, click the All Services > Management > Support Center menu. Move to the Service Home page.
From the Service Home page, click the Service Request menu. Navigate to the Service Request List page.
Click the Service Request button on the Service Request List page.
Select and enter the information required for the service request.
Category
Required
Detailed description
Title
Required
Title for service request
Enter within 64 characters using Korean, English, numbers, special characters (+=,.@-_)
Region
Required
Select the region to request the service
Service
Required
Select the ID Center service in the Management service group
Work Category
Required
Select ID Center AD Integration Request
Content
Required
Enter the information for the ID Center AD integration request
Table. ID Center AD Integration Application Items
Check the input information and click the Request button.
When creation is complete, check on the Service Request List page.
Notice
After requesting the service, you cannot edit or delete the content you wrote.
Change Credential Source Type
You can change the credential source or modify the settings.
Caution
If you change the credential source, all configuration information and resources such as previously set users, user groups, account assignments, and permission sets will be deleted.
Follow the steps below to change the credential source type.
Click the All Services > Management > ID Center menu. Navigate to the Service Home page of ID Center.
Click the ID Center Settings menu on the Service Home page. Navigate to the ID Center Details page.
On the ID Center Details page, click the Edit button of the Credential Source item. The Credential Source Change popup opens.
After selecting the credential source type to use, click the Confirm button. A popup window notifying the credential source change will open.
Category
Detailed description
ID Center own directory
Use directory within ID Center
No separate setting items
AD (Active Directory)
Use Active Directory that the user manages directly
Connection URL: Enter the LDAP server address (e.g., ldap:// or ldaps://)
Bind DN: Enter the DN (Distinguished Name) of the administrator or service account used to access the LDAP server
Bind credentials: Enter the password for the account corresponding to the Bind DN
User DN: Enter the directory path where the user account is located (e.g., OU=Employees, OU=Accounts, DC=sub, DC=org)
Username LDAP attribute: Enter the user account identifier (e.g., sAMAccountName, uid)
RDN LDAP attribute: Enter the RDN (Relative Distinguished Name, the top-level attribute in the user DN)
User object classes: Enter the list of LDAP classes that define user objects, separated by commas (,) (e.g., person, organizationalPerson, user)
Table. Credential Source Type Change Items
After reviewing the precautions for the change and selecting the checkbox, click the Confirm button. You are taken to the Service Home page and the credential source type change begins.
The change time varies depending on the scale; you can confirm completion through a notification.
You cannot navigate to another menu page while the change is in progress.
AD (Active Directory) Information Synchronization
You can synchronize AD information.
Reference
AD information is automatically synchronized daily from 0:00 to 06:00 (Asia/Seoul, GMT +09:00).
If a new AD information connection is required, click the AD Reset button to change the AD information, then synchronize.
To synchronize AD information, follow the steps below.
Click the All Services > Management > ID Center menu. Navigate to the Service Home page of ID Center.
Click the ID Center Settings menu on the Service Home page. Navigate to the ID Center Details page.
On the ID Center Details page, click the Sync button next to the sync time of the Credential Source item. The AD Information Sync popup window opens.
After checking the synchronization notification, click the Confirm button. AD information synchronization will start.
The change time varies depending on the scale.
Manage Permissions
You can delegate the administrative rights of the ID Center to another Account, or revoke the delegated rights.
Delegating Permissions
You can delegate the management authority of the ID Center to another account.
Follow the steps below to delegate management rights to another account.
Click the All Services > Management > ID Center menu. Navigate to the Service Home page of ID Center.
Click the ID Center Settings menu on the Service Home page. Navigate to the ID Center Details page.
Click the Permission Delegation button on the ID Center Details page. You will be taken to the Permission Delegation page.
The Permission Delegation button is displayed only when no Account currently has delegated authority.
On the Permission Delegation page, select the account to delegate authority to, then click the Complete button.
Category
Detailed description
Account name
Account name
Account ID
Account’s ID
Email
Account email
Added Date
Date and time the Account was created or registered in the Organization
Add Type
Method of adding Account in Organization
Create: Add by creating new on Add Account page
Join: Add an already created Account
Table. ID Center Delegated Authority Account List
Note
When you click the View Hierarchy button, you can view the Account list in a hierarchical structure.
Cancel Delegation
You can revoke the administrative privileges of the ID Center delegated to another Account.
To cancel the delegation of administrative authority, follow the steps below.
Click the All Services > Management > ID Center menu. Navigate to the Service Home page of ID Center.
Click the ID Center Settings menu on the Service Home page. Navigate to the ID Center Details page.
Click the Cancel Delegation button on the ID Center Details page.
If a popup notifying the revocation of delegation opens, click the Confirm button.
ID Center Delete
Caution
An account delegated with the management of the ID Center cannot delete the ID Center.
Caution
When ID Center is deleted, all users, user groups, and permission sets within the ID Center are deleted, and all entries assigned to the Account are deleted.
To delete the ID Center, follow these steps.
Click the All Services > Management > ID Center menu. Go to the Service Home page of ID Center.
Click the ID Center Settings menu on the Service Home page. Navigate to the ID Center Settings page.
On the ID Center Settings page, click the ID Center Delete button. The ID Center Delete popup window opens.
In the ID Center Delete popup window, enter the name of the ID Center to delete, then click the Confirm button. You are taken to the Service Home page.
ID Center deletion time varies depending on the scale; you can confirm via notification when deletion is complete.
While the ID Center is being deleted, you cannot navigate to other menu pages.
10.5.2.1 - ID Center User Management
You can view and manage ID Center users.
Create User
You can create a user and add it to the ID Center.
To create a user, follow these steps.
Click the All services > Management > ID Center menu. You are taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You are taken to the User List page.
On the User List page, click the Create User button. You are taken to the Create User page.
On the Create User page, enter the basic information and additional information, then click the Complete button.
Classification
Necessity
Detailed Description
Username
Required
Enter the user’s name
Use English letters, numbers, and special characters (+=-_@,.) within 128 characters
The username cannot be changed after creation
Description
Optional
Enter a description of the user within 1,000 characters
Password
Required
Password creation method selection
Automatic generation: Automatically generate a password and provide it in a popup window when user creation is complete
Direct input: Refer to the password creation rules and enter directly
User Real Name
Required
Enter the user's real last name and first name
Affiliation Information Input
Optional
Enter business unit, department, administrator, and employee number information, each within 128 characters
User Group Selection
Optional
Select the user group to which you want to add users
Password Creation Rules
Uppercase letters (English), lowercase letters (English), numbers, and special characters (!@#$%&*^) must each be included at least once.
The length is 9~20 characters.
ID or username cannot be used as a password.
The same character cannot be used three times or more.
Easily guessable passwords cannot be used.
Recently used passwords cannot be used.
4 characters or more of continuous characters/numbers cannot be used.
The password change cycle is 90 days.
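Several of the rules above can be checked mechanically. The sketch below covers only the structural rules; it interprets "the same character three times" as three in a row and "continuous characters" as an ascending run of four (both assumptions), and omits the ID, history, and guessability rules, which need external context. The function name and sample passwords are illustrative.

```python
import re

SPECIALS = "!@#$%&*^"

def check_password(pw):
    """Partial sketch of the stated password rules (see assumptions above)."""
    if not (9 <= len(pw) <= 20):
        return False
    if not (re.search(r"[A-Z]", pw) and re.search(r"[a-z]", pw)
            and re.search(r"[0-9]", pw) and any(c in SPECIALS for c in pw)):
        return False
    if re.search(r"(.)\1\1", pw):                      # same char 3+ times in a row
        return False
    for i in range(len(pw) - 3):                       # ascending run of 4 chars
        if all(ord(pw[i + j + 1]) - ord(pw[i + j]) == 1 for j in range(3)):
            return False
    return True

assert check_password("Sunny#Cat7") is True
assert check_password("short1!A") is False      # fewer than 9 characters
assert check_password("Abcd1234!x") is False    # contains the ascending run "1234"
```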
When the popup window notifying the user addition opens, click the Confirm button. The ID Center User Login Information popup window opens.
Check the user login information in the ID Center User Login Information popup window, then click the Confirm button.
Classification
Detailed Description
Access Portal URL
URL information to access the Access Portal
User Name
Created User Name
Password
The created user's password
Click the view icon to check the password
Excel Download
Download ID Center user login information as an Excel file
Email transmission
An Excel file containing ID Center user login information is sent via email
After clicking the button, enter the email address to receive the email
Table. ID Center user login information items
Check user details
You can check and manage detailed information about the user, user groups, and account information.
To check the user details, follow these steps.
Click the All services > Management > ID Center menu. You are taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You are taken to the User List page.
On the User List page, click the username whose details you want to check. You are taken to the User Details page.
The User Details page displays basic information and consists of the Basic Information, User Group, and Account tabs.
Basic Information
You can check the user’s basic information and modify the user’s description and options if necessary.
Category
Detailed Description
Delete User
Button to delete the user
Deletion is not possible for an AD-linked account
User Name
The user's name
User Real Name
The user's actual name
Click the Edit button to modify the name
Modification is not possible for an AD-linked account
Description
A description of the user
Click the Edit button to modify the description
Last Login
The time when the user last logged in
Password
The time when the password was last changed
Cannot be checked for an AD-linked account
Click the Edit button to change the password
For more information, see Change Password
Password Reuse Restriction
The number of recently used passwords that cannot be set as a new password
Cannot be checked for an AD-linked account
Email
Email authentication status
For an AD-linked account, the email information provided by AD is displayed and cannot be modified
Mobile Phone Number
Mobile phone number authentication status
Affiliation Information
The user's business unit, department, administrator, and employee number information
Cannot be checked for an AD-linked account
Click the Edit button to modify affiliation information
Table. User's basic information tab items
User Group
You can check the user groups the user is registered in and add or exclude user groups as needed.
Reference
For details on user groups, refer to User Group.
Category
Detailed Description
Exclusion
Exclude the selected user group from the user group list
Table. User's User Group tab items
Account
You can check the accounts assigned to the user and manage the applied permission sets.
Category
Detailed Description
Permission Set
Number of permission sets applied to the Account
Place the mouse cursor on a permission set to open a popup that allows you to exclude the permission set
Application Method
Method of applying the permission set to the Account
Direct: policies directly linked to the Account
Group: policies linked through user groups
Table. User's Account tab items
Change password
You can change the user’s password.
To change the user's password, follow these steps.
Click the All services > Management > ID Center menu. You are taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You are taken to the User List page.
On the User List page, click the username whose password you want to change. You are taken to the User Details page.
On the User Details page, click the Edit button of the Password item. The Password Reset popup window opens.
In the Password Reset popup window, set the password, then click the Confirm button. The ID Center User Login Information popup window opens.
Auto Generation: Automatically generate a password
Direct Input: Refer to the password creation rules and enter directly
Password Creation Rules
Uppercase letters (English), lowercase letters (English), numbers, special characters (!@#$%&*^) must each be included at least once.
The length is 9~20 characters.
ID or username cannot be used as a password.
The same character cannot be used three times or more.
Easily guessable passwords cannot be used.
Recently used passwords cannot be used.
4 characters or more of continuous characters/numbers cannot be used.
The password change cycle is 90 days.
In the ID Center User Login Information popup window, check the user information, then click the Confirm button.
Classification
Detailed Description
Access Portal URL
URL information to access the Access Portal
User Name
Created User Name
Password
The created user's password
Click the view icon to check the password
Excel Download
Download ID Center user login information as an Excel file
Email transmission
An Excel file containing ID Center user login information is sent via email
After clicking the button, enter the email address to receive the email
Table. ID Center user login information items
Add user group
You can add a new user group.
To add a user group, follow these steps.
Click the All services > Management > ID Center menu. You are taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You are taken to the User List page.
On the User List page, click the username to add to a user group. You are taken to the User Details page.
On the User Details page, click the User Group tab. The user group list is displayed.
Click the Add User Group button. You are taken to the Add User Group page.
On the Add User Group page, select the user group to add from the user group list, then click the Complete button.
Classification
Necessity
Detailed Description
Added user group
-
Name of the user group that the user was added to
User Group
Required
Select a user group to add users to
If selected, add to Added User Group item
Table. Items to Add User Group
When the popup window notifying the addition of a user group opens, click the Confirm button.
Add permission set
You can add a set of permissions to the Account.
To add a permission set to the Account, follow these steps.
Click the All services > Management > ID Center menu. You are taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You are taken to the User List page.
On the User List page, click the username to add the permission set to. You are taken to the User Details page.
On the User Details page, click the Account tab. The list of accounts is displayed.
Select the Account to add a permission set to from the Account list, then click the Add Permission Set button. You are taken to the Add Permission Set page.
In the permission set list on the Add Permission Set page, select the permission set to add, then click the Complete button.
Classification
Necessity
Detailed Description
Selected Account
-
Name of the Account to add the permission set to
Applied permission set
-
Name of the permission set applied to the selected Account
Permission Set
Required
Select one or more permission sets to apply to the Account
When selected, it is added to the Applied Permission Set item
Table. Adding Permission Set Items
When the popup window notifying the addition of the permission set opens, click the Confirm button.
Account assignment
You can assign a new Account to the user.
To assign a new Account, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You will be taken to the User List page.
On the User List page, click the username to assign the Account to. You will be taken to the User Details page.
On the User Details page, click the Account tab. The list of Accounts will be displayed.
Click the Account Assignment button. You will be taken to the Account Assignment page.
On the Account Assignment page, select the Account to assign and the permission set to apply to the Account, then click the Complete button.
Classification
Mandatory
Detailed Description
Account Selection
Required
Select the Account to be assigned to the user
Hierarchical Structure View: Display Accounts in the form of the organization’s hierarchical structure
Account List View: Display Accounts in a list format
Permission Set Selection
Required
Select the permission set to be applied to the selected Account
Table. Assigning User Account Items
Notice
If there is no IAM policy name that matches the custom policy name of the selected permission set, you cannot assign an Account.
When the popup window notifying the Account assignment opens, click the Confirm button.
Delete user
To delete a user, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User menu. You will be taken to the User List page.
Select one or more users to delete from the user list.
After confirming the selected users, click the Delete button.
You can also delete users individually from the User Details page of the user to be deleted.
When the popup window notifying the user deletion opens, click the Confirm button.
10.5.2.2 - ID Center User Group Management
You can check and manage the user groups of ID Center.
Create a user group
You can create a user group and add it to the ID Center.
To create a user group, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User Group menu. You will be taken to the User Group List page.
On the User Group List page, click the Create User Group button. You will be taken to the Create User Group page.
On the Create User Group page, enter the basic information and additional information, then click the Complete button.
Classification
Necessity
Detailed Description
User Group Name
Required
Enter the name of the user group
Enter 3-30 characters using English letters, numbers, and special characters (+=-_@,.)
Description
Select
Enter a description of the user group within 1,000 characters
Add User
Select
Select a user to add to the user group
Displays a list of users registered in the account
If AD is linked and there are no users to add, add the user in the AD provider and run synchronization on the ID Center Settings > Credential Source page
Table. User Group Creation Information
When the popup window notifying the user group creation opens, click the Confirm button.
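The user group name rule from the table above (English letters, numbers, the special characters +=-_@,. and a length of 3-30) can be checked ahead of time before submitting the form. A minimal sketch follows; the function name is illustrative, not part of the Samsung Cloud Platform API.

```python
import re

# Pattern for the user group name rule described above: English letters,
# numbers, and the special characters +=-_@,. with a length of 3-30.
USER_GROUP_NAME = re.compile(r"^[A-Za-z0-9+=\-_@,.]{3,30}$")

def valid_user_group_name(name: str) -> bool:
    """Return True if the name satisfies the stated user group name rule."""
    return USER_GROUP_NAME.fullmatch(name) is not None
```

For example, `valid_user_group_name("dev-team_01")` passes, while a two-character name or one containing non-English letters is rejected.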
Check user group detailed information
You can check and manage detailed information about the user group, its users, and its Account information.
To check the user group details, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User Group menu. You will be taken to the User Group List page.
On the User Group List page, click the user group name to check. You will be taken to the User Group Details page.
The User Group Details page displays basic information and consists of the Basic Information, User, and Account tabs.
Basic Information
You can check the basic information of the user group and modify the description and options of the user group if necessary.
Classification
Detailed Description
Delete user group
A button to delete the user group
User Group Name
The name of the user group
User Group ID
The ID of the user group
Creator
The user who created the user group
Creation Time
The time when the user group was created
Editor
The user who modified the user group information
Revision Time
The time when the user group information was revised
User Group Name
The name of the user group
Click the Edit button to modify the name
Description
A description of the user group
Click the Edit button to modify the description
Table. Basic information tab items of user group
User
You can check the users registered in the user group and add or exclude users as needed.
For more information about Account assignment, refer to Account Assignment.
Classification
Detailed Description
Add permission set
Add a new permission set to the Account
Activated when selecting an Account from the Account list
If AD is linked and there are no users to add, add the user in the AD provider and run synchronization on the ID Center Settings > Credential Source page
Table. Add User Items
Check if the added user has been added to the list.
Add permission set
You can add a permission set to the Account.
To add a permission set to an Account, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User Group menu. You will be taken to the User Group List page.
On the User Group List page, click the user group name to add the permission set to. You will be taken to the User Group Details page.
On the User Group Details page, click the Account tab. The Account list will be displayed.
Select the Account to add the permission set to from the Account list, then click the Add Permission Set button. You will be taken to the Add Permission Set page.
On the Add Permission Set page, select the permission set you want to add from the permission set list, then click the Complete button.
Classification
Mandatory
Detailed Description
Selected Account
-
Name of the Account to add the permission set to
Applied permission set
-
Name of the permission set applied to the selected Account
Permission Set
Required
Select one or more permission sets to apply to the Account
When selected, it is added to the Applied Permission Set item
Table. Adding Permission Set Items
Check that the added permission set has been applied to the Account.
Account assignment
You can assign a new account to the user group.
To assign a new Account, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User Group menu. You will be taken to the User Group List page.
On the User Group List page, click the user group name to assign the Account to. You will be taken to the User Group Details page.
On the User Group Details page, click the Account tab. The Account list will be displayed.
Click the Account Assignment button. You will be taken to the Account Assignment page.
On the Account Assignment page, select the Account to assign and the permission set to apply to the Account, then click the Complete button.
Classification
Necessity
Detailed Description
Account Selection
Required
Select the Account to be assigned to the user group
Hierarchical View: Display Accounts in the form of the organization’s hierarchical structure
Account List View: Display Accounts in the form of a list
Permission Set Selection
Required
Select the permission set to be applied to the selected Account
Table. Assigning Account Items
Notice
If there is no IAM policy name that matches the custom policy name of the selected permission set, you cannot assign an Account.
Check that the assigned Account has been added to the user group.
Delete user group
To delete a user group, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the User Group menu. You will be taken to the User Group List page.
Select one or more user groups to delete from the user group list.
After confirming the selected user groups, click the Delete User Group button.
You can also delete user groups individually from the User Group Details page of the user group to be deleted.
When a pop-up window notifying the deletion of the user group opens, click the Confirm button.
10.5.2.3 - ID Center Account assignment
You can check the Accounts of ID Center and assign them to a user or user group.
Account assignment
You can assign an Account to a user or a user group.
To assign an Account, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Account assignment menu. You will be taken to the Account list page.
On the Account list page, select the Account to assign, then click the Assign to user or group button. You will be taken to the Assign to user or group page.
In the Select assignment target area of the Assign to user or group page, select the assignment target, then click the Next button.
You must select at least one user or user group to assign the Account to.
Classification
Necessity
Detailed Description
Account to Assign
-
Name of the Account to assign to a user or a group of users
User
Select
Select the user to assign the Account to
User Group
Select
Select the user group to assign the Account to
Table. Selecting account allocation target items
In the Permission Set Selection area, select the permission set to apply to the Account, then click the Next button.
Classification
Mandatory
Detailed Description
Account to Assign
-
Name of the Account to assign to a user or a group of users
Permission Set
Required
Select one or more permission sets to apply to the Account
Table. Account Permission Set Selection Items
In the Input Information Confirmation area, check the assignment target and permission set, then click the Complete button.
When the popup window notifying the Account assignment opens, click the Confirm button.
Check Account detailed information
You can check and manage detailed information about the Account, its assignment targets, and its permission sets.
To check the detailed information of the Account, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Account assignment menu. You will be taken to the Account information page.
On the Account information page, click the Account to check the detailed information. You will be taken to the Account details page.
The Account Details page displays basic information and consists of the Basic Information, Assignment Targets, and Permission Sets tabs.
Basic Information
You can check the Account's basic information.
Classification
Detailed Description
Account name
Account full name
Account ID
Account’s ID
Creator
The user who created the Account
Creation Time
Time when the Account was created
Editor
User who modified the Account
Revision Time
Time when the Account was revised
Table. Account Basic Information Tab Items
Assignment Target
You can check and manage the users and user groups assigned to the Account.
A popup to exclude a permission set opens when the mouse cursor is placed on the permission set.
Table. Account Assignment Target Tab Items
Permission Set
The permission sets applied to the Account can be checked and excluded if necessary.
Classification
Detailed Description
Exclude permission set
Excludes the selected permission set from the Account
Activated when a permission set is selected from the permission set list
If all permission sets are excluded, the Account assignment is automatically canceled
Permission Set Name
The name of the permission set
Description
Description of the set of permissions
Revision Time
The time the permission set was last modified
Table. Account Permission Set Tab Items
Add permission set
You can add a permission set to an Account assigned to a user or user group.
To add a permission set to an Account, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Account assignment menu. You will be taken to the Account list page.
On the Account list page, click the Account. You will be taken to the Account details page.
On the Account details page, click the Assignment Target tab. The list of assignment targets will be displayed.
Select the assignment target to add the permission set to from the list of assignment targets, then click the More > Add Permission Set button. You will be taken to the Add Permission Set page.
On the Add Permission Set page, select the permission set you want to add from the permission set list, then click the Complete button.
Classification
Mandatory
Detailed Description
Assignment Target
-
Name of the assignment target to which the permission set is to be added
Applied permission set
-
Name of the permission set applied to the selected Account
Permission Set
Required
Select one or more permission sets to apply to the Account
When selected, it is added to the Applied Permission Set item
Table. Adding Permission Set Items
When the popup window notifying the addition of the permission set opens, click the Confirm button.
Check that the added permission set has been applied to the Account.
Add additional assignments to a user or group
You can additionally assign an Account to new users or user groups.
To assign an Account to a new user or user group, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Account assignment menu. You will be taken to the Account list page.
On the Account list page, click the Account to assign. You will be taken to the Account details page.
On the Account details page, click the Assignment Target tab.
On the Assignment Target tab, click the Assign to user or group button. You will be taken to the Assign to user or group page.
In the Select Target area, select the target to assign, then click the Next button.
You must select at least one user or user group to assign the Account to.
Classification
Necessity
Detailed Description
Assigned User
-
Name of the user currently assigned to the Account
User
Select
Select a user to assign to the Account
When selected, it is added to the Assigned User item
If AD is linked and there are no users to add, add the user in the AD provider and run synchronization on the ID Center Settings > Credential Source page
Assigned User Group
-
Name of the user group to which the current Account is assigned
User Group
Select
Select the user group to assign the Account to
When selected, it is added to the Assigned User Group item
Table. Selecting Items for Account Allocation
In the Permission Set Selection area, select the permission set to apply to the Account, then click the Next button.
Classification
Mandatory
Detailed Description
Permission Set
Required
Select one or more permission sets to apply to the Account
Table. Account Permission Set Selection Items
In the Input Information Confirmation area, check the assignment target and permission set, then click the Complete button.
When the popup window notifying the Account assignment opens, click the Confirm button.
Cancel Account assignment
To cancel an Account assignment for a user or user group, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Account assignment menu. You will be taken to the Account list page.
On the Account list page, click the Account. You will be taken to the Account details page.
On the Account details page, click the Assignment Target tab. The list of assignment targets will be displayed.
Select the assignment target to cancel from the list of assignment targets, then click the Cancel Assignment button.
Table. Permission information items of the permission set
Account
You can check and modify the Account information of the permission set.
Classification
Detailed Description
Account name
Account Name
Account ID
Account ID
Email
Account’s Email
Table. Account tab items of the permission set
Connect default policy
You can connect a new default policy to the permission set.
To connect a default policy, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Permission Set menu. You will be taken to the Permission Set List page.
On the Permission Set List page, click the permission set to connect the default policy to. You will be taken to the Permission Set Details page.
On the Permission Set Details page, click the Permission tab.
Click the Policy Link button in the Default Policy area. You will be taken to the Default Policy Link page.
On the Default Policy Link page, select the policy you want to connect from the list of default policies, then click the Complete button.
Classification
Necessity
Detailed Description
Connected Default Policy
-
Name of the default policy connected to the permission set
Default Policy Link
Required
Select the default policy to connect to the permission set
If selected, it is added to the Connected Default Policy item
Table. Connecting a Default Policy to a Permission Set Items
When the policy connection notification popup window opens, click the Confirm button.
Connect custom policies
You can connect a new custom policy to a permission set.
To connect a custom policy, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Permission Set menu. You will be taken to the Permission Set List page.
On the Permission Set List page, click the permission set to connect a custom policy to. You will be taken to the Permission Set Details page.
On the Permission Set Details page, click the Permission tab.
Click the Policy Link button in the Custom Policy area. You will be taken to the Custom Policy Link page.
On the Custom Policy Link page, select the policy you want to connect from the list of custom policies, then click the Complete button.
Classification
Necessity
Detailed Description
Connected Custom Policy
-
Name of the custom policy connected to the permission set
Custom Policy Link
Required
Enter the custom policy to connect to the permission set directly
When selected, it is added to the Connected Custom Policy item
Click the Add button to enter additional custom policies to connect
Table. Items for attaching custom policies to permission sets
When the policy connection notification popup window opens, click the Confirm button.
Create an inline policy
You can create an inline policy attached to a permission set.
To create an inline policy, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Permission Set menu. You will be taken to the Permission Set List page.
On the Permission Set List page, click the permission set to create an inline policy for. You will be taken to the Permission Set Details page.
On the Permission Set Details page, click the Permission tab.
In the Inline Policy area, click the Create Policy button. You will be taken to the Create Inline Policy page.
On the Create Inline Policy page, in the Permission Settings section, select the policy setting method and the service to apply, then click the Next button.
Classification
Necessity
Detailed Description
Basic Mode/JSON Mode
Required
Select the policy setting method
Basic Mode: Use the mode provided by the Console to set
JSON Mode: Set directly using the JSON Editor
Service
Required
Select the service to set the policy
Add Service: Add a service to set the policy
Table. Inline Policy Creation - Service Settings
Caution
Policy settings provide Basic Mode and JSON Mode.
When you switch to JSON Mode or navigate away after writing in Basic Mode, services with duplicated control requirements are merged into one, and services with incomplete settings are deleted.
Content written in JSON Mode that does not match the JSON format cannot be converted to Basic Mode.
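The conversion constraint above (JSON Mode content must be valid JSON before it can go back to Basic Mode) amounts to a parse check. A minimal sketch, with an illustrative helper name that is not part of the console:

```python
import json

def convertible_to_basic_mode(policy_text: str) -> bool:
    """Sketch of the rule above: JSON Mode content can only be
    converted back to Basic Mode if it parses as valid JSON."""
    try:
        json.loads(policy_text)
        return True
    except json.JSONDecodeError:
        return False
```

Validating the text locally before pasting it into the JSON Editor avoids losing work to a failed mode switch.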
After setting the permissions, click the Next button.
Authentication Method
Required
Authentication method of the target users to which the policy applies
All Authentication: Applies regardless of the authentication method
API Key Authentication: Applies to users who use API key authentication
Temporary Key Authentication, Console Login: Applies to users who use temporary key authentication or console login
Applied IP
Required
IP that allows policy application
Custom IP: IP registered and managed directly by the user
Applied IP: IPs registered directly by the user to which control policies are applied; can be registered as an IP address or range
Excluded IP: IPs to exclude from the Applied IP; can be registered as an IP address or range
All IP: No IP access restriction
All IPs are allowed access; if exceptions are needed, register Excluded IPs to restrict access from those IPs
Additional Conditions
Optional
Add conditions for Attribute-Based Access Control (ABAC)
Condition Key: Select from Global condition key and service condition key list
Qualifier: Default, Any value in request, All values in request
Operator: Bool, Null
Value: True, False
Table. Policy Creation - Permission Setting
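The Applied IP / Excluded IP logic described in the table can be sketched as: a request IP passes if it falls in an applied range and in no excluded range, with an empty applied list modeling the All IP option. The helper name is illustrative, not a platform API.

```python
import ipaddress

def ip_allowed(ip, applied, excluded):
    """Return True if ip is in an applied range and not in an excluded range.

    applied/excluded are lists of addresses or CIDR ranges, mirroring the
    Applied IP / Excluded IP items above. An empty applied list models
    the All IP option (no access restriction).
    """
    addr = ipaddress.ip_address(ip)
    if any(addr in ipaddress.ip_network(n) for n in excluded):
        return False
    if not applied:  # All IP: no restriction beyond exclusions
        return True
    return any(addr in ipaddress.ip_network(n) for n in applied)
```

For example, `ip_allowed("10.0.0.5", ["10.0.0.0/24"], [])` passes, while the same address is rejected once it appears in the excluded list.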
On the Check Input Information page, check the entered information and click the Complete button.
When the popup window notifying the policy creation opens, click the Confirm button.
Register individual resources as applied resources
You can register individual resources as applied resources when setting permissions.
To register individual resources as applied resources, follow the procedure below.
Select an action for which individual resources can be selected from the action options.
Actions that allow individual resource selection are displayed in purple.
In Applied Resource, click Individual Resource.
Click the Add Resource button. The Add Resource popup window will open.
Classification
Necessity
Detailed Description
Resource Type
Required
Select the type of resource to add
SRN
-
Unique resource ID in Samsung Cloud Platform
Automatically updated based on the input items below
Account
Required
Account ID setting
Current Account: The current Account ID is entered automatically and cannot be modified
All Accounts: Added to all Accounts (not recommended)
Direct Input: Enter the Account ID directly using lowercase English letters and numbers within 100 characters (wildcards are not allowed)
Region
Select
Enter the region information of the resource directly within 100 characters
If Select All is checked, resources from all regions are added
Resource ID
Required
Enter the resource ID to add directly within 100 characters
If Select All is checked, all resources of the corresponding resource type are added
Table. Policy Creation - Registering Individual Resources as Applied Resources
Delete permission set
Notice
If a permission set is applied to an Account, it cannot be deleted.
To delete a permission set, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the Permission Set menu. You will be taken to the Permission Set List page.
Select one or more permission sets to delete from the permission set list.
After confirming the selected permission sets, click the Delete button.
You can also delete permission sets individually from the Permission Set Details page of the permission set to be deleted.
When the popup window notifying the deletion of the permission set opens, click the Confirm button.
10.5.2.5 - ID Center Access Portal use
You can access and use Account resources through the Access Portal.
Notice
To use Access Portal, you must be registered as a user in the ID Center of the Samsung Cloud Platform Console.
For more information about user registration, please refer to Create User.
Access Portal first access
When accessing the Access Portal for the first time, you must apply for the Access Portal access URL through a service request and then log in.
Apply for the Access Portal access URL
You can apply for the Access Portal access URL through a service request in the Samsung Cloud Platform Console.
To apply for the Access Portal access URL, follow the procedure below.
Click the All services > Management > ID Center menu. You will be taken to the Service Home page of ID Center.
On the Service Home page, click the ID Center Settings button. You will be taken to the ID Center Settings page.
In the Access Portal URL item, click the URL Application button. You will be taken to the service request page of the Support Center.
Classification
Necessity
Detailed Description
Title
Required
Title of the Access Portal URL application
Enter within 64 characters using Korean, English, numbers, and special characters (+=,.@-_)
Region
Required
Select a region to apply for Access Portal URL
Service
Required
Select the ID Center service in the Management service group
Task Classification
Required
Select Access Portal URL Application
Content
Required
Enter the details of the Access Portal URL application
Table. Access Portal URL Request Items
Check the entered information and click the Request button.
Notice
After requesting the service, you cannot modify or delete the written content.
After requesting a service, you can check the request details on the Service Request List page of the Support Center. For more information, refer to Checking Service Request Details.
Access Portal Initial Login
To access the Access Portal for the first time, follow the procedure below.
On the login page, enter your username and password.
Notice
For your username and password, contact the ID Center administrator.
Select a means to receive the authentication number, then click the Send Authentication Number button.
Enter the received authentication number and click the Next button. A popup window for multi-factor authentication (MFA) identity verification will open.
In the MFA identity verification popup window, complete the personal information entry and terms confirmation for MFA, then click the Confirm button. The Password Change popup window will open.
Item
Mandatory
Description
Automatic input prevention
Required
Enter the characters shown in the image into the input field and click the Confirm button
Mobile phone number
Required
Enter your mobile phone number
Enter the mobile phone number and click the Authentication button to receive an authentication number
Enter the authentication number sent to your mobile phone and click the Confirm button
If the authentication number is valid, identity verification is complete
Email
Required
Enter the email to use for identity verification within 60 characters
For accounts linked to an AD-type authentication source, the email registered in AD is provided as read-only
Region
Required
Select the region for personal information collection
Personal information collection and use
Required
After checking the terms for personal information collection and use, check I agree
Table. Self-authentication items for multi-factor authentication (MFA)
In the Password Change popup window, enter the password change information and click the Confirm button. The Access Portal Terms of Service popup window will open.
Item
Mandatory
Description
Existing password
Required
Enter the password received from the ID Center administrator
New Password
Required
Enter a new password directly, referring to the password creation rules
Password Confirmation
Required
Re-enter the password to use
Table. Password Change Items
Password Creation Rules
Uppercase English letters, lowercase English letters, numbers, and special characters (!@#$%&*^) must each be included at least once.
The length is 9-20 characters.
The ID or username cannot be used as a password.
The same character cannot be used three or more times.
Easily guessable passwords cannot be used.
Recently used passwords cannot be used.
Four or more consecutive characters/numbers cannot be used.
The password change cycle is 90 days.
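The locally checkable rules above can be sketched as a validator. This is an illustrative interpretation, not the portal's actual implementation: "same character three times" is read as a run of three, "consecutive characters/numbers" as an ascending run of four, and the username rule as the username not appearing in the password. The "easily guessable" and password-history rules need server-side state and are out of scope.

```python
import re

SPECIALS = "!@#$%&*^"

def password_ok(pw: str, username: str = "") -> bool:
    """Check the locally verifiable password creation rules above."""
    # Length 9-20 characters.
    if not 9 <= len(pw) <= 20:
        return False
    # Uppercase, lowercase, digit, and special character each at least once.
    classes = [r"[A-Z]", r"[a-z]", r"[0-9]", "[" + re.escape(SPECIALS) + "]"]
    if not all(re.search(c, pw) for c in classes):
        return False
    # The ID/username cannot be used (interpreted as: must not appear).
    if username and username.lower() in pw.lower():
        return False
    # The same character three or more times in a row.
    if re.search(r"(.)\1\1", pw):
        return False
    # Four or more ascending consecutive characters/numbers (e.g. "1234").
    for i in range(len(pw) - 3):
        if all(ord(pw[i + j + 1]) - ord(pw[i + j]) == 1 for j in range(3)):
            return False
    return True
```

For example, `password_ok("Abcd1234!x")` fails because of the run "1234", while a mixed password with no runs passes.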
After confirming the Access Portal terms of use, click the Confirm button. You will be taken to the Access Portal page.
Access Portal Login
Guidance
If you are accessing the Access Portal for the first time, refer to Access Portal first access to apply for the Access Portal URL, then log in.
To log in to the Access Portal, follow the procedure below.
Enter the Access Portal access URL received through the service request in the browser's address bar. You will be taken to the Access Portal login page.
On the login page, enter your username and password.
Select a means to receive the authentication number and click the Next button. You will be taken to the authentication number confirmation page.
If you do not receive the authentication number or it has expired, click the Resend Authentication Number button to request it again.
Enter the received authentication number and click the Login button. You will be taken to the Access Portal page.
Find ID/Password
If you lose your ID or password, click the Find Password button to reset it using the email or mobile phone number registered in the Access Portal.
For accounts linked to an AD-type authentication source, password retrieval is restricted; contact the ID Center administrator.
Caution
Please enter your password and authentication number correctly. If you enter your password or authentication number incorrectly more than 5 times, your account will be locked for security reasons.
If the account is locked, the user is shown the locked account information.
Access Portal usage
When you log in to the Access Portal, you are taken to the Access Portal page.
The Access Portal page consists of the Account tab and the My Info tab.
Account
You can check the Accounts and permission sets assigned to the user and access the Samsung Cloud Platform Console with an Account's permission set. Temporary Key Issuance can be used to obtain a temporary key to access the Account.
Classification
Detailed Description
Account list
Name and ID of the Accounts assigned to the user, and root user email information
When an Account name is clicked, the permission sets applied to the Account are displayed
Permission Set List
Permission sets applied to the Account
Clicking a permission set name moves to the Samsung Cloud Platform Console page
Temporary Key Issuance: Issues a temporary key that can use the Account
Table. Account tab items
My Info.
You can check the user’s basic information and modify the user’s description and options if necessary.
Classification
Detailed Description
User Name
The user’s name
Email
Email to use for identity verification
Click the Edit button to change the email
For an AD-linked account, the email information provided by AD is displayed and cannot be modified
Mobile phone number
Mobile phone number to use for identity verification
Click the Edit button to change the mobile phone number
Last Login
The time when the user last logged in
Password
Password last changed time
In the case of an AD-linked account, it cannot be confirmed
Click the Edit button to change the password
Refer to the password creation rules when changing the password
Password Reuse Restriction
The number of recently used passwords that cannot be set as a password
In the case of an AD-linked account, it cannot be checked
Click the Edit button to change the number
The number can be set to up to 24 recently used passwords
Time Zone
The user's time zone
Click the Modify button to change the time zone
Terms and Conditions
Terms and Conditions agreement status
Click the View Content item to view the terms and conditions
Table. My Info. Tab Items
Password Generation Rules
Uppercase letters (English), lowercase letters (English), numbers, special characters (!@#$%&*^) must each be included at least once.
The length is 9~20 characters.
ID or username cannot be used as a password.
The same character cannot be used three or more times in a row.
Easily guessable passwords cannot be used.
Recently used passwords cannot be used.
Runs of 4 or more consecutive letters/numbers (e.g., abcd, 1234) cannot be used.
The password change cycle is 90 days.
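The rules above can be checked client-side before submitting a password change. The sketch below illustrates only the stated rules, assuming that "the same character three or more times" and "4 or more consecutive characters" refer to runs; server-side checks such as the easily-guessable-password dictionary are not reproduced.

```python
import re

SPECIALS = "!@#$%&*^"

def validate_password(pw: str, user_id: str = "", recent: tuple = ()) -> list:
    """Return a list of violated rules (empty list means the password passes).

    A client-side sketch of the portal's stated password rules; the actual
    server-side validation (e.g. the "easily guessable" check) is not public.
    """
    errors = []
    if not (9 <= len(pw) <= 20):
        errors.append("length must be 9-20 characters")
    if not re.search(r"[A-Z]", pw):
        errors.append("missing an uppercase letter")
    if not re.search(r"[a-z]", pw):
        errors.append("missing a lowercase letter")
    if not re.search(r"[0-9]", pw):
        errors.append("missing a number")
    if not any(c in SPECIALS for c in pw):
        errors.append("missing a special character (!@#$%&*^)")
    if user_id and user_id.lower() in pw.lower():
        errors.append("must not contain the user ID")
    if re.search(r"(.)\1\1", pw):
        errors.append("same character repeated 3 or more times")
    # Reject runs of 4+ sequential characters (e.g. abcd, 1234).
    for i in range(len(pw) - 3):
        if all(ord(pw[i + j + 1]) - ord(pw[i + j]) == 1 for j in range(3)):
            errors.append("contains 4 or more sequential characters")
            break
    if pw in recent:
        errors.append("recently used password")
    return errors
```

A password that satisfies every rule returns an empty list; otherwise the list names each violated rule, which can be surfaced to the user directly.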
Account
You can check the account and permission set assigned to the user and access the Samsung Cloud Platform Console with the account’s permission set or receive an access token for access.
Classification
Detailed Description
Account list
Assigned account name and ID to the user, and root user email information
When clicking on the account name, the set of permissions applied to the account is displayed
Permission Set List
Permission set applied to Account
Clicking on the permission set name moves to the Samsung Cloud Platform Console page
Temporary Key Issuance: issues a temporary key for accessing the Account
Table. Account Tab Items
Temporary Key Issuance
You can obtain a temporary key for accessing Samsung Cloud Platform from the Access Portal.
To issue a temporary key, follow the procedure below.
Enter the Access Portal URL received through the service request in the browser's address bar. The Access Portal login page opens.
Log in to the Access Portal. The Access Portal page opens.
On the Access Portal page, click the Account tab.
In the permission set list, click the Temporary Key Issuance button of the permission set for which you want a temporary key. A popup window announcing the temporary key issuance opens.
Check the account name, then click the Confirm button. The issuance popup window opens.
Check the issuance information, then click the Confirm button.
Caution
The information in the issuance popup window cannot be viewed again, so record it before closing.
If the temporary key information is lost, the key must be re-issued.
You can choose to use AD (Active Directory) as a credential source.
With AD (Active Directory), users can manage the authentication source directly.
2025.07.01
NEW ID Center Service Official Launch
The ID Center service has been officially launched.
You can control tasks according to user permissions by creating permission policies for each service and assigning policies and accounts linked to the Organization service to users.
Security can be enhanced by allowing only authorized ID Center users to access through the Access Portal.
10.6 - Logging&Audit
10.6.1 - Overview
Service Overview
Logging&Audit saves the history of all tasks performed by the user, allowing it to be used for change tracking, troubleshooting, security checks, and more on cloud resources. The collected task history can be checked for 90 days, and can be stored in the user’s Object Storage through Trail creation and management.
Provided Features
Logging&Audit provides the following functions.
Work Log Collection: all generated logs are collected and stored in real time. Logs collected from multiple servers and services can be searched through the web-based Console by various conditions such as resource type, resource name, period, and worker name.
Activity Record Audit: All recorded activity history is stored for 90 days and can be checked at any time through the Console.
Log Management: While using cloud services, numerous logs generated can be stored in the user’s Object Storage by creating a Trail.
Log Integrity Verification: After creating a trail, you can verify if the log files stored in Object Storage have been changed, modified, or deleted through the log file verification function.
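Conceptually, log integrity verification compares each stored log file against hashes recorded at write time. The sketch below assumes a hypothetical JSON digest layout ({"files": [{"path": ..., "sha256": ...}]}); the actual Digest file format used by Logging&Audit is not specified here.

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Hex SHA-256 of a file, read in chunks to handle large logs."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_digest(digest_path: str) -> dict:
    """Compare each log file's current hash with the recorded one.

    Returns {path: True/False}; False indicates the file was changed,
    truncated, or otherwise differs from the digest entry.
    """
    with open(digest_path) as f:
        digest = json.load(f)
    return {e["path"]: sha256_of(e["path"]) == e["sha256"] for e in digest["files"]}
```

Any entry mapped to False means the corresponding log file no longer matches its recorded hash, i.e., it was modified after the digest was written.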
Component
Activity History
It can be used for change tracking, troubleshooting, security checks, and more by storing all activity records performed by users in cloud resources. It stores activity records for 90 days without separate settings, and effective log management is possible by analyzing logs using various search functions.
Trail
Trail allows you to store user activity records that occur through Console and API calls in Object Storage for a long time. In addition, you can select the services and accounts you want to save and use them to track changes to cloud resources, troubleshoot, and perform security checks.
Preceding Service
The following services must be configured before creating this service. Refer to the guide provided for each service and prepare in advance.
Object Storage, which provides convenient data storage and retrieval
Fig. Logging&Audit Preceding Service
10.6.2 - How-to guides
The user can check the activity history through the Samsung Cloud Platform Console, and store the corresponding activity history using the Trail service without any time restrictions. When problems such as security risks or resource change history occur, you can check this activity history to identify the cause of the problem.
Note
Activity records will be deleted after 90 days. If long-term storage is necessary, create a Trail and store it in Object Storage. For more information, see Creating a Trail.
Activity Record Inquiry
To view the list of user activity records, follow the procedure below.
Click the All Services > Management > Logging&Audit menu. You are taken to the Logging&Audit Service Home page.
Click the Activity Record menu on the Service Home page. You are taken to the Activity Record List page.
On the Activity Record List page, you can check the activity records.
Classification
Detailed Description
File Download
Save the activity record list in JSON or CSV file format
Period Filter
Select the list search period
Choose from All, Last 30 minutes, Last 1 hour, Last 12 hours, or Direct Input
Direct Input: Start and end times can be entered
Time Zone
Select a searchable time zone
Search input window
Enter search terms to search the list
Detailed Search
Search by category by entering search terms or selecting information to search the list
Settings Icon
Set which column items are displayed in the list
Table. List of activity record items
Guidance
The list is automatically refreshed every minute.
The list shows only the selected regions.
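A downloaded activity record list can be post-processed offline. The sketch below filters failed operations out of a CSV export; the column names are a hypothetical sample modeled on the JSON-mode field names and may differ from the actual export.

```python
import csv
import io

# A hypothetical two-row export; real column names may differ from this sketch.
sample = """requestedBy,productName,resourceName,state
admin@example.com,Virtual Server,vm-web-01,Success
admin@example.com,Trail,audit-trail,Fail
"""

def failed_operations(csv_text: str) -> list:
    """Return rows whose work result is not Success."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if r["state"] != "Success"]
```

Filtering like this is useful when auditing a long export for failed or denied operations without scrolling through the Console.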
Activity Record Comparison
You can select up to 5 work histories from the activity record list to compare information. If you check and select the work history you want to compare, it will be added to the Activity Comparison page, where you can compare and check the information.
Checking Activity Record Details
You can check the full list of activity records and their detailed information. The Activity Record Details page consists of the Work History Details and Work Details tabs.
To check the detailed information of an activity record, follow the procedure below.
Click the All Services > Management > Logging&Audit menu. You are taken to the Logging&Audit Service Home page.
Click the Activity Record menu on the Service Home page. You are taken to the Activity Record List page.
On the Activity Record List page, click the activity record whose details you want to check. You are taken to the Activity Record Details page.
The Activity Record Details page consists of the Work History Details and Work Details tabs.
Work History Details
On the Work History Details tab, you can check the detailed information of the work history.
Classification
Detailed Description
Job Execution Time
Log Occurrence Time
Worker Information
Worker Account
Service
Service Name
Role Name
Name of the role assumed by the user
Resource Name
Name of the resource
Region
Work Region
Resource Type
Resource Type
Work Record
Work Details
Resource ID
Resource’s unique ID
Work Result
Result of the work
Event Topic
Event Content
Table. Work History Detail Tab Items
Work Details
On the Work Details tab, you can check the detailed information of the work.
Classification
Detailed Description
Basic Mode / JSON Mode
Select the view mode for the work details
Code Copy
Code copy is available when JSON Mode is selected
accountId
Account ID
productName
Service name
requestedBy
Requester ID
resourceName
Resource name
resourceType
Resource type
state
Work result
Table. Work Details Tab Items
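A record copied in JSON Mode can be handled programmatically. The sketch below maps the fields listed above into a small data class; the field values in the sample are illustrative only, and any fields beyond those listed are not assumed.

```python
import json
from dataclasses import dataclass

@dataclass
class ActivityRecord:
    """The JSON-mode fields of a work-details record, as listed in the table."""
    account_id: str
    product_name: str
    requested_by: str
    resource_name: str
    resource_type: str
    state: str

    @classmethod
    def from_json(cls, text: str) -> "ActivityRecord":
        d = json.loads(text)
        return cls(
            account_id=d["accountId"],
            product_name=d["productName"],
            requested_by=d["requestedBy"],
            resource_name=d["resourceName"],
            resource_type=d["resourceType"],
            state=d["state"],
        )

# A hypothetical record copied via Code Copy; values are illustrative only.
raw = """{
  "accountId": "acct-1234",
  "productName": "Trail",
  "requestedBy": "iam-user-01",
  "resourceName": "audit-trail",
  "resourceType": "Trail",
  "state": "Success"
}"""
```

Parsing records this way makes it easy to, for example, group copied records by requester or count non-Success states in a script.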
10.6.2.1 - Trail Management
Users can view activity logs through the Samsung Cloud Platform Console and store those activity logs using the Trail service without time constraints. Since activity logs are retained for 90 days, for long-term storage you must create a Trail service and store them in Object Storage.
Trail Create
You can store activity logs without time restrictions using the Trail service of Logging&Audit in the Samsung Cloud Platform Console.
To create a Trail, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Logging&Audit Service Home page.
Click the Trail menu on the Service Home page. Go to the Trail List page.
Click the Create Trail button on the Trail List page. Navigate to the Create Trail page.
Enter or select the required information in the Service Information area.
Category
Required
Detailed description
Trail name
Required
Trail name
Enter 5-26 characters using English letters, numbers, and the special character (-)
Target Region
Required
Region where activity occurred
For services created without specifying a region, select All as the target region
If a specific region selection is needed, select from the region list
The target region can be changed after creation
Target Resource Type
Required
Resource type of activity logs to be stored in Trail
Default: All
To specify only certain resource types, click the Select button to choose the resource types to store
Refer to the Service-specific Resource Type List
The target resource type can be changed after creation
Target User
Required
User of activity logs to be stored in Trail
Default: All
To specify only certain users, click the Select button to choose the users to store
Target users can be changed after creation
Storage Bucket Region
Required
Location (region) of the Object Storage bucket where activity logs will be stored
The storage bucket region cannot be changed after creation
Storage Bucket
Required
Object Storage bucket name where activity logs are stored
The storage bucket cannot be changed after creation
Save Format
Required
File type to save (JSON, CSV)
The save format can be changed after creation
Log File Verification
Optional
Whether to use log file verification
When Use is selected, a Digest file is stored in the same bucket to verify changes and deletions of the Trail log files
The use of log file verification can be changed after creation
ServiceWatch Log Collection
Optional
Send Trail logs to ServiceWatch's log group. By sending Trail logs to ServiceWatch's log group, you can monitor via ServiceWatch and receive notifications when specific activities occur
If you select Use, you can view the automatically generated ServiceWatch log group name. You can also select the IAM role required for ServiceWatch log collection.
The IAM role for ServiceWatch log collection requires the following settings
Select Service as the Category of the Principal, and set Value to loggingaudit.samsungsdscloud.com
For Policy, attach a policy with the following permissions
servicewatch:CreateBulkServiceLogEvents
servicewatch:CollectLogGroupLogStream
The use of ServiceWatch log collection can be changed after creating the Trail
Table. Trail Service Information Input Items
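The IAM role settings for ServiceWatch log collection can be pictured as a policy document. The sketch below is an assumption modeled on common IAM policy formats; only the principal value (loggingaudit.samsungsdscloud.com) and the two permission names come from the table above, while the surrounding JSON structure is hypothetical.

```python
import json

# Hypothetical trust/policy structure; only the principal value and the two
# servicewatch permission names are taken from the documentation table.
trust_principal = {"Category": "Service", "Value": "loggingaudit.samsungsdscloud.com"}

policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "servicewatch:CreateBulkServiceLogEvents",
                "servicewatch:CollectLogGroupLogStream",
            ],
        }
    ]
}

# Render the sketch as JSON for review.
print(json.dumps({"Principal": trust_principal, "Policy": policy}, indent=2))
```

The intent is simply that the Trail service (the principal) is allowed to write log events into the ServiceWatch log group on your behalf.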
Enter or select the required information in the Additional Information area.
Category
Required
Detailed description
Description
Optional
Enter additional information or a description of the Trail
Tag
Optional
Add tags
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key and Value values
Table. Trail additional information input items
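The input constraints in the tables above (a Trail name of 5-26 letters, numbers, or hyphens; at most 50 tags per resource) can be checked client-side. A minimal sketch of those stated rules only:

```python
import re

# 5-26 characters of English letters, numbers, and '-' per the table above.
TRAIL_NAME_RE = re.compile(r"^[A-Za-z0-9-]{5,26}$")

def check_trail_input(name: str, tags: dict) -> list:
    """Return a list of problems; an empty list means the input is acceptable."""
    problems = []
    if not TRAIL_NAME_RE.fullmatch(name):
        problems.append("name must be 5-26 characters of letters, numbers, or '-'")
    if len(tags) > 50:
        problems.append("at most 50 tags per resource")
    return problems
```

Validating locally avoids a round trip to the Console for inputs the Create Trail page would reject anyway.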
Reference
If the save format is CSV, the log file can be opened in a text editor (e.g., Notepad++).
Reference
If ServiceWatch log collection is set to Use, refer to the IAM policy permissions described in the service information table above.
Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.
When creation is complete, check the created resources on the Trail list page.
Checking Trail Details
You can view and edit the full list and detailed information of Trail resources. The Trail Details page consists of the Details, Tags, and Work History tabs.
To check the detailed Trail information, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Logging&Audit Service Home page.
Click the Trail menu on the Service Home page. Navigate to the Trail List page.
Click the resource to view detailed information on the Trail list page. You will be taken to the Trail detail page.
The Trail Details page displays status information and additional feature information, and consists of the Details, Tags, and Work History tabs.
Category
Detailed description
Trail status
Status of the Trail created by the user
Active: Trail operating
Stopped: Trail stopped
Trail Control
Button to change Trail status
Start: Start a stopped Trail. Activity records are saved again from the day the Trail is started.
Stop: Stop a running Trail. Activity recording is stopped, and previously saved activity records are retained.
Trail Delete
Button to delete Trail
Table. Trail status information and additional functions
Detailed Information
On the Details tab, you can view the detailed information of the selected resource and modify it if needed.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In the Trail service, it means Trail SRN
Resource Name
Resource Name
In the Trail service, it means the Trail name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Service Creation Time
Editor
User who edited the service information
Modification Date
Date/Time when service information was modified
Trail name
Trail name
Target Region
Region where activity logs occurred
The target region of activity logs stored in Trail can be specified when creating a Trail, and can also be changed. It can be changed via the Edit button, and for more details see Edit Target Region.
Target Resource Type
Resource type of activity logs stored in Trail
If you want to change, click the Edit button to select the resource type to save. For more details, refer to Edit Target Resource Type.
Target User
User of activity logs stored in Trail
If you want to change, click the Edit button to select the user to save. See Target User Edit.
Storage Bucket Region
Region of the Object Storage bucket where activity logs are stored
Storage bucket
Object Storage bucket name where activity logs are stored
Save Format
File type saved in bucket (JSON, CSV)
If you want to change the file type saved in the bucket, set it via the Edit button. For more details, see Edit Save Format.
Description
Additional information or description about the Trail
Click the Edit button to change the description. For more details, refer to Edit Trail Description
Log file verification
Whether to use log file verification
When Use is selected, a Digest file is stored in the same bucket to verify changes and deletions of Trail log files
Click the Edit button to change the log file verification usage. For details, see Edit Log File Verification
ServiceWatch Log Collection
Send Trail logs to ServiceWatch’s log group
If you select Use, Trail logs are sent to ServiceWatch’s log group, allowing monitoring via ServiceWatch and receiving notifications when specific activities occur. For more details, see ServiceWatch Log Collection Modification.
Initial collection date and time
The initial collection date and time of activity logs stored in Trail
Final collection timestamp
Final collection timestamp of activity logs stored in Trail
Final execution result
Final execution result of the activity history stored in Trail
Table. Trail detailed information tab items
Tag
On the Tags tab, you can view the tag information of the selected resource and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can check the Key and Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Trail Tags Tab Items
Work History
On the Work History tab, you can view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work details, work date and time, resource type, resource name, work result, and worker information can be checked
Provides detailed search function via the Detailed Search button
Click the relevant resource in the Work History List. The Work History Details popup window will open.
Table. Trail Work History Tab Detailed Information Items
Trail Resource Control
Depending on its state, you can start or stop a Trail. To control a Trail, follow the steps below.
Trail Start
You can start a stopped Trail. Activity logs from the day you started the Trail will be saved again.
Click the All Services > Management > Logging&Audit menu. Navigate to the Logging&Audit Service Home page.
Click the Trail menu on the Service Home page. Navigate to the Trail List page.
On the Trail List page, click the stopped Trail you want to restart. You will be taken to the Trail Details page.
On the Trail Details page, click the Start button at the top to start the Trail. Check the changed Trail status in the status display field.
When the Trail start is completed, the status changes from Stopped to Active.
If you have completed adding or changing the target resource type, click the Confirm button. You will be taken to the Trail Details page.
Check the changed Target Resource Type on the Trail Details page.
Edit Target Users
You can modify the target users of Trail. To modify the target users of Trail, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Logging&Audit Service Home page.
Click the Trail menu on the Service Home page. Navigate to the Trail List page.
On the Trail List page, click the resource (Trail) to change the target user. It moves to the Trail Details page.
Click the edit button of the target user on the Trail details page. The target user edit popup opens.
Add or change the target users and verify that the selected users appear in the Selection area at the bottom.
If you have completed adding or modifying the target user, click the Confirm button. You will be taken to the Trail Details page.
Check the changed Target User on the Trail Details page.
Edit Save Format
You can modify the log file format stored in Trail’s bucket. To modify Trail’s storage format, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Service Home page of Logging&Audit.
Click the Trail menu on the Service Home page. Navigate to the Trail List page.
On the Trail List page, click the resource (Trail) whose log file storage format you want to change. You will be taken to the Trail Details page.
Click the Edit button of Save format on the Trail Details page. The Save format Edit popup opens.
Change the file format and click the Confirm button. Move to the Trail details page.
Check the changed save format on the Trail Details page.
Trail Edit Description
Trail’s description can be edited. To edit the description of Trail, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Service Home page of Logging&Audit.
Click the Trail menu on the Service Home page. Go to the Trail List page.
Click the resource (Trail) to modify the description on the Trail List page. It moves to the Trail Details page.
Click the Edit button of Description on the Trail Details page. Edit Description popup opens.
Complete editing the description and click the Confirm button. Navigate to the Trail Details page.
Please check the changed Description on the Trail Details page.
Modify log file verification
You can modify whether Trail’s log file verification is used. To modify the usage of Trail’s log file verification, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Logging&Audit Service Home page.
Click the Trail menu on the Service Home page. Navigate to the Trail List page.
Click the resource (Trail) to change the log file validation usage on the Trail list page. You will be taken to the Trail details page.
On the Trail Details page, click the Edit button of Log File Verification. The Log File Verification Edit popup opens.
If you select Use, a Digest file is stored in the same bucket to verify changes and deletions of the Trail log file. Choose whether to use, and click the Confirm button. You will be taken to the Trail Details page.
Please check the changed Log File Verification on the Trail Details page.
Modify ServiceWatch Log Collection
You can modify whether ServiceWatch log collection is used. To modify the ServiceWatch log collection usage for a Trail, follow these steps.
Click the All Services > Management > Logging&Audit menu. Navigate to the Logging&Audit Service Home page.
Click the Trail menu on the Service Home page. It moves to the Trail List page.
Click the resource (Trail) to change the ServiceWatch log collection usage on the Trail List page. You will be taken to the Trail Details page.
Click the Edit button of ServiceWatch log collection on the Trail Detail page. You will be taken to the ServiceWatch log collection Edit popup.
If you select Use, a ServiceWatch log group name that will receive the Trail logs is automatically generated and can be viewed. Also select the IAM role required for ServiceWatch log collection, and click the Confirm button. It navigates to the Trail details page.
The IAM role for ServiceWatch log collection requires the following settings.
Select Service as the Category of the Principal, and set Value to loggingaudit.samsungsdscloud.com.
For Policy, attach a policy set configured with the required ServiceWatch permissions.
Check the changed ServiceWatch Log Collection setting on the Trail Details page.
Trail Delete
You can reduce operating costs by deleting unused Trails. However, deleting a Trail stops the running service immediately, so consider the impact of the interruption thoroughly before proceeding with the deletion.
Caution
After deleting the trail, data cannot be recovered, so please be careful.
To delete the Trail, follow the steps below.
Click the All Services > Management > Logging&Audit menu. Navigate to the Service Home page of Logging&Audit.
Click the Trail menu on the Service Home page. You will be taken to the Trail list page.
Click the resource (Trail) you want to delete on the Trail List page. You will be taken to the Trail Details page.
Click the Delete Trail button on the Trail Details page.
When deletion is complete, check if the resource has been deleted on the Trail list page.
Caution
If you delete Trail, activity history saving will stop. Proceed with the deletion after fully considering the impact that occurs during service interruption.
Service-specific Resource Type List
This is the list of target resource types that can be selected when creating a Trail or modifying the Target Resource Type.
Set Trail logs to be sent to ServiceWatch’s log group, enabling monitoring through ServiceWatch and receiving alerts when specific activities occur.
2025.07.01
FEATURE Logging&Audit New Release Service Additional Integration
Logging&Audit additionally integrates the following newly released services.
API Gateway, Archive Storage, Backup Agent, Cloud Functions, Cloud LAN-Datacenter, Cloud WAN, CloudML, Cost Savings, GSLB, Global CDN, IAM > role, Load Balancer > LB Health Check, Marketplace, Organization, Private DNS, Private NAT, Public Domain Name, Quick Query, Secret Vault, SingleID, Support Plan, Transit Gateway, VPC Peering, Vertica
When viewing activity logs, we added a period filter/time zone and a feature to compare activity logs.
2025.04.28
FEATURE Logging&Audit New Release Service Additional Integration
Logging&Audit additionally integrates the following newly released services.
Data Flow, Data Ops
2025.02.27
FEATURE Logging&Audit New Release Service Additional Integration
Logging&Audit additionally integrates the following newly released services.
AI&MLOps Platform, Multi-node GPU Cluster, VPN, Cloud LAN-Campus, KMS, Event Streams, Search Engine, EPAS, Microsoft SQL Server
Samsung Cloud Platform Common Feature Change
Common CX changes have been applied to Account, IAM, Service Home, tags, and more.
2024.10.01
NEW Logging&Audit Service Official Version Release
Logging&Audit service has been launched. It stores/searches all activity logs performed by customers (Console, API, CLI), and provides functions such as change tracking of cloud resources, troubleshooting, security checks, etc.
2024.07.02
NEW Beta Version Release
Logging&Audit service has been launched. It stores/searches all activity logs performed by customers (Console, API, CLI), and provides functions such as change tracking of cloud resources, troubleshooting, security checks, etc.
10.7 - Notification Manager
10.7.1 - Overview
Service Overview
Notification Manager is a management service that provides notifications to users based on the criteria set in the notification policy for each account when a notification occurs.
Provided Features
Notification Manager provides the following functions.
Notification Group Management: You can create and manage notification groups. You can add or remove users from the notification group.
Notification Policy Management: You can set a notification policy to receive specific notifications that occur within the account. When creating a notification policy, if you link a notification group, users within the notification group can receive notifications set in the notification policy.
Component
Notification Manager provides notification groups and notification policies.
Notification Group
You can create and manage notification groups, and add or remove users from them. The main functions are as follows.
Create Notification Group: you can create a notification group and add users.
Notification Group > Add User: you can add users to a created notification group.
Users can create a notification group by entering essential information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Notification Group
You can create a notification group using the Samsung Cloud Platform Console.
Click the All Services > Management > Notification Manager menu. You will be taken to the Notification Group List page of Notification Manager.
On the Notification Group List page, click the Create Notification Group button. It will move to the Create Notification Group page.
On the Create Notification Group page, enter the necessary information for service creation and select detailed options.
In the Basic Information section, enter the necessary information.
Category
Required
Description
Notification Group Name
Required
Notification group name
Use Korean, English, numbers, and special characters (+=,.@-_) within 3-24 characters
Description
Optional
Enter a description of the notification group
Maximum length is 1,000 characters
Table. Notification Group Basic Information Items
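The naming rule above (3-24 characters of Korean, English letters, numbers, and the special characters += , . @ - _) can be expressed as a regular expression. A sketch of that stated rule only, interpreting "Korean" as Hangul syllables:

```python
import re

# 3-24 characters: Hangul syllables (U+AC00-U+D7A3), English letters,
# numbers, and the special characters + = , . @ - _ per the table above.
GROUP_NAME_RE = re.compile(r"^[\uAC00-\uD7A3A-Za-z0-9+=,.@_-]{3,24}$")

def is_valid_group_name(name: str) -> bool:
    """Client-side check of the notification group name rule."""
    return GROUP_NAME_RE.fullmatch(name) is not None
```

Names containing spaces or other special characters fall outside the allowed set and are rejected.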
In the Add User section, select the users to be added to the notification group.
Category
Required
Description
Add User
Optional
Users added to the notification group
Search for the desired user and select them to add
Users in the notification group can be deleted using the Delete button
If the user to be added does not exist, create a user on the user creation page. Refer to IAM > Create User
Only users with login history (email, phone number registered users) can be added to the notification group
Table. Notification Group User Addition Items
Note
For IAM users, if there is no login history after account creation, they cannot receive notifications. Therefore, such users cannot be added to the notification group.
Checking Notification Group Details
You can check and modify the overall list and detailed information of the notification group.
To check the detailed information of the notification group, follow these steps:
Click the All Services > Management > Notification Manager menu. You will be taken to the Notification Group List page of Notification Manager.
On the Notification Group List page, click the notification group you want to check. It will move to the Notification Group Details page.
On the Notification Group Details page, you can view detailed information about the notification group and modify or delete it.
Category
Description
Service
Service category
Resource Type
Service name
SRN
Unique resource ID in Samsung Cloud Platform
In Notification Manager, it refers to the Notification Manager SRN
Resource Name
Resource name
In Notification Manager service, it refers to the Notification Manager name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
Time when the service was created
Modifier
User who modified the service information
Modification Time
Time when the service information was modified
Notification Group Name
Notification group name
Description
Description of the notification group
Users
List of users added to the notification group
Table. Notification Group Details Items
Adding Users to a Notification Group
You can add users to a notification group.
To add users to a notification group, follow these steps:
Click the All Services > Management > Notification Manager menu. You will be taken to the Notification Group List page of Notification Manager.
On the Notification Group List page, click the Add button in the Users column of the notification group to which you want to add users. You will be taken to the Add User page.
On the Add User page, select the users to be added to the notification group.
If the user to be added does not exist, create a user on the user creation page. Refer to IAM > Create User.
Only users with login history (email, phone number registered users) can be added to the notification group.
After adding all users, click the Complete button. On the Notification Group List page, click the Expand button at the far right of the corresponding notification group to verify the added users.
Note
For IAM users, if there is no login history after account creation, they cannot receive notifications. Therefore, such users cannot be added to the notification group.
Modifying a Notification Group
You can modify the settings of a created notification group.
To modify a notification group, follow these steps:
Click the All Services > Management > Notification Manager menu. You will be taken to the Notification Group List page of Notification Manager.
On the Notification Group List page, click the notification group you want to modify. It will move to the Notification Group Details page.
On the Notification Group Details page, click the Modify button. It will move to the Modify Notification Group page.
On the Modify Notification Group page, you can modify the notification group items.
Category
Required
Description
Notification Group Name
Required
Notification group name
Use Korean, English, numbers, and special characters (+=,.@-_) within 3-24 characters
Description
Optional
Enter a description of the notification group
Maximum length is 1,000 characters
Users
Optional
Users added to the notification group
Search for the desired user and select them to add
Users in the notification group can be deleted using the Delete button
If the user to be added does not exist, create a user on the user creation page. Refer to IAM > Create User
Only users with login history (email, phone number registered users) can be added to the notification group
Table. Notification Group Modification Items
After modifying the notification group, click the Save button.
Note
For IAM users, if there is no login history after account creation, they cannot receive notifications. Therefore, such users cannot be added to the notification group.
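The notification group name rule above (Korean, English, numbers, and the special characters +=,.@-_ within 3-24 characters) can be checked before submitting the form. A minimal sketch in Python; the regex below is our own rendering of the stated rule, not an official Samsung Cloud Platform validator:

```python
import re

# Our own rendering of the stated rule: Korean syllables, English letters,
# digits, and the special characters + = , . @ - _ with a 3-24 length limit.
# This is an illustrative check, not an official SCP validator.
NAME_RULE = re.compile(r'^[가-힣A-Za-z0-9+=,.@\-_]{3,24}$')

def is_valid_group_name(name: str) -> bool:
    return NAME_RULE.fullmatch(name) is not None

print(is_valid_group_name("ops-alerts_01"))   # allowed characters, length 13 -> True
print(is_valid_group_name("ab"))              # too short -> False
print(is_valid_group_name("team#alpha"))      # '#' is not allowed -> False
```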
Deleting a Notification Group
If you no longer need a created notification group, you can delete it. However, please note that deleted notification groups cannot be recovered.
To delete a notification group, follow these steps:
Click the All Services > Management > Notification Manager menu. It will move to the Notification Group List page of Notification Manager.
On the Notification Group List page, click the notification group you want to delete. It will move to the Notification Group Details page.
On the Notification Group Details page, click the Delete button. A confirmation popup will appear; click the Confirm button after reviewing the message.
After deletion, verify that the notification group has been deleted on the Notification Group List page.
10.7.2.1 - Notification Policy
Users can create a notification policy by entering the required information and selecting detailed options through the Samsung Cloud Platform Console.
Creating a Notification Policy
You can create a notification policy in the Samsung Cloud Platform Console.
Click the All Services > Management > Notification Manager menu. It moves to the Notification Group page of Notification Manager.
Click the Notification Policy menu on the Notification Group page. It moves to the Notification Policy List page.
Click the Create Notification Policy button on the Notification Policy List page. It moves to the Create Notification Policy page.
Enter the necessary information for creating the service and select detailed options on the Create Notification Policy page.
Classification
Required
Detailed Description
Usage
Required
Whether to use the notification policy
ON: Use
OFF: Do not use
Notification Policy Name
Required
Notification policy name
Enter within 30 characters
Description
Optional
Enter a description of the notification policy
Enter within 1,000 characters
Notification Item
Required
Select a notification item
Select a receivable notification item
To select or deselect a specific notification name among the selected notification items, click the Expand button at the far right and select or deselect the notification name
(Example) If you want to receive only notifications of Virtual Server creation failure, select Notification Item: Virtual Server > Notification Name: Virtual Server creation failure
Notification Target Group
Required
Select a notification group to deliver the notification
If you select a notification group, the notification group name will be displayed at the top
Select or deselect the notification group name
Table. Create Notification Policy Items
Note
If there is no notification group to connect, create a notification group and then connect it. Refer to Create a Notification Group.
After setting the necessary information, click the Complete button.
After creation is complete, check the created notification policy on the Notification Policy List page.
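The item/name selection described above (for example, receiving only Virtual Server creation failure notifications) amounts to matching each incoming event against the selected notification item and notification name pairs. A minimal sketch; the event names are hypothetical, not an SCP API:

```python
# Illustrative filtering of notifications by the selected (item, name) pairs,
# as in the example "Virtual Server > Virtual Server creation failure".
# The names are hypothetical, taken from the example in the table above.
selected = {("Virtual Server", "Virtual Server creation failure")}

def should_notify(item: str, name: str) -> bool:
    """Deliver a notification only if its item/name pair was selected."""
    return (item, name) in selected

print(should_notify("Virtual Server", "Virtual Server creation failure"))  # True
print(should_notify("Virtual Server", "Virtual Server created"))           # False
```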
Checking Notification Policy Details
You can check the entire list and detailed information of the notification policy and modify it.
To check the detailed information of the notification policy, follow these steps.
Click the All Services > Management > Notification Manager menu. It moves to the Notification Group List page of Notification Manager.
Click the Notification Policy menu on the Notification Group List page. It moves to the Notification Policy List page.
On the Notification Policy List page, click the notification policy whose details you want to check. It moves to the Notification Policy Details page.
The Notification Policy Details page displays status information and additional feature information.
Classification
Detailed Description
Notification Policy Status
Whether to use the notification policy
Modify
Modify the notification policy
Delete
Delete the notification policy
Table. Notification Policy Status Information and Additional Features
The detailed items that can be checked on the Notification Policy Details page are as follows.
Classification
Detailed Description
Notification Policy Name
Notification policy name
Description
Description of the notification policy
Creator
User who created the notification policy
Creation Time
Time when the notification policy was created
Modifier
User who modified the notification policy
Modification Time
Time when the notification policy was modified
Notification Item
Notification items set in the notification policy
Notification Target Group
Notification group connected to the notification policy
Table. Notification Policy Details
Modifying a Notification Policy
You can modify the items set in the created notification policy.
To modify a notification policy, follow these steps.
Click the All Services > Management > Notification Manager menu. It moves to the Notification Group List page of Notification Manager.
Click the Notification Policy menu on the Notification Group List page. It moves to the Notification Policy List page.
Click the Modify button on the Notification Policy List page. It moves to the Modify Notification Policy page.
You can modify the notification policy items on the Modify Notification Policy page.
Classification
Required
Detailed Description
Usage
Required
Whether to use the notification policy
ON: Use
OFF: Do not use
Notification Policy Name
Required
Notification policy name
Enter within 30 characters
Description
Optional
Enter a description of the notification policy
Enter within 1,000 characters
Notification Item
Required
Select a notification item
Select a receivable notification item
To select or deselect a specific notification name among the selected notification items, click the Expand button at the far right and select or deselect the notification name
(Example) If you want to receive only notifications of Virtual Server creation failure, select Notification Item: Virtual Server > Notification Name: Virtual Server creation failure
Notification Target Group
Required
Select a notification group to deliver the notification
If you select a notification group, the notification group name will be displayed at the top
Select or deselect the notification group name
Table. Modify Notification Policy Items
Note
If there is no notification group to connect, create a notification group and then connect it. Refer to Create a Notification Group.
After modifying the notification policy, click the Save button.
Using a Notification Policy
You can reuse a notification policy that is currently stopped.
To set a notification policy to use, follow these steps.
Click the All Services > Management > Notification Manager menu. It moves to the Notification Group List page of Notification Manager.
Click the Notification Policy menu on the Notification Group List page. It moves to the Notification Policy List page.
On the Notification Policy List page, click the More button at the far right of the notification policy you want to set to use, then click the Use button. After reviewing the message in the notification popup, click the Confirm button.
Check the status of the notification policy on the Notification Policy List page.
Stopping a Notification Policy
You can change a notification policy that is currently in use to not in use.
To set a notification policy to not in use, follow these steps.
Click the All Services > Management > Notification Manager menu. It moves to the Notification Group List page of Notification Manager.
Click the Notification Policy menu on the Notification Group List page. It moves to the Notification Policy List page.
On the Notification Policy List page, click the More button at the far right of the notification policy you want to set to not in use, then click the Do not use button. After reviewing the message in the notification popup, click the Confirm button.
Check the status of the notification policy on the Notification Policy List page.
10.7.3 - Release Note
Notification Manager
2024.10.01
NEW Notification Manager Release
The Notification Manager service has been released. It provides a feature to manage notifications provided to users when notifications occur.
You can create and manage notification policies and notification groups to receive notifications, and add users.
10.8 - Organization
10.8.1 - Overview
Service Overview
Organization is a service that organizes accounts by organizational unit and hierarchically manages and controls resource access rights.
The user can manage the resource usage of accounts belonging to the organization to optimize costs.
Features
Hierarchical Project Management: Accounts created within the organization can be invited into the Organization and organized by organizational unit for hierarchical management.
Organizational Unit Governance: Policies can be controlled by organizational unit, allowing you to apply policies in bulk to all organizational units and accounts under it.
Efficient resource management and cost optimization: You can monitor the resource usage of all accounts within the organization to optimize costs.
Composition
Figure. Organization Chart
Provided Features
Organization provides the following functions.
Account Management: You can create a new account within the organization or invite an existing account to manage it.
Organization Unit Management: You can create organizational units and place additional organizational units or Accounts under them.
Compliance Policy Management: Manage compliance-related settings as integrated policies and apply them by organization unit and account to prevent or detect compliance violations in advance.
Preceding Service
Organization has no preceding service.
10.8.2 - How-to guides
The user can enter the essential information of the Organization through the Samsung Cloud Platform Console and create a service by selecting detailed options.
Organization creation
You can create and use an Organization in the Samsung Cloud Platform Console.
To create an Organization, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization creation button. The Organization creation popup window opens.
In the Organization Creation popup window, enter the Organization Name, then click the Confirm button.
Use Korean, English, numbers, spaces, and special characters (+=,.@-_) within 20 characters
When the popup window notifying the Organization creation opens, click the Confirm button.
On the Service Home page, check the dashboard of the Organization.
Classification
Detailed Description
Organization Information
Management Account information is displayed
Click the Organization Information item to move to the Settings page and check the organization details
Organization Unit
The number of organization units that make up the organization
Clicking on the number moves to the Organization Composition page
Account
The number of Accounts that make up the organization
Clicking on the count will move to the Organization Configuration page
Clicking on the Add item will move to the Add Account page
When the Organization deletion popup window opens, click the Confirm button.
10.8.2.1 - Organization composition information
You can check the Organization’s hierarchical structure and manage the organizational units and Accounts that compose it.
Checking Organization composition information
You can check the Organization’s composition information.
To check the Organization’s composition information, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
Select the view method for the organization unit and Account management area.
Classification
Detailed Description
View Hierarchy
Display organizational units in a hierarchical structure
Account list view
Display the Account list within the organization
Account addition
Invite a new Account to the organization
Click the Add Account button to move to the Add Account page
On the Organization Structure page, click the Hierarchy View button to check and manage the organizational units and Accounts that make up the Organization in a hierarchical structure.
Classification
Detailed Description
Create organization unit below
Add a new organization unit below the selected organization unit
Only activated when one organization unit is selected in the hierarchical structure
More > Exclude Account
Excludes the selected Account from the organization
Only activated when one Account is selected in the hierarchical structure
The Management Account cannot be excluded
For more information, refer to Excluding an Account
More > Delete Account
Deletes the selected Account
Only activated when one Account is selected in the hierarchical structure
The Management Account and Accounts joined through invitation cannot be deleted
For more information, refer to Deleting an Account
Organization Unit/Account Name
Displays the names of organization units and Accounts in a hierarchical structure format
Click the + and - buttons to expand or collapse the hierarchy
ID/Email
Organization units display the ID; Accounts display the ID and Email
Creation/Joining Time
Organization units display the creation time; Accounts display the creation or joining time
Table. Organization Hierarchy View Items
Account list view
On the Organization Structure page, click the Account List View button to check and manage the list of Accounts that make up the Organization.
Classification
Detailed Description
Account Movement
Move the Account to another organizational unit
Activated when selecting an Account from the Account list
Creation: Add a new account created on the Account addition page
Join: Add an existing created Account
Table. View Organization Account list items
Account management
You can check and manage the list of Accounts that make up the Organization.
Account addition
You can create a new Account or add an existing Account to the Organization.
To add an Account to the Organization, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the Add Account button. It moves to the Add Account page.
On the Add Account page, enter the information of the Account to be added, then click the Complete button.
Classification
Mandatory
Detailed Description
Addition Method
Required
Select the method to add an account
Create a new account: Add by creating a new account
Invite an existing account: Add by entering the root user email of an existing account
Account name
Required
Name of the account to be created
Enter within 3-30 characters using Korean, English, numbers, spaces, and special characters(+=-_@[](),.)
Email
Required
Email to be set as the root user of the new Account
Email Verification
Required
Re-verify email information
IAM Role Name
Required
IAM role name
Enter within 64 characters using English, numbers, special characters (+=-_@,.)
Root user email
Required
Root user email of the Account
If you select an existing Account invitation, enter only the Root user email
You can add up to 10 at the same time by clicking the Add button
Table. Adding an Organization Account
When the account creation and invitation notification popup window opens, click the Confirm button.
Reference
Up to 200 Accounts can be added.
The newly created Account can log in directly via email or access through an automatically generated role.
If you log in directly with your email, you must use the password finder to reset your password.
Account detailed information check
You can check and modify the detailed information of an Account.
To check the detailed information of an Account, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the View Account List button.
In the Account list, click the name of the Account whose details you want to check. It moves to the Account Details page.
The Account Details page consists of the Basic Information tab and the Control Policy tab.
Classification
Detailed Description
Exclude from Organization
A button to exclude the Account from the organization
When you click the button, a popup window opens to notify you of the Account exclusion
Direct: Policies directly connected to the organization unit
Inherited: Policies connected to the organization unit by inheritance
Revision Time
Last Revision Time of Control Policy
Table. Account's Control Policy Tab Items
Account Move
You can move Accounts between organizational units within the Organization.
To move an Account, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the View Account List button.
In the Account list, select the Account whose organizational unit you want to change, then click the Account Move button. It moves to the Account Move page.
On the Account Move page, select the organizational unit to move the Account to, then click the Complete button.
Classification
Detailed Description
Select Account
Enter the name of the organization unit
Organizational unit names are case-sensitive
Moving organizational unit
Select the organizational unit to move the Account
Organization Unit Name
Name of the organization unit
Organization Unit ID
ID of the organization unit
Organization Creation Time
The time when the organization unit was created
Table. Account Move Items
When the popup window notifying account transfer opens, check the transfer information and click the Confirm button.
Account Exclusion
You can exclude an Account from the Organization.
To exclude an Account from the Organization, follow these steps:
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the View Account List button.
Select the Account to be excluded from the Account list, then click the More > Exclude Account button.
When the Account exclusion notification popup window opens, click the Confirm button.
Notice
In the following cases, the Account cannot be excluded.
An Account that has not registered a payment method
An Account that has credit assigned to it
On the settlement date (the 1st of every month, Asia/Seoul GMT+09:00)
Account deletion
You can delete an Account.
To delete an Account, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the View Account List button.
Select the Account to be deleted from the Account list, then click the More > Delete Account button. The Delete Account popup window opens.
You can also delete an Account by clicking its name in the list and then clicking the Delete Account button on the Account Details page.
In the popup window, enter the name of the Account to be deleted, then click the Confirm button.
Reference
If you delete an Account, an Account deletion notification email is sent to the following users.
Administrator who created the Organization
Root user of the created Account
User with delegation for the created Account
Notice
When deleting from the Account list, you must select only one Account to be deleted.
Before deletion, all resources in the Account must be deleted.
Management Account and accounts joined through invitation cannot be deleted.
Managing Organization Units
You can create, check, and delete the organizational units that make up the Organization.
Creating an organizational unit
You can create a new organizational unit.
To create and add an organizational unit to the Organization, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the Hierarchical Structure View button.
Select the location to add an organizational unit in the hierarchical structure list, then click the Create organizational unit below button. It moves to the Create organizational unit page.
You can select only the Root or one existing organizational unit.
Organizational units can be created up to 5 levels below the Root.
On the Organization Unit Creation page, enter the information of the organizational unit to be added, then click the Complete button.
Classification
Necessity
Detailed Description
Organization Unit Name
Required
Enter the name of the organization unit
Organizational unit names are case-sensitive
Description
Optional
Enter a description of the organizational unit within 1,000 characters
Control Policy Connection
Required
Select a control policy to connect to the organizational unit
When the popup window for creating an organizational unit opens, click the Confirm button.
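The 5-level depth limit below Root can be enforced when modeling the hierarchy on the client side. A small sketch assuming a simple parent-pointer representation; the function and unit names are illustrative, not an SCP API:

```python
# Illustrative model of the organizational hierarchy: each unit records its
# parent, and Root has no parent. Names are hypothetical, not an SCP API.
MAX_DEPTH = 5  # organizational units may be created up to 5 levels below Root

parents = {"Root": None}

def depth_of(unit: str) -> int:
    """Number of levels below Root (Root itself is depth 0)."""
    d = 0
    while parents[unit] is not None:
        unit = parents[unit]
        d += 1
    return d

def create_unit(name: str, parent: str) -> None:
    """Register a new unit under parent, rejecting units deeper than MAX_DEPTH."""
    if depth_of(parent) + 1 > MAX_DEPTH:
        raise ValueError(f"cannot create '{name}': exceeds {MAX_DEPTH} levels below Root")
    parents[name] = parent

create_unit("div-a", "Root")        # depth 1
create_unit("team-a1", "div-a")     # depth 2
print(depth_of("team-a1"))          # 2
```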
Check detailed information of organizational units
You can check and modify the detailed information of an organizational unit.
To check the detailed information of an organizational unit, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the Hierarchy View button.
In the hierarchical structure list, click the name of the organizational unit whose details you want to check. It moves to the Organizational Unit Details page.
The Organizational Unit Details page consists of the Basic Information tab, Sub Items tab, and Control Policies tab.
Classification
Detailed Description
Delete Organization Unit
A button to delete the organization unit
When you click the button, a popup window opens to notify you of the organizational unit deletion
Direct: Policies directly connected to the organization unit
Inherited: Policies connected to the organization unit by inheritance
Last Modified Time
Last modified time of control policy
Table. Organizational Unit Details Page Control Policy Tab Items
Deleting an organizational unit
You can delete organizational units from the Organization.
Notice
To delete an organizational unit, the organizational unit must not have any subordinate elements.
To delete an organizational unit, follow these steps.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the View Hierarchy button.
Select the organizational unit to be deleted from the hierarchical structure list, then click the More > Delete Organizational Unit button.
When the popup window notifying the deletion of an organizational unit opens, click the Confirm button.
Control policy linking
You can attach control policies to an organizational unit or Account of the Organization.
To link a control policy, follow the next procedure.
Click the All services > Management > Organization menu. It moves to the Service Home page of Organization.
On the Service Home page, click the Organization Configuration menu. It moves to the Organization Configuration page.
On the Organization Structure page, click the View Hierarchy button.
In the hierarchical structure list, click the organizational unit or Account to which you want to link a control policy. It moves to the details page of that element.
Click the Control Policy tab on the details page. It moves to the Control Policy Link page.
After selecting the control policy to connect, click the Complete button.
Classification
Detailed Description
Connected Control Policy
Control policies currently connected to the organizational unit or Account
Control Policy Name
Title of the control policy
Type
Type of the control policy
Revision Time
Revision time of the control policy
Control Policy Linking
Select the control policies to be linked to the organizational unit
Table. Control Policy Link Items
When the popup window notifying the control policy connection opens, click the Confirm button.
Setting Method
Required
Basic Mode: Set using the mode provided by the Console
JSON Mode: Set directly using the JSON Editor
Service
Required
Select the service to set the control policy
Add Service: Add a service to set the control policy
Table. Organization Control Policy Creation - Service Settings
Caution
In the control policy settings, Basic Mode and JSON Mode are provided.
After writing in Basic Mode, when switching to JSON Mode or moving between screens, services with duplicate control requirements are merged into one, and services whose configuration is incomplete are deleted.
If the content written in JSON Mode does not conform to JSON format, it cannot be switched to Basic Mode.
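As the caution notes, content written in JSON Mode must parse as valid JSON before the Console can switch back to Basic Mode. A quick way to pre-check a draft is to round-trip it through a JSON parser; the field names below are illustrative, not the official SCP policy schema:

```python
import json

# A draft control policy written as a Python dict and serialized to JSON.
# The field names are illustrative, not the official SCP policy schema.
draft = {
    "controlType": "Deny",                  # deny policies take precedence
    "action": ["virtualserver:Delete*"],    # the wildcard * covers multiple actions
    "appliedResource": "all",
    "appliedIp": {"allowed": ["10.0.0.0/16"], "excluded": ["10.0.9.9"]},
}

text = json.dumps(draft, indent=2)

def is_valid_json(s: str) -> bool:
    """JSON Mode content must parse before switching back to Basic Mode."""
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json(text))              # well-formed draft -> True
print(is_valid_json(text.rstrip("}")))  # truncated draft -> False
```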
After setting the permissions, click the Next button.
Category
Required
Detailed description
Control Type
Required
Select control policy type
Allow Policy: Control policy that allows defined permissions
Deny Policy: Control policy that denies defined permissions
For the same target, the deny policy takes precedence
Action
Required
Select actions provided per service
Actions that can select individual resources are displayed in purple
Actions that target all resources are displayed in black
Add action directly: Using the wildcard *, multiple actions can be specified at once
Applied Resource
Required
Resources to which the action applies
All Resources: Apply to all resources for the selected action
Individual Resources: Apply only to specified resources for the selected action
Individual resources are only possible when selecting the purple action that allows individual resource selection
Click the Add Resource button to specify target resources by resource type
Authentication Method
Required
Authentication method of the user target to which the control policy will be applied
All authentication: Apply regardless of authentication method
Authentication key authentication: Apply to authentication key authentication users
Temporary key authentication, Console login: Apply to temporary key authentication or Console login users
Applied IP
Required
IP that allows control policy application
Custom IP: User directly registers and manages IP
Applied IP: IP that the user directly registers for control policy application, can be registered as IP address or range format
Excluded IP: IP to be excluded from Applied IP, can be registered as IP address or range format
All IP: No IP access restriction
Access is allowed for all IPs, but if an exception is needed, register Excluded IP to restrict access for the registered IPs
Additional Condition
Optional
Add condition for Attribute-Based Access Control (ABAC)
Condition Key: Select from Global Condition Keys and Service Condition Keys list
Qualifier: Default value, arbitrary value in request, all values in request
Operator: Bool, Null
Value: True, False
Table. Organization Control Policy Creation - Permission Settings
After confirming the information entered on the Check Input Information page, click the Complete button.
When the popup notifying the creation of a control policy opens, click the Confirm button. It navigates to the Integrated Policy List page.
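The Applied IP and Excluded IP rules above (allow listed addresses or ranges, then carve out exceptions) can be sketched with Python's standard ipaddress module. The addresses and ranges are made-up examples, not values from an actual policy:

```python
import ipaddress

# Illustrative Applied IP / Excluded IP evaluation: an address is accepted if
# it falls in an applied range and is not covered by an excluded entry.
# The addresses and ranges below are made-up examples.
applied = [ipaddress.ip_network("203.0.113.0/24")]
excluded = [ipaddress.ip_network("203.0.113.7/32")]

def ip_allowed(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    in_applied = any(ip in net for net in applied)
    in_excluded = any(ip in net for net in excluded)
    return in_applied and not in_excluded

print(ip_allowed("203.0.113.10"))  # in the applied range       -> True
print(ip_allowed("203.0.113.7"))   # explicitly excluded        -> False
print(ip_allowed("198.51.100.1"))  # outside the applied range  -> False
```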
Load Control Policy
When creating a control policy, you can load an existing policy and modify its requirements.
Note
When Load Policy is executed, all previously entered content will be deleted and replaced with the selected policy’s setting values.
To load an existing policy and create a comprehensive policy, follow the steps below.
Click the All Services > Management > Organization menu. It navigates to Organization’s Service Home page.
Click the Control Policy menu on the Service Home page. Navigate to the Control Policy List page.
On the Control Policy List page, click the Create Control Policy button. It navigates to the Create Control Policy page.
After entering items in the Basic Information area, click the Next button.
Category
Required
Detailed description
Control Policy Name
Required
Enter the name of the control policy
Enter using English letters, numbers, and special characters(+=-_@,.) within 3 to 128 characters
Description
Select
Enter a description of the control policy within 1,000 characters
Table. Organization Control Policy Creation - Basic Information Settings
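The naming rule in the table (3 to 128 characters drawn from English letters, numbers, and the special characters +=-_@,.) can be checked up front before submitting the form. A minimal sketch:

```python
import re

# Naming rule from the table above: 3-128 characters drawn from
# English letters, digits, and the special characters + = - _ @ , .
NAME_PATTERN = re.compile(r"^[A-Za-z0-9+=\-_@,.]{3,128}$")

def is_valid_policy_name(name: str) -> bool:
    return bool(NAME_PATTERN.fullmatch(name))

print(is_valid_policy_name("deny-stop_prod@team"))  # True
print(is_valid_policy_name("ab"))                   # too short: False
```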
In the Control Requirement Setting area, click the Load Control Policy button. The Load Control Policy popup window opens.
After selecting the control policy to load from the control policy list, click the Confirm button. The settings of the loaded policy will be entered automatically.
After editing the information that needs to be changed, click the Next button.
After confirming the information entered on the Check Input Information page, click the Complete button. You will be taken to the Integrated Policy List page.
Register individual resources as applied resources
During permission setting, you can register individual resources as applied resources.
To register an individual resource as an applied resource, follow the steps below.
Click the All Services > Management > Organization menu. You will be taken to the Organization Service Home page.
On the Service Home page, click the Control Policy menu. You will be taken to the Control Policy List page.
On the Control Policy List page, click the Create Control Policy button. You will be taken to the Create Control Policy page.
After entering items in the Basic Information area, click the Next button.
Category
Required
Detailed description
Control Policy Name
Required
Enter the name of the control policy
Use English letters, numbers, special characters(+=-_@,.) within 3~128 characters
Description
Select
Enter a description of the control policy within 1,000 characters
Table. Organization Control Policy Creation - Basic Information Settings
In the Control Requirement Setting area, select the service to which the control policy will be applied, then click the Next button.
In the Action selection area, select an Action that allows individual resource selection.
Actions that allow individual resource selection are displayed in purple.
Click Individual Resource in Applied Resources.
Click the Add Resource button. The Add Resource popup window opens.
Category
Required
Detailed description
Resource Type
Required
Select the type of resource to add
SRN
-
Unique resource ID in Samsung Cloud Platform
Automatically updated according to the input items below
Account
Required
Set Account ID
Current Account: Current Account ID is auto-filled and cannot be edited
All Accounts: Add to all Accounts (not recommended)
Manual Input: Manually enter the Account ID using lowercase English letters and numbers, up to 100 characters (wildcard input not allowed)
Region
Select
Directly input the resource’s region information within 100 characters
When Select All is checked, resources from all regions are added
Resource ID
Required
Directly enter the resource ID to add, within 100 characters
When Select All is checked, all resources of the corresponding resource type are added
Table. Organization Control Policy Creation - Add Resource Items
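The table notes that the SRN is updated automatically from the fields below it (Account, Region, Resource ID). The exact SRN format is not documented here, so the sketch below uses an assumed colon-separated layout purely for illustration.

```python
# Hypothetical SRN builder. The real SRN layout is not specified in this
# guide; the "srn:scp:<service>:<region>:<account>:<resource>" shape below
# is an assumption for illustration only.
def build_srn(service: str, region: str, account_id: str, resource_id: str) -> str:
    region = region or "*"            # Select All -> match every region
    resource_id = resource_id or "*"  # Select All -> every resource of the type
    return f"srn:scp:{service}:{region}:{account_id}:{resource_id}"

print(build_srn("virtualserver", "kr-west1", "acct1234", "vm-0001"))
print(build_srn("virtualserver", "", "acct1234", ""))  # all regions/resources
```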
When the setup is complete, click the Next button. It will navigate to the Check Input Information page.
After verifying the entered information, click the Complete button. You will be redirected to the Integrated Policy List page.
Check detailed control policy information
Control Policy Details page allows you to view and edit detailed information of the control policy.
To view detailed information of a control policy, follow the steps below.
Click the All Services > Management > Organization menu. You will be taken to the Organization Service Home page.
Click the Control Policy menu on the Service Home page. Navigate to the Control Policy List page.
Click the control policy to view detailed information on the Control Policy List page. You will be taken to the Control Policy Details page.
The Control Policy Details page displays basic information and consists of the Basic Information, Control Requirements, and Connected Targets tabs.
Basic Information
Check the basic information of the control policy, and if necessary, you can edit the policy name and description.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
For a control policy, this is the policy name
Resource ID
Unique Resource ID
Creator
User who created the service
Creation time
Service creation time
Editor
User who edited the service information
Modification Date
Date the service information was modified
Control Policy Name
Control Policy’s Name
Click the Edit button to change the name
Type
Control Policy Type
Basic: Basic control policy provided by Samsung Cloud Platform
Custom: Control policy directly created by the user
Description
Explanation of control policy
Click the Edit button to change the description
Table. Control Policy Basic Information Tab Items
Control Requirements
You can view services with permissions set in the current control policy.
Basic mode and JSON mode can be checked.
Clicking the arrow to the right of the service name will display the control requirements set for that service.
Note
Click the Edit button to modify the control requirements. For details on the edit items, see Create Control Policy.
Category
Detailed description
Control Type
Control Policy Control Type
Allow Policy: Control policy that allows the defined permissions
Deny Policy: Control policy that denies the defined permissions
Action
Provided functions of each service that are subject to the control policy
Applicable Resources
Resources to which the action is applied
All Resources: Applied to all resources for the selected action
Individual Resources: Applied only to specified resources for the selected action
Authentication Type
Authentication method of the user target to which the control policy will be applied
All authentication: Apply regardless of authentication method
Authentication key authentication: Apply to authentication key authentication users
Temporary key authentication, Console login: Apply to temporary key authentication or Console login users
Applicable IP
IP addresses to which the control policy applies
Custom IP: The user registers and manages IPs directly
Applied IP: IPs the user registers for the control policy; can be entered as an IP address or a range
Excluded IP: IPs to exclude from the Applied IPs; can be entered as an IP address or a range
All IPs: No IP access restriction
Access is allowed from all IPs; if exceptions are needed, register Excluded IPs to block access from those IPs
Table. Control Policy's Control Requirements Tab Items
Connection Target
You can view the organizational units and accounts directly linked to the control policy.
Note
Policies linked to Root and organizational units are inherited by child items.
Category
Detailed description
Root
Root connection status and the number of control policies connected to Root are displayed
Connect or Disconnect button can be clicked to connect or disconnect from Root
Organization Unit
Organizational units currently linked to the control policy and the total number of control policies linked to each organizational unit
Disconnect: Disconnect the selected organization unit in the organization unit list
Connect Organization Unit: Go to the Connect Organization Unit page
Account
Accounts currently linked to the control policy and the total number of control policies linked to each Account
Disconnect: Disconnect the selected Account from the Account list
Account Connect: Go to the Account Connect page
Table. Policy's linked target tab items
Connect organization unit
You can link organizational units to the control policy.
To connect the organizational unit, follow the steps below.
Click the All Services > Management > Organization menu. You will be taken to the Organization Service Home page.
On the Service Home page, click the Control Policy menu. You will be taken to the Control Policy List page.
On the Control Policy List page, click the control policy to which the organizational unit will be connected. You will be taken to the Control Policy Details page.
Click the Connection Target tab on the Control Policy Details page.
Click the Organizational Unit Connection button in the Organizational Unit area. You will be taken to the Organizational Unit Connection page.
After selecting the organizational unit to connect, click the Complete button.
Category
Detailed description
Organization Unit/Account Name
Displays the organization unit and account names in a hierarchical structure
Click the +, - buttons to expand or collapse the hierarchy
ID/email
Organization unit shows ID, Account shows ID and email
Creation Date
For an organizational unit, shows the creation date; for an Account, shows the creation or registration date
Table. Organizational Unit Connection Items
When the popup notifying the connection opens, click the Confirm button.
Account Connect
You can link an Account to a control policy.
To connect Account, follow the steps below.
Click the All Services > Management > Organization menu. You will be taken to the Organization Service Home page.
On the Service Home page, click the Control Policy menu. You will be taken to the Control Policy List page.
On the Control Policy List page, click the control policy to which the Account will be linked. You will be taken to the Control Policy Details page.
On the Control Policy Details page, click the Connection Target tab.
Click the Account Connection button in the Account area. You will be taken to the Account Connection page.
After selecting the Account to connect, click the Complete button.
Category
Detailed description
Organization Unit/Account Name
Displays the organization unit and account names in a hierarchical structure
Click the +, - buttons to expand or collapse the hierarchy
ID/email
Organization unit shows ID, Account shows ID and email
Creation Date
For an organizational unit, shows the creation date; for an Account, shows the creation or registration date
Table. Account connection items
When a popup notifying the connection opens, click the Confirm button.
Delete control policy
You can delete the control policy.
Notice
To delete a control policy, there must be no elements linked to the control policy.
To delete the control policy, follow the steps below.
Click the All Services > Management > Organization menu. You will be taken to the Organization Service Home page.
On the Service Home page, click the Control Policy menu. You will be taken to the Control Policy List page.
On the Control Policy List page, click the control policy to delete. You will be taken to the Control Policy Details page.
On the Control Policy Details page, click the Delete Control Policy button.
When the popup notifying the deletion of the control policy opens, click the Confirm button.
10.8.3 - API Reference
API Reference
10.8.4 - CLI Reference
CLI Reference
10.8.5 - Release Note
Organization
2025.10.23
FEATURE Account deletion feature improvement
You can also delete the Account created in the Organization from the Member Account.
Deletable Accounts are limited to Accounts directly created in the Organization.
2025.07.01
NEW Organization Service Official Launch
Organization service has been officially launched.
Account can be organized by organizational units, managed hierarchically, and resource access permissions can be controlled.
You can monitor the resource usage of all accounts within the organization to optimize costs.
10.9 - Resource Explorer
10.9.1 - Overview
Service Overview
Resource Explorer is a service that provides search for resources created on Samsung Cloud Platform.
Provided Features
Resource Explorer provides the following features.
Resource Search: You can search for resources across multiple regions and accounts using resource names, service names, resource types, and more.
Multi-Region Search: You can find resources across multiple regions with a single search.
Multi-account search: You can search for resources across all accounts in Organizations. (Scheduled to be released after July 25)
Integrated Search: You can search for resources through the integrated search function. (Scheduled to be released after July 25)
Filtering feature: You can filter search results using resource names, regions, tags, and more.
Tag addition feature: You can add tags to multiple resources in bulk.
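The filtering feature above (narrowing results by name, service, type, and region) can be reproduced client-side once search results are in hand. A sketch over plain dictionaries, where the field names (`name`, `service`, `type`, `region`) are illustrative assumptions about the result shape:

```python
# Client-side filtering of resource search results, mirroring the console's
# advanced search. Field names are illustrative assumptions only.
def filter_resources(resources, name=None, services=None, types=None, regions=None):
    def matches(r):
        if name and name.lower() not in r["name"].lower():
            return False
        if services and r["service"] not in services:
            return False
        if types and r["type"] not in types:
            return False
        if regions and r["region"] not in regions:
            return False
        return True
    return [r for r in resources if matches(r)]

resources = [
    {"name": "web-01", "service": "Virtual Server", "type": "VM", "region": "kr-west1"},
    {"name": "logs", "service": "Object Storage", "type": "Bucket", "region": "kr-east1"},
]
print(filter_resources(resources, services=["Virtual Server"], regions=["kr-west1"]))
```

Each filter is optional, matching the console behavior where unused advanced-search fields simply do not constrain the result.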
10.9.2 - How-to Guides
Users can search for resources in their account and region through the Resource Explorer service on the Samsung Cloud Platform Console, add tags, and navigate to resource details.
Searching for Resources with Resource Explorer
You can search for resources through Resource Explorer on the Samsung Cloud Platform Console.
To search for resources using Resource Explorer, follow these steps:
Click the All Services > Management > Resource Explorer menu. You will be taken to the Resource Explorer list page.
On the Resource Explorer page, you can search for resources in various ways.
Category
Detailed Description
Input Box
Enter text in the input box, then select from the suggested results
Resource Name: Resource name
Service Name: Service name of the resource
Resource Type: Type of resource
Advanced Search
Click the Advanced Search button to search for additional items
Resource Name: Resource name
Service Name: Service name (multiple selections possible, searchable)
Resource Type: Type of resource (multiple selections, searchable)
Region (multiple selections, searchable)
Table. Resource Explorer Resource Search
Adding Tags with Resource Explorer
You can add tags through Resource Explorer on the Samsung Cloud Platform Console.
To add tags using Resource Explorer, follow these steps:
Note
If you select a resource that already has more than 50 tags added, the Add Tag button will be deactivated.
Click the All Services > Management > Resource Explorer menu. You will be taken to the Resource Explorer list page.
On the Resource Explorer list page, select the checkbox of the resource, and the Add Tag button will be activated at the top of the list.
Click the Add Tag button. You will be taken to the Add Tag page.
Enter key and value, then click the Complete button to add a tag.
You can add multiple tags by clicking the Add Tag button.
Guide
You can add tags by selecting or entering key and value in the list.
The list icon in the input window is activated when there is a selectable list, and the value list varies depending on the key.
Up to 50 tags can be added per selected resource.
If a tag with the same key is already applied, it will be replaced with the value of the newly added tag.
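The two rules in the Guide above (at most 50 tags per resource, and a tag with an existing key is replaced by the new value) describe a merge. A minimal sketch:

```python
# Tag-merge behavior described above: a tag whose key already exists is
# replaced by the new value, and a resource may hold at most 50 tags.
MAX_TAGS = 50

def add_tags(existing: dict, new: dict) -> dict:
    merged = {**existing, **new}   # same key -> new value wins
    if len(merged) > MAX_TAGS:
        raise ValueError(f"a resource may have at most {MAX_TAGS} tags")
    return merged

tags = add_tags({"env": "dev", "team": "infra"}, {"env": "prod"})
print(tags)  # {'env': 'prod', 'team': 'infra'}
```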
Moving to Resource Details with Resource Explorer
You can move to the resource details page through the Resource Explorer screen on the Samsung Cloud Platform Console and check the detailed information of the resource.
To move to the resource details page using Resource Explorer, follow these steps:
Click the All Services > Management > Resource Explorer menu. You will be taken to the Resource Explorer list page.
On the Resource Explorer list page, click the resource name to move to the detailed screen of the corresponding resource.
Some resource types do not provide a detailed screen.
10.9.3 - Release Note
Resource Explorer
2025.02.27
NEW Resource Explorer Release
A search service for resources has been released.
Resources from multiple regions can be checked at once through the Resource Explorer.
10.10 - Resource Groups
10.10.1 - Overview
Service Overview
Resource groups is a service that groups resources for management.
Provided Features
Resource groups provide the following features.
Resource Grouping: Resources can be logically grouped based on tags.
Tag-based query: You can search and group resources using tags.
Resource Search: You can search for resources that match the specified query.
10.10.2 - How-to Guides
You can create a resource group to set up resources that match certain conditions to be displayed as a group.
Creating a Resource Group
You can create and use the Resource Group service in the Samsung Cloud Platform Console.
To create a Resource Group, follow these steps:
Click on All Services > Management > Resource Group menu. You will be moved to the Resource Group List page.
On the Resource Group List page, click the Create Resource Group button. You will be moved to the Create Resource Group page.
On the Create Resource Group page, enter the necessary information for service creation and select detailed options.
In the Service Information Input section, select the necessary information.
Category
Required
Detailed Description
Resource Group Name
Required
Enter group name
Description
Enter description
Resource Type
Required
Select all or multiple resource types
Group Definition Tag
Required
Set the tag criteria for grouping
Key
Value
Target Resource
Click the resource preview button to check and select the target resource
Table. Resource Group Creation Input Information
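The Group Definition Tag above selects a group's members: a resource belongs to the group when it carries every key/value pair of the criteria. A sketch of that matching logic, where the resource shape is an illustrative assumption:

```python
# Tag-based grouping: a resource is a group member when it carries every
# key/value pair in the group-definition tag criteria.
def group_members(resources, group_tags: dict):
    return [
        r for r in resources
        if all(r.get("tags", {}).get(k) == v for k, v in group_tags.items())
    ]

resources = [
    {"name": "web-01", "tags": {"env": "prod", "team": "web"}},
    {"name": "db-01", "tags": {"env": "prod", "team": "db"}},
]
print([r["name"] for r in group_members(resources, {"env": "prod", "team": "web"})])
```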
In the Additional Information Input section, enter or select the necessary information.
Category
Required
Detailed Description
Lock
Optional
Set whether to use Lock
Using Lock prevents accidental actions such as server termination, start, and stop
Init Script
Optional
Script to run when the server starts
The init script should be written in Batch script for Windows or Shell script or cloud-init for Linux
Up to 45,000 bytes can be entered
Tag
Optional
Add a tag
Up to 50 tags can be added per resource
Click the Add Tag button and enter or select the Key and Value
Table. Resource Group Additional Information Input Items
Review the input information and click the Complete button.
Once created, you can check the created resource on the Resource Group List page.
Checking Resource Group Details
The Resource Group service allows you to check and modify the resource group list and detailed information. The Resource Group Details page consists of Basic Information, Group Resources, and Tags tabs.
To check the detailed information of the Resource Group service, follow these steps:
Click on All Services > Management > Resource Group menu. You will be moved to the Resource Group List page.
On the Resource Group List page, click on the resource you want to check the details for. You will be moved to the Resource Group Details page.
The Resource Group Details page displays status information and additional feature information, and consists of Basic Information, Group Resources, and Tags tabs.
Basic Information
You can check and modify the detailed information of the selected resource on the Resource Group List page.
Category
Detailed Description
Service
Service category
Resource Type
Service type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource name
Resource ID
Unique resource ID in the service
Creator
Creator’s name
Creation Time
Creation date and time
Modifier
Modifier’s name
Modification Time
Modification date and time
Resource Group Name
Name entered by the user
Description
Description entered by the user
Table. Resource Group Basic Information Tab Items
Group Resources
You can check and modify the group resources of the selected resource on the Resource Group List page.
Category
Detailed Description
Group Resources > Resource Type
Group resource type
Group Resources > Group Definition Tag
Tag added when creating the resource group
Target Resource
List of resources grouped by the group definition tag
Resource Name: Resource name
Resource ID: Resource ID
Service Name: Service name of the resource
Resource Type: Resource type
Tag: Number of tags for the resource. Click the View Tag button to check all tags for the resource
Creation Time: Resource creation time
Table. Resource Group Group Resources Tab Items
Tags
You can check, add, modify, or delete the tag information of the selected resource on the Resource Group List page.
Category
Detailed Description
Tag List
Tag list
Check the Key and Value information of the tag
Up to 50 tags can be added per resource
Search and select from existing Key and Value lists when entering tags
Table. Resource Group Tags Tab Items
Managing Resource Groups
Modifying Resource Group Basic Information
You can modify the description of a Resource Group. To modify the description of a Resource Group, follow these steps:
Click on All Services > Management > Resource Group menu. You will be moved to the Resource Group list page.
On the Resource Group list page, click on the Resource Group name of the resource you want to modify. You will be moved to the Resource Group Details page.
Click the Modify button next to the Resource Group name. The Modify Description popup window opens. Modify the description and click the Confirm button. You can check the modified description on the Resource Group Details page.
Modifying Resource Group Group Resources
You can modify the group resources of a Resource Group. To modify the group resources of a Resource Group, follow these steps:
Click on All Services > Management > Resource Group menu. You will be moved to the Resource Group list page.
On the Resource Group list page, click on the Resource Group name of the resource you want to modify. You will be moved to the Resource Group Details page. Click on the Group Resources tab.
Click the Modify Group Resources button. You will be moved to the Modify Resource Group page.
Modify the Resource Type and Group Definition Tag information, and then click Save.
Category
Required
Detailed Description
Resource Type
Required
Select all or multiple resource types
Group Definition Tag
Required
Set the tag criteria for grouping
Key
Value
Target Resource
Click the resource preview button to check and select the target resource
Table. Resource Group Resource Modification Input Information
Check the modified information on the Resource Group Details page.
Deleting a Resource Group
You can delete an unused Resource Group. However, once a Resource Group is deleted, it cannot be recovered.
To delete a Resource Group, follow these steps:
Click on All Services > Management > Resource Group menu. You will be moved to the Resource Group List page.
On the Resource Group List page, click on the resource you want to delete. You will be moved to the Resource Group Details page.
On the Resource Group Details page, click the Delete Resource Group button. Check the message in the Notification popup window and click the Confirm button.
Alternatively, on the Resource Group List page, you can select multiple Resource Groups using the checkboxes and click the Delete button at the top of the resource list.
10.10.3 - Release Note
Resource Groups
2025.02.27
NEW Resource Groups Release
We have launched a service that efficiently manages resources through grouping.
Resources can be logically grouped and managed based on tags.
10.11 - ServiceWatch
10.11.1 - Overview
Service Overview
ServiceWatch is a service that collects metrics, logs, and events of resources created on the Samsung Cloud Platform, monitors them, and provides various tools to offer observability of resource performance, operational status, and more.
Key Advantages
It provides the following key advantages.
Resource Monitoring: Collects and visualizes performance metrics (e.g., CPU Usage) of resources. It also creates a dashboard that visualizes multiple metrics in one place for a quick overview.
Alert Policy Configuration and Automatic Notifications: Users can define conditions and thresholds to create alert policies, and receive notifications when thresholds are exceeded, enabling rapid detection and response to resource status.
Log Analysis and Storage: Collects logs generated by resources for easy retrieval and searching. Collected logs are stored in log groups, with up to 5 GB of storage provided for free. Users can set log retention policies to define retention periods, and logs older than the retention period are automatically managed.
Cost Efficiency: ServiceWatch offers a flexible pay-as-you-go pricing model for cost-effective usage. A free tier is also provided, allowing users to try the service for free and scale to paid usage as needed.
Features Provided
The following features are provided.
Metric Monitoring
Metrics: ServiceWatch receives metric data from Samsung Cloud Platform services, collects and stores this data, and makes it available to users.
Dashboard: Visualizes metrics of a single region to provide an integrated view of resources. ServiceWatch distinguishes between automatically generated service dashboards and user-created custom dashboards.
Alert: Offers alert functionality that notifies when metrics change beyond user-defined thresholds.
Log Monitoring
ServiceWatch provides log management capabilities. Logs collected from Samsung Cloud Platform services are stored in log groups for management. Log retention policies can be set to control log retention periods. Users can also view and search log data via the console, and export log groups to Object Storage.
ServiceWatch Agent
With the ServiceWatch Agent, detailed metrics on processes, CPU, memory, disk usage, and network performance can be collected from Virtual Server, GPU Server, Bare Metal Server, etc. GPU performance metrics can also be collected. Additionally, the Agent can collect logs generated by resources. (Planned for December 2025)
Event Monitoring
ServiceWatch can create event rules from system events reflecting changes to resources created on Samsung Cloud Platform, allowing users to receive notifications when specific conditions are met.
Components
Metrics
Metrics refer to performance data of a system. By default, resources of services integrated with ServiceWatch provide basic monitoring based on free metrics. Additionally, services such as Virtual Server can enable detailed monitoring to provide paid metrics.
Metric data can be retained for up to 15 months (455 days). For detailed information about metrics, see Metrics.
Logs
Logs from resources such as Virtual Server, Kubernetes Engine, and other services on Samsung Cloud Platform can be collected, stored, and viewed.
Events
Events represent changes in the environment from Samsung Cloud Platform services. The following are examples of events.
An event is generated when a Virtual Server’s status changes from Stopped to Running.
An event is generated when a new bucket is created in Object Storage.
An event is generated when an IAM user is removed from a user group.
For detailed information about events, see Events.
Dashboards
ServiceWatch provides pre-built service dashboards for each service automatically, and users can also create custom dashboards.
Notice
Pre-built dashboards for each service will be available from March 2026.
ServiceWatch Agent
The ServiceWatch Agent is a software component that collects metrics and logs from Virtual Server, GPU Server, and On-Premise servers, enabling more granular monitoring of infrastructure and applications beyond the basic monitoring provided by default.
Note
User-defined metric/log collection via the ServiceWatch Agent is currently available only for Samsung Cloud Platform For Enterprise. It will be available for other offerings in the future.
Limitations
Metric Query Period
Metrics can be queried for up to 455 days from the current point
Applies to dashboards, widgets, and metric queries
Number of Metrics per Query
Up to 500 metrics can be selected for visualization in a graph
Metric Image File Download
Images can be downloaded for up to 100 metrics
Metric Export to Object Storage
Export up to 10 metrics; the query period can be up to 2 months (63 days) for metric data
Widget/Metric Count per Dashboard
Up to 500 widgets per dashboard
Up to 500 metrics per widget
Up to 2,500 total metrics can be added across all widgets in a single dashboard
Number of Alert Policies
Up to 5,000 per account/region
Alert History
Alert history is available for 30 days
Recipients per Alert Policy
Up to 100
Number of Log Groups
Up to 10,000 per account/region
Log Download
When downloading Excel, up to 1 MB per log event and up to 10,000 log events can be downloaded
If a log event exceeds 1 MB or the number of log events exceeds 10,000, it is recommended to use log group export.
Concurrent Log Group Export Tasks
One export task can be executed per account at a time.
Log Event Size
Up to 1 MB
Number of Event Rules
Up to 300 per account/region
Event Pattern Size
Up to 2 MB
Recipients per Event Rule
Up to 100
Table. ServiceWatch Limitations
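The dashboard limits in the table above compose: a per-dashboard widget cap, a per-widget metric cap, and an overall metric cap per dashboard. A small sketch that checks a planned dashboard against all three:

```python
# Dashboard limits listed above: up to 500 widgets per dashboard,
# 500 metrics per widget, and 2,500 metrics across all widgets.
MAX_WIDGETS = 500
MAX_METRICS_PER_WIDGET = 500
MAX_METRICS_PER_DASHBOARD = 2500

def validate_dashboard(widget_metric_counts):
    """widget_metric_counts: number of metrics referenced by each widget."""
    if len(widget_metric_counts) > MAX_WIDGETS:
        return False
    if any(n > MAX_METRICS_PER_WIDGET for n in widget_metric_counts):
        return False
    return sum(widget_metric_counts) <= MAX_METRICS_PER_DASHBOARD

print(validate_dashboard([500, 500, 500, 500, 500]))  # exactly 2,500: True
print(validate_dashboard([501]))                      # over per-widget cap: False
```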
The following shows the monthly free tier details for ServiceWatch.
Category
Free Tier
Logs
Up to 5 GB of storage per month
Metrics
Basic monitoring metrics for each service
Up to 10 detailed/custom/log pattern metrics per month
Dashboards
Up to 3 dashboards per month referencing 50 or fewer metrics
If a dashboard references more than 50 metrics, one dashboard charge per month applies.
Alert Policies
Up to 10 per month
Table. ServiceWatch Free Tier
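The free-tier boundaries in the table above can be compared against a month's usage to see where paid charges would begin. A sketch (the usage keys are illustrative names, and the dashboard rule is simplified to a count):

```python
# Free-tier boundaries from the table above: 5 GB of log storage,
# 10 detailed/custom/log-pattern metrics, 3 dashboards, and
# 10 alert policies per month. Keys are illustrative names.
FREE_TIER = {"log_gb": 5, "metrics": 10, "dashboards": 3, "alert_policies": 10}

def monthly_overage(usage: dict) -> dict:
    """Return how far each category exceeds the free tier (0 if within)."""
    return {k: max(0, usage.get(k, 0) - limit) for k, limit in FREE_TIER.items()}

print(monthly_overage({"log_gb": 7, "metrics": 10, "dashboards": 4, "alert_policies": 2}))
```

Note the sketch ignores the per-dashboard 50-metric condition; a dashboard referencing more than 50 metrics incurs one dashboard charge per month regardless of the count of dashboards.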
Regional Availability
ServiceWatch is available in the following regions.
Region
Availability
Korea West (kr-west1)
Available
Korea East (kr-east1)
Available
Korea South 1 (kr-south1)
Available
Korea South 2 (kr-south2)
Available
Korea South 3 (kr-south3)
Available
Table. ServiceWatch Regional Availability
Prerequisite Services
ServiceWatch has no prerequisite services.
10.11.1.1 - Metrics
Metrics
Metrics are data on system performance. By default, many services provide free metrics for resources (e.g., Virtual Server, File Storage) which are offered as basic monitoring through ServiceWatch. Detailed monitoring can be enabled for certain resources such as Virtual Server.
Metric data is retained for 15 months (455 days), allowing access to both recent and historical data.
Term
Example
Description
Namespace
Virtual Server
Logical separation used to categorize and group metrics
Changes to an Alert state when CPU usage exceeds 80% continuously for 5 minutes
Table. ServiceWatch Metric Terminology
Namespace
Namespace is a logical separation used to distinguish and group metrics in ServiceWatch. Most Samsung Cloud Platform services use a namespace that matches the service name, which can be found in the ServiceWatch Integrated Services List.
For custom metrics, users can define a namespace that distinguishes them from other metrics within ServiceWatch, either via ServiceWatch Agent configuration or via the OpenAPI. For details on custom metrics and logs, see Custom Metrics and Logs.
Metrics
Metrics represent an ordered set of data points collected by ServiceWatch over time. Each data point consists of a timestamp, the collected value, and the unit of measurement.
For example, the CPU usage of a specific Virtual Server is one of the default monitoring metrics provided by Virtual Server. Data points can be generated by any application or activity that collects data.
By default, Samsung Cloud Platform services integrated with ServiceWatch provide free metrics for resources. Detailed monitoring for certain resources is available as a paid offering and can be enabled per service.
Metrics can only be queried in the region where they were generated. Users cannot manually delete metrics. However, if no new data is posted to ServiceWatch, metrics automatically expire after 15 months. Data points older than 15 months (455 days) are sequentially expired, and when new data points are added, those older than 15 months are deleted.
Timestamp
The timestamp of a data point indicates the time at which the data point was recorded. Each metric data point is composed of a timestamp and a value.
A timestamp consists of the date, hour, minute, and second.
Metric Retention Period
ServiceWatch retains metric data as follows.
- Data points with a collection interval of 60 seconds (1 minute) are retained for up to 15 days.
- Data points with a collection interval of 300 seconds (5 minutes) are retained for up to 63 days.
- Data points with a collection interval of 3600 seconds (1 hour) are retained for up to 455 days (15 months).
Data points initially collected at a short interval are downsampled for long-term storage. For example, if data is collected at a 1‑minute interval, it is retained at 1‑minute granularity for 15 days. After 15 days, the data remains available but can only be queried at 5‑minute intervals. After 63 days, the data is further aggregated and provided at 1‑hour intervals. If metric data points need to be retained longer than the retention period, they can be separately archived using the File Download or Object Storage Export features.
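The downsampling described above can be sketched as follows. This is a minimal illustration of the idea (grouping fine-grained points into coarser buckets and averaging), not ServiceWatch's actual implementation; the function name and data shapes are hypothetical.

```python
from statistics import mean

def downsample(points, bucket_seconds):
    """Group (timestamp, value) points into fixed-width buckets and average each.

    points: list of (unix_timestamp, value) pairs at a fine granularity.
    Returns coarser (bucket_start, mean_value) pairs, one per bucket.
    """
    buckets = {}
    for ts, value in points:
        bucket_start = ts - (ts % bucket_seconds)  # align to the bucket boundary
        buckets.setdefault(bucket_start, []).append(value)
    return sorted((start, mean(vals)) for start, vals in buckets.items())

# Five 1-minute CPU samples collapse into a single 5-minute data point.
one_minute = [(0, 10.0), (60, 20.0), (120, 30.0), (180, 40.0), (240, 50.0)]
print(downsample(one_minute, 300))  # [(0, 30.0)]
```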
Dimensions
Key-value pairs that serve as unique identifiers for metrics, allowing data points to be categorized and filtered.
For example, the resource_id dimension of Virtual Server metrics can be used to identify the metrics of a specific server.
Collection Interval
Refers to the frequency at which data points for each service’s metrics are collected; each service defines its own collection interval.
Refer to each service’s ServiceWatch metric page for the metric collection intervals.
For example, Virtual Server provides a 5‑minute collection interval for basic monitoring, and a 1‑minute interval when detailed monitoring is enabled.
Statistics
Statistics define how metric data is aggregated over a specified period. ServiceWatch provides aggregated data based on the metric data points supplied by each service to ServiceWatch. Aggregation uses the namespace, metric name, dimensions, and data point units within the defined aggregation period.
The provided statistics are Sum, Average, Minimum, and Maximum.
Sum: The sum of all data point values collected during the period.
Average: The sum of all data point values divided by the number of data points during the period.
Minimum: The lowest observed value during the period.
Maximum: The highest observed value during the period.
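The four statistics above can be sketched over a single period’s data points. This is an illustrative sketch; the function name and sample values are hypothetical.

```python
def aggregate(values, statistic):
    """Compute one of the four ServiceWatch statistics over a period's data points."""
    if not values:
        return None  # no data points collected in the period
    return {
        "Sum": sum(values),
        "Average": sum(values) / len(values),
        "Minimum": min(values),
        "Maximum": max(values),
    }[statistic]

cpu = [72.0, 75.5, 81.0, 79.5]  # hypothetical CPU % data points in one period
print(aggregate(cpu, "Average"))  # 77.0
```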
Units
Each statistic has a measurement unit. Examples of units include Bytes, Seconds, Count, Percent, etc.
Aggregation Period
Each statistic calculates metric data points collected over the chosen aggregation period. The aggregation period can be selected from 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours, and 1 day, with a default of 5 minutes. The aggregation period is closely tied to the metric data point collection interval, and for valid aggregation results, the aggregation period must be equal to or longer than the collection interval.
For example, if the statistic is Average, the aggregation period is set to 5 minutes, and a metric with a 1‑minute collection interval is selected, data points are collected every minute and the average is computed over the 5‑minute window. Conversely, if the aggregation period is shorter than the collection interval, valid aggregation results cannot be obtained.
Downsampling is applied for long‑term retention of metric data. For example, if data is collected at a 1‑minute interval, after 15 days it can only be queried at 5‑minute granularity. Setting the aggregation period from 5 minutes to 30 minutes for such metrics may require up to 5 minutes to retrieve the downsampled data. After 63 days, the data is further aggregated and provided at 1‑hour intervals. Selecting an aggregation period from 1 hour to 1 day may require up to 1 hour for data retrieval due to aggregation processing delays.
Note
When retrieving metric data, aggregation delays may cause the most recent data point not to be displayed. In such cases, reducing the aggregation period or querying after a certain time (5 minutes or 1 hour) will allow the data to appear correctly.
| Aggregation Period | Aggregation Delay |
|--------------------|-------------------|
| 1 minute | - |
| 5 minutes | Up to 5 minutes |
| 15 minutes | Up to 5 minutes |
| 30 minutes | Up to 5 minutes |
| 1 hour | Up to 1 hour |
| 3 hours | Up to 1 hour |
| 6 hours | Up to 1 hour |
| 12 hours | Up to 1 hour |
| 1 day | Up to 1 hour |

Table. ServiceWatch aggregation delay by aggregation period
Alerts
When creating an alert policy, a metric is evaluated over the specified evaluation window, and if it meets the condition set based on the threshold, users receive an alert notification.
Alert states are Alert, Normal, and Insufficient data.
Alert: The metric meets the set condition.
Normal: The metric does not meet the set condition.
Insufficient data: No metric data exists, the metric data is missing, or data has not yet arrived.
When the alert state is Alert, if the evaluation later falls outside the condition, the alert state reverts to Normal.
For more details on alerts, see the Alert section.
Basic Monitoring and Detailed Monitoring
ServiceWatch offers two types of monitoring: basic monitoring and detailed monitoring.
Samsung Cloud Platform services integrated with ServiceWatch publish a basic set of metrics to ServiceWatch for free, providing basic monitoring. As soon as any of those services are used, basic monitoring is automatically enabled and visible in ServiceWatch.
Detailed monitoring is available for select services and incurs charges. To use detailed monitoring, it must be enabled in the service’s detailed settings.
The detailed monitoring options vary depending on the service.
For Virtual Server, the default monitoring collection interval is 5 minutes. Enabling detailed monitoring changes the collection interval for those metrics from 5 minutes to 1 minute.
Object Storage provides basic metrics for default monitoring, and enabling replication metrics adds additional replication metrics.
The following includes services that provide detailed monitoring and their guides.
You can create alerts that monitor metrics and send notifications. For example, you can monitor the CPU usage and disk read/write of a Virtual Server and notify a user so that increased load can be handled.
Alert policies monitor metrics within the same Account and evaluate alerts for a single metric. An alert policy compares the metric against the specified threshold and conditions, and sends notifications when the conditions are met.
If you disable an alert policy, evaluation of the policy continues, but sending alerts to the designated recipients is restricted.
If you want to temporarily stop sending alerts for resources with an alert policy set, deactivate the alert policy.
When you enable an alert policy, evaluation begins; when the set conditions are met, the alert status changes to Alert, and a notification is sent each time the alert status changes.
The alert policy status indicates whether the alert policy is enabled or disabled.
| Alert Policy Status | Description |
|---------------------|-------------|
| ● Active | The alert policy is enabled and notifications can be sent according to the set conditions. Alerts are evaluated according to the settings and notifications are sent to the designated recipients. |
| ● Inactive | The alert policy is disabled and notification sending is restricted. Alert evaluation for the policy does not stop; only notification sending is restricted. |

Table. Alert Policy Status
You can set alert levels for the alert policy. Depending on the alert level, the alert color (red/pink/purple) is expressed differently so that the levels can be visually distinguished by color.
You can filter according to the alert policy’s alert level and retrieve the alert policy by each alert level.
| Alert Level | Description |
|-------------|-------------|
| High | If the alert policy condition level is set to High, the alert level is displayed in red |
| Middle | If the alert policy condition level is set to Middle, the alert level is displayed in pink |
| Low | If the alert policy condition level is set to Low, the alert level is displayed in purple |

Table. Alert Policy Levels
Alert Status
Alert status changes according to the alert evaluation of the alert policy. Alert status is divided into three states: Normal, Insufficient data, and Alert.

| Alert Status | Description |
|--------------|-------------|
| ● Normal | The metric does not meet the conditions set in the alert policy. Displayed in green. |
| ● Insufficient data | The alert policy has just been created, the metric is unavailable, or there is not enough data to determine the alert state. Displayed in gray. |
| ● Alert | The metric meets the conditions set in the alert policy. Displayed in red. When the state changes to Alert, a notification is sent to the user. |

Table. Alert Status
Reference
When an alert policy is first created, the alert state is initialized to Insufficient data. When metric data is later collected, the alert state changes to Normal or Alert.
Alert Evaluation
| Term | Description |
|------|-------------|
| Metric Data Point | Statistical data calculated from metric data. A data point consists of a timestamp, the collected statistical data, and the unit of the data. The statistics of a data point are calculated separately as sum, average, minimum, and maximum. |
| Metric Collection Interval | Time interval at which metric data is collected per service. Specified per metric of the namespace (e.g., 1 minute or 5 minutes). |
| Alert Evaluation Cycle | Time interval at which the alert conditions are evaluated. If the metric collection interval is 1 minute or more, the evaluation cycle is fixed at 1-minute units. If the alert evaluation interval exceeds 24 hours, the evaluation cycle is fixed at 1-hour units. |
| Alert Evaluation Scope | Evaluation time range for alert evaluation. Recommended to be set to the metric collection interval or a multiple of it. |
| Alert Evaluation Count / Alert Violation Count | If the condition is satisfied for the violation count out of the evaluation count during the alert evaluation interval, the alert status switches to Alert. The violation count must be less than or equal to the evaluation count; the default is 1. |
| Alert Evaluation Interval | Alert evaluation scope (seconds) × alert evaluation count |

Table. Alert Evaluation Terms
For example, for a metric with a 1-minute collection interval, if you set a 1-minute evaluation window with 4 violations out of 5 evaluation counts, the evaluation interval is 5 minutes. For a metric with a 5-minute collection interval, if you set a 10-minute evaluation window with 3 violations out of 3 evaluation counts, the evaluation interval is 30 minutes.
| Category | Example 1 | Example 2 |
|----------|-----------|-----------|
| Metric collection interval | 1 minute | 5 minutes |
| Alert evaluation cycle (fixed) | 1 minute | 1 minute |
| Alert evaluation scope | 1 minute | 10 minutes |
| Alert evaluation count | 5 times | 3 times |
| Alert violation count | 4 times | 3 times |
| Alert evaluation interval (seconds) | 5 minutes (300 seconds) | 30 minutes (1,800 seconds) |
| Condition | If 4 of 5 evaluations within 5 minutes satisfy the condition, the alert state changes to Alert | If 3 of 3 evaluations within 30 minutes satisfy the condition, the alert state changes to Alert |

Table. Alert Evaluation Example
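The "m violations out of n evaluations" rule above can be sketched as follows. This is an illustrative sketch of the evaluation logic; the function name and the booleans standing in for per-window results are hypothetical.

```python
def evaluate_alert(window_results, violation_count):
    """Decide the alert state after one evaluation interval.

    window_results: one boolean per evaluation (True = condition breached).
    The state switches to Alert when at least violation_count evaluations
    breached the condition; otherwise it is Normal.
    """
    return "Alert" if sum(window_results) >= violation_count else "Normal"

# Example 1 from the table: 1-minute scope x 5 evaluations = 5-minute interval,
# and 4 breaches out of 5 evaluations are required to raise the alert.
evaluation_interval = 60 * 5  # seconds (evaluation scope x evaluation count)
print(evaluation_interval, evaluate_alert([True, True, False, True, True], 4))
```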
Evaluation Scope
The evaluation scope of the alarm policy is the evaluation time range for alarm evaluation.
It is recommended to set it to the indicator’s collection interval or a multiple of the collection interval.
You can input up to 604,800 (7 days) seconds.
Caution
If the evaluation scope is set smaller than the collection interval, or is not a multiple of the collection interval, alert evaluation may not work properly.
| Evaluation Scope | Configurable Evaluation Count |
|------------------|-------------------------------|
| 7 days (604,800 seconds) | 1 |
| 1 day (86,400 seconds) | 7 or less |
| 6 hours (21,600 seconds) | 28 or less |
| 1 hour (3,600 seconds) | 168 or less |
| 15 minutes (900 seconds) | 96 or less |
| 5 minutes (300 seconds) | 288 or less |
| 1 minute (60 seconds) | 1,440 or less |

Table. Configurable evaluation count by evaluation scope
Notice
There are the following restrictions on the evaluation scope and the number of evaluations:
When the evaluation scope is 1 hour (3,600 seconds) or more, the evaluation interval (evaluation count × evaluation scope) can be up to 7 days (604,800 seconds).
When the evaluation scope is less than 1 hour (3,600 seconds), the evaluation interval (evaluation count × evaluation scope) can be up to 1 day (86,400 seconds).
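The caps in the table above follow from dividing the documented limit by the evaluation scope. A minimal sketch (the function name is hypothetical):

```python
DAY = 86_400   # seconds
WEEK = 604_800  # seconds

def max_evaluation_count(scope_seconds):
    """Cap on the evaluation count so that scope x count stays within the
    documented limit: 7 days when scope >= 1 hour, otherwise 1 day."""
    limit = WEEK if scope_seconds >= 3_600 else DAY
    return limit // scope_seconds

print(max_evaluation_count(3_600))  # 168
print(max_evaluation_count(60))     # 1440
```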
Condition
Alert evaluation conditions require a conditional operator and a threshold.
| Term | Description |
|------|-------------|
| Statistic | How metric data is calculated over the evaluation scope for alert evaluation |
| Conditional Operator | The operator used to compare the value calculated over the evaluation scope with the threshold |
| Threshold | The value against which the calculated metric value is compared using the conditional operator |

Table. Condition Terms
If the namespace is Virtual Server and the metric is CPU Usage (unit: %), the alert evaluation condition is composed as shown below.
| Category | Example 1 | Example 2 |
|----------|-----------|-----------|
| Metric collection interval | 1 minute | 5 minutes |
| Alert evaluation cycle (fixed) | 1 minute | 1 minute |
| Alert evaluation scope | 1 minute | 10 minutes |
| Alert evaluation count | 5 times | 3 times |
| Alert violation count | 4 times | 3 times |
| Alert evaluation interval (seconds) | 5 minutes (300 seconds) | 30 minutes (1,800 seconds) |
| Statistic | Average | Sum |
| Conditional operator | >= | < |
| Threshold | 80 | 20 |
| Condition | If the average CPU Usage is >= 80% for 4 of 5 evaluations over 5 minutes, the alert state changes to Alert | If the sum of CPU Usage is < 20% for 3 of 3 evaluations over 30 minutes, the alert state changes to Alert |

Table. Alert Evaluation Example - Conditional Operator, Threshold, and Statistic Added
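The condition check above (aggregate one window’s data points with the statistic, then compare against the threshold with the operator) can be sketched like this. An illustrative sketch only; function name and sample data are hypothetical.

```python
import operator

OPS = {">=": operator.ge, ">": operator.gt, "<=": operator.le, "<": operator.lt}

def condition_met(values, statistic, op, threshold):
    """Aggregate one evaluation window's data points, then compare to threshold."""
    agg = {"Sum": sum,
           "Average": lambda v: sum(v) / len(v),
           "Minimum": min,
           "Maximum": max}[statistic](values)
    return OPS[op](agg, threshold)

# Example 1: average CPU usage in a window compared against >= 80.
print(condition_met([82.0, 85.0, 78.0], "Average", ">=", 80))  # True
```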
Alert Notification
If the alert evaluation conditions are met, the alert status changes to Alert and a notification is sent to the recipients set in the alert policy.
Reference
Only users with login history (users who have registered email or mobile phone number) can be added as alert recipients.
The notification reception method (E-mail or SMS) can be set by selecting Service > Alert as the notification target on the Notification Settings page.
Up to 100 notification recipients can be added.
Guide
Users without login history cannot be designated as notification recipients.
On the Notification Settings page, if you select Service > Alert as the notification target but do not set a notification reception method, you will not receive notifications.
How missing data is handled during alert evaluation
Some resources may not send metric data to ServiceWatch under certain conditions. For example, if a specific resource is inactive or does not exist, its metrics are not sent to ServiceWatch. If metrics are not collected for a certain period, alert evaluation changes the alert state to Insufficient data.
ServiceWatch provides the following ways to handle missing data during alert evaluation:
- Ignore: Maintain the current alert state (default).
- Missing: Treat missing data points as missing. If all data points within the evaluation scope are missing, the alert state switches to Insufficient data.
- Breaching: Treat missing data points as breaching the threshold condition.
- Not breaching: Treat missing data points as not breaching the threshold condition (normal).
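The four strategies above can be summarized as a lookup over what an all-missing evaluation window contributes. A hypothetical sketch, not ServiceWatch’s implementation:

```python
def resolve_missing(strategy, current_state):
    """Resulting alert state for a window whose data points are all missing.

    current_state is one of 'Alert', 'Normal', 'Insufficient data'.
    """
    return {
        "Ignore": current_state,          # keep the current alert state
        "Missing": "Insufficient data",   # all points missing -> Insufficient data
        "Breaching": "Alert",             # treat missing as breaching the threshold
        "Not breaching": "Normal",        # treat missing as within the threshold
    }[strategy]

print(resolve_missing("Ignore", "Alert"))  # Alert
```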
Reference
For alert policies created before the December 2025 release, missing data is handled with the default Ignore, and from the December 2025 release onward, you can directly choose how to handle missing data.
The missing-data handling method of an alert policy can be modified; from the point of modification, missing data is processed using the changed method.
Alert History
The change history of the alert status is recorded in the alert history. The alert history can be viewed for 30 days.
10.11.1.3 - Log
Log
By using ServiceWatch logs, you can monitor, store, and access log files collected from the resources of the service that provides the logs.
A log group is a container for log streams that share the same retention policy settings. Each log stream must belong to a single log group. For example, if there are separate log streams for the logs of each Kubernetes Engine cluster, you can group the log streams into a single log group called /scp/ske/{cluster name}.
Log Retention Policy
The log retention policy sets the period for which log events are stored in ServiceWatch. Log events whose retention period has expired are automatically deleted.
The retention period assigned to a log group applies to the log streams and log events belonging to that log group.
The retention period can be selected from the following and is set in days.
- No expiration
- 1 day
- 3 days
- 5 days
- 1 week (7 days)
- 2 weeks (14 days)
- 1 month (30 days)
- 2 months (60 days)
- 3 months (90 days)
- 4 months (120 days)
- 5 months (150 days)
- 6 months (180 days)
- 1 year (365 days)
- 13 months (400 days)
- 18 months (545 days)
- 2 years (731 days)
- 3 years (1096 days)
- 5 years (1827 days)
- 6 years (2192 days)
- 7 years (2557 days)
- 8 years (2922 days)
- 9 years (3288 days)
- 10 years (3653 days)
Table. Log Group Retention Policy Period
Log Stream
A log stream is a collection of log events sorted in the order they occurred from the same source. For example, all log events generated in a particular Kubernetes Engine cluster can constitute a single log stream.
Log Event
Log events are individual records of logs generated from resources. A log event record consists of two attributes: a timestamp of when the event occurred and the log message. Each message must be encoded in UTF-8.
Log Pattern
You can create a log pattern to filter log data that matches the pattern. A log pattern defines the words or patterns to search for in the log data collected by ServiceWatch, allows you to view the status of log occurrences in a graph, and creates metrics that can be used to generate alert policies.
Log patterns are not applied retroactively to data. They are applied to log events collected after the log pattern is created.
Log Pattern Namespace
A namespace is a logical separation for distinguishing and grouping metrics.
In ServiceWatch, it is divided into namespaces associated with services, namespaces for custom metrics, and namespaces for log patterns.
- Namespaces associated with services such as Virtual Server
- Namespaces for custom metrics, i.e., metrics collected via the custom metrics API or the ServiceWatch Agent
- Namespaces of metrics created by log patterns
When creating metrics for a log pattern, you can either create a new namespace for the log pattern or choose from existing log pattern namespaces.
Metric Name
The name of the metric that ServiceWatch generates from the monitored log information. The metric name must not duplicate another metric name within the namespace where the metric will exist.
Metric Value
The numeric value posted to the metric each time a log matching the pattern is found. For example, when counting occurrences of a specific word (e.g., Error), this value is 1 for each occurrence. When measuring transmitted bytes, it can be incremented by the actual number of bytes found in the log event.
Default Value
The value recorded for the metric during periods in which no matching logs are found while collecting logs. Setting the default value to 0 prevents the metric from becoming irregular during intervals with no matching data.
If you set a dimension for a metric generated by a log pattern, you cannot set a default value for that metric.
Dimension
A dimension is a key-value pair that defines a metric additionally. You can add dimensions to metrics generated from log patterns. Since a dimension is part of a unique identifier for a metric, each time a unique name/value pair is extracted from the log, a new variant of that metric is created.
When the log pattern format is a space-separated pattern or a JSON format pattern, you can set dimensions; a dimension is configured from one of the parameters defined in the pattern.
You can assign up to three dimensions to a metric. If a default value is set, you cannot set dimensions; to set dimensions, configure the metric not to use a default value.
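The behavior described above (each matching log posts a metric value, optionally carrying a dimension extracted from the log) can be sketched as follows. The function name, log format, and `key=value` dimension extraction are hypothetical illustrations, not the ServiceWatch API.

```python
import re

def emit_log_pattern_metric(lines, pattern, dimension_key=None):
    """Turn log lines matching `pattern` into metric data points.

    Each match posts value 1 (counting occurrences). If dimension_key is
    given, a key/value dimension is extracted from the line, producing one
    metric variant per unique value.
    """
    points = []
    for line in lines:
        if re.search(pattern, line):
            dims = {}
            if dimension_key:
                m = re.search(rf"{dimension_key}=(\S+)", line)
                if m:
                    dims[dimension_key] = m.group(1)
            points.append({"value": 1, "dimensions": dims})
    return points

logs = ["level=ERROR host=web-1 timeout", "level=INFO host=web-2 ok"]
print(emit_log_pattern_metric(logs, "ERROR", "host"))
```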
Pattern Format
This explains how ServiceWatch interprets data in each log event. The pattern format can be selected from three options as shown below.
String pattern: Log containing a specific string
Space-separated pattern: logs separated by spaces such as timestamps, IP addresses, strings, etc.
JSON format pattern: logs containing specific JSON fields
Available regular expression syntax
When using regular expressions to search and filter log data, you must enclose the expression with %.
Patterns that contain regular expressions may include only the following:
- Alphanumeric characters: letters (A-Z or a-z) and digits (0-9).
- Supported symbol characters: :, _, #, =, @, /, ;, ,, -
  For example, %servicewatch!% cannot be used because ! is not supported.
- Supported operators: ^, $, ?, [, ], {, }, |, \, *, +, .
  The ( and ) operators are not supported.
| Operator | Usage |
|----------|-------|
| ^ | Anchors the match to the start of the string. For example, %^[ab]cd% matches acd and bcd, but does not match xacd. |
| $ | Anchors the match to the end of the string. For example, %abc$% matches xyzabc and xyabc, but does not match abcd. |
| ? | Matches when the preceding character appears 0 or 1 times. For example, %abc?d% matches both abcd and abd, while abc and abccd do not match. |
| [] | Matches a list of characters or character ranges enclosed in brackets. For example, %[abc]% matches a, b, or c; %[a-z]% matches all lowercase letters from a to z; and %[abcx-z]% matches a, b, c, x, y, z. |
| {m,n} | Matches when the preceding character repeats m to n times. For example, %a{3,5}% matches only aaa, aaaa, and aaaaa, and does not match a or aa. |
| \| | Matches one of the characters on either side of the \|. For example, %abc\|de% can match abce or abde. |
| \ | Escape character; prefixing a special operator character with \ lets you match it literally instead of using its operator meaning. |
| * | Matches zero or more of the preceding character. For example, %12*3% matches 13, 123, and 122223. |
| + | Matches one or more of the preceding character. For example, %12+3% matches 123, 1223, and 12223, but does not match 13. |
| . | Matches any character. For example, %.ab% matches any 3-character string ending in ab, such as cab, dab, bab, 8ab, #ab, and " ab" (including a space). |
| \d, \D | Matches digit and non-digit characters. For example, %\d% is equivalent to %[0-9]%, and %\D% matches all non-digit characters, like %[^0-9]%. |
| \s, \S | Matches whitespace and non-whitespace characters. Whitespace characters include tab (\t), space ( ), and newline (\n). |
| \w, \W | Matches alphanumeric and non-alphanumeric characters. For example, %\w% is equivalent to %[a-zA-Z_0-9]%, and %\W% is equivalent to %[^a-zA-Z_0-9]%. |
| \xhh | Matches the ASCII character for a 2-digit hexadecimal value. \x is an escape indicating that the following characters are a hexadecimal ASCII value; hh is a 2-digit hexadecimal number (0-9 and A-F) referring to a character in the ASCII table. |

Table. Regular expression syntax operators available for log patterns
Reference
To express an IP address such as 123.123.123.1 as a regular expression, write it as %123.123.123.1%.
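The operators in the table above follow standard regular-expression semantics, which can be demonstrated with Python’s re module (ServiceWatch itself wraps patterns in % rather than using Python; this is an illustration of the shared semantics only).

```python
import re

# Each assertion mirrors an example row from the operator table above.
assert re.search(r"abc$", "xyzabc")            # $ anchors the end of the string
assert re.fullmatch(r"a{3,5}", "aaaa")         # {m,n} bounded repetition
assert not re.fullmatch(r"a{3,5}", "aa")
assert re.fullmatch(r"12*3", "13")             # * matches zero or more
assert re.fullmatch(r"12+3", "1223")           # + matches one or more
assert not re.fullmatch(r"12+3", "13")
assert re.fullmatch(r".ab", " ab")             # . matches any character, incl. space
assert re.fullmatch(r"\d", "7") and not re.fullmatch(r"\d", "x")
print("all operator examples hold")
```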
String Pattern
String pattern using regular expressions
You can search for matching patterns in log events using a regex string pattern, wrapping the regex with % (percent) at the beginning and end. Below is an example of a pattern that matches all log events containing the ERROR keyword. Refer to Available regular expression syntax above.
%ERROR%
The above pattern matches log event messages like the following.
- `[2026-02-13 14:22:01] ERROR 500 POST /api/v1/checkout (192.168.1.10) - NullPointerException at com.app.controller.CheckoutController.java:55`
- `[ERROR] Configuration file not found: /etc/app/config.yaml`
String pattern in log events without format
String pattern for searching strings in log events that are not in formats like JSON.
Below are example log event messages and the log events that match under various string pattern classifications.

| Classification | Pattern | Matching log events |
|----------------|---------|---------------------|
| Multiple strings (AND condition) | `ERROR REQUEST` | Log events containing both the strings ERROR and REQUEST: `ERROR CODE 400 BAD REQUEST`, `ERROR CODE 401 UNAUTHORIZED REQUEST` |
| Multiple strings (OR condition) | `?ERROR ?400` | Log events containing the string ERROR or 400: `ERROR CODE 400 BAD REQUEST`, `ERROR CODE 401 UNAUTHORIZED REQUEST`, `ERROR CODE 419 MISSING ARGUMENTS`, `ERROR CODE 420 INVALID ARGUMENTS` |
| Exact matching string | `"BAD REQUEST"` | Log events containing the exact phrase "BAD REQUEST": `ERROR CODE 400 BAD REQUEST` |
| Exclude specific string | `ERROR -400` | Enter - before the string to exclude. Log events containing ERROR but not 400: `ERROR CODE 401 UNAUTHORIZED REQUEST`, `ERROR CODE 419 MISSING ARGUMENTS`, `ERROR CODE 420 INVALID ARGUMENTS` |

Table. String pattern in log events without format
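The string pattern classifications above can be sketched as a small matcher. This is a hypothetical illustration of the semantics (AND for plain terms, `?term` for OR, quoted phrases for exact match, `-term` to exclude), not ServiceWatch’s parser.

```python
import shlex

def matches_string_pattern(pattern, line):
    """Evaluate an unformatted-log string pattern against one log event."""
    terms = shlex.split(pattern)  # keeps "BAD REQUEST" as a single term
    ors = [t[1:] for t in terms if t.startswith("?")]
    excludes = [t[1:] for t in terms if t.startswith("-")]
    ands = [t for t in terms if not t.startswith(("?", "-"))]
    if any(t in line for t in excludes):
        return False                       # an excluded term is present
    if ors and not any(t in line for t in ors):
        return False                       # none of the OR terms is present
    return all(t in line for t in ands)    # every plain term must be present

events = ["ERROR CODE 400 BAD REQUEST", "ERROR CODE 401 UNAUTHORIZED REQUEST",
          "ERROR CODE 419 MISSING ARGUMENTS", "ERROR CODE 420 INVALID ARGUMENTS"]
print([e for e in events if matches_string_pattern("ERROR -400", e)])
```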
Space-separated pattern
Create a pattern to search for matching strings in log events separated by spaces.
Space-separated pattern example (1)
Consider a space-separated log event that includes `timestamp`, `logLevel`, `user_id`, `action`, `status`, and `ip` fields. Text between brackets (`[]`) and double quotes (`""`) is considered a single field.
To create a pattern that searches for matching strings in space-separated log events, enclose the pattern in brackets (`[]`) and specify fields with names separated by commas (`,`). The following pattern parses six fields.
`[timestamp, logLevel, user_id, action, status = success, ip]` finds log events where the 5th field, `status`, is `success`.
Space-separated pattern example (2)
abc xxx.log james 2023-10-27T10:00:01Z POST 400 1024
abc xxx.log name 2023-10-27T10:00:02Z POST 410 512
The above log event is a space-separated log event that includes host, logName, user, timestamp, request, statusCode, and size.
A pattern like [host, logName, user, timestamp, request, statusCode=4*, size] can find log events where the 6th field, statusCode, starts with 4.
If you do not know the exact number of fields in a space-separated log event, you can use an ellipsis (…). A pattern like […, statusCode=4*, size] is a pattern that represents the first five fields with an ellipsis.
You can also create composite expressions using the AND (&&) operator and the OR (||) operator. A pattern such as […, statusCode=400 || statusCode=410, size] can find log events where the 6th field, statusCode, is 400 or 410.
You can use regular expressions to provide conditions for a pattern. A pattern such as [host, logName, user, timestamp, request, statusCode=%4[0-9]{2}%, size] can find log events where the sixth field, statusCode, is a number starting with 4.
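A subset of the space-separated pattern syntax above (named fields, a leading `...` to skip fields, `name=value` conditions with `*` wildcards, and `||` alternatives) can be sketched as follows. A hypothetical illustration; the `%regex%` condition form is not covered by this sketch.

```python
from fnmatch import fnmatchcase

def match_space_pattern(pattern, event):
    """Match a pattern like '[..., statusCode=4*, size]' against a log event."""
    specs = [s.strip() for s in pattern.strip("[]").split(",")]
    fields = event.split()
    if specs and specs[0] == "...":
        specs = specs[1:]
        fields = fields[len(fields) - len(specs):]  # align specs to the tail
    if len(specs) != len(fields):
        return False
    for spec, value in zip(specs, fields):
        if "=" not in spec:
            continue  # bare field name: any value matches
        # 'statusCode=400 || statusCode=410' -> value must match one alternative
        if not any(fnmatchcase(value, alt.strip().partition("=")[2])
                   for alt in spec.split("||")):
            return False
    return True

access_logs = ["abc xxx.log james 2023-10-27T10:00:01Z POST 400 1024",
               "abc xxx.log name 2023-10-27T10:00:02Z POST 410 512"]
print([e for e in access_logs
       if match_space_pattern("[..., statusCode=4*, size]", e)])
```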
JSON format pattern
You can create a pattern to search for matching strings or numeric values in JSON log events.
Patterns are enclosed in curly braces ({}).
String-based JSON format pattern
Use $. to refer to JSON fields.
The = and != operators can be used.
The string to compare with the field can be enclosed in double quotes (""). Strings containing characters other than alphanumerics and underscores must be enclosed in double quotes. An asterisk (*) can be used as a wildcard to match text.
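The `$.field`, `=`/`!=`, and `*` wildcard rules above can be sketched as follows. A hypothetical illustration of the matching semantics, not ServiceWatch’s pattern engine.

```python
import json
from fnmatch import fnmatchcase

def match_json_pattern(field_path, op, want, event_json):
    """Sketch of a string JSON pattern such as { $.level = "ERROR" }.

    field_path like '$.request.path' selects a (nested) field; '=' / '!='
    compare it with the string `want`, which may carry '*' wildcards.
    """
    obj = json.loads(event_json)
    for key in field_path.lstrip("$.").split("."):
        obj = obj[key]  # walk down the nested JSON fields
    matched = fnmatchcase(str(obj), want)
    return matched if op == "=" else not matched

event = '{"level": "ERROR", "request": {"path": "/api/v1/checkout"}}'
print(match_json_pattern("$.level", "=", "ERROR", event))          # True
print(match_json_pattern("$.request.path", "=", "/api/*", event))  # True
```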
Log Group Export
From a log group, you can export log data to Object Storage for log retention and analysis. You can export log data for log groups in the same Account.
To start exporting a log group, you need to create an Object Storage bucket to store the log data.
The log group export operation can take a long time depending on the amount of logs. When exporting a log group, you can reduce the export operation time by specifying a particular stream within the log group or by specifying a time range.
Only one log group export can run at a time in the same Account. To run another log group export, the current export task must complete.
You can delete the log group export history after the export succeeds or after the export cancellation is completed.
Canceling log group export does not delete the saved file of the exported log group.
To delete the exported log group file, delete the stored file directly in Object Storage.
| Log Group Export Status | Description |
|-------------------------|-------------|
| ● Success | The log group export task completed successfully. |
| ● Pending | The log group export task is pending. |
| ● In progress | The log group export task is in progress. |
| ● Failed | The log group export task failed. |
| ● Canceling | The log group export task is being canceled. If the cancel request fails, the status changes to Failed. |
| ● Canceled | The log group export task has been canceled. |

Table. Log Group Export Status
10.11.1.4 - Event
An event represents a change in the environment of a Samsung Cloud Platform service.
Most events generated in Samsung Cloud Platform services are received by ServiceWatch. Events from each service can be viewed and processed in ServiceWatch in the same Account.
Refer to the list of services that send events via ServiceWatch and the events those services send in the ServiceWatch Event Reference.
Each service sends events to ServiceWatch based on Best Effort delivery. Best Effort delivery means that the service attempts to send all events to ServiceWatch, but occasionally some events may not be delivered.
When a valid event is delivered to ServiceWatch, ServiceWatch compares the event with the rules and then sends a notification to the alert recipients set in the event rule.
Event Rules
You can specify the actions that ServiceWatch performs on events delivered from each service to ServiceWatch. To do this, create an event rule. An event rule specifies which events are delivered to which targets.
Event rules evaluate the event when it arrives. Each event rule checks whether the event matches the rule’s pattern. If the event matches, ServiceWatch processes the event.
You can generate matching rules for incoming events based on the event data criteria (called an event pattern). If an event matches the criteria defined in the event pattern, the event is delivered to the target specified in the rule.
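The event-pattern matching described above can be sketched as a simple key/value check. The rule and event shapes below are hypothetical illustrations, not the actual ServiceWatch event schema.

```python
def match_event_rule(rule_pattern, event):
    """An event matches a rule when every key in the rule's event pattern is
    present in the event with one of the allowed values."""
    return all(event.get(key) in allowed
               for key, allowed in rule_pattern.items())

rule = {"source": ["Virtual Server"], "eventType": ["Server"]}  # hypothetical shape
event = {"source": "Virtual Server", "eventType": "Server",
         "event": "Compute Virtual Server Create End"}

# Deliver to the rule's notification recipients only when the pattern matches.
recipients = ["ops-team"] if match_event_rule(rule, event) else []
print(recipients)  # ['ops-team']
```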
By default, event rules let you specify a notification recipient to receive alerts when an event occurs.
Event rules are planned to be expanded so that multiple Samsung Cloud Platform services can be targets for receiving events (planned for 2026).
In ServiceWatch, the event source is selected by Samsung Cloud Platform service name; select the service whose events you want to receive as the event source.
| Service Category | Service |
|------------------|---------|
| Compute | Virtual Server |
| Compute | GPU Server |
| Compute | Bare Metal Server |
| Compute | Multi-node GPU Cluster |
| Compute | Cloud Functions |
| Storage | Block Storage(BM) |
| Storage | File Storage |
| Storage | Object Storage |
| Storage | Archive Storage |
| Storage | Backup |
| Container | Kubernetes Engine |
| Container | Container Registry |
| Networking | VPC |
| Networking | Security Group |
| Networking | Load Balancer |
| Networking | DNS |
| Networking | VPN |
| Networking | Firewall |
| Networking | Direct Connect |
| Networking | Cloud LAN-Campus |
| Networking | Cloud LAN-Datacenter |
| Networking | Cloud WAN |
| Networking | Global CDN |
| Networking | GSLB |
| Database | EPAS(DBaaS) |
| Database | PostgreSQL(DBaaS) |
| Database | MariaDB(DBaaS) |
| Database | MySQL(DBaaS) |
| Database | Microsoft SQL Server(DBaaS) |
| Database | CacheStore(DBaaS) |
| Data Analytics | Event Streams |
| Data Analytics | Search Engine |
| Data Analytics | Vertica(DBaaS) |
| Data Analytics | Data Flow |
| Data Analytics | Data Ops |
| Data Analytics | Quick Query |
| Application Service | API Gateway |
| Security | Key Management Service |
| Security | Config Inspection |
| Security | Certificate Manager |
| Security | Secret Vault |
| Management | Cloud Control |
| Management | Identity and Access Management(IAM) |
| Management | ID Center |
| Management | Logging&Audit |
| Management | Organization |
| Management | Resource Groups |
| Management | ServiceWatch |
| Management | Support Center |
| AI-ML | CloudML |
| AI-ML | AI&MLOps Platform |

Table. ServiceWatch event sources
Event Type
Each Samsung Cloud Platform service has its own resource types. Event types correspond to resource types; in an event rule, you select the event type from the event source.
The following are the event types of Virtual Server.
Service Category
Service
Sub Service
Event Type
Compute
Virtual Server
Virtual Server
Server
Compute
Virtual Server
Image
Image
Compute
Virtual Server
Keypair
Keypair
Compute
Virtual Server
Server Group
Server Group
Compute
Virtual Server
Launch Configuration
Launch Configuration
Compute
Virtual Server
Auto-Scaling Group
Auto-Scaling Group
Compute
Virtual Server
Block Storage
Volume
Compute
Virtual Server
Block Storage
Snapshot
Table. ServiceWatch - Virtual Server Event Types
For other event types available in ServiceWatch, please refer to ServiceWatch Event.
Event
For the event, you can select all events that occur for the chosen event type of the event source, or select only specific events.
The following are some events of the Server event type of Virtual Server.
Service Category
Service
Sub Service
Event Type
Event
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Create Start
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Create End
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Create Error
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Delete Start
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Delete End
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Delete Error
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Lock End
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Unlock End
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Stop Start
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Stop Success
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Start Start
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Start Success
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Reboot Start
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Reboot End
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Reboot Error
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Power On Start
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Power On End
Compute
Virtual Server
Virtual Server
Server
Compute Virtual Server Power On Error
Table. Some events of ServiceWatch - Virtual Server Server
If the event pattern is satisfied, an alert is sent to the notification recipient set in the event rule.
Reference
Notifications can be sent to users with login history (users who have registered email or mobile phone number).
Up to 100 notification recipients can be added.
The notification reception method (E-mail or SMS) can be changed after selecting the notification target as Service > ServiceWatch on the Notification Settings page.
Notice
Users without login history cannot be designated as notification recipients.
On the Notification Settings page, if you select the notification target Service > ServiceWatch but do not set a notification reception method, you will not receive notifications.
10.11.1.5 - ServiceWatch Integration Service
You can check services linked with ServiceWatch.
Metrics and Log Monitoring
The services that integrate with ServiceWatch for metric and log monitoring are shown below.
Table. ServiceWatch metrics and log integration services and guide
Event
The services that integrate with ServiceWatch for events are shown below.
Reference
For information related to event rules, refer to Event.
Refer to the list of Samsung Cloud Platform services that generate events and the events at ServiceWatch Event.
Service Category
Service
Sub Service
Event Source
Resource Type (Event Type)
Compute
Virtual Server
Virtual Server
Virtual Server
Server
Compute
Virtual Server
Image
Virtual Server
Image
Compute
Virtual Server
Keypair
Virtual Server
Keypair
Compute
Virtual Server
Server Group
Virtual Server
Server Group
Compute
Virtual Server
Launch Configuration
Virtual Server
Launch Configuration
Compute
Virtual Server
Auto-Scaling Group
Virtual Server
Auto-Scaling Group
Compute
Virtual Server
Block Storage
Virtual Server
Volume
Compute
Virtual Server
Block Storage
Virtual Server
Snapshot
Compute
GPU Server
GPU Server
GPU Server
Server
Compute
GPU Server
GPU Server
GPU Server
Image
Compute
Bare Metal Server
Bare Metal Server
Bare Metal Server
Bare Metal Server
Compute
Multi-node GPU Cluster
GPU Node
Multi-node GPU Cluster
GPU Node
Compute
Multi-node GPU Cluster
Cluster Fabric
Multi-node GPU Cluster
Cluster Fabric
Compute
Cloud Functions
Function
Cloud Functions
Cloud Functions
Storage
Block Storage(BM)
Block Storage(BM)
Block Storage(BM)
Volume
Storage
Block Storage(BM)
Volume Group(BM)
Block Storage(BM)
Volume Group
Storage
File Storage
File Storage
File Storage
Volume
Storage
Object Storage
Object Storage
Object Storage
Bucket
Storage
Archive Storage
Archive Storage
Archive Storage
Bucket
Storage
Backup
Backup
Backup
Backup
Container
Kubernetes Engine
Cluster
Kubernetes Engine
Cluster
Container
Kubernetes Engine
Node
Kubernetes Engine
Nodepool
Container
Container Registry
Registry
Container Registry
Container Registry
Container
Container Registry
Repository
Container Registry
Repository
Networking
VPC
VPC
VPC
VPC
Networking
VPC
Subnet
VPC
Subnet
Networking
VPC
Port
VPC
Port
Networking
VPC
Internet Gateway
VPC
Internet Gateway
Networking
VPC
NAT Gateway
VPC
NAT Gateway
Networking
VPC
Public IP
VPC
Public IP
Networking
VPC
Private NAT
VPC
Private NAT
Networking
VPC
VPC Endpoint
VPC
VPC Endpoint
Networking
VPC
VPC Peering
VPC
VPC Peering
Networking
VPC
Private Link Service
VPC
Private Link Service
Networking
VPC
Private Link Endpoint
VPC
Private Link Endpoint
Networking
VPC
Transit Gateway
VPC
Transit Gateway
Networking
Security Group
Security Group
Security Group
Security Group
Networking
Load Balancer
Load Balancer
Load Balancer
Load Balancer
Networking
Load Balancer
Load Balancer
Load Balancer
LB Listener
Networking
Load Balancer
LB Server Group
Load Balancer
LB Server Group
Networking
Load Balancer
LB Health Check
Load Balancer
LB Health Check
Networking
DNS
Private DNS
Private DNS
Private DNS
Networking
DNS
Hosted Zone
Hosted Zone
Hosted Zone
Networking
DNS
Public Domain Name
Public Domain Name
Public Domain Name
Networking
VPN
VPN
VPN
VPN Gateway
Networking
VPN
VPN Tunnel
VPN
VPN Tunnel
Networking
Firewall
Firewall
Firewall
Firewall
Networking
Direct Connect
Direct Connect
Direct Connect
Direct Connect
Networking
Cloud LAN-Campus
Campus Network
Cloud LAN - Campus (Network)
Cloud LAN - Campus (Network)
Networking
Cloud LAN-Datacenter
Cloud LAN Network
Cloud LAN Network
Cloud LAN Network
Networking
Cloud LAN-Datacenter
vDevice
Cloud LAN Network
vDevice
Networking
Cloud LAN-Datacenter
Interface
Cloud LAN Network
Interface
Networking
Cloud LAN-Datacenter
vCable
Cloud LAN Network
vCable
Networking
Cloud WAN
Cloud WAN Network
Cloud WAN
Network(WAN)
Networking
Cloud WAN
Segment
Cloud WAN
Segment
Networking
Cloud WAN
Segment
Cloud WAN
Segment Location
Networking
Cloud WAN
Segment
Cloud WAN
Segment Sharing
Networking
Cloud WAN
Attachment
Cloud WAN
Attachment
Networking
Global CDN
Global CDN
Global CDN
Global CDN
Networking
GSLB
GSLB
GSLB
GSLB
Database
EPAS(DBaaS)
EPAS(DBaaS)
EPAS
EPAS
Database
PostgreSQL(DBaaS)
PostgreSQL(DBaaS)
PostgreSQL
PostgreSQL
Database
MariaDB(DBaaS)
MariaDB(DBaaS)
MariaDB
MariaDB
Database
MySQL(DBaaS)
MySQL(DBaaS)
MySQL
MySQL
Database
Microsoft SQL Server(DBaaS)
Microsoft SQL Server(DBaaS)
Microsoft SQL Server
Microsoft SQL Server
Database
CacheStore(DBaaS)
CacheStore(DBaaS)
CacheStore
CacheStore
Database
Scalable DB(DBaaS)
Scalable DB(DBaaS)
Scalable DB
Scalable DB
Data Analytics
Event Streams
Event Streams
Event Streams
Event Streams
Data Analytics
Search Engine
Search Engine
Search Engine
Search Engine
Data Analytics
Vertica(DBaaS)
Vertica(DBaaS)
Vertica
Vertica
Data Analytics
Data Flow
Data Flow
Data Flow
Data Flow
Data Analytics
Data Flow
Data Flow Services
Data Flow
Data Flow Service
Data Analytics
Data Ops
Data Ops
Data Ops
Data Ops
Data Analytics
Data Ops
Data Ops Services
Data Ops
Data Ops Service
Data Analytics
Quick Query
Quick Query
Quick Query
Quick Query
Application Service
API Gateway
API Gateway
API Gateway
API Gateway
Application Service
Queue Service
Queue
Queue
Queue
Security
Key Management Service
Key Management Service
Key Management Service
Key
Security
Config Inspection
Config Inspection
Config Inspection
Config Inspection
Security
Certificate Manager
Certificate Manager
Certificate Manager
Certificate
Security
Secrets Manager
Secrets Manager
Secrets Manager
Secret
Security
Secret Vault
Secret Vault
Secret Vault
Secret
Management
Cloud Control
Cloud Control
Cloud Control
Landing Zone
Management
Identity and Access Management(IAM)
User Group
Identity and Access Management
Group
Management
Identity and Access Management(IAM)
User
Identity and Access Management
User
Management
Identity and Access Management(IAM)
policy
Identity and Access Management
policy
Management
Identity and Access Management(IAM)
role
Identity and Access Management
role
Management
Identity and Access Management(IAM)
credential provider
Identity and Access Management
credential provider
Management
Identity and Access Management(IAM)
My Info.
Identity and Access Management
Access Key
Management
ID Center
ID Center
Identity Center
ID Center
Management
ID Center
Permission Set
Identity Center
Permission Set
Management
Logging&Audit
Trail
Logging&Audit
Trail
Management
Organization
Organizational Structure
Organization
Organization
Management
Organization
Organization Structure
Organization
Organization Account
Management
Organization
Organizational Structure
Organization
Organization Invitation
Management
Organization
Organizational structure
Organization
Organizational unit
Management
Organization
Control Policy
Organization
Control Policy
Management
Organization
Organization Settings
Organization
Delegation Policy
Management
Resource Groups
Resource Groups
Resource Groups
Resource Group
Management
ServiceWatch
Dashboard
ServiceWatch
Dashboard
Management
ServiceWatch
Alert
ServiceWatch
Alert
Management
ServiceWatch
log
ServiceWatch
log group
Management
ServiceWatch
Event Rules
ServiceWatch
Event Rules
Management
Support Center
Service Request
Support
Service Request
Management
Support Center
Contact
Support
Contact
AI-ML
CloudML
CloudML
Cloud ML
Cloud ML
AI-ML
AI&MLOps Platform
AI&MLOps Platform
AI&MLOps Platform
AI&MLOps Platform
Table. ServiceWatch Event Service
10.11.1.6 - Custom Metrics and Logs
ServiceWatch can collect user-defined custom metrics and log files from resources that you create.
There are two ways to collect custom metrics and logs.
First, you can install the ServiceWatch Agent directly on a resource, configure the collection targets, and collect them. Second, you can collect custom metrics and logs through the OpenAPI/CLI provided by ServiceWatch.
Reference
Custom metric/log collection via ServiceWatch Agent is currently only available on Samsung Cloud Platform For Enterprise. It will be offered in other offerings in the future.
Caution
ServiceWatch's metric APIs incur charges per call. Metric collection via the ServiceWatch Agent also operates over OpenAPI, so those API calls are billed as well. Take care to avoid excessive API calls for metric and log collection. The billable metric APIs are as follows.
API
Description
ListMetricData
Retrieve a list of metric data.
Since a single API call can request multiple metrics, charges apply per 1,000 requested metrics.
DownloadMetricDataImage
Download a metric data widget image.
Since a single API call can request multiple metrics, charges apply per 1,000 requested metrics.
ListMetricInfos
Retrieve metric information.
Charged per 1,000 API calls.
CreateCustomMetricMetas
Create custom metric metadata.
Charged per 1,000 API calls.
CreateCustomMetrics
Create (transmit) custom metric data.
Charged per 1,000 API calls.
ShowDashboard
View a dashboard.
Charged per 1,000 API calls.
ListDashboards
Retrieve the dashboard list.
Charged per 1,000 API calls.
CreateDashboard
Create a dashboard.
Charged per 1,000 API calls.
SetDashboard
Edit a dashboard.
Charged per 1,000 API calls.
DeleteBulkDashboards
Delete dashboards.
Charged per 1,000 API calls.
Table. Metric API Billing Guide
Logs incur charges based on the amount of data collected, so there is no separate charge for API calls.
※ For detailed pricing information, please refer to the ServiceWatch pricing information on the Samsung Cloud Platform Service Portal.
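As a rough illustration of the per-1,000 billing unit described above, the number of billable units is a ceiling division of the request (or call) count. Whether partial blocks round up, and the actual unit price, should be confirmed on the ServiceWatch pricing page; this sketch only shows the arithmetic.

```python
import math

def billable_units(request_count: int, unit_size: int = 1000) -> int:
    """Billing units for a given number of metric requests or API calls,
    assuming charges accrue per block of 1,000 and partial blocks round up."""
    return math.ceil(request_count / unit_size)

# e.g. 2,500 metric requests in a ListMetricData workload -> 3 billing units.
units = billable_units(2500)
```

Note that for ListMetricData and DownloadMetricDataImage the count is requested metrics, not API calls, so a single call requesting many metrics can span multiple billing units.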
ServiceWatch Agent
You can install the ServiceWatch Agent on the user’s resources such as Virtual Server/GPU Server/Bare Metal Server to collect custom metrics and logs.
ServiceWatch Agent Constraints
ServiceWatch Agent Network Environment
The ServiceWatch Agent collects data using OpenAPI by default, so the server resource where it is installed must be able to communicate externally over the Internet. Create an Internet Gateway in the VPC where the resource is located and assign a NAT IP to the server resource so that it can communicate externally.
ServiceWatch Agent Supported OS Image
The OS images available for ServiceWatch Agent are as follows.
The following is a quick guide to collecting OS metrics and logs from a Virtual Server in a Linux environment.
Node Exporter Installation and Configuration
Refer to Node Exporter installation and install Node Exporter on the server for collecting custom metrics.
If you install Node Exporter, you can collect OS metrics through Node Exporter in addition to the metrics provided by ServiceWatch’s default monitoring.
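Node Exporter exposes OS metrics in the Prometheus text exposition format (by default at http://localhost:9100/metrics). The sketch below parses a sample of that format into name/value pairs that could then be forwarded as custom metrics; it is a minimal illustration of the data involved, not the ServiceWatch Agent's actual collection logic, and it skips labeled series for brevity.

```python
def parse_prometheus_text(text: str) -> dict:
    """Parse simple Prometheus exposition lines ('name value') into a dict,
    skipping comments (# HELP / # TYPE) and labeled series."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        if name and "{" not in name:  # this sketch ignores labeled series
            try:
                metrics[name] = float(value)
            except ValueError:
                pass  # ignore non-numeric values such as NaN markers
    return metrics

# Sample of what Node Exporter returns from /metrics.
sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
node_memory_MemFree_bytes 1.6e+09
"""
parsed = parse_prometheus_text(sample)
```

In practice the ServiceWatch Agent scrapes this endpoint for you; the example only shows what the raw OS metric data looks like before it becomes custom metrics.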
Refer to ServiceWatch Agent Settings, download the ServiceWatch_Agent zip file, then configure and run the ServiceWatch Manager.
Refer to the examples/os-metric-min-examples folder in the zip file to set at least two metrics and run the ServiceWatch Agent.
Caution
Metric collection via the ServiceWatch Agent is classified as custom metrics and, unlike the metrics collected by default from each service, is charged. Configure the agent so that only the metrics you actually need are collected.
Up to 10 custom metrics are provided free of charge per Account/Region.
ServiceWatch OpenAPI/CLI
You can collect custom metrics and logs through the OpenAPI/CLI provided by ServiceWatch.
Custom metric data and custom logs can be delivered to ServiceWatch via ServiceWatch OpenAPI/CLI, allowing you to view visualized information in the Console.
Caution
Metric collection via the ServiceWatch OpenAPI/CLI is classified as custom metrics and, unlike the metrics collected by default from each service, is charged. Configure it so that only the metrics you actually need are collected.
Up to 10 custom metrics are provided free of charge per Account/Region.
Create Custom Metric Metadata
To collect metric data generated from user resources or applications, rather than metrics provided by Samsung Cloud Platform services (e.g., Virtual Server), into ServiceWatch, you need to create custom metric metadata.
Parameter
Explanation
namespace
Users can define a namespace in ServiceWatch that can be distinguished from other metrics
The namespace must be 3 to 128 characters, including letters, numbers, spaces, and special characters (_-/), and must start with a letter.
Set the name of the metric to be collected. The metric name must be 3 to 128 characters long, including English letters, numbers, and special characters (_), and must start with an English letter.
Example: custom_cpu_seconds_total
metricMetas > storageResolution
Set the collection interval for the corresponding metric. The default is 60 (1 minute) and can be set in seconds
metricMetas > unit
Metric unit can be set
Example: Bytes, Count, etc.
metricMetas > dimensions
You can set dimensions to identify custom metric data and visualize it in the Console. When visualizing the collected metrics in the Console, they are displayed in combinations according to the dimension (dimensions) settings.
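Putting the parameters above together, a custom metric metadata request body would look roughly like the following. This is a sketch under assumptions: the key name `metricName` is assumed (the table above omits the parameter name for the metric name), and the exact JSON layout should be confirmed against the ServiceWatch OpenAPI reference.

```python
import json

# Hypothetical request body for CreateCustomMetricMetas, built from the
# documented parameters (namespace, metricMetas > storageResolution/unit/dimensions).
payload = {
    "namespace": "my-app/backend",  # 3-128 chars incl. letters/numbers/spaces/_-/, starts with a letter
    "metricMetas": [
        {
            "metricName": "custom_cpu_seconds_total",  # assumed key; 3-128 chars, starts with a letter
            "storageResolution": 60,   # collection interval in seconds (default 60 = 1 minute)
            "unit": "Count",           # e.g. Bytes, Count
            "dimensions": {            # identifies the metric and drives Console visualization
                "server_name": "web-01",
            },
        }
    ],
}
body = json.dumps(payload)
```

After the metadata exists, the matching CreateCustomMetrics call would transmit the actual data points under the same namespace, metric name, and dimensions.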
A ServiceWatch log group is required for custom log collection. Log groups can only be created in the Console. After creating a log group in advance, you can use the log stream creation API to create a log stream to be delivered to ServiceWatch.
To collect custom logs, after creating the log group and log stream, use the log event creation API to deliver individual log messages (log events) to ServiceWatch.
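The flow just described (log group created in the Console, log stream created via API, then individual log events delivered via API) can be sketched as assembling the log-event request body. The field names below are illustrative assumptions, not the documented schema; consult the ServiceWatch OpenAPI reference for the actual request format.

```python
import json
import time

# Hypothetical body for the log event creation API: individual log messages
# (log events) addressed to an existing log group and log stream.
log_events = {
    "logGroup": "my-app-logs",   # must be created in the Console beforehand
    "logStream": "web-01",       # created via the log stream creation API
    "events": [
        {"timestamp": int(time.time() * 1000), "message": "app started"},
        {"timestamp": int(time.time() * 1000), "message": "healthcheck ok"},
    ],
}
body = json.dumps(log_events)
```

Because log charges are based on the volume of data collected rather than per API call, batching several log events into one request (as above) does not change the cost, only the call overhead.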
When you place the mouse cursor on the graph, the time, data value, and metric data information at that point are displayed in a popup
You can zoom in on a specific area of the graph by dragging the mouse
When you click the resource name displayed in the legend, detailed information about that resource is displayed in a popup
Table. Dashboard Detail Items
Reference
When you click the More > View Metrics button in the upper right corner of a widget, you can view metric information for that widget on the Metrics page.
You can set frequently used dashboards as favorites to easily navigate to those dashboards on the Service Home page of ServiceWatch.
Follow these steps to set a dashboard as a favorite.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the favorite icon of the dashboard you want to add to favorites.
The favorited dashboard is added to the bottom of the Dashboard > Dashboard Favorites menu and the Dashboard Favorites area of the Service Home page.
Viewing Widget Details
You can individually view widgets in the dashboard by enlarging them.
Follow these steps to view an individual widget in enlarged mode.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for the resources you want to monitor. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the View Widget Enlarged button of the widget you want to view in enlarged mode. The Metric Detail popup window for that widget will open.
Division
Detailed Description
Widget Name
Display the widget name
Period Setting Area
Select the period to apply to the widget
For metric query, you can set up to 455 days from the current time
Time Zone Setting Area
Select the time zone to apply to the period setting
Reset Button
Reset all manipulations or settings on the dashboard detail screen
Statistics
Select the statistics criterion for the metrics displayed in the widget
Click the statistics criterion to select the criterion: Average, Minimum, Maximum, Sum
Aggregation Period Setting Area
Select the aggregation period for widget information
Click the aggregation period to select the desired period: 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours, 1 day
Refresh Setting Area
Select the refresh cycle for widget information
Click the Refresh button to display information based on the current time
Click the refresh cycle to select the desired cycle: Off, 10 seconds, 1 minute, 2 minutes, 5 minutes, 15 minutes
Chart Area
Display monitoring results as a chart
When you place the mouse cursor on the graph and legend areas, the time, data value, and metric data information at that point are displayed in a popup
You can zoom in on that area by dragging the graph area
When you click the resource name displayed in the legend, the alert status for that resource opens in a popup
Table. Metric Detail Items
Viewing Alert History
You can view alert history for metrics registered in ServiceWatch dashboards.
Follow these steps to view alert history.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Alert > Alert History menu. You will be taken to the Alert History page.
On the Alert History page, view the alert history.
Division
Detailed Description
Alert Filter and Search Area
Filter or search alert history
All Alert Status: Filter by selecting alert status
Search by entering the alert policy name in the search input field
Detailed Search: Search by entering alert policy name, alert status, or change date
Alert Policy Name
Alert policy name
When you click the alert policy name, you can view detailed information of that alert policy
Condition
Alert occurrence condition and total occurrence time
Display levels by importance: High, Middle, Low
Division
Classification of alert creation and alert status change information
Alert Status
Current alert status
Normal: When the metric does not meet the set condition
Insufficient data: When metric data cannot be verified (missing, non-existent, not arrived)
Alert: When the metric meets the set condition
Alert Level
When alert status is Alert, display the alert level
High, Middle, Low
Table. Alert History Items
Reference
You can create and manage new alert policies. For details on alert policies, refer to Viewing Alert Policies.
Monitoring Metrics
You can view and monitor metrics available in ServiceWatch.
Comparing by Metrics
You can select one or more metrics and resources to monitor.
Follow these steps to monitor by comparing metrics.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metrics menu. You will be taken to the Metrics page.
On the Metrics page, click Compare by Metrics in the metric view mode.
On the Metrics page, select the metrics you want to monitor from the metric list. A monitoring chart for the selected metrics is displayed in the Selected Metrics area at the bottom.
Division
Detailed Description
Metric List Area
List of metrics that can be monitored in ServiceWatch
Click the + button in front of the namespace and dimensions to view the lower-level list
When you select a metric to monitor, it is displayed as a chart in the Selected Metrics area
Search Filter Area
Set the search items to filter, then click the Apply Filter button to filter the metric list
Namespace-Dimension Name: Search based on the sub-dimension name of the selected namespace
Metric Name: Search by entering the exact metric name
Resource Name: Search by entering the exact resource name
Resource ID: Search by entering the exact resource ID
Keyword: Search based on the selected upper category and entered keyword
Searches across items other than metric name, resource name, resource ID, and Tag Key
Tag Key: Search with the selected tag key
Table. Metric List Items
View the monitoring chart in the Selected Metrics area.
Division
Detailed Description
Period Setting Area
Select the period to apply to the chart
For metric query, you can set up to 455 days from the current time
Time Zone Setting Area
Select the time zone to apply to the chart
Reset Button
Reset all manipulations or settings on the chart
Refresh Setting Area
Select the refresh cycle for the chart
Click the Refresh button to display information based on the current time
Click the refresh cycle to select the desired cycle: Off, 10 seconds, 1 minute, 2 minutes, 5 minutes, 15 minutes
More
Display additional task items for managing the chart
Sum: Sum of all data point values collected during the period
Average: Value obtained by dividing the Sum for the specified period by the number of data points in that period
Minimum: Lowest value observed during the specified period
Maximum: Highest value observed during the specified period
For a detailed explanation of metrics, refer to Metrics Overview.
Comparing by Date
You can monitor by comparing one metric and resource by date or period.
Follow these steps to monitor by comparing by date or period.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metrics menu. You will be taken to the Metrics page.
On the Metrics page, click Compare by Date in the metric view mode.
On the Metrics page, select the metrics you want to monitor from the metric list. A monitoring chart for the selected metrics is displayed in the Selected Metrics area at the bottom.
Division
Detailed Description
Metric List Area
List of metrics that can be monitored in ServiceWatch
Click the + button in front of the namespace and dimensions to view the lower-level list
When you select a metric to monitor, it is displayed as a chart in the Selected Metrics area
Search Filter Area
Set the search items to filter, then click the Apply Filter button to filter the metric list
Namespace-Dimension Name: Search based on the sub-dimension name of the selected namespace
Metric Name: Search by entering the exact metric name
Resource Name: Search by entering the exact resource name
Resource ID: Search by entering the exact resource ID
Keyword: Search based on the selected upper category and entered keyword
Searches across items other than metric name, resource name, resource ID, and Tag Key
Tag Key: Search with the selected tag key
Table. Metric List Items
View the monitoring chart in the Selected Metrics area.
Division
Detailed Description
Date Comparison/Period Comparison
Select the criterion to compare metrics
Date Comparison: Compare by specifying a specific date
Period Comparison: To be provided in the future
Date and Period Setting Area
Select the date or period to compare
Date Comparison: Specify the date to view in the chart
You can set up to 455 days from the current time
Up to 4 dates can be set
Period Comparison: To be provided in the future
Time Zone Setting Area
Select the time zone to apply to the chart
Reset Button
Reset all manipulations or settings on the chart
Refresh Setting Area
Select the refresh cycle for the chart
Click the Refresh button to display information based on the current time
Click the refresh cycle to select the desired cycle: Off, 10 seconds, 1 minute, 2 minutes, 5 minutes, 15 minutes
More
Display additional task items for managing the chart
Sum: Sum of all data point values collected during the period
Average: Value obtained by dividing the Sum for the specified period by the number of data points in that period
Minimum: Lowest value observed during the specified period
Maximum: Highest value observed during the specified period
For a detailed explanation of metrics, refer to Metrics Overview.
Monitoring Logs
You can monitor logs collected from Samsung Cloud Platform services.
Note
To view log monitoring data, you must first create a Log Group and Log Stream. For details on log groups, refer to Monitoring Logs.
Follow these steps to view log monitoring data.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to view detailed information. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Stream tab. The Log Stream list is displayed.
On the Log Stream list, click the log stream name for which you want to view detailed information. You will be taken to the Log Stream Detail page.
When you click the View All Log Streams button at the top of the log stream list, you will be taken to the All Log Streams Detail page.
Division
Detailed Description
Excel Download
Download log stream history as an Excel file
Timestamp List
Message list by timestamp
Filter using Period Selection, User Time Zone, and message input
Table. Log Group Detail - Log Stream Detail Items
Receiving Event Notifications
You can receive notifications by creating system event rules for changes in resources created in Samsung Cloud Platform.
Note
To receive event notifications, you must first create an Event Rule. For details on event rule creation, refer to Creating an Event Rule.
Installing ServiceWatch Agent
You can install ServiceWatch Agent to collect custom metrics and logs from monitoring targets.
Warning
Metric collection through ServiceWatch Agent is classified as custom metrics and is charged differently from metrics collected by default from each service, so be careful not to set up unnecessary metric collection. Make sure to set it up so that only necessary metrics are collected.
Follow these steps to install ServiceWatch Agent.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Agent Setup & Guideline button. The Agent Setup & Guideline popup window will open.
Copy the Installation File URL from the Agent Setup & Guideline popup window and navigate to that address. You can download the agent, manager, and configuration files.
Custom metric and log collection through ServiceWatch Agent is currently available only in Samsung Cloud Platform For Enterprise. It will be provided in other offerings in the future.
10.11.2.1 - Managing Dashboards and Widgets
You can create and manage dashboards to monitor resources of services in use on the Samsung Cloud Platform Console.
Creating a Dashboard
You can create dashboards in ServiceWatch.
Creating a Dashboard by Adding Individual Widgets
Follow these steps to create a dashboard.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the Create Dashboard button.
Enter a name for the dashboard.
The dashboard name must be 3 to 40 characters long, using English letters, numbers, and special characters (-_).
You cannot enter a name that is already in use in the service dashboard.
Add widgets to configure the dashboard. Depending on the widget addition method, a corresponding popup window will open.
Add Individual Widget: You can add a single widget combining metrics and resources. When you click the button, the Add Individual Widget popup window will open.
Division
Required
Detailed Description
Metric Division
Optional
Filter the metrics that can be monitored in ServiceWatch by All or Key Metrics and display them in the metric list
Metric List Area
Required
List of metrics that can be monitored in ServiceWatch
Click the + button in front of the namespace and dimensions to view the lower-level list
When you select a metric to monitor, it is displayed as a chart in the Selected Metrics area
Search Filter Area
-
Set the search items to filter, then click the Apply Filter button to filter the metric list
Namespace-Dimension Name: Search based on the sub-dimension name of the selected namespace
Metric Name: Search by entering the exact metric name
Resource Name: Search by entering the exact resource name
Resource ID: Search by entering the exact resource ID
Keyword: Search based on the selected upper category and entered keyword
Searches across items other than metric name, resource name, resource ID, and Tag Key
Tag Key: Search with the selected tag key
Selected Metrics Area
-
Monitoring chart for the metric selected in the metric list area
Data graph collected during the period applied to the chart
When you place the mouse cursor on the graph, the time, data value, and metric data information at that point are displayed in a popup
You can zoom in on a specific area of the graph by dragging the mouse
When you click the resource name displayed in the legend, detailed information about that resource is displayed in a popup
You can modify item values in the table area within the chart
Label: Enter the legend name using English letters, numbers, and special characters within 3 to 255 characters
Statistics: Select the method to aggregate metric data
You can select from Average (default), Minimum, Maximum, Sum
Aggregation Period: Select the aggregation period unit of metric values
You can select from 1 minute, 5 minutes (default), 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours
Delete: Delete that legend
Table. Add Individual Widget Items
Click the Done button in the widget addition popup window. The widget is added to the dashboard on the dashboard creation page.
After confirming the added widget, click the Create button. A popup window will open announcing the dashboard creation.
Click the Confirm button. The dashboard creation is complete.
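The naming rule above (3 to 40 characters; English letters, numbers, and the special characters - and _; no duplicate names) can be checked client-side with a sketch like the following. The function name and the duplicate-name list are illustrative, not part of the Console.

```python
import re

# Pattern per the rule stated above: 3-40 chars drawn from letters, digits, '-' and '_'.
_DASHBOARD_NAME = re.compile(r"^[A-Za-z0-9_-]{3,40}$")

def is_valid_dashboard_name(name, existing_names=()):
    """Return True if `name` satisfies the Console's dashboard naming rule
    and does not collide with a name already in use."""
    return bool(_DASHBOARD_NAME.match(name)) and name not in set(existing_names)
```

For example, `is_valid_dashboard_name("web-prod_cpu")` passes, while a two-character name, a name with spaces, or a name already in use does not.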
Creating a Dashboard by Adding Multiple Widgets
Follow these steps to create a dashboard.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the Create Dashboard button.
Enter a name for the dashboard.
The dashboard name must be 3 to 40 characters long, using English letters, numbers, and special characters (-_).
You cannot enter a name that is already in use in the service dashboard.
Add widgets to configure the dashboard. Depending on the widget addition method, a corresponding popup window will open.
Add Multiple Widgets: You can select desired resources by metric unit and add multiple widgets at once. When you click the button, the Add Multiple Widgets popup window will open.
Division
Required
Detailed Description
Metric Division
Optional
Filter the metrics that can be monitored in ServiceWatch by All or Key Metrics and display them in the metric list
Metric Selection Area
Required
Select the namespace and resources to add
Click the + button in front of the namespace and resources to view the lower-level resources and metric list
When you check a metric to add to a widget from the metric list, the Selected Metrics and Selected Resources areas are displayed
You can select multiple metrics
Selected Metrics
Required
Display the list of metrics selected from the namespace and resource list
When you click a metric, the list of resources included in that metric is displayed in the Selected Resources area
Selected Resources
Required
Add resources to the metrics selected in the Selected Metrics list
After clicking the Select button, select resources that can be added to that metric and add up to 5
You must add resources for all metrics in the Selected Metrics list
Statistics
Required
Select the statistics criterion for metric values
You can select from Average (default), Minimum, Maximum, Sum
Aggregation Period
Required
Select the aggregation period unit of metric values
You can select from 1 minute, 5 minutes (default), 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours
Table. Add Multiple Widgets Items
Click the Done button in the widget addition popup window. The widget is added to the dashboard on the dashboard creation page.
After confirming the added widget, click the Create button. A popup window will open announcing the dashboard creation.
Click the Confirm button. The dashboard creation is complete.
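The Statistics and Aggregation Period options in the tables above determine how raw data points are rolled up into the values charted in a widget. A minimal sketch of that roll-up, where the input format of `(timestamp_seconds, value)` pairs and the bucket alignment are assumptions for illustration:

```python
from collections import defaultdict

def aggregate(points, period_seconds=300, statistic="Average"):
    """Roll up (timestamp_seconds, value) points into buckets of
    `period_seconds` (5 minutes by default, matching the Console default)
    using one of the four supported statistics."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each point to the start of its aggregation-period bucket.
        buckets[ts - ts % period_seconds].append(value)
    fns = {"Average": lambda v: sum(v) / len(v),
           "Minimum": min, "Maximum": max, "Sum": sum}
    return {start: fns[statistic](vals) for start, vals in sorted(buckets.items())}
```

With a 5-minute period, two points at 0 s and 60 s fall into the same bucket and are averaged, while a point at 300 s starts a new bucket.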
Creating a Dashboard by Importing a Dashboard
Follow these steps to create a dashboard.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the Create Dashboard button.
Enter a name for the dashboard.
The dashboard name must be 3 to 40 characters long, using English letters, numbers, and special characters (-_).
You cannot enter a name that is already in use in the service dashboard.
Add widgets to configure the dashboard. Depending on the widget addition method, a corresponding popup window will open.
Import Dashboard: You can import widgets from a dashboard registered in ServiceWatch. When you click the button, the Import Dashboard popup window will open.
Division
Required
Detailed Description
Dashboard
Required
Display the list of dashboards registered in ServiceWatch
When you select a dashboard, the widgets applied to that dashboard are displayed in the Preview area
Preview
Required
Display the widgets applied to the dashboard selected from the dashboard list
Check the widget name to select the widget to add to the dashboard to create
When you check the Select All item, all metrics of that dashboard are selected
Table. Import Dashboard Items
Click the Done button in the widget addition popup window. The widget is added to the dashboard on the dashboard creation page.
After confirming the added widget, click the Create button. A popup window will open announcing the dashboard creation.
Click the Confirm button. The dashboard creation is complete.
Viewing Dashboards
You can view information about the dashboard selected on the Dashboard List page.
Follow these steps to view a dashboard.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
Division
Detailed Description
Dashboard Name
Dashboard name
For a user dashboard, this is the name set by the user; for a service dashboard, it is the Samsung Cloud Platform service name matching the service namespace
Dashboard Division
Dashboard division
User: Dashboard created directly by the user
Service: Dashboard composed of key metrics for each service automatically pre-built
Modified Date
Dashboard modification date
Created Date
Dashboard creation date
Favorites
Displayed in yellow if favorites are set
To set or unset favorites, click the star icon
Table. Dashboard List Items
On the Dashboard List page, click the dashboard for which you want to view detailed information. You will be taken to the Dashboard Detail page.
Division
Detailed Description
Dashboard Name
Display the dashboard name
Click the name to select another dashboard
Period Setting Area
Select the period to apply to widgets in the dashboard
For metric query, you can set up to 455 days from the current time
Time Zone Setting Area
Select the time zone to apply to the period setting
Reset Button
Reset all manipulations or settings on the dashboard detail screen
Refresh Setting Area
Select the refresh cycle for widget information
Click the Refresh button to display information based on the current time
Click the refresh cycle to select the desired cycle: Off, 10 seconds, 1 minute, 2 minutes, 5 minutes, 15 minutes
Edit
Modify the dashboard name or manage widgets
When you click the Edit button, you will be taken to the Edit Dashboard page
You can view detailed information about a dashboard.
Follow these steps to view dashboard detailed information.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to view detailed information. You will be taken to the Dashboard Detail page.
Click the More > Detailed Information button in the upper right corner of the Dashboard Detail page. The Dashboard Detailed Information popup window will open.
Division
Detailed Description
Dashboard Name
Dashboard name
Dashboard Division
Usage division of the dashboard
User: Dashboard created directly by the user
Service: Dashboard composed of key metrics for each service automatically pre-built
Service
Service name
Resource Type
dashboard
SRN
Unique resource ID in Samsung Cloud Platform
In ServiceWatch, it means the SRN of the dashboard
Resource Name
Resource name
In ServiceWatch, it means the dashboard name
Resource ID
Unique resource ID in the service
Creator
User who created the dashboard
Created Date
Date and time when the dashboard was created
Modifier
User who modified the dashboard information
Modified Date
Date and time when the dashboard information was modified
Table. Dashboard Detail - Detailed Information Popup Items
Reference
When sorting dashboard names in the dashboard list, follow the sorting rules below.
Whitespace and control characters
Some special characters ( !"#$%&'()*+,-./ )
Numbers (0–9)
Some special characters ( :;<=>?@ )
English (A–Z, a–z, case-insensitive)
Remaining special characters ([\]^_`)
Other characters
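The sorting rules above can be expressed as a comparison key. The bucket numbers below are illustrative; only the relative order between buckets matters.

```python
def dashboard_sort_key(name):
    r"""Sort key implementing the dashboard-name ordering described above:
    whitespace/control chars < !"#$%&'()*+,-./ < digits < :;<=>?@
    < English letters (case-insensitive) < remaining specials < other chars."""
    def rank(ch):
        if ch.isspace() or ord(ch) < 32:
            return (0, ch)
        if ch in '!"#$%&\'()*+,-./':
            return (1, ch)
        if ch.isdigit():
            return (2, ch)
        if ch in ':;<=>?@':
            return (3, ch)
        if ch.isascii() and ch.isalpha():
            # Case-insensitive: compare letters by their lowercase form.
            return (4, ch.lower())
        if ch in '[\\]^_`':
            return (5, ch)
        return (6, ch)
    return [rank(ch) for ch in name]
```

Used as `sorted(names, key=dashboard_sort_key)`, a name starting with a space sorts before one starting with a digit, which sorts before letters, and so on.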
Viewing Dashboard Source Code
You can view the dashboard source code.
Follow these steps to view dashboard source code.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to view detailed information. You will be taken to the Dashboard Detail page.
Click the More > View Source button in the upper right corner of the Dashboard Detail page. The Dashboard View Source popup window will open.
Cloning a Dashboard
You can clone the widgets of the current dashboard and add them to another dashboard.
Note
User permissions are required to clone a dashboard. If you lack them, an error message such as the following is displayed.
User: {email}
Action: iam:CreateGroup
On resource: {SRN}
Context: no identity-based policy allows the action
Follow these steps to clone a dashboard.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to view detailed information. You will be taken to the Dashboard Detail page.
Click the More > Clone button in the upper right corner of the Dashboard Detail page. The Clone Dashboard popup window will open.
Select a dashboard cloning method and enter the required information. The required information varies depending on the cloning method.
Division
Required
Detailed Description
Clone Target
-
Dashboard name to clone
Clone Method
Required
After cloning the dashboard widgets, select the dashboard to add to
New Dashboard: Create a new dashboard containing clones of the current dashboard's widgets
Existing Dashboard: Clone the widgets of the current dashboard and add them to an existing dashboard
Dashboard Name
Required
Enter the name of the dashboard to create
Displayed when clone method is selected as New Dashboard
Enter within 3 to 40 characters using English letters, numbers, and special characters (-_)
Dashboard Selection
Required
Select a dashboard to add the cloned widgets from among previously created dashboards
Displayed when clone method is selected as Existing Dashboard
Table. Clone Dashboard Items
After entering the required information, click the Done button. A popup window will open announcing the dashboard cloning.
Click the Confirm button in the popup window. The dashboard cloning is complete.
Reference
You can clone a service dashboard and add widgets to a user dashboard, or create it as a new dashboard.
Deleting a Dashboard
You can delete dashboards that are not in use.
Note
Service dashboards cannot be deleted.
Follow these steps to delete a dashboard.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, select the checkbox of the dashboard to delete and click the Delete button. A popup window will open announcing the dashboard deletion.
You can delete multiple dashboards at the same time.
You can also delete individually by clicking the More > Delete button in the upper right corner of the Dashboard Detail page.
After entering Delete in the delete confirmation input area, click the Confirm button. The dashboard is deleted.
Managing Widgets
On the Dashboard Detail page, you can modify or manage widgets.
Editing a Widget
You can modify the metrics and resources of a widget.
Note
Widgets in service dashboards cannot be edited.
Follow these steps to edit a widget.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to edit widgets. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the More > Edit Widget button in the upper right corner of the widget you want to edit. The Edit Widget popup window will open.
After modifying the widget’s metrics and resources, click the Confirm button. The widget editing is complete.
Division
Required
Detailed Description
Metric Division
Optional
Filter the metrics that can be monitored in ServiceWatch by All or Key Metrics and display them in the metric list
Metric List Area
Required
List of metrics that can be monitored in ServiceWatch
Click the + button in front of the namespace and dimensions to view the lower-level list
When you select a metric to monitor, it is displayed as a chart in the Selected Metrics area
Search Filter Area
-
Set the search items to filter, then click the Apply Filter button to filter the metric list
Namespace-Dimension Name: Search based on the sub-dimension name of the selected namespace
Metric Name: Search by entering the exact metric name
Resource Name: Search by entering the exact resource name
Resource ID: Search by entering the exact resource ID
Keyword: Search based on the selected upper category and entered keyword
Searches all items except Metric Name, Resource Name, Resource ID, and Tag Key
Tag Key: Search with the selected tag key
Selected Metrics Area
-
Monitoring chart for the metric selected in the metric list area
Data graph collected during the period applied to the chart
When you place the mouse cursor on the graph, the time, data value, and metric data information at that point are displayed in a popup
You can zoom in on a specific area of the graph by dragging the mouse
When you click the label name displayed in the legend, detailed information about that legend is displayed in a popup
In the table area within the chart, you can check and modify labels, statistics, and aggregation period by legend
Legend: Color by legend
Click the legend color to change to a different color
Period: Period applied to the chart
Metric: Display the namespace, resource name, and metric name of the selected metric
Statistics: Select the method to aggregate metric data
You can select from Average (default), Minimum, Maximum, Sum
Aggregation Period: Select the aggregation period unit of metric values
You can select from 1 minute, 5 minutes (default), 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours, 1 day
Delete: Delete that legend
Table. Add Individual Widget Items
Cloning a Widget
You can copy a widget and add it to another dashboard.
Note
User permissions are required to clone a widget. If you lack them, an error message such as the following is displayed.
User: {email}
Action: iam:CreateGroup
On resource: {SRN}
Context: no identity-based policy allows the action
Follow these steps to clone a widget.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard to clone. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the More > Clone Widget button in the upper right corner of the widget you want to clone. The Clone Widget popup window will open.
Select a widget cloning method and enter the required information. The required information varies depending on the cloning method.
Division
Required
Detailed Description
Clone Target
-
Dashboard name to clone
Clone Method
Required
After cloning the widget, select the dashboard to add to
Existing Dashboard: Add the widget to an existing dashboard
New Dashboard: Create a new dashboard and add the widget
Dashboard Selection
Required
Select a dashboard to add the cloned widget from among previously created dashboards
Displayed when clone method is selected as Existing Dashboard
Dashboard Name
Required
Enter the name of the dashboard to create
Displayed when clone method is selected as New Dashboard
Enter within 3 to 40 characters using English letters, numbers, and special characters (-_)
Widget Name
Required
Enter the widget name when adding the widget to the dashboard
Enter within 3 to 255 characters using English letters, numbers, and special characters (-_.|)
Table. Clone Widget Items
After entering the required information, click the Done button. A popup window will open announcing the widget addition.
Click the Confirm button in the popup window. The widget is cloned and added to the dashboard.
Downloading Widget Files
You can download widget information as a file.
Follow these steps to download widget information.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to download widget information. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the More > File Download button in the upper right corner of the widget whose information you want to download. The File Download popup window will open.
Select a file download method and click the Confirm button. The download will start.
You can select multiple download methods at the same time.
Division
Detailed Description
CSV
Convert the widget's metrics and monitoring data to CSV (*.csv) file format and download
PNG
Convert the widget chart to image (*.png) file format and download
Table. Widget File Download Items
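The CSV download above flattens a widget's monitoring data into rows. A sketch of an equivalent conversion, where the column names and row shape are assumptions; the Console's actual CSV layout may differ:

```python
import csv
import io

def metrics_to_csv(rows):
    """Serialize metric data points as CSV text.
    `rows` is an iterable of (timestamp, resource_name, metric_name, value)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["timestamp", "resource", "metric", "value"])
    writer.writerows(rows)
    return buf.getvalue()
```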
Viewing Widget Metrics
You can view the resource and metric information of a widget on the Metrics page.
Follow these steps to view widget metrics on the metrics page.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to view widget metrics. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the More > View Metrics button in the upper right corner of the widget whose metrics you want to view. You will be taken to the Metrics page.
The resource and metric information of the selected widget are automatically set and displayed on the Metrics page.
Viewing Widget Source Code
You can view the source code of a widget. Widget source information is not available in service dashboards.
Follow these steps to view widget source code.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard for which you want to view detailed information. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the More > View Source button in the upper right corner of the widget whose source code you want to view. The Widget View Source popup window will open.
Division
Detailed Description
Source Information
Display widget source code in JSON format
Copy Code
Copy source code to clipboard
Table. Widget View Source Items
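Since the View Source popup exposes the widget as JSON, the source can be inspected programmatically once copied. The `metrics`/`metricName` keys below are hypothetical; check your own widget's source for the actual schema.

```python
import json

def list_widget_metrics(widget_source):
    """Extract metric names from widget source JSON as shown in the
    View Source popup. Keys here are illustrative, not a documented schema."""
    doc = json.loads(widget_source)
    return [m.get("metricName") for m in doc.get("metrics", [])]
```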
Exporting to Object Storage
You can save widget information to Object Storage.
Note
User permissions are required to save to Object Storage. If you lack them, an error message such as the following is displayed.
User: {email}
Action: iam:CreateGroup
On resource: {SRN}
Context: no identity-based policy allows the action
Follow these steps to save widget information to Object Storage.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Dashboard > Dashboard List menu. You will be taken to the Dashboard List page.
On the Dashboard List page, click the dashboard containing the widget information you want to export. You will be taken to the Dashboard Detail page.
On the Dashboard Detail page, click the More > Export to Object Storage button in the upper right corner of the widget whose information you want to export. The Export to Object Storage popup window will open.
Select the bucket to save the widget information and click the Done button. A popup window will open announcing the save.
Click the Confirm button in the popup window. The data export will start.
Note
The limitations for metrics that can be exported to Object Storage are as follows.
Number of metrics: Up to 10
Query period: Within 2 months (63 days)
If the query period exceeds 2 months (63 days), only data for up to 63 days will be saved.
Reference
If there is no Object Storage to save metric data, create Object Storage and proceed.
Metric data is saved in the file format “metric name-yyyymmddhhmmss.json” and can be viewed in the ~/servicewatch/metric path of the Object Storage bucket.
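The stated file-name format and the 63-day cap can be sketched as follows. The exact key prefix under the bucket is assumed to follow the `servicewatch/metric` path mentioned above.

```python
from datetime import datetime, timedelta

MAX_EXPORT_DAYS = 63  # query periods longer than this are truncated

def export_object_key(metric_name, exported_at):
    """Build the object key for an exported metric file, following the
    'metric name-yyyymmddhhmmss.json' format under servicewatch/metric."""
    stamp = exported_at.strftime("%Y%m%d%H%M%S")
    return f"servicewatch/metric/{metric_name}-{stamp}.json"

def clamp_query_start(start, end):
    """Truncate the query period so that only the last 63 days before
    `end` are exported, per the limitation above."""
    return max(start, end - timedelta(days=MAX_EXPORT_DAYS))
```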
10.11.2.2 - Alert
In ServiceWatch, you can create and manage alert policies by setting threshold criteria for metrics to monitor, and generate alert notifications when the set conditions are met.
Creating an Alert Policy
You can create an alert policy for a metric to set the criteria for alert generation.
Follow the steps below to create an alert policy.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Alert > Alert Policy menu. You will be taken to the Alert Policy List page.
On the Alert Policy List page, click the Create Alert Policy button. You will be taken to the Create Alert Policy page.
In the Enter Basic Information area, enter the name and description of the alert policy, then click the Select Metric button. The Select Metric popup window opens.
In the Select Metric popup window, select the metric for which you want to create an alert policy, then click the Confirm button. The Metric and Condition Settings area is displayed.
Category
Required
Description
Metric Category
Required
Filter and display the list of metrics available for monitoring in ServiceWatch by All or Key Metrics
Metric List Area
Required
List of metrics available for monitoring in ServiceWatch
Click the + button in front of namespace, dimension to view the lower-level list
When you select a metric to monitor, it is displayed as a chart in the Selected Metric area
If the metric is linked to a namespace, Service Dashboard is displayed
Clicking Service Dashboard takes you to the detail page of that dashboard
Search Filter Area
-
Set the search item to filter, then click the Apply Filter button to filter the metric list
Namespace-Dimension Name: Search based on the lower-level dimension name of the selected namespace
Metric Name: Enter the exact metric name to search
Resource Name: Enter the exact resource name to search
Resource ID: Enter the exact resource ID to search
Keyword: Search based on the selected upper category and the entered keyword
Searches all items except Metric Name, Resource Name, Resource ID, and Tag Key
Tag Key: Search with the selected tag Key
Selected Metric Area
-
Monitoring chart for the metric selected in the metric list area
Data graph of data collected during the period applied to the chart
Place the mouse cursor on the graph to display the time, data value, and metric data information of that point in a popup
Drag the mouse to zoom in on a specific area of the graph
Click the label name displayed in the legend to display detailed information about that legend in a popup
In the chart display area, you can check and modify the labels, statistics, and aggregation period by legend
Legend: Color by legend
Click the legend color to change to another color
Period: Period applied to the chart
Metric: Displays the namespace, resource name, and metric name of the selected metric
Statistics: Select the method for aggregating metric data
Can select from Average (default), Minimum, Maximum, Sum
Aggregation Period: Select the aggregation period unit of the metric value
Can select from 1 minute, 5 minutes (default), 15 minutes, 30 minutes, 1 hour, 3 hours, 6 hours, 12 hours, 1 day
Delete: Delete that legend
Table. Select Metric Popup Items
In the Metric and Condition Settings area, set the threshold for alert generation.
Category
Required
Description
Namespace
-
Namespace of the selected metric
Metric Name
-
Name of the selected metric
Unit
-
Data unit of the selected metric
Evaluation Range
Required
Time (seconds) range for alert evaluation
Can enter up to 604,800 seconds in multiples of 60
If it is set smaller than the collection period, or is not a multiple of the collection period, alert evaluation may not work as intended
Statistics
Required
Select the method for calculating metric data during the evaluation range
Sum: Sum of all data point values collected during that period
Average: The Sum over the specified period divided by the number of data points in that period
Minimum: Lowest value observed during the specified period
Maximum: Highest value observed during the specified period
Additional Configuration
Optional
Set the number of evaluations, number of violations, and method for handling missing data
When additional configuration is set to Enable, you can set the number of evaluations and the number of violations used when evaluating alerts. If the configured number of violations occurs among the evaluations within the evaluation range (seconds), the alert status changes to Alert
Number of Evaluations: Number of evaluations for alert generation
Can enter 1 ~ 8,640
Number of Violations: Number of allowed violations until alert generation
Can enter 1 ~ within Number of Evaluations
If the evaluation range is less than 1 hour (3,600 seconds), Number of Evaluations × Evaluation Range can be set up to a maximum of 1 day (86,400 seconds)
If the evaluation range is 1 hour (3,600 seconds) or more, Number of Evaluations × Evaluation Range can be set up to a maximum of 7 days (604,800 seconds)
When additional configuration is set to Enable, you can set how to handle missing data when evaluating alerts.
Treat missing data as missing (Missing)
Treat missing data as ignore to maintain current alert status (Ignore)
Treat missing data as satisfying the condition (Breaching)
Treat missing data as normal that does not satisfy the condition (Not breaching)
Condition Setting
Required
Condition Operator: Select the condition operator to compare the calculated metric data value during the evaluation range with the threshold
Threshold: Set the threshold to compare with the calculated metric data value during the evaluation range using the condition operator
Can enter 0 ~ 2,147,483,647
Condition: Description of the condition for alert status (Alert) change according to the set Condition Operator and Threshold
Alert Level
Required
Select the alert level according to the importance of the alert policy
Resource ID
-
Resource ID of the metric monitoring target
Resource Name
-
Resource name of the metric monitoring target
Table. Metric and Condition Settings Items
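The limits on Evaluation Range, Number of Evaluations, and Number of Violations in the table above can be checked with a sketch like this. It reads "Number of Evaluations/Evaluation Range" as the product of the two in seconds, which is an interpretation; the Console enforces the actual limits in the form.

```python
def validate_evaluation_settings(range_seconds, evaluations=1, violations=1):
    """Apply the limits stated above: an evaluation range in multiples of 60
    up to 604,800 s; 1-8,640 evaluations; violations within evaluations; and
    the total window (evaluations x range) capped at 1 day for ranges under
    one hour, 7 days otherwise."""
    if not (0 < range_seconds <= 604_800 and range_seconds % 60 == 0):
        return False
    if not (1 <= evaluations <= 8_640 and 1 <= violations <= evaluations):
        return False
    cap = 86_400 if range_seconds < 3_600 else 604_800
    return evaluations * range_seconds <= cap
```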
In the Notification Settings area, select the users to receive notifications when an alert occurs.
Only users with login history (users who have registered email, mobile phone number) can be added as notification recipients.
You can add up to 100 notification recipients.
Note
If there is no user you want to add, you can create a user on the Create User page of the IAM service. For more information about creating a user, see Creating a User.
The notification method (E-mail or SMS) can be changed by selecting the notification target as Service > Alert on the Notification Settings page. For more information about notification settings, see Checking Notification Settings.
In the Enter Additional Information area, add tag information.
After checking the summary information, click the Create button. A popup window announcing the creation of the alert policy opens.
Click the Confirm button. The alert policy creation is completed.
Note
Depending on the scale, creating an alert policy may take several tens of minutes or more.
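Putting the condition settings above together, one evaluation pass compares the chosen statistic of the collected data against the threshold. The sketch below is illustrative (evaluation actually happens server-side); the status names match those described under alert status.

```python
import operator

_STATS = {"Sum": sum,
          "Average": lambda v: sum(v) / len(v),
          "Minimum": min, "Maximum": max}
_OPS = {">": operator.gt, ">=": operator.ge,
        "<": operator.lt, "<=": operator.le, "=": operator.eq}

def evaluate_alert(values, statistic, condition_operator, threshold):
    """Return 'Alert' if the statistic of the data points collected in the
    evaluation range violates the threshold, 'Insufficient data' when no
    points arrived, otherwise 'Normal'."""
    if not values:
        return "Insufficient data"
    calculated = _STATS[statistic](values)
    breaching = _OPS[condition_operator](calculated, threshold)
    return "Alert" if breaching else "Normal"
```

For example, an Average of [70, 90, 95] compared with "> 80" yields Alert, while a Maximum of [10, 20] does not.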
Viewing Alert Policy Details
You can view and manage detailed information about an alert policy.
To view detailed information about an alert policy, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Alert > Alert Policy menu. You will be taken to the Alert Policy List page.
You can also click the Alert Level button displayed in the Alert Policy menu to view only the list of alert policies corresponding to that alert level.
On the Alert Policy List page, click the alert policy name for which you want to view detailed information. You will be taken to the Alert Policy Details page.
Category
Description
Alert Policy Status
Status of the alert policy
Active: Alert policy is activated
Inactive: Alert policy is deactivated
Can be changed by clicking the Activate or Deactivate button
Alert Policy Deletion
Delete the corresponding alert policy
Alert Status
Current alert status
Normal: When the metric does not meet the set condition
Insufficient data: When metric data cannot be checked (missing, non-existent, not arrived)
Alert: When the metric meets the set condition
When the alert status is Alert, the alert level (High, Middle, Low) is also displayed
Only users with login history (users who have registered email, mobile phone number) can be added as notification recipients.
You can add up to 100 notification recipients.
If there is no user you want to add, you can create a user on the Create User page of the IAM service. For more information about creating a user, see Creating a User.
The notification method (E-mail or SMS) can be changed by selecting the notification target as Service > Alert on the Notification Settings page. For more information about notification settings, see Checking Notification Settings.
Alert History
You can view the history of alert status changes for the alert policy selected on the Alert Policy List page.
Category
Description
Alert History List
Alert status change date and time, change status category information, alert description
Click View Details to view detailed information of that alert history and source code in JSON format
View Details
Can view detailed information of alert history and source code in JSON format
Activated only when you select 1 alert to view detailed information in the alert history list
Table. Alert Policy Details - Alert History Tab Items
Tags
You can view the tag information of the alert policy selected on the Alert Policy List page, and add, change, or delete it.
Category
Description
Tag List
Key, Value information of tags
Modify Tags
Can modify or delete existing tag information or add new tags
Can add up to 50 tags per resource
When adding a tag, if you enter the Key and Value values, you can select from the list of previously created tag Keys and Values
Table. Alert Policy Details - Tags Tab Items
Operation History
You can view the operation history of the alert policy selected on the Alert Policy List page.
Category
Description
Operation History List
Resource change history
Can view operation details, operation date and time, resource type, resource name, operation result, operator information
Click the Settings button to change information items
Can filter using Period Selection, User Time Zone, operator information input, Detailed Search
Click the operation details in the Operation History List to go to the Activity History Details page of that operation
Table. Alert Policy Details - Operation History Tab Items
Modifying an Alert Policy
You can modify the target metric and policy settings of an alert policy.
To modify an alert policy, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Alert > Alert Policy menu. You will be taken to the Alert Policy List page.
On the Alert Policy List page, click the alert policy name for which you want to view detailed information. You will be taken to the Alert Policy Details page.
On the Alert Policy Details page, click the Modify button in the Metric Information of the Detailed Information tab. The Modify Metric Information popup window opens.
After modifying the metric information and policy settings, click the Confirm button. A popup window announcing the metric modification opens.
Click the Confirm button. The alert policy modification is completed.
Deleting an Alert Policy
You can delete unused alert policies.
To delete an alert policy, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Alert > Alert Policy menu. You will be taken to the Alert Policy List page.
On the Alert Policy List page, select the checkbox of the alert policy you want to delete, then click the Delete button. A popup window announcing the deletion of the alert policy opens.
You can delete multiple alert policies at the same time.
You can also individually delete by clicking the Delete button at the right end of each alert policy or by clicking the Delete Alert Policy button on the corresponding Alert Policy Details page.
Click the Confirm button. The alert policy is deleted.
Note
Deleting an alert policy may take tens of minutes or more, depending on the scale.
10.11.2.3 - Metric
Users can monitor metrics for service resources in Samsung Cloud Platform Console and use them for management.
Viewing Metrics
You can view the metrics available in ServiceWatch.
To view metrics, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metric menu. You will be taken to the Metric page.
Check the metrics in the metric list on the Metric page.
Category
Description
Metric Comparison Mode
Select the method to compare metrics
Metric Comparison: Compare multiple metrics and resources
Date Comparison: Compare a single metric and resource over multiple periods
Only one metric can be selected
Metric Category
Filter and display the list of metrics available for monitoring in ServiceWatch by All or Key Metrics
Metric List Area
List of metrics available for monitoring in ServiceWatch
Click the + button in front of a namespace or dimension to expand the lower-level list
Select a metric to monitor
If the metric is linked to a namespace, Service Dashboard is displayed
Clicking Service Dashboard takes you to the detail page of that dashboard
Search Filter Area
Set the search item to filter, then click the Apply Filter button to filter the metric list
Namespace-Dimension Name: Search based on the lower-level dimension name of the selected namespace
Metric Name: Enter the exact metric name to search
Resource Name: Enter the exact resource name to search
Resource ID: Enter the exact resource ID to search
Keyword: Search based on the selected upper category and the entered keyword
Searches every item except Metric Name, Resource Name, Resource ID, and Tag Key
Tag Key: Search with the selected tag Key
Selected Metric
Display monitoring information of the metric selected in the metric list
You can add metric monitoring result charts as widgets or manage data.
Adding as Widget
You can add selected metrics as widgets to a dashboard.
Guide
You can only add as a widget when the metric comparison mode is selected as Metric Comparison.
To add a metric as a widget, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metric menu. You will be taken to the Metric page.
Select the metric to monitor in the metric list on the Metric page. The monitoring chart for the selected metric is displayed in the Selected Metric area at the bottom.
Set the chart area in the monitoring chart in the Selected Metric area, then click the More > Add Widget button. The Add Widget popup window opens.
Select the widget addition method and enter the required information. Required information varies depending on the addition method.
Category
Required
Description
Duplication Method
Required
Select the dashboard to add after duplicating the widget
Existing Dashboard: Add widget to existing dashboard
New Dashboard: Create a new dashboard and add widget
Dashboard Selection
Required
Select the dashboard to add the duplicated widget among existing dashboards
Displayed when duplication method is selected as Existing Dashboard
Dashboard Name
Required
Enter the name of the dashboard to be newly created
Displayed when duplication method is selected as New Dashboard
Enter within 3 ~ 40 characters using English, numbers, and special characters (-_)
Widget Name
Required
Enter the name of the widget when adding the widget to the dashboard
Enter within 3 ~ 255 characters using English, numbers, and special characters (-_.|)
Table. Add Widget Items
After entering the required information, click the Complete button. A popup window announcing the widget addition opens.
Click the Confirm button in the popup window. The widget is added to the dashboard.
Sharing Monitoring Chart URL
You can share monitoring chart information as a URL.
To share a monitoring chart URL, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metric menu. You will be taken to the Metric page.
Select the metric to monitor in the metric list on the Metric page. The monitoring chart for the selected metric is displayed in the Selected Metric area at the bottom.
Set the chart area in the monitoring chart in the Selected Metric area, then click the More > Share URL button. The Share URL popup window opens.
Click the Copy URL button in the Share URL popup window. The generated URL is copied.
Information in the monitoring chart is provided in the form of a metadata link.
Downloading Monitoring Chart File
You can download monitoring chart information in file format.
To download monitoring chart information, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metric menu. You will be taken to the Metric page.
Select the metric to monitor in the metric list on the Metric page. The monitoring chart for the selected metric is displayed in the Selected Metric area at the bottom.
Set the chart area in the monitoring chart in the Selected Metric area, then click the More > Download File button. The Download File popup window opens.
Select the file download method, then click the Confirm button. Download starts.
You can select multiple download methods at the same time.
Category
Description
CSV
Convert chart metrics and monitoring data to a CSV (*.csv) file and download
PNG
Convert chart to image (*.png) files and download
Can only be selected when metric comparison mode is selected as Metric Comparison
Can download up to 100 metric data
Table. Metric Monitoring File Download Items
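As a rough illustration of working with the downloaded CSV data, the sketch below averages one metric's values using Python's csv module. The column names ("timestamp", "metric_name", "value") are assumptions made for this example only; check the header row of the actual downloaded file.

```python
import csv
import io

def average_metric(csv_file, metric_name):
    """Average the values of one metric over the downloaded period.
    Column names below are assumed for illustration only."""
    reader = csv.DictReader(csv_file)
    values = [float(row["value"]) for row in reader
              if row["metric_name"] == metric_name]
    return sum(values) / len(values) if values else None

# A hypothetical two-row sample standing in for a downloaded file:
sample = io.StringIO(
    "timestamp,metric_name,value\n"
    "2024-01-01 00:00:00,cpu_utilization,42.5\n"
    "2024-01-01 00:01:00,cpu_utilization,47.1\n"
)
print(average_metric(sample, "cpu_utilization"))
```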
Exporting to Object Storage
You can save monitoring chart data to Object Storage.
Guide
User permission is required to save to Object Storage.
User: {email}
Action: iam:CreateGroup
On resource: {SRN}
Context: no identity-based policy allows the action
To save monitoring chart data to Object Storage, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metric menu. You will be taken to the Metric page.
Select the metric to monitor in the metric list on the Metric page. The monitoring chart for the selected metric is displayed in the Selected Metric area at the bottom.
Set the chart area in the monitoring chart in the Selected Metric area, then click the More > Export to Object Storage button. The Export to Object Storage popup window opens.
Select the bucket to save data, then click the Complete button. A popup window announcing data saving opens.
Click the Confirm button in the popup window. Data export starts.
Guide
Limitations of metrics that can be exported to Object Storage are as follows.
Number of metrics: Up to 10
Query period: Within 2 months (63 days)
If the query period exceeds 2 months (63 days), only data for up to 63 days is saved.
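The 63-day limit above can be sketched as a small helper that trims an export window before submitting it. This is an illustrative assumption, not ServiceWatch behavior: the guide does not specify which end of an oversized period is trimmed, so this sketch assumes the most recent 63 days are kept.

```python
from datetime import datetime, timedelta

MAX_EXPORT_DAYS = 63  # query period limit stated in the guide above

def clamp_export_window(start, end):
    """Trim an export query window to the 63-day limit.
    Assumes (illustratively) that the most recent data is kept."""
    if end - start > timedelta(days=MAX_EXPORT_DAYS):
        start = end - timedelta(days=MAX_EXPORT_DAYS)
    return start, end

start, end = clamp_export_window(datetime(2024, 1, 1), datetime(2024, 6, 1))
```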
Note
If there is no Object Storage to save metric data, create Object Storage and proceed.
Metric data is saved in the “metricname-yyyymmddhhmmss.json” file format and can be checked in the ~/servicewatch/metric path of the Object Storage bucket.
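Because exported files follow the "metricname-yyyymmddhhmmss.json" naming convention described above, the metric name and export timestamp can be recovered from a file name. A minimal sketch (the example file name is hypothetical):

```python
from datetime import datetime

def parse_export_filename(filename):
    """Split "metricname-yyyymmddhhmmss.json" into (name, timestamp)."""
    stem = filename.rsplit(".", 1)[0]      # drop the ".json" extension
    name, _, stamp = stem.rpartition("-")  # metric names may themselves contain "-"
    return name, datetime.strptime(stamp, "%Y%m%d%H%M%S")

# Hypothetical example file name:
name, exported_at = parse_export_filename("cpu_utilization-20240101093000.json")
```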
Viewing Monitoring Chart Source
You can view the source code of the monitoring chart.
Guide
You can only view the source code when the metric comparison mode is selected as Metric Comparison.
To view the source code of the monitoring chart, follow the steps below.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Metric menu. You will be taken to the Metric page.
Select the metric to monitor in the metric list on the Metric page. The monitoring chart for the selected metric is displayed in the Selected Metric area at the bottom.
Set the chart area in the monitoring chart in the Selected Metric area, then click the More > View Source button. The View Widget Source popup window opens.
Category
Description
Source Information
Display source code of monitoring chart in JSON format
Copy Code
Copy source code to clipboard
Table. Monitoring Chart View Source Items
10.11.2.4 - Logs
In ServiceWatch, you can create and manage log groups to collect, retain, and monitor logs generated by service resources.
Creating a Log Group
You can create a log group for metrics.
Follow these steps to create a log group.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the Create Log Group button. You will be taken to the Create Log Group page.
Enter the basic information and tag information required to create a log group.
Division
Required
Detailed Description
Log Group Name
Required
Enter the name of the log group to monitor in ServiceWatch
Enter within 3 to 512 characters using English letters, numbers, and special characters (-_./#)
Log Retention Policy
Required
Select the period to retain monitored log data
Tag
Optional
Add tag information
You can add up to 50 tags per resource
When adding a tag, after entering Key and Value values, you can select from the list of existing tag Keys and Values
Table. Create Log Group Items
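The log group naming rule in the table can be expressed as a regular expression, which is handy for pre-validating names before submitting the form. The rule itself comes from the table above; the helper function is an illustrative sketch, not part of the product.

```python
import re

# 3 to 512 characters: English letters, numbers, and - _ . / #
# (the naming rule from the Create Log Group table above)
LOG_GROUP_NAME_RE = re.compile(r"^[A-Za-z0-9\-_./#]{3,512}$")

def is_valid_log_group_name(name):
    """Return True if the name satisfies the length and character rules."""
    return bool(LOG_GROUP_NAME_RE.fullmatch(name))

print(is_valid_log_group_name("app/prod-errors"))  # valid
print(is_valid_log_group_name("ab"))               # too short
```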
After checking the summary information, click the Create button. A popup window will open announcing the log group creation.
Click the Confirm button. The log group creation is complete.
Viewing Log Group Detailed Information
You can view and manage detailed information about log groups.
Follow these steps to view detailed information about log groups.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to view detailed information. You will be taken to the Log Group Detail page.
Tags
You can view tag information for the log group selected on the Log Group List page, and add, modify, or delete tags.
Division
Detailed Description
Tag List
Key, Value information of tags
Edit Tag
Modify or delete existing tag information or add new tags
You can add up to 50 tags per resource
When adding a tag, after entering Key and Value values, you can select from the list of existing tag Keys and Values
Table. Log Group Detail - Tags Tab Items
Operation History
You can view the operation history of the log group selected on the Log Group List page.
Division
Detailed Description
Operation History List
Resource change history
Can view operation details, operation date, resource type, resource name, operation result, operator information
Click the Settings button to change information items
Can filter using Period Selection, User Time Zone, operator information input, Detailed Search
When you click the operation details in the Operation History List, you will be taken to the Activity History Detail page for that operation
Table. Log Group Detail - Operation History Tab Items
Managing Log Streams
You can create and manage log streams.
Creating a Log Stream
You can create a new log stream in a log group.
Follow these steps to create a log stream.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to create a log stream. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Stream tab. The log stream list is displayed.
Click the Create Log Stream button. The Create Log Stream popup window will open.
After entering the Log Stream Name, click the Create button. The log stream creation is complete.
Enter the name within 3 to 512 characters using English letters and numbers.
Viewing Log Stream Detailed Information
You can view and manage detailed information about log streams.
Follow these steps to view detailed information about log streams.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to view detailed information. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Stream tab. The Log Stream list is displayed.
On the Log Stream list, click the log stream name for which you want to view detailed information. You will be taken to the Log Stream Detail page.
Division
Detailed Description
Excel Download
Download log stream history as an Excel file
Timestamp List
Message list by timestamp
Can filter using Period Selection, User Time Zone, and message input
Table. Log Group Detail - Log Stream Detail Items
Reference
When you click the View All Log Streams button at the top of the log stream list, you will be taken to the All Log Streams Detail page.
Deleting a Log Stream
You can delete unused log streams.
Follow these steps to delete a log stream.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to delete a log stream. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Stream tab. The log stream list is displayed.
On the log stream list, select the checkbox of the log stream to delete and click the More > Delete button. A popup window will open announcing the log stream deletion.
You can delete multiple log streams at the same time.
You can also delete individually by clicking the Delete button at the right end of each log stream.
Click the Confirm button. The log stream is deleted.
Managing Log Patterns
Creating a Log Pattern
Follow these steps to create a log pattern.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to create a log pattern. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Pattern tab. The log pattern list is displayed.
Click the Create Log Pattern button. You will be taken to the Create Log Pattern page.
Set the basic information and metric information required to create a log pattern.
Enter and select basic information for the log pattern in the Basic Information area.
Division
Required
Detailed Description
Log Pattern Name
Required
Enter the name of the log pattern
Enter within 3 to 512 characters using English letters, numbers, and special characters (-_./#)
Pattern Format
Required
Select or directly enter the pattern format
Pattern Format: Select one of string pattern, space-separated pattern, JSON format pattern provided as pattern format
Direct Input: After selecting one of string pattern, space-separated pattern, or JSON format pattern, enter the pattern within 1 to 1,024 characters
Pattern Test
Optional
Directly enter or select log data to verify using the pattern
Direct Input: Directly enter the log data to use in Log Event Message
Separate log events using line breaks
Can enter up to 50 log events
Can enter within 1 to 1,024 bytes for one log event
Select log data: Select the log data to use
When selecting log data, the corresponding log data is displayed in Log Event Message
Test Pattern: Perform test on log event message
When test succeeds, test result is displayed at the bottom
Table. Create Log Pattern - Basic Information Items
Enter and select metric information in the Metric Information area.
Division
Required
Detailed Description
Namespace
Required
Select the namespace for the log pattern
If there is no namespace for the log pattern, select Create New to create a new one
Namespace Name: When creating a new namespace, enter within 3 to 128 characters using English letters, numbers, spaces, and special characters (-_\/#)
Metric Name
Required
Enter the name of the metric
Enter within 3 to 128 characters using English letters, numbers, and special characters (_)
Metric Value
Required
Enter the metric value
Enter a number of 0 or higher or $identifier
Default Value
Optional
Enter if using the default value
Enter as a float value of 0 or higher
Cannot use Dimension when using default value
Unit
Required
Select the metric unit
Dimension
Optional
Set the dimension of the metric created by the log pattern
Can be used only when Log Pattern Format is space-separated pattern or JSON format pattern
Cannot be used when entering Default Value
When you check Use, you can add custom dimension fields
After clicking the Add button, enter Field Name and Field Value to add
Can add up to 3 dimensions
Table. Create Log Pattern - Metric Information Items
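To make the space-separated pattern and the $identifier metric value concrete, here is a conceptual sketch. This is not the ServiceWatch matching engine: the function, field names, and log line are illustrative assumptions; only the "$" reference syntax follows the tables above.

```python
def extract_metric(pattern, metric_value, line):
    """Map a space-separated pattern onto a log line and resolve the
    metric value, which is either a literal number or "$field"
    referring to one of the pattern's fields."""
    fields = dict(zip(pattern.split(), line.split()))
    if metric_value.startswith("$"):
        return float(fields[metric_value[1:]])
    return float(metric_value)

# A hypothetical pattern "ts level latency" applied to one log event:
value = extract_metric("ts level latency", "$latency",
                       "2024-01-01T00:00:00 INFO 120.5")
```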
After checking the summary information, click the Create button. A popup window will open announcing the log pattern creation.
Click the Confirm button. The log pattern creation is complete.
Viewing Log Pattern Detailed Information
Follow these steps to view detailed information about a log pattern.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to view detailed information. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Pattern tab. The log pattern list is displayed.
On the Log Pattern list, click the log pattern name for which you want to view detailed information. You will be taken to the Log Pattern Detail page.
Division
Detailed Description
Create Alert Policy
Can create alert policy
Click the button to go to the alert policy creation page
Delete Log Pattern
Delete the log pattern currently being viewed
Log Pattern Name
Log pattern name
Creator
User who created the log pattern
Created Date
Date and time when the log pattern was created
Modifier
User who modified the log pattern information
Modified Date
Date and time when the log pattern information was modified
Pattern
Pattern format
Metric Information
Metric information of the pattern group
Namespace name, metric name, metric value, default value, unit, alert policy name, dimension name
Click the Edit button to modify Metric Value, Default Value, Unit information
Table. Log Pattern Detail Items
Deleting a Log Pattern
Follow these steps to delete a log pattern.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, click the log group name for which you want to delete a log pattern. You will be taken to the Log Group Detail page.
On the Log Group Detail page, click the Log Pattern tab. The log pattern list is displayed.
On the log pattern list, select the checkbox of the log pattern to delete and click the Delete button. A popup window will open announcing the log pattern deletion.
You can delete multiple log patterns at the same time.
You can also delete individually by clicking the More > Delete button at the right end of each log pattern or by clicking the Delete Log Pattern button on the log pattern detail page.
Click the Confirm button. The log pattern is deleted.
Exporting Log Group
You can save log group data to Object Storage.
Follow these steps to save log group data to Object Storage.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Export Log Group menu. You will be taken to the Export Log Group List page.
On the Export Log Group List page, click the Export Log Group button. You will be taken to the Export Log Group page.
Division
Required
Detailed Description
Select Log Group
Required
Select the log group to save to Object Storage
If there are log streams included in the log group, click the Select button to select the log streams to save
If you do not select log streams, all data of the log group is saved
Select Period
Optional
Select the user time zone and data storage interval
For the data storage interval, you can select the desired interval in the Time Setting area or click the Direct Input button to set the start and end date and time
Storage Bucket
Required
Select the bucket to save data
Tag
Optional
Add tag information
You can add up to 50 tags per resource
When adding a tag, after entering Key and Value values, you can select from the list of existing tag Keys and Values
Table. Export Log Group Items
After entering the required information, click the Done button. A popup window will open announcing the data save.
Click the Confirm button in the popup window. The data export will start.
You can check the progress on the Export Log Group List page.
Reference
If there is no Object Storage to save log group data, create Object Storage and proceed.
Log group data export may take tens of minutes or more, depending on the scale.
If there is a log group export task in progress within the Account, you must complete the task in progress first before proceeding with the export.
On the Export Log Group List page, you can cancel the task by clicking the More > Cancel Log Group Export button of the log group for which the export task is in progress.
Deleting a Log Group
You can delete unused log groups.
Warning
Files saved to Object Storage through Export Log Group are not deleted. However, the log group export history is deleted together.
If you delete a log group for which Export Log Group is in progress, the export task will not proceed normally.
Follow these steps to delete a log group.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Log > Log Group menu. You will be taken to the Log Group List page.
On the Log Group List page, select the checkbox of the log group to delete and click the Delete button. A popup window will open announcing the log group deletion.
You can delete multiple log groups at the same time.
You can also delete individually by clicking the Delete button at the right end of each log group or by clicking the Delete Log Group button on the corresponding Log Group Detail page.
Click the Confirm button. The log group is deleted.
10.11.2.5 - Events
In ServiceWatch, you can view and handle events generated from Samsung Cloud Platform services.
Creating an Event Rule
You can create an event rule to receive notifications when an event occurs.
Follow these steps to create an event rule.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Event Rule menu. You will be taken to the Event Rule List page.
On the Event Rule List page, click the Create Event Rule button. You will be taken to the Create Event Rule page.
In the Enter Basic Information area, enter the name and description of the event rule.
In the Set Event Pattern area, set the required information. The set event pattern is entered in JSON code format in the Event Pattern Setting Status.
Division
Required
Detailed Description
Event Source
Required
Select the service name of the event you want to receive in ServiceWatch
Select the event type of the event source to use in the event rule
Event types are classified in the same way as resource types
Applied Event
Required
Select the events to apply the event pattern among events occurring in the event type
All Events: Apply all events occurring in the event type
Individual Events: Select events to set as event patterns among events occurring in the event type
Applied Resource
Required
Select resources to apply the event pattern
All Resources: Set event patterns for all events occurring from all resources
Individual Resources: Set event patterns for corresponding events occurring from specific resources
When selecting individual resources, the event resource selection area is displayed
Click the Add Resource button to select resources
You can delete added resources by selecting the resource from the resource list and clicking the Delete button
Event Pattern Setting Status
-
Display converted to JSON code format according to the event pattern setting values
Reset when event pattern setting values change
You can copy the source code by clicking the Copy Code item
Table. Event Pattern Setting Items
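For illustration only, an event pattern of the kind shown in Event Pattern Setting Status might look like the following. The key names ("source", "eventType", "events", "resources") and values are assumptions; always copy the authoritative JSON from the console via the Copy Code item.

```python
import json

# Hypothetical event pattern built as a Python dict and rendered as the
# JSON the console displays. Key names and values are assumed for
# illustration; the console's Copy Code item is authoritative.
pattern = {
    "source": "VirtualServer",       # Event Source (service name)
    "eventType": "Instance",         # event type of the source
    "events": ["InstanceStopped"],   # Applied Event (Individual Events)
    "resources": ["vm-001"],         # Applied Resource (Individual Resources)
}
print(json.dumps(pattern, indent=2))
```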
In the Notification Setting area, select users to receive notifications when an event occurs.
Only users with login history (users who have registered email and mobile phone number) can be added as notification recipients.
You can add up to 100 notification recipients.
Reference
If there is no user to add, you can create a user on the Create User page of the IAM service. For details on user creation, refer to Creating a User.
You can change the notification method (E-mail or SMS) by selecting the notification target as Service > Event Rule on the Notification Setting page. For details on notification settings, refer to Checking Notification Settings.
In the Enter Additional Information area, add tag information.
After checking the summary information, click the Create button. A popup window will open announcing the event rule creation.
Click the Confirm button. The event rule creation is complete.
Viewing Event Rule Detailed Information
You can view and manage detailed information about event rules.
Follow these steps to view detailed information about event rules.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Event Rule menu. You will be taken to the Event Rule List page.
On the Event Rule List page, click the event name for which you want to view detailed information. You will be taken to the Event Rule Detail page.
Division
Detailed Description
Event Rule Status
Status of the event rule
Active: Event rule is activated
Inactive: Event rule is deactivated
Can be changed by clicking the Activate or Deactivate button
Delete Event Rule
Delete the corresponding event rule
Information Division Tab
Information division tabs for the event rule
Detailed information, Notifications, Tags, Operation History
Click each tab to view the corresponding information
Table. Event Rule Detail Items
Detailed Information
You can view basic information and event rule information for the event rule selected on the Event Rule List page.
Division
Detailed Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform
In ServiceWatch, it means the SRN of the resource type
Resource Name
Resource name
In ServiceWatch, it means the event rule name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Created Date
Date and time when the service was created
Modifier
User who modified the service information
Modified Date
Date and time when the service information was modified
Event Rule Name
Event rule name
Event Pattern Setting Status
Display event pattern setting values in JSON code format
Notifications
You can view and manage the notification recipients of the event rule selected on the Event Rule List page.
Only users with login history (users who have registered email and mobile phone number) can be added as notification recipients.
You can add up to 100 notification recipients.
If there is no user to add, you can create a user on the Create User page of the IAM service. For details on user creation, refer to Creating a User.
You can change the notification method (E-mail or SMS) by selecting the notification target as Service > Event Rule on the Notification Setting page. For details on notification settings, refer to Checking Notification Settings.
Tags
You can view tag information for the event rule selected on the Event Rule List page, and add, modify, or delete them.
Division
Detailed Description
Tag List
Key, Value information of tags
Edit Tag
Modify or delete existing tag information or add new tags
You can add up to 50 tags per resource
When adding a tag, after entering Key and Value values, you can select from the list of existing tag Keys and Values
Table. Event Rule Detail - Tags Tab Items
Operation History
You can view the operation history of the event rule selected on the Event Rule List page.
Division
Detailed Description
Operation History List
Resource change history
Can view operation details, operation date, resource type, resource name, operation result, operator information
Click the Settings button to change information items
Can filter using Period Selection, User Time Zone, operator information input, Detailed Search
When you click the operation details in the Operation History List, you will be taken to the Activity History Detail page for that operation
Table. Event Rule Detail - Operation History Tab Items
Modifying Event Pattern
You can modify the event pattern.
Follow these steps to modify the event pattern.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Event Rule menu. You will be taken to the Event Rule List page.
On the Event Rule List page, click the event name for which you want to modify the event pattern. You will be taken to the Event Rule Detail page.
Click the Edit button in the Event Pattern Setting Status on the Event Rule Detail page. The Edit Event Pattern popup window will open.
After modifying the event pattern settings, click the Confirm button. A popup window will open announcing the event pattern modification.
Click the Confirm button. The event pattern modification is complete.
Deleting an Event Rule
You can delete unused event rules.
Follow these steps to delete an event rule.
Click the All Services > Management > ServiceWatch menu. You will be taken to the Service Home page.
On the Service Home page, click the Event Rule menu. You will be taken to the Event Rule List page.
On the Event Rule List page, select the checkbox of the event rule to delete and click the Delete button. A popup window will open announcing the event rule deletion.
You can delete multiple event rules at the same time.
You can also delete individually by clicking the More > Delete button at the right end of each event rule or by clicking the Delete Event Rule button on the corresponding Event Rule Detail page.
Click the Confirm button. The event rule is deleted.
10.11.2.6 - Using ServiceWatch Agent
Users can install ServiceWatch Agent on Virtual Server/GPU Server/Bare Metal Server, etc. to collect custom metrics and logs.
Note
Custom metric/log collection through ServiceWatch Agent is currently only available in Samsung Cloud Platform For Enterprise. It is planned to be provided in other offerings in the future.
Warning
Metric collection through ServiceWatch Agent is classified as custom metrics and, unlike the metrics collected by default from each service, incurs charges. Be careful not to configure unnecessary metric collection; set it up so that only the metrics you actually need are collected.
ServiceWatch Agent
The agents that need to be installed on a server for ServiceWatch custom metric and log collection can be divided into two types: Prometheus Exporter and Open Telemetry Collector.
Category
Description
Prometheus Exporter
Provides metrics of a specific application or service in a format that Prometheus can scrape
For OS metric collection of the server, you can use Node Exporter for Linux servers and Windows Exporter for Windows servers depending on the OS type.
Open Telemetry Collector
Acts as a centralized collector that collects telemetry data such as metrics and logs of distributed systems, processes them (filtering, sampling, etc.), and sends them to multiple backends (e.g., Prometheus, Jaeger, Elasticsearch, etc.)
Enables ServiceWatch to collect metric and log data by sending data to ServiceWatch Gateway.
This guide explains how to use the Open Telemetry Collector provided by ServiceWatch.
Table. Description of Prometheus Exporter and Open Telemetry Collector
Note
To link server log files to ServiceWatch through ServiceWatch Agent, you must first create a log group and log streams within the log group.
For more information about creating log groups and log streams, see Logs.
Pre-environment Configuration for ServiceWatch Agent
You must add Security Group and Firewall rules for communication between ServiceWatch Agent and ServiceWatch.
Note
Bare Metal Server does not support Security Group.
Adding Security Group Rules
To send data collected from ServiceWatch Agent installed on Virtual Server/GPU Server to ServiceWatch, you must add rules to the Security Group as follows.
Configuring Open Telemetry Collector for ServiceWatch
To use Open Telemetry Collector for ServiceWatch metric and log collection on a server, install it in the following order.
Download the ServiceWatch Agent file from the provided download URL.
Guide
The file download link for ServiceWatch Agent installation will be provided through Samsung Cloud Platform Console announcements and Support Center > Contact Us.
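After downloading the Agent file, grant execution permissions to the binaries. A minimal sketch for Linux; the `./agent` layout and file names are assumptions based on the file table below (the placeholder files only make the sketch runnable standalone — in practice the files come from the downloaded archive):

```shell
# Assumed layout: Agent files extracted under ./agent (names from the table below).
mkdir -p ./agent && touch ./agent/servicewatch-agent-manager-linux-amd64 ./agent/otelcontribcol_linux_amd64    # placeholder files so this sketch runs standalone
# Grant execution permissions to the ServiceWatch Agent Manager and Collector binaries
chmod +x ./agent/servicewatch-agent-manager-linux-amd64 ./agent/otelcontribcol_linux_amd64
```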
Code Block. Granting Execution Permissions to ServiceWatch Agent File
Category
Description
examples
Example configuration file folder. Inside each folder, there are agent.json, log.json, metric.json example files
os-metrics-min-examples: Minimum metric setting example using Node Exporter
os-metrics-all-examples: Metric setting example using Node Exporter memory/filesystem Collector
gpu-metrics-min-examples: Minimum metric setting example using DCGM Exporter
gpu-metrics-all-examples: Key metric setting example using DCGM Exporter
otelcontribcol_linux_amd64
Open Telemetry Collector for Linux for ServiceWatch
otelcontribcol_windows_amd64.exe
Open Telemetry Collector for Windows for ServiceWatch
servicewatch-agent-manager-linux-amd64
ServiceWatch Agent Manager for Linux
servicewatch-agent-manager-windows-amd64.exe
ServiceWatch Agent Manager for Windows
Table. ServiceWatch Agent File Configuration
Note
ServiceWatch Agent Manager is a tool that helps configure Open Telemetry Collector to efficiently send custom metrics and logs by integrating with ServiceWatch. Through this, you can send various custom metrics and log data to ServiceWatch.
Define the Agent configuration file of ServiceWatch Agent Manager for the Open Telemetry Collector for ServiceWatch.
Category
Description
namespace
Custom namespace for custom metrics
Namespace is a logical division used to classify and group metrics, and is specified as a custom metric to classify custom metrics
Namespace must be 3~128 characters consisting of English letters, numbers, spaces, and special characters (_-/), and must start with an English letter.
accessKey
IAM authentication key Access Key
accessSecret
IAM authentication key Secret Key
resourceId
Resource ID of the server in Samsung Cloud Platform
openApiEndpoint
ServiceWatch OpenAPI Endpoint by region/environment
Region and offering information can be checked from the Samsung Cloud Platform Console access URL
telemetryPort
Telemetry Port of ServiceWatch Agent
Usually uses 8888 Port. If 8888 Port is in use, it needs to be changed
Table. agent.json Configuration File Items
{
    "namespace": "swagent-windows",    # Custom namespace for custom metrics
    "accessKey": "testKey",    # IAM authentication key Access Key
    "accessSecret": "testSecret",    # IAM authentication key Secret Key
    "resourceId": "resourceID",    # Resource ID of the server in Samsung Cloud Platform
    "openApiEndpoint": "https://servicewatch.kr-west1.e.samsungsdscloud.com",    # ServiceWatch OpenAPI Endpoint by region/environment
    "telemetryPort": 8889    # Telemetry Port of ServiceWatch Agent (usually port 8888; change it if 8888 is in use)
}
Code Block. agent.json Configuration Example
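The namespace rule from the table above can be checked before writing agent.json; a quick sketch using a shell regex (the namespace value is hypothetical):

```shell
# Validate a custom namespace against the documented rule:
# 3-128 chars of English letters, numbers, spaces and _ - /, starting with a letter.
ns="swagent-linux"    # hypothetical example value
if printf '%s' "$ns" | grep -Eq '^[A-Za-z][A-Za-z0-9 _/-]{2,127}$'; then
  echo "namespace OK"
else
  echo "namespace invalid"
fi
```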
Define the Metric configuration file for metric collection for ServiceWatch.
If you want to collect metrics through the Agent, configure metric.json.
Category
Description
prometheus > scrape_configs > targets
Endpoint of the metric collection target
In the case of a server, since Prometheus Exporter is installed on the same server, set it to that endpoint
Example: localhost:9100
prometheus > scrape_configs > jobName
Job Name setting. Usually set to the Prometheus Exporter type used when collecting metrics
Example: node-exporter
metricMetas > metricName
Set the name of the metric you want to collect. The metric name must be 3~128 characters consisting of English letters, numbers, and special characters (_), and must start with an English letter.
Example: node_cpu_seconds_total
metricMetas > dimensions
Set which of the Collector's labels (provided to identify the source of the Exporter's metric data) to visualize and display in the Console. Collected metrics are displayed combined according to the dimensions setting.
Example: In the case of metrics like the Memory Collector of Node Exporter that do not provide special labels, set it to resource_id
Example: Node Exporter Filesystem Collector metrics can set dimensions to mountpoint, which represents the path where the filesystem is mounted on the system
metricMetas > unit
Can set the unit of the metric
Example: Bytes, Count, etc.
metricMetas > aggregationMethod
Method of aggregating based on the specified dimensions
Example: Select from SUM, MAX, MIN, COUNT
metricMetas > descriptionKo
Korean description of the metric being collected
metricMetas > descriptionEn
English description of the metric being collected
Table. metric.json Configuration File Items
{
    "prometheus": {
        "scrape_configs": {
            "targets": [
                "localhost:9100"    # Endpoint of Prometheus Exporter installed on the server
            ],
            "jobName": "node-exporter"    # Usually set to the name of the installed Exporter
        }
    },
    "metricMetas": [
        {
            "metricName": "node_memory_MemTotal_bytes",    # Metric name to be linked to ServiceWatch among metrics collected from Prometheus Exporter
            "dimensions": [
                [
                    "resource_id"    # Label to visualize and display in the Console; for metrics like Memory that provide no special labels, set it to resource_id
                ]
            ],
            "unit": "Bytes",    # Unit of collected metric data
            "aggregationMethod": "SUM",    # Aggregation method
            "descriptionKo": "Total physical memory size of the server",    # Korean description of the metric
            "descriptionEn": "node memory total bytes"    # English description of the metric
        },
        {
            "metricName": "node_filesystem_size_bytes",
            "dimensions": [
                [
                    "mountpoint"    # For filesystem-related metrics, set dimensions to mountpoint, the path where the filesystem is mounted on the system
                ]
            ],
            "unit": "Bytes",
            "aggregationMethod": "SUM",
            "descriptionKo": "node filesystem size bytes",
            "descriptionEn": "node filesystem size bytes"
        },
        {
            "metricName": "node_memory_MemAvailable_bytes",
            "dimensions": [
                ["resource_id"]
            ],
            "unit": "Bytes",
            "aggregationMethod": "SUM",
            "descriptionKo": "node memory available bytes",
            "descriptionEn": "node memory available bytes"
        },
        {
            "metricName": "node_filesystem_avail_bytes",
            "dimensions": [
                ["mountpoint"]
            ],
            "unit": "Bytes",
            "aggregationMethod": "SUM",
            "descriptionKo": "node filesystem available bytes",
            "descriptionEn": "node filesystem available bytes"
        }
    ]
}
Code Block. metric.json Configuration Example
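The metricName rule from the table can likewise be checked with a short shell sketch; the names below are the Node Exporter metrics used in the example configuration:

```shell
# Check that metricName values follow the documented rule:
# 3-128 chars of English letters, numbers and underscore, starting with a letter.
for m in node_memory_MemTotal_bytes node_filesystem_size_bytes; do
  if printf '%s' "$m" | grep -Eq '^[A-Za-z][A-Za-z0-9_]{2,127}$'; then
    echo "$m: OK"
  else
    echo "$m: invalid"
  fi
done
```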
To display the resource name, set resource_name in commonLabels as follows and also add resource_name to metricMetas.dimensions; you can then check the resource name when viewing metrics in ServiceWatch.
...
"commonLabels": {
    "resource_name": "ResourceName"    # Resource name that can be checked in the User Console
},
"metricMetas": [
    {
        "metricName": "metric_name",
        "dimensions": [
            [
                "resource_id",
                "resource_name"    # Add the resource_name set in commonLabels to each metric's dimensions
            ]
        ],
        "unit": "Bytes",
        "aggregationMethod": "SUM",
        "descriptionKo": "metric_name description",
        "descriptionEn": "metric_name description"
    },
    ...
]
...
Code Block. metric.json - Resource Name Setting
Define the Log configuration file for log collection for ServiceWatch.
If you want to collect logs, you must configure log.json.
Category
Description
fileLog > include
Location of log files to collect
fileLog > operators
Defines how to parse the log messages to collect
fileLog > operators > regex
Expresses the log message format as a regular expression
fileLog > operators > timestamp
Timestamp format of log messages to be sent to ServiceWatch
logMetas > log_group_value
Log group name created to send logs to ServiceWatch
logMetas > log_stream_value
Log stream name in ServiceWatch log group
Table. log.json Configuration File Items
{
    "fileLog": {
        "include": [
            "/var/log/syslog",    # Log files to collect into ServiceWatch
            "/var/log/auth.log"
        ],
        "operators": {
            "regex": "^(?P<timestamp>\\S+)\\s+(?P<hostname>\\S+)\\s+(?P<process>[^:]+):\\s+(?P<message>.*)$",    # Log file format expressed as a regular expression
            "timestamp": {    # Timestamp format of the log message
                "layout_type": "gotime",
                "layout": "2006-01-02T15:04:05.000000Z07:00"
            }
        }
    },
    "logMetas": {
        "log_group_value": "custom-log-group",    # ServiceWatch log group name created in advance
        "log_stream_value": "custom-log-stream"    # Log stream name in the ServiceWatch log group, created in advance
    }
}
Code Block. log.json Configuration Example
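Before deploying, you can sanity-check the regex in log.json against a sample line. The log line below is hypothetical, and GNU grep with PCRE support (-P) is assumed for the (?P&lt;name&gt;) groups:

```shell
# Try the log.json regex against a hypothetical syslog-style line.
line='2024-05-01T12:00:00.000000+09:00 myhost sshd[1234]: Accepted password for user'
regex='^(?P<timestamp>\S+)\s+(?P<hostname>\S+)\s+(?P<process>[^:]+):\s+(?P<message>.*)$'
printf '%s\n' "$line" | grep -Pq "$regex" && echo "line matches log.json regex"
```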
-dir
Location of ServiceWatch Agent configuration files such as agent.json, metric.json, log.json
-collector
Location of the Open Telemetry Collector executable
Table. ServiceWatch Agent Manager Execution Options
Running ServiceWatch Agent (for Linux)
Note
Assuming that agent.json, metric.json, log.json files are in current_location/agent/examples/os-metrics-min-examples and otelcontribcol_linux_amd64 file is in current_location/agent, execute as follows.
Run ServiceWatch Agent.
Check the location of agent.json, metric.json, log.json files and the location of servicewatch-agent-manager-linux-amd64, otelcontribcol_linux_amd64 files and start ServiceWatch Agent.
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
Code Block. Starting ServiceWatch Agent - Collecting Both Metrics and Logs
If you want to collect only metrics, rename the log.json file to a different file name or move it so it’s not in the same directory as agent.json, metric.json, and execute as follows.
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
Code Block. Starting ServiceWatch Agent - Collecting Only Metrics
If you want to collect only logs, rename the metric.json file to a different file name or move it so it’s not in the same directory as agent.json, log.json, and execute as follows.
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
Code Block. Starting ServiceWatch Agent - Collecting Only Logs
List of events passed from Organization service to ServiceWatch.
Table. Support Center - Service Request Event Type Events
Inquire
Event Source
Event Type
Event
Support Center
Contact us
Inquiry Create End
Inquiry Operator Create End
Table. Support Center - Inquire Event Type Event
10.11.6 - Release Note
ServiceWatch
2026.03.19
FEATURE ServiceWatch New Feature Release and Existing Feature Improvements
ServiceWatch service dashboard launch
Provides a service dashboard composed of key metrics for each service.
When the resources of the service are created and metric data is collected by ServiceWatch, the service dashboard is automatically generated and can be viewed.
ServiceWatch metric search feature improvement
Improved metric search so that results are displayed when the search term is contained in the metric name.
When searching metrics, you can specify a period for a specific metric to see how its data changes over multiple periods.
ServiceWatch log pattern feature release
You can create a log pattern and filter the log data collected in ServiceWatch that matches the pattern.
Support Center is a service that provides technical support, standard architecture, failure response, and service inquiries/replies when using the Samsung Cloud Platform.
Provided Features
Support Center provides the following functions.
Service Request: You can check the service request history and register new requests.
Service requests are typically required in the following situations.
Creation of some services in the Security service category
Requests for additional features of some services in the Networking service category
Inquiry: You can check the inquiry history and register a new inquiry.
Typical situations where inquiry is needed are as follows.
Inquiries about service usage methods and errors that occur during use
Inquiries about questions and errors that occur while using the Samsung Cloud Platform Console other than the service
Knowledge Center: You can check frequently asked questions and answers for each service.
Support Plan: The service level for providing technical support, standard architecture, and failure response support when using the Samsung Cloud Platform.
Support Plan is needed in the following situations.
Workloads that are experimental or under testing
Workloads running in production
Support Plan can be used by selecting the Standard or Proserv Plan grade according to the user's situation.
10.12.2 - How-to Guides
The Support Center allows you to request services, inquire, and check frequently asked questions and answers for each service in the Knowledge Center.
Requesting a Service
You can request a service through the Support Center on the Samsung Cloud Platform.
To request a service, follow these steps:
Click the All Services > Management > Support Center menu. You will be taken to the Service Home page.
On the Service Home page, click the Service Request menu. You will be taken to the Service Request List page.
On the Service Request List page, click on the Service Request button.
Select and enter the required information for the service request.
Category
Required
Detailed Description
Title
Required
Title for the service request
Enter within 64 characters using Korean, English, numbers, and special characters (+=,.@-_)
Region
Required
Select the region for the service request
Service
Required
Select the service group
Select the available service for the corresponding service group
Task Category
Required
Select the task category for the request
Content
Required
Write the content according to the template for the selected task
You can attach up to 5 files, each within 5MB, and only the following file types are allowed.
Review the input information and click the Request button.
Once created, you can check it on the Service Request List page.
Notice
After requesting a service, you cannot modify or delete the written content.
Checking Service Request Details
The service request allows you to check the entire list and detailed information of service requests and track their progress.
To check the service request details, follow these steps:
Click the All Services > Management > Support Center menu. You will be taken to the Service Home page.
On the Service Home page, click the Service Request menu. You will be taken to the Service Request List page.
On the Service Request List page, click the request whose details you want to check. You will be taken to the Service Request Details page.
Category
Detailed Description
Status
Service request status
Requesting: Initial application status
Assignment Complete: Status assigned to the worker
In Progress: Status being processed by the worker
Completed: Status completed by the worker
Inquiry Code
Unique identification code for the service request
Task Category
Task category selected when requesting the service
Inquiry Target
Service group and service name selected when requesting the service
Region
Region selected when requesting the service
Creator
User who requested the service
Request Time
Time when the service request was created
Title
Service request title
Content
Content written when requesting the service
Attachment
File attached when requesting the service
Table. Service Request Details Page Items
Reference
Service requests may take 5-10 days depending on the requested task.
10.12.2.1 - Inquiry
Users can ask questions and check answers about the Samsung Cloud Platform through the inquiry.
Create Inquiry
You can create an inquiry through the Support Center on the Samsung Cloud Platform.
Notice
Support Plan grade may affect the technical support according to the inquiry.
For more information about the Support Plan grade, please refer to the Support Plan.
To create an inquiry, follow the procedure below.
Click the All Services > Management > Support Center menu. You will be taken to the Service Home page.
On the Service Home page, click the Inquiry menu. You will be taken to the Inquiry List page.
On the Inquiry List page, click the Inquiry button.
Select and enter the information required for the inquiry.
Classification
Necessity
Detailed Description
Title
Required
Title of the inquiry
Enter within 64 characters
Inquiry Type
Required
Select the inquiry type
Functional Inquiry: Inquiry about the use and utilization of functions, requiring additional selection or input of Region, Inquiry Target (Service Group/Service), and Resource Name
Resource Error Inquiry: Inquiry about resource errors, requiring additional selection or input of Region, Inquiry Target (Service Group/Service), and Resource Name
Depending on the inquiry type, you may need to additionally input or select Region, Inquiry Target, and Resource Name.
Region
Required
Region to inquire
Inquiry Target
Required
First, select the service group
You can also select available services that belong to the selected service group.
Resource Name
Required
Name of the resource to inquire about
Severity
Required
Level of impact on the service of the inquiry content
No impact on current service
Development/test environment request
Operation environment request
Content
Required
Write the content to inquire
You can attach up to 5 files, each within 5MB, and only the following file types are allowed.
In the case of the failure inquiry type, it is an inquiry about Samsung Cloud Platform failures where some features of the operating environment, or features of non-operating environments such as development/test, are unavailable.
Review the input information and click the Request button.
Once created, you can check it on the Inquiry List page.
Check the details of the inquiry
Inquiries can be viewed with a list of all inquiries and detailed information, and the progress can be checked.
To check the inquiry details, follow these steps:
Click the All Services > Management > Support Center menu. You will be taken to the Service Home page.
On the Service Home page, click the Inquiry menu. You will be taken to the Inquiry List page.
On the Inquiry List page, click the inquiry you want to view. You will be taken to the Inquiry Details page.
Classification
Detailed Description
Status
Inquiry Status
Inquiry Received: Initial inquiry reception status
Waiting for Response: Person in charge is checking the inquiry contents
Response Completed: Response has been completed
Inquiry Code
Unique identification code of Inquiry
Inquiry Target
Service group and service name selected when inquiring
Resource Name
Resource name entered when inquiring
Region
Region selected when inquiring
Severity
Severity selected when inquiring
Writer
User who created the inquiry
Request Time
Time when the inquiry was created
Title
Inquiry Title
Content
Content written when inquiring
Attachment
File attached when inquiring
Table. Inquiry Details Page Items
10.12.2.2 - Support Plan
Support Plan is a service that provides technical support, standard architecture provision, and incident response support needed when using Samsung Cloud Platform, which can be received in a step-by-step manner.
Since service operating hours, technical support, and incident response service types differ by service tier, users can choose an efficient service tier based on the applicable workload such as testing, regular tasks, or critical tasks.
Notice
Samsung Cloud Platform, through the Support Plan, does its best to respond and take action within the appropriate time after an initial request occurs.
Learn about Support Plan
Samsung Cloud Platform offers Support Plans in the Standard and Proserv Plan tiers.
To check the Support Plan you are currently using, follow the steps below.
Click the All Services > Management > Support Center menu. You will be taken to the Service Home page.
On the Service Home page, click the Support Plan menu. You will be taken to the Support Plan page.
Category
Detailed description
Support Plan Application Status
Current application status of Support Plan
Active: Normal applied status. No changes scheduled
Plan to be changed: Support Plan tier to be applied after plan change
Table. Support Plan page items
Note
On the Service Home page, you can view the list of Support Plans currently in use and Support Plans by Account.
The list of Support Plans per Account can only be viewed in the organization management Account.
Standard Grade
Standard grade is recommended for cases where you need to handle tasks that are experimental or require testing.
Category
Detailed description
Service Target
Recommended for experiments or testing
Response Time
General Inquiry (Priority 4): within 1 business day
Operating Hours
9H * 5D Biz Hour
Request / Response Method
On-line (Console)
Support Services
Simple inquiry response
Update, patch and EOS (EOL) management and announcement (Console)
Technical Knowledge material provision (On-line)
Incident response
Reception (action and notification)
Fee
Free
Table. Standard grade service details
Proserv Plan Grade
Proserv Plan grade is the recommended minimum grade when there is a production workload.
Category
Detailed description
Service Target
Recommended for production workloads
Response Time
System Outage (Priority 1): Within 1 hour
System Damage (Priority 2): Within 4 hours
System Error (Priority 3): Within 1 business day
General Inquiry (Priority 4): Within 1 business day
Operating Hours
24H * 7D
Request / Response Method
Dedicated TAM Assignment (Phone, Email, Online (Console))
Support Services
Technology, standard architecture response
Update, patch and EOS (EOL) management and announcement (Console)
Providing technical knowledge materials (On-line)
Incident Response
Samsung Cloud Platform incident handling, analysis support
Connection to specialized operations organization and response support
Provision of Samsung Cloud Platform incident cause and analysis report (common)
Fee
Monthly usage fee per customer * tiered rate (differential) applied
0 ~ 200 million KRW: 10%
200 million ~ 500 million KRW: 7%
500 million ~ 1 billion KRW: 5%
Over 1 billion KRW: 3%
Table. Proserv Plan grade service details
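As an illustration of the tiered (differential) rate above, a sketch assuming each bracket's rate applies only to the usage falling within that bracket; the 300 million KRW monthly usage figure is hypothetical:

```shell
# Hypothetical Proserv Plan fee for 300M KRW of monthly usage,
# assuming differential application: 200M billed at 10%, the next 100M at 7%.
usage=300                                                        # monthly usage, in millions of KRW
t1=$(( usage < 200 ? usage : 200 ))                              # portion in the 0-200M bracket (10%)
t2=$(( usage < 200 ? 0 : (usage < 500 ? usage - 200 : 300) ))    # portion in the 200-500M bracket (7%)
fee=$(( t1 * 10 / 100 + t2 * 7 / 100 ))
echo "Support Plan fee: ${fee}M KRW"
```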
Support Plan Change
You can change your Support Plan as needed.
Notice
Applications, terminations, and changes to a Support Plan are applied based on the request, taking effect from the 1st of each month.
To change the Support Plan you are using, follow the steps below.
Click the All Services > Management > Support Center menu. It moves to the Service Home page.
On the Service Home page, click the Support Plan menu. It moves to the Support Plan page.
In the My Support Plan widget, you can also click the Plan Upgrade or Plan Change button to go to the Support Plan Change page.
On the Support Plan page, click the Plan Change button. It moves to the Support Plan Change page.
On the Support Plan Change page, select the Support Plan to change to, then click the Plan Change button.
When the popup notifying you of the change request opens, click the Confirm button. It moves to the Support Plan page.
Verify that the Support Plan application status has changed to In Progress and that the selected Support Plan is displayed in the plan-to-change item.
If you click the Cancel Plan Change button, the change request is canceled and the current Support Plan is maintained.
Note
The application, termination, and change history of the Support Plan can be viewed on the Logging & Audit > Activity History page.
Change history is provided free of charge for up to 90 days.
10.12.2.3 - Knowledge Center
Users can search and check frequently asked questions and answers for each service in the Knowledge Center.
Using Knowledge Center
You can check the list of frequently asked questions for each service and check the answers.
To use the Knowledge Center, follow these steps:
Click All Services > Management > Support Center menu. It moves to the Service Home page.
Click the Knowledge Center menu on the Service Home page. It moves to the Knowledge Center list page.
Select the category to view and browse the list. To see the details of an item, select it; detailed information is shown on that page. You can expand and collapse the content using the Expand and Collapse buttons on the far right.
10.12.3 - Release Note
Support Center
2025.10.23
FEATURE Support Plan Widget Addition
A Support Plan widget has been added to the Service Home page.
Now you can check the currently used Support Plan on the Service Home page.
2025.07.01
FEATURE Support Plan Addition
Support Plan has been added. Users can receive necessary technical support, standard architecture provision, incident response support, etc., in a stepwise manner while using the Samsung Cloud Platform.
You can select and use the Standard or Proserv Plan tier according to your situation.
2024.02.27
NEW Support Center Release
The Support Center has been launched. It is a system for users of Samsung Cloud Platform to get necessary technical support, standard architecture, incident response, service inquiries and answers, and more.
You can manually request services through the system that cannot be applied for from the console.
You can submit inquiries while using the platform and receive technical support when problems arise.
10.13 - Quota Service
10.13.1 - Overview
Service Overview
Quota Service is a service that manages the maximum number of resources, operations, or items (quotas) set for each service within an account.
Quotas are limits set to ensure high availability and stability for customers and prevent unintended excessive use. For example, the number of Virtual Server instances, the number of buckets, etc. are subject to these quotas. Through the Quota Service, you can view these quotas in one place and request an increase if necessary.
Types of Quotas:
Default Quota: The initial value set by Samsung Cloud Platform for each service.
Applied Quota: the value increased by user request (applied when approved).
Adjustable: Some quotas can be increased upon request, while others are fixed.
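The quota types above can be summarized with a small illustrative data model. The class and field names below are hypothetical, not an actual Samsung Cloud Platform API.

```python
# Hypothetical data model for the quota types listed above; the class
# and field names are illustrative, not an actual Samsung Cloud Platform API.
from dataclasses import dataclass

@dataclass
class Quota:
    service: str        # e.g. "Virtual Server"
    item_name: str      # e.g. "Instances per region"
    default_quota: int  # initial value set by Samsung Cloud Platform
    applied_quota: int  # value after approved increase requests
    adjustable: bool    # whether an increase can be requested

    def effective_limit(self) -> int:
        """The enforced limit: the applied quota, which starts at the
        default value and changes only through approved requests."""
        return self.applied_quota

# Example: a quota raised from its default of 10 to 20 after approval
vm_quota = Quota("Virtual Server", "Instances per region", 10, 20, True)
```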
Features
It provides the following features.
Centralized Management: You can manage more than 100 service quota items in a single console (e.g. Virtual Server, VPC, Firewall, etc.).
Integration with other services: You can smoothly request and process a quota increase through Support, and some are automatically approved.
Scalability and Flexibility: Quotas can be managed at a fine granularity, from account-level to region-level, and adjustable quotas can be scaled to match business growth.
Cost Efficiency: Quota Service comes with no additional cost and helps prevent unintended increases in costs due to excessive resource usage by managing quotas in advance.
Provided Features
It provides the following functions.
Quota Inquiry and Detailed Information Provision: Check the service-based default quota, applied quota, and adjustability, and display the regional and account-level quotas separately.
Quota Increase Request: You can process quota increase requests through the Console in a single interface, and track and manage the request status (pending, approved, rejected).
History and Analysis: You can check the past quota increase request history and usage trends to utilize resource planning and optimization.
Preceding Service
There are no services that need to be pre-configured before creating this service.
10.13.2 - How-to guides
Quota Service detailed information check
To check the details of the Quota Service, follow the next procedure.
Click All Services > Management > Quota Service menu. It moves to the Service Home page of Quota Service.
On the Service Home page, click the Quota Service menu. It moves to the Quota Service list page.
On the Quota Service list page, click the name of the Quota Item whose details you want to check. It moves to the Quota Service details page.
The Quota Service details page consists of the Basic Information and Request History tabs.
Basic Information
On the Quota Service page, you can check the detailed information of the selected resource and modify the information if necessary.
Division
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Title
Resource ID
Unique resource ID in the service
Quota Item Name
Quota Item Name
Applied Target
Target Type Division
Region or Account
Description
Quota Item Description
Adjustable
Quota change request possible
Current assigned value
Currently allocated capacity
Click the Change Request button to request a change.
You can check the change request history of the selected resource on the Quota Service page.
Classification
Detailed Description
Request Time
Time when the change request was made
Requester
Requested User Name
Request Value
Value requested by the user
Applied Value
Automatically approved or values approved and entered by the administrator
Displays '-' before approval is completed
Completion Time
Time when the change request was approved
Status
Current status of the change request
Pending: Before approval handler processing
On Hold: Approval handler on hold
Approved: Automatic approval or handler approval completed
Partially Approved: Applied less than requested and approved
Auto Reduced: Reduced according to auto reduction policy
Request Denied: Approval denied
Clicking the status opens the request details popup window
Table. Quota Service Request History Tab Items
Request Quota Change
To request a quota change, follow these steps.
Click All Services > Management > Quota Service menu. It moves to the Service Home page of Quota Service.
On the Service Home page, click the Quota Service menu. It moves to the Quota Service list page.
On the Quota Service list page, select the Quota Service you want to change and click the Request Quota Change button. The Request Quota Change popup window will open.
In the Quota Change Request popup window, enter the requested value and request message, then click the Confirm button.
Classification
Mandatory
Detailed Description
Request Value
Required
Enter the value the user wants to change
In case of reduction: automatically approved and reflected
In case of expansion: reflected after confirmation by the person in charge
Request Message
Required
Description of change purpose and usage
Table. Quota Change Request Items
Reference
If the requested resource is a resource to which the auto reduction policy is applied, please refer to the following.
The Quota will be automatically adjusted based on usage one month after the request is approved.
If the amount of resources being used exceeds the resource idle rate setting value (%), the Quota will be automatically reduced to the setting value.
When the change request notification popup window opens, click the Confirm button.
Quota change request history check
You can check the history and current status of the quota change request.
To check the quota change request history, follow the next procedure.
Click the All Services > Management > Quota Service menu. It moves to the Service Home page of Quota Service.
On the Service Home page, click the Request History menu. It moves to the Request History List page.
Division
Detailed Description
Service
Service Name
Quota Item Name
Quota Item Name
Applied Target
Target Type Division
Region or Account
Resource ID
Unique resource ID in the service
Request Value
Value requested by the user
Applied Value
Automatically approved or values approved and entered by the administrator
Displays '-' before approval is complete
Automatic Reduction Policy
Whether to reflect the automatic reduction policy
If not reflected, Not Reflected is displayed
If reflected, Buffer Rate (%) is displayed
Request Time
Time when the change request was made
Completion Time
Time when the change request was approved
Status
Current status of the change request
Pending: Before approval handler processing
On Hold: Approval handler on hold
Approved: Automatic approval or handler approval completed
Partially Approved: Applied less than requested and approved
Auto Reduced: Reduced according to auto reduction policy
Request Denied: Approval denied
Clicking the status opens the request details popup window
Table. List of change request items
Note
If the requested resource is a resource to which the auto reduction policy is applied, please refer to the following.
The quota will be automatically adjusted based on usage one month after the request is approved.
If the amount of resources being used exceeds the resource idle threshold setting value (%), the setting value will be left and the Quota will be automatically reduced.
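A minimal sketch of the auto reduction policy described in the note above, assuming the quota is cut back to current usage plus a buffer rate when the idle portion of the quota exceeds the configured threshold. The function and the exact formula are assumptions for illustration, not the platform's actual logic.

```python
# Sketch (assumed formula) of the auto reduction policy: one month after
# approval, if the idle portion of the quota exceeds the threshold, the
# quota is cut back to current usage plus the buffer rate.
import math

def auto_reduced_quota(quota: int, usage: int, buffer_rate_pct: int) -> int:
    """Return the quota after the assumed auto reduction check."""
    idle_rate_pct = (quota - usage) / quota * 100 if quota else 0.0
    if idle_rate_pct > buffer_rate_pct:
        # keep only the buffer headroom above actual usage
        return usage + math.ceil(usage * buffer_rate_pct / 100)
    return quota
```

For example, a quota of 100 with only 50 in use (50% idle, above a 20% buffer rate) would be reduced to 60, while a quota of 100 with 90 in use would be left unchanged.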
10.13.2.1 - Organization Quota Template
By using an organization quota template, when a new Account is created in the organization, the Quota service for the new Account can be automatically requested based on the template created by the managing Account.
Note
Organization quota templates can only be viewed in the organization’s Management Account and can only be managed in the region where the organization was created.
The automatically requested quota service can be reviewed by the responsible party, who may accept, reject, or adjust the quota.
Using Organization Quota Templates
To use the Quota Service organization quota template, follow these steps.
Click the All Services > Management > Quota Service menu. It moves to the Service Home page of Quota Service.
On the Service Home page, click the Organization Quota Template menu. It moves to the Organization Quota Template page.
Category
Detailed description
Whether to Use
Set whether to use the organization quota template
When enabled, a quota change is automatically requested for newly created Accounts in the organization based on the template
When disabled, the service default quota values are applied to newly created Accounts in the organization
Click the Edit button to change the usage status
Organization Quota Template
List of registered organization quota templates
The applicable target, region, service, Quota Item name, and request value can be viewed
Add Quota
Register a new quota
Up to 10 can be registered
For details, see [Add Quota](#할당량-추가하기)
Delete
Delete the selected quota from the organization quota template list
For details, see [Delete Quota](#할당량-삭제하기)
Table. Organization Quota Template items
Caution
Even if the organization quota template is set to enabled, it is not applied automatically unless at least one quota has been added.
Even if the template is set to disabled, the added quotas are not deleted.
Add quota
You can set a new quota and add it to the template.
Caution
You can register up to 10 quotas.
To add a quota, follow these steps.
Click the All Services > Management > Quota Service menu. It moves to the Service Home page of Quota Service.
On the Service Home page, click the Organization Quota Template menu. It moves to the Organization Quota Template page.
On the Organization Quota Template page, click the Add Quota button. It moves to the Add Quota page.
After entering the required quota information, click the Complete button.
Category
Is it required
Detailed description
Region
Required
Select the region to use for the organization quota
Service
Required
Select services available in the chosen region
Activated when region is selected
Quota Item Name
Required
Select a Quota Item included in the selected service
Activated when a service is selected
Quota Items already used in other quotas cannot be selected
If necessary, delete the corresponding quota before registering
Request Value
Required
Enter the request value for the selected Quota Item
Activated when a Quota Item name is selected
The input must be less than or equal to the value set by the administrator
Table. Add Quota items
When the popup notifying the addition opens, click the Confirm button. The quota is added to the organization quota template list.
Delete organization quota
You can delete the quota added to the template.
To delete a quota, follow these steps.
Click the All Services > Management > Quota Service menu. It moves to the Service Home page of Quota Service.
On the Service Home page, click the Organization Quota Template menu. It moves to the Organization Quota Template page.
On the Organization quota template page, after selecting the quota to delete from the list, click the Delete button.
You can delete one or multiple quotas simultaneously.
When the deletion notification popup appears, click the Confirm button. The quota is removed from the list.
10.13.3 - Release Note
Quota Service
2025.07.01
NEW Quota Service Official Version Release
The Quota Service has been launched.
You can manage the maximum number of resources, tasks, or items (quotas) set for each service within your account.
11 - Financial Management
You can search for the required software according to the purpose of use on Samsung Cloud Platform, apply directly, and manage detailed information of the applied software.
11.1 - Cost Management
11.1.1 - Overview
Service Overview
The Cost Management service provides tools to manage and track the costs of Samsung Cloud Platform services.
You can make faster and more accurate financial decisions by identifying cost trends and savings opportunities.
Users can efficiently obtain the desired information through the grouping and subdivision of information.
Provided Features
Usage and Billing Details: You can predict the expected amount for this month and check the billing amount. It provides the amount, usage, and unit price by resource, and it is possible to save it as an Excel file.
Payment Details: Provided when the payment method is a credit card. Check the billing amount, overdue amount, payment date, payment status, etc.
Cost Analysis: You can check the cost trend for up to 6 months. It provides cost grouping and various search filters to analyze cost and usage data.
Credit Management: You can manage and use credits.
Budget Management: You can set and manage budgets.
Cost Savings: You can save on usage costs by setting a time contract for the Compute service of the Samsung Cloud Platform Console.
Account: You can check the account information. You can check the account name, account ID, alias, and company name.
Payment Management: Provides information about the payment method of the Account. Credit card and non-cash deposit information are displayed.
Carbon Emissions: You can check the carbon emissions generated when using the Samsung Cloud Platform Console services, monthly carbon emissions trends, carbon emissions by service group, and carbon emissions information by account.
Constraints
Provided based on the Asia/Seoul(GMT +09:00) time zone.
Cancellation fees and Support plan amounts can be checked in the month when the billing settlement is completed.
Cost analysis is provided on a monthly basis.
The tag information of the billing report will be provided when sending an email.
Preceding Service
Cost Management has no preceding service.
11.1.2 - How-to Guides
Users can optimize cloud efficiency by utilizing the various cost management tools provided by Cost Management in the Samsung Cloud Platform Console.
Cost Management allows you to check usage and billing details, payment history, and cost analysis, and manage Credit, budget, Account, and payment methods.
Check usage and billing history
You can predict the expected amount for this month and check the billing amount for the services used in the Samsung Cloud Platform Console.
To check the usage details and billing details, follow the next procedure.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Usage and Billing Details menu. It moves to the Usage and Billing Details page.
On the Usage and Billing Details page, select the year and month in the period setting area at the top right. The usage or billing details for the corresponding period are displayed.
Select the item to check in the upper right corner of the detailed cost list. The details of the item are displayed.
The account’s monthly payment details can be reviewed by item, including billed amount, used amount, unpaid amount, and payment amount, etc.
On the Samsung Cloud Platform Console, you can check the payment history of the user’s account on a monthly basis.
To check the payment history, follow these steps.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Payment History menu. It moves to the Payment History page.
On the Payment History page, you can check the payment history of the corresponding Account.
Item
Description
Claim Month and Year
Reference Month and Year
Claim Amount
The monthly usage amount plus the accumulated unpaid amount
Used Amount
Monthly Used Amount
Overdue Amount (Current Month)
Overdue Amount for the Current Month
Unpaid Amount (Cumulative)
Total unpaid amount
Payment Amount
Displayed in the actual payment currency, not the contract currency of the Account
Payment Date
Credit Card Payment Date
Table. Payment Details Item
Reference
All rates are exclusive of value-added tax.
For more detailed information about the payment history, please refer to Payment History.
Analyzing Costs
You can check the analysis data on costs, such as the estimated or invoiced amount for up to 6 months of the Account, the average monthly invoiced amount, and the top 5 costs.
You can check the cost analysis on the Samsung Cloud Platform Console.
To check the cost analysis, follow the next procedure.
Note
The billing calculation standard is midnight in the Asia/Seoul (GMT +09:00) time zone. The estimated data may differ from the actual data.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Cost Analysis menu. It moves to the Cost Analysis page.
On the Cost Analysis page, you can check the cost analysis of the corresponding Account.
By selecting the inquiry period at the top right, you can check the cost analysis for the corresponding period.
The search period can be selected from the start to the end, up to a maximum of 6 months.
Note
The inquiry period is based on the 1st day of the starting month to the last day of the ending month.
It provides a graph visualized based on the search period.
When selecting by service group, colors are applied by service group, and service group colors and amount information are provided as tooltips when hovering over with the mouse.
An asterisk (*) is displayed for estimated amounts.
On the Cost Analysis page, you can also check the detailed cost statement.
Item
Description
Service Group
Service Group Notation of Resources
Service
Service name notation of resources
Resource Name
Resource Name Notation
Billed amount or estimated billed amount*
The billed amount or estimated billed amount of the resource is displayed
Table. Detailed Cost Analysis Items
Note
The analysis content varies depending on whether the billing settlement month and the unsettled month are included within the inquiry period.
For more information about cost analysis, please refer to Cost Analysis.
Credit check
You can check and manage the monthly usage of the Credit received through the Samsung Cloud Platform Console.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Credit Management menu. It moves to the Credit Management page.
On the Credit Management page, you can check the monthly usage of Credit.
Click the expand button at the right end of an Account item to check the monthly details.
Division
Detailed Description
Division
Credit Type
Valid period
Credit is applied to the usage amount of the months shown in the valid period.
In the credit expiration month, the extension button appears.
Issued Credit
Credit issued respectively by Credit type and issue date
Remaining Credit
The remaining credit amount after excluding the total used credit from the received credit
Usage Month
You can check the monthly limit amount set by the user for the corresponding month.
In the current month, the limit setting button appears, and the credit amount to use in that month can be set within the remaining credit.
Monthly Limit
The monthly limit amount set by the user. If not set, it represents the same amount as the remaining credit each month.
Monthly Used Credit
This is the actual cost of Credit used within the monthly limit set for the corresponding month.
Monthly Remaining Credit
This is the cost excluding the actual Credit used from the monthly limit amount set for the month, the balance is included in the remaining Credit.
Table. Credit Management List
Caution
If the billed amounts through the previous month have not been settled, Credit is not applied to the current month.
Reference
For more details on checking Credit, please refer to Credit Confirmation.
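The relationship between the monthly limit, monthly used credit, and remaining credit described in the table above can be sketched as follows. This is an illustrative model, not the actual billing engine; the function name and signature are assumptions.

```python
# Illustrative model (not the billing engine) of how a monthly credit
# limit interacts with the remaining credit balance: the unused part
# of a monthly limit flows back into the remaining credit.
def apply_month(remaining_credit: int, monthly_limit: int,
                monthly_cost: int) -> tuple[int, int]:
    """Return (credit used this month, remaining credit afterwards)."""
    limit = min(monthly_limit, remaining_credit)  # limit cannot exceed balance
    used = min(monthly_cost, limit)               # credit covers cost up to limit
    return used, remaining_credit - used
```

For example, with 1,000 in remaining credit, a monthly limit of 300, and 500 in usage, only 300 of credit is consumed and 700 remains for later months.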
Managing Budget
You can set and manage the budget.
To check budget management information, follow these steps.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Budget Management menu. It moves to the Budget Management page.
On the Budget Management page, you can check the list of configured budgets.
Division
Detailed Description
Budget Name
Budget name to distinguish the generated budget
Management Type
You can check the type that manages the budget.
Monthly budget setting: Initialize the usage amount on a monthly basis
Total budget setting: Accumulate and calculate the used amount from the set month
Budget setting amount
Check the set budget.
This month’s usage amount
The account's usage amount for the current month, through the previous day.
Burn Rate
The ratio of this month's usage amount (through the previous day) to the set budget amount.
In the case of the total budget, it is calculated as the total of the usage amount from the starting month to this month.
This month’s estimated usage amount
The estimated total usage amount for this month, based on the account's usage fees through the previous day.
Expected consumption rate
It represents the ratio of the expected usage amount for this month to the set budget amount.
In the case of the total budget, it is calculated as the total of the expected usage amount from the start month to this month.
More button
You can move to the budget modification page by clicking the More button in the list.
Table. Budget Management List Information
Note
For more information on budget management, please refer to Budget Management.
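The burn rate and expected consumption rate in the table above can be sketched as simple percentages. The linear projection used for the expected value is an assumption for illustration; the actual estimation method may differ.

```python
# Assumed formulas for the budget indicators described in the table above.
def burn_rate(usage_to_date: float, budget: float) -> float:
    """Month-to-date usage (through the previous day) as a % of the budget."""
    return usage_to_date / budget * 100

def expected_consumption_rate(usage_to_date: float, days_elapsed: int,
                              days_in_month: int, budget: float) -> float:
    """Estimated full-month usage as a % of the budget, assuming a simple
    linear projection of the daily run rate (an illustrative choice)."""
    estimated_month_usage = usage_to_date / days_elapsed * days_in_month
    return estimated_month_usage / budget * 100
```

For example, spending 250,000 of a 1,000,000 budget in the first 10 days of a 30-day month gives a 25% burn rate and a 75% expected consumption rate.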
Cost Savings
The user can set a time contract for the Compute service of the Samsung Cloud Platform Console to save usage costs.
To check the detailed information of Cost Savings, please follow the next procedure.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Cost Savings > Cost Savings List menu. It moves to the Cost Savings list page.
On the Cost Savings list page, click a plan group name to check its details. It moves to the Cost Savings details page.
The Cost Savings details page consists of the Coverage, Utilization, and Plan Group tabs.
Division
Detailed Description
Plan Group Name
Plan Group’s Name
Plan ID
Plan group’s ID
Contract Period
Contract period of the plan group
Start Date
Plan Start Date
Expiration Date
Plan Expiration Date
Hourly Contract Amount
Hourly contract amount of the plan group
Status
Plan application status
Creating: After application is completed, plan creation waiting status
Active: Plan start status
Expired: Plan contract period expiration status
Application Cancellation
Cancel the plan application
Only plans in the Creating status can be canceled
Table. Cost Savings list information
Reference
For more information on Cost Savings, please refer to Cost Savings.
Account information check
Account information can be checked and account aliases can be managed.
To check Account information, follow these steps.
Reference
To check and modify the account information, a payment method must be registered. For more information about payment method registration, please check Registering a payment method.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Account menu. It moves to the Account page.
You can also navigate by clicking My Menu > Account at the top right of the Console.
On the Account page, you can check the Account information.
Item
Description
Account name
The name given to the account when creating an account. It allows you to easily identify the account when managing multiple accounts.
Account ID
Account’s unique identification number. Used when the IAM user logs in.
Account Alias
Alias given to the Account when the Account is created.
In the case of managing multiple Accounts, it allows for easy identification of Accounts for billing management, etc.
The IAM user can log in using the Account alias when logging in to the Console.
The carbon emission trend is displayed as a graph compared to the previous month
If you place the mouse cursor on the graph, you can check the total carbon emission
Month-over-Month: The increase/decrease rate (%) of carbon emissions compared to the previous month
Increase/Decrease: The increase/decrease amount of carbon emissions compared to the previous month
Previous Month’s Carbon Emissions: The carbon emissions of the previous month
Average monthly carbon emissions
Average carbon emissions over the past 6 months
Carbon Emission Graph
1-day to current carbon emission graph
Total: Displays the total carbon emission of the entire service
Service Group: Selects a service group (e.g. Compute, Storage) to display the carbon emission of the corresponding service group
Placing the mouse cursor over the graph allows you to check the detailed information of the graph
Service-based carbon emissions
Service-based carbon emissions from day 1 to the present
Detailed statement Excel download: Download service-based carbon emissions as an Excel file
Table. This month's carbon emission information
Reference
On the 1st of every month, data is not displayed. (based on Asia/Seoul, GMT +09:00)
The expected amount at the end of the month is an estimated value based on the current emission.
For more information on carbon emissions, please refer to Carbon Emissions.
Guidance
Because carbon emission amounts are rounded per service after their decimal values are summed, the displayed per-service values may not add up exactly to the total.
Indirect service group: Includes emissions from all other services used, excluding Compute and Storage services.
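The rounding note above can be demonstrated with a short worked example; the emission figures below are made up for illustration.

```python
# Worked example of the guidance above: per-service emissions are
# rounded individually, so their sum can differ from the rounded
# grand total (the figures below are made up for illustration).
from decimal import Decimal, ROUND_HALF_UP

per_service = [Decimal("1.24"), Decimal("2.24"), Decimal("0.24")]  # tCO2e
step = Decimal("0.1")

# rounding each service first: 1.2 + 2.2 + 0.2 = 3.6
sum_of_rounded = sum(v.quantize(step, rounding=ROUND_HALF_UP) for v in per_service)
# rounding the exact total: 3.72 -> 3.7
rounded_total = sum(per_service).quantize(step, rounding=ROUND_HALF_UP)
# the two figures differ by 0.1 purely due to rounding order
```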
11.1.2.1 - Payment History
Users can check the billing details of their account, such as the billing amount, usage amount, unpaid amount, and payment amount, on a monthly basis through the payment history in the Samsung Cloud Platform Console.
Check payment history
You can check the payment history of the user’s account on the Samsung Cloud Platform Console on a monthly basis.
To use the payment history, follow the next procedure.
Click All services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Payment History menu. It moves to the Payment History page.
On the Payment History page, you can check the payment history of the corresponding Account.
Item
Description
Billing Month
Billing Standard Month
Claim Amount
The amount obtained by adding the unpaid accumulated amount to the monthly usage amount
Amount Used
Monthly Amount Used
Overdue Amount (Current Month)
Overdue amount for the current month
Accumulated Amount of Unpaid Fees
Total Unpaid Amount
Payment Amount
Displayed in the actual payment currency, not the contract currency of the Account
Payment Date
Credit Card Payment Date
Payment Status
Payment Status
Normal: Payment completed
Overdue: Overdue for more than 1 month
Table. Payment Details Item
Reference
All prices are exclusive of VAT.
The unpaid amount will be re-charged on the next billing date.
Detailed statement Excel download
You can download the payment history as an Excel file.
To download the payment history as an Excel file, follow these steps.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Payment History menu. It moves to the Payment History page.
On the Payment History page, click the Detailed Statement Excel Download button.
Check the downloaded Excel file.
11.1.2.2 - Cost Analysis
The user can check the analysis data of the cost, such as the estimated billing amount or billing amount for up to 6 months of the user’s account, the average monthly billing amount, and the top 5 costs, through the cost analysis of the Samsung Cloud Platform Console.
Check cost analysis
You can check the cost analysis on the Samsung Cloud Platform Console.
To check the cost analysis, follow these steps.
Note
The billing calculation standard is midnight in the Asia/Seoul (GMT +09:00) time zone. The expected data may differ from the actual data.
Click the All Services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Cost Analysis menu. It moves to the Cost Analysis page.
On the Cost Analysis page, you can check the cost analysis of the corresponding Account.
You can check the cost analysis for the selected period by selecting the inquiry period at the top right.
The search period can be selected up to a maximum of 6 months from the start to the end.
Note
The inquiry period is based on the 1st of the starting month to the last day of the ending month.
It provides a graph visualized based on the search period.
When selecting by service group, colors are applied by service group, and when mouse-over, service group colors and amount information are provided as tooltips.
On the Cost Analysis page, you can also check the detailed cost statement.
Item
Description
Service Group
Service group of the resource
Service
Service name of the resource
Resource Name
Name of the resource
The billed amount or expected billed amount*
Displays the billed amount or expected billed amount of the resource
Table. Detailed Cost Analysis Items
If the month with unsettled bills is included in the inquiry period
In cost analysis, if the inquiry period includes an unsettled billing month, it provides the total expected billing amount*, monthly average billing amount, expected monthly average billing amount*, and graph.
Item
Description
Total Estimated Billing Amount*
Estimated usage amount from the 1st of the current month to the day before the inquiry date
Average monthly billing amount
Predicted billing amount based on the trend of usage from the 1st to the end of the month
Expected monthly average billing amount*
Expected monthly average billing amount during the inquiry period
Monthly Graph
Monthly graph provided
For unsettled months, the graph shows expected billing amounts
Table. If the billing unsettled month is included in the cost analysis inquiry period
If only settled billing months are included in the inquiry period
In cost analysis, if only settled billing months are included in the inquiry period, it provides the total billing amount, monthly average billing amount, cost Top 5, and graph.
Item
Description
Total billed amount
Actual billed amount and discount amount within the inquiry period
Average monthly billing amount
Average actual billing amount and discount amount during the inquiry period
Cost Top5
Top 5 costs and discount amounts within the inquiry period
Monthly Graph
Monthly Graph Provided
Table. Only billing settlement months are available within the cost analysis inquiry period
If there are only unsettled bills for the month within the inquiry period
In cost analysis, if only unsettled months are included in the inquiry period, it provides total expected billing amount*, monthly average billing amount, expected monthly average billing amount*, and graph.
Item
Description
Total Estimated Billing Amount*
Estimated billing amount and discount amount during the inquiry period
Average monthly billing amount
Indicated as no data
Expected monthly average billing amount*
Expected billing amount from the 1st to the last day of the inquiry month
Graph
Monthly graph provided
Table. Only unsettled bills for the month within the cost analysis inquiry period
If only the current month is included in the search period
In the cost analysis, if only the current month is included in the inquiry period, it provides the usage amount, expected billing amount*, monthly average billing amount, and graph.
Item
Description
Estimated usage amount*
Estimated usage amount from the 1st of the current month to the day before the inquiry date
Expected billing amount*
Expected billing amount from the 1st to the end of the current month
Monthly average billing amount
Average billing amount for the past 6 months
Graph
Daily Graph/Daily Accumulated Graph
Table. Cost analysis only for the current month
Detailed statement Excel download
You can download the detailed breakdown of the cost analysis as an Excel file.
To download the detailed breakdown of cost analysis as an Excel file, follow these steps.
Click All services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Cost Analysis menu. It moves to the Cost Analysis page.
On the Cost Analysis page, click the Detailed Statement Excel Download button.
Please check the downloaded Excel file.
Item
Description
Contract Number
Ledger ID (Unique Key of the instance)
Account ID
Account ID
Account name
Account full name
Billing Year and Month
Billing Year and Month of Cost
Service
Service Group
Type
Service Name
Resource Name
Name of the Resource
Region
Resource Usage Region Name
Unit Price
Pricing Unit and Pricing Standard
Usage
Time resources were used
Currency
Currency of Account
Amount Used
Amount Used Before Discount
Planned Compute
Planned Compute discounted amount
Cancellation fee
A cancellation fee occurs for cancelled resources. It occurs for 2 months after the cancellation date
Total Used Amount
Used Amount + Cancellation Fee - Planned Compute Amount
SLA
SLA Discount Amount
Credit
Credit Application Amount
Other discounts
Total of discounts excluding SLA/Credit discounts
Billing Amount
The final payment amount after excluding the discount amount from the total usage amount
Table. Detailed cost analysis Excel download
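The amount columns in the downloaded statement are related as the table describes: Total Used Amount = Amount Used + Cancellation Fee - Planned Compute amount, and the Billing Amount is the Total Used Amount minus the SLA, Credit, and other discount amounts. As a rough illustration (not official tooling; the row values below are made-up sample data), these relationships can be checked like this:

```python
# Illustrative sketch: validate the amount columns of one row from the
# Detailed Statement Excel download, using the relationships described
# in the table above. Column names follow the table; the row values
# here are made-up sample data.

def total_used_amount(amount_used, cancellation_fee, planned_compute):
    # Total Used Amount = Amount Used + Cancellation Fee - Planned Compute
    return amount_used + cancellation_fee - planned_compute

def billing_amount(total_used, sla, credit, other_discounts):
    # Billing Amount = Total Used Amount minus all discount amounts
    return total_used - (sla + credit + other_discounts)

row = {"Amount Used": 120000.0, "Planned Compute": 15000.0,
       "Cancellation fee": 0.0, "SLA": 2000.0, "Credit": 10000.0,
       "Other discounts": 0.0}

total = total_used_amount(row["Amount Used"], row["Cancellation fee"],
                          row["Planned Compute"])
final = billing_amount(total, row["SLA"], row["Credit"], row["Other discounts"])
print(total, final)  # 105000.0 93000.0
```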
11.1.2.3 - Credit Management
The user can check and manage the monthly usage of the credits received through the Samsung Cloud Platform Console.
Credit check
You can check the Credit received from Samsung Cloud Platform Console.
To check the Credit, follow the procedure below.
Click the All services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Credit management menu. It moves to the Credit management page.
On the Credit management page, you can check the monthly usage of Credit.
Credit Summary Information: You can check the summary information of valid Credit and expired Credit.
Division
Detailed Description
Summary Classification
Select Credit type to check summary information
Valid Credit: Summary information of currently valid Credit
Expired Credit (last 6 months): Summary information of Credit that expired within the last 6 months
Summary Information
Summary information by summary classification
Valid Credit
Total Remaining Credit: total amount of remaining Credit
Total Used Credit: total amount of used Credit
Number of Active Credits: total number of valid Credits
Expired Credit (last 6 months)
Total Remaining Credit: total amount of unused Credit
Total Used Credit: total amount of used Credit
Number of Expired Credits: total number of expired Credits
Table. Summary of Credit Items
Credit detailed information: You can check the detailed information by credit type. Clicking the expand button at the far right of the Account item displays the monthly detailed contents.
Division
Detailed Description
Division
Credit Type
Valid period
The usage amount of the month displayed as the valid period will be applied.
In the credit expiration month, the extension button appears.
Paid Credit
Credit paid by each type and issue date
Remaining Credit
The remaining credit amount after excluding the total used credit from the paid credit
Usage month
You can check the monthly limit amount specified by the user for the corresponding month.
In the current month, a limit setting button appears, and the Credit amount to be used that month can be set within the remaining Credit.
Monthly Limit
The monthly limit amount set by the user. If not set, it represents the same amount as the remaining credit each month.
Monthly Used Credit
This is the actual cost of Credit used within the monthly limit set for the corresponding month.
Monthly Remaining Credit
The amount remaining after subtracting the Credit actually used from the monthly limit set for that month; this balance is included in the remaining Credit.
Table. Credit Details
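The relationship between the monthly limit, the monthly used Credit, and the remaining Credit described in the table above can be sketched as follows. This is an illustrative model only, using made-up sample numbers; it is not the platform's actual settlement logic.

```python
# Illustrative sketch of the monthly credit deduction described above:
# each month, usage is deducted from the credit up to the monthly limit
# (or up to the remaining credit, whichever is smaller), and the unused
# part of the limit stays in the remaining balance.

def apply_month(remaining_credit, monthly_limit, usage):
    """Return (credit_used_this_month, remaining_credit_after).

    A monthly_limit of None means no limit is set, in which case the
    full remaining credit is available for deduction.
    """
    limit = remaining_credit if monthly_limit is None else min(monthly_limit, remaining_credit)
    used = min(usage, limit)
    return used, remaining_credit - used

# Limit of 100,000 caps the deduction even though usage is 130,000.
used, remaining = apply_month(remaining_credit=500000, monthly_limit=100000, usage=130000)
print(used, remaining)  # 100000 400000

# With no limit set, the full usage is deducted automatically.
used, remaining = apply_month(remaining_credit=400000, monthly_limit=None, usage=130000)
print(used, remaining)  # 130000 270000
```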
Caution
If the bill amount up to the previous month has not been settled, the current month will not be applied to Credit.
Credit extension
In the credit expiration month, an Extend button appears to the right of the validity period. To extend a Credit, follow the steps below.
Click the All services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Credit Management menu. It moves to the Credit Management page.
Click the Extend button to the right of the validity period, select the period you want to extend (1 month, 2 months, or 3 months), and then click the Confirm button.
Caution
Any remaining Credit that is not extended expires at the end of its validity period and can no longer be used.
Credit extension is only possible once, and after the extension is completed, the extended usage period cannot be changed.
Credit extension is only available to users with the required IAM permissions, or to the Root Account or Organization Manager Account.
Credit limit setting
The user can set the amount to be deducted from the credit in the current month’s usage amount in Credit management.
Click the All services > Financial Management > Cost Management menu. It moves to the Service Home page of Cost Management.
On the Service Home page, click the Credit Management menu. It moves to the Credit Management page.
In the current month's row, click the limit setting button. A Credit limit setting popup window opens where you can check the Credit balance.
When Automatic full deduction is enabled, the Credit monthly limit input box is deactivated, and the full amount is deducted automatically every month.
When Automatic full deduction is disabled, the Credit monthly limit input box is activated, allowing you to set a limit amount.
Enter the amount you want to deduct every month in the Credit monthly limit input field.
The entered amount is applied equally until the Credit period expires; if the remaining Credit is less than the monthly limit, the limit for that month becomes the remaining Credit amount.
An amount larger than the remaining Credit cannot be entered.
Reference
If no limit is set, the full Credit amount is deducted automatically.
The first half of each month is the settlement period for the previous month's bill, so the Credit limit setting button does not appear during that time.
Caution
The set limit can no longer be modified after 23:00 (Asia/Seoul, GMT +09:00) on the last day of each month, so be sure to check the monthly limit beforehand.
Credit limit setting is only available to users with the required IAM permissions, or to the Root Account or Organization Manager Account.
11.1.2.4 - Budget Management
The user can enter essential budget management information through the Samsung Cloud Platform Console, select detailed options, and create the service.
Create Budget
You can create a budget in the Samsung Cloud Platform Console.
To create a budget, follow these steps.
Click the All Services > Financial Management > Cost Management menu. Go to the Service Home page of Cost Management.
Click the Create Budget button on the Service Home page. Navigate to the Create Budget page.
On the Create Budget page, enter the information required for budget creation and select detailed options.
Category
Required?
Detailed description
Budget Name
Required
Enter the budget name to distinguish when creating multiple budgets.
Enter within 20 characters using Korean, English, numbers, spaces, and special characters(+=.,@-_).
Management Type
Required
Select the budget management method.
Monthly Budget Setting: Resets the used amount on a monthly basis for calculation, and you can set a notification when the used amount reaches the set amount.
Total Budget Setting: Accumulates the used amount from the set month for calculation, and you can set a notification when the used amount reaches the set amount.
Budget
Required
Enter the budget based on this month’s usage amount.
Start month
Required
Select the month to start budget management.
Budget management will be applied from the usage amount of the start month.
Threshold Reached Notification
Optional
Set whether to send a notification when the budget consumption rate reaches the threshold.
If used, additionally configure the notification method.
Notification Timing: You can set to receive a notification email when the budget consumption rate reaches that point.
Notification Sending: Set whether the email is repeatedly sent when the threshold notification timing is reached.
First 1-time Notification: Sent once on the day after the threshold is reached, based on midnight (Asia/Seoul, GMT +09:00)
Daily Notification: Sent daily starting the day after the threshold is reached, based on midnight (Asia/Seoul, GMT +09:00)
Notification Recipient: Enter the email address to receive the threshold notification.
New Creation Prevention
Optional
When using New Creation Prevention, the creation of new services included in Compute or Database service groups is restricted.
When used, additionally set the prevention point and notification recipients.
Prevention Point: Restricts new service creation when the budget consumption rate reaches this point.
Notification Recipient: Enter the email address to receive notifications when the prevention point is reached.
Table. Budget Creation Input Information
Check the entered content and click the Complete button. A popup notifying the creation of the budget will open.
Click the Confirm button. Budget creation is complete.
Completed budgets can be viewed on the Budget Management list page.
Information
The target services of the New Creation Prevention function (as of December 2025) are as follows.
Even when using the Prevent New Creation feature, resources already in use remain unchanged and continue to be billed, so the set amount does not guarantee the final cost.
Check budget management list
In Budget Management, you can view and edit the entire budget list.
To check the budget management information, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Navigate to the Service Home page of Cost Management.
Click the Budget Management menu on the Service Home page. You will be taken to the Budget Management page.
On the Budget Management page, you can view the list of configured budgets.
Category
Detailed description
Budget Name
Budget name to distinguish the created budget
Management Type
You can check the type of budget management.
Monthly budget setting: Reset the used amount on a monthly basis
Total budget setting: Accumulate the used amount from the set month
Budget Setting Amount
Check the set budget.
This month’s usage amount
Indicates the Account's usage amount for the current month through yesterday.
Consumption rate
Shows the amount used this month through yesterday as a proportion of the budgeted amount.
For the total budget, it is calculated as the sum of this month’s usage amounts from the start month.
For budgets that have not started, the consumption rate cannot be checked.
Estimated usage amount for this month
Shows the estimated total usage amount for this month, based on the Account's usage fees through yesterday.
Estimated Burn Rate
Indicates the ratio of the expected usage amount for this month to the budgeted amount.
For the total budget, it is calculated as the sum of the expected usage amounts from the start month to this month.
For budgets that have not started, the estimated burn rate cannot be determined.
Start month
Indicates the month when the usage amount starts to be applied to budget management.
More button
By clicking the More button in the list, you can go to the Budget Edit page.
Table. Budget Management List Information
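The consumption rate and estimated burn rate in the list above are ratios against the budgeted amount. The sketch below illustrates the arithmetic with sample numbers; the linear month-end extrapolation is an assumption for illustration only, since the Console's exact estimation method is not specified here.

```python
# Illustrative sketch of the budget-list ratios described above.
# Consumption rate compares month-to-date usage with the budget;
# the estimated burn rate projects the full month from usage so far
# (simple linear extrapolation -- an assumption, not the Console's
# documented method). Sample numbers only.

def consumption_rate(month_to_date_usage, budget):
    # Proportion of the budget already consumed, as a percentage.
    return 100.0 * month_to_date_usage / budget

def estimated_burn_rate(month_to_date_usage, days_elapsed, days_in_month, budget):
    # Project month-to-date usage to a full-month estimate, then
    # express it as a percentage of the budget.
    projected = month_to_date_usage / days_elapsed * days_in_month
    return 100.0 * projected / budget

rate = consumption_rate(300000, 1000000)
burn = estimated_burn_rate(300000, 10, 30, 1000000)
print(rate, burn)  # 30.0 90.0
```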
Edit Budget
The user can edit the budget they created.
To edit the budget management information, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Go to the Service Home page of Cost Management.
Click the Budget Management menu on the Service Home page. You will be taken to the Budget Management page.
In the list, click the More button > Edit Budget button. You will be taken to the Edit Budget page.
On the Budget Edit page, enter the information that needs to be edited and select the detailed options.
Category
Required?
Detailed description
Budget Name
Required
Enter the budget name to distinguish when creating multiple budgets.
Enter within 20 characters using Korean, English, numbers, spaces, and special characters(+=.,@-_).
Management Type
Required
Select the budget management method.
Monthly Budget Setting: Calculates by resetting the used amount on a monthly basis, and you can set a notification when the used amount reaches the set amount.
Overall Budget Setting: Calculates by accumulating the used amount from the set month, and you can set a notification when the used amount reaches the set amount.
Budget
Required
Enter the budget based on this month’s usage amount.
Start Month
Required
Select the month to start budget management.
Budget management applies from the usage amount of the start month.
Threshold Reached Notification
Optional
Set whether to send a notification when the budget consumption rate reaches the threshold.
If used, additionally configure the notification method.
Notification Timing: You can set to receive a notification email when the budget consumption rate reaches that point.
Notification Sending: Set whether the email is repeatedly sent when the threshold notification timing is reached.
First Single Notification: Sent once on the day after the threshold is reached, based on midnight (Asia/Seoul, GMT +09:00)
Daily Notification: Sent daily starting the day after the threshold is reached, based on midnight (Asia/Seoul, GMT +09:00)
Notification Recipient: Enter the email address to receive the threshold notification.
New Creation Prevention
Optional
When using New Creation Prevention, the creation of new services included in Compute or Database service groups is restricted.
When used, additionally set the prevention point and notification recipients.
Prevention Point: Restricts new service creation when the budget consumption rate reaches this point.
Notification Recipient: Enter the email address to receive notifications when the prevention point is reached.
Table. Budget Modification Input Information
Verify the entered content and click the Complete button. A popup notifying budget modification will open.
Click the Confirm button. Budget modification is complete.
The revised budget can be viewed on the Budget Management list page.
Caution
If the budget amount or Threshold Reached Notification settings are changed, existing notifications may be reset and resent.
Delete Budget
Unused budgets can be deleted. However, once a budget is deleted it cannot be recovered, so you must consider the impact thoroughly before proceeding with the deletion.
Caution
Please note that after deleting a budget, it cannot be restored.
To delete the budget, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Navigate to the Service Home page of Cost Management.
Click the Budget Management menu on the Service Home page. You will be taken to the Budget Management page.
In the list, click the More button > Delete budget button. A notification popup appears for budget deletion.
Pressing the Confirm button will delete the budget.
When deletion is complete, check whether the budget has been deleted on the Budget Management list page.
11.1.2.5 - Cost Savings
Users can set time commitments for instance-based services in the Compute, Database, and Data Analytics service categories of the Samsung Cloud Platform Console to save usage costs.
Cost Savings Apply
You can apply for and use Cost Savings on the Samsung Cloud Platform Console.
To apply for Cost Savings, follow the procedure below.
Click the All Services > Financial Management > Cost Management menu. Navigate to the Service Home page of Cost Management.
Click the Cost Savings Apply button on the Service Home page. You will be taken to the Cost Savings Plan Apply page.
On the Cost Savings Plan Application page, enter the information required for the Cost Savings plan application.
Category
Required?
Detailed description
Plan Group
Required
Select the plan group to set the plan
If there is no plan group you want to set, create a plan group on the plan group list page, then proceed
For detailed information on creating a plan group, refer to Cost Savings Create Plan Group
Plan Name
Required
Enter the name of the plan to create
Cannot be changed after plan creation
Reference usage fee
Optional
Select the past fee period to reference
7 days (default), 30 days, or 60 days selectable
When selected, recommends an hourly contract fee based on the fees during that period
If Cost Savings is already applied, the existing Cost Savings is excluded from the fee history before the recommendation
The fee plan can also be entered directly
View fee history
Optional
Select the previous fee verification period to reference for recommending a fee plan
7 days, 30 days, 60 days
If Cost Savings is already applied, the existing Cost Savings is excluded from the fee history
Contract Period
Optional
Select the plan application period
Hourly contract amount (₩)
Optional
Enter the hourly contract amount
Recommended contract amount: Analyzes non-contract usage history to recommend a contract amount; varies by contract period
Direct input: Directly enter the hourly contract amount
Plan start date
Required
Select the plan application start date
Can be selected from the day after application
Table. Cost Savings Application Information
After checking the expected discount amount, click the Complete button.
When the popup notifying the Cost Savings application opens, click the Confirm button.
When the application is complete, check the applied plan on the Cost Savings Plan list page.
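According to the table above, the Console recommends an hourly contract fee from the fee history of the chosen reference period (7, 30, or 60 days), excluding usage already covered by existing Cost Savings. The exact recommendation algorithm is not documented here; the sketch below simply averages the uncovered hourly fees as one plausible illustration with sample numbers.

```python
# Illustrative sketch only: averages the non-contract portion of past
# hourly fees to suggest an hourly contract amount. The Console's real
# recommendation logic is not documented here; function names and the
# averaging approach are assumptions for illustration.

def recommend_hourly_amount(hourly_fees, hourly_covered):
    """Average non-contract hourly fee after subtracting amounts
    already covered by existing Cost Savings plans."""
    uncovered = [max(fee - cov, 0.0) for fee, cov in zip(hourly_fees, hourly_covered)]
    return sum(uncovered) / len(uncovered)

# 7 days x 24 hours of sample fees, with 100 ₩/h already covered
# by an existing plan.
fees = [500.0] * (7 * 24)
covered = [100.0] * (7 * 24)
recommended = recommend_hourly_amount(fees, covered)
print(recommended)  # 400.0
```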
Guide
If you group into a plan group, you share the plan amount with other accounts within the Organization.
Single plan groups cannot be shared.
Cost Savings Plan Group Management
You can create and manage Cost Savings plan groups.
Create a Cost Savings plan group
Guide
Organization Management Account can create a plan group of Member Account and share the contracted amount between groups.
Plan groups can only be created in the Management Account. However, if the account belongs to a plan group created by the Management Account, a plan group that only the Member Account belongs to can be created.
To create a Cost Savings plan group, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Go to the Service Home page of Cost Management.
On the Service Home page, click the Cost Savings > Cost Savings plan group menu. Navigate to the Cost Savings plan group list page.
Click the Create Plan Group button on the Cost Savings Plan Group List page. You will be taken to the Create Plan Group page.
On the Cost Savings Plan Group Application page, set the basic information and target Account, then click the Confirm button.
Category
Required status
Detailed description
Plan Group Name
Required
Enter the name of the plan group
Enter using Korean, English, numbers, special characters (+=,.@-_) between 3 and 24 characters
Description
Optional
Description of the plan group
Target Account
Required
Select the target Account of the plan group
Table. Cost Savings Plan Group Application Information
Reference
When selecting the Target Account, Accounts already belonging to a plan group created by the Management Account are not shown in the list.
When the popup notifying the creation of a plan group opens, click the Confirm button.
Check Cost Savings plan group details
You can view the list of Cost Savings plan groups that are currently applied or pending, along with detailed information.
To view the detailed information of the Cost Savings plan group, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Go to the Service Home page of Cost Management.
Click the Cost Savings > Cost Savings Plan Group menu on the Service Home page. You will be taken to the Cost Savings Plan Group List page.
In the Cost Savings plan group list, click the name of the plan group whose details you want to view. You will be taken to the Cost Savings Plan Group Details page.
The Cost Savings Plan Group Details page consists of the Basic Information, Plan, Coverage, and Utilization tabs.
Basic Information
You can view the basic information about the plan group.
Category
Detailed description
Creator
Plan Group Creator
Creation Date/Time
Plan Group Creation Date/Time
Editor
Plan Group Editor
Modification DateTime
Plan Group Modification DateTime
Plan Group Name
Name of the plan group
Edit: Click the button to edit
Plan Group ID
Plan Group’s ID
Contract period
Contract period of the plan group
Description
Description of the plan group
Edit: Editable by clicking the button
Create Account
Create Plan Group Account
Target Account
Plan Group Target Account
Edit: Click the button to add or delete target accounts
Table. Cost Savings Plan Group Details - Basic Information Tab
Plan
You can view the plan information included in the plan group.
Category
Detailed description
Plan
Plan ID, contract period, hourly contract amount, start date, status display
Plan status
Creating: After application completed, waiting for plan creation
Active: Plan start status
Expired: Plan contract period expiration status
Monthly Billing Amount
Basic Information and Billing Amount Details by Plan ID
Plan ID, contract period, contract amount per hour, total billing amount, Account name, plan billing usage rate (%), usage amount
Account name: Account that received Cost Savings benefits from the plan group
Plan billing usage rate(%): Proportion of the total Cost Savings billing amount used by the Account
You can select the year and month at the top of the list to view information for that period
Cancel Application
Cancel plan application
Can only be cancelled in Creating state
Table. Cost Savings Plan Group Details - Plan Tab
Coverage
You can check the coverage with Cost Savings applied to non-contract resources for a specific period.
Category
Detailed description
Total estimated amount (non-contract)
Total amount of non-contract instances for the set period
Non-contract unit price x plan application rate
Cover Amount Total
Total coverage amount for the set period
Coverage (%)
The proportion of the coverage amount within the total expected non-contract amount for the set period (Coverage amount / Total expected non-contract amount)
Used Resources
Resources to which the plan applies
Usage Time
Time the resources were used
Plan Start Date
Start date of the applied plan
Plan Expiration Date
Expiration date of the applied plan
No contract
Non-contract unit price
Cost Savings
Contracted amount per hour applied by the plan
Plan Application Status
Whether the plan was applied
Cost Savings Unapplied Amount
Non-contract unit price - Plan applied amount
Table. Coverage tab information
Reference
Because amounts are displayed based on usage up to 2 days before the inquiry date, this month's amount is not displayed on the 1st and 2nd of each month.
Click the Detailed Excel Download button to download detailed information about the Coverage amount as an Excel file.
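The Coverage figures above reduce to two simple formulas: the coverage ratio is the covered amount as a share of the total expected non-contract amount, and the Cost Savings unapplied amount is the non-contract unit price minus the plan-applied amount. A minimal sketch with sample numbers:

```python
# Illustrative sketch of the Coverage tab arithmetic described above.
# Sample numbers only; the Console computes these from actual usage.

def coverage_percent(cover_amount, total_non_contract):
    # Coverage (%) = Coverage amount / Total expected non-contract amount
    return 100.0 * cover_amount / total_non_contract

def unapplied_amount(non_contract_unit_price, plan_applied):
    # Cost Savings Unapplied Amount =
    #   Non-contract unit price - Plan applied amount
    return non_contract_unit_price - plan_applied

cov = coverage_percent(800000, 1000000)
unapplied = unapplied_amount(500.0, 400.0)
print(cov, unapplied)  # 80.0 100.0
```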
Utilization
You can check the used amount, unused amount, and savings amount compared to the contracted amount of Cost Savings.
Category
Detailed description
Plan ID
Plan Group ID
Total contract amount
Sum of the plan group’s contract amount
Used amount
Amount used compared to contracted amount
Unused Amount
Unused portion of the plan's contracted amount (Total contract amount - Used amount)
Utilization (%)
Ratio of used amount to total contracted amount
Savings Amount
Difference between the instance’s non-contract amount and the actual plan applied amount for instance usage (including plan contract amount)
Expiration Date
Contract End Date of the Plan
Table. Utilization tab information
Reference
Because amounts are displayed based on usage up to 2 days before the inquiry date, this month's amount is not displayed on the 1st and 2nd of each month.
Click the Excel Download button to download detailed information about Utilization as an Excel file.
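The Utilization figures above follow directly from the contracted amount: Unused Amount = Total contract amount - Used amount, Utilization (%) = Used amount / Total contract amount, and the savings amount compares the non-contract cost with what was actually paid under the plan. A minimal sketch with sample numbers:

```python
# Illustrative sketch of the Utilization tab arithmetic described above.
# Sample numbers only; the Console computes these from actual usage.

def utilization(total_contract, used):
    return {
        "unused": total_contract - used,                  # Unused Amount
        "utilization_pct": 100.0 * used / total_contract,  # Utilization (%)
    }

def savings_amount(non_contract_amount, plan_applied_total):
    # Difference between what the usage would have cost without a
    # contract and what was actually charged under the plan.
    return non_contract_amount - plan_applied_total

u = utilization(total_contract=720000.0, used=648000.0)
print(u["unused"], u["utilization_pct"])  # 72000.0 90.0
print(savings_amount(900000, 720000))     # 180000
```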
Delete a Cost Savings plan group
You can delete the Cost Savings plan group.
To delete the Cost Savings plan group, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Navigate to the Service Home page of Cost Management.
On the Service Home page, click the Cost Savings > Cost Savings plan group menu. Navigate to the Cost Savings plan group list page.
In the Cost Savings plan group list, select the plan groups to delete, then click the Delete button at the top of the list.
You can also delete a plan group individually by clicking the Delete Plan Group button on the Cost Savings Plan Group Details page.
When a popup window notifying deletion opens, click the Confirm button.
Cost Savings Plan Management
You can create and manage a Cost Savings plan.
Apply for a Cost Savings plan
To apply for the new Cost Savings plan, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Go to the Service Home page of Cost Management.
Click the Cost Savings > Cost Savings Plan menu on the Service Home page. Navigate to the Cost Savings Plan List page.
On the Cost Savings Plan List page, click the Apply for Plan button. You will be taken to the Cost Savings Plan Application page.
On the Cost Savings Plan Application page, enter the information required for the Cost Savings plan application.
After checking the estimated discount amount, click the Complete button.
When the popup notifying the Cost Savings application opens, click the Confirm button.
When the application is complete, check the applied plan on the Cost Savings Plan list page.
Note
For details about the new Cost Savings plan application, please refer to Cost Savings Apply.
Guide
If you group by plan group, you share the plan amount with other accounts within the Organization.
Single plan groups cannot be shared.
Check Cost Savings plan details
You can view the list and detailed information of Cost Savings plans that are currently applied or pending application.
To view detailed information of the Cost Savings plan, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Go to the Service Home page of Cost Management.
Click the Cost Savings > Cost Savings Plan menu on the Service Home page. You will be taken to the Cost Savings Plan list page.
In the Cost Savings plan list, click the name of the plan whose details you want to view. The Cost Savings Plan Details page opens.
The Cost Savings Plan Details page consists of the Basic Information and Tag tabs.
Basic Information
You can view basic information about the plan.
Category
Detailed description
Service
Service name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID of the service
Creator
User who created the service
Creation DateTime
DateTime when the service was created
Editor
User who modified the service
Modification Date/Time
Date/Time the service was modified
Creator
Plan Creator
Creation Date/Time
Plan Creation Date/Time
Editor
Plan Editor
Modification Date/Time
Plan Modification Date/Time
Plan Name
Plan Name
Plan Group Name
Name of the plan group set in the plan
Plan ID
Plan’s ID
Contract period
Contract period of the plan
Hourly contracted amount
Hourly contracted amount
Start Date
Plan start date
If the plan has not started, you can change the start date by clicking the Edit button
Expiration Date
Plan Expiration Date
Create Account
Create Plan Group Account
Table. Cost Savings Plan Details - Basic Information Tab
Tag
You can view the plan’s tag information, and add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
You can view the Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from the previously created Key and Value list
Table. Cost Savings Plan Details - Tag Tab Items
Cancel a Cost Savings plan
You can cancel a Cost Savings plan that has been applied for.
To cancel a Cost Savings plan, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. Navigate to the Service Home page of Cost Management.
Click the Cost Savings > Cost Savings Plan menu on the Service Home page. You will be taken to the Cost Savings Plan List page.
In the Cost Savings plan list, select the plans to cancel, then click the Delete button at the top of the list.
You can also cancel a plan individually by clicking the Plan Cancellation button on the Cost Savings Plan Details page.
If a popup notifying termination opens, click the Confirm button.
11.1.2.6 - Account
When you sign up for the Samsung Cloud Platform Console, an Account is created, and the person who created the Account becomes the Root user.
You can check the account information in the Account and manage the account alias.
Check Account Information
To check the account information and modify the account alias, follow the next procedure.
Reference
To check and modify the account information, a payment method must be registered. For payment method registration, please check Registering a payment method.
Click the All Services > Financial Management > Cost Management menu. You will be taken to the Service Home page of Cost Management.
On the Service Home page, click the Account menu. You will be taken to the Account page.
You can also navigate by clicking My menu > Account at the top right of the Console.
On the Account page, you can check the following account information.
Item
Description
Account name
The name given to the account when creating an account. It allows you to easily identify the account when managing multiple accounts.
Account ID
The account's unique identifier, used when an IAM user logs in.
Account Alias
Alias given to the Account when it is created.
When managing multiple Accounts, it allows for easy identification of Accounts for billing management, etc.
IAM users can log in to the Console using the Account alias.
Company Name
Company name entered when registering the payment method
Table. Account detailed information
Caution
If you modify or delete an account alias, the Console login URL that uses the current alias can no longer be used.
IAM users can no longer log in to the Console with the previous account alias.
Modify account alias
Click the All Services > Financial Management > Cost Management menu. You will be taken to the Service Home page of Cost Management.
On the Service Home page, click the Account menu. You will be taken to the Account page.
You can also navigate by clicking My menu > Account at the top right of the Console.
On the Account page, click the Edit button in the Account Alias item. The Account Alias Edit popup window opens.
In the Account Alias Edit popup window, check the guide message, modify the account alias, and click the Confirm button.
Reference
When the account alias is modified, the Console login URL using the previous alias can no longer be used. After modification, if the previous alias is not used by another account, it can be used again.
Delete account alias
Click the All Services > Financial Management > Cost Management menu. You will be taken to the Service Home page of Cost Management.
On the Service Home page, click the Account menu. You will be taken to the Account page.
You can also navigate by clicking My menu > Account at the top right of the Console.
On the Account page, click the Delete button in the Account Alias item. The Account Alias Delete popup window opens.
In the Account Alias Delete popup window, check the guide message, then click the Confirm button.
Caution
If you delete an account alias, IAM users can no longer use it to sign in.
The IAM login URL using the alias also becomes unavailable.
11.1.2.7 - Payment Management
You can check and register the payment method registered in the Account on the Samsung Cloud Platform Console.
Check payment method
You can check the payment method registered in the Account on the Samsung Cloud Platform Console.
To check the payment method, follow the next procedure.
Click the All Services > Financial Management > Cost Management > Payment Management menu. You will be taken to the Payment Management page.
On the Payment Management page, you can check the registration date and alias of the currently registered payment method.
Reference
To check and modify account information, a payment method must be registered.
If you belong to an Organization, billing is charged to the payment method of the Management Account.
Change payment method
You can change the payment method registered in the Account on the Samsung Cloud Platform Console.
Click the All Services > Financial Management > Cost Management > Payment Management menu. You will be taken to the Payment Management page.
Click the Edit button to the right of the payment method to change it.
You can register an alias to distinguish a registered payment method using the alias creation button, and delete it using alias deletion.
Notice
For inquiries about payment methods, receiving billing statements, or payments, or to switch to a non-cash basis, please contact your sales representative or Contact Us.
Registering a payment method
To use all services of the Samsung Cloud Platform, you must register a payment method.
To register a payment method, follow the procedure below.
Click the All Services > Financial Management > Cost Management > Payment Management menu. You will be taken to the Payment Management page.
If no payment method is registered, the Payment Method Registration button is activated. Click the Payment Method Registration button.
Enter the business registration number, then enter the company name, representative name, and address.
If you have a transaction history with Samsung SDS, the company name, representative name, and address are filled in automatically. If a lot-number address was entered, you may need to change it to a road-name address before saving.
Attach the applicant's business card and business registration certificate, then click the Next button to proceed with payment method registration.
Notice
If you register a payment method on the Samsung Cloud Platform Console, you can use all services of the Samsung Cloud Platform Console.
11.1.2.8 - Carbon Emissions
The Samsung Cloud Platform Console visualizes the carbon emissions generated by your service usage so you can track emission trends. It also estimates the carbon emissions that were saved by using the Samsung Cloud Platform instead of an on-premises data center, and projects expected emissions based on current usage.
Check Carbon Emissions
You can check the carbon emissions by service of the Samsung Cloud Platform Console.
To check the carbon emissions by service, follow the next procedure.
Click the All Services > Financial Management > Cost Management menu. You will be taken to the Service Home page of Cost Management.
On the Service Home page, click the Carbon Emissions menu. You will be taken to the Carbon Emissions page.
In the period setting area at the top right, set the period for which to check the carbon emissions. The carbon emissions for the set period will be displayed.
Check this month’s carbon emissions
Click the This month button in the period setting area of the Carbon Emissions page.
Division
Detailed Description
Carbon Emissions by Period
Carbon emissions from day 1 to the present, and the On-prem. comparison emissions
Expected monthly On-prem. comparison: Expected increase/decrease compared to On-prem. at the end of the month
Expected monthly carbon emissions: Expected carbon emissions are estimated based on current emissions
Carbon Emission Trend
A graph comparing the carbon emission trend with the previous month
Hovering the mouse cursor over the graph shows the total carbon emission
Month-over-Month: Displays the percentage change (%) in carbon emissions compared to the previous month
Change Amount: Displays the increase/decrease amount in carbon emissions compared to the previous month
Previous Month’s Carbon Emissions: Displays the total carbon emissions for the previous month
Average monthly carbon emissions
Average carbon emissions over the past 6 months
Carbon Emission Graph
A graph of carbon emissions from day 1 to the present
Total: Displays the total carbon emissions of the entire service
Service Group: Selects a service group (e.g. Compute, Storage) to display the carbon emissions of that service group
Placing the mouse cursor over the graph allows you to check the detailed information of the graph
Carbon Emissions by Service
Carbon emissions by service from day 1 to the present
Detailed Statement Excel Download: Download carbon emissions by service as an Excel file
Fig. This month's carbon emission information
Reference
On the 1st of every month, data is not displayed. (Based on Asia/Seoul, GMT +09:00)
The estimated amount at the end of the month is based on the current emission rate.
Notice
The carbon emission is rounded after summing up by service unit, so there may be differences in the total.
Indirect Service group: Includes emissions from all other services used, excluding Compute and Storage services.
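The month-end estimate described above is an extrapolation from emissions to date. As a minimal sketch of how such a projection could be computed, here is a simple linear extrapolation; the formula and function names are assumptions for illustration, not the platform's published estimation method:

```python
import calendar
from datetime import date

def project_month_end(emissions_to_date: float, today: date) -> float:
    """Extrapolate month-end emissions from the average daily rate so far.

    Assumes a simple linear projection (hypothetical); the console's
    actual estimation method is not published and may differ.
    """
    days_elapsed = today.day  # emissions cover day 1 through today
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    daily_rate = emissions_to_date / days_elapsed
    return daily_rate * days_in_month

# Example: 120 kgCO2e emitted by the 15th of a 30-day month
print(project_month_end(120.0, date(2025, 6, 15)))  # 240.0
```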
Checking carbon emissions over a certain period
In the period setting area of the Carbon Emissions page, click the Direct button and select the period for which you want to check the carbon emissions.
Reference
The verification period can be set up to a maximum of 6 months.
Classification
Detailed Description
Carbon Emissions by Period
Displays the total carbon emissions during the set period and the On-prem. comparison emissions
Expected On-prem. comparison at the end of the month: Expected increase/decrease in On-prem. comparison at the end of the month
Expected carbon emissions at the end of the month: Expected carbon emissions are estimated values based on current emissions
Carbon Emission Trend
Displays the trend of carbon emissions as a graph during the set period
Placing the mouse cursor on the graph allows you to check the total carbon emissions
Average monthly carbon emissions
Average carbon emissions during the set period
Carbon Emission Graph
A graph of carbon emissions during the setting period
Total: Displays the total carbon emissions of the entire service
Service Group: Selects a service group (e.g. Compute, Storage) to display the carbon emissions of that service group
Placing the mouse cursor over the graph allows you to check the detailed information of the graph
Carbon Emissions by Service
Carbon emissions by service during the set period
Detailed Excel Download: Download carbon emissions by service as an Excel file
Fig. Carbon emission information for the set period
Notice
The carbon emission amount is rounded after summing up the decimal points by service group, so there may be a difference in the total.
Indirect service group: Includes emissions from all other services used, excluding Compute and Storage services.
11.1.2.9 - EDP Report
When signing an EDP contract, you can check contract information and usage cost information.
EDP Report Check Basic Information
To check the basic information of the EDP Report, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. You will be taken to the Service Home page of Cost Management.
Click the EDP Report menu on the Service Home page. You will be taken to the EDP Report page.
Category
Detailed description
Company Name
Company Name
Target Information
Samsung Cloud Platform version-specific offering and target ID information
Samsung Cloud Platform V1: Information per contracted account
Samsung Cloud Platform V2: Information per contracted organization
Contract period
Contract period
Annual contract status
Annual contract status
Contract Amount
Annual Contract Amount
Displayed only for annual contracts
Base discount rate
Base discount rate when contracted
Discount rate by product
Discount rate per product when contracted
Can be checked by Samsung Cloud Platform version
Table. EDP Report Basic Information Items
EDP Report Check
To check the EDP Report, follow the steps below.
Click the All Services > Financial Management > Cost Management menu. You will be taken to the Service Home page of Cost Management.
Click the EDP Report menu on the Service Home page. You will be taken to the EDP Report page.
Click the Report tab on the EDP Report page.
Category
Detailed description
EDP Cumulative Graph
Total contract amount data of EDP agreement
When the mouse cursor is placed on the graph bar, the usage amount label and usage amount for that month are displayed
If the contract period has not started, the graph is not displayed
Offering and Target ID can be selected and verified from the list
Click the Detail Excel Download button to download the details
Table. EDP Report items
Reference
In the case of a USD contract, the used amount is converted to Korean won and displayed.
11.1.3 - Release Note
Cost Management
2025.10.23
FEATURE Credit and Budget Management Feature Added
A Credit management page has been added where you can check summary information for valid and expired Credits.
A budget management function has been added.
When creating a budget, you can specify the month from which the budget applies.
You can set a budget depletion rate that limits new service creation once it is reached.
2025.07.01
FEATURE Carbon Emission Check Function Added
The Cost Savings and Carbon Emissions features have been added.
Cost Savings: You can set term contracts for the Compute services of the Samsung Cloud Platform Console to save on usage costs.
Carbon Emissions: You can check the carbon emissions generated when using the services of the Samsung Cloud Platform Console.
2025.02.27
NEW Cost Management Service Release
The Cost Management service has been launched. You can manage and optimize service costs.
It provides features such as usage and billing records, cost analysis, and credit management.
You can set and manage budgets, and check account and payment method information.
11.2 - Planned Compute
11.2.1 - Overview
Service Overview
Planned Compute is a pricing policy that lets you use resources at a discount of up to 40% compared to non-contracted prices, in exchange for a 1-year or 3-year commitment to a server type. It applies to non-contracted resources of Compute, Database, and Data Analytics services. If the attributes applied to Planned Compute match the attributes of non-contracted target resources, the Planned Compute discounted price is applied automatically. You can check the discount details of Planned Compute through the report and apply for additional Planned Compute if necessary.
Features
Planned Compute is a contract-based discount service provided by Samsung Cloud Platform that lets you choose the operating system and server type you want to use. It offers high flexibility, as you can apply for a 1-year or 3-year discount contract without specifying a particular resource.
Discounted Pricing: Planned Compute is a discounted pricing policy that allows you to use resources at a lower cost, up to 40% off the on-demand price, with a 1-year or 3-year commitment. It applies to resources in the Compute service category, Analytics service category, and Database service category, and if the attributes applied to Planned Compute match the attributes of the on-demand resources, the Planned Compute discounted price is automatically applied.
Various feature support: Planned Compute provides various features such as application, extension, server type change, etc. When applying for Planned Compute, select the contract discount target service, operating system, server type, and contract period, and enter the application quantity to apply.
Convenient management and monitoring: Through the Planned Compute list check, you can check the Planned Compute usage information and modify the application information when changing the contract conditions are required. In addition, convenient management is possible, such as checking the difference between the Planned Compute usage cost and the non-contract usage amount through Coverage check.
Service Composition Diagram
Figure. Planned Compute Configuration Diagram
Provided Features
Planned Compute provides the following functions.
Planned Compute Service Application: You can apply for Planned Compute by selecting the service, operating system, server type, and contract period you want the discount for, entering the requested quantity, and confirming the application information.
Planned Compute Extension: When the contracted period ends, the corresponding Planned Compute is terminated and no further discounts are applied to non-contracted resources. Termination of Planned Compute does not mean the non-contracted resources themselves are canceled. To extend the discount period, you can reserve the next contract period, which follows the current one, through the contract extension feature.
Planned Compute Server Type Change: The server type of a Planned Compute can be changed, but only to a higher specification than the current setting.
Planned Compute Cancellation: You can cancel an unused Planned Compute, but a fee of 50% of the remaining period's usage charge is billed 2 months after cancellation. Cancellation of Planned Compute does not mean the non-contracted resources are canceled.
Coverage Check: You can check the discount coverage of resource usage by service, operating system, and server type. For a selected period, it displays the name of each on-demand resource, its usage time, the amount that would have been charged without a discount, and the amount Planned Compute could not cover. This lets you compare the billing amount between on-demand usage and Planned Compute usage.
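The attribute matching behind automatic discount application can be sketched as follows. This is a hypothetical illustration only: the data model, server type names, and the behavior of capping coverage at the applied quantity are assumptions, not the platform's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attributes:
    """The three attributes Planned Compute matches on."""
    service: str
    server_type: str
    operating_system: str

def covered_resources(plan_attrs, plan_quantity, resources):
    """Return the on-demand resources a Planned Compute plan covers.

    A resource is eligible when all three attributes match; at most
    `plan_quantity` resources are covered (illustrative assumption).
    """
    matched = [name for name, attrs in resources if attrs == plan_attrs]
    return matched[:plan_quantity]

# Hypothetical plan and resource inventory (names are made up)
plan = Attributes("Virtual Server", "s1v2m4", "Open Source")
resources = [
    ("vm-app-01", Attributes("Virtual Server", "s1v2m4", "Open Source")),
    ("vm-db-01", Attributes("Virtual Server", "s1v4m8", "Open Source")),
    ("vm-app-02", Attributes("Virtual Server", "s1v2m4", "Open Source")),
]
print(covered_resources(plan, 1, resources))  # ['vm-app-01']
```

Any matching resources beyond the applied quantity would remain at non-contracted pricing, which is what the Coverage check surfaces as the uncovered amount.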
Constraints
The constraints of Planned Compute are as follows.
Applications, changes, and coverage are all calculated at midnight, Asia/Seoul (GMT+09:00) time.
If you apply for Planned Compute, the discount is applied from midnight on the day after the application.
To ensure the exact discount amount is calculated, Coverage can be queried up to 2 days ago.
Daily settlement runs from 23:30 to 23:59, during which new applications and changes are not processed.
Discounts are applied to non-contract resources in the same region of the same account.
Preceding Service
Planned Compute does not require any prior service work.
11.2.2 - How-to Guides
Users can apply for the service by entering the required information for Planned Compute through the Samsung Cloud Platform Console.
Planned Compute Apply
You can apply for and use the Planned Compute service from the Samsung Cloud Platform Console.
To apply for Planned Compute, follow the steps below.
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
Click the Planned Compute Apply button on the Service Home page. You will be taken to the Planned Compute Apply page.
On the Planned Compute Apply page, enter the required information and click the Next button.
Item
Required or not
Description
Service
Required
Select the service you want to apply for Planned Compute
Operating System
Required
Select Operating System
Selectable after the Service item is selected
Depending on the selected service, Open Source, RHEL, Windows, etc. can be selected
Server Type
Required
Select server type
Selectable after the Service item is selected
Contract Period
Required
Select contract period
Can be selected after selecting the Server Type item
1-year or 3-year can be selected
Requested Quantity
Required
Enter the requested quantity
You can enter a quantity from 1 to 100
The applied Planned Compute quantity is non-refundable, so enter it carefully
If you need to increase the quantity, you can add a new Planned Compute with the same server type
Contract Start Date
Required
Select the date when the contract becomes effective
Can be set from the day after the application date
Table. Planned Compute service input information
Check the detailed information and estimated billing amount shown in the Summary panel, and click the Complete button.
When the popup notifying the creation of Planned Compute opens, click the Confirm button.
Once the application is completed, check the resources you created on the Planned Compute List page.
Check the Planned Compute List
You can view and edit the full list of resources and detailed information of Planned Compute.
To view the list of Planned Compute, follow these steps.
All Services > Financial Management > Planned Compute Click the menu. Navigate to the Service Home page of Planned Compute.
On the Service Home page, click the Planned Compute menu. You will be taken to the Planned Compute List page.
On the Planned Compute List page, click the contract number of the Planned Compute to view detailed information. Navigate to the Planned Compute Details page.
Columns other than the required ones can be added via the Settings button.
Item
Required or not
Description
Contract Number
Required
Key value per billing unit that can be checked in the billing details
Resource Name
Select
Resource Contract Number
Target Service
Required
Planned Compute Discount Target Service
Discount applied to non-contract resources that match the corresponding value
Server Type
Required
Planned Compute discount eligible server type
Discount applied to non-contract resources that match the value
Operating System
Required
Planned Compute discount target operating system
Apply discount to non-contract resources that match the given value
Initial contract start date
Required
The start date when the Planned Compute contract applies
The date can be changed on the Planned Compute Details page
For more details, refer to Change Contract Start Date
Contract Period
Required
Check the applicable period based on the 1-year or 3-year term selected for each Planned Compute contract
When the Planned Compute contract ends, discounts no longer apply
To continue receiving discounts under Planned Compute, a contract extension is required
Extension Period
Required
Check the extension period of each Planned Compute contract
The 1-year or 3-year extension term can be changed before the extension start date arrives
The extension reservation can also be canceled
Status
Required
Status reflected after a change request for Planned Compute
Creating: State after the contract request, before the contract is applied
Active: State while the contract is applied
In Progress: State while a change request is being processed
Creator
Select
User who created the Planned Compute
Creation Time
Select
Planned Compute creation time
Editor
Select
User who modified the Planned Compute
Modification Date
Select
Planned Compute modification date
Table. Planned Compute list item information
When you click the More button on the right side of a row in the Planned Compute list, contract management options are provided. These options are also available via the buttons at the top of the list when multiple items are selected using the checkboxes.
Category
Detailed description
Contract Extension
Button to extend the contract
Active when there is no next contract reservation. Select the next contract period after the current contract to make a reservation
For more details, refer to Extend Contract
Contract Change
Button to change the contract
You can change the server type. Changes are only possible to a higher specification than the current setting
For details, refer to Change Contract
Contract Termination
Button to terminate the contract
Terminates the contracted Planned Compute during its term
On termination, a termination fee is incurred based on the remaining usage period (monthly fee * 50% * remaining months of the contract period) and is reflected on the invoice after 2 months
For more details, refer to Terminate Contract
Edit Contract Extension
Button to edit the next contract reservation
Active when there is a next contract reservation. Changes the contract period of the next contract following the current contract
For details, refer to Edit Contract Extension
Cancel Contract Extension
Button to cancel the next contract reservation
Active when there is a next contract reservation. Cancels the contract extension reservation for the next contract following the current contract
For details, refer to Cancel Contract Extension
Table. Planned Compute list item information - Contract Management
View Planned Compute Detailed Information
You can view and edit the detailed information of Planned Compute.
To view detailed information of Planned Compute, follow the steps below.
All Services > Financial Management > Planned Compute Click the menu. Navigate to the Service Home page of Planned Compute.
Click the Planned Compute menu on the Service Home page. Navigate to the Planned Compute list page.
On the Planned Compute List page, click the contract number of the Planned Compute to view detailed information. You will be taken to the Planned Compute Details page.
The Planned Compute Details page consists of the Detailed Information, Tag, and Work History tabs.
Detailed Information
You can view and edit the detailed information of the selected resource.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
User who created Planned Compute
Creation Date/Time
Creation Date/Time of Planned Compute
Editor
User who modified Planned Compute
Modification Date/Time
Modification Date/Time of Planned Compute
Contract Number
Key value per billing unit that can be checked in the billing details
Target Service
Planned Compute Discount Target Service
Apply discount to non-contract resources that match the given value
Server Type
Planned Compute discount eligible server type
Discount applied to contract-free resources that match the value
Click the Edit button to modify the server type
Operating System
Planned Compute discount target operating system
Apply discount to non-contract resources that match the value
Initial Contract Start Date
Start date when the Planned Compute contract applies
Click the Edit button to modify the start date
If the contract has started, editing is not allowed
Contract Period
Applied contract period and corresponding date
Extension Period
If the contract is extended, the extension period and the corresponding date
Contract Extension: If there is no extension period, you can click the button to apply for a contract period extension
Cancel Contract Extension: If you have applied for a contract period extension, you can click the button to cancel the extension
Until the new extension start date arrives, you can click the Edit button to change the extension period
Status
Status after change request of Planned Compute
Creating: State after contract request, before contract is applied
Active: State where contract is being applied
In Progress: State while processing change request
Table. Planned Compute detailed information items
Notice
If the contract for Planned Compute ends, the discount will no longer be applied. To continue applying the Planned Compute discount, extend the contract.
For detailed information about contract extension, please refer to Extend Contract.
Tag
You can view the tag information of the selected resource, and add, modify, or delete tags.
Category
Detailed description
Tag List
Tag List
You can view the tag’s Key, Value information
Up to 50 tags can be added per resource
When entering tags, you can search and select from the existing list of Keys and Values
Table. Planned Compute Details - Tag Tab Items
Work History
You can view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
You can check the work details, work date and time, resource type, resource name, work result, and operator information
Click a resource in the Work History list to open the Work History Details popup window.
Table. Planned Compute Details - Work History Tab Items
Manage Planned Compute Contracts
Change Contract Start Date
You can change the start date from which the Planned Compute contract applies. To change the contract start date, follow the steps below.
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
Click the Planned Compute menu on the Service Home page. You will be taken to the Planned Compute List page.
On the Planned Compute List page, click the contract number of the Planned Compute whose start date you want to change. You will be taken to the Planned Compute Details page.
Click the Edit button of Initial Contract Start Date on the Planned Compute Details page. The Initial Contract Start Date Edit popup opens.
After changing the contract start date, click the Confirm button.
Reference
The contract start date can be set to any date from the day after the application date.
If the contract has started, the initial contract start date cannot be modified.
Extend Contract
You can extend a Planned Compute contract. To extend a Planned Compute contract, follow the steps below.
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
Click the Planned Compute menu on the Service Home page. You will be taken to the Planned Compute List page.
On the Planned Compute List page, click the More button at the far right of the Planned Compute whose contract you want to extend, then click the Contract Extension button. The Contract Extension popup window opens.
Select the next contract period and click the Confirm button. You will be taken to the Planned Compute list page.
Reference
The Contract Extension button can be clicked before the current contract expires, if the next contract has not yet been reserved.
If you have reserved the next contract through Contract Extension, you can modify or cancel the next contract's extension period until the next contract's start date.
Change Contract
You can change a Planned Compute contract. To change a Planned Compute contract, follow the steps below.
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
Click the Planned Compute menu on the Service Home page. You will be taken to the Planned Compute List page.
On the Planned Compute List page, click the More button at the far right of the Planned Compute whose contract you want to change, then click the Contract Change button. The Contract Change popup window opens.
Select the server type to change, and click the Confirm button. You will be taken to the Planned Compute list page.
Reference
Contract changes are only possible to a higher specification than the current contract settings.
If you change the server type through Contract Change, the change takes effect at midnight on the same day (Asia/Seoul, GMT+09:00), and resources with the changed server type are moved to the corresponding Coverage list.
Edit Contract Extension
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
Click the Planned Compute menu on the Service Home page. You will be taken to the Planned Compute List page.
On the Planned Compute List page, click the More button at the far right of the Planned Compute whose contract you want to change, then click the Edit Contract Extension button. The Edit Contract Extension popup window opens.
Change the contract period of the next contract following the current contract, and click the Confirm button. You will be taken to the Planned Compute List page.
Reference
The Edit Contract Extension button is activated when there is a next contract reservation. It changes the contract period of the next contract following the current contract.
If there is a next reservation, you can modify or cancel the extension period of the next contract until the next contract's start date.
Cancel Contract Extension
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
Click the Planned Compute menu on the Service Home page. You will be taken to the Planned Compute List page.
On the Planned Compute List page, click the More button at the far right of the Planned Compute whose contract you want to change, then click the Cancel Contract Extension button. The Cancel Contract Extension popup opens.
Check the message in the Cancel Contract Extension popup, then click the Confirm button. You will be taken to the Planned Compute List page.
Reference
The Cancel Contract Extension button is activated only when a next contract is reserved. It cancels the reservation for the next contract that follows the current contract.
If you cancel the next contract reservation, the resource is treated as non-contracted after the current contract period expires.
Cancel Contract
You can cancel the Planned Compute contract. To cancel the Planned Compute contract, follow the steps below.
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
On the Service Home page, click the Planned Compute menu. You will be taken to the Planned Compute list page.
On the Planned Compute list page, click the More button at the far right of the Planned Compute resource whose contract you want to cancel, then click the Cancel Contract button. The Cancel Contract popup window opens.
Check the message in the Cancel Contract popup window, then click the Confirm button. You will be taken to the Planned Compute list page.
Caution
If you cancel the contract, a termination fee is incurred according to the remaining contract period (monthly fee × 50% × remaining months of the contract period) and is reflected on the bill two months later. Fully consider the termination fee before proceeding with the cancellation.
For same-day applications, based on the Asia/Seoul (GMT+09:00) time zone, no termination fee is incurred if the contract is cancelled before midnight on the same day.
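As an illustration of the termination fee formula above, the calculation can be sketched as follows. The monthly fee and remaining months are hypothetical example figures, not actual Samsung Cloud Platform rates:

```python
# Illustrative sketch of the termination fee described above:
# termination fee = monthly fee * 50% * remaining months of the contract period.
# The figures used here are hypothetical examples, not actual SCP pricing.

def termination_fee(monthly_fee: float, remaining_months: int) -> float:
    """Return the early-termination fee for a Planned Compute contract."""
    return monthly_fee * 0.5 * remaining_months

# Example: a 100,000 KRW/month contract cancelled with 6 months remaining.
fee = termination_fee(100_000, 6)
print(fee)  # 300000.0
```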
Planned Compute Coverage
You can check the discount amount that Planned Compute applies to the requested quantity and to non-contracted resources with the same target service, server type, and operating system.
To view Planned Compute Coverage details, follow the steps below.
Check Planned Compute Coverage List
Click the All Services > Financial Management > Planned Compute menu. Navigate to the Service Home page of Planned Compute.
On the Service Home page, click the Coverage menu. You will be taken to the Coverage list page.
Category
Detailed description
Target Service
Target service of the Planned Compute discount for which Coverage is checked
Operating System
Operating system of the Planned Compute discount target for which Coverage is checked
Server Type
Server type of the Planned Compute discount target for which Coverage is checked
Requested Quantity
Requested quantity of Planned Compute for which Coverage is checked
Table. Coverage Resource List Items
Check Planned Compute Coverage Details
On the Coverage list page, you can view detailed information of the selected resource and download the information you need.
Click the All Services > Financial Management > Planned Compute menu. You will be taken to the Service Home page of Planned Compute.
On the Service Home page, click the Coverage menu. You will be taken to the Coverage list page.
On the Coverage list page, click the resource (Coverage) whose detailed information you want to view. The Coverage Details page opens.
Category
Detailed description
Target Service
Target service of the Planned Compute discount for which Coverage is checked
Operating System
Operating system of the Planned Compute discount target for which Coverage is checked
Server Type
Server type of the Planned Compute discount target for which Coverage is checked
Requested Quantity
Requested quantity of Planned Compute for which Coverage is checked
You can select the period you want to query at the top of the Coverage table and retrieve the data.
Coverage data can be downloaded via the Excel Download button.
Considering the settlement time of the discount amount, information is provided up to 2 days before the inquiry date, based on the Asia/Seoul (GMT+09:00) time zone.
Billing is calculated at midnight in the Asia/Seoul (GMT+09:00) time zone, and the list can be queried from the start date of the contract.
Item
Description
Resource Name
Resource name of the non-contracted resource
Usage Time
Time used by the non-contracted resources in the list during the selected period
Non-contract Hourly Rate
Hourly rate applied when a non-contracted resource receives no discount
Non-contract Usage Amount
Amount used by the non-contracted resources in the list during the selected period
Usage time × hourly rate
Unapplied Amount
Charges of the list's non-contracted resources that Planned Compute deducts in return for the contract fee being paid
If Planned Compute covers the entire non-contracted amount of the list, the unapplied amount is 0 won
If Planned Compute covers only part of the list's non-contracted amount, an unapplied amount occurs
Uncontracted Usage Amount
Billing amount if Planned Compute were not used
Planned Compute Usage Amount
Amount paid for Planned Compute during the selected period
Because Planned Compute is billed monthly, the monthly fee is prorated on a daily basis for the selected period
Planned Compute Unapplied Amount
Total of the amounts not covered by Planned Compute
Billing Amount
Sum of the Planned Compute usage amount and the Planned Compute unapplied amount
The discount amount is the difference from the uncontracted usage amount
Table. Coverage details - Coverage information
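As a rough illustration of how the Coverage amounts relate (the billing amount is the Planned Compute usage amount plus any unapplied amount, and the discount is its difference from the non-contract usage amount), the arithmetic can be sketched as follows. All rates and hours below are hypothetical examples, not actual SCP pricing:

```python
# Hypothetical figures illustrating the Coverage amount relationships above.
# Rates and hours are examples only, not actual SCP pricing.

hourly_rate = 100        # non-contract hourly rate (KRW)
usage_hours = 720        # usage time during the selected period

non_contract_usage = usage_hours * hourly_rate   # usage time x hourly rate
planned_compute_usage = 50_000                   # prorated monthly contract fee
unapplied_amount = 10_000                        # portion not covered by the contract

billing_amount = planned_compute_usage + unapplied_amount
discount = non_contract_usage - billing_amount   # saving versus no contract

print(non_contract_usage, billing_amount, discount)  # 72000 60000 12000
```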
11.2.3 - Release Note
Planned Compute
2025.10.23
FEATURE Planned Compute contract start date setting feature added
When applying for a Planned Compute contract, you can set the date the contract takes effect.
You can start the contract on any desired date from the day after the application date.
2025.07.01
FEATURE Planned Compute target services added
Planned Compute target services have been added:
Data Catalog, Vertica (DBaaS)
2024.10.01
NEW Planned Compute Service Official Version Release
The Planned Compute service has been launched.
Services in the Compute and Database categories are provided at discounted rates in return for committing to a server type.
11.3 - Marketplace
11.3.1 - Overview
Service Overview
Marketplace is a service that supports subscription to and installation of products in various fields through the Samsung Cloud Platform. Users can search for the marketplace products they need, apply for subscriptions directly, manage detailed information of subscribed products, and check usage fees.
Provided Features
Marketplace provides the following features.
Product Introduction: Provides information on the main features, fees, and seller details of marketplace products offered by SCP.
Product Subscription: Users can select desired marketplace products through the product catalog and apply for a subscription.
Product Installation: Depending on the subscribed product type, the product can be installed and deployed according to the user's environment.
Product Management: You can check the subscription, installation, and termination status of the marketplace products you have applied for, and easily manage cancellations or terminations.
Software Provision Type
In Marketplace, you can subscribe to the following two types of products.
Seller Support: The software seller installs the product directly. When you apply for a product subscription, the application information is sent to the software seller by email. The seller checks the application details and installs the software in consultation with the user.
Machine Image: The product can be installed on an SCP Virtual Server. After applying for a product subscription, when the user selects the Marketplace image through the Virtual Server product, the software is installed automatically.
Components
Marketplace provides product subscription management and product catalog functionality.
Product Subscription Management: Users can view and manage the detailed information and subscription status of marketplace products they have applied for.
Product Catalog: You can view detailed information about marketplace products available from SCP and subscribe to the desired product.
Constraints
The services sold in the Samsung Cloud Platform Marketplace are services sold by individual sellers. In the case of those services, Samsung SDS is a telecommunications sales intermediary and not a party to the telecommunications sales. Therefore, Samsung SDS assumes no responsibility for the service information and transactions sold by individual sellers.
Preceding Service
Please check the prerequisite services for each product in the service portal's product catalog and with the seller.
11.3.2 - How-to Guides
Through the Marketplace service of Samsung Cloud Platform, users can apply for, install, and manage subscriptions to marketplace products in various fields. They can view detailed information, manage subscriptions, and check usage fees for subscribed marketplace products.
Apply for Product Subscription in the Product Catalog
You can check the features and pricing of the marketplace product you want to use in the Product Catalog of the Samsung Cloud Platform Console, and apply for a subscription.
Caution
The services sold in the Marketplace of Samsung Cloud Platform are sold by individual sellers. For those services, Samsung SDS is a telecommunications sales intermediary and not a party to the telecommunications sales. Therefore, Samsung SDS assumes no responsibility for the service information and transactions sold by individual sellers.
Reference
In Marketplace, you can subscribe to the following two types of products.
Seller Support: The software seller installs the product directly. When you apply for a product subscription, the application information is sent to the software seller by email. The seller checks the application details and installs the software in consultation with the user.
Machine Image: The product can be installed on an SCP Virtual Server. After applying for a product subscription, when the user selects the Marketplace image through the Virtual Server product, the software is installed automatically.
Search Marketplace Products in Product Catalog
To search for Marketplace products in the Product Catalog, follow the steps below.
Click the All Services > Financial Management > Marketplace > Product Catalog menu. You will be taken to the Product Catalog page.
On the Product Catalog page, search for the desired marketplace product. The search results are displayed according to the selected criteria.
You can search by entering the product name in the search area.
Click Detailed Search to filter by product name, category, provision type, and status.
In the search results list, click the product whose detailed information you want to view. The Product Catalog Details page opens.
On the Product Catalog Details page, check the detailed information of the selected product.
Item
Description
Product specifications
Product name and version, manufacturer information
Product subscription application
Button to apply for a subscription to the product
Overview
Product overview description
Fee
Detailed breakdown of product fees
Technical Support
Product technical support information
Manufacturer
Product manufacturer information
Seller Information
Product seller information
Table. Product Catalog Detailed Information Items
Apply for Product Subscription in the Product Catalog
To subscribe to a Marketplace product from the Product Catalog, follow the steps below.
Click the All Services > Financial Management > Marketplace > Product Catalog menu. You will be taken to the Product Catalog page.
On the Product Catalog page, search for the desired marketplace product. The search results are displayed according to the selected criteria.
Click the product whose detailed information you want to view in the search results list. You will be taken to the Product Catalog Details page.
On the Product Catalog Details page, check the detailed information of the selected product and click the Product Subscription Request button.
When the Product Terms Agreement and Application window appears, select the required agreement items and click the Complete button.
Please read and verify all the product usage terms thoroughly.
If you do not select all required agreement items among the product terms agreement items, you cannot apply for the product.
When the alert window appears, click the Confirm button. The product application will be completed.
You can view product subscription details in the Subscription Management menu.
If you request a subscription to a Seller Support type marketplace product, the request information is sent to the software vendor by email. Coordinate the product details and schedule with the vendor; software installation and charges are based on the agreed installation date.
If you subscribe to a Machine Image type marketplace product, select the Marketplace image in the Virtual Server > Create Virtual Server > Marketplace tab. Fees for the Virtual Server service and software usage are charged from the time the Virtual Server is created.
Check product subscription details
You can check the detailed information of marketplace products applied for in the Product Catalog.
To check the product subscription information, follow the steps below.
Click the All Services > Financial Management > Marketplace > Subscription Management menu. You will be taken to the Subscription Management page.
On the Subscription Management page, search for the desired software. The search results are displayed according to the selected criteria.
You can search by entering the product name you want to find in the search area.
Click Detailed Search to filter by product name, category, offering type, and status.
In the search results list, click the product whose details you want to view. The Subscription Management Details page opens.
Check the detailed information of the selected product on the Subscription Management Details page.
The Subscription Management Details page displays the product's status information and detailed information, and consists of the Basic Information and Terms Agreement tabs.
Category
Detailed description
Subscription Status
Product subscription status
Seller Support type product
Install Requesting: The product is subscribed and installation has been requested from the vendor
Active: The product is subscribed and installation is complete
Terminate Requesting: The product is being terminated and termination has been requested from the vendor
Error: An installation error or service abnormality occurred
Machine Image type product
Active: The product is subscribed
Terminated: The product subscription has been terminated
Canceled: The product subscription has been cancelled
Error: An installation error or service abnormality occurred
Cancel Product Subscription
Button to cancel the subscribed software
Cancel Subscription Request
Button to cancel a pending subscription request
Table. Status Information and Additional Functions
Basic Information
You can view detailed information of the selected product on the Subscription Management page.
Item
Description
Subscription ID
Unique subscription identifier
Product Name
Subscribed product name
Provision Type
Product Offering Type
Category
Product Category
Product Details
If you click Product Details, you can view the information on the corresponding software detail page in the Product Catalog
Applicant
Subscription applicant
Application date and time
Subscription application date and time
Start date of use
Product usage start date
License quantity
Installed product license quantity
Table. Subscription Management Basic Information Items
Reference
For terminated products, you can check the termination applicant, termination request date and time, and termination date on the Basic Information tab.
Terms Agreement
You can check the terms and conditions information that the user agreed to when applying for a subscription.
If you click View Content for a terms item, a popup window shows the terms you agreed to when applying for the product.
Cancel product
If you no longer use a subscribed marketplace product, cancel the subscription to terminate the service.
Reference
If you cancel the subscription, the service in operation may be terminated immediately, so fully consider the impact of the service being discontinued before proceeding with the cancellation.
For a Machine Image type product, after applying for cancellation, delete the Virtual Server where the product is installed from the Virtual Server list. If you do not delete it, the Virtual Server fee continues to be charged even after you apply for subscription cancellation.
To cancel a product subscription, follow the steps below.
Click the All Services > Financial Management > Marketplace > Subscription Management menu. You will be taken to the Subscription Management page.
On the Subscription Management page, click the software you want to cancel. You will be taken to the Subscription Management Details page.
On the Subscription Management Details page, click the Cancel Product Subscription button.
In the Product Subscription Cancellation popup window, enter the product name and click the Confirm button.
For a Seller Support type product, when the cancellation request is completed, the cancellation details are sent to the user's email.
11.3.3 - Release Note
Marketplace
2025.07.01
NEW Official Release of Marketplace Service
We have launched the Marketplace service, which supports various software applications through Samsung Cloud Platform, from application to installation and management.
12 - DevOps Tools
DevOps Tools provides services that make it easy to integrate and configure application and system development environments in a platform environment.
12.1 - DevOps Service
12.1.1 - Overview
Service Overview
DevOps Service provides standardized development tools, code framework-based development templates, and integrated management functions for application and system development, deployment, and operation through the DevOps Console. It enables fast and stable software development, deployment, and operation, along with convenient integrated management of Samsung Cloud Platform resources and CI/CD tools within the DevOps workflow.
Features
Convenient code management and deployment: Users can easily manage source code, build, and deploy through a web-based console, and also support various tool integrations to improve quality by analyzing source code.
Flexible deployment methods: Provides deployment environment configurations for Kubernetes clusters or Virtual Servers with minimal downtime for user services, allowing users to configure flexible deployment methods that suit their services.
Repository provision for deployment management: Provides a repository that can manage source code, library and application artifacts, container images, etc. for deployment management of user services.
Service Composition Diagram
Figure. DevOps Service Configuration Diagram
Provided Function
DevOps Service is a service that provides convenience for building/deployment by integrating standard development tools to easily configure the development environment, and provides the following functions:
Continuous Integration/Continuous Deployment (CI/CD): Users can access the tools responsible for the source code repository, artifact repository, code analysis, image repository, and build/deployment with a single login.
Application template-based project composition: Users can easily compose a project using a template that reflects development standards in a wizard-style manner.
Key development languages and frameworks provided: Users can choose the development languages (Java, C#, Python, Ruby, etc.) and frameworks (SpringBoot, Vue.js, .Net, etc.) needed for application development to configure a project for build/deployment.
Build/Deployment Pipeline Auto Configuration: The user can automatically configure the pipeline script using the build/deployment pipeline template included in the application template, or configure each stage of the pipeline based on GUI.
Support for various deployment methods and rollback support: Users can use deployment methods (RollingUpdate, Blue-Green) to minimize downtime of operating applications, and can roll back to the desired version with one click. If the user wants to use a Virtual Server as a deployment environment, they can deploy it to a Virtual Server in the form of a packaged file or a Docker image.
Customizable release process support: users can define and repeatedly execute various release processes considering the type of application changes.
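The Blue-Green deployment method mentioned above can be sketched conceptually as follows. This is a minimal illustration only, not DevOps Service code; the environment names and helper functions are hypothetical:

```python
# Conceptual sketch of a Blue-Green switch: two environments exist side by side,
# traffic points at one ("blue"); a new version is deployed to the idle one
# ("green"), and traffic is switched only after verification, which enables
# near-zero downtime and a one-step rollback. All names here are hypothetical.

environments = {"blue": "v1.0", "green": None}
live = "blue"

def deploy(version: str) -> str:
    """Deploy to the idle environment, then switch traffic to it."""
    global live
    idle = "green" if live == "blue" else "blue"
    environments[idle] = version      # install the new version on the idle side
    live = idle                       # switch traffic (instant cut-over)
    return live

def rollback() -> str:
    """Point traffic back at the previous environment (one-click rollback)."""
    global live
    live = "green" if live == "blue" else "blue"
    return live

deploy("v2.0")
print(live, environments[live])   # green v2.0
rollback()
print(live, environments[live])   # blue v1.0
```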
Components
The user can easily use the DevOps Service through the Samsung Cloud Platform DevOps Console.
DevOps Console
DevOps Console supports the tools necessary for application development and build/deployment in an integrated manner, allowing for easy management of project configuration and build/deployment.
Dev.Starter: An application template that provides not only sample code reflecting development standards but also templates necessary for build/deployment.
Source Code Repository: It supports linkage with source code configuration management tools. It can be linked with Git Repository.
Artifact Repository: It supports linkage with the artifact repository for storing libraries and application artifacts required for application build.
Code Quality (Code Review): It supports linkage with code quality tools that can measure and manage the quality of source code through code static analysis.
Helm Chart Repository: Kubernetes uses Helm Charts to easily install and upgrade software. To install software, users must write the Helm Chart directly. It provides ChartMuseum as a repository for managing Helm Charts, and also supports linking with other chart storage tools.
Image Registry: Supports linking with an image registry for storing container images.
VM Server Group: a collection of Virtual Servers that are the deployment target of an application. Users can register and specify Virtual Servers as deployment targets in the DevOps Console.
Kubernetes Cluster: the cluster that is the deployment target of the application. The user can register and specify the Kubernetes Cluster as the deployment target in the DevOps Console.
Regional Provision Status
DevOps Service is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
South Korea 1 (kr-south1)
Provided
South Korea 2 (kr-south2)
Provided
South Korea 3
Provided
Table. DevOps Service Availability by Region
Preceding service
DevOps Service does not have a preceding service.
12.1.2 - How-to guides
Users can input required information for the DevOps Service through the Samsung Cloud Platform Console, select detailed options, and create the service. Additionally, developers can efficiently carry out development projects using DevOps Service’s standardized development tools, code, framework-based development templates, and integrated management functions.
Reference
After creating a DevOps Service, you need to use the Kubernetes Engine service and the Container Registry service to set up a CI/CD environment in the DevOps Console.
Creating a DevOps Service
You can create and use a DevOps Service via the Samsung Cloud Platform Console.
Reference
Each account can use one DevOps Service.
Only IAM users can create a DevOps Service.
When a DevOps Service is created, the creator’s ID is granted Tenant Admin privileges in the DevOps Console.
To create a DevOps Service, follow these steps.
Click the All Services > DevOps Tools > DevOps Service menu. You will be taken to the Service Home page of the DevOps Service.
On the Service Home page, click the Create DevOps Service button to go to the Create DevOps Service page.
On the Create DevOps Service page, enter the required information for the service.
In the Enter Service Information section, provide the required details.
Item
Required
Description
Tenant Name
Required
Name of the Tenant for the DevOps Service being created (displayed logically when accessing the DevOps Console). Must start with a lowercase letter and can include lowercase letters, numbers, and hyphens (-), 3–30 characters.
Tenant Code
Required
System-internal ID used internally (similar to a project ID). Created based on user input. Must start with a lowercase letter and can include lowercase letters, numbers, and hyphens (-), 3–30 characters.
Table: Required Information Input Items for DevOps Service
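The naming rule in the table above (must start with a lowercase letter; lowercase letters, digits, and hyphens only; 3 to 30 characters) can be expressed as a regular expression. The helper function below is a hypothetical illustration, not part of any SCP SDK:

```python
import re

# Naming rule from the table above: start with a lowercase letter; may contain
# lowercase letters, digits, and hyphens (-); total length 3-30 characters.
# is_valid_tenant_name is a hypothetical helper, not an SCP API.
TENANT_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,29}$")

def is_valid_tenant_name(name: str) -> bool:
    return TENANT_NAME_RE.fullmatch(name) is not None

print(is_valid_tenant_name("my-tenant-01"))  # True
print(is_valid_tenant_name("My-Tenant"))     # False (uppercase not allowed)
print(is_valid_tenant_name("ab"))            # False (shorter than 3 characters)
```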
In the Add Tenant Members section, select tenant members from the user list.
Item
Required
Description
User
Required
Select members from the user list.
The service applicant is included in the default tenant members
At least one user must be selected to create the service
Table: DevOps Service Tenant Member Addition Items
In the Enter Additional Information section, provide or select additional details as needed.
Item
Required
Description
Tag
Optional
Add tags
Up to 50 tags per resource
Click the Add Tag button, then enter or select Key and Value
Table: Additional Information Input Items for DevOps Service
In the summary panel, review the detailed information and estimated billing amount, then click Create.
After creation is complete, you can view the resource on the Resource List page.
Using DevOps Service
The DevOps Service is configured through a separate console, the DevOps Console, where you set up the actual DevOps environment.
To use DevOps Service, follow these steps.
Click the All Services > DevOps Tools > DevOps Service menu. You will be taken to the Service Home page of the DevOps Service.
On the Service Home page, click the DevOps Service menu to go to the DevOps Service list page.
On the DevOps Service List page, click the resource you want to view details for. You are taken to the DevOps Service Details page.
On the DevOps Service Details page, click the DevOps Console button to go to the Samsung Cloud Platform DevOps Console page.
The DevOps Service allows you to view and edit the full resource list and detailed information. The DevOps Service Details page consists of Details, Tags, and Activity Log tabs.
To view detailed information of the selected resource from the DevOps Service List page, follow these steps.
Click the All Services > DevOps Tools > DevOps Service menu. You will be taken to the Service Home page of the DevOps Service.
On the Service Home page, click the DevOps Service menu to go to the DevOps Service list page.
On the DevOps Service List page, click the resource you want to view details for. You are taken to the DevOps Service Details page.
The DevOps Service Details page displays status information and consists of Details, Tags, and Activity Log tabs.
Item
Description
Status Display
Shows the status of the DevOps Service
Active: usable state
Creating: creation in progress
Error: error state during operation
DevOps Console
Access the DevOps Console to control the service
Service Termination
Button to terminate the service
Table: DevOps Service Status Information and Additional Functions
Details
You can view detailed information of the selected resource from the DevOps Service List page.
Item
Description
Service
Service name
Resource Type
Resource type
SRN
Unique resource ID in Samsung Cloud Platform (for DevOps Service, SRN refers to the DevOps Service resource ID)
Resource Name
For DevOps Service, this refers to the Tenant name
Resource ID
Unique resource ID within the service
Creator
User who created the service
Creation Date
Date and time when the service was created
Tenant Name
Name of the Tenant created by the user
Tenant Code
System ID value for the Tenant created by the user
Table: DevOps Service Detailed Information Items
Tags
You can view, add, modify, or delete tag information of the selected resource on the DevOps Service List page.
Item
Description
Tag List
List of tags
Key and Value of tags can be viewed
You can add up to 50 tags per resource
When entering a tag, you can search and select from existing Key and Value lists
Table: DevOps Service Tag Tab Items
Activity Log
You can view the activity log of the selected resource on the DevOps Service List page.
Item
Description
Activity Log List
Resource change history
Details such as operation time, resource type, resource name, operation details, result, operator name, and path information can be viewed.
Click the Detailed Search button for advanced search
Table: Activity Log Tab Detailed Information Items
Terminating a DevOps Service
Reference
If there are resources linked to a project within the DevOps Console, you cannot terminate the DevOps Service.
To terminate the DevOps Service, delete all resources linked within the DevOps Console.
To terminate a DevOps Service, follow these steps.
Click the All Services > DevOps Tools > DevOps Service menu. You will be taken to the Service Home page of the DevOps Service.
On the Service Home page, click the DevOps Service menu. You will be taken to the DevOps Service list page.
On the DevOps Service List page, click the resource you wish to terminate. You are taken to the DevOps Service Details page.
On the DevOps Service Details page, click the Terminate Service button.
After termination is complete, verify the resource is terminated on the DevOps Service List page.
12.1.3 - API Reference
API Reference
12.1.4 - CLI Reference
CLI Reference
12.1.5 - Release Note
2025.10.23
FEATURE Korea East (kr-east1) Region Service Open
The DevOps Service is now also available in the Korea East (kr-east1) region.
2025.07.01
FEATURE Add User Member
When creating a DevOps Service, you can add members to perform the Admin role.
2025.02.27
FEATURE Common Feature Changes
Samsung Cloud Platform common feature changes have been applied.
Common CX changes are reflected in Account, IAM, Service Home, tags, and more.
2024.12.23
NEW DevOps Service Official Version Release
We have launched the DevOps Service, which provides an integrated environment for fast and safe software development, deployment, and operation.
12.2 - DevOps Console
12.2.1 - Overview
12.2.1.1 - Introduction to DevOps Console
Service Overview
DevOps Console is a service that provides convenience for building and deploying by integrating standard development tools to support development environments, and has the following characteristics.
Key tool integration and unified authentication for Continuous Integration/Continuous Deployment (CI/CD)
The user can access tools responsible for source code repository, artifact repository, code analysis, image repository, and build/deployment with a single login.
Application template-based project composition
The user can easily configure the project in a wizard-like manner using a template that reflects the development standards.
Main development languages and frameworks provided
The user can select the development language (such as Java, C#, Python, Ruby, etc.) and framework (such as SpringBoot, Vue.js, .Net, etc.) required for application development to configure a project for build/deployment.
Build/Deployment Pipeline Auto Configuration
The user can use the build/deploy pipeline template included in the application template to automatically configure the pipeline script or configure each stage of the pipeline based on GUI.
Support for various deployment methods and rollback support
The user can use the deployment method (RollingUpdate, Blue-Green) to minimize the downtime of the operating application, and can roll back to the desired version with one click. If the user wants to use a VM server as a deployment environment, they can deploy it to the VM server in the form of a packaged file or a Docker image.
Customized Release Process Support
The user can define and repeatedly execute various release processes considering the type of application change.
Components
Users can easily use the DevOps Service through the DevOps Console.
Figure. DevOps Console Components
DevOps Console
It supports the tools necessary for application development and build/deployment in an integrated manner, making it easy to manage project configuration and build/deployment.
Dev. Starter
It is an application template that provides not only sample code reflecting development standards, but also templates necessary for build/deployment.
IDP (ID Provider)
It is in charge of integrated management and authentication of users.
Source Code Repository
It supports integration with source code management tools. It can be linked with DevOps Code.
Artifact Repository
It supports linkage with the artifact repository for storing libraries and application artifacts necessary for application build.
Code Quality
It supports linkage with code quality tools that can measure and manage the quality of source code through static code analysis.
Helm Chart Repository
In Kubernetes, Helm charts are used to easily install and upgrade software; users write Helm charts to define what to install. DevOps Console provides ChartMuseum as the default repository for managing charts and also supports linking with other chart storage tools.
Image Repository
It supports linkage with an image repository for storing container images.
VM Server Group
This is a bundle of VM servers that are the deployment target of the application. Users can register and specify VMs as deployment targets in the DevOps Console.
Kubernetes Cluster
This is the cluster that is the deployment target of the application. Users can register and specify the “Kubernetes Cluster” as the deployment target in the DevOps Console.
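As a concrete illustration of the Helm Chart Repository component above, ChartMuseum exposes an HTTP API for uploading packaged charts. The chart name, version, and repository URL below are placeholder values, not DevOps Console defaults:

```shell
# Package a locally written chart into a versioned archive (mychart is a placeholder)
helm package ./mychart          # produces e.g. mychart-0.1.0.tgz

# Upload the archive to a ChartMuseum instance via its chart API
# (https://chartmuseum.example.com is a placeholder repository URL)
curl --data-binary "@mychart-0.1.0.tgz" https://chartmuseum.example.com/api/charts
```

Once uploaded, the chart can be selected as a deployment source in the console or pulled by any Helm client pointed at the repository.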
Main Concepts and Relationships
The system administrator (System Admin) or administrator (Admin) must define the necessary tools and application templates in advance so that users can create projects.
Figure. Key Concepts and Relationships
Tenant
A tenant is a logical unit of separation that independently provides and shares the tools and application templates available to its projects. The system administrator registers a tenant for each organization (or customer) whose independence must be guaranteed and designates a tenant administrator. The tenant administrator can register the tenant tools, application templates, and members available in that tenant.
Project Group
A project group is a unit for managing individual projects, and members of a tenant can create one. A project group therefore belongs to one tenant and can use the tools and templates configured for that tenant.
Project
The project is a unit that develops and manages applications or microservices that are deployed and executed independently. Therefore, it is common to configure a project based on a single source code repository, but it can be configured flexibly according to the characteristics of the application or the development and operation organization.
Tool
A tool is a development tool that supports the source code repositories, image repositories, build/deployment pipelines, and code quality checks used in projects. Each project can use the tools designated for the project group, tenant, or application to which it belongs, or for the Kubernetes Cluster that is its deployment target. In general, tools that affect deployment speed, such as build pipelines and image repositories, are designated per cluster. Tools are divided into three types according to their scope of use.
Project Group Tool: A tool that can only be used by projects belonging to the project group. It can be mapped to and used by multiple project groups, and is registered and managed by the project group Owner and Master.
Tenant Tool: A tool that can only be used by project groups belonging to the tenant. It can be mapped to and used by multiple tenants. Only the tenant administrator of the tenant that first registered the tool can manage it; tenants registered later can only use it. It can also be designated for use in clusters accessible to the tenant administrator.
System Tool: A tool that can be used as a service across all DevOps Service projects. The system administrator registers it and designates and manages the tenants and clusters that can use it.
Application Template
This is a template that allows you to easily configure a project. Templates that provide sample code reflecting development standards are called Dev. Starter, and templates that only support development environment configuration are called Environment Only. Like tools, templates are divided into three types according to their scope of use.
Project Group Template: This is a template that can only be used by projects belonging to the project group. It is registered and managed by the Project Group Owner and Master.
Tenant Template: A template that can only be used by project groups belonging to the tenant. It is registered and managed by the tenant administrator.
System Template: A template that can be used across all DevOps Service projects. The system administrator registers it and designates and manages the tenants that can use it.
12.2.1.2 - Roles
Project Group Roles and Permissions
The project group creator is the Owner by default.
Master has all the permissions of the Owner, except for the project group deletion permission.
Developer and Viewer have only view permissions.
| Category | Permission | Owner | Master | Developer | Viewer |
|---|---|---|---|---|---|
| Project Group | View | ○ | ○ | ○ | ○ |
| Project Group | Edit | ○ | ○ | | |
| Project Group | Delete | ○ | | | |
| Project | Create | ○ | ○ | | |
| Project | View | ○ | ○ | ○ | ○ |
| Member | Add | ○ | ○ | | |
| Member | View | ○ | ○ | ○ | ○ |
| Member | Delete | ○ | ○ | | |

Table. Permissions for each role in the project group
Project Roles and Permissions
Roles are divided into Owner, Master, Developer, and Viewer.
Viewer can only view.
Developer can create and delete development-related features.
Examples: build pipeline, Helm chart, deployment
Owner/Master can view, create, and delete all features in the project.
Project roles inherit the project group roles.
Even if a user is not registered as a project member, they can perform the role in the project if they are a member of the project group.
When the same user has different roles in the project and project group, the project role takes priority.
In other words, you can limit the authority of a project group member in a specific project.
Example: To keep a project group Master from having Master permissions in a specific project, register them as a member of that project and grant them Developer or Viewer permissions.
| Category | Permission | Owner | Master | Developer | Viewer |
|---|---|---|---|---|---|
| Dashboard | View | ○ | ○ | ○ | ○ |
| Build Pipeline | View | ○ | ○ | ○ | ○ |
| Build Pipeline | Import | ○ | ○ | ○ | |
| Build Pipeline | (Development) Create/Add | ○ | ○ | ○ | |
| Build Pipeline | (Development) Run | ○ | ○ | ○ | |
| Build Pipeline | (Development) Edit | ○ | ○ | ○ | |
| Build Pipeline | (Development) Delete | ○ | ○ | ○ | |
| Build Pipeline | (Operation) Create/Add | ○ | ○ | | |
| Build Pipeline | (Operation) Run | ○ | ○ | | |
| Build Pipeline | (Operation) Edit | ○ | ○ | | |
| Build Pipeline | (Operation) Delete | ○ | ○ | | |
| Helm Install | (Development) Install | ○ | ○ | ○ | |
| Helm Install | (Operation) Install | ○ | ○ | | |
| Project | Edit | ○ | ○ | | |
| Project | Delete | ○ | | | |

Table. Permissions for each role in the project (1)
Permissions are granted to Jenkins pipelines based on project roles.
Other tools can set permissions in their respective menus.
| Category | Permission | Owner | Master | Developer | Viewer |
|---|---|---|---|---|---|
| (Folder) Project Group | Folder View | ○ | ○ | ○ | ○ |
| (Folder) Project | Credential View | ○ | ○ | ○ | ○ |
| (Folder) Project | Credential Create/Edit/Delete | ○ | ○ | | |
| (Folder) Project | Folder View | ○ | ○ | ○ | ○ |
| (Folder) Project | Folder Create | | | | |
| (Folder) Project | Folder Settings | | | | |
| (Folder) Project | Folder Delete | ○ | ○ | | |
| (Folder) Type | (Development) Folder View | ○ | ○ | ○ | |
| (Folder) Type | (Development) Pipeline Create | | | | |
| (Folder) Type | (Development) Folder Settings | | | | |
| (Folder) Type | (Development) Folder Delete | ○ | ○ | ○ | |
| (Folder) Type | (Operation) Folder View | ○ | ○ | | |
| (Folder) Type | (Operation) Pipeline Create | | | | |
| (Folder) Type | (Operation) Folder Settings | | | | |
| (Folder) Type | (Operation) Folder Delete | ○ | ○ | | |
| Pipeline | (Development) Pipeline View | ○ | ○ | ○ | |
| Pipeline | (Development) Pipeline Settings | ○ | ○ | ○ | |
| Pipeline | (Development) Pipeline Delete | ○ | ○ | ○ | |
| Pipeline | (Development) Pipeline Build | ○ | ○ | ○ | |
| Pipeline | (Operation) Pipeline View | ○ | ○ | | |
| Pipeline | (Operation) Pipeline Settings | ○ | ○ | | |
| Pipeline | (Operation) Pipeline Delete | ○ | ○ | | |
| Pipeline | (Operation) Pipeline Build | ○ | ○ | | |

Table. Jenkins system permissions for each project role
System Roles and Permissions
Large Category
Small Category
Permission
System Administrator
Administrator (Tenant Manager)
User (Project Group Owner)
Tool
System Tool
Register/Edit/Delete
○
Add/Edit Tenant
○
Delete Tenant
○
○
Add/Edit Cluster
○
Delete Cluster
○
○
○
Tenant Tool
Register/Edit/Delete
○
○
Add/Edit/Delete Tenant
○
○
Project Group Tool
Register/Edit/Delete
○
○
Add/Edit/Delete Tenant
○
○
Table. System roles and permissions (1)
Large Category
Small Category
Permission
System Administrator
Administrator (Tenant Manager)
User (Project Group Owner)
App. Template
System Template
Register/Edit/Delete
○
Add/Edit Tenant
○
Delete Tenant
○
○
Add/Edit/Delete Image
○
Add/Edit/Delete Helm Chart
○
Tenant Template
Register/Edit/Delete
○
○
Add/Edit/Delete Image
○
○
Add/Edit/Delete Helm Chart
○
○
Project Group Template
Register/Edit/Delete
○
○
Add/Edit/Delete Image
○
○
Add/Edit/Delete Helm Chart
○
○
Helm Chart Management
System Helm Chart
Add/Edit/Delete
○
Tenant Helm Chart
Add/Edit/Delete
○
○
Project Group Helm Chart
Add/Edit/Delete
○
○
Project Group
Create
○
○
○
Table. System roles and permissions (2)
12.2.1.3 - Screen Composition
This describes the main menu pages of DevOps Console.
When you first log in, the top menu and all project groups and projects you have permission for are displayed.
Top menu
Through the top menu, you can navigate to the Main screen, Management screen, and others, and edit user information. The top menu remains visible at all times while using the DevOps Console.
Item
Explanation
Main page
Go to the main page.
Management
Go to the management page.
Support
You can view the guide, inquiries, and announcements.
Link
You can view the related system links.
User Information
You can view and edit user information or log out.
Account information: the user’s account information popup opens.
Registration information: the registration information page opens.
Authentication key management: manage each user’s authentication keys.
My activity history: you can view the user’s activity history.
Logout: log out from the DevOps Console.
Table. Top Menu Items
Main Page
Displays the status of all project groups and projects the user has permission for.
Item
Explanation
Create project group
You can create a new project group.
Project group name
Represents the project group name.
Tenant name
Indicates the tenant name.
Project Group Management
Navigate to the Project Group page.
Release Management
Go to the release management page.
Create Project
You can create a new project.
Project Details
All projects the user has permission for are displayed
Click to go to the project’s dashboard page
Go to the user guide
Go to the user guide page.
Table. Main Page Items
Management Page
Includes overall DevOps Console management features such as dashboards, tenants, projects, tools, and users.
Item
Explanation
Management menu
DevOps Console management feature menus.
Menus appear differently depending on permissions.
Table. Management Page Items
Project Group Management Page
This is the management page for project groups and releases.
Item
Explanation
Project Group Management Menu
This is the project group management menu.
Release Management Menu
This is the release management menu.
Table. Project Group Management Page Items
Project Page
This is the page for a project.
Item
Explanation
Project Management Menu
This is the project management menu.
Table. Project Management Page Items
12.2.2 - Getting Started
12.2.2.1 - Getting Started with DevOps Console
This guide explains how to log in to the DevOps Console, set the display language, and configure user information.
Signing Up
To use the DevOps Console, you need to create a separate DevOps Console account, which is distinct from the Samsung Cloud Platform account. You can create an account by signing up.
To create an account in the DevOps Console, follow these steps:
Click the Sign Up link on the login page. You will be redirected to the sign-up page.
Complete the Identity Verification process. After completing the verification, click the Next button.
Item
Required
Description
CAPTCHA
Required
Enter the characters displayed in the image into the input field
Table. Identity Verification Information
Agree to the terms and conditions in the Sign-up Information section.
Item
Required
Description
Terms of Service
Required
Check to agree to the terms of service
Privacy Policy
Required
Check to agree to the collection and use of personal information
I am 14 years old or older.
Required
Check to confirm that you are 14 years old or older
Table. Sign-up Information
Enter the required information in the User Information section.
Item
Required
Description
User ID (Email)
Required
Enter the email address to use as your user ID
Mobile Phone Number
Required
Enter your mobile phone number
Enter your mobile phone number and click the Send OTP button to receive an OTP number
Enter the OTP number received on your mobile phone and click the Verify button
If the OTP number is valid, the mobile phone number verification is complete
Password
Required
Enter a password to use, which must be 8-20 characters long
Cannot use your user ID or name as your password
Must include at least one uppercase letter (English), one lowercase letter (English), one number, and one special character (!@#$%^&*)
Cannot use the same character three or more times in a row
Cannot use four or more consecutive characters or numbers
Password change cycle: 90 days
Confirm Password
Required
Confirm the password you want to use
Name
Required
Enter your name
Can be entered using characters, numbers, and spaces, up to 100 characters
Language
Required
Set the language for notifications such as email and SMS
Time Zone
Required
Set your time zone information
Table. User Information
After entering all the information, click the Complete button. A verification email will be sent to the email address you entered.
Click the Email Address Verification button in the received email to complete the sign-up process.
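As an aside, the length and character-class rules from the password table above can be sketched as a local pre-check. The sequential- and repeated-character rules are omitted here, and the sample password is a made-up value:

```shell
pw='Example#Pw9'   # hypothetical sample password, not a real credential

ok=true
# must be 8-20 characters long
[ "${#pw}" -ge 8 ] && [ "${#pw}" -le 20 ] || ok=false
# must include at least one uppercase letter, lowercase letter, number,
# and special character (!@#$%^&*)
printf '%s' "$pw" | grep -q '[A-Z]'      || ok=false
printf '%s' "$pw" | grep -q '[a-z]'      || ok=false
printf '%s' "$pw" | grep -q '[0-9]'      || ok=false
printf '%s' "$pw" | grep -q '[!@#$%^&*]' || ok=false

echo "$ok"
```

The sign-up form itself performs the authoritative validation; this is only a convenience check.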
Logging in
To log in to the DevOps Console, enter your account information on the login page and click the Next button.
On the OTP authentication page, enter the OTP number issued by the selected OTP type and click the Login button.
If you don’t remember your ID or password, click the ID/Password Find link to find your account information before attempting to log in.
Once logged in, the DevOps Console Main page will open.
Note
If you enter your password incorrectly more than 5 times, your account will be locked, so enter it accurately.
To unlock your account, click the ID/Password Find link and reset your password.
Modifying User Information
To modify user information, follow these steps:
Click the Shortcut icon at the top right of the Main page.
Click the DevOps IDP link. The DevOps IDP page will open in a new tab.
On the user information page, you can change information such as your phone number, password, name, time zone, and language.
Info
The time zone and language settings in DevOps IDP are not linked to DevOps Console.
To modify the information used in DevOps Console, follow these steps:
Click the User icon at the top menu of the Main page.
Click the Account Information menu. The Account Information popup window will open.
Change the language and time zone information, and then click the Save button to complete the modification of your account information.
Changing Console Language
To change the language displayed in DevOps Console, click Language Settings at the bottom of the DevOps Console page after logging in, and change it to your desired language.
Managing Authentication Keys
Info
Authentication keys are used when using the Open API service of DevOps Console.
You can manage your individual authentication keys through the Authentication Key Management menu by clicking the User icon at the top menu of the Main page.
Adding Authentication Keys
To add an authentication key, follow these steps:
Click the User icon at the top right of the Main page.
Click the Authentication Key Management menu. The Authentication Key Management popup window will open.
Click the Create Authentication Key button. The Create Authentication Key popup window will open.
Set the expiration date and click the Save button to complete the creation of the authentication key.
Info
If you have already added an authentication key, you cannot add another one.
Setting up Security
Through security settings, you can restrict the IP that can use the authentication key.
To add security settings, follow these steps:
Click the User icon at the top right of the Main page.
Click the Authentication Key Management menu. The Authentication Key Management popup window opens.
Click the Security Settings tab and then click the Modify button. The Security Settings Change popup window opens.
Set IP access control to Use.
Enter the allowed IP access and click the Save button to complete the security settings.
Deleting an Authentication Key
To delete an authentication key, follow these steps:
Click the User icon at the top right of the Main page.
Click the Authentication Key Management menu. The Authentication Key Management popup window opens.
Click the Status Change button.
In the Authentication Key Status Change popup, select Not in use and click the Save button.
Click the Delete button to complete the deletion of the authentication key.
Information
You cannot delete an authentication key that is in use. Change it to Not in use before deleting.
Checking My Activity History
By clicking the My Activity History menu in the top menu of the Main page, you can check your activity history in the DevOps Console.
Setting up Access Control IP
You can register an IP that can access the DevOps Console to control access.
To register an access control IP, follow these steps:
Click the Shortcut icon at the top right of the Main page.
Click the DevOps IDP link. The DevOps IDP page opens in a new tab.
Click the Access Control menu on the left.
Click the Modify button at the bottom of the Access Control page. The page changes to Access Control Modification.
Set Access Control IP Settings to Use.
Add the IP to be allowed access and click the Save button to complete the access control settings.
Information
If the IP has changed or is incorrectly registered and access is not possible, you can change the settings by clicking the Access Control IP Settings link at the bottom of the login page.
Withdrawing from Membership
Caution
When withdrawing from DevOps Console membership, all collected member information and related resources and authorities are deleted.
To withdraw from DevOps Console membership, follow these steps:
Click the Shortcut icon at the top right of the Main page.
Click the DevOps IDP link. The DevOps IDP page opens in a new tab.
Click the Withdrawal from Membership button at the top right of the user information page. The Withdrawal from Membership popup window opens.
Enter your current password in the input field and click the Withdrawal from Membership button to complete the withdrawal.
12.2.2.1.1 - Membership Information
Users can view and manage their basic information, authentication information, registered tenants, registered project groups, and registered projects.
Getting Started with Membership Information
To start managing membership information, follow these steps:
On the Main page, click the Manage icon in the top right corner. You will be moved to the Tenant Dashboard page.
In the left menu, click the Membership Information menu. You will be moved to the Membership Information page.
Managing Authentication Information
Authentication information is automatically stored when a user uses it for tool registration, usage, etc.
If necessary, you can add new authentication information, modify or delete existing authentication information.
To manage authentication information, follow these steps:
On the Main page, click the Manage icon in the top right corner. You will be moved to the Tenant Dashboard page.
In the left menu, click the Membership Information menu. You will be moved to the Membership Information page.
On the Membership Information page, click the Authentication Information tab.
Adding Authentication Information
To add authentication information, follow these steps:
On the Authentication Information tab, click the Add button. The Add Authentication Information popup window will open.
In the Add Authentication Information popup window, enter the information.
All tools and URLs that the user can access will be displayed.
After entering the information, click the Connection Test button.
Click the Save button.
Modifying Authentication Information
To modify authentication information, follow these steps:
In the Authentication Information tab, click on the authentication information you want to modify. The Modify Authentication Information popup window will open.
In the Modify Authentication Information popup window, enter the information and click the Connection Test button.
When the Save button is activated, click the Save button.
Deleting Authentication Information
To delete authentication information, follow these steps:
In the Authentication Information tab, select the checkbox of the authentication information you want to delete.
In the Authentication Information list, click the Delete button.
In the confirmation popup window, click the Confirm button.
Note
You cannot delete authentication information that is currently in use.
Managing Joined Tenants
Users can view the list of tenants they have joined.
They can also request to join a new tenant and leave a tenant they have already joined.
To manage joined tenants, follow these steps:
Click the Management icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
In the left menu, click the Membership Information menu. You will be moved to the Membership Information page.
On the Membership Information page, click the Joined Tenants tab.
Requesting Tenant Membership
To request tenant membership, follow these steps:
On the Joined Tenants tab, click the Join button. The Tenant Membership Request popup window will open.
In the Tenant Membership Request popup window, enter the tenant code you want to join and click the Search icon.
Enter the reason for the request and click the Add button.
Select the authority of the added tenant and click the Save button.
In the confirmation popup window, click the Confirm button.
Leaving a Tenant
To leave a tenant, follow these steps:
On the Joined Tenants tab, select the checkbox of the tenant you want to leave.
Click the Leave button.
In the confirmation popup window, click the Confirm button.
Managing Joined Project Groups
Users can view the list of project groups they have joined.
To manage joined project groups, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Membership Information menu on the left. You will be moved to the Membership Information page.
On the Membership Information page, click the Joined Project Groups tab.
When you click on a project group in the project group list, you will be moved to the Project Group Management page.
12.2.2.2 - Tutorial (Project Creation to Build/Deployment)
Through the following procedure, the user can create a DevOps Console project, build the source to create an image, and deploy a Helm chart to a Kubernetes cluster.
Creating a Project Group
The user can create a project group. The user must be a member of the tenant to create a project group.
Creating a Project Group
To create a project group, follow these steps:
Click the Create Project Group and Start button or Create Project Group button on the Main page. The Create Project Group popup window opens.
Enter the items and click the Save button to complete the project group creation.
Note
Approval from the tenant administrator may be required.
If there are no available tenants to choose from, click the Join Tenant Quick Link to proceed with tenant registration.
Joining a Tenant
To join a tenant, follow the procedure below.
In the Create Project Group popup window, click the Join Tenant shortcut link. The Tenant Join Request popup window will open.
In the Tenant Search field, enter the tenant code you want to join exactly and click the Search icon. The tenant information will be retrieved.
Verify that the searched tenant is correct, enter the Reason for Request, and click the Add button. It will be added to the list below.
Select the authority for the tenant added to the list below and click the Save button.
Creating a Project (Helm Chart Deployment)
Note
The user must be a member of the project group and tenant to create a project.
Table. New Repository Creation and Existing Repository Activation Conditions
Note
Once saved, authentication information can be reused without re-entering account information by selecting Use Saved Authentication Information, and the Connection Test can then be performed directly.
In the Code Repository section, select the Code Repository type.
Select Create a New Repository or Use an Existing Repository and enter the information.
Enter the Authentication Information and click the Connection Test button.
If the Next button is activated, click the Next button.
Item
Description
Repository Type
Select the code repository to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code in the Samsung Cloud Platform Console.
Unregistered Tool: You can use it by entering the domain of an unregistered tool. The unregistered tool item only appears when the App. template is Environment Only (without source code).
New/Existing Usage
Select whether to Create a New Repository or Use an Existing Repository.
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information
If you don’t have an account, you can create one by clicking the Don’t have an account? link and opening the Account Creation Information popup window.
After creating a new account, please change your password through the Initial Password Setting link.
(Unregistered Tool) Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in DevOps Console.
You must go through an additional verification process by clicking the URL Check button.
The user can set up the repository to store the built container image through the image repository setting step.
Note
Once saved, authentication information can be reused without re-entering account information by selecting Use Saved Authentication Information, and the Connection Test can then be performed directly.
The user can deploy through direct configuration using Helm charts.
When a Helm release name and Helm chart are selected, the Helm chart installation items and the default Values.yaml items included in the chart are displayed.
Available Helm charts are linked to App templates. You can modify or delete them through Managing Supported Helm Charts.
To set up the deployment target environment, follow these steps:
Select direct configuration using Helm charts in the deployment target section.
Enter the Helm release name.
Click the Search button to select the Helm chart to use.
Modify the Values.yaml and click the Validation Check button.
Click the Next button when it is activated.
Item
Description
Deployment Target
Select the deployment target.
Helm Release Name
Enter the name of the Helm release to be created.
This name must be unique within the namespace of the cluster to be deployed.
Helm Chart
Select the Helm chart.
When a Helm chart is selected, detailed information about the selected chart is displayed below.
K8S Information
Displays the information of the Kubernetes cluster required for the Values.yaml configuration.
Values.yaml
Modify the Values.yaml content.
This is the values.yaml file used when installing the Helm chart.
Displays the OS information of the environment where the build agent runs.
User Information
IDP-connected Jenkins
Click the User Check button to verify user registration.
If not registered as a Jenkins user, a User Registration Guide popup window will open; click the Go to Jenkins link to proceed with User Registration or Initial Jenkins Login.
Non-IDP-connected Jenkins
Enter authentication information and click the Connection Test button.
Environment Variable Setting
Set environment variables to be registered in the Jenkins pipeline.
Image Tag Pattern
Select the method for assigning tags to container images.
Deploy Strategy
Select the deployment method for container images.
Deployment Result Recipient
Select the user to receive the result after the build pipeline is completed.
Table. Build Pipeline Setting Items for Project Creation
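As an illustration of the Image Tag Pattern item above, one common tagging scheme combines the build date with a short commit hash. The commit id here is a made-up value, not a pattern DevOps Console necessarily generates:

```shell
commit='a1b2c3d4e5f6a7b8'                    # hypothetical full commit id
short=$(printf '%s' "$commit" | cut -c1-7)   # 7-character short hash
tag="$(date +%Y%m%d).$short"                 # e.g. 20250101.a1b2c3d

echo "$tag"
```

A date-plus-hash tag is both sortable and traceable back to the exact source revision, which is why it is a frequent default in CI pipelines.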
Setting Up User Definitions
Users can specify and modify the path of the Dockerfile file to be used for building.
Additionally, you can check and modify the final script generated based on the information set up in Configuring the Build Pipeline.
To configure Dockerfile and pipeline scripts, follow these steps:
On the Customize Settings page, enter the information and click the Connection Test button.
When the Next button is activated, click the Next button.
Item
Description
Dockerfile Settings
Choose whether to Create a new Dockerfile or Use an existing Dockerfile.
Using an existing Dockerfile can only be selected if you choose the Environment Only App template and select Use an existing repository in Configuring the Code Repository.
Dockerfile Path
Specify the path of the Dockerfile file in the source code.
For more information on Jenkins pipeline scripts, refer to the official website.
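For reference, the Dockerfile Path setting corresponds to the `-f` option of a container image build; the path and image name below are placeholder values:

```shell
# Build an image using a Dockerfile that is not at the repository root
# (docker/Dockerfile and myapp:1.0.0 are placeholder values)
docker build -f docker/Dockerfile -t myapp:1.0.0 .
```

The generated pipeline script passes the configured path to the build step in the same way.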
Completing Project Creation
The user can finally check the project and tool information to be created and start creating the project.
To complete the project creation, follow the procedure below.
On the Summary Information screen, check the information and click the Complete button.
The Project Creation popup window opens and the project creation proceeds.
After the project creation is complete, click the Confirm button to move to the Project page.
Note
Project creation cannot be canceled while in progress. Once the project is created successfully, the Confirm button is activated.
Checking Build Pipeline Execution
On the Project page, you can check the pipeline execution status, and the build pipeline is automatically executed when the project is created for the first time.
If the build pipeline fails, modify and re-execute the pipeline through the Build Pipeline menu on the left.
To check the build pipeline execution, follow these steps:
Click the Project card on the Main page. You will be moved to the Project Dashboard page.
Click the Build/Deployment > Build Pipeline menu on the left.
Checking Deployment Results
After the pipeline execution is complete, you can check the Helm chart deployment results.
For more information on Helm chart deployment results, refer to Helm Release.
To check the deployment results, follow these steps:
Click the Project card on the Main page. You will be moved to the Project Dashboard page.
Click the Build/Deployment > Kubernetes Deployment menu on the left. You will be moved to the Kubernetes Deployment page.
Click the Helm Release list to check the detailed deployment results.
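For reference, the deployment results and one-click rollback described earlier correspond to standard Helm release operations; the release name, namespace, and revision number below are placeholders:

```shell
# Inspect the current state of a deployed release
helm status myrelease -n my-namespace

# List the release's revision history
helm history myrelease -n my-namespace

# Roll back to an earlier revision (here, revision 2)
helm rollback myrelease 2 -n my-namespace
```

The console's rollback button performs the equivalent of selecting a revision from the history and rolling back to it.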
Through the following procedure, the user can create a new build pipeline in an existing DevOps Console project, build the source to create an image, and deploy it to a Kubernetes cluster.
Table. New Repository Creation and Existing Repository Activation Conditions
To add a code repository, follow the procedure below.
Click the Project card on the Main page. You will be moved to the Project Dashboard page.
Click Repository > Code Repository in the left menu.
Click the Add Code Repository button at the top right. The Add Code Repository page will open.
Enter/set each item and click the Connection Test button.
Click the Save button to complete Add Code Repository.
Item
Description
Repository Type
Select the type of repository to use
Registered Tool: Select and use the type of SCM Repository tool available to the user (Github, Gitlab, etc.).
DevOps Code: Available when a DevOps Code application has been made in Samsung Cloud Platform Console.
Unregistered Tool: Enter the domain of an unregistered tool to use it. The unregistered tool item only appears when the App template is Environment Only.
New/Existing Usage
Select whether to Create a New Repository or Use an Existing Repository
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information.
Repository Information
Enter repository information
Code repositories not registered as tools in DevOps Console can be used.
Click the URL Check button to proceed with the verification process.
Table. Code Repository Setting Items
Adding Image Repository (Option)
Note
Proceed only if a new image repository is required.
To add an image repository, follow the procedure below.
Adding App Image Repository
On the Image Repository page, click the App Image Repository Addition button at the top right. Move to the App Image Repository Addition page.
On the App Image Repository Addition page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type Selection
Select the image repository type. If you want to use an image repository not registered in DevOps Console, select the Image Registry type.
Repository Creation Selection
Choose whether to create a new repository or use an existing one.
If you selected the Docker Hub or Image Registry type earlier, you can only select Use Existing Repository.
Registered Tool
Enter repository information.
Unregistered Tool
Enter repository information
You can register an image repository that has not been registered as a tool in DevOps Console.
Click the URL Check button to proceed with the verification process.
You can only select Use Existing Repository.
Table. App Image Repository Addition Input Items
Adding Pull-only Image Repository
On the Image Repository page, click the Add Pull-only Image Repository button at the top right. It moves to the Add Pull-only Image Repository page.
On the Add Pull-only Image Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Helm Installation
To install Helm, follow the procedure below.
Click the Project card on the Main page. It will move to the Project Dashboard page.
Click Build/Deployment > Helm Installation in the left menu.
Select the K8S cluster to install in the K8S cluster item.
Click the desired Helm chart to move to the Helm Chart Details screen.
Click the Helm Installation button. It will move to the Helm Chart Installation screen.
Enter each item and click the Next button.
Item
Description
Release Name
Enter the name to use for the Helm chart release. It must be unique and not duplicated within the namespace.
Type
Select Development or Operation.
Version
Select the version of the chart to install Helm.
K8S Cluster
Displays the target K8S cluster for Helm installation. It cannot be changed, and if a change is desired, select the K8S cluster in Helm Installation Start.
Namespace
Select the target namespace for Helm installation from the list.
Reference Information
Reference information provided by the selected K8S cluster. Click each tab to check detailed information.
Chart Included Default Values.yaml
The values.yaml file can be modified to run the Helm installation with the desired values. If necessary, check the reference information and modify the values.yaml file accordingly.
Input Type
The input type item is only displayed for Helm charts that support form input.
values.yaml: Modify the value in the general Helm chart yaml editor screen.
You can switch between Form and values.yaml input, but previously entered content will be reset.
Form Input
This screen is displayed when Form is selected as the input type. Check each item and enter its value. After entering, click the Validation Check button to verify the input values.
Table. Helm Installation Setting Items
The Helm Chart Installation popup window will open. Click the Run button to complete Helm installation.
Once the installation is complete, the Kubernetes deployment page will open.
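For reference, the console's Helm installation corresponds roughly to a `helm install` run against the selected cluster. The sketch below only prints the equivalent command; the release name, chart, version, and namespace are hypothetical values, not defaults of DevOps Console.

```shell
# Rough CLI equivalent of the console's Helm installation (sketch only).
# "my-release", "demo/nginx", "1.2.3", and "dev-ns" are hypothetical values.
RELEASE_NAME="my-release"   # must be unique within the namespace
CHART="demo/nginx"          # chart selected on the Helm Installation screen
CHART_VERSION="1.2.3"       # Version item
NAMESPACE="dev-ns"          # Namespace item

# The console applies the (optionally modified) values.yaml on installation:
HELM_CMD="helm install $RELEASE_NAME $CHART --version $CHART_VERSION --namespace $NAMESPACE -f values.yaml"
echo "$HELM_CMD"   # printed instead of executed in this sketch
```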
Adding Build Pipeline
To add a build pipeline, follow the procedure below.
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Build/Deployment > Build Pipeline menu in the left menu. Move to the Build Pipeline page.
On the Build Pipeline page, click the Add Pipeline button at the top right. Move to the Add Pipeline page.
Enter/set each item on the Add Pipeline page.
Click the Next button.
Item
Description
Classification
Select development or operation classification
Actions that can be performed by role vary depending on development and operation.
Select Jenkins to add a build pipeline from the list.
Build Agent
Select the agent (build environment) where the build pipeline will run. Click the Info icon to view the list of tools provided by the agent.
Build Environment OS
Displays the OS information of the environment where the build agent runs.
Folder Type
Select the folder type.
Existing folder: Add a pipeline under an existing folder in Jenkins.
New folder: Create a new folder in Jenkins and add a pipeline under it.
Folder
Select a folder from the list or enter the name of the new folder to be created.
Pipeline Name
Enter the pipeline name.
Parameter Setting
Set the parameters to be used in the pipeline.
Environment Variable Setting
Set the environment variables to be used in the pipeline.
Stage Setting
Set the stages to be used in the pipeline.
Build Result Email Recipient Setting
Set the recipient to receive the result email after the pipeline is completed (success/failure).
Table. Build Pipeline Addition Setting Items
Setting Parameters
To set parameters to use when running the pipeline, follow the procedure below.
Click the Parameters area. The Parameter Registration page will open on the right.
Click the Add button to open the Add Parameter popup window.
Add parameters and click the Apply button to complete parameter settings.
Setting Environment Variables
To set environment variables to be used in the pipeline, follow the procedure below.
Click the Environment Variables section. The Environment Variable Registration page opens on the right.
A list of pre-registered environment variables appears. Select the checkbox of each environment variable to be used.
Check the Selected Environment Variables and click the Apply button to complete the environment variable setting.
Setting Build Result Email Recipients
To set up the recipient to receive the build result by email, follow the procedure below.
Click the Email Recipient area. The Add Email Recipient page opens on the right.
In the Search area, search for and add the recipient.
Click the Apply button to complete the email recipient setting.
Setting Additional Stages
Setting Checkout Stage
To add a Checkout stage, follow the procedure below.
Click the New Stage area. The stage setting page will open on the right.
Select Checkout as the Stage Type.
Enter information and click Apply. (The code repository added in Adding Code Repository (Option) can be selected from the URL.)
Item
Description
URL
Select the code repository to perform checkout.
Branch Name
Enter the branch name to checkout.
Table. Checkout Stage Setting Items
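The Checkout stage behaves like a branch-scoped clone of the selected code repository. A minimal sketch follows; the repository URL and branch name are hypothetical examples, and the command is printed rather than executed.

```shell
# What the Checkout stage does, sketched as a plain git command.
REPO_URL="https://example.com/group/app.git"   # URL item (code repository) - hypothetical
BRANCH="develop"                               # Branch Name item - hypothetical

CHECKOUT_CMD="git clone --branch $BRANCH --single-branch $REPO_URL"
echo "$CHECKOUT_CMD"   # printed instead of executed in this sketch
```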
Setting Build Stage
To add a Build stage, follow these steps:
Click the plus icon to add a new stage.
Click the new stage area. The Stage Settings page opens on the right.
On the Stage Settings page, select Build as the Stage Type.
On the Stage Settings page, enter the information and click the Apply button.
Item
Description
Language
Select the programming language used by the application.
Build Tool
Select the Build tool used for application building. Provides default Shell commands based on the selected Build tool.
Shell Command
Enter the command to use for application building. All commands available in the Shell can be used.
Table. Build Stage Input Items
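As an illustration of the Build stage inputs, the sketch below assumes a Java application built with Maven; other languages and build tools would use their own commands, and the exact default command pre-filled by the console may differ.

```shell
# Hypothetical Build stage inputs for a Java/Maven application (sketch).
LANGUAGE="Java"      # Language item - assumption
BUILD_TOOL="Maven"   # Build Tool item - assumption

# A typical non-interactive Maven package command for the Shell Command item:
SHELL_CMD="mvn -B clean package -DskipTests"
echo "$SHELL_CMD"
```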
Setting Docker Build Stage
To add a Docker Build stage, follow these steps:
Click the Plus icon to add a new stage.
Click the New Stage area. The Stage Settings page opens on the right.
Select Docker Build as the Stage Type.
After entering the information, click the Apply button. (Registry URL allows you to select the image repository added in Adding Image Repository (Option).)
Item
Description
Example
Registry URL (docker push)
Select the image repository where the completed Docker build result image will be pushed.
ID
ID value of the account to be used in the image repository
Image Tag Pattern
The Docker image tag will be automatically generated based on the selected pattern.
{YYYYMMDD}: year, month, day
{HHMMSS}: hour, minute, second
{BUILD_NUM}: current build pipeline execution number
{YYYYMMDD}.{HHMMSS}: 20200414.150938
{YYYYMMDD}.{BUILD_NUM}: 20220414.13
Add Base Image Repository
The Add Base Image Repository popup window will open.
If the image repository that provides the base image used in the Dockerfile (the FROM clause, docker pull) differs from the image repository of the Registry URL (docker push), select the image repository for docker pull.
Image Build Tool
Displays the image build tool.
Pre-build Command
If there are commands that must be executed before building the Docker image, write them in Shell command format.
cp target/*.jar docker/
Image Build Folder
If the Docker image build needs to be executed in a specific folder, select the checkbox and enter the folder path.
docker
Dockerfile
Enter the Dockerfile file name.
Dockerfile
Image Build Options
If additional options are required for the image build tool, enter them.
--no-cache
Build Command
Displays the actual image build command to be executed.
Post-build Command
If there are commands that must be executed after building the Docker image, write them in Shell command format.
rm -rf docker/*.jar
Table. Docker Build Stage Input Items
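The items above can be tied together as follows: the Image Tag Pattern is resolved from the build date and pipeline run number, then the pre-build command, image build, push, and post-build command run in order. The sketch below uses the table's example values with a hypothetical registry URL and image name, and only prints the commands.

```shell
# How the Image Tag Pattern resolves, plus the Docker Build stage flow (sketch).
BUILD_NUM=13                 # {BUILD_NUM}: current build pipeline execution number
YYYYMMDD=$(date +%Y%m%d)     # {YYYYMMDD}: year, month, day
HHMMSS=$(date +%H%M%S)       # {HHMMSS}: hour, minute, second

# {YYYYMMDD}.{BUILD_NUM} pattern, e.g. 20220414.13:
IMAGE_TAG="${YYYYMMDD}.${BUILD_NUM}"

REGISTRY_URL="registry.example.com/myteam"   # Registry URL (docker push) - hypothetical
IMAGE="app"                                  # image name - hypothetical

echo "cp target/*.jar docker/"      # Pre-build Command (table example)
echo "docker build --no-cache -f docker/Dockerfile -t $REGISTRY_URL/$IMAGE:$IMAGE_TAG docker"
echo "docker push $REGISTRY_URL/$IMAGE:$IMAGE_TAG"
echo "rm -rf docker/*.jar"          # Post-build Command (table example)
```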
Setting Deploy to K8S Stage
To add a Deploy to K8S stage, follow the procedure below.
Click the Add icon to add a new stage.
Click the New Stage area. The stage setting page will open on the right.
Select Deploy to K8S as the Stage Type.
Enter information and click the Apply button. (When selecting Helm Release (Helm Chart Type) in Type, the Helm release added in Helm Installation can be selected.)
Item
Description
Type
Select deployment type
Helm Release (Helm Chart Type)
Workload
ArgoCD
K8S Cluster
Select K8S cluster
Selecting Helm Release (Helm Chart Type) will display a list of Helm releases deployed through DevOps Console.
Namespace
Select namespace.
Helm Release
Select Helm release.
Deployment Method
Select deployment method
Recreate
Rolling Update
Registry URL
Select the image repository where the image to be deployed to Kubernetes is docker pushed.
Secret
Select secret information input method
Auto Generation: Automatically generate and use the secret corresponding to the selected image repository in DevOps Console.
Use Existing Secret: Use a pre-created secret through K8S secret management.
Table. Deploy to K8S Stage Input Items
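For the Secret item's "Use Existing Secret" path, a registry pull secret is typically created ahead of time in the target namespace. The sketch below shows the standard kubectl form of such a secret; the secret name, namespace, registry, and credentials are all hypothetical placeholders, and the command is printed rather than executed.

```shell
# Creating an image pull secret for the "Use Existing Secret" option (sketch).
NAMESPACE="dev-ns"                    # target namespace - hypothetical
SECRET_NAME="registry-cred"           # secret name - hypothetical
REGISTRY_URL="registry.example.com"   # image repository of the Registry URL item - hypothetical

SECRET_CMD="kubectl create secret docker-registry $SECRET_NAME --docker-server=$REGISTRY_URL --docker-username=builder --docker-password=CHANGE_ME --namespace=$NAMESPACE"
echo "$SECRET_CMD"   # printed instead of executed in this sketch
```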
Checking Final Pipeline Script
Check the actual build pipeline script to be created. Modify the script directly if necessary.
Click the Complete button to complete adding the pipeline.
Result of Adding Pipeline
Note
The added pipeline will not be executed automatically. If execution is required, run the pipeline directly.
The user can create a new build pipeline in an already created DevOps Console project, build the source to create an image, and proceed with workload deployment to a Kubernetes cluster through the following procedure.
To start adding build/deployment, follow the procedure below.
Click the Project card on the Main page. It moves to the Project Dashboard page.
Adding Code Repository (Option)
Guide
Proceed only if a new code repository is needed.
To add a code repository, follow the procedure below.
On the Code Repository page, click the Add Code Repository button in the top right corner. It will move to the Add Code Repository page.
On the Add Code Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type
Select the repository type to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code use in the Samsung Cloud Platform Console.
Unregistered Tool: You can use it by entering the domain of an unregistered tool. The unregistered tool item only appears when the App template is Environment Only (without source code).
New/Existing Usage
Select Create New Repository or Use Existing Repository
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information.
Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in the DevOps Console.
An additional URL check process is required.
Table. Add Code Repository Input Items
Adding Image Repository (Option)
Guide
Proceed only if a new image repository is needed.
To add an image repository, follow the procedure below.
Adding App Image Repository
On the Image Repository page, click the App Image Repository Addition button at the top right. Move to the App Image Repository Addition page.
On the App Image Repository Addition page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type Selection
Select the image repository type. If you want to use an image repository not registered in DevOps Console, select the Image Registry type.
Repository Creation Selection
Choose whether to create a new repository or use an existing one.
If you selected the Docker Hub or Image Registry type earlier, you can only select Use Existing Repository.
Registered Tool
Enter repository information.
Unregistered Tool
Enter repository information
You can register an image repository that has not been registered as a tool in DevOps Console.
Click the URL Check button to proceed with the verification process.
You can only select Use Existing Repository.
Table. App Image Repository Addition Input Items
Adding Pull-only Image Repository
On the Image Repository page, click the Add Pull-only Image Repository button at the top right. It moves to the Add Pull-only Image Repository page.
On the Add Pull-only Image Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Adding Workload
To add a workload, follow the procedure below.
On the Main page, click the Project card. It moves to the Project Dashboard page.
In the left menu, click the Build/Deploy > Kubernetes Deployment menu. It moves to the Kubernetes Deployment page.
On the Kubernetes Deployment page, click the Add Workload menu. The Add Workload popup window opens.
In the Add Workload popup window, enter the information and click the Save button.
In the confirmation popup window, click the Confirm button to complete adding the workload.
Item
Description
Target to be retrieved
Only workloads deployed with the same image as the base image of the App template used when the project was first created are displayed.
Workload already registered in the project
It shows the workload already registered in the project.
Workloads already registered in the project cannot be added.
Table. Input items for adding workload
Modifying K8S Authentication Information
K8S authentication information refers to the authentication information used to verify the authority to use the K8S cluster/namespace when performing deployment in the build pipeline.
To modify the K8S authentication information, follow the procedure below.
On the Deployment Details page, click the Edit Authentication Information icon to the right of K8S Cluster/Namespace. The Edit Authentication Information popup window will open.
The authentication information is fixed to the account of the logged-in user. Click the Save button to modify it.
In the confirmation popup window, click the Confirm button.
The K8S authentication information will be changed to the logged-in user.
Adding Build Pipeline
To add a build pipeline, follow the procedure below.
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Build/Deployment > Build Pipeline menu in the left menu. Move to the Build Pipeline page.
On the Build Pipeline page, click the Add Pipeline button at the top right. Move to the Add Pipeline page.
Enter/set each item on the Add Pipeline page.
Click the Next button.
Item
Description
Classification
Select development or operation classification
Actions that can be performed by role vary depending on development and operation.
Select Jenkins to add a build pipeline from the list.
Build Agent
Select the agent (build environment) where the build pipeline will run. Click the Info icon to view the list of tools provided by the agent.
Build Environment OS
Displays the OS information of the environment where the build agent runs.
Folder Type
Select the folder type.
Existing folder: Add a pipeline under an existing folder in Jenkins.
New folder: Create a new folder in Jenkins and add a pipeline under it.
Folder
Select a folder from the list or enter the name of the new folder to be created.
Pipeline Name
Enter the pipeline name.
Parameter Setting
Set the parameters to be used in the pipeline.
Environment Variable Setting
Set the environment variables to be used in the pipeline.
Stage Setting
Set the stages to be used in the pipeline.
Build Result Email Recipient Setting
Set the recipient to receive the result email after the pipeline is completed (success/failure).
Table. Build Pipeline Addition Setting Items
Setting up Parameters
To set parameters to use when running a pipeline, follow these steps:
Click the Parameters card.
Click the Add button to add parameters.
Click the Apply button to complete the parameter settings.
Setting up Environment Variables
To set environment variables to be used in the pipeline, follow the procedure below.
Click the Environment Variables section. The Environment Variable Registration page opens on the right.
A list of pre-registered environment variables appears. Select the checkbox of each environment variable to be used.
Check the Selected Environment Variables and click the Apply button to complete the environment variable setting.
Setting up Build Result Email Recipients
To set up the recipient to receive the build result by email, follow the procedure below.
Click the Email Recipient area. The Add Email Recipient page opens on the right.
In the Search area, search for and add the recipient.
Click the Apply button to complete the email recipient setting.
Setting up Additional Stages
Setting up Checkout Stage
To add a Checkout stage, follow these steps:
Click the New Stage area. The Stage Settings page opens on the right.
On the Stage Settings page, select Checkout as the Stage Type.
Enter the information and click the Apply button. (You can select the code repository added in Adding a Code Repository (Option) from the URL.)
Item
Description
URL
Select the code repository to perform checkout.
Branch Name
Enter the branch name to checkout.
Table. Checkout Stage Input Items
Setting up Build Stage
To add a Build stage, follow these steps:
Click the plus icon to add a new stage.
Click the new stage area. The Stage Settings page opens on the right.
On the Stage Settings page, select Build as the Stage Type.
On the Stage Settings page, enter the information and click the Apply button.
Item
Description
Language
Select the programming language used by the application.
Build Tool
Select the Build tool used for application building. Provides default Shell commands based on the selected Build tool.
Shell Command
Enter the command to use for application building. All commands available in the Shell can be used.
Table. Build Stage Input Items
Setting up Docker Build Stage
To add a Docker Build stage, follow these steps:
Click the Plus icon to add a new stage.
Click the New Stage area. The Stage Settings page opens on the right.
Select Docker Build as the Stage Type.
After entering the information, click the Apply button. (Registry URL allows you to select the image repository added in Adding Image Repository (Option).)
Item
Description
Example
Registry URL (docker push)
Select the image repository where the completed Docker build result image will be pushed.
ID
ID value of the account to be used in the image repository
Image Tag Pattern
The Docker image tag will be automatically generated based on the selected pattern.
{YYYYMMDD}: year, month, day
{HHMMSS}: hour, minute, second
{BUILD_NUM}: current build pipeline execution number
{YYYYMMDD}.{HHMMSS}: 20200414.150938
{YYYYMMDD}.{BUILD_NUM}: 20220414.13
Add Base Image Repository
The Add Base Image Repository popup window will open.
If the image repository that provides the base image used in the Dockerfile (the FROM clause, docker pull) differs from the image repository of the Registry URL (docker push), select the image repository for docker pull.
Image Build Tool
Displays the image build tool.
Pre-build Command
If there are commands that must be executed before building the Docker image, write them in Shell command format.
cp target/*.jar docker/
Image Build Folder
If the Docker image build needs to be executed in a specific folder, select the checkbox and enter the folder path.
docker
Dockerfile
Enter the Dockerfile file name.
Dockerfile
Image Build Options
If additional options are required for the image build tool, enter them.
--no-cache
Build Command
Displays the actual image build command to be executed.
Post-build Command
If there are commands that must be executed after building the Docker image, write them in Shell command format.
rm -rf docker/*.jar
Table. Docker Build Stage Input Items
Setting up Deploy to K8S Stage
To add the Deploy to K8S stage, follow the procedure below.
Click the + icon to add a new stage.
Click the New Stage area. The Stage Setting page opens on the right.
On the Stage Setting page, select Deploy to K8S as the Stage Type.
On the Stage Setting page, enter the information and click the Apply button. (If you select the type as workload, you can select the workload added in Adding Workload.)
Item
Description
Type
Select deployment type
Helm Release (Helm Chart Type)
Workload
ArgoCD
K8S Cluster
Select K8S cluster
Selecting Helm Release (Helm Chart Type) will display a list of Helm releases deployed through DevOps Console.
Namespace
Select namespace.
Helm Release
Select Helm release.
Deployment Method
Select deployment method
Recreate
Rolling Update
Registry URL
Select the image repository where the image to be deployed to Kubernetes is docker pushed.
Secret
Select secret information input method
Auto Generation: Automatically generate and use the secret corresponding to the selected image repository in DevOps Console.
Use Existing Secret: Use a pre-created secret through K8S secret management.
Table. Deploy to K8S Stage Input Items
Checking the Final Pipeline Script
Check the actual build pipeline script to be created. Modify the script directly if necessary.
Click the Complete button to complete adding the pipeline.
Result of Adding Pipeline
Note
The added pipeline will not be executed automatically. If execution is required, run the pipeline directly.
The user can create a new build pipeline in an already created DevOps Console project, build the source, create an image, and deploy it to a VM server (VM Deployment) through the following procedure.
To start adding build/deployment, follow the procedure below.
Click the Project card on the Main page. It will move to the Project Dashboard page.
Adding Code Repository (Option)
Notice
Proceed only if a new code repository is needed.
To add a code repository, follow the procedure below.
On the Code Repository page, click the Add Code Repository button in the top right corner. It will move to the Add Code Repository page.
On the Add Code Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type
Select the repository type to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code use in the Samsung Cloud Platform Console.
Unregistered Tool: You can use it by entering the domain of an unregistered tool. The unregistered tool item only appears when the App template is Environment Only (without source code).
New/Existing Usage
Select Create New Repository or Use Existing Repository
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information.
Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in the DevOps Console.
An additional URL check process is required.
Table. Add Code Repository Input Items
Adding Image Repository (Option)
Notice
Proceed only if a new image repository is needed.
To add an image repository, follow the procedure below.
Adding App Image Repository
On the Image Repository page, click the App Image Repository Addition button at the top right. Move to the App Image Repository Addition page.
On the App Image Repository Addition page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type Selection
Select the image repository type. If you want to use an image repository not registered in DevOps Console, select the Image Registry type.
Repository Creation Selection
Choose whether to create a new repository or use an existing one.
If you selected the Docker Hub or Image Registry type earlier, you can only select Use Existing Repository.
Registered Tool
Enter repository information.
Unregistered Tool
Enter repository information
You can register an image repository that has not been registered as a tool in DevOps Console.
Click the URL Check button to proceed with the verification process.
You can only select Use Existing Repository.
Table. App Image Repository Addition Input Items
Adding Pull-only Image Repository
On the Image Repository page, click the Add Pull-only Image Repository button at the top right. It moves to the Add Pull-only Image Repository page.
On the Add Pull-only Image Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Adding Artifact Repository (Option)
Notice
Proceed only if a new artifact repository is needed.
If used as a rollback artifact repository, only Nexus with raw(hosted) repository type is available.
To add an artifact repository, follow these steps:
On the Artifact Repository page, click the Add Artifact Repository button in the top right corner. It will move to the Add Artifact Repository page.
On the Add Artifact Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Creation Option
Select whether to create a new repository or use an existing one.
Basic Information Input
Enter the Base URL, select the repository type, and enter the repository and authentication information.
Table. Input Items for Adding an Artifact Repository
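Since a rollback artifact repository must be a Nexus raw (hosted) repository, the sketch below shows the standard HTTP upload form for that repository type. The Base URL, repository name, credentials, and artifact file are hypothetical placeholders, and the command is printed rather than executed.

```shell
# Uploading a build artifact to a Nexus raw (hosted) repository (sketch).
BASE_URL="https://nexus.example.com"   # Base URL item - hypothetical
RAW_REPO="rollback-artifacts"          # raw (hosted) repository name - hypothetical
ARTIFACT="app-1.0.0.jar"               # artifact file - hypothetical

UPLOAD_CMD="curl -u user:pass --upload-file $ARTIFACT $BASE_URL/repository/$RAW_REPO/$ARTIFACT"
echo "$UPLOAD_CMD"   # printed instead of executed in this sketch
```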
Adding VM Server Group/VM Server (Option)
Notice
Proceed only if a new VM server group/VM server is needed.
To add a VM server group, follow these steps:
On the Main page, click the Management icon at the top right. Move to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu from the left menu. Move to the VM Server Group page.
On the VM Server Group page, click the Add button. Move to the Add VM Server Group page.
Enter the basic information and click the Save button to complete the VM server group settings.
Item
Description
Server Group Name
Enter the name of the VM server group.
Description
Enter a description.
Type
Select the type of VM server group
SSH: Deployment is performed through SSH commands during VM deployment.
Agent: Deployment is performed using an agent during VM deployment. (Agent Connection)
VM Server
Add: Add the VM server to be included in the VM server group.
Delete: Check the checkbox of the VM server to be deleted from the VM server group and click Delete to delete it.
Table. Input Items for Adding a VM Server Group
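For the SSH type above, a deployment amounts to copying the artifact to each VM server over SSH and restarting the application. A minimal sketch follows; the host, port, user, artifact, paths, and service name are hypothetical, and the commands are printed rather than executed.

```shell
# Sketch of an SSH-type deployment to one VM server in the group.
VM_IP="10.0.0.10"    # IP item of the VM server - hypothetical
SSH_PORT=22          # SSH Port item - hypothetical
SSH_USER="deploy"    # from the Authentication Information item - hypothetical
ARTIFACT="app.jar"   # built artifact - hypothetical

echo "scp -P $SSH_PORT $ARTIFACT $SSH_USER@$VM_IP:/opt/app/"
echo "ssh -p $SSH_PORT $SSH_USER@$VM_IP 'systemctl restart app'"
```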
Adding VM Server
To add a VM server, you need Manager permissions for the corresponding VM server group.
Note
The VM server addition popup window may open differently depending on the type of VM server group.
To add a VM server, follow these steps:
Click the Manage icon in the top right corner of the Main page. You will be taken to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu from the left menu. You will be taken to the VM Server Group page.
Click the VM server group where you want to add the VM server from the VM server group list on the VM Server Group page. You will be taken to the VM Server Group Details page.
Click the Add button on the VM Server Group Details page. You will be taken to the Add VM Server page.
Enter the basic information on the Add VM Server page and click the Add button to complete the VM server settings.
Item
Description
Server Name
Enter the name of the VM server.
Description
Enter a description.
IP
Enter the IP address.
SSH Port
Enter the port of the VM server to use for SSH connection.
OS
Enter the operating system.
Location
Select a location.
Authentication Information
Enter the authentication information of the VM server to use for SSH connection.
Secret Key
This is a secret key to authenticate the VM server where the agent is installed.
Table. Input Items for Adding a VM Server
Adding Build Pipeline
To add a build pipeline, follow the procedure below.
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Build/Deployment > Build Pipeline menu in the left menu. Move to the Build Pipeline page.
On the Build Pipeline page, click the Add Pipeline button at the top right. Move to the Add Pipeline page.
Enter/set each item on the Add Pipeline page.
Click the Next button.
Item
Description
Classification
Select development or operation classification
Actions that can be performed by role vary depending on development and operation.
Select Jenkins to add a build pipeline from the list.
Build Agent
Select the agent (build environment) where the build pipeline will run. Click the Info icon to view the list of tools provided by the agent.
Build Environment OS
Displays the OS information of the environment where the build agent runs.
Folder Type
Select the folder type.
Existing folder: Add a pipeline under an existing folder in Jenkins.
New folder: Create a new folder in Jenkins and add a pipeline under it.
Folder
Select a folder from the list or enter the name of the new folder to be created.
Pipeline Name
Enter the pipeline name.
Parameter Setting
Set the parameters to be used in the pipeline.
Environment Variable Setting
Set the environment variables to be used in the pipeline.
Stage Setting
Set the stages to be used in the pipeline.
Build Result Email Recipient Setting
Set the recipient to receive the result email after the pipeline is completed (success/failure).
Table. Build Pipeline Addition Setting Items
Setting up Parameters
To set parameters to use when running a pipeline, follow these steps:
Click the Parameters card.
Click the Add button to add parameters.
Click the Apply button to complete the parameter settings.
Setting up Environment Variables
To set environment variables to be used in the pipeline, follow the procedure below.
Click the Environment Variables section. The Environment Variable Registration page opens on the right.
A list of pre-registered environment variables appears. Select the checkbox of each environment variable to be used.
Check the Selected Environment Variables and click the Apply button to complete the environment variable setting.
Setting up Build Result Email Recipients
To set up the recipient to receive the build result by email, follow the procedure below.
Click the Email Recipient area. The Add Email Recipient page opens on the right.
In the Search area, search for and add the recipient.
Click the Apply button to complete the email recipient setting.
Setting up Additional Stages
Setting up Checkout Stage
To add a Checkout stage, follow these steps:
Click the New Stage area. The Stage Settings page opens on the right.
On the Stage Settings page, select Checkout as the Stage Type.
Enter the information and click the Apply button. (You can select the code repository added in Adding a Code Repository (Option) from the URL.)
Item
Description
URL
Select the code repository to perform checkout.
Branch Name
Enter the branch name to checkout.
Table. Checkout Stage Input Items
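In effect, the Checkout stage clones the repository selected in URL and checks out the branch given in Branch Name. A minimal shell sketch, using a local repository as a stand-in for the selected URL (repository and branch names here are illustrative):

```shell
# Stand-in for the Checkout stage: a local repository substitutes for a
# remote URL so the sketch is self-contained.
git init -q -b main origin-repo
git -C origin-repo -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "initial commit"

git clone -q origin-repo workspace   # URL: the code repository to check out
git -C workspace checkout -q main    # Branch Name: the branch to check out
git -C workspace rev-parse --abbrev-ref HEAD
```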
Setting up Build Stage
To add a Build stage, follow these steps:
Click the plus icon to add a new stage.
Click the new stage area. The Stage Settings page opens on the right.
On the Stage Settings page, select Build as the Stage Type.
On the Stage Settings page, enter the information and click the Apply button.
Item
Description
Language
Select the programming language used by the application.
Build Tool
Select the Build tool used for application building. Provides default Shell commands based on the selected Build tool.
Shell Command
Enter the command to use for application building. All commands available in the Shell can be used.
Table. Build Stage Input Items
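The console provides a default shell command based on the selected Build tool. A minimal sketch of such a mapping, where the specific default commands are assumptions rather than the console's actual defaults:

```shell
# Hypothetical mapping from the selected Build tool to a default shell
# command; the commands below are illustrative, not the console's own.
default_build_command() {
  case "$1" in
    Maven)  echo "mvn clean package -DskipTests" ;;
    Gradle) echo "./gradlew build -x test" ;;
    npm)    echo "npm ci && npm run build" ;;
    *)      echo "unsupported build tool: $1" >&2; return 1 ;;
  esac
}
default_build_command Maven   # → mvn clean package -DskipTests
```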
Setting up Docker Build Stage
To add a Docker Build stage, follow these steps:
Click the Plus icon to add a new stage.
Click the New Stage area. The Stage Settings page opens on the right.
Select Docker Build as the Stage Type.
After entering the information, click the Apply button. (Registry URL allows you to select the image repository added in Adding Image Repository (Option).)
Item
Description
Example
Registry URL (docker push)
Select the image repository where the completed Docker build result image will be pushed.
ID
ID value of the account to be used in the image repository
Image Tag Pattern
The Docker image tag will be automatically generated based on the selected pattern.
{YYYYMMDD}: year, month, day
{HHMMSS}: hour, minute, second
{BUILD_NUM}: current build pipeline execution number
{YYYYMMDD}.{HHMMSS}: 20200414.150938
{YYYYMMDD}.{BUILD_NUM}: 20220414.13
Add Base Image Repository
The Add Base Image Repository popup window will open.
If the image repository providing the base image (Dockerfile’s FROM clause, docker pull) used in the Dockerfile and the image repository of the Registry URL (docker push) are different, select the image repository for docker pull.
Image Build Tool
Displays the image build tool.
Pre-build Command
If there are commands that must be executed before building the Docker image, write them in Shell command format.
cp target/*.jar docker/
Image Build Folder
If the Docker image build needs to be executed in a specific folder, select the checkbox and enter the folder path.
docker
Dockerfile
Enter the Dockerfile file name.
Dockerfile
Image Build Options
If additional options are required for the image build tool, enter them.
--no-cache
Build Command
Displays the actual image build command to be executed.
Post-build Command
If there are commands that must be executed after building the Docker image, write them in Shell command format.
rm -rf docker/*.jar
Table. Docker Build Stage Input Items
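Taken together, the items above assemble a pre-build command, an image build with a pattern-generated tag, a push to the Registry URL, and a post-build command. A sketch of that sequence, in which the registry, image name, and build number are illustrative assumptions and the docker commands are only printed, not executed:

```shell
REGISTRY_URL="registry.example.com/myteam"   # Registry URL (docker push), hypothetical
IMAGE="myapp"                                # hypothetical image name
BUILD_NUM=13                                 # current pipeline execution number
TAG="$(date +%Y%m%d).${BUILD_NUM}"           # {YYYYMMDD}.{BUILD_NUM} pattern

mkdir -p target docker && touch target/app.jar
cp target/*.jar docker/                      # Pre-build Command (example)

# Image Build Folder "docker", Dockerfile name, and --no-cache option
BUILD_CMD="docker build --no-cache -f Dockerfile -t ${REGISTRY_URL}/${IMAGE}:${TAG} docker"
PUSH_CMD="docker push ${REGISTRY_URL}/${IMAGE}:${TAG}"
echo "${BUILD_CMD}"
echo "${PUSH_CMD}"

rm -rf docker/*.jar                          # Post-build Command (example)
```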
Setting up Deploy to VM Stage
To add the Deploy to VM stage, follow the procedure below.
Click the + icon to add a new stage.
Click the New Stage area. The Stage Setting page will open on the right.
On the Stage Setting page, select Deploy to VM as the Stage Type.
Enter information on the Stage Setting page and click the Apply button.
Item
Description
Deployment Configuration
Select the deployment configuration method
Deployment target setting (using SSH command/Agent): Deploy using SSH command or Agent.
Direct script writing: Users directly input all commands for deployment.
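As a rough illustration, the SSH-based deployment option amounts to copying the artifact to the VM and restarting the service there. The host, paths, and service name below are assumptions, and the commands are printed rather than executed:

```shell
TARGET="deploy@10.0.0.10"   # hypothetical deployment target VM
SCP_CMD="scp target/myapp.jar ${TARGET}:/opt/myapp/myapp.jar"
SSH_CMD="ssh ${TARGET} 'systemctl restart myapp'"
echo "${SCP_CMD}"
echo "${SSH_CMD}"
```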
Before building or deploying, create a DevOps Console project and confirm that you have usage permissions for the target cluster/namespace. If necessary, ask the person in charge to grant permissions for the cluster/namespace.
Checking Namespace Permissions for DevOps Console K8S Cluster
To check the permissions for the namespace of the K8S cluster in use in DevOps Console, follow the procedure below.
On the Main page, click the Management icon at the top right. Move to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left menu. Move to the K8S Cluster list page.
Click on a K8S cluster. Move to the Details page of the selected K8S cluster.
Click the Namespace tab. The Namespace List screen appears.
When you click on a namespace, you will be moved to the Namespace Details page.
Note
You must have Administrator or User permissions. If you do not have permissions, request permissions from the cluster administrator.
Reference
If the deployment target cluster is not registered, register the deployment target cluster directly.
For more information, see K8S Cluster.
12.2.3 - Project
12.2.3.1 - Project Overview
Users can view detailed information about the project, as well as related Helm release information and tool usage.
Users with Master or higher authority over the project can modify the project name through this menu, and Owners can delete the project.
Getting Started with Project Overview
To start using the Project Overview, follow these steps:
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Project Overview menu on the left menu. The Project Overview screen appears.
Modifying the Project Name
To modify the project name, follow these steps:
On the Project Overview screen, click the Modify Project Name button at the top right. The Modify Project Name popup window opens.
In the Modify Project Name popup window, modify the project name and click the Save button to complete the project name modification.
Deleting a Project
To delete a project, follow these steps:
On the Project Overview screen, click the Delete Project button at the top right. The Delete Project popup window opens.
In the Delete Project popup window, click the Confirm button. Move to the Delete Project page.
On the Delete Project page, enter or select the required information and click the Delete button. The Delete Project popup window opens.
Enter the project name in the Delete Project popup window.
Click the Confirm button to complete the project deletion.
Select if you want to delete simultaneously with the project:
Selected: Executes the actual physical deletion command when the project is deleted.
Not Selected: Only logically deletes from DevOps Console, but remains in each tool.
Table. Project Deletion Input Items
Managing All Projects
Note
Displays information about all projects for which the user has authority. Provides the same functionality as Project Overview and Project Members, except for Project Name Change and Project Deletion.
Table. New Repository Creation and Existing Repository Activation Conditions
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
In the Code Repository section, select the Code Repository type.
Select Create a New Repository or Use an Existing Repository and enter the information.
Enter the Authentication Information and click the Connection Test button.
If the Next button is activated, click the Next button.
Item
Description
Repository Type
Select the code repository to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code in the Samsung Cloud Platform Console.
Unregistered Tool: Use it by entering the domain of a tool that has not been registered. This item appears only when the App template is Environment Only (without source code).
New/Existing Usage
Select whether to Create a New Repository or Use an Existing Repository.
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information
If you don’t have an account, you can create one by clicking the Don’t have an account? link and opening the Account Creation Information popup window.
After creating a new account, please change your password through the Initial Password Setting link.
(Unregistered Tool) Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in DevOps Console.
You must go through an additional verification process by clicking the URL Check button.
The user can set up the repository to store the built container image through the image repository setting step.
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
The user can deploy through direct configuration using Helm charts.
When selecting Helm release name and Helm chart, the Helm chart installation items and the default Values.yaml items included in the chart are displayed.
Available Helm charts are linked to App templates. You can modify or delete them through Managing Supported Helm Charts.
To set up the deployment target environment, follow these steps:
Select direct configuration using Helm charts in the deployment target section.
Enter the Helm release name.
Click the Search button to select the Helm chart to use.
Modify the Values.yaml and click the Validation Check button.
Click the Next button when it is activated.
Item
Description
Deployment Target
Select the deployment target.
Helm Release Name
Enter the name of the Helm release to be created.
This name must be unique within the namespace of the cluster to be deployed.
Helm Chart
Select the Helm chart.
When a Helm chart is selected, detailed information about the selected chart is displayed below.
K8S Information
Displays the information of the Kubernetes cluster required for the Values.yaml configuration.
Values.yaml
Modify the Values.yaml content.
This is the values.yaml file used when installing the Helm chart.
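The settings above correspond roughly to a single helm invocation. In this sketch the release, chart, and namespace names are illustrative assumptions, and the command is printed rather than executed:

```shell
RELEASE="myapp-release"   # Helm Release Name: unique within the target namespace
CHART="myrepo/myapp"      # selected Helm chart (hypothetical)
NAMESPACE="myapp-ns"      # namespace of the cluster to be deployed (hypothetical)
HELM_CMD="helm upgrade --install ${RELEASE} ${CHART} -n ${NAMESPACE} -f values.yaml"
echo "${HELM_CMD}"
```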
Displays the OS information of the environment where the build agent runs.
User Information
IDP-connected Jenkins
Click the User Check button to verify user registration.
If not registered as a Jenkins user, a User Registration Guide popup window will open; click the Go to Jenkins link to proceed with User Registration or Initial Jenkins Login.
Non-IDP-connected Jenkins
Enter authentication information and click the Connection Test button.
Environment Variable Setting
Set environment variables to be registered in the Jenkins pipeline.
Image Tag Pattern
Select the method for assigning tags to container images.
Deploy Strategy
Select the deployment method for container images.
Deployment Result Recipient
Select the user to receive the result after the build pipeline is completed.
Table. Build Pipeline Setting Items for Project Creation
Setting User Definition
Users can specify and modify the path of the Dockerfile file to be used for building.
Additionally, you can check and modify the final script generated based on the information set up in Configuring the Build Pipeline.
To configure Dockerfile and pipeline scripts, follow these steps:
On the Customize Settings page, enter the information and click the Connection Test button.
When the Next button is activated, click the Next button.
Item
Description
Dockerfile Settings
Choose whether to Create a new Dockerfile or Use an existing Dockerfile.
Using an existing Dockerfile can only be selected if you choose the Environment Only App template and select Use an existing repository in Configuring the Code Repository.
Dockerfile Path
Specify the path of the Dockerfile file in the source code.
For more information on Jenkins pipeline scripts, refer to the official website.
Completing Project Creation
The user can finally check the project and tool information to be created and start creating the project.
To complete the project creation, follow the procedure below.
On the Summary Information screen, check the information and click the Complete button.
The Project Creation popup window opens and the project creation proceeds.
After the project creation is complete, click the Confirm button to move to the Project page.
Notice
Creation cannot be canceled while in progress. Once the project is created successfully, the Confirm button is activated.
12.2.3.2.2 - Creating a Project (Workload Deployment)
Note
The user must be a member of a project group and tenant to create a project. For information on how to join a project group and tenant, see Creating a Project Group.
Users can create a project that deploys to a specific Kubernetes cluster using App Templates, Tool Management, and Workloads.
Starting Project Creation
To start creating a project, follow these steps:
On the Main page, click the Create Project button. This will take you to the Create Project page.
Entering Basic Information
Enter the project’s basic information. The project name and project ID must be unique and cannot be duplicated.
Table. New Repository Creation and Existing Repository Activation Conditions
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
In the Code Repository section, select the Code Repository type.
Select Create a New Repository or Use an Existing Repository and enter the information.
Enter the Authentication Information and click the Connection Test button.
If the Next button is activated, click the Next button.
Item
Description
Repository Type
Select the code repository to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code in the Samsung Cloud Platform Console.
Unregistered Tool: Use it by entering the domain of a tool that has not been registered. This item appears only when the App template is Environment Only (without source code).
New/Existing Usage
Select whether to Create a New Repository or Use an Existing Repository.
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information
If you don’t have an account, you can create one by clicking the Don’t have an account? link and opening the Account Creation Information popup window.
After creating a new account, please change your password through the Initial Password Setting link.
(Unregistered Tool) Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in DevOps Console.
You must go through an additional verification process by clicking the URL Check button.
The user can set up the repository to store the built container image through the image repository setting step.
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
Displays the OS information of the environment where the build agent runs.
User Information
IDP-connected Jenkins
Click the User Check button to verify user registration.
If not registered as a Jenkins user, a User Registration Guide popup window will open; click the Go to Jenkins link to proceed with User Registration or Initial Jenkins Login.
Non-IDP-connected Jenkins
Enter authentication information and click the Connection Test button.
Environment Variable Setting
Set environment variables to be registered in the Jenkins pipeline.
Image Tag Pattern
Select the method for assigning tags to container images.
Deploy Strategy
Select the deployment method for container images.
Deployment Result Recipient
Select the user to receive the result after the build pipeline is completed.
Table. Build Pipeline Setting Items for Project Creation
Setting Up Custom Settings
Users can specify and modify the path of the Dockerfile file to be used for building.
Additionally, you can check and modify the final script generated based on the information set up in Configuring the Build Pipeline.
To configure Dockerfile and pipeline scripts, follow these steps:
On the Customize Settings page, enter the information and click the Connection Test button.
When the Next button is activated, click the Next button.
Item
Description
Dockerfile Settings
Choose whether to Create a new Dockerfile or Use an existing Dockerfile.
Using an existing Dockerfile can only be selected if you choose the Environment Only App template and select Use an existing repository in Configuring the Code Repository.
Dockerfile Path
Specify the path of the Dockerfile file in the source code.
Table. New Repository Creation and Existing Repository Activation Conditions
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
In the Code Repository section, select the Code Repository type.
Select Create a New Repository or Use an Existing Repository and enter the information.
Enter the Authentication Information and click the Connection Test button.
If the Next button is activated, click the Next button.
Item
Description
Repository Type
Select the code repository to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code in the Samsung Cloud Platform Console.
Unregistered Tool: Use it by entering the domain of a tool that has not been registered. This item appears only when the App template is Environment Only (without source code).
New/Existing Usage
Select whether to Create a New Repository or Use an Existing Repository.
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information
If you don’t have an account, you can create one by clicking the Don’t have an account? link and opening the Account Creation Information popup window.
After creating a new account, please change your password through the Initial Password Setting link.
(Unregistered Tool) Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in DevOps Console.
You must go through an additional verification process by clicking the URL Check button.
The user can set up the repository to store the built container image through the image repository setting step.
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
To set up the deployment environment, you need to have ArgoCD and a Git repository for GitOps deployment ready. The input values differ depending on the ArgoCD App Creation Method chosen.
Create a new App
Use an existing App
Creating a New App
To set up the deployment environment by creating a new ArgoCD application, follow these steps.
On the Deployment Environment screen, enter the ArgoCD URL and click the Confirm button.
The authentication information input screen appears.
Enter the authentication information and click the Connection Test button.
The input screen for the new ArgoCD application appears.
Enter the ArgoCD application name and ArgoCD project name.
Select the repository type.
Select the Helm chart.
The Helm chart and Helm chart Git repository information are displayed.
Modify the Values.yaml and click the Validation Check button.
Enter the Git repository information and authentication information, then click the Connection Test button.
Click the Next button when it becomes active.
Item
Description
URL input method
Select the URL input method.
Select from the list: Registered ArgoCD tools are displayed.
Enter directly
ArgoCD URL
Enter the ArgoCD URL and click the Confirm button. The authentication information section appears.
Authentication information
Enter the authentication information and click the Connection Test button. The application basic information section opens.
Application name
Enter the name of the ArgoCD application to be created.
Project name
Enter the name of the ArgoCD project.
Repository type
Select the repository type.
Create a new repository using Helm chart: Create a Git repository for GitOps using Helm chart. The Helm chart information section opens.
Use an existing Git repository
Helm chart
Select the Helm chart.
Helm chart Git repository
Enter the information of the Git repository to be used for GitOps.
Table. New App creation settings
Using an Existing App
To set up the deployment environment using an existing ArgoCD application, follow these steps.
On the Deployment Environment screen, enter the ArgoCD URL and click the Confirm button.
The input screen for the existing ArgoCD application name and authentication information appears.
Enter the existing application name and authentication information, then click the Connection Test button.
A pop-up window for URL Check opens to distinguish the Git repository linked to the ArgoCD application.
Modify the Base URL in the URL Check pop-up window and click the Confirm button.
The information of the Git repository linked to the ArgoCD application is displayed.
Enter the authentication information of the Git repository and click the Connection Test button.
In the Manifest Root path section, enter the yaml file name and key value to be modified.
Click the Next button when it becomes active.
Item
Description
URL input method
Select the URL input method
Select from the list: Registered ArgoCD tools are displayed.
Enter directly
ArgoCD URL
Enter the ArgoCD URL and click the Confirm button. The existing application name and authentication information section appears.
Application name / Authentication information
Enter the existing application name and authentication information, then click the Connection Test button. The Git repository and ArgoCD information section appears.
URL Check
Distinguish the Base URL and Path from the entire URL.
Git repository authentication information
Enter the authentication information of the Git repository used by the selected existing application.
Image Repo Key
Enter the path and key value of the yaml file where the image repository information is recorded. If the key values for the repository and tag are the same, enter the same value.
Image Tag Key
Enter the path and key value of the yaml file where the image tag information is recorded. If the key values for the repository and tag are the same, enter the same value.
Image Secret Key
Enter the path and key value of the yaml file where the image secret information is recorded.
Deploy Strategy Key
Enter the path and key value of the yaml file where the deployment strategy information is recorded (optional).
Table. Existing App usage settings
Note
The ArgoCD deployment project in the DevOps Console performs deployment by changing the information registered in Image Repo Key, Image Tag Key, Image Secret Key, and Deploy Strategy Key.
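As a rough sketch of what that rewrite amounts to, the following updates the Image Tag Key in a hypothetical GitOps values.yaml. The key paths, file layout, and the sed substitution are all illustrative assumptions; they do not show the console's actual mechanism.

```shell
# Hypothetical GitOps values.yaml holding the keys the console rewrites
# (Image Repo Key, Image Tag Key, Image Secret Key, Deploy Strategy Key).
cat > values.yaml <<'EOF'
image:
  repository: registry.example.com/myapp
  tag: "20200414.150938"
  secret: myapp-pull-secret
deployStrategy: RollingUpdate
EOF

# A deployment updates the Image Tag Key (here image.tag) to the new build
# tag; sed stands in for that rewrite in this sketch.
NEW_TAG="20200414.151200"
sed "s/^  tag: .*/  tag: \"${NEW_TAG}\"/" values.yaml > values.updated.yaml
grep 'tag:' values.updated.yaml
```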
Setting up a Build Pipeline
Users can configure pipelines for building and deploying applications.
You can check each stage of the pipeline to be configured and change the build job name.
To set up a build pipeline, follow these steps:
The Build Pipeline screen differs depending on whether Jenkins is connected to the IDP.
For IDP-connected Jenkins, click the User Check button to verify user registration.
For non-IDP-connected Jenkins, enter authentication information and click the Connection Test button.
When the Next button is activated, click the Next button.
Item
Description
Build/Deploy Pipeline
Displays build/deploy pipeline information.
Build/deploy pipeline is displayed based on the information registered in the pipeline template management of the App template.
Classification
Select development/operation classification.
Authority for the pipeline varies depending on the development/operation classification.
Displays the OS information of the environment where the build agent runs.
User Information
IDP-connected Jenkins
Click the User Check button to verify user registration.
If not registered as a Jenkins user, a User Registration Guide popup window will open; click the Go to Jenkins link to proceed with User Registration or Initial Jenkins Login.
Non-IDP-connected Jenkins
Enter authentication information and click the Connection Test button.
Environment Variable Setting
Set environment variables to be registered in the Jenkins pipeline.
Image Tag Pattern
Select the method for assigning tags to container images.
Deploy Strategy
Select the deployment method for container images.
Deployment Result Recipient
Select the user to receive the result after the build pipeline is completed.
Table. Build Pipeline Setting Items for Project Creation
Setting up User Definitions
Users can specify and modify the path of the Dockerfile file to be used for building.
Additionally, you can check and modify the final script generated based on the information set up in Configuring the Build Pipeline.
To configure Dockerfile and pipeline scripts, follow these steps:
On the Customize Settings page, enter the information and click the Connection Test button.
When the Next button is activated, click the Next button.
Item
Description
Dockerfile Settings
Choose whether to Create a new Dockerfile or Use an existing Dockerfile.
Using an existing Dockerfile can only be selected if you choose the Environment Only App template and select Use an existing repository in Configuring the Code Repository.
Dockerfile Path
Specify the path of the Dockerfile file in the source code.
Before creating a project for VM deployment, register the VM server information. For more information on registering a VM server, see Getting Started with VM Server Group.
Starting Project Creation
To start creating a project, follow these steps:
On the Main page, click the Create Project button. This will take you to the Create Project page.
Entering Basic Information
On the Create Project page, enter the project name and project ID.
For the Project Configuration Method, select App Template.
For the Deployment Target, select VM (Artifact) or VM (Docker).
VM (Artifact) creates a war/jar file and deploys it to the target server.
VM (Docker) builds and deploys a Docker image using the docker command.
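The two targets can be contrasted with a short sketch. The host, file names, and registry below are illustrative assumptions, and the commands are printed rather than executed:

```shell
TARGET="deploy@vm.example.com"   # hypothetical deployment target VM

# VM (Artifact): ship the built war/jar file to the target server
ARTIFACT_CMD="scp target/myapp.war ${TARGET}:/opt/myapp/"

# VM (Docker): pull and run the built image on the target with the docker command
DOCKER_CMD="ssh ${TARGET} docker run -d --name myapp registry.example.com/myapp:latest"

echo "${ARTIFACT_CMD}"
echo "${DOCKER_CMD}"
```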
Table. New Repository Creation and Existing Repository Activation Conditions
Note
Once saved, authentication information can be reused through Using Saved Authentication Information without re-entering account details, and the Connection Test can then be performed.
In the Code Repository section, select the Code Repository type.
Select Create a New Repository or Use an Existing Repository and enter the information.
Enter the Authentication Information and click the Connection Test button.
If the Next button is activated, click the Next button.
Item
Description
Repository Type
Select the code repository to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (Github, Gitlab, etc.).
DevOps Code: Available if you have applied for DevOps Code in the Samsung Cloud Platform Console.
Unregistered Tool: Use it by entering the domain of a tool that has not been registered. This item appears only when the App template is Environment Only (without source code).
New/Existing Usage
Select whether to Create a New Repository or Use an Existing Repository.
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information
If you don’t have an account, you can create one by clicking the Don’t have an account? link and opening the Account Creation Information popup window.
After creating a new account, please change your password through the Initial Password Setting link.
(Unregistered Tool) Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in DevOps Console.
You must go through an additional verification process by clicking the URL Check button.
Displays the OS information of the environment where the build agent is executed.
Authentication Information
Enter the authentication information for Jenkins.
Environment Variable Setting
Set the environment variables to be registered in the Jenkins pipeline.
Deployment Result Recipient
Select the user to receive the result of the build pipeline completion.
Table. Build Pipeline Setup Input Items
Setting Up User-Defined Configuration
If you selected Direct Deployment Script Writing in the deployment target and deployment setting stage, modify the pipeline content in this stage to complete the deployment setting.
To set up a user-defined configuration, follow these steps:
On the User-Defined stage, review and modify the content, then click the Next button.
Completing Project Creation
The user can finally check the project and tool information to be created and start creating the project.
To complete the project creation, follow the procedure below.
On the Summary Information screen, check the information and click the Complete button.
The Project Creation popup window opens and the project creation proceeds.
After the project creation is complete, click the Confirm button to move to the Project page.
Notice
Creation cannot be canceled while in progress. Once the project is created successfully, the Confirm button is activated.
12.2.3.2.5 - Creating an Empty Project
Note
The user must be a member of a project group and tenant to create a project.
This menu allows you to check the project's detailed information, related Helm release information, and information about the tools in use.
You can view information about individual items that make up the project on the dashboard at a glance.
Getting Started with Project Dashboard
To start using the dashboard, follow these steps:
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Dashboard menu in the left menu.
Item
Description
Project Information
Displays basic project information.
Pipeline
Displays the status of pipelines added to the build pipeline. You can check the status of other pipelines by clicking on the numeric link.
Helm Release Status
You can check the status of Helm releases added to Kubernetes deployment.
Blue/Green Deployment Status
You can check the Blue/Green deployment status.
Canary Status
You can check the Canary status.
ArgoCD Application Status
You can check the status of ArgoCD applications added to Kubernetes deployment.
Kubernetes Deployment History
Displays Kubernetes deployment history in the latest order.
VM Deployment History
Displays VM deployment history in the latest order.
Code Quality
Displays code quality tool analysis information in conjunction with the code quality tool.
Code History
Displays the history of code repositories added to the code repository. You can select and display code history from the list for the following items:
Git Morph
Git Branch
Scale and Usage
Displays the usage and tools of the project and the build/deployment status.
Monthly Pipeline Execution Trend
Displays the monthly pipeline execution count, average execution cycle, and execution time within the project.
Inactive Pipeline
Displays a list of pipelines that have not been executed and have no success history based on the reference date.
Essential Stage
Displays which stage tasks were performed in the pipeline based on the code repository.
Recent Event History
You can check the recent event history that occurred in the project.
Table. Project Dashboard Display Items
12.2.3.5 - Project Members
Note
Master or higher users can manage project user permissions.
This includes granting access to other users, changing permissions for users with access to the project, or deleting permissions so that users can no longer access the project.
Getting Started with Project Members
To start using project members, follow these steps:
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
Click the Members menu in the left menu. The Members screen appears.
Adding Project Members
To add members to a project, follow these steps:
Click the Add button in the Members list. The Add Member popup window opens.
In the Add Member popup window, complete the settings and click the Confirm button to complete adding the project member.
Item
Description
User Tab
Search for a user by email and click the Add button to add them as a member.
Project Tab
Select if you want to add all members of another project as members of the current project.
Add Member
Click the user you want to add as a project member.
Permission Settings
Set the role to be assigned to the member.
Owner
Master
Developer
Viewer
Member Delete
Remove a member from the project by clicking the X icon.
Select the Jenkins instance to which the build pipeline will be added from the list.
Build Agent
Select the agent (build environment) where the build pipeline will run. Click the Info icon to view the list of tools provided by the agent.
Build Environment OS
Displays the OS information of the environment where the build agent runs.
Folder Type
Select the folder type.
Existing folder: Add a pipeline under an existing folder in Jenkins.
New folder: Create a new folder in Jenkins and add a pipeline under it.
Folder
Select a folder from the list or enter the name of the new folder to be created.
Pipeline Name
Enter the pipeline name.
Parameter Setting
Set the parameters to be used in the pipeline.
Environment Variable Setting
Set the environment variables to be used in the pipeline.
Stage Setting
Set the stages to be used in the pipeline.
Build Result Email Recipient Setting
Set the recipient to receive the result email after the pipeline is completed (success/failure).
Table. Build Pipeline Addition Setting Items
Setting Parameters
To set parameters to use when running a pipeline, follow these steps:
Click the Parameters card.
Click the Add button to add parameters.
Click the Apply button to complete the parameter settings.
Setting Environment Variables
To set environment variables to be used in the pipeline, follow the procedure below.
Click the Environment Variables section. The Environment Variable Registration page opens on the right.
From the list of pre-registered environment variables that appears, select the checkbox of each variable to be used.
Check the Selected Environment Variables and click the Apply button to complete the environment variable setting.
Setting Stages
To set up stages to be used in the pipeline, follow these steps.
Click the New Stage card. The Stage area appears on the right.
In the Stage area, select the Tool and Stage type.
Enter the necessary information according to the stage type and click the Apply button to complete the stage setup.
Note
You can add stages by clicking the Add icon. For more information about stage settings, see Stage.
Setting Build Result Email Recipients
To set up the recipient to receive the build result by email, follow the procedure below.
Click the Email Recipient area. The Add Email Recipient page opens on the right.
In the Search area, search for and add the recipient.
Click the Apply button to complete the email recipient setting.
Checking the Final Pipeline Script
Check the actual build pipeline script to be created. Modify the script directly if necessary.
Click the Complete button to complete adding the pipeline.
Pipeline Addition Result
The added result appears on the Build Pipeline page.
Note
The added pipeline will not be executed automatically. If execution is required, run the pipeline directly.
Managing Build Pipelines
Build Pipeline List
Item
Description
Status
Displays the status of the build pipeline.
Green: Normal execution completed
Blue (flashing): Running
Red: Failed
Gray: Others
URL
Moves to the Jenkins Build Pipeline page.
Recent Build Execution History URL
Moves to the Jenkins Build Execution History page.
Log
Opens the Pipeline Log popup window.
Run
Runs the build pipeline.
More
Displays additional menus.
Edit Pipeline
Clone Pipeline
Delete Pipeline
Build history
Pipeline Stage View
Click the Expand icon to expand the stage view.
Table. Build Pipeline List View Function
Build Pipeline Authentication
When performing build pipeline actions (run, stop, modify, delete, etc.), user authentication information may be required, in which case the user is prompted for it.
IDP-Linked Jenkins
If you are not registered as a Jenkins user, the User Registration Guide popup window opens. Click the Go to Jenkins link to proceed with user registration or the initial Jenkins login.
IDP-Unlinked Jenkins
If Jenkins authentication information is not stored, the Add Account popup window opens. Select Existing User Use or New User Creation in Account Type to add the authentication information.
Running a Build Pipeline
To run a build pipeline, follow these steps.
On the Build Pipeline page, click the Run button of the build pipeline you want to run.
If the pipeline has parameters, the Pipeline Run Parameter Input popup window opens.
Enter the necessary items and click the OK button.
Caution
If the Number of executors item of the Built-In Node in the Jenkins system settings is set to 1 or more, pipeline execution may be blocked for security reasons.
In this case, contact the Jenkins administrator to change the setting.
The pipeline can be used after the Number of executors item of the Built-In Node is changed to 0 in the Jenkins management menu.
Note
Jenkins officially recommends avoiding build execution on the Controller Node.
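For reference, the same run can also be triggered outside the console with the Jenkins CLI. This is only a sketch under assumed values: the server URL, credentials, job path, and the DEPLOY_ENV parameter below are all hypothetical.

```shell
# Hypothetical server, credentials, job path, and parameter name
java -jar jenkins-cli.jar -s https://jenkins.example.com/ -auth user:api-token \
  build my-folder/my-pipeline -p DEPLOY_ENV=dev -f -v
# -p passes a build parameter, -f waits for completion, -v streams the console output
```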
Viewing Build Pipeline Logs
To view the build pipeline execution log, follow these steps.
On the Build Pipeline page, click the Log button of the build pipeline you want to view.
The Pipeline Log popup window opens, and you can check the execution log.
Note
If the build pipeline is running, the Pipeline Log popup window is refreshed periodically to display the latest log.
Viewing Build Pipeline Build History
To view the entire build history of a build pipeline, follow these steps.
On the Build Pipeline page, click the More icon of the build pipeline you want to check.
Click the Build History menu. The Build History page opens.
Item
Description
Config Diff
Opens the Config Diff popup window.
Log
Opens the Pipeline Log popup window.
More Menu
Moves to the Build Details page.
Pipeline Stage View
Click the Expand icon to expand the stage view.
Table. Build History List View Function
Comparing Settings
You can compare settings with previous build history using the Config Diff button.
Viewing Logs
You can check the log of the build history using the Log button.
Viewing Build Details
You can check the build details by clicking the More icon.
Modifying a Build Pipeline
To modify a build pipeline, follow these steps.
On the Build Pipeline page, click the More icon of the build pipeline you want to modify.
Click the Pipeline Modification button. Move to the Pipeline Modification page.
Modifying the Script Directly
To modify the pipeline script directly, follow these steps.
In the script editor window on the Pipeline Modification page, enter the script directly in the syntax supported by Jenkins.
After entering, click the Save button to complete the pipeline modification.
Modifying Using the Script Generator Function
Note
The Script Generator function can only set one stage at a time. To modify multiple stages, repeat the procedure for each stage.
To modify the pipeline script using the Script Generator function, follow these steps.
On the Pipeline Modification page, change the Script Generator to ON.
Select the Build Agent and Script Type.
Proceed with the Stage Setting and click the Script Generation button to generate the script.
Refer to the generated script to modify the pipeline and click the Save button to complete the pipeline modification.
Item
Description
Script Generator
Turn the Script Generator function ON/OFF.
Script Basic Information
Select the basic information for script generation.
Existing Script
Displays the existing script.
New Script
New script generated through the Script Generator.
Script Modification
Modify the existing script on the left by referring to the newly generated script.
Jenkins Credential Update
If there is new authentication information in the newly generated script, click the Jenkins Credential Update button to update (save) the authentication information in Jenkins.
K8S Secret Update
For the Deploy to K8S stage, if the K8S Secret has changed, click the K8S Secret Update button to update (save) the Secret used for creation and deployment.
Table. Script Generator Function
Cloning a Build Pipeline
To clone a build pipeline, follow these steps.
Click the More icon of the build pipeline you want to clone.
Click the Clone Pipeline menu. The Clone Pipeline popup window opens.
Enter the information and click the Save button to complete the pipeline clone.
Item
Description
Pipeline Clone Information
Enter the information of the pipeline to be cloned.
If the image repository providing the base image (Dockerfile’s FROM clause, docker pull) used in the Dockerfile and the image repository of the Registry URL (docker push) are different, select the image repository for docker pull.
Image Build Tool
Displays the image build tool.
Pre-build Command
If there are commands that must be executed before building the Docker image, write them in Shell command format.
Example: cp target/*.jar docker/
Image Build Folder
If the Docker image build needs to be executed in a specific folder, select the checkbox and enter the folder path.
Example: docker
Dockerfile
Enter the Dockerfile file name.
Example: Dockerfile
Image Build Options
If additional options are required for the image build tool, enter them.
Example: --no-cache
Build Command
Displays the actual image build command to be executed.
Post-build Command
If there are commands that must be executed after building the Docker image, write them in Shell command format.
Example: rm -rf docker/*.jar
Table. Docker Build Stage Input Items
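Taken together, the example values in the table above correspond roughly to the following command sequence. Only the pre-build command, build folder, build option, and post-build command come from the table's examples; the image tag and registry host are hypothetical placeholders.

```shell
# Pre-build command: copy the build artifact into the image build folder
cp target/*.jar docker/
# Build in the "docker" folder with the "--no-cache" option;
# the tag/registry below is a hypothetical placeholder
docker build --no-cache -f docker/Dockerfile \
  -t registry.example.com/myapp:latest docker
# Post-build command: clean up the copied artifact
rm -rf docker/*.jar
```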
Example Script
The resulting build pipeline script from the example is as follows.
Figure. Docker Build Example Script
Item
Description
➊
Pre-build command
➋
Image build folder
➌
Image build option
➍
Post-build command
Table. Docker Build Example Script Description
Deploy to K8S
This stage deploys to Kubernetes.
Select Deploy to K8S as the stage type.
Item
Description
Type
Select the deployment type:
Helm Release (Helm Chart Type)
Workload
ArgoCD
K8S Cluster
Select the K8S cluster.
If Helm Release (Helm Chart Type) is selected, a list of Helm releases deployed through the DevOps Console is displayed.
Namespace
Select namespace.
Helm Release
Select Helm release.
Deployment Method
Select the deployment method:
Recreate
Rolling Update
Registry URL
Select the image repository where the image to be deployed to Kubernetes is docker pushed.
Secret
Select the secret information input method:
Auto Generation: Automatically generate and use the secret corresponding to the selected image repository in DevOps Console.
Use Existing Secret: Use a pre-created secret through K8S secret management.
Table. Deploy to K8S Stage Input Items
Deploy to VM
This stage deploys to a VM.
Select Deploy to VM as the stage type.
Item
Description
Deployment Configuration
Select the deployment configuration method:
Deployment target setting (using SSH command/Agent): Deploy using SSH command or Agent.
Direct script writing: Users directly input all commands for deployment.
Select the Jenkins instance to which the multibranch pipeline will be added from the list.
Folder Type
Select the folder type.
Existing folder: Add a pipeline under an existing folder in Jenkins.
New folder: Create a new folder in Jenkins and add a pipeline under it.
Folder Name
Select a folder from the list or enter the name of the new folder to be created.
Pipeline Name
Enter the pipeline name.
Git Repository
Select the code repository to perform branch-based builds. Only code repositories registered in the DevOps Console project can be selected.
Branch Filtering
Filter the branch names to be built from the registered branches in the code repository. If filtering is used, enter the filtering conditions in Java regular expression format.
Jenkinsfile Path
Enter the path to the Jenkinsfile that defines the pipeline build in the code repository.
Table. Multibranch Pipeline Addition Information Input Items
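Branch filtering uses Java regular expression syntax. For a simple alternation such as the hypothetical filter below, the same pattern also behaves as a POSIX extended regex, so it can be sanity-checked against candidate branch names with grep -E:

```shell
# Hypothetical filter: build only the main branch and release/* branches
printf 'main\ndevelop\nfeature/login\nrelease/1.0\n' \
  | grep -E '^(main|release/.*)$'
# matches: main and release/1.0
```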
Managing Multibranch Pipelines
Multibranch Pipeline List
Item
Description
Multibranch Icon and Label
The icon and label representing the multibranch pipeline are displayed.
URL
Moves to the Multibranch Pipeline page in Jenkins.
Scan Log
Opens the Multibranch Pipeline Scan Log popup window.
Scan
Scans the multibranch pipeline.
More
Displays additional menus.
Pipeline modification
Pipeline deletion
Build history
Table. Multibranch Pipeline List Screen Items
Scanning a Multibranch Pipeline
To scan a multibranch pipeline, follow these steps:
On the Build Pipeline page, click the Scan button on the multibranch pipeline card you want to scan.
In the confirmation popup window, click the Confirm button.
Viewing Multibranch Pipeline Scan Logs
To view the scan logs of a multibranch pipeline, follow these steps:
On the Build Pipeline page, click the Scan Log button on the multibranch pipeline card you want to view the scan log for. The Multibranch Pipeline Scan Log popup window opens.
In the Multibranch Pipeline Scan Log popup window, check the contents and click the Confirm button to close.
Viewing Multibranch Pipeline Build History
To view the branch-based build history of a multibranch pipeline, follow these steps:
On the Build Pipeline page, click the More icon on the multibranch pipeline card you want to view the build history for.
Click the Build History menu. You will be taken to the branch-based Build History screen in Jenkins (not provided in the DevOps Console).
Modifying a Multibranch Pipeline
To modify a multibranch pipeline, follow these steps:
On the Build Pipeline page, click the More icon on the multibranch pipeline card you want to modify.
Click the Pipeline Modification menu. You will be taken to the Settings screen in Jenkins (not provided in the DevOps Console).
Deleting a Multibranch Pipeline
To delete a multibranch pipeline, follow these steps:
On the Build Pipeline page, click the More icon on the multibranch pipeline card you want to delete.
Click the Pipeline Deletion menu. The Pipeline Deletion popup window opens.
In the Pipeline Deletion popup window, select whether to delete the pipeline in Jenkins together and click the Confirm button.
Note
Delete pipeline in Jenkins together
Selected: The pipeline is actually deleted in Jenkins.
Not selected: The pipeline is deleted only from the Build Pipeline list and remains in Jenkins.
12.2.4.2 - Kubernetes Deployment
The user can check the list of Helm releases used in the project and their deployment status. Depending on the development classification, a release appears in the development or operation list when a project is created or a chart is installed.
Each deployment is distinguished by its icon:
Helm Chart
Istio
Workload
Canary
Blue-Green
ArgoCD
Getting Started with Kubernetes Deployment
To start using Kubernetes deployment, follow the procedure below.
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
In the left menu, click the Build/Deploy > Kubernetes Deployment menu. Move to the Kubernetes Deployment page.
12.2.4.2.1 - Helm Release
Helm Release is an instance of a chart running in a Kubernetes cluster. Users can create a Helm Release when creating a project or through the Helm Install menu.
Getting Started with Helm Release
To get started with Helm Release, follow these steps:
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
In the left menu, click Build/Deploy > Kubernetes Deploy. Move to the Kubernetes Deploy page.
Item
Description
Name
Displays the deployment name. Click to view detailed information.
Chart
Displays information about the Helm chart used for deployment.
Cluster/Namespace
Displays the cluster/namespace where the deployment is located.
Pod Status
Displays the current status of the Pod.
Deployment Result
Displays the result of the deployment execution.
Deployment Time
Displays the deployment execution time.
Refresh
Refreshes the current item. The items that change are Pod status, deployment result, and deployment time.
Delete
Deletes the current item.
Table. Helm Release Items
On the Kubernetes Deploy page, click the Name in the Helm Release list. Move to the Deployment Details page.
To add a related Helm Release, follow these steps:
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
In the left menu, click Build/Deploy > Kubernetes Deploy. Move to the Kubernetes Deploy page.
Click the Add Related Helm Release button. The Add Related Helm Release popup window opens.
In the Add Related Helm Release popup window, enter each item.
Click the Save button to complete adding the Helm Release.
Item
Description
Helm Release already registered in the project
Displays the Helm Releases already registered in the project.
Helm Releases already registered in the project cannot be added again.
Table. Items displayed when adding related Helm Release
Managing Helm Release Secrets
Getting Started with Helm Release Secrets
Helm Release Secrets allows you to manage the ImagePull Secret used for the image deployed through the build pipeline.
To get started with Helm Release Secrets, follow these steps:
On the Deployment Details page, click the Edit Authentication Information icon to the right of Helm Release.
The Helm Release Secrets popup window opens.
Adding Helm Release Secrets
To add a Helm Release Secret, follow these steps:
On the Deployment Details page, click the Edit Authentication Information icon to the right of Helm Release. The Helm Release Secrets popup window opens.
In the Helm Release Secrets popup window, if you need to add a secret to pull a private chart image, click the Add button in the Chart Install Secret section. The Add Secret popup window opens.
If you need to add a secret to pull the app image used for build/deploy, click the Add button in the ImagePull Secret section. The Add Secret popup window opens.
In the Add Secret popup window, enter the secret-related content and click the Save button to complete the addition.
Item
Description
Registry URL
Select the image to use for ImagePull Secret from the image list in the image repository.
Secret
Select the method for entering secret information:
Auto-generation: Automatically generates a secret using the authentication information of the selected image repository in the Docker URL.
Use existing secret: Select one of the existing secrets to use.
Table. Helm Release Secret Addition Settings
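The Use existing secret option refers to an image-pull Secret managed through K8S secret management. Outside the console, an equivalent Secret of type kubernetes.io/dockerconfigjson could be created roughly as follows; every value here is a hypothetical placeholder:

```shell
# All names and credentials below are hypothetical placeholders
kubectl create secret docker-registry my-imagepull-secret \
  --namespace my-namespace \
  --docker-server=registry.example.com \
  --docker-username=builder \
  --docker-password='<registry-token>'
```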
Modifying Helm Release Secrets
To modify a Helm Release Secret, follow these steps:
On the Deployment Details page, click the Edit Authentication Information icon to the right of Helm Release. The Helm Release Secrets popup window opens.
In the Helm Release Secret list, click the name of the secret you want to modify. The Modify Secret popup window opens.
Modify the content and click the Save button to complete the modification.
Deleting Helm Release Secrets
To delete a Helm Release Secret, follow these steps:
On the Deployment Details page, click the Edit Authentication Information icon to the right of Helm Release. The Helm Release Secrets popup window opens.
In the Helm Release Secret list, click the name of the secret you want to delete.
Click the Delete button to complete the deletion.
Modifying K8S Authentication Information
K8S authentication information refers to the authentication information used to verify the authority to use the K8S cluster/namespace when performing deployment in the build pipeline.
To modify the K8S authentication information, follow the procedure below.
On the Deployment Details page, click the Edit Authentication Information icon to the right of K8S Cluster/Namespace. The Edit Authentication Information popup window will open.
The authentication information is fixed to the account of the logged-in user. Click the Save button to modify it.
In the confirmation popup window, click the Confirm button.
The K8S authentication information will be changed to the logged-in user.
Checking values.yaml used in Helm Release
To check the contents of values.yaml, follow these steps:
On the Deployment Details page, click the History tab.
In the Values column, click the View icon. The Revision # - Values.yaml popup window opens.
Check the contents of the values.yaml file.
Comparing values.yaml used in Helm Release
To compare the contents of values.yaml used in each release, follow these steps:
On the Deployment Details page, click the History tab.
In the list, select the checkboxes of the two revisions you want to compare.
Click the Yaml Diff button. The Yaml Diff popup window opens.
In the Yaml Diff (Revision #>#) popup window, check the comparison contents.
Rolling Back Helm Release
To roll back a Helm Release to a previous revision, follow these steps:
On the Deployment Details page, click the History tab.
Click the Rollback button for the revision you want to roll back. The Rollback popup window opens.
Click the Confirm button to complete the rollback.
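The Rollback button corresponds to Helm's own rollback to a numbered revision. As a sketch with hypothetical release and namespace names:

```shell
# Roll the release back to revision 3, then confirm the deployed revision
helm rollback my-release 3 --namespace my-namespace
helm history my-release --namespace my-namespace
```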
Upgrading Helm Release
To upgrade a Helm Release, follow these steps:
On the Deployment Details page, click the History tab.
In the Values column, click the View icon. The Revision # - Values.yaml popup window opens.
Modify the contents of the current Values.yaml and click the Upgrade button. The Upgrade popup window opens.
Check the information being upgraded.
Click the Execute button to complete the upgrade.
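Editing values.yaml and executing the upgrade corresponds roughly to a Helm upgrade with the modified values file. The release, chart, and namespace names are hypothetical:

```shell
# Apply the edited values file as a new revision of the release
helm upgrade my-release my-repo/my-chart \
  --namespace my-namespace \
  -f values.yaml
```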
Viewing Pod Logs
To view the logs of Pods related to Helm Release, follow these steps:
On the Deployment Details page, click the Release Objects tab.
In the Pod Items section, click the LOG icon in the LOG column. The Log popup window opens.
Item
Description
Container
Select the container for which you want to output logs.
Real-time Refresh
Refreshes the log output in real-time.
Stop Refresh
Stops real-time refresh.
Download
Downloads the Pod log as a file.
Table. Log Popup Window Function Description
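The Log popup's functions map naturally onto kubectl's log options. The pod, container, and namespace names below are hypothetical:

```shell
# Select a container and follow the log in real time
kubectl logs my-pod -c my-container -n my-namespace -f
# Save the log to a file instead of streaming it
kubectl logs my-pod -c my-container -n my-namespace > my-pod.log
```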
Deleting Helm Release
To delete a Helm Release, follow these steps:
On the Deployment Details page, click the Delete button at the bottom right of Helm Release. The Delete Helm Release popup window opens.
Click the Confirm button to complete the deletion.
Note
Also execute the Helm delete command
Selected: The Helm Release is actually deleted from the cluster.
Not selected: Only deleted from the Kubernetes Deploy list and remains in the cluster.
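When Also execute the Helm delete command is selected, the effect on the cluster is the same as uninstalling the release with Helm. The release and namespace names here are hypothetical:

```shell
helm uninstall my-release --namespace my-namespace
```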
12.2.4.2.2 - Workload
A workload is an application that runs on Kubernetes, and users can add workloads to the DevOps Console for management.
The workload types manageable from DevOps Console are Deployment, StatefulSet, DaemonSet.
Start Workload
To start using the workload, follow the steps below.
Click the Project card on the Main page. It moves to the Project Dashboard page.
Click the Build/Deploy > Kubernetes Deploy menu in the left menu. You will be taken to the Kubernetes Deploy page.
On the Kubernetes Deployment page, click the name of the workload.
Item
Description
Workload Type
Displays the workload type.
Deployment
StatefulSet
DaemonSet
Name
Displays the workload name. Click to view detailed information.
Chart
Displays the Helm chart information used for deployment.
Cluster/Namespace
Displays the deployed cluster/namespace.
Pod status
Displays the current status of the Pod.
Deployment Result
Displays the deployment execution result.
Deployment Time
Displays the deployment execution time.
Refresh
Refreshes the current item. The items that change are Pod status, deployment result, and deployment time.
Delete
Deletes the current item.
Table. Workload screen items
Add workload
To add a workload, follow the procedure below.
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
In the left menu, click the Build/Deploy > Kubernetes Deployment menu. It moves to the Kubernetes Deployment page.
On the Kubernetes Deployment page, click the Add Workload menu. The Add Workload popup window opens.
In the Add Workload popup window, enter the information and click the Save button.
In the confirmation popup window, click the Confirm button to complete adding the workload.
Item
Description
Target to be retrieved
Only workloads deployed with the same image as the base image of the App template used when the project was created are displayed.
Workload already registered in the project
Displays the workloads already registered in the project.
Workloads already registered in the project cannot be added again.
Table. Input items for adding workload
Modifying K8S Authentication Information
K8S authentication information refers to the authentication information used to verify the authority to use the K8S cluster/namespace when performing deployment in the build pipeline.
To modify the K8S authentication information, follow the procedure below.
On the Deployment Details page, click the Edit Authentication Information icon to the right of K8S Cluster/Namespace. The Edit Authentication Information popup window will open.
The authentication information is fixed to the account of the logged-in user. Click the Save button to modify it.
In the confirmation popup window, click the Confirm button.
The K8S authentication information will be changed to the logged-in user.
Rollback workload
To roll back the workload to a previous image, follow the steps below.
On the Main page, click the Project card. It navigates to the Project Dashboard page.
In the left menu, click the Build/Deploy > Kubernetes Deploy menu. You will be taken to the Kubernetes Deploy page.
On the Kubernetes Deploy page, click the name of the workload you want to roll back. You will be taken to the Workload Details page.
On the Workload Details page, click the Details tab.
In the Details tab list, click the Rollback button in the row containing the image you want to roll back. The Rollback popup window opens.
In the Rollback popup window, click the button for the desired method to complete the rollback:
Recreate
Rolling Update
Adding Deployment Result Recipients
To add a deployment result recipient, follow the steps below.
Click the Deployment Result Recipients tab on the Workload Details page.
Click the Add button on the Deployment Result Recipients tab. The Add Deployment Result Recipient popup window opens.
In the Add Deployment Result Recipient popup window, select the target and click the Confirm button to complete adding the recipients.
Click the Search button or click an entry in the list to add the recipient to the bottom of the popup window.
Added recipients can be removed by clicking the X icon on the right.
Delete workload
To delete the workload, follow the steps below.
On the Kubernetes Deployment page, click the X icon of the workload you want to delete.
In the confirmation popup, click the Confirm button to complete the deletion.
Note
Workloads are not deleted from the actual cluster. To delete one from the cluster, use the method originally used to deploy the workload.
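For example, if the workload was originally deployed as a plain Deployment with kubectl, it could be removed from the cluster as follows; the workload and namespace names are hypothetical:

```shell
kubectl delete deployment my-workload -n my-namespace
```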
12.2.4.2.3 - Blue/Green Deployment
Users can perform blue/green deployments using Ingress or Service.
Adding a blue/green deployment means creating a new K8S Ingress or K8S Service that can replace two Helm releases with each other.
Only two Helm releases with the same Project, Cluster, Namespace, Release Type, Chart Name, Chart Version, and Development Classification can be linked.
Getting Started with Blue/Green Deployment
To start using blue/green deployment, follow these steps:
Click the Project card on the Main page. You will be moved to the Project Dashboard page.
Click the Build/Deployment > Kubernetes Deployment menu on the left. You will be moved to the Kubernetes Deployment page.
Item
Description
Name
Displays the name of the blue/green deployment. Click to view detailed information.
Cluster/Namespace
Displays the cluster/namespace where the deployment is deployed.
(Operation)
Displays information about the currently operating Helm release.
(Pre-operation)
Displays the Helm release that will be the next version of the operation. When using blue/green replacement, the pre-operation becomes the operation.
On the Kubernetes Deployment page, click the name of the blue/green deployment you want to start in the deployment list. You will be moved to the Deployment Details page.
Adding Blue/Green Deployment
To add a blue/green deployment, follow these steps:
Note
To add a blue/green deployment, you need two Helm releases installed using the same chart.
Click the Project card on the Main page. You will be moved to the Project Dashboard page.
Click the Build/Deployment > Kubernetes Deployment menu on the left. You will be moved to the Kubernetes Deployment page.
Click the Add Blue/Green Deployment button at the top right of the Kubernetes Deployment page. The Add Blue/Green Deployment popup window will open.
Enter the information in the Add Blue/Green Deployment popup window and click the Save button to complete the addition of the blue/green deployment.
Item
Description
Classification
Select development or operation.
Depending on the development/operation classification, the actions that can be performed by each role are different. (Table. Authority by Role in Project (2))
Blue/Green Deployment Name
Enter the deployment name.
Operation
Select the release and Jenkins job:
Release Name: Select the name of the currently operating Helm release from the list.
Jenkins Job: Select the Jenkins job to build/deploy the selected Helm release from the list.
Pre-operation
Select the release and Jenkins job:
Release Name: Select the name of the Helm release to be reflected in the next version of the operation from the list.
Jenkins Job: Select the Jenkins job to build/deploy the selected Helm release from the list.
K8S Cluster/Namespace
Displays the K8S cluster/namespace where the Helm release is installed.
Type Classification
Choose whether to use Ingress or Service to switch blue/green.
New Classification
Choose whether to create a new Ingress or Service or use an existing one.
Name
Enter the name:
New: Enter the name of the Ingress or Service.
Existing: Select an existing Ingress or Service from the list.
Service (Operation)
Select the Kubernetes Service related to the currently operating Helm release from the list.
Service (Pre-operation)
Select the Kubernetes Service related to the Helm release to be reflected in the next version of the operation from the list.
Rules
Enter the information to be used in Ingress.
Table. Input Items for Ingress Type when Adding Blue/Green Deployment
Item
Description
Type
Select the type of Kubernetes Service from the list:
ClusterIP
NodePort
LoadBalancer
Deployment (Operation)
Select the Kubernetes Deployment related to the currently operating Helm release from the list.
Deployment (Pre-operation)
Select the Kubernetes Deployment related to the Helm release to be reflected in the next version of the operation from the list.
Ports
Enter the information to be used in Service.
Table. Input Items for Service Type when Adding Blue/Green Deployment
Switching Blue/Green
To switch blue/green, follow these steps:
Click the Blue/Green Switch button on the Deployment Details page. The Blue/Green Switch popup window will open.
Click the Confirm button in the Blue/Green Switch popup window to complete the blue/green switch.
The Helm releases for operation and pre-operation are swapped with each other.
The switch history is added.
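For the Service type, the switch amounts to repointing the Service's selector from the operating release's Pods to the pre-operation release's Pods. This is a minimal sketch; the Service name, namespace, and selector label are hypothetical:

```shell
# Repoint the shared Service at the pre-operation ("green") release's Pods
kubectl patch service my-bluegreen-svc -n my-namespace \
  -p '{"spec":{"selector":{"app.kubernetes.io/instance":"my-release-green"}}}'
```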
Checking YAML of Ingress or Service Used in Blue/Green Deployment
To check the YAML of the Ingress or Service used in blue/green, follow these steps:
Click the View icon of Ingress YAML or Service YAML on the Deployment Details page. The Ingress YAML or Service YAML popup window will open.
Check the contents in the Ingress YAML or Service YAML popup window and click the Confirm button to close.
Managing Jenkins Job of Blue/Green Deployment
Viewing Jenkins Job Log
To view the Jenkins job log, follow these steps:
Click the Log button in the Jenkins job item of the desired release on the Deployment Details page. The Pipeline Log popup window will open.
Check the log in the Pipeline Log popup window and click the Confirm button.
Running Jenkins Job
To run a Jenkins job, follow these steps:
Click the Run button in the Jenkins job item of the desired release on the Deployment Details page. The Pipeline Run Parameter Input popup window will open.
Enter or select each item in the Pipeline Run Parameter Input popup window and click the Confirm button to complete the Jenkins job run.
Modifying Blue/Green Deployment
To modify a blue/green deployment, follow these steps:
Click the Modify button on the Deployment Details page. The Modify Blue/Green Deployment popup window will open.
Modify the desired items in the Modify Blue/Green Deployment popup window and click the Save button to complete the modification.
Deleting Blue/Green Deployment
To delete a blue/green deployment, follow these steps:
Click the Delete button on the Deployment Details page. The Delete Blue/Green Deployment popup window will open.
Select whether to execute the Ingress/Service Deletion Command and click the Confirm button to complete the deletion.
Note
Ingress/Service Deletion Command
Selected: The Ingress or Service used in the blue/green deployment is actually deleted from the cluster.
Not selected: The Ingress or Service used in the blue/green deployment is not deleted and remains in the cluster.
12.2.4.2.4 - Canary Deployment
Users can add canary deployments.
Canary addition refers to setting up two Helm releases to be bundled for canary testing.
Project, Cluster, Release Type, Chart Name, Chart Version, and Development Classification must all be the same for two Helm releases to be bundled.
Getting Started with Canary Deployment
To start using canary deployment, follow these steps:
Click the Project card on the Main page to go to the Project Dashboard page.
Click Build/Deployment > Kubernetes Deployment in the left menu to go to the Kubernetes Deployment page.
Item
Description
Name
Displays the canary name. Click to view detailed information.
Operation
Release Name: Select the name of the currently operating Helm release from the list.
Jenkins Job: Select the Jenkins job for building and deploying the currently operating Helm release from the list.
Canary
Release Name: Select the name of the next operating Helm release from the list.
Jenkins Job: Select the Jenkins job for building and deploying the next operating Helm release from the list.
K8S Cluster
Displays the K8S cluster where the Helm release is installed.
Ingress Annotation
Enter the annotation item you want to apply from the canary annotation provided by nginx-ingress.
Table. Add Canary Input Items
Note
For detailed guidance on each item of nginx-ingress ingress annotations, see the next page.
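For orientation, a canary Ingress using nginx-ingress canary annotations might look like the following sketch (hypothetical names; the annotation keys are the standard ingress-nginx ones):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary                                   # hypothetical Ingress name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"         # marks this Ingress as the canary
    nginx.ingress.kubernetes.io/canary-weight: "20"    # route 20% of traffic to the canary
spec:
  rules:
    - host: myapp.example.com                          # same host as the operating Ingress
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-canary-svc                 # Service of the canary release
                port:
                  number: 80
```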
Checking Ingress YAML of Canary
To check the operating Ingress YAML and canary Ingress YAML used by the canary, follow these steps:
Click the View icon for Operating Ingress YAML or Canary Ingress YAML on the Deployment Details page. The Ingress YAML popup window opens.
Check the contents in the Ingress YAML popup window and click the OK button to close it.
Managing Jenkins Job of Canary
Viewing Jenkins Job Log
To view the Jenkins job log, follow these steps:
Click the Log button for the Jenkins job item of the desired release on the Deployment Details page. The Pipeline Log popup window opens.
Check the log in the Pipeline Log popup window and click the OK button.
Running Jenkins Job
To run a Jenkins job, follow these steps:
Click the Run button for the Jenkins job item of the desired release on the Deployment Details page. The Pipeline Run Parameter Input popup window opens.
Enter or select each item in the Pipeline Run Parameter Input popup window and click the OK button to complete running the Jenkins job.
Modifying Canary
To modify a canary, follow these steps:
Click the Modify button on the Deployment Details page. The Modify Canary popup window opens.
Modify the desired items in the Modify Canary popup window and click the Save button to complete the modification.
Recovering Canary
If changes to a Helm release cause the Ingress annotations of the operating and canary releases to stop working correctly, use Recover to restore them.
To recover a canary, follow these steps:
Click the Recover button on the Deployment Details page. The Recover popup window opens.
Click the OK button in the Recover popup window to complete the recovery.
Terminating (Deleting) Canary
To terminate a canary, follow these steps:
Click the Terminate button on the Deployment Details page. The Terminate Canary popup window opens.
Select the desired items in the Terminate Canary popup window and click the OK button to complete the termination.
Item
Description
Canary Release
Select the canary release
Ingress Host Reversion: Reverts the ingress host of the Helm release used for the canary to its original value.
Helm Release Deletion: Deletes the Helm release used for the canary.
Operation Release
Select the operation release
Canary Image Upgrade: Upgrades the operation Helm release to the image used by the canary. You can modify the Values.yaml file.
No Action: Terminates the canary without changing the operation Helm release.
Table. Canary Termination Selection Items
12.2.4.2.5 - Istio
Note
For a detailed guide on Istio, please refer to the next page.
The list of Istio Traffic management Objects supported by DevOps Console is as follows:
Gateway
Virtual Service
Destination Rule
Getting Started with Istio
To start using Istio, follow these steps:
On the Main page, click the Project card to go to the Project Dashboard page.
In the left menu, click Build/Deploy > Kubernetes Deploy to go to the Kubernetes Deploy page.
Item
Description
Name
Displays the Istio name. Click to view detailed information.
Cluster/Namespace
Displays the deployed cluster/namespace.
Delete
Deletes the current item.
Table. Kubernetes Deploy page Istio card item
On the Kubernetes Deploy page, click the name of the Istio you want to use to go to the Istio Details page.
Adding Istio
To add Istio, follow these steps:
On the Main page, click the Project card to go to the Project Dashboard page.
In the left menu, click Build/Deploy > Kubernetes Deploy to go to the Kubernetes Deploy page.
On the Kubernetes Deploy page, click the Add Istio button in the top right corner. The Add Istio popup window opens.
In the Add Istio popup window, enter the information and click the Save button to add Istio.
Item
Description
Type
Select development or operation.
K8S Cluster
Select a K8S cluster.
Namespace
Select a namespace. Only namespaces that can use Istio are displayed in the list.
Table. Add Istio input items
Note
Adding Istio does not yet create any Istio objects, so the Istio-related functions are not available until objects are created.
On the Istio Details page, click on the Istio Objects tab.
On the Istio Objects tab, click the Create Wizard button. Move to the Create Wizard page.
Helm Release
This is the step to select the Helm release to be used in Istio.
On the Create Wizard page, click the Add button to select all Helm releases to be used in Istio.
Click the Start button. The Gateway screen appears.
Gateway
Istio Gateway is the front-end object that receives traffic from outside.
On the Create Wizard page, on the Gateway screen, select whether to create a Gateway.
If you want to create a Gateway, enter each item.
If you do not want to create a Gateway, select Skip object creation.
Click the Next button. The Destination Rule screen appears.
Item
Description
Name Prefix
Specify the prefix name for the Istio Gateway object to be created.
Host
Specify the domain of the Gateway object to be accessed from outside.
Table. Create Wizard Gateway input items
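The items above correspond to fields of an Istio Gateway object; a minimal sketch with hypothetical names:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: myapp-gateway            # built from the Name Prefix item; hypothetical
spec:
  selector:
    istio: ingressgateway        # default label of the Istio ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "myapp.example.com"    # the Host item above
```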
Destination Rule
Destination Rule defines the traffic policy of Istio.
On the Create Wizard page, on the Destination Rule screen, select whether to create a Destination Rule.
If you want to create a Destination Rule, enter each item.
If you do not want to create a Destination Rule, select Skip object creation.
Click the Next button. The Virtual Service screen appears.
Item
Description
Name Prefix
Enter the prefix name for the Istio Destination Rule object to be created.
Load Balancer
Select the load balancer method.
ROUND_ROBIN: Round Robin
LEAST_CONN: Requests are sent to the host with the fewest active connections
RANDOM: Random
Max Connections
Enter the maximum number of allowed connections.
Table. Create Wizard Destination Rule input items
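The items above map onto a Destination Rule's traffic policy; a sketch with hypothetical names:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp-dr                   # built from the Name Prefix item; hypothetical
spec:
  host: myapp-svc                  # target service; hypothetical
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN          # or LEAST_CONN / RANDOM, per the Load Balancer item
    connectionPool:
      tcp:
        maxConnections: 100        # the Max Connections item above
```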
Virtual Service
Virtual Service plays a role in routing incoming traffic to a service.
On the Create Wizard page, on the Virtual Service screen, enter each item if you want to create a Virtual Service.
Click the Complete button to complete the creation of the Istio object using the Create Wizard.
Item
Description
Name Prefix
Enter the prefix name for the Istio Virtual Service to be created.
Prefix-Uri
Enter the prefix URI to route traffic to the corresponding URI.
Helm Release Weight
If there are multiple Helm releases, enter the routing weight for each. The weights must sum to 100.
Table. Create Wizard Virtual Service input items
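The items above map onto a Virtual Service route; a sketch with two Helm releases and hypothetical names:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-vs                       # built from the Name Prefix item; hypothetical
spec:
  hosts:
    - "myapp.example.com"
  gateways:
    - myapp-gateway                    # Gateway created in the previous step
  http:
    - match:
        - uri:
            prefix: /api               # the Prefix-Uri item above
      route:
        - destination:
            host: myapp-v1-svc         # Service of the first Helm release
          weight: 80                   # Helm Release Weight; weights must sum to 100
        - destination:
            host: myapp-v2-svc         # Service of the second Helm release
          weight: 20
```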
Using Object Addition to Add Istio Objects
On the Istio Details page, click on the Istio Objects tab.
On the Istio Objects tab, click the Object Addition button. The Object Addition popup window opens.
In the Object Addition popup window, enter each item and click the Save button to complete the addition of the Istio object.
Item
Description
Object
Select the object to be created
Gateway
Virtual Service
Destination Rule
Input
The input items vary depending on each object. Refer to the Create Wizard for input.
Gateway
Virtual Service
Destination Rule
Generate
Click the Generate button to generate a base YAML in the YAML area from the inputs above.
YAML
Modify the base YAML to complete the final YAML of the object you want to create.
Save
Click the Save button to create the object.
Table. Object addition to add Istio object screen items
Modifying Istio Objects
To modify an Istio object, follow these steps:
On the Istio Details page, click on the Istio Objects tab.
On the Istio Objects tab, click on the object you want to modify in the Istio object list. The Object popup window opens.
In the Object popup window, modify the Yaml and click the Save button to complete the modification of the Istio object.
Deleting Istio Objects
To delete an Istio object, follow these steps:
On the Istio Details page, click on the Istio Objects tab.
On the Istio Objects tab, click on the object you want to delete in the Istio object list. The Object popup window opens.
In the Object popup window, click the Delete button to complete the deletion of the Istio object.
Helm Release
Adding Helm Release
To add a Helm release used in Istio, follow these steps:
On the Istio Details page, click on the Helm Release tab.
On the Helm Release tab, click the Add button. The Add Helm Release popup window opens.
In the Add Helm Release popup window, enter each item and click the Save button to complete the addition of the Helm release.
Deleting Helm Release
To delete a Helm release used in Istio, follow these steps:
On the Istio Details page, click on the Helm Release tab.
On the Helm Release tab, select the checkbox of the Helm release you want to delete and click the Delete button.
In the confirmation popup window, click the Confirm button to complete the deletion of the Helm release.
Deleting Istio
To delete Istio, follow these steps:
On the Istio Details page, click the Delete button at the bottom right.
In the confirmation popup window, click the Confirm button to complete the deletion.
12.2.4.2.6 - ArgoCD
ArgoCD is a software used for GitOps deployment in a Kubernetes environment. Users can set up ArgoCD deployment when creating a project or through the Kubernetes deployment menu.
Getting Started with ArgoCD
To view the details of an ArgoCD application, follow these steps:
Click the Project card on the Main page. You will be taken to the Project Dashboard page.
Click the Build/Deploy > Kubernetes Deploy menu on the left. You will be taken to the Kubernetes Deploy page.
Click the ArgoCD Application Card you want to start. You will be taken to the Deploy Details page.
Category
Item
Description
ArgoCD Application
Name
Displays the ArgoCD project name/ArgoCD URL.
Git Information
Displays the Git information used for deployment.
Cluster/Namespace
Displays the deployed cluster/namespace.
App Status
Displays the current status of the app.
Deployment Result
Displays the current sync status.
Deployment Time
Displays the deployment execution time.
Refresh
Clicking the Refresh icon refreshes the current item. The items that change are App Status and Deployment Result.
Delete
Deletes the current item.
Deployment Details
Edit ArgoCD App Secret
Clicking the Edit Authentication Information icon on the right side of the application name opens a popup window to manage the image secret used for deployment.
Go to ArgoCD App
The Application Details screen of the actual ArgoCD tool opens in a new window.
Table. ArgoCD Application Card and Deployment Details Items
Adding an ArgoCD Application
Creating a New ArgoCD Application using Helm Chart
To create a new ArgoCD application using a Helm chart and add it, follow these steps:
Click the Project card on the Main page. You will be taken to the Project Dashboard page.
Click the Build/Deploy > Kubernetes Deploy menu on the left. You will be taken to the Kubernetes Deploy page.
Click the Create New ArgoCD App button at the top right. You will be taken to the Create New ArgoCD App page.
Enter the ArgoCD Information and click the Confirm button. The Authentication Information input field will appear.
Enter the Authentication Information and click the Connection Test button.
The Application Basic Information and Deployment Target K8S Cluster input fields will appear.
Enter the Application Name and click the Duplicate Check button.
Enter the Project Name.
Select Helm Chart to Create a New Repository as the Repository Type.
Select the K8S Cluster and Namespace.
Select the Helm Chart. The Helm Chart information and Helm Chart Git Repository information will appear.
Modify the contents of the Values.yaml file located in the K8S Cluster Values of the Helm Chart and click the Validation Check button.
Enter the Helm Chart Git Repository information and click the Connection Test button.
Enter the Manifest Keys information.
Click the Create button to complete the creation.
Note
The Deploy Strategy in the Manifest File and Key Information is not a required input value.
Item
Description
URL Input Method
Select the URL input method.
Select from the list: The registered ArgoCD tool appears.
Direct Input
ArgoCD URL
Enter the ArgoCD URL and click the Confirm button.
Authentication Information
Enter the authentication information and click the Connection Test button.
Application Name
Enter the ArgoCD application name and click the Duplicate Check button.
Project Name
Enter the project name of the ArgoCD application.
Repository Type
Select the repository type.
Create a new repository using Helm Chart: Create a Git repository using Helm Chart for GitOps.
Use an existing Git repository
K8S Cluster
Select the target cluster for deployment.
Only DevOps Console K8S clusters to which you have access rights can be selected.
Namespace
Select the target namespace for deployment.
Only namespaces in the selected cluster to which you have access rights can be selected.
Helm Chart
Select the Helm chart.
Helm Chart Git Repository
Enter the information of the Git repository used for Helm Chart.
Manifest Key Information
Enter the information for continuous deployment (Manifest file/Key information).
Table. Create New ArgoCD Application - Helm Chart Creation Settings
Creating a New ArgoCD Application using an Existing Git Repository
To create a new ArgoCD application using an existing Git repository and add it, follow these steps:
Click the Project card on the Main page. You will be taken to the Project Dashboard page.
Click the Build/Deploy > Kubernetes Deploy menu on the left. You will be taken to the Kubernetes Deploy page.
Click the Create New ArgoCD App button at the top right. You will be taken to the Create New ArgoCD App page.
Enter the ArgoCD Information and click the Confirm button. The Authentication Information input field will appear.
Enter the Authentication Information and click the Connection Test button.
The Application Basic Information and Deployment Target K8S Cluster input fields will appear.
Enter the Application Name and click the Duplicate Check button.
Enter the Project Name.
Select Use an Existing Git Repository as the Repository Type.
Select the K8S Cluster and Namespace.
Enter the information of the Git Repository where the Manifest for the ArgoCD application is stored and click the Connection Test button.
Enter the Manifest Keys information.
Click the Create button to complete the creation.
Note
The Deploy Strategy in the Manifest File and Key Information is not a required input value.
Item
Description
URL Input Method
Select the URL input method.
Select from the list: The registered ArgoCD tool appears.
Direct Input
ArgoCD URL
Enter the ArgoCD URL and click the Confirm button.
Authentication Information
Enter the authentication information and click the Connection Test button.
Application Name
Enter the ArgoCD application name and click the Duplicate Check button.
Project Name
Enter the project name of the ArgoCD application.
Repository Type
Select the repository type.
Create a new repository using Helm Chart: Create a Git repository using Helm Chart for GitOps.
Use an existing Git repository
K8S Cluster
Select the target cluster for deployment.
Only DevOps Console K8S clusters to which you have access rights can be selected.
Namespace
Select the target namespace for deployment.
Only namespaces in the selected cluster to which you have access rights can be selected.
Git Repository
Enter the information of the Git repository where the Manifest information for creating the ArgoCD application is stored.
Manifest Keys Information
Enter the information for continuous deployment (Manifest Root path, Manifest type, Manifest file/Key information).
Table. Create New ArgoCD Application - Existing Git Repository Settings
Adding an Existing ArgoCD Application
To add an existing ArgoCD application, follow these steps:
Click the Project card on the Main page. You will be taken to the Project Dashboard page.
Click the Build/Deploy > Kubernetes Deploy menu on the left. You will be taken to the Kubernetes Deploy page.
Click the Add ArgoCD App button at the top right. You will be taken to the Add ArgoCD App page.
Enter the ArgoCD URL and click the Confirm button.
The input screen for the existing application name and authentication information will appear.
Enter the Existing Application Name and Authentication Information and click the Connection Test button.
Note
If the Git repository linked to the ArgoCD application is not registered in the DevOps Console, the URL Check popup window will open. Follow steps 7-8.
The URL Check popup window will open to distinguish the Git repository linked to the ArgoCD application.
Modify the Base URL and click the Confirm button.
The information of the Git Repository linked to the application will appear.
Enter the Git Repository Authentication Information and click the Connection Test button.
Enter the Manifest Keys information.
Click the Save button to complete the Add ArgoCD Application.
Item
Description
URL Input Method
Select the URL input method.
Select from the list: The registered ArgoCD tool appears.
Direct Input
ArgoCD URL
Enter the ArgoCD URL and click the Confirm button.
Application Name / Authentication Information
Enter the Existing Application Name and Authentication Information and click the Connection Test button.
Git Repository Authentication Information
Enter the authentication information of the Git repository used by the selected existing application.
Image Repo Key
Enter the YAML file path and key value where the image repository information is recorded.
Image Tag Key
Enter the YAML file path and key value where the image tag information is recorded.
Image Secret Key
Enter the YAML file path and key value where the image secret information is recorded.
Deploy Strategy Key
Enter the YAML file path and key value where the deployment strategy information is recorded. (Not a required value)
Table. Add Existing ArgoCD Application Settings
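For illustration only (hypothetical file and keys): if the deployment values file looked like the sketch below, the Image Repo Key could point at values.yaml / image.repository, the Image Tag Key at image.tag, and so on.

```yaml
# values.yaml (hypothetical sketch)
image:
  repository: registry.example.com/myapp   # referenced by the Image Repo Key
  tag: "1.0.3"                             # referenced by the Image Tag Key
  pullSecret: myapp-regcred                # referenced by the Image Secret Key
strategy:
  type: RollingUpdate                      # referenced by the Deploy Strategy Key (optional)
```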
Managing ArgoCD Applications
To view the details of an ArgoCD application, follow these steps:
Click the Project card on the Main page. You will be taken to the Project Dashboard page.
Click the Build/Deploy > Kubernetes Deploy menu on the left. You will be taken to the Kubernetes Deploy page.
Click the ArgoCD Application Card you want to start. You will be taken to the Deploy Details page.
Editing Manifest Information
To edit the manifest information, follow these steps:
Click the Git-related Information tab.
Click the Edit icon next to Manifest Information. The Edit Manifest Key Information popup window will open.
Modify the manifest key value and click the Save button.
Editing ArgoCD Authentication Information
To edit the ArgoCD authentication information, follow these steps:
Click the Edit icon next to ArgoCD User ID. The Edit Authentication Information popup window will open.
Modify the authentication information value and click the Save button to complete the modification.
Editing Linked Git Repository Authentication Information
To edit the authentication information of the linked Git repository, follow these steps:
Click the Edit icon next to Linked Git ID. The Edit Authentication Information popup window will open.
Modify the authentication information and click the Save button to complete the modification.
Setting Sync
To change the sync setting, follow these steps:
Click the Sync icon next to Current Sync. The ArgoCD App Sync popup window will open.
Enter the modified contents and click the Sync button to complete the Sync Setting.
Item
Description
Revision
Select the target branch for synchronization.
Sync Options
Select the synchronization options.
Synchronize Resources
Select the synchronization target.
Table. Sync Settings
Note
For more information about ArgoCD synchronization, refer to the official website.
Setting Auto-Sync
To change the Auto-Sync setting of an ArgoCD application, follow these steps:
Click the Edit icon next to Auto-Sync. The Auto-Sync Options popup window will open.
Modify the contents and click the Save button to complete the setting.
Item
Description
Prune Resources
Select whether to delete the synchronization target when the Git setting is deleted.
Self Heal
Select whether to automatically change the value in the cluster to the value defined in Git when the value of the synchronization target is changed in the cluster.
Table. Auto-Sync Settings
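The two options above correspond to the automated sync policy of an ArgoCD Application manifest; a fragment sketch:

```yaml
# Fragment of an ArgoCD Application manifest (sketch)
spec:
  syncPolicy:
    automated:
      prune: true       # Prune Resources: delete resources that were removed from Git
      selfHeal: true    # Self Heal: revert manual cluster changes back to the Git state
```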
Managing Application Secrets
If the ArgoCD application was created using a Helm chart, you can add, modify, or delete application secrets.
To start managing application secrets, follow these steps:
Click the application secret icon to the right of the application name.
The Application Secret Settings popup window will open.
Adding an Application Secret
To add an application secret, follow these steps:
Click the Edit icon next to Application Name. The Application Secret Settings popup window will open.
If you need to add a secret to pull the Chart Image, click the Add button in the Chart Install Secret section.
If you need to add a secret to pull the App Image during build/deployment, click the Add button in the ImagePull Secret section.
Enter the contents and click the Save button to complete the addition.
Item
Description
Registry URL
Select the image to use for ImagePull Secret from the image list registered in the image repository.
Secret
Select the method of entering ImagePull Secret information.
Auto-generation: Automatically generates ImagePull Secret using the authentication information of the selected image repository in the Docker URL.
Use an existing secret: Select one of the existing secrets to use as ImagePull Secret.
Table. Application Secret Addition Settings
Editing an Application Secret
To edit an application secret, follow these steps:
Click the Edit icon next to Application Name. The Application Secret Settings popup window will open.
Click the Name of the secret you want to modify in the application secret list.
Enter the contents and click the Save button to complete the modification.
Deleting an Application Secret
To delete an application secret, follow these steps:
Click the Edit Authentication Information icon next to the Application Name. The Application Secret Settings popup window will open.
Click the Name of the secret you want to delete from the application secret list.
Click the Delete button to complete the deletion.
Comparing Values.yaml Files
If you are using an ArgoCD application with Helm charts, you can compare values.yaml files.
To compare the values.yaml files used for each release, follow these steps:
Click the History tab.
Click the two revisions you want to compare.
Click the App Diff button. The App Diff popup window will open.
Rolling Back an Application
To roll back an application to a previous revision, follow these steps:
Click the History tab.
Click the Rollback button in the rollback column of the revision you want to roll back. The Rollback popup window will open.
Click the Confirm button to complete the rollback.
Deleting an Application
To delete an application, follow these steps:
Click the Delete button at the bottom right. The Application Deletion popup will open.
Click the Confirm button to complete the deletion.
Note
Deletion only occurs in the DevOps Console, and the actual ArgoCD application is not deleted.
12.2.4.3 - VM Deployment
Users can register and manage VM deployments through the DevOps Console. Before registering a VM deployment, an available VM Server Group must exist, and registered VM deployments can be used in the build pipeline (Deploy to VM). Deployment methods are distinguished by icon.
To view the details of a VM deployment, follow these steps.
Click the Project card on the Main page. Move to the Project Dashboard page.
Click the Build/Deployment > VM Deployment menu on the left. Move to the VM Deployment page.
Click the VM deployment you want to view in detail on the VM Deployment page. Move to the VM Deployment Details page.
Click the server list on the History tab. The Deployment Details popup window opens.
You can view detailed history in the Deployment Details popup window.
Click the Log button of the execution pipeline to open the Pipeline Log popup window.
You can view detailed logs in the Pipeline Log popup window.
Click the Log button of the server you want to view logs for in the deployment server history list. The VM Agent Log popup window opens.
You can view detailed logs in the VM Agent Log popup window.
Item
Description
Pause/Start deployment
Pause and Start Deployment buttons are enabled.
Rollback
You can roll back to a previous version.
History tab
Displays deployment history.
Pipeline Information tab
Displays the build pipeline information connected to the VM deployment.
Log
You can view the build pipeline log.
Table. VM deployment details screen items
Item
Description
Execution pipeline log
You can view the build pipeline log.
Deployment server history log
You can view the deployment agent log.
Only visible for AGENT type
Table. Deployment details screen items
Deleting VM Deployment
Deleting from the List
To delete a VM deployment, follow these steps.
Click the Project card on the Main page. Move to the Project Dashboard page.
Click the Build/Deployment > VM Deployment menu on the left. Move to the VM Deployment page.
Click the X icon of the VM deployment you want to delete on the VM Deployment page.
Click the Confirm button in the confirmation popup window to complete the VM deployment deletion.
Deleting from the Details Page
To delete a VM deployment, follow these steps.
Click the Project card on the Main page. Move to the Project Dashboard page.
Click the Build/Deployment > VM Deployment menu on the left. Move to the VM Deployment page.
Click the VM deployment you want to delete on the VM Deployment page. Move to the VM Deployment Details page.
Click the Delete button on the VM Deployment Details page.
Click the Confirm button in the confirmation popup window to complete the VM deployment deletion.
Using Environment Variables in VM Deployment Commands
You can use environment variables in the commands run before and after file deployment, in the source path of files to transfer, and in the target path of files to transfer.
Use $VAR or ${VAR} in a command to reference the environment variables of the build pipeline.
echo ${BUILD_NUMBER}
echo $JOB_NAME
Environment variable usage example
If you want to refer to the environment variables of the VM server where the command is executed, escape the $ with a backslash (\).
echo \${PATH}
echo \$LANG
VM server environment variable reference example
Pausing VM Deployment
You can pause an ongoing VM deployment on the Deployment Details page. Click the Pause button in the recent deployment status section of the Deployment Details page to pause the VM deployment.
The deployment pause is possible in the following states, and the Pause button is only displayed when in these states.
Method
Status value
Description
SSH
Request
Build pipeline is running
AGENT
Request
Build pipeline is running
Build complete
Build pipeline has completed
The manual_deploy parameter must be set to Use when running the build pipeline; if it is not, the deployment changes to the Ready state immediately.
The Start Deployment button is enabled, and clicking it changes the deployment to the Ready state.
The deployment can also be changed to the Ready state using the VM deployment task in release management.
Ready
AGENT can perform deployment
In progress
AGENT is executing deployment
Table. States where deployment pause is possible
Understanding VM Deployment Status Values
You can check the current status of a VM deployment on the Deployment Details page.
Method
Status value
Description
SSH
Not executed
Initially created and never executed
Request
Build pipeline is running
Success
Build/deployment was successful
Failure
Build or deployment failed
AGENT
Not executed
Initially created and never executed
Request
Build pipeline is running
Build complete
Build pipeline has completed
The manual_deploy parameter must be set to Use when running the build pipeline; if it is not, the deployment changes to the Ready state immediately.
The Start Deployment button is enabled, and clicking it changes the deployment to the Ready state.
The deployment can also be changed to the Ready state using the VM deployment task in release management.
Ready
AGENT can perform deployment after build pipeline completion
In progress
AGENT is executing deployment
Success
Build/deployment was successful
Failure
Build or deployment failed
Paused
Build or deployment was paused
The Pause button is enabled; click it to pause the deployment.
Table. Description of deployment status values
12.2.4.4 - Helm Install
Users can use the Helm Install menu to view and install project charts, project group charts, tenant charts, and system charts.
Getting Started with Helm Install
To start using Helm Install, follow these steps:
On the Main page, click the Project card to go to the Project Dashboard page.
In the left menu, click Build/Deploy > Helm Install. The Helm Install page opens.
Item
Description
K8S Cluster
Select the K8S cluster that will be the target for Helm Install. The list of Helm charts below will only show charts that can be installed in the selected K8S cluster.
Chart Name
Displays the chart name.
Chart Repository
Displays the chart repository information where the Helm chart file is stored.
Table. Helm Install screen items
Viewing Helm Chart Details
To view Helm chart details, follow these steps:
On the Main page, click the Project card.
In the left menu, click Build/Deploy > Helm Install. The Helm Install page opens.
In the Helm Install page, select the K8S cluster where you want to install from the K8S cluster item. The list of Helm charts belonging to the cluster appears.
In the Helm Chart list, click the Helm chart card you want to view in detail. The Helm Chart Detail page opens.
Item
Description
Version
If there are multiple versions, you can select the desired version.
Readme Tab
Displays the README.md file included in the Helm chart. You can check the information provided by the chart author.
Values.yaml Tab
Displays the values.yaml file included in the Helm chart. You can check the values that can be changed in the chart before Helm installation.
Detail Info Tab
Item
Description
Helm Chart Repository Information
Displays the repository where the Helm chart is stored.
Api Version
Displays the API version of the Helm chart.
v1
v2
Support CI/CD
Displays whether it is possible to select the Helm chart type as Helm Release in the Deploy to K8S stage when creating a build pipeline.
New Installation Allowed
Y: New installation (Helm Install) is possible.
N: New installation (Helm Install) is not possible with the current Helm chart, and only existing installed Helm releases can be used.
Chart Images
Displays the image information used in the Helm chart.
Table. Detail info tab query items
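The Api Version shown here corresponds to the apiVersion declared in the chart's Chart.yaml file. As a minimal sketch (the chart name and versions below are hypothetical), a v2 chart definition looks like this; apiVersion v2 is used by Helm 3 charts, while v1 denotes legacy Helm 2-era charts:

```yaml
# Chart.yaml — minimal, hypothetical chart definition
apiVersion: v2            # v2 = Helm 3 chart; v1 = legacy Helm 2 chart
name: sample-app          # hypothetical chart name
description: A sample application chart
version: 1.0.0            # chart version shown in the Version selector
appVersion: "1.0.0"       # version of the packaged application
```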
Installing Helm
To install Helm, follow these steps:
On the Helm Install page, select the target K8S cluster from the K8S Cluster item. The list of Helm charts belonging to the cluster appears.
On the Helm Install page, click the card of the Helm chart you want to install. The Helm Chart Detail page opens.
On the Helm Chart Detail page, click the Helm Install button. The Helm Install page opens.
On the Helm Install page, enter each item on the screen and click the Next button. The Helm Chart Installation popup window opens.
In the Helm Chart Installation popup window, check the contents and click the Run button to complete the Helm installation.
Once the installation is complete, the Kubernetes Deployment page opens automatically.
Item
Description
Release Name
Enter the release name to be used in Helm. It must be unique within the namespace.
Type
Select the type: Development or Operation.
Version
Select the version of the chart you want to install.
K8S Cluster
Displays the target K8S cluster where Helm will be installed. This cannot be changed here; to use a different cluster, select it again in Getting Started with Helm Install.
Namespace
Select the target namespace where Helm will be installed from the list.
Reference Information
Reference information provided by the selected K8S cluster. You can check detailed information by clicking each tab.
Default Values.yaml included in the Chart
You can modify the values.yaml content to install Helm with the desired values. If necessary, check the reference information and update values.yaml accordingly.
Table. General Helm chart installation screen items
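As an illustration of editing values.yaml before installation, an override might look like the sketch below. The keys shown are hypothetical and depend entirely on the chart; check the Values.yaml tab of the chart to see which values it actually exposes.

```yaml
# Hypothetical values.yaml overrides edited before Helm installation.
# Actual keys depend on the chart being installed.
replicaCount: 2
image:
  repository: registry.example.com/sample-app   # hypothetical registry path
  tag: "1.0.0"
service:
  type: ClusterIP
  port: 8080
```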
Item
Description
Release Name
Enter the release name to be used in Helm. It must be unique within the namespace.
Type
Select the type: Development or Operation.
Version
Select the version of the chart you want to install.
K8S Cluster
Displays the target K8S cluster where Helm will be installed. This cannot be changed here; to use a different cluster, select it again in Getting Started with Helm Install.
Namespace
Select the target namespace where Helm will be installed from the list.
Reference Information
Reference information provided by the selected K8S cluster. You can check detailed information by clicking each tab.
Default Values.yaml included in the Chart
You can modify the values.yaml content to install Helm with the desired values. If necessary, check the reference information and update values.yaml accordingly.
Input Type
Form: Enter the items displayed on the screen. The Input Type item is only displayed for Helm charts that support form input. Chart authors who want to support form input should see Creating a Helm chart that supports form input.
Values.yaml: Modify the values in the general YAML editor screen.
You can switch between Form and Values.yaml input, but any previously entered content will be reset.
Form Input
Displayed when Form is selected as the input type. Check each item and enter a value, then click the Validation Check button to verify the input.
Table. Form input-supported Helm chart installation screen items
12.2.4.5 - Ingress/Service Management
Users can add and manage Ingress/Service using the DevOps Console.
Getting Started with Ingress/Service Management
To start managing Ingress/Service, follow these steps:
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Build/Deploy > Ingress/Service Management menu in the left menu. Move to the Ingress/Service Management page.
Ingress Management
Click the Ingress tab on the Ingress/Service Management page.
Select K8S Cluster and Namespace in the Ingress tab. The Ingress list belonging to the selected namespace is retrieved.
Note
Not all Ingresses created in the namespace are displayed; only Ingresses created in the DevOps Console appear.
Item
Description
K8S Cluster
Select a K8S cluster from the list.
Namespace
Select a namespace from the list. The Ingress created in the selected namespace is retrieved.
Ingress List
Displays the Ingress list.
Search
You can search for Ingress.
Add
You can add Ingress.
Table. Ingress Management Screen Items
Adding Ingress
To add Ingress, follow these steps:
Click the Add button in the Ingress tab. The Add Ingress popup window opens.
Enter information in the Add Ingress popup window and click the OK button.
Click the OK button in the confirmation popup window to complete adding Ingress.
Item
Description
Ingress Name
Enter the Ingress name.
K8S Cluster
Displays the K8S cluster where Ingress will be created.
Namespace
Displays the namespace where Ingress will be created.
Service
Select a Service from the Service list that Ingress will use as a target.
Rules
Enter Host, Path, and Service Port to be set for Ingress. You can enter multiple items by clicking Add.
Table. Ingress Addition Input Items
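The Host, Path, and Service Port entered in the Rules item correspond to a standard Kubernetes Ingress rule. A sketch of the resulting object, using hypothetical names and values:

```yaml
# Hypothetical Ingress corresponding to the Add Ingress inputs
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress        # Ingress Name
  namespace: sample-ns        # Namespace shown in the popup
spec:
  rules:
    - host: app.example.com   # Host
      http:
        paths:
          - path: /           # Path
            pathType: Prefix
            backend:
              service:
                name: sample-service  # target Service
                port:
                  number: 8080        # Service Port
```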
Ingress Details
To view Ingress details, follow these steps:
Click the Ingress you want to view in the Ingress tab. Move to the Ingress Details page.
Check the detailed information of Ingress on the Ingress Details page.
Click the View icon to open the Ingress YAML popup window.
You can check the Ingress YAML content in the Ingress YAML popup window.
Modifying Ingress
Caution
An Ingress used in blue/green or canary deployment cannot be modified.
To modify Ingress, follow these steps:
Click the Ingress you want to modify in the Ingress tab. Move to the Ingress Details page.
Click the Modify button on the Ingress Details page. The Modify Ingress popup window opens.
Modify the Ingress information in the Modify Ingress popup window and click the OK button.
Click the OK button in the confirmation popup window to complete modifying Ingress.
Deleting Ingress
Caution
An Ingress used in blue/green or canary deployment cannot be deleted.
To delete Ingress, follow these steps:
Click the Ingress you want to delete in the Ingress tab. Move to the Ingress Details page.
Click the Delete button on the Ingress Details page.
Click the OK button in the confirmation popup window to complete deleting Ingress.
Service Management
Click the Service tab on the Ingress/Service Management page.
Select K8S Cluster and Namespace in the Service tab. The Service list belonging to the selected namespace is retrieved.
Note
Not all Services created in the namespace are displayed; only Services created in the DevOps Console appear.
Item
Description
K8S Cluster
Select a K8S cluster from the list.
Namespace
Select a namespace from the list. The Service created in the selected namespace is retrieved.
Service List
Displays the Service list.
Search
You can search for Service.
Add
You can add Service.
External Endpoint Information Icon
If there is additional information, it is displayed.
Table. Service Management Screen Items
Adding Service
To add Service, follow these steps:
Click the Add button in the Service tab. The Add Service popup window opens.
Enter information in the Add Service popup window and click the OK button.
Click the OK button in the confirmation popup window to complete adding Service.
Item
Description
Service Name
Enter the Service name.
K8S Cluster
Displays the K8S cluster where Service will be created.
Namespace
Displays the namespace where Service will be created.
Type
Select the type of Service.
ClusterIP
NodePort
LoadBalancer
Deployment
Select the Deployment that will be the target of Service from the list.
Ports
Enter Port Name, Port, Target, and Protocol used by Service. You can enter multiple items by clicking Add.
Table. Service Addition Input Items
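The Type, Deployment, and Ports inputs correspond to a standard Kubernetes Service definition. A sketch with hypothetical names, where the selector matches the pod labels of the target Deployment:

```yaml
# Hypothetical Service corresponding to the Add Service inputs
apiVersion: v1
kind: Service
metadata:
  name: sample-service        # Service Name
  namespace: sample-ns        # Namespace shown in the popup
spec:
  type: ClusterIP             # Type: ClusterIP, NodePort, or LoadBalancer
  selector:
    app: sample-app           # matches the target Deployment's pod labels
  ports:
    - name: http              # Port Name
      port: 8080              # Port
      targetPort: 8080        # Target
      protocol: TCP           # Protocol
```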
Service Details
To view Service details, follow these steps:
Click the Service you want to view in the Service tab. Move to the Service Details page.
Check the detailed information of Service on the Service Details page.
Click the View icon to open the Service YAML popup window.
You can check the Service YAML content in the Service YAML popup window.
Modifying Service
Caution
A Service used in blue/green deployment cannot be modified.
To modify Service, follow these steps:
Click the Service you want to modify in the Service tab. Move to the Service Details page.
Click the Modify button on the Service Details page. The Modify Service popup window opens.
Modify the Service information in the Modify Service popup window and click the OK button.
Click the OK button in the confirmation popup window to complete modifying Service.
Deleting Service
Caution
A Service used in blue/green deployment cannot be deleted.
To delete Service, follow these steps:
Click the Service you want to delete in the Service tab. Move to the Service Details page.
Click the Delete button on the Service Details page.
Click the OK button in the confirmation popup window to complete deleting Service.
12.2.4.6 - Managing Kubernetes Secrets
Users can view secrets created in a namespace. Moreover, users can create and delete secrets.
Getting Started with Kubernetes Secret Management
To start managing Kubernetes secrets, follow these steps:
On the Main page, click the Project card. It moves to the Project Dashboard page.
In the left menu, click Build/Deploy > Kubernetes Secret Management. It moves to the Kubernetes Secret Management page.
Item
Description
K8S Cluster
Select a K8S cluster from the list.
Namespace
Select a namespace from the list. The secrets created in the selected namespace are retrieved.
Secret List
Displays the list of secrets.
Search
Search for secrets.
Detailed Filter
Use detailed filters for detailed searches.
Add
Add secrets.
Table. Kubernetes Secret Management Screen Items
Adding Secrets
Note
The secrets that can be added in Kubernetes Secret Management are Docker Config Secrets, and other secrets cannot be added on this screen. Docker Config Secret refers to a secret used to store Docker registry access credentials for an image.
To add a Kubernetes secret, follow these steps:
On the Kubernetes Secret Management page, select a K8S Cluster.
Select a Namespace.
Click the Add button. The Add Secret popup window opens.
In the Add Secret popup window, enter the information and click the Connection Test button.
If the Connection Test is successful, click the Save button.
In the confirmation popup window, click the Confirm button to complete the addition.
Item
Description
K8S Cluster
Displays the K8S cluster where the secret will be created.
Namespace
Displays the namespace where the secret will be created.
Image Repository URL
Select the image repository to use for the secret.
Secret Name
Enter the secret name.
Authentication Information Selection
New authentication information: Enter new authentication information.
Saved authentication information: Select one of the previously used authentication information.
Table. Add Secret Input Items
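A Docker Config Secret created this way corresponds to the standard kubernetes.io/dockerconfigjson Secret type, which holds registry credentials for pulling images. A sketch with hypothetical names; the data value is the base64-encoded Docker config containing the registry URL and credentials:

```yaml
# Hypothetical docker-registry Secret corresponding to the Add Secret inputs
apiVersion: v1
kind: Secret
metadata:
  name: sample-registry-secret   # Secret Name
  namespace: sample-ns           # Namespace shown in the popup
type: kubernetes.io/dockerconfigjson
data:
  # base64 of a Docker config JSON, e.g. {"auths":{...}}
  .dockerconfigjson: eyJhdXRocyI6ey4uLn19
```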
Viewing Secret Details
To view secret details, follow these steps:
On the Kubernetes Secret Management page, click the secret for which you want to view detailed information. The Secret Details popup window opens.
In the Secret Details popup window, check the detailed information of the secret.
Deleting Secrets
To delete a secret, follow these steps:
On the Kubernetes Secret Management page, select the checkbox of the secret you want to delete.
Click the Delete button.
In the confirmation popup window, click the Confirm button to complete the deletion.
12.2.4.7 - Environment Variable Management
Users can add frequently used parameters and authentication information as environment variables to use when creating build pipelines.
Getting Started with Environment Variable Management
To manage environment variables used in build pipelines, follow these steps:
On the Main page, click the Project card. Move to the Project Dashboard page.
On the Project Dashboard page, click the Build/Deployment > Environment Variable Management menu in the left menu. Move to the Environment Variable Management page.
Adding Environment Variables
Adding Parameter Environment Variables
Parameter environment variables act like Linux environment variables and are used in the NAME=VALUE format.
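For illustration, parameter environment variables of this kind follow the usual NAME=VALUE convention; the names and values below are hypothetical:

```
# Hypothetical parameter environment variables in NAME=VALUE format
BUILD_PROFILE=staging
MAVEN_OPTS=-Xmx1024m
```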
To add a parameter environment variable, follow these steps:
On the Environment Variable Management page, click the Add button. The Add Environment Variable popup window opens.
In the Add Environment Variable popup window, select Parameter as the type.
Enter the information and click the Save button.
In the confirmation popup window, click the Confirm button to complete the addition.
Adding Authentication Information Environment Variables
Authentication information environment variables store authentication information for use.
By registering and sharing authentication information in advance, it can be reused in stages without entering it repeatedly.
To add an authentication information environment variable, follow these steps:
On the Environment Variable Management page, click the Add button. The Add Environment Variable popup window opens.
In the Add Environment Variable popup window, select Authentication Information as the type.
Enter the information and click the Save button.
In the confirmation popup window, click the Confirm button to complete the addition.
Item
Description
Name
Enter the name of the authentication information.
Classification
Select a classification value
Development: Authentication information can only be used in development-type build pipelines.
Operation: Authentication information can only be used in operation-type build pipelines.
Authentication Type
Select the authentication type.
ID
Enter the ID.
Password/Private Key
Enter the password or private key according to the authentication type.
Description
Enter a description for user reference.
Table. Authentication Information Environment Variable Addition Input Items
Modifying Environment Variables
To modify an environment variable, follow these steps:
On the Environment Variable Management page, click the environment variable you want to modify. The Modify Environment Variable popup window opens.
In the Modify Environment Variable popup window, modify the desired item and click the Save button.
In the confirmation popup window, click the Confirm button to complete the modification.
Deleting Environment Variables
To delete an environment variable, follow these steps:
Deleting from the List
On the Environment Variable Management screen, select the checkbox of the environment variable you want to delete.
On the Environment Variable Management screen, click the Delete button.
In the confirmation popup window, click the Confirm button to complete the deletion.
Deleting from the Environment Variable Modification Popup Window
On the Environment Variable Management screen, click the environment variable you want to delete. The Modify Environment Variable popup window opens.
In the Modify Environment Variable popup window, click the Delete button.
In the confirmation popup window, click the Confirm button to complete the deletion.
12.2.5 - Project Group
12.2.5.1 - Project Group Overview
Users can view and modify project group information based on their permissions.
Project Group Overview
To view the project group overview, follow these steps:
On the Main page, click the Project Group Management icon of the project group. It moves to the Project Group Dashboard page.
Click the Project Group Overview menu in the left menu. The Project Group Overview screen appears.
Modifying Project Group Name
To modify the project group name, follow these steps:
On the Project Group Overview screen, click the Modify Project Group Name button. The Modify Project Group Name popup window opens.
In the Modify Project Group Name popup window, modify the project group name and click the Save button to complete the modification.
Deleting Project Group
To delete a project group, follow these steps:
On the Project Group Overview screen, click the Delete Project Group button. The Delete Project Group popup window opens.
In the Delete Project Group popup window, confirm the path of the Jenkins folder to be deleted and enter the project group name, then click the Delete button.
Click the Confirm button in the confirmation popup window to complete the deletion.
Note
If there are projects within the project group, it cannot be deleted.
A deleted project group cannot be recovered.
All Jenkins folders at the project group level will be deleted, so please confirm before proceeding.
Project Group Members
Getting Started with Project Group Members
To view project group members, follow these steps:
On the Project Group Overview screen, click the Members tab.
Adding Project Group Members
To add members to a project group, follow these steps:
On the Project Group Overview screen, click the Members tab. The Project Group Member List screen appears.
On the Project Group Member List screen, click the Add button. The Add Project Group Member popup window opens.
In the Add Project Group Member popup window, complete the settings and click the Save button to complete the addition.
Item
Description
Add Member
Search for the user to be added as a project group member by email and click the Add button.
Authority Setting
Set the role to be granted to the member to be added
Owner
Master
Developer
Viewer
Member Deletion
Click the X icon to remove a user from the list of members to be added to the project group.
Table. Project Group Member Addition Input Items
Changing Project Group Member Roles
To change the role of a project group member, follow these steps:
On the Project Group Overview screen, click the Members tab. The Project Group Member List screen appears.
On the Project Group Member List screen, find the user whose role you want to change.
Select the desired Project Group Role from the list. The change is saved as soon as it is selected, and the user's project group role is updated.
Deleting Project Group Members
To delete project group members, follow these steps:
On the Project Group Overview screen, click the Members tab. The Project Group Member List screen appears.
On the Project Group Member List screen, select the checkbox of the user to be deleted.
Click the Delete button at the top of the list to delete the selected user from the project group members.
Project Group Common Settings
Getting Started with Project Group Common Settings
To view project group common settings, follow these steps:
On the Project Group Overview screen, click the Common Settings tab. The Common Settings screen appears.
Setting Up Messenger
Setting up a messenger allows you to send event content to the messenger when an event occurs in the project group.
To set up a project group messenger, follow these steps:
On the Project Group Overview screen, click the Common Settings tab. The Common Settings screen appears.
On the Common Settings screen, click the Edit icon of the messenger settings. The Messenger Settings popup window opens.
In the Messenger Settings popup window, complete the settings and click the Save button to complete the setup.
Managing All Project Groups
Note
Displays information about all project groups for which the user has permissions. It provides the same functionality as Project Group Overview, except for Modifying Project Group Name and Deleting Project Group.
Click the Management icon at the top right of the Main page. It moves to the Tenant Dashboard page.
Click the Tenant > Project Group menu in the left menu. It moves to the Project Group page.
12.2.5.2 - Creating a Project Group
Creating a Project Group
To create a project group, follow the procedure below.
On the Main page, click the Create Project Group button at the top. The Create Project Group popup window opens.
In the Create Project Group popup window, enter the items and click the Confirm button to complete the project group creation.
Item
Description
Project Group Name
Enter the project group name to be displayed on the screen.
Project Group ID
Enter the ID of the project group to be managed in DevOps Console. The project group ID is used in build pipelines, etc.
Tenant
Select the tenant where the project group will be created from the list
Tenant Join Shortcut: If the tenant in which you want to create the project group is not shown in the list, click the Tenant Join Shortcut link to request to join it.
Table. Project Group Creation Input Items
Note
Depending on the selected tenant, the approval process of the tenant administrator may be required.
Joining a Tenant
To join a tenant, follow the procedure below.
In the Create Project Group popup window, click the Tenant Join Shortcut link. The Tenant Join Request popup window will open.
In the Tenant Search field, enter the tenant code you want to join exactly and click the Search icon. The tenant information will be retrieved.
Verify that the searched tenant is correct, enter the Reason for Request, and click the Add button. It will be added to the list below.
Select the authority for the tenant added to the list below and click the Save button.
12.2.5.3 - Project Group Dashboard
Getting Started with Project Group Dashboard
To view the project group dashboard, follow these steps.
On the Main page, click the Project Group Management icon of the project group. It moves to the Project Group Dashboard page.
Click the Dashboard menu in the left menu.
Item
Description
Basic Information
Displays basic information of the project group.
Scale and Usage
Displays the organizational scale, tool usage, and build/deployment status of the project group.
Monthly Pipeline Execution Trend
Displays the monthly pipeline execution count, average execution cycle, and execution time within the project group.
Inactive Pipelines
Displays a list of pipelines that have not been executed and have no success history based on the reference date.
Release Execution Trend
Displays the daily release execution count, average lead time, average release cycle, etc. within the project group.
Recent Event History
You can check the recent event history that occurred in the project group.
Table. Project Group Dashboard Items
12.2.6 - Tenant
12.2.6.1 - Tenant Management
A tenant is a logical unit that independently provides tools and app templates and shares them for use in projects.
Each project belongs to a project group, and each project group belongs to a tenant, so projects can access and use the tools and app templates specified for the tenant.
Getting Started with Tenant Management
To start managing tenants, follow these steps:
On the Main page, click the Manage icon in the top right corner. Move to the Tenant Dashboard page.
Click Tenant > Tenant Management in the left menu. Move to the Tenant Management page.
Viewing Tenant Details
To view tenant details, follow these steps:
On the Tenant Management page, click the tenant you want to view. Move to the Tenant Details page.
Managing Tenant Members
To manage tenant members, follow these steps:
On the Tenant Details page, click the Members tab.
Adding Tenant Members
To add tenant members, follow these steps:
On the Tenant Details page, click the Members tab.
Click the Add button on the Members tab. The Add Member popup window opens.
In the Add Member popup window, select the member, set the authority, and click the Save button.
Click the Confirm button in the confirmation popup window to complete.
Deleting Tenant Members
To delete tenant members, follow these steps:
On the Tenant Details page, click the Members tab.
Select the checkbox of the member to be deleted on the Members tab.
Click the Delete button.
Click the Confirm button in the confirmation popup window to complete.
Checking Tenant Tools
To check tenant tools, follow these steps:
On the Tenant Details page, click the Tools tab.
Checking Tenant Member Approval History
To check the approval history of tenant members, follow these steps:
On the Tenant Details page, click the Approval History tab.
Managing Tenant Common Settings
To manage tenant common settings, follow these steps:
On the Tenant Details page, click the Common Settings tab.
Item
Description
Billing
Select whether to bill or not.
Project Group Creation Approval
Select whether to approve project group creation or not
If not used, tenant members can freely create project groups.
Release Deletion
Select whether to delete releases or not
You can set whether to delete completed releases.
Release Cancellation Approval
Select whether to approve release cancellation or not
You can set up internal approval to cancel ongoing releases.
Approval Task - JIRA Field Addition
When connecting a Jira project to an approval task, add custom fields from the Jira project to be included in the approval history.
Email
Enter the recipient’s email address
You can receive inquiries related to the tenant.
Table. Tenant Common Settings Items
Managing Tenant Join Requests
To manage tenant join requests, follow these steps:
On the Main page, click the Manage icon in the top right corner. Move to the Tenant Dashboard page.
Click Tenant > Tenant Management in the left menu. Move to the Tenant Management page.
On the Tenant Management page, click the Join Request link of the tenant. The number displayed is the number of join requests for the tenant.
The Tenant Join Request popup window opens.
In the Tenant Join Request popup window, select the checkbox of the request you want to approve or reject.
Enter your opinion and click the Approve or Reject button.
Managing Project Group Creation Requests
To manage project group creation requests, follow these steps:
On the Main page, click the Manage icon in the top right corner. Move to the Tenant Dashboard page.
Click Tenant > Tenant Management in the left menu. Move to the Tenant Management page.
On the Tenant Management page, click the Project Group Creation Request link of the tenant. The number displayed is the number of project group creation requests.
The Project Group Creation Request popup window opens.
In the Project Group Creation Request popup window, select the checkbox of the request you want to approve or reject.
Enter your opinion and click the Approve or Reject button.
12.2.6.2 - Tenant Dashboard
Users can view various usage trends for each system (tenant) through the dashboard.
Getting Started with Tenant Dashboard
To check the tenant dashboard, follow the procedure below.
On the Main page, click the Management icon in the top right corner. Move to the Tenant Dashboard page.
Click the Tenant > Dashboard menu in the left menu.
Item
Description
Tenant
Select the tenant to display the tenant dashboard.
Scale and Usage
You can check the scale of the organization under the tenant, tool usage, and the number of builds/deployments.
Monthly Pipeline Execution Trend
Displays the monthly pipeline execution count, average execution cycle, and execution time within the tenant.
Inactive Pipelines
Displays a list of pipelines that have not been executed and have no success history based on the reference date.
Release Execution Trend
You can check the execution trend of releases under the tenant.
Tool Usage
You can check the usage of each tool under the tenant. Usage refers to the number of projects, repositories, and pipelines for each tool.
App. Template Usage
You can check the usage of each App. template under the tenant. Usage refers to the number of templates used when creating a project.
Helm Chart Usage
You can check the usage of each Helm chart under the tenant. Usage refers to the number of Helm charts used when installing the chart.
Recent Event History
You can check the recent event history that occurred in the tenant.
Table. Tenant Dashboard Display Items
12.2.6.3 - Tenant Notice
Note
The tenant administrator can add, modify, and delete notices to be displayed to users belonging to the tenant.
Modification and deletion are only possible for the user who registered the notice.
Getting Started with Tenant Notice
To get started with the tenant notice, follow the procedure below.
On the Main page, click the Management icon at the top right. Move to the Tenant Dashboard page.
Click the Tenant > Tenant Notice menu from the left menu. Move to the Tenant Notice page.
Adding a Tenant Notice
To add a tenant notice, follow the procedure below.
On the Tenant Notice page, click the Add button. The Add Notice popup window opens.
In the Add Notice popup window, specify the notice title, notice content, and target tenant, and click the Save button.
Item
Description
Notice Title
Enter the notice title.
Notice Content
Enter the notice body. Tables, lists, and fonts can be specified.
Popup Width/Height
Enter the width and height of the popup to be displayed on the screen.
Target Tenant
Select the tenant that will be the target for the notice. One or more tenants can be specified. Only tenants registered with administrator privileges are retrieved.
Notice Period
Select one of the following depending on the nature of the notice.
Always Notice
Notice only for a specific period
Non-notice
Email Sending Time
Depending on the purpose of the notice, select whether to send it by email immediately, at a specific time, or not at all. This item is not displayed if the system does not use email.
Table. Tenant Notice Addition Input Items
Managing Tenant Notice
Modifying a Tenant Notice
To modify a tenant notice, follow the procedure below.
On the Tenant Notice page, click on the notice. The Notice Details popup window opens.
In the Notice Details popup window, click the Modify button. The Modify Notice popup window opens.
In the Modify Notice popup window, modify the information and click the Save button.
Click the Confirm button in the confirmation popup window to complete the modification.
Deleting a Tenant Notice
To delete a tenant notice, follow the procedure below.
On the Tenant Notice page, click on the notice. The Notice Details popup window opens.
In the Notice Details popup window, click the Delete button.
Click the Confirm button in the confirmation popup window to complete the deletion.
Previewing a Tenant Notice
To preview a tenant notice, follow the procedure below.
On the Tenant Notice page, click the Preview button of the notice.
The notice popup window opens in the actual display size.
After confirmation, click the Confirm button.
12.2.7 - Repository
12.2.7.1 - Code Repository
Users can view the list of code repositories used in the project and add new repositories from the code repository menu in the project.
Getting Started with Code Repository
To start using the code repository, follow these steps:
On the Main page, click the Project card. Move to the Project Dashboard page.
Click the Repository > Code Repository menu from the left menu. Move to the Code Repository page.
Adding a Code Repository
To add a code repository, follow the procedure below.
On the Code Repository page, click the Add Code Repository button in the top right corner. You will be taken to the Add Code Repository page.
On the Add Code Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type
Select the repository to use
Registered Tool: You can select and use the types of SCM Repository tools available to the user (GitHub, GitLab, etc.).
DevOps Code: Available if you have applied for DevOps Code use in the Samsung Cloud Platform Console.
Unregistered Tool: You can use it by entering the domain of an unregistered tool. The unregistered tool item only appears when the App template is Environment Only (without source code).
New/Existing Usage
Select Create New Repository or Use Existing Repository
When creating a new repository, the URL is composed of the project group name/project name.
Authentication Information
Enter authentication information.
Repository Information
Enter repository information
You can use a code repository that is not registered as a tool in the DevOps Console.
An additional URL check process is required.
Table. Add Code Repository Input Items
Managing Code Repositories
Code Repository List
Item
Description
User Permission Settings
User Permission Settings popup window opens.
Webhook Settings
Webhook Settings popup window opens.
Edit Icon
Authentication Information Modification popup window for the code repository opens.
Delete
Deletes the code repository. When deleting, you can choose whether to delete the code repository in SCM as well.
Table. Code Repository List Screen Items
Adding an Account to a Code Repository
To add an account to a code repository, follow these steps:
On the Code Repository page, click the User Permission Settings icon for the code repository you want to set up. The User Permission Settings popup window opens.
In the User Permission Settings popup window, enter the authentication information for the account you want to add, and then click the Save button to complete adding the account.
Setting up a Webhook for a Code Repository
You can set up a webhook to run a pipeline when changes such as commit, push occur in a code repository branch.
To set up a webhook, follow these steps:
On the Code Repository page, click the Webhook Settings icon for the code repository you want to set up. The Webhook Settings popup window opens.
In the Webhook Settings popup window, click the Add button.
In the Webhook Settings popup window, select the pipeline you want to run, enter the branch name, and then click the Save button to complete setting up the webhook.
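The mapping configured above (pipeline + branch) determines which pipeline runs when a push event arrives. The following sketch is purely illustrative of that matching logic; the function and field names are hypothetical and are not part of the DevOps Console API:

```python
# Illustrative only: how a webhook-to-pipeline mapping like the one
# configured above might be evaluated when a push event arrives.
# The names "pipeline" and "branch" mirror the popup's input fields.

def pipelines_to_run(webhooks, event_branch):
    """Return the pipelines whose webhook is registered for the pushed branch."""
    return [w["pipeline"] for w in webhooks if w["branch"] == event_branch]

webhooks = [
    {"pipeline": "build-app", "branch": "main"},
    {"pipeline": "build-dev", "branch": "develop"},
]

print(pipelines_to_run(webhooks, "main"))  # ['build-app']
```

A push to a branch with no registered webhook simply triggers nothing.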
Changing an Account in a Code Repository
To change an account in a code repository, follow these steps:
On the Code Repository page, click the Edit icon for the code repository you want to change. The Authentication Information Modification popup window opens.
In the Authentication Information Modification popup window, enter the authentication information and then click the Save button to complete changing the account.
Deleting a Code Repository
To delete a code repository, follow these steps:
On the Code Repository page, click the X icon for the code repository you want to delete. The Code Repository Deletion popup window opens.
In the Code Repository Deletion popup window, select Delete the repository in SCM as well and then click the Confirm button to complete deleting the code repository.
Note
Delete the repository in SCM as well
Selected: Both the code repository list and the actual code repository in SCM are deleted.
Not selected: Only the code repository list is deleted, and the code repository in SCM remains.
12.2.7.2 - Artifact Repository
Users can create and use artifact repositories in conjunction with Nexus and view the repositories that have been created.
Getting Started with Artifact Repository
To start managing the artifact repository, follow these steps.
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
Click the Repository > Artifact Repository menu in the left menu. You will be taken to the Artifact Repository page.
Adding an Artifact Repository
To add an artifact repository, follow these steps:
On the Artifact Repository page, click the Add Artifact Repository button in the top right corner. You will be taken to the Add Artifact Repository page.
On the Add Artifact Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Creation Option
Select whether to create a new repository or use an existing one.
Basic Information Input
Enter Base URL, select repository type, and enter repository/authentication information.
Table. Input Items for Adding an Artifact Repository
Managing Artifact Repository
Artifact Repository List
Item
Description
Account Change
Modifies the artifact repository authentication information.
Delete
Deletes the artifact repository.
Table. Artifact Repository List Screen Items
Changing Artifact Repository Account
To change the account of an artifact repository, follow these steps.
On the Artifact Repository page, click the Edit icon of the artifact repository you want to change. The Authentication Information Modification popup window opens.
In the Authentication Information Modification popup window, enter the authentication information and click the Save button to complete the account change.
Deleting an Artifact Repository
To delete an artifact repository, follow these steps.
On the Artifact Repository page, click the X icon of the artifact repository you want to delete. The Artifact Repository Deletion popup window opens.
In the Artifact Repository Deletion popup window, select Delete the repository in Nexus as well and click the Confirm button to complete the deletion of the artifact repository.
Note
Delete the repository in Nexus as well
Selected: Both the artifact repository list and the Nexus repository are deleted.
Not selected: Only the artifact repository list is deleted, and the Nexus repository remains.
12.2.7.3 - Image Repository
The user can manage the image repository used in the project from the project’s image repository menu.
Getting Started with Image Repository
To start managing the image repository, follow these steps.
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
In the left menu, click the Repository > Image Repository menu. You will be taken to the Image Repository page.
Adding an Image Repository
To add an image repository, follow these steps.
Adding an App Image Repository
On the Image Repository page, click the App Image Repository Addition button in the top right. You will be taken to the App Image Repository Addition page.
On the App Image Repository Addition page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Item
Description
Repository Type Selection
Select the image repository type. If you want to use an image repository not registered in the DevOps Console, select the Image Registry type.
Repository Creation Selection
Choose whether to create a new repository or use an existing one.
If you selected the Docker Hub or Image Registry type earlier, you can only select Use Existing Repository.
Registered Tool
Enter repository information.
Unregistered Tool
Enter repository information
You can register an image repository that has not been registered as a tool in DevOps Console.
Click the URL Check button to proceed with the verification process.
You can only select Use Existing Repository.
Table. App Image Repository Addition Input Items
Adding a Pull-only Image Repository
On the Image Repository page, click the Add Pull-only Image Repository button at the top right. You will be taken to the Add Pull-only Image Repository page.
On the Add Pull-only Image Repository page, enter/set each item.
Click the Connection Test button.
Click the Save button.
Managing an Image Repository
Image Repository List
Item
Description
Account Change
The Authentication Information Modification popup window for the image repository opens
Delete
The Image Repository Deletion popup window opens
Table. Image Repository List Screen Items
Changing an Image Repository Account
To change the account of an image repository, follow these steps.
On the Image Repository page, click the Edit icon of the image repository to be changed. The Authentication Information Modification popup window opens.
In the Authentication Information Modification popup window, enter the authentication information and click the Save button to complete the account change.
Deleting an Image Repository
To delete an image repository, follow these steps.
On the Image Repository page, click the X icon of the image repository to be deleted. The Image Repository Deletion popup window opens.
In the Image Repository Deletion popup window, select Also delete the repository in IR and click the Confirm button to complete the image repository deletion.
Note
Also delete the repository in IR
Selected: Both the image repository list and the actual repository in the image registry are deleted.
Not selected: Only the image repository list is deleted, and the actual repository in the image registry remains.
12.2.7.4 - Chart Repository
Users can upload and delete Helm charts in the chart repository.
The charts uploaded to the chart repository are used in Adding Helm Charts, for Helm installation or project creation.
Getting Started with Chart Repository
To start managing the chart repository, follow these steps.
On the Main page, click the Manage icon at the top right. You will be taken to the Tenant Dashboard page.
Click the Tools & Templates > Chart Repository menu on the left menu. The Chart Repository screen appears.
Getting Started with Project Chart Repository
Note
You can upload, modify, and delete charts in the chart repository that are available only within the project.
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
Click the Repository > Chart Repository menu in the left menu. You will be taken to the Chart Repository page.
12.2.7.5 - Helm Chart
To start managing Helm Charts, follow these steps:
On the Main page, click the Manage icon at the top right. The Manage page opens.
In the left menu, click Tools & Templates > Helm Chart.
Getting Started with Project Helm Chart
Note
You can register, modify, or delete Helm Charts that are only available within the project.
On the Main page, click the Project card. You will be taken to the Project Dashboard page.
In the left menu, click Repository > Helm Chart. You will be taken to the Helm Chart page.
Adding Helm Chart
Users can add their own Helm Charts.
Before registering a Helm Chart, users must upload the chart to be used through Uploading Charts.
To add a Helm Chart, follow these steps:
On the Main page, click the Manage icon at the top right. The Manage page opens.
In the left menu, click Tools & Templates > Helm Chart. The Helm Chart page opens.
In the Helm Chart Type menu, select the type of Helm Chart. The Add Helm Chart button is only visible if you have registration permissions based on the selected information.
Click the Add Helm Chart button. The Register Helm Chart page opens.
Enter the Helm Chart Basic Information and click the Start button.
Item
Description
Helm Chart Type
Select the type of Helm Chart to add.
Tenant/Project Group
Select the tenant or project group where the Helm Chart will be added.
If you select a tenant, the registered chart can only be used in projects within that tenant.
If you select a project group, the registered chart can only be used in projects within that project group.
Allow New Installation
Select whether to allow new installations through Helm Install
Table. Helm Chart Basic Information Setting Items
Select the Helm Chart Repository and Helm Chart, then click the Validation Check button.
Enter the remaining information and click the Next button.
Repository
Item
Description
ChartMuseum
Helm Chart Repository
Select ChartMuseum as the chart repository.
Chart Selection
Select the chart to register as a Helm Chart from the charts uploaded through Uploading Charts.
Harbor OCI
Helm Chart Repository
Select Harbor OCI as the chart repository.
Authentication Information
Enter the authentication information for the chart repository and click the Connection Test button.
Chart Selection
Select the Helm Chart that can be retrieved using the entered authentication information and click the Validation Check button.
OCI
Helm Chart Repository
Select OCI as the chart repository.
Authentication Information
Enter the authentication information for the chart repository and click the Connection Test button.
Chart Input
Enter the Helm Chart that can be retrieved using the authentication information entered in Authentication Information and click the Validation Check button. (e.g., oci://chart.url/repo/chartname:version)
Common
Icon
Select an icon to represent the Helm Chart.
CI/CD Information
Select whether to support CI/CD functionality.
This indicates whether the Helm Chart can be linked to an App template.
The values.yaml file of the chart must contain the image.repository, image.tag, and imagePullSecrets[0].name keys.
If Support CI/CD is selected, the CI/CD information step is added.
Chart Image
Select whether to register an image.
The values.yaml file of the chart must contain the imagePullSecrets[0].name key.
If No Image is selected, the image secret information step is excluded.
Table. Chart Repository Setting Items
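As described in the CI/CD Information and Chart Image items above, the chart's values.yaml must expose certain keys. A minimal illustrative fragment follows; only the three key paths are required by the options above, and all values shown are hypothetical:

```yaml
# Illustrative values.yaml fragment. The key paths image.repository,
# image.tag, and imagePullSecrets[0].name are the ones required above;
# the example values themselves are hypothetical.
image:
  repository: registry.example.com/myteam/myapp
  tag: "1.0.0"
imagePullSecrets:
  - name: myapp-pull-secret
```

If the chart only needs image registration (Chart Image) and not CI/CD linkage, only imagePullSecrets[0].name is required.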
Select the image repository and enter the authentication information, then click the Connection Test button.
Enter the remaining information and click the Next button.
Item
Description
Image Repository Information
Select the image repository and enter the authentication information.
Administrator privileges are required for the image repository.
The user who will use the Helm Chart to perform Helm Install will be granted Read privileges for the image repository.
The entered authentication information will be used to grant privileges to the image repository.
Add Used Images
Add the image paths from the selected image repository.
Select images from the list: Organization, Repository, and Tag can be selected and added.
User input: The docker repository and docker tag of the image can be entered directly and added.
Multiple inputs are possible.
Use Registrant’s Authentication Information
This option is available when the selected image repository is a tool that cannot grant privileges (e.g., SCR).
If not checked, when a user performs Helm Install, the user will be granted Pull privileges for the used image. If the tool cannot grant privileges, no privileges will be granted.
If checked, the Helm Chart user will use the registrant’s authentication information instead of their own privileges when performing Helm Install.
Be cautious when checking this option:
The registrant’s authentication information may be exposed to the Helm Chart user, so use this option only when necessary.
This option must be selected for image repositories that only provide image pull functionality. A separate image repository for pull purposes only must be entered. (If checked, one image repository cannot be used for both pull and push at the same time.)
The registrant’s authentication information is used in Project > Image Repository > Pull-only Image and Helm Release’s Image Pull Secret. This information cannot be changed by the Helm Chart user, and changes to the registrant’s authentication information in the Helm Chart will be applied universally.
Select Docker Base Image
Select the image to be used as the Docker base image.
This option is only available if Support CI/CD was selected in the previous step.
The selected image will be used as the base image for Docker build.
Table. Image Secret Information Setting Items
Select the supported App template and click the Complete button.
Note
App templates marked as Environment Only provide only build/deployment environments without project sample code.
Modifying Helm Chart
To modify a Helm Chart, follow these steps:
On the Main page, click the Manage icon at the top right. The Manage page opens.
In the left menu, click Tools & Templates > Helm Chart.
In the Helm Chart list, click the Helm Chart card you want to modify. The Helm Chart Details page opens.
Click the Modify button at the bottom right.
Modify the information and click the Save button to complete the modification.
Adding Helm Chart Version
To add a Helm Chart version, the chart with a different version must be registered in advance through Uploading Charts.
To add a Helm Chart version, follow these steps:
On the Main page, click the Manage icon at the top right. The Manage page opens.
In the left menu, click Tools & Templates > Helm Chart.
In the Helm Chart list, click the Helm Chart card you want to modify. The Helm Chart Details page opens.
Click the Add Version button at the bottom right. The Add Helm Chart Version popup opens.
Enter the information and click the Save button to complete the version addition.
Item
Description
Chart Version
Select the chart version to add.
Chart versions that are not registered as Helm Charts in the chart repository are displayed.
Table. Helm Chart Version Addition Setting Items
Deleting Helm Chart
To delete a Helm Chart, follow these steps:
On the Main page, click the Manage icon at the top right. The Manage page opens.
In the left menu, click Tools & Templates > Helm Chart. The Helm Chart page opens.
In the Helm Chart list, click the Helm Chart card you want to delete. The Helm Chart Details page opens.
Click the Delete button at the bottom right.
In the confirmation popup, click the Confirm button to complete the deletion.
12.2.7.5.1 - Creating a Helm Chart that Supports Form Input
Users can create a Helm chart that supports form input.
Note
Only available in Helm 3 or later versions.
Form Input Support Helm Chart
Using a Helm chart that supports form input, users can input each item through a user interface when installing the Helm chart.
Helm Chart File Composition and values.schema.json File
Helm Chart File Composition
To support form input, a values.schema.json file is required in addition to the basic Helm chart file composition.
Figure. Helm Chart Directory Structure
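The directory structure referenced in the figure is the standard Helm chart layout with values.schema.json added alongside values.yaml. A typical sketch (the chart name is illustrative):

```
mychart/
  Chart.yaml          # chart metadata
  values.yaml         # default input values
  values.schema.json  # JSON Schema used for form input and validation
  templates/          # Kubernetes manifest templates
```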
Relationship between values.schema.json and values.yaml Files
Figure. Relationship between values.schema.json and values.yaml files
values.schema.json
A file, defined using JSON Schema, that validates the values entered in the values.yaml file.
DevOps Console provides additional features to display forms on the screen and allow users to easily input values.
JSON Schema Basics
The values.schema.json file used in DevOps Console supports the standard format defined in JSON Schema.
Note
For detailed guides on the standard format, refer to the official JSON Schema documentation (json-schema.org).
ip, hostname, uri, etc.: Input formats provided by JSON Schema
password_confirm: Creates an input field for password confirmation
form_locale
Defined for internationalization processing.
Uses the default property value if the set locale is not available.
Supports Korean (ko) and English (en).
Structured as an object containing ko and en, each with label and description properties.
Table. DevOps Console Defined Items
Hierarchical Processing
To process hierarchical structures, JSON Schema defines the "type": "object" property value and the properties property. Sub-properties are defined under the properties item.
The following example defines the service.internalPort property.
"service": {"type": "object","form": true,"properties": {"internalPort": {"type": "number","title": "Container Port","description": "HTTP port to expose at container level","form": true}<omitted>
"service": {"type": "object","form": true,"properties": {"internalPort": {"type": "number","title": "Container Port","description": "HTTP port to expose at container level","form": true}<omitted>
Hierarchical Processing Example
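The hierarchy above means that values.yaml entries are validated level by level: the object schema for service is descended into, and its internalPort is checked as a number. The following sketch is a deliberately simplified stand-in for real JSON Schema validation (it only handles the "object" and "number" types used in the example), not the DevOps Console's actual validator:

```python
# Simplified, illustrative validator for the hierarchical schema above.
# Real validation follows the full JSON Schema specification; this sketch
# only handles "type": "object" / "number" to show how nesting is walked.

def check(schema, value, path="values"):
    t = schema.get("type")
    if t == "object":
        # Descend into each declared sub-property that is present.
        for key, sub in schema.get("properties", {}).items():
            if key in value:
                check(sub, value[key], f"{path}.{key}")
    elif t == "number":
        if not isinstance(value, (int, float)):
            raise TypeError(f"{path} must be a number, got {type(value).__name__}")

schema = {
    "type": "object",
    "properties": {
        "service": {
            "type": "object",
            "properties": {
                "internalPort": {"type": "number", "title": "Container Port"},
            },
        },
    },
}

check(schema, {"service": {"internalPort": 8080}})      # passes silently
# check(schema, {"service": {"internalPort": "8080"}})  # would raise TypeError
```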
Internationalization Processing
For internationalization processing, use the form_locale property and define it as follows.
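Based on the form_locale structure described earlier (an object containing ko and en, each with label and description), a sketch of such a definition might look like the following. The property name replicaCount and all label/description texts are illustrative, not taken from the product:

```json
{
  "replicaCount": {
    "type": "number",
    "form": true,
    "form_locale": {
      "ko": { "label": "레플리카 수", "description": "생성할 Pod 복제본 수" },
      "en": { "label": "Replica Count", "description": "Number of pod replicas to create" }
    }
  }
}
```

When the console locale is ko, the Korean label and description are shown; otherwise the default property values (or the en entry) are used.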
The tenant or project group type selected when adding a tool indicates the management affiliation.
The following icons distinguish the type of management affiliation:
Management affiliation is a tenant.
Management affiliation is a project group.
Adding a Tool
To add a tool, follow these steps:
On the Tools page, click the Add button. The Add Tool popup window will open.
Selecting Tool Support Type
In the Add Tool popup window, select the tool’s support type.
Complete the selection and click the Next button. The Basic Information input screen will appear.
In the Add Tool popup window, on the Basic Information input screen, select the tool classification and tool. The input screen will vary depending on the selected tool.
If using IDP, select the IDP type (CMP IDP, other IDP).
User Account Authentication Type
Select the authentication type for tool users.
Admin Account Authentication Type
Select the authentication type for tool administrators.
Admin ID
Enter the admin ID for the tool.
Admin Password / Token
Enter the admin password or token for the tool.
Table. CICD Pipeline Input Items
Caution
If the Number of executors item in the Built-In Node of Jenkins system settings is set to 1 or more, tools may not be added due to potential security issues.
The Number of executors item in the Built-In Node must be set to 0 in the Jenkins management menu.
Note: Jenkins officially recommends avoiding build execution on the Controller Node.
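The same setting can also be inspected or changed from the Jenkins Script Console. A sketch, assuming administrator access to the Jenkins instance:

```groovy
// Set the Built-In Node's "Number of executors" to 0 so that no builds
// run on the controller, then persist the change.
Jenkins.instance.setNumExecutors(0)
Jenkins.instance.save()
println(Jenkins.instance.numExecutors)
```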
Select whether to allow creation of new repositories in the tenant or project group.
URL
Enter the URL address to access the tool.
Duplicate URLs cannot be registered.
URL for API
Enter the Docker Hub API path.
Image Repository URL
Enter the URL for Docker Registry use.
Private SSL Certificate Usage
Select whether to use a private SSL certificate.
System Common Image Inclusion
Select whether to include system common images.
IDP Usage
Select whether to use IDP.
User Account Authentication Type
Select the authentication type for tool users.
Admin Account Authentication Type
Select the authentication type for tool administrators.
Admin ID
Enter the admin ID for the tool.
Admin Password / Token
Enter the admin password or token for the tool.
Table. Image Registry Input Items
Code Quality
Item
Description
Tool Name
Enter a tool name for user identification.
Tool Classification
Select the tool classification.
Tool
Select the tool.
New Creation Possible
Select whether to allow creation of new SonarQube projects in the tenant or project group.
URL
Enter the URL address to access the tool.
Duplicate URLs cannot be registered.
Tool Version
Enter the SonarQube version.
IDP Usage
Select whether to use IDP.
User Account Authentication Type
Select the authentication type for tool users.
Admin Account Authentication Type
Select the authentication type for tool administrators.
Admin ID
Enter the admin ID for the tool.
Admin Password / Token
Enter the admin password or token for the tool.
Table. Code Quality Input Items
Artifact Repository
Item
Description
Tool Name
Enter a tool name for user identification.
Tool Classification
Select the tool classification.
Tool
Select the tool.
New Creation Possible
Select whether to allow creation of new Nexus repositories in the tenant or project group.
URL
Enter the URL address to access the tool.
Duplicate URLs cannot be registered.
IDP Usage
Select whether to use IDP.
User Account Authentication Type
Select the authentication type for tool users.
Admin Account Authentication Type
Select the authentication type for tool administrators.
Admin ID
Enter the admin ID for the tool.
Admin Password / Token
Enter the admin password or token for the tool.
Table. Artifact Repository Input Items
Helm Chart Repository
Item
Description
Tool Name
Enter a tool name for user identification.
Tool Classification
Select the tool classification.
Tool
Select the tool.
New Creation Possible
Select whether to allow creation of new repositories in the tenant or project group.
URL
Enter the URL address to access the tool.
Duplicate URLs cannot be registered.
Private SSL Certificate Usage
Select whether to use a private SSL certificate.
Helm Chart Repository URL
Enter the repository URL for the tool.
Duplicate URLs cannot be registered.
IDP Usage
Select whether to use IDP.
User Account Authentication Type
Select the authentication type for tool users.
Admin Account Authentication Type
Select the authentication type for tool administrators.
Admin ID
Enter the admin ID for the tool.
Admin Password / Token
Enter the admin password or token for the tool.
Table. Helm Chart Repository Input Items
Project Management Software
Item
Description
Tool Name
Enter a tool name for user identification.
Tool Classification
Select the tool classification.
Tool
Select the tool.
New Creation Possible
Select whether to allow creation of new JIRA projects in the tenant or project group.
URL
Enter the URL address to access the tool.
Duplicate URLs cannot be registered.
IDP Usage
Select whether to use IDP.
User Account Authentication Type
Select the authentication type for tool users.
Admin Account Authentication Type
Select the authentication type for tool administrators.
Admin ID
Enter the admin ID for the tool.
Admin Password / Token
Enter the admin password or token for the tool.
Table. Project Management Software Input Items
Entering Additional Information
On the Add Tool popup window, on the Basic Information input screen, click the Next button. The Additional Information input screen will appear.
Select each item and click the Complete button.
In the confirmation popup window, click the Confirm button to complete the tool addition.
Item
Description
Usage
Select whether to use the tool in the tenant or project group.
New Creation Possible
Select whether to allow creation of new repositories in the tenant or project group.
Only available for tools with a classification of SCM Repository.
Table. Additional Information Input Items
Tool Details
To manage tool details, follow these steps:
On the Tools page, click the tool for which you want to manage details. You will be taken to the Tool Details page.
Managing Tool Basic Information
To view the tool’s basic information, follow these steps:
On the Tool Details page, click the Basic Information tab.
To modify the tool’s basic information, follow these steps:
On the Tool Details page, click the Basic Information tab.
Click the Modify button.
Modify the necessary information and click the Save button.
Managing Global Tools
Note
The Global Tool tab is only visible if the tool is Jenkins.
This feature allows you to manipulate the Global Tool Configuration menu in the Jenkins web screen from the DevOps Console.
The DevOps Console only supports one-way registration to Jenkins. (In other words, changes made by the user in the DevOps Console will overwrite the information in Jenkins.)
Users can manage the list of tools available in Jenkins, and tools registered in Global Tool can be used in the Tools section when configuring a stage.
To manage global tools, follow these steps:
Click the Global Tool tab on the Tool Details page.
Adding Global Tools
To add a global tool, follow these steps:
Click the Edit icon for the item you want to add in the Global Tool tab. The Global Tool Management popup window will open.
Required Tools are automatically set by the DevOps Console.
Required Tools cannot be deleted, and only the home path can be modified.
Click the Add button. A new row will be added to the bottom of the list.
Enter the information in the new row and click the Save button.
Click the Confirm button in the confirmation popup window to complete the process.
Item
Description
Tool Type
The tool type is automatically set.
Name
Enter the tool name.
Home Path
Enter the path where the tool is installed.
Table. Global Tool Add Input Items
Modifying Global Tools
To modify a global tool, follow these steps:
Click the Edit icon for the item you want to modify. The Global Tool Management popup window will open.
Modify the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the modification.
Deleting Global Tools
To delete a global tool, follow these steps:
Click the Global Tool tab on the Tool Details page. Click the Edit icon for the item you want to delete. The Global Tool Management popup window will open.
Delete the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the deletion.
Managing Agent (Kubernetes)
Note
The Agent (Kubernetes) tab is only visible if the tool is Jenkins.
This feature allows you to manage the agents (slaves) used in Jenkins builds.
The Jenkins web screen menus 1) Jenkins Management > System Settings > Cloud > Pod Templates and 2) Jenkins Management > Node Management > Configure Clouds > Pod Templates can be manipulated from the DevOps Console.
The DevOps Console only supports one-way registration to Jenkins. (In other words, changes made by the user in the DevOps Console will overwrite the information in Jenkins.)
To manage agents (Kubernetes), follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the Information icon in the Pod Template Management list. The Pod Template Usage Guide popup window will open.
Adding Container Resource Types
Guide
Modifying the container resource type will affect the Pod Template.
Changing a container resource type from Not Used to Used will increase the number of Pod Templates by (number of container resource types × number of Pod Templates).
Changing the container resource type from Used to Not Used will decrease the number of Pod Templates back to the original number.
The agent names, labels, etc. of the added Pod Templates are generated automatically by combining in the Resource Type value entered when registering the container resource type, to avoid duplication.
To add a container resource type, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the Edit icon in the Container Resource Type Management section. The Container Resource Type Management popup window will open.
Click the Add button and enter the content. Click the Save button.
Click the Confirm button in the confirmation popup window to complete the process.
Item
Description
Usage
Set the usage.
To change the usage, there must be no build pipeline configured using the corresponding Jenkins tool.
Resource Type
Enter the resource name.
CPU/Memory (Request)
Enter the requested resource value when configuring the Kubernetes Pod Agent.
CPU/Memory (Limit)
Enter the limited resource value when configuring the Kubernetes Pod Agent.
Table. Container Resource Type Add Input Items
Modifying Container Resource Types
To modify a container resource type, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the Edit icon in the Container Resource Type Management section. The Container Resource Type Management popup window will open.
Modify the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the modification.
Deleting Container Resource Types
To delete a container resource type, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the Edit icon in the Container Resource Type Management section. The Container Resource Type Management popup window will open.
Delete the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the deletion.
Adding Pod Templates
To add a Pod Template, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the Add button in the Pod Template Management list. The Add Agent (Kubernetes) screen will appear.
Enter the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the process.
Item
Description
Target Kubernetes
Select the target Kubernetes to add the Pod Template.
Displays the actual list of Kubernetes registered in Jenkins.
Agent Name
Enter the name of the Pod Template.
Label
Enter the label value to call the corresponding agent in the Jenkins Pipeline Script.
Inherit Pod Template
Select the Pod Template to inherit the settings (environment variables, volumes, etc.).
Displays the actual list of Pod Templates registered in Jenkins.
Container
Enter the information mapped to the Container Template item in Jenkins, such as name, Docker image, working directory, command, and arguments.
Required container information cannot be deleted, and the name cannot be changed.
Tool Path
Enter the information mapped to the Tool Locations item in Jenkins, such as name and home path.
Only tools added to the global tool list can be selected.
Supported Stage
Select the supported stage configuration.
Used in the build pipeline template configuration of the DevOps Console.
Required stage information cannot be deselected.
Table. Add Pod Template Input Items
Note
If the user does not check Docker Build in the Supported Stage item:
The corresponding Jenkins cannot be used when configuring a project with a Kubernetes or VM (Docker) type App Template that requires Docker Build.
The Docker Build stage cannot be added when configuring a build pipeline using Add Build Pipeline.
Viewing Pod Template Details
To view the details of a Pod Template, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the View Details button for the agent you want to view in the Pod Template Management list. The Agent (Kubernetes) Details screen will appear.
Modifying Pod Templates
To modify a Pod Template, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the View Details button for the agent you want to modify in the Pod Template Management list. The Agent (Kubernetes) Details screen will appear.
Click the Modify button.
Modify the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the modification.
Deleting Pod Templates
To delete a Pod Template, follow these steps:
Click the Agent (Kubernetes) tab on the Tool Details page.
Click the View Details button for the agent you want to delete in the Pod Template Management list. The Agent (Kubernetes) Details screen will appear.
Click the Delete button.
Click the Confirm button in the confirmation popup window to complete the deletion.
Managing Agent (VM)
Note
The Agent (VM) tab is only visible if the tool is Jenkins.
This feature allows you to manage the list of nodes available in Jenkins.
This lets you operate the Jenkins Management > Node Management menu of the Jenkins web UI from the DevOps Console.
The DevOps Console only supports one-way registration to Jenkins. (In other words, changes made by the user in the DevOps Console will overwrite the information in Jenkins.)
To manage agents (VM), follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the Information icon in the Node Management list. The Node Usage Guide popup window will open.
Managing Agent Connections
This feature registers the tunneling port to connect the actual Jenkins and agent (VM). The tunneling port may vary depending on Jenkins.
To manage agent connections, follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the Edit icon in the Agent Connection Management section. The Agent Connection Management popup window will open.
Enter the content and click the Save button.
Adding Agent (VM)
To add an agent (VM), follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the Add button in the Agent Connection Management section. The Add Agent (VM) screen will appear.
Enter the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the process.
Item
Description
Target OS
Enter the OS information of the VM.
Agent Name
Enter the name of the VM.
Remote Root Directory
Enter the directory path.
Label
Enter the label value to call the corresponding agent in the Jenkins Pipeline Script.
Tool Path
Enter the information mapped to the Tool Locations item in Jenkins, such as name and home path.
Only tools added to the global tool list can be selected.
Supported Stage
Select the supported stage configuration.
Used in the build pipeline template configuration of the DevOps Console.
Required stage information cannot be deselected.
Table. Add Agent (VM) Input Items
Note
If the user does not check Docker Build in the Supported Stage item:
The corresponding Jenkins cannot be used when configuring a project with a Kubernetes or VM (Docker) type App Template that requires Docker Build.
The Docker Build stage cannot be added when configuring a build pipeline using Add Build Pipeline.
Viewing Agent (VM) Details
To view the details of an agent (VM), follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the agent you want to view in the Node Management list. The Agent (VM) Details screen will appear.
Click the Information icon in the Jenkins - Agent Connection Information section.
The Agent Connection Guide popup window will open.
Connecting Agent (VM)
Note
To connect the agent, Java must be installed on the VM server.
To register and use an agent (VM), you must connect it to the actual VM server.
To connect an agent (VM), follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the agent you want to connect in the Node Management list. The Agent (VM) Details screen will appear.
Note the jnlpUrl and secret values; they are required when connecting the agent.
Click the Information icon in the Jenkins - Agent Connection Information section. The Agent Connection Guide popup window will open.
Click the Download Agent File button to download the agent.jar file.
A version mismatch between agent.jar and the Jenkins server may cause execution issues.
You can also download it directly from your Jenkins ({JENKINS_URL}/jnlpJars/agent.jar).
Connect to the VM server where you want to deploy and create a directory.
Copy the downloaded agent.jar file to the created directory.
Run the following command in the created directory path:
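The exact command is shown in the Agent Connection Guide popup. As a sketch, a standard Jenkins inbound agent is typically downloaded and launched as follows; the {JENKINS_URL}, {AGENT_NAME}, and {SECRET} values below are placeholders to be replaced with the jnlpUrl and secret values shown on the Agent (VM) Details screen:

```shell
# Download agent.jar from your Jenkins (replace {JENKINS_URL}):
curl -fsSL "{JENKINS_URL}/jnlpJars/agent.jar" -o agent.jar

# Standard Jenkins inbound agent launch (values come from the details screen):
java -jar agent.jar \
  -jnlpUrl "{JENKINS_URL}/computer/{AGENT_NAME}/jenkins-agent.jnlp" \
  -secret "{SECRET}" \
  -workDir "/home/jenkins/agent"
```

The agent process stays in the foreground; register it as a service (for example, systemd) if it should survive reboots.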
Modifying Agent (VM)
To modify an agent (VM), follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the agent you want to modify in the Node Management list. The Agent (VM) Details screen will appear.
Click the Modify button.
Modify the content and click the Save button.
Click the Confirm button in the confirmation popup window to complete the modification.
Deleting Agent (VM)
To delete an agent (VM), follow these steps:
Click the Agent (VM) tab on the Tool Details page.
Click the agent you want to delete in the Node Management list. The Agent (VM) Details screen will appear.
Click the Delete button.
Click the Confirm button in the confirmation popup window to complete the deletion.
Managing Global Libraries
You can manage the list of libraries available in the build pipeline.
This feature lets you operate the Jenkins Management > System Settings > Global Pipeline Libraries menu from the DevOps Console.
Library and connected Credential information can be found in the Jenkins Management > Manage Credentials menu.
The DevOps Console only supports one-way registration to Jenkins. (In other words, the information modified by the user in the DevOps Console will overwrite the information in Jenkins.)
To manage global libraries, follow these steps:
Tool Details page, click the Global Library tab.
Adding Global Libraries
Note
Libraries with Required Library set to Y cannot be modified or deleted.
To add a global library, follow these steps:
Tool Details page, click the Global Library tab.
Click the Add button. The Add Global Library screen appears.
Enter the contents and click the Save button.
In the confirmation popup, click the Confirm button to complete.
Item
Description
Library Name
Enter the name of the library.
Type
Fixed as SCM and cannot be changed.
Library URL
Enter the Git repository URL where the library exists.
Default Version
Enter the branch name or tag of the Git repository where the library exists.
ID
Enter the ID for pulling the library.
Password
Enter the password for pulling the library.
Table. Global Library Addition Input Items
Viewing Global Library Details
To view the details of a global library, follow these steps:
Tool Details page, click the Global Library tab.
In the Global Library list, click the item you want to view in detail. The Global Library Details screen appears.
Note
If the Type is SCM, SCM-related information is displayed: required library, library name, type, library URL, default version, and ID.
Modifying Global Libraries
To modify a global library, follow these steps:
Tool Details page, click the Global Library tab.
In the Global Library list, click the item you want to view in detail. The Global Library Details screen appears.
Click the Modify button.
Modify the contents and click the Save button.
In the confirmation popup, click the Confirm button to complete.
Deleting Global Libraries
To delete a global library, follow these steps:
Tool Details page, click the Global Library tab.
In the Global Library list, click the item you want to delete. The Global Library Details screen appears.
Click the Delete button.
In the confirmation popup, click the Confirm button to complete.
Managing Supported Tenants/Project Groups
Note
The tab name may be displayed differently depending on the tool type.
System Tool/Tenant Tool: Supported Tenants
ProjectGroup Tool: Supported Information
Users can manage the tenants or project groups that can use the tool.
To manage supported tenants or project groups, follow these steps:
Tool Details page, click the Supported Tenants or Supported Information tab.
Note
The Primary icon is displayed for the managed tenant/project group.
Adding Supported Tenants/Project Groups
To add a supported tenant or project group, follow these steps:
Tool Details page, click the Supported Tenants or Supported Information tab.
Click the Add button. The Add Tenant/Project Group popup window opens.
Enter the contents and click the Save button.
In the confirmation popup, click the Confirm button to complete.
Item
Description
Tenant/Project Group
Select the tenant/project group that can use the tool.
Usage
Select whether to use the tool in the tenant/project group.
New Creation Possible
Select whether to allow new repository creation in the tenant/project group. This setting is only available for the following tool categories:
SCM Repository
Image Registry
Code Quality
Artifact Repository
Helm Chart Repository
Test Management
Project Management Software
Table. Supported Tenant/Project Group Addition Input Items
Modifying Supported Tenants/Project Groups
To modify a supported tenant/project group, follow these steps:
Tool Details page, click the Supported Tenants or Supported Information tab.
In the Supported Tenants or Supported Information list, change the Usage and New Creation Possible settings for the item you want to modify.
Transferring Management Tenants/Project Groups
To transfer the management tenant/project group, follow these steps:
Tool Details page, click the Supported Tenants or Supported Information tab.
Click the Transfer Management Tenant or Transfer Management Project Group button. The Transfer Management Tenant or Transfer Management Project Group popup window opens.
Select the transfer target tenant or project group and click the Save button.
Deleting Supported Tenants/Project Groups
To delete a supported tenant/project group, follow these steps:
Tool Details page, click the Supported Tenants or Supported Information tab.
Select the checkbox of the tenant/project group you want to delete.
Click the Delete button.
In the confirmation popup, click the Confirm button to complete.
Note
The primary tenant/project group cannot be deleted.
Managing Supported K8S Clusters
Users can manage the K8S clusters that can use the tool.
To manage supported K8S clusters, follow these steps:
Tool Details page, click the Supported K8S Clusters tab.
Adding Supported K8S Clusters
To add a supported K8S cluster, follow these steps:
Tool Details page, click the Supported K8S Clusters tab.
Click the Add button. The Add K8S Cluster popup window opens.
Enter the contents and click the Save button.
In the confirmation popup, click the Confirm button to complete.
Item
Description
K8S Cluster
Select the K8S cluster that can use the tool.
Usage
Select whether to use the tool in the K8S cluster.
New Creation Possible
Select whether to allow new repository creation in the K8S cluster. This setting is only available for the following tool categories:
SCM Repository
Image Registry
Code Quality
Artifact Repository
Helm Chart Repository
Test Management
Project Management Software
Table. Supported K8S Cluster Addition Input Items
Modifying Supported K8S Clusters
To modify a supported K8S cluster, follow these steps:
Tool Details page, click the Supported K8S Clusters tab.
In the Supported K8S Clusters list, change the Usage and New Creation Possible settings for the item you want to modify.
Deleting Supported K8S Clusters
To delete a supported K8S cluster, follow these steps:
Tool Details page, click the Supported K8S Clusters tab.
Select the checkbox of the K8S cluster you want to delete.
Click the Delete button.
In the confirmation popup, click the Confirm button to complete.
Managing Tool Operators
To manage tool operators, follow these steps:
Tool Details page, click the Tool Operator tab.
Adding Tool Operators
To add a tool operator, follow these steps:
Tool Details page, click the Tool Operator tab.
Click the Add button. The Add Tool Operator popup window opens.
Select the operator and click the Save button.
In the confirmation popup, click the Confirm button to complete.
Deleting Tool Operators
To delete a tool operator, follow these steps:
Tool Details page, click the Tool Operator tab.
Select the checkbox of the operator you want to delete.
Click the Delete button.
In the confirmation popup, click the Confirm button to complete.
Managing Jenkins Recommended Plugins
Note
The Plugins tab is only visible if the tool is Jenkins.
You can view the installed Jenkins version and plugin installation status and version information.
Tool version: Jenkins version information
Recommended plugins: Recommended plugin installation status and version information
Checking Tool Version and Recommended Plugins Information
To view the tool version and recommended plugin information, follow these steps:
Click the tool (CICD Pipeline) on the Tool Management page.
If the tool has recommended plugins to install or update, a popup appears; click the Confirm button to move to the Plugins tab.
On the Tool Details page, click the Plugins tab.
View the Tool Version and Recommended Plugins items.
Installing Recommended Plugins
To install recommended plugins, follow these steps:
On the Plugins page, click the Install button for the plugin you want to install in the Recommended Plugins area.
The Install Recommended Plugin popup appears; click the Confirm button.
A popup then shows a message that the installation request has been completed; you can check the actual installation through the link in the popup.
Updating Recommended Plugins
To update recommended plugins, follow these steps:
On the Plugins page, click the Update button for the plugin you want to update in the Recommended Plugins area.
The Update Recommended Plugin popup appears; click the Confirm button.
A popup then shows a message that the update request has been completed; you can check the actual update through the link in the popup.
You must restart your Jenkins to apply the update.
Deleting Tools
Note
Tools that are in use in a project cannot be deleted.
To delete a tool, follow these steps:
Tool page, click the tool you want to delete. The Tool Details page opens.
Click the Delete button.
In the confirmation popup, click the Confirm button to complete.
12.2.9.2 - App Template
App Template is a feature provided for quick development environment setup.
App Template consists of sample source code, Dockerfile, Helm Chart, and more. Users can quickly set up their development environment using App Template when creating a project.
We provide App Templates for various frameworks such as Node.js, Python, Spring Boot, and more. Users can also create and register their own App Templates.
App Template Type
Description
System Template
A template that can be used across the DevOps Console.
Only system administrators can manage it.
Tenant administrators can only set whether it is used (released) in their tenant.
Tenant Template
A template that can be used in a specific tenant.
It can be mapped to multiple tenants and used.
The administrator of the corresponding tenant can manage it.
ProjectGroup Template
A template that can only be used in a specific project group.
It can be mapped to only one project group and used.
The owner of the corresponding project group can manage it.
Table. App Template Type
Getting Started with App Template
To start managing App Templates, follow these steps:
Main page, click the Manage icon at the top right. Move to the Tenant Dashboard page.
In the left menu, click Tools & Templates > App Template. Move to the App Template page.
Adding App Template
Users can add their own App Templates.
Source Code
Source code used to configure a sample project in the App Template.
The SCM Repository tool must be available in the tenant or project group where the App Template will be registered. Refer to Adding Tools for how to register the SCM Repository tool.
Register the source code in the SCM Repository in advance. When registering the App Template, the registered SCM Repository will appear, and you can enter the path to the source code.
Dockerfile
Note
You can manage the Dockerfile without registering it inside the source code. Refer to Managing Dockerfile Templates for more information. For explanations and writing methods of Dockerfile, refer to the official website.
To register an App Template that supports Kubernetes or VM (Docker) deployment targets, you must add a Dockerfile to the source code or register a Dockerfile through Managing Dockerfile Templates.
When writing a Dockerfile, the FROM clause must be fixed as FROM ${BASE_IMAGE}.
The registered ${BASE_IMAGE} value is replaced with the image.repository value of the Helm chart for Kubernetes deployment targets or the image path registered in Managing Supported Images for VM (Docker) deployment targets.
Dockerfile sample
FROM ${BASE_IMAGE}
COPY *.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Dockerfile Sample
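As an illustration of the substitution described above, the following sketch shows how a fixed FROM ${BASE_IMAGE} clause can be resolved to a concrete image path before the build. The registry path and the sed-based substitution are assumptions for illustration only, not the platform's actual mechanism:

```shell
# Write a sample Dockerfile with the fixed FROM clause (placeholder unexpanded).
printf 'FROM ${BASE_IMAGE}\nCOPY *.jar app.jar\nENTRYPOINT ["java","-jar","/app.jar"]\n' > Dockerfile

# Hypothetical image path, e.g. the Helm chart's image.repository value.
BASE_IMAGE="registry.example.com/library/openjdk:17"

# Substitute the placeholder with the concrete image path.
sed "s|\${BASE_IMAGE}|${BASE_IMAGE}|" Dockerfile > Dockerfile.resolved
cat Dockerfile.resolved
```

For Kubernetes deployment targets the value comes from the Helm chart's image.repository; for VM (Docker) targets it comes from the image path registered in Managing Supported Images.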
Registering App Template
To register an App Template, follow these steps:
App Template page, click the Add button. Move to the Add App Template page.
Add App Template page, select the template support type and click the Start button.
Enter the necessary information and click the Complete button.
For explanations and writing methods of Jenkins environment variables, refer to the official website.
Modifying Pipeline Templates
To modify a pipeline template, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Pipeline Template tab.
Pipeline Template tab, click the item you want to modify. Move to the Pipeline Template Details page.
Pipeline Template Details page, click the Modify button. Move to the Modify Pipeline Template page.
Modify Pipeline Template page, modify the information and click the Save button.
Deleting Pipeline Templates
Warning
Pipeline templates marked as Default cannot be deleted.
To delete a pipeline template, follow these steps:
App Template page, click the item you want to delete. Move to the App Template Details page.
App Template Details page, click the Pipeline Template tab.
Pipeline Template tab, click the item you want to delete. Move to the Pipeline Template Details page.
Pipeline Template Details page, click the Delete button.
Click the Confirm button in the confirmation popup.
Managing Dockerfile Templates
This tab only appears when the Dockerfile Type item in the Basic Information content of the App Template is GUI Template.
Modifying Dockerfile Templates
To modify a Dockerfile template, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Dockerfile Template tab.
Dockerfile Template tab, click the Modify or Create button.
Dockerfile Template tab, enter the contents and click the Save button.
Item
Description
Authentication Information Setting
Register authentication information for the image used in the FROM clause of the multi-stage build.
Add
Add an image used in the FROM clause of the multi-stage build.
Multi-stage Dockerfile
Configure the multi-stage build based on the registered information.
Dockerfile
Configure the basic Dockerfile.
Table. Modify Dockerfile Template Input Items
Guide
You can also manage the Dockerfile without registering it through Dockerfile in the source code.
If you use the Dockerfile file included in the source code, the Dockerfile Type item in the Basic Information content of the App Template must be set to Code Repository File.
For explanations and writing methods of Dockerfile files, refer to the official website.
Managing Supported Tenants/Project Groups
Guide
The tab name is displayed differently depending on the template type.
System Template/Tenant Template: Supported Tenants
ProjectGroup Template: Supported Information
Users can manage the tenants or project groups where the App Template can be used.
To manage supported tenants or project groups, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Supported Tenants or Supported Information tab.
Note
The primary icon appears for the managed tenant.
Adding Supported Tenants
To add supported tenants, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Supported Tenants tab.
Supported Tenants tab, click the Add button. The Add Tenant popup will appear.
Add Tenant popup, select the tenant to support and click the Save button.
Modifying Supported Tenant Information
To modify the information of supported tenants, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Supported Tenants tab.
Supported Tenants tab, select the tenant to modify, then change the Verification and Usage items.
Item
Description
Tenant
Information about the available tenant.
Verification
Select the verification status.
Verification in progress
Verification completed
Usage
Select the usage.
Verification must be set to Verification completed before Usage can be changed.
Table. Supported Tenants Screen Items
Note
If the Verification is Verification in progress, only the user who registered the App Template can use it.
Other users can use the App template after verification and usage processing.
Modifying Supported Project Group Information
To modify the information of a supported project group, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Information tab.
Support Information tab, click the Modify button. The App Template Modification screen appears.
App Template Modification screen, select and modify the Verification and Usage items.
Click the Save button.
Item
Description
Project Group
Information about the available project group.
Verification
Select the verification status.
Verifying
Verification Complete
Usage
Select the usage.
Verification must be set to Verification Complete before Usage can be set to Use.
Table. Support Information Screen Items
Note
If the verification is in progress, only the user who registered the App template can use it.
Other users can use the App template after verification and usage processing.
Transferring Management Tenant
To transfer the management tenant, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Tenant tab.
Support Tenant tab, click the Transfer Management Tenant button. The Transfer Management Tenant popup window opens.
Transfer Management Tenant popup window, select the target tenant to transfer and click the Save button.
Deleting Support Tenant
To delete a support tenant, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Tenant tab.
Support Tenant tab, select the tenant to delete and click the Delete button.
Click the Confirm button in the confirmation popup window.
Caution
Primary designated tenants cannot be deleted.
Managing Support Images
Guide
The Support Image tab only appears when the deployment target of the App template is Kubernetes or VM(Docker).
The Support Image registered by the user is used in the Dockerfile or Dockerfile template of the source code.
Adding Support Images
Guide
When adding support images, only Image Registry Tools available in the tenant and project group are listed.
To add a support image, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Image tab.
Support Image tab, click the Add button. The Add Support Image popup window opens.
Add Support Image popup window, enter the information and click the Connection Test button.
When the Save button is activated, click the Save button.
Click the Confirm button in the confirmation popup window.
Item
Description
Image Information Input
Enter the support image and authentication information.
Use Registrar’s Authentication Information
An option that can be selected when the selected image storage tool cannot grant permissions (e.g., SCR).
If not checked, the support image user will be added to the pull permission of the entered image when creating a project. If the tool cannot grant permissions, the permissions will not be granted.
If checked, the registrar’s authentication information is used instead of the support image user’s own authority when creating a project.
Table. Support Image Addition Input Items
Caution
Precautions for using the registrar’s authentication information
Be careful when using this option, as the registrar’s authentication information may be exposed to the support image user.
This option should only be used when necessary, and only for image storage that provides pull-only functionality. (If checked, one image storage cannot be used for both pull and push at the same time.)
The registrar’s authentication information is used in the project > image storage > pull-only image. This information cannot be changed by the support image user, and if the registrar’s authentication information is re-registered in the support image, it will be changed collectively.
Deleting Support Images
To delete a support image, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Image tab.
Support Image tab, select the checkbox of the item to delete and click the Delete button.
Click the Confirm button in the confirmation popup window.
Managing Support Helm Charts
Guide
The Support Helm Chart tab only appears when the deployment target of the App template is Kubernetes. Refer to Adding Helm Charts for support helm chart registration.
The Support Helm Chart registered by the user is used when creating a project using the App template.
Adding Support Helm Charts
To add a support helm chart, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Helm Chart tab.
Support Helm Chart tab, click the Modify button. The App Template Modification screen appears.
App Template Modification screen, select the checkbox of the helm chart to use in the Helm Chart List, and click the Add button to add it to the Selected Helm Chart List, then click the Save button.
Modifying Support Helm Charts
To modify a support helm chart, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Helm Chart tab.
Support Helm Chart tab, click the Modify button. The App Template Modification screen appears.
App Template Modification screen, select the checkbox of the helm chart to use in the Helm Chart List, and click the Add or Delete button to modify the Selected Helm Chart List, then click the Save button.
Deleting Support Helm Charts
To delete a support helm chart, follow these steps:
App Template page, click the item you want to modify. Move to the App Template Details page.
App Template Details page, click the Support Helm Chart tab.
Support Helm Chart tab, click the Modify button. The App Template Modification screen appears.
App Template Modification screen, select the checkbox of the helm chart to delete in the Selected Helm Chart List, and click the Delete button to delete it, then click the Save button.
Deleting App Templates
To delete an App template, follow these steps:
App Template page, click the item you want to delete. Move to the App Template Details page.
App Template Details page, click the Basic Information tab.
Basic Information tab, click the Delete button.
Click the Confirm button in the confirmation popup window.
12.2.9.3 - Register user-installed Jenkins tool
Reference
For installing and operating Jenkins, it is recommended to use the Samsung Cloud Platform Marketplace.
If you cannot use the marketplace or want to register a self-installed Jenkins as a tool in DevOps Console, use this document.
This document is a guide for registering the Jenkins tool in DevOps Console, so it briefly explains Jenkins installation and operation.
Getting started with registering a user-installed Jenkins tool
Install Jenkins and plugins.
Jenkins Installation
Before installing
To register Jenkins in the DevOps Console, Jenkins generally must meet the following conditions. If other conditions apply to your environment, submit an SR (Service Request) before installation to verify.
Use domain for Jenkins access
Jenkins domain registered in DNS
Jenkins communication with HTTPS (port 443), use public certificate
By default, Jenkins does not allow the @ symbol to be used in the login Username. However, since the DevOps Console uses email addresses as Usernames, configure Jenkins to allow email as the Username as well.
Create the {JENKINS_HOME}/init.groovy.d/init.groovy file, enter the following contents, and restart Jenkins.
Additional Jenkins configuration installed on Kubernetes
RBAC
Set RBAC on Jenkins’s Service Account so that Jenkins can create Pods in Kubernetes.
# In GKE need to get RBAC permissions first with
# kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
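Assuming the manifest above is saved as jenkins-rbac.yaml and Jenkins runs in a namespace named jenkins (both names are placeholders for your environment), it could be applied and verified as follows:

```shell
# Apply the ServiceAccount / Role / RoleBinding in the Jenkins namespace:
kubectl apply -f jenkins-rbac.yaml -n jenkins

# Verify that the jenkins ServiceAccount is allowed to create Pods:
kubectl auth can-i create pods -n jenkins --as system:serviceaccount:jenkins:jenkins
```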
Refer to the table below and register the firewall in Jenkins.
Origin
Destination
Destination Port
User-installed Jenkins
DevOps Console Web
443
DevOps Console Source IP
User-installed Jenkins web
443
Table. Jenkins (Destination) Firewall List
Reference
To check the source IP of DevOps Console, click the ⓘ icon next to the URL field on the tool addition screen. The source IP of DevOps Console is shown in the tooltip.
DevOps Console Task
For detailed information related to Jenkins tool registration, please refer to Add Tool.
In the tool registration step, DevOps Console checks the plugins installed in Jenkins. If any required plugins are missing, a plugin installation guide page will be displayed. Download and install the plugins as instructed.
Jenkins Check
Jenkins Management > System
Global Trusted Pipeline Libraries
Check if cicdpaas is set in the Library.
If it is not set:
Check the communication status between DevOps Console and Jenkins.
Check the settings in the Global Library tab on the DevOps Console > Management > Jenkins detail screen.
DevOps Console Credentials
Click the Test Connection button and verify that Success is displayed.
Check that the Agent added from the DevOps Console has been created. Run the Agent according to the guide on the Jenkins screen and connect it to Jenkins.
Add agent (Kubernetes) to Jenkins
Before adding the agent (Kubernetes)
Creating a user Jenkins agent image
An agent image is required for CI/CD in Jenkins.
Based on the default images provided by Jenkins, create a Jenkins agent image tailored to the user's environment.
After building the image, push it to the user's image repository.
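As a sketch, such an image can be derived from one of Jenkins' published agent base images and pushed to the user registry with docker build and docker push; the base image tag, packages, and registry path below are illustrative examples, not values from this guide:

```dockerfile
# Hypothetical user agent image built on the Jenkins inbound agent base image.
FROM jenkins/inbound-agent:latest
USER root
# Add the build tools your pipelines need (examples only).
RUN apt-get update && apt-get install -y --no-install-recommends git curl \
    && rm -rf /var/lib/apt/lists/*
USER jenkins
```

Build and push with, for example, docker build -t registry.example.com/devops/jenkins-agent:1.0 . followed by docker push registry.example.com/devops/jenkins-agent:1.0.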
By default, one (or multiple) agents are registered. Delete the default registered agent.
If you need to build a Docker image, select the Docker in Docker option.
Enter the address of the user-created Jenkins agent image from above in the jnlp image URL field.
Jenkins job
Jenkins Management > Clouds > kubernetes > Pod Templates
The agent (Kubernetes) added from DevOps Console is registered as a Pod Template.
If you selected Docker in Docker in the DevOps Console, the dind container has the default image address entered. Change to the docker:dind image address you pushed to your user repository.
12.2.10 - Deployment Target
12.2.10.1 - K8S Cluster
Users can register a K8S cluster in the DevOps Console and deploy various applications through the DevOps Console.
A list of Helm versions available in the K8S cluster version will be displayed.
Item
Description
Authentication Method
Select the administrator token method.
API Server URL
Enter the Kubernetes API Server address.
Administrator Token
Enter the administrator token. (Checking Cluster Admin Token)
Table. Add K8S Cluster - Items for Adding with Administrator Token Authentication
Item
Description
Authentication Method
Select the client certificate method.
API Server URL
Enter the Kubernetes API Server address.
Client Certificate
Enter the client certificate information.
Client Key
Enter the client key information.
Table. Add K8S Cluster - Items for Adding with Client Certificate Authentication
Item
Description
Authentication Method
Select the kubeconfig file upload method.
Kubeconfig File
Click the Browse button to select the kubeconfig file.
Only files with .yml or .yaml extensions can be uploaded.
If the file is uploaded normally, the CA Certificate, API Server URL, User, and Administrator Token or Client Certificate will be automatically entered.
API Server URL
Select the Kubernetes API Server address.
User
Select the user to authenticate.
Depending on the selected user, the administrator token or client certificate information will be displayed below.
Table. Add K8S Cluster - Items for Adding by Uploading Kubeconfig File
Managing K8S Clusters
Modifying a K8S Cluster
To modify a K8S cluster, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Modify button on the K8S Cluster Details page.
Modify the information and click the Connection Test button.
Select the Helm version and click the Save button.
Deleting a K8S Cluster
To delete a K8S cluster, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Delete button on the K8S Cluster Details page.
Click the Confirm button in the confirmation pop-up window to complete the deletion.
Adding a K8S Cluster Member
To add a K8S cluster member, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Members tab on the K8S Cluster Details page.
Click the Add button on the Members tab. The Add Member pop-up window will open.
Enter the email address in the Add Member pop-up window and click the Search icon.
Click the Add button to add the member to the list below.
Select the permission and click the Save button to complete adding the member.
Deleting a K8S Cluster Member
To delete a K8S cluster member, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Members tab on the K8S Cluster Details page.
Select the checkbox of the user to delete on the Members tab list.
Click the Delete button to delete the selected user from the member list.
Managing K8S Cluster Permission Requests
To approve or reject K8S cluster permission requests, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster permission request item on the K8S Cluster page list. The number displayed is the number of permission requests.
The K8S Cluster Permission Request Approval pop-up window will open.
Click the permission request item to approve or reject.
Enter your opinion and click the Approve or Reject button.
Note
Rejecting a permission request requires entering an opinion.
Viewing K8S Cluster Permission Request Approval History
To view the K8S cluster permission request approval history, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Approval History tab. The approval history list will be displayed.
Note
Only users with Administrator permission for the corresponding K8S cluster can view the approval history.
Managing Namespaces
Guide
This is reference information managed only in DevOps Console.
The registered information will be displayed for users to refer to when creating projects or performing Helm installs, etc., using the cluster.
Importing a Namespace
To import a namespace, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Namespace tab. The namespace list will be displayed.
Click the Import button on the Namespace tab screen. The Import Namespace pop-up window will open.
Select the namespace on the Import Namespace pop-up window and click the Save button to complete importing the namespace.
Deleting a Namespace
Guide
Only the namespace information managed in DevOps Console will be deleted, and the actual namespace in the cluster will not be deleted.
To delete a namespace, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Namespace tab. The namespace list will be displayed.
Click the namespace on the Namespace tab screen. You will be moved to the Namespace Details page.
Click the Delete button on the Namespace Details page to delete the namespace.
Adding a Namespace Member
To add a namespace member, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Namespace tab. The namespace list will be displayed.
Click the namespace on the Namespace tab screen. You will be moved to the Namespace Details page.
Click the Members tab on the Namespace Details page. The namespace member list will be displayed.
Click the Add button. The Add Member pop-up window will open.
Enter the email address in the Add Member pop-up window and click the Search icon.
Click the Add button to add the member to the list below.
Select the permission and click the Save button to complete adding the member.
Deleting a Namespace Member
To delete a namespace member, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Namespace tab. The namespace list will be displayed.
Click the namespace on the Namespace tab screen. You will be moved to the Namespace Details page.
Click the Members tab on the Namespace Details page. The namespace member list will be displayed.
Select the checkbox of the user to delete on the list.
Click the Delete button to delete the selected user from the member list.
Managing Namespace Permission Requests
To approve or reject namespace permission requests, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the namespace permission request item on the K8S Cluster page list. The number displayed is the number of permission requests.
The Namespace Permission Request Approval pop-up window will open.
Select the checkbox of the permission request item to approve or reject.
Enter your opinion and click the Approve or Reject button.
Note
Rejecting a permission request requires entering an opinion.
Viewing Namespace Permission Request Approval History
To view the namespace permission request approval history, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Namespace tab. The namespace list will be displayed.
Click the namespace on the Namespace tab screen. You will be moved to the Namespace Details page.
Click the Approval History tab. The approval history list will be displayed.
Managing Ingress Domains
Guide
This is reference information managed only in DevOps Console.
The registered information will be displayed for users to refer to when creating projects or performing Helm installs, etc., using the cluster.
Adding an Ingress Domain
To add an ingress domain, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Ingress Domain tab. The ingress domain list will be displayed.
Click the Add button on the Ingress Domain tab screen. The Add Ingress Domain Information pop-up window will open.
Enter the information on the Add Ingress Domain Information pop-up window and click the Save button to complete adding the ingress domain.
Item
Description
Node Selector
Enter the node selector. The input value is divided into a prefix and a key-value pair by the first slash (/). The prefix is optional. ex) kubernetes.io/nodetype: app
Proxy IP
Enter the Proxy Server IP or Proxy Server LoadBalancer IP.
Ingress Domain
Enter the domain that the application will use by default.
Ingress Class
Enter the ingress controller class.
Table. Add Ingress Domain - Input Items
Modifying Ingress Domain
To modify an ingress domain, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Ingress Domain tab. The ingress domain list will be displayed.
Click the ingress domain to modify on the Ingress Domain tab screen. The Modify Ingress Domain Information pop-up window will open.
Modify the information in the Modify Ingress Domain Information pop-up window and click the Save button to complete modifying the ingress domain.
Deleting Ingress Domain
To delete an ingress domain, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > K8S Cluster menu on the left. You will be moved to the K8S Cluster page.
Click the K8S cluster on the K8S Cluster page list. You will be moved to the K8S Cluster Details page of the selected K8S cluster.
Click the Ingress Domain tab. The ingress domain list will be displayed.
Select the checkbox of the ingress domain to delete on the Ingress Domain tab screen.
Click the Delete button to delete the selected ingress domain.
12.2.10.1.1 - Checking Cluster Admin Token
To register a K8S cluster, you need to check the cluster’s Admin Token.
The Admin Token refers to the token value of a ServiceAccount that is bound to ClusterRole/cluster-admin through a ClusterRoleBinding.
Preparations before starting
Notice
Before checking the Admin Token, please check and prepare the following:
Environment where kubectl CLI can be used
Cluster Admin permission check
ClusterRole, ClusterRoleBinding inquiry and creation
Namespace, ServiceAccount inquiry and creation
Check that the ClusterRole cluster-admin exists.
$ kubectl get clusterrole cluster-admin
NAME CREATED AT
cluster-admin 2022-12-09T08:21:50Z
cluster-admin ClusterRole query result
Checking Admin Token
Checking existing Admin Token
Query the ClusterRoleBinding that is bound to ClusterRole/cluster-admin.
Check the ServiceAccount bound to ClusterRoleBinding.
Execute the kubectl command to check if you have cluster-admin permissions.
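The first two checks above can be sketched with kubectl as follows; the binding name, secret name, and namespace are placeholders for your environment, and on Kubernetes 1.24 and later the ServiceAccount token secret may need to be created explicitly since it is no longer auto-generated:

```shell
# Step 1 (hypothetical): list ClusterRoleBindings whose roleRef is cluster-admin
kubectl get clusterrolebinding \
  -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\n"}{end}'

# Step 2: inspect one binding to see the bound ServiceAccount (binding name is an example)
kubectl get clusterrolebinding cluster-admin-binding -o yaml

# Read the ServiceAccount's token from its secret (secret name/namespace are examples)
kubectl -n kube-system get secret jenkins-admin-token -o jsonpath='{.data.token}' | base64 --decode
```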
$ kubectl get nodes
$ kubectl get namespace
$ kubectl get all -n kube-system
$ kubectl create namespace admin-test
$ kubectl delete namespace admin-test
# Execute other commands
cluster-admin permission check command
12.2.10.2 - VM Server Group
A VM server group is a logical unit for managing VM servers.
Users can add, modify, and delete VM server groups and VM servers. The configured VM server group and VM server can be used as a deployment target in project creation (Setting up the deployment environment) or VM deployment (Adding a VM deployment).
Deployment Method
Description
SSH
Uses Secure Shell (SSH) to deploy directly from the Jenkins where the build pipeline is executed to the target VM server.
Jenkins needs to communicate with the target VM server via SSH.
Agent
Refer to Connecting an agent to run an agent on the target VM server. Jenkins does not execute the deployment directly. The executed agent collects and executes deployment-related information from the DevOps Console using the REST API.
Deployment files are stored in the DevOps Console if rollback is not used. (Maximum file size: 200MB)
Deployment files are stored in the selected Rollback Artifact Repository if rollback is used. (Maximum file size is managed by the Rollback Artifact Repository)
The target VM server needs to communicate with the DevOps Console via the REST API.
(If rollback is used) The target VM server needs to communicate with the Rollback Artifact Repository.
Table. SSH method vs Agent method
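As an illustration of the SSH method described above, a deployment step ultimately amounts to copying the build artifact to the target VM and restarting the service over SSH; the host, port, paths, and service name below are hypothetical examples:

```shell
# Hypothetical SSH-method deployment commands run from the Jenkins host
scp -P 22 build/libs/app.jar deploy@10.10.10.10:/opt/app/app.jar
ssh -p 22 deploy@10.10.10.10 'sudo systemctl restart app'
```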
Getting Started with VM Server Group
To start managing VM server groups, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Adding a VM Server Group
To add a VM server group, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the Add button on the VM Server Group page. You will be moved to the Add VM Server Group page.
Enter the basic information and click the Save button to complete the VM server group settings.
Item
Description
Server Group Name
Enter the name of the VM server group.
Description
Enter a description.
Type
Select the type of the VM server group.
SSH: Deployment is performed through SSH commands during VM deployment.
Agent: Deployment is performed using an agent during VM deployment. (Agent Connection)
VM Server
Add: Add the VM server to be included in the VM server group.
Delete: Check the checkbox of the VM server to be deleted from the VM server group and click Delete to delete it.
Table. Input Items for Adding a VM Server Group
Adding a VM Server
To add a VM server, you need Manager permissions for the corresponding VM server group.
Note
The VM server addition pop-up window may differ depending on the type of the VM server group.
To add a VM server, follow these steps:
Click the Manage icon in the top right corner of the Main page. You will be taken to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu from the left menu. You will be taken to the VM Server Group page.
Click the VM server group where you want to add the VM server from the VM server group list on the VM Server Group page. You will be taken to the VM Server Group Details page.
Click the Add button on the VM Server Group Details page. You will be taken to the Add VM Server page.
Enter the basic information on the Add VM Server page and click the Add button to complete the VM server settings.
Item
Description
Server Name
Enter the name of the VM server.
Description
Enter a description.
IP
Enter the IP address.
SSH Port
Enter the port of the VM server to use for SSH connection.
OS
Enter the operating system.
Location
Select a location.
Authentication Information
Enter the authentication information of the VM server to use for SSH connection.
Secret Key
This is a secret key to authenticate the VM server where the agent is installed.
Table. Input Items for Adding a VM Server
Modifying a VM Server Group
To modify a VM server group, you need Manager permission for the corresponding VM server group.
To modify a VM server group, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group to modify in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
Click the Modify button on the VM Server Group Details page. You will be moved to the VM Server Group Modification page.
After modifying, click the Save button to complete the modification of the VM server group.
Modifying a VM Server
To modify a VM server, you need Manager permission for the corresponding VM server group.
To modify a VM server, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group that includes the VM server to modify in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
Click the VM server to modify in the VM server list on the VM Server Group Details page. You will be moved to the VM Server Details page.
Click the Modify button on the VM Server Details page. You will be moved to the VM Server Modification page.
After modifying, click the Save button to complete the modification of the VM server.
Deleting a VM Server Group
To delete a VM server group, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group to delete in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
On the VM Server Group Details page, click the Delete button to complete the deletion of the VM server group.
Deleting a VM Server
To delete a VM server, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group that includes the VM server to delete in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
Click the VM server to delete on the VM Server Group Details page. You will be moved to the VM Server Details page.
On the VM Server Details page, click the Delete button to complete the deletion of the VM server.
Managing VM Server Group Members
To manage VM server group members, you need Manager permission for the corresponding VM server group.
Adding a VM Server Group Member
To add a member to a VM server group, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group to add a member to in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
Click the User tab on the VM Server Group Details page.
Click the Add button on the User tab. The Add Member pop-up window will open.
After setting, click the Confirm button to complete adding a VM server group member. (The Manager can modify or delete the server group, and the Member can use the server group when creating a project or adding a pipeline.)
Deleting a VM Server Group Member
To delete a member of a VM server group, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group to delete a member from in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
Click the User tab on the VM Server Group Details page.
Select the checkbox of the user to delete in the User list.
Click the Delete button to delete the selected user from the VM server group members.
Managing VM Server Group Permission Requests
To approve or reject a VM server group permission request, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the permission request item of the VM server group in the VM server group list on the VM Server Group page. The VM Server Group Permission Approval pop-up window will open.
Click the request to approve or reject in the VM Server Group Permission Approval pop-up window.
Enter your opinion and click the Approve or Reject button.
Note
When rejecting a permission request, entering an opinion is required.
Viewing VM Server Group Permission Approval History
To view the VM server group permission approval history, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the VM server group to view in the VM server group list on the VM Server Group page. You will be moved to the VM Server Group Details page.
Click the Approval History tab on the VM Server Group Details page.
Releasing VM Server Firewall
SSH Method
SSH method VM deployment uses Secure Shell (SSH) to deploy directly from Jenkins to the target VM server.
Note
Firewall release information for deployment
Source IP: Jenkins IP selected when configuring the pipeline
Target IP: IP of the VM server to be deployed
To release the firewall, follow these steps:
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the Firewall Application Guide link on the VM Server Group page. The Firewall Application Guide pop-up window will open.
Jenkins firewall information is displayed.
The information displayed is what was entered when registering the Jenkins tool, and if not entered, it may appear as an empty value.
If necessary, contact the tool administrator.
Agent Method
Agent method VM deployment requires running an agent on the target VM server. The running agent collects information from DevOps Console and performs deployment.
Note
Firewall release information for deployment
Source IP: IP of the VM server to be deployed
Target IP: DevOps Console IP, (if using Rollback) Rollback Artifact Repository IP
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the Agent Installation Guide link on the VM Server Group page. The Agent Installation Guide pop-up window will open.
The DevOps Console firewall details, along with the User Guide Shortcut and Agent File Download links, are displayed.
Connecting an Agent
Agent method VM deployment requires running an agent on the target VM server. The running agent collects information from DevOps Console and performs deployment.
Preparing for Agent Connection
VM Server Preparation
Java Installation
The agent was written and tested based on Java 8. Install Java 8 or higher on the target VM server.
Firewall Release and Hosts File Modification
The agent uses REST API to collect deployment information from DevOps Console, so communication from the target VM server where the agent is running to DevOps Console is required.
Additionally, if using Rollback, communication with the Rollback Artifact Repository is also required.
If necessary, firewall release or hosts file registration may be required. Refer to the firewall information in the Agent Installation Guide popup window in DevOps Console.
DevOps Console Preparation
Authentication Key Preparation
When running the agent on the target VM server, authentication of the agent is required. Create a user authentication key and secret key for authentication. (Managing Authentication Keys)
VM Server Secret Key Preparation
When running the agent on the target VM server, the Secret Key value is required to authenticate the agent and the VM server.
When adding an agent-type VM server group and VM server, the VM server Secret Key is automatically generated. You can also check it on the VM Server Details page later.
Note
When connecting the agent, not only the Secret Key entered but also the actual OS name and IP (IPv4) of the VM server must match the information registered in DevOps Console.
Running an Agent
Downloading an Agent File
You can download the agent execution file from the Agent Installation Guide popup window.
Click the Manage icon at the top right of the Main page. You will be moved to the Tenant Dashboard page.
Click the Deployment Target > VM Server Group menu on the left. You will be moved to the VM Server Group page.
Click the Agent Installation Guide link on the VM Server Group page. The Agent Installation Guide pop-up window will open.
Click the Agent File Download button in the Agent Installation Guide popup window.
The deploy-agent.jar file is downloaded.
Running an Agent Directly
To run an agent on a target VM server, follow these steps:
Create a directory on the target VM server.
Move the deploy-agent.jar file to the directory.
Refer to the usage below to run the agent.
usage: java -jar deploy-agent.jar -A <arg> -L <arg> [-P <arg>] -S <arg> -V <arg>
-A,--accessKey <arg> AccessKey for HMAC
-L,--serverUrl <arg> Api server url
-P,--loggingConfigFilePath <arg> Path to the property file with 'java.util.logging' settings
-S,--secretKey <arg> SecretKey for HMAC
-V,--vmSecretKey <arg> VM SecretKey
Deployment Agent Execution Usage
Item
Description
-A, --accessKey
Authentication key created by the user
-L, --serverUrl
DevOps Console API URL path ex) https://{DEVOPS_CONSOLE_URL}:8443/devops-console-api
-P, --loggingConfigFilePath
Agent log file path. If not entered, the {JAVA_HOME}\jre\lib\logging.properties file is applied.
-S, --secretKey
Secret key created by the user
-V, --vmSecretKey
Secret key created by the VM server
Table. Direct Agent Execution Option Items
Running an Agent using a Script File
To run an agent on a target VM server using a script, follow these steps:
Create a directory on the target VM server.
Move the deploy-agent.jar file to the directory.
Refer to the sample execution script below to create a file.
Modify the information in the sample execution script.
@ECHO OFF

SET JAVA_EXE="java"
SET DC_URL="https://devops-console-url.com:8443/devops-console-api"
SET ACCESS_KEY="user-access-key"
SET SECRET_KEY="user-secret-key"
SET VM_SECRET_KEY="vm-secret-key"

IF NOT EXIST deploy-agent.jar (
    ECHO "ERROR: deploy-agent.jar file does not exist."
    EXIT /b 0
)

ECHO "Starting Deploy Agent..."
%JAVA_EXE% -jar deploy-agent.jar -A %ACCESS_KEY% -S %SECRET_KEY% -V %VM_SECRET_KEY% -L %DC_URL%

EXIT /b 0
Windows Sample Script
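For a Linux VM server, an equivalent shell script might look like the following sketch. The URL and keys are placeholder values copied from the Windows sample; substitute your own.

```shell
#!/bin/sh
# Sketch of a Linux equivalent of the Windows sample script above.
# The URL and keys below are placeholders -- replace them with your own values.
JAVA_EXE="java"
DC_URL="https://devops-console-url.com:8443/devops-console-api"
ACCESS_KEY="user-access-key"
SECRET_KEY="user-secret-key"
VM_SECRET_KEY="vm-secret-key"

if [ ! -f deploy-agent.jar ]; then
    # Same guard as the Windows script: do nothing if the jar is missing.
    echo "ERROR: deploy-agent.jar file does not exist."
else
    echo "Starting Deploy Agent..."
    "$JAVA_EXE" -jar deploy-agent.jar \
        -A "$ACCESS_KEY" -S "$SECRET_KEY" -V "$VM_SECRET_KEY" -L "$DC_URL"
fi
```

As with the Windows script, run it from the {WORKSPACE} directory that contains deploy-agent.jar.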
Notice
Requires Java 8 or higher.
The execution location of the jar file is assumed to be {WORKSPACE}.
Additional directories such as backup and logs are created under {WORKSPACE}.
If the --loggingConfigFilePath option is not entered, the {JAVA_HOME}\jre\lib\logging.properties file is applied.
Deployment-related files are stored under {WORKSPACE}/backup.
Only the last 3 deployment-related files are kept.
The entire log of deploy-agent.jar is not automatically saved. Only deployment-related logs are saved under {WORKSPACE}/logs.
Logs are kept for 30 days.
Caution
If the agent runs with root privileges, a malicious command executed through the agent could compromise the entire server.
It is recommended to run the agent with a non-root account.
Agent Troubleshooting
Changing Log Levels
If necessary, you can change the log level of the agent. Refer to the sample log file below and add the -P, --loggingConfigFilePath option.
############################################################
# Default Logging Configuration File
#
# You can use a different file by specifying a filename
# with the java.util.logging.config.file system property.
# For example java -Djava.util.logging.config.file=myfile
############################################################

############################################################
# Global properties
############################################################

# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers=java.util.logging.ConsoleHandler

# To also add the FileHandler, use the following line instead.
#handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler

# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility specific level.
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
#.level=INFO
.level=FINE

############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################

# default file output is in user's home directory.
java.util.logging.FileHandler.pattern=%h/java%u.log
java.util.logging.FileHandler.limit=50000
java.util.logging.FileHandler.count=1
java.util.logging.FileHandler.formatter=java.util.logging.XMLFormatter

# Limit the messages that are printed on the console to INFO and above.
#java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.level=FINE
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter

# Example to customize the SimpleFormatter output format
# to print one-line log message like this:
#     <level>: <log message> [<date/time>]
#
# java.util.logging.SimpleFormatter.format=%4$s: %5$s [%1$tc]%n

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################

# For example, set the com.xyz.foo logger to only log SEVERE messages:
com.xyz.foo.level=SEVERE
Sample Log File
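Assuming the configuration above is saved at a path of your choosing (the path, keys, and URL below are placeholders), the agent could then be started with the -P option pointing at it:

```shell
# Placeholder keys/URL; the logging.properties path is only an example.
# The guard only starts the agent if the jar is present in the current directory.
LOG_CFG="/opt/deploy-agent/logging.properties"
if [ -f deploy-agent.jar ]; then
    java -jar deploy-agent.jar \
        -A my-access-key -S my-secret-key -V my-vm-secret-key \
        -L "https://devops-console-url.com:8443/devops-console-api" \
        -P "$LOG_CFG"
else
    echo "deploy-agent.jar not found; skipping agent start"
fi
```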
When IP is Not Recognized
When connecting the agent, not only the Secret Key but also the actual OS name and IP (IPv4) of the VM server must match the information registered in the DevOps Console.
In some cases, VM servers with multiple network devices installed may not be able to recognize the IP correctly. In such cases, add the IP and hostname settings to the /etc/hosts file as follows:
Figure. hostname confirmation procedure
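As a sketch, the confirmation and the resulting /etc/hosts entry might look like the following. The hostname and IP shown are placeholders for your VM's actual values.

```shell
# Check the OS hostname that the agent will report.
VM_HOSTNAME=$(hostname)
echo "hostname: $VM_HOSTNAME"

# Then map that hostname to the VM's primary IPv4 address (the one
# registered in the DevOps Console) by appending a line to /etc/hosts, e.g.:
#     192.168.0.10   my-vm-server
```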
12.2.10.3 - Apply for Authorization
Apply for authorization to use K8S clusters, namespaces, and VM server groups managed as deployment targets in DevOps Console.
Start Applying for Authorization
To start applying for authorization, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Apply for K8S Cluster Authorization
To apply for K8S cluster authorization, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Click the K8S Cluster tab on the Apply for Authorization screen. The K8S Cluster screen appears.
Click the Apply button on the K8S Cluster screen. The K8S Cluster Authorization Application popup window opens.
Search for the K8S cluster for which you want to apply for authorization in the K8S Cluster Authorization Application popup window.
Enter the reason for the application and click the Add button.
Select the authorization for the added K8S cluster and click the Save button.
Cancel K8S Cluster Authorization Application
To cancel the K8S cluster authorization application, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Click the K8S Cluster tab on the Apply for Authorization screen. The K8S Cluster screen appears.
Select the cluster on the K8S Cluster screen and click the Cancel button.
Note
The Cancel button is only displayed for applications with a status of REQUESTED.
Apply for Namespace Authorization
To apply for namespace authorization, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Click the Namespace tab on the Apply for Authorization screen. The Namespace screen appears.
Click the Apply button on the Namespace screen. The Namespace Authorization Application popup window opens.
Search for the K8S cluster to which the namespace you want to apply for authorization belongs in the Namespace Authorization Application popup window.
Select the namespace and enter the reason for the application, then click the Add button.
Select the authorization for the added namespace and click the Save button.
Cancel Namespace Authorization Application
To cancel the namespace authorization application, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Click the Namespace tab on the Apply for Authorization screen. The Namespace screen appears.
Select the namespace on the Namespace screen and click the Cancel button.
Note
The Cancel button is only displayed for applications with a status of REQUESTED.
Apply for VM Server Group Authorization
To apply for VM server group authorization, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Click the VM Server Group tab on the Apply for Authorization screen. The VM Server Group screen appears.
Click the Apply button on the VM Server Group screen. The VM Server Group Authorization Application popup window opens.
Search for the VM server group for which you want to apply for authorization in the VM Server Group Authorization Application popup window.
Enter the reason for the application and click the Add button.
Select the authorization for the added VM server group and click the Save button.
Cancel VM Server Group Authorization Application
To cancel the VM server group authorization application, follow these steps:
On the Main page, click the Manage icon in the upper right corner. The Tenant Dashboard page appears.
Click Deployment Target > Apply for Authorization in the left menu. The Apply for Authorization screen appears.
Click the VM Server Group tab on the Apply for Authorization screen. The VM Server Group screen appears.
Select the VM server group on the VM Server Group screen and click the Cancel button.
Note
The Cancel button is only displayed for applications with a status of REQUESTED.
12.2.11 - Release Management
12.2.11.1 - Release Management
Release refers to the process of performing the actual deployment process using a workflow. Users with Owner or Master authority in a project group can configure and apply a release process suitable for the project.
Getting Started with Release Management
To start release management, follow these steps.
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
Click the Release Management > Release Management menu from the left menu. The Release Management page appears.
Creating a Release
The release creation process proceeds in the following order.
Procedure
Enter Basic Information - Set Workflow - Check/Edit Task - Set Release - Check Summary Information
Pre-Release Check
Before creating a release, check the following.
Item
Required
Description
Workflow
Y
A workflow is a release process template that must be created before creating a release. Refer to Workflow Management.
Approval Template
N
You can set up an approval line and approval content to be used in the release in advance from the approval template. Refer to Approval Template Setting.
Table. Pre-Release Check
Starting Release Creation
Starting Release Creation from the Release Management Screen
To create a release, follow these steps.
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
Click the Release Management > Release Management menu from the left menu. The Release Management page appears.
Click the Create Release button.
Starting Release Creation from the Workflow List
To create a release, follow these steps.
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
Click the Release Management > Workflow Management menu.
Click the More icon on the Workflow list. Click the Create Release with this Workflow menu from the More menu.
Starting Release Creation from the Workflow Details Screen
To create a release, follow these steps.
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
Click the Release Management > Workflow Management menu from the left menu.
Click the workflow you want to view in detail from the Workflow list.
Click the Create Release with this Workflow menu from the Workflow Details screen.
Entering Release Basic Information
Enter the release basic information.
Click the Start button.
Setting Workflow
Select the workflow to perform the release. If you started creating a release through Workflow Management, the corresponding workflow is automatically selected.
If you set environment variables in the workflow, check and change the values.
Click the Next button.
Checking/Editing Tasks
Check the tasks to be performed in the release.
Edit or delete tasks as needed.
Once you have checked and edited all tasks, click the Next button.
Setting Release
Add the persons in charge who will receive email/messenger notifications when the release or task status changes.
Select whether to automatically terminate the release when all tasks are completed.
Click the Next button.
Summary Information
Check the release creation summary information and click the Complete button.
Click the Confirm button in the confirmation popup to complete the creation.
Once the release creation is complete, the Release Details screen appears.
Viewing Release Details
To view release details, follow these steps.
Click the release you want to view in detail from the Release list.
The Release Details screen appears.
Proceeding with Release
Releases in Progress
To proceed with a release, follow these steps.
Click the release card with a status of In Progress from the Release list.
The Release Details screen appears.
You can proceed with the release tasks included in the release.
Proceeding with Tasks
Only the task assignee or the release creator can proceed with the release task.
To proceed with a release task, follow these steps.
If you are the task assignee or release creator, click the Task card. The task information is displayed on the right task editing screen.
Edit before starting the task: Some task items can be edited before the task starts. Complete the edits and click the Apply button.
Task
Editable Items
User
- Expected time - Description - Assignee - Receive email when completed - Attachment
Jenkins
- Parameters - Assignee - Receive email when completed
Blue/Green Switching
- Assignee - Receive email when completed
Image Repository Replication
- Source tag - Target tag - Assignee - Receive email when completed
SCM Repository Release
- SCM tag - Assignee - Receive email when completed
GIT Branch Creation
- Project - Repository branch - New branch - Assignee - Receive email when completed
Internal Approval
- JIRA Version issue - Approver - Approval title - Approval content - Assignee - Receive email when completed
Helm Release
- SET VALUES - Assignee - Receive email when completed
JIRA Release
- JIRA project - JIRA version - Assignee - Receive email when completed
Table. Items that can be edited before starting the task
Start task: Click the Start button to start the task. If Task Auto-Execution is selected, the task starts automatically without clicking the Start button.
Task in progress: While the task is in progress, the status bar at the top of the task is displayed as In Progress.
Note
Tasks in progress cannot be modified, except for User tasks. For User tasks, items such as the expected time, description, and attachments can be modified even while the task is in progress.
Complete task: Click the Complete button to complete the task. Some tasks cannot be completed by clicking the Complete button by the user and are automatically completed by the system.
Proceed with other tasks in the same way and complete them.
Suspending/Restarting/Skipping Tasks
Only the task assignee or the release creator can suspend/restart/skip tasks.
To suspend/restart/skip a task, follow these steps.
If you are the task assignee or release creator, click the More icon on the task.
Click the Suspend/Restart/Skip menu from the More menu.
Note
Some menus may not be available depending on the task status.
Completing Release
The release creator can complete the release. The final result of the release is divided into three categories: success, failure, and suspension.
Success: The release creator completed the release as a success after all tasks in the release were completed.
Failure: The release creator completed the release as a failure after all tasks in the release were completed.
Suspension: The release creator suspended the release while at least one task had not been completed.
Completing Release as Success/Failure
To complete a release, follow these steps.
Click the Release card with a status of In Progress from the Release list.
The Release Details screen appears.
Check that all tasks included in the release have been completed.
Click the Release Complete button.
In the Release Complete popup, select the Release Result (Success/Failure) and click the Complete button to complete the release.
Suspending Release
To suspend a release without completing it, follow these steps.
Note that, depending on the Release Management setting in the tenant common settings, suspension may require approval (see Managing Tenant Common Settings).
Click the Release card with a status of In Progress from the Release list.
The Release Details screen appears.
Check that there are no tasks in progress and that there are tasks that have not started.
Click the Release Complete button.
The Release Complete popup opens, and the release result is displayed as Suspension. Click the Complete button to suspend the release.
If the tenant common setting requires approval for suspension, the Release Suspension Approval popup opens when the Release Complete button is clicked.
Deleting Release History
Release history can be deleted if the release status is success/failure/suspension. Releases in progress cannot be deleted.
However, depending on the Release Management setting in the tenant common settings, the delete function may not be visible (see Managing Tenant Common Settings).
To delete release history, use one of the following methods.
Click the Delete button from the Release Details screen.
Click the More icon from the release list. Click the Delete menu from the More menu.
The Delete Release History popup appears, and enter the release name to confirm deletion.
Click the Confirm button to complete the deletion.
12.2.11.2 - Workflow Management
A workflow is a collection of tasks and task groups of various characteristics that must be configured before creating a release. A workflow is a release process template: it divides the work required for the build and deployment stage into tasks, assigns a responsible person to each task, and allows the work to be carried out in sequence during deployment.
Getting Started with Workflow Management
To start managing workflows, follow these steps:
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
On the Release Management page, click the Release Management > Workflow Management menu. The Workflow Management page appears.
Creating a Workflow
Getting Started with Creating a Workflow
To create a workflow, follow these steps:
On the Workflow Management page, click the Create Workflow button. The Create Workflow popup opens.
In the Create Workflow popup, enter the information, select the project to be released through the workflow, and click the Start button. The Create Workflow page appears.
On the Create Workflow screen, edit the workflow.
Modifying Basic Information
To modify the basic workflow information, follow these steps:
On the Create Workflow screen, click the Modify Basic Information button. The Modify Basic Information popup opens.
In the Modify Basic Information popup, modify the information and click the Save button.
In the Save Confirmation popup, click the Confirm button to complete the modification.
Modifying Environment Variables
You can manage variable values that can be used throughout the workflow using workflow environment variables.
To modify workflow environment variables, follow these steps:
On the Create Workflow screen, click the Modify Environment Variables button. The Modify Environment Variables popup opens.
In the Modify Environment Variables popup, edit the environment variables.
Click the Add button to add an environment variable.
Enter the key/value.
Click the X icon to delete an environment variable.
After editing the environment variables, click the Save button.
In the confirmation popup, click the Confirm button to complete the modification.
Adding Tasks
To add a task to a workflow, follow these steps:
On the Create Workflow screen, add a task using one of the following methods:
Click the Task button to add a task to the bottom of the workflow.
Click the top of the task to add a task above the current task.
Click the bottom of the task to add a task below the current task.
On the right task editing screen, set up the task.
Click the Apply button to apply the task to the workflow.
Item
Description
Add Task
Adds a task.
Add Task to Top
Adds a task to the top.
Add Task to Bottom
Adds a task to the bottom.
Edit Task
Edits the selected task.
Table. Workflow Creation Screen Items
Info
For more information about tasks that can be added, see Tasks.
Adding Task Groups
Task groups can be used to manage the execution (sequential, parallel) and prerequisites of related tasks.
To add a task group to a workflow, follow these steps:
On the Create Workflow screen, click the Task Group button.
Click the newly created New Task Group card.
On the right task group editing screen, set up the task group and click the Apply button.
Item
Description
Add Task Group
Adds a task group.
Task Group
Click to display the task group editing screen.
Task Group Name
Enter the task group name.
Task Progress
Select the task execution method within the task group.
Parallel: Executes tasks within the group simultaneously.
Sequential: Executes tasks within the group sequentially.
Condition Execution
Select whether to use the task group execution condition.
ON: Sets the task group execution based on the status of the preceding task.
OFF: Executes the task group tasks regardless of the status of the preceding task.
Task Group Execution Prerequisite Status
Executes the current task group when the selected preceding task is in the specified status. If multiple preceding tasks are selected, it works as an AND condition.
Receive Email on Completion
Search for and enter the recipients who will receive an email when the task is successfully completed.
Only project group members can be added as email recipients.
Table. Workflow Task Group Addition Items
Editing Tasks and Task Groups
To edit tasks and task groups in a workflow, follow these steps:
Click the Task or Task Group card you want to modify in the workflow. The Task or Task Group Editing screen appears on the right.
Complete the task or task group editing and click the Apply button to apply the task or task group to the workflow.
Deleting Tasks and Task Groups
To delete tasks and task groups in a workflow, follow these steps:
Click the Task or Task Group card you want to modify in the workflow. The Task or Task Group Editing screen appears on the right.
Click the Delete button on the Task or Task Group Editing screen.
Click the Confirm button in the confirmation popup to complete the task or task group deletion.
Completing Workflow Creation
To complete the workflow creation after adding tasks and task groups, follow these steps:
Click the Save button on the Create Workflow screen. The Workflow Save popup opens.
Click the Confirm button in the Workflow Save popup to complete the workflow creation.
Viewing Workflow Details
To view workflow details, follow these steps:
Click the workflow you want to view in detail on the Workflow Management page.
The Workflow Details screen appears.
Modifying Workflows
To modify a workflow, follow these steps:
To display the workflow modification screen, use one of the following methods:
Click the Modify button on the Workflow Details screen.
Click the More icon on the Workflow List screen and click the Modify menu.
On the Modify Workflow screen, edit the workflow. Editing is the same as Creating a Workflow.
After completing the workflow modification, click the Save button. The Workflow Save popup opens.
Click the Confirm button in the Workflow Save popup to complete the workflow modification.
Deleting Workflows
To delete a workflow, use one of the following methods:
Click the Delete button on the Workflow Details screen.
Click the More icon on the Workflow List screen and click the Delete menu.
In the Workflow Delete popup, enter the workflow name to confirm deletion.
Click the Confirm button in the Workflow Delete popup to complete the deletion.
Duplicating Workflows
You can create a new workflow by duplicating an existing workflow.
To duplicate a workflow, follow these steps:
On the Workflow List screen, click the More icon and click the Duplicate this Workflow menu. The Workflow Duplicate popup opens.
In the Workflow Duplicate popup, enter the information and click the Start button. The Create Workflow screen appears.
On the Create Workflow screen, edit the workflow. Editing is the same as Creating a Workflow.
After completing the modification, click the Save button on the Create Workflow screen.
Click the Confirm button in the Workflow Save popup to complete the workflow creation.
Creating a Release with a Workflow
You can create a new release from the Workflow Management screen.
To create a release with a workflow, use one of the following methods:
Click the Create Release with this Workflow button on the Workflow Details screen.
Click the More icon on the Workflow List screen and click the Create Release with this Workflow menu.
The Create Release screen appears, and you can create a release.
After entering the release information, click the Create button to complete the release creation.
12.2.11.3 - Approval Template Settings
Approval templates can be used in workflows and releases, and they include approval lines and approval contents.
Item
Description
Approval Line
Frequently used approval lines can be preset.
Approval Content
Frequently used approval contents can be preset.
Table. Approval Template Provided Items
Getting Started with Approval Template Settings
To start setting up approval templates, follow these steps:
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
On the Release Management page, click the Release Management > Approval Template Settings menu from the left menu.
Approval Line
Getting Started with Approval Line
To start using the approval line, follow these steps:
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
On the Release Management page, click the Release Management > Approval Template Settings menu from the left menu. The Approval Template Settings screen appears.
On the Approval Template Settings screen, click the Approval Line tab.
Adding Approval Line Templates
Internal approval lines can be created by Owners and Masters within the project group.
To add an internal approval line template, follow these steps:
On the Approval Template Settings screen, click the Internal Approval Line tab. The Internal Approval Line screen appears.
On the Internal Approval Line screen, click the Add button. The Add Approval Line Template popup opens.
In the Add Approval Line Template popup, enter the information and click the Save button.
Item
Description
Template Name
Enter the template name.
Approval Responsible
Search for the approval responsible and add it to the approval line.
Only project group members can be searched and added as responsible.
Approval Edit
Approval, agreement, notification change
Approval order change
Approval responsible deletion
Table. Approval Line Template Addition Input Items
Viewing Approval Line Details
To view the approval line in detail, follow these steps:
On the Internal Approval Line screen, click the approval line you want to view in detail.
The Approval Line Details screen appears.
Modifying Approval Lines
To modify an approval line, follow these steps:
On the Internal Approval Line screen, click the approval line you want to modify. The Approval Line Details screen appears.
On the Approval Line Details screen, click the Modify button. The Modify Approval Line screen appears.
On the Modify Approval Line screen, complete the modification and click the Save button.
Click the OK button in the confirmation popup to complete the modification.
Deleting Approval Lines
To delete an approval line, use one of the following methods:
On the Approval Line Details screen, click the Delete button.
On the Approval Line List screen, select the approval line you want to delete and click the Delete button.
Click the OK button in the confirmation popup to complete the deletion.
Managing Approval Contents
Getting Started with Approval Contents
To start using approval contents, follow these steps:
On the Main page, click the Release Management icon next to the project group name. The Release Management page appears.
On the Release Management page, click the Release Management > Approval Template Settings menu from the left menu. The Approval Template Settings screen appears.
On the Approval Template Settings screen, click the Approval Content tab.
Creating Approval Content Templates
To add an approval content template, follow these steps:
On the Approval Template Settings screen, click the Approval Content tab. The Approval Content screen appears.
On the Approval Content screen, click the Add button. The Add Approval Content Template popup opens.
In the Add Approval Content Template popup, enter the information and click the Save button.
Click the OK button in the confirmation popup to complete the addition.
Viewing Approval Content Details
To view the approval content in detail, follow these steps:
On the Approval Content screen, click the approval content you want to view in detail.
The Approval Content Details screen appears.
Modifying Approval Contents
To modify an approval content, follow these steps:
On the Approval Content screen, click the approval content you want to modify.
The Approval Content Details screen appears, then click the Modify button. The Modify Approval Content screen appears.
On the Modify Approval Content screen, complete the modification and click the Save button.
Click the OK button in the confirmation popup to complete the modification.
Deleting Approval Contents
To delete an approval content, use one of the following methods:
On the Approval Content Details screen, click the Delete button.
On the Approval Content List screen, select the approval content you want to delete and click the Delete button.
Click the OK button in the confirmation popup to complete the deletion.
12.2.11.4 - Task
A task is the smallest executable unit that makes up a workflow (or release), and each task can perform a predetermined operation. A workflow (or release) consists of one or several tasks.
The tasks provided in Release Management are as follows.
Item
Description
Jenkins
You can run a Jenkins pipeline or Jenkins job associated with a DevOps Console project or a separate Jenkins job.
User
You can register tasks that require manual work by the user, rather than integrating with a specific tool.
Blue/Green Switching
You can associate with Blue/Green deployment belonging to a DevOps Console project.
Internal Approval
You can approve through a user belonging to a DevOps Console project group.
Helm Release
You can associate with a Helm release belonging to a DevOps Console project.
Image Repository Replication
You can replicate an image to another repository.
SCM Repository Release
You can release using the release feature of the SCM repository.
Git Branch Creation
You can create a new branch by copying a specific branch of a repository belonging to a DevOps Console project.
JIRA Release
You can release or unrelease a specific version of a JIRA project.
VM Deployment
This is a task that can deploy a VM deployment group with a build complete status or roll back to a previous version.
Table. Task List
Common Task Items
You can add and edit tasks in Workflow (or Release) Management.
When you select a task, the Task Edit screen is displayed, and the Task Edit screen consists of the following.
Item
Description
Task Name
Enter the task name.
Task Type
Select the task type.
Auto-Execution
Select whether to automatically execute after the preceding task is completed.
ON : Automatically starts the task when the preceding task is completed.
OFF : The task manager clicks the Start button to start the task.
Conditional Execution
Select whether to execute the current task based on the status (success/failure/skip) of the preceding task.
If the execution condition of the preceding task is met: You can proceed with the current task (start/complete).
If the execution condition of the preceding task is not met: You cannot proceed with the current task, and it is automatically set to Skip.
Conditional Execution Item
Displayed when conditional execution is ON.
Select the preceding task and check the execution condition as success/failure/skip.
Click the Add button to add the execution condition of the preceding task.
All conditions are combined with AND to determine whether the overall condition is met.
Manager
Search for and enter the person in charge of the task from among the project group members.
Designate Yourself as Manager
Click to designate yourself as the manager of the current task.
Receive Email upon Completion
Search for and enter the recipients to receive an email when the task is completed from among the project group members.
Delete
Delete the current task.
Apply
Apply the current task settings to the workflow.
Table. Common Task Items
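The conditional-execution rule described above (all preceding-task conditions combined with AND) can be sketched in a few lines. The function and status names below are illustrative only and are not part of the DevOps Console:

```python
# Illustrative sketch of the conditional-execution rule: the current task
# runs only when EVERY configured (preceding task, expected status) pair
# matches; otherwise the task is automatically set to Skip.
# These names are hypothetical; DevOps Console does not expose this API.

def should_run(conditions, statuses):
    """conditions: list of (task_name, expected_status) pairs, where
    expected_status is 'success', 'failure', or 'skip'.
    statuses: dict mapping task_name -> actual status."""
    return all(statuses.get(task) == expected for task, expected in conditions)

# A task conditioned on "build" succeeding AND "lint" being skipped:
conditions = [("build", "success"), ("lint", "skip")]
print(should_run(conditions, {"build": "success", "lint": "skip"}))   # runs
print(should_run(conditions, {"build": "failure", "lint": "skip"}))   # auto-Skip
```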
Jenkins Task
This is a task that can run a build pipeline or a Jenkins job registered in a DevOps Console project or a separate Jenkins job.
Item
Description
Jenkins Type
Select the Jenkins type.
Project: Runs the build pipeline added to the project.
User Input: The user directly inputs a Jenkins job that is not registered in the DevOps Console.
Project
Select the project to run the pipeline.
Jenkins URL
Select the URL of the Jenkins tool registered in the selected project.
Job
Select the job from the selected Jenkins URL. Only jobs with execution permissions for the current user are displayed in the list.
Parameter
Enter the parameters required to run the build pipeline.
If parameters are required to build the selected job, the parameter input screen is displayed, and you can check and modify the parameters.
You can enter the value directly or use an environment variable by selecting it. Environment variables can be referenced in Modifying Environment Variables.
Jenkins Job URL
Enter the URL of the Jenkins job that is not registered in the DevOps Console.
Jenkins ID / Jenkins Password or Token
Enter the ID and password or token to use for running the Jenkins job.
Click the Connection Test button to check if the entered Jenkins job URL is connected normally.
If parameters are required to build the job, the parameters are displayed, and you can check and modify the necessary parameters.
Table. Jenkins Task Items
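For background on the User Input option, a parameterized Jenkins job can be triggered through Jenkins's standard remote API (a POST to `/job/<name>/buildWithParameters`). The following is a minimal sketch only; the host, job name, and parameter are placeholders, and this is not part of the DevOps Console itself:

```python
# Sketch of building a Jenkins remote-trigger URL for a parameterized job.
# The DevOps Console performs the trigger for you; this is context only.
# Host, job name, and parameter values below are placeholders.
import urllib.parse

def build_trigger_url(jenkins_url, job, params):
    """Return the buildWithParameters URL for a Jenkins job."""
    query = urllib.parse.urlencode(params)
    return f"{jenkins_url.rstrip('/')}/job/{urllib.parse.quote(job)}/buildWithParameters?{query}"

url = build_trigger_url("https://jenkins.example.com", "deploy-app", {"TAG": "v1.2.0"})
print(url)
# The actual trigger would be an authenticated POST to this URL, with an
# Authorization header carrying the Jenkins ID and token entered above.
```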
User Task
This is a task that registers manual work that the user must perform.
Item
Description
Expected Time
Enter the expected time required for the user task.
Description
Enter the contents that the user must perform manually.
Table. User Task Items
Blue/Green Switching Task
This is a task that can be associated with Blue/Green deployment belonging to a DevOps Console project.
Item
Description
Project
Select the project to perform the Blue/Green switching.
Blue/Green List
Select the Blue/Green deployment from the list of the selected project that you want to perform in the task.
Table. Blue/Green Switching Task Items
When the Blue/Green switching task is in progress in the release, you can perform the following work.
Item
Description
Operation Status Check
Click the Operation Status Check button to open the Blue/Green Operation Status Check popup.
Switching
Check the operation/operation standby status and click the Switching button. The operation and operation standby are switched.
Completion
Check the Blue/Green switching result and click the Completion button to complete the Blue/Green task. If there is a problem with the switching, you can also revert to the previous state by clicking the Operation Status Check button.
Table. Possible Work Items in Blue/Green Switching Task
Internal Approval Task
This is a task that can approve through a user belonging to a DevOps Console project group.
Item
Description
Include JIRA Version Issue
Select whether to include JIRA version issues in the approval content. When proceeding with the release, the JIRA version set in the project group is selected, and the list of all issues corresponding to the version is automatically added to the approval document.
JIRA Project
Select the JIRA project. Only JIRA projects registered in JIRA Project can be selected.
When the internal approval task is in progress in the release, the following work can be performed.
Approver : Can approve or reject the approval.
Approval : Click the Approval button. In the Approval confirmation popup, enter the approval opinion and click the Confirm button to approve.
Rejection : Click the Rejection button. In the Rejection confirmation popup, enter the rejection opinion and click the Confirm button to reject.
Other Roles : Can check the approval status.
Helm Release Task
This is a task that can be associated with a Helm release belonging to a DevOps Console project.
Item
Description
Auto-Termination
Select whether to automatically terminate the task after the Helm release execution is completed.
Project
Select the project to perform the Helm release.
Helm Release
Select the Helm release to use in the task from the Kubernetes deployments in the project. Either Workload or Helm Release can be selected. The information of the selected Helm release is displayed.
SET_VALUES (Helm Release)
Displayed when the selected Helm release is Helm Release.
Click the Inquiry icon to check the current Helm release’s values.yaml.
Click the Add button to add key/value.
Enter the key.
Value can be entered directly or environment variables can be used by selecting them. Environment variables can be referenced in Modifying Environment Variables.
SET_VALUES (Workload)
Displayed when the selected Helm release is Workload.
Check the value used in the last deployment.
Enter the value for tag, deploy_strategy, repository.
Table. Helm Release Task Items
When the Helm release task is in progress in the release, you can check the following contents in the task edit screen.
If Helm Release is selected, you can check the following items.
Item
Description
Current Status
Displays the current status of the Helm release.
Execution Status
Displays the result of the Helm release execution.
History
Displays the history of the Helm release. Click the Inquiry icon to check the values.yaml used in the Helm release by revision. To roll back to a previous deployment, click the Rollback button. In the confirmation popup, click the Confirm button to complete.
Table. Items Displayed when Helm Release Type
If Workload is selected, you can check the following items.
Item
Description
Execution Status
Displays the result of the Helm release execution.
History
Displays the deployment history. To roll back to a previous deployment, click the Rollback button. In the confirmation popup, click the Confirm button to complete.
Table. Items Displayed when Workload Type
Image Repository Replication Task
This is a task that can replicate an image to another repository.
Source image → (replication) Target image
Item
Description
Type
Select the type.
Project : Selects the image repository added to the project as the source and target.
User Input : The user inputs an image repository that is not registered in the DevOps Console as the source and target.
Source Project
Select the project where the source image repository is registered.
Source Image Repository
Select the source image repository from the project.
Source Tag
Enter the source tag. Tag can be entered directly or environment variables can be used by selecting them. Environment variables can be referenced in Modifying Environment Variables.
Target Project
Select the project where the target image repository is registered.
Target Image Repository
Select the target image repository from the project.
Target Tag
Enter the target tag.
Source Host
Enter the source host domain name.
Source Path
Enter the source path.
Source ID / Source Password
Enter the account information of the source image repository. After entering, click the Connection Test button to check if it is connected normally.
Target Host
Enter the target host domain name.
Target Path
Enter the target path.
Target ID / Target Password
Enter the account information of the target image repository. After entering, click the Connection Test button to check if it is connected normally.
Table. Image Repository Replication Task Items
SCM Repository Release Task
This is a task that runs the release of an SCM repository.
The SCM repository release performs the creation of a release or tag according to the SCM repository tool (GitHub, GitLab, other Git repositories).
Item
Description
Git Type
Select the Git type.
Project : Selects the code repository added to the project.
User Input : The user inputs a Git repository that is not registered in the DevOps Console.
Project
Select the project where the code repository is registered.
SCM Repository
Select the code repository from the project.
SCM Branch
Select the branch of the code repository.
SCM Tag
Enter the tag to be created in the release.
Git URL
Enter the URL of the Git repository.
Git ID / Git Password or Token
Enter the account information of the Git repository.
Branch
Enter the branch of the Git repository. After entering, click the Connection Test button to check if it is connected normally.
Table. SCM Repository Release Task Items
Git Branch Creation Task
This is a task that can create a new branch by copying a specific branch of a repository belonging to a DevOps Console project.
Item
Description
Git Type
Select the Git type.
Project : Selects the code repository added to the project.
User Input : The user inputs a Git repository that is not registered in the DevOps Console.
Project
Select the project where the code repository is registered.
Repository
Select the code repository from the project.
Branch
Select the existing branch that the new branch will reference.
New Branch
Enter the name of the new branch to be created.
Apply Protection
Select whether to apply the protection rule to the new branch.
Protection Rule
If the protection rule is applied, set the merge and push permissions.
Select the role allowed for merge.
Select the role allowed for push.
Git URL
Enter the URL of the Git repository.
Git ID / Git Password or Token
Enter the account information of the Git repository.
Branch
Enter the existing branch that the new branch will reference. After entering, click the Connection Test button to check if it is connected normally.
Table. Git Branch Creation Task Items
JIRA Release Task
This is a task that can release or unrelease a specific version of a JIRA project.
Item
Description
JIRA Project
Select the JIRA project registered in the project group.
JIRA URL
Check the server of the selected JIRA project. (Read-only)
JIRA Version
Select the version of the JIRA project.
Only unreleased versions can be selected.
Table. JIRA Release Task Items
When the JIRA release task is in progress in the release, the following work can be performed.
Item
Description
Status
Click the Status button to open the JIRA release popup window.
Status Change
Click the Status Change button to change the JIRA Version to Released or back to Unreleased.
Confirm
Click the Confirm button to complete the JIRA release.
Table. Tasks that can be performed in the JIRA release task
VM Deployment Task
This task deploys a VM deployment group with a Build Complete status or rolls back to a previous version.
The contents stored in the deployment group are automatically set.
Table. VM Deployment Task Items
12.2.11.5 - JIRA Project
The user can manage JIRA project information to be used in the release management JIRA task.
Note
This feature is only supported when the JIRA tool is registered in the system.
Getting Started with JIRA Project
To start managing JIRA projects, follow these steps:
On the Main page, click the Project Group Management icon of the project group. You are moved to the Project Group Dashboard page.
On the Project Group Dashboard page, click the JIRA Project menu. The JIRA Project screen appears.
Adding a JIRA Project
To add a JIRA project, follow these steps:
On the JIRA Project screen, click the Add button. The Add JIRA Project popup window opens.
In the Add JIRA Project popup window, enter the JIRA URL and Authentication Information, and then click the Connection Test button.
Select the JIRA Project and click the Save button.
Item
Description
JIRA URL
Select the JIRA URL
A list of JIRA tools available for the project group appears.
Tool Registration Shortcut
If JIRA tool registration is required, you can go directly to the tool registration page.
Authentication Information
Enter the authentication information.
JIRA Project
Select the JIRA project
A list of projects accessible based on the JIRA URL and authentication information appears.
Only projects with administrator privileges for the JIRA project can be selected.
Table. JIRA Project Addition Input Items
Deleting a JIRA Project
To delete a JIRA project, follow these steps:
On the JIRA Project screen, select the checkbox of the item to be deleted and click the Delete button.
In the confirmation popup window, click the Confirm button to complete the deletion of the JIRA project.
12.2.12 - Release Note
DevOps Console
2025.07.01
FEATURE v1.16.0 changes
Self-user management and authentication features have been added.
DevOps IDP is used to manage and authenticate users.
Jenkins DevOps Plugin installation and update feature has been added.
You can check the version of the installed Jenkins and the installation and version information of the recommended plugins, and install and update them.
You can download the billing basis data shown in the tenant dashboard as an Excel file.
2024.10.24
FEATURE v1.15.0 changes
A supported Helm chart repository has been added.
OCI standard Helm chart repository is now available.
Pipeline feature has been added.
It supports creating multi-branch pipeline functionality.
The management organization function of tools/templates has been improved.
The managing organization (tenant, project group) of tools and templates can now be transferred.
Other changes
The supported version of the image storage tool Harbor has been expanded. (~2.10)
Jobs can no longer be created directly in Jenkins. (Creation is only possible through the DevOps Service.)
13 - AI-ML
We provide AI/ML services that make it easy and convenient to build ML/DL (Machine Learning/Deep Learning) model development and training environments.
13.1 - AIOS
13.1.1 - Overview
Service Overview
AIOS provides an environment for developing AI applications using LLM on Virtual Server, GPU Server, and Kubernetes Engine resources created on Samsung Cloud Platform, without the need for separate LLM service installation or configuration.
Key Features
Convenient LLM Usage: Provides LLM Endpoints by default that allow direct use of LLM on Virtual Server, GPU Server, and Kubernetes Engine resources on Samsung Cloud Platform.
Improved AI Development Productivity: AI developers can use various models with the same API, and compatibility with OpenAI and LangChain SDKs allows easy integration with existing development environments and frameworks.
ServiceWatch Integration: Data can be monitored through the ServiceWatch service.
Service Architecture
Fig. AIOS Architecture
Provided Features
The following features are provided:
AIOS LLM Endpoint Provision: When applying for Virtual Server, GPU Server, or Kubernetes Engine services, LLM Endpoint information and usage guides are provided on the detail page of the created resources. You can access and use LLM on those resources by following the usage guide.
AIOS Report Provision: You can check the number of calls and token usage by type, resource, and model, as well as total usage by LLM.
Provided Models
The LLM models provided by AIOS are as follows:
Model Name
Model Type
Description
Main Use Cases
Features
gpt-oss-120b
Chat+Reasoning
Latest GPT series open-source model based on 120 billion parameters
Research/experiments, large-scale language understanding, AI services requiring complex reasoning/analysis, building agent-type systems
Ultra-large parameters
Broad knowledge coverage, general-purpose application possible
Complete CoT chain generation
Qwen3-Coder-30B-A3B-Instruct
Code
Qwen3 series code model optimized for code generation and debugging
Software development, AI code assistant, long document/repository analysis
Large-scale code knowledge learning
Multilingual support
Long-context understanding possible
Qwen3-30B-A3B-Thinking-2507
Chat+Reasoning
Qwen3 model enhanced for long-form reasoning and deep thinking
Multimodal (text+image), fast inference, single GPU operation possible
Ultra-long text, multi-document summarization/analysis possible, multimodal support
Top-tier performance in various benchmarks
Up to 4 images can be input
Llama-Guard-4-12B
Moderation
Key security and moderation model for enhancing reliability and safety in the latest large language models and multimodal AI services
Used for automatic filtering of harmfulness in user input and model responses
Multimodal security classification
Specialized in content moderation
Multilingual support
bge-m3
Embedding
Key embedding model with three characteristics: multi-functionality, multilingual support, and large input capacity
Used when retrieving external knowledge and providing answer evidence in generative AI, combining Dense and Sparse search to ensure both accuracy and generalization performance
Key component for various information retrieval, question answering, and chatbot systems that require fast and accurate search result reranking in multilingual environments
Rerank candidate answers or documents for questions in relevance order
Lightweight and fast inference
Multilingual support
Easy integration: Hugging Face Transformers, FlagEmbedding compatible
Table. AIOS Provided LLM Models
Regional Availability
AIOS can be provided in the following environments:
Region
Availability
Korea West (kr-west1)
Available
Korea East (kr-east1)
Not Available
Korea South1 (kr-south1)
Not Available
Korea South2 (kr-south2)
Not Available
Korea South3 (kr-south3)
Not Available
Table. AIOS Regional Availability
Prerequisite Services
This is a list of services that must be configured in advance before creating this service. For detailed information, please prepare in advance by referring to the guides provided for each service.
Service providing lightweight virtual computing and containers and Kubernetes clusters to manage them
Table. AIOS Prerequisite Services
13.1.1.1 - ServiceWatch Metrics
AIOS sends metrics to ServiceWatch. The metrics provided by default monitoring are data collected at a 1-minute interval.
Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.
Basic Indicators
The following are the basic metrics for the AIOS namespace.
Performance Item
Detailed Description
Unit
Meaningful Statistics
Table. AIOS basic indicators
13.1.2 - How-to Guides
Using AIOS
AIOS provides an environment where LLM can be used by default within each resource when you create Virtual Server, GPU Server, or Kubernetes Engine services.
Note
For detailed information on each service creation, refer to the table below.
LLM can be used by utilizing the LLM Endpoint within the service resources such as Virtual Server, GPU Server, Cloud Functions, Kubernetes Engine created on Samsung Cloud Platform. The LLM Endpoint can be checked through the Usage Guide for the LLM Endpoint on the service’s detail page.
Check the LLM Endpoint of Virtual Server
You can check the usage guide for the LLM Endpoint on the Virtual Server Details page of the created Virtual Server.
To check the usage guide for the LLM Endpoint, follow the steps below.
Click the All Services > Compute > Virtual Server menu. Go to the Service Home page of Virtual Server.
Click the Virtual Server menu on the Service Home page. Navigate to the Virtual Server List page.
On the Virtual Server List page, click the resource to connect to the LLM Endpoint. Navigate to the Virtual Server Details page.
On the Virtual Server Details page, click the User Guide link of the LLM Endpoint item. The LLM User Guide popup window opens.
Reference
For detailed information about the LLM usage guide, check LLM Usage Guide.
Check GPU Server’s LLM Endpoint
You can check the usage guide for the LLM Endpoint on the GPU Server Details page of the created GPU Server.
To view the usage guide for LLM Endpoint, follow the steps below.
Click the All Services > Compute > GPU Server menu. Go to the Service Home page of GPU Server.
Click the GPU Server menu on the Service Home page. Navigate to the GPU Server List page.
On the GPU Server List page, click the resource to connect to the LLM Endpoint. Navigate to the GPU Server Details page.
On the GPU Server Details page, click the User Guide link of the LLM Endpoint item. The LLM User Guide popup window opens.
Note
For detailed information about the LLM usage guide, see the LLM Usage Guide.
Checking the LLM Endpoint of Cloud Functions
You can view the usage guide for the LLM Endpoint on the Cloud Functions Details page of the created Cloud Functions.
To view the usage guide for the LLM Endpoint, follow the steps below.
Click the All Services > Compute > Cloud Functions menu. Go to the Service Home page of Cloud Functions.
Click the Functions menu on the Service Home page. Go to the Functions list page.
On the Functions list page, click the resource to connect to the LLM Endpoint. You will be taken to the Functions details page.
Click the User Guide link of the LLM Endpoint item on the Functions Details page. It will open the LLM User Guide popup.
Note
For detailed information about the LLM usage guide, please check LLM Usage Guide.
Check the LLM Endpoint of the Kubernetes Engine cluster
You can check the usage guide for the LLM Endpoint on the Cluster Details page of the created Kubernetes Engine cluster.
To view the usage guide for LLM Endpoint, follow the steps below.
Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
Click the Cluster menu from the Service Home page. Go to the Cluster List page.
Click the resource to connect to the LLM Endpoint on the Cluster List page. You will be taken to the Cluster Details page.
On the Cluster Details page, click the User Guide link of the LLM Endpoint item. It will open the LLM User Guide popup.
Reference
For detailed information about the LLM usage guide, please check the LLM Usage Guide.
LLM Usage Guide
In the usage guide of LLM Endpoint, you can see AIOS LLM Private Endpoint, the provided model, and sample code examples.
AIOS LLM Private Endpoint
The URL of the AIOS LLM private endpoint is displayed. Check the URL to use it within the resources created for the Virtual Server, GPU Server, Kubernetes Engine services.
AIOS LLM Provided Model
The AIOS LLM provided models are as follows.
Model Name
Model ID
Context Size
RPM (Request per minute)
TPM (Token per minute)
Purpose
License
Discontinuation Date
gpt-oss-120b
openai/gpt-oss-120b
131,072
50 RPM
200K
Research, Experiment, Advanced Language Understanding
Apache 2.0
No plans
Qwen3-Coder-30B-A3B-Instruct
Qwen/Qwen3-Coder-30B-A3B-Instruct
65,536
20 RPM
30K
Code generation, analysis, debugging support
Apache 2.0
No plans
Qwen3-30B-A3B-Thinking-2507
Qwen/Qwen3-30B-A3B-Thinking-2507
32,768
10 RPM
30K
Deep reasoning, long text analysis, essay writing
Apache 2.0
No plans
Llama-4-Scout
meta-llama/Llama-4-Scout
32,768
20 RPM
35K
Latest Llama model with multimodal capability
llama4
No plans
Llama-Guard-4-12B
meta-llama/Llama-Guard-4-12B
32,768
20 RPM
200K
Core security and moderation model to enhance reliability and safety in the latest large language models and multimodal AI services
llama4
No plans
bge-m3
sds/bge-m3
8,192
100 RPM
200K
Multilingual embedding model
Samsung SDS
No plans
bge-reranker-v2-m3
sds/bge-reranker-v2-m3
8,192
100 RPM
200K
Provides fast computation and high performance as a lightweight multilingual reranker.
Samsung SDS
No plans
Table. AIOS LLM provided models
Sample code
Refer to the following for AIOS LLM sample code examples.
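Since the endpoint is OpenAI-compatible, a minimal request sketch looks like the following. The endpoint URL below is illustrative (use the URL shown on your resource's detail page), and the request body follows the Chat Completions API described in this guide:

```python
# Minimal sketch of an OpenAI-compatible chat completion call to the AIOS
# private endpoint. The ENDPOINT value is a placeholder; use the URL shown
# on your resource's detail page, reachable only from within that resource.
import json
import urllib.request

ENDPOINT = "https://aios.private.kr-west1.e.samsungsdscloud.com"  # placeholder

def chat_payload(model, user_message, temperature=0.7):
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

payload = chat_payload("openai/gpt-oss-120b", "What is the capital of Korea?")
print(json.dumps(payload, indent=2))

def send(payload):
    # The actual call; run this only from a resource that can reach the
    # private endpoint.
    req = urllib.request.Request(
        f"{ENDPOINT}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the API is OpenAI-compatible, the same request can equally be made with the OpenAI Python client or LangChain by pointing their base URL at the private endpoint.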
Checking the AIOS Report
You can check the daily LLM call count and token usage on AIOS's Report page.
You can select Virtual Server, GPU Server, or Kubernetes Engine as the service type, query by selecting a resource name from the resources actually created in the service, and also query by the LLM model used.
Click the All Services > AI-ML > AIOS menu. Navigate to the Service Home page of AIOS.
Click the Report menu on the Service Home page. Navigate to the Report page of AIOS.
In the LLM usage by model list, clicking an LLM model name takes you directly to that LLM's Report page.
On the Report page, select the LLM model to view and click the Query button. The Report information for that LLM model is displayed.
Category
Detailed description
Service Type
Select the service type that uses the LLM
Virtual Server, GPU Server, Kubernetes Engine
Resource Name
Select the resource name
If you do not select a service type, only All can be selected; if you select a specific product as the service type, a specific resource name can be selected
Converts text into a high-dimensional vector (embedding) that can be used for various natural language processing (NLP) tasks, such as calculating text similarity, clustering, and searching.
Table. AIOS Supported API List
Rerank API
POST /rerank, /v1/rerank, /v2/rerank
Overview
The Rerank API applies an embedding model or cross-encoder model to predict the relevance between a single query and each item in a document list.
Generally, the score of a sentence pair represents the similarity between the two sentences on a scale of 0 to 1.
Embedding-based model: Converts the query and document into vectors and measures the similarity between the vectors (e.g., cosine similarity) to calculate the score.
Reranker (Cross-Encoder) based model: Evaluates the query and document as a pair.
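The embedding-based scoring described above can be illustrated with a plain cosine similarity. The vectors below are made up for the illustration; a real system would obtain them from an embedding model such as bge-m3:

```python
# Toy illustration of embedding-based rerank scoring: the query and each
# document are represented as vectors, and cosine similarity between the
# query vector and each document vector gives the relevance score.
# The vectors here are invented; real embeddings come from a model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.3]
docs = {"doc_a": [0.8, 0.2, 0.4], "doc_b": [0.1, 0.9, 0.2]}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # doc_a points in nearly the same direction as the query
```

A cross-encoder reranker differs in that it scores each (query, document) pair jointly in one forward pass instead of comparing independently computed vectors, which is usually more accurate but slower.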
"query": "What is the capital of France?",
"documents": [
"The capital of France is Paris.",
"France capital city is known for the Eiffel Tower.",
"Paris is located in the north-central part of France."
],
"top_n": 2,
"truncate_prompt_tokens": 512
}
Response
200 OK
Name
Type
Description
id
string
API response’s unique identifier (UUID format)
model
string
Name of the model that generated the result
usage
object
Object containing information about the resources used in the request
usage.total_tokens
integer
Total number of tokens used in processing the request
results
array
Array containing the results of the query-related documents
results[].index
integer
Order number in the result array
results[].document
object
Object containing the content of the searched document
results[].document.text
string
Actual text content of the searched document
results[].relevance_score
float
Score indicating the relevance between the query and the document (0 ~ 1)
Table. Rerank API - 200 OK
Error Code
HTTP status code
Error Code Description
400
Bad Request
422
Validation Error
500
Internal Server Error
Table. Rerank API - Error Code
Example
{"id":"rerank-scp-aios-rerank","model":"sds/bge-m3","usage":{"total_tokens":65},"results":[{"index":0,"document":{"text":"The capital of France is Paris."},"relevance_score":0.8291233777999878},{"index":1,"document":{"text":"France capital city is known for the Eiffel Tower."},"relevance_score":0.6996355652809143}]}
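The example response above can be unpacked as follows; this is a minimal sketch, and the JSON string is copied from the example:

```python
# Parse the rerank example response shown above and pick the most
# relevant document for the query.
import json

response = json.loads(
    '{"id":"rerank-scp-aios-rerank","model":"sds/bge-m3",'
    '"usage":{"total_tokens":65},"results":['
    '{"index":0,"document":{"text":"The capital of France is Paris."},'
    '"relevance_score":0.8291233777999878},'
    '{"index":1,"document":{"text":"France capital city is known for the Eiffel Tower."},'
    '"relevance_score":0.6996355652809143}]}'
)

# Each entry carries the document text and a 0-to-1 relevance score;
# take the entry with the highest score.
best = max(response["results"], key=lambda r: r["relevance_score"])
print(best["document"]["text"], best["relevance_score"])
```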
curl -X 'POST' \
  'https://aios.private.kr-west1.e.samsungsdscloud.com/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"model":"sds/bge-reranker-v2-m3","encoding_format":"float","text_1":["What is the largest planet in the solar system?","What is the chemical symbol for water?"],"text_2":["Jupiter is the largest planet in the solar system.","The chemical symbol for water is H₂O."]}'
Reference
Score API vLLM documentation: https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#score-api_1
Chat Completions API
POST /v1/chat/completions
Overview
Chat Completions API is compatible with OpenAI’s Completions API and can be used with the OpenAI Python client.
Request
Context
Key
Type
Description
Example
Content-Type
string
application/json
Table. Chat Completions API - Context
Path Parameters
Name
type
Required
Description
Default value
Boundary value
Example
None
Table. Chat Completions API - Path Parameters
Query Parameters
Name
type
Required
Description
Default value
Boundary value
Example
None
Table. Chat Completions API - Query Parameters
Body Parameters
Name
Sub-field
type
Required
Description
Default value
Boundary value
Example
model
-
string
✅
Specifies the model to use for generating responses
“meta-llama/Llama-3.3-70B-Instruct”
messages
role
string
✅
List of messages containing conversation history
[ { “role” : “user” , “content” : “message” }]
frequency_penalty
-
number
❌
Adjusts the penalty for repeating tokens
0
-2.0 ~ 2.0
0.5
logit_bias
-
object
❌
Adjusts the probability of specific tokens (e.g., { “100”: 2.0 })
null
Key: token ID, Value: -100 ~ 100
{ “100”: 2.0 }
logprobs
-
boolean
❌
Returns the probabilities of the top logprobs number of tokens
false
true, false
true
max_completion_tokens
-
integer
❌
Limits the maximum number of generated tokens
None
0 ~ model maximum
100
max_tokens (Deprecated)
-
integer
❌
Limits the maximum number of generated tokens
None
0 ~ model maximum
100
n
-
integer
❌
Specifies the number of responses to generate
1
3
presence_penalty
-
number
❌
Adjusts the penalty for tokens already present in the text
0
-2.0 ~ 2.0
1.0
seed
-
integer
❌
Specifies the seed value for controlling randomness
None
stop
-
string / array / null
❌
Stops generating when a specific string is encountered
null
"\n"
stream
-
boolean
❌
Returns the result in streaming mode
false
true/false
true
stream_options
include_usage, continuous_usage_stats
object
❌
Controls streaming options (e.g., including usage statistics)
null
{ “include_usage”: true }
temperature
-
number
❌
Adjusts the creativity of the generated response (higher means more random)
1
0.0 ~ 1.0
0.7
tool_choice
-
string
❌
Specifies which tool to call
none: Does not call any tool
auto: Model decides whether to call a tool or generate a message
required: Model calls at least one tool
No tool: none
With tool: auto
tools
-
array
❌
List of tools that the model can call
Only functions are supported as tools
Supports up to 128 functions
None
top_logprobs
-
integer
❌
Specifies the number of top logprobs tokens to return (between 0 and 20)
Each is associated with a log probability value
logprobs must be set to true
Shows the probability values for the top k completions
None
0 ~ 20
3
top_p
-
number
❌
Limits the sampling probability of tokens (higher means more tokens are considered)
## Response
### 200 OK

| Name | Type | Description |
| --- | --- | --- |
| choices[].message.tool_calls | array | Tool call information (may be included depending on the model/settings) |
| choices[].finish_reason | string or null | Reason why the response was terminated (e.g., "stop", "length", etc.) |
| choices[].stop_reason | object or null | Additional termination reason details |
| choices[].logprobs | object or null | Token-wise log probability information (may be included depending on the settings) |
| usage | object | Token usage statistics |
| usage.prompt_tokens | integer | Number of tokens used in the input prompt |
| usage.completion_tokens | integer | Number of tokens used in the generated response |
| usage.total_tokens | integer | Total number of tokens (input + output) |

Table. Chat Completions API - 200 OK
### Error Code

| HTTP status code | Error Code Description |
| --- | --- |
| 400 | Bad Request |
| 422 | Validation Error |
| 500 | Internal Server Error |

Table. Chat Completions API - Error Code
## Example

```json
{"id":"chatcmpl-scp-aios-chat-completions","object":"chat.completion","created":1749702816,"model":"meta-llama/Meta-Llama-3.3-70B-Instruct","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"The capital of Korea is Seoul.","tool_calls":[]},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":54,"total_tokens":62,"completion_tokens":8,"prompt_tokens_details":null},"prompt_logprobs":null}
```
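The example payload above can be unpacked with the standard `json` module; a minimal sketch (the `raw` string is a verbatim copy of the example response):

```python
import json

# Verbatim copy of the example 200 OK payload above.
raw = '{"id":"chatcmpl-scp-aios-chat-completions","object":"chat.completion","created":1749702816,"model":"meta-llama/Meta-Llama-3.3-70B-Instruct","choices":[{"index":0,"message":{"role":"assistant","reasoning_content":null,"content":"The capital of Korea is Seoul.","tool_calls":[]},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":54,"total_tokens":62,"completion_tokens":8,"prompt_tokens_details":null},"prompt_logprobs":null}'

body = json.loads(raw)
answer = body["choices"][0]["message"]["content"]   # the model's answer text
finish = body["choices"][0]["finish_reason"]        # why generation stopped

print(answer)  # The capital of Korea is Seoul.
print(finish)  # stop
```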
| Name | Name Sub | Type | Required | Description | Default value | Boundary value | Example |
| --- | --- | --- | --- | --- | --- | --- | --- |
| logit_bias | - | object | ❌ | Adjust the probability of specific tokens | null | Key: token ID, Value: -100 ~ 100 | {"100": 2.0} |
| logprobs | - | integer | ❌ | Return the probabilities of the top `logprobs` tokens | null | 1 ~ 5 | 5 |
| max_completion_tokens | - | integer | ❌ | Limit the maximum number of generated tokens | None | 0 ~ model maximum | 100 |
| max_tokens (Deprecated) | - | integer | ❌ | Limit the maximum number of generated tokens | None | 0 ~ model maximum | 100 |
| n | - | integer | ❌ | Specify the number of responses to generate | 1 | - | 3 |
| presence_penalty | - | number | ❌ | Adjust the penalty for tokens already present in the text | 0 | -2.0 ~ 2.0 | 1.0 |
| seed | - | integer | ❌ | Specify a seed value for randomness control | None | - | - |
| stop | - | string / array / null | ❌ | Stop generating when a specific string is encountered | null | - | "\n" |
| stream | - | boolean | ❌ | Whether to return the results in a streaming manner | false | true/false | true |
| stream_options | include_usage, continuous_usage_stats | object | ❌ | Control streaming options (e.g., include usage statistics) | null | - | {"include_usage": true} |
| temperature | - | number | ❌ | Control the creativity of the generated response (higher means more random) | 1 | 0.0 ~ 1.0 | 0.7 |
| top_p | - | number | ❌ | Limit the sampling probability of tokens (higher means more tokens considered) | 1 | 0.0 ~ 1.0 | 0.9 |

Table. Completions API - Body Parameters
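The boundary values in the table can be checked client-side before sending a request. The helper below is not part of the API, just a sketch using the documented ranges:

```python
def validate_params(params: dict) -> list[str]:
    """Check a request body against the documented boundary values."""
    bounds = {
        "frequency_penalty": (-2.0, 2.0),
        "presence_penalty": (-2.0, 2.0),
        "temperature": (0.0, 1.0),
        "top_p": (0.0, 1.0),
    }
    errors = []
    for key, (lo, hi) in bounds.items():
        if key in params and not (lo <= params[key] <= hi):
            errors.append(f"{key} must be between {lo} and {hi}")
    return errors

print(validate_params({"temperature": 0.7, "top_p": 0.9}))  # []
print(validate_params({"temperature": 1.5}))  # ['temperature must be between 0.0 and 1.0']
```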
### Example
```shell
curl -X 'POST' \
  'https://aios.private.kr-west1.e.samsungsdscloud.com/v1/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "meta-llama/Meta-Llama-3.3-70B-Instruct",
    "prompt": "What is the capital of Korea?",
    "temperature": 0.7
  }'
```

### Response
#### 200 OK

| Name | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier of the response |
| object | string | Type of the response object (e.g., "text_completion") |
| created | integer | Creation time (Unix timestamp, seconds) |
| model | string | Name of the model used |
| choices | array | List of generated response choices |
| choices[].index | number | Index of the choice |
| choices[].text | string | Generated text |
| choices[].logprobs | object | Token-wise log probability information (included based on settings) |
| choices[].finish_reason | string or null | Reason why the response was terminated (e.g., "stop", "length", etc.) |
| choices[].stop_reason | object or null | Additional termination reason details |
| choices[].prompt_logprobs | object or null | Log probability of input prompt tokens (may be null) |
| usage | object | Token usage statistics |
| usage.prompt_tokens | number | Number of tokens used in the input prompt |
| usage.total_tokens | number | Total number of tokens (input + output) |
| usage.completion_tokens | number | Number of tokens used in the generated response |
| usage.prompt_tokens_details | object | Details of prompt token usage |

<div class="figure-caption">
Table. Completions API - 200 OK
</div>
#### Error Code

| HTTP status code | Error Code Description |
| --- | --- |
| 400 | Bad Request |
| 422 | Validation Error |
| 500 | Internal Server Error |

Table. Completions API - Error Code
### Example

```json
{"id":"cmpl-scp-aios-completions","object":"text_completion","created":1749702612,"model":"meta-llama/Meta-Llama-3.3-70B-Instruct","choices":[{"index":0,"text":" \nOur capital city is Seoul. \n\nA. 1\nB. ","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":9,"total_tokens":25,"completion_tokens":16,"prompt_tokens_details":null}}
```
The Embedding API converts text into high-dimensional vectors (embeddings) that can be used for various natural language processing (NLP) tasks, such as calculating text similarity, clustering, and search.
```shell
curl -X 'POST' \
  'https://aios.private.kr-west1.e.samsungsdscloud.com/v1/embedding' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"model":"sds/bge-m3","input":"What is the capital of France?","encoding_format":"float"}'
```
### Response
#### 200 OK

| Name | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier of the response |
| object | string | Type of the response object (e.g., "list") |
| created | number | Creation time (Unix timestamp, seconds) |
| model | string | Name of the model used |
| data | array | Array of objects containing embedding results |
| data.index | number | Index of the input text (e.g., order of input texts) |
| data.object | string | Type of data item |
| data.embedding | array | Embedding vector values of the input text (sds-bge-m3 is a 1024-dimensional float array) |
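Once texts are embedded, their similarity is typically measured with cosine similarity between the returned vectors. A dependency-free sketch, where the 3-dimensional toy vectors stand in for the 1024-dimensional model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding vectors returned in data.embedding.
v1 = [0.1, 0.2, 0.7]
v2 = [0.1, 0.2, 0.7]
v3 = [0.9, -0.3, 0.0]

print(cosine_similarity(v1, v2))  # ≈ 1.0 (identical direction)
print(cosine_similarity(v1, v3))  # lower score for a dissimilar vector
```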
AIOS models are compatible with OpenAI’s API, so they are also compatible with OpenAI’s SDK.
The following is a list of OpenAI- and Cohere-compatible APIs supported by the Samsung Cloud Platform AIOS service.

| API | Description | Compatible packages |
| --- | --- | --- |
| Embeddings | Converts text into a high-dimensional vector (embedding) that can be used for various natural language processing (NLP) tasks such as text similarity calculation, clustering, and search. | - |
| Rerank | Applies an embedding model or a cross-encoder model to predict the relevance between a single query and each item in a document list. | cohere, langchain-cohere |

Table. Python SDK Compatible API List
Note
The SDK Reference guide is based on a Virtual Server environment with Python installed.
The actual execution may differ from the example in terms of token count and message content.
OpenAI SDK
Installing the openai Package
Install the OpenAI package.
pip install openai
Text Completion API
The Text Completion API generates a natural sentence that follows the given input string.
/v1/completions
Request
Note
The Text Completion API can only use strings as input values.
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
response = client.completions.create(model=model, prompt="Hi")
```
Reference
The aios endpoint-url and model ID for model calls can be found in the LLM Endpoint Usage Guide on the resource details page. Refer to Using LLM.
Response
The text field in choices contains the model’s response.
```python
Completion(id='cmpl-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
           choices=[CompletionChoice(finish_reason='length', index=0, logprobs=None,
                                     text=' future president of the United States, I hope you’re doing well. As a',
                                     stop_reason=None, prompt_logprobs=None)],
           created=1750000000, model='<<model>>', object='text_completion',
```
Stream Request
With stream, instead of receiving the entire answer at once, you can receive the answer piece by piece as the model generates tokens.
Request
Set the stream parameter value to True.
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
response = client.completions.create(model=model, prompt="Hi", stream=True)

# Receive the response as the model generates tokens.
for chunk in response:
    print(chunk)
```
Response
As each token is generated, the partial answer can be checked in the `text` field of `choices` in each chunk.
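The streamed chunks can be accumulated into the full answer. A sketch with mock objects standing in for the SDK's streamed chunks (the `SimpleNamespace` stand-ins are not real SDK types):

```python
from types import SimpleNamespace

# Mock chunks shaped like streamed chunks: each carries choices[0].text.
chunks = [SimpleNamespace(choices=[SimpleNamespace(text=part)])
          for part in ["Hel", "lo", "!"]]

answer = ""
for chunk in chunks:
    answer += chunk.choices[0].text  # append each streamed piece

print(answer)  # Hello!
```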
The Chat Completions API takes an ordered list of messages as input and responds with a message suited to the current conversation context.
/v1/chat/completions
Request
For text-only messages, you can call it as follows:
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"},
    ],
)
```
Note
The aios endpoint-url and model ID for model calls can be found in the LLM Endpoint Usage Guide on the resource details page. Refer to Using LLM.
Response
You can check the model’s answer in the `message` field of `choices`.
```python
ChatCompletion(id='chatcmpl-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
               choices=[Choice(finish_reason='stop', index=0, logprobs=None,
                               message=ChatCompletionMessage(content='Hello. How can I assist you today?',
                                                             refusal=None, role='assistant', annotations=None,
                                                             audio=None, function_call=None, tool_calls=[],
                                                             reasoning_content=None),
                               stop_reason=None)],
               created=1750000000, model='<<model>>', object='chat.completion',
               service_tier=None, system_fingerprint=None,
               usage=CompletionUsage(completion_tokens=10, prompt_tokens=42, total_tokens=52,
                                     completion_tokens_details=None, prompt_tokens_details=None),
               prompt_logprobs=None)
```
Stream Request
Using stream, instead of waiting for the model to finish and receiving the whole answer at once, you can receive and process the response for each token the model generates.
Request
Enter True as the value of the stream parameter.
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"},
    ],
    stream=True,
)

# You can receive a response each time the model generates a token.
for chunk in response:
    print(chunk)
```
Response
As each token is generated, it can be checked in the `delta` field of `choices` in each chunk.
Tool calling exposes interfaces to tools defined outside the model, so the model can generate answers that invoke the tool appropriate to the current context.
Using tool calls, you can define metadata for functions the model can execute and use them when generating answers.
Request
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # AIOS model call endpoint URL
model = "<<model>>"  # AIOS model ID

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Function metadata for getting weather information
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for provided coordinates in celsius.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"},
            },
            "required": ["latitude", "longitude"],
            "additionalProperties": False,
        },
        "strict": True,
    },
}]

messages = [{"role": "user", "content": "What is the weather like in Paris today?"}]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools,  # Inform the model of the metadata of the tools that can be used.
)
```
Response
The `message.tool_calls` field of `choices` shows how the model decided to invoke the tool.
In the following example, the `function` entry in `tool_calls` shows that the get_weather function should be used and which arguments should be passed to it.
After adding the function's return value as a tool message and calling the model again, the model can generate an answer that uses that result.
Request
Based on tool_calls’s function.arguments in the response data, you can actually call the function.
```python
import json

# Example function: always responds with 14 degrees.
def get_weather(latitude, longitude):
    return "14℃"

tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = get_weather(args["latitude"], args["longitude"])  # "14℃"
```
After adding the function's result as a tool message to the conversation context and calling the model again, the model can generate an appropriate answer using the function's result.
```python
# Add the model's tool call message to messages
messages.append(response.choices[0].message)

# Add the result of the actual function call to messages
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": str(result)})

response_2 = client.chat.completions.create(
    model=model,
    messages=messages,
    # tools=tools
)
```
Response
```python
ChatCompletion(id='chatcmpl-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
               choices=[Choice(finish_reason='stop', index=0, logprobs=None,
                               message=ChatCompletionMessage(content='The current weather in Paris is 14℃.',
                                                             refusal=None, role='assistant', annotations=None,
                                                             audio=None, function_call=None, tool_calls=[],
                                                             reasoning_content=None),
                               stop_reason=None)],
               created=1750000000, model='<<model>>', object='chat.completion',
               service_tier=None, system_fingerprint=None,
               usage=CompletionUsage(completion_tokens=11, prompt_tokens=74, total_tokens=85,
                                     completion_tokens_details=None, prompt_tokens_details=None),
               prompt_logprobs=None)
```
Reasoning
Request
Reasoning is supported in models that provide a reasoning value; the reasoning output can be checked as follows:
Note
Models that support reasoning may take longer to generate answers because they produce many tokens for reasoning.
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "9.11 and 9.8, which is greater?"}],
)
```
Response
In the `message` field of `choices`, you can check the `content` as well as the `reasoning_content`, which contains the reasoning tokens.
ChatCompletion(id='chatcmpl-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='''
To determine whether 9.11 or 9.8 is larger, we compare the decimal parts since both numbers have the same whole number part (9).
1. Convert both numbers to the same decimal places for easier comparison:
- 9.11 remains as is.
- 9.8 can be written as 9.80.
2. Compare the tenths place:
- The tenths place of 9.11 is 1.
- The tenths place of 9.80 is 8.
3. Since 8 (from 9.80) is greater than 1 (from 9.11), 9.80 (or 9.8) is larger.
4. Verification by subtraction:
- Subtracting 9.11 from 9.8 gives \(9.80 - 9.11 = 0.69\), which is positive, confirming 9.8 is larger.
Thus, the larger number is \(\boxed{9.8}\).
''', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[], reasoning_content="""Okay, so I need to figure out whether 9.11 is bigger than 9.8 or vice versa.
Hmm, let me think. Both numbers start with 9, so the whole number part is the same.
That means the difference must be in the decimal parts.
First, I remember that when comparing decimals, you look at the digits one by one after the decimal point.
The first digit after the decimal is the tenths place, then hundredths, and so on.
Since both numbers have 9 in the units place, I can focus on the decimal parts: 0.11 versus 0.08.
Wait, let me write them out to be clear. 9.11 is 9 and 11 hundredths, and 9.8 is 9 and 8 tenths.
Wait a second, maybe I should convert them to the same decimal places to make it easier.
So, 9.8 is the same as 9.80, right? Because adding a zero at the end of a decimal doesn't change its value.
That way, both numbers have two decimal places. Now comparing 9.11 and 9.80.
Looking at the tenths place first: 9.11 has 1 tenth, and 9.80 has 8 tenths.
Since 8 is more than 1, that means 9.80 is larger than 9.11. Wait, but hold on, the original numbers were 9.11 and 9.8. Did I make a mistake here?
Let me double-check. 9.8 is indeed equal to 9.80.
So when comparing tenths, 8 tenths (from 9.80) is definitely more than 1 tenth (from 9.11). So that would mean 9.8 is bigger.
But wait, maybe I confused the decimal places. Let me think again.
The first number after the decimal is tenths, so in 9.11, the first digit is 1 (tenths place) and the second is 1 (hundredths).
In 9.8, the first digit is 8 (tenths place), and there's no hundredths, which is equivalent to 0.
So comparing the tenths place first: 1 vs 8. Since 8 is greater than 1, 9.8 is larger.
But why did I initially think maybe 9.11 was bigger? Maybe because 11 is a two-digit number?
But no, in decimals, each position is a different place value.
The tenths place is more significant than the hundredths. So even though 11 hundredths is 0.11, that's still less than 0.8 (which is 8 tenths).
Another way to think about it: Convert both to fractions.
9.11 is 9 + 11/100, and 9.8 is 9 + 8/10.
Converting 8/10 to hundredths, that's 80/100.
So 9.8 is 9 + 80/100, whereas 9.11 is 9 + 11/100. Comparing the fractions: 80/100 is more than 11/100, so 9.8 is bigger.
Alternatively, maybe subtract them to see the difference. 9.8 minus 9.11 is 0.69, which is positive, so 9.8 is larger.
Wait, let me do the subtraction step by step.
9.8 minus 9.11:
First, write them aligned by decimal:
9.80
-9.11
-------
0.69
Yes, so the result is positive, so 9.8 is bigger.
Alternatively, if I had to do it without converting, maybe I can think in terms of money.
If these were dollars, 9.11 is $9.11 and 9.8 is $9.80.
Clearly, $9.80 is more than $9.11.
Hmm, so all these methods point to 9.8 being larger.
Maybe my initial confusion was because I saw 11 as a two, but
...omitted...
**Final Answer**
The number 9.8 is larger than 9.11. This is because when comparing the decimal parts, 0.8 (from 9.8) is greater than 0.11 (from 9.11).
Specifically, 9.8 can be written as 9.80, and comparing the tenths place (8 vs. 1) shows that 9.8 is larger.
The difference between them is 0.69, confirming that 9.8 is indeed the larger number.
**Final Answer**
\\boxed{9.8}"""
),
stop_reason=None
)
### Image to Text
For models that support **vision**, you can input an image as follows.

<div class="scp-textbox scp-textbox-type-error">
<div class="scp-textbox-title">Note</div>
<div class="scp-textbox-contents">
<p>For models that support <strong>vision</strong>, there are limitations on the size and number of input images.</p>
<p>Please refer to <a href="/en/userguide/ai_ml/aios/overview/#provided-models">Provided Models</a> for more information on image input limitations.</p>
</div>
</div>
#### Request
You can input an image with **MIME type** and **base64**.
```python
import base64
from openai import OpenAI
from urllib.parse import urljoin
aios_base_url = "<<aios endpoint-url>>" # AIOS endpoint-url for model calls
model = "<<model>>" # Model ID for AIOS model calls
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
image_path = "image/path.jpg"
def encode_image(image_path: str):
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode("utf-8")
```

#### Response
The following is the text generated by analyzing the image.
```python
ChatCompletion(id='chatcmpl-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',choices=[Choice(finish_reason='stop',index=0,logprobs=None,message=ChatCompletionMessage(content="""Here's what's in the image:
* **A golden retriever puppy:** The main subject is a light-colored golden retriever puppy lying on green grass.
* **A bone:** The puppy is holding a large bone in its paws and appears to be enjoying chewing on it.
* **Grass:** The puppy is lying on a well-maintained lawn.
* **Vegetation:** Behind the puppy, there are some shrubs and other greenery.
* **Outdoor setting:** The scene is outdoors, likely a backyard.""",refusal=None,role='assistant',annotations=None,audio=None,function_call=None,tool_calls=[],reasoning_content=None),stop_reason=106)],created=1750000000,model='<<model>>',object='chat.completion',service_tier=None,system_fingerprint=None,usage=CompletionUsage(completion_tokens=114,prompt_tokens=276,total_tokens=390,completion_tokens_details=None,prompt_tokens_details=None),prompt_logprobs=None,kv_transfer_params=None)
```
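The request snippet earlier stops after defining `encode_image`. A sketch of how the encoded image might be attached, assuming the OpenAI-style `image_url` content-part format with a base64 data URL (the exact part format can vary by model):

```python
import base64

def encode_image_bytes(image_bytes: bytes) -> str:
    """Base64-encode raw image bytes, mirroring encode_image above."""
    return base64.b64encode(image_bytes).decode("utf-8")

# Hypothetical stand-in bytes; a real call would use encode_image(image_path).
image_base64 = encode_image_bytes(b"\x89PNG...")

# Assumed OpenAI-style chat message with a data-URL image part.
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}},
    ],
}]

# The call itself mirrors a normal chat completion:
# response = client.chat.completions.create(model=model, messages=messages)
```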
Embeddings API
Embeddings converts input text into a high-dimensional vector of a fixed dimension. The generated vector can be used for various natural language processing tasks such as text similarity, clustering, and search.
/v1/embeddings
Request
```python
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # AIOS endpoint-url for model calls
model = "<<model>>"  # Model ID for AIOS model calls

client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")
response = client.embeddings.create(input="What is the capital of France?", model=model)
```
Note
The aios endpoint-url and model ID for model calls can be found in the LLM Endpoint Usage Guide on the resource details page. Refer to Using LLM.
Response
The `data` field of the response contains the input converted into vector form.
The Cohere SDK can be used by installing the Cohere package.
pip install cohere
Rerank API
Rerank calculates the relevance between the given query and documents, and ranks them.
It can help improve the performance of RAG (Retrieval-Augmented Generation) structure applications by adjusting relevant documents to the front.
/v2/rerank
Request
```python
import cohere

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

client = cohere.ClientV2("EMPTY_KEY", base_url=aios_base_url)

docs = [
    "The capital of France is Paris.",
    "France capital city is known for the Eiffel Tower.",
    "Paris is located in the north-central part of France.",
]

response = client.rerank(
    model=model,
    query="What is the capital of France?",
    documents=docs,
    top_n=3,
)
```
Note
The aios endpoint-url and model ID information for model calls are provided in the LLM Endpoint Usage Guide on the resource details page. Refer to Using LLM.
Response
In results, you can check the documents sorted in order of relevance to the query.
```python
V2RerankResponse(id='rerank-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
                 results=[V2RerankResponseResultsItem(
                              document=V2RerankResponseResultsItemDocument(text='The capital of France is Paris.'),
                              index=0,
                              relevance_score=1.0),
                          V2RerankResponseResultsItem(
                              document=V2RerankResponseResultsItemDocument(text='France capital city is known for the Eiffel Tower.'),
                              index=1,
                              relevance_score=1.0),
                          V2RerankResponseResultsItem(
                              document=V2RerankResponseResultsItemDocument(text='Paris is located in the north-central part of France.'),
                              index=2,
                              relevance_score=0.982421875)])
```
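Because each result carries the original document index, the input list can be reordered client-side by score. A minimal sketch with plain `(index, relevance_score)` tuples standing in for the SDK's result objects:

```python
docs = [
    "The capital of France is Paris.",
    "France capital city is known for the Eiffel Tower.",
    "Paris is located in the north-central part of France.",
]

# (index, relevance_score) pairs as returned in `results` above.
results = [(2, 0.982421875), (0, 1.0), (1, 1.0)]

# Sort by score (descending) and map back to the original documents.
ranked = [docs[i] for i, score in sorted(results, key=lambda r: r[1], reverse=True)]

print(ranked[0])  # The capital of France is Paris.
```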
The langchain-openai package can be used to utilize the text completion API and conversation completion API.
langchain_openai.OpenAI
When the text completion model (langchain_openai.OpenAI) is invoked, the result value is generated as text.
Request
```python
from langchain_openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

llm = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY", model=model)
llm.invoke("Can you introduce yourself in 5 words?")
```
Response
"""Hi, I'm a fun artist!
...omitted..."""
Note
The aios endpoint-url and model ID information for model calls are provided in the LLM Endpoint Usage Guide on the resource details page. Refer to Using LLM.
langchain_openai.ChatOpenAI
When the conversation completion model (langchain_openai.ChatOpenAI) is invoked, the result value is generated as an AIMessage or Message object.
Request
```python
from langchain_openai import ChatOpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model calls.
model = "<<model>>"  # Enter the model ID for AIOS model calls.

chat_llm = ChatOpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY", model=model)
chat_completion = chat_llm.invoke("Can you introduce yourself in 5 words?")
chat_completion.pretty_print()
```
Note
Information for the aios endpoint-url and model ID for model invocation can be found in the LLM Endpoint usage guide on the resource details page. Please refer to Using LLM.
Response
================================== Ai Message ==================================
I am an AI assistant.
embeddings
Embeddings models such as langchain-together, langchain-fireworks can be used.
Request
```python
from langchain_together import TogetherEmbeddings
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model invocation.
model = "<<model>>"  # Enter the model ID for AIOS model invocation.

embedding = TogetherEmbeddings(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY", model=model)
embedding.embed_query("What is the capital of France?")
```
Note
Information for the aios endpoint-url and model ID for model invocation can be found in the LLM Endpoint usage guide on the resource details page. Please refer to Using LLM.
Rerank models can utilize langchain-cohere’s CohereRerank.
Request
```python
from langchain_cohere.rerank import CohereRerank

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for AIOS model invocation.
model = "<<model>>"  # Enter the model ID for AIOS model invocation.

rerank = CohereRerank(base_url=aios_base_url, cohere_api_key="EMPTY_KEY", model=model)

docs = [
    "The capital of France is Paris.",
    "France capital city is known for the Eiffel Tower.",
    "Paris is located in the north-central part of France.",
]

rerank.rerank(documents=docs, query="What is the capital of France?", top_n=3)
```
Note
Information for the aios endpoint-url and model ID for model invocation can be found in the LLM Endpoint usage guide on the resource details page. Please refer to Using LLM.
This tutorial introduces how to create and utilize a web-based Playground to easily test the APIs of various AI models provided by AIOS using Streamlit in an SCP for Enterprise environment.
Environment
To proceed with this tutorial, the following environment must be prepared:
System Environment
- Python 3.10+
- pip

Required installation packages
pip install streamlit
Code Block. Install streamlit package
Note
Streamlit is a Python-based open-source web application framework well suited to visually presenting and sharing data science, machine learning, and data analysis results. Without complex web development knowledge, you can quickly create a web interface by writing just a few lines of code.
Implementation
Pre-check
Before running the application, check that the model call works with curl in the environment where it will run. For the value of AIOS_LLM_Private_Endpoint, refer to the LLM usage guide.
Example: {AIOS LLM Private Endpoint}/{API}
```shell
curl -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.3-70B-Instruct"
  , "prompt" : "Hello, I am jihye, who are you"
  , "temperature": 0
  , "max_tokens": 100
  , "stream": false}' -L AIOS_LLM_Private_Endpoint
```
Code Block. CURL Model Call Example
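The `{AIOS LLM Private Endpoint}/{API}` composition used above can be sketched with `urllib.parse.urljoin`; the endpoint host below is a hypothetical placeholder:

```python
from urllib.parse import urljoin

# Hypothetical placeholder for the AIOS LLM Private Endpoint.
base_url = "https://aios.example.com/"

# Append the API path to the endpoint, as app.py does later.
url = urljoin(base_url, "v1/completions")
print(url)  # https://aios.example.com/v1/completions
```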
The model’s answer can be found in the `text` field of `choices`.
```json
{"id":"cmpl-4ac698a99c014d758300a3ec5583d73b","object":"text_completion","created":1750140201,"model":"meta-llama/Llama-3.3-70B-Instruct","choices":[{"index":0,"text":"?\nI am a student who is studying English.\nI am interested in learning about different cultures and making friends from around the world.\nI like to watch movies, listen to music, and read books in my free time.\nI am looking forward to chatting with you and learning more about your culture and way of life.\nNice to meet you, jihye! I'm happy to chat with you and learn more about culture. What kind of movies, music, and books do you enjoy? Do","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":11,"total_tokens":111,"completion_tokens":100}}
```
Project Structure

```
chat-playground
├── app.py           # streamlit main web app file
├── endpoints.json   # AIOS model call type definitions
├── img
│   └── aios.png
└── models.json      # AIOS model list
```
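models.json and endpoints.json are not shown here. Based on how app.py reads them (`models` as a flat list, `endpoints_config` as objects with `label`, `path`, `type`, and an optional `style` key), a plausible minimal shape is sketched below; the concrete values are hypothetical:

```python
import json

# Hypothetical contents, inferred from how app.py consumes the two files.
models = ["meta-llama/Llama-3.3-70B-Instruct"]
endpoints = [
    {"label": "Chat", "path": "v1/chat/completions", "type": "chat", "style": "openai"},
    {"label": "Completion", "path": "v1/completions", "type": "completion", "style": "openai"},
]

# Serializing these would produce valid models.json / endpoints.json files.
models_json = json.dumps(models, indent=2)
endpoints_json = json.dumps(endpoints, indent=2)
```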
Chat Playground code
Reference
The models.json and endpoints.json files must exist and be configured in the appropriate format; refer to the code below.
The BASE_URL in the code must be changed to the AIOS LLM Private Endpoint address, referring to the LLM usage guide.
This Playground uses a one-time request-based structure: the user provides input values, presses a button, a single request is sent, and the result is displayed. This allows quick testing and response verification without complex session management.
The Model, Type, Temperature, and Max Tokens parameters in the sidebar form an interface built with st.sidebar and can be freely extended or modified as needed.
Images (files) uploaded via st.file_uploader() exist as temporary BytesIO objects in server memory and are not automatically saved to disk.
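The in-memory upload behaves like a file object; encoding it for an API request can be sketched with `io.BytesIO` standing in for the uploaded file:

```python
import base64
import io

# BytesIO stands in for the object returned by st.file_uploader().
uploaded = io.BytesIO(b"fake image bytes")

image_bytes = uploaded.read()  # read from memory; nothing is written to disk
image_base64 = base64.b64encode(image_bytes).decode("utf-8")

print(image_base64)  # ZmFrZSBpbWFnZSBieXRlcw==
```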
app.py
The streamlit main web app file. For the BASE_URL value (AIOS_LLM_Private_Endpoint), refer to the LLM usage guide.
import streamlit as st
import base64
import json
import requests
from urllib.parse import urljoin

BASE_URL = "AIOS_LLM_Private_Endpoint"

# ===== Setting =====
st.set_page_config(page_title="AIOS Chat Playground", layout="wide")
st.title("🤖 AIOS Chat Playground")

# ===== Common Functions =====
def load_models():
    with open("models.json", "r") as f:
        return json.load(f)

def load_endpoints():
    with open("endpoints.json", "r") as f:
        return json.load(f)

models = load_models()
endpoints_config = load_endpoints()

# ===== Sidebar Settings =====
st.sidebar.title('Hello!')
st.sidebar.image("img/aios.png")
st.sidebar.header("⚙️ Setting")
model = st.sidebar.selectbox("Model", models)
endpoint_labels = [ep["label"] for ep in endpoints_config]
endpoint_label = st.sidebar.selectbox("Type", endpoint_labels)
selected_endpoint = next(ep for ep in endpoints_config if ep["label"] == endpoint_label)
temperature = st.sidebar.slider("🔥 Temperature", 0.0, 1.0, 0.7)
max_tokens = st.sidebar.number_input("🧮 Max Tokens", min_value=1, max_value=5000, value=100)

base_url = BASE_URL
path = selected_endpoint["path"]
endpoint_type = selected_endpoint["type"]
api_style = selected_endpoint.get("style", "openai")  # openai or cohere

# ===== Input UI =====
prompt = ""
docs = []
image_base64 = None

if endpoint_type == "image":
    prompt = st.text_area("✍️ Enter your question:", "Explain this image.")
    uploaded_image = st.file_uploader("🖼️ Upload an image", type=["png", "jpg", "jpeg"])
    if uploaded_image:
        st.image(uploaded_image, caption="Uploaded image", use_container_width=True)
        image_bytes = uploaded_image.read()
        image_base64 = base64.b64encode(image_bytes).decode("utf-8")
elif endpoint_type == "rerank":
    prompt = st.text_area("✍️ Enter your query:", "What is the capital of France?")
    raw_docs = st.text_area("📄 Documents (one per line)", "The capital of France is Paris.\nFrance capital city is known for the Eiffel Tower.\nParis is located in the north-central part of France.")
    docs = raw_docs.strip().splitlines()
elif endpoint_type == "reasoning":
    prompt = st.text_area("✍️ Enter prompt:", "9.11 and 9.8, which is greater?")
elif endpoint_type == "embedding":
    prompt = st.text_area("✍️ Enter prompt:", "What is the capital of France?")
else:
    prompt = st.text_area("✍️ Enter prompt:", "Hello, who are you?")
    uploaded_image = st.file_uploader("🖼️ Upload an image (Optional)", type=["png", "jpg", "jpeg"])
    if uploaded_image:
        image_bytes = uploaded_image.read()
        image_base64 = base64.b64encode(image_bytes).decode("utf-8")

# ===== Call Button =====
if st.button("🚀 Invoke model"):
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer EMPTY_KEY"
    }
    try:
        if endpoint_type == "chat":
            url = urljoin(base_url, "v1/chat/completions")
            payload = {
                "model": model,
                "messages": [
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": prompt}
                ],
                "temperature": temperature,
                "max_tokens": max_tokens
            }
        elif endpoint_type == "completion":
            url = urljoin(base_url, "v1/completions")
            payload = {
                "model": model,
                "prompt": prompt,
                "temperature": temperature,
                "max_tokens": max_tokens
            }
        elif endpoint_type == "embedding":
            url = urljoin(base_url, "v1/embeddings")
            payload = {
                "model": model,
                "input": prompt
            }
        elif endpoint_type == "reasoning":
            url = urljoin(BASE_URL, "v1/chat/completions")
            payload = {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "temperature": temperature,
                "max_tokens": max_tokens
            }
        elif endpoint_type == "image":
            url = urljoin(base_url, "v1/chat/completions")
            if not image_base64:
                st.warning("🖼️ Upload an image")
                st.stop()
            payload = {
                "model": model,
                "messages": [{
                    "role": "user",
                    "content": [
                        {"type": "text", "text": prompt},
                        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
                    ]
                }]
            }
        elif endpoint_type == "rerank":
            url = urljoin(base_url, "v2/rerank")
            payload = {
                "model": model,
                "query": prompt,
                "documents": docs,
                "top_n": len(docs)
            }
        else:
            st.error("❌ Unknown endpoint type")
            st.stop()

        st.expander("📤 Request payload").code(json.dumps(payload, indent=2), language="json")
        response = requests.post(url, headers=headers, json=payload)
        response.raise_for_status()
        res = response.json()

        # ===== Response Parsing =====
        if endpoint_type == "chat" or endpoint_type == "image":
            output = res["choices"][0]["message"]["content"]
        elif endpoint_type == "completion":
            output = res["choices"][0]["text"]
        elif endpoint_type == "embedding":
            vec = res["data"][0]["embedding"]
            output = f"🔢 Vector dimensions: {len(vec)}"
            st.expander("📐 Vector preview").code(vec[:20])
        elif endpoint_type == "rerank":
            results = res["results"]
            output = "\n\n".join([f"{i+1}. The document text (score: {r['relevance_score']:.3f})" for i, r in enumerate(results)])
        elif endpoint_type == "reasoning":
            message = res.get("choices", [{}])[0].get("message", {})
            reasoning = message.get("reasoning_content", "❌ No reasoning_content")
            content = message.get("content", "❌ No content")
            output = f"""📘 <b>response:</b><br>{content}<br><br>🧠 <b>Reasoning:</b><br>{reasoning}"""

        st.success("✅ Model response:")
        st.markdown(f"<div style='padding:1rem;background:#f0f0f0;border-radius:8px'>{output}</div>", unsafe_allow_html=True)
        st.expander("📦 View full response").json(res)
    except requests.RequestException as e:
        st.error("❌ Request failed")
        st.code(str(e))
Code Block. app.py
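One detail worth noting in app.py is how urljoin composes the request URL: BASE_URL must end with a trailing slash, or urljoin will replace the last path segment of the base. A quick illustration with a hypothetical endpoint host:

```python
from urllib.parse import urljoin

base_with_slash = "https://llm.example.com/serving/"   # hypothetical endpoint address
base_no_slash = "https://llm.example.com/serving"

# With a trailing slash, the relative path is appended under the base path:
print(urljoin(base_with_slash, "v1/chat/completions"))
# https://llm.example.com/serving/v1/chat/completions

# Without it, the "serving" segment is replaced:
print(urljoin(base_no_slash, "v1/chat/completions"))
# https://llm.example.com/v1/chat/completions
```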
models.json
AIOS model list. Refer to the LLM usage guide to set the model to be used.
["meta-llama/Llama-3.3-70B-Instruct",
"qwen/Qwen3-30B-A3B",
"qwen/QwQ-32B",
"google/gemma-3-27b-it",
"meta-llama/Llama-4-Scout",
"meta-llama/Llama-Guard-4-12B",
"sds/bge-m3",
"sds/bge-reranker-v2-m3"]
Code Block. models.json
endpoints.json
The call type of the AIOS model is defined, and the input screen and result are output differently according to the type.
[{"label": "Chat Model",
"path": "/v1/chat/completions",
"type": "chat"},
{"label": "Completion Model",
"path": "/v1/completions",
"type": "completion"},
{"label": "Embedding Model",
"path": "/v1/embeddings",
"type": "embedding"},
{"label": "Image Chat Model",
"path": "/v1/chat/completions",
"type": "image"},
{"label": "Rerank Model",
"path": "/v2/rerank",
"type": "rerank"},
{"label": "Reasoning Model",
"path": "/v1/chat/completions",
"type": "reasoning"}]
Code Block. endpoints.json
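The label selected in the sidebar is mapped back to an entry from endpoints.json with a simple next() lookup, as in app.py. A self-contained sketch using an inline copy of two entries:

```python
# Inline copy of two endpoints.json entries (normally loaded with json.load).
endpoints_config = [
    {"label": "Chat Model", "path": "/v1/chat/completions", "type": "chat"},
    {"label": "Rerank Model", "path": "/v2/rerank", "type": "rerank"},
]

endpoint_label = "Rerank Model"   # what st.sidebar.selectbox would return
selected_endpoint = next(ep for ep in endpoints_config if ep["label"] == endpoint_label)
print(selected_endpoint["type"])  # rerank
```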
Playground usage method
This tutorial covers two ways to run the Playground.
Run on Virtual Server
1. Running Streamlit on a Virtual Server
streamlit run app.py --server.port 8501 --server.address 0.0.0.0
Code Block. Streamlit Execution
You can now view your Streamlit app in your browser.
URL: http://0.0.0.0:8501
Access http://{your_server_ip}:8501 in a browser, or http://localhost:8501 after setting up SSH tunneling to the server. Refer to the following for SSH tunneling:
2. Accessing Virtual Server through tunneling on a local PC (when accessing http://localhost:8501)
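A typical local port-forwarding command for this setup (user and host values are placeholders):

```shell
# Forward local port 8501 to port 8501 on the Virtual Server (placeholders in braces).
ssh -L 8501:localhost:8501 {user}@{your_server_ip}
# Then open http://localhost:8501 in a local browser.
```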
1. Deployment and Service startup. Run the following YAML to start the Deployment and Service. It uses a container image packaged with the code and Python library files needed to run the Chat Playground tutorial.
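A minimal Deployment and Service sketch consistent with the kubectl output below (names streamlit-deployment / streamlit-service, container port 8501, NodePort 30081); the container image is a placeholder to replace with the image provided for this tutorial:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: streamlit-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: streamlit
  template:
    metadata:
      labels:
        app: streamlit
    spec:
      containers:
      - name: streamlit
        image: {chat_playground_tutorial_image}   # placeholder: the tutorial container image
        ports:
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  name: streamlit-service
spec:
  type: NodePort
  selector:
    app: streamlit
  ports:
  - port: 80
    targetPort: 8501
    nodePort: 30081
```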
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
streamlit-deployment-8bfcd5959-6xpx9 1/1 Running 0 17s
$ kubectl logs streamlit-deployment-8bfcd5959-6xpx9
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.
You can now view your Streamlit app in your browser.
URL: http://0.0.0.0:8501
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 46h
streamlit-service NodePort 172.20.95.192 <none> 80:30081/TCP 130m
You can access it in a browser at http://{worker_node_ip}:30081, or at http://localhost:8501 after setting up SSH tunneling to the server. Refer to the following for SSH tunneling.
2. Accessing worker nodes through tunneling on a local PC (when accessing http://localhost:8501)
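A typical form of this command, using the SSH jump-host option to pass through the relay server and forward the Service NodePort (all user and host values are placeholders):

```shell
# Tunnel through the relay (bastion) server to the worker node's NodePort 30081,
# exposing it locally as port 8501 (placeholders in braces).
ssh -J {user}@{relay_server_ip} -L 8501:localhost:30081 {user}@{worker_node_ip}
# Then open http://localhost:8501 in a local browser.
```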
Code block. Tunneling to worker node through relay server from local PC
Usage example
Main screen composition
Item
Description
1
Model
This is a list of callable models set in the models.json file.
2
Endpoint type
Must be selected to match the model, according to the call type set in the endpoints.json file.
3
Temperature
A parameter that controls the "randomness" or "creativity" of the model output. In this tutorial, it can be set in the range 0.00 to 1.00.
0.0 : Only the most likely token is selected → Accurate and consistent response, lack of diversity
0.7 : Moderate randomness → Balance between creativity and consistency
1.0 : High randomness → Diverse and creative responses, possible quality variation
4
Max Tokens
An output length limit parameter that sets the maximum number of tokens that can be generated in the response. In this tutorial, it can be set in the range 1 to 5000.
5
Input Area
The way to receive prompts, images, etc. varies depending on the endpoint type.
Chat, Completion, Embedding, Reasoning : general text input
Image : text + image upload
Rerank : query + document list (in this tutorial, line-by-line text is recognized as a document)
Fig. Main screen composition
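For the Rerank type, the document list is built by splitting the text area line by line (raw_docs.strip().splitlines() in app.py). Note that interior blank lines survive the split and become empty documents; filtering them is a simple extension (the filtering line is an addition, not part of the tutorial code):

```python
raw_docs = "The capital of France is Paris.\n\nParis is located in the north-central part of France.\n"

# strip() removes the trailing newline, but the interior blank line remains:
docs = raw_docs.strip().splitlines()
print(docs)  # the blank interior line yields an empty string entry

# Optional extension: drop blank lines so no empty documents are sent.
filtered = [line for line in docs if line.strip()]
print(len(filtered))  # 2
```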
Calling the Chat Model
Image model calling
Reasoning model calling
Conclusion
Through this tutorial, I hope you have learned how to build and use a Playground UI for easily testing the various AI model APIs provided by AIOS. You can flexibly customize it to fit your desired model and endpoint structure for actual service purposes.
In this tutorial, you will create an Autogen AI Agent application using the AI models provided by AIOS.
Note
Autogen
Autogen is an open-source framework for easily building and managing LLM-based multi-agent collaboration and event-driven automation workflows.
Environment
To proceed with this tutorial, the following environment must be prepared.
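Assuming the PyPI package names for the AutoGen agentchat/extension packages and the reference MCP time server, the installation would look roughly like the following:

```shell
pip install "autogen-agentchat" "autogen-ext[openai]" mcp-server-time
```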
Code block. autogen, mcp server package installation
System Architecture
This shows the overall flow of the agent architecture using multiple AI agents and MCP.
Travel Planning Agent Flow
The user requests a 3-day Nepal travel plan.
The GroupChat manager coordinates the execution order of the registered agents (travel planning, local information, language tips, comprehensive summary).
Each agent collaboratively performs its given tasks according to its role.
Once the final travel plan is produced, it is delivered to the user.
MCP Flow
Note
MCP
MCP (Model Context Protocol) is an open standard protocol that coordinates interactions between a model and external data or tools.
An MCP server is a server that implements this protocol, using tool metadata to mediate and execute function calls.
The user asks for the current time in Korea.
A model request is made that includes the metadata of a tool, served by mcp_server_time, that can retrieve the current time.
The model generates a tool-call message invoking the get_current_time function.
The MCP server executes the get_current_time function, its result is passed back in a model request to generate the final response, and the response is delivered to the user.
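Concretely, the second step means the request payload carries a JSON tool description the model can target. A hypothetical sketch of such metadata in the OpenAI function-calling shape (the exact schema mcp_server_time advertises may differ):

```python
# Hypothetical tool metadata for get_current_time (illustrative shape only).
tool_spec = {
    "type": "function",
    "function": {
        "name": "get_current_time",
        "description": "Get the current time in a given IANA timezone.",
        "parameters": {
            "type": "object",
            "properties": {"timezone": {"type": "string"}},
            "required": ["timezone"],
        },
    },
}

# A tool-call message the model might emit in the third step:
tool_call = {"name": "get_current_time", "arguments": {"timezone": "Asia/Seoul"}}
print(tool_call["name"])  # get_current_time
```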
Implementation
Travel Planning Agent
Reference
Refer to the LLM usage guide for the AIOS_LLM_Private_Endpoint value of AIOS_BASE_URL and the MODEL_ID value of MODEL.
autogen_travel_planning.py
from urllib.parse import urljoin

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import ModelFamily

# Set the API URL and model name for model access.
AIOS_BASE_URL = "AIOS_LLM_Private_Endpoint"
MODEL = "MODEL_ID"

# Create a model client using OpenAIChatCompletionClient.
model_client = OpenAIChatCompletionClient(
    model=MODEL,
    base_url=urljoin(AIOS_BASE_URL, "v1"),
    api_key="EMPTY_KEY",
    model_info={
        # Set to True if images are supported.
        "vision": False,
        # Set to True if function calls are supported.
        "function_calling": True,
        # Set to True if JSON output is supported.
        "json_output": True,
        # If the model you want to use is not provided by ModelFamily, use UNKNOWN.
        # "family": ModelFamily.UNKNOWN,
        "family": ModelFamily.LLAMA_3_3_70B,
        # Set to True if supporting structured output.
        "structured_output": True,
    },
)

# Create multiple agents.
# Each agent performs roles such as travel planning, local activity recommendations,
# providing language tips, and summarizing travel plans.
planner_agent = AssistantAgent(
    "planner_agent",
    model_client=model_client,
    description="A helpful assistant that can plan trips.",
    system_message=(
        "You are a helpful assistant that can suggest a travel plan "
        "for a user based on their request."
    ),
)
local_agent = AssistantAgent(
    "local_agent",
    model_client=model_client,
    description="A local assistant that can suggest local activities or places to visit.",
    system_message=(
        "You are a helpful assistant that can suggest authentic and "
        "interesting local activities or places to visit for a user "
        "and can utilize any context information provided."
    ),
)
language_agent = AssistantAgent(
    "language_agent",
    model_client=model_client,
    description="A helpful assistant that can provide language tips for a given destination.",
    system_message=(
        "You are a helpful assistant that can review travel plans, "
        "providing feedback on important/critical tips about how best to address "
        "language or communication challenges for the given destination. "
        "If the plan already includes language tips, "
        "you can mention that the plan is satisfactory, with rationale."
    ),
)
travel_summary_agent = AssistantAgent(
    "travel_summary_agent",
    model_client=model_client,
    description="A helpful assistant that can summarize the travel plan.",
    system_message=(
        "You are a helpful assistant that can take in all of the suggestions "
        "and advice from the other agents and provide a detailed final travel plan. "
        "You must ensure that the final plan is integrated and complete. "
        "YOUR FINAL RESPONSE MUST BE THE COMPLETE PLAN. "
        "When the plan is complete and all perspectives are integrated, "
        "you can respond with TERMINATE."
    ),
)

# Group the agents and create a RoundRobinGroupChat.
# RoundRobinGroupChat adjusts so that agents perform tasks in the order they are registered, taking turns.
# This group enables agents to interact and make travel plans.
# The termination condition uses TextMentionTermination to end the group chat
# when the text "TERMINATE" is mentioned.
termination = TextMentionTermination("TERMINATE")
group_chat = RoundRobinGroupChat(
    [planner_agent, local_agent, language_agent, travel_summary_agent],
    termination_condition=termination,
)

async def main():
    """Main function, runs group chat and makes travel plans."""
    # Run a group chat to make travel plans.
    # The user requests the task "Plan a 3 day trip to Nepal."
    # Print the results using the console.
    await Console(group_chat.run_stream(task="Plan a 3 day trip to Nepal."))
    await model_client.close()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
Code block. autogen_travel_planning.py
When you run the file with python, you can see multiple agents working together, each performing its role on a single task.
python autogen_travel_planning.py
Code block. autogen travel plan agent execution
Execution Result
---------- TextMessage (user) ----------
Plan a 3 day trip to Nepal.
---------- TextMessage (planner_agent) ----------
Nepal! A country with a rich cultural heritage, breathtaking natural beauty, and warm hospitality. Here's a suggested 3-day itinerary for your trip to Nepal:
Day 1: Arrival in Kathmandu and exploration of the city (Boudhanath Stupa, the Thamel area, a traditional Nepali dinner with a cultural performance)
Day 2: Kathmandu Valley tour (Pashupatinath Temple, Kathmandu Durbar Square, Swayambhunath Stupa, Patan Durbar Square and the Krishna Temple)
Day 3: Bhaktapur and Nagarkot (Bhaktapur Durbar Square, Pottery Square, sunset over the Himalayas at Nagarkot)
Additional tips on local cuisine, bargaining, respecting customs, accommodation, transportation, and a rough budget estimate of $200-500 per person for 3 days.
---------- TextMessage (local_agent) ----------
Your 3-day itinerary for Nepal is well-planned and covers many of the country's cultural and natural highlights. Here are a few additional suggestions and tips to enhance your trip: street food in Thamel, hiring a guide at Kathmandu Durbar Square, the Patan Museum, pottery and handicrafts in Bhaktapur, a short hike near Nagarkot, plus tips on modest dress, basic Nepali phrases, local SIM cards or portable Wi-Fi, and negotiating transportation prices in advance.
---------- TextMessage (language_agent) ----------
Your plan is satisfactory. The tip to learn basic Nepali phrases such as "namaste" (hello) and "dhanyabaad" (thank you) is excellent, as is the suggestion to purchase a local SIM card or portable Wi-Fi hotspot. One minor suggestion: include a few emergency phrases such as "where is the hospital?" or "how do I get to the airport?".
---------- TextMessage (travel_summary_agent) ----------
TERMINATE
Here is the complete and integrated 3-day travel plan to Nepal: a detailed final plan combining all of the agents' suggestions for Day 1 (Kathmandu), Day 2 (Kathmandu Valley), and Day 3 (Bhaktapur and Nagarkot), together with the additional tips, accommodation, transportation, and budget advice above.
Agent-specific conversation summary
Agent
Conversation summary
planner_agent
Proposes a 3-day travel itinerary for Nepal.
Day 1: Arrival in Kathmandu and city exploration
Day 2: Kathmandu valley tour
Day 3: Visit Pokhara and Nagarkot
Additional tips: Respect local customs, try local food, choose transportation options, etc
local_agent
Based on planner_agent’s 3-day travel itinerary, provides additional suggestions and tips.
Day 1: Explore around Boudhanath Stupa
Day 2: Respect Hindu rituals at Pashupatinath Temple
Day 3: Try pottery and handicrafts of Bhaktapur
Additional tips: Respect local customs, learn basic Nepali, use local facilities, etc
language_agent
Evaluates the travel itinerary and provides additional suggestions: learning basic Nepali, using local facilities, language preparation for emergencies, etc.
travel_summary_agent
Summarizes the overall 3-day travel plan.
Day 1: Arrival in Kathmandu and city exploration
Day 2: Kathmandu valley tour
Day 3: Visit Bhaktapur and Nagarkot
Additional tips: Respect local customs, try local food, choose transportation options, etc.
MCP Utilization Agent
Note
For the AIOS_LLM_Private_Endpoint value used as AIOS_BASE_URL and the MODEL_ID value used as MODEL, refer to the LLM usage guide.
autogen_mcp.py
from urllib.parse import urljoin
from autogen_core.models import ModelFamily
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console

# Set the API URL and model name for model access.
AIOS_BASE_URL = "AIOS_LLM_Private_Endpoint"
MODEL = "MODEL_ID"

# Create a model client using OpenAIChatCompletionClient.
model_client = OpenAIChatCompletionClient(
    model=MODEL,
    base_url=urljoin(AIOS_BASE_URL, "v1"),
    api_key="EMPTY_KEY",
    model_info={
        # Set to True if images are supported.
        "vision": False,
        # Set to True if function calls are supported.
        "function_calling": True,
        # Set to True if JSON output is supported.
        "json_output": True,
        # If the model you want to use is not provided by ModelFamily, use UNKNOWN.
        # "family": ModelFamily.UNKNOWN,
        "family": ModelFamily.LLAMA_3_3_70B,
        # Set to True if structured output is supported.
        "structured_output": True,
    },
)

# Set MCP server parameters.
# mcp_server_time is an MCP server implemented in Python.
# It provides the get_current_time function, which returns the current time,
# and the convert_time function, which converts between time zones.
# This parameter sets the MCP server's local timezone so that the time can be checked.
# For example, if you set it to "Asia/Seoul", the time is returned in the Korean time zone.
mcp_server_params = StdioServerParams(
    command="python",
    args=["-m", "mcp_server_time", "--local-timezone", "Asia/Seoul"],
)


async def main():
    """Runs the agent that checks the time using the MCP workbench."""
    # Create and run an agent that checks the time using the MCP workbench.
    # The agent performs the task "What time is it now in South Korea?"
    # and the results are printed to the console in streaming mode.
    # When the MCP workbench context exits, the agent also terminates.
    async with McpWorkbench(mcp_server_params) as workbench:
        time_agent = AssistantAgent(
            "time_assistant",
            model_client=model_client,
            workbench=workbench,
            reflect_on_tool_use=True,
        )
        await Console(time_agent.run_stream(task="What time is it now in South Korea?"))
        await model_client.close()


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
Code Block. autogen_mcp.py
When you run the file with python, it fetches the tools’ metadata from the MCP server and calls the model; when the model generates a tool-call message, you can see that the get_current_time function is executed to retrieve the current time.
# TextMessage (user): Input message given by the user
---------- TextMessage (user) ----------
What time is it now in South Korea?
# Query metadata of tools that can be used on the MCP server
INFO:mcp.server.lowlevel.server:Processing request of type ListToolsRequest
...omission...
INFO:autogen_core.events:{
  # Metadata of tools available on the MCP server
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_time",
        "description": "Get current time in a specific timezones",
        "parameters": {
          "type": "object",
          "properties": {
            "timezone": {
              "type": "string",
              "description": "IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Use 'Asia/Seoul' as local timezone if no timezone provided by the user."
            }
          },
          "required": ["timezone"],
          "additionalProperties": false
        },
        "strict": false
      }
    },
    {
      "type": "function",
      "function": {
        "name": "convert_time",
        "description": "Convert time between timezones",
        "parameters": {
          "type": "object",
          "properties": {
            "source_timezone": {
              "type": "string",
              "description": "Source IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Use 'Asia/Seoul' as local timezone if no source timezone provided by the user."
            },
            "time": {
              "type": "string",
              "description": "Time to convert in 24-hour format (HH:MM)"
            },
            "target_timezone": {
              "type": "string",
              "description": "Target IANA timezone name (e.g., 'Asia/Tokyo', 'America/San_Francisco'). Use 'Asia/Seoul' as local timezone if no target timezone provided by the user."
            }
          },
          "required": ["source_timezone", "time", "target_timezone"],
          "additionalProperties": false
        },
        "strict": false
      }
    }
  ],
  "type": "LLMCall",
  # input message
  "messages": [
    {
      "content": "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
      "role": "system"
    },
    {"role": "user", "name": "user", "content": "What time is it now in South Korea?"}
  ],
  # Model Response
  "response": {
    "id": "chatcmpl-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "choices": [
      {
        "finish_reason": "tool_calls",
        "index": 0,
        "logprobs": null,
        "message": {
          "content": null,
          "refusal": null,
          "role": "assistant",
          "annotations": null,
          "audio": null,
          "function_call": null,
          "tool_calls": [
            {
              "id": "chatcmpl-tool-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
              "function": {
                "arguments": "{\"timezone\": \"Asia/Seoul\"}",
                "name": "get_current_time"
              },
              "type": "function"
            }
          ],
          "reasoning_content": null
        },
        "stop_reason": 128008
      }
    ],
    "created": 1751278737,
    "model": "MODEL_ID",
    "object": "chat.completion",
    "service_tier": null,
    "system_fingerprint": null,
    "usage": {
      "completion_tokens": 21,
      "prompt_tokens": 508,
      "total_tokens": 529,
      "completion_tokens_details": null,
      "prompt_tokens_details": null
    },
    "prompt_logprobs": null
  },
  "prompt_tokens": 508,
  "completion_tokens": 21,
  "agent_id": null
}
# ToolCallRequestEvent: Receiving a tool call message from the model
---------- ToolCallRequestEvent (time_assistant) ----------
[FunctionCall(id='chatcmpl-tool-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', arguments='{"timezone": "Asia/Seoul"}', name='get_current_time')]
INFO:mcp.server.lowlevel.server:Processing request of type ListToolsRequest
# Execute function of tool call message via MCP server
INFO:mcp.server.lowlevel.server:Processing request of type CallToolRequest
# ToolCallExecutionEvent: Deliver the function execution result to the model
---------- ToolCallExecutionEvent (time_assistant) ----------
[FunctionExecutionResult(content='{"timezone": "Asia/Seoul", "datetime": "2025-06-30T19:18:58+09:00", "is_dst": false}', name='get_current_time', call_id='chatcmpl-tool-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', is_error=False)]
...omission...
# TextMessage (time_assistant): Final answer generated by the model
---------- TextMessage (time_assistant) ----------
The current time in South Korea is 19:18:58 KST.
TERMINATE
MCP Server Time Query System Log Analysis Result
MCP (Model Context Protocol) server-based time query system execution process log analysis result.
Request Information
Item
Content
User request
What time is it now in South Korea?
Request Time
2025-06-30 19:18:58 KST
Processing method
MCP server tool call
Available tools
Tool Name
Description
Parameter
Default Value
get_current_time
Retrieve current time of a specific timezone
timezone (IANA timezone name)
Asia/Seoul
convert_time
Time conversion between time zones
source_timezone, time, target_timezone
Asia/Seoul
Processing steps
Step
Action
Details
1
Tool metadata lookup
Verify the list of tools available on the MCP server
2
AI model response
get_current_time function called in the Asia/Seoul timezone
3
Function execution
MCP server runs time lookup tool
4
Return result
Provide time information in structured JSON format
5
Final Answer
Deliver time to the user in an easy-to-read format
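The five steps above can be sketched as a minimal tool-call loop in plain Python. This is an illustrative stand-in, not the autogen or MCP implementation: the model call is stubbed so that it always requests get_current_time, and run_model, TOOLS, and answer are hypothetical names introduced here for the sketch.

```python
import json
from datetime import datetime, timezone, timedelta

# Step 1: tool metadata the "model" can choose from (mirrors the MCP tool list).
TOOLS = {"get_current_time": lambda tz: {
    "timezone": tz,
    "datetime": datetime.now(timezone(timedelta(hours=9))).isoformat(timespec="seconds"),
    "is_dst": False,
}}

def run_model(user_message):
    # Step 2: a stubbed model response that always emits a tool call for Asia/Seoul.
    return {"tool_call": {"name": "get_current_time",
                          "arguments": json.dumps({"timezone": "Asia/Seoul"})}}

def answer(user_message):
    call = run_model(user_message)["tool_call"]
    # Step 3: execute the requested function with the arguments from the tool call.
    args = json.loads(call["arguments"])
    result = TOOLS[call["name"]](args["timezone"])
    # Steps 4-5: take the structured JSON result and format a readable final answer.
    time_part = result["datetime"].split("T")[1][:8]
    return f"The current time in South Korea is {time_part} KST."

print(answer("What time is it now in South Korea?"))
```

In the real system the stubbed run_model is an LLM call and TOOLS is served by the MCP server, but the request/execute/return/answer cycle is the same.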
Function Call Details
Item
Value
Function name
get_current_time
Parameter
{"timezone": "Asia/Seoul"}
Call ID
chatcmpl-tool-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Type
function
Execution result
Field
Value
Description
timezone
Asia/Seoul
Time zone
datetime
2025-06-30T19:18:58+09:00
ISO 8601 format time
is_dst
false
Whether daylight saving time is applied
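The datetime field is in ISO 8601 format with an explicit +09:00 offset, so it can be parsed directly with the Python standard library; a small sketch, using the sample value from the log above:

```python
from datetime import datetime

# Structured result returned by get_current_time (sample value from the log above).
result = {"timezone": "Asia/Seoul", "datetime": "2025-06-30T19:18:58+09:00", "is_dst": False}

# fromisoformat handles the ISO 8601 string including the UTC offset.
parsed = datetime.fromisoformat(result["datetime"])
print(parsed.strftime("%H:%M:%S"), parsed.utcoffset())  # → 19:18:58 9:00:00
```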
Final response
Item
Content
Response Message
The current time in South Korea is 19:18:58 KST.
Completion mark
TERMINATE
Response Time
19:18:58 KST
Usage metric table
Indicator
Value
Prompt Token
508
Completion Token
21
Total token usage
529
Processing time
Immediate (real-time)
Main features
Feature
Description
MCP protocol utilization
Smooth integration with external tools
Korean time zone default setting
Asia/Seoul used as default
Structured response
Clear data return in JSON format
Auto-complete display
Work completion notification with TERMINATE
Real-time information provision
Accurate current time lookup
Technical significance
This is an example of a modern architecture where an AI assistant integrates with external systems to provide real-time information. Through MCP, the AI model can access various external tools and services, enabling more practical and dynamic responses.
Conclusion
In this tutorial, we implemented an application that creates travel itineraries using multiple agents by leveraging the AI model provided by AIOS and autogen, and an agent application that can use external tools by utilizing the MCP server. Through this, we learned that problems can be solved from multiple angles using several agents with different perspectives, and external tools can be utilized. This system can be expanded and customized to fit user environments in the following ways.
Agent flow control: Various techniques can be used when selecting the agent to perform the task. For reliable results, you can fix the order of the agents; for flexible processing, you can let the AI model choose the agents. Additionally, you can use event-driven techniques to have multiple agents process tasks in parallel.
Introduction of various MCP servers: In addition to mcp_server_time, various MCP servers that have already been implemented exist. By utilizing these, the AI model can flexibly use various external tools to implement useful applications.
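The two flow-control styles above (fixed order vs. model-selected order) can be contrasted with a minimal sketch in plain Python. The agents are stubbed as functions, and choose_next stands in for an LLM-based selector; all names here are hypothetical illustrations, not autogen APIs.

```python
# Stub agents: each appends its contribution to the shared conversation context.
agents = {
    "planner": lambda ctx: ctx + ["planner: drafted itinerary"],
    "reviewer": lambda ctx: ctx + ["reviewer: added local tips"],
    "summarizer": lambda ctx: ctx + ["summarizer: TERMINATE"],
}

# Fixed order: deterministic and reliable, like a round-robin team.
def round_robin(order, ctx):
    for name in order:
        ctx = agents[name](ctx)
    return ctx

# Model-selected order: a selector picks the next agent from the context.
# choose_next is a rule-based stub; with autogen it would be an LLM call.
def choose_next(ctx):
    if not any("planner" in m for m in ctx):
        return "planner"
    if not any("reviewer" in m for m in ctx):
        return "reviewer"
    return "summarizer"

def selector_loop(ctx):
    # Keep selecting agents until one emits the TERMINATE marker.
    while not (ctx and "TERMINATE" in ctx[-1]):
        ctx = agents[choose_next(ctx)](ctx)
    return ctx

print(round_robin(["planner", "reviewer", "summarizer"], []))
print(selector_loop([]))
```

With these stub rules both styles produce the same transcript; the difference is that the selector version can reorder or skip agents as the context demands, at the cost of determinism.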
Based on this tutorial, we hope you will directly build a suitable AIOS-based collaborative assistant according to the actual service purpose.
AIOS models are compatible with OpenAI and Cohere APIs, so they are also compatible with OpenAI and Cohere SDKs. The following is the list of OpenAI and Cohere compatible APIs supported by Samsung Cloud Platform AIOS service.
Converts text to high-dimensional vectors (embeddings), which can be used for various natural language processing (NLP) tasks such as text similarity calculation, clustering, and search.
go get github.com/openai/openai-go \
github.com/cohere-ai/cohere-go/v2
Code Block. SDK package installation
Text Completion API
The Text Completion API generates natural sentences that immediately follow the given string as input.
Non-stream request
Request
Caution
Text Completion API input can only use strings.
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the request data.
# This includes the model ID to use and the prompt.
data = {"model": model, "prompt": "Hi"}

# Send a POST request to the AIOS API's v1/completions endpoint.
# Use the urljoin function to combine the base URL and endpoint path.
response = requests.post(urljoin(aios_base_url, "v1/completions"), json=data)

# Parse the response body in JSON format.
body = json.loads(response.text)

# response.choices[0].text is the response text generated by the AI model.
print(body["choices"][0]["text"])
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Generate a completion using the AIOS model.
# The model parameter specifies the model ID to use,
# and the prompt parameter is the input text to provide to the AI.
response = client.completions.create(model=model, prompt="Hi")

# response.choices[0].text is the response text generated by the AI model.
print(response.choices[0].text)
from langchain_openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an LLM (Large Language Model) instance using LangChain's OpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# The model parameter specifies the model ID to use.
llm = OpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model,
)

# Pass the prompt "Hi" to the LLM and receive a response.
# The invoke method returns the model's output.
print(llm.invoke("Hi"))
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the request data.
// This includes the model ID to use and the prompt.
const data = {
  model: model,
  prompt: "Hi",
};

// Create the AIOS API's v1/completions endpoint URL.
let url = new URL("/v1/completions", aios_base_url);

// Send a POST request to the AIOS API.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});

// Parse the response body in JSON format.
const body = await response.json();

// response.choices[0].text is the response text generated by the AI model.
console.log(body.choices[0].text);
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a completion using the AIOS model.
// The model parameter specifies the model ID to use,
// and the prompt parameter is the input text to provide to the AI.
const completions = await client.completions.create({
  model: model,
  prompt: "Hi",
});

// completions.choices[0].text is the response text generated by the AI model.
console.log(completions.choices[0].text);
import { OpenAI } from "@langchain/openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an LLM (Large Language Model) instance using LangChain's OpenAI class.
// The model parameter specifies the model ID to use.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// configuration.baseURL points to the v1 endpoint of the AIOS API.
const llm = new OpenAI({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Pass the prompt "Hi" to the LLM and receive a response.
// The invoke method returns the model's output.
const completion = await llm.invoke("Hi");

// Output the generated response.
// This text is the response generated by the AI model.
console.log(completion);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the data structure to be used for POST requests.
// Model: model ID to use
// Prompt: input text to provide to the AI
// Stream: whether to stream the response (optional)
type PostData struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream,omitempty"`
}

func main() {
	// Create request data.
	data := PostData{
		Model:  model,
		Prompt: "Hi",
	}

	// Marshal data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send a POST request to the AIOS API's v1/completions endpoint.
	response, err := http.Post(aiosBaseUrl+"/v1/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()

	// Read the entire response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}
	var v map[string]interface{}
	json.Unmarshal(body, &v)

	// Extract the choices array from the response.
	choices := v["choices"].([]interface{})
	// Extract the first entry's text.
	choice := choices[0].(map[string]interface{})
	text := choice["text"]

	// Output the response text generated by the AI model.
	fmt.Println(text)
}
packagemainimport("context""fmt""github.com/openai/openai-go""github.com/openai/openai-go/option""github.com/openai/openai-go/packages/param")const(aiosBaseUrl="<<aios endpoint-url>>"// Enter the aios endpoint-url for calling the AIOS model.
model="<<model>>"// Enter the model ID for calling the AIOS model.
)funcmain(){// Create an OpenAI client.
// Use option.WithBaseURL to set the v1 endpoint of the AIOS API.
client:=openai.NewClient(option.WithBaseURL(aiosBaseUrl+"/v1"),)// Generate a completion using the AIOS model.
// Use openai.CompletionNewParams to set the model and prompt.
completion,err:=client.Completions.New(context.TODO(),openai.CompletionNewParams{Model:openai.CompletionNewParamsModel(model),Prompt:openai.CompletionNewParamsPromptUnion{OfString:param.Opt[string]{Value:"Hi"}},})iferr!=nil{panic(err)}// response.choices[0].text is the response text generated by the AI model.
fmt.Println(completion.Choices[0].Text)}
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
	"github.com/openai/openai-go/packages/param"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

func main() {
	// Create an OpenAI client.
	// Use option.WithBaseURL to set the v1 endpoint of the AIOS API.
	client := openai.NewClient(
		option.WithBaseURL(aiosBaseUrl + "/v1"),
	)

	// Generate a completion using the AIOS model.
	// Use openai.CompletionNewParams to set the model and prompt.
	completion, err := client.Completions.New(context.TODO(), openai.CompletionNewParams{
		Model:  openai.CompletionNewParamsModel(model),
		Prompt: openai.CompletionNewParamsPromptUnion{OfString: param.Opt[string]{Value: "Hi"}},
	})
	if err != nil {
		panic(err)
	}

	// completion.Choices[0].Text is the response text generated by the AI model.
	fmt.Println(completion.Choices[0].Text)
}
Code Block. /v1/completions request
Note
For the aios endpoint-url and model ID used to call the model, see the LLM endpoint usage guide on the resource details page (refer to Using LLM).
Response
You can see that the model’s answer is included in the text field of choices.
future president of the United States, I hope you're doing well. As a
Stream request
Using the stream feature, you can receive responses token by token as the model generates tokens, without waiting for the model to complete the entire response.
Request
Set the stream parameter value to true.
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the request data.
# This includes the model ID to use, the prompt, and whether to stream.
data = {"model": model, "prompt": "Hi", "stream": True}

# Send a POST request to the AIOS API's v1/completions endpoint.
# Set stream=True to receive real-time streaming responses.
response = requests.post(urljoin(aios_base_url, "v1/completions"), json=data, stream=True)

# You can receive responses as the model generates tokens.
# Responses are sent line by line, so process them with iter_lines().
for line in response.iter_lines():
    if line:
        try:
            # Remove the 'data: ' prefix and parse the JSON data.
            body = json.loads(line[len("data: "):])
            # body["choices"][0]["text"] is the response text generated by the AI model.
            print(body["choices"][0]["text"])
        except json.JSONDecodeError:
            # Skip non-JSON lines such as the final '[DONE]' marker.
            pass
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Generate a completion using the AIOS model.
# Set stream=True to receive real-time streaming responses.
response = client.completions.create(model=model, prompt="Hi", stream=True)

# You can receive responses as the model generates tokens.
# The response is a stream, so process it iteratively.
for chunk in response:
    # Each chunk's choices[0].text is the response text generated by the AI model.
    print(chunk.choices[0].text)
from langchain_openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an LLM (Large Language Model) instance using LangChain's OpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# model parameter specifies the model ID to use.
llm = OpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

# Pass the prompt "Hi" to the LLM and receive a streaming response.
# The stream method returns a stream that generates tokens in real-time.
response = llm.stream("Hi")

# You can receive responses as the model generates tokens.
# response is sent in stream format, so you can process it iteratively.
for chunk in response:
    # Output each chunk.
    # This chunk is the response token generated by the AI model.
    print(chunk)
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the request data.
// This includes the model ID to use, the prompt, and whether to stream.
const data = {
  model: model,
  prompt: "Hi",
  stream: true,
};

// Create the AIOS API's v1/completions endpoint URL.
let url = new URL("/v1/completions", aios_base_url);

// Send a POST request to the AIOS API.
// Set stream: true to receive real-time streaming responses.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});

// You can receive responses as the model generates tokens.
// Convert the response body to a text decoder stream and read it.
const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
let buf = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  // Add received data to buffer.
  buf += value;
  let sep;
  // Find newline characters (\n\n) in the buffer and separate data.
  while ((sep = buf.indexOf("\n\n")) >= 0) {
    const data = buf.slice(0, sep);
    buf = buf.slice(sep + 2);
    // Process each line.
    for (const rawLine of data.split("\n")) {
      const line = rawLine.trim();
      if (!line.startsWith("data: ")) continue;
      // Remove the "data: " prefix and extract JSON data.
      const payload = line.slice("data: ".length).trim();
      if (payload === "[DONE]") break;
      // Parse the JSON data.
      const json = JSON.parse(payload);
      // choices[0].text is the response text generated by the AI model.
      console.log(json.choices[0].text);
    }
  }
}
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a completion using the AIOS model.
// model parameter specifies the model ID to use,
// prompt parameter is the input text to provide to the AI.
// Set stream: true to receive real-time streaming responses.
const completions = await client.completions.create({
  model: model,
  prompt: "Hi",
  stream: true,
});

// You can receive responses as the model generates tokens.
// Use for await...of loop to sequentially process stream events.
for await (const event of completions) {
  // Each event's choices[0].text is the response text generated by the AI model.
  console.log(event.choices[0].text);
}
import { OpenAI } from "@langchain/openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an LLM (Large Language Model) instance using LangChain's OpenAI class.
// model parameter specifies the model ID to use.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// configuration.baseURL points to the v1 endpoint of the AIOS API.
const llm = new OpenAI({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Pass the prompt "Hi" to the LLM and receive a streaming response.
// The stream method returns a stream that generates tokens in real-time.
const completion = await llm.stream("Hi");

// You can receive responses as the model generates tokens.
// Use for await...of loop to sequentially process stream chunks.
for await (const chunk of completion) {
  // Output each chunk.
  // This chunk is the response token generated by the AI model.
  console.log(chunk);
}
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the data structure to be used for POST requests.
// Model: Model ID to use
// Prompt: Input text to provide to the AI
// Stream: Whether to stream response (optional)
type PostData struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream,omitempty"`
}

func main() {
	// Create request data.
	// Set Stream: true to receive real-time streaming responses.
	data := PostData{
		Model:  model,
		Prompt: "Hi",
		Stream: true,
	}

	// Marshal data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send a POST request to the AIOS API's v1/completions endpoint.
	response, err := http.Post(aiosBaseUrl+"/v1/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()

	// You can receive responses as the model generates tokens.
	// Scan the HTTP response body and process line by line.
	var v map[string]interface{}
	scanner := bufio.NewScanner(response.Body)
	for scanner.Scan() {
		line := bytes.TrimSpace(scanner.Bytes())
		// Skip lines that don't start with "data: ".
		if !bytes.HasPrefix(line, []byte("data: ")) {
			continue
		}
		// Remove the "data: " prefix.
		payload := bytes.TrimPrefix(line, []byte("data: "))
		// If payload is "[DONE]", end streaming.
		if bytes.Equal(payload, []byte("[DONE]")) {
			break
		}
		// Parse the JSON data.
		json.Unmarshal(payload, &v)
		// Extract the choices array from the response.
		choices := v["choices"].([]interface{})
		// Extract the first data.
		choice := choices[0].(map[string]interface{})
		// Extract the response token generated by the AI model.
		text := choice["text"]
		fmt.Println(text)
	}
}
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
	"github.com/openai/openai-go/packages/param"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

func main() {
	// Create an OpenAI client.
	// Use option.WithBaseURL to set the v1 endpoint of the AIOS API.
	client := openai.NewClient(
		option.WithBaseURL(aiosBaseUrl + "/v1"),
	)

	// Generate a streaming completion using the AIOS model.
	// Use openai.CompletionNewParams to set the model and prompt.
	completion := client.Completions.NewStreaming(context.TODO(), openai.CompletionNewParams{
		Model:  openai.CompletionNewParamsModel(model),
		Prompt: openai.CompletionNewParamsPromptUnion{OfString: param.Opt[string]{Value: "Hi"}},
	})

	// You can receive responses as the model generates tokens.
	// The Next() method returns true when there is a next chunk.
	for completion.Next() {
		// Get the choices slice of the current chunk.
		chunk := completion.Current().Choices
		// choices[0].text is the response text generated by the AI model.
		fmt.Println(chunk[0].Text)
	}
}
The answer is generated token by token, and each token can be found in the text field of choices.
I
'm
looking
for
a
way
to
check
if
a
specific
process
is
running
on
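The raw streaming endpoints deliver these tokens as Server-Sent-Events-style lines: each line starts with "data: " and carries one JSON chunk, and the stream ends with "data: [DONE]". A minimal, self-contained sketch of that parsing logic — the sample lines below are made-up stand-ins for a real stream, mirroring the token output above:

```python
import json

# Made-up SSE-style lines standing in for a real streaming response.
sample_lines = [
    'data: {"choices": [{"text": "I"}]}',
    'data: {"choices": [{"text": "\'m"}]}',
    'data: {"choices": [{"text": " looking"}]}',
    "data: [DONE]",
]

tokens = []
for line in sample_lines:
    # Only "data: " lines carry payloads.
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    # "[DONE]" marks the end of the stream.
    if payload == "[DONE]":
        break
    # Each payload is a JSON chunk; the token is in choices[0].text.
    body = json.loads(payload)
    tokens.append(body["choices"][0]["text"])

# Joining the tokens reconstructs the full answer.
print("".join(tokens))  # I'm looking
```

The same accumulation applies to the SDK examples: append each chunk's text to a buffer and join at the end.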
Chat Completion API
The Chat Completion API takes an ordered list of messages (the conversation context) and generates an appropriate next message as the response.
non-stream request
Request
If the messages contain only text, you can call the API as follows.
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the request data.
# This includes the model ID to use and the messages list.
# The messages list includes system messages and user messages.
data = {
    "model": model,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"}
    ]
}

# Send a POST request to the AIOS API's v1/chat/completions endpoint.
response = requests.post(urljoin(aios_base_url, "v1/chat/completions"), json=data)

# Parse the response body in JSON format.
body = json.loads(response.text)

# choices[0].message is the response generated by the AI model.
print(body["choices"][0]["message"])
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Generate a chat completion using the AIOS model.
# model parameter specifies the model ID to use.
# messages parameter is a list of messages including system and user messages.
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"}
    ]
)

# Output choices[0].message from the generated response.
print(response.choices[0].message.model_dump())
from langchain_openai import ChatOpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# model parameter specifies the model ID to use.
chat_llm = ChatOpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

# Configure the chat messages list.
# Include system messages and user messages.
messages = [
    ("system", "You are a helpful assistant."),
    ("human", "Hi"),
]

# Pass the messages list to the chat LLM and receive a response.
# The invoke method returns the model's output.
chat_completion = chat_llm.invoke(messages)

# Output the generated response.
print(chat_completion.model_dump())
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the request data.
// This includes the model ID to use and the messages list.
// The messages list includes system messages and user messages.
const data = {
  model: model,
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hi" },
  ],
};

// Create the AIOS API's v1/chat/completions endpoint URL.
let url = new URL("/v1/chat/completions", aios_base_url);

// Send a POST request to the AIOS API.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});

// Parse the response body in JSON format.
const body = await response.json();

// Output choices[0].message from the generated response.
console.log(body.choices[0].message);
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a chat completion using the AIOS model.
// model parameter specifies the model ID to use.
// messages parameter is a list of messages including system and user messages.
const response = await client.chat.completions.create({
  model: model,
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hi" },
  ],
});

// Output choices[0].message from the generated response.
console.log(response.choices[0].message);
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
// model parameter specifies the model ID to use.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// configuration.baseURL points to the v1 endpoint of the AIOS API.
const llm = new ChatOpenAI({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Configure the chat messages list.
// Include system messages and user messages using SystemMessage and HumanMessage objects.
const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Hi"),
];

// Pass the messages list to the chat LLM and receive a response.
// The invoke method returns the model's output.
const response = await llm.invoke(messages);

// Output the content of the generated response.
// This content is the response text generated by the AI model.
console.log(response.content);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the message structure.
// Role: Message role (e.g., system, user)
// Content: Message content
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// Define the data structure to be used for POST requests.
// Model: Model ID to use
// Messages: List of messages
// Stream: Whether to stream response (optional)
type PostData struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
	Stream   bool      `json:"stream,omitempty"`
}

func main() {
	// Create request data.
	// The messages list includes system messages and user messages.
	data := PostData{
		Model: model,
		Messages: []Message{
			{
				Role:    "system",
				Content: "You are a helpful assistant.",
			},
			{
				Role:    "user",
				Content: "Hi",
			},
		},
	}

	// Marshal data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send a POST request to the AIOS API's v1/chat/completions endpoint.
	response, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()

	// Read the entire response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}

	// Unmarshal the response body into map format.
	var v map[string]interface{}
	json.Unmarshal(body, &v)

	// Extract the choices array from the response.
	choices := v["choices"].([]interface{})
	// Extract the first data.
	choice := choices[0].(map[string]interface{})

	// Format and output the response message generated by the AI model in JSON format.
	message, err := json.MarshalIndent(choice["message"], "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(message))
}
package main
import (
"context""fmt""github.com/openai/openai-go""github.com/openai/openai-go/option")
const (
aiosBaseUrl = "<<aios endpoint-url>>"// Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"// Enter the model ID for calling the AIOS model.
)
func main() {
// Create an OpenAI client.
// Use option.WithBaseURL to set the v1 endpoint of the AIOS API.
client := openai.NewClient(
option.WithBaseURL(aiosBaseUrl + "/v1"),
)
// Generate a chat completion using the AIOS model.
// Use openai.ChatCompletionNewParams to set the model and messages list.
// The messages list includes system messages and user messages.
response, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
Model: model,
Messages: []openai.ChatCompletionMessageParamUnion{
openai.SystemMessage("You are a helpful assistant."),
openai.UserMessage("Hi"),
},
})
if err != nil {
panic(err)
}
// Format and output the response message generated by the AI model in JSON format.
fmt.Println(response.Choices[0].Message.RawJSON())
}
Code Block. /v1/chat/completions request
Note
For information on the aios endpoint-url and model ID for calling the model, see the LLM Endpoint Usage Guide on the resource details page and Using LLM.
Response
You can check the model’s answer in the message field of choices.
{'annotations':None,'audio':None,'content':'Hello! How can I help you today?','function_call':None,'reasoning_content':'The user says "Hi". We respond politely.','refusal':None,'role':'assistant','tool_calls':[]}
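As a minimal sketch (assuming the response has already been parsed into a Python dict, as the examples above do), the answer text itself can be pulled out of choices[0].message.content:

```python
# A parsed /v1/chat/completions response, abbreviated to the fields used here.
body = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Hello! How can I help you today?"}}
    ]
}

# The answer text lives in choices[0].message.content.
answer = body["choices"][0]["message"]["content"]
print(answer)
```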
stream request
Using stream, you can receive and process the response token by token as the model generates it, instead of waiting for the model to finish the entire answer and receiving it all at once.
Request
Enter True for the stream parameter value.
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the request data.
# This includes the model ID to use, the messages list, and whether to stream.
# The messages list includes system messages and user messages.
data = {
    "model": model,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"}
    ],
    "stream": True
}

# Send a POST request to the AIOS API's v1/chat/completions endpoint.
# Set stream=True to receive real-time streaming responses.
response = requests.post(urljoin(aios_base_url, "v1/chat/completions"), json=data, stream=True)

# You can receive responses as the model generates tokens.
# Responses are sent separated by each line, so process them with iter_lines().
for line in response.iter_lines():
    if line:
        try:
            # Remove the 'data: ' prefix and parse the JSON data.
            body = json.loads(line[len("data: "):])
            # Output the delta (choices[0].delta).
            # The delta is the response token generated by the AI model.
            print(body["choices"][0]["delta"])
        except json.JSONDecodeError:
            pass
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Generate a chat completion using the AIOS model.
# The model parameter specifies the model ID to use.
# The messages parameter is a list of messages including system and user messages.
# Set stream=True to receive real-time streaming responses.
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"}
    ],
    stream=True
)

# You can receive responses as the model generates tokens.
# The response is sent as a stream, so you can process it iteratively.
for chunk in response:
    # Output the delta (choices[0].delta).
    # The delta is the response token generated by the AI model.
    print(chunk.choices[0].delta.model_dump())
from langchain_openai import ChatOpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# The model parameter specifies the model ID to use.
llm = ChatOpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

# Configure the chat messages list.
# Include system messages and user messages.
messages = [
    ("system", "You are a helpful assistant."),
    ("human", "Hi"),
]

# You can receive responses as the model generates tokens.
# The llm.stream method returns a stream that generates tokens in real-time.
for chunk in llm.stream(messages):
    # Output each chunk.
    # This chunk is the response token generated by the AI model.
    print(chunk)
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the request data.
// This includes the model ID to use, the messages list, and whether to stream.
// The messages list includes system messages and user messages.
const data = {
  model: model,
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hi" },
  ],
  stream: true,
};

// Create the AIOS API's v1/chat/completions endpoint URL.
let url = new URL("/v1/chat/completions", aios_base_url);

// Send a POST request to the AIOS API.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});

// You can receive responses as the model generates tokens.
// Convert the response body to a text decoder stream and read it.
const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
let buf = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  // Add received data to the buffer.
  buf += value;
  let sep;
  // Find event separators (\n\n) in the buffer and split out each event.
  while ((sep = buf.indexOf("\n\n")) >= 0) {
    const data = buf.slice(0, sep);
    buf = buf.slice(sep + 2);
    // Process each line.
    for (const rawLine of data.split("\n")) {
      const line = rawLine.trim();
      if (!line.startsWith("data: ")) continue;
      // Remove the "data: " prefix and extract the JSON data.
      const payload = line.slice("data: ".length).trim();
      if (payload === "[DONE]") break;
      // Parse the JSON data.
      const json = JSON.parse(payload);
      // Output the delta (choices[0].delta).
      // The delta is the response token generated by the AI model.
      console.log(json.choices[0].delta);
    }
  }
}
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a chat completion using the AIOS model.
// The model parameter specifies the model ID to use.
// The messages parameter is a list of messages including system and user messages.
// Set stream: true to receive real-time streaming responses.
const response = await client.chat.completions.create({
  model: model,
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hi" },
  ],
  stream: true,
});

// You can receive responses as the model generates tokens.
// Use a for await...of loop to sequentially process stream events.
for await (const event of response) {
  // Output the delta (choices[0].delta).
  // The delta is the response token generated by the AI model.
  console.log(event.choices[0].delta);
}
import { ChatOpenAI } from "@langchain/openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
// The model parameter specifies the model ID to use.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// configuration.baseURL points to the v1 endpoint of the AIOS API.
const llm = new ChatOpenAI({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Configure the chat messages list.
// Include system messages and user messages.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hi" },
];

// You can receive responses as the model generates tokens.
// The llm.stream method returns a stream that generates tokens in real-time.
const completion = await llm.stream(messages);
for await (const chunk of completion) {
  // Output the content of each chunk.
  // This content is the response token generated by the AI model.
  console.log(chunk.content);
}
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the message structure.
// Role: Message role (e.g., system, user)
// Content: Message content
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// Define the data structure to be used for POST requests.
// Model: Model ID to use
// Messages: List of messages
// Stream: Whether to stream the response (optional)
type PostData struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
	Stream   bool      `json:"stream,omitempty"`
}

func main() {
	// Create request data.
	// The messages list includes system messages and user messages.
	// Set Stream: true to receive real-time streaming responses.
	data := PostData{
		Model: model,
		Messages: []Message{
			{
				Role:    "system",
				Content: "You are a helpful assistant.",
			},
			{
				Role:    "user",
				Content: "Hi",
			},
		},
		Stream: true,
	}
	// Marshal the data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	// Send a POST request to the AIOS API's v1/chat/completions endpoint.
	response, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()
	// You can receive responses as the model generates tokens.
	// Scan the HTTP response body and process it line by line.
	var v map[string]interface{}
	scanner := bufio.NewScanner(response.Body)
	for scanner.Scan() {
		line := bytes.TrimSpace(scanner.Bytes())
		// Skip lines that don't start with "data: ".
		if !bytes.HasPrefix(line, []byte("data: ")) {
			continue
		}
		// Remove the "data: " prefix.
		payload := bytes.TrimPrefix(line, []byte("data: "))
		// If the payload is "[DONE]", end streaming.
		if bytes.Equal(payload, []byte("[DONE]")) {
			break
		}
		// Parse the JSON data.
		json.Unmarshal(payload, &v)
		// Extract the choices array from the response.
		choices := v["choices"].([]interface{})
		// Extract the first entry.
		choice := choices[0].(map[string]interface{})
		// Serialize the delta to JSON format and output it.
		message, err := json.Marshal(choice["delta"])
		if err != nil {
			panic(err)
		}
		fmt.Println(string(message))
	}
}
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

func main() {
	// Create an OpenAI client.
	// Use option.WithBaseURL to set the v1 endpoint of the AIOS API.
	client := openai.NewClient(
		option.WithBaseURL(aiosBaseUrl + "/v1"),
	)
	// Generate a streaming chat completion using the AIOS model.
	// Use openai.ChatCompletionNewParams to set the model and messages list.
	completion := client.Chat.Completions.NewStreaming(context.TODO(), openai.ChatCompletionNewParams{
		Model: model,
		Messages: []openai.ChatCompletionMessageParamUnion{
			openai.SystemMessage("You are a helpful assistant."),
			openai.UserMessage("Hi"),
		},
	})
	// You can receive responses as the model generates tokens.
	// The Next() method returns true when there is a next chunk.
	for completion.Next() {
		// Get the choices slice of the current chunk.
		chunk := completion.Current().Choices
		// choices[0].delta.content is the response token generated by the AI model.
		fmt.Println(chunk[0].Delta.Content)
	}
}
The answer is generated token by token, and each token can be checked in the content field of choices[0].delta.
I
'm
looking
for
a
way
to
check
if
a
specific
process
is
running
on
}
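To rebuild the full answer on the client side, the streamed deltas can simply be concatenated. A minimal sketch, assuming deltas shaped like the dicts printed by the streaming examples above (the sample values are illustrative):

```python
# Deltas as printed by the streaming examples above (illustrative values).
deltas = [
    {"role": "assistant", "content": ""},
    {"content": "Hello"},
    {"content": "!"},
    {"content": None},  # a final chunk may carry no content
]

# Concatenate the content of each delta, skipping empty or missing chunks.
answer = "".join(d.get("content") or "" for d in deltas)
print(answer)  # Hello!
```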
tool calling
Tool Calling allows the model to call external functions to perform specific tasks.
The model analyzes the user’s request, selects the necessary tools, and generates the arguments needed to call them as its response.
You then execute the actual tool using the tool call message generated by the model, compose the result as a tool message, and send a second request; the model uses the tool execution results to generate a natural response for the user.
Fig. tool calling sequence diagram
Note
The openai/gpt-oss-120b model does not support the tool calling feature.
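The round trip described above (the model emits tool_calls, you execute the tools, append tool messages, and request again) can be sketched as a generic dispatch loop. The get_weather name and message shapes mirror the examples in this guide; the TOOLS dispatch table and the run_tool_calls helper are illustrative, not part of the API:

```python
import json

# Local implementations, keyed by the tool name declared to the model.
def get_weather(latitude, longitude):
    return "14℃"

TOOLS = {"get_weather": get_weather}

def run_tool_calls(tool_calls):
    """Execute each tool call requested by the model and build the tool
    messages appended to the conversation before the second request."""
    tool_messages = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        # The arguments arrive as a JSON string and must be parsed first.
        args = json.loads(call["function"]["arguments"])
        result = fn(**args)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(result),
        })
    return tool_messages

# Example: a tool_calls array as it appears in choices[0].message.tool_calls.
calls = [{
    "id": "call_0",
    "function": {"name": "get_weather",
                 "arguments": '{"latitude": 48.85, "longitude": 2.35}'},
}]
print(run_tool_calls(calls))
```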
Request
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Define a function to get weather information.
# This function returns the current temperature in Celsius for the provided coordinates.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for provided coordinates in celsius.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"}
            },
            "required": ["latitude", "longitude"],
            "additionalProperties": False
        },
        "strict": True
    }
}]

# Define the user message.
# The user is asking about today's weather in Paris.
messages = [{"role": "user", "content": "What is the weather like in Paris today?"}]

# Configure the request data.
# This includes the model ID to use, the messages list, and the tools list.
data = {
    "model": model,
    "messages": messages,
    "tools": tools
}

# Send a POST request to the AIOS API's v1/chat/completions endpoint.
# This request instructs the model to process the user's question and determine the necessary tool calls.
response = requests.post(urljoin(aios_base_url, "v1/chat/completions"), json=data)

# Parse the response body in JSON format.
body = json.loads(response.text)

# Print the tool call information from the response generated by the AI model.
# This information indicates which tool the model should call.
print(body["choices"][0]["message"]["tool_calls"])

# Implementation of the weather function; always returns 14 degrees.
def get_weather(latitude, longitude):
    return "14℃"

# Extract tool call information from the first response.
# This retrieves the tool call information requested by the model.
tool_call = body["choices"][0]["message"]["tool_calls"][0]

# Parse the arguments of the tool call from JSON string format to dict format.
args = json.loads(tool_call["function"]["arguments"])

# Call the actual function to get the result. (e.g., "14℃")
# At this step, the actual weather information lookup logic is executed.
result = get_weather(args["latitude"], args["longitude"])  # "14℃"

# Add the function call result as a tool message to the conversation context and call the model again;
# the model then generates an appropriate response using the function call result.
# Add the model's tool call message to messages to maintain the conversation context.
messages.append(body["choices"][0]["message"])

# Add the result of calling the actual function to messages.
# This allows the model to generate a final response based on the tool call result.
messages.append({
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": str(result)
})

# Configure the second request data.
# This includes the model ID to use and the updated messages list.
data = {
    "model": model,
    "messages": messages,
}

# Send a POST request to the AIOS API's v1/chat/completions endpoint.
# This request generates a final response based on the tool call result.
response_2 = requests.post(urljoin(aios_base_url, "v1/chat/completions"), json=data)

# Parse the response body in JSON format.
body = json.loads(response_2.text)

# Print the message generated by the AI in the second response.
# This is the final answer to the user's question.
print(body["choices"][0]["message"])
import json
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# and api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Define a function to get weather information.
# This function returns the current temperature in Celsius for the provided coordinates.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for provided coordinates in celsius.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"}
            },
            "required": ["latitude", "longitude"],
            "additionalProperties": False
        },
        "strict": True
    }
}]

# Define the user message.
# The user is asking about today's weather in Paris.
messages = [{"role": "user", "content": "What is the weather like in Paris today?"}]

# Generate a chat completion using the AIOS model.
# The model parameter specifies the model ID to use,
# the messages parameter is the conversation so far,
# and the tools parameter provides metadata about the tools available to the model.
response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools
)

# Print the tool call information from the response generated by the AI model.
# This information indicates which tool the model should call.
print(response.choices[0].message.tool_calls[0].model_dump())

# Implementation of the weather function; always returns 14 degrees.
def get_weather(latitude, longitude):
    return "14℃"

# Extract the tool call information requested by the model from the first response.
tool_call = response.choices[0].message.tool_calls[0]

# Parse the tool call arguments in JSON format.
args = json.loads(tool_call.function.arguments)

# Call the actual function to get the result. (e.g., "14℃")
# At this step, the actual weather information lookup logic is executed.
result = get_weather(args["latitude"], args["longitude"])

# Add the model's tool call message to messages to maintain the conversation context.
messages.append(response.choices[0].message)

# Add the result of calling the actual function as a "tool" message.
# This allows the model to generate a final response based on the tool call result.
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": str(result)
})

# Generate a second chat completion with the updated messages list.
# This request generates a final response based on the tool call result.
response_2 = client.chat.completions.create(
    model=model,
    messages=messages,
)

# Print the message generated by the AI in the second response.
# This is the final answer to the user's question.
print(response_2.choices[0].message.model_dump())
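The example above handles exactly one tool call by hand. When a model may request several tool calls in one response, the same pattern can be factored into a helper that works on the parsed assistant message. The sketch below is illustrative only: the names `resolve_tool_calls` and `AVAILABLE_TOOLS` are not part of the AIOS API, and the helper assumes every tool the model may request is registered in the dispatch table.

```python
import json

def get_weather(latitude, longitude):
    # Sample implementation, mirroring the example above; always returns 14 degrees.
    return "14℃"

# Dispatch table mapping tool names to local implementations (assumption:
# every tool the model may call is registered here).
AVAILABLE_TOOLS = {"get_weather": get_weather}

def resolve_tool_calls(assistant_message):
    """Turn each tool_call in a parsed assistant message into a "tool" message
    that can be appended to the conversation before the follow-up request."""
    tool_messages = []
    for tool_call in assistant_message.get("tool_calls") or []:
        name = tool_call["function"]["name"]
        args = json.loads(tool_call["function"]["arguments"])
        result = AVAILABLE_TOOLS[name](**args)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": tool_call["id"],
            "content": str(result),
        })
    return tool_messages
```

The returned list can be extended onto `messages` before the second `chat.completions.create` call, regardless of how many tool calls the model requested.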
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Define a tool function to get weather information.
# This function returns the current temperature in Celsius for the provided coordinates.
@tool
def get_weather(latitude: float, longitude: float) -> str:
    """Get current temperature for provided coordinates in celsius."""
    return "14℃"

# Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# and api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# The model parameter specifies the model ID to use.
chat_llm = ChatOpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

# Bind tools to the model.
# The get_weather function returns the current temperature in Celsius for the provided coordinates.
llm_with_tools = chat_llm.bind_tools([get_weather])

# Configure the list of chat messages.
# The user is asking about today's weather in Paris.
messages = [("human", "What is the weather like in Paris today?")]

# Pass the list of messages to the chat LLM to get a response.
# At this step, the model analyzes the user's question and determines the necessary tool calls.
response = llm_with_tools.invoke(messages)

# Print the tool call information from the response generated by the AI model.
# This information indicates which tool the model should call.
print(response.tool_calls)

# Add the model's tool call message to messages to maintain the conversation context.
messages.append(response)

# Call the actual tool function to get the result.
# At this step, the get_weather function is executed to return weather information.
tool_call = response.tool_calls[0]
tool_message = get_weather.invoke(tool_call)

# Add the tool call result to messages.
# This allows the model to generate a final response based on the tool call result.
messages.append(tool_message)

# Perform a second request to get the final answer.
# Now the model generates an appropriate response based on the tool call result.
response2 = chat_llm.invoke(messages)

# Print the final AI model response.
# This is the final answer to the user's question.
print(response2.model_dump())
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Define a function to get weather information.
// This function returns the current temperature in Celsius for the provided coordinates.
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get current temperature for provided coordinates in celsius.",
      parameters: {
        type: "object",
        properties: {
          latitude: { type: "number" },
          longitude: { type: "number" },
        },
        required: ["latitude", "longitude"],
        additionalProperties: false,
      },
      strict: true,
    },
  },
];

// Define the user message.
// The user is asking about today's weather in Paris.
const messages = [
  { role: "user", content: "What is the weather like in Paris today?" },
];

// Configure the request data.
// This includes the model ID to use, the messages list, and the tools list.
let data = {
  model: model,
  messages: messages,
  tools: tools,
};

// Generate the AIOS API's v1/chat/completions endpoint URL.
let url = new URL("/v1/chat/completions", aios_base_url);

// Send a POST request to the AIOS API.
// This request instructs the model to process the user's question and determine the necessary tool calls.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});

// Parse the response body in JSON format.
let body = await response.json();

// Print the tool call information from the response generated by the AI model.
// This information indicates which tool the model should call.
console.log(JSON.stringify(body.choices[0].message.tool_calls));

// Implementation of the weather function; always returns 14 degrees.
function getWeather(latitude, longitude) {
  return "14℃";
}

// Extract tool call information from the first response.
// This retrieves the tool call information requested by the model.
const toolCall = body.choices[0].message.tool_calls[0];

// Parse the tool call arguments in JSON format.
// This extracts the parameters needed for the tool call.
const args = JSON.parse(toolCall.function.arguments);

// Call the actual function to get the result. (e.g., "14℃")
// At this step, the actual weather information lookup logic is executed.
const result = getWeather(args.latitude, args.longitude);

// Add the model's tool call message to messages to maintain the conversation context.
messages.push(body.choices[0].message);

// Add the result of calling the actual function as a "tool" message.
// This allows the model to generate a final response based on the tool call result.
messages.push({
  role: "tool",
  tool_call_id: toolCall.id,
  content: String(result),
});

// Configure the second request data.
// This includes the model ID to use and the updated messages list.
data = {
  model: model,
  messages: messages,
};

// Send another POST request to the AIOS API.
// This request generates a final response based on the tool call result.
const response2 = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});
body = await response2.json();

// Print the message generated by the AI in the second response.
// This is the final answer to the user's question.
console.log(JSON.stringify(body.choices[0].message));
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Define a function to get weather information.
// This function returns the current temperature in Celsius for the provided coordinates.
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get current temperature for provided coordinates in celsius.",
      parameters: {
        type: "object",
        properties: {
          latitude: { type: "number" },
          longitude: { type: "number" },
        },
        required: ["latitude", "longitude"],
        additionalProperties: false,
      },
      strict: true,
    },
  },
];

// Define the user message.
// The user is asking about today's weather in Paris.
const messages = [
  { role: "user", content: "What is the weather like in Paris today?" },
];

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a chat completion using the AIOS model.
// The model parameter specifies the model ID to use,
// the messages parameter is the conversation so far,
// and the tools parameter provides metadata about the tools available to the model.
const response = await client.chat.completions.create({
  model: model,
  messages: messages,
  tools: tools,
});

// Print the tool call information from the response generated by the AI model.
// This information indicates which tool the model should call.
console.log(JSON.stringify(response.choices[0].message.tool_calls));

// Implementation of the weather function; always returns 14 degrees.
function getWeather(latitude, longitude) {
  return "14℃";
}

// Extract tool call information from the first response.
// This retrieves the tool call information requested by the model.
const toolCall = response.choices[0].message.tool_calls[0];

// Parse the tool call arguments in JSON format.
// This extracts the parameters needed for the tool call.
const args = JSON.parse(toolCall.function.arguments);

// Call the actual function to get the result. (e.g., "14℃")
// At this step, the actual weather information lookup logic is executed.
const result = getWeather(args.latitude, args.longitude);

// Add the model's tool call message to messages to maintain the conversation context.
messages.push(response.choices[0].message);

// Add the result of calling the actual function as a "tool" message.
// This allows the model to generate a final response based on the tool call result.
messages.push({
  role: "tool",
  tool_call_id: toolCall.id,
  content: String(result),
});

// Generate a second chat completion with the updated messages list.
// This request generates a final response based on the tool call result.
const response2 = await client.chat.completions.create({
  model: model,
  messages: messages,
});

// Print the message generated by the AI in the second response.
// This is the final answer to the user's question.
console.log(JSON.stringify(response2.choices[0].message));
import { HumanMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Define a tool function to get weather information.
// This function returns the current temperature in Celsius for the provided coordinates.
const getWeather = tool(
  function (latitude, longitude) {
    /**
     * Get current temperature for provided coordinates in celsius.
     */
    return "14℃";
  },
  {
    name: "get_weather",
    description: "Get current temperature for provided coordinates in celsius.",
    schema: z.object({
      latitude: z.number(),
      longitude: z.number(),
    }),
  }
);

// Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
// baseURL points to the v1 endpoint of the AIOS API,
// and apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// The model parameter specifies the model ID to use.
const llm = new ChatOpenAI({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Bind tools to the model.
// The getWeather function returns the current temperature in Celsius for the provided coordinates.
const llmWithTools = llm.bindTools([getWeather]);

// Configure the list of chat messages.
// The user is asking about today's weather in Paris.
const messages = [new HumanMessage("What is the weather like in Paris today?")];

// Pass the list of messages to the chat LLM to get a response.
// This request instructs the model to process the user's question and determine the necessary tool calls.
const response = await llmWithTools.invoke(messages);

// Print the tool call information from the response generated by the AI model.
// This information indicates which tool the model should call.
console.log(response.tool_calls);

// Add the model's tool call message to messages to maintain the conversation context.
messages.push(response);

// Call the actual tool function to get the result.
// At this step, the getWeather function is executed to return weather information.
const toolCall = response.tool_calls[0];
const toolMessage = await getWeather.invoke(toolCall);

// Add the tool call result to messages.
// This allows the model to generate a final response based on the tool call result.
messages.push(toolMessage);

// Perform a second request to get the final answer.
// Now the model generates an appropriate response based on the tool call result.
const response2 = await llm.invoke(messages);

// Print the final AI model response.
console.log(response2.content);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Message defines a chat message.
// Role: message role (user, assistant, tool, etc.)
// Content: message content
// ToolCalls: tool call information
// ToolCallId: tool call identifier
type Message struct {
	Role       string           `json:"role"`
	Content    string           `json:"content,omitempty"`
	ToolCalls  []map[string]any `json:"tool_calls,omitempty"`
	ToolCallId string           `json:"tool_call_id,omitempty"`
}

// PostData defines the POST request body.
// Model: model ID to use
// Messages: list of messages
// Tools: list of available tools
// Stream: whether to stream
type PostData struct {
	Model    string           `json:"model"`
	Messages []Message        `json:"messages"`
	Tools    []map[string]any `json:"tools,omitempty"`
	Stream   bool             `json:"stream,omitempty"`
}

// getWeather returns weather information.
// This sample implementation always returns 14 degrees.
func getWeather(latitude float32, longitude float32) string {
	_ = fmt.Sprintf("latitude: %f, longitude: %f", latitude, longitude)
	return "14℃"
}

func main() {
	// Define the user message.
	// The user is asking about today's weather in Paris.
	messages := []Message{
		{
			Role:    "user",
			Content: "What is the weather like in Paris today?",
		},
	}

	// Define a function to get weather information.
	// This tool returns the current temperature in Celsius for the provided coordinates.
	tools := []map[string]any{
		{
			"type": "function",
			"function": map[string]any{
				"name":        "get_weather",
				"description": "Get current temperature for provided coordinates in celsius.",
				"parameters": map[string]any{
					"type": "object",
					"properties": map[string]any{
						"latitude":  map[string]string{"type": "number"},
						"longitude": map[string]string{"type": "number"},
					},
					"required":             []string{"latitude", "longitude"},
					"additionalProperties": false,
				},
				"strict": true,
			},
		},
	}

	// Configure the request data.
	// This includes the model ID to use, the messages list, and the tools list.
	data := PostData{
		Model:    model,
		Messages: messages,
		Tools:    tools,
	}

	// Serialize the request data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send a POST request to the AIOS API's v1/chat/completions endpoint.
	// This request instructs the model to process the user's question
	// and determine the necessary tool calls.
	response, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()

	// Read the response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}

	// Parse the response body to map format.
	var v map[string]interface{}
	json.Unmarshal(body, &v)

	// Extract message information from the first response.
	choices := v["choices"].([]interface{})
	choice := choices[0].(map[string]interface{})
	message, err := json.MarshalIndent(choice["message"], "", " ")
	if err != nil {
		panic(err)
	}
	messageData := choice["message"].(map[string]interface{})
	toolCalls := messageData["tool_calls"].([]interface{})

	// Print the tool call information from the response generated by the AI model.
	// This information indicates which tool the model should call.
	toolCallJson, err := json.MarshalIndent(toolCalls, "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(toolCallJson))

	// Extract the tool call information requested by the model from the first response.
	toolCall := toolCalls[0].(map[string]interface{})
	function := toolCall["function"].(map[string]interface{})

	// Parse the tool call arguments from JSON string format to map format.
	// This extracts the parameters needed for the tool call.
	var args map[string]float32
	err = json.Unmarshal([]byte(function["arguments"].(string)), &args)
	if err != nil {
		panic(err)
	}

	// Call the actual function to get the result. (e.g., "14℃")
	// At this step, the actual weather information lookup logic is executed.
	result := getWeather(args["latitude"], args["longitude"])

	// Convert the model's tool call message back into a Message and add it to
	// messages to maintain the conversation context.
	var toolMessage Message
	err = json.Unmarshal(message, &toolMessage)
	if err != nil {
		panic(err)
	}
	messages = append(messages, toolMessage)

	// Add the result of calling the actual function as a "tool" message.
	// This allows the model to generate a final response based on the tool call result.
	messages = append(messages, Message{
		Role:       "tool",
		ToolCallId: toolCall["id"].(string),
		Content:    result,
	})

	// Configure the second request data with the updated messages list.
	data = PostData{
		Model:    model,
		Messages: messages,
	}
	jsonData, err = json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send another POST request to the AIOS API.
	// This request generates a final response based on the tool call result.
	response2, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response2.Body.Close()

	// Read and parse the second response body.
	body, err = io.ReadAll(response2.Body)
	if err != nil {
		panic(err)
	}
	json.Unmarshal(body, &v)

	// Print the message generated by the AI in the second response.
	// This is the final answer to the user's question.
	choices = v["choices"].([]interface{})
	choice = choices[0].(map[string]interface{})
	message, err = json.MarshalIndent(choice["message"], "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(message))
}
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the Message structure.
// Role: Message role (user, assistant, tool, etc.)
// Content: Message content
// ToolCalls: Tool call information
// ToolCallId: Tool call identifier
type Message struct {
	Role       string           `json:"role"`
	Content    string           `json:"content,omitempty"`
	ToolCalls  []map[string]any `json:"tool_calls,omitempty"`
	ToolCallId string           `json:"tool_call_id,omitempty"`
}

// Define the POST request data structure.
// Model: Model ID to use
// Messages: List of messages
// Tools: List of available tools
// Stream: Whether to stream
type PostData struct {
	Model    string           `json:"model"`
	Messages []Message        `json:"messages"`
	Tools    []map[string]any `json:"tools,omitempty"`
	Stream   bool             `json:"stream,omitempty"`
}

// Define a function to get weather information.
// This function always returns 14 degrees (sample implementation).
func getWeather(latitude float32, longitude float32) string {
	_ = fmt.Sprintf("latitude: %f, longitude: %f", latitude, longitude)
	return "14℃"
}

func main() {
	// Define the user message.
	// The user is asking about today's weather in Paris.
	messages := []Message{
		{
			Role:    "user",
			Content: "What is the weather like in Paris today?",
		},
	}
	// Define a tool to get weather information.
	// This tool returns the current temperature in Celsius for the provided coordinates.
	tools := []map[string]any{
		{
			"type": "function",
			"function": map[string]any{
				"name":        "get_weather",
				"description": "Get current temperature for provided coordinates in celsius.",
				"parameters": map[string]any{
					"type": "object",
					"properties": map[string]any{
						"latitude":  map[string]string{"type": "number"},
						"longitude": map[string]string{"type": "number"},
					},
					"required":             []string{"latitude", "longitude"},
					"additionalProperties": false,
				},
				"strict": true,
			},
		},
	}
	// Configure the request data.
	// This includes the model ID to use, the messages list, and the tools list.
	data := PostData{
		Model:    model,
		Messages: messages,
		Tools:    tools,
	}
	// Serialize the request data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	// Send a POST request to the AIOS API's v1/chat/completions endpoint.
	// This request instructs the model to process the user's question and determine the necessary tool calls.
	response, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()
	// Read the response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}
	// Parse the response body to map format.
	var v map[string]interface{}
	json.Unmarshal(body, &v)
	// Extract message information from the first response.
	choices := v["choices"].([]interface{})
	choice := choices[0].(map[string]interface{})
	message, err := json.MarshalIndent(choice["message"], "", " ")
	if err != nil {
		panic(err)
	}
	messageData := choice["message"].(map[string]interface{})
	toolCalls := messageData["tool_calls"].([]interface{})
	// Print the tool call information from the response generated by the AI model.
	// This information indicates which tool the model should call.
	toolCallJson, err := json.MarshalIndent(toolCalls, "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(toolCallJson))
	// Extract tool call information from the first response.
	// This retrieves the tool call information requested by the model.
	toolCall := toolCalls[0].(map[string]interface{})
	function := toolCall["function"].(map[string]interface{})
	// Parse the tool call arguments from JSON string format to map format.
	// This extracts the parameters needed for the tool call.
	var args map[string]float32
	err = json.Unmarshal([]byte(function["arguments"].(string)), &args)
	if err != nil {
		panic(err)
	}
	// Call the actual function to get the result. (e.g., "14℃")
	// At this step, the actual weather information lookup logic is executed.
	result := getWeather(args["latitude"], args["longitude"])
	// Convert the model's tool calling message back into a Message value.
	var toolMessage Message
	err = json.Unmarshal(message, &toolMessage)
	if err != nil {
		panic(err)
	}
	// Add the model's tool call message to messages to maintain the conversation context.
	messages = append(messages, toolMessage)
	// Add the result of calling the actual function to messages.
	// This allows the model to generate a final response based on the tool call result.
	messages = append(messages, Message{
		Role:       "tool",
		ToolCallId: toolCall["id"].(string),
		Content:    result,
	})
	// Configure the second request data.
	// This includes the model ID to use and the updated messages list.
	// This request generates a final response based on the tool call result.
	data = PostData{
		Model:    model,
		Messages: messages,
	}
	jsonData, err = json.Marshal(data)
	if err != nil {
		panic(err)
	}
	// Send another POST request to the AIOS API.
	// This request generates a final response based on the tool call result.
	response2, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response2.Body.Close()
	// Read the second response body.
	body, err = io.ReadAll(response2.Body)
	if err != nil {
		panic(err)
	}
	// Parse the second response to JSON format.
	json.Unmarshal(body, &v)
	// Print the message generated by the AI in the second response.
	// This is the final answer to the user's question.
	choices = v["choices"].([]interface{})
	choice = choices[0].(map[string]interface{})
	message, err = json.MarshalIndent(choice["message"], "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(message))
}
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define a function to get weather information.
// This function always returns 14 degrees (sample implementation).
func getWeather(latitude float32, longitude float32) string {
	_ = fmt.Sprintf("latitude: %f, longitude: %f", latitude, longitude)
	return "14℃"
}

func main() {
	// Create an OpenAI client.
	// base_url points to the v1 endpoint of the AIOS API.
	client := openai.NewClient(
		option.WithBaseURL(aiosBaseUrl + "/v1"),
	)
	// Define the user message.
	// The user is asking about today's weather in Paris.
	messages := []openai.ChatCompletionMessageParamUnion{
		openai.UserMessage("What is the weather like in Paris today?"),
	}
	// Generate a chat completion using the AIOS model.
	// The model parameter specifies the model ID to use.
	// The messages parameter is a list of messages containing the user message.
	// The tools parameter provides metadata about the tools available to the model.
	response, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
		Model:    model,
		Messages: messages,
		Tools: []openai.ChatCompletionToolParam{
			{
				Function: openai.FunctionDefinitionParam{
					Name:        "get_weather",
					Description: openai.String("Get current temperature for provided coordinates in celsius."),
					Parameters: openai.FunctionParameters{
						"type": "object",
						"properties": map[string]interface{}{
							"latitude":  map[string]string{"type": "number"},
							"longitude": map[string]string{"type": "number"},
						},
						"required":             []string{"latitude", "longitude"},
						"additionalProperties": false,
					},
					Strict: openai.Bool(true),
				},
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// Print the response generated by the AI model.
	// This response includes tool call information.
	fmt.Println(response.Choices[0].Message.ToolCalls[0].RawJSON())
	// Extract tool call information from the first response.
	// This retrieves the tool call information requested by the model.
	toolCall := response.Choices[0].Message.ToolCalls[0]
	args := toolCall.Function.Arguments
	// Parse the tool call arguments from JSON string format to map format.
	// This extracts the parameters needed for the tool call.
	var v map[string]float32
	err = json.Unmarshal([]byte(args), &v)
	if err != nil {
		panic(err)
	}
	// Call the actual function to get the result. (e.g., "14℃")
	// At this step, the actual weather information lookup logic is executed.
	result := getWeather(v["latitude"], v["longitude"])
	// Add the function call result as a tool message to the conversation context and call the model again;
	// the model then generates an appropriate response using the function call result.
	// Add the model's tool call message to messages to maintain the conversation context.
	messages = append(messages, response.Choices[0].Message.ToParam())
	// Add the result of calling the actual function to messages.
	// This allows the model to generate a final response based on the tool call result.
	messages = append(messages, openai.ToolMessage(result, toolCall.ID))
	// Generate a second chat completion.
	// This includes the model ID to use and the updated messages list.
	// This request generates a final response based on the tool call result.
	response2, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
		Model:    model,
		Messages: messages,
	})
	if err != nil {
		panic(err)
	}
	// Print the message generated by the AI in the second response.
	// This is the final answer to the user's question.
	fmt.Println(response2.Choices[0].Message.RawJSON())
}
Code block. tool call request
Response
In the first response, message.tool_calls in choices shows how the model decided to invoke the tool.
In the function field of tool_calls, you can check that the get_weather function is used and which arguments are passed to execute it.
The second request includes three messages in the messages list:
The initial user message
The tool calling message generated by the first model
The tool message containing the result of executing the get_weather tool
In the second response, the model generates a final response using all the content of the above messages.
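The three-message structure described above can be sketched as plain data. This is a minimal sketch in Python; the tool call id and arguments are hypothetical placeholders for values that would come from the first response:

```python
import json

# Hypothetical values standing in for data returned by the first response.
tool_call_id = "call_abc123"
tool_args = {"latitude": 48.85, "longitude": 2.35}

messages = [
    # 1. The initial user message.
    {"role": "user", "content": "What is the weather like in Paris today?"},
    # 2. The tool calling message generated by the first model response.
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": tool_call_id,
            "type": "function",
            "function": {"name": "get_weather",
                         "arguments": json.dumps(tool_args)},
        }],
    },
    # 3. The tool message containing the result of executing the get_weather tool.
    {"role": "tool", "tool_call_id": tool_call_id, "content": "14℃"},
]

print([m["role"] for m in messages])  # → ['user', 'assistant', 'tool']
```

The tool message is linked back to the model's request through tool_call_id, which must match the id of the corresponding entry in tool_calls.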
{'content': 'The current weather in Paris is 14℃.',
 'refusal': None,
 'role': 'assistant',
 'annotations': None,
 'audio': None,
 'function_call': None,
 'tool_calls': [],
 'reasoning_content': 'We have user asking weather in Paris today. We called '
                      'get_weather function with coordinates and got "14℃" as '
                      'comment. We need to respond. Should incorporate info '
                      'and maybe note we are using approximate. Provide '
                      'answer.'}
reasoning
Request
For models that support reasoning, you can check the reasoning value as follows.
Warning
Models that support reasoning generate many tokens during the reasoning process, so answer generation may take significantly longer.
import json

import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the request data.
# In this example, the user asks which of two numbers is greater.
# "Think step by step" is a prompt that encourages the model to think through logical steps.
data = {
    "model": model,
    "messages": [
        {"role": "user", "content": "Think step by step. 9.11 and 9.8, which is greater?"}
    ]
}

# Send a POST request to the AIOS API's v1/chat/completions endpoint.
# This request instructs the model to process the user's question.
response = requests.post(urljoin(aios_base_url, "v1/chat/completions"), json=data)
body = json.loads(response.text)

# Print the response generated by the AI model.
print(body["choices"][0]["message"])
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# and api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Generate a chat completion using the AIOS model.
# The model parameter specifies the model ID to use.
# The messages parameter is a list of messages containing the user message.
# "Think step by step" is a prompt that encourages the model to think through logical steps.
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": "Think step by step. 9.11 and 9.8, which is greater?"}
    ],
)

# Print the response generated by the AI model.
print(response.choices[0].message.model_dump())
from langchain_openai import ChatOpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# and api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# The model parameter specifies the model ID to use.
chat_llm = ChatOpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

# Configure the list of chat messages.
# The user is asking which of two numbers is greater.
# "Think step by step" is a prompt that encourages the model to think through logical steps.
messages = [
    ("human", "Think step by step. 9.11 and 9.8, which is greater?"),
]

# Pass the list of messages to the chat LLM to get a response.
# The invoke method returns the model's output.
chat_completion = chat_llm.invoke(messages)

# Print the response generated by the AI model.
print(chat_completion.model_dump())
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the request data.
// In this example, the user asks which of two numbers is greater.
// "Think step by step" is a prompt that encourages the model to think through logical steps.
const data = {
  model: model,
  messages: [
    {
      role: "user",
      content: "Think step by step. 9.11 and 9.8, which is greater?",
    },
  ],
};

// Generate the AIOS API's v1/chat/completions endpoint URL.
let url = new URL("/v1/chat/completions", aios_base_url);

// Send a POST request to the AIOS API.
// This request instructs the model to process the user's question.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});
const body = await response.json();

// Print the response generated by the AI model.
console.log(body.choices[0].message);
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a chat completion using the AIOS model.
// The model parameter specifies the model ID to use.
// The messages parameter is a list of messages containing the user message.
// "Think step by step" is a prompt that encourages the model to think through logical steps.
const response = await client.chat.completions.create({
  model: model,
  messages: [
    {
      role: "user",
      content: "Think step by step. 9.11 and 9.8, which is greater?",
    },
  ],
});

// Print the response generated by the AI model.
console.log(response.choices[0].message);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the Message structure.
// Role: Message role (user, assistant, etc.)
// Content: Message content
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// Define the POST request data structure.
// Model: Model ID to use
// Messages: List of messages
// Stream: Whether to stream
type PostData struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
	Stream   bool      `json:"stream,omitempty"`
}

func main() {
	// Configure the request data.
	// In this example, the user asks which of two numbers is greater.
	// "Think step by step" is a prompt that encourages the model to think through logical steps.
	data := PostData{
		Model: model,
		Messages: []Message{
			{
				Role:    "user",
				Content: "Think step by step. 9.11 and 9.8, which is greater?",
			},
		},
	}
	// Serialize the request data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	// Send a POST request to the AIOS API's v1/chat/completions endpoint.
	response, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()
	// Read the response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}
	// Parse the response body to JSON format.
	// This converts the model's response received from the server into structured data.
	var v map[string]interface{}
	json.Unmarshal(body, &v)
	choices := v["choices"].([]interface{})
	choice := choices[0].(map[string]interface{})
	message, err := json.MarshalIndent(choice["message"], "", " ")
	if err != nil {
		panic(err)
	}
	// Print the response generated by the AI model.
	fmt.Println(string(message))
}
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

func main() {
	// Create an OpenAI client.
	// base_url points to the v1 endpoint of the AIOS API.
	client := openai.NewClient(
		option.WithBaseURL(aiosBaseUrl + "/v1"),
	)
	// Generate a chat completion using the AIOS model.
	// The model parameter specifies the model ID to use.
	// The messages parameter is a list of messages containing the user message.
	// "Think step by step" is a prompt that encourages the model to think through logical steps.
	response, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
		Model: model,
		Messages: []openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("Think step by step. 9.11 and 9.8, which is greater?"),
		},
	})
	if err != nil {
		panic(err)
	}
	// Print the response generated by the AI model.
	fmt.Println(response.Choices[0].Message.RawJSON())
}
Code block. reasoning request
Response
If you check the message field of choices, you can see reasoning_content in addition to content.
reasoning_content contains the tokens generated during the reasoning phase, before the final answer is produced.
{'annotations': None,
 'audio': None,
 'content': 'Sure! Let’s compare the two numbers step by step.\n'
            '\n'
            '1. **Identify the numbers** \n'
            ' - First number: **9.11** \n'
            ' - Second number: **9.8**\n'
            '\n'
            '2. **Look at the whole-number part** \n'
            ' Both numbers have the same whole-number part, **9**. So the '
            'comparison will depend on the decimal part.\n'
            '\n'
            '3. **Compare the decimal parts** \n'
            ' - Decimal part of 9.11 = **0.11** \n'
            ' - Decimal part of 9.8 = **0.80** (since 9.8 = 9.80)\n'
            '\n'
            '4. **Determine which decimal part is larger** \n'
            ' - 0.80 is greater than 0.11.\n'
            '\n'
            '5. **Conclude** \n'
            ' Because the whole-number parts are equal and the decimal part '
            'of 9.8 is larger, **9.8 is greater than 9.11**.',
 'function_call': None,
 'reasoning_content': 'User asks: "Think step by step. 9.11 and 9.8, which is '
                      'greater?" We need to compare numbers 9.11 and 9.8. '
                      'Value: 9.11 < 9.8, so 9.8 is greater. Provide '
                      'step-by-step reasoning. No policy conflict.',
 'refusal': None,
 'role': 'assistant',
 'tool_calls': []}
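The two fields can be read separately once the response body is parsed. Below is a minimal sketch in plain Python; the hard-coded response_body dict is a shortened stand-in for an actual API response, with placeholder strings rather than real model output.

```python
# A shortened stand-in for a parsed chat completion response.
# The field layout mirrors the sample response above; the strings are placeholders.
response_body = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "9.8 is greater than 9.11.",
                "reasoning_content": "Compare decimal parts: 0.80 > 0.11, so 9.8 > 9.11.",
            }
        }
    ]
}

message = response_body["choices"][0]["message"]

# reasoning_content holds the tokens from the reasoning phase;
# content holds the final answer. Models without a reasoning phase
# omit the field, so .get() is used to avoid a KeyError.
print("Reasoning:", message.get("reasoning_content"))
print("Answer:", message["content"])
```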
Image to Text
For models that support vision, you can input images as follows.
Fig. Input image
Warning
Input images for vision-capable models are subject to size and quantity limits.
For information about image input limits, please refer to Provided Models.
Request
You can input images in base64-encoded data URL format with MIME type.
import base64
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.
image_path = "image/path.jpg"

# Define a function to Base64 encode an image.
# This converts the image to text format so it can be transmitted to the API.
def encode_image(image_path: str):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Encode the image in Base64 format.
base64_image = encode_image(image_path)

# Configure the request data.
# In this example, the user asks a question about the image.
# The image is transmitted as a Base64-encoded string.
data = {
    "model": model,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "what's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}",
                    },
                },
            ]
        },
    ]
}

# Send a POST request to the AIOS API's v1/chat/completions endpoint.
# This request asks the model to analyze the image.
response = requests.post(urljoin(aios_base_url, "v1/chat/completions"), json=data)
body = json.loads(response.text)

# Print the response generated by the AI model.
# This response is the model's description of the image content.
print(body["choices"][0]["message"])
import base64
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url points to the v1 endpoint of the AIOS API,
# and api_key is the key required by AIOS, typically set to "EMPTY_KEY".
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

image_path = "image/path.jpg"

# Define a function to Base64 encode an image.
# This converts the image to text format so it can be transmitted to the API.
def encode_image(image_path: str):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Encode the image in Base64 format.
base64_image = encode_image(image_path)

# Generate a chat completion using the AIOS model.
# The model parameter specifies the model ID to use.
# The messages parameter is a list of messages containing the user message.
# In this example, the user asks a question about the image.
# The image is transmitted as a Base64-encoded string.
response = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "what's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}",
                    },
                },
            ]
        },
    ],
)

# Print the response generated by the AI model.
# This response is the model's description of the image content.
print(response.choices[0].message.model_dump())
import base64
from langchain_openai import ChatOpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
# base_url points to the v1 endpoint of the AIOS API,
# and api_key is the key required by AIOS, typically set to "EMPTY_KEY".
# The model parameter specifies the model ID to use.
chat_llm = ChatOpenAI(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

image_path = "image/path.jpg"

# Define a function to Base64 encode an image.
# This converts the image to text format so it can be transmitted to the API.
def encode_image(image_path: str):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Encode the image in Base64 format.
base64_image = encode_image(image_path)

# Configure the list of chat messages.
# In this example, the user asks a question about the image.
# The image is transmitted as a Base64-encoded string.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "what's in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": f"data:image/jpeg;base64,{base64_image}",
                },
            },
        ]
    },
]

# Pass the list of messages to the chat LLM to get a response.
# The invoke method returns the model's output.
# This request asks the model to analyze the image.
chat_completion = chat_llm.invoke(messages)

# Print the response generated by the AI model.
# This response is the model's description of the image content.
print(chat_completion.model_dump())
import { readFile } from "fs/promises";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.
const imagePath = "image/path.jpg";

// Define a function to convert an image file to Base64.
// This converts the image to text format so it can be transmitted to the API.
async function imageFileToBase64(imagePath) {
  // Read file contents as buffer
  const fileBuffer = await readFile(imagePath);
  // Convert buffer to Base64 string
  return fileBuffer.toString("base64");
}

// Convert the image file to Base64 format.
const base64Image = await imageFileToBase64(imagePath);

// Configure the request data.
// In this example, the user asks a question about the image.
// The image is transmitted as a Base64-encoded string.
const data = {
  model: model,
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "what's in this image?" },
        {
          type: "image_url",
          image_url: {
            url: `data:image/jpeg;base64,${base64Image}`,
          },
        },
      ],
    },
  ],
};

// Generate the AIOS API's v1/chat/completions endpoint URL.
let url = new URL("/v1/chat/completions", aios_base_url);

// Send a POST request to the AIOS API.
// This request asks the model to analyze the image.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});
const body = await response.json();

// Print the response generated by the AI model.
// This response is the model's description of the image content.
console.log(body.choices[0].message);
import OpenAI from "openai";
import { readFile } from "fs/promises";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.
const imagePath = "image/path.jpg";

// Define a function to convert an image file to Base64.
// This converts the image to text format so it can be transmitted to the API.
async function imageFileToBase64(imagePath) {
  // Read file contents as buffer
  const fileBuffer = await readFile(imagePath);
  // Convert buffer to Base64 string
  return fileBuffer.toString("base64");
}

// Convert the image file to Base64 format.
const base64Image = await imageFileToBase64(imagePath);

// Create an OpenAI client.
// apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// baseURL points to the v1 endpoint of the AIOS API.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Generate a chat completion using the AIOS model.
// The model parameter specifies the model ID to use.
// The messages parameter is a list of messages containing the user message.
// In this example, the user asks a question about the image.
// The image is transmitted as a Base64-encoded string.
const response = await client.chat.completions.create({
  model: model,
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "what's in this image?" },
        {
          type: "image_url",
          image_url: {
            url: `data:image/jpeg;base64,${base64Image}`,
          },
        },
      ],
    },
  ],
});

// Print the response generated by the AI model.
// This response is the model's description of the image content.
console.log(response.choices[0].message);
import { HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";
import { readFile } from "fs/promises";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.
const imagePath = "image/path.jpg";

// Define a function to convert an image file to Base64.
// This converts the image to text format so it can be transmitted to the API.
async function imageFileToBase64(imagePath) {
  // Read file contents as buffer
  const fileBuffer = await readFile(imagePath);
  // Convert buffer to Base64 string
  return fileBuffer.toString("base64");
}

// Convert the image file to Base64 format.
const base64Image = await imageFileToBase64(imagePath);

// Create a chat LLM (Large Language Model) instance using LangChain's ChatOpenAI class.
// baseURL points to the v1 endpoint of the AIOS API,
// and apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// The model parameter specifies the model ID to use.
const llm = new ChatOpenAI({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Configure the list of chat messages.
// In this example, the user asks a question about the image.
// The image is transmitted as a Base64-encoded string.
const messages = [
  new HumanMessage({
    content: [
      { type: "text", text: "what's in this image?" },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${base64Image}`,
        },
      },
    ],
  }),
];

// Pass the list of messages to the chat LLM to get a response.
// The invoke method returns the model's output.
// This request asks the model to analyze the image.
const response = await llm.invoke(messages);

// Print the response generated by the AI model.
// This response is the model's description of the image content.
console.log(response.content);
package main

import (
    "bytes"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
)

const (
    aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
    model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

var imagePath = "image/path.jpg"

// Define the Message structure.
// Role: Message role (user, assistant, etc.)
// Content: Message content (including text and image URL)
type Message struct {
    Role    string                   `json:"role"`
    Content []map[string]interface{} `json:"content"`
}

// Define the POST request data structure.
// Model: Model ID to use
// Messages: List of messages
// Stream: Whether to stream
type PostData struct {
    Model    string    `json:"model"`
    Messages []Message `json:"messages"`
    Stream   bool      `json:"stream,omitempty"`
}

// Define a function to Base64 encode an image file.
// This converts the image to text format so it can be transmitted to the API.
func imageFileToBase64(imagePath string) (string, error) {
    data, err := os.ReadFile(imagePath)
    if err != nil {
        return "", err
    }
    return base64.StdEncoding.EncodeToString(data), nil
}

func main() {
    // Encode the image file in Base64 format.
    base64Image, err := imageFileToBase64(imagePath)
    if err != nil {
        panic(err)
    }

    // Configure the request data.
    // In this example, the user asks a question about the image.
    // The image is transmitted as a Base64-encoded string.
    data := PostData{
        Model: model,
        Messages: []Message{
            {
                Role: "user",
                Content: []map[string]interface{}{
                    {
                        "type": "text",
                        "text": "what's in this image?",
                    },
                    {
                        "type": "image_url",
                        "image_url": map[string]string{
                            "url": fmt.Sprintf("data:image/jpeg;base64,%s", base64Image),
                        },
                    },
                },
            },
        },
    }

    // Serialize the request data to JSON format.
    jsonData, err := json.Marshal(data)
    if err != nil {
        panic(err)
    }

    // Send a POST request to the AIOS API's v1/chat/completions endpoint.
    // This request asks the model to analyze the image.
    response, err := http.Post(aiosBaseUrl+"/v1/chat/completions", "application/json", bytes.NewBuffer(jsonData))
    if err != nil {
        panic(err)
    }
    defer response.Body.Close()

    // Read the response body.
    body, err := io.ReadAll(response.Body)
    if err != nil {
        panic(err)
    }

    // Parse the response body to JSON format.
    // This converts the model's response received from the server into structured data.
    var v map[string]interface{}
    json.Unmarshal(body, &v)

    // Print the response generated by the AI model.
    // This response is the model's description of the image content.
    choices := v["choices"].([]interface{})
    choice := choices[0].(map[string]interface{})
    message, err := json.MarshalIndent(choice["message"], "", " ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(message))
}
package main

import (
    "context"
    "encoding/base64"
    "fmt"
    "os"

    "github.com/openai/openai-go"
    "github.com/openai/openai-go/option"
)

const (
    aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
    model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

var imagePath = "image/path.jpg"

// Define a function to Base64 encode an image file.
// This converts the image to text format so it can be transmitted to the API.
func imageFileToBase64(imagePath string) (string, error) {
    data, err := os.ReadFile(imagePath)
    if err != nil {
        return "", err
    }
    return base64.StdEncoding.EncodeToString(data), nil
}

func main() {
    // Encode the image file in Base64 format.
    base64Image, err := imageFileToBase64(imagePath)
    if err != nil {
        panic(err)
    }

    // Create an OpenAI client.
    // base_url points to the v1 endpoint of the AIOS API.
    client := openai.NewClient(
        option.WithBaseURL(aiosBaseUrl + "/v1"),
    )

    // Generate a chat completion using the AIOS model.
    // The model parameter specifies the model ID to use.
    // The messages parameter is a list of messages containing the user message.
    // In this example, the user asks a question about the image.
    // The image is transmitted as a Base64-encoded string.
    response, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
        Model: model,
        Messages: []openai.ChatCompletionMessageParamUnion{
            openai.UserMessage([]openai.ChatCompletionContentPartUnionParam{
                {
                    OfText: &openai.ChatCompletionContentPartTextParam{
                        Text: "what's in this image?",
                    },
                },
                {
                    OfImageURL: &openai.ChatCompletionContentPartImageParam{
                        ImageURL: openai.ChatCompletionContentPartImageImageURLParam{
                            URL: fmt.Sprintf("data:image/jpeg;base64,%s", base64Image),
                        },
                    },
                },
            }),
        },
    })
    if err != nil {
        panic(err)
    }

    // Print the response generated by the AI model.
    // This response is the model's description of the image content.
    fmt.Println(response.Choices[0].Message.RawJSON())
}
Code block. vision request
Response
The image is analyzed and text is generated as follows:
{'annotations': None,
 'audio': None,
 'content': "Here's what's in the image:\n"
            '\n'
            '* **A Golden Retriever puppy:** The main focus is a cute, '
            'fluffy golden retriever puppy lying on a patch of grass.\n'
            '* **A bone:** The puppy is chewing on a pink bone.\n'
            '* **Green grass:** The puppy is lying on a vibrant green lawn.\n'
            '* **Background:** There’s a bit of foliage and some elements of '
            'a garden or yard in the background, including a small shed and '
            'some plants.\n'
            '\n'
            "It's a really heartwarming image!",
 'function_call': None,
 'reasoning_content': None,
 'refusal': None,
 'role': 'assistant',
 'tool_calls': []}
Embeddings API
The Embeddings API converts input text into high-dimensional vectors of a specified dimension.
The generated vectors can be used for various natural language processing tasks such as text similarity, clustering, and search.
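As a minimal sketch of one such task, the snippet below computes the cosine similarity between two vectors using only the Python standard library; the short toy vectors stand in for the much higher-dimensional vectors the API actually returns.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by the API.
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.6, 1.0]   # v1 scaled by 2: same direction, so similarity is 1.0
v3 = [0.5, -0.3, 0.1]  # a different direction, so similarity is much lower

print(round(cosine_similarity(v1, v2), 6))  # 1.0
print(round(cosine_similarity(v1, v3), 6))
```

Texts with similar meaning produce embeddings with higher cosine similarity, which is the basis of semantic search and clustering over the vectors this API returns.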
Request
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the data to pass to the model.
data = {
    "model": model,
    "input": "What is the capital of France?"
}

# Send a POST request to AIOS's /v1/embeddings API endpoint.
response = requests.post(urljoin(aios_base_url, "v1/embeddings"), json=data)
body = json.loads(response.text)

# Print the generated embedding vector from the response.
print(body["data"][0]["embedding"])
from openai import OpenAI
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an OpenAI client.
# base_url specifies the AIOS API endpoint,
# and api_key is set to a dummy value ("EMPTY_KEY").
client = OpenAI(base_url=urljoin(aios_base_url, "v1"), api_key="EMPTY_KEY")

# Call the OpenAI client's embeddings.create method to generate an embedding.
# Pass the input text and model ID to generate an embedding vector.
response = client.embeddings.create(
    input="What is the capital of France?",
    model=model
)

# Print the generated embedding vector.
print(response.data[0].embedding)
from langchain_together import TogetherEmbeddings
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create an embedding instance using the TogetherEmbeddings class.
# base_url specifies the AIOS API endpoint,
# api_key is set to a dummy value ("EMPTY_KEY").
# model specifies the embedding model to use.
embeddings = TogetherEmbeddings(
    base_url=urljoin(aios_base_url, "v1"),
    api_key="EMPTY_KEY",
    model=model
)

# Generate an embedding vector for the input text.
# The embed_query method generates an embedding for a single sentence.
embedding = embeddings.embed_query("What is the capital of France?")

# Print the generated embedding vector.
print(embedding)
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the data to pass to the model.
const data = {
  model: model,
  input: "What is the capital of France?",
};

// Generate the AIOS API's v1/embeddings endpoint URL.
let url = new URL("/v1/embeddings", aios_base_url);

// Send a POST request to AIOS's embeddings API endpoint.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});
const body = await response.json();

// Print the generated embedding vector from the response.
console.log(body.data[0].embedding);
import OpenAI from "openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an OpenAI client.
// apiKey is set to a dummy value ("EMPTY_KEY"),
// and baseURL specifies the AIOS API endpoint.
const client = new OpenAI({
  apiKey: "EMPTY_KEY",
  baseURL: new URL("v1", aios_base_url).href,
});

// Call the OpenAI client's embeddings.create method to generate an embedding.
// Pass the input text and model ID to generate an embedding vector.
const response = await client.embeddings.create({
  model: model,
  input: "What is the capital of France?",
});

// Print the generated embedding vector.
console.log(response.data[0].embedding);
import { OpenAIEmbeddings } from "@langchain/openai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create an embedding instance using LangChain's OpenAIEmbeddings class.
// baseURL points to the v1 endpoint of the AIOS API,
// and apiKey is the key required by AIOS, typically set to "EMPTY_KEY".
// The model parameter specifies the model ID to use.
const embeddings = new OpenAIEmbeddings({
  model: model,
  apiKey: "EMPTY_KEY",
  configuration: {
    baseURL: new URL("v1", aios_base_url).href,
  },
});

// Generate an embedding vector for the input text.
// The embedQuery method generates an embedding for a single sentence.
const response = await embeddings.embedQuery("What is the capital of France?");

// Print the generated embedding vector.
console.log(response);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the data structure for the POST request.
// Model: Model ID to use
// Input: Input text to generate embedding for
type PostData struct {
	Model string `json:"model"`
	Input string `json:"input"`
}

func main() {
	// Create the request data.
	data := PostData{
		Model: model,
		Input: "What is the capital of France?",
	}

	// Marshal the data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send a POST request to the AIOS API's v1/embeddings endpoint.
	response, err := http.Post(aiosBaseUrl+"/v1/embeddings", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()

	// Read the entire response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}

	// Unmarshal the response body to map format.
	var v map[string]interface{}
	json.Unmarshal(body, &v)
	responseData := v["data"].([]interface{})
	firstData := responseData[0].(map[string]interface{})

	// Print the embedding vector of the first data in JSON format.
	embedding, err := json.MarshalIndent(firstData["embedding"], "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(embedding))
}
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

func main() {
	// Create an OpenAI client.
	// Use option.WithBaseURL to set the v1 endpoint of the AIOS API.
	client := openai.NewClient(
		option.WithBaseURL(aiosBaseUrl + "/v1"),
	)

	// Generate an embedding using the AIOS model.
	// Use openai.EmbeddingNewParams to set the model and input text.
	// The input text is "What is the capital of France?".
	completion, err := client.Embeddings.New(context.TODO(), openai.EmbeddingNewParams{
		Model: model,
		Input: openai.EmbeddingNewParamsInputUnion{
			OfString: openai.String("What is the capital of France?"),
		},
	})
	if err != nil {
		panic(err)
	}

	// Print the generated embedding vector.
	fmt.Println(completion.Data[0].Embedding)
}
Code block. /v1/embeddings request
Note
The aios endpoint-url and model ID information for calling the model are provided in the LLM Endpoint Usage Guide on the resource detail page. Please refer to Using LLM.
Response
The generated vector is returned in the embedding field of each item in data.
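As a sketch of how the returned vectors can be used for the text-similarity task mentioned above, the similarity of two embedding vectors is commonly measured with cosine similarity. The vectors below are short illustrative placeholders, not actual model output; real embedding vectors have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative 4-dimensional vectors standing in for two embeddings.
v1 = [0.1, 0.3, 0.5, 0.2]
v2 = [0.1, 0.2, 0.6, 0.1]

# Values close to 1.0 indicate semantically similar texts.
print(cosine_similarity(v1, v2))
```

The same comparison can rank a set of candidate embeddings against a query embedding for search or clustering.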
Rerank calculates each document's relevance to a query and ranks the documents accordingly.
It helps improve the performance of RAG (Retrieval-Augmented Generation) applications by prioritizing relevant documents.
Request
import json
import requests
from urllib.parse import urljoin

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Configure the request data.
# This includes the model ID to use, query, list of documents, and top N results.
data = {
    "model": model,
    "query": "What is the capital of France?",
    "documents": [
        "The capital of France is Paris.",
        "France capital city is known for the Eiffel Tower.",
        "Paris is located in the north-central part of France."
    ],
    "top_n": 3
}

# Send a POST request to the AIOS API's v2/rerank endpoint.
# Compare the query and document list to rearrange documents with high relevance.
response = requests.post(urljoin(aios_base_url, "v2/rerank"), json=data)
body = json.loads(response.text)

# Print the rearranged results.
# This result is a list of documents sorted by relevance score between the query and documents.
print(body["results"])
import cohere

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create a Cohere client.
# The API key is the key required by AIOS, typically set to "EMPTY_KEY".
# base_url points to the base path of the AIOS API.
client = cohere.ClientV2("EMPTY_KEY", base_url=aios_base_url)

# Define the list of documents.
# These are the documents to search.
docs = [
    "The capital of France is Paris.",
    "France capital city is known for the Eiffel Tower.",
    "Paris is located in the north-central part of France."
]

# Use the AIOS model to rerank documents.
# The model parameter specifies the model ID to use.
# The query parameter is the search query.
# The documents parameter is the list of documents to search.
# The top_n parameter returns the top N results.
response = client.rerank(
    model=model,
    query="What is the capital of France?",
    documents=docs,
    top_n=3,
)

# Print the rearranged results.
# This result is a list of documents sorted by relevance score between the query and documents.
print([result.model_dump() for result in response.results])
from langchain_cohere.rerank import CohereRerank

aios_base_url = "<<aios endpoint-url>>"  # Enter the aios endpoint-url for calling the AIOS model.
model = "<<model>>"  # Enter the model ID for calling the AIOS model.

# Create a reranker instance using the CohereRerank class.
# base_url points to the base path of the AIOS API.
# cohere_api_key is the key required for API requests, typically set to "EMPTY_KEY".
# The model parameter specifies the model ID to use.
rerank = CohereRerank(
    base_url=aios_base_url,
    cohere_api_key="EMPTY_KEY",
    model=model
)

# Define the list of documents.
# These are the documents to rearrange.
docs = [
    "The capital of France is Paris.",
    "France capital city is known for the Eiffel Tower.",
    "Paris is located in the north-central part of France."
]

# Use the reranker to rearrange documents.
# The documents parameter is the list of documents to rearrange.
# The query parameter is the search query.
# The top_n parameter returns the top N results.
ranks = rerank.rerank(
    documents=docs,
    query="What is the capital of France?",
    top_n=3
)

# Print the rearranged results.
# This result is a list of documents sorted by relevance score between the query and documents.
print(ranks)
const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Configure the request data.
// This includes the model ID to use, query, list of documents, and top N results.
const data = {
  model: model,
  query: "What is the capital of France?",
  documents: [
    "The capital of France is Paris.",
    "France capital city is known for the Eiffel Tower.",
    "Paris is located in the north-central part of France.",
  ],
  top_n: 3,
};

// Generate the AIOS API's v2/rerank endpoint URL.
let url = new URL("/v2/rerank", aios_base_url);

// Send a POST request to the AIOS API.
// This endpoint rearranges documents with high relevance by comparing the query and document list.
const response = await fetch(url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
});
const body = await response.json();

// Print the rearranged results.
// This result is a list of documents sorted by relevance score between the query and documents.
console.log(body.results);
import { CohereClientV2 } from "cohere-ai";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create a CohereClientV2 client.
// token is the key required for API requests, typically set to "EMPTY_KEY".
// environment points to the base path of the AIOS API.
const cohere = new CohereClientV2({
  token: "EMPTY_KEY",
  environment: aios_base_url,
});

// Define the list of documents.
// These are the documents to rearrange.
const docs = [
  "The capital of France is Paris.",
  "France capital city is known for the Eiffel Tower.",
  "Paris is located in the north-central part of France.",
];

// Use the AIOS model to rearrange documents.
// The model parameter specifies the model ID to use.
// The query parameter is the search query.
// The documents parameter is the list of documents to rearrange.
// The topN parameter returns the top N results.
const response = await cohere.rerank({
  model: model,
  query: "What is the capital of France?",
  documents: docs,
  topN: 3,
});

// Print the rearranged results.
// This result is a list of documents sorted by relevance score between the query and documents.
console.log(response.results);
import { CohereClientV2 } from "cohere-ai";
import { CohereRerank } from "@langchain/cohere";

const aios_base_url = "<<aios endpoint-url>>"; // Enter the aios endpoint-url for calling the AIOS model.
const model = "<<model>>"; // Enter the model ID for calling the AIOS model.

// Create a CohereClientV2 client.
// token is the key required for API requests, typically set to "EMPTY_KEY".
// environment points to the base path of the AIOS API.
const cohere = new CohereClientV2({
  token: "EMPTY_KEY",
  environment: aios_base_url,
});

// Create a reranker instance using the CohereRerank class.
// The model parameter specifies the model ID to use.
// The client parameter passes the CohereClientV2 instance created above.
const reranker = new CohereRerank({
  model: model,
  client: cohere,
});

// Define the list of documents.
// These are the documents to rearrange.
const docs = [
  "The capital of France is Paris.",
  "France capital city is known for the Eiffel Tower.",
  "Paris is located in the north-central part of France.",
];

// Define the search query.
const query = "What is the capital of France?";

// Use the rerank method of the reranker to rearrange documents.
// The first argument is the list of documents to rearrange.
// The second argument is the search query.
const response = await reranker.rerank(docs, query);

// Print the rearranged results.
// This result is a list of documents sorted by relevance score between the query and documents.
console.log(response);
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

// Define the data structure for the POST request.
// Model: Model ID to use
// Query: Search query
// Documents: List of documents to rearrange
// TopN: Return top N results
type PostData struct {
	Model     string   `json:"model"`
	Query     string   `json:"query"`
	Documents []string `json:"documents"`
	TopN      int32    `json:"top_n"`
}

func main() {
	// Create the request data.
	// The query is "What is the capital of France?",
	// and the document list consists of three sentences.
	// TopN is set to 3 to return the top 3 results.
	data := PostData{
		Model: model,
		Query: "What is the capital of France?",
		Documents: []string{
			"The capital of France is Paris.",
			"France capital city is known for the Eiffel Tower.",
			"Paris is located in the north-central part of France.",
		},
		TopN: 3,
	}

	// Marshal the data to JSON format.
	jsonData, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}

	// Send a POST request to the AIOS API's v2/rerank endpoint.
	// Compare the query and document list to rearrange documents with high relevance.
	response, err := http.Post(aiosBaseUrl+"/v2/rerank", "application/json", bytes.NewBuffer(jsonData))
	if err != nil {
		panic(err)
	}
	defer response.Body.Close()

	// Read the entire response body.
	body, err := io.ReadAll(response.Body)
	if err != nil {
		panic(err)
	}

	// Unmarshal the response body to map format.
	var v map[string]interface{}
	json.Unmarshal(body, &v)

	// Print the rearranged results in JSON format.
	// This result is a list of documents sorted by relevance score between the query and documents.
	rerank, err := json.MarshalIndent(v["results"], "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(rerank))
}
package main

import (
	"context"
	"fmt"

	api "github.com/cohere-ai/cohere-go/v2"
	client "github.com/cohere-ai/cohere-go/v2/client"
)

const (
	aiosBaseUrl = "<<aios endpoint-url>>" // Enter the aios endpoint-url for calling the AIOS model.
	model       = "<<model>>"             // Enter the model ID for calling the AIOS model.
)

func main() {
	// Create a Cohere client.
	// Use WithBaseURL to set the base path of the AIOS API.
	co := client.NewClient(
		client.WithBaseURL(aiosBaseUrl),
	)

	// Define the search query.
	query := "What is the capital of France?"

	// Define the list of documents.
	// These are the documents to rearrange.
	docs := []string{
		"The capital of France is Paris.",
		"France capital city is known for the Eiffel Tower.",
		"Paris is located in the north-central part of France.",
	}

	// Use the AIOS model to rearrange documents.
	// Use &api.V2RerankRequest to set the model, query, and document list.
	resp, err := co.V2.Rerank(
		context.TODO(),
		&api.V2RerankRequest{
			Model:     model,
			Query:     query,
			Documents: docs,
		},
	)
	if err != nil {
		panic(err)
	}

	// Print the rearranged results.
	// This result is a list of documents sorted by relevance score between the query and documents.
	fmt.Println(resp.Results)
}
Code block. /v2/rerank request
Note
The aios endpoint-url and model ID information for calling the model are provided in the LLM Endpoint Usage Guide on the resource detail page. Please refer to Using LLM.
Response
In results, you can check the documents sorted in order of high relevance to the query.
[
  {"document": {"text": "The capital of France is Paris."}, "index": 0, "relevance_score": 0.9999659061431885},
  {"document": {"text": "France capital city is known for the Eiffel Tower."}, "index": 1, "relevance_score": 0.9663000106811523},
  {"document": {"text": "Paris is located in the north-central part of France."}, "index": 2, "relevance_score": 0.7127546668052673}
]
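The ordering above can also be consumed programmatically. The sketch below parses a JSON form of the response and picks the highest-scoring document; the struct names here are illustrative (they mirror the `document.text`, `index`, and `relevance_score` fields shown above) and are not the cohere-go SDK's own types.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"sort"
)

// Illustrative structs mirroring the /v2/rerank response fields shown
// above; they are not the cohere-go SDK's own types.
type rerankDoc struct {
	Text string `json:"text"`
}

type rerankResult struct {
	Document       rerankDoc `json:"document"`
	Index          int       `json:"index"`
	RelevanceScore float64   `json:"relevance_score"`
}

// topDocument parses a JSON rerank response and returns the text of
// the highest-scoring document.
func topDocument(raw []byte) (string, error) {
	var results []rerankResult
	if err := json.Unmarshal(raw, &results); err != nil {
		return "", err
	}
	if len(results) == 0 {
		return "", errors.New("empty rerank response")
	}
	// Results arrive sorted by relevance_score already, but sorting
	// defensively makes that assumption explicit.
	sort.Slice(results, func(i, j int) bool {
		return results[i].RelevanceScore > results[j].RelevanceScore
	})
	return results[0].Document.Text, nil
}

func main() {
	raw := []byte(`[
		{"document": {"text": "The capital of France is Paris."}, "index": 0, "relevance_score": 0.9999659061431885},
		{"document": {"text": "France capital city is known for the Eiffel Tower."}, "index": 1, "relevance_score": 0.9663000106811523},
		{"document": {"text": "Paris is located in the north-central part of France."}, "index": 2, "relevance_score": 0.7127546668052673}
	]`)
	top, err := topDocument(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(top) // prints "The capital of France is Paris."
}
```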
13.1.4 - Release Note
2025.07.01
NEW AIOS Service Official Launch
The AIOS service has been officially launched.
On Samsung Cloud Platform, you can create Virtual Server, GPU Server, Kubernetes Engine resources and use LLM on those resources.
13.1.5 - Licenses
AIOS Licenses
The license information for each AIOS provided model is as follows.
Model
License
openai/gpt-oss-120b
Apache 2.0
Qwen/Qwen3-Coder-30B-A3B-Instruct
Apache 2.0
Qwen/Qwen3-30B-A3B-Thinking-2507
Apache 2.0
meta-llama/Llama-4-Scout
llama4
meta-llama/Llama-Guard-4-12B
llama4
sds/bge-m3
Samsung SDS
sds/bge-reranker-v2-m3
Samsung SDS
Table. Licenses by AIOS provided model
13.1.5.1 - Llama-4-Scout
LLAMA 4 COMMUNITY LICENSE AGREEMENT
Llama 4 Version Effective Date: April 5, 2025
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 4 distributed by Meta at https://www.llama.com/docs/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 4” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 4 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
2. Additional Commercial Terms. If, on the Llama 4 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use "Llama" (the "Mark") solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 4 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
MIT License
Copyright (c) [year] [fullname]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
13.1.5.4 - bge-reranker-v2-m3
Model Overview
A reranker built on BGE Reranker and fine-tuned with the public AIHub datasets 016 (administration), 021 (books), and 151 (law/finance), plus 1.1 million general-knowledge Query-Passage pairs, to strengthen Korean-based re-ranking ability
Model type: Reranker
Main usage: Vector Search (RAG)
Vocab.size: 250,002
Version info: v1.0.0
Base model license: apache-2.0
Technical features
Structure: based on XLMRobertaModel
Max Input Tokens: 1,024 (max 8K, but fine-tuned at 1,024)
Size: ~568M parameters (2.27GB, FP32)
Training data: AIHub 016 (administration), 021 (books), 151 (law/finance), and 1.1 million general-knowledge items to strengthen Korean-based re-ranking capability
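Because the model was fine-tuned at 1,024 input tokens, long documents are best shortened before reranking. A minimal Go sketch of such a pre-truncation step, using whitespace-separated words as a rough stand-in for the model's actual XLM-R subword count (which this does not compute, so leave headroom below 1,024):

```go
package main

import (
	"fmt"
	"strings"
)

// truncateToTokenBudget coarsely limits text to maxTokens
// whitespace-separated words. The reranker's real limit is counted in
// XLM-R subword tokens, so this is only an approximation; choose a
// budget comfortably below 1,024 to stay within the fine-tuned range.
func truncateToTokenBudget(text string, maxTokens int) string {
	fields := strings.Fields(text)
	if len(fields) <= maxTokens {
		return text
	}
	return strings.Join(fields[:maxTokens], " ")
}

func main() {
	doc := strings.Repeat("word ", 2000)
	short := truncateToTokenBudget(doc, 1000)
	fmt.Println(len(strings.Fields(short))) // prints 1000
}
```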
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 The k8sgpt Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
13.1.5.5 - gpt-oss-120b
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
“License” shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
“Licensor” shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
“Legal Entity” shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
“control” means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
“You” (or “Your”) shall mean an individual or Legal Entity
exercising permissions granted by this License.
“Source” form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
“Object” form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
“Work” shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
“Derivative Works” shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
“Contribution” shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, “submitted”
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as “Not a Contribution.”
“Contributor” shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a “NOTICE” text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 The k8sgpt Authors
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
13.1.5.6 - Qwen3-30B-A3B
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2020 The k8sgpt Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
13.1.5.7 - Qwen3-30B-A3B
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 The k8sgpt Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
13.2 - CloudML
13.2.1 - Overview
Service Overview
CloudML is an integrated platform that supports the entire machine learning process from data analysis to model development, learning, verification, and deployment in a cloud environment.
Features
CloudML is designed so that users in various roles, such as analysts, machine learning engineers, and developers, can collaborate in one environment and easily design and operate machine learning workflows.
CloudML provides an analysis environment based on Python and R, so users with programming experience can use the platform more flexibly and effectively. In particular, the generative AI-based Copilot function lets you perform code writing, refactoring, error correction, and function recommendation with natural language input alone, increasing both analysis productivity and accessibility.
CloudML systematically supports each stage of analysis, including environment configuration, model development and serving, analysis automation, and visualization. It also improves both productivity and model quality by automating repetitive experiments and operations.
Service Composition Diagram
CloudML consists of an analysis environment, machine learning lifecycle management, automated analysis support, visualization, and a generative AI-based Copilot function; through these components, users can perform the entire machine learning process in an integrated manner.
Fig. CloudML Configuration Diagram
Provided Features
CloudML provides the following features.
Visual Modeling: Provides an intuitive drag-and-drop interface for building and deploying machine learning models without coding. You can easily manage the entire process from data import to model evaluation and deployment.
Code-based Development: You can freely write and execute code in Python, R, and other languages in the Jupyter Notebook environment, which provides powerful features for advanced users and researchers.
Workflow Automation: Efficiently automates complex machine learning workflows such as data preprocessing, model training, evaluation, and deployment.
Experiment Management: You can train machine learning models with various parameter combinations and systematically manage and compare the results.
Copilot Feature Utilization: Provides a natural language-based AI assistant that guides and automates the model development process. It supports tasks such as code generation, refactoring, error correction, and explanation, improving productivity.
Integrated Platform: All features are integrated within CloudML, so they are convenient to use.
Scalability and Flexibility: Supports expanding computing resources and connecting to various data sources as needed.
Constraints
Before using CloudML, check the following constraints and reflect them in your service usage plan. CloudML runs in a Kubernetes-based environment, so proper cluster resource settings are required for stable service operation.
Basic application resources: A minimum of 24 vCPU cores and 96 GB of memory is allocated by default for application operation.
Analysis job resources: In addition to the basic resources, analysis jobs require additional CPU or GPU resources. These resources should be set appropriately for the workload of the analysis job.
Copilot (CPU-based usage): Running Copilot on CPU resources requires a minimum of 16 vCPU cores and 10 GiB of memory. In this case, the CPU resources available for analysis jobs are reduced accordingly.
Copilot (GPU-based usage): Copilot can also run on dedicated GPU resources.
Supported LLM models: Currently, the LLM models applicable to Copilot are limited to Llama3.
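To make the CPU trade-off above concrete, here is a small sizing sketch. The pool sizes used in the example are hypothetical; only the 24-core/96 GB application baseline and the 16-core/10 GiB CPU-based Copilot minimums come from the constraints above.

```python
# Hypothetical sizing check: CPU-based Copilot consumes cores from the same
# pool that analysis jobs use, so subtract its reservation first.
APP_MIN_VCPU = 24          # baseline application resources (from the constraints above)
APP_MIN_MEM_GB = 96

COPILOT_CPU_VCPU = 16      # minimum vCPU for CPU-based Copilot
COPILOT_CPU_MEM_GIB = 10   # minimum memory for CPU-based Copilot

def analysis_vcpu_left(analysis_pool_vcpu: int, copilot_on_cpu: bool) -> int:
    """Return the vCPUs remaining for analysis jobs in a (hypothetical) node pool."""
    used = COPILOT_CPU_VCPU if copilot_on_cpu else 0
    left = analysis_pool_vcpu - used
    if left < 0:
        raise ValueError("analysis pool too small to host CPU-based Copilot")
    return left

# Example: a 32-vCPU analysis pool keeps 16 vCPUs for analysis with Copilot on CPU.
print(analysis_vcpu_left(32, copilot_on_cpu=True))   # 16
print(analysis_vcpu_left(32, copilot_on_cpu=False))  # 32
```
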
Region-based provision status
CloudML is available in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. CloudML Region-based Service Status
Preceding Service
The following services must be configured before creating this service. For details, refer to each service's guide and prepare in advance.
A service that automatically distributes server traffic load
Fig. CloudML Preceding Service
13.2.2 - How-to guides
Create CloudML
Users can create the service by entering the required CloudML information and selecting detailed options in the Samsung Cloud Platform Console.
To create CloudML, follow these steps.
Click All Services > AI/ML > CloudML. The CloudML Service Home page opens.
On the Service Home page, click the CloudML creation button. The CloudML creation page opens.
On the CloudML creation page, enter the information required for service creation and select detailed options.
In the Version Selection area, select the version of the service.
Classification
Necessity
Detailed Description
Version Selection
Required
CloudML Version Selection
Fig. CloudML Service Version Selection Items
In the SCP Kubernetes Engine deployment area, select the options required to create a service.
Classification
Necessity
Detailed Description
Cluster Name
Required
Select Kubernetes Engine Cluster
Fig. CloudML Service Cluster Selection Items
In the Service Information Input area, select the options required for service creation.
Classification
Necessity
Detailed Description
CloudML Name
Required
Enter the service name
Description
Optional
Enter the service description
Domain Name
Required
Enter the domain name to use for the service
Enter 2-63 characters using lowercase letters, numbers, and special characters
Endpoint
Required
Select the endpoint to use for the service
Choose between Private and Public
Copilot
Optional
Select whether to use Copilot in the service
Selecting it requires agreeing to the terms in a popup window
If the selected cluster has no LLM-dedicated GPU and the allocated LLM resources are insufficient, Copilot cannot be applied
Resource Information
Required
Displays resource information of the selected cluster
In the Enter Additional Information area, enter or select the necessary information.
Classification
Necessity
Detailed Description
Tag
Optional
Add tags
Up to 50 can be added per resource
Click the Add Tag button and enter or select a Key and Value
Table. CloudML Additional Information Input Items
In the Summary panel, review the detailed information and estimated charges, and click the Complete button.
Once creation is complete, check the created resource on the CloudML list page.
Check CloudML details
You can check and modify the entire resource list and detailed information of the CloudML service. The CloudML details page consists of the Details, Tags, and Work History tabs.
To check the CloudML details, follow these steps.
Click All Services > AI/ML > CloudML. The CloudML Service Home page opens.
On the Service Home page, click the resource (CloudML) whose details you want to check. The CloudML details page opens.
The CloudML details page displays the status and detailed information of CloudML, and consists of the Details, Tags, and Work History tabs.
Division
Detailed Description
Service Status
Status of CloudML
Creating: being created
Deployed: creation completed and operating normally
Updating: settings being updated
Terminating: being deleted
Error: an error occurred
Connection Guide
Guide to connecting to the service
Host information to register on the user's PC
Service Cancellation
Button to cancel the service
Fig. CloudML Status Information and Additional Features
Detailed Information
On the CloudML list page, you can check the detailed information of the selected resource and modify the information if necessary.
Division
Detailed Description
Service
Service Name
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Name of the resource
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
The time when the service was created
Editor
User who modified the service information
Modified Date
Date when service information was modified
Product Name
CloudML Name
Copilot
Whether to use Copilot
Description
Description of the service
Cluster Name
Selected Kubernetes Engine cluster name
Domain Name
Entered Service Domain Name
Version
Selected Service Version
Installation Node Information
Node information installed in the cluster
SCR Information
Entered SCR Information
Fig. CloudML Detailed Information Items
Tag
On the CloudML list page, you can check the tag information of the selected resource, and add, change, or delete it.
Classification
Detailed Description
Tag List
Tag list
Check Key, Value information of tag
Up to 50 tags can be added per resource
Search and select existing Key and Value lists when entering tags
Fig. CloudML tags tab items
Work History
You can check the work history of the selected resource on the CloudML list page.
Classification
Detailed Description
Work history list
Resource change history
Work time, resource type, resource name, work details, work result, worker name, and path information can be checked
To perform a detailed search, click the Detailed Search button
Fig. Job History Tab Detailed Information Items
Canceling CloudML Service
Users can cancel the CloudML service through the Samsung Cloud Platform Console.
Reference
If the CloudML service status is Creating, Updating, or Terminating, the service cannot be canceled.
To cancel CloudML, follow these steps.
Click All Services > AI/ML > CloudML. The CloudML Service Home page opens.
On the Service Home page, click the Service Cancellation button. A service cancellation confirmation window appears.
Enter the name of the CloudML to be deleted in the window and click the Confirm button.
13.2.2.1 - Kubernetes Cluster Configuration
Configuring a Kubernetes Cluster
To apply for the CloudML service, you must configure a cluster dedicated to CloudML. A dedicated cluster means a Kubernetes Engine created with at least the minimum required specifications and with a few necessary settings applied.
Create the dedicated cluster before applying for the CloudML service.
CloudML exposes an HTTPS endpoint on port 443. Select the public endpoint when creating the cluster.
Cluster Node and Storage Recommended Specifications
Cluster nodes can be added or modified after the cluster is created.
The following are the recommended specifications, based on 5 users, for the cluster nodes and storage needed to install CloudML.
Division
Item
Role
Capacity
Cluster Node
Kubernetes Node Pool (Virtual Server)
Application Execution
node.kubernetes.io/nodetype: ml-app
24 core / 96 GB
Cluster Node
Kubernetes Node Pool (Virtual Server)
Analysis Execution
node.kubernetes.io/nodetype: ml-analytics
8 cores / 32 GiB x 2 EA
Total 16 cores / 64 GiB
Repository
File Storage
Data Storage
1 TB
Table. Recommended Specifications for Cluster Nodes and Storage
Notice
If you need to change the number of nodes, add GPU nodes, or scale up resources, please request technical support.
To add a label to a cluster node, follow these steps.
Click All Services > Container > Kubernetes Engine. The Kubernetes Engine Service Home page opens.
On the Service Home page, click the Node menu. The Node List page opens.
On the Node List page, use the Gear button at the top left to select the cluster you want to inspect, then click the Confirm button.
Click the node whose details you want to check. The Node Details page opens.
On the Node Details page, click the YAML tab.
On the YAML tab, click the Edit button. The node editing window opens.
In the node editing window, add a label that matches the node's role and click the Save button.
Check the following information and add labels that match the node specifications.
Division
Purpose-based Label
CPU Node
For application: node.kubernetes.io/nodetype: ml-app
For analysis: node.kubernetes.io/nodetype: ml-analytics
GPU Node
For analysis: node.kubernetes.io/nodetype: ml-analytics-gpu
For Copilot: node.kubernetes.io/nodetype: ml-gpu
Table. Kubernetes node labels by purpose
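As an illustration of the YAML edit described above, the change amounts to adding one entry under the node's `metadata.labels`. The node name below is a hypothetical example; use the label value from the table that matches the node's role.

```yaml
# Node manifest excerpt (node name is an example).
# Add the purpose label under metadata.labels in the YAML edit window, then save.
apiVersion: v1
kind: Node
metadata:
  name: ml-node-001                        # example node name
  labels:
    node.kubernetes.io/nodetype: ml-app    # or ml-analytics, ml-analytics-gpu, ml-gpu
```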
13.2.3 - API Reference
API Reference
13.2.4 - CLI Reference
CLI Reference
13.2.5 - Release Note
CloudML
2025.07.01
NEW CloudML Service Official Version Release
Samsung Cloud Platform provides CloudML service that supports the entire machine learning process from data analysis to model development, learning, verification, and deployment in a cloud environment.
13.3 - AI&MLOps Platform
13.3.1 - Overview
Service Overview
AI&MLOps Platform is a machine learning platform that automates the repetitive tasks across the entire pipeline of machine learning model development, training, and deployment. Through the AI&MLOps Platform service, training data, models, and operational data can be managed in an integrated way in a Kubernetes-based AI/MLOps environment.
AI&MLOps Platform is a Kubeflow-based open-source product that provides the Mini service, which offers machine learning model development, training, tuning, and deployment functions, and the Enterprise service, which adds Add-on functions such as distributed training job execution and monitoring.
Reference
For sites related to the AI&MLOps Platform, refer to Kubeflow.
Features
Cloud Native MLOps Environment: AI&MLOps Platform provides a machine learning model development environment optimized for the cloud and integrates conveniently with various open-source tools based on Kubernetes.
Machine Learning Development and Operational Convenience: Provides a standardized environment that supports various machine learning frameworks such as TensorFlow, PyTorch, scikit-learn, and Keras. It automates the entire pipeline of machine learning model development, training, and deployment, making models easy to configure, create, and reuse.
GPU Collaboration Enhancement: With Bare Metal Server-based multi-node GPU and GPUDirect RDMA (Remote Direct Memory Access), the job speed of LLM (Large Language Model) and NLP (Natural Language Processing) workloads can be dramatically improved.
Service Composition Diagram
Fig. AI&MLOps Platform Configuration Diagram
Provided Features
The AI&MLOps Platform provides the following functions.
ML Model Development Environment and Features
Notebook provision: Creates Jupyter Notebook and VS Code environments with ML frameworks (TensorFlow, PyTorch, etc.).
TensorBoard: Creates and manages TensorBoard (an ML model training process visualization/analysis tool) servers.
Volumes: When developing an ML model, use volumes to store datasets and models; a volume can be attached when creating a Jupyter Notebook.
Distributed Training Job Execution/Management for ML Models
Supports distributed training Job execution and monitoring, and inference service management and analysis. (Add-on)
Provides various functions for managing the Job Queue, configuring the MLOps environment, and more. (Add-on)
Provides features for efficient GPU resource utilization, such as Job Scheduler (FIFO, Bin-packing, Gang-based), GPU Fraction, and GPU resource monitoring. (Add-on)
BM-based Multi Node GPU and GPUDirect RDMA (Remote Direct Memory Access) significantly improve the job speed of LLM (Large Language Model) and NLP (Natural Language Processing) workloads. (Add-on)
ML Model Experiment Management and Pipeline
ML pipeline experiment management is provided through Experiments (KFP).
Supports pipeline automation so that ML tasks can be executed step by step.
Component
Operating System Version
The operating systems supported by the AI&MLOps Platform are as follows.
Operating System(OS)
Version
RHEL
RHEL 8.3
Ubuntu
Ubuntu 18.04, Ubuntu 20.04, Ubuntu 22.04
Table. Supported operating system versions
Regional Provision Status
The AI&MLOps Platform can be provided in the following environments.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3, Busan (kr-south3)
Not provided
Table. AI&MLOps Platform Regional Provision Status
Preceding service
The following services must be configured in advance before creating this service. For details, refer to the guide provided for each service and prepare in advance.
The user can enter the essential information of the AI&MLOps Platform through the Samsung Cloud Platform Console and create the service by selecting detailed options.
To create an AI&MLOps Platform, follow these steps.
Click the All Services > AI/ML > AI&MLOps Platform menu. It moves to the Service Home page of AI&MLOps Platform.
On the Service Home page, click the Create AI&MLOps Platform button. It moves to the Create AI&MLOps Platform page.
On the Service Type Selection page of AI&MLOps Platform creation, enter the information required for service creation and select detailed options.
In the Service Type and Version Selection area, select the service type and version.
Classification
Necessity
Detailed Description
Service Type
Required
The type of service chosen by the user
AI&MLOps Platform
Kubeflow Mini
Service Type Version
Required
Version of the selected service
Provides a list of versions of the provided service
Table. AI&MLOps Platform Service Type and Version Selection Items
In the Cluster Deployment Area Division area, select the options required for service creation.
Classification
Necessity
Detailed Description
Cluster Deployment Area
Required
Deploy to Kubernetes Engine: Select an existing Kubernetes Engine
Deploy to New Cluster: Create a Kubernetes Engine when creating the AI&MLOps Platform
Table. AI&MLOps Platform Service Cluster Deployment Area Division Items
Reference
Depending on the settings of this cluster deployment, the following configuration elements of the Service Information Input page will be different.
On the Service Information Input page of AI&MLOps Platform creation, enter the information required for service creation and select detailed options.
You can select the cluster deployment area.
Deploy to new cluster setup method, please refer to the Deploy to new cluster guide.
The Kubernetes cluster specifications required for installation can be found in the Kubernetes cluster specifications guide.
On the Creation Information Confirmation page of AI&MLOps Platform Creation, check the detailed information created and the expected billing amount, and click the Complete button.
Once creation is complete, check the created resource on the AI&MLOps Platform service list page.
AI&MLOps Platform detailed information check
In the AI&MLOps Platform service, you can check and modify the entire resource list and detailed information. The AI&MLOps Platform Service Details page consists of the Detailed Information, Tags, and Work History tabs.
To check the detailed information of the AI&MLOps Platform service, follow the next procedure.
Click the All Services > AI/ML > AI&MLOps Platform Service menu. It moves to the Service Home page of the AI&MLOps Platform service.
On the Service Home page, click the AI&MLOps Platform menu. It moves to the AI&MLOps Platform Service List page.
On the AI&MLOps Platform Service List page, click the resource whose detailed information you want to view. It moves to the AI&MLOps Platform Service Details page.
The AI&MLOps Platform Service Details page displays status information and additional feature information, and consists of the Detailed Information, Tags, and Work History tabs.
Detailed Information
On the AI&MLOps Platform Service Details page, you can check the detailed information of the selected resource and modify the information if necessary.
Classification
Detailed Description
Service
Service Category
Resource Type
Service Name
SRN
Unique resource ID in Samsung Cloud Platform
Resource Name
Resource Name
In the AI&MLOps Platform service, it means the cluster name
Resource ID
Unique resource ID in the service
Creator
User who created the service
Creation Time
The time when the service was created
Modifier
Service information modified user
Modified Date
Date when service information was modified
Dashboard Status
Dashboard Status Value
Service Name
Service Name
Admin Email Address
Administrator Email Address
Image Name
Service Image Name
Version
Image Version
Service Type
Deployed Service Type
Table. AI&MLOps Platform Service Detailed Information Items
Tag
On the AI&MLOps Platform Service List page, you can check the tag information of the selected resource, and add, change, or delete tags.
Classification
Detailed Description
Tag List
Tag list
Key, Value information of the tag can be checked
Up to 50 tags can be added per resource
When entering a tag, search and select from the existing Key and Value list
Table. Cluster tags tab items
Work History
You can check the work history of the selected resource on the AI&MLOps Platform Service List page.
Classification
Detailed Description
Work history list
Resource change history
Work details, work time, resource type, resource name, work result, worker information can be checked
Click the corresponding resource in the Work History list. The Work History Details popup window opens.
Fig. AI&MLOps Platform Service Work History Tab Detailed Information Items
AI&MLOps Platform connection
To access the AI&MLOps Platform dashboard, the following pre-work must be completed.
Pre-work
To access the AI&MLOps Platform, you must set the relevant ports and IP addresses for access in the Security Group and Firewall (if using a firewall) in advance.
Kubeflow Mini: port 31390 (Security Group inbound rule, VPC firewall)
To access the cluster Worker Node, you must set an inbound rule for port 22 in the Security Group and Firewall (if using a VPC firewall).
Logging into the Dashboard
To access the AI&MLOps Platform service, follow the procedure below.
Click the All Services > AI/ML > AI&MLOps Platform Service menu. It moves to the Service Home page of the AI&MLOps Platform service.
On the Service Home page, click the AI&MLOps Platform menu. It moves to the AI&MLOps Platform Service List page.
On the AI&MLOps Platform Service List page, click the resource to access. It moves to the AI&MLOps Platform Details page.
On the AI&MLOps Platform Details page, click the Access Guide button. The Access Guide popup window opens.
In the Access Guide popup window, click the Dashboard URL link. It moves to the corresponding dashboard page.
Caution
When using Public Subnet and assigning a public IP, it may be exposed to security attacks such as external hacking and malware infection.
AI&MLOps Platform cancellation
You can reduce operating costs by canceling services that are not in use. However, canceling a service may stop the running service immediately, so consider the impact of stopping the service sufficiently before proceeding with cancellation.
Caution
After the service is cancelled, the data cannot be recovered, so please be careful.
To cancel the AI&MLOps Platform, follow the procedure below.
Click the All Services > AI/ML > AI&MLOps Platform Service menu. It moves to the Service Home page of the AI&MLOps Platform service.
On the Service Home page, click the AI&MLOps Platform Service menu. It moves to the AI&MLOps Platform Service List page.
On the AI&MLOps Platform Service List page, click the resource whose detailed information you want to check. It moves to the AI&MLOps Platform Details page.
On the AI&MLOps Platform details page, click the Cancel Service button. The Cancel Service popup window will open.
To confirm, enter the service name and click Confirm.
Once the cancellation is complete, please check if the resource has been cancelled on the AI&MLOps Platform service list page.
13.3.2.1 - Cluster Deployment
Cluster Deployment Area
On the Samsung Cloud Platform, the Service Type Selection step of AI&MLOps Platform creation provides two cluster deployment areas.
Deploying on SCP Kubernetes Engine
Deploy to a new cluster
Common
Before proceeding with the cluster deployment task, please check the Kubernetes cluster specifications required for installation.
Regardless of the selection of the cluster deployment area, the Kubernetes cluster specification must be checked in advance.
Please refer to the Cluster Specification guide for detailed specification information.
Depending on the selection of the cluster deployment area, the installation content on the Service Information Input page of AI&MLOps Platform creation varies.
Deploying on SCP Kubernetes Engine
Click on the All Services > AI/ML > AI&MLOps Platform menu. It moves to the Service Home page of AI&MLOps Platform.
On the Service Home page, click the Create AI&MLOps Platform button. It moves to the Create AI&MLOps Platform page.
On the Service Type Selection page of AI&MLOps Platform creation, enter the information required for service creation and select detailed options.
Cluster Deployment
Select the SCP Kubernetes Engine deployment option.
On the Service Information Input page of AI&MLOps Platform creation, enter the information required for service creation and select detailed options.
In the Service Information Input area, enter or look up the information necessary for service creation.
Classification
Necessity
Detailed Description
Service Name
Required
Enter AI&MLOps Platform name
AI&MLOps Platform name cannot be duplicated within the project
Storage Class
Required
Storage Class is registered automatically
Installation Node Information
Query
Confirm node information of the selected Kubernetes Engine
Admin Email Address
Required
Input the email address of the administrator (Admin) to use when logging in
Password
Required
Enter the password to use when logging in
Password Confirmation
Required
Re-enter password to prevent password errors
Table. AI&MLOps Platform Service Information Input Items
In the Additional Information Input area, enter or select the information needed to create the service.
Classification
Necessity
Detailed Description
Tag
Selection
Select a tag to add to the AI&MLOps Platform
Clicking on tag addition creates and adds a tag or adds an existing tag
Up to 50 tags can be registered
Newly added tags are applied after service creation is completed
Table. Additional Information Input Items for AI&MLOps Platform Service
Deploy to a new cluster
Click the All Services > AI/ML > AI&MLOps Platform menu. It moves to the Service Home page of AI&MLOps Platform.
On the Service Home page, click the Create AI&MLOps Platform button. It moves to the Create AI&MLOps Platform page.
On the Service Type Selection page of AI&MLOps Platform creation, enter the information required for service creation and select detailed options.
Cluster Deployment
Select the Deploy to New Cluster option.
On the Service Information Input page of AI&MLOps Platform creation, enter the information required for service creation and select detailed options.
In the Service Information Input area, enter or look up the information needed to create the service.
Classification
Necessity
Detailed Description
Service Name
Required
Enter AI&MLOps Platform name
AI&MLOps Platform name cannot be duplicated within the project
Storage Class
Required
Storage Class is registered automatically
Installation Node Information
Query
Confirm node information of the selected Kubernetes Engine
Admin Email Address
Required
Enter the email address of the administrator (Admin) to use when logging in
Password
Required
Enter the password to use when logging in
Password Confirmation
Required
Re-enter password to prevent password errors
Table. AI&MLOps Platform Service Information Input Items
In the Kubernetes Engine Information Input area, enter or select the necessary information.
Classification
Necessity
Detailed Description
Cluster Name
Required
Cluster name
Starts with an English letter; English letters, numbers, and the special character (-) can be used
Enter within 3 to 30 characters
Control Plane Version > Kubernetes Version
Required
Select Kubernetes Version
Control Plane Setting > Control Plane Logging
Select
Select whether to use control plane logging
Audit/Event logs of the cluster control plane can be checked in Cloud Monitoring’s Log Analysis
1GB of log storage for all services in the account is provided for free, and logs are deleted sequentially when exceeding 1GB
Subnet: Select a general Subnet to use from the selected VPC’s subnets
Security Group: Click the Search button and select a Security Group from the Security Group Selection popup window
Load Balancer: Provides type: LoadBalancer functionality in Kubernetes Service objects
Select a load balancer on the same network
Usage: Select whether to use it
Cannot be changed after setting
File Storage settings
Required
Select the file storage volume to be used in the cluster
Default volume (NFS): Select File Storage through the Search button
The default volume file storage only provides NFS format
Table. Kubernetes Engine service information input items
In the Node Pool Information Input area, enter or select the required information.
Classification
Necessity
Detailed Description
Node Pool Configuration
Required
Select node pool information
Items marked with * are required and must be entered
For the AI&MLOps Platform, image capacity may continue to increase with use, so setting Block Storage to at least 200GB allows for smooth system configuration
Table. AI&MLOps Platform Service Information Input Items
Reference
Windows OS node pool can only be created when additional storage (CIFS) volumes are in use in the cluster.
Node pool Block Storage’s volume encryption can only be set at the time of initial creation.
Setting encryption may cause performance degradation of some features.
If you choose to use the node pool auto-scaling or auto-resizing feature, you can only enter the number of nodes, minimum number of nodes, and maximum number of nodes.
In the Additional Information Input area, enter or select the necessary information.
Classification
Necessity
Detailed Description
Tag
Selection
Select a tag to add to the AI&MLOps Platform
Clicking on tag addition creates and adds a tag or adds an existing tag
Up to 50 tags can be registered
Newly added tags are applied after service creation is completed
Table. Additional Information Input Items for AI&MLOps Platform Service
Cluster Specifications
To use the AI&MLOps Platform, a Kubernetes Engine to install the AI&MLOps Platform is required. You can select an existing Kubernetes Engine or create a Kubernetes Engine when creating the AI&MLOps Platform.
The specifications of the Kubernetes cluster required for installation are as follows.
Node pool resource scale (composed of 2 or more nodes)
AI&MLOps Platform : vCPU 32, Memory 128G or more
Kubeflow Mini: vCPU 24, Memory 96G or more
Kubernetes version
AI&MLOps Platform v1.9.1 (k8s v1.30)
Kubeflow Mini v1.9.1 (k8s v1.30)
Notice
Only one AI&MLOps Platform can be installed per Kubernetes cluster, and AI&MLOps Platform cannot be installed on a cluster that is being used for other purposes.
13.3.2.2 - Kubeflow User Guide
Below is a guide on how to use Kubeflow after creation.
Adding Kubeflow Users
Kubeflow has only one Admin user account, created on the initial setup screen.
To add users to the Kubeflow Dashboard, you need to change the settings of Dex, Kubeflow’s authentication component.
Dex is deployed in the auth namespace, and its settings are stored in a ConfigMap named dex.
Note
Kubeflow has separate namespaces for each user
The following is an example of the Dex configuration.
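A minimal sketch of the relevant part of that ConfigMap is shown below; the issuer URL, email, user name, and IDs are illustrative placeholders, not shipped defaults.

```yaml
# Illustrative excerpt of the dex ConfigMap in the auth namespace.
# All values below are placeholders.
issuer: http://dex.auth.svc.cluster.local:5556/dex
enablePasswordDB: true
staticPasswords:
- email: user@example.com
  # Bcrypt hash of the user's password; replace with your own generated hash,
  # e.g. htpasswd -bnBC 10 "" 'password' | tr -d ':\n'
  hash: $2y$10$...
  username: user
  userID: "15841185641784"
```

The ConfigMap can be edited in place (for example with `kubectl edit configmap dex -n auth`); each additional entry under `staticPasswords` defines one more login.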
If the enablePasswordDB value is true in the configuration, Dex loads the list of users defined in staticPasswords into its internal storage when the service starts. Therefore, you can add a new user by appending an entry with email, hash, username, and userID to staticPasswords.
The properties for adding users are defined as follows.
Parameter
Description
email
A value in the standard email format
hash
The user’s password hashed with the Bcrypt algorithm; a hash value generated with Bcrypt can be entered directly
The staticPasswords value in the configmap is reflected when the Dex service starts, so you need to restart the Dex service using the following command.
kubectl rollout restart deployment dex -n auth
Try logging in with the new user information.
New user login
You should see that you are logged in successfully and can create a new namespace (profile).
Namespace creation
The above content was written with reference to the Kubeflow official website. For more information, please refer to Kubeflow Profiles.
Using Custom Images in Kubeflow Jupyter Notebook
To use a custom image in Kubeflow Notebook Controller, which manages the Notebook life cycle, you need to meet certain requirements.
Kubeflow assumes that Jupyter starts automatically when the notebook image runs. Therefore, you need to set the default command to start Jupyter in the container image.
The following is an example of what you need to include in your Dockerfile.
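A minimal sketch of such a Dockerfile is shown below; the base image and package choices are illustrative, and the essential points are that the server listens on port 8888 and honors the NB_PREFIX base-URL variable that Kubeflow injects.

```dockerfile
# Illustrative custom notebook image; base image and versions are examples only.
FROM python:3.11-slim

RUN pip install --no-cache-dir jupyterlab

# Kubeflow injects NB_PREFIX and expects the notebook server on port 8888.
ENV NB_PREFIX /
EXPOSE 8888

# Default command: start Jupyter automatically when the container runs.
CMD ["sh", "-c", "jupyter lab --notebook-dir=/home/jovyan --ip=0.0.0.0 \
     --no-browser --allow-root --port=8888 --ServerApp.token='' \
     --ServerApp.password='' --ServerApp.allow_origin='*' \
     --ServerApp.base_url=${NB_PREFIX}"]
```

Once built, the image is pushed to a registry the cluster can reach and selected as the Custom Image when creating the Notebook Server.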
The custom image must be stored in a public registry like Docker Hub or a private registry that can be pushed and pulled from Kubeflow.
Click the +NEW SERVER button on the Notebook Servers page.
If you have created a custom image, check Custom Image on the Kubeflow Notebook Server screen and enter the Custom Image address to create a new Notebook Server.
Guide
The above content was written with reference to the Kubeflow official website.
FEATURE AI&MLOps Platform Open-Source Version Upgrade
The AI&MLOps Platform open-source version has been upgraded.
Kubeflow 1.9
2025.02.27
NEW AI&MLOps Platform Service Official Version Release
The AI&MLOps Platform service, which automates the repetitive tasks across the entire pipeline of machine learning model development, training, and deployment, has been released.
Provides a machine learning platform service based on Kubernetes.
14 - Hybrid Cloud
You can flexibly connect the Samsung Cloud Platform with on‑premises infrastructure to use cost‑optimized cloud services.
14.1 - Edge Server
14.1.1 - Overview
Service Overview
Edge Server provides an Edge Cloud to the on-site environments of companies that find it difficult to transition to the cloud due to issues such as data security, regulations, performance, and latency, supporting them in processing and analyzing data in real time on site. Based on edge computing technology, it processes data directly on the server closest to where the data is generated and provides an environment that can be efficiently managed in conjunction with the cloud. This maximizes the speed and efficiency of large-scale data processing and minimizes data transmission delays through a smooth connection with the cloud. It performs particularly well in fields that require real-time data processing and rapid decision-making, since the server is placed near the site and data can be processed quickly without network delays.
Through the Edge Server, you can move important systems to the cloud and easily configure a Hybrid Cloud with the Samsung Cloud Platform, allowing you to use its various services. Samsung SDS’s specialized technical support organization provides optimal construction and technical support tailored to the customer’s environment, and the Samsung Cloud Platform Support Center operates a 24x7 customer support channel. Through a delivery system built on various service reception channels and systems, customer requirements can be addressed quickly.
Composition
Figure. Edge Server Configuration Diagram
Provided Features
Edge Server provides the following functions.
Hybrid Cloud expansion: The demand from companies that want to manage important data On-Site and utilize Public Cloud’s Analytics, IoT, Private 5G, AI, etc. to perform data-driven tasks is increasing. These companies can configure Hybrid Cloud with Samsung Cloud Platform’s Edge Server to conveniently perform data-based tasks.
Convenient Integrated Management: Extends cloud services to the edge where data is created and collected to provide rapid processing of local data and quick access to on-premises systems, and also allows for convenient integrated management of Edge Server through Samsung Cloud Platform Console.
High security and compliance: An Edge Computing service environment is configured within the enterprise, enabling ultra-low latency data transmission, and since all data and services are located within the enterprise, the company’s valuable information can be safely protected. Data and workloads that require security and are subject to strict regulations use the resources of the edge, while workloads and data that are less sensitive to security use the more economical public resources of the Samsung Cloud Platform.
Customized Service Provision: Provides various virtualization platforms (Kernel-based Virtual Machine (KVM), Container, etc.) according to user environment, and introduces new technologies such as AI, machine learning, and big data by utilizing already configured development environments and related ecosystems.
Components
Edge Server provides various OS standard images and standard server types. Users can select and use the desired service type according to the scale of the service they want to configure.
Edge Manager
Edge Manager is located in the Samsung Cloud Platform and manages the Network and Resource of the Edge Server installed at the customer’s site through the Edge Client, and creates and manages virtual machines within the Edge Server.
Edge Client
Edge Client is located on the Edge Server installed on the customer site, and performs the creation/deletion/modification/inquiry function of KVM and the management function of Edge Server through Edge Manager.
Supported operating system versions
The Edge Server provides the operating system (OS) required by the customer through consultation with the customer.
The representative operating system versions supported by Edge Server are as follows.
Operating System (OS)
Version
CentOS
CentOS 9 or higher
RHEL
RHEL 8.9, RHEL 9.3, RHEL 9.4
Ubuntu
Ubuntu 18.04, Ubuntu 20.04, Ubuntu 22.04
Rocky Linux
Rocky Linux 8.9, Rocky Linux 9.3, Rocky Linux 9.4
Table. Edge Server Operating System (OS) Supported Versions
Caution
The cost of operating systems (OS) other than Rocky Linux must be borne by the customer.
Regional Provision Status
The Edge Server can be provided in the following environment.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Not provided
Korea South 1 (kr-south1)
Provided
Korea South 2 (kr-south2)
Provided
Korea South 3 (kr-south3)
Provided
Table. Edge Server Regional Provision Status
Preceding service
The following services must be configured in advance before creating this service. For details, refer to the guide provided for each service and prepare in advance.
The user can enter the essential information of the Edge Server service and select detailed options to create the service through the Samsung Cloud Platform Console.
Edge Server creation
You can create and use the Edge Server service on the Samsung Cloud Platform Console.
To create an Edge Server, follow the procedure below.
Click the All Services > Hybrid Cloud > Edge Server menu. It moves to the Edge Server Dashboard page.
On the Edge Server Dashboard page, click the Edge Server Service Request button. It moves to the Service Request page.
On the Service Request page, enter the corresponding information in the service information input area, and then click the Complete button.
Guidance
In Task Classification, select Edge Server service creation.
Input Item
Detailed Description
Title
Title of the service you want to request
Region
Location selection of Samsung Cloud Platform
Automatically entered as the region of the project
Service
Select the service group and service for the corresponding service
Service Group: Hybrid Cloud
Service: Edge Server
Task Classification
Select the task you want to perform
Edge Server service creation: Select if you want to create this service
Content
Enter detailed information required to create an Edge Server [Basic Information]
Account ID: Enter the corresponding account ID
(Customer) Name/Company/Department/E-mail/Phone Number: Enter user information
Service Start Date: Enter the date the user wants to start the service
Host Server OS : Enter the host server OS information to be used
[Application Information]
Usage Purpose: Enter the purpose of using the Edge Server
Example: Manufacturing, logistics, robots, CCTV, video analysis
Server Type and Quantity: Enter the type and quantity of the host server to be used
Example: Standard (128vCore/512GB/SSD 7.6TB), Large Capacity (128vCore/512GB/SSD 7.6TB+HDD 160TB), GPU_L40s (128vCore/512GB/SSD 7.6TB/L40s*2)
Please refer to the Table. Edge Server Specification Information table below for more detailed information
Usage Period (Default 3 years): Service usage period
Attachment
Upload only when there are additional files to share
Up to 5 files can be attached, each within 5MB
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Detailed contents of Edge Server request items
For the Server Type and Quantity item in the table above (Table. Detailed contents of Edge Server request items), check the resource information of the corresponding Edge Server below before entering it.
Classification
CPU
Memory
Local Disk
GPU
Standard type
128vCore
512GB
SSD 7.6TB
Large Capacity
128vCore
512GB
SSD 7.6TB, HDD 160TB
GPU_L40s
128vCore
512GB
SSD 7.6TB
L40s x 2
Table. Edge Server Specification Information
On the Service Request page, enter the required information and click the Request button.
The requested work proceeds through procedures such as purchasing physical servers, delivery, configuration, and site construction, and takes at least six weeks based on business days.
Check application history
You can check the application and cancellation history of the Edge Server service on the Samsung Cloud Platform Console.
Reference
Edge Server’s service application and cancellation request details can be checked through the following procedure.
To check the Edge Server service application history, follow the next procedure.
Click the All Services > Management > Support Center menu. It moves to the Support Center > Service Home page.
On the Support Center Service Home page, click the Service Request menu. It moves to the Service Request List page.
On the Service Request List page, click the title of the service request you submitted. It moves to the Service Request Details page.
On the Service Request Details page, you can check the application status and information.
Notice
When a service request is received, the sales/operations manager checks the service application details and proceeds with the Edge Server service based on the entered information.
Edge Server detailed information check
In the Edge Server service, you can check and modify the entire resource list and detailed information. The Edge Server Details page consists of the Detailed Information and Work History tabs.
To check the Edge Server detailed information, follow the next procedure.
Click the All Services > Hybrid Cloud > Edge Server menu. It moves to the Edge Server Dashboard page.
On the Edge Server Dashboard page, click the Edge Server menu. It moves to the Edge Server List page.
Columns other than required columns can be added by clicking the Settings button.
Classification
Mandatory
Detailed Description
Edge Server Name
Required
The name of the Edge Server created by the user
Edge Server OS
Required
The Edge Server operating system requested by the user
Server Type
Required
Edge Server’s server type
Edge Server IP
Required
The IP of the Edge Server requested by the user
Service Start Date
Required
Edge Server’s billing start time
Service Termination Date
Required
Edge Server’s Billing Termination Time
Status
Required
The status of the Edge Server
Installation location
Selection
The location where the Edge Server is installed
Creator ID
Selection
The user who created the Edge Server in Edge Manager
Creation Time
Selection
The time when the Edge Server was created in Edge Manager
Table. Edge Server List Information
On the Edge Server List page, click the Edge Server whose detailed information you want to check. It moves to the Edge Server Details page.
The top of the Edge Server Details page displays status information and a description of additional features.
Classification
Detailed Description
Edge Server status
The status of the Edge Server created by the user
Active: The state in which the Edge Server operates normally
Inactive: The state in which the Edge Server cannot provide normal services
Table. Edge Server Status Information
Detailed Information
On the Edge Server Details page, you can check the detailed information of the selected resource and modify it if necessary.
Category
Detailed Description
Service
Service Group
Resource Type
Resource Type
SRN
Unique resource ID in Samsung Cloud Platform
In Edge Server service, it means Edge Server SRN
Resource Name
Resource Name
In the Edge Server service, it means the Edge Server name
Resource ID
Unique resource ID in the service
Creator
The user who created the Edge Server in Edge Manager
Creation Time
The time when the Edge Server was created in Edge Manager
Modifier
User who modified the Edge Server in Edge Manager
Modification Time
The time when the Edge Server was modified in Edge Manager
Edge Server Name
The name of the Edge server
Resource usage
The ratio of CPU, Memory, Disk in use to the total resources of the edge server
Resource Information
Information on the total allocatable resources of the edge server and the currently allocated resources
VM information
information of each Virtual Server created on the edge server
Table. Edge Server detailed information tab items
Work History
On the Edge Server Details page, you can check the work history of the selected resource.
Classification
Detailed Description
Work history list
Resource change history
Check work records, work time, work details, work results, and worker information
Table. Edge Server job history tab detailed information items
Edge Server Cancellation
An Edge Server service whose contract period has expired can be cancelled in the Console. To cancel the service before the contract period expires, the user's contract manager and the Samsung SDS contract manager must first complete the contract cancellation for the corresponding Edge Server through prior consultation, and then proceed according to the following procedure.
To cancel the Edge Server, follow the steps below.
Click the All Services > Hybrid Cloud > Edge Server menu. You are moved to the Edge Server Dashboard page.
On the Edge Server Dashboard page, click the Edge Server service request button. You are moved to the Service Request page.
On the Service Request page, select or enter the required information for the Edge Server.
Notice
In the Work classification field, select Edge Server service cancellation to apply.
Input Item
Detailed Description
Title
Title of the service you want to request
Region
Selecting the location of Samsung Cloud Platform
Service
Select the service group and service for the corresponding service
Service Group: Hybrid Cloud
Service: Edge Server
Work classification
Select the work you want to perform
Edge Server service cancellation: Select if you want to cancel the service
Content
Enter the detailed information required for Edge Server cancellation [Basic Information]
Account ID: Enter the corresponding account ID
Customer name/Affiliated company/Department/E-mail/Phone number: Enter user information
Service Termination Desired Date: Enter the user’s desired service termination date
Attachment
Upload only when you have a file to share additionally
Up to 5 files can be attached, each within 5MB
Only doc, docx, xls, xlsx, ppt, pptx, hwp, txt, pdf, jpg, jpeg, png, gif, tif files can be attached
Table. Detailed contents of Edge Server service cancellation request items
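The attachment constraints above (up to 5 files, 5MB each, a fixed extension list) can be pre-checked before submitting a request. The following is a minimal illustrative sketch; the helper name and the client-side check are our assumptions, not part of the Console, and the extension list mirrors the table above (with `pptx` assumed for the PowerPoint format).

```python
import os

# Constraints from the service request form above (sketch only; the Console
# enforces these server-side regardless).
ALLOWED_EXTENSIONS = {"doc", "docx", "xls", "xlsx", "ppt", "pptx", "hwp",
                      "txt", "pdf", "jpg", "jpeg", "png", "gif", "tif"}
MAX_FILES = 5
MAX_SIZE_BYTES = 5 * 1024 * 1024  # 5MB per file

def validate_attachments(files):
    """files: list of (filename, size_in_bytes) pairs. Returns error strings."""
    errors = []
    if len(files) > MAX_FILES:
        errors.append("at most %d files may be attached" % MAX_FILES)
    for name, size in files:
        ext = os.path.splitext(name)[1].lstrip(".").lower()
        if ext not in ALLOWED_EXTENSIONS:
            errors.append("%s: extension '%s' is not allowed" % (name, ext))
        if size > MAX_SIZE_BYTES:
            errors.append("%s: exceeds 5MB" % name)
    return errors
```

An empty return value means the selection satisfies the stated limits.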
On the Service Request page, enter the required information and click the Request button.
The cancellation process takes around 3 to 4 weeks (business days) because it requires the return of physical servers.
14.1.3 - Release Note
Edge Server
2025.10.23
FEATURE Add Local Web feature
Edge Server provides Local Web functionality, allowing customers to create, retrieve, modify, and delete Edge Server resources within their own network.
2025.07.01
NEW K8s, Storage installation type provided
Edge Server provides K8s, Storage features as an installation type, allowing customers to utilize them for POC and other purposes.
2025.04.24
NEW Edge Server Service Official Version Release
Through the Samsung Cloud Platform, customers can select general/high-capacity/GPU server types according to their needs.
We have launched an On-Site type service through which Virtual Servers can easily be created, deleted, and managed.
14.2 - Oracle Services
14.2.1 - Overview
Service Overview
Oracle Services is an Oracle Cloud service that physically configures Oracle Cloud resources dedicated to Samsung Cloud Platform within the Samsung SDS data center to support workloads based on Oracle software. Samsung Cloud Platform users can use Oracle-based workloads through Samsung Cloud Platform without separately configuring Oracle Cloud resources.
Features
Multi Cloud
With just the Samsung Cloud Platform console login procedure, you can easily enter the OCI environment. It provides the same high performance, scalability, security, availability, and automation as services offered by OCI, allowing you to configure Oracle workloads such as Oracle Database, Exadata, Weblogic without license issues.
High level of security
Samsung Cloud Platform Oracle Services is completely isolated from Public OCI both physically and logically, ensuring a high level of security. The OCI provided by Samsung Cloud Platform connects to SCP resources via a fast and reliable dedicated line, and a secure OCI usage environment is guaranteed within the Samsung SDS data center.
Convenient Integrated Management
Samsung Cloud Platform customers can use Samsung Cloud Platform's contract information to manage OCI integrated with their existing Samsung Cloud Platform resources, and OCI usage costs are conveniently billed on a consolidated basis through Samsung Cloud Platform.
Service Architecture Diagram
Figure. Oracle Services Diagram
Provided Features
Oracle Services provides the following functions.
User account linking between Samsung Cloud Platform and OCI
Using the Samsung Cloud Platform user’s Account and IAM User information to automatically create and register OCI tenancy/account
Single Sign-On (SSO): with Samsung Cloud Platform console access alone, you can access the OCI web console without an additional login
Connecting/Extending User Environment between Samsung Cloud Platform and OCI
Connect and extend the Samsung Cloud Platform VPC in the user's Account to the user's OCI VCN environment
Using a pre-configured dedicated line between Samsung Cloud Platform and OCI, a BGP peering connection is established between the Samsung Cloud Platform Transit Gateway and the OCI DRG
Provide function to initially configure OCI internal network
BluePrint function that initially configures the essential prerequisite products for using Oracle DB, Exadata, and Weblogic
One-click configuration of VCN, Private Subnet, and Service Gateway creation, DRG-VCN attach, and IP registration
Users can easily apply for and utilize the required OCI products using the OCI web console
View OCI usage details and consolidated billing through Samsung Cloud Platform
Region-wise Provision Status
Oracle Services can be provided in the environment below.
Region
Availability
Korea West (kr-west1)
Provided
Korea East (kr-east1)
Provided
Korea South 1 (kr-south1)
Not provided
Korea South 2 (kr-south2)
Not provided
Korea South 3 (kr-south3)
Not provided
Table. Oracle Services regional availability status
Constraints
You can request Oracle Services only when the applying Account is registered in an Organization.
Preliminary Service
This is a list of services that must be pre-configured before creating the service. For details, refer to the guide provided for each service and prepare in advance.
Transit Gateway
Multi-gateway service that connects to the customer's network or acts as a hub for connections between multiple VPCs
Table. Oracle Services Preceding Service
14.2.2 - How-to guides
Users can enter the required information for Oracle Services through the Samsung Cloud Platform Console, select detailed options, and create the service.
Oracle Services Create
You can create and use Oracle Services from the Samsung Cloud Platform Console.
Reference
Oracle Services can be applied for only one per project.
To proceed with a new creation, cancel the existing Oracle Services and create again.
To create Oracle Services, follow the steps below.
Click the All Services > Hybrid Cloud > Oracle Services menu. You are moved to the Service Home page of Oracle Services.
Click the Oracle Services Creation button on the Service Home page. You are moved to the Oracle Services Creation page.
On the Oracle Services Creation page, enter the information required to create the service.
Enter the required information in the Service Information Input area.
Category
Required
Detailed description
Oracle Services name
Required
Oracle Services name to be used by the user
Start with a lowercase English letter; use lowercase English letters, numbers, and the hyphen (-); enter 3-30 characters
Description
Option
Enter additional description for Oracle Services
Table. Oracle Services Service Information Input Items
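The naming rule above maps directly to a regular expression. A hypothetical client-side pre-check is sketched below; the function name is ours, and the Console performs its own validation:

```python
import re

# Rule from the table above: starts with a lowercase English letter;
# lowercase letters, digits, and hyphens only; 3-30 characters in total.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,29}$")

def is_valid_service_name(name: str) -> bool:
    """Return True when the name satisfies the stated naming rule."""
    return NAME_RE.fullmatch(name) is not None
```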
In the Additional Information Input area, enter or select the required information.
Category
Required or not
Detailed description
Select Tenancy
Option
Tenancy name assigned to the user’s Organization
The tenancy name assigned to the account’s Organization is displayed automatically, so selection is not possible
For service application, the account must belong to an Organization. The account’s Organization must be created and joined before creating the service
Tag
Option
Add Tag
Up to 50 can be added per resource
After clicking the Add Tag button, enter or select Key, Value values
Table. Oracle Services Additional Information Input Items
Check the detailed information and estimated billing amount generated in the summary panel, and click the Complete button.
When creation is complete, check the created resource on the Resource List page.
The service can be used only after the operator approves the usage request following service creation. For details about service usage approval, click ? > Support Center > Contact Us.
Oracle Services Detailed Information Check
You can view and edit the full resource list and detailed information of Oracle Services from the Oracle Services menu. The Oracle Services Details page consists of the Detailed Information, Tags, and Work History tabs.
Note
Detailed information of Oracle Services can only be viewed after the service is requested and the operator has completed approval.
To check the detailed information, make sure the service status has changed to Active after the service approval is completed.
To view detailed information of Oracle Services, follow the steps below.
Click the All Services > Hybrid Cloud > Oracle Services menu. You are moved to the Service Home page of Oracle Services.
Click the Oracle Services menu on the Service Home page. You are moved to the Oracle Services list page.
On the Oracle Services list page, click the resource whose detailed information you want to view. The Oracle Services Details page opens.
The Oracle Services Details page displays status information and additional feature information, and consists of the Detailed Information, Tags, and Work History tabs.
Category
Detailed description
Status
Current Service Status
Creating: Service application in progress
Active: Service approved and successfully created
Creation Failed: Error occurred during creation
Deletion Failed: Error occurred during deletion
Denied: Operator approval denied
Requested: Waiting for operator approval
Deleting: Service termination request in progress
Service termination
Service termination button
Table. Oracle Services status information and additional feature items
Detailed Information
Details tab allows you to view the detailed information of the selected Oracle Services.
Category
Detailed description
Service
Service Name
Resource Type
Resource Type (Oracle Services)
SRN
Resource unique ID in Samsung Cloud Platform
Resource Name
Resource Name
Resource ID
Unique resource ID in the service
Creator
Service creation request user
Creation Date/Time
Service Creation Date/Time
Editor
Service modification request user
Modification Date/Time
Service Modification Date/Time
Tenancy name
Tenancy name assigned to Organization
Oracle Services NW Configuration
Oracle Services Network Configuration Request and Edit
In the Before configuration state, click the edit icon to display the configuration edit window
Can be set only before network configuration; after configuration the button is disabled and cannot be changed
For a detailed explanation of network configuration, refer to [Oracle Services Network Configuration Request](/userguide/hybrid_cloud/oracle_services/how_to_guides/_index.md#oracle-services-네트워크-구성-요청하기)
Compartment name
Name of the Compartment connected to the service
SCP VPC ↔ Oracle Services connection
Click the Oracle Services circuit request/modify/termination shortcut to request an uplink circuit in the service request popup
For a detailed description, refer to [Samsung Cloud Platform VPC and Oracle Services connection request](/userguide/hybrid_cloud/oracle_services/how_to_guides/_index.md#samsung-cloud-platform-vpc와-oracle-services-연결-요청하기)
Dynamic Group configuration
Dynamic Group configuration status
Click the edit icon to configure the Dynamic Group
Virtual Circuit OCID
Virtual Circuit OCID path
Description
Additional description of the Oracle Services service
Connected Users
List of users who can access the Oracle Services
Click Add User to add a user
In each user's More menu, you can edit permissions or delete the user
Table. Oracle Services detailed information tab items
Tag
In the Tag tab, you can view the tag information of the selected resource, and you can add, modify, or delete it.
Category
Detailed description
Tag List
Tag List
Can check Key, Value information of tags
Up to 50 tags can be added per resource
When entering tags, search and select from existing Key and Value lists
Table. Oracle Services Tag Tab Items
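The 50-tag limit from the table above can be modeled with a small hypothetical helper; the function is illustrative only, since the Console enforces the limit itself:

```python
MAX_TAGS_PER_RESOURCE = 50  # limit stated in the tag table above

def can_add_tag(existing_tags: dict) -> bool:
    """Hypothetical pre-check: a resource may hold at most 50 tags."""
    return len(existing_tags) < MAX_TAGS_PER_RESOURCE
```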
Work History
Work History tab allows you to view the work history of the selected resource.
Category
Detailed description
Work History List
Resource Change History
Work date and time, resource type, resource name, work details, work result, operator name, path information can be checked
To perform detailed search, click the Detailed Search button
Table. Oracle Services Job History Tab Detailed Information Items
Oracle Services Network Configuration and Access
You can connect the user VPC of Samsung Cloud Platform with the VCN (Virtual Cloud Network) in Samsung Cloud Platform Dedicated Oracle Cloud to use Oracle Services. To connect the VPC and the VCN in Oracle Cloud, you must configure the Oracle Services network and then apply for a dedicated uplink for Oracle Services.
Once the Oracle Services network configuration and uplink request are completed, the Samsung Cloud Platform VPC - TGW - dedicated line segment - Oracle Cloud DRG - VCN connection is established, allowing various hybrid cloud configurations such as connecting a user's VM in the VPC environment with a DB in the Oracle Cloud VCN.
Oracle Services Request Network Configuration
When you use Samsung Cloud Platform Dedicated Oracle Cloud, this function creates and configures the user's essential network resources.
The configuration request passes the user's input data to the Oracle Cloud API, providing a service provisioning role that creates and deploys Oracle Cloud network services so that users can easily use Oracle software such as Oracle DBCS (Database Cloud Service) and ExaCS (Exadata Cloud Service).
Reference
Network configuration can be requested only when the Oracle Services network configuration status is Before configuration.
To request the network configuration of Oracle Services, follow the steps below.
Click the All Services > Hybrid Cloud > Oracle Services menu. You are moved to the Service Home page of Oracle Services.
On the Service Home page, click the Oracle Services menu. You are moved to the Oracle Services list page.
On the Oracle Services list page, click the resource whose detailed information you want to view. You are moved to the Oracle Services Details page.
On the Oracle Services Details page, click the edit icon of Oracle Services NW Configuration. The Oracle Services Network Configuration Edit window appears.
In the Oracle Services Network Configuration Edit window, enter or select the detailed information and click OK.
Category
Detailed description
SCP TGW selection
Select the Transit Gateway to connect to Samsung Cloud Platform Dedicated Oracle Cloud
VCN name
Enter the VCN name to be created on Samsung Cloud Platform Dedicated Oracle Cloud
VCN IPv4 CIDR block
Enter the VCN’s IPv4 CIDR block
Example: 10.0.0.0/16
Using DNS Hostname (VCN)
Set to Enabled to use Oracle Exa/DBCS, etc.
When Enabled, instance host names are assigned using VCN DNS or a third-party DNS
DNS Label (VCN)
The DNS label is generated from the value entered by the user
It is set to be the same as the VCN name, but may be changed according to DNS label constraints
Subnet name
Enter the name of the Private subnet to be created in the VCN
Subnet IPv4 block
Enter the IPv4 CIDR block of the private subnet
Example: 10.0.0.0/24
Using DNS Hostname (Subnet)
Set to Use to use Oracle Exa/DBCS, etc.
When set to Use, instance host names are assigned using VCN DNS or a third-party DNS
DNS Label (Subnet)
DNS label is generated from the value entered by the user
It is set to be the same as the Subnet name, but may be changed according to DNS label constraints
Subnet resource logging
Set whether to use Subnet resource logging
Set selection allows receiving resource tracing, troubleshooting, and data insight information
Routing table rule
Enter Dynamic routing rule in Samsung Cloud Platform Dedicated Oracle Cloud
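Before requesting configuration, the CIDR inputs from the table can be sanity-checked: the private subnet block must lie within the VCN block. A small sketch using Python's standard `ipaddress` module, with the example values from the table (the helper name is ours):

```python
import ipaddress

def subnet_fits_vcn(vcn_cidr: str, subnet_cidr: str) -> bool:
    """Check that the private subnet block lies inside the VCN block."""
    vcn = ipaddress.ip_network(vcn_cidr)
    subnet = ipaddress.ip_network(subnet_cidr)
    return subnet.subnet_of(vcn)

# Example values from the table: VCN 10.0.0.0/16, subnet 10.0.0.0/24.
```

`ip_network` also rejects malformed input (e.g. host bits set) with a `ValueError`, which catches typos in the CIDR fields early.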
When you request network configuration, Oracle Services attempts network service provisioning and automatic configuration with the values entered by the user. If updates are needed after provisioning, the user can directly update the configurable settings in the Oracle Cloud web console.
Create Oracle Cloud VCN.
Create Oracle Cloud DRG.
DRG(Dynamic Routing Gateway) is an optional virtual router that can be added to a VCN, serving a role similar to Samsung Cloud Platform Transit Gateway. This gateway provides private network traffic routing between the VCN and on-premises networks.
Connect the DRG to the VCN.
Enable the “Use DNS Hostnames” option of the VCN.
After defining FastConnect, create a Virtual Circuit for logical allocation.
FastConnect is a dedicated line product for Oracle Cloud, serving a role similar to Samsung Cloud Platform Direct Connect.
When defining FastConnect, set it to the same bandwidth as the corresponding TGW’s bandwidth.
Create a Private Subnet within VCN.
Enable the “Use DNS Hostnames” option of the Subnet.
Add the DRG to the VCN Route Table with destination 0.0.0.0/0.
The default destination entered for the DRG route rule is 0.0.0.0/0.
Set up user logging.
Create the log group first, then set up logging.
Request to connect Samsung Cloud Platform VPC and Oracle Services
To provide Oracle Service, a connection as shown below has been established between Samsung Cloud Platform and Oracle Cloud.
A common network is configured for service/authentication integration and console connection between Samsung Cloud Platform and Oracle Cloud.
Through the Samsung Cloud Platform Oracle product, set up user-specific service connection networks, and based on the physical network connection between Samsung Cloud Platform and Oracle Cloud, connect the Samsung Cloud Platform VPC and Oracle Cloud VCN.
Workloads configured in Oracle Cloud (e.g. Computing, DB instance) can be utilized using the pre-configured Cross network between Samsung Cloud Platform-Oracle Cloud.
Samsung Cloud Platform TGW and Direct Connect routing configuration allows direct access/utilization of the DB configured in Oracle Cloud even in customer network environments connected via Direct Connect or TGW Uplink.
Figure. Connection structure between Samsung Cloud Platform and Oracle Cloud
Samsung Cloud Platform and Oracle Cloud single connection architecture can be implemented via TGW peering and TGW-to-TGW connections, and can be commonly used across multiple projects and VPCs. However, when used commonly, performance and bandwidth considerations are needed.
Samsung Cloud Platform TGW and Oracle Cloud’s DRG must have a 1:1 connection structure.
It is possible to share a single connection architecture across multiple projects and VPCs through peering.
However, it is not possible to peer multiple Samsung Cloud Platform VPCs using a single Oracle Cloud DRG.
Figure. Connection structure between Samsung Cloud Platform and Oracle Cloud 2
Oracle Services Uplink Line Request
After completing the Oracle Services network configuration, you must request a dedicated line connection to the DRG via the TGW of Samsung Cloud Platform.
Note
Oracle Services Uplink line connection can only be requested after network configuration.
The Samsung Cloud Platform TGW to be uplinked can only have a 1:1 relationship with Oracle Cloud. Before requesting an uplink, first create a new TGW for exclusive use with Oracle Services, then proceed with the TGW-VPC connection task.
Oracle Services Uplink application must be made via the Hybrid Cloud > Oracle Services > Oracle Services line request/modification/termination shortcut menu.
If you request an uplink connection via Networking > Transit Gateway > Uplink line, the application details may not be accurately conveyed to the responsible engineer.
To apply for the Oracle Services Uplink line, follow the steps below.
Click the All Services > Hybrid Cloud > Oracle Services menu. You are moved to the Service Home page of Oracle Services.
On the Service Home page, click the Oracle Services menu. You are moved to the Oracle Services list page.
On the Oracle Services list page, click the resource whose detailed information you want to view. You are moved to the Oracle Services Details page.
On the Oracle Services Details page, click the Oracle Services line request/modify/termination shortcut. You are moved to the Service Request page.
On the Service Request page, after entering the relevant information in the service information input area, click the Complete button.
Input Item
Detailed Description
Title
Title of the service you want to request
Region
Select location of Samsung Cloud Platform
Automatically filled with the project’s region
Service
Select the service group and service of the relevant service
Service group: Hybrid Cloud
Service: Oracle Services
Task Category
Select the task you want to perform
Oracle Services Uplink line request: select if you are applying for this service
Content
Enter detailed information to connect the L3 switch for user VPC TGW-Oracle Services connection
Enter applicant information: user email, name, and contact. Application details, progress, and results will be delivered via email.
Enter customer IP range information and Samsung Cloud Platform TGW, OCI virtual circuit information to connect to the Oracle Cloud dedicated for Samsung Cloud Platform
Target DC/Region identifier: Korea (Seoul)/ap-seoul-2
Customer OCI IP range: Enter the IP range to connect to the Oracle Cloud dedicated for Samsung Cloud Platform (e.g., 10.10.10.0/24)
Samsung Cloud Platform VPC range communicating with OCI: Enter the Samsung Cloud Platform VPC range connected to Oracle Cloud (e.g., 20.20.20.0/24)
SCP TGW name: Enter the TGW name to connect to the Oracle Cloud dedicated for Samsung Cloud Platform (how to check: Samsung Cloud Platform > Networking > TGW)
SCP TGW resource ID: Enter the TGW ID to connect to the Oracle Cloud dedicated for Samsung Cloud Platform (how to check: Samsung Cloud Platform > Networking > TGW > Details)
OCI virtual circuit OCID: Enter the OCID of the Virtual Circuit created in Network Configuration of Oracle Cloud (how to check: Samsung Cloud Platform > Hybrid Cloud > Oracle Services > Details)
Progress can be checked in the service request menu
If there is a work account of the network department that can coordinate the work, enter it additionally
Oracle Services Access
After connecting Oracle Services, you can log in to Oracle Cloud through Single Sign-On (SSO) and access the customer console.
Reference
Since this is a Samsung Cloud Platform dedicated Oracle Cloud completely separated from Public Oracle Cloud, existing Public Oracle Cloud accounts cannot be used.
Oracle Services can be accessed only after the operator approves the service usage after application.
To access Oracle Services, follow the steps below.
Click the All Services > Hybrid Cloud > Oracle Services menu. You are moved to the Service Home page of Oracle Services.
On the Service Home page, click the Oracle Services menu. You are moved to the Oracle Services list page.
On the Oracle Services list page, click the resource whose detailed information you want to view. The Oracle Services Details page opens.
Click the Service Access button on the Oracle Services Details page. The Oracle Cloud Console Login window appears.
In the Oracle Cloud Console Login window, enter user information and click Samsung Cloud Platform Login.
While logged in to Samsung Cloud Platform, you can log in to Oracle Cloud via SSO integration.
Enter the 6-digit OTP generated in the mobile app and click Confirm.
When the user information is authenticated, you are redirected to the Tenancy page of Oracle Cloud.
Oracle Services Cancel
If you cancel unused Oracle Services, you can reduce operating costs.
Caution
You can terminate the service when the Oracle Services status is Active, Requested, Creation Failed/Deletion Failed, or Denied.
To cancel Oracle Services, follow the steps below.
Click the All Services > Hybrid Cloud > Oracle Services menu. You are moved to the Service Home page of Oracle Services.
Click the Oracle Services menu on the Service Home page. You are moved to the Oracle Services list page.
On the Oracle Services list page, click the resource to be terminated. You are moved to the Oracle Services Details page.
Click the Cancel Service button on the Oracle Services Details page.
When the termination is complete, check the resource termination status in the Oracle Services list.
Delete resources in Compartment
Oracle Services product termination is only possible when all resources within the user’s Compartment have been deleted.
To check and delete the remaining resources, follow the steps below.
Reference
If the resource status in the Compartment is Terminated, it is recognized as all resources being deleted. In this case, you can proceed with the termination of the Oracle Services product.
In the Oracle Cloud console, click the resource explorer at the top, then click Advanced resource query. The Resource Explorer opens.
Click Show basic search mode.
Click Search and Filter, then select Compartment.
Select the Compartment to delete.
The Compartment name starts with PROJECT-xxx and is located under the Root.
Click Apply Filter to check the resources in the Compartment to be deleted.
Delete all resources in the Compartment.
14.2.2.1 - Setting up a proxy for accessing the Oracle web console
Oracle Cloud web console used to create/retrieve/update/delete Oracle Services can only be accessed through Samsung Cloud Platform. To access the Oracle console, you must set a web browser proxy in the user’s network environment.
Oracle console access proceeds according to the following steps.
After logging into the Samsung Cloud Platform console, access Oracle Services.
Oracle Cloud is blocked from accessing the Internet and external networks, so it can only be accessed via the Samsung Cloud Platform.
To access the Oracle Cloud console, you need to set the Oracle Cloud web proxy located within the Samsung Cloud Platform in the user’s web browser.
When the user accesses the login link provided in the Samsung Cloud Platform console with a web browser, the connection goes through the proxy set in the web browser.
Figure. Proxy configuration procedure for accessing Oracle web console
Set web browser proxy according to user network environment
You can set the Oracle Cloud web proxy in the system settings’ web proxy configuration and use the Samsung Cloud Platform Oracle Service product.
Depending on the security characteristics of the user environment, you can branch the connection at the internal proxy, or you can configure the user client to add an automatic configuration file (PAC file) to allow direct connection.
Guide
When setting up a web proxy, follow company policy and consult with the network/security personnel who manage the internal proxy before connecting.
Create PAC file
You can create a PAC file to access the Oracle Cloud web console.
Notice
If you are a Samsung SDS employee, download the Developer PAC file from the following link. The branch settings for accessing the Oracle Cloud web console from the internal network are registered in the Developer PAC file.
To create a PAC file directly, write the following PAC file using an editor such as notepad, or download it and save it to disk.
Notice
If you have a separate PAC file or proxy server, update the specified PAC file and use it.
function FindProxyForURL(url, host)
{
// Local loopback: return DIRECT
if (shExpMatch(url,"*localhost*") || shExpMatch(url,"*127.0.0.1*")) return "DIRECT";
// Unresolvable host: return DIRECT
if (!isResolvable(host)) return "DIRECT";
if (shExpMatch(host,"*.ap-suwon-1.oci.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (shExpMatch(host,"*.oci.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (shExpMatch(host,"*.ap-suwon-1.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (shExpMatch(host,"*.ap-suwon-idcs-1.identity.oci.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (shExpMatch(host,"*.ap-chunchen-2.oci.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (shExpMatch(host,"*.ap-chunchen-2.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (shExpMatch(host,"*.ap-chunchen-idcs-2.identity.oci.oraclecloud35.com")) {return "PROXY 42.15.1.42:8080";} // TED
if (dnsDomainIs(host,"locales.plugins.oci.oraclecloud.com")) {return "DIRECT";}
if (dnsDomainIs(host,"ocsp.digicert.com")) {return "DIRECT";}
return "DIRECT";
}
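To sanity-check which hosts the PAC rules above route through the proxy, the `shExpMatch` glob matching can be approximated with Python's `fnmatch`. This is an illustrative sketch only, not part of the PAC mechanism: the `isResolvable` DNS check and the URL-based loopback rules are reduced to a simple host check.

```python
from fnmatch import fnmatch

PROXY = "PROXY 42.15.1.42:8080"

# Host patterns from the PAC file above that are routed through the proxy.
PROXIED_PATTERNS = [
    "*.ap-suwon-1.oci.oraclecloud35.com",
    "*.oci.oraclecloud35.com",
    "*.ap-suwon-1.oraclecloud35.com",
    "*.ap-suwon-idcs-1.identity.oci.oraclecloud35.com",
    "*.ap-chunchen-2.oci.oraclecloud35.com",
    "*.ap-chunchen-2.oraclecloud35.com",
    "*.ap-chunchen-idcs-2.identity.oci.oraclecloud35.com",
]

def find_proxy_for_host(host: str) -> str:
    """Approximate the PAC decision for a host (DNS checks omitted)."""
    if host in ("localhost", "127.0.0.1"):
        return "DIRECT"
    for pattern in PROXIED_PATTERNS:
        if fnmatch(host, pattern):
            return PROXY
    return "DIRECT"
```

Running this against a console hostname confirms whether a given URL would leave through the Oracle Cloud web proxy or go direct.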
Register PAC file on client PC
If you download the PAC file and register it on your PC, you can access the Oracle Cloud web console through Samsung Cloud Platform.
Follow the steps below to register the developer PAC file in the proxy settings.
In the Windows search bar, search for Proxy Settings and click the result.
Windows Settings > Network & Internet > Proxy click to go to the proxy settings page.
Click Automatic Proxy Settings > Edit Configuration Script on the proxy page.
Set the Use configuration script item to On and enter the generated PAC file as the script address.
Enter http://70.10.5.20/dev-pac.pac.
Click Save to complete registration.
After accessing the Samsung Cloud Platform using browsers such as Chrome, Edge, then access the Oracle Cloud Console link provided by the Samsung Cloud Platform.
When registering a PAC file on the client PC and connecting directly, the network path is as follows.
Figure. Network path when registering a PAC file on the client PC for direct connection
Setting up connection when there is a separate proxy server (Register forwarding information on the proxy server)
Depending on the user environment, there are cases where a proxy managed by the network/security administrator is already installed for security purposes. In such cases, you need to request the proxy administrator to add proxy settings for accessing the Samsung Cloud Platform Oracle service.
When requesting the proxy administrator to register forwarding information, convey the following.
If you register a forward setting on the internal proxy and connect, the network path is as follows.
Figure. Network path when registering forward settings on internal proxy and connecting
In case of a network environment that passes through other separate security equipment
If a separate security device is installed and used in addition to the proxy, you must provide the network/security administrator with the Samsung Cloud Platform Oracle Cloud web console connection diagram and proceed with the connection configuration work.
Ask the internal network/security administrator about proxy settings.
Apply for firewall registration
If a firewall is installed in the customer’s network environment for external access control, you must register the proxy server on the firewall to use Samsung Cloud Platform.
When applying for firewall registration, apply the information below.
Source IP: User PC IP Address
Destination IP: Oracle Cloud web proxy IP Address (42.15.1.42)
Protocol/port: TCP/8080
14.2.3 - Release Note
Oracle Services
2025.12.16
NEW Oracle Services Official Version Release
To support workloads based on Oracle S/W, we physically configure OCI (Oracle Cloud Infrastructure) resources within Samsung SDS data center and provide OCI services.
15 - Release note
16 - Glossary
Learn everything from cloud terminology to Samsung Cloud Platform terms quickly and easily.
FEATURE Standard Image addition, SSD_Provisioned disk type addition, and ServiceWatch metric monitoring feature addition
OS Image addition provision
Standard Image has been added. (Windows Server 2016)
SSD volume with configurable IOPS and Throughput has been added.
You can select SSD_Provisioned disk type when creating Block Storage.
You can set IOPS and Throughput maximum values.
You can view Virtual Server ServiceWatch metric monitoring graphs on the detail page.
2025.12.16
FEATURE Virtual Server feature addition
Additional OS Images provided
Standard Image has been added. (Alma Linux 9.6, Oracle Linux 9.6, RHEL 9.6, Rocky Linux 9.6)
New Server Group policy addition
Partition (Virtual Server and Block Storage distributed placement) policy has been added.
You can collect custom metrics and logs by installing Virtual Server ServiceWatch Agent.
2025.10.23
FEATURE Server name change feature addition and ServiceWatch service integration provision
You can change the server name on the Virtual Server detail page of Samsung Cloud Platform Console.
When changing the server name, only the information in Samsung Cloud Platform Console is changed, not the OS’s Hostname.
ServiceWatch service integration provided
You can monitor data through ServiceWatch service.
2025.07.01
FEATURE Virtual Server feature addition and Image sharing method change
Virtual Server feature addition
IP, Public NAT IP, Private NAT IP configuration feature has been added.
LLM Endpoint for using LLM is provided.
You can select OS Image subscribed from Marketplace when creating Virtual Server.
2nd generation server type has been added.
2nd generation (s2) server type based on the Intel 4th generation (Sapphire Rapids) processor has been added. For details, refer to Virtual Server Server Type.
Image sharing method between Accounts has been changed.
You can share by creating a new qcow2 Image or Image for sharing.
2025.02.27
FEATURE NAT configuration feature, OS Image, and server type additions
Virtual Server feature addition
NAT configuration feature has been added in Virtual Server.
Additional OS Images provided
Standard Image has been added. (Alma Linux 8.10, Alma Linux 9.4, Oracle Linux 8.10, Oracle Linux 9.4, RHEL 8.10, RHEL 9.4, Rocky Linux 8.10, Rocky Linux 9.4, Ubuntu 24.04)
Image for Kubernetes has been added. You can create Kubernetes Engine using Image for Kubernetes.
2nd generation server type addition
2nd generation (h2) server type based on the Intel 4th generation (Sapphire Rapids) processor has been added. For details, refer to Virtual Server Server Type.
Samsung Cloud Platform common function change
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW Virtual Server service official version release
Virtual Server service has been officially released.
A virtualized server has been released that lets you allocate and use as much as you need, when you need it, without purchasing infrastructure resources individually.
2024.07.02
NEW Beta version release
A virtualized server has been released that lets you allocate and use as much as you need, when you need it, without purchasing infrastructure resources individually.
Virtual Server Auto-Scaling
2025.07.01
FEATURE New features added
Added notification feature to Virtual Server Auto-Scaling.
You can add notification settings in the Auto-Scaling Group creation or detail screen.
You can set the scaling policy when creating an Auto-Scaling Group.
You can set the Draining Timeout when connecting to the Load Balancer.
An Auto-Scaling Group can be connected to up to 50 Virtual Server instances, and to up to 3 LB server groups and ports.
2025.02.27
FEATURE Virtual Server Auto-Scaling and Load Balancer service linkage released, NAT setting feature added
Virtual Server Auto-Scaling feature change
Released in conjunction with the Load Balancer service released in February 2025.
NAT setting feature has been added to Auto-Scaling Group.
Samsung Cloud Platform common feature changes
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.11.19
NEW Virtual Server Auto Scaling Service Official Version Release
Virtual Server Auto-Scaling creates and manages Auto-Scaling Groups through a Launch Configuration, and monitors and manages the servers.
It provides a schedule method that sets the desired number of servers at fixed times, and a policy method that adjusts the number of servers based on CPU utilization.
GPU Server
2025.10.23
FEATURE Add new features and provide ServiceWatch service integration functionality
ServiceWatch service integration provided
You can monitor data through the ServiceWatch service.
You can select a RHEL image when creating a GPU Server.
Keypair management feature has been added.
You can create a keypair to use, or retrieve a public key and apply it.
2025.07.01
FEATURE GPU Server feature addition, Image sharing method change and GPU Server usage guide addition
GPU Server feature addition
IP, Public NAT IP, Private NAT IP configuration feature has been added.
LLM Endpoint is provided for LLM usage.
The method of sharing images between accounts has been changed.
GPU Server RHEL OS and GPU driver version have been added.
2025.02.27
FEATURE Common Feature Change
GPU Server feature addition
NAT setting feature has been added to GPU Server.
Samsung Cloud Platform Common Feature Change
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW GPU Server Service Official Version Release
GPU Server service has been officially launched.
A virtualized computing service has been launched that lets you allocate and use server infrastructure resources such as CPU, GPU, and memory as needed, when needed, without purchasing them individually.
Bare Metal Server
2026.03.19
CHANGE New Server Type Added
BM 4th generation based on Intel 6th generation (Granite Rapids) Processor has been released.
Bare Metal Server s4 and h4 server types have been added.
You can now create and use up to 10 local disk partitions.
2025.07.01
FEATURE New Features Added and OS Images Added
You can release multiple resources simultaneously from the Bare Metal Server list.
You can change the IP of a general Subnet.
OS Images have been added.
RHEL 8.10, Ubuntu 24.04
2025.02.27
FEATURE Placement Group Feature and OS Images, Server Types Added
Bare Metal Server features added
Distributes servers belonging to the same Placement Group across different racks.
OS Images added (RHEL 9.4, Rocky Linux 8.6, Rocky Linux 9.4)
3rd generation (s3/h3) server types based on Intel 4th generation (Sapphire Rapids) Processor added. For details, please refer to Bare Metal Server Server Types.
Samsung Cloud Platform common feature changes
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW Bare Metal Server Service Official Version Release
Bare Metal Server service has been officially released.
It allows customers to exclusively use physical servers without virtualization.
Multi-node GPU Cluster
2025.07.01
FEATURE New feature added and monitoring linked
You can cancel multiple resources at the same time from the GPU Node list.
The nodes must use the same DataSet and Cluster Fabric.
It has been linked with Cloud Monitoring.
You can check major performance items in real-time in Cloud Monitoring.
2025.02.27
NEW Multi-node GPU Cluster Service Official Version Release
Multi-node GPU Cluster service has been launched.
Provides a service that offers physical GPU servers without virtualization for large-scale high-performance AI computing.
Cloud Functions
2025.12.16
FEATURE AIOS, PrivateLink service integration
You can use functions in conjunction with the AIOS service.
Cloud Functions can be linked with AIOS to utilize LLM.
You can use functions in conjunction with the PrivateLink service.
Through a private connection (PrivateLink), you can connect Samsung Cloud Platform VPCs to each other, and VPCs to services, without going through the Internet.
The feature to upload Java Runtime executable files has been added.
You can configure a Java Runtime executable archive file (.jar/.zip) fetched from Object Storage.
2025.07.01
NEW Cloud Functions Service Official Version Release
Cloud Functions service has been officially launched.
Cloud Functions is a serverless FaaS (Function as a Service) that easily runs function-style applications without the need for server provisioning.
Virtual Server DR
2025.07.01
NEW Official Launch of Virtual Server DR Service
Virtual Server DR service has been officially released.
When the system is interrupted by disasters or other risk factors, it can be restored to normal operation in a short period of time.
Block Storage
2025.07.01
FEATURE Snapshot Billing Policy Change and Monitoring Linkage
Snapshots are now charged based on the size of the original Block Storage.
It has been linked with Cloud Monitoring.
You can check IOPS, Latency, Throughput information in Cloud Monitoring.
2025.02.27
FEATURE Block Storage disk type added
Block Storage feature change
The HDD disk type has been added, and you can select the added type (HDD, HDD_MultiAttach, HDD_KMS) according to the purpose.
Samsung Cloud Platform common feature changes
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW Block Storage Service Official Version Release
SSD_KMS disk type has been added.
When SSD_KMS is selected, encryption is applied using a KMS (Key Management Service) encryption key.
Released a high-performance storage service suitable for handling large-scale data and database workloads.
2024.07.02
NEW Beta version release
Released a high-performance storage service suitable for handling large-scale data and database workloads.
Block Storage(BM)
2025.12.16
FEATURE IOPS, Throughput setting feature added
You can set the volume performance metrics (IOPS, Throughput) and edit them on the detail page.
IOPS: 3,000 ~ 16,000
Throughput: 125 ~ 1,000
No separate charges during the preview period (billing planned for the first half of 2026).
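The documented ranges above can be validated client-side before submitting a volume change, so that out-of-range requests fail fast. This is an illustrative sketch under stated assumptions: the function name is hypothetical, and the throughput figures are used as documented (the unit is not stated in the release note).

```python
# Documented Block Storage (BM) volume performance ranges (from the release note).
IOPS_MIN, IOPS_MAX = 3_000, 16_000
TP_MIN, TP_MAX = 125, 1_000  # throughput range; unit as documented

def validate_volume_perf(iops, throughput):
    """Raise ValueError if the requested settings fall outside the documented ranges.

    Hypothetical helper for pre-validating a volume performance request;
    the actual service performs its own server-side validation.
    """
    if not IOPS_MIN <= iops <= IOPS_MAX:
        raise ValueError(f"IOPS must be {IOPS_MIN}-{IOPS_MAX}, got {iops}")
    if not TP_MIN <= throughput <= TP_MAX:
        raise ValueError(f"Throughput must be {TP_MIN}-{TP_MAX}, got {throughput}")
```

For example, `validate_volume_perf(8000, 500)` passes silently, while a request above 16,000 IOPS raises before any API call is made.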
2025.10.23
FEATURE Snapshot recovery creation feature added
A feature to create a recovery copy (Recovery) from snapshots has been added.
A recovery copy is a separate volume created with the same capacity as the original, and additional costs are incurred.
2025.07.01
FEATURE New feature addition and monitoring integration
HDD Disk type has been added. When creating Block Storage (BM), HDD disk can be selected.
Provides an IaC environment through Terraform.
You can use the snapshot feature on a replica volume.
Cloud Monitoring has been linked.
You can view IOPS, Latency, and Throughput information in Cloud Monitoring.
2025.02.27
FEATURE Add replication and Volume Group feature
Block Storage(BM) Feature Change
Block Storage(BM) Replication feature that allows volumes to be replicated to another location has been added.
The Volume Group feature has been added, allowing you to set up to 16 Block Storage (BM) volumes as a group to create snapshots and replication at a consistent point in time.
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW Block Storage(BM) Service Official Version Release
Launched a high-performance storage service suitable for handling large-scale data and database workloads.
File Storage
2025.10.23
FEATURE Disk type added and ServiceWatch integration provided
The SSD SAP_E disk type, a volume dedicated to SAP Accounts, has been added.
If you use an SAP Account dedicated volume and a failure causes a failover, the LIF (storage mount IP) is transferred automatically.
Can only be used in SAP Account.
ServiceWatch service integration provided
You can monitor data through the ServiceWatch service.
2025.07.01
FEATURE Add disk type and disk backup feature
High-performance SSD disk type has been added, and can be used by connecting to a Multi-node GPU Cluster.
Through the Disk Backup function, you can store snapshots in backup-dedicated HDD Storage, and you can select a location other than the original.
2025.02.27
FEATURE Add disk type and replication, VPC Endpoint connection feature
File Storage feature change
SSD disk type has been added, allowing you to select the disk type according to the purpose.
You can create an identical replica volume at a different location and set the data replication cycle.
Through a VPC Endpoint connection, you can use File Storage from an external network.
Samsung Cloud Platform Common Feature Change
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW File Storage Service Official Version Release
Because it automatically expands or shrinks based on usage, users can use the volume without capacity limits.
You can select the connection target through the access control function.
2024.07.02
NEW Beta version release
We have launched the File Storage service, a storage that allows multiple client servers to share files via network connection.
Object Storage
2025.10.23
FEATURE Replication and file copy features added, and Cloud Functions service integration
Object Storage’s replication feature has been added.
You can perform replication to a bucket in a different location or the same location, and you can set multiple replication policies.
File copy feature has been added.
You can copy the desired file within the same bucket and folder.
Cloud Functions service has been added to access control.
You can upload Java Runtime executable files in Cloud Functions.
2025.07.01
FEATURE Add access server resources and Presigned URL feature
A server resource target product has been added to Object Storage access control.
Multi-node GPU Cluster, PostgreSQL, MariaDB, MySQL, EPAS, Microsoft SQL Server
Presigned URL has been added.
You can download the file using a Presigned URL for the set period of time.
You can perform CopyObject on encrypted files.
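Since Object Storage is Amazon S3 compatible, a presigned URL carries its issue time and lifetime in standard query parameters, so a client can check expiry without contacting the server. The sketch below assumes SigV4-style query authentication (`X-Amz-Date`, `X-Amz-Expires`); the function name is illustrative, not part of any SDK.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_url_expired(url, now=None):
    """Check whether an S3-style (SigV4 query auth) presigned URL has expired.

    Reads X-Amz-Date (issue time, UTC) and X-Amz-Expires (lifetime in
    seconds), the standard parameters carried by presigned URLs.
    """
    params = parse_qs(urlparse(url).query)
    issued = datetime.strptime(
        params["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    lifetime = timedelta(seconds=int(params["X-Amz-Expires"][0]))
    now = now or datetime.now(timezone.utc)
    return now > issued + lifetime
```

This kind of check is useful for deciding whether a cached presigned URL can still be handed out, or whether a fresh one must be generated.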
2025.04.28
FEATURE Amazon S3 version added
Additional versions of the Amazon S3 SDK and Amazon S3 CLI that can be used have been added.
2025.02.27
FEATURE VPC Endpoint connection feature added
Object Storage feature change
VPC Endpoint can be used to access Object Storage from external networks.
Samsung Cloud Platform Common Function Change
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.10.01
NEW Object Storage Service Official Version Release
Launched an object storage service that makes data storage and retrieval easy.
2024.07.02
NEW Beta version release
We have launched Object Storage, a service that provides a space (bucket) to economically store large amounts of data.
Archive Storage
2025.10.23
FEATURE Version control feature added
You can manage archived folders or files by version.
The user can view archived folders or files by version and select the desired version to delete or restore.
2025.07.01
NEW Archive Storage Service Official Version Release
Archive Storage service has been launched.
Data stored in Object Storage is automatically transferred to Archive Storage, and can be easily recovered when needed.
Backup
2025.12.16
FEATURE Replication tab and additional features added
Backup replication tab added
In the Backup detail page, you can view the original and replica information in the Replication tab.
Download feature for related backup information added
In the Backup detail page, you can download the Backup history, recovery target, and recovery history list as an Excel file.
2025.07.01
FEATURE Recovery location expanded
Recovery scope expanded
When restoring with an agentless remote backup copy, you can select the restore location (target server or backup copy location).
2025.04.28
FEATURE Backup location and target expanded
Backup location and target expanded
Agentless-based remote backup: You can backup and restore to a different location from the backup target server.
Backup Agent feature added: By configuring the Agent, you can back up the filesystem of a Bare Metal Server.
2025.02.27
FEATURE Common Feature Change
Samsung Cloud Platform Common Feature Change
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2024.12.23
NEW Backup service official version launch
We launched the Backup service to provide a service that safely backs up and restores data.
Since backup policies include targets, frequency, retention period, etc., users can set backup plans according to their business environment and requirements.
Parallel File Storage
2025.12.16
NEW Parallel File Storage Official Version Release
Parallel File Storage service has been officially launched.
File data can be distributed across multiple storage nodes to process large-scale data quickly and efficiently.
Through fast data processing speed and reduced analysis time, it can be used in various fields such as AI/ML analysis and big data analysis.
Kubernetes Engine
2026.03.19
FEATURE Kubernetes version added, GPU VM custom image provided, k8s/OS version EoTS management logic, node pool OS image EOS handling and upgrade default setting, Terraform kubeconfig no longer provided, type: LB setting improvements
Kubernetes Engine feature changes
Supports Kubernetes v1.34 version.
Provides GPU VM custom image for node pools.
Provides EoTS management logic and display function for cluster and node pool k8s versions and node pool OS versions.
Provides OS selection dropdown feature when upgrading node pools.
For type: LB, an L7 listener idle-timeout has been added, and the session-duration-time default value has been changed and improved.
The kubeconfig feature is no longer provided in Terraform.
2025.12.18
FEATURE Kubernetes version added, node pool GPU Driver version display, MNGC node support (SR), node pool default disk maximum capacity changed, node pool validation added and supplemented
Kubernetes Engine feature changes
Supports Kubernetes v1.33 version.
Provides GPU Driver version information for node pool GPU nodes.
Provides MNGC nodes via SR (Service Request).
The maximum Block Storage capacity for node pool OS disks has been changed from 1 TB to 12 TB, matching VM products.
Additional validation has been added for label keys when creating/modifying node pools, and for the unsupported server group setting on GPU node pools.
2025.10.23
FEATURE Kubernetes version added, node pool advanced setting feature, node pool server group setting, ServiceWatch integration, UserKubeconfig download, OS version consideration node pool upgrade supplemented
Kubernetes Engine feature changes
Supports Kubernetes v1.32 version.
Provides node pool advanced setting feature.
Provides node pool server group (Affinity or Anti-affinity) setting feature.
A user Kubeconfig download feature has been added alongside the administrator Kubeconfig download button.
Provides additional upgrade logic considering OS version when upgrading node pools.
Provides log collection feature based on ServiceWatch integration.
2025.07.01
FEATURE Kubernetes version added, public endpoint provision, private endpoint access control target added, node pool Label/Taint, Block Storage CSI, kubectl login plugin added
Kubernetes Engine feature changes
Supports Kubernetes v1.31 version.
Provides public endpoint for the cluster.
Adds MNGC(Baremetal) product and DevOps Service product to private endpoint access control targets for the cluster.
Provides node pool Label and Taint setting feature.
Provides Block Storage CSI and kubectl login plugin features.
Provides private endpoint and access control features.
Provides type: LoadBalancer feature.
2025.02.27
FEATURE Kubernetes version added and Kubernetes version upgrade, Custom Image, GPU node creation feature added
Kubernetes Engine feature changes
Supports Kubernetes v1.30 version.
Provides Kubernetes version upgrade feature for cluster and node pools.
Provides Multi-Security Group feature.
Provides Custom Image node and GPU node creation feature.
Samsung Cloud Platform common feature changes
Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.10.01
NEW Kubernetes Engine service official version release
Released Kubernetes Engine product that provides lightweight virtual computing Containers and Kubernetes clusters for managing them.
Creates Container nodes and manages them through the cluster to enable deployment of various Container applications.
2024.07.02
NEW Beta version release
Released Kubernetes Engine product Beta version.
Container Registry
2026.03.19
FEATURE OCI Distribution Spec. compatibility secured, image vulnerability check feature expanded
Container Registry feature changes
The user registry has been improved by securing OCI (Open Container Initiative) Distribution Spec v1.1.1 compatibility.
The OS and language types subject to container image vulnerability checks have been expanded.
2025.12.18
FEATURE Image tag deletion policy added, Public Endpoint access control IP Validation improved
Container Registry feature changes and improvements
Added image tag deletion policy feature based on count criteria.
Improved Public Endpoint access control IP input value Validation according to Firewall product IP range constraints.
2025.10.23
FEATURE Image tag deletion policy activation item added, ServiceWatch integration supported
Container Registry feature changes
Provided deletion policy activation setting feature for image tag deletion items.
Provided log collection feature based on ServiceWatch integration.
2025.07.01
FEATURE Self-encryption / S3 API compatible bucket-based Container Registry, public endpoint provision, private endpoint access control target added, Image Life Cycle Policy supported
Container Registry feature changes
Provided Container Registry service based on Object Storage with self-encryption / S3 API compatibility issue patches applied.
Provided public endpoint and access control features for the registry.
Added Multi-Node GPU Cluster product to private endpoint access control targets for the registry.
Provided automatic deletion policy setting feature for Repository and stored Images and their tags(digests).
2025.02.27
FEATURE Image Lock feature and monitoring, VPC Endpoint integration added
Container Registry feature changes
Provided Lock feature for Images stored in the registry.
Provided monitoring feature for the registry through integration with Cloud Monitoring product.
Provided integration with VPC Endpoint.
Samsung Cloud Platform common feature changes
Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.11.28
NEW Container Registry service temporary version release
Container Registry is a service that provides a registry and repository for easy storage, management, and sharing of container images and OCI(Open Container Initiative) standard artifacts.
Released as a temporary version, and migration to the official version is planned when the encryption scheme is updated.
VPC
2026.03.19
FEATURE VPC New Features Added
VPC IP Range Addition Feature
You can add and use a new IP range to the VPC.
Virtual IP Feature
You can reserve and use a Virtual IP in a Subnet.
Private NAT Feature Improvement
You can now use Private NAT in Transit Gateway as well.
2025.10.23
FEATURE PrivateLink Feature Added
You can connect via a private path between the VPC and SCP services without exposing internal Samsung Cloud Platform data to the internet.
2025.07.01
FEATURE Transit Gateway and Other New Features Added
Transit Gateway Feature
Easily connects customer networks and Samsung Cloud Platform’s networks and acts as a connection hub for multiple VPCs within the cloud environment.
VPC Peering Feature
Allows IP communication via 1:1 private routes between VPCs.
Private NAT Feature
Compute resources within a VPC can connect by mapping customer network IPs using Direct Connect.
2025.02.27
FEATURE VPC Endpoint Service Added
VPC Feature
Provides an endpoint (entry point) that allows access to Samsung Cloud Platform through a private connection from an external network connected to the VPC.
Samsung Cloud Platform Common Feature Changes
Reflected common CX changes such as Account, IAM, Service Home, and tags.
2024.12.23
FEATURE NAT Log Storage Feature Added
Added the ability to store NAT logs.
You can decide whether to store NAT logs and store logs in Object Storage.
2024.10.01
NEW VPC Service Official Version Release
VPC service providing independent virtual network spaces has been released.
2024.07.02
NEW Beta Version Release
VPC service providing independent virtual network spaces has been released.
Security Group
2026.03.19
FEATURE Security Group Feature Improvement
Can select multiple service ports when adding Security Group rules
Improved to allow selecting multiple service ports when adding rules in the Console.
2025.07.01
FEATURE Security Group Rule Input Method Addition
Security Group rule input method added
Added the ability to enter IP protocol.
Added the ability to select well-known protocols.
2025.02.27
FEATURE Common Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes such as Account, IAM and Service Home, and tags.
2025.02.27
CHANGED Security Group Feature Improvement
Improved to allow entering multiple IPs when adding Security Group rules.
2024.12.23
FEATURE Security Group Log Storage Feature Added
Added the ability to store Security Group logs.
Can determine whether to store Security Group logs and store logs in Object Storage.
2024.10.01
NEW Security Group Service Official Version Release
Released the Security Group service that provides virtual firewall functionality for instance resources.
Can control inbound and outbound traffic occurring in instance resources through the Security Group service.
2024.07.02
NEW Beta Version Release
Released the Security Group service that provides virtual firewall functionality for instance resources.
Can control inbound and outbound traffic occurring in instance resources through the Security Group service.
Load Balancer
2025.12.16
FEATURE LB health check settings changed; LB health check and LB server group options added
LB health check port configuration method has been changed.
You can choose between member port/direct input, and if you select direct input, specify the port to use.
Existing LB health checks are changed to member ports. (Same as the current health check method)
HTTPS option has been added to the LB health check protocol.
You can monitor the server TLS connection status.
When using URL redirection on the HTTP Listener, you can specify the target port for the redirection.
You can add Multi-node GPU Cluster resources to LB server group members.
2025.10.23
FEATURE Load Balancer Feature Added
You can set the Source NAT IP and health check IP when creating a Load Balancer.
TLS protocol has been added to L4 Listener.
You can configure TLS services based on TCP.
Routing rule option has been added to L7 Listener.
Routing conditions allow setting URL path or host-specific branching.
Supports multiple SSL certificates.
Supports SNI, allowing multiple certificates to be registered on a single Listener.
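The host- and URL-path-based branching described above can be pictured as a first-match walk over routing rules. The rule schema below is purely illustrative (an assumption for the sketch), not the console's actual rule format:

```python
def match_route(rules, host, path):
    """Return the target of the first L7 routing rule whose conditions match.

    `rules` is an ordered list of dicts with optional "host" and
    "path_prefix" conditions and a "target" server group name; a rule
    with no conditions acts as a catch-all. Illustrative schema only.
    """
    for rule in rules:
        if "host" in rule and rule["host"] != host:
            continue
        if "path_prefix" in rule and not path.startswith(rule["path_prefix"]):
            continue
        return rule["target"]
    return None  # no rule matched
```

For example, with a host rule for `api.example.com`, a `/static/` path rule, and a catch-all, a request is sent to the first server group whose conditions it satisfies, in rule order.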
2025.07.01
FEATURE LB health check and LB server group features added
LB health check management feature added
Create an LB health check to define the required health check method and connect it to an LB server group for use.
LB server group weighted load balancing support
Weighted Round Robin and Weighted Least Connection have been added to the load balancing options.
By setting per-member weights, you can distribute server load.
LB server group member activation feature added
You can select whether to enable or disable members belonging to the LB server group.
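The Weighted Round Robin option above can be sketched as cycling through members in proportion to their weights; this is an illustration of the concept only, not the Load Balancer's actual implementation:

```python
import itertools

def weighted_round_robin(members):
    """Yield member names in proportion to their weights, repeating forever.

    `members` is a list of (name, weight) pairs; a member with weight 2
    receives twice as many turns per cycle as a member with weight 1.
    """
    expanded = [name for name, weight in members for _ in range(weight)]
    return itertools.cycle(expanded)
```

A production balancer would typically interleave turns more smoothly (e.g. smooth weighted round robin) rather than grouping a member's turns together, but the per-cycle proportions are the same.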
2025.02.27
NEW New Load Balancer Service Launch
A Load Balancer service that provides more stable and enhanced features has been launched.
Provides an L7 Load Balancer that supports HTTP, HTTPS protocols.
Provides an L4 Load Balancer that supports TCP, UDP protocols.
DNS
2026.03.19
FEATURE DNS Feature Improvements
In conjunction with the Service Watch service, you can view measurements for the following 5 items.
Number of server error responses (unit: count)
Number of NXDOMAIN responses (unit: count)
Number of queries not responded to within 1 second (unit: count)
Number of outgoing UDP queries (unit: count)
Number of UDP-based data requests processed (unit: count)
2025.12.16
FEATURE Added Public Domain Name Transfer Feature Between User Accounts
Public Domain Names registered through Samsung Cloud Platform can be transferred to other user accounts within the allowed period.
2025.07.01
NEW DNS Service Official Version Release
Officially released DNS service available in private network and internet environments. You can manage Private DNS and Private Hosted Zone targeting limited networks, and apply for Public Domain Name registration for internet environment and manage Public Hosted Zone.
2024.07.02
NEW Beta Version Release
Beta released DNS service that provides new domain registration application and management functions based on user requests.
VPN
2025.10.23
FEATURE Change in the number of additional remote site subnets for VPN Tunnel
You can enter up to 10 remote subnets (CIDR).
2024.02.27
NEW Official Release of VPN Service
A VPN service has been released that connects the customer network and Samsung Cloud Platform through an encrypted (IPSec) virtual private network.
Firewall
2026.03.19
FEATURE Firewall rule management structure change
For user convenience, pages for Firewall rule input and modification/deletion have been added. You can perform desired operations by moving to a separate page when managing Firewall rules.
2025.10.23
FEATURE Firewall rule input method added
Firewall rule input method added
In KR WEST and KR EAST regions, you can enter the destination address in FQDN (Fully Qualified Domain Name) format.
2025.07.01
FEATURE Firewall rule input method added
Firewall rule input method added
A function to enter the IP protocol has been added.
2025.02.27
FEATURE Load Balancer-Firewall feature added
Firewall feature added
You can use Firewall in the Load Balancer service.
Samsung Cloud Platform common feature changes
Common CX changes for Account, IAM, Service Home, tags, etc. have been reflected.
2024.12.23
FEATURE Firewall log storage feature added
A function to store Firewall logs has been added.
You can decide whether to store Firewall logs and store logs in Object Storage.
2024.10.01
NEW Firewall service official version release
You can control inbound and outbound traffic occurring in VPC through the Firewall service.
2024.07.02
NEW Beta version release
The Firewall service has been released.
Direct Connect
2025.02.27
NEW Common Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.10.01
NEW Direct Connect Service Official Release
Launching Direct Connect service, which quickly and securely connects customer networks and Samsung Cloud Platform networks.
Cloud LAN Campus
2025.07.01
NEW Cloud LAN Campus_Enterprise Service Official Version Release
We have launched the Cloud LAN Campus service, which provides authentication-based wired and wireless integrated network services within the customer’s business site.
Cloud LAN-Data Center
2025.07.01
NEW Cloud LAN-Data Center common feature changes
Samsung Cloud Platform common feature change
Common CX changes for Account, IAM and Service Home, tags, etc. have been reflected.
2025.02.27
NEW Cloud LAN-Data Center Service Official Launch
We have launched the Cloud LAN-Data Center service, which provides connections between various networks through virtual network configuration within the data center.
Cloud WAN
2025.07.01
NEW Cloud WAN Service Official Version Release
Samsung Cloud Platform launched Cloud WAN service, providing network connections between global regions and customer bases.
SASE
2026.03.19
FEATURE SASE service ledger creation automation
The automatic ledger creation feature has been added through the Samsung Cloud Platform user console.
2025.07.01
NEW SASE Service Official Version Release
We have launched a SASE service that combines network and security functions into a single cloud-based service platform.
Cloud Last Mile
2026.03.19
FEATURE Cloud Last Mile Service Ledger Creation Automation
An automatic ledger creation feature has been added to the Samsung Cloud Platform user console.
2025.07.01
NEWCloud Last Mile Service Official Version Release
We have launched the Cloud Last Mile service that provides Last Mile lines for network connection from the customer’s site to the Samsung Cloud Platform region and Customer Edge resources within the customer’s site.
Global CDN
2026.03.19
FEATUREGlobal CDN feature improvement
In conjunction with the ServiceWatch service, you can check measurement values for the following two items.
Check Global CDN status
Check Global CDN processed data volume
Data from 30 minutes ago is retrieved due to external CDN network traffic processing time.
2025.07.01
NEWGlobal CDN service official version release
We have released the Global CDN service, which transmits static content stored on web servers or object storage to users faster and more securely through edge servers distributed across the global network.
GSLB
2025.12.16
FEATURERegional Routing Controller Service Added
You can control, per region, whether traffic is routed through GSLB.
2025.07.01
NEWGSLB Service Official Version Released
We have released the GSLB service that can automatically distribute network traffic to adjacent regions on a DNS basis when traffic increases in a specific global region, providing stable service.
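As an illustration of the DNS-based routing described above, the sketch below models region selection in plain Python — the region table, health flags, and latencies are all hypothetical, not the actual GSLB implementation.

```python
# Illustrative sketch (not the actual GSLB implementation): DNS-based
# traffic steering answers with a healthy region close to the client,
# falling back to an adjacent region when the preferred one is unhealthy.

# Hypothetical region table: region -> (health-check result, client latency in ms)
REGIONS = {
    "kr-west1": {"healthy": False, "latency_ms": 5},
    "kr-east1": {"healthy": True, "latency_ms": 12},
    "sg-central1": {"healthy": True, "latency_ms": 70},
}

def resolve_region(regions):
    """Return the healthy region with the lowest latency, as a GSLB
    DNS answer would, skipping regions that fail health checks."""
    healthy = {name: info for name, info in regions.items() if info["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy region available")
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(resolve_region(REGIONS))  # the unhealthy kr-west1 is skipped
```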
Cloud Virtual Circuit
2025.09.08
NEWCloud Virtual Circuit Service Official Version Release
Cloud Virtual Circuit service has been officially launched.
Users can apply for a 1:1 virtual circuit, based on line bandwidth, between global Samsung Cloud Platform regions or the customer's hubs.
Private 5G Cloud
2025.09.08
NEWPrivate 5G Cloud Service Release
A Private 5G Cloud product that provides 5G services to customers based on the Samsung Cloud Platform has been launched.
PostgreSQL(DBaaS)
2026.03.19
FEATUREAdded OS(Kernel) Upgrade Function
The OS(Kernel) upgrade function applies the latest security patches and improves stability.
2025.12.16
FEATUREAdded Disaster Recovery Replica Configuration Function
You can configure disaster recovery replicas through the Replica Configuration (Other Region) function.
2025.07.01
FEATUREAdded User (Access Control) Management, Archive Setting Function, DB Audit Log Export Function, Backup Notification Function, Migration Function
PostgreSQL(DBaaS) feature additions
2nd Generation Server Type added
Added 2nd generation (db2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, see PostgreSQL(DBaaS) Server Type
DB User and Access Control Management and Archive Setting Function added
Backup Notification Feature provided
Provides notification feature for backup success and failure. For more information, see Creating Notification Policy
Migration feature added
Provides non-disruptive data migration feature based on Replication. For more information, see Configuring Migration
Added HDD, HDD_KMS types to Block Storage type
MariaDB(DBaaS)
2025.02.27
FEATUREServer Type Added and Per Server IP Setting, Block Storage Capacity Expansion Feature Added
MariaDB(DBaaS) feature changes
Added 2nd generation server type
Added 2nd generation (dbh2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For more information, see MariaDB(DBaaS) Server Type
After service creation, Block Storage capacity expansion is possible.
A per-server network IP setting feature has been added, allowing common or per-server settings depending on the usage purpose.
Samsung Cloud Platform common feature changes
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.10.01
NEWMariaDB(DBaaS) Service Official Version Released
Added volume encrypted storage selection option to Block Storage type.
Added function to Switch Role (Active ↔ Standby) of Active DB and Standby DB configured in redundancy.
Integrated with Cloud Monitoring Service to enable DB instance performance and log monitoring.
Planned Compute policy setting is available according to the server type selected by the customer.
2024.07.02
NEWBeta Version Released
Released MariaDB(DBaaS) service that allows easy creation and management of MariaDB in a web environment.
MySQL(DBaaS)
2026.03.19
FEATUREDisaster recovery Replica configuration and OS(Kernel) upgrade functions added, ServiceWatch integration provided
You can configure a disaster recovery Replica through the Replica configuration (Other Region) function.
The OS(Kernel) upgrade function applies the latest security patches and improves stability.
You can monitor metrics and logs through integration with ServiceWatch.
2025.07.01
FEATUREUser(access control) management, Archive setting function added, DB Audit Log export function added, backup notification function provided, Migration function added
MySQL(DBaaS) function additions
2nd generation server type added
Added 2nd generation (db2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, refer to MySQL(DBaaS) Server Type
DB user and access control management and Archive setting function added
Backup notification function provided
Provides notification function for backup success and failure. For details, refer to Create Notification Policy
Migration function added
Provides non-stop data migration function based on Replication. For details, refer to Configure Migration
Added HDD, HDD_KMS types to Block Storage type
2025.02.27
FEATUREServer type added and per-server IP setting, Block Storage capacity expansion function added
MySQL(DBaaS) function changes
2nd generation server type added
Added 2nd generation (dbh2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For details, refer to MySQL(DBaaS) Server Type
After service creation, Block Storage capacity expansion is possible.
A per-server network IP setting function has been added to allow common or per-server settings depending on the usage purpose.
Samsung Cloud Platform common function changes
Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.10.01
NEWMySQL(DBaaS) service official version release
The MySQL(DBaaS) service has been released, allowing easy creation and management of MySQL in a web environment.
Microsoft SQL Server(DBaaS)
2025.07.01
FEATUREUser (Access Control) Management, DB Audit Log Export Function Added, Backup Notification Function Provided
Microsoft SQL Server(DBaaS) feature added
2nd generation server type added
Intel 4th generation (Sapphire Rapids) processor-based 2nd generation (db2) server type added. For more information, see Microsoft SQL Server (DBaaS) server type
Backup Notification Feature provided
Provides notification features for backup success and failure. For more information, see Creating a Notification Policy
Added HDD, HDD_KMS types to Block Storage type
2025.02.27
NEWMicrosoft SQL Server(DBaaS) Service Official Version Release
A Microsoft SQL Server (DBaaS) service that allows you to easily create and manage Microsoft SQL Server in a web environment has been released.
CacheStore(DBaaS)
2026.03.19
FEATUREMinor Version Upgrade Feature Added
Provides stable service continuity through the minor version upgrade feature.
Support for the open-source Valkey image, a fork of Redis OSS
2nd Generation Server Type added
Added 2nd generation (db2) server type based on Intel 4th generation (Sapphire Rapids) Processor. For more information, see CacheStore(DBaaS) Server Type
Backup notification feature provided
Provides notification feature for backup success and failure. For more information, see Creating Notification Policy
Added HDD, HDD_KMS types to Block Storage type
2025.02.27
FEATURECommon Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.10.01
NEWCacheStore(DBaaS) Service Official Version Released
Changed the service name to CacheStore(DBaaS).
Added volume encrypted storage selection option to Block Storage type.
Added Role Switch (Active ↔ Standby) function for Active DB and Standby DB configured in redundancy.
Integrated with Cloud Monitoring Service to enable DB instance performance and log monitoring.
Planned Compute policy setting is available according to the server type selected by the customer.
2024.07.02
NEWBeta Version Released
Released Redis(DBaaS) service that allows easy creation and management of Redis OSS in a web environment.
Event Streams
2025.07.01
FEATURETerraform and Disk Type Addition
Terraform support is provided.
HDD and HDD_KMS disk types are also provided.
2025.02.27
NEWEvent Streams Service Official Version Release
An Event Streams service that easily creates and manages Apache Kafka clusters in a web environment has been released.
Search Engine
2025.07.01
FEATURENew feature, Terraform and disk type added
OpenSearch 2.17.1 is newly provided.
Terraform support is provided.
HDD and HDD_KMS disk types are also provided.
2025.02.27
NEWSearch Engine Service Official Version Release
A Search Engine service has been released that allows easy creation and management of ElasticSearch Enterprise in a web environment.
Vertica(DBaaS)
2025.07.01
NEWVertica(DBaaS) Service Official Version Release
Released Vertica(DBaaS) service, which can efficiently store data and improve query performance with columnar storage-based compression and encoding features.
Data Flow
2025.04.28
NEWOfficial Release of Data Flow Service
The Data Flow service, which extracts/transforms/transfers data from various sources and automates data processing flows, has been released.
It provides open-source Apache NiFi.
Data Ops
2025.04.28
NEWData Ops Service Official Version Release
A workflow can be created and job scheduling automated for periodic or repetitive data processing tasks with the release of the Data Ops service.
It is a managed workflow orchestration service based on Apache Airflow.
Quick Query
2025.07.01
NEWQuick Query Official Version Release
A Quick Query service has been released, allowing for easy analysis of large-scale data using standard SQL.
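As an illustration of the kind of standard SQL such an analysis might run, the sketch below executes an aggregation query against an in-memory SQLite table; the table and column names are invented for the example.

```python
# Illustrative standard-SQL aggregation, run against an in-memory SQLite
# table purely for demonstration (table/column names are made up; a Quick
# Query job would run equivalent SQL against large-scale stored data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (region TEXT, bytes INTEGER)")
conn.executemany("INSERT INTO access_log VALUES (?, ?)",
                 [("kr-west1", 100), ("kr-west1", 300), ("kr-east1", 50)])

# Standard SQL: total bytes per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(bytes) AS total "
    "FROM access_log GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('kr-west1', 400), ('kr-east1', 50)]
```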
API Gateway
2026.03.19
FEATUREAdd resource-based policy feature
You can set resource-based policies for APIs.
A resource-based policy is a policy that is applied to the API itself to allow external access.
Using resource-based policies, you can allow or deny actions on specific resources to specific principals.
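A resource-based policy of this kind can be sketched as follows; the statement format and the evaluation rule (an explicit Deny wins over an Allow) are modeled on common cloud policy documents, not on Samsung Cloud Platform's actual schema.

```python
# Hypothetical resource-based policy document attached to an API, plus a
# minimal allow/deny check. Format and semantics are illustrative only.
policy = {
    "statements": [
        {"effect": "Allow", "principal": "user:alice",
         "action": "apigateway:Invoke", "resource": "api:orders"},
        {"effect": "Deny", "principal": "user:bob",
         "action": "apigateway:Invoke", "resource": "api:orders"},
    ]
}

def is_allowed(policy, principal, action, resource):
    """Default deny; an explicit Deny statement overrides any Allow."""
    decision = False
    for stmt in policy["statements"]:
        if (stmt["principal"] == principal and stmt["action"] == action
                and stmt["resource"] == resource):
            if stmt["effect"] == "Deny":
                return False  # explicit Deny always wins
            decision = True   # remember a matching Allow
    return decision
```

With this policy, `user:alice` may invoke `api:orders`, `user:bob` is explicitly denied, and any principal without a matching statement is denied by default.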
2025.07.01
NEWOfficial release of API Gateway service
API Gateway service that allows easy management and monitoring of APIs has been released.
You can easily define resources and methods related to APIs, and conveniently monitor API usage status and performance metrics.
Queue Service
2025.12.16
NEWOfficial Service Version Release
Queue Service has been officially released.
Through Queue Service, you can distribute system load caused by messages and efficiently manage messages in microservice architectures or event-driven systems.
Message transmission and reception operate independently, improving responsiveness and processing speed.
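The decoupling described above can be sketched with Python's standard library queue (a stand-in for the Queue Service, not its SDK): the producer returns as soon as a message is enqueued, and the consumer drains messages at its own pace.

```python
# Conceptual sketch of message-queue decoupling using the standard library
# (not the Queue Service SDK): sending and receiving operate independently.
import queue

def producer(q, events):
    for e in events:
        q.put(e)  # non-blocking handoff: the sender does not wait for processing

def consumer(q):
    handled = []
    while not q.empty():
        handled.append(q.get())  # the receiver drains at its own pace
        q.task_done()
    return handled

q = queue.Queue()
producer(q, ["order.created", "order.paid"])
print(consumer(q))
```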
Key Management Service
2026.03.19
FEATUREPlatform Managed Key Service Provision
In addition to 'customer-managed keys' that users create directly, 'platform-managed keys' created and managed by the CSP (Cloud Service Provider) are now also provided.
When other Samsung Cloud Platform products encrypt data using KMS keys, users can encrypt with a platform-managed key generated by the CSP without having to create a key in KMS themselves.
2025.10.23
FEATURELog expansion provision and notification feature improvement
The activity history of API calls such as encryption and decryption is now logged per individual API call, making API calls easier to track and manage.
When an encryption key is deleted, it provides a notification not only to the user who deleted the key but also to the key creator, and additionally includes the region name where the encryption key is located in the notification.
2025.07.01
FEATUREAdditional encryption method provided
Provides an additional HMAC generation/verification encryption method, used for creating and verifying hash-based message authentication codes.
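What HMAC generation and verification do can be sketched with Python's standard library; in a KMS the key material never leaves the service, so the key bytes below are purely illustrative.

```python
# Sketch of HMAC generate/verify using the standard library. A KMS performs
# this server-side with a non-exportable key; the key here is illustrative.
import hmac
import hashlib

key = b"illustrative-key-material"   # in KMS, the key never leaves the service
message = b"amount=100&currency=KRW"

# Generate: produce a hash-based message authentication code for the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verify: recompute and compare in constant time; verification fails if the
# message or key has changed.
def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A tampered message (for example, a changed amount) fails verification, which is exactly what the code is designed to detect.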
2025.02.27
NEWKey Management Service Official Version Release
Launched an encryption key management service (Key Management Service) to securely protect important data of customer applications.
You can generate, provide, and manage encryption keys for various purposes (encryption/decryption, signing/verification).
Config Inspection
2025.07.01
FEATUREService Offering Expansion
We have launched the Config Inspection product, which can comprehensively diagnose and manage security vulnerabilities in the customer’s multi-cloud console.
Register the account (or another cloud provider's account) to be diagnosed for continuous diagnosis, and check the dashboard and detailed results in the report.
2025.02.27
FEATURECommon Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.12.23
NEWBeta version release
You can manage Samsung Cloud Platform Console setting vulnerabilities through console diagnostics.
It provides a Report that can view the security diagnosis results.
Certificate Manager
2025.07.01
NEWCertificate Manager Service Official Version Release
Released Certificate Manager service that supports SSL/TLS certificate deployment and integrated management.
You can register a certificate issued by a certification authority (CA) or create a self-signed certificate for development/test purposes.
Samsung Cloud Platform connects to resources and enables encryption of network communication and management of certificate lifecycles.
Secret Vault
2025.07.01
NEWOfficial Release of Secret Vault Service
A Secret Vault service has been released that can manage token-based temporary key issuance and lifecycle.
SingleID
2025.11.04
FEATUREAdd console access history log monitoring feature, Expand CSP support for console access control, Improve announcement feature, Improve approval system feature, Improve batch scheduler management feature, Improve CAM system user role management feature, Improve system global variable management feature
Console access history log monitoring feature added
Added the feature to view and download console access logs
Console access control support CSP expanded
Expanded support CSP for console access control from the existing AWS to Azure and Samsung Cloud Platform (KR EAST1 region, KR WEST1 region)
Notice feature improved
Improved the feature to register and manage notices per tenant
Approval system feature improved
A self-built approval-system-based approval function has been added alongside the existing Knox-based approval function
Batch scheduler management feature improved
The batch scheduler management function has been improved, allowing execution results and details to be viewed and enabling immediate execution.
CAM system user role management feature improved
Improved to allow creating, listing, and viewing details of user roles for the CAM system itself.
System-wide variable management feature improved
Added system-wide variable management function for CAM Portal system itself
Other convenience improved
Improved so that PM/PL group users can change the IP of already enrolled resources without re-enrolling them
Improved to allow navigation to the detailed Role/Policy/Account page from Console Access menu
Changed manual, release note and FAQ URLs to SCP Documentation URL
2025.10.23
FEATUREAdd admin delegation feature, Add approval status menu to dashboard, Add sign-up status menu to dashboard, Add user campaign feature, Add dormant account policy feature, Add user lifecycle management feature, Add rebranding feature to login page, Improve simple authentication feature, Add user security enhancement feature, Improve user profile attribute setting feature, Add application entitlement management feature
Admin delegation feature added
A feature that allows delegating authentication for identity verification to an administrator has been added. This feature is only available for MFA products.
Approval status menu added to dashboard
A feature has been added that allows managing user approval requests and statuses from the dashboard.
Member registration status menu added to dashboard
A feature has been added that allows managing users’ sign-up status from the dashboard.
User campaign feature added
If only one user authentication method is registered, a campaign feature that recommends adding additional authentication registration has been added.
Dormant account policy feature added
Settings for dormant users, alarm sending, exception user registration, long-term dormant users, and dormant account self-recovery have been added.
User lifecycle management feature added
When signing up and registering users, features for setting user defaults, setting user account usage period, and approval policy have been added.
Rebranding feature added to login page
A feature has been added to change the top and bottom logos, key visual images, text, etc. in the Admin Portal.
The redirection functions for member sign-up page settings, bottom privacy policy, terms of use, etc., have been added.
Passwordless authentication feature improved
Convenient authentication methods have been added: mobile passkey, security key, and easy login with a Windows PIN code.
User security feature enhanced
A conditional authentication policy feature has been added that requires additional identity verification when only one authentication method has been used for a long period.
User profile attribute setting feature improved
You can further expand and apply the user’s personal information attributes.
Added a feature to set a prefix text when sending SMS
Improved the image upload screen and process
2025.07.01
NEWSingleID Service Official Version Release
The SingleID service has been launched, allowing users to log into business systems with a single ID and enabling administrators to easily control access by integrating various access environments.
WAF
2025.07.01
NEWWAF Service Official Version Release
We are launching a WAF service to protect web applications from web vulnerabilities and attacks.
DDoS Protection
2025.07.01
NEWDDoS Protection Service Official Version Release
We are launching a DDoS Protection service that provides detection and response to large-scale network traffic attacks.
IPS
2025.07.01
NEWIPS Service Official Version Release
Launched an IPS service that continuously updates IPS intrusion detection policies reflecting the latest security threats and responds in real-time.
Secured Firewall
2025.07.01
NEWSecured Firewall Service Official Version Released
Samsung Cloud Platform has released Secured Firewall, a next-generation firewall service for cloud network security.
Secured VPN
2025.07.01
NEWOfficial Release of Secured VPN Service
Launched Secured VPN service that securely connects the customer network outside and the cloud network of Samsung Cloud Platform through an encrypted virtual private network.
FPMS
2025.12.16
FEATUREAdd firewall and Security Group registration feature, improve SecuAI firewall support
A feature has been added that allows registering the firewall and Security Group of the Samsung Cloud Platform Console to FPMS for management.
Support for the SecuEye firewall v3.7 (anyzone) has been improved.
2025.07.01
NEWFPMS Service Official Version Release
We have launched the Firewall Policy Management System (FPMS) service for automating firewall operation tasks to efficiently and safely operate firewalls in various cloud environments.
Secrets Manager
2026.03.19
FEATUREPrivate Endpoint Service Provision
Provides a Private Endpoint through which Secrets can be called from VM resources within Samsung Cloud Platform.
You can select which Samsung Cloud Platform VM resources may access a Secret that stores security information, and set access control accordingly.
2025.12.16
NEWSecrets Manager Service Official Version Release
We have launched a service that encrypts customers’ sensitive information in the form of Secret (security information) and stores and manages it safely.
You can remove hardcoding of security information in the application source code and call securely stored Secrets to retrieve them.
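Removing hardcoded security information can be sketched as follows; `SecretStore` is an in-memory stand-in for a Secrets Manager client (the real SDK call names are not assumed), and the point is that source code holds only a secret name, never its value.

```python
# Sketch of replacing a hardcoded credential with a runtime secret lookup.
# SecretStore is an illustrative stand-in for a Secrets Manager client.
class SecretStore:
    """In-memory stand-in for the managed secret store."""
    def __init__(self, secrets):
        self._secrets = secrets

    def get_secret(self, name):
        return self._secrets[name]

# Before: password hardcoded in source.
# DB_PASSWORD = "s3cr3t"   # leaks via version control

# After: only the secret's *name* appears in code; the value is retrieved
# at runtime from the secure store.
store = SecretStore({"prod/db/password": "s3cr3t"})
db_password = store.get_secret("prod/db/password")
```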
Log Transmission
2025.10.23
NEWLog Transmission Service Official Version Release
We have released a Log Transmission service that can execute security monitoring of the user area on Samsung Cloud Platform.
Architecture Diagram
2025.10.23
FEATUREAdd support resources
Load Balancer resources have been added.
2025.07.01
FEATUREArchitecture Diagram Feature Added
Relationships between resources are marked with dotted lines so they can be easily identified.
You can click the name of the resource in the resource detail information popup window to go to the detail page.
Through Copilot, you can create VPC and Security Group services or control Virtual Server.
You can easily add the Architecture Diagram item from the Configuration Diagram Filter.
2025.02.27
NEWArchitecture Diagram Release
Architecture Diagram service has been newly launched.
Provides a service that can check the relationships between resources.
Cloud Control
2025.10.23
NEWOfficial Service Version Release
Cloud Control service official version has been released.
You can easily and safely build, operate, and manage a multi-account environment on Samsung Cloud Platform.
The organization’s cloud governance (security, compliance, standardization, etc.) can be automated and managed through policy violation detection and monitoring functions.
Cloud Monitoring
2025.07.01
FEATURECloud Monitoring Integration Service Added
Services linked with Cloud Monitoring were added in February 2025 and July 2025.
Additional linked services: Container(Container Registry), Database(EPAS, Microsoft SQL Server), Data Analytics(Event Streams, Search Engine), Networking(Load Balancer, Load Balancer Listener, Load Balancer Server Group, VPN)
2024.10.01
NEWCloud Monitoring Service Official Version Release
Cloud Monitoring service has been released. It collects usage and change information of operating infrastructure resources, and supports a stable cloud operating environment through event occurrence/notification when exceeding the set threshold.
IAM
2025.10.23
FEATUREIP access control feature added and other features enhanced
When creating a user or changing a password, related information can be shared by email.
Virtual Server and Cloud Function have been added as entities that can perform the role function.
When the role is changed, you can check the session expiration time in My Menu.
You can register and manage IP addresses that can access the Console.
The Root user and IAM user with the same information (phone number, email) can switch to each other even after logging in.
You can choose to use AD (Active Directory) as a credential source, so that users can directly manage the authentication source.
ID Center
2025.07.01
NEWID Center Service Official Launch
ID Center service has been officially launched.
You can create permission policies for each service and assign policies and Organization-linked accounts to users, so that tasks are performed according to user permissions.
Security can be enhanced to allow only authorized ID Center users to access through the Access Portal.
Set Trail logs to be sent to ServiceWatch’s log group, enabling monitoring through ServiceWatch and receiving alerts when specific activities occur.
Logging&Audit
2025.07.01
FEATURELogging&Audit New release service addition linkage
Newly released services have been additionally integrated with Logging&Audit.
API Gateway, Archive Storage, Backup Agent, Cloud Functions, Cloud LAN-Datacenter, Cloud WAN, CloudML, Cost Savings, GSLB, Global CDN, IAM > role, Load Balancer > LB Health Check, Marketplace, Organization, Private DNS, Private NAT, Public Domain Name, Quick Query, Secret Vault, SingleID, Support Plan, Transit Gateway, VPC Peering, Vertica
When viewing activity logs, a period filter and time zone selection have been added, along with a feature to compare activity logs.
2025.04.28
FEATURELogging&Audit New Release Service Additional Integration
Newly released services have been additionally integrated with Logging&Audit.
Data Flow, Data Ops
2025.02.27
FEATURELogging&Audit New Release Service Additional Integration
Newly released services have been additionally integrated with Logging&Audit.
AI&MLOps Platform, Multi-node GPU Cluster, VPN, Cloud LAN-Campus, KMS, Event Streams, Search Engine, EPAS, Microsoft SQL Server
Samsung Cloud Platform Common Feature Change
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.10.01
NEWLogging&Audit Service Official Version Release
Logging&Audit service has been launched. It stores/searches all activity logs performed by customers (Console, API, CLI), and provides functions such as change tracking of cloud resources, troubleshooting, security checks, etc.
2024.07.02
NEWBeta version release
The beta version of the Logging&Audit service has been released.
Notification Manager
2024.10.01
NEWNotification Manager Release
The Notification Manager service has been released. It provides a feature to manage notifications provided to users when notifications occur.
You can create and manage notification policies and notification groups to receive notifications, and add users.
Organization
2025.10.23
FEATUREAccount deletion feature improvement
Accounts created in the Organization can now also be deleted from the Member Account.
Deletable Accounts are limited to Accounts directly created in the Organization.
2025.07.01
NEWOrganization Service Official Launch
Organization service has been officially launched.
Accounts can be organized into organizational units, managed hierarchically, and their resource access permissions controlled.
You can monitor the resource usage of all accounts within the organization to optimize costs.
Resource Explorer
2025.02.27
NEWResource Explorer Release
A search service for resources has been released.
Resources from multiple regions can be checked at once through the Resource Explorer.
Resource Groups
2025.02.27
NEWResource Groups Release
We have launched a service that efficiently manages resources through grouping.
Resources can be logically grouped and managed based on tags.
ServiceWatch
2026.03.19
FEATUREServiceWatch New Feature Release and Existing Feature Improvements
ServiceWatch service dashboard launch
Provides a service dashboard composed of key metrics for each service.
When the resources of the service are created and metric data is collected by ServiceWatch, the service dashboard is automatically generated and can be viewed.
ServiceWatch metric search feature improvement
Improved metric search so that results are shown when the search term is partially included.
When searching metrics, you can specify a period for a specific metric to see how its data changes over multiple periods.
ServiceWatch log pattern feature release
You can create a log pattern and filter the log data collected in ServiceWatch that matches the pattern.
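Log-pattern filtering of this kind can be sketched as follows; the actual pattern syntax of the ServiceWatch feature is not assumed, so a plain regular expression stands in for it.

```python
# Illustrative sketch of log-pattern filtering: keep only the collected
# log lines that match a user-defined pattern (regex stands in for the
# feature's real pattern syntax, which is not assumed here).
import re

logs = [
    "2026-03-19T10:00:01 INFO  request served in 12ms",
    "2026-03-19T10:00:02 ERROR timeout contacting upstream",
    "2026-03-19T10:00:03 ERROR connection reset by peer",
]

def filter_by_pattern(lines, pattern):
    """Return the lines matching the compiled pattern, as a log-pattern
    filter would surface them from the collected stream."""
    rx = re.compile(pattern)
    return [line for line in lines if rx.search(line)]

print(filter_by_pattern(logs, r"\bERROR\b"))  # the two ERROR lines
```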
Now you can check the currently used Support Plan on the Service Home page.
2025.07.01
FEATURESupport Plan addition
Support Plan has been added. Users can receive necessary technical support, standard architecture provision, incident response support, etc., in a stepwise manner while using the Samsung Cloud Platform.
You can select the Standard or Proserv Plan tier according to your situation.
2024.02.27
NEWSupport Center Release
Support Center has been launched. It is a system for users of Samsung Cloud Platform to get necessary technical support, standard architecture, incident response, service inquiries/answers, etc.
You can manually request, through the system, services that cannot be applied for from the console.
You can submit inquiries while using the platform and receive technical support when problems arise.
Quota Service
2025.07.01
NEWQuota Service Official Version Release
The Quota Service has been launched.
You can manage the maximum number of resources, tasks, or items (quotas) set for each service within your account.
Cost Management
2025.10.23
FEATURECredit and Budget Management Feature Added
A credit management page has been added where you can check summary information for valid and expired credits.
A function for budget management has been added.
When creating a budget, you can specify the month to start applying the budget.
You can set a budget depletion rate that limits new service creation once spending reaches that point.
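The depletion-rate guard can be sketched as follows; the threshold and function names are illustrative, not the console's actual settings.

```python
# Minimal sketch of a budget-depletion guard (names and the 90% default
# are illustrative): block new resource creation once spend reaches the
# configured depletion rate of the budget.
def creation_blocked(budget, spent, depletion_rate=0.9):
    """True when spend has reached the depletion threshold of the budget."""
    return spent >= budget * depletion_rate

print(creation_blocked(budget=1_000_000, spent=950_000))  # True: past 90%
```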
2025.07.01
FEATURECarbon Emission Check Function Added
Cost Savings, Carbon Emissions feature has been added.
Cost Savings: Samsung Cloud Platform Console’s Compute service allows you to set time contracts to save on usage costs.
Carbon Emissions: You can check the carbon emissions generated when using the services of the Samsung Cloud Platform Console.
2025.02.27
NEWCost Management Service Release
Cost Management service has been launched. You can manage and optimize service costs.
It provides features such as usage and billing records, cost analysis, and credit management.
You can set and manage budgets and provide account and payment method information.
Planned Compute
2025.10.23
FEATUREPlanned Compute contract start date setting feature added
When applying for a Planned Compute contract, you can set the date the contract takes effect.
The contract can start on any desired date from the day after the application date.
2025.07.01
FEATUREAdd Planned Compute target service
Planned Compute target services have been added.
Data Catalog, Vertica(DBaaS)
2024.10.01
NEWPlanned Compute Service Official Version Release
Planned Compute service was launched.
Services in the Compute and Database categories are provided at discounted rates in exchange for a commitment to a server type.
Marketplace
2025.07.01
NEWOfficial Release of Marketplace Service
We have launched the Marketplace service, which supports various software applications through Samsung Cloud Platform, from application to installation and management.
DevOps Service
2025.10.23
FEATUREKorea East (kr-east1) Region Service Open
The DevOps Service can also be used in the Korea East (kr-east1) region.
2025.07.01
FEATUREAdd User Member
Add user member
When creating a DevOps Service, you can add members to perform the Admin role.
2025.02.27
FEATURECommon Feature Changes
Samsung Cloud Platform common feature changes
Reflected common CX changes, including Account, IAM, Service Home, and tags.
2024.12.23
NEWDevOps Service Official Version Release
We have launched the DevOps Service, which provides an integrated environment for fast and safe software development, deployment, and operation.
DevOps Console
2025.07.01
FEATUREv1.16.0 changes
Self-user management and authentication features have been added.
DevOps IDP is used to manage and authenticate users.
Jenkins DevOps Plugin installation and update feature has been added.
You can check the version of the installed Jenkins and the installation and version information of the recommended plugins, and install and update them.
You can download billing basis data from the tenant dashboard as an Excel file.
2024.10.24
FEATUREv1.15.0 changes
A supported Helm chart repository has been added.
OCI standard Helm chart repository is now available.
Pipeline feature has been added.
Creating multi-branch pipelines is supported.
The tool/template management organization function has been improved.
The organization (tenant, project group) that manages tools and templates can now be transferred.
Other changes
The supported version of the image storage tool Harbor has been expanded. (~2.10)
Jobs can no longer be created directly in Jenkins. (Only possible through the DevOps Service)
AIOS
2025.07.01
NEWAIOS Service Official Launch
The AIOS service has been officially launched.
On Samsung Cloud Platform, you can create Virtual Server, GPU Server, Kubernetes Engine resources and use LLM on those resources.
CloudML
2025.07.01
NEWCloudML Service Official Version Release
Samsung Cloud Platform provides CloudML service that supports the entire machine learning process from data analysis to model development, learning, verification, and deployment in a cloud environment.
AI&MLOps Platform
2025.07.01
FEATUREAI&MLOps Platform Open-Source Version Upgrade
AI&MLOps Platform open source version has been upgraded.
Kubeflow 1.9
2025.02.27
NEWAI&MLOps Platform Service Official Version Release
The AI&MLOps Platform service, which automates the repetitive tasks of the entire pipeline of development, learning, and deployment of machine learning models, has been released.
Provides a machine learning platform service based on Kubernetes.
Edge Server
2025.10.23
FEATUREAdd Local Web feature
Edge Server provides Local Web functionality, allowing customers to create, retrieve, modify, and delete Edge Server resources within their own network.
2025.07.01
NEWK8s, Storage installation type provided
Edge Server provides K8s, Storage features as an installation type, allowing customers to utilize them for POC and other purposes.
2025.04.24
NEWEdge Server Service Official Version Release
We have launched an On-Site type service that makes it easy to create, delete, and manage Virtual Servers.
Through the Samsung Cloud Platform, customers can select general, high-capacity, or GPU server types according to their needs.
Oracle Services
2025.12.16
NEWOracle Services Official Version Release
To support workloads based on Oracle S/W, OCI (Oracle Cloud Infrastructure) resources are physically configured within the Samsung SDS data center to provide OCI services.