1 - Event Streams

1.1 - Overview

Service Overview

Event Streams provides fully managed creation and configuration of open source Apache Kafka for large-scale message data processing. Samsung Cloud Platform automates the creation and configuration of Apache Kafka through a web-based Console, and users can configure the main components of Apache Kafka, such as Broker, Zookeeper, and AKHQ, as a single node or as a cluster.

An Event Streams cluster is composed of multiple Broker nodes. Between 1 and 10 Brokers can be installed; 3 or more is typical. Zookeeper, which manages the distributed Brokers, can be installed on separate nodes; if it is not installed separately, it is installed together on the Broker nodes. In addition, AKHQ (Apache Kafka HQ), a Kafka management tool, is provided so that users can manage cluster operations through it.

Provided Features

Event Streams provides the following features.

  • Auto Provisioning: You can configure and set up an Apache Kafka cluster via the UI.
  • Operation Control Management: Controls the status of running servers. In addition to starting and stopping the cluster, you can restart it to apply configuration values.
  • AKHQ Provisioning: AKHQ, a Kafka management tool, is provided so that users can manage and monitor clusters through it.
  • Add Broker Node: If expansion is required to improve cluster performance and stability, you can add nodes with the same specifications as the existing Broker nodes.
  • Parameter Management: Performance- and security-related configuration parameters can be set and modified.
  • Monitoring: CPU, memory, and performance monitoring information can be checked via Cloud Monitoring and ServiceWatch.

Components

Event Streams provides pre-validated engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.

Engine Version

The engine versions supported by Event Streams are as follows.

Technical support is available until the supplier's EoTS (End of Technical Service) date, and the EoS date, after which new creation is no longer possible, is set to six months before the EoTS date.

The EoS and EoTS dates may change according to the supplier's policy, so refer to the supplier's license management policy page for details.

Provided Version | EoS Date | EoTS Date
3.8.0 | 2026-06 (scheduled) | 2026-12-02
3.9.1 | 2026-09 (scheduled) | 2027-02-19
Table. Engine versions provided by Event Streams

Server Type

The server types supported by Event Streams are as follows.

For detailed information about the server types provided by Event Streams, see Event Streams Server Types.

Standard ess1v2m4

Category | Example | Detailed description
Server Type | Standard | Provided server types
  • Standard: Commonly used standard specifications (vCPU, Memory)
  • High Capacity: Large server specifications of 24 vCore or more
Server Specification | ess1 | Provided server specifications
  • ess1, ess2: Commonly used standard specification (vCPU, Memory) configurations
  • esh2: Large-capacity server specifications
    • Provides servers with 24 vCores or more
Server Specification | v2 | Number of vCores
  • v2: 2 virtual cores
Server Specification | m4 | Memory capacity
  • m4: 4 GB memory
Table. Event Streams server type components
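The naming convention above can be sketched as a small parser. This is an illustrative helper, not part of the service; it only assumes the documented `<family><generation>v<vCores>m<memory GB>` format, where the `ess` family is Standard and `esh` is High Capacity.

```python
import re

def parse_server_type(name: str) -> dict:
    """Parse an Event Streams server type string such as 'ess1v2m4'.

    Format (as documented): <family><generation>v<vCores>m<memory GB>,
    where family 'ess' is Standard and 'esh' is High Capacity.
    """
    m = re.fullmatch(r"(ess|esh)(\d+)v(\d+)m(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    family, gen, vcpu, mem = m.groups()
    return {
        "category": "Standard" if family == "ess" else "High Capacity",
        "generation": int(gen),
        "vcpu": int(vcpu),
        "memory_gb": int(mem),
    }
```

For example, `parse_server_type("ess1v2m4")` yields a Standard, generation-1 type with 2 vCores and 4 GB of memory.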

Preceding Service

The following services must be configured before creating this service. Refer to the guide provided for each service and prepare in advance.

Service Category | Service | Detailed Description
Networking | VPC | A service that provides an independent virtual network in a cloud environment
Table. Event Streams Preceding Service

1.1.1 - Server Type

Event Streams server type

Event Streams provides server types composed of various combinations of CPU, Memory, Network Bandwidth, and so on. When creating Event Streams, Apache Kafka is installed according to the server type selected to suit the purpose of use.

The server types supported in Event Streams are as follows.

Standard ess1v2m4

Category | Example | Detailed Description
Server Type | Standard | Provided server type category
  • Standard: Composed of commonly used standard specifications (vCPU, Memory)
  • High Capacity: Server specifications with higher capacity than Standard
Server Specification | ess1 | Server type and generation
  • ess1: s means standard specifications, and 1 means the generation
  • esh2: h means large-capacity server specifications, and 2 means the generation
Server Specification | v2 | Number of vCores
  • v2: 2 virtual cores
Server Specification | m4 | Memory capacity
  • m4: 4 GB memory
Table. Event Streams server type formats
Reference

Select the server type after checking the nodes' minimum specifications below.

Division | vCPU | Memory
Broker | 2 vCore | 4 GB
Zookeeper | 1 vCore | 2 GB
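As a sketch of the minimum-specification check above, the helper below is illustrative (the function name and role keys are not part of the service); the values are taken directly from the table.

```python
# Minimum node specifications from the table above: (vCore, memory GB).
MIN_SPECS = {"broker": (2, 4), "zookeeper": (1, 2)}

def meets_minimum(role: str, vcpu: int, memory_gb: int) -> bool:
    """Return True if a candidate server size satisfies the documented
    minimum specification for the given node role."""
    min_vcpu, min_mem = MIN_SPECS[role]
    return vcpu >= min_vcpu and memory_gb >= min_mem
```

For instance, the smallest type allowed for a Broker is ess1v2m4 (2 vCore, 4 GB), while ess1v1m2 only satisfies the Zookeeper minimum.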

ess1 server type

The ess1 server type of Event Streams provides standard specifications (vCPU, Memory) and is suitable for a wide range of workloads.

  • Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
  • Supports up to 16 vCPUs and 64 GB of memory
  • Up to 12.5 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
Standard | ess1v1m2 | 1 vCore | 2 GB | Up to 10 Gbps
Standard | ess1v2m4 | 2 vCore | 4 GB | Up to 10 Gbps
Standard | ess1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps
Standard | ess1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps
Standard | ess1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps
Standard | ess1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps
Standard | ess1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps
Standard | ess1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps
Standard | ess1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps
Table. Event Streams server type specification - ess1 server type

ess2 server type

The ess2 server type of Event Streams provides standard specifications (vCPU, Memory) and is suitable for a wide range of workloads.

  • Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
  • Supports up to 16 vCPUs and 64 GB of memory
  • Up to 12.5 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
Standard | ess2v1m2 | 1 vCore | 2 GB | Up to 10 Gbps
Standard | ess2v2m4 | 2 vCore | 4 GB | Up to 10 Gbps
Standard | ess2v2m8 | 2 vCore | 8 GB | Up to 10 Gbps
Standard | ess2v4m8 | 4 vCore | 8 GB | Up to 10 Gbps
Standard | ess2v4m16 | 4 vCore | 16 GB | Up to 10 Gbps
Standard | ess2v8m16 | 8 vCore | 16 GB | Up to 10 Gbps
Standard | ess2v8m32 | 8 vCore | 32 GB | Up to 10 Gbps
Standard | ess2v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps
Standard | ess2v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps
Table. Event Streams Server Type Specifications - ess2 Server Type

esh2 server type

The esh2 server type of Event Streams provides high-capacity server specifications and is suitable for workloads that process large-scale data.

  • Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
  • Supports up to 32 vCPUs and 128 GB of memory
  • Up to 25 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
High Capacity | esh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps
High Capacity | esh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps
Table. Event Streams server type specification - esh2 server type
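Combining the specification tables, picking a type for a target workload can be sketched as below. The catalog slice is transcribed from the ess2/esh2 tables above, and the helper function is illustrative, not a platform API.

```python
# A slice of the ess2/esh2 catalog from the tables above, ordered smallest
# to largest: (server type, vCore, memory GB).
CATALOG = [
    ("ess2v2m4", 2, 4),
    ("ess2v4m8", 4, 8),
    ("ess2v8m16", 8, 16),
    ("ess2v16m64", 16, 64),
    ("esh2v32m128", 32, 128),
]

def smallest_fitting_type(need_vcpu: int, need_mem_gb: int):
    """Return the smallest catalog entry covering the requirement, or None
    if even the largest listed type is insufficient."""
    for name, vcpu, mem in CATALOG:
        if vcpu >= need_vcpu and mem >= need_mem_gb:
            return name
    return None
```

Note that Standard and High Capacity are separate categories: a cluster created as Standard cannot later be changed to High Capacity (see Change Server Type), so size the category up front.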

1.1.2 - Monitoring Metrics

Event Streams Monitoring Metrics

The table below shows the performance monitoring metrics of Event Streams that can be checked through Cloud Monitoring. For detailed Cloud Monitoring usage instructions, refer to the Cloud Monitoring guide.

For the server monitoring metrics of Event Streams, refer to the Virtual Server Monitoring Metrics guide.

Performance Item | Description | Unit
AKHQ State [PID] | AKHQ process PID | PID
Connections [Zookeeper Client] | Number of Zookeeper connections | cnt
Disk Used | datadir usage | bytes
Failed [Client Fetch Request] | Number of failed client Fetch requests | cnt
Failed [Produce Request] | Number of failed Produce requests | cnt
Incoming Messages | Number of messages received by the Broker | cnt
Instance State [PID] | Kafka process PID | PID
Kibana State [PID] | Kibana process PID | PID
Leader Elections | Number of Leader Elections | cnt
Leader Elections [Unclean] | Number of Unclean Leader Elections | cnt
Log Flushes | Number of log flushes | cnt
Network In Bytes | Bytes received across all Topics | bytes
Network Out Bytes | Bytes sent across all Topics | bytes
Rejected Bytes | Bytes rejected across all Topics | bytes
Request Queue Length | Request queue size | cnt
Shards | Cluster shard count | cnt
Zookeeper Sessions [Closed] | Zookeeper sessions closed per second | cnt
Zookeeper Sessions [Expired] | Zookeeper sessions expired per second | cnt
Zookeeper State [PID] | Zookeeper process PID | PID
Table. Event Streams Monitoring Metrics
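Count-style metrics such as Incoming Messages are often more useful as rates. The table does not state whether Cloud Monitoring exposes these as cumulative totals or as per-interval values, so the sketch below assumes cumulative counter samples; the function is an illustrative post-processing helper, not part of the service.

```python
def rate_per_second(prev_value: float, curr_value: float, interval_s: float) -> float:
    """Convert two cumulative counter samples into an average per-second rate.

    Assumes the counter did not reset between samples; a negative delta
    (e.g. after a Broker restart) is treated as a reset and reported as 0.
    """
    delta = curr_value - prev_value
    if delta < 0 or interval_s <= 0:
        return 0.0
    return delta / interval_s
```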

1.1.3 - ServiceWatch Metrics

Event Streams sends metrics to ServiceWatch. The metrics provided by basic monitoring are collected at a 1-minute interval.

Reference
To check metrics in ServiceWatch, refer to the ServiceWatch guide.

Basic Metrics

The following are the basic metrics for the Event Streams namespace.

OS Basic Metrics

Category | Performance Item | Detailed Description | Unit | Meaningful Statistics
CPU | CPU Usage | CPU usage rate | Percent |
Disk | Disk Usage | Disk usage rate | Percent |
Disk | Disk Write Bytes | Bytes written to the block device (bytes/second) | Bytes/Second |
Disk | Disk Read Bytes | Bytes read from the block device (bytes/second) | Bytes/Second |
Disk | Disk Write Requests | Number of write requests on the block device (requests/second) | Count/Second |
Disk | Disk Read Requests | Number of read requests on the block device (requests/second) | Count/Second |
Disk | Average Disk I/O Queue Size | Average queue length of requests issued to the block device | None |
Disk | Disk I/O Utilization | Proportion of time the block device is actually processing I/O | Percent |
Memory | Memory Usage | Memory usage rate | Percent |
Network | Network In Bytes | Bytes received on the network interface (bytes/second) | Bytes/Second |
Network | Network Out Bytes | Bytes transmitted from the network interface (bytes/second) | Bytes/Second |
Network | TCP Connections | Total number of TCP connections currently established | Count/Second |
Network | Network In Packets | Number of packets received on the network interface | Count |
Network | Network Out Packets | Number of packets transmitted from the network interface | Count |
Network | Network In Dropped | Number of received packets dropped on the network interface | Count |
Network | Network Out Dropped | Number of transmitted packets dropped on the network interface | Count |
Network | Network In Errors | Number of received packet errors on the network interface | Count |
Network | Network Out Errors | Number of transmitted packet errors on the network interface | Count |
Table. OS Basic Metrics

Event Streams Basic Metrics

Category | Performance Item | Detailed Description | Unit | Meaningful Statistics
Activelock | Active locks | Number of active locks | Count |
Activesession | Active sessions | Number of active sessions | Count |
Activesession | Connection usage | DB connection session usage rate | Percent |
Activesession | Connections | Number of DB connection sessions | Count |
Activesession | Connections (MAX) | Maximum number of connections that can be attached to the DB | Count |
ProxySQL | Proxy Uptime | The proxy's uptime in seconds | Seconds |
ProxySQL | Backend connections (CONNECTED) | Number of sessions connected to the Proxy server | Count |
ProxySQL | Client connections connected | Number of client sessions currently connected to the proxy | Count |
ProxySQL | Queries routed | Number of queries routed to backend servers | Count |
ProxySQL | Backend connections (ACTIVE, IDLE) | Number of active/idle connections per endpoint | Count |
ProxySQL | Backend server status | Backend server status | None |
  • 1 - ONLINE
  • 2 - SHUNNED
  • 3 - OFFLINE_SOFT
  • 4 - OFFLINE_HARD
  • 5 - SHUNNED_REPLICATION_LAG
ProxySQL | Backend connection check | Backend server connection success/failure check | Count |
State | Instance state | Scalable DB status up/down check | Count |
State | Slave behind master seconds | Replica delay (unit: seconds) | Seconds |
Tablespace | Tablespace used | Tablespace usage | Megabytes |
Tablespace | Tablespace used (TOTAL) | Total tablespace usage | Megabytes |
Transactions | Slow queries | Number of slow queries | Count |
Transactions | Transaction time | Long transaction time | Seconds |
Transactions | Wait locks | Number of sessions waiting on locks | Count |
Table. Event Streams basic metrics

1.2 - How-to guides

Users can enter the required information for Event Streams in the Samsung Cloud Platform Console, select detailed options, and create the service.

Create Event Streams

You can create and use the Event Streams service from the Samsung Cloud Platform Console.

Notice

Before creating the service, configure the VPC Subnet type as General.

  • If the Subnet type is Local, the service cannot be created.

To create Event Streams, follow these steps.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. On the Service Home page, click the Create Event Streams button. You will be taken to the Create Event Streams page.
  3. On the Create Event Streams page, enter the information required to create the service and select detailed options.
  • In the Image and version selection area, select the required information.
    Category | Required | Detailed description
    Image version | Required | Version list of Event Streams
    Table. Event Streams Service Information Input Items
    • In the Service Information Input area, enter or select the required information.
      Category | Required | Detailed description
      Server Name Prefix | Required | Server name where Apache Kafka will be installed
      • Start with a lowercase English letter and enter 3 to 13 characters using lowercase letters, numbers, and the hyphen (-)
      • A postfix such as 001 or 002 is appended to the server name to create the actual server name
      Cluster Name | Required | Cluster name for the servers
      • Enter 3 to 20 characters using English letters
      • A cluster is a unit that groups multiple servers
      Broker > Broker Node Count | Required | Number of Broker nodes
      Broker > Server Type | Required | Server type where the Broker will be installed
      • Standard: Commonly used standard specifications
      • High Capacity: Large-capacity server with 24 vCore or more
      Broker > Planned Compute | Optional | Status of resources with Planned Compute set
      • In Use: Number of resources with Planned Compute set that are currently in use
      • Configured: Number of resources with Planned Compute set
      • Coverage Preview: Amount applied by Planned Compute per resource
      • Apply for Planned Compute Service: Go to the Planned Compute service application page
      Broker > Block Storage | Required | Block Storage type to be used for the Broker node
      • Base OS: Area where the engine is installed
      • DATA: Data file storage area
        • Select the storage type and then enter the capacity (for details on each Block Storage type, refer to Create Block Storage)
          • SSD: High-performance general volume
          • HDD: General volume
          • SSD_KMS/HDD_KMS: Additional encrypted volume using a Samsung Cloud Platform KMS (Key Management System) encryption key
        • Enter the capacity as a multiple of 8 within the range 16 ~ 5,120
      Zookeeper Separate Installation > Use | Optional | Zookeeper node separate installation option
      • If Use is selected, Zookeeper nodes are installed separately
      • If Zookeeper nodes are not installed separately, the Broker nodes also perform the Zookeeper role
      Zookeeper Separate Installation > Server Type | Optional | Server type where Zookeeper will be installed
      • Zookeeper nodes provide vCPU 1 / Memory 2 GB or vCPU 2 / Memory 4 GB
      Zookeeper Separate Installation > Planned Compute | Optional | Status of resources with Planned Compute set
      • In Use: Number of resources with Planned Compute set that are currently in use
      • Configured: Number of resources with Planned Compute set
      • Coverage Preview: Amount applied by Planned Compute per resource
      • Apply for Planned Compute Service: Go to the Planned Compute service application page
      Zookeeper Separate Installation > Block Storage | Required | Block Storage type to be used on Zookeeper nodes
      • Base OS: Area where the engine is installed
      • DATA: Data file storage area
        • Select the storage type and then enter the capacity (for details on each Block Storage type, refer to Create Block Storage)
          • SSD: High-performance general volume
          • HDD: General volume
          • SSD_KMS/HDD_KMS: Additional encrypted volume using a Samsung Cloud Platform KMS (Key Management System) encryption key
        • Enter the capacity as a multiple of 8 within the range 16 ~ 5,120
      AKHQ > Use | Required | Whether to install AKHQ
      • If Use is selected, AKHQ is installed
      AKHQ > Server Type | Required | Server type where AKHQ will be installed
      • AKHQ only provides the vCPU 2 / Memory 4 GB type
      AKHQ > Planned Compute | Optional | Status of resources with Planned Compute set
      • In Use: Number of resources with Planned Compute set that are currently in use
      • Configured: Number of resources with Planned Compute set
      • Coverage Preview: Amount applied by Planned Compute per resource
      • Apply for Planned Compute Service: Go to the Planned Compute service application page
      AKHQ > Block Storage | Required | Block Storage type for the server where AKHQ is installed
      • Base OS: Area where the engine is installed
      AKHQ > AKHQ Account | Required | AKHQ account
      • Enter 2 to 20 characters using lowercase English letters
      AKHQ > AKHQ Password | Required | AKHQ account password
      • Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
      AKHQ > AKHQ Password Confirmation | Required | AKHQ account password confirmation
      • Re-enter the same AKHQ account password
      AKHQ > AKHQ Port Number | Required | AKHQ connection port number
      • The port number is automatically set to 8080 and cannot be modified
      Network > Common Settings | Required | Network settings for the servers created by the service
      • Choose to apply the same settings to all installed servers
      • Select a pre-created VPC and Subnet
      • IP: Only automatic assignment is possible
      • Public NAT can only be configured in per-server settings
      Network > Per-Server Settings | Required | Network settings for the servers created by the service
      • Choose to apply different settings per installed server
      • Select a pre-created VPC and Subnet
      • IP: Enter each server's IP
      • The Public NAT feature is available only when the VPC is connected to an Internet Gateway; if you check Use, you can select from the reserved IPs in the VPC product's Public IP. For details, see Create Public IP
      IP Access Control | Optional | Service access policy settings
      • Because the access policy is set for the IPs entered on the page, you do not need to configure Security Group policies separately
      • Enter an IP (e.g., 192.168.10.1) or CIDR (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
      • To delete an entered IP, click the x button next to it
      Maintenance Period | Optional | Event Streams maintenance period
      • Select Use to set the day of week, start time, and duration
      • Setting a maintenance period is recommended for stable service management. Patch work is performed at the set time, and service interruption may occur
      • The provider is not responsible for issues arising from patches that were not applied because the maintenance period was set to not used
      Table. Event Streams service configuration items
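The input rules above can be checked before submitting the form. The sketch below mirrors the documented constraints (server name prefix, DATA capacity, IP Access Control entry format); the function names are illustrative, not Console APIs.

```python
import ipaddress
import re

def valid_server_name_prefix(prefix: str) -> bool:
    """3 to 13 chars, starting with a lowercase letter; lowercase letters,
    digits, and the hyphen (-) are allowed, per the Server Name Prefix rule."""
    return re.fullmatch(r"[a-z][a-z0-9-]{2,12}", prefix) is not None

def valid_data_capacity_gb(size: int) -> bool:
    """DATA volume capacity: a multiple of 8 within 16 ~ 5,120 GB."""
    return 16 <= size <= 5120 and size % 8 == 0

def valid_access_entry(entry: str) -> bool:
    """IP (e.g. 192.168.10.1) or CIDR (e.g. 192.168.10.0/24) format
    accepted by IP Access Control."""
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```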
  • In the Database Configuration Required Information Input area, enter or select the required information.
    Category | Required | Detailed description
    Zookeeper SASL Account | Required | Zookeeper account
    • Enter 2 to 20 characters using lowercase English letters
    Zookeeper SASL Password | Required | Zookeeper account password
    • Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
    Zookeeper SASL Password Verification | Required | Zookeeper account password verification
    • Re-enter the Zookeeper SASL account password identically
    Zookeeper Port Number | Required | Zookeeper port number
    • 1200 ~ 65535 can be entered, but the Broker port and 2888, 3888 cannot be used
    Broker SASL Account | Required | Kafka connection account
    • Enter 2 to 20 characters using lowercase English letters
    Broker SASL Password | Required | Kafka connection account password
    • Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
    Broker SASL Password Verification | Required | Kafka connection account password verification
    • Re-enter the Broker SASL account password identically
    Broker Port Number | Required | Kafka port number
    • 1200 ~ 65535 can be entered, but the Zookeeper port and 2888, 3888 cannot be used
    Parameter | Required | Event Streams configuration parameters
    • Click the View button to see detailed parameter information
    • Parameters can be modified after service creation is completed; a restart is required when they are modified
    Time Zone | Optional | Standard time zone used by the service
    ServiceWatch Log Collection | Optional | Whether to collect ServiceWatch logs
    • Select Use to enable the ServiceWatch log collection feature
    • Up to 5 GB is provided free for all services within the account; charges apply based on storage size beyond 5 GB
    • When collecting, log groups and log streams are created automatically and cannot be deleted until the resources are removed
    • To avoid exceeding 5 GB, deleting log data directly or shortening the retention period is recommended
    Table. Required information input items for Event Streams Database configuration
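The port and password rules above can be sketched as client-side checks. The helper names are illustrative; the reserved ports 2888 and 3888 are Zookeeper's internal quorum/leader-election ports, which is why the table excludes them.

```python
import re

RESERVED_ZK_PORTS = {2888, 3888}  # Zookeeper quorum/election ports, per the table above

def valid_service_port(port: int, other_port: int) -> bool:
    """Documented rules for a Zookeeper or Broker port: 1200 ~ 65535, not a
    reserved Zookeeper internal port, and not equal to the other service's port."""
    return 1200 <= port <= 65535 and port not in RESERVED_ZK_PORTS and port != other_port

def valid_sasl_password(pw: str) -> bool:
    """8 to 30 chars; must include an English letter, a digit, and a special
    character; the characters " and ' are not allowed."""
    if not 8 <= len(pw) <= 30 or '"' in pw or "'" in pw:
        return False
    return (re.search(r"[A-Za-z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)
```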
    • In the Additional Information Input area, enter or select the required information.
      Category | Required | Detailed description
      Tag | Optional | Add tags
      • Click the Add Tag button to create and add a tag, or add an existing tag
      • Up to 50 tags can be added
      • Newly added tags are applied after service creation is completed
      Table. Event Streams Service Additional Information Input Items
  4. In the Summary panel, check the detailed information and estimated billing amount, then click the Create button.
    • Once creation is complete, check the created resource on the Resource List page.

Check Event Streams Detailed Information

In the Event Streams service, you can view and edit the full resource list and detailed information. The Event Streams Details page consists of the Details, Tags, and Activity History tabs.

To view detailed information about the Event Streams service, follow these steps.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. On the Service Home page, click the Event Streams menu. Navigate to the Event Streams List page.
  3. On the Event Streams List page, click the resource whose detailed information you want to view. Navigate to the Event Streams Details page.
    • The top of the Event Streams Details page displays status information and additional features.
      Category | Detailed description
      Cluster Status | Cluster status
      • Creating: The cluster is being created
      • Editing: The cluster is applying a change operation
      • Error: A failure occurred while the cluster was performing a task
        • If this occurs continuously, contact the administrator
      • Failed: Cluster creation failed
      • Restarting: The cluster is restarting
      • Running: The cluster is operating normally
      • Starting: The cluster is starting
      • Stopped: The cluster is stopped
      • Stopping: The cluster is stopping
      • Synchronizing: The cluster is synchronizing
      • Terminating: The cluster is terminating
      • Unknown: The cluster status is unknown
        • If this occurs continuously, contact the administrator
      • Upgrading: The cluster is performing an upgrade
      Cluster Control | Buttons to change the cluster state
      • Start: Start a stopped cluster
      • Stop: Stop a running cluster
      • Restart: Restart a running cluster
      More Additional Features | Cluster-related management buttons
      • Service Status Synchronization: Query the current server status and synchronize it to the Console
      • Parameter Management: View and modify service configuration parameters
      • Add Broker Node: Add a Broker node
        • The Add Broker Node button is displayed only when configured as a cluster
      Service Termination | Button to terminate the service
      Table. Event Streams status information and additional features

Detailed Information

On the Event Streams List page, you can view the detailed information of the selected resource and edit it if necessary.

Category | Detailed description
Server Information | Server information configured in the cluster
  • Category: Server type (Zookeeper&Broker, Broker, Zookeeper, AKHQ)
  • Server Name: Server name
  • IP:Port: Server IP and port
  • NAT IP: NAT IP
  • Status: Server status
Service | Service name
Resource Type | Resource type
SRN | Unique resource ID in Samsung Cloud Platform
  • Means the cluster SRN
Resource Name | Resource name
  • Means the cluster name
Resource ID | Unique resource ID in the service
Creator | User who created the service
Creation Date/Time | Date and time the service was created
Modifier | User who last edited the service information
Modification Date/Time | Date and time the service information was last modified
Image Version | Installed service image and version information
  • Click the Edit button to upgrade the version
Cluster Name | Name of the cluster composed of the servers
Planned Compute | Status of resources with Planned Compute set
Maintenance Period | Patch work period setting status
  • If a maintenance period is required, click the Edit button to set it
Time Zone | Standard time zone used by the service
Zookeeper Port Number | Zookeeper port number
Broker Port Number | Kafka port number
AKHQ Connection Information | AKHQ connection information
ServiceWatch Log Collection | ServiceWatch log collection configuration status
  • If log collection configuration is required, click the Edit button next to log collection to set it
Network | Installed network information (VPC, Subnet)
IP Access Control | Service access policy settings
  • To add or delete an IP, click the Edit button
Zookeeper | Server type, base OS, and additional Disk information for the Zookeeper nodes
  • To modify the server type, click the Edit button next to the server type
    • Modifying the server type requires a server restart
  • To expand storage, click the Edit button next to the storage capacity
Broker | Server type, base OS, and additional Disk information for the Broker nodes
  • To modify the server type, click the Edit button next to the server type
    • Modifying the server type requires a server restart
  • To expand storage, click the Edit button next to the storage capacity
AKHQ | Server type and base OS information for the AKHQ node
  • To modify the server type, click the Edit button next to the server type
    • Modifying the server type requires a server restart
Table. Event Streams detailed information items

Tags

On the Event Streams List page, you can view the tag information of the selected resource and add, modify, or delete tags.

Category | Detailed description
Tag List | Tag list
  • You can view the Key and Value information of tags
  • Up to 50 tags can be added per resource
  • When entering tags, search and select from the previously created Key and Value list
Table. Event Streams Tag Tab Items

Activity History

On the Event Streams List page, you can view the activity history of the selected resource.

Category | Detailed description
Activity History List | Resource change history
  • Shows work details, work date/time, resource type, resource ID, resource name, event topic, work result, and worker information
  • The Detailed Search button provides a detailed search function
Table. Event Streams Activity History Tab Items

Event Streams Resource Management

If you need to change the configuration options of a created Event Streams resource, manage parameters, or add Broker nodes, you can perform these tasks on the Event Streams Details page.

Operating Control

If changes are needed to a running Event Streams resource, you can start, stop, or restart it.

To control the operation of Event Streams, follow the steps below.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. On the Service Home page, click the Event Streams menu. Navigate to the Event Streams List page.
  3. On the Event Streams List page, click the resource whose operation you want to control. Navigate to the Event Streams Details page.
  4. Check the Event Streams status and apply the change using the control buttons below.
    • Start: Starts the server where Event Streams is installed and the Event Streams service.
    • Stop: Stops the server where Event Streams is installed and the Event Streams service (Stopped).
    • Restart: Restarts only the Event Streams service.
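The control rules above (start a stopped cluster, stop or restart a running one) can be sketched as a small state/action table. The mapping and function are illustrative, inferred from the button descriptions, not a Console API.

```python
# Hypothetical mapping of cluster states to allowed control actions,
# based on the descriptions above.
ALLOWED_ACTIONS = {
    "Stopped": {"start"},
    "Running": {"stop", "restart"},
}

def can_perform(state: str, action: str) -> bool:
    """Return True if the control action is allowed in the given cluster state.
    Transitional states (Creating, Stopping, ...) allow no control actions."""
    return action in ALLOWED_ACTIONS.get(state, set())
```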

Synchronize Service Status

You can query the current server status and synchronize it to the Console.

To synchronize the service status of Event Streams, follow the steps below.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. On the Service Home page, click the Event Streams menu. Navigate to the Event Streams List page.
  3. On the Event Streams List page, click the resource whose service status you want to query. Navigate to the Event Streams Details page.
  4. Click the Service Status Synchronization button. The query takes a short time; while it runs, the cluster changes to the Synchronizing state.
  5. When the query is complete, the status in the server information is updated and the cluster returns to the Running state.

Parameter Management

Event Streams provides parameter query and modification functions.

To view and modify configuration parameters, follow the steps below.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. On the Service Home page, click the Event Streams menu. Navigate to the Event Streams List page.
  3. On the Event Streams List page, click the resource whose parameters you want to view and edit. Navigate to the Event Streams Details page.
  4. Click the Parameter Management button. Navigate to the Parameter Management page.
  5. On the Parameter Management page, click the Search button. The Database Search popup window opens.
  6. To view the parameter information, click the Confirm button. The query takes a short time.
    • You can modify the parameter information after performing a query.
  7. To edit the parameter information, click the Edit button, then enter the changes in the Custom Value area of the parameter to be edited.
    • When the application type is dynamic, the change is applied immediately; when it is static, a service restart is required, causing service interruption.
  8. When input is complete, click the Save button.
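The dynamic/static distinction above determines whether saving a change interrupts service. The sketch below illustrates that decision; the parameter names and their application types are hypothetical examples, not the service's actual parameter list.

```python
# Illustrative application types; 'dynamic' applies immediately, 'static'
# requires a restart, per the step above. The entries are hypothetical.
APPLICATION_TYPE = {
    "log.retention.hours": "dynamic",
    "zookeeper.connection.timeout.ms": "static",
}

def restart_required(changed_params: list) -> bool:
    """A restart (and thus a service interruption) is needed if any changed
    parameter is static; unknown parameters are treated conservatively."""
    return any(APPLICATION_TYPE.get(p, "static") == "static" for p in changed_params)
```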

Change Server Type

You can change the configured server type.

To change the server type, follow the steps below.

Caution
  • If the server type is configured as Standard, it cannot be changed to High Capacity. If you want to change to High Capacity, create a new service.
  • If you modify the server type, a server reboot is required. Please separately verify any SW license changes or SW settings and reflections due to spec changes.
  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. Click the Event Streams menu on the Service Home page. Navigate to the Event Streams list page.
  3. On the Event Streams list page, click the resource to change the server type. You will be taken to the Event Streams details page.
  4. At the bottom of the detailed information, click the Edit button next to the server type you want to change. The Edit Server Type popup window opens.
  5. Select the server type in the Edit Server Type popup window, then click the Confirm button.

Expanding storage

You can expand the storage added to the data area up to a maximum of 5TB based on the initially allocated capacity. You can expand the storage without stopping Event Streams, and if configured as a cluster, all nodes are expanded simultaneously.

Notice
  • If encryption is set on the existing Block Storage, encryption will also be applied to the additional Disk.
  • Disk size can only be increased, and the new size must be at least 16GB larger than the current disk size.

To increase storage capacity, follow the steps below.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
  3. On the Event Streams list page, click the resource whose storage you want to expand. You will be taken to the Event Streams details page.
  4. At the bottom of the detailed information, click the Edit button next to the added Disk you want to expand. The Disk Edit popup window opens.
  5. In the Disk Edit popup window, enter the expanded capacity, then click the Confirm button.
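The sizing rules above (grow by at least 16GB, never beyond the 5TB ceiling) can be checked before submitting a request. This is a minimal sketch of the documented limits, not a service API.

```python
MAX_TOTAL_GB = 5 * 1024    # 5 TB ceiling stated in the guide
MIN_INCREASE_GB = 16       # the disk must grow by at least 16 GB

def valid_expansion(current_gb, requested_gb):
    """Check a requested disk size against the documented expansion rules."""
    return (requested_gb - current_gb >= MIN_INCREASE_GB
            and requested_gb <= MAX_TOTAL_GB)
```

For example, expanding a 100GB disk to 116GB is allowed, while expanding it to 110GB is rejected because the increase is under 16GB.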

Add Broker Node

If Event Streams cluster expansion is required, you can add nodes with the same specifications as the Broker Node you are using. The added nodes are added to the existing cluster without server downtime, and the existing data is automatically distributed.

Notice
  • Up to 10 nodes can be used within the cluster. Please note that additional charges apply for created nodes.
  • Cluster performance may temporarily degrade while nodes are being added.

To add a Broker node, follow the steps below.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. On the Service Home page, click the Event Streams menu. Navigate to the Event Streams list page.
  3. On the Event Streams list page, click the resource to which you want to add a node. You will be taken to the Event Streams details page.
  4. Click the Broker Node Add button. Navigate to the Broker Node Add page.
  5. After entering the required information in each area, click the Complete button.
    Category | Required | Detailed description
    Server Name | Required | Server name where the Broker is installed
    • It is set to the server name configured in the original cluster.
    Cluster Name | Required | Cluster name
    • It is set to the cluster name configured in the original cluster.
    Number of additional Nodes | Required | Number of nodes to add
    • Up to 10 nodes per cluster
    Service Type > Server Type | Required | Server type where the Broker will be installed
    • It is set to the same server type as the original cluster.
    Service Type > Planned Compute | Optional | Status of resources with Planned Compute set
    • In Use: Number of resources with Planned Compute that are currently in use
    • Configured: Number of resources with Planned Compute set
    • Coverage Preview: Amount applied per resource by Planned Compute
    • Planned Compute Service Application: Go to the Planned Compute service application page
    Service Type > Block Storage | Required | Block Storage settings to be used on Broker nodes
    • The storage type and capacity set in the original cluster are applied identically
    Network | Required | Network where the servers are installed
    • The same network as the original cluster is applied
    Table. Event Streams Broker Node Additional Items
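The 10-node-per-cluster limit above can be validated before requesting additional Brokers. This is a sketch of the documented limit, not a platform API.

```python
MAX_BROKERS = 10  # per-cluster node limit stated in the guide

def can_add_brokers(current_count, additional):
    """True if adding `additional` Broker nodes stays within the cluster limit."""
    return 1 <= additional and current_count + additional <= MAX_BROKERS
```

A typical 3-Broker cluster can therefore grow by at most 7 more nodes.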

Cancel Event Streams

You can cancel unused Event Streams to reduce operating costs. However, canceling the service stops the running service immediately, so consider the impact of the service interruption carefully before proceeding.

To cancel Event Streams, follow the steps below.

  1. Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
  2. Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
  3. On the Event Streams List page, select the resource to cancel, and click the Cancel Service button.
  4. Once the cancellation is complete, check on the Event Streams List page whether the resource has been terminated.

1.3 - API Reference

API Reference

1.4 - CLI Reference

CLI Reference

1.5 - Release Note

Event Streams

2025.07.01
FEATURE Terraform and Disk Type Addition
  • Terraform support is provided.
  • HDD and HDD_KMS disk types are now also provided.
2025.02.27
NEW Event Streams Service Official Version Release
  • An Event Streams service that easily creates and manages Apache Kafka clusters in a web environment has been released.

2 - Search Engine

2.1 - Overview

Service Overview

Search Engine provides automated creation and configuration of the distributed search and analytics engines Elasticsearch and OpenSearch through a web-based console. Users can select a server type that fits the system configuration to set up a cluster, and it supports the data analysis and visualization tools Kibana and the OpenSearch dashboard.

Notice
  • Search Engine provides Elasticsearch Enterprise version and OpenSearch version.
  • Elasticsearch Enterprise’s software license uses a Bring Your Own License (BYOL), and the software license policy in cloud environments must follow the supplier’s policy.

Search Engine Cluster consists of multiple master nodes and data nodes. Data nodes can be installed from a minimum of 1 up to a maximum of 10, and are usually installed with 3 or more. If a master node is not installed separately, the data node also performs the role of the master node and can be installed up to a maximum of 10. When a master node is installed separately, data nodes can be up to 50.

Provided Features

Search Engine provides the following functions.

  • Auto Provisioning (Auto Provisioning): You can configure and set up Elasticsearch and OpenSearch clusters via UI.
  • Operation Control Management: Provides functionality to control the status of running servers. Restart is possible for reflecting configuration values, along with starting and stopping the cluster.
  • Backup and Recovery: Backup is possible using the built-in backup feature, and recovery can be performed to the point in time of the backup file.
  • Add Data Node: If cluster expansion is required, you can add nodes with the same specifications as the data nodes in use. Up to 10 nodes can be added within the cluster.
  • Visualization tool support: Provides data analysis and visualization tools, and supports Elasticsearch Kibana or OpenSearch dashboards.
  • Monitoring: CPU, memory, cluster performance monitoring information can be checked through the Cloud Monitoring service.

Components

Search Engine provides pre-validated engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.

Engine Version

Search Engine supported engine versions are as follows.

Technical support is available until the supplier's EoTS (End of Technical Service) date, and the EOS date, when new creation stops, is set to six months before the EoTS date.

Since the EOS and EoTS dates may change according to the supplier’s policy, please refer to the supplier’s license management policy page for details.

Information

Search Engine’s next version is scheduled to be provided after March 2026. The actual service provision schedule may change.

  • OpenSearch 3.4.0 version
Provided Version | EoS Date | EoTS Date
8.15.0 | 2027-01 (planned) | 2027-07-15
8.19.7 | 2027-01 (planned) | 2027-07-15
Table. Search Engine's Elasticsearch engine version
Provided Version | EoS Date | EoTS Date
2.19.3 | 2027-01 (planned) | 2027-07-15
3.4.0 | TBD | TBD
Table. Search Engine's OpenSearch engine version
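The "EOS is six months before EoTS" rule above can be computed directly. This is an illustrative sketch; the platform publishes the authoritative dates, which may change per the supplier's policy.

```python
from datetime import date

def eos_from_eots(eots):
    """EOS (when new creation stops) is six months before EoTS, per the policy."""
    month, year = eots.month - 6, eots.year
    if month < 1:          # wrap across a year boundary
        month += 12
        year -= 1
    return date(year, month, min(eots.day, 28))  # clamp day for short months
```

For an EoTS of 2027-07-15 this yields 2027-01-15, consistent with the "2027-01 (planned)" EoS column above.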

Server Type

The server types supported by Search Engine are as follows.

For detailed information about the server types provided by Search Engine, please refer to Search Engine Server Type.

Standard ses1v2m4
Category | Example | Detailed description
Server Type | Standard | Provided server types
  • Standard: Standard specifications (vCPU, Memory) configuration commonly used
  • High Capacity: High-capacity server specifications of 24 vCore or more
Server specifications | ses1 | Provided server specifications
  • ses1: Standard specifications (vCPU, Memory) configuration commonly used
  • seh2: Large-capacity server specifications
    • Provides servers with 24 vCore or more
Server specifications | v2 | Number of vCores
  • v2: 2 virtual cores
Server specifications | m4 | Memory capacity
  • m4: 4GB Memory
Table. Search Engine Server Type Components

Preliminary Service

This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.

Service CategoryServiceDetailed Description
NetworkingVPCA service that provides an independent virtual network in a cloud environment
Table. Search Engine Pre-service

2.1.1 - Server Type

Search Engine server type

Search Engine provides server types composed of various combinations of CPU, Memory, Network Bandwidth, and so on. When creating a Search Engine, Elasticsearch is installed according to the server type selected to match the purpose of use.

The server types supported by the Search Engine are as follows.

Standard ses1v2m4
Classification | Example | Detailed Description
Server Type | Standard | Provided server type distinction
  • Standard: Composed of standard specifications (vCPU, Memory) commonly used
  • High Capacity: Server specifications with higher capacity than Standard
Server Specification | ses1 | Classification of provided server type and generation
  • ses1: s means general specification, and 1 means generation
  • seh2: h means large-capacity server specification, and 2 means generation
Server Specification | v2 | Number of vCores
  • v2: 2 virtual cores
Server Specification | m4 | Memory Capacity
  • m4: 4GB Memory
Table. Search Engine server type format
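The naming scheme above is regular enough to parse mechanically. The sketch below splits a server type string into family, generation, vCPU count, and memory; it is an illustration of the naming convention, not a platform utility.

```python
import re

def parse_server_type(name):
    """Split a Search Engine server type such as 'ses1v2m4' into its parts."""
    m = re.fullmatch(r"(se[sh])(\d+)v(\d+)m(\d+)", name)
    if m is None:
        raise ValueError("unrecognized server type: " + name)
    family, generation, vcpu, memory = m.groups()
    return {
        "family": family,            # ses: standard spec, seh: high capacity
        "generation": int(generation),
        "vcpu": int(vcpu),
        "memory_gb": int(memory),
    }
```

For example, `parse_server_type("seh2v24m48")` identifies a 2nd-generation high-capacity type with 24 vCores and 48 GB of memory.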

ses1 server type

The ses1 server type of Search Engine is provided with standard specifications (vCPU, Memory) and is suitable for a variety of search and analytics workloads.

  • Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
  • Supports up to 16 vCPUs and 256 GB of memory
  • Up to 12.5 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
Standard | ses1v1m2 | 1 vCore | 2 GB | Up to 10 Gbps
Standard | ses1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps
Standard | ses1v2m16 | 2 vCore | 16 GB | Up to 10 Gbps
Standard | ses1v2m24 | 2 vCore | 24 GB | Up to 10 Gbps
Standard | ses1v2m32 | 2 vCore | 32 GB | Up to 10 Gbps
Standard | ses1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps
Standard | ses1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps
Standard | ses1v4m32 | 4 vCore | 32 GB | Up to 10 Gbps
Standard | ses1v4m48 | 4 vCore | 48 GB | Up to 10 Gbps
Standard | ses1v4m64 | 4 vCore | 64 GB | Up to 10 Gbps
Standard | ses1v6m12 | 6 vCore | 12 GB | Up to 10 Gbps
Standard | ses1v6m24 | 6 vCore | 24 GB | Up to 10 Gbps
Standard | ses1v6m48 | 6 vCore | 48 GB | Up to 10 Gbps
Standard | ses1v6m72 | 6 vCore | 72 GB | Up to 10 Gbps
Standard | ses1v6m96 | 6 vCore | 96 GB | Up to 10 Gbps
Standard | ses1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps
Standard | ses1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps
Standard | ses1v8m64 | 8 vCore | 64 GB | Up to 10 Gbps
Standard | ses1v8m96 | 8 vCore | 96 GB | Up to 10 Gbps
Standard | ses1v8m128 | 8 vCore | 128 GB | Up to 10 Gbps
Standard | ses1v10m20 | 10 vCore | 20 GB | Up to 10 Gbps
Standard | ses1v10m40 | 10 vCore | 40 GB | Up to 10 Gbps
Standard | ses1v10m80 | 10 vCore | 80 GB | Up to 10 Gbps
Standard | ses1v10m120 | 10 vCore | 120 GB | Up to 10 Gbps
Standard | ses1v10m160 | 10 vCore | 160 GB | Up to 10 Gbps
Standard | ses1v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps
Standard | ses1v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps
Standard | ses1v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps
Standard | ses1v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps
Standard | ses1v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps
Standard | ses1v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps
Standard | ses1v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps
Standard | ses1v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps
Standard | ses1v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps
Standard | ses1v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps
Standard | ses1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps
Standard | ses1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps
Standard | ses1v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps
Standard | ses1v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps
Standard | ses1v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps
Table. Search Engine server type specification - ses1 server type

ses2 server type

The ses2 server type of Search Engine is provided with standard specifications (vCPU, Memory) and is suitable for a variety of search and analytics workloads.

  • Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
  • Supports up to 16 vCPUs and 256 GB of memory
  • Up to 12.5 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
Standard | ses2v2m4 | 2 vCore | 4 GB | Up to 10 Gbps
Standard | ses2v2m8 | 2 vCore | 8 GB | Up to 10 Gbps
Standard | ses2v2m16 | 2 vCore | 16 GB | Up to 10 Gbps
Standard | ses2v2m24 | 2 vCore | 24 GB | Up to 10 Gbps
Standard | ses2v2m32 | 2 vCore | 32 GB | Up to 10 Gbps
Standard | ses2v4m8 | 4 vCore | 8 GB | Up to 10 Gbps
Standard | ses2v4m16 | 4 vCore | 16 GB | Up to 10 Gbps
Standard | ses2v4m32 | 4 vCore | 32 GB | Up to 10 Gbps
Standard | ses2v4m48 | 4 vCore | 48 GB | Up to 10 Gbps
Standard | ses2v4m64 | 4 vCore | 64 GB | Up to 10 Gbps
Standard | ses2v6m12 | 6 vCore | 12 GB | Up to 10 Gbps
Standard | ses2v6m24 | 6 vCore | 24 GB | Up to 10 Gbps
Standard | ses2v6m48 | 6 vCore | 48 GB | Up to 10 Gbps
Standard | ses2v6m72 | 6 vCore | 72 GB | Up to 10 Gbps
Standard | ses2v6m96 | 6 vCore | 96 GB | Up to 10 Gbps
Standard | ses2v8m16 | 8 vCore | 16 GB | Up to 10 Gbps
Standard | ses2v8m32 | 8 vCore | 32 GB | Up to 10 Gbps
Standard | ses2v8m64 | 8 vCore | 64 GB | Up to 10 Gbps
Standard | ses2v8m96 | 8 vCore | 96 GB | Up to 10 Gbps
Standard | ses2v8m128 | 8 vCore | 128 GB | Up to 10 Gbps
Standard | ses2v10m20 | 10 vCore | 20 GB | Up to 10 Gbps
Standard | ses2v10m40 | 10 vCore | 40 GB | Up to 10 Gbps
Standard | ses2v10m80 | 10 vCore | 80 GB | Up to 10 Gbps
Standard | ses2v10m120 | 10 vCore | 120 GB | Up to 10 Gbps
Standard | ses2v10m160 | 10 vCore | 160 GB | Up to 10 Gbps
Standard | ses2v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps
Standard | ses2v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps
Standard | ses2v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps
Standard | ses2v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps
Standard | ses2v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps
Standard | ses2v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps
Standard | ses2v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps
Standard | ses2v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps
Standard | ses2v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps
Standard | ses2v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps
Standard | ses2v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps
Standard | ses2v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps
Standard | ses2v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps
Standard | ses2v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps
Standard | ses2v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps
Table. Search Engine server type specification - ses2 server type

seh2 server type

The seh2 server type of Search Engine is provided with large-capacity server specifications and is suitable for workloads that process large-scale data.

  • Up to 3.2 GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
  • Supports up to 72 vCPUs and 288 GB of memory
  • Up to 25 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
High Capacity | seh2v24m48 | 24 vCore | 48 GB | Up to 25 Gbps
High Capacity | seh2v24m96 | 24 vCore | 96 GB | Up to 25 Gbps
High Capacity | seh2v24m192 | 24 vCore | 192 GB | Up to 25 Gbps
High Capacity | seh2v24m288 | 24 vCore | 288 GB | Up to 25 Gbps
High Capacity | seh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps
High Capacity | seh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps
High Capacity | seh2v32m256 | 32 vCore | 256 GB | Up to 25 Gbps
High Capacity | seh2v48m96 | 48 vCore | 96 GB | Up to 25 Gbps
High Capacity | seh2v48m192 | 48 vCore | 192 GB | Up to 25 Gbps
High Capacity | seh2v64m128 | 64 vCore | 128 GB | Up to 25 Gbps
High Capacity | seh2v64m256 | 64 vCore | 256 GB | Up to 25 Gbps
High Capacity | seh2v72m144 | 72 vCore | 144 GB | Up to 25 Gbps
High Capacity | seh2v72m288 | 72 vCore | 288 GB | Up to 25 Gbps
Table. Search Engine server type specification - seh2 server type

2.1.2 - Monitoring Metrics

Search Engine Monitoring Metrics

The following table shows the performance monitoring metrics of Search Engine that can be checked through Cloud Monitoring. For detailed Cloud Monitoring usage, please refer to the Cloud Monitoring guide.

For server monitoring metrics of the Search Engine, please refer to the Virtual Server Monitoring Metrics guide.

Performance Item | Detailed Description | Unit
Disk Usage | datadir usage | MB
Documents [Deleted] | total number of deleted documents | cnt
Documents [Existing] | total number of existing documents | cnt
Filesystem Bytes [Available] | available filesystem | bytes
Filesystem Bytes [Free] | free filesystem | bytes
Filesystem Bytes [Total] | total filesystem | bytes
Instance Status [PID] | Elasticsearch process PID | PID
JVM Heap Used [Init] | JVM heap used init | bytes
JVM Heap Used [MAX] | JVM heap used max | bytes
JVM Non Heap Used [Init] | JVM non-heap used init | bytes
JVM Non Heap Used [MAX] | JVM non-heap used max | bytes
Kibana Connections | Kibana connections | cnt
Kibana Memory Heap Allocated [Limit] | maximum allocated Node.js process heap size | bytes
Kibana Memory Heap Allocated [Total] | total allocated Node.js process heap size | bytes
Kibana Memory Heap Used | used Node.js process heap size | bytes
Kibana Process Uptime | Kibana process uptime | ms
Kibana Requests [Disconnected] | request count metric | cnt
Kibana Requests [Total] | request count metric | cnt
Kibana Response Time [Avg] | response time metric | ms
Kibana Response Time [MAX] | response time metric | ms
Kibana Status [PID] | Kibana process PID | PID
License Expiry Date [ms] | license expiry date | ms
License Status | license status | status
License Type | license type | type
Queue Time | queue time | ms
Segments | total number of segments | cnt
Segments Bytes | total segment size | bytes
Shards | cluster shard count | cnt
Store Bytes | total store size | bytes
Table. Search Engine Monitoring Metrics

2.2 - How-to guides

Users can create the Search Engine service by entering required information and selecting detailed options through Samsung Cloud Platform Console.

Create Search Engine

You can create and use the Search Engine service in Samsung Cloud Platform Console.

Notice

Before creating the service, make sure to configure the VPC Subnet type to General.

  • If the Subnet type is Local, you cannot create the Search Engine service.

Follow the procedure below to create a Search Engine.

Notice
The following explanation is for the case where the Elasticsearch Enterprise image is selected.
  1. Click the All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.

  2. Click the Create Search Engine button on the Service Home page. You will be moved to the Create Search Engine page.

  3. Enter the information required to create the service and select detailed options on the Create Search Engine page.

    • Select the required information in the Image and Version Selection area.
      Division | Required | Description
      Image | Required | Select the type of image provided
      • Elasticsearch Enterprise, OpenSearch
      Image Version | Required | Select the version of the selected image
      • List of versions of provided server images
      Table. Search Engine Image and Version Selection Items
    • Enter or select the required information in the Service Information Input area.
      Division | Required | Description
      Server Name Prefix | Required | Server name where Elasticsearch is installed
      • Start with lowercase English letters, and enter 3 to 13 characters using lowercase letters, numbers, and special characters (-)
      • Actual server name is created with postfix such as 001, 002 based on the server name
      Cluster Name | Required | Cluster name where servers are configured
      • Enter 3 to 20 characters using English letters
      • Cluster is a unit that bundles multiple servers
      Install MasterNode Separately > Use | Required | Whether to install the Master node separately
      • If Use is selected, Master node is installed separately
      • If Master node is not installed separately, data node performs master role as well
      Install MasterNode Separately > MasterNode Count | Required | Number of Master nodes
      • Master nodes are installed with fixed 3 units for recovery (Fail-over)
      Install MasterNode Separately > Server Type | Required | Master node server type
      • Standard: Standard specifications commonly used
      • High Capacity: Large capacity servers with 24vCore or more
      Install MasterNode Separately > Planned Compute | Optional | Resource status where Planned Compute is set
      • In Use: Number of resources in use among resources where Planned Compute is set
      • Set: Number of resources where Planned Compute is set
      • Coverage Preview: Amount applied with Planned Compute for each resource
      • Apply for Planned Compute Service: Move to Planned Compute service application page
      Install MasterNode Separately > Block Storage | Required | Block Storage type to be used for the Master node
      • Basic OS: Area where engine is installed
      • DATA: Data file storage area
        • Select storage type and enter capacity (for more details about each Block Storage type, refer to Create Block Storage)
          • SSD: High performance general volume
          • HDD: General volume
          • SSD_KMS/HDD_KMS: Additional encrypted volume using KMS(Key Management System) encryption key
        • Enter capacity in multiples of 8 in the range of 16 ~ 5,120
      • Add Disk: Data storage area
        • Select Use and enter storage Capacity
        • Click + button to add storage, and click x button to delete. You can add up to 9.
        • Enter capacity in multiples of 8 in the range of 16 ~ 5,120, and you can create up to 9
      Node Count | Required | Number of data nodes
      • If Master node is installed separately, select 2 or more; otherwise, select 1 or more
      Service Type > Server Type | Required | Data node server type
      • Standard: Standard specifications commonly used
      • High Capacity: Large capacity servers with 24vCore or more
      Service Type > Planned Compute | Optional | Resource status where Planned Compute is set
      • In Use: Number of resources in use among resources where Planned Compute is set
      • Set: Number of resources where Planned Compute is set
      • Coverage Preview: Amount applied with Planned Compute for each resource
      • Apply for Planned Compute Service: Move to Planned Compute service application page
      Service Type > Block Storage | Required | Block Storage type to be used for data nodes
      • Basic OS: Area where engine is installed
      • DATA: Data file storage area
        • Select storage type and enter capacity (for more details about each Block Storage type, refer to Create Block Storage)
          • SSD: High performance general volume
          • HDD: General volume
          • SSD_KMS/HDD_KMS: Additional encrypted volume using KMS(Key Management System) encryption key
        • Enter capacity in multiples of 8 in the range of 16 ~ 5,120
      • Add Disk: Data, backup additional storage area
        • Select Use and enter storage Purpose, Capacity
        • Click + button to add storage, and click x button to delete. You can add up to 9.
        • Enter capacity in multiples of 8 in the range of 16 ~ 5,120, and you can create up to 9
      Kibana > Server Type | Required | Server type where Kibana is installed
      • Standard: Standard specifications commonly used
      Kibana > Planned Compute | Optional | Resource status where Planned Compute is set
      • In Use: Number of resources in use among resources where Planned Compute is set
      • Set: Number of resources where Planned Compute is set
      • Coverage Preview: Amount applied with Planned Compute for each resource
      • Apply for Planned Compute Service: Move to Planned Compute service application page
      Kibana > Block Storage | Required | Block Storage type to be used for the server where Kibana is installed
      • Basic OS: Area where engine is installed
      Network > Common Settings | Required | Network settings where servers created in the service are installed
      • Select to apply the same settings to all servers being installed
      • Select previously created VPC and Subnet
      • IP: Only automatic creation is possible
      • Public NAT settings are only possible with per-server settings
      Network > Per-Server Settings | Required | Network settings where servers created in the service are installed
      • Select to apply different settings for each server being installed
      • Select previously created VPC and Subnet
      • IP: Enter IP for each server
      • Public NAT function can be used only when VPC is connected to Internet Gateway. If Use is checked, you can select from reserved IPs in Public IP of VPC product. For more information, refer to Create Public IP
      IP Access Control | Optional | Service access policy settings
      • Access policy is set for IPs entered on the page, so separate Security Group policy settings are not required
      • Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click Add button
      • To delete entered IP, click x button next to the entered IP
      Maintenance Window | Optional | Search Engine maintenance window
      • If Use is selected, set day of week, start time, and duration
      • It is recommended to set a maintenance window for stable service management. Patch work proceeds at the set time and service interruption occurs
      • If set to Not Used, problems caused by not applying patches are not the responsibility of the company
      Table. Search Engine Service Information Input Items
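Two of the input rules above lend themselves to a quick validation sketch: the Server Name Prefix format and the IP Access Control entry format. These helper names are illustrative, not part of the console.

```python
import ipaddress
import re

def valid_server_name_prefix(name):
    """Start with a lowercase letter; 3-13 chars of lowercase letters, digits, '-'."""
    return re.fullmatch(r"[a-z][a-z0-9-]{2,12}", name) is not None

def valid_access_entry(entry):
    """Accept a single IP (e.g. 192.168.10.1) or a CIDR block (e.g. 192.168.10.0/24)."""
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```

Running such checks before clicking Add avoids round-trips with the console's own validation.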
    • Enter or select the required information in the Database Configuration Required Information Input area.
      Division | Required | Description
      Backup > Use | Optional | Whether to use node backup
      • If node backup is selected, select retention period and backup start time
      Backup > Retention Period | Optional | Backup retention period
      • Select backup retention period. File retention period can be set from 7 days to 35 days
      • Separate charges occur for backup files depending on capacity
      Backup > Backup Start Time | Optional | Backup start time
      • Select backup start time
      • Backup execution minutes are set randomly, and backup end time cannot be set
      Cluster Port Number | Required | Elasticsearch connection port number
      • Enter a value from 1200 to 65535; 9300 (the Elasticsearch internal port) and 5301 (the Kibana port) cannot be used
      Elastic Username | Required | Elasticsearch username
      • Enter within 2 to 20 characters using lowercase English letters
      • Following usernames cannot be used
        • apm_system, beats_system, elastic, kibana, kibana_system, logstash_system, remote_monitoring_user, scp_kibana_system, scp_manager, maxigent_cl
      Elastic Password | Required | Elasticsearch connection password
      • Enter 8 to 30 characters including English letters, numbers, and special characters (excluding ", , \)
      Elastic Password Confirmation | Required | Elasticsearch connection password confirmation
      • Re-enter the Elasticsearch connection password identically
      License Key | Required | Elasticsearch License Key
      • Enter the entire content in the issued license file (.json)
      • If the entered license key is invalid, service creation may not be possible
      • OpenSearch does not require License Key
      Time Zone | Optional | Standard time zone where the service is used
      Table. Search Engine Database Configuration Required Information Input Items
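The port and username constraints above can likewise be checked before submitting the form. This is a sketch of the documented rules; the function names are illustrative.

```python
import re

# Reserved account names listed in the guide
RESERVED_USERNAMES = {
    "apm_system", "beats_system", "elastic", "kibana", "kibana_system",
    "logstash_system", "remote_monitoring_user", "scp_kibana_system",
    "scp_manager", "maxigent_cl",
}

def valid_cluster_port(port):
    """1200-65535, excluding 9300 (Elasticsearch internal) and 5301 (Kibana)."""
    return 1200 <= port <= 65535 and port not in (9300, 5301)

def valid_elastic_username(name):
    """2-20 lowercase English letters, and not a reserved account name."""
    return re.fullmatch(r"[a-z]{2,20}", name) is not None and name not in RESERVED_USERNAMES
```

For example, port 9200 and the username "searchadmin" pass, while 9300 and "elastic" are rejected.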
    • Enter or select the required information in the Additional Information Input area.
      Division | Required | Description
      Tags | Optional | Add tags
      • Create and add tags by clicking Add Tag button or add existing tags
      • Can add up to 50 tags
      • Added new tags are applied after service creation is completed
      Table. Search Engine Service Additional Information Input Items
  4. Check the detailed information and estimated billing amount in the Summary panel, and click the Complete button.

    • When creation is completed, check the created resource on the Resource List page.

Check Search Engine Detailed Information

On the Search Engine service pages, you can check and modify the entire resource list and detailed information. The Search Engine Details page consists of Details, Tags, and Operation History tabs.

Follow the procedure below to check the detailed information of Search Engine service.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to check detailed information on the Search Engine List page. You will be moved to the Search Engine Details page.
    • Status information and additional feature information are displayed at the top of the Search Engine Details page.
      Division | Description
      Cluster Status | Cluster status
      • Creating: Cluster is being created
      • Editing: Cluster is changing to state where Operation is being performed
      • Error: Cluster failed while performing operation
        • If it occurs continuously, contact administrator
      • Failed: Cluster failed during creation process
      • Restarting: Cluster is being restarted
      • Running: Cluster is operating normally
      • Starting: Cluster is being started
      • Stopped: Cluster is stopped
      • Stopping: Cluster is in stopping state
      • Synchronizing: Cluster is being synchronized
      • Terminating: Cluster is being deleted
      • Unknown: Cluster status is unknown
        • If it occurs continuously, contact administrator
      • Upgrading: Cluster is changing to state where upgrade is being performed
      Cluster Control | Buttons to change the cluster status
      • Start: Starts the stopped cluster
      • Stop: Stops the running cluster
      • Restart: Restarts the running cluster
      Additional Features (More) | Cluster-related management buttons
      • Synchronize Service Status: Can synchronize to Console by checking current server status
      • Backup History: If backup is set, check whether backup is executed normally and history
      • Cluster Recovery: Recovers cluster based on specific time point
      • Add Node: Adds data nodes
      Service Termination | Button to terminate the service
      Table. Search Engine Status Information and Additional Features
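The control buttons above only make sense in certain cluster states. The mapping below is an assumption inferred from the button descriptions (Start applies to a stopped cluster, Stop and Restart to a running one), not a documented state machine.

```python
# Assumed mapping of control buttons to the states they apply in.
CONTROL_FROM_STATE = {
    "Start": {"Stopped"},
    "Stop": {"Running"},
    "Restart": {"Running"},
}

def control_allowed(action, state):
    """True if the control button is expected to apply in the given cluster state."""
    return state in CONTROL_FROM_STATE.get(action, set())
```

Transitional states such as Creating, Stopping, or Synchronizing accept none of the controls under this assumption.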

Details

You can check the detailed information of the resource selected on the Search Engine List page and modify information if necessary.

Division | Description
Server Information | Server information configured in the cluster
  • Category: Server type (Master&Data, Master, Data, Kibana)
  • Server Name: Server name
  • IP:Port: Server IP and port
  • NAT IP: NAT IP
  • Status: Server status
Service | Service name
Resource Type | Resource type
SRN | Unique resource ID in Samsung Cloud Platform
  • Means cluster SRN
Resource Name | Resource name
  • Means cluster name
Resource ID | Unique resource ID in the service
Creator | User who created the service
Created At | Date and time when the service was created
Modifier | User who modified the service information
Modified At | Date and time when the service information was modified
Image/Version | Installed service image and version information
Cluster Name | Cluster name where servers are configured
Planned Compute | Resource status where Planned Compute is set
Maintenance Window | Maintenance window status
  • If maintenance window setting is needed, click Modify button to set
Backup | Backup setting status
  • If backup setting is needed, click Modify button to set
Time Zone | Standard time zone where the service is used
License | Elasticsearch license information
  • License update is possible in Kibana > Stack Management > License management
  • If License expires, service cannot be used
Elastic UsernameElasticsearch username
Kibana Connection InformationKibana connection information
NetworkInstalled network information (VPC, Subnet)
IP Access ControlService access policy settings
  • If IP addition or deletion is needed, click Modify button to set
MasterServer type, basic OS, additional Disk information for Master node
  • If server type modification is needed, click Modify button next to server type to set
    • If server type is modified, server restart is required
  • If storage expansion is needed, click Modify button next to storage capacity to expand
  • If storage addition is needed, click Add Disk button next to additional Disk to add
DataServer type, basic OS, additional Disk information for Broker node
  • If server type modification is needed, click Modify button next to server type to set
    • If server type is modified, server restart is required
  • If storage addition is needed, click Add Disk button next to additional Disk to add
KibanaServer type, basic OS information for Kibana node
  • If server type modification is needed, click Modify button next to server type to set
    • If server type is modified, server restart is required
Table. Search Engine Details Information Items

Tags

You can check the tag information of the resource selected on the Search Engine List page and add, change, or delete tags.

Division | Description
Tag List | List of tags
  • You can check tag Key and Value information
  • Up to 50 tags can be added per resource
  • When entering tags, search and select from previously created Key and Value lists
Table. Search Engine Tags Tab Items

Operation History

You can check the operation history of the resource selected on the Search Engine List page.

Division | Description
Operation History List | Resource change history
  • Check operation details, operation date and time, resource type, resource ID, resource name, event topic, operation result, and operator information
Table. Search Engine Operation History Tab Detailed Information Items

Manage Search Engine Resources

If you need to change existing configuration options of created Search Engine resources, manage parameters, or add Node configuration, you can perform tasks on the Search Engine Details page.

Control Operation

If there are changes to running Search Engine resources, you can start, stop, or restart.

Follow the procedure below to control the operation of Search Engine.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to control operation on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Check Search Engine status and complete changes through the following control buttons.
    • Start: Starts both the server where the Search Engine service is installed and the Search Engine service.
    • Stop: Stops both the server where the Search Engine service is installed and the Search Engine service.
    • Restart: Restarts only the Search Engine service.
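After a start or restart, you can confirm that the engine itself is responding by querying Elasticsearch's standard `_cluster/health` endpoint, independently of the Console. The sketch below is illustrative; the host, port, and any authentication depend on your cluster's connection information shown on the Details page.

```python
import json
from urllib.request import urlopen  # used only by the optional network helper

def parse_cluster_health(body: str) -> str:
    """Return the cluster status ("green", "yellow", or "red") from an
    Elasticsearch _cluster/health response body."""
    return json.loads(body)["status"]

def check_cluster(host: str, port: int) -> str:
    """Query the standard Elasticsearch health endpoint.
    Host/port are placeholders; add credentials as your cluster requires."""
    with urlopen(f"http://{host}:{port}/_cluster/health") as resp:
        return parse_cluster_health(resp.read().decode())
```

A "green" status means all primary and replica shards are allocated; "yellow" means some replicas are unassigned; "red" means some primary shards are unavailable.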

Synchronize Service Status

You can check the current server status and synchronize it to Console.

Follow the procedure below to synchronize the service status of Search Engine.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to check service status on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Synchronize Service Status button. It takes some time to check, and cluster changes to Synchronizing status during checking.
  5. When checking is completed, status is updated in the server information item, and cluster changes to Running status.

Change Server Type

You can change the configured server type.

Follow the procedure below to change the server type.

Caution
  • If server type is configured as Standard, it cannot be changed to High Capacity. If you want to change to High Capacity, create a new service.
  • If the server type is modified, a server restart is required. Check separately for any SW license changes, or SW settings that must be applied, according to the specification change.
  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to change server type on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Modify button of the Server Type you want to change at the bottom of detailed information. Modify Server Type popup window opens.
  5. Select server type in the Modify Server Type popup window, and click Confirm button.

Expand Storage

You can expand the storage added as the data area up to 5 TB, based on the initially allocated capacity. Storage can be expanded without stopping Search Engine, and if configured as a cluster, all nodes are expanded simultaneously.

Notice
  • If the existing Block Storage is encrypted, encryption is also applied to the additional disk.
  • Disk size can only be increased, and the new size must be at least 16 GB larger than the current size.
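The expansion constraints above can be sketched as a simple validator. This is only an illustration of the rules stated in this guide (minimum 16 GB increment, 5 TB = 5,120 GB ceiling); the function name and signature are hypothetical, not a platform API.

```python
def validate_disk_expansion(current_gb: int, requested_gb: int) -> bool:
    """Check a requested data-disk expansion against this guide's rules:
    - the new size must be at least 16 GB larger than the current size
    - total capacity may not exceed 5 TB (5,120 GB)
    """
    MIN_INCREMENT_GB = 16
    MAX_TOTAL_GB = 5120
    if requested_gb < current_gb + MIN_INCREMENT_GB:
        return False  # must grow by at least 16 GB
    if requested_gb > MAX_TOTAL_GB:
        return False  # cannot exceed the 5 TB ceiling
    return True
```

For example, expanding a 100 GB disk to 110 GB would be rejected (increment under 16 GB), while 100 GB to 116 GB would be accepted.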

Follow the procedure below to expand storage capacity.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to expand storage on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Modify button of the Additional Disk you want to expand at the bottom of detailed information. Modify Disk popup window opens.
  5. Enter expansion capacity in the Modify Disk popup window, and click Confirm button.

Add Storage

If you need more than 5TB of data storage space, you can add storage.

Notice
  • If the existing Block Storage is encrypted, encryption is also applied to the additional disk.

Follow the procedure below to add storage capacity.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to add storage on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Add Disk button at the bottom of detailed information. Add Disk popup window opens.
  5. Enter purpose and capacity in the Add Disk popup window, and click Confirm button.

Backup Search Engine

Through the backup settings, users can set the data retention period and backup cycle, and can look up and delete backup history through the backup history function.

Set Backup

For the procedure of setting backup while creating Search Engine, refer to Create Search Engine guide, and follow the procedure below to modify backup settings of created resources.

Caution
  • If backup is set, backups are performed at the specified time, and additional charges apply depending on the backup size.
  • If the backup setting is changed to Not Set, backup execution stops immediately, and stored backup data is deleted and can no longer be used.
  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to set backup on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Modify button in the backup item. Modify Backup popup window opens.
  5. To set backup, select Use in the Modify Backup popup window, select the retention period, backup start time, and Archive backup cycle, and click the Confirm button.
    • To stop backup, deselect Use in the Modify Backup popup window, and click the Confirm button.

Check Backup History

Notice
To receive notifications for backup success and failure, use the Notification Manager product. For a detailed guide on setting a notification policy, refer to Create Notification Policy.

Follow the procedure below to check backup history.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource for which you want to check backup history on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Backup History button. Backup History popup window opens.
  5. In the Backup History popup window, you can check backup status, version, backup start date and time, backup completion date and time, and capacity.

Delete Backup File

Follow the procedure below to delete backup history.

Caution
Deleted backup files cannot be restored, so make sure the data is no longer needed before deleting.
  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource whose backup files you want to delete on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Backup History button. Backup History popup window opens.
  5. Check the file you want to delete in the Backup History popup window, and click Delete button.

Recover Search Engine

If recovery from a backup file is needed due to a failure or data loss, you can recover the cluster to a specific point in time using the cluster recovery function.

Caution
Recovery requires at least as much capacity as the data-type disk capacity. If disk capacity is insufficient, recovery may fail.

Notice
Cluster recovery restores the same configuration as the original. For example, if the original was configured with 3 Master nodes and 2 Data nodes, it is restored with the same configuration.

Follow the procedure below to recover Search Engine.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource you want to recover on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Cluster Recovery button. You will be moved to the Cluster Recovery page.
  5. Enter the corresponding information in the Cluster Recovery Configuration area, and click Complete button.
    Division | Required | Description
    Recovery Time Point | Required | Set the point in time to recover to
    • Select from the list of backup file time points displayed
    Server Name Prefix | Required | Recovery server name
    • Start with a lowercase English letter and enter 3 to 16 characters using lowercase letters, numbers, and hyphens (-)
    • The actual server name is created by appending a postfix such as 001, 002 to the server name
    Cluster Name | Required | Recovery server cluster name
    • Enter 3 to 20 characters using English letters
    • A cluster is a unit that bundles multiple servers
    Node Count | Required | Number of data nodes
    • Set to the same number of nodes as the original cluster
    Service Type > Server Type | Required | Data node server type
    • Set identically to the server type set in the original cluster
    Service Type > Planned Compute | Optional | Status of resources with Planned Compute set
    • In Use: Number of resources in use among resources with Planned Compute set
    • Set: Number of resources with Planned Compute set
    • Coverage Preview: Amount covered by Planned Compute for each resource
    • Apply for Planned Compute Service: Moves to the Planned Compute service application page
    Service Type > Block Storage | Required | Block Storage to be used for data nodes
    • Basic OS: Area where the engine is installed
    • DATA: Data file storage area
      • Applied identically to the storage type set in the original cluster
      • Enter capacity in multiples of 8 in the range of 16 to 5,120
    • Add Disk: Additional storage area for data and backup
      • Select Use and enter the storage purpose and capacity
      • Click the + button to add storage, and the x button to delete it
      • Enter capacity in multiples of 8 in the range of 16 to 5,120; up to 9 disks can be created
    Install Master Node Separately > Use | Required | Whether to install Master nodes separately
    • Applied identically to the installation status of the original cluster
    Install Master Node Separately > Master Node Count | Required | Number of Master nodes
    Install Master Node Separately > Server Type | Required | Master node server type
    • Set identically to the server type set in the original cluster
    Install Master Node Separately > Planned Compute | Optional | Status of resources with Planned Compute set
    • In Use: Number of resources in use among resources with Planned Compute set
    • Set: Number of resources with Planned Compute set
    • Coverage Preview: Amount covered by Planned Compute for each resource
    • Apply for Planned Compute Service: Moves to the Planned Compute service application page
    Install Master Node Separately > Block Storage | Required | Block Storage to be used for Master nodes
    • Basic OS: Area where the engine is installed
    • DATA: Data file storage area
      • Applied identically to the storage type set in the original cluster
      • Enter capacity in multiples of 8 in the range of 16 to 5,120
    • Add Disk: Additional storage area for data
      • Select Use and enter the storage capacity
      • Click the + button to add storage, and the x button to delete it
      • Enter capacity in multiples of 8 in the range of 16 to 5,120; up to 9 disks can be created
    Kibana > Server Type | Required | Kibana node server type
    • Set identically to the server type set in the original cluster
    Kibana > Planned Compute | Optional | Status of resources with Planned Compute set
    • In Use: Number of resources in use among resources with Planned Compute set
    • Set: Number of resources with Planned Compute set
    • Coverage Preview: Amount covered by Planned Compute for each resource
    • Apply for Planned Compute Service: Moves to the Planned Compute service application page
    Kibana > Block Storage | Required | Block Storage to be used for the Kibana node
    • Basic OS: Area where the engine is installed
    Cluster Port Number | Required | Elasticsearch connection port number
    • Set identically to the port number set in the original cluster
    License Key | Required | Elasticsearch License Key
    • Enter the entire content of the issued license file (.json)
    • If the entered license key is invalid, service creation may not be possible
    • OpenSearch does not require a License Key
    IP Access Control | Optional | Service access policy settings
    • An access policy is set for the IPs entered on the page, so separate Security Group policy settings are not required
    • Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click the Add button
    • To delete an entered IP, click the x button next to it
    Maintenance Window | Optional | Maintenance window
    • If Use is selected, set the day of the week, start time, and duration
    • Setting a maintenance window is recommended for stable service management; patch work proceeds at the set time and service interruption occurs
    • If set to Not Used, problems caused by unapplied patches are not the responsibility of the company
    Table. Search Engine Recovery Configuration Items
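The input formats for the server name prefix, cluster name, and IP access control entries can be sketched as validators. These are illustrative checks derived from the rules stated in the table above; the names and helper function are hypothetical, not a platform API.

```python
import ipaddress
import re

# Format rules from this guide (illustrative regular expressions):
SERVER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,15}$")   # lowercase start, 3-16 chars total
CLUSTER_NAME_RE = re.compile(r"^[A-Za-z]{3,20}$")        # 3-20 English letters

def validate_ip_entry(entry: str) -> bool:
    """Accept a single IP (e.g. 192.168.10.1) or a CIDR block
    (e.g. 192.168.10.0/24), matching the IP Access Control formats."""
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=False)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```

For instance, `search-node01` is a valid server name prefix, while `1abc` is rejected because it starts with a digit.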

Add Node

If Search Engine cluster expansion is needed, you can add nodes with the same specifications as currently used data nodes.

Notice
  • You can use up to 10 nodes within a cluster. Note that additional charges occur for created nodes.
  • During node addition, cluster performance may degrade.

Follow the procedure below to add nodes.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Click the resource to which you want to add nodes on the Search Engine List page. You will be moved to the Search Engine Details page.
  4. Click Add Node button. You will be moved to the Add Node page.
  5. Enter the corresponding information in the Required Information Input area, and click Complete button.
    Division | Required | Description
    Server Name Prefix | Required | Data node server name
    • Set to the server name set in the original cluster
    Cluster Name | Required | Cluster name
    • Set to the cluster name set in the original cluster
    Additional Node Count | Required | Number of nodes to add
    • Up to 10 nodes can be used in one cluster
    Service Type > Server Type | Required | Data node server type
    • Set identically to the server type set in the original cluster
    Service Type > Planned Compute | Optional | Status of resources with Planned Compute set
    • In Use: Number of resources in use among resources with Planned Compute set
    • Set: Number of resources with Planned Compute set
    • Coverage Preview: Amount covered by Planned Compute for each resource
    • Apply for Planned Compute Service: Moves to the Planned Compute service application page
    Service Type > Block Storage | Required | Block Storage settings to be used for data nodes
    • The storage type and capacity set in the original cluster are applied identically
    Network | Required | Network where the servers are installed
    • Applied identically to the network set in the original cluster
    Table. Search Engine Node Addition Items

Terminate Search Engine

You can reduce operating costs by terminating unused Search Engine resources. However, terminating the service stops the running service immediately, so fully consider the impact of the service interruption before proceeding.

Follow the procedure below to terminate Search Engine.

  1. Click All Services > Data Analytics > Search Engine menu. You will be moved to the Service Home page of Search Engine.
  2. Click Search Engine menu on the Service Home page. You will be moved to the Search Engine List page.
  3. Select the resource to terminate on the Search Engine List page, and click Terminate Service button.
  4. When termination is completed, check if the resource is terminated on the Search Engine list page.

2.3 - API Reference

API Reference

2.4 - CLI Reference

CLI Reference

2.5 - Release Note

Search Engine

2025.07.01
FEATURE New features: Terraform support and disk types added
  • OpenSearch 2.17.1 is newly provided.
  • Terraform support is provided.
  • HDD and HDD_KMS disk types are now provided.
2025.02.27
NEW Search Engine Service Official Release
  • The Search Engine service, which lets you easily create and manage Elasticsearch Enterprise in a web environment, has been released.

3 - Vertica(DBaaS)

3.1 - Overview

Service Overview

Vertica(DBaaS) is a high-availability enterprise database based on a Data Warehouse for large-scale data analysis and processing. It is a data analysis platform that, through a single engine, can perform basic analyses such as queries on data from various sources without moving the data, as well as AI analyses such as machine learning. Samsung Cloud Platform adds DB management functions such as high-availability configuration, backup/recovery, patching, parameter management, and monitoring, ensuring stable management of single instances or critical data and automating tasks throughout the database lifecycle. Additionally, to prepare for issues with DB servers or data, it provides automatic backups at user-specified times, supporting data recovery at the desired point in time.

Service Architecture Diagram

Diagram
Figure. Vertica diagram

Provided Features

Vertica (DBaaS) provides the following features.

  • Auto Provisioning: Automatically installs the DB of the standard version of Samsung Cloud Platform based on Virtual Servers of various specifications.
  • Cluster configuration: Provides its own high-availability architecture in a Masterless form.
  • Operation Control Management: Provides a function to control the status of running servers. Servers can be started and stopped, and can be restarted if there is a problem with the DB or to apply configuration values.
  • Backup and Recovery: Provides a data backup function based on its own backup commands. The backup retention period and backup start time can be set by the user, and additional charges apply based on backup size. It also provides a recovery function for backed-up data; when the user performs a recovery, a separate DB is created and recovery proceeds to the point selected by the user (backup save point, user-specified point). When recovering a Database, you can choose to install the Management Console for use.
  • Service status query: You can view the final status of the current DB service.
  • Monitoring: CPU, memory, DB performance monitoring information can be checked through the Cloud Monitoring service.
  • High-performance processing of large-scale data: Guarantees stable performance in massively parallel processing (MPP) environments with mixed SQL query workloads. Vertica processes queries in a distributed manner, and because a query can be started from any node, there is no single point of failure that would prevent queries from executing if a specific node fails.

Components

Vertica(DBaaS) provides pre-validated engine versions and various server types. Users can select and use them according to the scale of the service they want to configure.

Engine Version

The engine versions supported by Vertica(DBaaS) are as follows.

Technical support can be used until the supplier’s EoTS (End of Technical Service) date, and the EOS date when new creation is stopped is set to six months before the EoTS date.

According to the supplier’s policy, the EOS and EoTS dates may change, so please refer to the supplier’s license management policy page for details.

Provided version | EOS date (Samsung Cloud Platform new creation stop date) | EoTS date (supplier technical support end date)
24.2.0-2 | 2026-09 (planned) | 2027-04-30
Table. Vertica (DBaaS) Service Provision Engine Version

Server Type

The server types supported by Vertica (DBaaS) are as follows.

For detailed information about the server types provided by Vertica (DBaaS), please refer to Vertica server types.

Category | Example | Detailed Description
Server Type | Standard | Provided server types
  • Standard: Standard specifications (vCPU, Memory) commonly used
  • High Capacity: Large-capacity server specifications with 24 vCore or more
Server specifications | db1 | Provided server specifications
  • db1: Standard specifications (vCPU, Memory) commonly used
  • dbh2: Large-scale server specifications
    • Provides servers with 24 vCore or more
Server specifications | v2 | vCore count
  • v2: 2 virtual cores
Server specifications | m4 | Memory capacity
  • m4: 4GB Memory
Table. Vertica (DBaaS) server type components

Preliminary Service

This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.

Service CategoryServiceDetailed Description
NetworkingVPCA service that provides an independent virtual network in a cloud environment
Table. Vertica (DBaaS) Preliminary Service

3.1.1 - Server Type

Vertica(DBaaS) server type

Vertica(DBaaS) provides a server type composed of various combinations such as CPU, Memory, Network Bandwidth, etc. When creating Vertica(DBaaS), the Database Engine is installed according to the server type selected for the purpose of use.

The server types supported by Vertica(DBaaS) are as follows.

Example server type: Standard db1v2m4

Classification | Example | Detailed Description
Server Type | Standard | Provided server type classification
  • Standard: Composed of standard specifications (vCPU, Memory) commonly used
  • High Capacity: Server specifications with higher capacity than Standard
Server Specification | db1 | Provided server type and generation
  • db: means general specification, and 1 means the generation
  • dbh: h means large-capacity server specification, and 2 means the generation
Server Specification | v2 | Number of vCores
  • v2: 2 virtual cores
Server Specification | m4 | Memory capacity
  • m4: 4GB Memory
Table. Vertica(DBaaS) server type format
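The naming scheme above can be decoded mechanically. The sketch below parses a server type string such as `db1v2m4` or `dbh2v24m48`; the function and field names are illustrative, and only the format itself comes from this guide.

```python
import re

# Pattern for the server type format described above:
# family (db = general, dbh = high capacity), generation, vCores, memory in GB.
TYPE_RE = re.compile(r"^(db|dbh)(\d+)v(\d+)m(\d+)$")

def parse_server_type(name: str) -> dict:
    """Decode a Vertica(DBaaS) server type string into its components."""
    m = TYPE_RE.match(name)
    if not m:
        raise ValueError(f"unrecognized server type: {name}")
    family, gen, vcpu, mem = m.groups()
    return {
        "category": "High Capacity" if family == "dbh" else "Standard",
        "generation": int(gen),
        "vcpu": int(vcpu),        # number of virtual cores
        "memory_gb": int(mem),    # memory capacity in GB
    }
```

For example, `parse_server_type("dbh2v24m48")` identifies a High Capacity, second-generation server with 24 vCores and 48 GB of memory.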

db1 server type

The db1 server type of Vertica(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.

  • Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
  • Supports up to 16 vCPUs and 256 GB of memory
  • Up to 12.5 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
Standard | db1v1m2 | 1 vCore | 2 GB | Up to 10 Gbps
Standard | db1v2m4 | 2 vCore | 4 GB | Up to 10 Gbps
Standard | db1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps
Standard | db1v2m16 | 2 vCore | 16 GB | Up to 10 Gbps
Standard | db1v2m24 | 2 vCore | 24 GB | Up to 10 Gbps
Standard | db1v2m32 | 2 vCore | 32 GB | Up to 10 Gbps
Standard | db1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps
Standard | db1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps
Standard | db1v4m32 | 4 vCore | 32 GB | Up to 10 Gbps
Standard | db1v4m48 | 4 vCore | 48 GB | Up to 10 Gbps
Standard | db1v4m64 | 4 vCore | 64 GB | Up to 10 Gbps
Standard | db1v6m12 | 6 vCore | 12 GB | Up to 10 Gbps
Standard | db1v6m24 | 6 vCore | 24 GB | Up to 10 Gbps
Standard | db1v6m48 | 6 vCore | 48 GB | Up to 10 Gbps
Standard | db1v6m72 | 6 vCore | 72 GB | Up to 10 Gbps
Standard | db1v6m96 | 6 vCore | 96 GB | Up to 10 Gbps
Standard | db1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps
Standard | db1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps
Standard | db1v8m64 | 8 vCore | 64 GB | Up to 10 Gbps
Standard | db1v8m96 | 8 vCore | 96 GB | Up to 10 Gbps
Standard | db1v8m128 | 8 vCore | 128 GB | Up to 10 Gbps
Standard | db1v10m20 | 10 vCore | 20 GB | Up to 10 Gbps
Standard | db1v10m40 | 10 vCore | 40 GB | Up to 10 Gbps
Standard | db1v10m80 | 10 vCore | 80 GB | Up to 10 Gbps
Standard | db1v10m120 | 10 vCore | 120 GB | Up to 10 Gbps
Standard | db1v10m160 | 10 vCore | 160 GB | Up to 10 Gbps
Standard | db1v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps
Standard | db1v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps
Standard | db1v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps
Standard | db1v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps
Standard | db1v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps
Standard | db1v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps
Standard | db1v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps
Standard | db1v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps
Standard | db1v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps
Standard | db1v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps
Standard | db1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps
Standard | db1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps
Standard | db1v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps
Standard | db1v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps
Standard | db1v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps
Table. Vertica(DBaaS) server type specifications - db1 server type

db2 server type

The db2 server type of Vertica(DBaaS) is provided with standard specifications (vCPU, Memory) and is suitable for various database workloads.

  • Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
  • Supports up to 16 vCPUs and 256 GB of memory
  • Up to 12.5 Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
Standard | db2v1m2 | 1 vCore | 2 GB | Up to 10 Gbps
Standard | db2v2m4 | 2 vCore | 4 GB | Up to 10 Gbps
Standard | db2v2m8 | 2 vCore | 8 GB | Up to 10 Gbps
Standard | db2v2m16 | 2 vCore | 16 GB | Up to 10 Gbps
Standard | db2v2m24 | 2 vCore | 24 GB | Up to 10 Gbps
Standard | db2v2m32 | 2 vCore | 32 GB | Up to 10 Gbps
Standard | db2v4m8 | 4 vCore | 8 GB | Up to 10 Gbps
Standard | db2v4m16 | 4 vCore | 16 GB | Up to 10 Gbps
Standard | db2v4m32 | 4 vCore | 32 GB | Up to 10 Gbps
Standard | db2v4m48 | 4 vCore | 48 GB | Up to 10 Gbps
Standard | db2v4m64 | 4 vCore | 64 GB | Up to 10 Gbps
Standard | db2v6m12 | 6 vCore | 12 GB | Up to 10 Gbps
Standard | db2v6m24 | 6 vCore | 24 GB | Up to 10 Gbps
Standard | db2v6m48 | 6 vCore | 48 GB | Up to 10 Gbps
Standard | db2v6m72 | 6 vCore | 72 GB | Up to 10 Gbps
Standard | db2v6m96 | 6 vCore | 96 GB | Up to 10 Gbps
Standard | db2v8m16 | 8 vCore | 16 GB | Up to 10 Gbps
Standard | db2v8m32 | 8 vCore | 32 GB | Up to 10 Gbps
Standard | db2v8m64 | 8 vCore | 64 GB | Up to 10 Gbps
Standard | db2v8m96 | 8 vCore | 96 GB | Up to 10 Gbps
Standard | db2v8m128 | 8 vCore | 128 GB | Up to 10 Gbps
Standard | db2v10m20 | 10 vCore | 20 GB | Up to 10 Gbps
Standard | db2v10m40 | 10 vCore | 40 GB | Up to 10 Gbps
Standard | db2v10m80 | 10 vCore | 80 GB | Up to 10 Gbps
Standard | db2v10m120 | 10 vCore | 120 GB | Up to 10 Gbps
Standard | db2v10m160 | 10 vCore | 160 GB | Up to 10 Gbps
Standard | db2v12m24 | 12 vCore | 24 GB | Up to 12.5 Gbps
Standard | db2v12m48 | 12 vCore | 48 GB | Up to 12.5 Gbps
Standard | db2v12m96 | 12 vCore | 96 GB | Up to 12.5 Gbps
Standard | db2v12m144 | 12 vCore | 144 GB | Up to 12.5 Gbps
Standard | db2v12m192 | 12 vCore | 192 GB | Up to 12.5 Gbps
Standard | db2v14m28 | 14 vCore | 28 GB | Up to 12.5 Gbps
Standard | db2v14m56 | 14 vCore | 56 GB | Up to 12.5 Gbps
Standard | db2v14m112 | 14 vCore | 112 GB | Up to 12.5 Gbps
Standard | db2v14m168 | 14 vCore | 168 GB | Up to 12.5 Gbps
Standard | db2v14m224 | 14 vCore | 224 GB | Up to 12.5 Gbps
Standard | db2v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps
Standard | db2v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps
Standard | db2v16m128 | 16 vCore | 128 GB | Up to 12.5 Gbps
Standard | db2v16m192 | 16 vCore | 192 GB | Up to 12.5 Gbps
Standard | db2v16m256 | 16 vCore | 256 GB | Up to 12.5 Gbps
Table. Vertica(DBaaS) server type specifications - db2 server type
Table. Vertica(DBaaS) server type specifications - db2 server type

dbh2 server type

The dbh2 server type of Vertica(DBaaS) is provided with large-capacity server specifications and is suitable for database workloads for large-scale data processing.

  • Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
  • Supports up to 128 vCPUs and 1,536 GB of memory
  • Up to 25Gbps networking speed
Classification | Server Type | vCPU | Memory | Network Bandwidth
High Capacity | dbh2v24m48 | 24 vCore | 48 GB | Up to 25 Gbps
High Capacity | dbh2v24m96 | 24 vCore | 96 GB | Up to 25 Gbps
High Capacity | dbh2v24m192 | 24 vCore | 192 GB | Up to 25 Gbps
High Capacity | dbh2v24m288 | 24 vCore | 288 GB | Up to 25 Gbps
High Capacity | dbh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps
High Capacity | dbh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps
High Capacity | dbh2v32m256 | 32 vCore | 256 GB | Up to 25 Gbps
High Capacity | dbh2v32m384 | 32 vCore | 384 GB | Up to 25 Gbps
High Capacity | dbh2v48m192 | 48 vCore | 192 GB | Up to 25 Gbps
High Capacity | dbh2v48m576 | 48 vCore | 576 GB | Up to 25 Gbps
High Capacity | dbh2v64m256 | 64 vCore | 256 GB | Up to 25 Gbps
High Capacity | dbh2v64m768 | 64 vCore | 768 GB | Up to 25 Gbps
High Capacity | dbh2v72m288 | 72 vCore | 288 GB | Up to 25 Gbps
High Capacity | dbh2v72m864 | 72 vCore | 864 GB | Up to 25 Gbps
High Capacity | dbh2v96m384 | 96 vCore | 384 GB | Up to 25 Gbps
High Capacity | dbh2v96m1152 | 96 vCore | 1152 GB | Up to 25 Gbps
High Capacity | dbh2v128m512 | 128 vCore | 512 GB | Up to 25 Gbps
High Capacity | dbh2v128m1536 | 128 vCore | 1536 GB | Up to 25 Gbps
Table. Vertica(DBaaS) server type specifications - dbh2 server type

3.1.2 - Monitoring Metrics

Vertica(DBaaS) monitoring metrics

The following table shows the performance monitoring metrics of Vertica (DBaaS) that can be checked through Cloud Monitoring. For detailed instructions on how to use Cloud Monitoring, please refer to the Cloud Monitoring guide.

The server monitoring metrics of Vertica(DBaaS) refer to the Virtual Server monitoring metrics guide.

Performance Item | Detailed Description | Unit
Active Locks | Number of active locks | cnt
Active Sessions | Total number of active sessions | cnt
Instance Status | Node alive status | status
Tablespace Used | Tablespace usage | bytes
Table. Vertica(DBaaS) Monitoring Metrics
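For reference, similar figures can also be read directly from Vertica's own system tables. The mapping below is an illustration based on Vertica's documented `v_monitor` schema, not something exposed by Cloud Monitoring; verify the table names against your engine version before relying on it.

```python
# Illustrative (assumed) mapping from the Console metrics above to queries
# against Vertica's v_monitor system tables, for manual self-service checks.
METRIC_QUERIES = {
    "Active Locks": "SELECT COUNT(*) FROM v_monitor.locks;",
    "Active Sessions": "SELECT COUNT(*) FROM v_monitor.sessions;",
    "Instance Status": "SELECT node_name, node_state FROM v_monitor.node_states;",
}

for metric, sql in METRIC_QUERIES.items():
    # Each query would be run via a SQL client such as vsql; printed here only.
    print(f"{metric}: {sql}")
```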

3.2 - How-to guides

Users can create the Vertica(DBaaS) service by entering required information and selecting detailed options through Samsung Cloud Platform Console.

Create Vertica(DBaaS)

You can create and use the Vertica(DBaaS) service in Samsung Cloud Platform Console.

Follow the procedure below to create Vertica(DBaaS).

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).

  2. Click Create Vertica(DBaaS) button on the Service Home page. You will be moved to the Create page.

  3. Enter the information required to create the service and select detailed options on the Create Vertica(DBaaS) page.

    • Select the required information in the Image and Version Selection area.
      Division | Required | Description
      Image Version | Required | List of Vertica(DBaaS) versions
      Table. Vertica(DBaaS) Image and Version Input Items
    • Enter or select the required information in the Service Information Input area.
      Division | Required | Description
      Server Name Prefix | Required | Server name where Vertica is installed
      • Start with a lowercase English letter, and enter 3 to 13 characters using lowercase letters, numbers, and the special character (-)
      • The actual server name is created by appending a postfix such as 001, 002 to the server name
      Cluster Name | Required | Name of the cluster in which the servers are configured
      • Enter 3 to 20 characters using English letters
      • A cluster is a unit that bundles multiple servers
      Node Count | Required | Number of data nodes
      • Enter a node count in the range of 1-10
      • Entering 2 or more nodes configures a cluster and secures High Availability
      Service Type > Server Type | Required | Data node server type
      • Standard: Commonly used standard specifications
      • High Capacity: Large-capacity servers with 24 vCore or more
      Service Type > Planned Compute | Optional | Status of resources with Planned Compute set
      • In Use: Number of resources in use among resources with Planned Compute set
      • Set: Number of resources with Planned Compute set
      • Coverage Preview: Amount applied with Planned Compute for each resource
      • Apply for Planned Compute Service: Move to the Planned Compute service application page
      Service Type > Block Storage | Required | Block Storage type to be used for data nodes
      • Basic OS: Area where the engine is installed
      • DATA: Data file storage area
        • Select the storage type and enter the capacity (for details about each Block Storage type, refer to Create Block Storage)
          • SSD: General Block Storage
          • SSD_KMS: Volume additionally encrypted with a KMS (Key Management System) encryption key
        • The selected storage type is also applied identically to additional storage
        • Enter the capacity in multiples of 8 in the range of 16 ~ 5,120
      • Additional: DATA and backup data storage area
        • Select Use and enter the storage purpose and capacity
        • Click the + button to add storage and the x button to delete; up to 9 can be added
        • Enter the capacity in multiples of 8 in the range of 16 ~ 5,120; up to 9 can be created
      Management Console | Optional | If Use is selected, server type and Block Storage settings for the node for cluster management and monitoring
      Management Console > Server Type | Required | Select the server type of the node for cluster management and monitoring
      Management Console > Block Storage | Required | Select the Block Storage type to be used for the node for cluster management and monitoring
      Network > Common Settings | Required | Network settings for the servers created in the service
      • Select to apply the same settings to all servers being installed
      • Select a previously created VPC and Subnet
      • IP: Enter an IP for each server
      • Public NAT settings are only possible with per-server settings
      Network > Per-Server Settings | Required | Network settings for the servers created in the service
      • Select to apply different settings to each server being installed
      • Select a previously created VPC and Subnet
      • IP: Enter an IP for each server
      • The Public NAT function can be used only when the VPC is connected to an Internet Gateway. If Use is checked, you can select from the reserved IPs in the Public IP of the VPC product. For more information, refer to Create Public IP
      IP Access Control | Optional | Service access policy settings
      • Because the access policy is set for the IPs entered on the page, separate Security Group policy settings are not required
      • Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click the Add button
      • To delete an entered IP, click the x button next to it
      Maintenance Window | Optional | DB maintenance window
      • If Use is selected, set the day of the week, start time, and duration
      • Setting a maintenance window is recommended for stable DB management. Patch work proceeds at the set time and causes service interruption
      • If set to Not Used, problems caused by unapplied patches are not the responsibility of Samsung SDS
      Table. Vertica(DBaaS) Service Configuration Items
    • Enter or select the required information in the Database Configuration Required Information Input area.
      Division | Required | Description
      Database Name | Required | Database name applied when the DB is installed
      • Start with an English letter, and enter 3 to 20 characters using English letters and numbers
      Database Username | Required | DB username
      • An account with the same name is also created in the OS
      • Enter 2 to 20 characters using lowercase English letters
      • Database usernames with restricted use can be checked in the Console
      Database Password | Required | Password to use when accessing the DB
      • Enter 8 to 30 characters including English letters, numbers, and special characters (excluding ")
      Database Password Confirmation | Required | Re-enter the same password to use when accessing the DB
      Database Port Number | Required | Port number required for DB connection
      • Enter a DB port in the range of 1200 ~ 65535
      Backup > Use | Optional | Whether to use node backup
      • Select Use and select the node backup retention period and backup start time
      Backup > Retention Period | Optional | Backup retention period
      • Select the backup retention period. The file retention period can be set from 7 to 35 days
      • Separate fees are charged for backup files depending on capacity
      Backup > Backup Start Time | Optional | Backup start time
      • Select the backup start time
      • The backup execution minute is set randomly, and the backup end time cannot be set
      License Key | Required | Enter the Vertica License Key held by the customer
      • If the entered license key is invalid, the service cannot be created
      DB Locale | Required | Settings for string processing and number/currency/date/time display formats to use in Vertica(DBaaS)
      • The DB is created with the selected Locale as the default setting
      Time Zone | Required | Standard time zone to use in Vertica(DBaaS)
      Table. Vertica(DBaaS) Required Configuration Items
    • Enter or select the required information in the Additional Information Input area.
      Division | Required | Description
      Tags | Optional | Add tags
      • Up to 50 can be added per resource
      • After clicking the Add Tag button, enter or select Key and Value values
      Table. Vertica(DBaaS) Additional Information Input Items
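The input constraints above (server name prefix, Block Storage capacity, database password, and port number) can be pre-checked before filling in the Console form. The following is a minimal Python sketch; the rules are taken from the tables above, and the function names are illustrative only, not part of any Samsung Cloud Platform SDK:

```python
import re

# Server Name Prefix: start with a lowercase letter; 3 to 13 characters of
# lowercase letters, digits, and the special character (-).
SERVER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{2,12}$")

def valid_server_name_prefix(name: str) -> bool:
    return bool(SERVER_NAME_RE.fullmatch(name))

def valid_block_storage_gb(capacity: int) -> bool:
    # Capacity must be a multiple of 8 in the range 16 ~ 5,120 GB.
    return 16 <= capacity <= 5120 and capacity % 8 == 0

def valid_db_port(port: int) -> bool:
    # DB port must be in the range 1200 ~ 65535.
    return 1200 <= port <= 65535

def valid_db_password(password: str) -> bool:
    # 8 to 30 characters including letters, numbers, and special characters,
    # excluding the double quote (").
    if not (8 <= len(password) <= 30) or '"' in password:
        return False
    return (any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)
            and any(not c.isalnum() for c in password))

print(valid_server_name_prefix("vertica-db"))   # True
print(valid_block_storage_gb(100))              # False: not a multiple of 8
print(valid_db_port(5433))                      # True
print(valid_db_password('bad"pass1!'))          # False: contains "
```

Catching these violations client-side avoids a failed submission; the Console performs its own authoritative validation in any case.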
  4. Check the detailed information and estimated billing amount in the Summary panel, and click Complete button.

    • When creation is completed, check the created resource on the Resource List page.

Check Vertica(DBaaS) Detailed Information

In the Vertica(DBaaS) service, you can check the entire resource list and detailed information, and modify settings as needed. The Vertica(DBaaS) Details page consists of the Details, Tags, and Operation History tabs.

Follow the procedure below to check the detailed information of Vertica(DBaaS) service.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to check detailed information on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
    • Status information and additional feature information are displayed at the top of the Vertica(DBaaS) Details page.
      Division | Description
      Cluster Status | Cluster status
      • Creating: Cluster is being created
      • Editing: An operation is being performed on the cluster
      • Error: Cluster failed while performing an operation
        • If it occurs continuously, contact the administrator
      • Failed: Cluster failed during the creation process
      • Restarting: Cluster is being restarted
      • Running: Cluster is operating normally
      • Starting: Cluster is being started
      • Stopped: Cluster is stopped
      • Stopping: Cluster is being stopped
      • Synchronizing: Cluster is being synchronized
      • Terminating: Cluster is being deleted
      • Unknown: Cluster status is unknown
        • If it occurs continuously, contact the administrator
      • Upgrading: Cluster upgrade is in progress
      Cluster Control | Buttons to change the cluster status
      • Start: Start the stopped cluster
      • Stop: Stop the running cluster
      • Restart: Restart the running cluster
      Additional Features (More) | Cluster-related management buttons
      • Synchronize Service Status: Check the real-time DB service status
      • Backup History: If backup is set, check whether backups execute normally and view the history
      • Database Recovery: Recover the DB based on a specific point in time
      Service Termination | Button to terminate the service
      Table. Vertica(DBaaS) Status Information and Additional Features

Details

You can check the detailed information of the resource selected on the Vertica(DBaaS) List page and modify information if necessary.

Division | Description
Server Information | Server information configured in the cluster
  • Category: Server type (Vertica cluster nodes are displayed as Data, and the Management Console is displayed as Console)
  • Server Name: Server name
  • IP:Port: Server IP and port
  • Status: Server status
Service | Service name
Resource Type | Resource type
SRN | Unique resource ID in Samsung Cloud Platform
  • Means the cluster SRN
Resource Name | Resource name
  • Means the cluster name
Resource ID | Unique resource ID in the service
Creator | User who created the service
Created At | Date and time when the service was created
Modifier | User who modified the service information
Modified At | Date and time when the service information was modified
Image/Version | Installed DB image and version information
Cluster Name | Name of the cluster in which the servers are configured
Database Name | Database name applied when the DB is installed
Database Username | DB username
Planned Compute | Status of resources with Planned Compute set
Maintenance Window | DB maintenance window status
  • If a maintenance window needs to be set, click the Modify icon to set it
Backup | Backup setting status
  • If backup needs to be set, click the Modify icon to set it
Management Console | Management Console resource status set when the DB was installed
Network | Installed network information (VPC, Subnet)
IP Access Control | Service access policy settings
  • If an IP needs to be added or deleted, click the Modify icon to set it
Time Zone | Standard time zone used by the Vertica(DBaaS) DB
License | Vertica(DBaaS) license information
Server Information | Data/Console server type, basic OS, and additional disk information
  • If the server type needs to be modified, click the Modify icon next to the server type. For the server type modification procedure, refer to Change Server Type
    • If the server type is modified, a server restart is required
  • If storage expansion is needed, click the Modify icon next to the storage capacity. For the storage expansion procedure, refer to Expand Storage
  • If storage addition is needed, click the Add Disk button next to the additional disk. For the storage addition procedure, refer to Add Storage
Table. Vertica(DBaaS) Details Information Items

Tags

You can check the tag information of the resource selected on the Vertica(DBaaS) List page and add, change, or delete tags.

Division | Description
Tag List | Tag list
  • You can check tag Key and Value information
  • Up to 50 tags can be added per resource
  • When entering tags, search and select from previously created Key and Value lists
Table. Vertica(DBaaS) Tags Tab Items

Operation History

You can check the operation history of the resource selected on the Vertica(DBaaS) List page.

Division | Description
Operation History List | Resource change history
  • Check the operation date and time, resource ID, resource name, operation details, event topic, operation result, and operator information
Table. Vertica(DBaaS) Operation History Tab Detailed Information Items

Manage Vertica(DBaaS) Resources

If you need to change existing configuration options of created Vertica(DBaaS) resources or add storage configuration, you can perform tasks on the Vertica(DBaaS) Details page.

Control Operation

If there are changes to running Vertica(DBaaS) resources, you can start, stop, or restart.

Follow the procedure below to control the operation of Vertica(DBaaS).

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to control operation on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Check Vertica(DBaaS) status and complete changes through the following control buttons.
    • Start: Starts the server where Vertica(DBaaS) is installed and the Vertica(DBaaS) service.
    • Stop: Stops the server where Vertica(DBaaS) is installed and the Vertica(DBaaS) service.
    • Restart: Restarts only the Vertica(DBaaS) service.

Synchronize Service Status

You can synchronize the real-time service status of Vertica(DBaaS).

Follow the procedure below to check the service status of Vertica(DBaaS).

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to check service status on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click Synchronize Service Status button. Cluster changes to Synchronizing status during checking.
  5. When checking is completed, status is updated in the server information item, and cluster changes to Running status.

Change Server Type

You can change the configured server type.

Caution
  • If the server type is configured as Standard, it cannot be changed to High Capacity. To change to High Capacity, create a new service.
  • If the server type is modified, a server restart is required. Separately check whether SW license changes or SW settings need to be applied as a result of the server specification change.

Follow the procedure below to change the server type.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to change server type on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click Modify icon of the server type you want to change at the bottom of detailed information. Modify Server Type popup window opens.
  5. Select server type in the Modify Server Type popup window, and click Confirm button.

Add Storage

If you need more than 5 TB of data storage space, you can add storage. In a High Availability (HA cluster) configuration, storage expansion or addition is applied to all DBs simultaneously.

Follow the procedure below to add storage.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to add storage on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click Add Disk button at the bottom of detailed information. Request Additional Storage popup window opens.
  5. Enter purpose and capacity in the Request Additional Storage popup window, and click Confirm button.

Expand Storage

You can expand storage added as data area up to 5TB based on initially allocated capacity. You can expand storage without stopping Vertica(DBaaS), and if configured as a cluster, all nodes are expanded simultaneously.

Follow the procedure below to expand storage capacity.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to expand storage on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click Modify button of the additional Disk you want to expand at the bottom of detailed information. Modify Additional Storage popup window opens.
  5. Enter expansion capacity in the Modify Additional Storage popup window, and click Confirm button.

Change Recovery DB Instance Type

After DB recovery is completed, you can change the instance type in the Recovery detailed information screen.

Follow the procedure below to change the Recovery DB instance type.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource for which you want to change Recovery DB instance type on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click Change Instance Type button. Change Instance Type confirmation dialog is displayed.
  • DB instance type is changed from Recovery to Active to perform the same function as a single DB.

Terminate Vertica(DBaaS)

You can reduce operating costs by terminating unused Vertica(DBaaS). However, if you terminate the service, the running service may stop immediately, so you should fully consider the impact of service interruption before proceeding with termination work.

Follow the procedure below to terminate Vertica(DBaaS).

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Select the resource to terminate on the Vertica(DBaaS) List page, and click Terminate Service button.
  4. When termination is completed, check if the resource is terminated on the Vertica(DBaaS) list page.

3.2.1 - Vertica Backup and Recovery

Users can set up backups of Vertica (DBaaS) through the Samsung Cloud Platform Console and restore from the backed-up files.

Vertica(DBaaS) Backup

You can set up a backup function so that the user’s data can be stored safely. Also, through the backup history function, you can verify whether the backup was performed correctly and you can also delete backed-up files.

Set up backup

For backup configuration of Vertica(DBaaS), see Create Vertica(DBaaS).

To modify the backup settings of Vertica (DBaaS), follow the steps below.

Caution
  • If backup is set, backups are performed at the designated time after the set time, and additional charges are incurred depending on the backup size.
  • If you change the backup setting to Not Used, backup execution stops immediately, and the stored backup data is deleted and can no longer be used.
  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click the Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource to set backup on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click the Modify icon of the Backup item. The Backup Settings popup window opens.
  5. When setting up a backup, click Use in the Backup Settings popup, select the retention period and backup start time, and click the Confirm button.
    • If you want to stop the backup setting, uncheck Use in the Backup Setting popup window and click the Confirm button.

Check backup history

Guide
To set notifications for backup success and failure, you can configure them via the Notification Manager product. For a detailed usage guide on setting notification policies, refer to Create Notification Policy.

To view the backup history, follow these steps.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click the Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource to view the backup history on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click the Backup History button. The Backup History popup window opens.
  5. In the Backup History popup window, you can check the backup status, version, backup start date and time, backup completion date and time, and size.

Delete backup file

To delete the backup history, follow the steps below.

Caution
Backup files cannot be restored after deletion. Please be sure to confirm whether the data is unnecessary before deleting.
  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click the Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource whose backup files you want to delete on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click the Backup History button. The Backup History popup window opens.
  5. In the Backup History popup window, check the file you want to delete, then click the Delete button.

Vertica(DBaaS) Recover

If restoration from a backup file is required due to a failure or data loss, you can use the cluster recovery feature to recover based on a specific point in time.

Caution
To perform recovery, at least as much capacity as the DATA-type disk capacity is required. If disk capacity is insufficient, recovery may fail.

To recover Vertica (DBaaS), follow the steps below.

  1. Click All Services > Data Analytics > Vertica(DBaaS) menu. You will be moved to the Service Home page of Vertica(DBaaS).
  2. Click the Vertica(DBaaS) menu on the Service Home page. You will be moved to the Vertica(DBaaS) List page.
  3. Click the resource you want to recover on the Vertica(DBaaS) List page. You will be moved to the Vertica(DBaaS) Details page.
  4. Click the Database Recovery button. You will be moved to the Vertica(DBaaS) Recovery page.
  5. Enter the relevant information in the Database Recovery area, and click the Complete button.
    Division | Required | Description
    Recovery Type | Required | Set the point in time to recover to
    • Backup point (recommended): Recover based on a backup file. Select from the list of backup file timestamps
    • Recovery point: Choose the date and time to recover to. Can be selected from the start times in the backup history
    Server Name Prefix | Required | Recovery DB server name
    • Start with a lowercase English letter, and enter 3 to 16 characters using lowercase letters, numbers, and the special character (-)
    • The actual server name is created by appending a postfix such as 001, 002 to the server name
    Cluster Name | Required | Recovery DB cluster name
    • Enter 3 to 20 characters using English letters
    • A cluster is a unit that bundles multiple servers
    Node Count | Optional | Number of data nodes
    • Set to the same number of nodes configured in the original cluster
    Service Type > Server Type | Required | Recovery DB server type
    • Standard: Commonly used standard specifications
    • High Capacity: Large-capacity servers with 24 vCore or more
    Service Type > Planned Compute | Optional | Status of resources with Planned Compute set
    • In Use: Number of resources in use among resources with Planned Compute set
    • Set: Number of resources with Planned Compute set
    • Coverage Preview: Amount applied with Planned Compute for each resource
    • Apply for Planned Compute Service: Move to the Planned Compute service application page
    Service Type > Block Storage | Required | Block Storage settings used by the recovery DB
    • Base OS: Area where the DB engine is installed
    • DATA: Storage area for table data, archive files, etc.
      • The same storage type as set in the source cluster is applied
      • After selecting Use, enter the storage purpose and capacity
      • Click the + button to add storage and the x button to delete
      • Enter the capacity in multiples of 8 in the range of 16 ~ 5,120; up to 9 can be created
    Management Console > Server Type | Required | Management Console server type
    • After selecting Use, choose the server type
    • Standard: Commonly used standard specifications
    • High Capacity: Large-capacity servers with 24 vCore or more
    Management Console > Block Storage | Required | Block Storage settings used by the Management Console
    • Select Use and then select Base OS
    Database Username | Required | Database username
    • The same username set in the original cluster is applied
    Database Password | Required | Database password
    • The same password set in the original cluster is applied
    Database Port Number | Required | Database port number
    • The same port number set in the original cluster is applied
    IP Access Control | Optional | Service access policy settings
    • Because the access policy is set for the IPs entered on the page, separate Security Group policy settings are not required
    • Enter in IP format (example: 192.168.10.1) or CIDR format (example: 192.168.10.0/24, 192.168.10.1/32) and click the Add button
    • To delete an entered IP, click the x button next to it
    Maintenance Window | Optional | DB maintenance window
    • If Use is selected, set the day of the week, start time, and duration
    • Setting a maintenance window is recommended for stable DB management. Patch work proceeds at the set time and causes service interruption
    • If set to Not Used, problems caused by unapplied patches are not the responsibility of Samsung SDS
    License Key | Required | Enter the Vertica License Key to recover
    • If the entered license key is invalid, the service cannot be created
    Tags | Optional | Add tags
    • After clicking the Add Tag button, enter or select Key and Value values
    Table. Vertica(DBaaS) Recovery Configuration Items
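The IP and CIDR formats accepted by IP Access Control can be validated with the Python standard library before entry. A minimal sketch using the `ipaddress` module (illustrative only; the Console performs its own validation):

```python
import ipaddress

def valid_access_control_entry(entry: str) -> bool:
    """Accept a single IPv4 address (e.g. 192.168.10.1) or an IPv4
    CIDR block (e.g. 192.168.10.0/24, 192.168.10.1/32)."""
    try:
        if "/" in entry:
            # strict=True rejects blocks with host bits set (e.g. 192.168.10.1/24).
            ipaddress.IPv4Network(entry, strict=True)
        else:
            ipaddress.IPv4Address(entry)
        return True
    except ValueError:
        return False

print(valid_access_control_entry("192.168.10.1"))     # True
print(valid_access_control_entry("192.168.10.0/24"))  # True
print(valid_access_control_entry("192.168.10.1/32"))  # True
print(valid_access_control_entry("192.168.10.1/24"))  # False: host bits set
```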

3.3 - API Reference

API Reference

3.4 - CLI Reference

CLI Reference

3.5 - Release Note

Vertica(DBaaS)

2025.07.01
NEW Vertica(DBaaS) Service Official Version Release
  • Released Vertica(DBaaS) service, which can efficiently store data and improve query performance with columnar storage-based compression and encoding features.

4 - Data Flow

4.1 - Overview

Service Overview

Data Flow is a data processing flow tool based on open-source Apache NiFi that extracts large amounts of data from various data sources and lets you visually create processing flows for transforming and transmitting stream/batch data. Data Flow can be used on its own in a Kubernetes Engine cluster environment of Samsung Cloud Platform, or together with other application software.

Figure. Data Flow architecture diagram

Provided Features

Data Flow provides the following functions.

  • Easy installation and management: Data Flow can be easily installed through the web-based Samsung Cloud Platform Console in a standard Kubernetes cluster environment. Based on open-source Apache NiFi, it automatically configures the architecture required for extensible clustering, and automatically installs ZooKeeper, Registry, and management modules. Through Data Flow, you can set up and deploy the setting files, NiFi templates, etc. required for service connection.
  • Easy Data Flow Management: The processing flow of stream/batch data can be written easily in a GUI tailored to the user environment, enabling efficient data extraction, transmission, and processing between systems.
  • NiFi Template Gallery: You can share/distribute reference NiFi templates. Data Flow provides a gallery of work files for data processing flows frequently used in the field, and users can share their own data processing flow tasks.

Components

Data Flow is composed of Manager and Service modules, and provides Apache NiFi as a package.

Data Flow Manager

Data Flow Manager provides various managing functions to utilize NiFi more efficiently.

  • Through Data Flow Manager, customers can upload NAR files they created for use in Processors, and upload configuration files to share them.
  • Frequently used NiFi templates are packaged as assets and provided as a gallery, ready to use with a single click.
  • Provides real-time monitoring and resource status monitoring for the multiple services configured as native NiFi services.
  • You can easily provision setting information for NiFi configuration components within the cluster.

Data Flow Service

  • It provides a data flow management service based on Apache NiFi.
  • It automatically configures the architecture required for extensible clustering based on Apache NiFi, and the NiFi, ZooKeeper, and NiFi Registry modules are installed automatically.
  • When provisioning NiFi, you can set the Description, resource size, access ID/PW, and Host Alias.
  • After creating the service, you can modify the Description, required resource size, access password, Host Alias, etc. and apply them to the service.

Server spec type

When creating a Data Flow service, please check the following contents.

  • Recommended Service Installation Specifications: CPU 21 core, Memory 57 GB, storage 100 GB or more
Reference
  • The Data Flow service needs to be installed before creating the Ingress Controller.
  • In a Kubernetes cluster, only 1 Ingress Controller can be installed.
  • For more information, please refer to Ingress Controller installation.

Regional Provision Status

Data Flow is available in the following environments.

Region | Availability
Korea West (kr-west1) | Provided
Korea East (kr-east1) | Provided
Korea South (kr-south1) | Not provided
Korea South 2 (kr-south2) | Not provided
Korea South 3 (kr-south3) | Not provided
Table. Data Flow Provision Status by Region

Preceding Service

This is a list of services that must be pre-configured before creating this service. Please refer to the guide provided for each service and prepare in advance.

Service Category | Service | Detailed Description
Storage | File Storage | Storage that allows multiple client servers to share files over network connections
Container | Kubernetes Engine | Kubernetes container orchestration service
Table. Preceding Data Flow Services

4.2 - How-to guides

Users can create the Data Flow service by entering required information and selecting detailed options through Samsung Cloud Platform Console.

Creating Data Flow

You can create and use the Data Flow service in the Samsung Cloud Platform Console.

Follow the procedure below to create Data Flow.

  1. Click All Services > Data Analytics > Data Flow menu. You will be moved to the Service Home page of Data Flow.

  2. Click the Create Data Flow button on the Service Home page. You will be moved to the Create Data Flow page.

  3. Enter the information required to create the service and select detailed options on the Create Data Flow page.

    • In the Version Selection area, select the necessary information.

      Classification | Necessity | Detailed Description
      Data Flow version | Required | Select the version of the server image
      • Provides a list of available server image versions
      Table. Data Flow version selection items

    • In the Cluster Selection area, enter or select the required information. Before installing Data Flow, the Kubernetes cluster nodes and a workspace must be created.

      Classification | Necessity | Detailed Description
      Cluster Name | Required | Select the cluster to use
      Ingress Controller | Required | Select the Ingress Controller installed in the cluster
      • In the Details tab of the installed Ingress Controller, add the following entry to the ConfigMap item:
        • Key: allow-snippet-annotations
        • Value: true
      Table. Data Flow cluster selection items
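The Console applies this ConfigMap entry for you, but it can help to see what the change amounts to. A minimal sketch (the ConfigMap name and namespace in the comment are placeholders, not platform values) of the JSON merge patch that adds the entry; note that ConfigMap values are strings, so the value is "true", not a boolean:

```python
import json

# Build the JSON merge patch that adds the allow-snippet-annotations entry
# to the Ingress Controller ConfigMap. ConfigMap values are strings, so the
# value is "true" rather than a boolean.
patch = {"data": {"allow-snippet-annotations": "true"}}
patch_body = json.dumps(patch)
print(patch_body)

# Illustrative command only -- the ConfigMap name and namespace are
# placeholders; check your Ingress Controller installation:
#   kubectl -n <ingress-namespace> patch configmap <ingress-configmap> \
#       --type merge -p '{"data": {"allow-snippet-annotations": "true"}}'
```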

    • In the Service Information area, enter or select the necessary information.

      Classification | Necessity | Detailed Description
      Data Flow name | Required | Enter the Data Flow name
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 30 characters using lowercase letters, numbers, and hyphens (-)
      Storage Class | Required | Select the storage class used by the chosen cluster
      Description | Optional | Enter additional information or a description of the Data Flow within 150 characters
      Domain Setting | Required | Enter the Data Flow domain
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 50 characters using lowercase letters, numbers, and hyphens (-)
      • {Data Flow name}.{set domain} becomes the Data Flow access address
      Node Selector | Required | To install on a specific node, enter a label that identifies the node
      • An incorrect node label can cause an installation error, so check the node label in advance
      • The node label can be found in the node's YAML file
      Account | Required | Enter the Data Flow Manager account
      • ID: Start with a lowercase English letter and enter 6 to 30 characters using lowercase letters and numbers
      • Password: Enter 8 to 50 characters including uppercase letters, lowercase letters, numbers, and special characters (!@#$%^&*)
      • Password Confirmation: Enter the same password again
      Host Alias | Optional | Add host information to connect to Data Flow (up to 20 entries, including the default)
      • Select Use, then click the + button
      • Hostname: Enter 3 to 63 characters in hostname or domain format, using lowercase letters, numbers, and hyphens (-)
      • IP: Enter in IP address format
      • To delete an entry, click the X button
      • The firewall between the cluster and the host must be open to use the added host information
      Table. Data Flow service information input items
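The naming rules above (for the name and domain fields) can be checked locally with a regular expression. A small sketch, with the pattern inferred from the stated rules rather than taken from the platform, and with invented example values:

```python
import re

# Pattern inferred from the stated rule: start with a lowercase English
# letter, use only lowercase letters, digits, and hyphens (-), do not end
# with a hyphen, and be 3 to 30 characters long.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]{1,28}[a-z0-9]$")

def is_valid_name(name: str) -> bool:
    return NAME_PATTERN.fullmatch(name) is not None

def access_address(name: str, domain: str) -> str:
    # {Data Flow name}.{set domain} becomes the access address
    return f"{name}.{domain}"

print(is_valid_name("my-dataflow"))   # True
print(is_valid_name("my-dataflow-"))  # False: ends with a hyphen
print(access_address("my-dataflow", "data.internal"))
```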

    • In the Additional Information area, enter or select the necessary information.

      Classification | Necessity | Detailed Description
      Tag | Optional | Add tags
      • Click the Add Tag button to create and add new tags or add existing tags
      • Up to 50 tags can be added
      • Newly added tags are applied after service creation is complete
      Table. Data Flow additional information input items

  4. In the Summary panel, review the detailed information and estimated charges, then click the Complete button.

    • Once creation is complete, check the created resource on the Data Flow list page.

Check Data Flow Detailed Information

You can check and modify the list of all resources and detailed information of Data Flow. The Data Flow details page consists of detailed information, tags, and work history tabs.

To check the detailed information of Data Flow, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Flow menu. It moves to the Service Home page of Data Flow.
  2. On the Service Home page, click the Data Flow menu. It moves to the Data Flow list page.
  3. On the Data Flow list page, click a resource to check its detailed information. It moves to the Data Flow details page.
    • The top of the Data Flow details page shows status information and additional functions.
Classification | Detailed Description
Status Display | Data Flow status
  • Creating: the service is being created
  • Running: the service is operating; Data Flow Services can be created
  • Updating: settings are being updated
  • Terminating: the service is being terminated
  • Error: an error occurred during creation or the service is in an abnormal state
Hosts file setting information | Button to check and copy the hosts file information used to access Data Flow
Service Cancellation | Button to cancel the service
Table. Data Flow status information and additional functions
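The Hosts file setting information button supplies entries for the client machine's hosts file. As a hypothetical sketch of what such entries look like, built from Host Alias (IP, hostname) pairs (the addresses and names below are invented examples, not platform values):

```python
# Format Host Alias (IP, hostname) pairs as hosts-file lines for
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts.
# The addresses and names are invented examples.
host_aliases = [
    ("10.0.0.10", "my-dataflow.data.internal"),
    ("10.0.0.11", "registry.data.internal"),
]

def hosts_lines(aliases):
    # One "IP<TAB>hostname" line per alias, as a hosts file expects.
    return "\n".join(f"{ip}\t{hostname}" for ip, hostname in aliases)

print(hosts_lines(host_aliases))
```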

Detailed Information

On the Data Flow List page, you can check the detailed information of the selected resource and modify the information if necessary.

Classification | Detailed Description
Service | Service category
Resource Type | Service name
SRN | Unique resource ID on Samsung Cloud Platform
  • Refers to the cluster SRN
Resource Name | Resource name
  • Refers to the cluster name
Resource ID | Unique resource ID within the service
Creator | User who created the service
Creation Time | Time the service was created
Modifier | User who modified the service information
Revision Time | Time the service information was revised
Cluster Name | Name of the cluster composed of servers
Storage Class | Storage class used by the selected cluster
Description | Additional information or description of the Data Flow
Domain Setting | Data Flow domain name
Node Selector | Node label
Web Url | Data Flow URL
Account | Data Flow Manager account
Host Alias | Host information connected to Data Flow
Table. Data Flow detailed information tab items

Tag

On the Data Flow List page, you can check the tag information of the selected resource, and add, change, or delete it.

Classification | Detailed Description
Tag list | List of tags
  • Check the Key and Value information of each tag
  • Up to 50 tags can be added per resource
  • When entering a tag, search and select from the existing Key and Value list
Table. Data Flow tag tab items

Work History

You can check the work history of the selected resource on the Data Flow list page.

Classification | Detailed Description
Work history list | Resource change history
  • Check the work time, resource ID, resource name, work details, event topic, work result, and worker information
Table. Data Flow work history tab items

Data Flow cancellation

You can cancel unused Data Flow to reduce operating costs. Note that cancelling the service stops it immediately, so consider the impact of stopping it carefully before proceeding.

To cancel Data Flow, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Flow menu. It moves to the Service Home page of Data Flow.
  2. On the Service Home page, click the Data Flow menu. It moves to the Data Flow list page.
  3. On the Data Flow list page, select the resource to cancel and click the Service Cancellation button.
  4. Once the cancellation is complete, check on the Data Flow list page that the resource has been cancelled.
Notice
  • To cancel Data Flow, you must first delete the connected Data Flow Services.
  • When Data Flow is cancelled, the namespace created for it is also deleted.

4.2.1 - Data Flow Services

The user can enter the essential information of Data Flow Services in the Data Flow service through the Samsung Cloud Platform Console and create the service by selecting detailed options.

Create Data Flow Services

The user can add a service by selecting the detailed options of the Data Flow service or entering the setting value.

Notice
When applying for Data Flow Services, make sure the Kubernetes cluster has enough available capacity for the requested resources.

To create Data Flow Services, follow these steps.

  1. Click all services > Data Analytics > Data Flow menu. It moves to Data Flow Service Home page.

  2. On the Service Home page, click Data Flow Services. It moves to the Data Flow Services list page.

  3. On the Data Flow Services list page, click the Create Data Flow Services button. It moves to the Create Data Flow Services page.

  4. On the Create Data Flow Services page, enter the information required for service creation and select detailed options.

    • In the Service Information area, enter or select the required information.

      Classification | Necessity | Detailed Description
      Data Flow name | Required | Select the Data Flow
      Flow Service name | Required | Enter the Data Flow Services name
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 30 characters using lowercase letters, numbers, and hyphens (-)
      Storage Class | Required | Select the storage class used by the selected cluster
      Description | Optional | Enter additional information or a description of the Data Flow Services within 150 characters
      Domain Setting | Required | Enter the Data Flow Services domain
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 50 characters using lowercase letters, numbers, and hyphens (-)
      • {Data Flow Services name}.{set domain} becomes the Data Flow Services access address
      Node Selector | Required | To install on a specific node, enter a label that identifies the node
      • An incorrect node label can cause an installation error, so check the node label in advance
      • The node label can be found in the node's YAML file
      Service Workload | Required |
      • Nifi: A module that provides the Apache Nifi service and UI
      • Nifi Registry: A module for configuring and deploying Nifi templates
      • Zookeeper: A module that supports distributed processing of Nifi across multiple nodes
      Account | Required | Enter the Nifi account
      • ID: Start with a lowercase English letter and enter 6 to 30 characters using lowercase letters and numbers
      • Password: Enter 8 to 50 characters including uppercase letters, lowercase letters, numbers, and special characters (!@#$%^&*)
      • Password Confirmation: Enter the same password again
      Table. Data Flow Services service information input items

    • In the Additional Information area, enter or select the required information.

      Classification | Necessity | Detailed Description
      Host Alias | Optional | Add host information to connect to Data Flow Services (up to 20 entries, including the default)
      • Select Use, then click the + button
      • Hostname: Enter 3 to 63 characters in hostname or domain format, using lowercase letters, numbers, and hyphens (-)
      • IP: Enter in IP address format
      • To delete an entry, click the X button
      • The firewall between the cluster and the host must be open to use the added host information
      Tag | Optional | Add tags
      • Click the Add Tag button to create and add new tags or add existing tags
      • Up to 50 tags can be added
      • Newly added tags are applied after service creation is complete
      Table. Data Flow Services additional information input items

  5. In the Summary panel, review the detailed information and estimated charges, and click the Complete button.

    • Once creation is complete, check the created resource on the Data Flow Services list page.

Check Data Flow Services Detailed Information

You can check and modify the list of all resources and detailed information of Data Flow Services. The Data Flow Services details page consists of details, tags, and operation history tabs.

To check the detailed information of Data Flow Services, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Flow menu. It moves to the Service Home page of Data Flow.
  2. On the Service Home page, click the Data Flow Services menu. It moves to the Data Flow Services list page.
  3. On the Data Flow Services list page, click a resource to check its detailed information. It moves to the Data Flow Services details page.
    • The top of the Data Flow Services details page shows status information and additional functions.
Classification | Detailed Description
Status Display | Data Flow Services status
  • Creating: the service is being created
  • Running: the service is operating
  • Updating: settings are being updated
  • Terminating: the service is being terminated
  • Error: creation failed or the service is unavailable
Hosts file setting information | Button to check and copy the hosts file information used to access Data Flow Services
Data Flow Services Deletion | Button to delete the service
Table. Data Flow Services status information and additional functions

Detailed Information

On the Data Flow Services list page, you can check the detailed information of the selected resource and modify the information if necessary.

Classification | Detailed Description
Service | Service name
Resource Type | Resource type
SRN | Unique resource ID on Samsung Cloud Platform
  • Refers to the cluster SRN
Resource Name | Resource name
  • Refers to the cluster name
Resource ID | Unique resource ID within the service
Creator | User who created the service
Creation Time | Time the service was created
Modifier | User who modified the service information
Modified Time | Time the service information was modified
Data Flow Name | Data Flow name
Storage Class | Storage class used by the selected cluster
Description | Additional information or description of the Data Flow Services
Domain Setting | Data Flow Services domain name
Node Selector | Node label
Web Url | Data Flow Services URL
Account | Nifi account
Host Alias | Host information connected to Data Flow Services
Table. Data Flow Services detailed information tab items

Tag

On the Data Flow Services List page, you can check the tag information of the selected resource, and add, change, or delete it.

Classification | Detailed Description
Tag list | List of tags
  • Check the Key and Value information of each tag
  • Up to 50 tags can be added per resource
  • When entering a tag, search and select from the existing Key and Value list
Table. Data Flow Services tag tab items

Work History

You can check the operation history of the selected resource on the Data Flow Services list page.

Classification | Detailed Description
Work history list | Resource change history
  • Check the work date, resource ID, resource name, work details, event topic, work result, and worker information
Table. Data Flow Services work history tab items

Cancel Data Flow Services

You can cancel unused Data Flow Services to reduce operating costs. Note that cancelling the service stops it immediately, so consider the impact of stopping it carefully before proceeding.

To cancel Data Flow Services, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Flow menu. It moves to the Service Home page of Data Flow.
  2. On the Service Home page, click the Data Flow Services menu. It moves to the Data Flow Services list page.
  3. On the Data Flow Services list page, select the resource to cancel and click the Data Flow Services Deletion button.
  4. Once the cancellation is complete, check on the Data Flow Services list page that the resource has been cancelled.
Notice
  • When Data Flow Services is cancelled, the namespace created for it is also deleted.

4.2.2 - Install Ingress Controller

You must install an Ingress Controller before creating the Data Flow service. Only one Ingress Controller can be installed in a Kubernetes cluster.

Install Ingress Controller using Container Registry

To install the Ingress Controller using Container Registry, follow the steps below.

For detailed Container Registry creation instructions, refer to the Container > Container Registry > How-to guides.
  1. After checking the service domain, download the corresponding Ingress Controller image file.
  2. Click the All Services > Container > Kubernetes Engine > Workloads > Pods menu. The Pod list page is displayed.
  3. Click the Object Creation button. The Object Creation popup window opens.
  4. Select the cluster where Data Flow will be installed, then copy and paste the contents of the YAML file.
  5. Click the Confirm button to complete the installation. The installed Ingress Controller appears in the list.
Reference
For detailed object creation methods, please refer to Container > Kubernetes Engine > Create Deployment.
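Before pasting manifest contents into the Object Creation popup, a quick local sanity check can catch obvious omissions. A minimal stdlib-only sketch (an illustrative helper with generic example fields, not a substitute for a real YAML parser or `kubectl apply --dry-run=client`):

```python
# Check that a Kubernetes manifest at least declares apiVersion, kind, and
# metadata before it is submitted through the Console. Purely illustrative;
# the manifest content below is a generic example.
REQUIRED_KEYS = ("apiVersion:", "kind:", "metadata:")

def missing_keys(manifest_text: str):
    lines = [line.strip() for line in manifest_text.splitlines()]
    return [key for key in REQUIRED_KEYS
            if not any(line.startswith(key) for line in lines)]

manifest = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
"""

print(missing_keys(manifest))  # [] means nothing obviously missing
```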

4.3 - API Reference

API Reference

4.4 - CLI Reference

CLI Reference

4.5 - Release Note

Data Flow

2025.04.28
NEW Official Release of Data Flow Service
  • The Data Flow service, which extracts/transforms/transfers data from various sources and automates data processing flows, has been released.
  • It provides open-source Apache NiFi.

5 - Data Ops

5.1 - Overview

Service Overview

Data Ops is a managed workflow orchestration service based on Apache Airflow that writes workflows for periodic or repetitive data processing tasks and automates task scheduling. Users can automate the process of bringing useful data to the right place at the right time, and monitor the configuration and progress of data pipelines.

Architecture Diagram
Figure. Data Ops Architecture Diagram

Provided Features

Data Ops provides the following functions.

  • Easy installation and management: Data Ops can be easily installed in a standard Kubernetes cluster environment through the web-based Console. Apache Airflow and its management modules are installed automatically, and the execution status of web servers and schedulers can be monitored through an integrated dashboard.
  • Dynamic Pipeline Composition: Data pipelines are composed in Python code. Because tasks are generated dynamically in conjunction with task scheduling, you can freely compose the desired workflow shape and schedule.
  • Convenient workflow management: The DAG (Directed Acyclic Graph) configuration is visualized and managed through a web-based UI, making it easy to understand the preceding and parallel relationships in the data flow. In addition, each task's timeout, retry count, priority, and other settings can be easily managed.
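The DAG idea behind these features can be sketched in plain Python. This is a conceptual illustration of a directed acyclic graph of tasks resolved into an execution order, not actual Airflow code; in Data Ops, pipelines are written against the Apache Airflow API, and the task names here are invented:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A DAG as a mapping: task -> set of upstream tasks it depends on.
# Task names are invented examples.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# Topological order guarantees every task runs only after its upstream
# tasks, which is exactly the property a workflow scheduler relies on.
order = list(TopologicalSorter(dag).static_order())
print(order)
```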

Components

Data Ops consists of Manager and Service modules, and provides Apache Airflow by packaging it.

Data Ops Manager

Data Ops Manager provides various management functions for using Airflow more efficiently.

  • Through Data Ops Manager, you can upload plugin files, shared files, and Python library files to be used in Data Ops Service.
  • You can easily provision the setting information for Airflow configuration components within the cluster.
  • You can manage and easily provision different service settings within the Airflow cluster.

Data Ops Service

  • Provides a managed workflow orchestration service based on Apache Airflow.
  • When Airflow is provisioned, you can set the Description, resource size, DAGs GitSync, and Host Alias.
  • After the service is created, you can modify the Description, resource size, DAGs GitSync, and Host Alias and apply them to the service.

Server Spec Type

When creating a Data Ops service, please check the following contents.

  • Recommended Service Installation Specifications: CPU 43 cores (KubernetesExecutor) or 25 cores (CeleryExecutor), memory 50 GB, storage 100 GB or more
Note
  • It is necessary to install Ingress Controller before creating Data Ops service.
  • In a Kubernetes cluster, only 1 Ingress Controller can be installed.
  • For more detailed information, please refer to Ingress Controller installation.

Regional Provision Status

Data Ops is available in the following environments.

Region | Availability
Western Korea (kr-west1) | Provided
Korea East (kr-east1) | Not provided
Korea South (kr-south1) | Provided
Korea Central (kr-central) | Provided
Korea South 3 (kr-south3) | Provided
Table. Data Ops Regional Provision Status

Preceding Service

This is a list of services that must be pre-configured before creating this service. Please refer to the guide provided for each service and prepare in advance.

Service Category | Service | Detailed Description
Storage | File Storage | Storage that allows multiple client servers to share files over network connections
Container | Kubernetes Engine | Kubernetes container orchestration service
Container | Container Registry | A service that easily stores, manages, and shares container images
Table. Data Ops Preceding Services

5.2 - How-to guides

The user can enter the essential information of Data Ops through the Samsung Cloud Platform Console and create the service by selecting detailed options.

Create Data Ops

You can create and use the Data Ops service on the Samsung Cloud Platform Console.

To create Data Ops, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Ops menu. It moves to the Service Home page of Data Ops.

  2. On the Service Home page, click the Create Data Ops button. It moves to the Create Data Ops page.

  3. On the Create Data Ops page, enter the information required for service creation and select detailed options.

    • In the Version Selection area, select the necessary information.

      Classification | Necessity | Detailed Description
      Data Ops version | Required | Select the version of the server image
      • Provides a list of available server image versions
      Table. Data Ops version selection items

    • In the Cluster Selection area, enter or select the required information. Before installing Data Ops, the Kubernetes cluster nodes and the working environment must be created.

      Classification | Necessity | Detailed Description
      Cluster Name | Required | Select the cluster to use
      Ingress Controller | Required | Select the Ingress Controller installed in the cluster
      Table. Data Ops cluster selection items

    • In the Service Information area, enter or select the necessary information.

      Classification | Necessity | Detailed Description
      Data Ops name | Required | Enter the Data Ops name
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 30 characters using lowercase letters, numbers, and hyphens (-)
      Storage Class | Required | Select the storage class used by the selected cluster
      Description | Optional | Enter additional information or a description of the Data Ops within 150 characters
      Domain Setting | Required | Enter the Data Ops domain
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 50 characters using lowercase letters, numbers, and hyphens (-)
      • {Data Ops name}.{set domain} becomes the Data Ops access address
      Node Selector | Required | To install on a specific node, enter a label that identifies the node
      • An incorrect node label can cause an installation error, so check the node label in advance
      • The node label can be found in the node's YAML file
      Account | Required | Enter the Data Ops Manager account
      • ID: Start with a lowercase English letter and enter 6 to 30 characters using lowercase letters and numbers
      • Password: Enter 8 to 50 characters including uppercase letters, lowercase letters, numbers, and special characters (!@#$%^&*)
      • Password Confirmation: Enter the same password again
      Host Alias | Optional | Add host information to connect to Data Ops (up to 20 entries, including the default)
      • Select Use, then click the + button
      • Hostname: Enter 3 to 63 characters in hostname or domain format, using lowercase letters, numbers, and hyphens (-)
      • IP: Enter in IP address format
      • To delete an entry, click the X button
      • The firewall between the cluster and the host must be open to use the added host information
      Table. Data Ops service information input items

    • In the Additional Information area, enter or select the required information.

      Classification | Necessity | Detailed Description
      Tag | Optional | Add tags
      • Click the Add Tag button to create and add new tags or add existing tags
      • Up to 50 tags can be added
      • Newly added tags are applied after service creation is complete
      Table. Data Ops additional information input items

  4. In the Summary panel, review the detailed information and estimated charges, and then click the Complete button.

    • Once creation is complete, check the created resource on the Data Ops list page.

Check Data Ops Detailed Information

You can check and modify the full list of Data Ops resources and detailed information. The Data Ops details page consists of detailed information, tags, and work history tabs.

To check the detailed information of Data Ops, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Ops menu. It moves to the Service Home page of Data Ops.
  2. On the Service Home page, click the Data Ops menu. It moves to the Data Ops list page.
  3. On the Data Ops list page, click a resource to check its detailed information. It moves to the Data Ops details page.
    • The top of the Data Ops details page shows status information and additional functions.
Classification | Detailed Description
Status Display | Data Ops status
  • Creating: the service is being created
  • Running: the service is operating; Data Ops Services can be created
  • Updating: settings are being updated
  • Terminating: the service is being terminated
  • Error: an error occurred during creation or the service is in an abnormal state
Hosts file setting information | Button to check and copy the hosts file information used to access Data Ops
Service Cancellation | Button to cancel the service
Table. Data Ops status information and additional functions

Detailed Information

On the Data Ops list page, you can check the detailed information of the selected resource and modify the information if necessary.

Classification | Detailed Description
Service | Service name
Resource Type | Resource type
SRN | Unique resource ID on Samsung Cloud Platform
  • Refers to the cluster SRN
Resource Name | Resource name
  • Refers to the cluster name
Resource ID | Unique resource ID within the service
Creator | User who created the service
Creation Time | Time the service was created
Modifier | User who modified the service information
Modified Date | Date the service information was modified
Cluster Name | Name of the cluster composed of servers
Storage Class | Storage class used by the selected cluster
Description | Additional information or description of the Data Ops
Domain Setting | Data Ops domain name
Node Selector | Node label
Web Url | Data Ops URL
Account | Data Ops Manager account
Host Alias | Host information connected to Data Ops
Table. Data Ops detailed information tab items

Tag

On the Data Ops list page, you can check the tag information of the selected resource, and add, change, or delete it.

Classification | Detailed Description
Tag list | List of tags
  • Check the Key and Value information of each tag
  • Up to 50 tags can be added per resource
  • When entering a tag, search and select from the existing Key and Value list
Table. Data Ops tag tab items

Work History

You can check the work history of the selected resource on the Data Ops list page.

Classification | Detailed Description
Work history list | Resource change history
  • Check the work date, resource ID, resource name, work details, event topic, work result, and worker information
Table. Data Ops work history tab items

Cancel Data Ops

You can cancel unused Data Ops to reduce operating costs. Note that cancelling the service stops it immediately, so consider the impact of stopping it carefully before proceeding.

To cancel Data Ops, follow the procedure below.

  1. Click the All Services > Data Analytics > Data Ops menu. It moves to the Service Home page of Data Ops.
  2. On the Service Home page, click the Data Ops menu. It moves to the Data Ops list page.
  3. On the Data Ops list page, select the resource to cancel and click the Service Cancellation button.
  4. Once the cancellation is complete, check on the Data Ops list page that the resource has been cancelled.
Notice
Data Ops cannot be deleted until you delete the connected Data Ops Services.

5.2.1 - Data Ops Services

Users can enter essential information for Data Ops Services within the Data Ops service and create the service by selecting detailed options through the Samsung Cloud Platform Console.

Create Data Ops Services

The user can add a service by selecting detailed options for Data Ops or entering setting values.

Notice
When applying for Data Ops Services, make sure the Kubernetes cluster has enough available capacity for the requested resources.

To create Data Ops Services, follow the procedure below.

  1. Click on the menu for all services > Data Analytics > Data Ops. It moves to the Service Home page of Data Ops.

  2. On the Service Home page, click Data Ops Services. It moves to the Data Ops Services list page.

  3. On the Data Ops Services list page, click the Create Data Ops Services button. It moves to the Create Data Ops Services page.

  4. On the Create Data Ops Services page, enter the information required for service creation and select detailed options.

    • In the Service Information area, enter or select the required information.

      Classification | Necessity | Detailed Description
      Data Ops Name | Required | Select the Data Ops
      Ops Service Name | Required | Enter the Data Ops Services name
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 30 characters using lowercase letters, numbers, and hyphens (-)
      Storage Class | Required | Select the storage class used by the chosen cluster
      Description | Optional | Enter additional information or a description of the Data Ops Services within 150 characters
      Domain Setting | Required | Enter the Data Ops Services domain
      • Start with a lowercase English letter, do not end with a hyphen (-), and enter 3 to 50 characters using lowercase letters, numbers, and hyphens (-)
      • {Data Ops Services name}.{set domain} becomes the Data Ops Services access address
      Node Selector | Required | To install on a specific node, enter a label that identifies the node
      • An incorrect node label can cause an installation error, so check the node label in advance
      • The node label can be found in the node's YAML file
      Service Workload | Required |
      • Web Server: Provides visualization of DAG components and status, and the Airflow configuration management module
      • Scheduler: Manages the scheduling and execution of DAGs and tasks for orchestration
      • Worker: Performs the actual orchestration and data processing tasks
        • Worker (Kubernetes): Dynamically creates and runs pods when worker conditions are met, allowing efficient resource usage. The Replica text box is disabled when Kubernetes is selected.
        • Worker (Celery): Creates and maintains static pods, allowing faster performance under heavy request loads. The Replica text box is enabled for user input when Celery is selected.
        • The executor type cannot be changed once selected
      Account | Required | Enter the Airflow account
      • ID: Start with a lowercase English letter and enter 6 to 30 characters using lowercase letters and numbers
      • Password: Enter 8 to 50 characters including uppercase letters, lowercase letters, numbers, and special characters (!@#$%^&*)
      • Password Confirmation: Enter the same password again
      Table. Data Ops Services service information input items

    • In the Additional Information area, enter or select the required information.

      Classification
      Necessity
      Detailed Description
      Host AliasOptionalAdd host information to be connected to Data Ops (up to 20 can be created, including the default)
      • Select Use and click the + button
      • Hostname: Enter in hostname or domain format, using lowercase letters, numbers, and special characters (-) with 3 ~ 63 characters
      • IP: Enter in IP format
      • To delete, click the X button
      • The firewall between the cluster and the server must be open to use the added host information
      TagOptionalAdd tags
      • Click the Add Tag button to create new tags or add existing ones
      • Up to 50 tags can be added
      • Newly added tags are applied after service creation is complete
      Table. Data Ops Services additional information input items

  5. In the Summary panel, review the detailed information and estimated charges, then click the Complete button.

    • Once creation is complete, check the created resource on the Data Ops Services list page.
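The input constraints above (service name, domain, and Host Alias format) can be pre-checked before filling in the console form. The sketch below is illustrative only; the helper names and regular expressions are derived from the rules stated in the tables, not from a platform SDK.

```python
import ipaddress
import re

# Patterns derived from the rules stated above (hypothetical helpers, not a
# platform API): start with a lowercase letter, do not end with '-', and use
# only lowercase letters, digits, and '-'.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{1,28}[a-z0-9]$")      # 3-30 chars
DOMAIN_RE = re.compile(r"^[a-z][a-z0-9-]{1,48}[a-z0-9]$")    # 3-50 chars
HOSTNAME_RE = re.compile(r"^[a-z0-9-]{3,63}$")               # Host Alias hostname

def access_address(name: str, domain: str) -> str:
    """Build the {Data Ops Services name}.{domain} access address."""
    if not NAME_RE.fullmatch(name):
        raise ValueError(f"invalid Data Ops Services name: {name!r}")
    if not DOMAIN_RE.fullmatch(domain):
        raise ValueError(f"invalid domain: {domain!r}")
    return f"{name}.{domain}"

def host_alias(hostname: str, ip: str) -> tuple:
    """Validate one Host Alias (hostname, IP) pair."""
    if not HOSTNAME_RE.fullmatch(hostname):
        raise ValueError(f"invalid hostname: {hostname!r}")
    ipaddress.ip_address(ip)  # raises ValueError for a malformed IP
    return hostname, ip

print(access_address("dataops-dev", "demo-internal"))  # dataops-dev.demo-internal
print(host_alias("hadoop-master-01", "10.0.1.15"))
```

Running such a check locally avoids a round trip through the console's form validation when scripting many environments.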

Check Data Ops Services Details

You can check and modify the full list of Data Ops Services resources and their detailed information. The Data Ops Services Details page consists of the Details, Tags, and Work History tabs.

To check the details of Data Ops Services, follow the procedure below.

  1. Click All Services > Data Analytics > Data Ops. You will be taken to the Data Ops Service Home page.
  2. On the Service Home page, click the Data Ops Services menu. You will be taken to the Data Ops Services list page.
  3. On the Data Ops Services list page, click the resource to check its detailed information. You will be taken to the Data Ops Services details page.
    • At the top of the Data Ops Services Details page, status information and additional features are displayed.
ClassificationDetailed Description
Status IndicatorData Ops Services status
  • Creating: being created
  • Running: operating
  • Updating: updating settings
  • Terminating: service termination in progress
  • Error: creation failed or service unavailable
Hosts file setting informationButton to check and copy the hosts file information needed to access Data Ops Services
Data Ops Services deletionButton to cancel the service
Table. Data Ops Services status information and additional features

Detailed Information

On the Data Ops Services list page, you can check the detailed information of the selected resource and modify the information if necessary.

ClassificationDetailed Description
ServiceService Category
Resource TypeResource type
SRNUnique resource ID in Samsung Cloud Platform
  • Means cluster SRN
Resource NameResource Name
  • Means cluster name
Resource IDUnique resource ID in the service
CreatorUser who created the service
Creation TimeTime when the service was created
ModifierUser who modified the service information
Revision TimeThe time when service information was revised
Data Ops NameData Ops Full Name
Storage ClassStorage class used by the selected cluster
DescriptionAdditional information or description about Data Ops Services
Domain SettingData Ops Services domain name
Node SelectorNode Label
Web UrlData Ops Services URL
AccountAirflow Account
Host AliasHost information to be connected to Data Ops Services
Table. Data Ops Services detailed information tab items

Tag

On the Data Ops Services list page, you can check the tag information of the selected resource and add, change, or delete it.

ClassificationDetailed Description
Tag listTag list
  • Key, Value information of the tag can be checked
  • Up to 50 tags can be added per resource
  • When entering a tag, search and select from the existing Key and Value list
Table. Data Ops Services tags tab items

Work History

You can check the operation history of the selected resource on the Data Ops Services list page.

ClassificationDetailed Description
Work history listResource change history
  • Check work time, resource ID, resource name, work details, event topic, work result, and worker information
Table. Data Ops Services work history tab items

Data Ops Services cancellation

You can cancel unused Data Ops Services to reduce operating costs. However, canceling a service may immediately stop the running service, so consider the impact of the stoppage carefully before proceeding with cancellation.

To cancel Data Ops Services, follow the procedure below.

  1. Click All Services > Data Analytics > Data Ops. You will be taken to the Data Ops Service Home page.
  2. On the Service Home page, click the Data Ops Services menu. You will be taken to the Data Ops Services list page.
  3. On the Data Ops Services list page, select the resource to cancel and click the Data Ops Services delete button.
  4. After cancellation is complete, check that the resource has been canceled on the Data Ops Services list page.

5.2.2 - Ingress Controller Install

Users must install the Ingress Controller before creating the Data Ops service. Only one Ingress Controller should be installed per Kubernetes cluster.

Install Ingress Controller using Container Registry

To install the Ingress Controller using Container Registry, follow the steps below.

For detailed Container Registry creation instructions, refer to the Container > Container Registry > How-to guides.
  1. Check the service domain, then download the corresponding Ingress Controller image file.
  2. Click All Services > Container > Kubernetes Engine > Workloads > Pods. You will be taken to the Pod List page.
  3. Click the Create Object button. The Create Object popup opens.
  4. After selecting the cluster on which to install Data Ops, copy and paste the contents of the YAML file.
  5. Click the Confirm button to complete the installation. The installed Ingress Controller appears in the list.
Reference
For details on creating objects, refer to Container > Kubernetes Engine > Create Deployment.

5.3 - API Reference

API Reference

5.4 - CLI Reference

CLI Reference

5.5 - Release Note

Data Ops

2025.04.28
NEW Data Ops Service Official Version Release
  • With the release of the Data Ops service, you can create workflows and automate job scheduling for periodic or repetitive data processing tasks.
  • It is a managed workflow orchestration service based on Apache Airflow.

6 - Quick Query

6.1 - Overview

Service Overview

Quick Query is an interactive query service that allows you to analyze large amounts of data quickly and easily using standard SQL. It is automatically installed on a standard Kubernetes cluster and provides easy and fast access to various data sources such as Cloud Hadoop, Object Storage, and RDB, enabling data retrieval and processing.

Key Features

  • Easy and Fast Data Retrieval: After defining a schema for data stored in Object Storage, you can easily and quickly retrieve data using standard SQL. Any user who can handle SQL can easily analyze large datasets without being a professional analyst.
  • Rapid Parallel Distributed Processing: Using the Trino engine, which supports parallel distributed processing, queries are automatically divided and processed in parallel on multiple nodes, allowing you to quickly retrieve query results even for large amounts of data.
  • Various Service Structures: It provides a shared fixed resource mode, a shared resource expansion mode, and a personal resource expansion mode. The shared fixed resource mode supports a stable response speed for large data queries, while the shared resource expansion mode allows for more affordable use in cases of irregular usage. Additionally, the personal resource expansion mode supports each user’s independent analysis work, enabling the use of Quick Query with a structure that meets user demands.
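As a rough illustration of the parallel distributed processing described above, a scatter-gather sketch is shown below: the work is split into per-partition tasks, each aggregated by a worker, and the partial results are merged by a coordinator. This is a toy model of the pattern, not Trino itself.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy scatter-gather model of parallel query processing (illustration only,
# not Trino): split the data into partitions, aggregate each partition on a
# separate worker, then merge the partial results on the coordinator.
data = list(range(1000))
partitions = [data[i::4] for i in range(4)]     # 4 "worker" partitions

def partial_sum(part):
    return sum(part)                            # per-partition aggregation

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partial_sum, partitions))

total = sum(partials)                           # coordinator merge step
print(total == sum(data))                       # True: same result as a sequential scan
```

Because each partition is processed independently, adding workers shortens the scan phase, which is why the resource expansion modes scale worker nodes with the processing volume.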

Service Composition Diagram

Composition Diagram
Figure. Quick Query Composition Diagram

Provided Functions

Quick Query provides the following functions:

  • Single Access Support for Various Data Sources (Supporting 11 Data Sources)
  • Automatic Storage Function for Result Data in Object Storage
  • Reuse Function for Query Results
  • Access Control Function through Ranger Integration
  • Data Usage Control Function
Category | Type | Note
Cloud Hadoop | hive_on_cloud_hadoop, iceberg_on_cloud_hadoop | Using Cloud Hadoop’s Hive Metastore
Object Storage | hive_on_object_storage, iceberg_on_object_storage | Deploying Hive Metastore in Quick Query
RDB | postgresql, mariadb, sqlserver, oracle, mysql | JDBC Driver upload required (licensed)
TPCDS | tpcds | Built-in Data Source provided by Quick Query
TPCH | tpch | Built-in Data Source provided by Quick Query
Table. Supported Data Sources
Typeselectinsertupdatedeletecreatedropalteranalyzecall
hive_on_cloud_hadoopOOOOOOOOO
iceberg_on_cloud_hadoopOOOOOOOOO
hive_on_object_storageOOOOOOOOO
iceberg_on_object_storageOOOOOOOOO
postgresqlOOOOOO
mariadbOOOOOO
sqlserverOOOOOO
greenplumOOOOOO
oracleOOOOOO
mysqlOOOOOO
tpcdsO
tpchO
Table. Supported SQL

Components

Query Engine Type: Shared

In the shared type, a single running query engine is shared by multiple users.

  • Fixed Resource Mode (No Auto Scaling): When Auto Scaling is not used, the query engine runs with fixed resources according to the user’s selection. Since the query engine always runs with the same resources, it can guarantee consistent query performance.

    Diagram
    Figure. Fixed Resource Mode (No Auto Scaling)
  • Resource Expansion Mode (Using Auto Scaling): When Auto Scaling is used, the query engine’s worker nodes automatically scale in/out according to the processing volume. When the processing volume is low, the worker nodes decrease to one, and when the processing volume increases, the worker nodes increase. Additionally, resources can be adjusted according to the cluster size.

    Diagram
    Figure. Resource Expansion Mode (Using Auto Scaling)

Query Engine Type: Personal

  • Resource Expansion Mode (Using Auto Scaling): In the personal query engine type, a separate query engine runs for each user. Each engine supports Auto Scale in/out and automatically stops when unused for an extended period; it restarts automatically on the next use. Worker nodes decrease to one when the processing volume is low and increase as it grows. Additionally, resources can be adjusted according to the cluster size.

    Diagram
    Figure. Resource Expansion Mode (Using Auto Scaling)

Server Type

The server types supported by Quick Query are as follows:

ClassificationExampleDetailed Description
Server TypeStandardProvided server types
  • Standard: Standard specifications (vCPU, Memory) configuration commonly used
  • High Capacity: Server specifications with 24 cores or more
Server Sizes1v2m4Provided server specifications
  • vCPU 2, Memory 4G
Table. Quick Query Supported Server Types

The minimum specifications required to use Quick Query are as follows:

Classification | Details | Cluster Size (User Input Value) | Fixed Node Pool | Auto-Scaling Node Pool
Shared | Fixed Resource Mode (No Auto Scaling) | Replica: 1, CPU: 4 Core, Memory: 8GB | 8 Core, 16GB * 4 | N/A
Shared | Resource Expansion Mode (Using Auto Scaling) | Small (1 Core, 4GB) | 8 Core, 16GB * 3 | 8 Core, 16GB * 1
Personal | Resource Expansion Mode (Using Auto Scaling) | Small (1 Core, 4GB) | 8 Core, 16GB * 3 | 8 Core, 32GB * 2
Table. Quick Query Minimum Specifications

Region-Based Provisioning Status

Quick Query is available in the following environments:

RegionAvailability
Korea West (kr-west1)Available
Korea East (kr-east1)Available
Korea South 1 (kr-south1)Not Available
Korea South 2 (kr-south2)Not Available
Korea South 3 (kr-south3)Not Available
Table. Quick Query Region-Based Provisioning Status

Preceding Services

The following services must be configured before creating Quick Query. Please refer to the guides provided for each service to prepare them in advance.

Service CategoryServiceDetailed Description
NetworkingVPCA service that provides an independent virtual network in a cloud environment
NetworkingSecurity GroupA virtual firewall that controls server traffic
StorageFile StorageA storage that allows multiple client servers to share files through network connections
Table. Quick Query Preceding Services

6.2 - How-to guides

Users can create Quick Query services by entering the required information and selecting detailed options through the Samsung Cloud Platform Console.

Creating Quick Query

You can create Quick Query services through the Samsung Cloud Platform Console.

To create Quick Query, follow these steps:

  1. Click All Services > Data Analytics > Quick Query. This will take you to the Service Home page of Quick Query.

  2. On the Service Home page, click the Create Quick Query button. This will take you to the Create Quick Query page.

  3. On the Create Quick Query page, enter the required information and select the detailed options.

    • In the Version Selection section, select the required information.
      Category
      Required
      Description
      Quick QueryRequiredSelect the Quick Query service version
      • Provides a list of available versions
      Table. Quick Query Service Version Selection Items
    • In the Service Information Input section, enter or select the required information.
      Category
      Required
      Description
      Quick Query NameRequiredEnter the Quick Query name
      • Starts with a lowercase letter and does not end with a special character (-), uses lowercase letters, numbers, and special characters (-) to enter 3-30 characters
      DescriptionOptionalEnter additional information or description of Quick Query within 150 characters
      Domain SettingRequiredEnter the Quick Query domain
      • Starts with a lowercase letter and does not end with special characters (-, .), uses lowercase letters, numbers, and special characters (-, .) to enter 3-50 characters
      • {Quick Query Name}.{Set Domain} will be the Quick Query access address.
      Query Engine TypeRequiredSelect the query engine type
      • Shared: Multiple users share a single query engine
      • Personal: Each user has a separate query engine
      Cluster SizeRequiredSelect the resource capacity for cluster configuration
      • If the engine type is Shared,
        • Auto Scaling can be selected to choose the cluster capacity (Small, Medium, Large, Extra Large).
        • If Auto Scaling is not selected, the cluster capacity can be set by entering Replica, CPU, and Memory.
      • If the engine type is Personal,
        • the cluster capacity can be selected (Small, Medium, Large, Extra Large).
      • Engine capacity (when using Auto Scaling)
        • Small: 1Core, 4GB
        • Medium: 4Core, 16GB
        • Large: 8Core, 64GB
        • Extra Large: 16Core, 128GB
      • Engine capacity (when not using Auto Scaling)
        • Replica: 1-9 input possible, default: 1
        • CPU: 4-24 input possible (4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 input possible), default: 4
        • Memory: 8-256 input possible (8, 16, 32, 64, 128, 192, 256 input possible), default: 8
      Maximum Concurrent Query ExecutionRequiredSelect the maximum number of queries to execute concurrently in Quick Query
      • Available values: 32, 64, 96, 128
      Data Service Console ConnectionRequiredEnter the Data Service Console domain
      • Starts with a lowercase letter and does not end with special characters (-, .), uses lowercase letters, numbers, and special characters (-, .) to enter 3-50 characters
      Host AliasOptionalAdd host information to be connected to Quick Query (up to 20 can be created, including the default)
      • Select Use and click the + button
      • Hostname: Hostname or domain format, using lowercase letters, numbers, and special characters (-, .) to enter 3-63 characters
      • IP: IP format input
      • To delete, click the X button
      • The firewall between the cluster and the corresponding server must be open to use the added host information
      Table. Quick Query Service Information Input Items
    • In the Cluster Information Input section, enter or select the required information.
      Category
      Required
      Description
      Cluster NameRequiredEnter the cluster name
      • Starts with a lowercase letter and does not end with a special character (-), uses lowercase letters, numbers, and special characters (-) to enter 3-30 characters
      Control Area SettingRequired/Optional
      • Kubernetes Version: Displays the Kubernetes version
        • The Kubernetes version can be upgraded after provisioning.
      • Public Endpoint Access: To access the Kubernetes API server endpoint from outside, select Use and enter the Access Control IP Range (cannot be changed after service application).
      • Control Area Logging: Select whether to use control area logging
        • If Use is selected, the cluster control area’s Audit/event log can be checked in Management > Cloud Monitoring > Log Analysis.
        • 1GB of log storage is provided free of charge for all services in the project, and logs exceeding 1GB will be deleted sequentially.
      Network SettingRequiredSet the network connection
      • VPC: Use the same VPC as Data Service Console
      • Subnet: Select a subnet from the selected VPC
      • Security Group: Click Search and select a security group in the Security Group Selection popup window
      File Storage SettingRequiredSelect the file storage volume to be used by the cluster
      • Default Volume (NFS): Click Search and select a file storage in the File Storage Selection popup window
      Table. Quick Query Service Cluster Information Input Items
    • In the Node Pool Information area, enter or select the required information.
      Classification
      Required
      Detailed Description
      Node Pool ConfigurationRequired/OptionalEnter detailed information about the node pool to be added
      • * marked items are required input items
        • If the Query Engine Type is Shared and Auto Scaling is set to Not Used, only the Node Pool Configuration (Fixed) item can be set.
        • Keypair: Select the authentication method to use when connecting to the Virtual Server
      Table. Quick Query Service Node Pool Information Input Items
    • In the Additional Information area, enter or select the required information.
      Classification
      Required
      Detailed Description
      TagsOptionalAdd tags
      • Click the Add Tag button to create new tags or add existing ones
      • Up to 50 tags can be added
      • Newly added tags are applied after service creation is complete
      Table. Quick Query Service Additional Information Input Items
  4. In the Summary panel, check the detailed information created and the estimated billing amount, and click the Complete button.

  • After creation is complete, check the created resource in the Quick Query List page.

Check Quick Query Details

You can check the entire resource list and detailed information of the Quick Query service and modify it. The Quick Query Details page consists of Details, Tags, and Work History tabs.

To check the detailed information of the Quick Query service, follow these steps:

  1. Click All Services > Data Analytics > Quick Query. You will be taken to the Quick Query Service Home page.
  2. On the Service Home page, click the Quick Query menu. You will be taken to the Quick Query List page.
  3. On the Quick Query List page, click the resource to check its detailed information. You will be taken to the Quick Query Details page.
    • At the top of the Quick Query Details page, status information and additional feature information are displayed.
      ClassificationDetailed Description
      Status DisplayStatus of the Quick Query created by the user
      • Creating: Creating
      • Running: Creation complete, service available
      • Updating: Setting update in progress
      • Terminating: Service termination in progress
      • Error: Error occurred during creation or service abnormal state
      Hosts File Setting InformationButton to check and copy host file information for accessing Quick Query and Data Service Console
      Service TerminationButton to terminate the service
      Table. Quick Query Status Information and Additional Features

Details

You can check the detailed information of the resource selected on the Quick Query List page and modify it if necessary.

ClassificationDetailed Description
ServiceService name
Resource TypeResource type
SRNUnique resource ID in Samsung Cloud Platform
  • Means cluster SRN
Resource NameResource name
  • Means cluster name
Resource IDUnique resource ID in the service
CreatorUser who created the service
Creation TimeTime when the service was created
ModifierUser who modified the service information
Modification TimeTime when the service information was modified
Quick Query NameQuick Query name
DescriptionAdditional information or description of Quick Query
VersionQuick Query version
Service TypeQuick Query service type
Query Engine TypeQuick Query engine type
Engine Spec
  • Whether Auto Scaling is used
  • Resource capacity for cluster configuration
Maximum Concurrent Query ExecutionMaximum number of queries that can be executed concurrently in Quick Query
Domain SettingQuick Query domain
Data Service ConsoleData Service Console domain
Host AliasHost information to be connected to Quick Query
Web URLWeb URL of Data Service Console and Quick Query
Cluster NameName of the cluster composed of servers
Installation Node InformationDetailed information of the installed node pool
Table. Quick Query Details Tab Items

Tags

You can check the tag information of the resource selected on the Quick Query List page and add, change, or delete it.

ClassificationDetailed Description
Tag ListTag list
  • Key, Value information of tags can be checked
  • Up to 50 tags can be added per resource
  • When entering tags, existing Key and Value lists can be searched and selected
Table. Quick Query Tag Tab Items

Work History

You can check the work history of the resource selected on the Quick Query List page.

ClassificationDetailed Description
Work History ListResource change history
  • Work time, resource type, resource name, work details, work result, and worker information can be checked
  • Click the corresponding resource in the Work History List. The Work History Details popup window opens.
  • Detailed Search button provides detailed search function
Table. Quick Query Work History Tab Detailed Information Items

Connecting to Quick Query

To connect to Quick Query, follow these steps:

  1. Check the IP of the Windows system (PC) that you want to connect to Quick Query.
    • You need to check the public IP of the system since external access is required.
  2. Check that the IGW connection is enabled in the VPC where Quick Query is installed.
    • The Internet Gateway setting must be enabled for external access.
  3. Add the following contents to the hosts file of the Windows system:
    • Domain address of Data Service Console
    • Domain address of Data Service Console IAM
    • Domain address of Quick Query
    • You can check the hosts file setting information by clicking Hosts file setting information in the Quick Query detailed screen.
  4. Add the following rules to the VPC IGW Firewall that you selected when applying for the Quick Query service:
    • Source IP: IP of the Windows system (PC)
    • Destination IP: Subnet range of the Kubernetes where Quick Query is installed
    • Protocol: TCP
    • Port: 443
  5. Add the following rules to the Load Balancer Firewall that you selected when applying for the Quick Query service:
    • Source IP: IP of the Windows system (PC)
    • Destination IP: Subnet range of the Kubernetes where Quick Query is installed
    • Protocol: TCP
    • Port: 443
  6. Add the following rules to the Security Group that you selected when applying for the Quick Query service:
    • Type: Inbound rule
    • Destination address: IP of the Windows system (PC)
    • Protocol: TCP
    • Port: 443, 30000 ~ 32767
  7. Run the Chrome browser on the Windows system (PC) that you want to connect to and access the Quick Query URL.
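The hosts-file entries in step 3 can be assembled as in the sketch below. Every value here is a placeholder; use the exact IP and domain names shown by the Hosts file setting information button on the Quick Query details page.

```python
# Placeholder values for illustration only; take the real ones from the
# "Hosts file setting information" popup on the Quick Query details page.
LB_IP = "198.51.100.10"                      # load balancer IP (placeholder)
DOMAINS = [
    "console.qq-demo.local",                 # Data Service Console (placeholder)
    "iam.qq-demo.local",                     # Data Service Console IAM (placeholder)
    "quickquery.qq-demo.local",              # Quick Query (placeholder)
]

# One "IP hostname" line per domain, ready to append to the Windows hosts
# file at C:\Windows\System32\drivers\etc\hosts.
hosts_fragment = "\n".join(f"{LB_IP} {d}" for d in DOMAINS)
print(hosts_fragment)
```

Editing the hosts file on Windows requires administrator privileges; after saving, the three domains resolve to the load balancer IP without any DNS change.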

Quick Query Target IP/Port Information

To access Quick Query, add the target IP and port for each service to the Security Group as follows:

ItemProtocolSourceTarget IPPortNote
Quick QueryTCPUser IPQuick Query443, 30000 ~ 32767Quick Query web https
Table. Quick Query Target IP/Port Information
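After opening the firewall and Security Group rules above, reachability of the HTTPS port can be verified with a simple TCP probe. The helper below is a generic sketch (the commented hostname is a placeholder, not a real endpoint):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname; resolves only after the hosts file is set):
# port_open("quickquery.qq-demo.local", 443)
```

A False result usually points at a missing firewall or Security Group rule rather than at Quick Query itself.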

Canceling Quick Query

You can cancel the service to reduce operating costs. However, canceling the service may immediately stop the operating service, so you should carefully consider the impact of service cancellation before proceeding.

To cancel Quick Query, follow these steps:

  1. Click the All Services > Data Analytics > Quick Query menu. You will be taken to the Service Home page of Quick Query.
  2. Click the Quick Query menu on the Service Home page. You will be taken to the Quick Query List page.
  3. On the Quick Query List page, select the resource you want to cancel and click the Cancel Service button.
  4. After cancellation is complete, check if the resource has been canceled on the Quick Query List page.

6.3 - API Reference

API Reference

6.4 - CLI Reference

CLI Reference

6.5 - Release Note

Quick Query

2025.07.01
NEW Quick Query Official Version Release
  • A Quick Query service has been released, allowing for easy analysis of large-scale data using standard SQL.