Event Streams
- 1: Overview
- 1.1: Server Type
- 1.2: Monitoring Metrics
- 1.3: ServiceWatch Metrics
- 2: How-to guides
- 3: API Reference
- 4: CLI Reference
- 5: Release Note
1 - Overview
Service Overview
Event Streams provides fully managed creation and configuration of open source Apache Kafka for large-scale message data processing. Samsung Cloud Platform automates the creation and configuration of Apache Kafka through a web-based Console, and users can configure the main components of Apache Kafka, such as Broker, Zookeeper, and AKHQ, in single or cluster form.
An Event Streams cluster is composed of multiple Broker nodes; between 1 and 10 Brokers can be installed, and 3 or more are typically used. Zookeeper can be installed separately to manage the distributed Brokers; if it is not installed separately, it is installed together on the Broker nodes. Additionally, AKHQ (Apache Kafka HQ), a tool for managing Kafka, is provided, allowing users to manage cluster operations through it.
Provided Features
Event Streams provides the following features.
- Auto Provisioning: You can configure and set up an Apache Kafka cluster via the UI.
- Operation Control Management: Provides a function to control the status of running servers. In addition to starting and stopping the cluster, restarting is possible to apply configuration values.
- AKHQ Provision: AKHQ, a tool that can manage Kafka, is provided, allowing users to manage and monitor clusters through it.
- Add Broker node: If expansion is required to improve the cluster’s performance and stability, you can add nodes with the same specifications as the existing Broker nodes.
- Parameter management: Performance improvement and security-related configuration parameter setting and modification are possible.
- Monitoring: CPU, memory, and performance monitoring information can be checked via Cloud Monitoring and ServiceWatch.
Components
Event Streams provides pre-validated engine versions and various server types according to the open source support policy. Users can select and use them according to the scale of the service they want to configure.
Engine Version
The engine versions supported by Event Streams are as follows.
Technical support is available until the supplier’s EoTS (End of Technical Service) date, and the EoS date, after which new creation is no longer possible, is set to six months before the EoTS date.
The EoS and EoTS dates may change according to the supplier’s policy, so please refer to the supplier’s license management policy page for details.
- Apache Kafka: https://docs.confluent.io/platform/current/installation/versions-interoperability.html
| Provided Version | EoS Date | EoTS Date |
|---|---|---|
| 3.8.0 | 2026-06 (scheduled) | 2026-12-02 |
| 3.9.1 | 2026-09 (scheduled) | 2027-02-19 |
Server Type
The server types supported by Event Streams are as follows.
For detailed information about the server types provided by Event Streams, see Event Streams Server Types.
Standard ess1v2m4
| Category | Example | Detailed Description |
|---|---|---|
| Server Type | Standard | Provided server types |
| Server Specification | ess1 | Provided server specifications |
| Server Specification | v2 | Number of vCores |
| Server Specification | m4 | Memory capacity |
Preceding Service
This is a list of services that must be pre-configured before creating the service. Please refer to the guide provided for each service for details and prepare in advance.
| Service Category | Service | Detailed Description |
|---|---|---|
| Networking | VPC | A service that provides an independent virtual network in a cloud environment |
1.1 - Server Type
Event Streams server type
Event Streams provides server types composed of various combinations of CPU, Memory, Network Bandwidth, etc. When creating Event Streams, Apache Kafka is installed according to the server type selected to suit the purpose of use.
The server types supported in Event Streams are as follows.
Standard ess1v2m4
| Classification | Example | Detailed Description |
|---|---|---|
| Server Type | Standard | Provided server type distinction |
| Server Specification | ess1 | Classification of provided server family and generation |
| Server Specification | v2 | Number of vCores |
| Server Specification | m4 | Memory capacity |
Please select the server type by checking the node’s minimum specifications as follows.
| Classification | vCPU | Memory |
|---|---|---|
| Broker | 2 vCore | 4 GB |
| Zookeeper | 1 vCore | 2 GB |
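The server type naming convention above (family, vCore count, memory capacity) can be parsed mechanically. A minimal sketch, assuming names always follow the `essN`/`eshN` pattern shown in the tables:

```python
import re

def parse_server_type(name: str) -> dict:
    """Parse an Event Streams server type name such as 'ess1v2m4'
    into its family, vCore count, and memory capacity (GB)."""
    m = re.fullmatch(r"(es[sh]\d+)v(\d+)m(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized server type name: {name}")
    family, vcpu, mem = m.groups()
    return {"family": family, "vcpu": int(vcpu), "memory_gb": int(mem)}

print(parse_server_type("ess1v2m4"))
# {'family': 'ess1', 'vcpu': 2, 'memory_gb': 4}
```

The same pattern covers both the standard (`ess1`, `ess2`) and high-capacity (`esh2`) families listed below.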
ess1 server type
The ess1 server type of Event Streams is provided with standard specifications (vCPU, Memory) and is suitable for a variety of workloads.
- Up to 3.3 GHz Intel 3rd generation (Ice Lake) Xeon Gold 6342 Processor
- Supports up to 16 vCPUs and 64 GB of memory
- Up to 12.5 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
|---|---|---|---|---|
| Standard | ess1v1m2 | 1 vCore | 2 GB | Up to 10 Gbps |
| Standard | ess1v2m4 | 2 vCore | 4 GB | Up to 10 Gbps |
| Standard | ess1v2m8 | 2 vCore | 8 GB | Up to 10 Gbps |
| Standard | ess1v4m8 | 4 vCore | 8 GB | Up to 10 Gbps |
| Standard | ess1v4m16 | 4 vCore | 16 GB | Up to 10 Gbps |
| Standard | ess1v8m16 | 8 vCore | 16 GB | Up to 10 Gbps |
| Standard | ess1v8m32 | 8 vCore | 32 GB | Up to 10 Gbps |
| Standard | ess1v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps |
| Standard | ess1v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps |
ess2 server type
The ess2 server type of Event Streams is provided with standard specifications (vCPU, Memory) and is suitable for a variety of workloads.
- Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
- Supports up to 16 vCPUs and 64 GB of memory
- Up to 12.5 Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
|---|---|---|---|---|
| Standard | ess2v1m2 | 1 vCore | 2 GB | Up to 10 Gbps |
| Standard | ess2v2m4 | 2 vCore | 4 GB | Up to 10 Gbps |
| Standard | ess2v2m8 | 2 vCore | 8 GB | Up to 10 Gbps |
| Standard | ess2v4m8 | 4 vCore | 8 GB | Up to 10 Gbps |
| Standard | ess2v4m16 | 4 vCore | 16 GB | Up to 10 Gbps |
| Standard | ess2v8m16 | 8 vCore | 16 GB | Up to 10 Gbps |
| Standard | ess2v8m32 | 8 vCore | 32 GB | Up to 10 Gbps |
| Standard | ess2v16m32 | 16 vCore | 32 GB | Up to 12.5 Gbps |
| Standard | ess2v16m64 | 16 vCore | 64 GB | Up to 12.5 Gbps |
esh2 server type
The esh2 server type of Event Streams is provided with high-capacity server specifications and is suitable for large-scale data processing workloads.
- Up to 3.2GHz Intel 4th generation (Sapphire Rapids) Xeon Gold 6448H Processor
- Supports up to 32 vCPUs and 128 GB of memory
- Up to 25Gbps networking speed
| Classification | Server Type | vCPU | Memory | Network Bandwidth |
|---|---|---|---|---|
| High Capacity | esh2v32m64 | 32 vCore | 64 GB | Up to 25 Gbps |
| High Capacity | esh2v32m128 | 32 vCore | 128 GB | Up to 25 Gbps |
1.2 - Monitoring Metrics
Event Streams Monitoring Metrics
The table below shows the performance monitoring metrics of Event Streams that can be checked through Cloud Monitoring. For detailed Cloud Monitoring usage instructions, refer to Cloud Monitoring guide.
For server monitoring metrics of Event Streams, refer to Virtual Server Monitoring Metrics guide.
| Performance Item | Description | Unit |
|---|---|---|
| AKHQ State [PID] | AKHQ process PID | PID |
| Connections [Zookeeper Client] | Number of ZooKeeper connections | cnt |
| Disk Used | datadir usage amount | bytes |
| Failed [Client Fetch Request] | Number of failed client Fetch request processing | cnt |
| Failed [Produce Request] | Number of failed Producer request processing | cnt |
| Incoming Messages | Number of messages received by the Broker | cnt |
| Instance State [PID] | kafka process PID | PID |
| Kibana state [PID] | Kibana process PID | PID |
| Leader Elections | Number of Leader Election occurrences | cnt |
| Leader Elections [Unclean] | Number of Unclean Leader Election occurrences | cnt |
| Log Flushes | Number of log flush occurrences | cnt |
| Network In Bytes | Bytes received by all Topics | bytes |
| Network Out Bytes | Bytes sent by all Topics | bytes |
| Rejected Bytes | Bytes rejected by all Topics | bytes |
| Request Queue Length | Request queue size | cnt |
| Shards | Cluster shard count | cnt |
| Zookeeper Sessions [Closed] | ZooKeeper closed sessions per second | cnt |
| Zookeeper Sessions [Expired] | Zookeeper expired sessions per second | cnt |
| Zookeeper State [PID] | zookeeper process PID | PID |
1.3 - ServiceWatch Metrics
Event Streams sends metrics to ServiceWatch. The metrics provided by basic monitoring are data collected at a 1‑minute interval.
Basic Indicators
The following are the basic metrics for the namespace Event Streams.
OS Basic Metrics
| Category | Performance Item | Detailed Description | Unit | Meaningful Statistics |
|---|---|---|---|---|
| CPU | CPU Usage | CPU Usage Rate | Percent | |
| Disk | Disk Usage | Disk Usage Rate | Percent | |
| Disk | Disk Write Bytes | Write capacity on block device (bytes/second) | Bytes/Second | |
| Disk | Disk Read Bytes | Amount read from block device (bytes/second) | Bytes/Second | |
| Disk | Disk Write Request | Number of write requests on block device (requests/second) | Count/Second | |
| Disk | Disk Read Requests | Number of read requests on block device (requests/second) | Count/Second | |
| Disk | Average Disk I/O Queue Size | Average queue length of requests issued to the block device | None | |
| Disk | Disk I/O Utilization | Proportion of time the block device actually processes I/O operations | Percent | |
| Memory | Memory Usage | Memory Usage Rate | Percent | |
| Network | Network In Bytes | Received capacity on the network interface (bytes/second) | Bytes/Second | |
| Network | Network Out Bytes | Data transmitted from network interface (bytes/second) | Bytes/Second | |
| Network | TCP Connections | Total number of TCP connections currently established correctly | Count/Second | |
| Network | Network In Packets | Number of packets received on the network interface | Count | |
| Network | Network Out Packets | Number of packets transmitted from the network interface | Count | |
| Network | Network In Dropped | Number of packet drops received on the network interface | Count | |
| Network | Network Out Dropped | Number of packet drops transmitted from the network interface | Count | |
| Network | Network In Errors | Number of packet errors received on the network interface | Count | |
| Network | Network Out Errors | Number of packet errors transmitted from the network interface | Count | |
Event Streams Basic Metrics
| Category | Performance Item | Detailed Description | Unit | Meaningful Statistics |
|---|---|---|---|---|
| Activelock | Active locks | Number of active locks | Count | |
| Activesession | Active sessions | Number of active sessions | Count | |
| Activesession | Connection usage | DB connection session usage rate | Percent | |
| Activesession | Connections | DB connection session | Count | |
| Activesession | Connections(MAX) | Maximum number of connections that can be attached to the DB | Count | |
| ProxySQL | Proxy Uptime | Express the proxy’s uptime in seconds | Seconds | |
| ProxySQL | Backend connections(CONNECTED) | Number of sessions connected to the Proxy server | Count | |
| ProxySQL | Client connections connected | Number of client sessions currently connected to the proxy | Count | |
| ProxySQL | Queries routed | Number of queries routed to backend server | Count | |
| ProxySQL | Backend connections(ACTIVE, IDLE) | Number of Active / idle connections per Endpoint | Count | |
| ProxySQL | Backend server status | Backend server status | None | |
| ProxySQL | Backend connection check | Backend server’s connection success/failure check | Count | |
| State | Instance state | Scalable DB status up/down check | Count | |
| State | Slave behind master seconds | Replica’s delay amount (unit: seconds) | Seconds | |
| Tablespace | Tablespace used | Tablespace usage | Megabytes | |
| Tablespace | Tablespace used(TOTAL) | Tablespace usage (total) | Megabytes | |
| Transactions | Slow queries | Number of slow queries | Count | |
| Transactions | Transaction time | Long Transaction time | Seconds | |
| Transactions | Wait locks | Number of sessions waiting for a lock | Count | |
2 - How-to guides
Users can enter the required information for Event Streams through the Samsung Cloud Platform Console, select detailed options, and create the service.
Event Streams Create
You can create and use the Event Streams service from the Samsung Cloud Platform Console.
Before creating the service, please configure the VPC’s Subnet type as General.
- If the Subnet type is Local, the creation of the corresponding Database service is not possible.
To create Event Streams, follow these steps.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- On the Service Home page, click the Create Event Streams button. You will be taken to the Create Event Streams page.
- On the Create Event Streams page, enter the information required to create the service and select detailed options.
- In the Image and version selection area, select the required information.
| Category | Required | Detailed Description |
|---|---|---|
| Image version | Required | Provided version list of Event Streams |

Table. Event Streams Service Information Input Items

- In the Service Information Input area, enter or select the required information.
  - Server Name Prefix (Required): Server name where Apache Kafka will be installed
    - Start with a lowercase English letter, and use lowercase letters, numbers, and the hyphen (-) to enter 3 to 13 characters
    - A postfix such as 001, 002 is appended to the server name to create the actual server name
  - Cluster Name (Required): Cluster name of the servers
    - Enter 3 to 20 characters using English letters
    - A cluster is a unit that groups multiple servers
  - Broker > Broker Node count (Required): Number of Broker nodes
  - Broker > Server Type (Required): Server type where the Broker will be installed
    - Standard: Commonly used standard specifications
    - High Capacity: Large-capacity server with 24 vCore or more
    - For detailed information about server types provided by Event Streams, refer to Event Streams Server Type
  - Broker > Planned Compute (Select): Status of resources with Planned Compute set
    - In Use: Number of resources with Planned Compute set that are currently in use
    - Configured: Number of resources with Planned Compute set
    - Coverage Preview: Amount applied by Planned Compute per resource
    - Apply for Planned Compute Service: Go to the Planned Compute service application page
    - For details, refer to Apply for Planned Compute
  - Broker > Block Storage (Required): Block Storage type to be used for the Broker node
    - Base OS: Area where the engine is installed
    - DATA: Data file storage area
      - Select the storage type and then enter the capacity (for detailed information on each Block Storage type, refer to Create Block Storage)
        - SSD: High-performance general volume
        - HDD: General volume
        - SSD_KMS/HDD_KMS: Encrypted volume using a Samsung Cloud Platform KMS (Key Management System) encryption key
      - Enter the capacity as a multiple of 8 within the range 16 to 5,120
  - Zookeeper separate installation > Use (Select): Zookeeper node separate installation option
    - If Use is selected, Zookeeper nodes are installed separately
    - If Zookeeper nodes are not installed separately, the Broker nodes also perform the Zookeeper role
  - Zookeeper separate installation > Server Type (Select): Server type where Zookeeper will be installed
    - Zookeeper nodes provide vCPU 1 / Memory 2 GB or vCPU 2 / Memory 4 GB
  - Zookeeper separate installation > Planned Compute (Select): Status of resources with Planned Compute set
    - In Use: Number of resources with Planned Compute set that are currently in use
    - Configured: Number of resources with Planned Compute set
    - Coverage Preview: Amount applied by Planned Compute per resource
    - Apply for Planned Compute Service: Go to the Planned Compute service application page
    - For details, refer to Apply for Planned Compute
  - Zookeeper separate installation > Block Storage (Required): Block Storage type to be used on the Zookeeper nodes
    - Base OS: Area where the engine is installed
    - DATA: Data file storage area
      - Select the storage type and then enter the capacity (for detailed information on each Block Storage type, refer to Create Block Storage)
        - SSD: High-performance general volume
        - HDD: General volume
        - SSD_KMS/HDD_KMS: Encrypted volume using a Samsung Cloud Platform KMS (Key Management System) encryption key
      - Enter the capacity as a multiple of 8 within the range 16 to 5,120
  - AKHQ > Use (Required): AKHQ installation option
    - If Use is selected, AKHQ is installed
  - AKHQ > Server Type (Required): Server type where AKHQ will be installed
    - AKHQ only provides the vCPU 2 / Memory 4 GB type
  - AKHQ > Planned Compute (Select): Status of resources with Planned Compute set
    - In Use: Number of resources with Planned Compute set that are currently in use
    - Configured: Number of resources with Planned Compute set
    - Coverage Preview: Amount applied by Planned Compute per resource
    - Apply for Planned Compute Service: Go to the Planned Compute service application page
    - For details, refer to Apply for Planned Compute
  - AKHQ > Block Storage (Required): Block Storage type to be used on the server where AKHQ is installed
    - Base OS: Area where the engine is installed
  - AKHQ > AKHQ account (Required): AKHQ account
    - Enter 2 to 20 characters using lowercase English letters
  - AKHQ > AKHQ password (Required): AKHQ account password
    - Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
  - AKHQ > AKHQ Password Confirmation (Required): AKHQ account password confirmation
    - Re-enter the same AKHQ account password
  - AKHQ > AKHQ Port Number (Required): AKHQ connection port number
    - The port number is automatically set to 8080 and cannot be modified
  - Network > Common Settings (Required): Network settings for the servers created by the service
    - Choose to apply the same settings to all installed servers
    - Select a pre-created VPC and Subnet
    - IP: Only automatic assignment is possible
    - Public NAT can only be configured in per-server settings
  - Network > Per-Server Settings (Required): Network settings for the servers created by the service
    - Choose to apply different settings per installed server
    - Select a pre-created VPC and Subnet
    - IP: Enter each server’s IP
    - The Public NAT feature is available only when the VPC is connected to an Internet Gateway; if you check Use, you can select from reserved IPs in the VPC product’s Public IP. For details, see Create Public IP
  - IP Access Control (Select): Service access policy settings
    - Because the access policy is set for the IPs entered on the page, you do not need to configure Security Group policies separately
    - Enter in IP format (e.g., 192.168.10.1) or CIDR format (e.g., 192.168.10.0/24, 192.168.10.1/32) and click the Add button
    - To delete an entered IP, click the x button next to it
  - Maintenance Period (Select): Event Streams maintenance period
    - Select Use to set the day of the week, start time, and duration
    - Setting a maintenance period is recommended for stable service management; patch work is performed at the set time, and service interruption may occur
    - The provider is not responsible for issues arising from patches that were not applied (when set to not used)

Table. Event Streams service configuration items
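Several of the input constraints above (server name prefix, DATA volume capacity, IP Access Control entries) can be pre-checked before submitting the form. A minimal sketch using only the rules stated above; the function names are illustrative:

```python
import ipaddress
import re

def valid_server_name_prefix(name: str) -> bool:
    # 3 to 13 chars, starting with a lowercase letter; lowercase
    # letters, digits, and hyphen (-) allowed (rules from the table)
    return re.fullmatch(r"[a-z][a-z0-9-]{2,12}", name) is not None

def valid_data_capacity_gb(size: int) -> bool:
    # multiple of 8 within the range 16 ~ 5,120 GB
    return 16 <= size <= 5120 and size % 8 == 0

def valid_access_entry(entry: str) -> bool:
    # single IP (192.168.10.1) or CIDR (192.168.10.0/24) form
    try:
        if "/" in entry:
            ipaddress.ip_network(entry, strict=True)
        else:
            ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False
```

For example, `valid_server_name_prefix("kafka-broker")` passes, while a prefix starting with a digit does not.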
- In the Database configuration required information input area, enter or select the required information.
  - Zookeeper SASL account (Required): Zookeeper account
    - Enter 2 to 20 characters using lowercase English letters
  - Zookeeper SASL password (Required): Zookeeper account password
    - Enter 8 to 30 characters including letters, numbers, and special characters (excluding " and ')
  - Zookeeper SASL password verification (Required): Zookeeper account password verification
    - Re-enter the Zookeeper SASL account password identically
  - Zookeeper Port number (Required): Zookeeper port number
    - 1200 to 65535 can be entered, but the Broker port or 2888, 3888 cannot be used
  - Broker SASL Account (Required): Kafka connection account
    - Enter 2 to 20 characters using lowercase English letters
  - Broker SASL password (Required): Kafka connection account password
    - Enter 8 to 30 characters including English letters, numbers, and special characters (excluding " and ')
  - Broker SASL password verification (Required): Kafka connection account password verification
    - Re-enter the Broker SASL account password identically
  - Broker Port number (Required): Kafka port number
    - 1200 to 65535 can be entered, but the Zookeeper port or 2888, 3888 cannot be used
  - Parameter (Required): Event Streams configuration parameters
    - Click the View button to see detailed information about a parameter
    - Parameters can be modified after service creation is complete; a restart is required when they are modified
  - Time zone (Select): Standard time zone used by the service
  - ServiceWatch log collection (Select): Whether to collect ServiceWatch logs
    - Select Use to enable the ServiceWatch log collection feature
    - For details about the collected logs, refer to ServiceWatch metrics
    - Up to 5 GB is provided free across all services within the account; charges apply based on storage size beyond 5 GB
    - When collecting, log groups and log streams are created automatically and cannot be deleted until the resources are removed
    - To avoid exceeding 5 GB, deleting log data directly or shortening the retention period is recommended

Table. Required information input items for Event Streams Database configuration

- In the Additional Information Input area, enter or select the required information.
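The SASL account and port constraints described above can be checked programmatically, and the resulting values are what a Kafka client would later use to connect. A sketch; the client config dict mirrors kafka-python style option names, and the host, port, and credentials are placeholders:

```python
RESERVED_PORTS = {2888, 3888}  # Zookeeper quorum/leader-election ports

def valid_broker_port(port: int, zookeeper_port: int) -> bool:
    # 1200 ~ 65535, not a reserved Zookeeper quorum port, and not the
    # same as the Zookeeper client port (rules from the table above)
    return (1200 <= port <= 65535
            and port not in RESERVED_PORTS
            and port != zookeeper_port)

# A kafka-python style client config built from the values entered above.
# All values are placeholders, not real defaults of the service.
client_config = {
    "bootstrap_servers": "broker001:9092",  # <server name prefix>001:<Broker port>
    "security_protocol": "SASL_PLAINTEXT",  # assumption: SASL without TLS
    "sasl_mechanism": "PLAIN",
    "sasl_plain_username": "kafkauser",     # Broker SASL account
    "sasl_plain_password": "<Broker SASL password>",
}
```

For example, `valid_broker_port(9092, 2181)` passes, while 2888 or the Zookeeper port itself is rejected.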
  - Tag (Select): Add tags
    - Click the Add Tag button to create and add a new tag, or add an existing tag
    - Up to 50 tags can be added
    - Newly added tags are applied after service creation is complete

Table. Event Streams Service Additional Information Input Items
- In the Summary panel, check the detailed information and the estimated billing amount, and click the Create button.
- Once creation is complete, check the created resource on the Resource List page.
Event Streams Check Detailed Information
The Event Streams service lets you view and edit the full resource list and detailed information. The Event Streams Details page consists of the Details, Tags, and Activity History tabs.
To view detailed information about the Event Streams service, follow these steps.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- On the Service Home page, click the Event Streams menu. It navigates to the Event Streams List page.
- Click the resource to view detailed information on the Event Streams List page. It navigates to the Event Streams Details page.
- The top of the Event Streams Details page displays status information and additional features.
  - Cluster Status: Cluster status
    - Creating: Cluster is being created
    - Editing: Cluster is applying a change operation
    - Error: A failure occurred while the cluster was performing a task
      - If it occurs continuously, contact the administrator
    - Failed: Cluster creation failed
    - Restarting: Cluster is restarting
    - Running: Cluster is operating normally
    - Starting: Cluster is starting
    - Stopped: Cluster is stopped
    - Stopping: Cluster is being stopped
    - Synchronizing: Cluster is synchronizing
    - Terminating: Cluster is terminating
    - Unknown: Cluster status is unknown
      - If it occurs continuously, contact the administrator
    - Upgrading: Cluster is performing an upgrade
  - Cluster Control: Buttons to change the cluster state
    - Start: Start a stopped cluster
    - Stop: Stop a running cluster
    - Restart: Restart a running cluster
  - More additional features: Cluster-related management buttons
    - Service status synchronization: Query the current server status and synchronize it to the Console
    - Parameter management: View and modify service configuration parameters
    - Add Broker Node: Add a Broker node
      - If configured as a cluster, the Add Broker Node button is displayed
  - Service termination: Button to cancel the service

Table. Event Streams status information and additional features
Detailed Information
On the Event Streams Details page, you can view the detailed information of the selected resource and, if necessary, edit the information.
| Category | Detailed Description |
|---|---|
| Server Information | Server information configured in the cluster |
| Service | Service name |
| Resource Type | Resource type |
| SRN | Unique resource ID in Samsung Cloud Platform |
| Resource Name | Resource name |
| Resource ID | Unique resource ID in the service |
| Creator | User who created the service |
| Creation Date/Time | Date and time the service was created |
| Modifier | User who edited the service information |
| Modification Date/Time | Date and time the service information was modified |
| Image Version | Installed service image and version information |
| Cluster Name | Name of the cluster composed of the servers |
| Planned Compute | Planned Compute configuration status of the resources |
| Maintenance Period | Patch work period setting status |
| Time Zone | Standard time zone used by the service |
| Zookeeper Port Number | Zookeeper port number |
| Broker Port Number | Kafka port number |
| AKHQ connection information | AKHQ connection information |
| ServiceWatch log collection | ServiceWatch log collection configuration status |
| Network | Installed network information (VPC, Subnet) |
| IP Access Control | Service access policy settings |
| Zookeeper | Server type, base OS, and additional disk information for the Zookeeper nodes |
| Broker | Server type, base OS, and additional disk information for the Broker nodes |
| AKHQ | Server type and base OS information for the AKHQ node |
Tag
On the Event Streams Details page, you can view the tag information of the selected resource, and you can add, modify, or delete tags.
| Category | Detailed Description |
|---|---|
| Tag List | Tag list |
Activity History
You can view the activity history of the selected resource on the Event Streams Details page.
| Category | Detailed Description |
|---|---|
| Activity History List | Resource change history |
Event Streams Resource Management
If you need to change the existing configuration options of a created Event Streams resource, manage parameters, or add broker node configurations, you can perform the tasks on the Event Streams Details page.
Operating Control
If changes occur to the running Event Streams resources, you can start, stop, or restart.
To control the operation of Event Streams, follow the steps below.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
- On the Event Streams List page, click the resource to control operation. It navigates to the Event Streams Details page.
- Check the Event Streams status and complete the changes using the control button below.
- Start: Starts the server where the Event Streams service is installed, and the Event Streams service runs (Running).
- Stop: Stops the server where the Event Streams service is installed, and the Event Streams service is stopped (Stopped).
- Restart: Restarts only the Event Streams service.
Synchronize Service Status
You can query the current server status and synchronize it to the Console.
To synchronize the service status of Event Streams, follow the steps below.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
- On the Event Streams List page, click the resource whose service status you want to query. It navigates to the Event Streams Details page.
- Click the Service Status Synchronization button. It takes a little time to retrieve, and while retrieving, the cluster changes to Synchronizing state.
- When the query is completed, the status in the server information item is updated, and the cluster changes to Running state.
Parameter Management
Provides parameter query and modification functions.
To view and modify configuration parameters, follow the steps below.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
- On the Event Streams List page, click the resource whose parameters you want to view and edit. You will be taken to the Event Streams Details page.
- Click the Parameter Management button. You will be taken to the Parameter Management page.
- On the Parameter Management page, click the Search button. The Database Search popup window opens.
- To view the Parameter information, click the Confirm button. It takes a little time to retrieve.
- You can modify the Parameter information after performing a query.
- To edit the Parameter information, click the Edit button and then enter the changes in the Custom Value area of the Parameter to be edited.
- When the application type is dynamic, it is applied immediately, and when it is static, a service restart is required, causing service interruption.
- When input is complete, click the Save button.
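Whether a parameter edit applies immediately or needs a restart depends on its application type, as described above. A hypothetical sketch; the actual dynamic/static classification of each parameter is defined by the service and shown in the Console, so the example entries below are assumptions:

```python
# Hypothetical application types for illustration only; the actual
# classification is shown per parameter in the Console.
PARAMETER_APPLY_TYPE = {
    "message.max.bytes": "dynamic",  # assumed dynamic for this sketch
    "broker.rack": "static",         # assumed static for this sketch
}

def restart_required(changed: list) -> bool:
    # A change set needs a service restart if any edited parameter's
    # application type is static; unknown parameters are treated
    # conservatively as static.
    return any(PARAMETER_APPLY_TYPE.get(name, "static") == "static"
               for name in changed)
```

In this sketch, editing only dynamic parameters returns `False` (no restart), while including any static parameter returns `True`.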
Change Server Type
You can change the configured server type.
To change the server type, follow the steps below.
- If the server type is configured as Standard, it cannot be changed to High Capacity. If you want to change to High Capacity, create a new service.
- If you modify the server type, a server reboot is required. Please separately verify any SW license changes or SW setting adjustments required by the specification change.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- Click the Event Streams menu on the Service Home page. Navigate to the Event Streams list page.
- On the Event Streams list page, click the resource to change the server type. You will be taken to the Event Streams details page.
- Click the Edit button of the server type you want to change at the bottom of the detailed information. The Edit Server Type popup window opens.
- In the Edit Server Type popup window, select the server type, and then click the Confirm button.
Expanding storage
You can expand the storage added to the data area up to a maximum of 5TB based on the initially allocated capacity. You can expand the storage without stopping Event Streams, and if configured as a cluster, all nodes are expanded simultaneously.
- If encryption is set on the existing Block Storage, encryption will also be applied to the additional Disk.
- Disk size modification is only possible to increase by at least 16GB over the current disk size.
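The expansion rules above can be validated before submitting the Disk Edit popup. A sketch; the multiple-of-8 sizing constraint is assumed to carry over from the creation-time rule:

```python
MAX_DATA_GB = 5120  # 5 TB ceiling stated above

def valid_expansion(current_gb: int, new_gb: int) -> bool:
    """New size must grow by at least 16 GB and stay within 5 TB.
    (Multiple-of-8 sizing is assumed from the creation-time rule.)"""
    return (new_gb >= current_gb + 16
            and new_gb <= MAX_DATA_GB
            and new_gb % 8 == 0)
```

For example, expanding a 128 GB volume to 144 GB is valid, while 136 GB is rejected because the increase is under 16 GB.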
To increase storage capacity, follow the steps below.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
- On the Event Streams list page, click the resource whose storage you want to expand. You will be taken to the Event Streams details page.
- Click the Edit button of the added Disk you want to expand at the bottom of the detailed information. The Disk Edit popup window opens.
- In the Disk Edit popup window, enter the expanded capacity, and then click the Confirm button.
Add Broker Node
If Event Streams cluster expansion is required, you can add nodes with the same specifications as the Broker nodes in use. Added nodes join the existing cluster without server downtime, and the existing data is automatically redistributed.
- Up to 10 nodes can be used within the cluster. Please note that additional charges apply for created nodes.
- Adding nodes may degrade cluster performance.
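The automatic redistribution mentioned above moves partition data onto the new Brokers, which is why performance can dip temporarily during the operation. An illustrative round-robin sketch (not the platform's actual assignment algorithm) showing how many partitions move when a fourth Broker joins:

```python
def round_robin_assignment(num_partitions: int, brokers: list) -> dict:
    """Illustrative only: assign each partition to a broker round-robin.
    The platform's actual rebalancing algorithm is not documented here."""
    return {p: brokers[p % len(brokers)] for p in range(num_partitions)}

# 12 partitions spread over 3 Brokers, then a 4th Broker is added
before = round_robin_assignment(12, [1, 2, 3])
after = round_robin_assignment(12, [1, 2, 3, 4])

# Every partition whose assignment changes must have its data copied
# over the network, which is the temporary cost noted above.
moved = sum(1 for p in before if before[p] != after[p])
```

In this toy layout, 9 of the 12 partitions change brokers, so most of the cluster's data would be in motion during the rebalance.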
To add a Broker node, follow the steps below.
- Click the All Services > Data Analytics > Event Streams menu. Navigate to the Service Home page of Event Streams.
- On the Service Home page, click the Event Streams menu. Navigate to the Event Streams list page.
- On the Event Streams list page, click the resource to which you want to add a Broker node. The Event Streams details page opens.
- Click the Add Broker Node button. Navigate to the Add Broker Node page.
- In the required information input area, enter the relevant information and click the Complete button.
  - Server Name (Required): Server name where the Broker is installed
    - Set to the server name configured in the original cluster
  - Cluster Name (Required): Cluster name
    - Set to the cluster name configured in the original cluster
  - Number of additional Nodes (Required): Number of nodes to add
    - Up to 10 nodes can be used per cluster
  - Service Type > Server Type (Required): Server type where the Broker will be installed
    - Set to the same server type as the original cluster
  - Service Type > Planned Compute (Select): Status of resources with Planned Compute set
    - In Use: Number of resources with Planned Compute set that are currently in use
    - Configured: Number of resources with Planned Compute set
    - Coverage Preview: Amount applied per resource by Planned Compute
    - Apply for Planned Compute Service: Go to the Planned Compute service application page
    - For details, refer to Apply for Planned Compute
  - Service Type > Block Storage (Required): Block Storage settings to be used on the Broker nodes
    - The storage type and capacity set in the original cluster are applied identically
  - Network (Required): Network where the servers are installed
    - The same network settings as the original cluster are applied

Table. Event Streams Broker Node Additional Items
Event Streams Cancel
You can cancel unused Event Streams to reduce operating costs. However, if you cancel the service, the running service may be stopped immediately, so you should consider the impact of service interruption sufficiently before proceeding with the cancellation.
To cancel Event Streams, follow the steps below.
- Click the All Services > Data Analytics > Event Streams menu. Go to the Service Home page of Event Streams.
- Click the Event Streams menu on the Service Home page. Navigate to the Event Streams List page.
- On the Event Streams list page, select the resource to cancel, and click the Cancel Service button.
- Once termination is complete, check on the Event Streams list page that the resource has been terminated.
3 - API Reference
4 - CLI Reference
5 - Release Note
Event Streams
- Terraform support is provided.
- HDD, HDD_KMS disk types are also provided.
- An Event Streams service that easily creates and manages Apache Kafka clusters in a web environment has been released.