Using ServiceWatch Agent
Users can install ServiceWatch Agent on Virtual Server/GPU Server/Bare Metal Server, etc. to collect custom metrics and logs.
ServiceWatch Agent
The agents that need to be installed on a server for ServiceWatch custom metric and log collection can be largely divided into two types: Prometheus Exporter and Open Telemetry Collector.
| Category | Description |
|---|---|
| Prometheus Exporter | Provides metrics of a specific application or service in a format that Prometheus can scrape |
| Open Telemetry Collector | Acts as a centralized collector that collects telemetry data such as metrics and logs from distributed systems, processes them (filtering, sampling, etc.), and sends them to multiple backends (e.g., Prometheus, Jaeger, Elasticsearch) |
To link server log files to ServiceWatch through ServiceWatch Agent, you must first create a log group and log streams within the log group.
- For more information about creating log groups and log streams, see Logs.
Pre-environment Configuration for ServiceWatch Agent
You must add Security Group and Firewall rules for communication between ServiceWatch Agent and ServiceWatch.
Adding Security Group Rules
- To send data collected by ServiceWatch Agent installed on a Virtual Server/GPU Server to ServiceWatch, you must add rules to the Security Group as follows.

| Direction | Type | Port | Destination Address |
|---|---|---|---|
| Outbound | Custom TCP | 443 | ServiceWatch OpenAPI Endpoint IP Address |

Table. Security Group Rules for ServiceWatch Agent Communication
Adding Firewall Rules
- If the firewall is enabled on the Internet Gateway of the VPC, you must add Firewall rules as follows.

| Direction | Type | Port | Action | Source Address | Destination Address |
|---|---|---|---|---|---|
| Outbound | TCP | 443 | Allow | Private IP address assigned when creating the Virtual Server (it can be checked in Checking Virtual Server Details) | ServiceWatch OpenAPI Endpoint IP Address |

Table. Internet Gateway Firewall Rules for ServiceWatch Agent Communication
ServiceWatch OpenAPI Endpoint IP Address
The Endpoint IP address required for ServiceWatch Agent to send collected data to ServiceWatch is as follows.
| Offering | Region | URL | IP Address |
|---|---|---|---|
| For Enterprise | kr-west1 | https://servicewatch.kr-west1.e.samsungsdscloud.com | 112.107.105.24 |
| For Enterprise | kr-east1 | https://servicewatch.kr-east1.e.samsungsdscloud.com | 112.107.105.68 |
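The endpoint URL follows a fixed pattern per region and offering. As a quick illustration (assuming, based on the table above, that the offering code for Enterprise is `e`), the URL can be composed as:

```shell
# Compose the ServiceWatch OpenAPI endpoint URL from region and offering.
# REGION and OFFERING values here come from the "For Enterprise / kr-west1" row above.
REGION="kr-west1"
OFFERING="e"   # assumption: "e" denotes the Enterprise offering, as in the table URLs
ENDPOINT="https://servicewatch.${REGION}.${OFFERING}.samsungsdscloud.com"
echo "$ENDPOINT"
```

Compare the composed URL against the table before adding Security Group or Firewall rules for it.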
Configuring Open Telemetry Collector for ServiceWatch
To use Open Telemetry Collector for ServiceWatch metric and log collection on a server, install it in the following order.
Download the Agent file for ServiceWatch from the provided download URL.

Guide: The file download link for ServiceWatch Agent installation is provided through Samsung Cloud Platform Console announcements and Support Center > Contact Us.

```shell
wget [ServiceWatch Agent File Download URL]
```
Code Block. ServiceWatch Agent Installation File Download Command
- Extract the Agent file for ServiceWatch.

```shell
unzip ServiceWatch_Agent.zip
```
Code Block. Extracting ServiceWatch Agent File

- If the environment using ServiceWatch Agent is Linux OS, you must grant execution permissions as follows.

```shell
chmod +x agent/otelcontribcol_linux_amd64
chmod +x agent/servicewatch-agent-manager-linux-amd64
```
Code Block. Granting Execution Permissions to ServiceWatch Agent Files

The extracted Open Telemetry Collector Agent files for ServiceWatch are as follows.

| Category | Description |
|---|---|
| examples | Example configuration file folder. Each subfolder contains agent.json, log.json, and metric.json example files.<br>- os-metrics-min-examples: minimum metric setting example using Node Exporter<br>- os-metrics-all-examples: metric setting example using the Node Exporter memory/filesystem collectors<br>- gpu-metrics-min-examples: minimum metric setting example using DCGM Exporter<br>- gpu-metrics-all-examples: key metric setting example using DCGM Exporter |
| otelcontribcol_linux_amd64 | Open Telemetry Collector for Linux for ServiceWatch |
| otelcontribcol_windows_amd64.exe | Open Telemetry Collector for Windows for ServiceWatch |
| servicewatch-agent-manager-linux-amd64 | ServiceWatch Agent Manager for Linux |
| servicewatch-agent-manager-windows-amd64.exe | ServiceWatch Agent Manager for Windows |

Table. ServiceWatch Agent File Configuration
Define the Agent configuration file (agent.json) used by ServiceWatch Agent Manager for the Open Telemetry Collector for ServiceWatch.
| Category | Description |
|---|---|
| namespace | Custom namespace for custom metrics. A namespace is a logical division used to classify and group metrics, and is specified here to classify custom metrics. It must be 3-128 characters consisting of English letters, numbers, spaces, and the special characters `_`, `-`, `/`, and must start with an English letter. |
| accessKey | IAM authentication key Access Key |
| accessSecret | IAM authentication key Secret Key |
| resourceId | Resource ID of the server in Samsung Cloud Platform (e.g., Resource ID of a Virtual Server) |
| openApiEndpoint | ServiceWatch OpenAPI Endpoint by region/offering (e.g., https://servicewatch.[region].[offering].samsungsdscloud.com). The region and offering can be checked from the Samsung Cloud Platform Console access URL. |
| telemetryPort | Telemetry Port of ServiceWatch Agent. Usually 8888; if port 8888 is in use, it needs to be changed. |

Table. agent.json Configuration File Items

```
{
  "namespace": "swagent-windows",   # Custom namespace for custom metrics
  "accessKey": "testKey",           # IAM authentication key Access Key
  "accessSecret": "testSecret",     # IAM authentication key Secret Key
  "resourceId": "resourceID",       # Resource ID of the server in Samsung Cloud Platform
  "openApiEndpoint": "https://servicewatch.kr-west1.e.samsungsdscloud.com",   # ServiceWatch OpenAPI Endpoint by region/offering
  "telemetryPort": 8889             # Telemetry Port of ServiceWatch Agent (usually 8888; change if 8888 is in use)
}
```
Code Block. agent.json Configuration Example

Define the Metric configuration file for metric collection for ServiceWatch.
- If you want to collect metrics through the Agent, configure metric.json.

| Category | Description |
|---|---|
| prometheus > scrape_configs > targets | Endpoint of the metric collection target. Since the Prometheus Exporter is installed on the same server, set it to that endpoint (e.g., localhost:9100). |
| prometheus > scrape_configs > jobName | Job Name setting, usually the Prometheus Exporter type used when collecting metrics (e.g., node-exporter) |
| metricMetas > metricName | Name of the metric you want to collect. It must be 3-128 characters consisting of English letters, numbers, and the special character `_`, and must start with an English letter (e.g., node_cpu_seconds_total). |
| metricMetas > dimensions | Labels to visualize and display in the Console, chosen from the labels the Exporter provides to identify the source of its metric data. Collected metrics are displayed combined according to this setting. For metrics that provide no distinguishing labels, such as the Node Exporter memory collector, set it to resource_id; for Node Exporter filesystem collector metrics, mountpoint (the path where the filesystem is mounted) can be used. |
| metricMetas > unit | Unit of the metric (e.g., Bytes, Count) |
| metricMetas > aggregationMethod | Method of aggregating based on the specified dimensions. Select from SUM, MAX, MIN, COUNT. |
| metricMetas > descriptionKo | Korean description of the metric being collected |
| metricMetas > descriptionEn | English description of the metric being collected |

Table. metric.json Configuration File Items

```
{
  "prometheus": {
    "scrape_configs": {
      "targets": [
        "localhost:9100"            # Endpoint of the Prometheus Exporter installed on the server
      ],
      "jobName": "node-exporter"    # Usually the name of the installed Exporter
    }
  },
  "metricMetas": [
    {
      "metricName": "node_memory_MemTotal_bytes",   # Metric to link to ServiceWatch among those collected from the Prometheus Exporter
      "dimensions": [
        [
          "resource_id"             # Memory metrics provide no distinguishing labels, so use resource_id
        ]
      ],
      "unit": "Bytes",              # Unit of the collected metric data
      "aggregationMethod": "SUM",   # Aggregation method
      "descriptionKo": "Total physical memory size of the server",   # Korean description of the metric
      "descriptionEn": "node memory total bytes"                     # English description of the metric
    },
    {
      "metricName": "node_filesystem_size_bytes",
      "dimensions": [
        [
          "mountpoint"              # For filesystem-related metrics, use mountpoint (the path where the filesystem is mounted)
        ]
      ],
      "unit": "Bytes",
      "aggregationMethod": "SUM",
      "descriptionKo": "node filesystem size bytes",
      "descriptionEn": "node filesystem size bytes"
    },
    {
      "metricName": "node_memory_MemAvailable_bytes",
      "dimensions": [ [ "resource_id" ] ],
      "unit": "Bytes",
      "aggregationMethod": "SUM",
      "descriptionKo": "node memory available bytes",
      "descriptionEn": "node memory available bytes"
    },
    {
      "metricName": "node_filesystem_avail_bytes",
      "dimensions": [ [ "mountpoint" ] ],
      "unit": "Bytes",
      "aggregationMethod": "SUM",
      "descriptionKo": "node filesystem available bytes",
      "descriptionEn": "node filesystem available bytes"
    }
  ]
}
```
Code Block. metric.json Configuration Example

- To display the resource name, set resource_name in commonLabels as follows and also add resource_name to metricMetas > dimensions, so you can check the resource name together when viewing metrics in ServiceWatch.

```
...
"commonLabels": {
  "resource_name": "ResourceName"   # Resource name that can be checked in the User Console
},
"metricMetas": [
  {
    "metricName": "metric_name",
    "dimensions": [
      [
        "resource_id",
        "resource_name"             # Add the resource_name set in commonLabels to each metric's dimensions
      ]
    ],
    "unit": "Bytes",
    "aggregationMethod": "SUM",
    "descriptionKo": "metric_name description",
    "descriptionEn": "metric_name description"
  },
  ...
]
...
```
Code Block. metric.json - Resource Name Setting
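A malformed metric.json will keep the Agent from sending metrics, so it can help to sanity-check the file before starting the Agent. A minimal sketch (the sample file and the /tmp path are illustrative, not part of the product; python3 is assumed to be available on the server):

```shell
# Write a minimal sample metric.json mirroring the example above.
cat > /tmp/metric.json <<'EOF'
{
  "prometheus": {
    "scrape_configs": { "targets": ["localhost:9100"], "jobName": "node-exporter" }
  },
  "metricMetas": [
    { "metricName": "node_memory_MemTotal_bytes", "dimensions": [["resource_id"]],
      "unit": "Bytes", "aggregationMethod": "SUM",
      "descriptionKo": "total memory", "descriptionEn": "total memory" }
  ]
}
EOF

# Fail fast on malformed JSON.
python3 -m json.tool /tmp/metric.json > /dev/null || { echo "invalid JSON"; exit 1; }

# List the metric names that will be sent to ServiceWatch.
python3 -c 'import json; [print(m["metricName"]) for m in json.load(open("/tmp/metric.json"))["metricMetas"]]'
```

Run the same checks against your real metric.json path before starting the Agent.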
Define the Log configuration file for log collection for ServiceWatch.

- If you want to collect logs, you must configure log.json.

| Category | Description |
|---|---|
| fileLog > include | Location of the log files to collect |
| fileLog > operators | Defines how to parse the log messages to collect |
| fileLog > operators > regex | Regular expression describing the log message format |
| fileLog > operators > timestamp | Timestamp format of the log message to be sent to ServiceWatch |
| logMetas > log_group_value | Name of the log group created to send logs to ServiceWatch |
| logMetas > log_stream_value | Name of the log stream in the ServiceWatch log group |

Table. log.json Configuration File Items

```
{
  "fileLog": {
    "include": [
      "/var/log/syslog",        # Log files to collect in ServiceWatch
      "/var/log/auth.log"
    ],
    "operators": {
      "regex": "^(?P<timestamp>\\S+)\\s+(?P<hostname>\\S+)\\s+(?P<process>[^:]+):\\s+(?P<message>.*)$",   # Log file format as a regular expression
      "timestamp": {            # Timestamp format of the log message
        "layout_type": "gotime",
        "layout": "2006-01-02T15:04:05.000000Z07:00"
      }
    }
  },
  "logMetas": {
    "log_group_value": "custom-log-group",     # ServiceWatch log group name created in advance
    "log_stream_value": "custom-log-stream"    # Log stream name in the ServiceWatch log group created in advance
  }
}
```
Code Block. log.json Configuration Example

Note: To link server log files to ServiceWatch through ServiceWatch Agent, you must first create a log group and log streams within the log group.
- For more information about creating log groups and log streams, see Logs.
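Log lines that do not match fileLog > operators > regex cannot be parsed as intended, so it is worth testing the regular expression against a real log line before starting the Agent. A small sketch using the example regex above (the sample syslog line is illustrative; python3 is assumed to be available on the server):

```shell
# A sample syslog-style line and the regex from the log.json example above
# (unescaped, since it is no longer inside a JSON string).
LINE='2024-05-02T09:15:30.123456+09:00 myhost sshd[1234]: Accepted publickey for user'
REGEX='^(?P<timestamp>\S+)\s+(?P<hostname>\S+)\s+(?P<process>[^:]+):\s+(?P<message>.*)$'

# Match the line and print each named capture group.
python3 - "$LINE" "$REGEX" <<'EOF'
import re, sys
line, pattern = sys.argv[1], sys.argv[2]
m = re.match(pattern, line)
if not m:
    raise SystemExit("regex did not match the sample line")
for k, v in m.groupdict().items():
    print(f"{k}={v}")
EOF
```

If the regex does not match, adjust it here first, then copy it back into log.json with the backslashes doubled for JSON escaping.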
Running Open Telemetry Collector for ServiceWatch
| Execution Option | Description |
|---|---|
| -action | Action setting (run or stop) |
| -dir | Location of ServiceWatch Agent configuration files such as agent.json, metric.json, log.json |
| -collector | Location of the Open Telemetry Collector executable |
Running ServiceWatch Agent (for Linux)
Assuming the agent.json, metric.json, and log.json files are in current_location/agent/examples/os-metrics-min-examples and the otelcontribcol_linux_amd64 file is in current_location/agent, execute as follows.

Run ServiceWatch Agent.

- Check the locations of the agent.json, metric.json, and log.json files and of the servicewatch-agent-manager-linux-amd64 and otelcontribcol_linux_amd64 files, then start ServiceWatch Agent.

```shell
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
```
Code Block. Starting ServiceWatch Agent - Collecting Both Metrics and Logs

- If you want to collect only metrics, rename the log.json file or move it so it is not in the same directory as agent.json and metric.json, and execute as follows.

```shell
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
```
Code Block. Starting ServiceWatch Agent - Collecting Only Metrics

- If you want to collect only logs, rename the metric.json file or move it so it is not in the same directory as agent.json and log.json, and execute as follows.

```shell
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
```
Code Block. Starting ServiceWatch Agent - Collecting Only Logs
Stop ServiceWatch Agent.
```shell
./agent/servicewatch-agent-manager-linux-amd64 -action stop -dir ./agent/examples/os-metrics-min-examples
```
Code Block. Stopping ServiceWatch Agent
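If the Agent should keep running across reboots, one option is to wrap the start and stop commands above in a service manager. The unit below is an illustrative sketch only: the /opt/agent paths are assumptions (adjust them to your install location), and Type= may need changing depending on whether the Agent Manager stays in the foreground.

```ini
# /etc/systemd/system/servicewatch-agent.service -- illustrative sketch, not a supplied file
[Unit]
Description=ServiceWatch Agent (Open Telemetry Collector for ServiceWatch)
After=network-online.target

[Service]
# Assumed install location: /opt/agent; adjust -dir to your configuration folder.
Type=simple
ExecStart=/opt/agent/servicewatch-agent-manager-linux-amd64 -action run -dir /opt/agent/examples/os-metrics-min-examples -collector /opt/agent/otelcontribcol_linux_amd64
ExecStop=/opt/agent/servicewatch-agent-manager-linux-amd64 -action stop -dir /opt/agent/examples/os-metrics-min-examples
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, enable it with `systemctl daemon-reload && systemctl enable --now servicewatch-agent`.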
Running ServiceWatch Agent (for Windows)
Run ServiceWatch Agent.
```
servicewatch-agent-manager-windows-amd64.exe -action run -dir ./examples -collector otelcontribcol_windows_amd64.exe
```
Code Block. Starting ServiceWatch Agent

Stop ServiceWatch Agent.

```
servicewatch-agent-manager-windows-amd64.exe -action stop -dir ./examples
```
Code Block. Stopping ServiceWatch Agent