Using the ServiceWatch Agent
Users can install the ServiceWatch Agent on a Virtual Server, GPU Server, Bare Metal Server, etc., to collect custom metrics and logs.
ServiceWatch Agent
The agents that need to be installed on the server to collect ServiceWatch's custom metrics and logs fall broadly into two types: the Prometheus Exporter and the Open Telemetry Collector.
| Category | Detailed description |
|---|---|
| Prometheus Exporter | Provides metrics of a specific application or service in a format that Prometheus can scrape |
| Open Telemetry Collector | Acts as a centralized collector that gathers telemetry data such as metrics and logs from distributed systems, processes it (filtering, sampling, etc.), and exports it to various backends (e.g., Prometheus, Jaeger, Elasticsearch) |
To link server log files to ServiceWatch via the ServiceWatch Agent, you must first create a log group and a log stream within the log group.
- For detailed information on creating log groups and log streams, please refer to Log.
Open Telemetry Collector preset for ServiceWatch
Follow the steps below to install the Open Telemetry Collector used to collect ServiceWatch metrics and logs on the server.
- Download the Agent file from the ServiceWatch Agent file download URL.

```
wget [ServiceWatch Agent file download URL]
```

Code block. Download Agent file for ServiceWatch
- Extract the Agent file for ServiceWatch.

```
unzip ServiceWatch_Agent.zip
```

Code block. Decompress Agent file for ServiceWatch

- If the environment using the ServiceWatch Agent is a Linux OS, you must grant execution permission as below.

```
chmod +x agent/otelcontribcol_linux_amd64
chmod +x agent/servicewatch-agent-manager-linux-amd64
```

Code block. Grant execution permission to Agent file for ServiceWatch

- The extracted Agent files for ServiceWatch are as follows.

| Category | Detailed description |
|---|---|
| examples | Example configuration file folder. Each folder contains the example files agent.json, log.json, and metric.json<br>- os-metrics-min-examples: example of minimal metric configuration using Node Exporter<br>- os-metrics-all-examples: example of memory/filesystem Collector metric configuration using Node Exporter<br>- gpu-metrics-min-examples: example of minimal metric configuration using DCGM Exporter<br>- gpu-metrics-all-examples: example of major metric configuration using DCGM Exporter |
| otelcontribcol_linux_amd64 | Open Telemetry Collector for Linux for ServiceWatch |
| otelcontribcol_windows_amd64.exe | Open Telemetry Collector for Windows for ServiceWatch |
| servicewatch-agent-manager-linux-amd64 | ServiceWatch Agent Manager for Linux |
| servicewatch-agent-manager-windows-amd64.exe | ServiceWatch Agent Manager for Windows |

Table. Agent file configuration for ServiceWatch
Define the ServiceWatch Agent Manager's Agent configuration file (agent.json) for the Open Telemetry Collector for ServiceWatch.
| Category | Detailed description |
|---|---|
| namespace | Custom namespace for custom metrics<br>- A namespace is a logical separation used to distinguish and group metrics; custom metrics are given a custom namespace to differentiate them<br>- The namespace must be 3 to 128 characters long, consisting of letters, numbers, spaces, and special characters (_-/), and must start with a letter |
| accessKey | IAM authentication key Access Key |
| accessSecret | IAM authentication key Secret Key |
| resourceId | Resource ID of the server on Samsung Cloud Platform<br>- Example: Resource ID of a Virtual Server |
| openApiEndpoint | ServiceWatch OpenAPI Endpoint by region/offering<br>- Example: https://servicewatch.region.offering.samsungsdscloud.com<br>- The region and offering information can be found in the Samsung Cloud Platform Console access URL |
| telemetryPort | ServiceWatch Agent's telemetry port<br>- Port 8888 is usually used; change it if port 8888 is already in use |

Table. agent.json configuration file items

```
{
  "namespace": "swagent-windows",        # Custom namespace for custom metrics
  "accessKey": "testKey",                # IAM authentication key Access Key
  "accessSecret": "testSecret",          # IAM authentication key Secret Key
  "resourceId": "resourceID",            # Resource ID of the server on Samsung Cloud Platform
  "openApiEndpoint": "https://servicewatch.kr-west1.e.samsungsdscloud.com",  # ServiceWatch OpenAPI Endpoint by region/offering
  "telemetryPort": 8889                  # ServiceWatch Agent's telemetry port (8888 by default; change if 8888 is in use)
}
```

Code block. agent.json configuration example

Define the metric configuration file for collecting metrics for ServiceWatch.
- If you want to collect metrics through the Agent, set metric.json.
| Category | Detailed description |
|---|---|
| prometheus > scrape_configs > targets | Endpoint of the metric collection target<br>- For a server, the Prometheus Exporter is installed on the same server, so set its endpoint<br>- Example: localhost:9100 |
| prometheus > scrape_configs > jobName | Job name setting; usually the type of Prometheus Exporter used for metric collection<br>- Example: node-exporter |
| metricMetas > metricName | Name of the metric to be collected. The metric name must be 3 to 128 characters long, consisting of English letters, numbers, and the special character (_), and must start with an English letter<br>- Example: node_cpu_seconds_total |
| metricMetas > dimensions | The label, among the labels the Collector provides to identify the source of the Exporter's metric data, used for visualization on the Console. When the collected metrics are visualized on the Console, they are displayed in combinations according to the dimensions setting<br>- Example: if no special label is provided, as with the metrics of Node Exporter's Memory Collector, set it to resource_id<br>- Example: for the metrics of Node Exporter's Filesystem Collector, the dimension can be set to mountpoint, the path where the filesystem is mounted on the system |
| metricMetas > unit | Metric unit<br>- Example: Bytes, Count, etc. |
| metricMetas > aggregationMethod | Method of aggregating based on the specified dimension(s)<br>- Choose among SUM, MAX, MIN, COUNT |
| metricMetas > descriptionKo | Korean description of the metric being collected |
| metricMetas > descriptionEn | English description of the metric being collected |

Table. metric.json configuration file items

```
{
  "prometheus": {
    "scrape_configs": {
      "targets": [
        "localhost:9100"          # Endpoint of the Prometheus Exporter installed on the server
      ],
      "jobName": "node-exporter"  # Usually set to the name of the installed Exporter
    }
  },
  "metricMetas": [
    {
      "metricName": "node_memory_MemTotal_bytes",  # Metric name to be linked with ServiceWatch among the metrics collected from the Prometheus Exporter
      "dimensions": [
        [
          "resource_id"           # Label used for Console visualization among the labels the Collector provides to identify the source of Node Exporter's metric data.
                                  # If no special label is provided, as with this Memory-related metric, set it to resource_id
        ]
      ],
      "unit": "bytes",            # Unit of the collected metric data
      "aggregationMethod": "SUM", # Aggregation method
      "descriptionKo": "Total physical memory size of the server",  # Korean description of the metric
      "descriptionEn": "node memory total bytes"                    # English description of the metric
    },
    {
      "metricName": "node_filesystem_size_bytes",
      "dimensions": [
        [
          "mountpoint"            # For Filesystem-related metrics, set the dimension to mountpoint, the path where the filesystem is mounted on the system
        ]
      ],
      "unit": "bytes",
      "aggregationMethod": "SUM",
      "descriptionKo": "node filesystem size bytes",
      "descriptionEn": "node filesystem size bytes"
    },
    {
      "metricName": "node_memory_MemAvailable_bytes",
      "dimensions": [
        [
          "resource_id"
        ]
      ],
      "unit": "bytes",
      "aggregationMethod": "SUM",
      "descriptionKo": "node memory available bytes",
      "descriptionEn": "node memory available bytes"
    },
    {
      "metricName": "node_filesystem_avail_bytes",
      "dimensions": [
        [
          "mountpoint"
        ]
      ],
      "unit": "bytes",
      "aggregationMethod": "SUM",
      "descriptionKo": "node filesystem available bytes",
      "descriptionEn": "node filesystem available bytes"
    }
  ]
}
```

Code block. metric.json configuration example
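The metricName rule in the table above (3 to 128 characters, English letters, numbers, and underscore, starting with an English letter) can be expressed as a simple regular-expression check. This is an illustrative sketch, not part of the Agent; the function name is hypothetical.

```shell
# Check candidate metric names against the naming rule described above:
# 3-128 characters, [A-Za-z0-9_] only, first character must be a letter.
check_metric_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9_]{2,127}$'
}

check_metric_name "node_cpu_seconds_total" && echo "node_cpu_seconds_total: valid"
check_metric_name "1_bad_name" || echo "1_bad_name: invalid (must start with a letter)"
```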
Define the Log configuration file for log collection for ServiceWatch.
- If you want to collect logs, you need to configure log.json.
| Category | Detailed description |
|---|---|
| fileLog > include | Location of the log files to collect |
| fileLog > operators | Definition for parsing the log messages to be collected |
| fileLog > operators > regex | Regular expression describing the log message format |
| fileLog > operators > timestamp | Format of the timestamp of the log messages to be sent to ServiceWatch |
| logMetas > log_group_value | Name of the log group created to send logs to ServiceWatch |
| logMetas > log_stream_value | Name of the log stream within the ServiceWatch log group |

Table. log.json configuration file items

```
{
  "fileLog": {
    "include": [
      "/var/log/syslog",     # Log files to be collected by ServiceWatch
      "/var/log/auth.log"
    ],
    "operators": {
      "regex": "^(?P<timestamp>\\S+)\\s+(?P<hostname>\\S+)\\s+(?P<process>[^:]+):\\s+(?P<message>.*)$",  # Regular expression describing the log file format
      "timestamp": {         # Format of the log message's timestamp
        "layout_type": "gotime",
        "layout": "2006-01-02T15:04:05.000000Z07:00"
      }
    }
  },
  "logMetas": {
    "log_group_value": "custom-log-group",    # Name of the pre-created ServiceWatch log group
    "log_stream_value": "custom-log-stream"   # Name of the log stream within the pre-created ServiceWatch log group
  }
}
```

Code block. log.json configuration example
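Before deploying a log.json, it is worth confirming that the regex actually matches your log lines. A minimal sketch, using a made-up syslog-style sample line (requires GNU grep with PCRE support for `-P`):

```shell
# Sample line in the format the regex above expects: timestamp, hostname,
# process, then ": " and the message. The line itself is illustrative.
sample='2024-01-02T15:04:05.000000+09:00 myhost sshd[1234]: Accepted publickey for user'

# Same pattern as in log.json, with the JSON escaping (\\S) reduced to \S.
regex='^(?P<timestamp>\S+)\s+(?P<hostname>\S+)\s+(?P<process>[^:]+):\s+(?P<message>.*)$'

echo "$sample" | grep -Pq "$regex" && echo "regex matches the sample line"
```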
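A note on all three configuration files above: the examples annotate the JSON with `#` comments for readability, but an actual agent.json, metric.json, or log.json must be plain JSON without comments. A minimal sketch that writes a comment-free agent.json (all values are placeholders) and confirms that it parses:

```shell
# Write a comment-free agent.json with placeholder values, then validate it.
mkdir -p /tmp/sw-agent-example
cat > /tmp/sw-agent-example/agent.json <<'EOF'
{
  "namespace": "swagent-example",
  "accessKey": "testKey",
  "accessSecret": "testSecret",
  "resourceId": "resourceID",
  "openApiEndpoint": "https://servicewatch.kr-west1.e.samsungsdscloud.com",
  "telemetryPort": 8888
}
EOF

# json.tool exits nonzero if the file is not valid JSON.
python3 -m json.tool /tmp/sw-agent-example/agent.json > /dev/null && echo "agent.json is valid JSON"
```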
Running Open Telemetry Collector for ServiceWatch
| Execution Options | Detailed Description |
|---|---|
| -action | Action setting (run or stop) |
| -dir | Location of the ServiceWatch Agent configuration files such as agent.json, metric.json, and log.json |
| -collector | Location of the Open Telemetry Collector executable file |
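The three options combine into a single command line. The sketch below only prints the command it would run, so the paths can stay placeholders; the helper function is hypothetical, not part of the Agent.

```shell
# Build (but do not execute) a ServiceWatch Agent Manager command line from the
# three execution options described above.
build_agent_command() {
  manager=$1 action=$2 conf_dir=$3 collector=$4
  echo "$manager -action $action -dir $conf_dir -collector $collector"
}

build_agent_command ./agent/servicewatch-agent-manager-linux-amd64 run \
  ./agent/examples/os-metrics-min-examples ./agent/otelcontribcol_linux_amd64
```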
ServiceWatch Agent Execution (for Linux)
Assuming the agent.json, metric.json, and log.json files are in [current location]/agent/examples/os-metrics-min-examples, and the otelcontribcol_linux_amd64 file is in [current location]/agent, run as follows.

Run the ServiceWatch Agent.
- Check the locations of the agent.json, metric.json, and log.json files and of the servicewatch-agent-manager-linux-amd64 and otelcontribcol_linux_amd64 files, then start the ServiceWatch Agent.

```
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
```

Code block. ServiceWatch Agent start - collect all metrics/logs

- If you only want to collect metrics, rename the log.json file or move it so that it is not in the same directory as agent.json and metric.json, and run as below.

```
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
```

Code block. ServiceWatch Agent start - collect only metrics

- If you only want to collect logs, rename the metric.json file or move it so that it is not in the same directory as agent.json and log.json, and then run as shown below.

```
./agent/servicewatch-agent-manager-linux-amd64 -action run -dir ./agent/examples/os-metrics-min-examples -collector ./agent/otelcontribcol_linux_amd64
```

Code block. ServiceWatch Agent start - collect only logs
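The "collect only metrics" variant above amounts to moving log.json out of the configuration directory before starting the Agent. A sketch using a throwaway directory in place of ./agent/examples/os-metrics-min-examples (the `disabled` subdirectory name is an arbitrary choice):

```shell
# Stand-in for the configuration directory passed via -dir.
conf=$(mktemp -d)
touch "$conf/agent.json" "$conf/metric.json" "$conf/log.json"

# Move log.json aside so the Agent finds only agent.json and metric.json
# and therefore collects only metrics. (For logs-only, move metric.json instead.)
mkdir -p "$conf/disabled"
mv "$conf/log.json" "$conf/disabled/"

ls "$conf"   # agent.json, metric.json, and the disabled/ directory remain
```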
Stop the ServiceWatch Agent.
```
./agent/servicewatch-agent-manager-linux-amd64 -action stop -dir ./agent/examples/os-metrics-min-examples
```

Code block. ServiceWatch Agent stop
ServiceWatch Agent Execution (for Windows)
Run the ServiceWatch Agent.
```
servicewatch-agent-manager-windows-amd64.exe -action run -dir ./examples -collector otelcontribcol_windows_amd64.exe
```

Code block. ServiceWatch Agent start

Stop the ServiceWatch Agent.
```
servicewatch-agent-manager-windows-amd64.exe -action stop -dir ./examples
```

Code block. ServiceWatch Agent stop