Using AIOS
AIOS provides an environment in which an LLM is available by default from each resource created with the Virtual Server, GPU Server, Cloud Functions, and Kubernetes Engine services.
For detailed information on creating each service, refer to the table below.
| Service | Guide |
|---|---|
| Virtual Server | Virtual Server Create |
| GPU Server | Create GPU Server |
| Cloud Functions | Cloud Functions Create |
| Kubernetes Engine | Create Cluster |
Using LLM
The LLM can be used through the LLM Endpoint from service resources such as Virtual Server, GPU Server, Cloud Functions, and Kubernetes Engine created on Samsung Cloud Platform. The LLM Endpoint can be found in the usage guide for the LLM Endpoint on each service's details page.
Check the LLM Endpoint of Virtual Server
You can check the usage guide for the LLM Endpoint on the Virtual Server Details page of the created Virtual Server.
To check the usage guide for the LLM Endpoint, follow the steps below.
- Click the All Services > Compute > Virtual Server menu. You will be taken to the Virtual Server Service Home page.
- Click the Virtual Server menu on the Service Home page. You will be taken to the Virtual Server list page.
- On the Virtual Server list page, click the resource to connect to the LLM Endpoint. You will be taken to the Virtual Server Details page.
- On the Virtual Server Details page, click the User Guide link of the LLM Endpoint item. The LLM User Guide popup window opens.
Check the LLM Endpoint of a GPU Server
You can check the usage guide for the LLM Endpoint on the GPU Server Details page of the created GPU Server.
To view the usage guide for the LLM Endpoint, follow the steps below.
- Click the All Services > Compute > GPU Server menu. You will be taken to the GPU Server Service Home page.
- Click the GPU Server menu on the Service Home page. You will be taken to the GPU Server list page.
- On the GPU Server list page, click the resource to connect to the LLM Endpoint. You will be taken to the GPU Server Details page.
- On the GPU Server Details page, click the User Guide link of the LLM Endpoint item. The LLM User Guide popup window opens.
Check the LLM Endpoint of Cloud Functions
You can view the usage guide for the LLM Endpoint on the Cloud Functions Details page of the created Cloud Functions.
To view the usage guide for the LLM Endpoint, follow the steps below.
- Click the All Services > Compute > Cloud Functions menu. You will be taken to the Cloud Functions Service Home page.
- Click the Functions menu on the Service Home page. You will be taken to the Functions list page.
- On the Functions list page, click the resource to connect to the LLM Endpoint. You will be taken to the Functions details page.
- Click the User Guide link of the LLM Endpoint item on the Functions Details page. It will open the LLM User Guide popup.
Check the LLM Endpoint of the Kubernetes Engine cluster
You can check the usage guide for the LLM Endpoint on the Cluster Details page of the created Kubernetes Engine cluster.
To view the usage guide for the LLM Endpoint, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click the Cluster menu from the Service Home page. Go to the Cluster List page.
- Click the resource to connect to the LLM Endpoint on the Cluster List page. You will be taken to the Cluster Details page.
- On the Cluster Details page, click the User Guide link of the LLM Endpoint item. It will open the LLM User Guide popup.
LLM Usage Guide
In the usage guide for the LLM Endpoint, you can check the AIOS LLM Private Endpoint, the provided models, and sample code examples.
AIOS LLM Private Endpoint
The URL of the AIOS LLM Private Endpoint is displayed. Check this URL to use the LLM from resources created with the Virtual Server, GPU Server, Cloud Functions, and Kubernetes Engine services.
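As an illustration, the endpoint URL could be kept in an environment variable on the resource and joined with an API path before calling the LLM. The variable name and the `v1/completions` path below are assumptions for the sketch, not values defined by AIOS; use the URL and path shown in your usage guide popup.

```python
import os

def llm_url(endpoint: str, api_path: str) -> str:
    """Join the private endpoint URL and an API path without doubling slashes."""
    return endpoint.rstrip("/") + "/" + api_path.lstrip("/")

# Hypothetical variable; set it to the URL shown in the LLM usage guide popup.
endpoint = os.environ.get("AIOS_LLM_ENDPOINT", "http://example.invalid")

# "v1/completions" is a placeholder path; use the API path from the usage guide.
print(llm_url(endpoint, "v1/completions"))
```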
AIOS LLM Provided Model
The models provided by AIOS LLM are as follows.
| Model Name | Model ID | Context Size | RPM (Request per minute) | TPM (Token per minute) | Purpose | License | Discontinuation Date |
|---|---|---|---|---|---|---|---|
| gpt-oss-120b | openai/gpt-oss-120b | 131,072 | 50 RPM | 200K | Research, experimentation, advanced language understanding | Apache 2.0 | No plans |
| Qwen3-Coder-30B-A3B-Instruct | Qwen/Qwen3-Coder-30B-A3B-Instruct | 65,536 | 20 RPM | 30K | Code generation, analysis, debugging support | Apache 2.0 | No plans |
| Qwen3-30B-A3B-Thinking-2507 | Qwen/Qwen3-30B-A3B-Thinking-2507 | 32,768 | 10 RPM | 30K | Deep reasoning, long text analysis, essay writing | Apache 2.0 | No plans |
| Llama-4-Scout | meta-llama/Llama-4-Scout | 32,768 | 20 RPM | 35K | Latest Llama model with multimodal capability | llama4 | No plans |
| Llama-Guard-4-12B | meta-llama/Llama-Guard-4-12B | 32,768 | 20 RPM | 200K | Core security and moderation model for improving reliability and safety in large language model and multimodal AI services | llama4 | No plans |
| bge-m3 | sds/bge-m3 | 8,192 | 100 RPM | 200K | Multilingual embedding model | Samsung SDS | No plans |
| bge-reranker-v2-m3 | sds/bge-reranker-v2-m3 | 8,192 | 100 RPM | 200K | Lightweight multilingual reranker with fast computation and high performance | Samsung SDS | No plans |
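The RPM and TPM columns are per-model request and token limits, so callers may want to pace requests on the client side. Below is a minimal sliding-window sketch for the RPM side; the approach and class are illustrative assumptions, not an AIOS mechanism.

```python
import time
from collections import deque

class RpmThrottle:
    """Client-side sliding-window limiter for requests per minute."""

    def __init__(self, rpm):
        self.rpm = rpm
        self.calls = deque()  # send times of recent requests

    def acquire(self, now=None):
        """Record one call and return how many seconds to wait before sending it."""
        if now is None:
            now = time.monotonic()
        # Drop send times that fall outside the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        wait = 0.0
        if len(self.calls) >= self.rpm:
            wait = 60 - (now - self.calls[0])
        self.calls.append(now + wait)
        return wait

# gpt-oss-120b allows 50 RPM according to the table above.
throttle = RpmThrottle(rpm=50)
```

Calling `acquire()` before each request and sleeping for the returned number of seconds keeps the client under the model's per-minute limit.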
Sample code
Refer to the following for AIOS LLM sample code examples.
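The curl sample in this section can also be issued from Python using only the standard library. A minimal sketch; the endpoint URL and API path are the same placeholders the usage guide asks you to fill in.

```python
import json
import urllib.request

def build_request(endpoint, api_path):
    """Build the same completion request as the curl sample."""
    payload = {
        "model": "openai/gpt-oss-120b",
        "prompt": "Write a haiku about recursion in programming.",
        "temperature": 0,
        "max_tokens": 100,
        "stream": False,
    }
    return urllib.request.Request(
        endpoint.rstrip("/") + "/" + api_path.lstrip("/"),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Fill in both placeholders from the LLM usage guide popup, then send with
# urllib.request.urlopen(build_request(endpoint, api_path)).
```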
```shell
curl -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-oss-120b",
        "prompt": "Write a haiku about recursion in programming.",
        "temperature": 0,
        "max_tokens": 100,
        "stream": false
      }' \
  {AIOS LLM private endpoint}/{API}
```

Check usage per LLM model
You can view the list of LLMs and the token usage per model on the AIOS Service Home page.
- Click the All Services > AI-ML > AIOS menu. You will be taken to the AIOS Service Home page.
- In the LLM usage by model list, check each LLM's model name, model type, and token usage (1 week).
| Category | Detailed description |
|---|---|
| Model Name | LLM name. Click the name to go to the model's Report page |
| Model Type | LLM type: chat, reasoning, vision, moderation, embedding, rerank. For model-specific information, see Provided Models |
| Token usage (1 Week) | Token usage for one week as of today |

Table. AIOS LLM list items
Check the Report
You can check the daily LLM call counts and token usage on the AIOS Report page.
You can select Virtual Server, GPU Server, or Kubernetes Engine as the service type, query by selecting a resource name from the resources actually created in that service, and also query by the LLM model used.
- Click the All Services > AI-ML > AIOS menu. You will be taken to the AIOS Service Home page.
- Click the Report menu on the Service Home page. You will be taken to the AIOS Report page.
- In the LLM usage by model list, click an LLM model name to go directly to that LLM's Report page.
- On the Report page, select the LLM model to view and click the Query button. The Report information for that LLM model is displayed.
| Category | Detailed description |
|---|---|
| Service Type | Select the type of service using the LLM: Virtual Server, GPU Server, Kubernetes Engine |
| Resource Name | Select the resource name. If no service type is selected, only All can be selected; if a specific service type is selected, a specific resource name can be selected |
| Model | Select the LLM model. For model-specific information, see Provided Models |
| Query Period | Select the period to view the Report. Selectable in weekly units; past periods can be queried up to 3 months back; data is provided up to 30 minutes before the current time |
| Call Count | Daily call count during the query period, displayed per day as total, success, and failure counts. Total call count: the total number of calls during the period, per model |
| Token usage | Daily input and output token amounts during the query period. Total tokens: total token usage during the query period. Average tokens per request: average number of tokens used per LLM call during the query period |

Table. AIOS Report items
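For reference, the Call Count and Token usage figures in the table are simple aggregates over the calls in the query period. The per-call record layout below is an assumption for illustration only; the Report computes these values for you.

```python
# Hypothetical per-call records: (success, input_tokens, output_tokens).
records = [
    (True, 120, 80),
    (True, 200, 150),
    (False, 90, 0),
]

total_calls = len(records)                            # total call count
success_count = sum(1 for ok, _, _ in records if ok)  # success count
failure_count = total_calls - success_count           # failure count
total_tokens = sum(i + o for _, i, o in records)      # total token usage
avg_tokens_per_request = total_tokens / total_calls   # average tokens per request
```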