
Fine-Tuning

This article walks you through how to fine-tune base models in Hyperstack AI Studio using either the API or the user interface. It explains how to configure training parameters, select and filter training data, adjust advanced LoRA settings, and initiate fine-tuning jobs. You'll also learn how to estimate training duration and monitor job progress.

Fine-Tuning Using the UI

To fine-tune a model using Hyperstack AI Studio, follow the steps below:

  1. Start a New Fine-Tuning Job

    Go to the My Models page and click the Fine-Tune a Model button in the top-right corner.

  2. Configure Basic Settings

    • Model Name – Enter a unique name for your fine-tuned model.
    • Base Model – Select one of the available base models to fine-tune.
  3. Select Training Data

    Choose the source of logs to use for training:

    • All Logs – Use all logs in your account.
    • By Tags – Select logs that have specific tags.
    • By Dataset – Select logs from an existing dataset.
    • Upload Logs – Upload logs just for this training run. You can optionally save them with custom tags, or discard them after the run.

    Ensure your logs meet the required file format as outlined in the JSONL File Format guidelines.
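
    For illustration, a single log in chat format might look like the line below. The field names here are an assumption for illustration only; the JSONL File Format guidelines are authoritative.

    {"messages": [{"role": "user", "content": "Summarize our refund policy."}, {"role": "assistant", "content": "Refunds are available within 30 days of purchase."}]}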

    Minimum Logs Requirement

    You must include at least 10 logs. If you filter by tags or dataset, or upload logs, the resulting set must still contain at least 10 valid logs.

    Context and Sequence Length Limit

    Each training example (including prompt and expected output) must fit within the model’s maximum context length of 8192 tokens.
    Additionally, the maximum sequence length for training is set to 2048 tokens.

  4. Adjust Advanced Settings

    Optionally customize training parameters:

    • Epochs – Full passes through the dataset (e.g., 1–10).
    • Batch Size – Samples per training step (e.g., 1–16).
    • Learning Rate – Step size for weight updates (e.g., 0.00005–0.001).
    • LoRA Rank (r) – Rank of adaptation matrices.
    • LoRA Alpha – Scaling factor for LoRA.
    • LoRA Dropout – Dropout rate for LoRA layers.
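
    To see how these three LoRA settings interact, here is a conceptual sketch in Python. It is illustrative only, with arbitrary dimensions; it is not AI Studio's implementation.

    import numpy as np

    # Conceptual LoRA update: a frozen weight matrix W is adapted by two
    # low-rank matrices A and B, and the update is scaled by alpha / r.
    d_out, d_in = 512, 512
    r, alpha = 32, 16                      # LoRA Rank (r) and LoRA Alpha

    W = np.random.randn(d_out, d_in)       # frozen base weights
    A = np.random.randn(r, d_in) * 0.01    # trainable down-projection
    B = np.zeros((d_out, r))               # trainable up-projection, starts at zero

    W_adapted = W + (alpha / r) * (B @ A)  # larger alpha/r => stronger adaptation
    # LoRA Dropout is applied to the adapter path during training as regularization.
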
    Fine-Tuning Duration

    Training time varies depending on your dataset and model complexity. You can track progress in real time through the UI.

  5. Review Estimates

    You’ll see an estimation summary including:

    • Estimated Time to Train
    • Current Training Progress
    • Recommended Minimum Logs based on your setup
  6. Start Training

    Click Start Training to begin. The following will occur:

    • Validation – Logs are checked against your selected filters; if no filters are applied, all logs are used. Invalid logs are skipped if you selected that option.
    • Training Begins – The system allocates resources and kicks off the job.
    • Monitor Progress – Track training from the UI. You can cancel the job at any time.

    See Training Metrics for more details on tracking and reviewing training runs.


Model Fine-Tuning APIs

Create Fine-Tuning Job API

POST https://console.hyperstack.cloud/ai/api/v1/finetuning/jobs

To create a fine-tuning job for a base model, replace the following variables before running the command:

  • API_KEY: Your API key.
  • model_name: Unique name for your fine-tuned model.
  • base_model: The base model to fine-tune. See the Base Models documentation for details on each.
    Valid values:
    • mistral-small-24b-instruct-2501
    • llama-3.3-70B-instruct
    • llama-3.1-8B-instruct
  • batch_size: Number of training samples per step. Default: 4 (Recommended range: 1–16).
  • epoch: Number of times the dataset is fully traversed during training. Default: 1 (Typical: 1–10).
  • learning_rate: Speed at which the model learns during training. Default: 0.0002 (Typical range: 0.00005–0.001).

To further customize model behavior, see the full list of Optional Parameters.

Minimum Logs Requirement

At least 10 logs are required to start a fine-tuning job. If you apply filters (e.g. tags, models, custom_dataset, or custom_logs_filename), the filtered set must still include at least 10 logs.
If no filters are specified, all uploaded logs will be used by default.
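
If your logs live in a local JSONL file, a quick pre-flight check like the following can confirm the 10-log minimum before you submit a job (a sketch; the filename is illustrative):

import json

# Count the valid JSON lines in a local log file before submitting.
valid = 0
with open("logs.jsonl") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
            valid += 1
        except json.JSONDecodeError:
            continue

print(f"{valid} valid logs found")
assert valid >= 10, "Fine-tuning requires at least 10 valid logs"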

Context and Sequence Length Limit

Each training example (including prompt and expected output) must fit within the model’s maximum context length of 8192 tokens.
Additionally, the maximum sequence length for training is set to 2048 tokens.
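
To check these limits locally before training, you can approximate token counts with a tokenizer for your chosen base model. The sketch below uses a Hugging Face transformers tokenizer as an approximation; the tokenizer choice, log field names, and how AI Studio serializes messages for training are all assumptions here.

import json
from transformers import AutoTokenizer

MAX_SEQ_LEN = 2048  # maximum training sequence length

# Pick the tokenizer matching your chosen base model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

with open("logs.jsonl") as f:
    for i, line in enumerate(f, start=1):
        record = json.loads(line)
        # Assumed serialization: concatenate all message contents.
        text = " ".join(m["content"] for m in record["messages"])
        n_tokens = len(tokenizer.encode(text))
        if n_tokens > MAX_SEQ_LEN:
            print(f"log {i}: {n_tokens} tokens exceeds the {MAX_SEQ_LEN}-token limit")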

Example Request
curl -X POST "https://console.hyperstack.cloud/ai/api/v1/finetuning/jobs" \
  -H "api_key: API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_model": "BASE_MODEL",
    "model_name": "YOUR_NEW_MODEL_NAME",
    "batch_size": 4,
    "epoch": 1,
    "learning_rate": 0.0002,
    "tags": ["tag1", "tag2"],
    "custom_logs_filename": "optional-filename",
    "save_logs_with_tags": true,
    "custom_dataset": "optional-dataset-name",
    "parent_model_id": "optional-parent-model-id",
    "lora_r": 32,
    "lora_alpha": 16,
    "lora_dropout": 0.05
  }'
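
The same request in Python, using the requests library (a sketch; substitute your own API key and model names):

import requests

resp = requests.post(
    "https://console.hyperstack.cloud/ai/api/v1/finetuning/jobs",
    headers={"api_key": "API_KEY", "Content-Type": "application/json"},
    json={
        "base_model": "llama-3.1-8B-instruct",
        "model_name": "YOUR_NEW_MODEL_NAME",
        "batch_size": 4,
        "epoch": 1,
        "learning_rate": 0.0002,
    },
)
resp.raise_for_status()
print(resp.json())  # e.g. {"message": "Reservation starting", "status": "success"}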

Success Response

200 OK
{
  "message": "Reservation starting",
  "status": "success"
}

Delete Fine-Tuned Model API

DELETE https://console.hyperstack.cloud/ai/api/v1/models/{model_id}

Use this endpoint to delete a specific fine-tuned model. This action is irreversible and permanently removes the model and its associated metadata.

To delete a fine-tuned model, replace the following variables before running the command:

  • API_KEY: Your API key.
  • model_id: The ID of the fine-tuned model to delete (integer), provided in the path. Use the List Models API to get the ID.

Example Request
curl -X DELETE "https://console.hyperstack.cloud/ai/api/v1/models/{model_id}" \
  -H "api_key: YOUR_API_KEY"

Success Response

200 OK
{
  "message": "Model deleted successfully",
  "status": "success"
}

Returns

  • 200 OK: The model was successfully deleted.
  • 400 Bad Request: The request was malformed or missing required fields.
  • 404 Not Found: A model with the specified ID does not exist.
  • 422 Unprocessable Entity: The provided model ID is invalid or improperly formatted.
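
In Python, you might handle these outcomes as follows (a sketch using the requests library; the model ID is illustrative):

import requests

model_id = 123  # illustrative; get real IDs from the List Models API
resp = requests.delete(
    f"https://console.hyperstack.cloud/ai/api/v1/models/{model_id}",
    headers={"api_key": "YOUR_API_KEY"},
)

if resp.status_code == 200:
    print(resp.json()["message"])  # "Model deleted successfully"
elif resp.status_code == 404:
    print(f"No model with ID {model_id}")
else:
    print(f"Request failed: {resp.status_code} {resp.text}")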
