Deploying a Model#
Before deploying a model, ensure that the QUANTAGONIA_API_KEY environment variable is set to a valid API key.
There are two ways to deploy a DecisionAI model, each suited for different use cases:
Using the CLI Tools (Recommended for Production)#
The CLI tools are recommended for production systems as they maintain consistent model IDs across deployments using a lock file and support multiple models in a single project.
The configuration is stored in a TOML file, and model IDs are tracked in a decisionai.lock file.
Global Options#
All CLI commands support these global options:
--config PATH: Path to the TOML configuration file (default: “pyproject.toml”)
--section TEXT: Section name in the TOML file (default: “tool.decision_ai”)
--skip-version-check: Skip version compatibility check
--version: Show version information
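For example, to work against a configuration stored in a standalone TOML file rather than pyproject.toml (a sketch; the file name and the placement of global options before the subcommand are assumptions):
decision_ai --config decisionai.toml --section tool.decision_ai validate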
Initialization#
Initialize your model using
decision_ai init [OPTIONS] MODEL_NAME
- Required arguments:
MODEL_NAME: Name for referencing your model
- Optional arguments:
--model-file-path TEXT: Path to the Python file containing the model code (default: “model.py”)
--opt-model-class-name TEXT: Name of the model class (optional - will be auto-detected if not provided)
--input-data-class-name TEXT: Name of the input data class (optional - will be auto-detected if not provided)
--description TEXT: Description of your optimization model
--examples-dir TEXT: Directory containing XML example files for in-context learning
The initialization process:
Adds model configuration to your TOML file
The configuration includes model name, file path, description, and examples directory
Class names will be automatically detected during deployment unless explicitly specified
Multiple models can be initialized in the same project
If options are not provided, the command will interactively prompt for required information
Note
Both --opt-model-class-name and --input-data-class-name must be provided together, or both omitted for automatic detection. If automatic detection fails, you can manually add the class names to your pyproject.toml file.
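As an illustration, a hypothetical initialization of a model named production_planner defined in planner.py, with explicit class names and an examples directory (all names below are placeholders):
decision_ai init production_planner \
    --model-file-path planner.py \
    --opt-model-class-name PlannerModel \
    --input-data-class-name PlannerInputData \
    --description "Production planning model" \
    --examples-dir examples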
Validation#
You can validate your TOML configuration using
decision_ai validate
This ensures your configuration is correct before deployment.
Deployment#
Deploy your models using
decision_ai deploy [OPTIONS]
- Optional arguments:
--force: Force reprocessing of models even if they haven’t changed
--base-url TEXT: Override the API base URL
This command:
Reads your TOML configuration
Creates model IDs for new models (stored in decisionai.lock)
Uploads model code to the Quantagonia cloud
Syncs in-context learning examples (if examples_dir is specified)
Makes your models available for chat sessions
The lock file (decisionai.lock) tracks model IDs per base URL, enabling:
- Consistent model IDs across deployments
- Support for multiple environments (dev, staging, prod)
- Multiple models in a single project
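For example, a sketch of deploying to the default environment and then to a staging environment (the staging URL is a placeholder, mirroring the lock file example below):
decision_ai deploy
decision_ai deploy --base-url https://staging.example.com/api/v1
decision_ai deploy --force    # reprocess all models even if unchanged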
Model Management#
Remove specific models from the lock file:
decision_ai remove [OPTIONS] MODEL_NAME
- Optional arguments:
--yes: Skip confirmation prompt
Remove all deployed models from the backend and clean up the lock file:
decision_ai prune [OPTIONS]
- Optional arguments:
--base-url TEXT: Override the base URL from environment variables
--yes: Skip confirmation prompt
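For illustration, assuming a model named my_model1 as in the configuration example below:
decision_ai remove my_model1 --yes    # remove one model from the lock file without prompting
decision_ai prune --yes               # remove all deployed models from the backend and clean up the lock file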
Note
Deployment will fail with an error if the Python code of the model contains syntax errors. In this case, fix your model file and re-deploy the model.
Tip
For information about setting up in-context learning examples to improve AI responses, see In-Context Learning Examples.
Using the Python Interface (For Dynamic Deployment)#
The Python interface is ideal for:
Dynamically creating and testing models
Quick experimentation with DecisionAI
Prototyping and development workflows
Here’s how to use it:
from decision_ai import DecisionAI
from decision_ai.client.schemas.model import OptModelClassNames

client = DecisionAI()

# Create and deploy model with automatic class name detection
response = client.deploy_model(
    module="path/to/your_model.py",  # File path to the Python module containing the model
    model_name="your_model",         # Name for the model
    description=""                   # Optional description
)

# Or specify class names manually
class_names = OptModelClassNames(
    py_class_name_of_model="YourModel",
    py_class_name_of_input_data="YourInputData"
)

response = client.deploy_model(
    module="path/to/your_model.py",
    model_name="your_model",
    description="",
    class_names=class_names
)

# The response contains the model ID
model_id = response.id
Configuration File#
When using the CLI tools, the TOML configuration file (default: pyproject.toml) stores your model settings. Multiple models are supported.
You can either use decision_ai init to add models to the configuration file, or manually edit the file directly.
Structure#
Models are defined under [tool.decision_ai.model.<model_name>] sections:
[tool.decision_ai.model.my_model1]
description = "First optimization model"
path = "model1.py"
opt_model_class_name = "Model1" # Optional - will be auto-detected if omitted
input_data_class_name = "MyInputData1" # Optional - will be auto-detected if omitted
examples_dir = "examples1" # Optional
[tool.decision_ai.model.my_model2]
description = "Second optimization model"
path = "model2.py"
# No class names - will be auto-detected during deployment
# No examples_dir - no in-context learning
Configuration Fields#
Each model configuration supports the following fields:
Required fields:
path (string): Path to the Python file containing the model code. The file must exist and be a valid Python file.
Optional fields:
opt_model_class_name (string): Name of the model class in the Python file. If omitted, the class name will be automatically detected during deployment. Must be provided together with input_data_class_name, or both omitted.
input_data_class_name (string): Name of the input data class in the Python file. If omitted, the class name will be automatically detected during deployment. Must be provided together with opt_model_class_name, or both omitted.
description (string): Description of your optimization model. Default: empty string.
examples_dir (string): Path to a directory containing XML example files for in-context learning. If specified, the directory must exist. If omitted, no in-context learning examples will be used.
Validation Rules#
When you run decision_ai validate or decision_ai deploy, the configuration is validated:
The path must point to an existing file (not a directory)
Both opt_model_class_name and input_data_class_name must be provided together, or both must be omitted for automatic detection
If class names are provided, they must exist in the specified Python file
If examples_dir is specified, it must point to an existing directory (not a file)
Manual Configuration#
You can manually edit the pyproject.toml file instead of using decision_ai init. This is useful when:
Migrating existing models
Batch adding multiple models
Using version control to manage configurations
Automating deployment pipelines
After manually editing the configuration, run decision_ai validate to ensure your configuration is correct before deploying.
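A minimal sketch of that manual workflow:
# edit pyproject.toml by hand, then:
decision_ai validate
decision_ai deploy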
Lock File#
The decisionai.lock file tracks deployed model IDs per base URL:
{
    "https://decisionai.quantagonia.com/api/v1": {
        "my_model1": "model-id-1",
        "my_model2": "model-id-2"
    },
    "https://staging.example.com/api/v1": {
        "my_model1": "staging-model-id-1"
    }
}
Warning
Do not manually edit the lock file. Use the CLI commands to manage model deployments.
Environment Variables#
Required:
QUANTAGONIA_API_KEY: Your API key for authentication (required for both CLI and Python interface)
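For example, in a POSIX shell (the key value is a placeholder):
export QUANTAGONIA_API_KEY="your-api-key"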