Setup Guide#
Warning
The DecisionAI API is currently in active development and is not yet stable. The API may undergo significant changes between versions. Please be aware that code written for one version may need to be updated to work with future releases. We are aiming to make breaking changes as transparent as possible. For more details, see the Latest Changes section below.
What you can do#
With DecisionAI, you can:
Formulate a base model and input data to represent the foundation of your optimization problems.
Leverage LLMs to accurately adapt and extend your base model (e.g., constraints, objectives) based on natural language descriptions.
Solve the optimization models using the powerful HybridSolver and obtain the solutions.
Requirements#
Python >= 3.10, < 3.12
Installation#
Setup API key#
Set up authentication for the Quantagonia Cloud to be able to use decision-ai and to submit compute jobs to the cloud-based solver. If you don’t have an API key yet, sign up for a free plan at platform.quantagonia.com and store it in an environment variable:
```shell
export QUANTAGONIA_API_KEY=<YOUR_API_KEY>
# For Windows users:
set QUANTAGONIA_API_KEY=<YOUR_API_KEY>
```
To activate the decision-ai client in the early access program, please fill out this form.
Install via pip#
```shell
pip install decision-ai --extra-index-url https://<YOUR TOKEN>:@pypi.fury.io/developmentqg/
```
Setup within a dependency management tool#
Hatch
To configure Hatch to use the private index in your project:

```toml
# In pyproject.toml
[tool.hatch.envs.default.env-vars]
PIP_EXTRA_INDEX_URL = "https://<YOUR TOKEN>:@pypi.fury.io/developmentqg/"
```

Then add decision-ai as a dependency:

```toml
# In pyproject.toml
[project]
# ... other project configuration
dependencies = [
    "decision-ai",
    # ... other dependencies
]
```

uv
Configure uv to use the private index in your pyproject.toml:

```toml
# In pyproject.toml
[[tool.uv.index]]
url = "https://pypi.fury.io/developmentqg/"
username = "<YOUR TOKEN>"
password = ""
```

You can also add decision-ai as a dependency using the command line:

```shell
uv add decision-ai --index-url https://<YOUR TOKEN>:@pypi.fury.io/developmentqg/
```

Or specify it in your pyproject.toml:

```toml
[project]
# ... other project configuration
dependencies = [
    "decision-ai",
    # ... other dependencies
]
```

Poetry
Configure Poetry to use the private repository in your project securely:

```toml
# In pyproject.toml
[[tool.poetry.source]]
name = "fury"
url = "https://pypi.fury.io/developmentqg/"
priority = "supplemental"
```

Then add decision-ai as a dependency:

```shell
poetry config http-basic.fury <YOUR TOKEN> ""
poetry add decision-ai --source fury
```

Or manually in your pyproject.toml:

```toml
[tool.poetry.dependencies]
python = ">=3.10,<3.12"
decision-ai = "^x.y.z"
```

Pipenv
To add the private repository to your Pipfile with secure token handling:

```toml
[[source]]
url = "https://<YOUR TOKEN>:@pypi.fury.io/developmentqg/"
verify_ssl = true
name = "fury"
```

Then add decision-ai as a dependency:

```toml
[packages]
decision-ai = {version = "*", index = "fury"}
```

requirements.txt
Add the following to your requirements.txt file:

```
--extra-index-url https://<YOUR TOKEN>:@pypi.fury.io/developmentqg/
decision-ai==x.y.z
# Other dependencies
```
First steps#
Once you have installed the package and set up your API key, you can start defining your first optimization model using the DecisionAI model interface. This is a Python file that contains the input data, model variables, and the model itself.
For more details on how to set up the model, refer to the Usage section.
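As a hedged sketch of what the input-data part of such a file might look like: per the 1.4.0 notes below, `InputData` is a plain pydantic base model, so pydantic's `BaseModel` stands in for it here. The class and field names are invented for illustration; the real imports come from the decision-ai package.

```python
from pydantic import BaseModel

# Illustrative stand-in: in a real project this class would derive from
# decision-ai's InputData, which is itself a plain pydantic base model.
class StaffInput(BaseModel):
    employees: list[str]
    shifts: list[str]
    max_shifts_per_employee: int

data = StaffInput(
    employees=["Linda", "Jane"],
    shifts=["early", "late"],
    max_shifts_per_employee=5,
)
```

The model variables and the optimization model class then follow the `variables_class` pattern described in the 0.5.0 release notes below.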
Latest Changes#
1.4.0 — 2025-10-08#
Added#
CLI: Added support for multiple models in a single project via pyproject.toml configuration
CLI: Added `decisionai.lockfile` for tracking deployed model IDs per base URL
CLI: Added `--base-url` parameter to override environment-based API URL configuration
CLI: Added `decision_ai prune` command to delete deployed models and clean up the lock file
CLI: Added `decision_ai remove` command to selectively remove specific models from deployment
Model class names (optimization model and input data classes) are now automatically detected during model processing using LLM-based code analysis. This eliminates the need to manually specify class names in most cases.
Added helpful error messages with instructions when automatic class name detection fails, showing users how to manually specify class names via CLI flags or config files.
Added `reasoning_enabled` (bool) parameter to `ChatConnection.stream_messages()` to control reasoning mode
Changed#
`InputData` became less restrictive and is now a pure pydantic base model. That means any types that pydantic allows are also valid for the input data.
CLI: Enhanced the `decision_ai init` command to support initializing multiple models without pre-generating IDs
CLI: Enhanced the `decision_ai deploy` command to automatically create and manage model IDs via the lock file
The input data class name is now stored remotely and automatically retrieved when continuing a chat session. Users no longer need to track or provide this information.
Model deployment is now simpler: the main optimization model class and input data class names are automatically detected and do not need to be specified in most cases.
Breaking Changes#
The signature of `DecisionAI.create_chat_session(self, input_data: InputData, opt_model_id: str) -> ChatSession` was changed to `DecisionAI.create_chat_session(self, opt_input_data: InputData, opt_model_id: str) -> ChatSession`.
The signature of `DecisionAI.continue_chat_session(self, input_data_class_name: str, chat_session_id: str) -> ChatSession` was changed to `DecisionAI.continue_chat_session(self, chat_session_id: str, opt_input_data_class_name: str) -> ChatSession`. Continuing a chat session now works via the input data class name (str) instead of the instance.
CLI: Replaced the `decision_ai version` command with a `--version` flag; users must now use `decision_ai --version` instead of `decision_ai version`
CLI: Changed the model configuration format in pyproject.toml; models are now defined as `[tool.decision_ai.model.{name}]` sections without embedded IDs
CLI: Renamed the `--file-path` parameter to `--config`; users must now use `--config` instead of `--file-path`
The parameter `opt_input_data_class_name` has been removed from `DecisionAI.continue_chat_session()`. The input data class name is now stored remotely and retrieved automatically.
1.3.0 — 2025-09-08#
Changed#
Logging and analytics were improved.
The agent system prompt was improved. It can now write better queries for understanding the input data.
1.2.0 — 2025-08-29#
Fixed#
Application no longer crashes when maximum number of tool calls is reached
Error handling when conversation reaches tool call limit - users now receive informative error message and can continue chatting
Fixed an issue that dependencies of the input data class were not loaded when constructing it within a conversation with the agent.
1.1.0 — 2025-08-25#
Removed#
Removed the redundant `input_or_variable_name` attribute from the `Variable` schema.
Automatic validation when constructing a chat session was removed. Call `ChatSession.validate_and_detect_incompatible_constraints` for manual validation after creating a chat session.
Added#
Added `post_solve_callback` parameter to `ChatSession.connect()` with signature `Callable[[Solution | None, Exception | None], None]`. The callback is called after each solve operation, receiving the solution object directly when successful (with `error=None`) or an exception when the solve failed (with `solution=None`).
It is now possible to change input data via the agent. Ask "Please remove Linda" or "Please add another employee called 'Jane'" in the staff scheduling example to try it out!
The objective now has the fields `incompatibility` (bool) and `incompatibility_reason` (str) in the chat session state. The fields are populated when the objective becomes incompatible with the input data.
In-Context Learning Examples: New CLI feature allowing users to provide custom XML examples to guide AI responses:
  Added `examples_dir` parameter to the `decision_ai init` command for specifying the examples directory
  Added XML file parsing with `<prompt>` and `<how_to_proceed>` structure
  Added automatic examples syncing during the `decision_ai deploy` command
  Added backend API endpoint `/learning/sync/{opt_model_id}` for examples synchronization
  Added comprehensive documentation for the in-context learning examples feature
  Configuration Support: Added `examples_dir` field to `DecisionAiProjectConfig` with validation
There is now a `ModelSnapshot` class:
  The class provides a snapshot of the optimization model at the current time in the chat session
  It can be used to formulate or solve the optimization problem
The agent can now modify the input data class, i.e., it can add/replace/delete fields of the provided class.
A new base model in the examples, consulting staff planning, models the assignment of consultants to projects over multiple time periods
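The `post_solve_callback` contract can be sketched in pure Python. This is a hypothetical stand-in: the real `Solution` type comes from decision-ai, so a plain dict takes its place here, and only the documented signature `Callable[[Solution | None, Exception | None], None]` is exercised.

```python
# Collect outcomes so the callback's behavior is observable.
results = []

def post_solve_callback(solution, error):
    # Per the changelog, exactly one of `solution` / `error` is set per solve:
    # the solution object on success (error=None), an exception on failure
    # (solution=None).
    if error is not None:
        results.append(("failed", str(error)))
    else:
        results.append(("solved", solution.get("objective")))

# Simulate the two ways the client would invoke the callback:
post_solve_callback({"objective": 42.0}, None)         # successful solve
post_solve_callback(None, RuntimeError("infeasible"))  # failed solve
```

In real code the function would be passed as `ChatSession.connect(post_solve_callback=...)` rather than called directly.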
Changed#
The signature of the `ChatConnection` constructor was changed. Now a `ChatSession` object is required instead of a chat session id.
Improvements to the default `__str__` methods of the message classes.
Improved input data synchronization and re-fetching mechanism in chat session and chat connection classes
Enhanced input data cache management to ensure consistency between local and remote state
Version Compatibility: Changed version compatibility checks from hard errors to warnings, allowing CLI commands to continue execution
Documentation: Enhanced CLI documentation with a comprehensive in-context learning examples guide and updated existing deployment documentation
Improved code validation system. Tool output messages are now clearer for the agent.
Removed the `ChatMessageResponse` class in favor of `MessageContainer`. The `MessageContainer` automatically discriminates messages into their subtypes.
Updated `MathematicalNotation` creation to automatically use the variable ID as `input_or_variable_name` when generating notations from variables.
Updated the variable parser to no longer handle the redundant `input_or_variable_name` field.
Deprecated#
Deprecated the `ChatSession.set_opt_input_data()` method in favor of the `opt_input_data` property setter
Fixed#
Faster validation for checking newly introduced variables in constraints.
Fixed an issue that constraints and variables could not be manually deleted.
Breaking Changes#
The constructor of `ChatConnection` was changed. As the object should only be created via the `ChatSession.connect` factory method, there is only a small chance that this change affects anyone.
Client Schema Changes: Renamed the `methodology` field to `how_to_proceed` in all in-context learning example schemas:
  `InContextLearningExample.methodology` → `InContextLearningExample.how_to_proceed`
  `InContextLearningExampleCreate.methodology` → `InContextLearningExampleCreate.how_to_proceed`
  `InContextLearningExampleResponse.methodology` → `InContextLearningExampleResponse.how_to_proceed`
  Updated the client request handler to use the new field name when syncing examples
  Updated the CLI tools XML documentation to reflect the new field name
  Action Required: Client code using these schemas must be updated to use `how_to_proceed` instead of `methodology`
Client Schema Change: The `Variable` schema no longer includes the `input_or_variable_name` field. This is a breaking change that affects:
  Client code that creates or uses `Variable` objects
  Variable parsing and model generation
  Any client code that previously accessed the `input_or_variable_name` field from variables
Required Actions:
  All existing models must be reinitialized and deployed again
  Update any client code that references `input_or_variable_name` on `Variable` objects
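With the rename applied, an in-context learning example file in the examples directory would use the `<how_to_proceed>` tag. A hypothetical minimal example follows: the `<prompt>` and `<how_to_proceed>` tag names come from this changelog, while the root element and overall file layout are assumptions.

```xml
<example>
  <prompt>Please make sure no employee works two consecutive shifts.</prompt>
  <how_to_proceed>Add a hard constraint over each pair of consecutive shifts per employee.</how_to_proceed>
</example>
```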
1.0.1 — 2025-07-11#
Changed#
API Documentation is improved.
Agent was slightly updated and improved.
1.0.0 — 2025-07-09#
Added#
Access to improved DecisionAI agent is now enabled.
Changed#
The incoming messages are now better defined and can be discriminated more easily.
Modified the staff scheduling example to allow additional user constraints without compromising feasibility.
Soft constraint priorities can now be found under the `Constraint` class within the chat session remote state. Each constraint has a field `priority` that can take the values "hard", "A", "B", and "C". If anything other than "hard" is chosen, the constraint will be a soft constraint.
The flag for whether a constraint is enabled can now be found under the `Constraint` class within the chat session remote state. The field is called `enabled`.
Fixed#
Model parse errors are again propagated correctly when deploying a model.
Constraint priorities are fetched before calling formulate/solve in the chat connection. Previously, they were passed to the `ChatConnection` object but were not updated.
Breaking Changes#
The `chat_session.stream_messages` method now returns `AssistantMessage` objects. Surround them with `str(...)` to convert them into strings.
The `solve` method in `DecisionAI` now fetches the priorities internally and does not allow passing them anymore.
The `solve` method in `ChatSession` now fetches the priorities internally and does not allow passing them anymore.
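The `str(...)` migration can be illustrated with a stand-in class. This is hypothetical: the real `AssistantMessage` is defined by decision-ai and is richer than this, but the breaking change only concerns its `__str__` behavior.

```python
class AssistantMessage:
    # Minimal stand-in for the decision-ai class: only __str__ matters
    # for the migration described above.
    def __init__(self, content: str):
        self.content = content

    def __str__(self) -> str:
        return self.content

# Code that previously received plain strings from stream_messages
# now wraps each message object in str(...):
messages = [AssistantMessage("Constraint added."), AssistantMessage("Model solved.")]
texts = [str(m) for m in messages]
```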
0.13.0 — 2025-06-24#
Added#
Introduced a client method to check version compatibility with the server, allowing clients to detect if they are up-to-date or require an update.
Changed#
Updated ChatSession.connect() method to use the executor property instead of direct field access, ensuring soft constraint enabling settings are properly updated before creating chat connections.
Fixed#
When solving a problem via a chat session, soft constraints were not correctly enabled and were mistakenly treated as hard constraints.
0.12.0 — 2025-05-26#
Added#
More robust input data preparation.
Fixed#
Fixed key errors when adding constraints that are empty or do not yield any pulp constraints.
0.11.0 — 2025-05-22#
Removed#
Removed the `_rename_variables` method from `pulp_model.py`, since it is no longer needed.
Added#
`DECISION_AI_LOG_LEVEL` can now be set to "NONE" in order to disable client logging. Logging for the client is still disabled by default.
Backend preparations for an improved agent.
Added a parameter to enable retries on request timeouts.
Fixed#
Fixed that messages were stored in the wrong order.
Fixed a bug that added wrong type hints to the generated code. The error complained about undefined classes `Input` and `Variables`.
Fixed an error when deleting or modifying a constraint.
Fixed key errors when changing priorities of soft constraints while simultaneously activating or deactivating them.
Fixed a bug where chat sessions with `Constraints.code=None` threw an uncaught `AttributeNullError` when listing chat sessions. We now catch the error and exclude the invalid sessions.
0.10.0 — 2025-05-13#
Added#
Better client-side control over request timeouts.
Fixed#
Fixed that soft constraint priorities were not removed when the respective constraint was removed.
Fixed formulation cache that caused an error when a constraint was deleted.
Fixed that inactive constraints were sent to the client for solving.
Fixed that data-incompatible constraints were never marked compatible again, even if they were repaired.
Fixed several backend bugs.
0.9.1 — 2025-04-29#
Removed#
Custom model dump and schema formatting for input data: `InputData.dump_model_transformed` and `InputData.dump_model_json_transformed`.
Custom string representations of input data.
Default dumping of all input data into the chat history at the start of a session.
Added#
Logging that covers chat sessions and chat connections.
JSON query tool using jq.
LLM agent for querying input data.
Ability for input data preparation node to query data rather than load it all.
Changed#
The DecisionAI model is now validated if input data is changed. Incompatible constraints are marked as such and can then be disabled or handled otherwise.
Changed the `InputData` class to be more aligned with default Pydantic models.
Changed `InputDataPreparationAgent` to use query rather than load.
Split `InputData` into two classes to focus on type restriction and serialization respectively.
Fixed#
Support for websockets >= 15.
Bug that caused the feedback request to be sent twice.
Missing input_data argument in the model.solve() example in “Using the DecisionAI Model”.
Breaking Changes#
The `InputData.dump_model_transformed` and `InputData.dump_model_json_transformed` methods no longer exist.
Custom string representations of input data are no longer supported.
Input data must now satisfy pydantic data class rules; in particular, dictionary keys must be primitives.
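The dictionary-key rule mirrors standard JSON serialization, which pydantic models ultimately go through. A stdlib-only illustration of why non-primitive keys are rejected:

```python
import json

# Primitive keys serialize without trouble:
ok = {"early": 2, "late": 3}
encoded = json.dumps(ok)

# Tuple keys (supported in earlier releases via custom transformations)
# are not representable in JSON and raise a TypeError:
bad = {("Linda", "early"): 1}
try:
    json.dumps(bad)
    raised = False
except TypeError:
    raised = True
```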
0.8.0 — 2025-04-01#
Changed#
Improved error feedback when deploying a model. Now errors in the model code are displayed and the deployment is stopped.
Added#
Added a method `get_opt_input_data` in `DecisionAI` which allows the client to download input data given the ID.
Added a method to `DecisionAI` to get the processing status of a model.
0.7.0 — 2025-03-20#
Added#
Added `list_chat_sessions` method to `DecisionAI`
Added `get_messages` method in `ChatSession`
Added `set_opt_input_data` method in `ChatSession`
Added `ChatSession.from_state`, `ChatSession.from_existing_id`, and `ChatSession.create_new` factory methods
Added a method to `DecisionAI` to continue a chat session.
Added a method to `DecisionAI` to list all models.
Added a method to `DecisionAI` to get a model.
Changed#
The `ChatSession` constructor is now more sophisticated and allows for more options. Use the factory methods for creating a `ChatSession` object more easily.
Improved error response when the LLM context is full.
0.6.1 — 2025-03-17#
Fixed#
Fixed an issue where the client would solve the optimization model in an infinite loop when the solving took too long.
0.6.0 — 2025-03-13#
Removed#
Removed the model summary object from the chat session, chat session response, model item, etc. It contained duplicated descriptions of the objective function and constraints; this information is now stored in the respective fields. The only thing that remains is the executive summary field.
Added#
Docs for how to override the solve method.
Raise an error if trying to set a priority for a hard constraint.
Allow setting `make_constraints_soft` to True from the client.
Keep track of the solution object in the chat connection; it can be retrieved with a getter.
Add support for inheritance of constraints from parent models.
`InputData` now supports `datetime.time`, `datetime.datetime`, `datetime.date`, and `datetime.timedelta` objects.
Changed#
The field “opt_model_name” was renamed to “py_class_name” in the chat session. The previous name was not descriptive enough. The field contains the name of the class that is used to define the optimization model.
In line with this change, the function parameters in `OptModelExecutor` were also renamed to reflect this refactoring.
The `ChatConnection` now uses this information to execute the model more robustly.
Unified the `get_complete_model_summary` function to be used in all places where the model summary is needed.
Fixed#
The FormulationHashTracker now keeps track of the input data hash.
We reformulate completely if the input data changes.
`inactive_constraint_ids` and `inactive_variable_ids` are now lists instead of sets. This fixes an issue with the serialization during state patching.
Pass priorities to the `stream_messages` method if `make_constraints_soft` is True. Before, we were running into an error because the priorities were not passed.
Fixed an issue which caused an error when the code for a chat session was requested.
Python code that was produced by the LLM and had syntax errors caused an exception that was not caught. Now the LLM gets the opportunity to correct its mistake.
Breaking Changes#
The solve method of `PulpDecisionAIModel` now requires an `input_` parameter. This allows users to conveniently override the solve method.
In-context learning examples have to be re-added if they were previously added.
0.5.0 — 2025-02-17#
Added#
Implemented variable support with Pydantic models
Added initialization methods for each variable type
Introduced new variable parser functionality
Improved string representation handling in `InputData`, allowing users to define custom representations for specific fields to optimize LLM context usage.
Added validation checks in `InputData`:
  Ensured `_to_str` methods do not exist within nested `InputData` instances.
  Verified `_to_str` methods can be called without errors.
Introduced a `serialize_and_deserialize()` helper function in tests to streamline serialization validation.
Added `DecisionAI` client class for starting new chat sessions via `create_chat_session(input_data: InputData, opt_model_id: str)`
Added `ChatSession` class with real-time chat interaction via WebSocket connections and state synchronization
Added `OptModelExecutor` for model execution with support for soft constraints and lazy model updates
Added support for custom field representations in `InputData` via `_to_str` methods
Added support for tuple keys in top-level dictionaries of `InputData`
Added CLI tool with commands for model initialization and deployment
Added comprehensive solver parameter support including time limits, tolerances, and gaps via `HybridSolverParameters`
Changed#
Updated all testing instances to use the new variable support
Updated documentation to reflect the new variable support
Refactored serialization methods in `InputData`, renaming:
  `dump_model_with_dictionary_transformation()` → `dump_model_transformed()`
  `dump_model_json_with_dictionary_transformation()` → `dump_model_json_transformed()`
  `model_json_schema_with_dictionary_transformation()` → `model_json_schema_transformed()`
Updated `DecisionAIRequestHandler` to use the new serialization method names.
Refactored model initialization to use class-based variable definitions
Migrated from pandas-based string representation to JSON-based serialization in `InputData`
Enhanced input data validation with strict type checking for nested structures
Removed the `RestrictedBaseModel` dependency in favor of standard Pydantic models
Fixed#
Updated test suite to align with new code injector implementation
Fixed variable-related tests
Fixed protocol schema handling in client requests
Fixed state synchronization between client and server
Fixed dictionary key handling in nested structures
Security#
Replaced hardcoded API key with an environment variable to prevent accidental credential leaks.
Important Notes#
Continuing existing chat sessions is not supported in this release
Model identification is currently done via system-assigned IDs only
Both the optimization model and input data must be deployed before starting a chat session
Breaking Changes#
Model variables must now be defined via a `variables_class` class attribute:

```python
class MyModel(PulpDecisionAIModel):
    variables_class = MyVariables  # Required
```

Each variable requires a static initialization method:

```python
class MyVariables(PulpVariables):
    x: pulp.LpVariable

    @staticmethod
    def init_x(input_: MyInput) -> pulp.LpVariable:
        return pulp.LpVariable("x", lowBound=0)
```

Removed the `set_up_variables` method from `PulpDecisionAIModel`
Removed the pandas-based string representation from `InputData`
0.4.0 — 2024-11-19#
Added#
Added support for transforming the constraints of a model into soft constraints.
Added better error handling when adding invalid constraint code. Every constraint must contain at least one yield.
Dictionaries with tuple as keys are now supported for the model input data.
Input data can now be hashed.
Added documentation for the meta optimization model: the intersection between AI-provided code and the optimization model.
Added hand-through of solver parameters for default solve method.
Changed#
Changed the examples and updated the documentation.
Refactored the model interface to improve code organization and maintainability.
Simplified the model API to make it more intuitive and easier to use.
Fixed#
Fixed a small bug that would wrongly render the string representation of input data.
Fixed a bug that caused the package to not build correctly.