Quick Start#

After installing the DecisionAI client (see Setup Guide), set up authentication with the Quantagonia cloud by exporting your API key:

export QUANTAGONIA_API_KEY=your_api_key
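
Alternatively, you can set the variable from within Python (for example in a notebook) before creating the client; this assumes the client reads QUANTAGONIA_API_KEY from the process environment, as the export above implies:

import os

os.environ["QUANTAGONIA_API_KEY"] = "your_api_key"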

To try out a sample model, you can run the staff scheduling example:

python -m decision_ai.examples.staff_scheduling.chat_example

To solve a model during a chat session, simply type solve. To learn more about this staff scheduling example, see Staff Scheduling.

Workflow Overview#


Here’s how to create and deploy your own DecisionAI models:

  1. Define Your Model & Input Data

    Create a model class that inherits from decision_ai.PulpDecisionAIModel, a variables class based on decision_ai.PulpVariables, and an input data class derived from decision_ai.InputData. You can learn the details in Setting Up the Base Model.

    For example, let’s implement this simple linear optimization problem and store it as simple_model_complete_example.py:

    \[\begin{aligned}\min \quad & c_1 x + c_2 y \\ \text{s.t.} \quad & a x + b y \leq c \\ & x, y \geq 0\end{aligned}\]

    The complete model file of this example is:

    import pulp
    from pydantic import Field
    
    from decision_ai import InputData, PulpDecisionAIModel, PulpVariables, Solution, constraint
    from decision_ai.typing import ConstraintGenerator
    
    
    class ExampleInputData(InputData):
        a: int
        b: int
        c: int
        cost: dict[str, int]
    
    
    class Variables(PulpVariables):
        x: pulp.LpVariable = Field(..., description="Variable x")
        y: pulp.LpVariable = Field(..., description="Variable y")
    
        @staticmethod
        def init_x(input_: ExampleInputData) -> pulp.LpVariable:  # noqa: ARG004
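            # x is a continuous variable with lower bound 0, implementing x >= 0 from the formulation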
            return pulp.LpVariable("x", lowBound=0)
    
        @staticmethod
        def init_y(input_: ExampleInputData) -> pulp.LpVariable:  # noqa: ARG004
            return pulp.LpVariable("y", lowBound=0)
    
    
    class Model(PulpDecisionAIModel[ExampleInputData, Variables]):
        variables_class = Variables
    
        @constraint
        def constraint_1(input_: ExampleInputData, variables: Variables) -> ConstraintGenerator:
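            # Yield the linear constraint a*x + b*y <= c from the formulation above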
            yield input_.a * variables.x + input_.b * variables.y <= input_.c
    
        def set_up_objective(self, input_: ExampleInputData, prob: pulp.LpProblem, variables: Variables) -> pulp.LpProblem:
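            # In PuLP, adding a plain expression (not a constraint) to the problem sets the objective: c1*x + c2*y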
            prob += pulp.lpSum([input_.cost["c1"] * variables.x, input_.cost["c2"] * variables.y])
            return prob
    
        def solution_to_str(self, input_: ExampleInputData, solution: Solution) -> str:
            return f"Objective value: {solution.objective}\n\nStatus: {solution.status}"
    
    

    See Examples for more models and Setting Up the Base Model for detailed explanations.
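
    Before initializing and deploying, you can sanity-check the input data class locally. The sketch below simply constructs ExampleInputData with the same values used in the chat example in step 4; assuming InputData is a pydantic model (as the pydantic imports above suggest), missing or mistyped fields raise a validation error here rather than later during a chat session:

    from simple_model_complete_example import ExampleInputData

    # Same input values as the chat example in step 4
    data = ExampleInputData(a=3, b=5, c=20, cost={"c1": 4, "c2": 6})
    print(data)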

  2. Initialize the Model using the CLI

    Create a model configuration using:

    decision_ai init [OPTIONS] MODEL_NAME
    

    This command adds the model configuration to your TOML file. MODEL_NAME is the name you want to give your model.

    Example

    For the example in this quickstart, we would run:

    decision_ai init simple_example
    

    You’ll be asked to provide:

    • The path to your model file (simple_model_complete_example.py in this quickstart)

    • A brief description of your model and its purpose

    The model and input data class names will be automatically detected during deployment. If you need to specify them manually, use:

    decision_ai init simple_example --opt-model-class-name Model --input-data-class-name ExampleInputData
    

    Note

    Details on how to initialize and deploy models with the Python interface can be found in Deploying a Model.

  3. Deploy Your Model using the CLI

    You can create a model ID and deploy your model by running:

    decision_ai deploy
    

    To redeploy updated model code, run decision_ai deploy again. You only need to reinitialize with decision_ai init <MODEL_NAME> when adding new models to your project. Deployment returns an error if the Python code of the model contains syntax errors; in that case, fix your model file and redeploy.

    Tip

    You can enhance AI responses by providing in-context learning examples. See In-Context Learning Examples for details.

  4. Start a Chat Session

    Create a chat interface for your model. Here’s a simple console-based example, stored as simple_example_chat.py:

    # ruff: noqa: T201
    import asyncio
    
    from rich.console import Console
    from rich.markdown import Markdown
    from rich.panel import Panel
    from rich.syntax import Syntax
    from rich.text import Text
    from simple_model_complete_example import ExampleInputData
    
    from decision_ai.client.chat_session import ChatSession
    from decision_ai.client.client import DecisionAI
    
    console = Console()
    
    
    async def interactive_chat(chat_session: ChatSession) -> None:
        """Interactive chat session with markdown rendering."""
        async with chat_session.connect() as chat:
            while True:
                user_input = input("\nYour prompt: ")
                if user_input.lower() == "q":
                    console.print("[bold red]Exiting chat...[/bold red]")
                    break
    
                # Calculate panel width as 2/3 of terminal width
                panel_width = int(console.width * 2 / 3)
    
                # Special command to show chat session state
                if user_input.lower() == "status":
                    user_panel = Panel(
                        Text(user_input, justify="right"),
                        border_style="blue",
                        title="You",
                        title_align="right",
                        width=panel_width,
                    )
                    console.print(user_panel, justify="right")
    
                    # Get current state and pretty print it with syntax highlighting
                    with chat_session.remote_state() as state:
                        json_str = state.model_dump_json(indent=2)
                        syntax = Syntax(json_str, "json", theme="monokai", word_wrap=True)
                        assistant_panel = Panel(
                            syntax,
                            border_style="green",
                            title="Assistant",
                            title_align="left",
                            width=panel_width,
                        )
                    console.print(assistant_panel, justify="left")
                    continue
    
                # Regular chat message handling
                user_panel = Panel(
                    Text(user_input, justify="right"),
                    border_style="blue",
                    title="You",
                    title_align="right",
                    width=panel_width,
                )
                console.print(user_panel, justify="right")
    
                console.print("[bold green]Sending...[/bold green]")
                async for message in chat.stream_messages(prompt=user_input):
                    assistant_panel = Panel(
                        Markdown(str(message)),
                        border_style="green",
                        title="Assistant",
                        title_align="left",
                        width=panel_width,
                    )
                    console.print(assistant_panel, justify="left")
    
    
    if __name__ == "__main__":
        client = DecisionAI()
    
        # Load input data
        input_data = ExampleInputData(a=3, b=5, c=20, cost={"c1": 4, "c2": 6})
    
        # Print help text
        help_panel = Panel(
            Text.from_markup(
                "[bold]Available Commands[/bold]\n\n"
                "[cyan]status[/cyan] - Show current chat session state\n"
                "[cyan]q[/cyan]      - Quit the chat session\n"
                "Any other input will be sent as a prompt to the model"
            ),
            title="Help",
            border_style="yellow",
        )
        console.print(help_panel)
    
        # Create a chat session with the model.
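        # Replace <YOUR_MODEL_ID> with the model ID returned by decision_ai deploy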
        chat_session = client.create_chat_session(input_data, opt_model_id="<YOUR_MODEL_ID>")
        asyncio.run(interactive_chat(chat_session))
    

    Run it with:

    python simple_example_chat.py
    

You can now modify the model through natural language and solve it by simply typing solve.

Next Steps#