Chat Connection#
Note
This class is typically used through decision_ai.ChatSession.connect() rather than being instantiated directly.
The decision_ai.ChatConnection class provides real-time Server-Sent Events (SSE) communication with DecisionAI. It enables:
Streaming responses from DecisionAI in real time
Sending prompts and receiving responses
Automatic handling of model validation and solving
Real-time updates on model formulation and changes
Error handling and state management
Usage Example#
# Connect to a chat session for real-time communication
async with session.connect() as connection:
    # Send a prompt and stream responses
    async for message in connection.stream_messages("Create a vehicle routing model"):
        if message.kind == "chat_response":
            print(f"DecisionAI Response: {message.content}")
        elif message.kind == "tool_call":
            print(f"Tool being called: {message.content}")
        elif message.kind == "tool_output":
            print(f"Tool output: {message.content}")

    # The connection automatically handles:
    # - Model validation requests
    # - Solve requests
    # - Error handling

    # You can access the current solution if available
    if connection.current_solution:
        print(f"Objective value: {connection.current_solution.objective_value}")
Advanced Usage#
The chat connection supports advanced control over message handling:
Control which message types to receive
Access solution objects directly
Handle validation and solve requests manually if needed
Customize solver parameters
For example:
async with session.connect() as connection:
    # Show all message types (including protocol messages)
    async for message in connection.stream_messages(
        "Create a staff scheduling model",
        exclude_message_types=None,  # Show all messages
    ):
        print(f"Message type: {message.kind}")
        print(f"Content: {message.content}")

        # You can handle different message types
        match message.kind:
            case "chat_response":
                print("Got a chat response")
            case "tool_call":
                print("DecisionAI is using a tool")
            case "tool_output":
                print("Tool produced output")
            case "ask_for_validation":
                print("Model needs validation")
            case "ask_for_solve":
                print("Model needs to be solved")

    # Build model incrementally
    steps = [
        "Add a constraint: each shift needs at least 2 people",
        "Update the objective to maximize employee satisfaction",
        "Add soft constraints for employee preferences",
        "Solve the model",
    ]
    for step in steps:
        async for message in connection.stream_messages(step):
            if message.kind == "chat_response":
                print(f"Step response: {message.content}")
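The list above also mentions customizing solver parameters, which the example does not show. Here is a minimal sketch, assuming session.connect() forwards keyword arguments such as solver_kwargs to the ChatConnection constructor (documented below); the option name "time_limit" is hypothetical and depends on the underlying solver:

# Sketch: customizing solver parameters via connect(). Assumes connect()
# forwards solver_kwargs to ChatConnection; "time_limit" is a hypothetical
# option whose availability depends on the solver in use.
async with session.connect(solver_kwargs={"time_limit": 60}) as connection:
    async for message in connection.stream_messages("Solve the model"):
        if message.kind == "chat_response":
            print(message.content)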
Class Reference#
- class decision_ai.ChatConnection(request_handler: DecisionAIRequestHandler, chat_session: ChatSession, opt_model_executor: OptModelExecutor, *, py_class_name: str | None = None, propagate_solve_errors: bool = False, solver_kwargs: dict | None = None, post_solve_callback: Callable[[Solution | None, Exception | None], None] | None = None)#
Bases: object
A context manager for real-time chat interactions.
Don’t construct this class directly. Use ChatSession.connect() instead.
- Parameters:
request_handler (DecisionAIRequestHandler) – Handler for making API requests
chat_session (ChatSession) – The chat session to connect to
opt_model_executor (OptModelExecutor) – Executor for the optimization model
py_class_name (str, optional) – Name of the optimization model class in the Python code. Defaults to None.
propagate_solve_errors (bool, optional) – Whether to propagate solve errors to the caller. Defaults to False.
solver_kwargs (dict, optional) – Additional keyword arguments to pass to the solver. Defaults to None.
post_solve_callback (Callable[[Solution | None, Exception | None], None], optional) – Callback function that is called after each solve operation with the solution (if successful) and exception (if failed). Defaults to None.
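The propagate_solve_errors and post_solve_callback parameters work together for error handling around solves. A minimal sketch, assuming session.connect() passes these keyword arguments through to ChatConnection:

# Sketch: observing each solve through post_solve_callback. Per the parameter
# description above, exactly one of solution / exception is non-None.
def on_solve(solution, exception):
    if exception is not None:
        print(f"Solve failed: {exception}")
    else:
        print(f"Solved; objective value: {solution.objective_value}")

# Assumes session.connect() forwards these keyword arguments to ChatConnection.
async with session.connect(
    propagate_solve_errors=True,  # re-raise solve errors to the caller
    post_solve_callback=on_solve,
) as connection:
    ...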
- async stream_messages(prompt: str, exclude_message_types: list[AssistantMessageKind] | tuple[AssistantMessageKind, ...] | None = ('ask_for_solve', 'ask_for_validation', 'ask_for_prompt', 'ask_for_modify_input_data'), llm_iteration_limit: int | None = None, *, reasoning_enabled: bool | None = None) → AsyncIterator[MessageContainer]#
Send a prompt and stream the assistant’s responses.
This method first checks for any pending requests from the backend (e.g., AskForSolve), handles them, and then sends the user’s prompt.
- Parameters:
prompt (str) – The user’s prompt to send.
exclude_message_types (list[AssistantMessageKind] | tuple[AssistantMessageKind, ...] | None) –
List or tuple of message types to exclude from the stream. Defaults to excluding only protocol messages that are handled automatically. Pass None to exclude no messages (show all message types).
Available message types:
chat_response: Chat responses from the assistant
tool_call: Tool calls being made by the assistant
tool_output: Output/results from tool calls
uncritical_error: Non-critical error messages
critical_error: Critical error messages
ask_for_prompt: System request for user prompt
ask_for_validation: System request for validation
ask_for_solve: System request to solve model
ask_for_modify_input_data: System request to modify input data
llm_iteration_limit (int | None) – The maximum number of iterations of the LLM to allow. If None, the default value (9) will be used.
reasoning_enabled (bool | None) – Whether to enable reasoning for processing this prompt. If None, the agent will use the current reasoning setting.
- Yields:
A stream of MessageContainer responses from the assistant.
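Both optional keyword arguments can be set per prompt. A short sketch using the documented parameters, assuming an open connection obtained from session.connect():

# Cap this prompt at three LLM iterations (the default is 9) and enable
# reasoning for this prompt only.
async for message in connection.stream_messages(
    "Tighten the vehicle capacity constraints",
    llm_iteration_limit=3,
    reasoning_enabled=True,
):
    if message.kind == "chat_response":
        print(message.content)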
Message Types#
The following message types can be received from the connection:
chat_response: Chat responses from the assistant
tool_call: Tool calls being made by the assistant
tool_output: Output/results from tool calls
uncritical_error: Non-critical error messages
critical_error: Critical error messages
ask_for_prompt: System request for user prompt
ask_for_validation: System request for validation
ask_for_solve: System request to solve model
ask_for_modify_input_data: System request to modify input data
By default, protocol messages (ask_for_prompt, ask_for_validation, ask_for_solve, ask_for_modify_input_data) are handled automatically and not included in the stream. You can customize this behavior using the exclude_message_types parameter.
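For example, to hide tool chatter in addition to the protocol messages, pass a tuple that keeps the protocol kinds excluded and adds the tool kinds. A minimal sketch, assuming an open connection and that excluded protocol messages are still handled automatically, as the default behavior described above suggests:

# Exclude the protocol messages (still handled automatically) plus tool
# messages, so the stream carries only chat responses and errors.
excluded = (
    "ask_for_solve",
    "ask_for_validation",
    "ask_for_prompt",
    "ask_for_modify_input_data",
    "tool_call",
    "tool_output",
)
async for message in connection.stream_messages(
    "Explain the current model",
    exclude_message_types=excluded,
):
    print(f"{message.kind}: {message.content}")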