AI Integration¶
Structured result types for AI-assisted simulation analysis and recommendations.

The AI integration layer for happysimulator provides rich result wrappers, comparison tools, and recommendation generators for analyzing simulation output.
Recommendation
dataclass
¶
An actionable suggestion based on simulation analysis.
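The fields of `Recommendation` are not listed on this page. As a purely hypothetical sketch (the field names `category`, `message`, and `severity` are assumptions, not the library's actual attributes), an actionable suggestion might be modeled as:

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    # Hypothetical fields -- the real happysimulator dataclass may differ.
    category: str   # e.g. "capacity", "tail_latency"
    message: str    # human-readable suggestion
    severity: str   # e.g. "info", "warning", "critical"


rec = Recommendation(
    category="capacity",
    message="Queue depth grows without bound; add servers.",
    severity="warning",
)
print(f"[{rec.severity}] {rec.category}: {rec.message}")
```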
MetricDiff
dataclass
¶
```python
MetricDiff(
    name: str,
    mean_a: float,
    mean_b: float,
    mean_change_pct: float,
    p99_a: float,
    p99_b: float,
    p99_change_pct: float,
)
```
Difference between a single metric across two simulation runs.
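As a sketch of how the change percentages relate to the raw means and p99 values (the `diff_metric` helper is illustrative, not part of the library; percent change is assumed to be relative to run A):

```python
from dataclasses import dataclass


@dataclass
class MetricDiff:
    name: str
    mean_a: float
    mean_b: float
    mean_change_pct: float
    p99_a: float
    p99_b: float
    p99_change_pct: float


def diff_metric(name, mean_a, mean_b, p99_a, p99_b):
    # Illustrative helper: percent change relative to run A.
    pct = lambda a, b: (b - a) / a * 100.0 if a else 0.0
    return MetricDiff(name, mean_a, mean_b, pct(mean_a, mean_b),
                      p99_a, p99_b, pct(p99_a, p99_b))


d = diff_metric("latency", mean_a=10.0, mean_b=12.0, p99_a=50.0, p99_b=40.0)
# mean grew 20%, p99 dropped 20%
```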
SimulationComparison
dataclass
¶
```python
SimulationComparison(
    result_a: SimulationResult,
    result_b: SimulationResult,
    metric_diffs: dict[str, MetricDiff] = dict(),
)
```
Side-by-side comparison of two simulation runs.
to_prompt_context ¶
Format comparison as structured text for AI consumption.
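A minimal sketch of what such a structured text rendering might look like (the exact format `to_prompt_context` emits is not documented here; this layout and the `render_comparison` helper are assumptions):

```python
def render_comparison(metric_diffs):
    # metric_diffs: dict of name -> (mean_a, mean_b, mean_change_pct).
    # Assumed shape for illustration only.
    lines = ["## Simulation comparison (A vs B)"]
    for name, (mean_a, mean_b, pct) in sorted(metric_diffs.items()):
        lines.append(f"- {name}: mean {mean_a:.2f} -> {mean_b:.2f} ({pct:+.1f}%)")
    return "\n".join(lines)


text = render_comparison({"latency": (10.0, 12.0, 20.0)})
print(text)
```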
SimulationResult
dataclass
¶
```python
SimulationResult(
    summary: SimulationSummary,
    analysis: SimulationAnalysis,
    latency: Data | None = None,
    queue_depth: dict[str, Data] = dict(),
    throughput: Data | None = None,
    recommendations: list[Any] = list(),
)
```
Rich simulation result with analysis, comparison, and AI-friendly output.
Works with any simulation — not tied to a specific builder pattern.
from_run
classmethod
¶
```python
from_run(
    summary: SimulationSummary,
    *,
    latency: Data | None = None,
    queue_depth: dict[str, Data] | None = None,
    throughput: Data | None = None,
    **named_metrics: Data,
) -> SimulationResult
```
Create a SimulationResult by running analyze() automatically.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `summary` | `SimulationSummary` | SimulationSummary from `Simulation.run()`. | *required* |
| `latency` | `Data \| None` | Latency time-series data (e.g., from `LatencyTracker.data`). | `None` |
| `queue_depth` | `dict[str, Data] \| None` | Queue depth data keyed by server name. | `None` |
| `throughput` | `Data \| None` | Throughput time-series data. | `None` |
| `**named_metrics` | `Data` | Additional named `Data` objects to analyze. | `{}` |
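The flow behind `from_run` — analyze each supplied series and bundle everything into one result — can be sketched with local stand-ins (the `SimulationResult` below is a simplified stand-in, and the inline `analyze` is an assumption; the real classmethod runs happysimulator's own `analyze()`):

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class SimulationResult:
    # Local stand-in for illustration; the real class holds richer objects.
    summary: dict
    analysis: dict
    extra_metrics: dict = field(default_factory=dict)

    @classmethod
    def from_run(cls, summary, *, latency=None, **named_metrics):
        # Assumed behavior: run a simple analysis over every provided series.
        def analyze(xs):
            return {"mean": mean(xs), "p99": sorted(xs)[int(0.99 * (len(xs) - 1))]}

        analysis = {"latency": analyze(latency)} if latency else {}
        extras = {name: analyze(series) for name, series in named_metrics.items()}
        return cls(summary=summary, analysis=analysis, extra_metrics=extras)


result = SimulationResult.from_run({"events": 1000}, latency=[1.0, 2.0, 3.0])
```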
to_prompt_context ¶
Generate AI-optimized summary text.
Includes analysis output plus recommendations.
compare ¶
Compare this result with another.
SweepResult
dataclass
¶
Results from a parametric sweep across multiple simulation runs.
best_by ¶
Find the result with the best (lowest) value for a metric.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `metric` | `str` | `"latency"` or a `queue_depth` key. | `'latency'` |
| `stat` | `str` | `"p99"`, `"mean"`, `"p50"`, etc. | `'p99'` |

Returns:

| Type | Description |
|---|---|
| `SimulationResult` | The SimulationResult with the lowest value for the given metric+stat. |
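The selection logic `best_by` describes — pick the run with the lowest value of the given stat — can be sketched as follows (the dict layout of each run is an assumption for illustration; the real `SweepResult` stores `SimulationResult` objects):

```python
def best_by(results, metric="latency", stat="p99"):
    # results: list of dicts like {"params": ..., "stats": {metric: {stat: value}}}.
    # Lowest value wins, matching the documented "best (lowest)" semantics.
    return min(results, key=lambda r: r["stats"][metric][stat])


runs = [
    {"params": {"servers": 2}, "stats": {"latency": {"p99": 120.0}}},
    {"params": {"servers": 4}, "stats": {"latency": {"p99": 45.0}}},
]
best = best_by(runs)  # the servers=4 run has the lowest p99
```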
to_prompt_context ¶
Format sweep results as a table for AI consumption.
generate_recommendations ¶
Analyze results and suggest improvements.
Rules:

- Queue saturation: queue depth growing over time -> more capacity
- Underutilization: low utilization -> fewer servers
- Tail latency: high p99/p50 ratio -> investigate variance
- Phase transitions: degraded phases detected -> capacity planning
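The four rules can be sketched as simple threshold checks (the threshold values, parameter names, and return type here are assumptions for illustration; the real heuristics in happysimulator may differ):

```python
def generate_recommendations(queue_depth_slope, utilization, p50, p99, degraded_phases):
    # Illustrative thresholds only -- not the library's actual tuning.
    recs = []
    if queue_depth_slope > 0:
        recs.append("Queue saturation: queue depth grows over time -> add capacity")
    if utilization < 0.3:
        recs.append("Underutilization: low utilization -> fewer servers")
    if p50 > 0 and p99 / p50 > 10:
        recs.append("Tail latency: high p99/p50 ratio -> investigate variance")
    if degraded_phases:
        recs.append("Phase transitions: degraded phases detected -> capacity planning")
    return recs


recs = generate_recommendations(queue_depth_slope=0.5, utilization=0.9,
                                p50=5.0, p99=80.0, degraded_phases=[])
```

A saturated queue plus a heavy tail (p99/p50 = 16 here) triggers the first and third rules; a busy system is not flagged as underutilized.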