Behavior¶
Behavioral simulation components for modeling human agents.
Provides personality traits, decision models, social graphs, populations, and environment mediation for simulating individual and collective human behavior such as product adoption, opinion dynamics, and policy impact.
Agent ¶
Agent(
name: str,
traits: TraitSet | None = None,
decision_model: DecisionModel | None = None,
state: AgentState | None = None,
seed: int | None = None,
heartbeat_interval: float = 0.0,
action_delay: float = 0.0,
)
Bases: Entity
A behavioral agent that receives stimuli and makes decisions.
The agent maintains personality traits, mutable internal state, and a pluggable decision model. Incoming events trigger the decision pipeline: state decay -> memory recording -> decision -> action.
Action handlers are registered per action name and produce downstream events.
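The four pipeline stages can be sketched with toy stand-ins (ToyAgent, its fields, and the decision rule below are illustrative only, not the library's API):

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    """Toy stand-in for Agent; fields and the decision rule are illustrative only."""
    energy: float = 1.0
    memories: list = field(default_factory=list)
    actions: list = field(default_factory=list)

def handle_event(agent, event, dt_seconds):
    """The documented pipeline: state decay -> memory recording -> decision -> action."""
    agent.energy = max(0.0, agent.energy - 0.005 * dt_seconds)   # 1. state decay
    agent.memories.append(event)                                 # 2. record memory
    action = "act" if agent.energy > 0.5 else None               # 3. decide (or abstain)
    if action is not None:
        agent.actions.append(action)                             # 4. run action handler

a = ToyAgent()
handle_event(a, {"type": "stimulus"}, dt_seconds=10.0)
# one memory recorded, energy decayed, "act" handler dispatched
```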
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Unique identifier for this agent. | required |
| traits | TraitSet \| None | Personality trait vector. | None |
| decision_model | DecisionModel \| None | Strategy for choosing actions. | None |
| state | AgentState \| None | Initial internal state (defaults to a fresh AgentState). | None |
| seed | int \| None | Random seed for deterministic behavior. | None |
| heartbeat_interval | float | If > 0, schedule periodic self-maintenance events (seconds). | 0.0 |
| action_delay | float | Simulated delay (seconds) between decision and action execution. | 0.0 |
BoundedRationalityModel ¶
Satisficing (Simon): accept the first option exceeding an aspiration level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| utility_fn | UtilityFunction | Maps (choice, context) to a scalar utility. | required |
| aspiration | float | Minimum acceptable utility threshold. | 0.5 |
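A minimal standalone sketch of the satisficing rule described above (the toy utility function is illustrative, not part of the library):

```python
def satisfice(choices, utility_fn, aspiration=0.5):
    """Return the first choice whose utility meets the aspiration level, else None."""
    for choice in choices:
        if utility_fn(choice) >= aspiration:
            return choice
    return None  # abstain: nothing reached the aspiration level

# toy utility: cheaper is better
prices = [{"price": 0.9}, {"price": 0.2}, {"price": 0.1}]
picked = satisfice(prices, lambda c: 1.0 - c["price"])
# returns {"price": 0.2}: the first option scoring >= 0.5 is accepted,
# even though a strictly better one follows -- that is the point of satisficing
```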
Choice dataclass ¶
A candidate action the agent may select.
Attributes:

| Name | Type | Description |
|---|---|---|
| action | str | Short identifier for the action (e.g. "buy", "wait"). |
| context | dict[str, Any] | Arbitrary metadata about this choice. |
CompositeModel ¶
Hybrid decision making via weighted voting across sub-models.
Each sub-model votes for a choice; votes are weighted and tallied.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| models | list[tuple[DecisionModel, float]] | List of (DecisionModel, weight) pairs. | required |
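The weighted-voting scheme can be sketched as follows (sub-models are represented here as plain callables standing in for DecisionModel.decide):

```python
from collections import defaultdict

def weighted_vote(models, context):
    """Tally weighted votes from (model, weight) pairs; a model may abstain with None."""
    tally = defaultdict(float)
    for model, weight in models:
        vote = model(context)              # stand-in for DecisionModel.decide()
        if vote is not None:
            tally[vote] += weight
    return max(tally, key=tally.get) if tally else None

submodels = [
    (lambda ctx: "buy", 0.7),
    (lambda ctx: "wait", 0.5),
    (lambda ctx: "buy", 0.2),
]
winner = weighted_vote(submodels, {})
# "buy" wins with a combined weight of 0.9 against 0.5 for "wait"
```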
DecisionContext dataclass ¶
DecisionContext(
traits: TraitSet,
state: AgentState,
choices: list[Choice],
stimulus: dict[str, Any] = dict(),
environment: dict[str, Any] = dict(),
social_context: dict[str, Any] = dict(),
)
Everything an agent knows when making a decision.
Attributes:

| Name | Type | Description |
|---|---|---|
| traits | TraitSet | The agent's personality traits. |
| state | AgentState | The agent's current internal state. |
| choices | list[Choice] | Available actions to choose from. |
| stimulus | dict[str, Any] | Metadata about the triggering event. |
| environment | dict[str, Any] | Shared environment state (prices, policies, etc.). |
| social_context | dict[str, Any] | Information about peer behavior. |
DecisionModel ¶
Bases: Protocol
Protocol for agent decision-making strategies.
decide ¶
Select a choice from the context, or None to abstain.
Rule dataclass ¶
A condition-action pair for rule-based decision making.
Attributes:

| Name | Type | Description |
|---|---|---|
| condition | RuleCondition | Predicate that tests whether this rule applies. |
| action | str | The action to take if the condition is met. |
| priority | int | Higher-priority rules are evaluated first. |
RuleBasedModel ¶
Heuristic decision making via priority-ordered if-then rules.
Rules are evaluated in descending priority order. The first rule whose condition is satisfied selects the matching choice.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| rules | list[Rule] | List of Rule instances. | required |
| default_action | str \| None | Action to fall back to if no rule fires. | None |
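The first-match evaluation order can be sketched like this (rules are represented as plain (condition, action, priority) tuples rather than Rule instances):

```python
def rule_decide(rules, context, default_action=None):
    """Evaluate (condition, action, priority) rules in descending priority order."""
    for condition, action, _priority in sorted(rules, key=lambda r: -r[2]):
        if condition(context):
            return action                  # first satisfied rule wins
    return default_action                  # fallback when no rule fires

rules = [
    (lambda ctx: True, "wait", 0),                 # low-priority catch-all
    (lambda ctx: ctx["price"] < 10, "buy", 1),     # higher priority, checked first
]
# "buy" fires first when its condition holds; otherwise the catch-all answers
```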
SocialInfluenceModel ¶
Conformity-based decision making weighted by peer behavior.
Looks at context.social_context["peer_actions"] (a dict mapping action name to count) and selects proportionally, weighted by the agent's agreeableness trait.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| individual_fn | UtilityFunction | Fallback utility function for individual preference. | required |
| conformity_weight | float | Base weight given to peer actions (0-1). | 0.5 |
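A sketch of the conformity idea, sampling actions in proportion to peer counts scaled by agreeableness. The exact weighting formula below is an assumption for illustration, not the library's implementation:

```python
import random

def conformist_pick(peer_actions, agreeableness, conformity_weight=0.5, rng=None):
    """Sample an action in proportion to peer counts, scaled by agreeableness.

    The weighting formula here is illustrative only.
    """
    if not peer_actions:
        return None
    rng = rng or random.Random()
    actions = list(peer_actions)
    # more agreeable agents weight popular peer actions more heavily
    weights = [1.0 + conformity_weight * agreeableness * peer_actions[a]
               for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

pick = conformist_pick({"buy": 8, "wait": 2}, agreeableness=0.9,
                       rng=random.Random(0))
```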
UtilityModel ¶
Rational choice: maximize a utility function.
Optionally applies softmax temperature for stochastic selection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| utility_fn | UtilityFunction | Maps (choice, context) to a scalar utility. | required |
| temperature | float | Softmax temperature; 0 = deterministic argmax. | 0.0 |
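The argmax-vs-softmax behavior can be sketched in a few lines (a standalone illustration, not the library's code):

```python
import math
import random

def softmax_pick(choices, utility_fn, temperature=0.0, rng=None):
    """Pick a choice by utility; temperature 0 is argmax, > 0 is softmax sampling."""
    utils = [utility_fn(c) for c in choices]
    if temperature <= 0:
        return choices[utils.index(max(utils))]       # deterministic argmax
    rng = rng or random.Random()
    m = max(utils)                                    # shift for numerical stability
    weights = [math.exp((u - m) / temperature) for u in utils]
    return rng.choices(choices, weights=weights, k=1)[0]

u = {"buy": 0.9, "wait": 0.4}
best = softmax_pick(["buy", "wait"], u.get)                      # argmax at T=0
sampled = softmax_pick(["buy", "wait"], u.get, temperature=1.0,
                       rng=random.Random(7))                     # stochastic at T>0
```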
Environment ¶
Environment(
name: str,
agents: list[Agent] | None = None,
social_graph: SocialGraph | None = None,
shared_state: dict[str, Any] | None = None,
influence_model: InfluenceModel | None = None,
seed: int | None = None,
)
Bases: Entity
Mediator entity that routes stimuli to behavioral agents.
Handles four event types:
- BroadcastStimulus: Forward to all registered agents.
- TargetedStimulus: Forward to named agents only.
- InfluencePropagation: Iterate the social graph and create SocialMessage events between connected agents.
- StateChange: Update a shared state variable.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Identifier for this environment. | required |
| agents | list[Agent] \| None | List of Agent entities to manage. | None |
| social_graph | SocialGraph \| None | Optional social graph for influence propagation. | None |
| shared_state | dict[str, Any] \| None | Initial shared state (prices, policies, etc.). | None |
| influence_model | InfluenceModel \| None | Model for computing opinion updates. | None |
| seed | int \| None | Random seed for deterministic behavior. | None |
Relationship dataclass ¶
Relationship(
source: str,
target: str,
weight: float = 0.5,
trust: float = 0.5,
interaction_count: int = 0,
)
A directed edge in the social graph.
Attributes:

| Name | Type | Description |
|---|---|---|
| source | str | Name of the source agent. |
| target | str | Name of the target agent. |
| weight | float | General relationship strength (0-1). |
| trust | float | How much the source trusts the target (0-1). |
| interaction_count | int | Number of interactions recorded. |
SocialGraph ¶
Directed weighted graph of agent relationships.
Edges are stored as adjacency lists keyed by source agent name. Supports O(1) neighbor queries and several common graph generation algorithms.
add_edge ¶
Add a directed edge from source to target.
add_bidirectional_edge ¶
add_bidirectional_edge(
a: str, b: str, weight: float = 0.5, trust: float = 0.5
) -> tuple[Relationship, Relationship]
Add edges in both directions.
get_edge ¶
Return the relationship from source to target, or None.
influence_weights ¶
Return {influencer_name: weight} for edges pointing at name.
record_interaction ¶
Increment the interaction count on an existing edge.
complete classmethod ¶
complete(
names: list[str],
weight: float = 0.5,
trust: float = 0.5,
rng: Random | None = None,
) -> SocialGraph
Fully connected graph (every pair has bidirectional edges).
random_erdos_renyi classmethod ¶
random_erdos_renyi(
names: list[str],
p: float = 0.1,
weight: float = 0.5,
trust: float = 0.5,
rng: Random | None = None,
) -> SocialGraph
Erdos-Renyi random graph: each directed edge exists with probability p.
small_world classmethod ¶
small_world(
names: list[str],
k: int = 4,
p_rewire: float = 0.1,
weight: float = 0.5,
trust: float = 0.5,
rng: Random | None = None,
) -> SocialGraph
Watts-Strogatz small-world graph.
Starts with a ring lattice where each node connects to its k nearest neighbors, then rewires each edge with probability p_rewire.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| names | list[str] | Node names. | required |
| k | int | Number of nearest neighbors in the ring (must be even). | 4 |
| p_rewire | float | Probability of rewiring each edge. | 0.1 |
| weight | float | Default edge weight. | 0.5 |
| trust | float | Default edge trust. | 0.5 |
| rng | Random \| None | Random number generator for determinism. | None |
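The ring-lattice-plus-rewiring construction can be sketched on plain (source, target) tuples, leaving out the weight/trust bookkeeping (a standalone illustration, not the library's code):

```python
import random

def small_world_edges(names, k=4, p_rewire=0.1, rng=None):
    """Watts-Strogatz sketch: ring lattice over k nearest neighbors, then rewiring."""
    rng = rng or random.Random()
    n = len(names)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):             # k/2 neighbors on each side
            edges.add((names[i], names[(i + j) % n]))
            edges.add((names[i], names[(i - j) % n]))
    rewired = set()
    for src, dst in edges:
        if rng.random() < p_rewire:                # redirect to a random other node
            dst = rng.choice([m for m in names if m != src])
        rewired.add((src, dst))
    return rewired

edges = small_world_edges([f"a{i}" for i in range(10)], k=4,
                          rng=random.Random(42))
```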
AgentState dataclass ¶
AgentState(
    satisfaction: float = 0.5,
    energy: float = 1.0,
    mood: float = 0.5,
    beliefs: dict[str, float] = dict(),
    needs: dict[str, float] = dict(),
    knowledge: set[str] = set(),
    _memories: deque[Memory] = deque(maxlen=100),
)
Mutable internal state of a behavioral agent.
All scalar fields are bounded to [0, 1] except beliefs which range [-1, 1] (opinion strength).
Attributes:

| Name | Type | Description |
|---|---|---|
| satisfaction | float | Overall satisfaction level (0-1). |
| energy | float | Energy/motivation level (0-1). |
| mood | float | Current mood (0 = negative, 0.5 = neutral, 1 = positive). |
| beliefs | dict[str, float] | Topic-keyed opinion values (-1 to 1). |
| needs | dict[str, float] | Named need levels (0 = satisfied, 1 = urgent). |
| knowledge | set[str] | Set of known facts/topics. |
add_memory ¶
Record a new memory, evicting the oldest if at capacity.
recent_memories ¶
Return the n most recent memories (newest first).
average_recent_valence ¶
Mean valence of the n most recent memories.
decay ¶
Apply time-based decay to needs, mood, and energy.
- Needs drift upward (grow more urgent) at 0.01/s.
- Mood drifts toward neutral (0.5) at 0.02/s.
- Energy drifts downward at 0.005/s.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dt_seconds | float | Elapsed simulation time in seconds. | required |
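The three drift rules above can be sketched on a plain-dict state (a standalone illustration of the documented rates, not the library's implementation):

```python
def decay(state, dt_seconds):
    """Apply the documented drift rates over dt_seconds; scalars stay in [0, 1]."""
    clamp = lambda x: min(1.0, max(0.0, x))
    # needs grow more urgent at 0.01/s
    state["needs"] = {k: clamp(v + 0.01 * dt_seconds)
                      for k, v in state["needs"].items()}
    # mood drifts toward neutral (0.5) at 0.02/s, without overshooting
    step = min(abs(state["mood"] - 0.5), 0.02 * dt_seconds)
    state["mood"] += -step if state["mood"] > 0.5 else step
    # energy drains at 0.005/s
    state["energy"] = clamp(state["energy"] - 0.005 * dt_seconds)
    return state

s = decay({"needs": {"food": 0.0}, "mood": 1.0, "energy": 1.0}, dt_seconds=10.0)
# needs["food"] -> 0.1, mood -> 0.8, energy -> 0.95
```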
Memory dataclass ¶
Memory(
time: float,
event_type: str,
source: str = "",
valence: float = 0.0,
details: dict[str, Any] = dict(),
)
A single episodic memory recorded by an agent.
Attributes:

| Name | Type | Description |
|---|---|---|
| time | float | Simulation time (seconds) when the event occurred. |
| event_type | str | Type label of the triggering event. |
| source | str | Name of the entity or agent that originated the event. |
| valence | float | Emotional valence of the memory (-1.0 negative to 1.0 positive). |
| details | dict[str, Any] | Arbitrary extra information. |
NormalTraitDistribution ¶
Per-dimension normal distribution, clamped to [0, 1].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| means | dict[str, float] | Mean value per dimension. | required |
| stds | dict[str, float] \| None | Standard deviation per dimension. | None |
PersonalityTraits dataclass ¶
Immutable float-vector personality model.
Each dimension is a value in [0.0, 1.0]. Dimensions are stored in a dict keyed by dimension name.
Attributes:

| Name | Type | Description |
|---|---|---|
| dimensions | dict[str, float] | Mapping from trait name to value (0.0-1.0). |
big_five staticmethod ¶
big_five(
openness: float = 0.5,
conscientiousness: float = 0.5,
extraversion: float = 0.5,
agreeableness: float = 0.5,
neuroticism: float = 0.5,
) -> PersonalityTraits
Construct traits using the Big Five personality model.
TraitDistribution ¶
Bases: Protocol
Protocol for sampling TraitSet instances from a distribution.
TraitSet ¶
UniformTraitDistribution ¶
Uniform distribution across all dimensions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dimension_names | Sequence[str] | Names of the trait dimensions to sample. | required |
BoundedConfidenceModel ¶
Hegselmann-Krause bounded confidence model.
Only considers influencers whose opinions are within epsilon of the agent's current opinion, then averages them.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| epsilon | float | Maximum opinion distance to consider. | 0.3 |
| self_weight | float | Weight the agent places on its own opinion. | 0.5 |
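The bounded-confidence update can be sketched as (a standalone illustration of the Hegselmann-Krause rule described above):

```python
def hk_update(current, influencer_opinions, epsilon=0.3, self_weight=0.5):
    """Average only the opinions within epsilon of the current one."""
    close = [o for o in influencer_opinions if abs(o - current) <= epsilon]
    if not close:
        return current                    # nobody is close enough to matter
    return self_weight * current + (1 - self_weight) * sum(close) / len(close)

# only 0.2 is within 0.3 of the current opinion 0.0; 0.9 is ignored
updated = hk_update(0.0, [0.2, 0.9])
# moves halfway toward 0.2, i.e. to 0.1
```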
DeGrootModel ¶
Weighted average convergence (consensus model).
At each round, the agent's opinion becomes the weighted average of its own opinion and those of its influencers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| self_weight | float | Weight the agent places on its own opinion. | 0.5 |
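The per-round DeGroot update can be sketched as (a standalone illustration, not the library's code):

```python
def degroot_update(current, influencer_opinions, weights, self_weight=0.5):
    """Weighted average of own opinion and the influencers' weighted mean."""
    if not influencer_opinions:
        return current
    social = (sum(o * w for o, w in zip(influencer_opinions, weights))
              / sum(weights))
    return self_weight * current + (1 - self_weight) * social

updated = degroot_update(0.0, [1.0, 1.0], [0.5, 0.5])
# moves halfway toward the unanimous influencer opinion: 0.5
```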
InfluenceModel ¶
Bases: Protocol
Protocol for computing how opinions change under social influence.
compute_influence ¶
compute_influence(
current: float,
influencer_opinions: list[float],
weights: list[float],
rng: Random,
) -> float
Compute the updated opinion value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| current | float | The agent's current opinion on a topic (-1 to 1). | required |
| influencer_opinions | list[float] | Opinions of influencing agents. | required |
| weights | list[float] | Corresponding influence weights (same length). | required |
| rng | Random | Random number generator. | required |

Returns:

| Type | Description |
|---|---|
| float | Updated opinion value. |
VoterModel ¶
Random adoption from one neighbor.
At each round, the agent randomly selects one influencer (weighted by influence weight) and adopts their opinion entirely.
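The voter rule can be sketched as (a standalone illustration of the weighted single-neighbor adoption described above):

```python
import random

def voter_update(current, influencer_opinions, weights, rng=None):
    """Adopt one influencer's opinion wholesale, sampled by influence weight."""
    if not influencer_opinions:
        return current
    rng = rng or random.Random()
    return rng.choices(influencer_opinions, weights=weights, k=1)[0]

updated = voter_update(0.0, [1.0, -1.0], [0.9, 0.1], rng=random.Random(3))
```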
DemographicSegment dataclass ¶
DemographicSegment(
name: str,
fraction: float,
trait_distribution: TraitDistribution | None = None,
decision_model_factory: Callable[[], DecisionModel]
| None = None,
initial_state_factory: Callable[[], AgentState]
| None = None,
seed: int | None = None,
)
Description of a sub-population segment.
Attributes:

| Name | Type | Description |
|---|---|---|
| name | str | Segment label (e.g. "innovators", "majority"). |
| fraction | float | Proportion of the total population (0 to 1). |
| trait_distribution | TraitDistribution \| None | Distribution for sampling personality traits. |
| decision_model_factory | Callable[[], DecisionModel] \| None | Callable that creates a DecisionModel per agent. |
| initial_state_factory | Callable[[], AgentState] \| None | Optional callable that creates an initial AgentState. |
| seed | int \| None | Optional seed for the trait distribution RNG. |
Population ¶
A collection of agents with an associated social graph.
Use the class methods uniform() and from_segments() to
construct populations conveniently.
Attributes:

| Name | Type | Description |
|---|---|---|
| agents | list[Agent] | The list of Agent instances. |
| social_graph | SocialGraph | The social graph connecting agents. |
uniform classmethod ¶
uniform(
size: int,
decision_model: DecisionModel | None = None,
graph_type: str = "small_world",
seed: int | None = None,
name_prefix: str = "agent",
) -> Population
Create a population with uniform trait distribution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| size | int | Number of agents. | required |
| decision_model | DecisionModel \| None | Shared decision model for all agents. | None |
| graph_type | str | One of "small_world", "complete", "random". | 'small_world' |
| seed | int \| None | Random seed for reproducibility. | None |
| name_prefix | str | Prefix for auto-generated agent names. | 'agent' |
from_segments classmethod ¶
from_segments(
total_size: int,
segments: list[DemographicSegment],
graph_type: str = "small_world",
seed: int | None = None,
name_prefix: str = "agent",
) -> Population
Create a population from demographic segments.
Each segment contributes floor(fraction * total_size) agents.
Remaining agents are assigned to the largest segment.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| total_size | int | Total number of agents. | required |
| segments | list[DemographicSegment] | Segment definitions with fractions summing to ~1.0. | required |
| graph_type | str | Social graph topology. | 'small_world' |
| seed | int \| None | Random seed for reproducibility. | None |
| name_prefix | str | Prefix for agent names. | 'agent' |
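The floor-plus-remainder allocation described above can be sketched on bare fractions (a standalone illustration, not the library's code):

```python
import math

def allocate(total_size, fractions):
    """floor(fraction * total) per segment; leftover agents go to the largest one."""
    counts = [math.floor(f * total_size) for f in fractions]
    counts[fractions.index(max(fractions))] += total_size - sum(counts)
    return counts

# floors give 1 + 3 + 5 = 9; the remaining agent goes to the 0.5 segment
sizes = allocate(10, [0.16, 0.34, 0.5])
# sizes == [1, 3, 6]
```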
AgentStats dataclass ¶
AgentStats(
events_received: int = 0,
decisions_made: int = 0,
actions_by_type: dict[str, int] = dict(),
social_messages_received: int = 0,
)
Frozen snapshot of per-agent statistics.
Attributes:

| Name | Type | Description |
|---|---|---|
| events_received | int | Total events handled by this agent. |
| decisions_made | int | Number of times the decision model was invoked. |
| actions_by_type | dict[str, int] | Count of each action type chosen. |
| social_messages_received | int | Number of social influence messages processed. |
EnvironmentStats dataclass ¶
EnvironmentStats(
broadcasts_sent: int = 0,
targeted_sends: int = 0,
influence_rounds: int = 0,
state_changes: int = 0,
)
Frozen snapshot of Environment entity statistics.
Attributes:

| Name | Type | Description |
|---|---|---|
| broadcasts_sent | int | Number of broadcast stimulus events dispatched. |
| targeted_sends | int | Number of targeted stimulus events dispatched. |
| influence_rounds | int | Number of influence propagation rounds executed. |
| state_changes | int | Number of shared-state mutations applied. |
PopulationStats dataclass ¶
PopulationStats(
size: int = 0,
total_events: int = 0,
total_decisions: int = 0,
total_actions: dict[str, int] = dict(),
)
Frozen snapshot of aggregate statistics across a population.
Attributes:

| Name | Type | Description |
|---|---|---|
| size | int | Number of agents in the population. |
| total_events | int | Sum of events received by all agents. |
| total_decisions | int | Sum of decisions made by all agents. |
| total_actions | dict[str, int] | Aggregate action counts across all agents. |
broadcast_stimulus ¶
broadcast_stimulus(
time: Instant | float,
environment: Environment,
stimulus_type: str,
choices: list[Choice | str | dict] | None = None,
**metadata: Any,
) -> Event
Create a broadcast stimulus event targeting an Environment.
The Environment will forward this as individual Stimulus events to all registered agents.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| time | Instant \| float | When the stimulus occurs (Instant or float seconds). | required |
| environment | Environment | The Environment entity to receive the broadcast. | required |
| stimulus_type | str | Label for the stimulus (becomes the inner event_type). | required |
| choices | list[Choice \| str \| dict] \| None | Available actions for agents (Choice, str, or dict). | None |
| **metadata | Any | Additional context passed through to agents. | {} |
influence_propagation ¶
Trigger one round of social influence propagation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| time | Instant \| float | When the influence round occurs. | required |
| environment | Environment | The Environment entity. | required |
| topic | str | The belief topic to propagate. | required |
policy_announcement ¶
policy_announcement(
time: Instant | float,
environment: Environment,
policy: str,
description: str,
valence: float = 0.0,
) -> Event
Create a policy announcement with accept/protest/ignore choices.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| time | Instant \| float | When the announcement occurs. | required |
| environment | Environment | The Environment entity. | required |
| policy | str | Policy identifier. | required |
| description | str | Human-readable description. | required |
| valence | float | Positive or negative framing (-1 to 1). | 0.0 |
price_change ¶
price_change(
time: Instant | float,
environment: Environment,
product: str,
old_price: float,
new_price: float,
) -> Event
Create a price-change broadcast with pre-built buy/wait/switch choices.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| time | Instant \| float | When the price change takes effect. | required |
| environment | Environment | The Environment entity. | required |
| product | str | Product identifier. | required |
| old_price | float | Previous price. | required |
| new_price | float | New price. | required |
targeted_stimulus ¶
targeted_stimulus(
time: Instant | float,
environment: Environment,
targets: Sequence[str],
stimulus_type: str,
choices: list[Choice | str | dict] | None = None,
**metadata: Any,
) -> Event
Create a targeted stimulus event for specific agents.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| time | Instant \| float | When the stimulus occurs. | required |
| environment | Environment | The Environment entity. | required |
| targets | Sequence[str] | Agent names to receive the stimulus. | required |
| stimulus_type | str | Label for the stimulus. | required |
| choices | list[Choice \| str \| dict] \| None | Available actions for agents. | None |
| **metadata | Any | Additional context. | {} |