ergodic_insurance package

Submodules

ergodic_insurance.accrual_manager module

Accrual and timing management for financial operations.

This module provides functionality to track timing differences between cash movements and accounting recognition, following GAAP principles.

Uses Decimal for all currency amounts to prevent floating-point precision errors.
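
Example

An illustrative usage sketch based on the signatures documented below; the exact payment dates generated by PaymentSchedule.QUARTERLY are an implementation detail, so only balances are shown:

>>> from decimal import Decimal
>>> from ergodic_insurance.accrual_manager import (
...     AccrualManager, AccrualType, PaymentSchedule
... )
>>> mgr = AccrualManager()
>>> # Accrue estimated annual taxes, to be paid on a quarterly schedule
>>> item = mgr.record_expense_accrual(
...     item_type=AccrualType.TAXES,
...     amount=Decimal("100000"),
...     payment_schedule=PaymentSchedule.QUARTERLY,
... )
>>> mgr.get_total_accrued_expenses()   # Decimal('100000')
>>> # Pay $25,000 against the oldest tax accrual (FIFO matching)
>>> applied = mgr.process_payment(AccrualType.TAXES, Decimal("25000"))
>>> mgr.get_total_accrued_expenses()   # Decimal('75000')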

class AccrualType(*values)[source]

Bases: Enum

Types of accrued items.

WAGES = 'wages'
INTEREST = 'interest'
TAXES = 'taxes'
INSURANCE_CLAIMS = 'insurance_claims'
REVENUE = 'revenue'
OTHER = 'other'
class PaymentSchedule(*values)[source]

Bases: Enum

Payment schedule types.

IMMEDIATE = 'immediate'
QUARTERLY = 'quarterly'
ANNUAL = 'annual'
CUSTOM = 'custom'
class AccrualItem(item_type: AccrualType, amount: Decimal, period_incurred: int, payment_schedule: PaymentSchedule, payment_dates: List[int] = <factory>, amounts_paid: List[Decimal] = <factory>, description: str = '') None[source]

Bases: object

Individual accrual item with tracking information.

Uses Decimal for all currency amounts to ensure precise calculations.

item_type: AccrualType
amount: Decimal
period_incurred: int
payment_schedule: PaymentSchedule
payment_dates: List[int]
amounts_paid: List[Decimal]
description: str = ''
__post_init__() None[source]

Convert amounts to Decimal if needed (runtime check for backwards compatibility).

Return type:

None

property remaining_balance: Decimal

Calculate remaining unpaid balance.

property is_fully_paid: bool

Check if accrual has been fully paid.

__deepcopy__(memo: Dict[int, Any]) AccrualItem[source]

Create a deep copy of this accrual item.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

AccrualItem

Returns:

Independent copy of this AccrualItem

class AccrualManager[source]

Bases: object

Manages accruals and timing differences for financial operations.

Tracks accrued expenses and revenues with various payment schedules, particularly focusing on quarterly tax payments and multi-year claim settlements. Uses FIFO approach for payment matching.

accrued_expenses: Dict[AccrualType, List[AccrualItem]]
accrued_revenues: List[AccrualItem]
current_period: int
__deepcopy__(memo: Dict[int, Any]) AccrualManager[source]

Create a deep copy of this accrual manager.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

AccrualManager

Returns:

Independent copy of this AccrualManager with all accruals

record_expense_accrual(item_type: AccrualType, amount: Decimal | float | int, payment_schedule: PaymentSchedule = PaymentSchedule.IMMEDIATE, payment_dates: List[int] | None = None, description: str = '') AccrualItem[source]

Record an accrued expense.

Parameters:
  • item_type (AccrualType) – Type of expense being accrued

  • amount (Union[Decimal, float, int]) – Total amount to be accrued (converted to Decimal)

  • payment_schedule (PaymentSchedule) – Schedule for payments

  • payment_dates (Optional[List[int]]) – Custom payment dates if schedule is CUSTOM

  • description (str) – Optional description of the accrual

Return type:

AccrualItem

Returns:

The created AccrualItem

record_revenue_accrual(amount: Decimal | float | int, collection_dates: List[int] | None = None, description: str = '') AccrualItem[source]

Record accrued revenue not yet collected.

Parameters:
  • amount (Union[Decimal, float, int]) – Amount of revenue accrued (converted to Decimal)

  • collection_dates (Optional[List[int]]) – Expected collection dates

  • description (str) – Optional description

Return type:

AccrualItem

Returns:

The created AccrualItem

process_payment(item_type: AccrualType, amount: Decimal | float | int, period: int | None = None) List[Tuple[AccrualItem, Decimal]][source]

Process a payment against accrued items using FIFO.

Parameters:
  • item_type (AccrualType) – Type of accrual being paid

  • amount (Union[Decimal, float, int]) – Payment amount (converted to Decimal)

  • period (Optional[int]) – Period when payment is made (defaults to current)

Return type:

List[Tuple[AccrualItem, Decimal]]

Returns:

List of (AccrualItem, amount_applied) tuples with Decimal amounts

get_quarterly_tax_schedule(annual_tax: Decimal | float | int) List[Tuple[int, Decimal]][source]

Calculate quarterly tax payment schedule.

Parameters:

annual_tax (Union[Decimal, float, int]) – Total annual tax liability (converted to Decimal)

Return type:

List[Tuple[int, Decimal]]

Returns:

List of (period, amount) tuples for quarterly payments (Decimal amounts)

get_claim_payment_schedule(claim_amount: Decimal | float | int, development_pattern: List[Decimal | float] | None = None) List[Tuple[int, Decimal]][source]

Calculate insurance claim payment schedule over multiple years.

Parameters:
  • claim_amount (Union[Decimal, float, int]) – Total claim amount (converted to Decimal)

  • development_pattern (Optional[List[Union[Decimal, float]]]) – Percentage paid each year (defaults to standard pattern)

Return type:

List[Tuple[int, Decimal]]

Returns:

List of (period, amount) tuples for claim payments (Decimal amounts)

get_total_accrued_expenses() Decimal[source]

Get total outstanding accrued expenses as Decimal.

Return type:

Decimal

get_total_accrued_revenues() Decimal[source]

Get total outstanding accrued revenues as Decimal.

Return type:

Decimal

get_accruals_by_type(item_type: AccrualType) List[AccrualItem][source]

Get all accruals of a specific type.

Parameters:

item_type (AccrualType) – Type of accrual to retrieve

Return type:

List[AccrualItem]

Returns:

List of accruals of the specified type

get_payments_due(period: int | None = None) Dict[AccrualType, Decimal][source]

Get payments due in a specific period.

Parameters:

period (Optional[int]) – Period to check (defaults to current)

Return type:

Dict[AccrualType, Decimal]

Returns:

Dictionary of payment amounts by type (Decimal values)

advance_period(periods: int = 1)[source]

Advance the current period.

Parameters:

periods (int) – Number of periods to advance

get_balance_sheet_items() Dict[str, Decimal][source]

Get accrual items for balance sheet reporting.

Return type:

Dict[str, Decimal]

Returns:

Dictionary with balance sheet line items (Decimal values)

clear_fully_paid()[source]

Remove fully paid accruals to maintain performance.

ergodic_insurance.accuracy_validator module

Numerical accuracy validation for Monte Carlo simulations.

This module provides tools to validate the numerical accuracy of optimized Monte Carlo simulations against reference implementations, ensuring that performance optimizations don’t compromise result quality.

Key features:
  • High-precision reference implementations

  • Statistical validation of distributions

  • Edge case and boundary condition testing

  • Accuracy comparison metrics

  • Detailed validation reports

Example

>>> from ergodic_insurance.accuracy_validator import AccuracyValidator
>>> import numpy as np
>>>
>>> validator = AccuracyValidator()
>>>
>>> # Compare optimized vs reference implementation
>>> optimized_results = np.random.normal(0.08, 0.02, 10000)
>>> reference_results = np.random.normal(0.08, 0.02, 10000)
>>>
>>> validation = validator.compare_implementations(
...     optimized_results, reference_results
... )
>>> print(f"Accuracy: {validation.accuracy_score:.4f}")

Google-style docstrings are used throughout for Sphinx documentation.

class ValidationResult(accuracy_score: float, mean_error: float = 0.0, max_error: float = 0.0, relative_error: float = 0.0, ks_statistic: float = 0.0, ks_pvalue: float = 0.0, passed_tests: List[str] = <factory>, failed_tests: List[str] = <factory>, edge_cases: Dict[str, bool] = <factory>) None[source]

Bases: object

Results from accuracy validation.

accuracy_score: float

Overall accuracy score (0-1)

mean_error: float = 0.0

Mean absolute error

max_error: float = 0.0

Maximum absolute error

relative_error: float = 0.0

Mean relative error

ks_statistic: float = 0.0

Kolmogorov-Smirnov test statistic

ks_pvalue: float = 0.0

Kolmogorov-Smirnov test p-value

passed_tests: List[str]

List of passed validation tests

failed_tests: List[str]

List of failed validation tests

edge_cases: Dict[str, bool]

Results from edge case testing

is_valid(tolerance: float = 0.01) bool[source]

Check if validation passes within tolerance.

Parameters:

tolerance (float) – Maximum acceptable relative error.

Return type:

bool

Returns:

True if validation passes.

summary() str[source]

Generate validation summary.

Return type:

str

Returns:

Formatted summary string.

class ReferenceImplementations[source]

Bases: object

High-precision reference implementations for validation.

These implementations prioritize accuracy over speed and serve as the ground truth for validation.
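
For orientation, the standard excess-of-loss layer arithmetic that apply_insurance_precise is expected to mirror is sketched below. This is an illustrative standalone function, not the module's actual implementation:

def apply_layer(loss: float, attachment: float, limit: float):
    # Insurer pays the part of the loss above the attachment, capped at the limit
    recovered = min(max(loss - attachment, 0.0), limit)
    # The insured retains everything else
    retained = loss - recovered
    return retained, recovered

# A $1.5M loss against a $0.5M attachment and $2M limit:
# recovered = min(1.0M, 2.0M) = $1.0M, retained = $0.5M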

static calculate_growth_rate_precise(final_assets: float, initial_assets: float, n_years: float) float[source]

Calculate growth rate with high precision.

Parameters:
  • final_assets (float) – Final asset value.

  • initial_assets (float) – Initial asset value.

  • n_years (float) – Number of years.

Return type:

float

Returns:

Precise growth rate.

static apply_insurance_precise(loss: float, attachment: float, limit: float) Tuple[float, float][source]

Apply insurance with precise calculations.

Parameters:
  • loss (float) – Loss amount.

  • attachment (float) – Insurance attachment point.

  • limit (float) – Insurance limit.

Return type:

Tuple[float, float]

Returns:

Tuple of (retained_loss, recovered_amount).

static calculate_var_precise(losses: ndarray, confidence: float) float[source]

Calculate Value at Risk with high precision.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • confidence (float) – Confidence level (e.g., 0.95).

Return type:

float

Returns:

VaR at specified confidence level.

static calculate_tvar_precise(losses: ndarray, confidence: float) float[source]

Calculate Tail Value at Risk with high precision.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • confidence (float) – Confidence level (e.g., 0.95).

Return type:

float

Returns:

TVaR at specified confidence level.
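
As a point of reference, the textbook definitions these high-precision methods correspond to can be sketched as follows (an assumption about intent, not the module's internals):

import numpy as np

def var_textbook(losses: np.ndarray, confidence: float) -> float:
    # VaR: the confidence-level quantile of the loss distribution
    return float(np.quantile(losses, confidence))

def tvar_textbook(losses: np.ndarray, confidence: float) -> float:
    # TVaR: mean loss conditional on the loss reaching or exceeding VaR
    var = np.quantile(losses, confidence)
    return float(losses[losses >= var].mean())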

static calculate_ruin_probability_precise(paths: ndarray, threshold: float = 0.0) float[source]

Calculate ruin probability with high precision.

Parameters:
  • paths (ndarray) – Array of asset paths.

  • threshold (float) – Ruin threshold.

Return type:

float

Returns:

Probability of ruin.

class StatisticalValidation[source]

Bases: object

Statistical tests for distribution validation.

static compare_distributions(data1: ndarray, data2: ndarray) Dict[str, Any][source]

Compare two distributions statistically.

Parameters:
  • data1 (ndarray) – First dataset.

  • data2 (ndarray) – Second dataset.

Return type:

Dict[str, Any]

Returns:

Dictionary of statistical test results.

static validate_statistical_properties(data: ndarray, expected_mean: float, expected_std: float, tolerance: float = 0.05) Dict[str, bool][source]

Validate statistical properties of data.

Parameters:
  • data (ndarray) – Data to validate.

  • expected_mean (float) – Expected mean value.

  • expected_std (float) – Expected standard deviation.

  • tolerance (float) – Relative tolerance for validation.

Return type:

Dict[str, bool]

Returns:

Dictionary of validation results.

class EdgeCaseTester[source]

Bases: object

Test edge cases and boundary conditions.

static test_extreme_values() Dict[str, bool][source]

Test handling of extreme values.

Return type:

Dict[str, bool]

Returns:

Dictionary of test results.

static test_boundary_conditions() Dict[str, bool][source]

Test boundary conditions.

Return type:

Dict[str, bool]

Returns:

Dictionary of test results.

class AccuracyValidator(tolerance: float = 0.01)[source]

Bases: object

Main accuracy validation engine.

Provides comprehensive validation of numerical accuracy for Monte Carlo simulations.

compare_implementations(optimized_results: ndarray, reference_results: ndarray, test_name: str = 'Implementation Comparison') ValidationResult[source]

Compare optimized implementation against reference.

Parameters:
  • optimized_results (ndarray) – Results from optimized implementation.

  • reference_results (ndarray) – Results from reference implementation.

  • test_name (str) – Name of the test being performed.

Return type:

ValidationResult

Returns:

ValidationResult with comparison metrics.

validate_growth_rates(optimized_func: Callable, test_cases: List[Tuple] | None = None) ValidationResult[source]

Validate growth rate calculations.

Parameters:
  • optimized_func (Callable) – Optimized growth rate function.

  • test_cases (Optional[List[Tuple]]) – List of (final, initial, years) test cases.

Return type:

ValidationResult

Returns:

ValidationResult for growth rate calculations.

validate_insurance_calculations(optimized_func: Callable, test_cases: List[Tuple] | None = None) ValidationResult[source]

Validate insurance calculations.

Parameters:
  • optimized_func (Callable) – Optimized insurance function.

  • test_cases (Optional[List[Tuple]]) – List of (loss, attachment, limit) test cases.

Return type:

ValidationResult

Returns:

ValidationResult for insurance calculations.

validate_risk_metrics(optimized_var: Callable, optimized_tvar: Callable, test_data: ndarray | None = None) ValidationResult[source]

Validate risk metric calculations.

Parameters:
  • optimized_var (Callable) – Optimized VaR function.

  • optimized_tvar (Callable) – Optimized TVaR function.

  • test_data (Optional[ndarray]) – Test loss data.

Return type:

ValidationResult

Returns:

ValidationResult for risk metrics.

run_full_validation() ValidationResult[source]

Run comprehensive validation suite.

Return type:

ValidationResult

Returns:

Complete ValidationResult.

generate_validation_report(results: List[ValidationResult]) str[source]

Generate comprehensive validation report.

Parameters:

results (List[ValidationResult]) – List of validation results.

Return type:

str

Returns:

Formatted validation report.

ergodic_insurance.adaptive_stopping module

Adaptive stopping criteria for Monte Carlo simulations.

This module implements adaptive stopping rules based on convergence diagnostics, allowing simulations to terminate early when convergence criteria are met.
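
Example

A minimal sketch based on the signatures below; laying chains out as (n_chains, n_iterations) is an assumption:

>>> import numpy as np
>>> from ergodic_insurance.adaptive_stopping import (
...     AdaptiveStoppingMonitor, StoppingCriteria, StoppingRule
... )
>>> criteria = StoppingCriteria(rule=StoppingRule.COMBINED, min_iterations=1000)
>>> monitor = AdaptiveStoppingMonitor(criteria=criteria)
>>> chains = np.random.normal(0.08, 0.02, size=(4, 5000))
>>> status = monitor.check_convergence(iteration=5000, chains=chains)
>>> if status.should_stop:
...     print(status)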

class StoppingRule(*values)[source]

Bases: Enum

Enumeration of available stopping rules.

R_HAT = 'r_hat'
ESS = 'ess'
RELATIVE_CHANGE = 'relative_change'
MCSE = 'mcse'
GEWEKE = 'geweke'
HEIDELBERGER = 'heidelberger'
COMBINED = 'combined'
CUSTOM = 'custom'
class StoppingCriteria(rule: StoppingRule = StoppingRule.COMBINED, r_hat_threshold: float = 1.05, min_ess: int = 1000, relative_tolerance: float = 0.01, mcse_relative_threshold: float = 0.05, min_iterations: int = 1000, max_iterations: int = 100000, check_interval: int = 100, patience: int = 3, confidence_level: float = 0.95) None[source]

Bases: object

Configuration for stopping criteria.

rule: StoppingRule = 'combined'
r_hat_threshold: float = 1.05
min_ess: int = 1000
relative_tolerance: float = 0.01
mcse_relative_threshold: float = 0.05
min_iterations: int = 1000
max_iterations: int = 100000
check_interval: int = 100
patience: int = 3
confidence_level: float = 0.95
__post_init__()[source]

Validate criteria after initialization.

class ConvergenceStatus(converged: bool, iteration: int, reason: str, diagnostics: Dict[str, float], should_stop: bool, estimated_remaining: int | None = None) None[source]

Bases: object

Container for convergence status information.

converged: bool
iteration: int
reason: str
diagnostics: Dict[str, float]
should_stop: bool
estimated_remaining: int | None = None
__str__() str[source]

String representation of convergence status.

Return type:

str

class AdaptiveStoppingMonitor(criteria: StoppingCriteria | None = None, custom_rule: Callable | None = None)[source]

Bases: object

Monitor for adaptive stopping based on convergence criteria.

Provides sophisticated adaptive stopping with multiple criteria, burn-in detection, and convergence rate estimation.

r_hat_history: List[float]
ess_history: List[float]
mean_history: List[float]
variance_history: List[float]
iteration_history: List[int]
convergence_rate: float | None
estimated_total_iterations: int | None
check_convergence(iteration: int, chains: ndarray, diagnostics: Dict[str, float] | None = None) ConvergenceStatus[source]

Check if convergence criteria are met.

Parameters:
  • iteration (int) – Current iteration number

  • chains (ndarray) – Array of chain values

  • diagnostics (Optional[Dict[str, float]]) – Pre-calculated diagnostics (optional)

Return type:

ConvergenceStatus

Returns:

ConvergenceStatus object with convergence information

detect_adaptive_burn_in(chains: ndarray, method: str = 'geweke') int[source]

Detect burn-in period adaptively.

Parameters:
  • chains (ndarray) – Array of chain values

  • method (str) – Method for burn-in detection

Return type:

int

Returns:

Estimated burn-in period

estimate_convergence_rate(diagnostic_history: List[float], target_value: float = 1.0) Tuple[float, int][source]

Estimate convergence rate and iterations to target.

Parameters:
  • diagnostic_history (List[float]) – History of diagnostic values

  • target_value (float) – Target value for convergence

Return type:

Tuple[float, int]

Returns:

Tuple of (convergence_rate, estimated_iterations_to_target)

get_stopping_summary() Dict[str, Any][source]

Get summary of stopping monitor state.

Return type:

Dict[str, Any]

Returns:

Dictionary with monitor summary information

ergodic_insurance.batch_processor module

Batch processing engine for running multiple simulation scenarios.

This module provides a framework for executing multiple scenarios in parallel or serial, with support for checkpointing, resumption, and result aggregation.
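
Example

A minimal sketch; scenarios is assumed to be a prepared List[ScenarioConfig] constructed elsewhere in the package:

>>> from ergodic_insurance.batch_processor import BatchProcessor
>>> processor = BatchProcessor(n_workers=4, use_parallel=True)
>>> results = processor.process_batch(scenarios, checkpoint_interval=10)
>>> df = results.to_dataframe()
>>> processor.export_results("batch_results.csv", export_format="csv")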

class ProcessingStatus(*values)[source]

Bases: Enum

Status of scenario processing.

PENDING = 'pending'
RUNNING = 'running'
COMPLETED = 'completed'
FAILED = 'failed'
SKIPPED = 'skipped'
class BatchResult(scenario_id: str, scenario_name: str, status: ProcessingStatus, simulation_results: SimulationResults | None = None, execution_time: float = 0.0, error_message: str | None = None, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Result from a single scenario execution.

scenario_id

Unique scenario identifier

scenario_name

Human-readable scenario name

status

Processing status

simulation_results

Monte Carlo simulation results

execution_time

Time taken to execute scenario

error_message

Error message if failed

metadata

Additional result metadata

scenario_id: str
scenario_name: str
status: ProcessingStatus
simulation_results: SimulationResults | None = None
execution_time: float = 0.0
error_message: str | None = None
metadata: Dict[str, Any]
class AggregatedResults(batch_results: List[BatchResult], summary_statistics: DataFrame, comparison_metrics: Dict[str, DataFrame], sensitivity_analysis: DataFrame | None = None, execution_summary: Dict[str, Any] = <factory>) None[source]

Bases: object

Aggregated results from batch processing.

batch_results

Individual scenario results

summary_statistics

Summary stats across scenarios

comparison_metrics

Comparative metrics between scenarios

sensitivity_analysis

Sensitivity analysis results

execution_summary

Batch execution summary

batch_results: List[BatchResult]
summary_statistics: DataFrame
comparison_metrics: Dict[str, DataFrame]
sensitivity_analysis: DataFrame | None = None
execution_summary: Dict[str, Any]
get_successful_results() List[BatchResult][source]

Get only successful results.

Return type:

List[BatchResult]

to_dataframe() DataFrame[source]

Convert results to DataFrame for analysis.

Return type:

DataFrame

Returns:

DataFrame with scenario results

class CheckpointData(completed_scenarios: Set[str], failed_scenarios: Set[str], batch_results: List[BatchResult], timestamp: datetime, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Checkpoint data for resumable batch processing.

completed_scenarios: Set[str]
failed_scenarios: Set[str]
batch_results: List[BatchResult]
timestamp: datetime
metadata: Dict[str, Any]
class BatchProcessor(loss_generator: ManufacturingLossGenerator | None = None, insurance_program: InsuranceProgram | None = None, manufacturer: WidgetManufacturer | None = None, n_workers: int | None = None, checkpoint_dir: Path | None = None, use_parallel: bool = True, progress_bar: bool = True)[source]

Bases: object

Engine for batch processing multiple simulation scenarios.

batch_results: List[BatchResult]
completed_scenarios: Set[str]
failed_scenarios: Set[str]
process_batch(scenarios: List[ScenarioConfig], resume_from_checkpoint: bool = True, checkpoint_interval: int = 10, max_failures: int | None = None, priority_threshold: int | None = None) AggregatedResults[source]

Process a batch of scenarios.

Parameters:
  • scenarios (List[ScenarioConfig]) – List of scenarios to process

  • resume_from_checkpoint (bool) – Whether to resume from checkpoint

  • checkpoint_interval (int) – Save checkpoint every N scenarios

  • max_failures (Optional[int]) – Maximum allowed failures before stopping

  • priority_threshold (Optional[int]) – Only process scenarios up to this priority

Return type:

AggregatedResults

Returns:

Aggregated results from batch processing

clear_checkpoints() None[source]

Clear all checkpoints.

Return type:

None

export_results(path: str | Path, export_format: str = 'csv') None[source]

Export aggregated results to file.

Parameters:
  • path (Union[str, Path]) – Output file path

  • export_format (str) – Export format (csv, json, excel)

Return type:

None

export_financial_statements(path: str | Path) None[source]

Export comprehensive financial statements to Excel.

Generates detailed financial statements including balance sheets, income statements, cash flow statements, reconciliation reports, and metrics dashboards for each scenario.

Parameters:

path (Union[str, Path]) – Output directory path for Excel files

Return type:

None

ergodic_insurance.benchmarking module

Comprehensive benchmarking suite for Monte Carlo simulations.

This module provides tools for benchmarking Monte Carlo engine performance, targeting 100K simulations in under 60 seconds on 4-core CPUs with <4GB memory.

Key features:
  • Performance benchmarking at multiple scales (1K, 10K, 100K)

  • Memory usage tracking and profiling

  • CPU efficiency monitoring

  • Cache effectiveness measurement

  • Automated performance report generation

  • Comparison of optimization strategies

Example

>>> from ergodic_insurance.benchmarking import BenchmarkSuite, BenchmarkConfig
>>> from ergodic_insurance.monte_carlo import MonteCarloEngine
>>>
>>> suite = BenchmarkSuite()
>>> config = BenchmarkConfig(scales=[1000, 10000, 100000])
>>>
>>> # Run comprehensive benchmarks
>>> results = suite.run_comprehensive_benchmark(engine, config)
>>> print(results.summary())
>>>
>>> # Check if performance targets are met
>>> if results.meets_requirements():
...     print("✓ All performance targets achieved!")

Google-style docstrings are used throughout for Sphinx documentation.

class BenchmarkMetrics(execution_time: float, simulations_per_second: float, memory_peak_mb: float, memory_average_mb: float, cpu_utilization: float = 0.0, cache_hit_rate: float = 0.0, accuracy_score: float = 1.0, convergence_iterations: int = 0) None[source]

Bases: object

Metrics collected during benchmarking.

execution_time

Total execution time in seconds

simulations_per_second

Throughput metric

memory_peak_mb

Peak memory usage in MB

memory_average_mb

Average memory usage in MB

cpu_utilization

Average CPU utilization percentage

cache_hit_rate

Cache effectiveness percentage

accuracy_score

Numerical accuracy score

convergence_iterations

Iterations to convergence

execution_time: float
simulations_per_second: float
memory_peak_mb: float
memory_average_mb: float
cpu_utilization: float = 0.0
cache_hit_rate: float = 0.0
accuracy_score: float = 1.0
convergence_iterations: int = 0
to_dict() Dict[str, Any][source]

Convert metrics to dictionary.

Return type:

Dict[str, Any]

Returns:

Dictionary representation of metrics.

class BenchmarkResult(scale: int, metrics: BenchmarkMetrics, configuration: Dict[str, Any], timestamp: datetime, system_info: Dict[str, Any] = <factory>, optimizations: List[str] = <factory>) None[source]

Bases: object

Results from a benchmark run.

scale

Number of simulations

metrics

Performance metrics

configuration

Configuration used

timestamp

When benchmark was run

system_info

System information

optimizations

Optimizations applied

scale: int
metrics: BenchmarkMetrics
configuration: Dict[str, Any]
timestamp: datetime
system_info: Dict[str, Any]
optimizations: List[str]
meets_target(target_time: float, target_memory: float) bool[source]

Check if result meets performance targets.

Parameters:
  • target_time (float) – Maximum execution time in seconds.

  • target_memory (float) – Maximum memory usage in MB.

Return type:

bool

Returns:

True if targets are met.

summary() str[source]

Generate result summary.

Return type:

str

Returns:

Formatted summary string.

class BenchmarkConfig(scales: List[int] = <factory>, n_years: int = 10, n_workers: int = 4, memory_limit_mb: float = 4000.0, target_times: Dict[int, float] = <factory>, repetitions: int = 3, warmup_runs: int = 2, enable_profiling: bool = True) None[source]

Bases: object

Configuration for benchmarking.

scales

List of simulation counts to test

n_years

Years per simulation

n_workers

Number of parallel workers

memory_limit_mb

Memory limit for testing

target_times

Target execution times per scale

repetitions

Number of repetitions per test

warmup_runs

Number of warmup runs

enable_profiling

Enable detailed profiling

scales: List[int]
n_years: int = 10
n_workers: int = 4
memory_limit_mb: float = 4000.0
target_times: Dict[int, float]
repetitions: int = 3
warmup_runs: int = 2
enable_profiling: bool = True
class SystemProfiler[source]

Bases: object

Profile system resources during benchmarking.

start() None[source]

Start profiling.

Return type:

None

sample() None[source]

Take a resource sample.

Return type:

None

get_metrics() Tuple[float, float, float][source]

Get profiling metrics.

Return type:

Tuple[float, float, float]

Returns:

Tuple of (avg_cpu, peak_memory, avg_memory).

static get_system_info() Dict[str, Any][source]

Get system information.

Return type:

Dict[str, Any]

Returns:

Dictionary of system information.

class BenchmarkRunner(profiler: SystemProfiler | None = None)[source]

Bases: object

Run individual benchmarks with monitoring.

run_single_benchmark(func: Callable, args: Tuple = (), kwargs: Dict | None = None, monitor_interval: float = 0.1) BenchmarkMetrics[source]

Run a single benchmark with monitoring.

Parameters:
  • func (Callable) – Function to benchmark.

  • args (Tuple) – Positional arguments for function.

  • kwargs (Optional[Dict]) – Keyword arguments for function.

  • monitor_interval (float) – Monitoring interval in seconds.

Return type:

BenchmarkMetrics

Returns:

BenchmarkMetrics from the run.

run_with_warmup(func: Callable, args: Tuple = (), kwargs: Dict | None = None, warmup_runs: int = 2, benchmark_runs: int = 3) List[BenchmarkMetrics][source]

Run benchmark with warmup.

Parameters:
  • func (Callable) – Function to benchmark.

  • args (Tuple) – Positional arguments.

  • kwargs (Optional[Dict]) – Keyword arguments.

  • warmup_runs (int) – Number of warmup runs.

  • benchmark_runs (int) – Number of benchmark runs.

Return type:

List[BenchmarkMetrics]

Returns:

List of benchmark metrics.

class BenchmarkSuite[source]

Bases: object

Comprehensive benchmark suite for Monte Carlo simulations.

Provides tools to benchmark performance across different scales and configurations, generating detailed reports.

results: List[BenchmarkResult]
benchmark_scale(engine, scale: int, config: BenchmarkConfig, optimizations: List[str] | None = None) BenchmarkResult[source]

Benchmark at a specific scale.

Parameters:
  • engine – Monte Carlo engine to benchmark.

  • scale (int) – Number of simulations.

  • config (BenchmarkConfig) – Benchmark configuration.

  • optimizations (Optional[List[str]]) – List of applied optimizations.

Return type:

BenchmarkResult

Returns:

BenchmarkResult for this scale.

run_comprehensive_benchmark(engine, config: BenchmarkConfig | None = None) ComprehensiveBenchmarkResult[source]

Run comprehensive benchmark suite.

Parameters:
  • engine – Monte Carlo engine to benchmark.

  • config (Optional[BenchmarkConfig]) – Benchmark configuration (uses defaults if None).

Return type:

ComprehensiveBenchmarkResult

Returns:

ComprehensiveBenchmarkResult with all results.

compare_configurations(engine_factory: Callable, configurations: List[Dict[str, Any]], scale: int = 10000) ConfigurationComparison[source]

Compare different configurations.

Parameters:
  • engine_factory (Callable) – Factory function to create engines.

  • configurations (List[Dict[str, Any]]) – List of configuration dictionaries.

  • scale (int) – Number of simulations to test.

Return type:

ConfigurationComparison

Returns:

ConfigurationComparison results.

class ComprehensiveBenchmarkResult(results: List[BenchmarkResult], config: BenchmarkConfig, system_info: Dict[str, Any]) None[source]

Bases: object

Results from comprehensive benchmark suite.

results

List of individual benchmark results

config

Configuration used

system_info

System information

results: List[BenchmarkResult]
config: BenchmarkConfig
system_info: Dict[str, Any]
meets_requirements() bool[source]

Check if all requirements are met.

Return type:

bool

Returns:

True if all performance requirements are satisfied.

summary() str[source]

Generate comprehensive summary.

Return type:

str

Returns:

Formatted summary string.

save_report(filepath: str) None[source]

Save benchmark report to file.

Parameters:

filepath (str) – Path to save report.

Return type:

None

class ConfigurationComparison(results: List[Dict[str, Any]]) None[source]

Bases: object

Results from configuration comparison.

results: List[Dict[str, Any]]
best_configuration() Dict[str, Any][source]

Find best configuration.

Return type:

Dict[str, Any]

Returns:

Best configuration based on execution time.

summary() str[source]

Generate comparison summary.

Return type:

str

Returns:

Formatted summary string.

run_quick_benchmark(engine, n_simulations: int = 10000) BenchmarkMetrics[source]

Run a quick benchmark.

Parameters:
  • engine – Monte Carlo engine to benchmark.

  • n_simulations (int) – Number of simulations.

Return type:

BenchmarkMetrics

Returns:

BenchmarkMetrics from the run.
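
Example

A hedged usage sketch, assuming engine is an already-configured Monte Carlo engine:

>>> metrics = run_quick_benchmark(engine, n_simulations=10000)
>>> print(f"{metrics.simulations_per_second:,.0f} sims/sec")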

ergodic_insurance.bootstrap_analysis module

Bootstrap confidence interval analysis for simulation results.

This module provides comprehensive bootstrap analysis capabilities for statistical significance testing and confidence interval calculation. Supports both percentile and BCa (bias-corrected and accelerated) methods with parallel processing for performance optimization.

Example

>>> import numpy as np
>>> from ergodic_insurance.bootstrap_analysis import BootstrapAnalyzer
>>> # Create sample data
>>> data = np.random.normal(100, 15, 1000)
>>> analyzer = BootstrapAnalyzer(n_bootstrap=10000, seed=42)
>>> # Calculate confidence interval for mean
>>> ci = analyzer.confidence_interval(data, np.mean)
>>> print(f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
>>> # Parallel bootstrap for faster computation
>>> ci_parallel = analyzer.confidence_interval(
...     data, np.mean, method='bca', parallel=True
... )
DEFAULT_N_BOOTSTRAP

Default number of bootstrap iterations (10000).

Type:

int

DEFAULT_CONFIDENCE

Default confidence level (0.95).

Type:

float

DEFAULT_N_WORKERS

Default number of parallel workers (4).

Type:

int

class BootstrapResult(statistic: float, confidence_level: float, confidence_interval: Tuple[float, float], bootstrap_distribution: ndarray, method: str, n_bootstrap: int, bias: float | None = None, acceleration: float | None = None, converged: bool = True, metadata: Dict[str, Any] | None = None) None[source]

Bases: object

Container for bootstrap analysis results.

statistic: float
confidence_level: float
confidence_interval: Tuple[float, float]
bootstrap_distribution: ndarray
method: str
n_bootstrap: int
bias: float | None = None
acceleration: float | None = None
converged: bool = True
metadata: Dict[str, Any] | None = None
summary() str[source]

Generate human-readable summary of bootstrap results.

Return type:

str

Returns:

Formatted string with key bootstrap statistics.

class BootstrapAnalyzer(n_bootstrap: int = 10000, confidence_level: float = 0.95, seed: int | None = None, n_workers: int = 4, show_progress: bool = True)[source]

Bases: object

Main class for bootstrap confidence interval analysis.

Provides methods for calculating bootstrap confidence intervals using various methods including percentile and BCa. Supports parallel processing for improved performance with large datasets.

Parameters:
  • n_bootstrap (int) – Number of bootstrap iterations (default 10000).

  • confidence_level (float) – Confidence level for intervals (default 0.95).

  • seed (Optional[int]) – Random seed for reproducibility (default None).

  • n_workers (int) – Number of parallel workers (default 4).

  • show_progress (bool) – Whether to show progress bar (default True).

Example

>>> analyzer = BootstrapAnalyzer(n_bootstrap=5000, confidence_level=0.99)
>>> data = np.random.exponential(2, 1000)
>>> result = analyzer.confidence_interval(data, np.median)
>>> print(result.summary())
DEFAULT_N_BOOTSTRAP = 10000
DEFAULT_CONFIDENCE = 0.95
DEFAULT_N_WORKERS = 4
bootstrap_sample(data: ndarray, statistic: Callable[[ndarray], float], n_samples: int = 1) ndarray[source]

Generate bootstrap samples and compute statistics.

Parameters:
  • data (ndarray) – Input data array.

  • statistic (Callable[[ndarray], float]) – Function to compute on each bootstrap sample.

  • n_samples (int) – Number of bootstrap samples to generate.

Return type:

ndarray

Returns:

Array of bootstrap statistics.

confidence_interval(data: ndarray, statistic: Callable[[ndarray], float], confidence_level: float | None = None, method: str = 'percentile', parallel: bool = False) BootstrapResult[source]

Calculate bootstrap confidence interval for a statistic.

Parameters:
  • data (ndarray) – Input data array.

  • statistic (Callable[[ndarray], float]) – Function to compute the statistic of interest.

  • confidence_level (Optional[float]) – Confidence level (uses default if None).

  • method (str) – ‘percentile’ or ‘bca’ (bias-corrected and accelerated).

  • parallel (bool) – Whether to use parallel processing.

Return type:

BootstrapResult

Returns:

BootstrapResult containing confidence interval and diagnostics.

Raises:

ValueError – If method is not ‘percentile’ or ‘bca’.

compare_statistics(data1: ndarray, data2: ndarray, statistic: Callable[[ndarray], float], comparison: str = 'difference') BootstrapResult[source]

Compare statistics between two datasets using bootstrap.

Parameters:
  • data1 (ndarray) – First dataset.

  • data2 (ndarray) – Second dataset.

  • statistic (Callable[[ndarray], float]) – Function to compute on each dataset.

  • comparison (str) – Type of comparison (‘difference’ or ‘ratio’).

Return type:

BootstrapResult

Returns:

BootstrapResult for the comparison statistic.

Raises:

ValueError – If comparison type is not supported.
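
Example

An illustrative sketch reusing the analyzer from the class example above:

>>> data1 = np.random.normal(100, 15, 500)
>>> data2 = np.random.normal(105, 15, 500)
>>> result = analyzer.compare_statistics(data1, data2, np.mean, comparison='difference')
>>> print(result.summary())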

bootstrap_confidence_interval(data: ndarray | List[float], statistic: Callable[[ndarray], float] = <function mean>, confidence_level: float = 0.95, n_bootstrap: int = 10000, method: str = 'percentile', seed: int | None = None) Tuple[float, Tuple[float, float]][source]

Convenience function for simple bootstrap confidence interval calculation.

Parameters:
  • data (Union[ndarray, List[float]]) – Input data (array or list).

  • statistic (Callable[[ndarray], float]) – Function to compute statistic (default: mean).

  • confidence_level (float) – Confidence level (default: 0.95).

  • n_bootstrap (int) – Number of bootstrap iterations (default: 10000).

  • method (str) – ‘percentile’ or ‘bca’ (default: ‘percentile’).

  • seed (Optional[int]) – Random seed for reproducibility.

Return type:

Tuple[float, Tuple[float, float]]

Returns:

Tuple of (original_statistic, (lower_bound, upper_bound)).

Example

>>> data = np.random.normal(100, 15, 1000)
>>> stat, ci = bootstrap_confidence_interval(data, np.median)
>>> print(f"Median: {stat:.2f}, 95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")

ergodic_insurance.business_optimizer module

Business outcome optimization algorithms for insurance decisions.

This module implements sophisticated optimization algorithms focused on real business outcomes (ROE, growth rate, survival probability) rather than technical metrics. These algorithms maximize long-term company value through optimal insurance decisions.

Author: Alex Filiakov

Date: 2025-01-25
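
Example

A hedged sketch of the primary entry point; constructing the WidgetManufacturer instance is assumed to happen elsewhere in the package:

>>> from ergodic_insurance.business_optimizer import (
...     BusinessOptimizer, BusinessConstraints
... )
>>> constraints = BusinessConstraints(
...     max_risk_tolerance=0.01,   # at most 1% bankruptcy probability
...     min_roe_threshold=0.10,    # require at least 10% ROE
...     max_premium_budget=0.02,   # premiums capped at 2% of revenue
... )
>>> optimizer = BusinessOptimizer(manufacturer=manufacturer)
>>> strategy = optimizer.maximize_roe_with_insurance(
...     constraints, time_horizon=10, n_simulations=1000
... )
>>> print(strategy.expected_roe, strategy.bankruptcy_risk)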

class OptimizationDirection(*values)[source]

Bases: Enum

Direction of optimization for objectives.

MAXIMIZE = 'maximize'
MINIMIZE = 'minimize'
class BusinessObjective(name: str, weight: float = 1.0, target_value: float | None = None, optimization_direction: OptimizationDirection = OptimizationDirection.MAXIMIZE, constraint_type: str | None = None, constraint_value: float | None = None) None[source]

Bases: object

Business optimization objective definition.

name

Name of the objective (e.g., ‘ROE’, ‘bankruptcy_risk’)

weight

Weight in multi-objective optimization (0-1)

target_value

Optional target value for the objective

optimization_direction

Whether to maximize or minimize

constraint_type

Optional constraint type (‘>=’, ‘<=’, ‘==’)

constraint_value

Optional constraint value

name: str
weight: float = 1.0
target_value: float | None = None
optimization_direction: OptimizationDirection = 'maximize'
constraint_type: str | None = None
constraint_value: float | None = None
__post_init__()[source]

Validate objective configuration.

class BusinessConstraints(max_risk_tolerance: float = 0.01, min_roe_threshold: float = 0.1, max_leverage_ratio: float = 2.0, min_liquidity_ratio: float = 1.2, max_premium_budget: float = 0.02, min_coverage_ratio: float = 0.5, regulatory_requirements: Dict[str, float] = <factory>) None[source]

Bases: object

Business optimization constraints.

max_risk_tolerance

Maximum acceptable probability of bankruptcy

min_roe_threshold

Minimum required return on equity

max_leverage_ratio

Maximum debt-to-equity ratio

min_liquidity_ratio

Minimum liquidity requirements

max_premium_budget

Maximum insurance premium as % of revenue

min_coverage_ratio

Minimum coverage as % of assets

regulatory_requirements

Additional regulatory constraints

max_risk_tolerance: float = 0.01
min_roe_threshold: float = 0.1
max_leverage_ratio: float = 2.0
min_liquidity_ratio: float = 1.2
max_premium_budget: float = 0.02
min_coverage_ratio: float = 0.5
regulatory_requirements: Dict[str, float]
__post_init__()[source]

Validate constraint values.

class OptimalStrategy(coverage_limit: float, deductible: float, premium_rate: float, expected_roe: float, bankruptcy_risk: float, growth_rate: float, capital_efficiency: float, recommendations: List[str] = <factory>) None[source]

Bases: object

Optimal insurance strategy result.

coverage_limit

Optimal coverage limit amount

deductible

Optimal deductible amount

premium_rate

Optimal premium rate

expected_roe

Expected ROE with this strategy

bankruptcy_risk

Probability of bankruptcy

growth_rate

Expected growth rate

capital_efficiency

Capital efficiency ratio

recommendations

List of actionable recommendations

coverage_limit: float
deductible: float
premium_rate: float
expected_roe: float
bankruptcy_risk: float
growth_rate: float
capital_efficiency: float
recommendations: List[str]
to_dict() Dict[str, float | List[str]][source]

Convert to dictionary for serialization.

Return type:

Dict[str, Union[float, List[str]]]

class BusinessOptimizationResult(optimal_strategy: OptimalStrategy, objective_values: Dict[str, float], constraint_satisfaction: Dict[str, bool], convergence_info: Dict[str, bool | int | float], sensitivity_analysis: Dict[str, float] | None = None) None[source]

Bases: object

Result of business outcome optimization.

optimal_strategy

The optimal insurance strategy

objective_values

Values achieved for each objective

constraint_satisfaction

Status of constraint satisfaction

convergence_info

Optimization convergence information

sensitivity_analysis

Sensitivity to parameter changes

optimal_strategy: OptimalStrategy
objective_values: Dict[str, float]
constraint_satisfaction: Dict[str, bool]
convergence_info: Dict[str, bool | int | float]
sensitivity_analysis: Dict[str, float] | None = None
is_feasible() bool[source]

Check if all constraints are satisfied.

Return type:

bool

class BusinessOptimizer(manufacturer: WidgetManufacturer, decision_engine: InsuranceDecisionEngine | None = None, ergodic_analyzer: ErgodicAnalyzer | None = None, loss_distribution: LossDistribution | None = None, optimizer_config: BusinessOptimizerConfig | None = None)[source]

Bases: object

Optimize business outcomes through insurance decisions.

This class implements sophisticated optimization algorithms focused on real business metrics like ROE, growth rate, and survival probability.

maximize_roe_with_insurance(constraints: BusinessConstraints, time_horizon: int = 10, n_simulations: int = 1000) OptimalStrategy[source]

Maximize ROE subject to business constraints.

Objective: max(ROE_with_insurance - ROE_baseline)

Parameters:
  • constraints (BusinessConstraints) – Business constraints to satisfy

  • time_horizon (int) – Planning horizon in years

  • n_simulations (int) – Number of Monte Carlo simulations

Return type:

OptimalStrategy

Returns:

Optimal insurance strategy maximizing ROE

minimize_bankruptcy_risk(growth_targets: Dict[str, float], budget_constraint: float, time_horizon: int = 10) OptimalStrategy[source]

Minimize bankruptcy risk while achieving growth targets.

Objective: min(P(bankruptcy))

Parameters:
  • growth_targets (Dict[str, float]) – Target growth rates (e.g., {‘revenue’: 0.15, ‘assets’: 0.10})

  • budget_constraint (float) – Maximum premium budget

  • time_horizon (int) – Planning horizon in years

Return type:

OptimalStrategy

Returns:

Risk-minimizing insurance strategy

optimize_capital_efficiency(available_capital: float, investment_opportunities: Dict[str, float]) Dict[str, float][source]

Optimize capital allocation across insurance and investments.

Parameters:
  • available_capital (float) – Total capital available for allocation

  • investment_opportunities (Dict[str, float]) – Opportunities with expected returns

Return type:

Dict[str, float]

Returns:

Optimal capital allocation dictionary

analyze_time_horizon_impact(strategies: List[Dict[str, Any]], time_horizons: List[int] | None = None) DataFrame[source]

Analyze strategy performance across different time horizons.

Parameters:
  • strategies (List[Dict[str, Any]]) – List of strategy specifications to evaluate

  • time_horizons (Optional[List[int]]) – Time horizons in years to analyze (uses defaults if None)

Return type:

DataFrame

Returns:

DataFrame with performance metrics by time horizon

optimize_business_outcomes(objectives: List[BusinessObjective], constraints: BusinessConstraints, time_horizon: int = 10, method: str = 'weighted_sum') BusinessOptimizationResult[source]

Multi-objective optimization of business outcomes.

Parameters:
  • objectives (List[BusinessObjective]) – List of business objectives to optimize

  • constraints (BusinessConstraints) – Business constraints to satisfy

  • time_horizon (int) – Planning horizon in years

  • method (str) – Optimization method (‘weighted_sum’, ‘epsilon_constraint’, ‘pareto’)

Return type:

BusinessOptimizationResult

Returns:

Comprehensive optimization result

ergodic_insurance.claim_development module

Claim development patterns for cash flow modeling.

This module provides classes for modeling realistic claim payment patterns, including immediate and long-tail development patterns typical for manufacturing liability claims. It supports IBNR estimation, reserve calculations, and cash flow projections.
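
Example

A minimal end-to-end sketch using only the classes documented below:

>>> from ergodic_insurance.claim_development import (
...     Claim, ClaimCohort, ClaimDevelopment, CashFlowProjector
... )
>>> pattern = ClaimDevelopment.create_long_tail_10yr()
>>> claim = Claim(
...     claim_id="CLM-001",
...     accident_year=2025,
...     reported_year=2025,
...     initial_estimate=1_000_000,
...     development_pattern=pattern,
... )
>>> cohort = ClaimCohort(accident_year=2025)
>>> cohort.add_claim(claim)
>>> projector = CashFlowProjector(discount_rate=0.03)
>>> projector.add_cohort(cohort)
>>> payments = projector.project_payments(2025, 2035)
>>> pv = projector.calculate_present_value(payments, base_year=2025)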

class DevelopmentPatternType(*values)[source]

Bases: Enum

Standard claim development pattern types.

IMMEDIATE = 'immediate'
MEDIUM_TAIL_5YR = 'medium_tail_5yr'
LONG_TAIL_10YR = 'long_tail_10yr'
VERY_LONG_TAIL_15YR = 'very_long_tail_15yr'
CUSTOM = 'custom'
class ClaimDevelopment(pattern_name: str, development_factors: List[float], tail_factor: float = 0.0) None[source]

Bases: object

Claim development pattern for payment timing.

This class defines how claim payments develop over time, with development factors representing the percentage of total claim amount paid in each year.

pattern_name: str
development_factors: List[float]
tail_factor: float = 0.0
__post_init__()[source]

Validate development pattern.

Raises:

ValueError – If development factors are invalid or don’t sum to 1.0.

classmethod create_immediate() ClaimDevelopment[source]

Create immediate payment pattern (property damage).

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with immediate payment pattern.

classmethod create_medium_tail_5yr() ClaimDevelopment[source]

Create 5-year workers compensation pattern.

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with 5-year workers compensation pattern.

classmethod create_long_tail_10yr() ClaimDevelopment[source]

Create 10-year general liability pattern.

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with 10-year general liability pattern.

classmethod create_very_long_tail_15yr() ClaimDevelopment[source]

Create 15-year product liability pattern.

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with 15-year product liability pattern.

calculate_payments(claim_amount: float, accident_year: int, payment_year: int) float[source]

Calculate payment amount for a specific year.

Parameters:
  • claim_amount (float) – Total claim amount.

  • accident_year (int) – Year when claim occurred.

  • payment_year (int) – Year for which to calculate payment.

Return type:

float

Returns:

Payment amount for the specified year.

get_cumulative_paid(years_since_accident: int) float[source]

Get cumulative percentage paid by year.

Parameters:

years_since_accident (int) – Number of years since accident.

Return type:

float

Returns:

Cumulative percentage paid (0-1).

class Claim(claim_id: str, accident_year: int, reported_year: int, initial_estimate: float, claim_type: str = 'general_liability', development_pattern: ClaimDevelopment | None = None, payments_made: Dict[int, float] = <factory>) None[source]

Bases: object

Individual claim with development tracking.

claim_id: str
accident_year: int
reported_year: int
initial_estimate: float
claim_type: str = 'general_liability'
development_pattern: ClaimDevelopment | None = None
payments_made: Dict[int, float]
__post_init__()[source]

Set default development pattern if not provided.

Uses general liability pattern as default if no pattern is specified.

record_payment(year: int, amount: float)[source]

Record a payment made for this claim.

Parameters:
  • year (int) – Year of payment.

  • amount (float) – Payment amount.

get_total_paid() float[source]

Get total amount paid to date.

Return type:

float

Returns:

Sum of all payments made on this claim.

get_outstanding_reserve() float[source]

Calculate outstanding reserve requirement.

Return type:

float

Returns:

Outstanding reserve amount (initial estimate minus payments made).

class ClaimCohort(accident_year: int, claims: List[Claim] = <factory>) None[source]

Bases: object

Cohort of claims from the same accident year.

accident_year: int
claims: List[Claim]
add_claim(claim: Claim)[source]

Add a claim to the cohort.

Parameters:

claim (Claim) – Claim to add.

Raises:

ValueError – If claim is from different accident year.

calculate_payments(payment_year: int) float[source]

Calculate total payments for a specific year.

Parameters:

payment_year (int) – Year for which to calculate payments.

Return type:

float

Returns:

Total payment amount for the year.

get_total_incurred() float[source]

Get total incurred amount for the cohort.

Return type:

float

Returns:

Sum of initial estimates for all claims in the cohort.

get_total_paid() float[source]

Get total amount paid for the cohort.

Return type:

float

Returns:

Sum of all payments made for claims in the cohort.

get_outstanding_reserve() float[source]

Get total outstanding reserve for the cohort.

Return type:

float

Returns:

Sum of outstanding reserves for all claims in the cohort.

class CashFlowProjector(discount_rate: float = 0.03)[source]

Bases: object

Project cash flows based on claim development patterns.

cohorts: Dict[int, ClaimCohort]
add_cohort(cohort: ClaimCohort)[source]

Add a claim cohort to the projector.

Parameters:

cohort (ClaimCohort) – Claim cohort to add.

project_payments(start_year: int, end_year: int) Dict[int, float][source]

Project claim payments for a range of years.

Parameters:
  • start_year (int) – First year of projection.

  • end_year (int) – Last year of projection.

Return type:

Dict[int, float]

Returns:

Dictionary mapping years to payment amounts.

calculate_present_value(payments: Dict[int, float], base_year: int) float[source]

Calculate present value of future payments.

Parameters:
  • payments (Dict[int, float]) – Dictionary of year to payment amount.

  • base_year (int) – Year to discount to.

Return type:

float

Returns:

Present value of all payments.

estimate_ibnr(evaluation_year: int, reporting_lag: int = 3) float[source]

Estimate IBNR using simplified chain-ladder method.

Parameters:
  • evaluation_year (int) – Current evaluation year.

  • reporting_lag (int) – Average months for claim reporting.

Return type:

float

Returns:

Estimated IBNR amount.
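
The chain-ladder idea behind this estimate, in a hedged sketch (the method's actual internals may differ):

>>> # ultimate ≈ paid-to-date / cumulative fraction paid; IBNR ≈ ultimate - reported
>>> pattern = ClaimDevelopment.create_long_tail_10yr()
>>> paid_to_date = 400_000
>>> cum_frac = pattern.get_cumulative_paid(3)   # fraction paid after 3 years
>>> ultimate = paid_to_date / cum_frac if cum_frac > 0 else paid_to_date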

calculate_total_reserves(evaluation_year: int) Dict[str, float][source]

Calculate total reserve requirements.

Parameters:

evaluation_year (int) – Current evaluation year.

Return type:

Dict[str, float]

Returns:

Dictionary with case reserves, IBNR, and total.

load_development_patterns(file_path: str) Dict[str, ClaimDevelopment][source]

Load development patterns from YAML configuration.

Parameters:

file_path (str) – Path to YAML configuration file.

Return type:

Dict[str, ClaimDevelopment]

Returns:

Dictionary mapping pattern names to ClaimDevelopment objects.

ergodic_insurance.config module

Configuration management using Pydantic v2 models.

This module provides comprehensive configuration classes for the Ergodic Insurance simulation framework. It uses Pydantic models for validation, type safety, and automatic serialization/deserialization of configuration parameters.

The configuration system is hierarchical, with specialized configs for different aspects of the simulation (manufacturer, insurance, simulation parameters, etc.) that can be composed into a master configuration.

Key Features:
  • Type-safe configuration with automatic validation

  • Hierarchical configuration structure

  • Environment variable support

  • JSON/YAML serialization support

  • Default values with business logic constraints

  • Cross-field validation for consistency

Examples

Quick start with defaults:

from ergodic_insurance import Config

# All defaults — $10M manufacturer, 50-year horizon
config = Config()

From basic company info:

config = Config.from_company(
    initial_assets=50_000_000,
    operating_margin=0.12,
    industry="manufacturing",
)

Full control:

from ergodic_insurance import Config, ManufacturerConfig

config = Config(
    manufacturer=ManufacturerConfig(
        initial_assets=10_000_000,
        asset_turnover_ratio=0.8,
        base_operating_margin=0.08,
        tax_rate=0.25,
        retention_ratio=0.7,
    )
)

Loading from file:

config = Config.from_yaml(Path('config.yaml'))

Note

All monetary values are in nominal dollars unless otherwise specified. Rates and ratios are expressed as decimals (0.1 = 10%).

Since:

Version 0.1.0

DEFAULT_RISK_FREE_RATE: float = 0.02

Default risk-free rate (2%) used for Sharpe ratio and risk-adjusted calculations.

class BusinessOptimizerConfig(base_roe: float = 0.15, protection_benefit_factor: float = 0.05, roe_noise_std: float = 0.1, base_bankruptcy_risk: float = 0.02, max_risk_reduction: float = 0.015, premium_burden_risk_factor: float = 0.5, time_risk_constant: float = 20.0, base_growth_rate: float = 0.1, growth_boost_factor: float = 0.03, premium_drag_factor: float = 0.5, asset_growth_factor: float = 0.8, equity_growth_factor: float = 1.1, risk_transfer_benefit_rate: float = 0.05, risk_reduction_value: float = 0.03, stability_value: float = 0.02, growth_enablement_value: float = 0.03, assumed_volatility: float = 0.2, volatility_reduction_factor: float = 0.05, min_volatility: float = 0.05) None[source]

Bases: object

Calibration parameters for BusinessOptimizer financial heuristics.

Issue #314 (C1): Consolidates all hardcoded financial multipliers from BusinessOptimizer into a single, documentable configuration object.

These are simplified model parameters used by the optimizer’s heuristic methods (_estimate_roe, _estimate_bankruptcy_risk, _estimate_growth_rate, etc.). They are NOT derived from manufacturer data—they are tuning knobs for the optimizer’s internal scoring functions.

base_roe: float = 0.15

Base return on equity (15%) before insurance adjustments.

protection_benefit_factor: float = 0.05

Coverage-to-assets ratio multiplier for protection benefit.

roe_noise_std: float = 0.1

Standard deviation of multiplicative noise applied to ROE.

base_bankruptcy_risk: float = 0.02

Base annual bankruptcy probability (2%).

max_risk_reduction: float = 0.015

Maximum risk reduction from insurance coverage (1.5%).

premium_burden_risk_factor: float = 0.5

Multiplier converting premium burden ratio to risk increase.

time_risk_constant: float = 20.0

Time constant (years) for exponential risk accumulation.

base_growth_rate: float = 0.1

Base growth rate (10%) before insurance adjustments.

growth_boost_factor: float = 0.03

Coverage ratio multiplier for growth boost (up to 3%).

premium_drag_factor: float = 0.5

Multiplier for premium-to-revenue drag on growth.

asset_growth_factor: float = 0.8

Growth adjustment factor for asset metric.

equity_growth_factor: float = 1.1

Growth adjustment factor for equity metric.

risk_transfer_benefit_rate: float = 0.05

Fraction of coverage limit freed up by risk transfer (5%).

risk_reduction_value: float = 0.03

Return contribution from risk reduction (3%).

stability_value: float = 0.02

Return contribution from stability improvement (2%).

growth_enablement_value: float = 0.03

Return contribution from growth enablement (3%).

assumed_volatility: float = 0.2

Assumed base volatility for ergodic correction.

volatility_reduction_factor: float = 0.05

Coverage ratio multiplier for volatility reduction.

min_volatility: float = 0.05

Floor for adjusted volatility.
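
Example

A sketch of overriding individual knobs and wiring the config into the optimizer (the manufacturer instance is assumed to be constructed elsewhere):

opt_config = BusinessOptimizerConfig(
    base_roe=0.12,          # lower the optimizer's baseline ROE assumption
    base_growth_rate=0.08,  # and its baseline growth assumption
)
optimizer = BusinessOptimizer(
    manufacturer=manufacturer,
    optimizer_config=opt_config,
)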

class DecisionEngineConfig(base_growth_rate: float = 0.08, volatility_reduction_factor: float = 0.3, max_volatility_reduction: float = 0.15, growth_benefit_factor: float = 0.5) None[source]

Bases: object

Calibration parameters for InsuranceDecisionEngine heuristics.

Issue #314 (C2): Consolidates hardcoded values from the decision engine’s growth estimation and simulation methods.

base_growth_rate: float = 0.08

Base growth rate (8%) for decision engine growth estimation.

volatility_reduction_factor: float = 0.3

Coverage ratio multiplier for volatility reduction.

max_volatility_reduction: float = 0.15

Maximum volatility reduction (15%).

growth_benefit_factor: float = 0.5

Simplified growth benefit multiplier.

class ManufacturerConfig(**data: Any) None[source]

Bases: BaseModel

Financial parameters for the widget manufacturer.

This class defines the core financial parameters used to initialize and configure a widget manufacturing company in the simulation. All parameters are validated to ensure realistic business constraints.

initial_assets

Starting asset value in dollars. Must be positive.

asset_turnover_ratio

Revenue per dollar of assets. Typically 0.5-2.0 for manufacturing companies.

base_operating_margin

Core operating margin before insurance costs (EBIT before insurance / Revenue). Typically 5-15% for healthy manufacturers.

tax_rate

Corporate tax rate. Typically 20-30% depending on jurisdiction.

retention_ratio

Portion of earnings retained vs distributed as dividends. Higher retention supports faster growth.

ppe_ratio

Property, Plant & Equipment allocation ratio as fraction of initial assets. Defaults based on operating margin if not specified.

Examples

Conservative manufacturer:

config = ManufacturerConfig(
    initial_assets=5_000_000,
    asset_turnover_ratio=0.6,  # Low turnover
    base_operating_margin=0.05,      # 5% base margin
    tax_rate=0.25,
    retention_ratio=0.9         # High retention
)

Aggressive growth manufacturer:

config = ManufacturerConfig(
    initial_assets=20_000_000,
    asset_turnover_ratio=1.2,  # High turnover
    base_operating_margin=0.12,      # 12% base margin
    tax_rate=0.25,
    retention_ratio=1.0         # Full retention
)

Custom PP&E allocation:

config = ManufacturerConfig(
    initial_assets=15_000_000,
    asset_turnover_ratio=0.9,
    base_operating_margin=0.10,
    tax_rate=0.25,
    retention_ratio=0.8,
    ppe_ratio=0.6  # Override default PP&E allocation
)

Note

The asset turnover ratio and base operating margin together determine the core return on assets (ROA) before insurance costs and taxes. Actual operating margins will be lower when insurance costs are included.

initial_assets: float
asset_turnover_ratio: float
base_operating_margin: float
tax_rate: float
nol_carryforward_enabled: bool
nol_limitation_pct: float
retention_ratio: float
ppe_ratio: float | None
insolvency_tolerance: float
expense_ratios: ExpenseRatioConfig | None
premium_payment_month: int
revenue_pattern: Literal['uniform', 'seasonal', 'back_loaded']
check_intra_period_liquidity: bool
set_default_ppe_ratio()[source]

Set default PPE ratio based on operating margin if not provided.

classmethod validate_margin(v: float) float[source]

Warn if base operating margin is unusually high or negative.

Parameters:

v (float) – Base operating margin value to validate (as decimal, e.g., 0.1 for 10%).

Returns:

The validated base operating margin value.

Return type:

float

Note

Margins above 30% are flagged as unusual for manufacturing. Negative margins indicate unprofitable operations before insurance.

classmethod from_industry_config(industry_config, **kwargs)[source]

Create ManufacturerConfig from an IndustryConfig instance.

Parameters:
  • industry_config – IndustryConfig instance with industry-specific parameters

  • **kwargs – Additional parameters to override or supplement

Returns:

ManufacturerConfig instance with parameters derived from industry config

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class WorkingCapitalConfig(**data: Any) None[source]

Bases: BaseModel

Working capital management parameters.

This class configures how working capital requirements are calculated as a percentage of sales revenue. Working capital represents the funds tied up in day-to-day operations (inventory, receivables, etc.).

percent_of_sales

Working capital as percentage of sales. Typically 15-25% for manufacturers depending on payment terms and inventory turnover.

Examples

Efficient working capital:

wc_config = WorkingCapitalConfig(
    percent_of_sales=0.15  # 15% - lean operations
)

Conservative working capital:

wc_config = WorkingCapitalConfig(
    percent_of_sales=0.30  # 30% - higher inventory/receivables
)

Note

Higher working capital requirements reduce available cash for growth investments but provide operational cushion.

percent_of_sales: float
classmethod validate_working_capital(v: float) float[source]

Validate working capital percentage.

Parameters:

v (float) – Working capital percentage to validate (as decimal).

Returns:

The validated working capital percentage.

Return type:

float

Raises:

ValueError – If working capital percentage exceeds 50% of sales, which would indicate severe operational inefficiency.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class GrowthConfig(**data: Any) None[source]

Bases: BaseModel

Growth model parameters.

Configures whether the simulation uses deterministic or stochastic growth models, along with the associated parameters. Stochastic models add realistic business volatility to growth trajectories.

type

Growth model type - ‘deterministic’ for fixed growth or ‘stochastic’ for random variation.

annual_growth_rate

Base annual growth rate (e.g., 0.05 for 5%). Can be negative for declining businesses.

volatility

Growth rate volatility (standard deviation) for stochastic models. Zero for deterministic models.

Examples

Stable growth:

growth = GrowthConfig(
    type='deterministic',
    annual_growth_rate=0.03  # 3% steady growth
)

Volatile growth:

growth = GrowthConfig(
    type='stochastic',
    annual_growth_rate=0.05,  # 5% expected
    volatility=0.15           # 15% std dev
)

Note

Stochastic growth uses geometric Brownian motion to model realistic business volatility patterns.
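
As a reference for what this implies, a minimal sketch of one GBM growth step (illustrative only; the package's internal implementation may differ):

import numpy as np

rng = np.random.default_rng(42)
mu, sigma, dt = 0.05, 0.15, 1.0  # annual drift, volatility, one-year step

# GBM multiplicative growth factor for one period
factor = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
assets = 10_000_000 * factor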

type: Literal['deterministic', 'stochastic']
annual_growth_rate: float
volatility: float
validate_stochastic_params()[source]

Ensure volatility is set for stochastic models.

Returns:

The validated config object.

Return type:

GrowthConfig

Raises:

ValueError – If stochastic model is selected but volatility is zero, which would make it effectively deterministic.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class DebtConfig(**data: Any) None[source]

Bases: BaseModel

Debt financing parameters for insurance claims.

Configures debt financing options and constraints for handling large insurance claims and maintaining liquidity. Companies may need to borrow to cover deductibles or claims exceeding insurance limits.

interest_rate

Annual interest rate on debt (e.g., 0.05 for 5%).

max_leverage_ratio

Maximum debt-to-equity ratio allowed. Higher ratios increase financial risk.

minimum_cash_balance

Minimum cash balance to maintain for operations.

Examples

Conservative debt policy:

debt = DebtConfig(
    interest_rate=0.04,        # 4% borrowing cost
    max_leverage_ratio=1.0,    # Max 1:1 debt/equity
    minimum_cash_balance=1_000_000
)

Aggressive leverage:

debt = DebtConfig(
    interest_rate=0.06,        # Higher rate for risk
    max_leverage_ratio=3.0,    # 3:1 leverage allowed
    minimum_cash_balance=500_000
)

Note

Higher leverage increases return on equity but also increases bankruptcy risk during adverse claim events.

interest_rate: float
max_leverage_ratio: float
minimum_cash_balance: float
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class SimulationConfig(**data: Any) None[source]

Bases: BaseModel

Simulation execution parameters.

Controls how the simulation runs, including time resolution, horizon, and randomization settings. These parameters affect computational performance and result granularity.

time_resolution

Simulation time step - ‘annual’ or ‘monthly’. Monthly provides more granularity but increases computation.

time_horizon_years

Simulation horizon in years. Longer horizons reveal ergodic properties but require more computation.

max_horizon_years

Maximum supported horizon to prevent excessive memory usage.

random_seed

Random seed for reproducibility. None for random.

fiscal_year_end

Month of fiscal year end (1-12). Default is 12 (December) for calendar year alignment. Set to 6 for June, 3 for March, etc. to match different fiscal calendars.

Examples

Quick test simulation:

sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=10,
    random_seed=42  # Reproducible
)

Long-term ergodic analysis:

sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=500,
    max_horizon_years=1000,
    random_seed=None  # Random each run
)

Non-calendar fiscal year:

sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=50,
    fiscal_year_end=6  # June fiscal year end
)

Note

For ergodic analysis, horizons of 100+ years are recommended to observe long-term time averages.

time_resolution: Literal['annual', 'monthly']
time_horizon_years: int
max_horizon_years: int
random_seed: int | None
fiscal_year_end: int
validate_horizons()[source]

Ensure time horizon doesn’t exceed maximum.

Returns:

The validated config object.

Return type:

SimulationConfig

Raises:

ValueError – If time horizon exceeds maximum allowed value, preventing potential memory issues.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class OutputConfig(**data: Any) None[source]

Bases: BaseModel

Output and results configuration.

Controls where and how simulation results are saved, including file formats and checkpoint frequencies.

output_directory: str
file_format: Literal['csv', 'parquet', 'json']
checkpoint_frequency: int
detailed_metrics: bool
property output_path: Path

Get output directory as Path object.

Returns:

Path object for the output directory.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class LoggingConfig(**data: Any) None[source]

Bases: BaseModel

Logging configuration.

Controls logging behavior including level, output destinations, and message formatting.

enabled: bool
level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR']
log_file: str | None
console_output: bool
format: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class Config(**data: Any) None[source]

Bases: BaseModel

Complete configuration for the Ergodic Insurance simulation.

This is the main configuration class that combines all sub-configurations and provides methods for loading, saving, and manipulating configurations.

All sub-configs have sensible defaults, so Config() with no arguments creates a valid configuration for a $10M widget manufacturer.

Examples

Minimal usage:

config = Config()

Override specific parameters:

config = Config(
    manufacturer=ManufacturerConfig(initial_assets=20_000_000)
)

From basic company info:

config = Config.from_company(initial_assets=50_000_000, operating_margin=0.12)

manufacturer: ManufacturerConfig
working_capital: WorkingCapitalConfig
growth: GrowthConfig
debt: DebtConfig
simulation: SimulationConfig
output: OutputConfig
logging: LoggingConfig
classmethod from_company(initial_assets: float = 10000000, operating_margin: float = 0.08, industry: str = 'manufacturing', tax_rate: float = 0.25, growth_rate: float = 0.05, time_horizon_years: int = 50, **kwargs) Config[source]

Create a Config from basic company information.

This factory derives reasonable sub-config defaults from a small number of intuitive business parameters, so actuaries and risk managers can get started quickly without understanding every sub-config class.

Parameters:
  • initial_assets (float) – Starting asset value in dollars.

  • operating_margin (float) – Base operating margin (e.g. 0.08 for 8%).

  • industry (str) – Industry type for deriving defaults. Supported values: “manufacturing”, “service”, “retail”.

  • tax_rate (float) – Corporate tax rate.

  • growth_rate (float) – Annual growth rate.

  • time_horizon_years (int) – Simulation horizon in years.

  • **kwargs – Additional overrides passed to sub-configs.

Return type:

Config

Returns:

Config object with parameters derived from company info.

Examples

Minimal:

config = Config.from_company(initial_assets=50_000_000)

With industry defaults:

config = Config.from_company(
    initial_assets=25_000_000,
    operating_margin=0.15,
    industry="service",
)

classmethod from_yaml(path: Path) Config[source]

Load configuration from YAML file.

Parameters:

path (Path) – Path to YAML configuration file.

Return type:

Config

Returns:

Config object with validated parameters.

Raises:
  • FileNotFoundError – If config file doesn’t exist.

  • ValidationError – If configuration is invalid.

classmethod from_dict(data: dict, base_config: Config | None = None) Config[source]

Create config from dictionary, optionally overriding base config.

Parameters:
  • data (dict) – Dictionary with configuration parameters.

  • base_config (Optional[Config]) – Optional base configuration to override.

Return type:

Config

Returns:

Config object with validated parameters.

override(**kwargs) Config[source]

Create a new config with overridden parameters.

Parameters:

**kwargs – Parameters to override in dot notation, e.g., manufacturer__operating_margin=0.1.

Return type:

Config

Returns:

New Config object with overrides applied.
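
For example, overriding nested parameters with the double-underscore notation described above:

base = Config()
custom = base.override(
    manufacturer__initial_assets=15_000_000,
    simulation__time_horizon_years=100,
)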

to_yaml(path: Path) None[source]

Save configuration to YAML file.

Parameters:

path (Path) – Path where to save the configuration.

Return type:

None
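
A round trip between to_yaml() and from_yaml() preserves the validated configuration:

from pathlib import Path

config = Config()
config.to_yaml(Path("my_config.yaml"))
restored = Config.from_yaml(Path("my_config.yaml"))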

setup_logging() None[source]

Configure logging based on settings.

Sets up logging handlers for console and/or file output based on the logging configuration.

Return type:

None

validate_paths() None[source]

Create output directories if they don’t exist.

Ensures that the configured output directory exists, creating it if necessary.

Return type:

None

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PricingScenario(**data: Any) None[source]

Bases: BaseModel

Individual market pricing scenario configuration.

Represents a specific market condition (soft/normal/hard) with associated pricing parameters and market characteristics.

name: str
description: str
market_condition: Literal['soft', 'normal', 'hard']
primary_layer_rate: float
first_excess_rate: float
higher_excess_rate: float
capacity_factor: float
competition_level: Literal['low', 'moderate', 'high']
retention_discount: float
volume_discount: float
loss_ratio_target: float
expense_ratio: float
new_business_appetite: Literal['restrictive', 'selective', 'aggressive']
renewal_retention_focus: Literal['low', 'balanced', 'high']
coverage_enhancement_willingness: Literal['low', 'moderate', 'high']
validate_rate_ordering() PricingScenario[source]

Ensure premium rates follow expected ordering.

Primary-layer rates should exceed first-excess rates, and first-excess rates should exceed higher-excess rates.

Return type:

PricingScenario

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class TransitionProbabilities(**data: Any) None[source]

Bases: BaseModel

Market state transition probabilities.

soft_to_soft: float
soft_to_normal: float
soft_to_hard: float
normal_to_soft: float
normal_to_normal: float
normal_to_hard: float
hard_to_soft: float
hard_to_normal: float
hard_to_hard: float
validate_probabilities() TransitionProbabilities[source]

Ensure transition probabilities sum to 1.0 for each state.

Return type:

TransitionProbabilities
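
Each market state's outgoing probabilities form one row of a 3x3 transition matrix and must sum to 1.0. A sketch with illustrative (not calibrated) values:

probs = TransitionProbabilities(
    soft_to_soft=0.6, soft_to_normal=0.3, soft_to_hard=0.1,
    normal_to_soft=0.2, normal_to_normal=0.6, normal_to_hard=0.2,
    hard_to_soft=0.1, hard_to_normal=0.4, hard_to_hard=0.5,
)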

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class MarketCycles(**data: Any) None[source]

Bases: BaseModel

Market cycle configuration and dynamics.

average_duration_years: float
soft_market_duration: float
normal_market_duration: float
hard_market_duration: float
transition_probabilities: TransitionProbabilities
validate_cycle_duration() MarketCycles[source]

Validate that cycle durations are reasonable.

Return type:

MarketCycles

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PricingScenarioConfig(**data: Any) None[source]

Bases: BaseModel

Complete pricing scenario configuration.

Contains all market scenarios and cycle dynamics for insurance pricing sensitivity analysis.

scenarios: Dict[str, PricingScenario]
market_cycles: MarketCycles
get_scenario(scenario_name: str) PricingScenario[source]

Get a specific pricing scenario by name.

Parameters:

scenario_name (str) – Name of the scenario to retrieve

Return type:

PricingScenario

Returns:

PricingScenario configuration

Raises:

KeyError – If scenario_name not found

get_rate_multiplier(from_scenario: str, to_scenario: str) float[source]

Calculate rate change multiplier between scenarios.

Parameters:
  • from_scenario (str) – Starting scenario name

  • to_scenario (str) – Target scenario name

Return type:

float

Returns:

Multiplier for premium rates when transitioning

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ProfileMetadata(**data: Any) None[source]

Bases: BaseModel

Metadata for configuration profiles.

name: str
description: str
version: str
extends: str | None
includes: List[str]
presets: Dict[str, str]
author: str | None
created: datetime | None
tags: List[str]
classmethod validate_name(v: str) str[source]

Ensure profile name is valid.

Parameters:

v (str) – Profile name to validate.

Return type:

str

Returns:

Validated profile name.

Raises:

ValueError – If name contains invalid characters.

classmethod validate_version(v: str) str[source]

Validate semantic version string.

Parameters:

v (str) – Version string to validate.

Return type:

str

Returns:

Validated version string.

Raises:

ValueError – If version format is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class InsuranceLayerConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for a single insurance layer.

name: str
limit: float
attachment: float
base_premium_rate: float
reinstatements: int
aggregate_limit: float | None
limit_type: str
per_occurrence_limit: float | None
validate_layer_structure()[source]

Ensure layer structure is valid.

Returns:

Validated layer config.

Raises:

ValueError – If layer structure is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class InsuranceConfig(**data: Any) None[source]

Bases: BaseModel

Enhanced insurance configuration.

enabled: bool
layers: List[InsuranceLayerConfig]
deductible: float
coinsurance: float
waiting_period_days: int
claims_handling_cost: float
validate_layers()[source]

Ensure layers don’t overlap and are properly ordered.

Returns:

Validated insurance config.

Raises:

ValueError – If layers overlap or are misordered.
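
A sketch of a non-overlapping two-layer tower (field names follow the class definitions above; the limit_type string and the need to pass every field explicitly are assumptions):

primary = InsuranceLayerConfig(
    name="primary", limit=5_000_000, attachment=0.0,
    base_premium_rate=0.015, reinstatements=0, aggregate_limit=None,
    limit_type="per-occurrence",  # accepted values are an assumption
    per_occurrence_limit=None,
)
excess = InsuranceLayerConfig(
    name="first_excess", limit=20_000_000, attachment=5_000_000,  # attaches where primary exhausts
    base_premium_rate=0.008, reinstatements=1, aggregate_limit=None,
    limit_type="per-occurrence",
    per_occurrence_limit=None,
)
insurance = InsuranceConfig(
    enabled=True, layers=[primary, excess], deductible=100_000,
    coinsurance=0.0, waiting_period_days=0, claims_handling_cost=0.0,
)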

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class LossDistributionConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for loss distributions.

frequency_distribution: str
frequency_annual: float
severity_distribution: str
severity_mean: float
severity_std: float
correlation_factor: float
tail_alpha: float
classmethod validate_frequency_dist(v: str) str[source]

Validate frequency distribution type.

Parameters:

v (str) – Distribution type.

Return type:

str

Returns:

Validated distribution type.

Raises:

ValueError – If distribution type is invalid.

classmethod validate_severity_dist(v: str) str[source]

Validate severity distribution type.

Parameters:

v (str) – Distribution type.

Return type:

str

Returns:

Validated distribution type.

Raises:

ValueError – If distribution type is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ModuleConfig(**data: Any) None[source]

Bases: BaseModel

Base class for configuration modules.

module_name: str
module_version: str
dependencies: List[str]
model_config: ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PresetConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for a preset.

preset_name: str
preset_type: str
description: str
parameters: Dict[str, Any]
classmethod validate_preset_type(v: str) str[source]

Validate preset type.

Parameters:

v (str) – Preset type.

Return type:

str

Returns:

Validated preset type.

Raises:

ValueError – If preset type is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class WorkingCapitalRatiosConfig(**data: Any) None[source]

Bases: BaseModel

Enhanced working capital configuration with detailed component ratios.

This extends the basic WorkingCapitalConfig to provide detailed control over individual working capital components using standard financial ratios.

days_sales_outstanding: float
days_inventory_outstanding: float
days_payable_outstanding: float
validate_cash_conversion_cycle()[source]

Validate that cash conversion cycle is reasonable.
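
For reference, the cash conversion cycle is conventionally computed as days_sales_outstanding + days_inventory_outstanding - days_payable_outstanding; a longer cycle ties up more working capital.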

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ExpenseRatioConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for expense categorization and allocation.

Defines how revenue translates to expenses with proper GAAP categorization between COGS and operating expenses (SG&A).

Issue #255: COGS and SG&A breakdown ratios are now configurable to allow the Manufacturer to calculate these values explicitly, rather than having the Reporting layer estimate them with hardcoded ratios.

gross_margin_ratio: float
sga_expense_ratio: float
manufacturing_depreciation_allocation: float
admin_depreciation_allocation: float
direct_materials_ratio: float
direct_labor_ratio: float
manufacturing_overhead_ratio: float
selling_expense_ratio: float
general_admin_ratio: float
validate_depreciation_allocation()[source]

Ensure depreciation allocations sum to 100%.

validate_cogs_breakdown()[source]

Ensure COGS breakdown ratios sum to 100%.

validate_sga_breakdown()[source]

Ensure SG&A breakdown ratios sum to 100%.

property cogs_ratio: float

Calculate COGS as percentage of revenue.

property operating_margin_ratio: float

Calculate operating margin after all operating expenses.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class DepreciationConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for depreciation and amortization tracking.

Defines how fixed assets depreciate and prepaid expenses amortize over time.

ppe_useful_life_years: float
prepaid_insurance_amortization_months: int
initial_accumulated_depreciation: float
property annual_depreciation_rate: float

Calculate annual depreciation rate.

property monthly_insurance_amortization_rate: float

Calculate monthly insurance amortization rate.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ExcelReportConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for Excel report generation.

enabled: bool
output_path: str
include_balance_sheet: bool
include_income_statement: bool
include_cash_flow: bool
include_reconciliation: bool
include_metrics_dashboard: bool
include_pivot_data: bool
engine: str
currency_format: str
decimal_places: int
date_format: str
classmethod validate_engine(v: str) str[source]

Validate Excel engine selection.

Parameters:

v (str) – Engine name to validate.

Return type:

str

Returns:

Validated engine name.

Raises:

ValueError – If engine is not valid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class IndustryConfig(industry_type: str = 'manufacturing', days_sales_outstanding: float = 45, days_inventory_outstanding: float = 60, days_payables_outstanding: float = 30, gross_margin: float = 0.35, operating_expense_ratio: float = 0.25, current_asset_ratio: float = 0.4, ppe_ratio: float = 0.5, intangible_ratio: float = 0.1, ppe_useful_life: int = 10, depreciation_method: str = 'straight_line') None[source]

Bases: object

Base configuration for different industry types.

This class defines industry-specific financial parameters that determine how businesses operate, including working capital needs, margin structures, asset composition, and depreciation policies.

industry_type

Name of the industry (e.g., ‘manufacturing’, ‘services’)

Working capital ratios
days_sales_outstanding

Average collection period for receivables (days)

days_inventory_outstanding

Average inventory holding period (days)

days_payables_outstanding

Average payment period to suppliers (days)

Margin structure
gross_margin

Gross profit as percentage of revenue

operating_expense_ratio

Operating expenses as percentage of revenue

Asset composition
current_asset_ratio

Current assets as fraction of total assets

ppe_ratio

Property, Plant & Equipment as fraction of total assets

intangible_ratio

Intangible assets as fraction of total assets

Depreciation
ppe_useful_life

Average useful life of PP&E in years

depreciation_method

Method for calculating depreciation

industry_type: str = 'manufacturing'
days_sales_outstanding: float = 45
days_inventory_outstanding: float = 60
days_payables_outstanding: float = 30
gross_margin: float = 0.35
operating_expense_ratio: float = 0.25
current_asset_ratio: float = 0.4
ppe_ratio: float = 0.5
intangible_ratio: float = 0.1
ppe_useful_life: int = 10
depreciation_method: str = 'straight_line'
__post_init__()[source]

Validate configuration after initialization.

validate()[source]

Validate that all parameters are within reasonable bounds.

property working_capital_days: float

Calculate net working capital cycle in days.

property operating_margin: float

Calculate operating margin (EBIT margin).

class ManufacturingConfig(**kwargs: Any) None[source]

Bases: IndustryConfig

Configuration for manufacturing companies.

Manufacturing businesses typically have:
  • Significant inventory holdings

  • Moderate to high PP&E requirements

  • Working capital needs for raw materials and WIP

  • Gross margins of 25-40%

class ServiceConfig(**kwargs: Any) None[source]

Bases: IndustryConfig

Configuration for service companies.

Service businesses typically have:
  • Minimal or no inventory

  • Lower PP&E requirements

  • Faster cash conversion cycles

  • Higher gross margins but also higher operating expenses

class RetailConfig(**kwargs: Any) None[source]

Bases: IndustryConfig

Configuration for retail companies.

Retail businesses typically have:
  • High inventory turnover

  • Moderate PP&E (stores, fixtures)

  • Fast cash collection (often immediate)

  • Lower gross margins but efficient operations

class ConfigV2(**data: Any) None[source]

Bases: BaseModel

Enhanced unified configuration model for the 3-tier system.

profile: ProfileMetadata
manufacturer: ManufacturerConfig
working_capital: WorkingCapitalConfig
growth: GrowthConfig
debt: DebtConfig
simulation: SimulationConfig
output: OutputConfig
logging: LoggingConfig
insurance: InsuranceConfig | None
losses: LossDistributionConfig | None
excel_reporting: ExcelReportConfig | None
working_capital_ratios: WorkingCapitalRatiosConfig | None
expense_ratios: ExpenseRatioConfig | None
depreciation: DepreciationConfig | None
industry_config: IndustryConfig | None
custom_modules: Dict[str, ModuleConfig]
applied_presets: List[str]
overrides: Dict[str, Any]
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
classmethod from_profile(profile_path: Path) ConfigV2[source]

Load configuration from a profile file.

Parameters:

profile_path (Path) – Path to the profile YAML file.

Return type:

ConfigV2

Returns:

Loaded and validated ConfigV2 instance.

Raises:
  • FileNotFoundError – If profile file doesn’t exist.

  • ValidationError – If configuration is invalid.

classmethod with_inheritance(profile_path: Path, config_dir: Path) ConfigV2[source]

Load configuration with profile inheritance.

Parameters:
  • profile_path (Path) – Path to the profile YAML file.

  • config_dir (Path) – Root configuration directory.

Return type:

ConfigV2

Returns:

Loaded ConfigV2 with inheritance applied.

apply_module(module_path: Path) None[source]

Apply a configuration module.

Parameters:

module_path (Path) – Path to the module YAML file.

Return type:

None

apply_preset(preset_name: str, preset_data: Dict[str, Any]) None[source]

Apply a preset to the configuration.

Parameters:
  • preset_name (str) – Name of the preset.

  • preset_data (Dict[str, Any]) – Preset parameters to apply.

Return type:

None

with_overrides(**kwargs) ConfigV2[source]

Create a new config with runtime overrides.

Parameters:

**kwargs – Override parameters in format section__field=value.

Return type:

ConfigV2

Returns:

New ConfigV2 instance with overrides applied.

validate_completeness() List[str][source]

Validate configuration completeness.

Return type:

List[str]

Returns:

List of missing or invalid configuration items.

class PresetLibrary(**data: Any) None[source]

Bases: BaseModel

Collection of presets for a specific type.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

library_type: str
description: str
presets: Dict[str, PresetConfig]
classmethod from_yaml(path: Path) PresetLibrary[source]

Load preset library from YAML file.

Parameters:

path (Path) – Path to preset library YAML file.

Return type:

PresetLibrary

Returns:

Loaded PresetLibrary instance.

ergodic_insurance.config_compat module

Backward compatibility layer for the legacy configuration system.

This module provides adapters and shims to ensure existing code continues to work while transitioning to the new 3-tier configuration system.

class LegacyConfigAdapter[source]

Bases: object

Adapter to support old ConfigLoader interface using new ConfigManager.

load(config_name: str = 'baseline', override_params: Dict[str, Any] | None = None, **kwargs) Config[source]

Load configuration using legacy interface.

Parameters:
  • config_name (str) – Legacy configuration name.

  • override_params (Optional[Dict[str, Any]]) – Dictionary of override parameters.

  • **kwargs – Additional override parameters.

Return type:

Config

Returns:

Config object for backward compatibility.

load_config(config_path: str | Path | None = None, config_name: str = 'baseline', **overrides) Config[source]

Alternative legacy loading method.

Parameters:
  • config_path (Union[str, Path, None]) – Path to configuration file (ignored, for compatibility).

  • config_name (str) – Configuration name.

  • **overrides – Override parameters.

Return type:

Config

Returns:

Config object.

load_config(config_name: str = 'baseline', override_params: Dict[str, Any] | None = None, **kwargs) Config[source]

Legacy function interface for loading configurations.

Parameters:
  • config_name (str) – Configuration name.

  • override_params (Optional[Dict[str, Any]]) – Override parameters.

  • **kwargs – Additional overrides.

Return type:

Config

Returns:

Config object.

migrate_config_usage(file_path: Path) None[source]

Helper to migrate old config usage in a Python file.

Parameters:

file_path (Path) – Path to Python file to migrate.

Return type:

None

class ConfigTranslator[source]

Bases: object

Utilities for translating between old and new configuration formats.

static legacy_to_v2(legacy_config: Config) Dict[str, Any][source]

Convert legacy Config to ConfigV2 format.

Parameters:

legacy_config (Config) – Legacy configuration object.

Return type:

Dict[str, Any]

Returns:

Dictionary suitable for ConfigV2 initialization.

static v2_to_legacy(config_v2: ConfigV2) Dict[str, Any][source]

Convert ConfigV2 to legacy Config format.

Parameters:

config_v2 (ConfigV2) – New format configuration.

Return type:

Dict[str, Any]

Returns:

Dictionary suitable for legacy Config initialization.

static validate_translation(original: Config | ConfigV2, translated: Config | ConfigV2) bool[source]

Validate that translation preserved essential data.

Parameters:
  • original (Union[Config, ConfigV2]) – Original configuration before translation.

  • translated (Union[Config, ConfigV2]) – Translated configuration to check against the original.

Return type:

bool

Returns:

True if translation is valid.

ergodic_insurance.config_loader module

Configuration loader with validation and override support.

This module provides utilities for loading, validating, and managing configuration files, with support for caching, overrides, and scenario-based configurations.

NOTE: This module now uses the new ConfigManager through the compatibility layer. It maintains the same interface for backward compatibility.

class ConfigLoader(config_dir: Path | None = None)[source]

Bases: object

Handles loading and managing configuration.

A comprehensive configuration management system that supports YAML file loading, validation, caching, and runtime overrides.

NOTE: This class now delegates to LegacyConfigAdapter for backward compatibility while using the new ConfigManager internally.

DEFAULT_CONFIG_DIR = PosixPath('.../ergodic_insurance/data/parameters')
DEFAULT_CONFIG_FILE = 'baseline.yaml'
load(config_name: str = 'baseline', overrides: Dict[str, Any] | None = None, **kwargs: Any) Config[source]

Load configuration with optional overrides.

Parameters:
  • config_name (str) – Name of config file (without .yaml extension) or full path to config file.

  • overrides (Optional[Dict[str, Any]]) – Dictionary of overrides to apply.

  • **kwargs (Any) – Additional overrides in dot notation (e.g., manufacturer__operating_margin=0.1).

Return type:

Config

Returns:

Loaded and validated configuration.

Raises:
  • FileNotFoundError – If config file doesn’t exist.

  • ValidationError – If configuration is invalid.
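
Typical usage, combining a named configuration with a keyword override:

from ergodic_insurance.config_loader import ConfigLoader

loader = ConfigLoader()
config = loader.load("baseline", manufacturer__tax_rate=0.21)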

load_scenario(scenario: str, overrides: Dict[str, Any] | None = None, **kwargs: Any) Config[source]

Load a predefined scenario configuration.

Parameters:
  • scenario (str) – Scenario name (“baseline”, “conservative”, “optimistic”).

  • overrides (Optional[Dict[str, Any]]) – Dictionary of overrides to apply.

  • **kwargs (Any) – Additional overrides in dot notation.

Return type:

Config

Returns:

Loaded and validated configuration.

Raises:

ValueError – If scenario is not recognized.

compare_configs(config1: str | Config, config2: str | Config) Dict[str, Any][source]

Compare two configurations and return differences.

Parameters:
  • config1 (Union[str, Config]) – First config (name or Config object).

  • config2 (Union[str, Config]) – Second config (name or Config object).

Return type:

Dict[str, Any]

Returns:

Dictionary of differences between configurations.

validate_config(config: str | Config) bool[source]

Validate a configuration.

Parameters:

config (Union[str, Config]) – Configuration to validate (name or Config object).

Return type:

bool

Returns:

True if valid, raises exception otherwise.

Raises:

ValidationError – If configuration is invalid.

load_pricing_scenarios(scenario_file: str = 'insurance_pricing_scenarios') PricingScenarioConfig[source]

Load pricing scenario configuration.

Parameters:

scenario_file (str) – Name of scenario file (without .yaml extension) or full path to scenario file.

Return type:

PricingScenarioConfig

Returns:

Loaded and validated pricing scenario configuration.

Raises:
  • FileNotFoundError – If scenario file not found.

  • ValidationError – If scenario data is invalid.

switch_pricing_scenario(config: Config, scenario_name: str) Config[source]

Switch to a different pricing scenario.

Updates the configuration’s insurance parameters to use rates from the specified pricing scenario.

Parameters:
  • config (Config) – Current configuration

  • scenario_name (str) – Name of scenario to switch to (inexpensive/baseline/expensive)

Return type:

Config

Returns:

Updated configuration with new pricing scenario

list_available_configs() list[str][source]

List all available configuration files.

Return type:

list[str]

Returns:

List of configuration file names (without .yaml extension).

clear_cache() None[source]

Clear the configuration cache.

Removes all cached configurations, forcing fresh loads on subsequent requests.

Return type:

None

load_config(config_name: str = 'baseline', overrides: Dict[str, Any] | None = None, **kwargs: Any) Config[source]

Quick helper to load a configuration.

Parameters:
  • config_name (str) – Name of config file or full path.

  • overrides (Optional[Dict[str, Any]]) – Dictionary of overrides.

  • **kwargs (Any) – Keyword overrides in dot notation.

Return type:

Config

Returns:

Loaded configuration.

ergodic_insurance.config_manager module

Configuration manager for the new 3-tier configuration system.

This module provides the main interface for loading and managing configurations using profiles, modules, and presets. It implements a modern configuration architecture that supports inheritance, composition, and runtime overrides.

The configuration system is organized into three tiers:
  1. Profiles: Complete configuration sets (default, conservative, aggressive)

  2. Modules: Reusable components (insurance, losses, stochastic, business)

  3. Presets: Quick-apply templates (market conditions, layer structures)

Example

Basic usage of ConfigManager:

from ergodic_insurance.config_manager import ConfigManager

# Initialize manager
manager = ConfigManager()

# Load a profile
config = manager.load_profile("default")

# Load with overrides
config = manager.load_profile(
    "conservative",
    manufacturer={"base_operating_margin": 0.12},
    growth={"annual_growth_rate": 0.08}
)

# Apply presets
config = manager.load_profile(
    "default",
    presets=["hard_market", "high_volatility"]
)

Note

This module replaces the legacy ConfigLoader and provides full backward compatibility through the config_compat module.

class ConfigManager(config_dir: Path | None = None)[source]

Bases: object

Manages configuration loading with profiles, modules, and presets.

This class provides a comprehensive configuration management system that supports profile inheritance, module composition, preset application, and runtime parameter overrides. It includes caching for performance and validation for correctness.

config_dir

Root configuration directory path

profiles_dir

Directory containing profile configurations

modules_dir

Directory containing module configurations

presets_dir

Directory containing preset libraries

_cache

Internal cache for loaded configurations

_preset_libraries

Cached preset library definitions

Example

Loading configurations with various options:

manager = ConfigManager()

# Simple profile load
config = manager.load_profile("default")

# With module selection
config = manager.load_profile(
    "default",
    modules=["insurance", "stochastic"]
)

# With inheritance chain
config = manager.load_profile("custom/client_abc")
load_profile(profile_name: str = 'default', use_cache: bool = True, **overrides) ConfigV2[source]

Load a configuration profile with optional overrides.

This method loads a configuration profile, applies any inheritance chain, includes specified modules, applies presets, and finally applies runtime overrides. The result is cached for performance.

Parameters:
  • profile_name (str) – Name of the profile to load. Can be a simple name (e.g., “default”) or a path to custom profiles (e.g., “custom/client_abc”).

  • use_cache (bool) – Whether to use cached configurations. Set to False when configuration files might have changed during runtime.

  • **overrides – Runtime overrides organized by section. Supports:
    - modules: List of module names to include
    - presets: List of preset names to apply
    - Any configuration section with nested parameters

Returns:

Fully loaded, validated, and merged configuration instance.

Return type:

ConfigV2

Raises:
  • FileNotFoundError – If the specified profile doesn’t exist.

  • ValueError – If configuration validation fails.

  • yaml.YAMLError – If YAML parsing fails.

Example

Various ways to load profiles:

# Basic load
config = manager.load_profile("default")

# With overrides
config = manager.load_profile(
    "conservative",
    manufacturer={"base_operating_margin": 0.12},
    simulation={"time_horizon_years": 50}
)

# With presets and modules
config = manager.load_profile(
    "default",
    modules=["insurance", "stochastic"],
    presets=["hard_market"]
)

with_preset(config: ConfigV2, preset_type: str, preset_name: str) ConfigV2[source]

Create a new configuration with a preset applied.

Parameters:
  • config (ConfigV2) – Base configuration.

  • preset_type (str) – Type of preset.

  • preset_name (str) – Name of the preset.

Return type:

ConfigV2

Returns:

New ConfigV2 instance with preset applied.

with_overrides(config: ConfigV2, **overrides) ConfigV2[source]

Create a new configuration with runtime overrides.

Parameters:
  • config (ConfigV2) – Base configuration.

  • **overrides – Override parameters.

Return type:

ConfigV2

Returns:

New ConfigV2 instance with overrides applied.

validate(config: ConfigV2) List[str][source]

Validate a configuration for completeness and consistency.

Parameters:

config (ConfigV2) – Configuration to validate.

Return type:

List[str]

Returns:

List of validation issues, empty if valid.

list_profiles() List[str][source]

List all available configuration profiles.

Return type:

List[str]

Returns:

List of profile names.

list_modules() List[str][source]

List all available configuration modules.

Return type:

List[str]

Returns:

List of module names.

list_presets() Dict[str, List[str]][source]

List all available presets by type.

Return type:

Dict[str, List[str]]

Returns:

Dictionary mapping preset types to list of preset names.

clear_cache() None[source]

Clear the configuration cache.

Return type:

None

get_profile_metadata(profile_name: str) Dict[str, Any][source]

Get metadata for a profile without loading the full configuration.

Parameters:

profile_name (str) – Name of the profile.

Return type:

Dict[str, Any]

Returns:

Profile metadata dictionary.

create_profile(name: str, description: str, base_profile: str = 'default', custom: bool = True, **config_params) Path[source]

Create a new configuration profile.

Parameters:
  • name (str) – Profile name.

  • description (str) – Profile description.

  • base_profile (str) – Profile to extend from.

  • custom (bool) – Whether to save as custom profile.

  • **config_params – Configuration parameters.

Return type:

Path

Returns:

Path to the created profile file.

ergodic_insurance.config_migrator module

Configuration migration tools for converting legacy YAML files to new 3-tier system.

This module provides utilities to migrate from the old 12-file configuration system to the new profiles/modules/presets architecture.

class ConfigMigrator[source]

Bases: object

Handles migration from legacy configuration to new 3-tier system.

convert_baseline() Dict[str, Any][source]

Convert baseline.yaml to new profile format.

Return type:

Dict[str, Any]

Returns:

Converted configuration as a dictionary.

convert_conservative() Dict[str, Any][source]

Convert conservative.yaml to new profile format.

Return type:

Dict[str, Any]

Returns:

Converted configuration as a dictionary.

convert_optimistic() Dict[str, Any][source]

Convert optimistic.yaml to new profile format.

Return type:

Dict[str, Any]

Returns:

Converted configuration as a dictionary.

extract_modules() None[source]

Extract reusable modules from legacy configuration files.

Return type:

None

create_presets() None[source]

Generate preset libraries from existing configurations.

Return type:

None

validate_migration() bool[source]

Validate that all configurations were successfully migrated.

Return type:

bool

Returns:

True if validation passes, False otherwise.

generate_migration_report() str[source]

Generate a detailed migration report.

Return type:

str

Returns:

Formatted migration report as a string.

run_migration() bool[source]

Run the complete migration process.

Return type:

bool

Returns:

True if migration successful, False otherwise.

ergodic_insurance.convergence module

Convergence diagnostics for Monte Carlo simulations.

This module provides tools for assessing convergence of Monte Carlo simulations including Gelman-Rubin R-hat, effective sample size, and Monte Carlo standard error.

class ConvergenceStats(r_hat: float, ess: float, mcse: float, converged: bool, n_iterations: int, autocorrelation: float) None[source]

Bases: object

Container for convergence statistics.

r_hat: float
ess: float
mcse: float
converged: bool
n_iterations: int
autocorrelation: float
__str__() str[source]

String representation of convergence stats.

Return type:

str

class ConvergenceDiagnostics(r_hat_threshold: float = 1.1, min_ess: int = 1000, relative_mcse_threshold: float = 0.05)[source]

Bases: object

Convergence diagnostics for Monte Carlo simulations.

Provides methods for assessing convergence using multiple chains and calculating effective sample sizes.

calculate_r_hat(chains: ndarray) float[source]

Calculate Gelman-Rubin R-hat statistic.

Parameters:

chains (ndarray) – Array of shape (n_chains, n_iterations) or (n_chains, n_iterations, n_metrics)

Return type:

float

Returns:

R-hat statistic (values close to 1 indicate convergence)
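
For reference, a minimal NumPy sketch of the standard Gelman-Rubin computation (illustrative; the package's internal implementation may differ):

import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """R-hat for an array of shape (n_chains, n_iterations)."""
    _, n = chains.shape
    between = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance B
    within = chains.var(axis=1, ddof=1).mean()     # within-chain variance W
    var_hat = (n - 1) / n * within + between / n   # pooled variance estimate
    return float(np.sqrt(var_hat / within))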

calculate_ess(chain: ndarray, max_lag: int | None = None) float[source]

Calculate effective sample size using autocorrelation.

Uses the formula: ESS = N / (1 + 2 * sum(autocorrelations)) where the sum is truncated at the first negative autocorrelation.

Parameters:
  • chain (ndarray) – 1D array of samples

  • max_lag (Optional[int]) – Maximum lag for autocorrelation calculation

Return type:

float

Returns:

Effective sample size
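
A minimal sketch of this truncated-autocorrelation estimate (illustrative; the package may apply additional refinements):

import numpy as np

def ess_truncated(chain: np.ndarray) -> float:
    n = len(chain)
    x = chain - chain.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)  # normalized, acf[0] == 1
    total = 0.0
    for rho in acf[1:]:
        if rho < 0:  # truncate at the first negative autocorrelation
            break
        total += rho
    return n / (1 + 2 * total)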

calculate_batch_ess(chains: ndarray, method: str = 'mean') float | ndarray[source]

Calculate ESS for multiple chains or metrics.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations) or (n_chains, n_iterations, n_metrics)

  • method (str) – How to combine ESS across chains (‘mean’, ‘min’, ‘all’)

Return type:

Union[float, ndarray]

Returns:

Combined ESS value(s)

calculate_ess_per_second(chain: ndarray, computation_time: float) float[source]

Calculate ESS per second of computation.

Useful for comparing efficiency of different sampling methods.

Parameters:
  • chain (ndarray) – 1D array of samples

  • computation_time (float) – Time in seconds taken to generate the chain

Return type:

float

Returns:

ESS per second

calculate_mcse(chain: ndarray, ess: float | None = None) float[source]

Calculate Monte Carlo standard error.

Parameters:
  • chain (ndarray) – 1D array of samples

  • ess (Optional[float]) – Effective sample size (calculated if not provided)

Return type:

float

Returns:

Monte Carlo standard error
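
MCSE is conventionally the sample standard deviation divided by the square root of the effective sample size; a sketch under that assumption:

import numpy as np

chain = np.random.default_rng(0).standard_normal(10_000)  # placeholder samples
ess = 2_500.0                                             # e.g. from calculate_ess
mcse = chain.std(ddof=1) / np.sqrt(ess)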

check_convergence(chains: ndarray | List[ndarray], metric_names: List[str] | None = None) Dict[str, ConvergenceStats][source]

Check convergence for multiple chains and metrics.

Parameters:
  • chains (Union[ndarray, List[ndarray]]) – Array of shape (n_chains, n_iterations, n_metrics) or list of chains

  • metric_names (Optional[List[str]]) – Names of metrics (optional)

Return type:

Dict[str, ConvergenceStats]

Returns:

Dictionary mapping metric names to convergence statistics

geweke_test(chain: ndarray, first_fraction: float = 0.1, last_fraction: float = 0.5) Tuple[float, float][source]

Perform Geweke convergence test.

Compares means of first and last portions of chain.

Parameters:
  • chain (ndarray) – 1D array of samples

  • first_fraction (float) – Fraction of chain to use for first portion

  • last_fraction (float) – Fraction of chain to use for last portion

Return type:

Tuple[float, float]

Returns:

Tuple of (z-score, p-value)
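
A simplified sketch of the Geweke comparison (this version ignores the spectral-density correction to the segment standard errors that careful implementations apply):

import numpy as np
from scipy import stats

def geweke_z(chain: np.ndarray, first_fraction: float = 0.1, last_fraction: float = 0.5):
    n = len(chain)
    a = chain[: int(first_fraction * n)]
    b = chain[int((1 - last_fraction) * n):]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value
    return z, p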

heidelberger_welch_test(chain: ndarray, alpha: float = 0.05) Dict[str, bool | float][source]

Perform Heidelberger-Welch stationarity and halfwidth tests.

Parameters:
  • chain (ndarray) – 1D array of samples

  • alpha (float) – Significance level

Return type:

Dict[str, Union[bool, float]]

Returns:

Dictionary with test results

ergodic_insurance.convergence_advanced module

Advanced convergence diagnostics for Monte Carlo simulations.

This module extends basic convergence diagnostics with advanced features including autocorrelation analysis, spectral density estimation, and sophisticated ESS calculations.

class SpectralDiagnostics(spectral_density: ndarray, frequencies: ndarray, integrated_autocorr_time: float, effective_sample_size: float) None[source]

Bases: object

Container for spectral analysis results.

spectral_density: ndarray
frequencies: ndarray
integrated_autocorr_time: float
effective_sample_size: float
__str__() str[source]

String representation of spectral diagnostics.

Return type:

str

class AutocorrelationAnalysis(acf_values: ndarray, lags: ndarray, integrated_time: float, initial_monotone_sequence: int, initial_positive_sequence: int) None[source]

Bases: object

Container for autocorrelation analysis results.

acf_values: ndarray
lags: ndarray
integrated_time: float
initial_monotone_sequence: int
initial_positive_sequence: int
__str__() str[source]

String representation of autocorrelation analysis.

Return type:

str

class AdvancedConvergenceDiagnostics(fft_size: int | None = None)[source]

Bases: object

Advanced convergence diagnostics for Monte Carlo simulations.

Provides sophisticated methods for assessing convergence including spectral density estimation, multiple ESS calculation methods, and advanced autocorrelation analysis.

calculate_autocorrelation_full(chain: ndarray, max_lag: int | None = None, method: str = 'fft') AutocorrelationAnalysis[source]

Calculate comprehensive autocorrelation analysis.

Parameters:
  • chain (ndarray) – 1D array of samples

  • max_lag (Optional[int]) – Maximum lag for autocorrelation (None for automatic)

  • method (str) – Method for calculation (“fft”, “direct”, or “biased”)

Return type:

AutocorrelationAnalysis

Returns:

AutocorrelationAnalysis object with detailed results

calculate_spectral_density(chain: ndarray, method: str = 'welch', nperseg: int | None = None) SpectralDiagnostics[source]

Calculate spectral density and related diagnostics.

Parameters:
  • chain (ndarray) – 1D array of samples

  • method (str) – Method for spectral estimation (“welch”, “periodogram”, “multitaper”)

  • nperseg (Optional[int]) – Length of each segment for Welch’s method

Return type:

SpectralDiagnostics

Returns:

SpectralDiagnostics object with spectral analysis results

calculate_ess_batch_means(chain: ndarray, batch_size: int | None = None, n_batches: int | None = None) float[source]

Calculate ESS using batch means method.

Parameters:
  • chain (ndarray) – 1D array of samples

  • batch_size (Optional[int]) – Size of each batch (calculated if None)

  • n_batches (Optional[int]) – Number of batches (calculated if None)

Return type:

float

Returns:

Effective sample size estimate

calculate_ess_overlapping_batch(chain: ndarray, batch_size: int | None = None) float[source]

Calculate ESS using overlapping batch means (more efficient).

Parameters:
  • chain (ndarray) – 1D array of samples

  • batch_size (Optional[int]) – Size of each batch (calculated if None)

Return type:

float

Returns:

Effective sample size estimate

heidelberger_welch_advanced(chain: ndarray, alpha: float = 0.05, eps: float = 0.1, pvalue_threshold: float = 0.05) Dict[str, bool | int | float][source]

Advanced Heidelberger-Welch stationarity test.

Parameters:
  • chain (ndarray) – 1D array of samples

  • alpha (float) – Significance level for confidence intervals

  • eps (float) – Relative precision for halfwidth test

  • pvalue_threshold (float) – P-value threshold for stationarity

Return type:

Dict[str, Union[bool, int, float]]

Returns:

Dictionary with detailed test results

raftery_lewis_diagnostic(chain: ndarray, q: float = 0.025, r: float = 0.005, s: float = 0.95) Dict[str, float][source]

Raftery-Lewis diagnostic for required chain length.

Parameters:
  • chain (ndarray) – 1D array of samples

  • q (float) – Quantile of interest

  • r (float) – Desired accuracy

  • s (float) – Probability of achieving accuracy

Return type:

Dict[str, float]

Returns:

Dictionary with diagnostic results

ergodic_insurance.convergence_plots module

Real-time convergence visualization for Monte Carlo simulations.

This module provides real-time plotting capabilities for monitoring convergence during long-running simulations with minimal computational overhead.

class RealTimeConvergencePlotter(n_parameters: int = 1, n_chains: int = 1, buffer_size: int = 1000, update_interval: int = 100, figsize: Tuple[float, float] = (12, 8))[source]

Bases: object

Real-time convergence plotting with minimal overhead.

Provides animated visualization of convergence diagnostics during Monte Carlo simulations with efficient updating mechanisms.

setup_figure(parameter_names: List[str] | None = None, show_diagnostics: bool = True) Figure[source]

Setup the figure with subplots for real-time monitoring.

Parameters:
  • parameter_names (Optional[List[str]]) – Names of parameters being monitored

  • show_diagnostics (bool) – Whether to show diagnostic panels

Return type:

Figure

Returns:

Matplotlib figure object

update_data(iteration: int, chains_data: ndarray, diagnostics: Dict[str, List[float]] | None = None)[source]

Update data buffers with new samples.

Parameters:
  • iteration (int) – Current iteration number

  • chains_data (ndarray) – Array of shape (n_chains, n_parameters)

  • diagnostics (Optional[Dict[str, List[float]]]) – Optional dictionary with R-hat, ESS values

plot_static_convergence(chains: ndarray, burn_in: int | None = None, thin: int = 1) Figure[source]

Create static convergence plots for completed chains.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations, n_parameters)

  • burn_in (Optional[int]) – Burn-in period to highlight

  • thin (int) – Thinning interval for display

Return type:

Figure

Returns:

Figure with convergence plots

plot_ess_evolution(ess_values: List[float] | ndarray, iterations: ndarray | None = None, target_ess: float = 1000) Figure[source]

Plot evolution of effective sample size.

Parameters:
  • ess_values (Union[List[float], ndarray]) – ESS values over iterations

  • iterations (Optional[ndarray]) – Iteration numbers (generated if None)

  • target_ess (float) – Target ESS threshold

Return type:

Figure

Returns:

Figure with ESS evolution plot

plot_autocorrelation_surface(chains: ndarray, max_lag: int = 50, param_idx: int = 0) Figure[source]

Create 3D surface plot of autocorrelation over time.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations, n_parameters)

  • max_lag (int) – Maximum lag for ACF

  • param_idx (int) – Parameter index to plot

Return type:

Figure

Returns:

Figure with 3D autocorrelation surface

create_convergence_dashboard(chains: ndarray, diagnostics: Dict[str, Any], parameter_names: List[str] | None = None) Figure[source]

Create comprehensive convergence dashboard.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations, n_parameters)

  • diagnostics (Dict[str, Any]) – Dictionary with convergence diagnostics

  • parameter_names (Optional[List[str]]) – Names of parameters

Return type:

Figure

Returns:

Figure with comprehensive dashboard

ergodic_insurance.decimal_utils module

Decimal utilities for financial calculations.

This module provides utilities for precise financial calculations using Python’s decimal.Decimal type. Using Decimal instead of float prevents accumulation errors in iterative simulations and ensures accounting identities hold exactly.

Example

Convert a float to decimal for financial use:

from ergodic_insurance.decimal_utils import to_decimal, ZERO

amount = to_decimal(1234.56)
if amount != ZERO:
    print(f"Amount: {amount}")
to_decimal(value: float | int | str | Decimal | None) Decimal[source]

Convert a numeric value to Decimal with proper handling.

Converts floats, ints, strings, or existing Decimals to a standardized Decimal value. Floats are converted via string representation to avoid binary floating point artifacts.

Parameters:

value (Union[float, int, str, Decimal, None]) – Numeric value to convert. None is converted to ZERO.

Return type:

Decimal

Returns:

Decimal representation of the value.

Example

>>> to_decimal(1234.56)
Decimal('1234.56')
>>> to_decimal(None)
Decimal('0.00')
quantize_currency(value: Decimal | float | int) Decimal[source]

Quantize a value to currency precision (2 decimal places).

Rounds using ROUND_HALF_UP, which rounds .5 cases away from zero; this is the convention standard for financial calculations. (Note this differs from banker's rounding, ROUND_HALF_EVEN, which rounds ties to the nearest even digit.)

Parameters:

value (Union[Decimal, float, int]) – Numeric value to quantize.

Return type:

Decimal

Returns:

Decimal rounded to 2 decimal places.

Example

>>> quantize_currency(Decimal("1234.567"))
Decimal('1234.57')
>>> quantize_currency(1234.565)
Decimal('1234.57')
is_zero(value: Decimal | float | int) bool[source]

Check if a value is effectively zero after quantization.

Useful for balance checks where we need exact equality after rounding to currency precision.

Parameters:

value (Union[Decimal, float, int]) – Numeric value to check.

Return type:

bool

Returns:

True if value rounds to zero at currency precision.

Example

>>> is_zero(Decimal("0.001"))
True
>>> is_zero(Decimal("0.01"))
False
sum_decimals(*values: Decimal | float | int) Decimal[source]

Sum multiple values with Decimal precision.

Converts all values to Decimal before summing to maintain precision.

Parameters:

*values (Union[Decimal, float, int]) – Numeric values to sum.

Return type:

Decimal

Returns:

Decimal sum of all values.

Example

>>> sum_decimals(0.1, 0.2, 0.3)
Decimal('0.6')
safe_divide(numerator: Decimal | float | int, denominator: Decimal | float | int, default: Decimal | float | int = Decimal('0.00')) Decimal[source]

Safely divide two values, returning default if denominator is zero.

Parameters:
  • numerator (Union[Decimal, float, int]) – Value to divide.

  • denominator (Union[Decimal, float, int]) – Value to divide by.

  • default (Union[Decimal, float, int]) – Value returned when the denominator is zero. Defaults to Decimal('0.00').

Return type:

Decimal

Returns:

Result of division, or default if denominator is zero.

Example

>>> safe_divide(100, 4)
Decimal('25')
>>> safe_divide(100, 0, default=Decimal("-1"))
Decimal('-1')
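
Example

A short sketch combining the helpers above, showing how Decimal keeps an accounting identity exact where floats would drift (all values are illustrative):

from ergodic_insurance.decimal_utils import is_zero, quantize_currency, sum_decimals, to_decimal

# Floats accumulate binary representation error; Decimal sums do not
total_float = 0.1 + 0.2 + 0.3                 # 0.6000000000000001
total_decimal = sum_decimals(0.1, 0.2, 0.3)   # Decimal('0.6')

# Verify debits == credits at currency precision
debits = to_decimal(1234.56)
credits = sum_decimals(1000, 234.56)
assert is_zero(debits - credits)              # identity holds exactly

print(quantize_currency(total_decimal))       # 0.60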

ergodic_insurance.decision_engine module

Algorithmic insurance decision engine for optimal coverage selection.

This module implements a comprehensive decision framework that optimizes insurance purchasing decisions using multi-objective optimization to balance growth targets with bankruptcy risk constraints.

class OptimizationMethod(*values)[source]

Bases: Enum

Available optimization methods.

SLSQP = 'SLSQP'
ENHANCED_SLSQP = 'enhanced_slsqp'
DIFFERENTIAL_EVOLUTION = 'differential_evolution'
WEIGHTED_SUM = 'weighted_sum'
TRUST_REGION = 'trust_region'
PENALTY_METHOD = 'penalty_method'
AUGMENTED_LAGRANGIAN = 'augmented_lagrangian'
MULTI_START = 'multi_start'
class OptimizationConstraints(max_premium_budget: float = 1000000, min_coverage_limit: float = 5000000, max_coverage_limit: float = 100000000, max_bankruptcy_probability: float = 0.01, min_retained_limit: float = 100000, max_retained_limit: float = 10000000, max_layers: int = 5, min_layers: int = 1, required_roi_improvement: float = 0.0, max_debt_to_equity: float = 2.0, max_insurance_cost_ratio: float = 0.03, min_coverage_requirement: float = 0.0, max_retention_limit: float = inf) None[source]

Bases: object

Constraints for insurance optimization.

max_premium_budget: float = 1000000
min_coverage_limit: float = 5000000
max_coverage_limit: float = 100000000
max_bankruptcy_probability: float = 0.01
min_retained_limit: float = 100000
max_retained_limit: float = 10000000
max_layers: int = 5
min_layers: int = 1
required_roi_improvement: float = 0.0
max_debt_to_equity: float = 2.0
max_insurance_cost_ratio: float = 0.03
min_coverage_requirement: float = 0.0
max_retention_limit: float = inf
class InsuranceDecision(retained_limit: float, layers: List[EnhancedInsuranceLayer], total_premium: float, total_coverage: float, pricing_scenario: str, optimization_method: str, convergence_iterations: int = 0, objective_value: float = 0.0) None[source]

Bases: object

Represents an insurance purchasing decision.

retained_limit: float
layers: List[EnhancedInsuranceLayer]
total_premium: float
total_coverage: float
pricing_scenario: str
optimization_method: str
convergence_iterations: int = 0
objective_value: float = 0.0
__post_init__()[source]

Calculate derived fields.

class DecisionMetrics(ergodic_growth_rate: float, bankruptcy_probability: float, expected_roe: float, roe_improvement: float, premium_to_limit_ratio: float, coverage_adequacy: float, capital_efficiency: float, value_at_risk_95: float, conditional_value_at_risk: float, decision_score: float = 0.0, time_weighted_roe: float = 0.0, roe_volatility: float = 0.0, roe_sharpe_ratio: float = 0.0, roe_downside_deviation: float = 0.0, roe_1yr_rolling: float = 0.0, roe_3yr_rolling: float = 0.0, roe_5yr_rolling: float = 0.0, operating_roe: float = 0.0, insurance_impact_roe: float = 0.0, tax_effect_roe: float = 0.0) None[source]

Bases: object

Comprehensive metrics for evaluating an insurance decision.

ergodic_growth_rate: float
bankruptcy_probability: float
expected_roe: float
roe_improvement: float
premium_to_limit_ratio: float
coverage_adequacy: float
capital_efficiency: float
value_at_risk_95: float
conditional_value_at_risk: float
decision_score: float = 0.0
time_weighted_roe: float = 0.0
roe_volatility: float = 0.0
roe_sharpe_ratio: float = 0.0
roe_downside_deviation: float = 0.0
roe_1yr_rolling: float = 0.0
roe_3yr_rolling: float = 0.0
roe_5yr_rolling: float = 0.0
operating_roe: float = 0.0
insurance_impact_roe: float = 0.0
tax_effect_roe: float = 0.0
calculate_score(weights: Dict[str, float] | None = None) float[source]

Calculate weighted decision score.

Parameters:

weights (Optional[Dict[str, float]]) – Weights for each metric (default: equal weights)

Return type:

float

Returns:

Weighted score between 0 and 1

class SensitivityReport(base_decision: InsuranceDecision, base_metrics: DecisionMetrics, parameter_sensitivities: Dict[str, Dict[str, float]], key_drivers: List[str], robust_range: Dict[str, Tuple[float, float]], stress_test_results: Dict[str, DecisionMetrics]) None[source]

Bases: object

Results of sensitivity analysis.

base_decision: InsuranceDecision
base_metrics: DecisionMetrics
parameter_sensitivities: Dict[str, Dict[str, float]]
key_drivers: List[str]
robust_range: Dict[str, Tuple[float, float]]
stress_test_results: Dict[str, DecisionMetrics]
class Recommendations(primary_recommendation: InsuranceDecision, primary_rationale: str, alternative_options: List[Tuple[InsuranceDecision, str]], implementation_timeline: List[str], risk_considerations: List[str], expected_benefits: Dict[str, float], confidence_level: float) None[source]

Bases: object

Executive-ready recommendations from the decision engine.

primary_recommendation: InsuranceDecision
primary_rationale: str
alternative_options: List[Tuple[InsuranceDecision, str]]
implementation_timeline: List[str]
risk_considerations: List[str]
expected_benefits: Dict[str, float]
confidence_level: float
class InsuranceDecisionEngine(manufacturer: WidgetManufacturer, loss_distribution: LossDistribution, pricing_scenario: str = 'baseline', config_loader: ConfigLoader | None = None, engine_config: DecisionEngineConfig | None = None)[source]

Bases: object

Algorithmic engine for optimizing insurance decisions.

optimize_insurance_decision(constraints: OptimizationConstraints, method: OptimizationMethod = OptimizationMethod.SLSQP, weights: Dict[str, float] | None = None, _attempted_methods: Set[OptimizationMethod] | None = None) InsuranceDecision[source]

Find optimal insurance structure given constraints.

Uses multi-objective optimization to balance growth, risk, and cost. Falls back through alternative methods if validation fails, tracking attempted methods to prevent infinite recursion.

Parameters:
  • constraints (OptimizationConstraints) – Constraints the optimal structure must satisfy

  • method (OptimizationMethod) – Optimization method to use (default: SLSQP)

  • weights (Optional[Dict[str, float]]) – Objective weights for balancing growth, risk, and cost

  • _attempted_methods (Optional[Set[OptimizationMethod]]) – Methods already tried, used internally to prevent infinite fallback recursion

Return type:

InsuranceDecision

Returns:

Optimal insurance decision

calculate_decision_metrics(decision: InsuranceDecision) DecisionMetrics[source]

Calculate comprehensive metrics for a decision.

Parameters:

decision (InsuranceDecision) – Insurance decision to evaluate

Return type:

DecisionMetrics

Returns:

Comprehensive metrics

run_sensitivity_analysis(base_decision: InsuranceDecision, parameters: List[str] | None = None, variation_range: float = 0.2) SensitivityReport[source]

Analyze decision sensitivity to parameter changes.

Parameters:
  • base_decision (InsuranceDecision) – Base decision to analyze

  • parameters (Optional[List[str]]) – Parameters to test (default: key parameters)

  • variation_range (float) – ±% to vary parameters (default: 20%)

Return type:

SensitivityReport

Returns:

Comprehensive sensitivity report

generate_recommendations(analysis_results: List[Tuple[InsuranceDecision, DecisionMetrics]]) Recommendations[source]

Generate executive-ready recommendations.

Parameters:

analysis_results (List[Tuple[InsuranceDecision, DecisionMetrics]]) – List of (decision, metrics) tuples to analyze

Return type:

Recommendations

Returns:

Comprehensive recommendations
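
Example

A minimal end-to-end sketch of the decision workflow; the manufacturer and loss_distribution objects are assumed to be constructed with the package's manufacturer and loss-modeling modules, and all numeric values are illustrative:

from ergodic_insurance.decision_engine import (
    InsuranceDecisionEngine,
    OptimizationConstraints,
    OptimizationMethod,
)

engine = InsuranceDecisionEngine(
    manufacturer=manufacturer,            # assumed: a WidgetManufacturer instance
    loss_distribution=loss_distribution,  # assumed: a LossDistribution instance
    pricing_scenario="baseline",
)

constraints = OptimizationConstraints(
    max_premium_budget=500_000,
    max_bankruptcy_probability=0.005,
    max_layers=3,
)

decision = engine.optimize_insurance_decision(constraints, method=OptimizationMethod.SLSQP)
metrics = engine.calculate_decision_metrics(decision)
report = engine.run_sensitivity_analysis(decision, variation_range=0.2)
recs = engine.generate_recommendations([(decision, metrics)])

print(f"Total premium: ${decision.total_premium:,.0f}")
print(f"Decision score: {metrics.calculate_score():.3f}")
print(f"Key drivers: {report.key_drivers}")
print(f"Rationale: {recs.primary_rationale}")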

ergodic_insurance.ergodic_analyzer module

Ergodic analysis framework for comparing time-average vs ensemble-average growth.

This module provides the theoretical foundation and computational tools for applying ergodic economics to insurance decision making. It implements Ole Peters’ framework for distinguishing between ensemble averages (what we expect to happen across many parallel scenarios) and time averages (what actually happens to a single entity over time).

The key insight is that for multiplicative processes like business growth with volatile losses, the ensemble average and time average diverge significantly. Insurance transforms the growth process in ways that traditional expected value analysis cannot capture, often making insurance optimal even when premiums exceed expected losses by substantial margins.

Key Concepts:

Time Average Growth Rate: The growth rate experienced by a single business entity over time, calculated as g = (1/T) * ln(X(T)/X(0)). This captures the actual compound growth experience.

Ensemble Average Growth Rate: The expected growth rate calculated across many parallel scenarios at each time point. This represents the traditional expected value approach.

Ergodic Divergence: The difference between time and ensemble averages, indicating non-ergodic behavior where individual experience differs from statistical expectations.

Survival Rate: The fraction of simulation paths that remain solvent, capturing the probability dimension ignored by pure growth metrics.

Theoretical Foundation:

Based on Ole Peters’ ergodic economics framework (Peters, 2019; Peters & Gell-Mann, 2016), this module demonstrates that:

  1. Multiplicative Growth: Business equity follows multiplicative dynamics where losses compound over time in non-linear ways.

  2. Jensen’s Inequality: For concave utility functions (log wealth), the expected value of a function differs from the function of expected values.

  3. Path Dependence: The order and timing of losses matters critically, making time-average analysis essential for decision making.

  4. Insurance as Growth Optimization: Insurance can increase time-average growth rates even when premiums appear “expensive” from ensemble perspective.

Core Classes:

ErgodicData (time series container), ErgodicAnalysisResults (integrated analysis results), ValidationResults (insurance impact validation), and ErgodicAnalyzer (the main analysis engine), all documented below.

Examples

Basic ergodic comparison between insured and uninsured scenarios:

import numpy as np
from ergodic_insurance import ErgodicAnalyzer

# Initialize analyzer
analyzer = ErgodicAnalyzer(convergence_threshold=0.01)

# Simulate equity trajectories (example data)
insured_trajectories = [
    np.array([10e6, 10.2e6, 10.5e6, 10.8e6, 11.1e6]),    # Stable growth
    np.array([10e6, 10.1e6, 10.3e6, 10.6e6, 10.9e6]),    # Stable growth
    np.array([10e6, 10.3e6, 10.7e6, 11.0e6, 11.4e6])     # Stable growth
]

uninsured_trajectories = [
    np.array([10e6, 10.5e6, 8.2e6, 12.1e6, 13.5e6]),     # Volatile
    np.array([10e6, 9.8e6, 5.1e6, 0]),                   # Bankruptcy
    np.array([10e6, 10.8e6, 11.2e6, 14.8e6, 16.2e6])     # High growth
]

# Compare scenarios
comparison = analyzer.compare_scenarios(
    insured_trajectories,
    uninsured_trajectories,
    metric="equity"
)

print(f"Insured time-average growth: {comparison['insured']['time_average_mean']:.1%}")
print(f"Uninsured time-average growth: {comparison['uninsured']['time_average_mean']:.1%}")
print(f"Ergodic advantage: {comparison['ergodic_advantage']['time_average_gain']:.1%}")
print(f"Survival rate improvement: {comparison['ergodic_advantage']['survival_gain']:.1%}")

Monte Carlo analysis with convergence checking:

from ergodic_insurance.simulation import run_monte_carlo

# Run Monte Carlo simulations (pseudo-code)
simulation_results = run_monte_carlo(
    n_simulations=1000,
    time_horizon=20,
    insurance_enabled=True
)

# Analyze batch results
analysis = analyzer.analyze_simulation_batch(
    simulation_results,
    label="Insured Scenario"
)

print(f"Time-average growth: {analysis['time_average']['mean']:.2%} ± {analysis['time_average']['std']:.2%}")
print(f"Ensemble average: {analysis['ensemble_average']['mean']:.2%}")
print(f"Ergodic divergence: {analysis['ergodic_divergence']:.2%}")
print(f"Convergence: {analysis['convergence']['converged']} (SE: {analysis['convergence']['standard_error']:.4f})")

Integration with loss modeling:

from ergodic_insurance import LossData, InsuranceProgram, WidgetManufacturer

# Set up integrated analysis
loss_data = LossData.from_distribution(
    frequency_lambda=2.5,
    severity_mean=1_000_000,
    severity_cv=2.0
)

insurance = InsuranceProgram(
    layers=[(0, 1_000_000, 0.015), (1_000_000, 10_000_000, 0.008)]
)

manufacturer = WidgetManufacturer(config)

# Run integrated ergodic analysis
results = analyzer.integrate_loss_ergodic_analysis(
    loss_data=loss_data,
    insurance_program=insurance,
    manufacturer=manufacturer,
    time_horizon=20,
    n_simulations=1000
)

print(f"Time-average growth rate: {results.time_average_growth:.2%}")
print(f"Ensemble average growth: {results.ensemble_average_growth:.2%}")
print(f"Survival rate: {results.survival_rate:.1%}")
print(f"Insurance benefit: ${results.insurance_impact['net_benefit']:,.0f}")
print(f"Analysis valid: {results.validation_passed}")
Implementation Notes:
  • All growth rate calculations use natural logarithms for mathematical consistency

  • Infinite values (from bankruptcy) are handled gracefully in statistical calculations

  • Convergence checking uses standard error to determine Monte Carlo adequacy

  • Significance testing employs t-tests for comparing growth rate distributions

  • Variable-length trajectories (due to insolvency) are supported throughout

Performance Optimization:
  • Vectorized numpy operations for large Monte Carlo batches

  • Efficient handling of mixed-length trajectory data

  • Memory-conscious processing of large simulation datasets

  • Configurable convergence thresholds to balance accuracy and computation time

References

  • Peters, O. (2019). “The ergodicity problem in economics.” Nature Physics, 15(12), 1216-1221.

  • Peters, O., & Gell-Mann, M. (2016). “Evaluating gambles using dynamics.” Chaos, 26(2), 023103.

  • Kelly, J. L. (1956). “A new interpretation of information rate.” Bell System Technical Journal, 35(4), 917-926.

See also

simulation: Monte Carlo simulation framework
manufacturer: Financial model for business dynamics
insurance_program: Insurance structure modeling
optimization: Optimization algorithms using ergodic metrics

class ErgodicData(time_series: ndarray = <factory>, values: ndarray = <factory>, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Standardized data container for ergodic time series analysis.

This class provides a consistent format for storing and validating time series data used in ergodic calculations. It ensures data integrity and provides metadata tracking for analysis reproducibility.

time_series

Array of time points corresponding to values. Should be monotonically increasing for meaningful analysis.

Type:

np.ndarray

values

Array of observed values (e.g., equity, assets) at each time point. Must have same length as time_series.

Type:

np.ndarray

metadata

Dictionary containing analysis metadata such as simulation parameters, data source, units, etc.

Type:

Dict[str, Any]

Examples

Create ergodic data for analysis:

import numpy as np

# Equity trajectory over 10 years
data = ErgodicData(
    time_series=np.arange(11),  # Years 0-10
    values=np.array([10e6, 10.2e6, 10.5e6, 10.1e6, 10.8e6,
                     11.2e6, 10.9e6, 11.5e6, 12.1e6, 12.8e6, 13.2e6]),
    metadata={
        'units': 'USD',
        'metric': 'equity',
        'simulation_id': 'run_001',
        'scenario': 'insured'
    }
)

# Validate data consistency
assert data.validate(), "Data validation failed"

Handle validation failures:

# Mismatched lengths will fail validation
invalid_data = ErgodicData(
    time_series=np.arange(10),
    values=np.arange(5),  # Wrong length
    metadata={'note': 'This will fail validation'}
)

if not invalid_data.validate():
    print("Data validation failed - fix before analysis")

Note

The validate() method should be called before using data in ergodic calculations to ensure mathematical operations will succeed.

See also

ErgodicAnalyzer: Main analysis class that uses ErgodicData
ErgodicAnalysisResults: Results format for ergodic calculations

time_series: ndarray
values: ndarray
metadata: Dict[str, Any]
validate() bool[source]

Validate data consistency and integrity.

Performs comprehensive validation of the ergodic data to ensure it meets requirements for mathematical analysis. This includes checking array lengths, data types, and basic reasonableness of values.

Returns:

True if all validation checks pass, False otherwise.

False indicates the data should not be used in ergodic calculations without correction.

Return type:

bool

Examples

Validate data before analysis:

data = ErgodicData(
    time_series=np.arange(10),
    values=np.random.randn(10) + 100,
    metadata={'units': 'USD'}
)

if data.validate():
    print("Data validated - ready for analysis")
else:
    print("Data validation failed - check inputs")
Validation Checks:
  • Arrays have matching lengths

  • Arrays are not empty

  • Time series is monotonic (if more than one point)

  • Values are numeric (not NaN in inappropriate places)

class ErgodicAnalysisResults(time_average_growth: float, ensemble_average_growth: float, survival_rate: float, ergodic_divergence: float, insurance_impact: Dict[str, float], validation_passed: bool, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Comprehensive results from integrated ergodic analysis.

This class encapsulates all results from a complete ergodic analysis, including growth metrics, survival statistics, insurance impacts, and validation status. It provides a standardized format for reporting and comparing different insurance strategies.

time_average_growth

Mean time-average growth rate across all valid simulation paths. Calculated as the average of individual path growth rates: mean(ln(X_final/X_initial)/T). May be -inf if all paths resulted in bankruptcy.

Type:

float

ensemble_average_growth

Ensemble average growth rate calculated from the mean of initial and final values across all paths: ln(mean(X_final)/mean(X_initial))/T. Always finite for valid data.

Type:

float

survival_rate

Fraction of simulation paths that remained solvent throughout the analysis period. Range: [0.0, 1.0].

Type:

float

ergodic_divergence

Difference between time-average and ensemble average growth rates (time_average_growth - ensemble_average_growth). Positive values indicate time-average exceeds ensemble average.

Type:

float

insurance_impact

Dictionary containing insurance-related metrics such as:
  • ‘premium_cost’: Total premium payments

  • ‘recovery_benefit’: Total insurance recoveries

  • ‘net_benefit’: Net financial benefit of insurance

  • ‘growth_improvement’: Improvement in growth rate from insurance

Type:

Dict[str, float]

validation_passed

Whether the analysis passed internal validation checks for data consistency and mathematical validity.

Type:

bool

metadata

Additional analysis metadata including:
  • ‘n_simulations’: Number of Monte Carlo simulations

  • ‘time_horizon’: Analysis time horizon

  • ‘n_survived’: Absolute number of paths that survived

  • ‘loss_statistics’: Statistics about loss distributions

Type:

Dict[str, Any]

Examples

Interpret analysis results:

# Example results from ergodic analysis
results = ErgodicAnalysisResults(
    time_average_growth=0.045,      # 4.5% annual growth
    ensemble_average_growth=0.052,   # 5.2% ensemble average
    survival_rate=0.95,              # 95% survival rate
    ergodic_divergence=-0.007,       # -0.7% divergence
    insurance_impact={
        'premium_cost': 2_500_000,
        'recovery_benefit': 8_200_000,
        'net_benefit': 5_700_000,
        'growth_improvement': 0.012
    },
    validation_passed=True,
    metadata={
        'n_simulations': 1000,
        'time_horizon': 20,
        'n_survived': 950
    }
)

# Interpret results
if results.validation_passed:
    print(f"Time-average growth: {results.time_average_growth:.1%}")
    print(f"Ensemble average: {results.ensemble_average_growth:.1%}")

    if results.ergodic_divergence < 0:
        print("Insurance reduces volatility drag (ergodic benefit)")

    if results.insurance_impact['net_benefit'] > 0:
        print(f"Insurance provides net benefit: ${results.insurance_impact['net_benefit']:,.0f}")
else:
    print("Analysis validation failed - results may be unreliable")

Compare multiple scenarios:

def compare_results(results_a, results_b, label_a="Scenario A", label_b="Scenario B"):
    print(f"{label_a} vs {label_b}:")
    print(f"  Time-average growth: {results_a.time_average_growth:.2%} vs {results_b.time_average_growth:.2%}")
    print(f"  Survival rate: {results_a.survival_rate:.1%} vs {results_b.survival_rate:.1%}")
    print(f"  Ergodic divergence: {results_a.ergodic_divergence:.3f} vs {results_b.ergodic_divergence:.3f}")

    growth_advantage = results_a.time_average_growth - results_b.time_average_growth
    survival_advantage = results_a.survival_rate - results_b.survival_rate

    print(f"  Advantages: Growth={growth_advantage:.2%}, Survival={survival_advantage:.1%}")

Note

All growth rates are expressed as decimal values (0.05 = 5% annual growth). Negative ergodic_divergence indicates insurance reduces “volatility drag”. Always check validation_passed before interpreting results.

See also

ErgodicAnalyzer: Class that generates these results
ErgodicAnalyzer.integrate_loss_ergodic_analysis(): Method producing these results
ValidationResults: Detailed validation information

time_average_growth: float
ensemble_average_growth: float
survival_rate: float
ergodic_divergence: float
insurance_impact: Dict[str, float]
validation_passed: bool
metadata: Dict[str, Any]
class ValidationResults(premium_deductions_correct: bool, recoveries_credited: bool, collateral_impacts_included: bool, time_average_reflects_benefit: bool, overall_valid: bool, details: Dict[str, Any] = <factory>) None[source]

Bases: object

Comprehensive results from insurance impact validation analysis.

This class encapsulates the results of detailed validation checks performed on insurance effects in ergodic analysis. It provides both high-level validation status and detailed diagnostic information to help identify and resolve any modeling inconsistencies.

premium_deductions_correct

Whether insurance premiums are properly deducted from cash flows. True indicates expected premium costs match observed differences in net income between scenarios.

Type:

bool

recoveries_credited

Whether insurance recoveries are properly credited to improve financial outcomes. True indicates insured scenarios show appropriate financial benefit from loss recoveries.

Type:

bool

collateral_impacts_included

Whether letter of credit costs and asset restrictions are properly modeled. True indicates collateral requirements are reflected in financial calculations.

Type:

bool

time_average_reflects_benefit

Whether time-average growth rate calculations properly reflect insurance benefits. True indicates growth improvements are consistent with insurance effects.

Type:

bool

overall_valid

Master validation flag indicating whether all individual checks passed. True means the ergodic analysis results are reliable and properly reflect insurance impacts.

Type:

bool

details

Detailed diagnostic information from each validation check, including specific metrics, calculations, and discrepancy measurements. Used for troubleshooting validation failures.

Type:

Dict[str, Any]

Examples

Interpret validation results:

validation = analyzer.validate_insurance_ergodic_impact(
    base_scenario, insurance_scenario, insurance_program
)

if validation.overall_valid:
    print("✓ All validation checks passed")
    print("Ergodic analysis results are reliable")
else:
    print("⚠ Validation issues detected:")

    if not validation.premium_deductions_correct:
        print("  - Premium deduction mismatch")
    if not validation.recoveries_credited:
        print("  - Recovery crediting issue")
    if not validation.collateral_impacts_included:
        print("  - Collateral impact missing")
    if not validation.time_average_reflects_benefit:
        print("  - Growth calculation inconsistency")

    print("Review model implementation before using results")

Access detailed diagnostics:

if 'premium_check' in validation.details:
    premium_info = validation.details['premium_check']
    expected = premium_info['expected']
    actual = premium_info['actual_diff']
    print(f"Premium validation: Expected ${expected:,.0f}, Got ${actual:,.0f}")

    if abs(expected - actual) > expected * 0.05:  # 5% tolerance
        print("⚠ Significant premium discrepancy detected")

Note

A failed overall validation doesn’t necessarily mean the analysis is wrong - it may indicate edge cases or modeling assumptions that need review. Always examine the details for specific guidance on issues.

See also

ErgodicAnalyzer.validate_insurance_ergodic_impact(): Method generating these results
ErgodicAnalysisResults: Main analysis results that this validation supports

premium_deductions_correct: bool
recoveries_credited: bool
collateral_impacts_included: bool
time_average_reflects_benefit: bool
overall_valid: bool
details: Dict[str, Any]
class ErgodicAnalyzer(convergence_threshold: float = 0.01)[source]

Bases: object

Advanced analyzer for ergodic properties of insurance strategies.

This class implements the core computational engine for ergodic economics analysis in insurance contexts. It provides methods to calculate and compare time-average versus ensemble-average growth rates, demonstrating the fundamental difference between traditional expected-value thinking and actual experienced growth over time.

The analyzer addresses the key ergodic insight that for multiplicative processes (like business growth with volatile losses), what happens to an ensemble of businesses differs from what happens to any individual business over time. Insurance can improve time-average growth even when it appears costly from an ensemble (expected value) perspective.

Key Capabilities:
  • Time-average growth rate calculation for individual trajectories

  • Ensemble average computation across multiple simulation paths

  • Statistical significance testing of insurance benefits

  • Monte Carlo convergence analysis

  • Integrated loss modeling and insurance impact assessment

  • Comprehensive validation of insurance effects

convergence_threshold

Standard error threshold for determining Monte Carlo convergence. Lower values require more simulations but provide higher confidence in results.

Type:

float

Mathematical Foundation:
Time-Average Growth: For a trajectory X(t), the time-average growth rate is:

g_time = (1/T) * ln(X(T)/X(0))

Ensemble Average Growth: Across N paths, the ensemble growth rate is:

g_ensemble = (1/T) * ln(⟨X(T)⟩/⟨X(0)⟩)

Ergodic Divergence: The difference g_time - g_ensemble indicates non-ergodic behavior where individual experience differs from statistical expectations.

Examples

Basic analyzer setup and usage:

from ergodic_insurance import ErgodicAnalyzer
import numpy as np

# Initialize with tight convergence criteria
analyzer = ErgodicAnalyzer(convergence_threshold=0.005)

# Calculate time-average growth for a single trajectory
equity_path = np.array([10e6, 10.5e6, 9.8e6, 11.2e6, 12.1e6])
time_avg_growth = analyzer.calculate_time_average_growth(equity_path)
print(f"Time-average growth: {time_avg_growth:.2%} annually")

Ensemble analysis with multiple trajectories:

# Multiple simulation paths (some ending in bankruptcy)
trajectories = [
    np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6]),  # Survivor
    np.array([10e6, 9.2e6, 8.1e6, 6.8e6, 0]),          # Bankruptcy
    np.array([10e6, 10.8e6, 11.5e6, 12.8e6, 14.2e6]),  # High growth
    np.array([10e6, 9.8e6, 10.2e6, 10.6e6, 11.1e6])   # Stable growth
]

# Calculate ensemble statistics
ensemble_stats = analyzer.calculate_ensemble_average(
    trajectories,
    metric="growth_rate"
)

print(f"Ensemble growth rate: {ensemble_stats['mean']:.2%}")
print(f"Survival rate: {ensemble_stats['survival_rate']:.1%}")
print(f"Growth rate std dev: {ensemble_stats['std']:.2%}")

Insurance scenario comparison:

# Compare insured vs uninsured scenarios
insured_paths = generate_insured_trajectories()    # Your simulation code
uninsured_paths = generate_uninsured_trajectories()  # Your simulation code

comparison = analyzer.compare_scenarios(
    insured_paths,
    uninsured_paths,
    metric="equity"
)

# Extract key insights
time_avg_benefit = comparison['ergodic_advantage']['time_average_gain']
survival_benefit = comparison['ergodic_advantage']['survival_gain']
is_significant = comparison['ergodic_advantage']['significant']

print(f"Time-average growth improvement: {time_avg_benefit:.2%}")
print(f"Survival rate improvement: {survival_benefit:.1%}")
print(f"Statistically significant: {is_significant}")

Monte Carlo convergence analysis:

# Run large Monte Carlo study
simulation_results = run_monte_carlo_study(n_sims=2000)

analysis = analyzer.analyze_simulation_batch(
    simulation_results,
    label="High-Coverage Insurance"
)

# Check if we have enough simulations
if analysis['convergence']['converged']:
    print("Monte Carlo has converged - results are reliable")
    print(f"Standard error: {analysis['convergence']['standard_error']:.4f}")
else:
    print("Need more simulations for convergence")
    needed_se = analyzer.convergence_threshold
    current_se = analysis['convergence']['standard_error']
    factor = (current_se / needed_se) ** 2
    print(f"Suggest ~{int(2000 * factor)} simulations")
Advanced Features:

The analyzer provides several advanced capabilities for robust analysis:

Variable-Length Trajectories: Handles paths that end early due to bankruptcy, maintaining proper statistics across mixed survival scenarios.

Significance Testing: Built-in t-tests to determine if observed differences between scenarios are statistically meaningful.

Convergence Monitoring: Automated checking of Monte Carlo convergence using rolling standard error calculations.

Integrated Validation: Comprehensive validation of insurance effects to ensure results accurately reflect premium costs, recoveries, and collateral impacts.

Performance Notes:
  • Optimized for large Monte Carlo datasets (1000+ simulations)

  • Memory-efficient processing of variable-length trajectories

  • Vectorized calculations where possible for speed

  • Graceful handling of edge cases (bankruptcy, infinite values)

See also

ErgodicAnalysisResults: Comprehensive results format
ValidationResults: Insurance impact validation results
integrate_loss_ergodic_analysis(): End-to-end analysis pipeline
compare_scenarios(): Core scenario comparison functionality

calculate_time_average_growth(values: ndarray, time_horizon: int | None = None) float[source]

Calculate time-average growth rate for a single trajectory.

This method implements the core ergodic calculation for individual path growth rates using the logarithmic growth formula. It handles edge cases gracefully, including bankruptcy scenarios and invalid data.

The time-average growth rate represents the actual compound growth experienced by a single entity over time, which differs fundamentally from ensemble averages in multiplicative processes.

Parameters:
  • values (np.ndarray) – Array of values over time (e.g., equity, assets, wealth). Should be monotonic in time with positive values for meaningful growth calculations. Length must be >= 2 for growth calculation.

  • time_horizon (Optional[int]) – Specific time horizon to use for calculation. If None, uses the full trajectory length minus 1. Useful for comparing trajectories of different lengths or analyzing partial periods.

Returns:

Time-average growth rate as decimal (0.05 = 5% annual growth).

Special return values:
  • -inf: Trajectory ended in bankruptcy (final value <= 0)

  • 0.0: Single time point or zero time horizon

  • Finite value: Calculated growth rate

Return type:

float

Examples

Calculate growth for successful trajectory:

import numpy as np

# 5-year equity trajectory
equity = np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6])
growth = analyzer.calculate_time_average_growth(equity)
print(f"Growth rate: {growth:.2%} annually")
# Output: Growth rate: 5.58% annually

Handle bankruptcy scenario:

# Trajectory ending in bankruptcy
failed_equity = np.array([10e6, 9.2e6, 7.1e6, 4.8e6, 0])
growth = analyzer.calculate_time_average_growth(failed_equity)
print(f"Growth rate: {growth}")
# Output: Growth rate: -inf

Analyze partial trajectory:

# Long trajectory, analyze first 3 years only
long_equity = np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6, 13.1e6])
partial_growth = analyzer.calculate_time_average_growth(
    long_equity,
    time_horizon=3
)
# Analyzes first 4 points (years 0-3)

Handle trajectories with initial zeros:

# Trajectory starting from zero (invalid)
invalid_equity = np.array([0, 1e6, 2e6, 3e6, 4e6])
growth = analyzer.calculate_time_average_growth(invalid_equity)
# Will find first valid positive value and calculate from there
Mathematical Details:
The calculation uses the formula:

g = (1/T) * ln(X(T) / X(0))

Where:
  • g: time-average growth rate

  • T: time horizon

  • X(T): final value

  • X(0): initial value

  • ln: natural logarithm

This formula gives the constant compound growth rate that would produce the observed change from initial to final value.

Edge Cases:
  • Empty array: Returns -inf

  • Single value: Returns 0.0

  • Final value <= 0: Returns -inf (bankruptcy)

  • All values <= 0: Returns -inf

  • Zero time horizon: Returns 0.0 if positive, -inf if negative

Note

This is the fundamental calculation in ergodic economics, representing the growth rate that a single entity actually experiences over time, as opposed to what we might expect from ensemble averages.

Warning

The method filters out non-positive values when finding the initial value, which may skip early periods of the trajectory. Ensure your data represents meaningful business values (positive equity/assets).

See also

calculate_ensemble_average(): For ensemble growth calculations
compare_scenarios(): For comparing time vs ensemble averages

calculate_ensemble_average(trajectories: List[ndarray] | ndarray, metric: str = 'final_value') Dict[str, float][source]

Calculate ensemble average and statistics across multiple simulation paths.

This method computes ensemble statistics representing the traditional expected value approach to analyzing multiple parallel scenarios. It handles variable-length trajectories (due to bankruptcy) and provides comprehensive statistics for comparison with time-average calculations.

The ensemble perspective answers: “What would happen on average across many parallel businesses?” This differs from the time-average perspective of “What happens to one business over time?”

Parameters:
  • trajectories (Union[List[np.ndarray], np.ndarray]) – Multiple simulation trajectories: either a list of 1D numpy arrays (variable lengths supported) or a 2D numpy array with shape (n_paths, n_timesteps). Each trajectory represents values over time (equity, assets, etc.).

  • metric (str) – Type of ensemble statistic to compute: “final_value” (statistics of final values across paths), “growth_rate” (statistics of growth rates across paths), or “full” (average trajectory at each time step; fixed-length input only). Defaults to “final_value”.

Returns:

Dictionary containing ensemble statistics:
  • ’mean’: Mean of the selected metric across all valid paths

  • ’std’: Standard deviation of the metric

  • ’median’: Median value of the metric

  • ’survival_rate’: Fraction of paths avoiding bankruptcy

  • ’n_survived’: Absolute number of surviving paths

  • ’n_total’: Total number of input paths

For metric=“full”, the dictionary instead contains:
  • ‘mean_trajectory’: Mean values at each time step

  • ‘std_trajectory’: Standard deviations at each time step

Return type:

Dict[str, float]

Examples

Analyze final equity values:

import numpy as np

# Multiple simulation results
trajectories = [
    np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6]),  # Success
    np.array([10e6, 9.2e6, 8.1e6, 6.8e6, 0]),          # Bankruptcy
    np.array([10e6, 10.8e6, 11.5e6, 12.8e6, 14.2e6]),  # High growth
]

final_stats = analyzer.calculate_ensemble_average(
    trajectories,
    metric="final_value"
)

print(f"Average final equity: ${final_stats['mean']:,.0f}")
print(f"Survival rate: {final_stats['survival_rate']:.1%}")
print(f"Standard deviation: ${final_stats['std']:,.0f}")

Analyze growth rate distribution:

growth_stats = analyzer.calculate_ensemble_average(
    trajectories,
    metric="growth_rate"
)

print(f"Average growth rate: {growth_stats['mean']:.2%}")
print(f"Growth rate volatility: {growth_stats['std']:.2%}")
print(f"Median growth: {growth_stats['median']:.2%}")

Full trajectory analysis (fixed-length only):

# Convert to fixed-length array
fixed_trajectories = np.array([
    [10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6],
    [10e6, 9.8e6, 10.1e6, 10.6e6, 11.1e6],
    [10e6, 10.8e6, 11.5e6, 12.8e6, 14.2e6]
])

full_stats = analyzer.calculate_ensemble_average(
    fixed_trajectories,
    metric="full"
)

mean_path = full_stats['mean_trajectory']
print(f"Mean trajectory: {mean_path}")
# Shows average value at each time step

Handle mixed survival scenarios:

mixed_trajectories = [
    np.array([10e6, 11e6, 12e6]),        # Short survivor
    np.array([10e6, 9e6, 0]),             # Early bankruptcy
    np.array([10e6, 11e6, 12e6, 13e6]),   # Long survivor
]

stats = analyzer.calculate_ensemble_average(mixed_trajectories)
print(f"{stats['n_survived']}/{stats['n_total']} paths survived")
print(f"Survival rate: {stats['survival_rate']:.1%}")
Statistical Interpretation:

Mean: The expected value under the ensemble perspective. For multiplicative processes, this may differ significantly from what any individual entity experiences.

Standard Deviation: Measures the spread of outcomes across the ensemble, indicating the uncertainty in individual results.

Survival Rate: Critical metric often ignored in traditional expected value analysis. Shows the probability of avoiding bankruptcy.

Median: Often more representative than mean for skewed distributions common in financial modeling.

Edge Cases:
  • Empty trajectory list: Returns zeros/NaN appropriately

  • All paths end in bankruptcy: survival_rate=0, mean/median may be 0

  • Single trajectory: Statistics reduce to that trajectory’s values

  • Mixed lengths: Handled gracefully with proper filtering

Performance Notes:
  • Optimized for large numbers of trajectories (1000+ paths)

  • Memory efficient for mixed-length trajectory lists

  • Vectorized calculations where possible

See also

calculate_time_average_growth(): Individual trajectory analysis
compare_scenarios(): Ensemble vs time-average comparison
analyze_simulation_batch(): Comprehensive batch analysis

check_convergence(values: ndarray, window_size: int = 100) Tuple[bool, float][source]

Check Monte Carlo convergence using rolling standard error analysis.

This method determines whether a Monte Carlo simulation has run enough iterations to provide statistically reliable results. It uses rolling standard error calculations to assess whether adding more simulations would significantly change the estimated mean.

Convergence analysis is crucial for ergodic analysis because insufficient simulations can lead to misleading conclusions about insurance benefits. The method provides both a binary convergence decision and quantitative standard error metrics for informed decision making.

Parameters:
  • values (np.ndarray) – Array of values to check for convergence, typically time-average growth rates from Monte Carlo simulations. Should contain at least window_size values for meaningful analysis. Infinite values (from bankruptcy) are handled appropriately.

  • window_size (int) – Size of rolling window for convergence assessment. Larger windows provide more stable convergence detection but require more data points. Typical values: 50 (quick check for small samples), 100 (standard analysis, default), 200 (conservative, for high precision). Must be <= len(values) for analysis to proceed.

Returns:

Convergence assessment results:
  • converged (bool): Whether the series has converged according to the specified threshold. True indicates sufficient simulations.

  • standard_error (float): Current standard error of the mean based on the last window_size observations. Lower values indicate higher precision and greater confidence in results.

Return type:

Tuple[bool, float]

Examples

Check convergence during Monte Carlo analysis:

import numpy as np

# Simulate running Monte Carlo with growth rate collection
growth_rates = []

for i in range(2000):  # Up to 2000 simulations
    # Run single simulation (pseudo-code)
    result = run_single_simulation()
    growth_rate = analyzer.calculate_time_average_growth(result.equity)
    growth_rates.append(growth_rate)

    # Check convergence every 100 simulations
    if (i + 1) % 100 == 0 and i >= 100:
        converged, se = analyzer.check_convergence(
            np.array(growth_rates),
            window_size=100
        )

        print(f"Simulation {i+1}: SE={se:.4f}, Converged={converged}")

        if converged:
            print(f"✓ Convergence achieved after {i+1} simulations")
            break

if not converged:
    print(f"⚠ Convergence not achieved after {len(growth_rates)} simulations")
    print(f"Current standard error: {se:.4f}")
    print(f"Target threshold: {analyzer.convergence_threshold:.4f}")

Adaptive Monte Carlo with convergence monitoring:

def run_adaptive_monte_carlo(target_precision=0.01, max_sims=5000):
    '''Run Monte Carlo until convergence or maximum simulations.'''
    results = []

    for i in range(max_sims):
        # Run simulation
        sim_result = run_single_simulation()
        results.append(sim_result)

        # Extract growth rates for convergence check
        growth_rates = [analyzer.calculate_time_average_growth(r.equity)
                      for r in results]

        # Check convergence (need at least 100 for stability)
        if i >= 100:
            converged, se = analyzer.check_convergence(
                np.array([g for g in growth_rates if np.isfinite(g)])
            )

            if converged and se <= target_precision:
                print(f"Achieved target precision after {i+1} simulations")
                return results, True

    print(f"Maximum simulations reached without convergence")
    return results, False

# Run adaptive analysis
results, converged = run_adaptive_monte_carlo()
if converged:
    print("Analysis complete with sufficient precision")
else:
    print("Consider increasing maximum simulations")

Convergence diagnostics and troubleshooting:

# Analyze convergence pattern
growth_rates = np.array([...])  # Your Monte Carlo results

# Check convergence with different window sizes
window_sizes = [50, 100, 150, 200]

print("=== Convergence Analysis ===")
for ws in window_sizes:
    if len(growth_rates) >= ws:
        converged, se = analyzer.check_convergence(growth_rates, ws)
        print(f"Window {ws:3d}: SE={se:.5f}, Converged={converged}")

# Plot convergence pattern (conceptual)
rolling_means = []
rolling_ses = []

for i in range(100, len(growth_rates), 10):
    subset = growth_rates[:i]
    converged, se = analyzer.check_convergence(subset)
    rolling_means.append(np.mean(subset[np.isfinite(subset)]))
    rolling_ses.append(se)

# Analyze convergence stability
recent_se_trend = np.diff(rolling_ses[-10:])  # Last 10 points
if np.mean(recent_se_trend) < 0:
    print("✓ Standard error decreasing - convergence improving")
else:
    print("⚠ Standard error not decreasing - may need more simulations")

Compare convergence across scenarios:

# Check convergence for both insured and uninsured scenarios
scenarios = {
    'insured': insured_growth_rates,
    'uninsured': uninsured_growth_rates
}

convergence_status = {}
for name, rates in scenarios.items():
    converged, se = analyzer.check_convergence(rates)
    convergence_status[name] = {
        'converged': converged,
        'standard_error': se,
        'n_simulations': len(rates)
    }

print("=== Scenario Convergence Status ===")
for name, status in convergence_status.items():
    print(f"{name:10}: {status['converged']} "
          f"(SE={status['standard_error']:.4f}, n={status['n_simulations']})")

# Determine if comparison is valid
both_converged = all(s['converged'] for s in convergence_status.values())
if both_converged:
    print("✓ Both scenarios converged - comparison is reliable")
else:
    print("⚠ Incomplete convergence - results may be unreliable")
Mathematical Background:

The method calculates the standard error of the mean for the most recent window_size observations:

SE = σ / √n

Where:
  • σ = standard deviation of the sample

  • n = sample size (window_size)

Convergence is achieved when SE < convergence_threshold, indicating that the sample mean is stable within the desired precision.

Convergence Guidelines:

Standard Error Thresholds:
  • SE < 0.005: High precision (recommended for final analysis)

  • SE < 0.01: Standard precision (adequate for most decisions)

  • SE < 0.02: Low precision (suitable for initial exploration)

  • SE > 0.02: Insufficient precision (run more simulations)

Sample Size Rules of Thumb:
  • n < 100: Generally insufficient for convergence assessment

  • n = 100-500: May achieve convergence for low-volatility scenarios

  • n = 500-2000: Standard range for most insurance analyses

  • n > 2000: High-precision analysis or high-volatility scenarios

Edge Cases:
  • Fewer observations than window_size: Returns (False, inf)

  • All infinite values: Returns (False, inf)

  • High volatility data: May require very large samples for convergence

  • Bimodal distributions: Standard error may not capture full uncertainty

Performance Notes:
  • Fast execution even for large arrays (10,000+ observations)

  • Memory efficient rolling window calculations

  • Robust handling of infinite and missing values

See also

convergence_threshold: Threshold used for convergence decision
analyze_simulation_batch(): Includes automatic convergence analysis
calculate_ensemble_average(): Ensemble statistics that benefit from convergence

compare_scenarios(insured_results: List[SimulationResults] | ndarray, uninsured_results: List[SimulationResults] | ndarray, metric: str = 'equity') Dict[str, Any][source]

Compare insured vs uninsured scenarios using comprehensive ergodic analysis.

This is the core method for demonstrating ergodic advantages of insurance. It performs side-by-side comparison of insured and uninsured scenarios, calculating both time-average and ensemble-average growth rates to reveal the fundamental difference between expected value thinking and actual experienced growth.

The comparison reveals how insurance can be optimal from a time-average perspective even when it appears costly from an ensemble (expected value) perspective - the key insight of ergodic economics applied to insurance.

Parameters:
  • insured_results (Union[List[SimulationResults], np.ndarray]) – Simulation results from insured scenarios. Can be a list of SimulationResults objects from Monte Carlo runs, a list of numpy arrays representing trajectories, or a 2D numpy array with shape (n_simulations, n_timesteps).

  • uninsured_results (Union[List[SimulationResults], np.ndarray]) – Simulation results from uninsured scenarios, in the same format as insured_results. Should have the same number of simulations for a valid comparison.

  • metric (str) – Financial metric to analyze for comparison: “equity” (company equity over time, recommended), “assets” (total assets over time), “cash” (available cash over time), or any attribute available in SimulationResults objects. Defaults to “equity”.

Returns:

Comprehensive comparison results with nested structure:

  • ‘insured’ (Dict): Insured scenario statistics:

    • ‘time_average_mean’: Mean time-average growth rate

    • ‘time_average_median’: Median time-average growth rate

    • ‘time_average_std’: Standard deviation of growth rates

    • ‘ensemble_average’: Ensemble average growth rate

    • ‘survival_rate’: Fraction avoiding bankruptcy

    • ‘n_survived’: Absolute number of survivors

  • ‘uninsured’ (Dict): Uninsured scenario statistics:

    • Same structure as ‘insured’

  • ‘ergodic_advantage’ (Dict): Comparative metrics:

    • ‘time_average_gain’: Difference in time-average growth

    • ‘ensemble_average_gain’: Difference in ensemble averages

    • ‘survival_gain’: Improvement in survival rate

    • ‘t_statistic’: t-test statistic for significance

    • ‘p_value’: p-value for statistical significance

    • ‘significant’: Boolean indicating significance (p < 0.05)

Examples

Basic insurance vs no insurance comparison:

# Run Monte Carlo simulations (pseudo-code)
insured_sims = run_simulations(insurance_enabled=True, n_sims=1000)
uninsured_sims = run_simulations(insurance_enabled=False, n_sims=1000)

# Compare scenarios
comparison = analyzer.compare_scenarios(
    insured_sims,
    uninsured_sims,
    metric="equity"
)

# Extract key insights
time_avg_gain = comparison['ergodic_advantage']['time_average_gain']
survival_gain = comparison['ergodic_advantage']['survival_gain']
is_significant = comparison['ergodic_advantage']['significant']

print(f"Time-average growth improvement: {time_avg_gain:.2%}")
print(f"Survival rate improvement: {survival_gain:.1%}")
print(f"Statistical significance: {is_significant}")

Detailed analysis of results:

# Examine both perspectives
insured = comparison['insured']
uninsured = comparison['uninsured']
advantage = comparison['ergodic_advantage']

print("\n=== ENSEMBLE PERSPECTIVE (Traditional Analysis) ===")
print(f"Insured ensemble growth: {insured['ensemble_average']:.2%}")
print(f"Uninsured ensemble growth: {uninsured['ensemble_average']:.2%}")
print(f"Ensemble advantage: {advantage['ensemble_average_gain']:.2%}")

print("\n=== TIME-AVERAGE PERSPECTIVE (Ergodic Analysis) ===")
print(f"Insured time-average growth: {insured['time_average_mean']:.2%}")
print(f"Uninsured time-average growth: {uninsured['time_average_mean']:.2%}")
print(f"Time-average advantage: {advantage['time_average_gain']:.2%}")

print("\n=== SURVIVAL ANALYSIS ===")
print(f"Insured survival rate: {insured['survival_rate']:.1%}")
print(f"Uninsured survival rate: {uninsured['survival_rate']:.1%}")
print(f"Survival improvement: {advantage['survival_gain']:.1%}")

# Interpret ergodic vs ensemble difference
if advantage['time_average_gain'] > advantage['ensemble_average_gain']:
    print("\n✓ Insurance shows ergodic advantage!")
    print("  Time-average benefit exceeds ensemble expectation")
else:
    print("\n! No clear ergodic advantage detected")

Statistical significance analysis:

if comparison['ergodic_advantage']['significant']:
    p_val = comparison['ergodic_advantage']['p_value']
    t_stat = comparison['ergodic_advantage']['t_statistic']

    print(f"Results are statistically significant:")
    print(f"  t-statistic: {t_stat:.3f}")
    print(f"  p-value: {p_val:.4f}")
    print(f"  Confidence level: {(1-p_val)*100:.1f}%")
else:
    print("Results not statistically significant")
    print("Consider running more simulations")

Multiple metric analysis:

# Compare different financial metrics
metrics_to_analyze = ['equity', 'assets', 'cash']
results = {}

for metric in metrics_to_analyze:
    results[metric] = analyzer.compare_scenarios(
        insured_sims, uninsured_sims, metric=metric
    )

# Find metric showing strongest insurance advantage
best_metric = max(metrics_to_analyze,
    key=lambda m: results[m]['ergodic_advantage']['time_average_gain']
)

print(f"Strongest insurance advantage in: {best_metric}")
gain = results[best_metric]['ergodic_advantage']['time_average_gain']
print(f"Time-average improvement: {gain:.2%}")
Mathematical Background:

The comparison reveals the ergodic/non-ergodic nature of financial processes by calculating:

Time-Average Growth: Mean of individual trajectory growth rates:

g_time = mean([ln(X_i(T)/X_i(0))/T for each path i])

Ensemble Average Growth: Growth of the ensemble mean:

g_ensemble = ln(mean([X_i(T)])/mean([X_i(0)]))/T

Ergodic Divergence: g_time - g_ensemble

For multiplicative processes with volatility, these typically differ, with insurance often improving time-average more than ensemble average.

Interpretation Guidelines:

Positive Time-Average Gain: Insurance improves actual experienced growth rates, even if ensemble analysis suggests otherwise.

Survival Rate Improvement: Critical for long-term viability, often the primary benefit of insurance in high-volatility scenarios.

Statistical Significance: p < 0.05 indicates results are unlikely due to random chance, supporting reliability of conclusions.

Edge Cases:
  • All paths bankrupt in one scenario: Handled with -inf growth rates

  • Mismatched simulation counts: Statistics calculated on available data

  • Identical scenarios: All advantages will be zero

  • High volatility: May require more simulations for significance

Performance Notes:
  • Handles thousands of simulation paths efficiently

  • Memory-conscious processing of large trajectory datasets

  • Automatic handling of variable-length trajectories

See also

calculate_time_average_growth(): Individual path analysis
calculate_ensemble_average(): Ensemble statistics
significance_test(): Statistical testing details
ErgodicAnalysisResults: Comprehensive results format

Return type:

Dict[str, Any]

significance_test(sample1: List[float] | ndarray, sample2: List[float] | ndarray, test_type: str = 'two-sided') Tuple[float, float][source]

Perform statistical significance test between two growth rate samples.

This method conducts a two-sample t-test to determine whether observed differences between insured and uninsured scenarios are statistically significant or could reasonably be attributed to random variation. Statistical significance provides confidence that ergodic advantages are genuine rather than artifacts of sampling variability.

Parameters:
  • sample1 (Union[List[float], np.ndarray]) – First sample of growth rates, typically from insured scenarios. Should contain time-average growth rates from individual simulation paths. Infinite values (from bankruptcy) are automatically handled.

  • sample2 (Union[List[float], np.ndarray]) – Second sample of growth rates, typically from uninsured scenarios. Should be comparable to sample1 with same underlying business conditions but different insurance coverage.

  • test_type (str) – Type of statistical test to perform: - “two-sided”: Tests if samples have different means (default) - “greater”: Tests if sample1 mean > sample2 mean - “less”: Tests if sample1 mean < sample2 mean Defaults to “two-sided” for general hypothesis testing.

Returns:

Statistical test results:
  • t_statistic (float): t-test statistic value. Positive values indicate sample1 has higher mean than sample2.

  • p_value (float): Probability of observing the data under the null hypothesis of no difference. Lower values indicate stronger evidence against the null hypothesis.

Return type:

Tuple[float, float]

Examples

Test insurance benefit significance:

import numpy as np

# Growth rates from Monte Carlo simulations
insured_growth = np.array([0.048, 0.051, 0.047, 0.049, 0.052, ...])
uninsured_growth = np.array([0.038, -np.inf, 0.042, 0.035, 0.041, ...])

# Two-sided test for any difference
t_stat, p_value = analyzer.significance_test(
    insured_growth,
    uninsured_growth,
    test_type="two-sided"
)

print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_value:.4f}")

if p_value < 0.05:
    print("✓ Statistically significant difference at 5% level")
else:
    print("No significant difference detected")

One-sided test for insurance superiority:

# Test if insurance provides superior growth rates
t_stat, p_value = analyzer.significance_test(
    insured_growth,
    uninsured_growth,
    test_type="greater"
)

print(f"Testing if insured > uninsured:")
print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_value:.4f}")

if p_value < 0.01:
    print("✓ Strong evidence that insurance improves growth (p < 0.01)")
elif p_value < 0.05:
    print("✓ Moderate evidence that insurance improves growth (p < 0.05)")
elif p_value < 0.10:
    print("? Weak evidence that insurance improves growth (p < 0.10)")
else:
    print("No significant evidence that insurance improves growth")

Comprehensive significance analysis:

# Test multiple hypotheses
tests = [
    ("two-sided", "Any difference"),
    ("greater", "Insurance superior"),
    ("less", "Insurance inferior")
]

print("=== Statistical Significance Analysis ===")
for test_type, description in tests:
    t_stat, p_value = analyzer.significance_test(
        insured_growth, uninsured_growth, test_type
    )

    significance = "***" if p_value < 0.001 else                                  "**" if p_value < 0.01 else                                  "*" if p_value < 0.05 else                                  "" if p_value < 0.10 else "n.s."

    print(f"{description:20}: t={t_stat:6.3f}, p={p_value:.4f} {significance}")

Sample size and power analysis:

# Check if samples are large enough for reliable testing
n1, n2 = len(insured_growth), len(uninsured_growth)

if n1 < 30 or n2 < 30:
    print(f"⚠ Small sample sizes (n1={n1}, n2={n2})")
    print("Consider running more simulations for robust results")

# Calculate effect size (Cohen's d)
mean1, mean2 = np.mean(insured_growth), np.mean(uninsured_growth)
pooled_std = np.sqrt(((n1-1)*np.var(insured_growth, ddof=1) +
                      (n2-1)*np.var(uninsured_growth, ddof=1)) / (n1+n2-2))

cohens_d = (mean1 - mean2) / pooled_std

print(f"Effect size (Cohen's d): {cohens_d:.3f}")
if abs(cohens_d) > 0.8:
    print("Large effect size")
elif abs(cohens_d) > 0.5:
    print("Medium effect size")
elif abs(cohens_d) > 0.2:
    print("Small effect size")
else:
    print("Very small effect size")
Statistical Interpretation:

p-value Guidelines:
  • p < 0.001: Very strong evidence against null hypothesis (***)

  • p < 0.01: Strong evidence (**)

  • p < 0.05: Moderate evidence (*)

  • p < 0.10: Weak evidence

  • p >= 0.10: No significant evidence

t-statistic Guidelines:
  • |t| > 3: Very large effect

  • |t| > 2: Large effect

  • |t| > 1: Moderate effect

  • |t| < 1: Small effect

Assumptions and Limitations:

t-test Assumptions:
  1. Samples are independent

  2. Data approximately normally distributed (robust to violations with large n)

  3. Equal variances (Welch’s t-test used automatically if needed)

Handling of Infinite Values: The method automatically excludes infinite values (from bankruptcy scenarios) using scipy’s nan_policy=’omit’. This is appropriate since infinite values represent qualitatively different outcomes (business failure) rather than extreme but finite growth rates.
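A minimal sketch of the equivalent scipy call under these assumptions (infinite values are first mapped to NaN so that nan_policy='omit' excludes them; the alternative argument requires scipy >= 1.6):

import numpy as np
from scipy import stats

insured = np.array([0.048, 0.051, 0.047, 0.049, 0.052])
uninsured = np.array([0.038, -np.inf, 0.042, 0.035, 0.041])

# Map bankruptcy outcomes (-inf) to NaN so nan_policy='omit' drops them
uninsured = np.where(np.isfinite(uninsured), uninsured, np.nan)

# Welch's t-test (equal_var=False) does not assume equal variances
t_stat, p_value = stats.ttest_ind(
    insured, uninsured,
    equal_var=False, nan_policy='omit', alternative='greater'
)
print(f"t={t_stat:.3f}, p={p_value:.4f}")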

Multiple Testing: If performing multiple significance tests, consider adjusting significance levels (e.g., Bonferroni correction) to account for increased Type I error probability.

Performance Notes:
  • Efficient for samples up to 10,000+ observations

  • Automatic handling of missing/infinite values

  • Uses scipy.stats for robust statistical calculations

See also

compare_scenarios(): Includes automatic significance testing
check_convergence(): For determining adequate sample sizes
scipy.stats.ttest_ind: Underlying statistical test implementation

analyze_simulation_batch(simulation_results: List[SimulationResults], label: str = 'Scenario') Dict[str, Any][source]

Perform comprehensive ergodic analysis on a batch of simulation results.

This method provides a complete analysis of a single scenario (e.g., all insured simulations or all uninsured simulations), including time-average and ensemble statistics, convergence analysis, and survival metrics. It serves as a comprehensive diagnostic tool for understanding the ergodic properties of a particular insurance strategy.

Parameters:
  • simulation_results (List[SimulationResults]) – List of simulation result objects from Monte Carlo runs. Each should contain trajectory data including equity, assets, years, and potential insolvency information. Typically 100-2000 simulations for robust analysis.

  • label (str) – Descriptive label for this batch of simulations, used in reporting and metadata. Examples: “High Deductible”, “Full Coverage”, “Base Case”, “Stress Scenario”. Defaults to “Scenario”.

Returns:

Dict[str, Any]: Comprehensive analysis dictionary with nested structure:

  • ‘label’ (str): The provided label for identification

  • ‘n_simulations’ (int): Number of simulations analyzed

  • ‘time_average’ (Dict): Time-average growth statistics:

    • ‘mean’: Mean time-average growth rate across all paths

    • ‘median’: Median time-average growth rate

    • ‘std’: Standard deviation of growth rates

    • ‘min’: Minimum growth rate observed

    • ‘max’: Maximum growth rate observed

  • ‘ensemble_average’ (Dict): Ensemble statistics:

    • ‘mean’: Ensemble average growth rate

    • ‘std’: Standard deviation of ensemble

    • ‘median’: Median of ensemble

    • ‘survival_rate’: Fraction avoiding bankruptcy

    • ‘n_survived’: Absolute number of survivors

    • ‘n_total’: Total number of simulations

  • ‘convergence’ (Dict): Monte Carlo convergence analysis:

    • ‘converged’: Boolean indicating if results have converged

    • ‘standard_error’: Current standard error of the estimates

    • ‘threshold’: Convergence threshold used

  • ‘survival_analysis’ (Dict): Survival metrics:

    • ‘survival_rate’: Fraction avoiding bankruptcy (duplicate)

    • ‘mean_survival_time’: Average time to insolvency or end

  • ‘ergodic_divergence’ (float): Difference between time-average and ensemble-average growth rates

Examples:

Analyze a batch of insured simulations:

# Run Monte Carlo simulations
insured_results = []
for i in range(1000):
    sim = run_single_simulation(insurance_enabled=True, seed=i)
    insured_results.append(sim)

# Comprehensive analysis
analysis = analyzer.analyze_simulation_batch(
    insured_results,
    label="Full Insurance Coverage"
)

# Report key findings
print(f"
=== {analysis[‘label’]} Analysis ===”)

print(f”Simulations: {analysis[‘n_simulations’]}”) print(f”Time-average growth: {analysis[‘time_average’][‘mean’]:.2%} ± {analysis[‘time_average’][‘std’]:.2%}”) print(f”Ensemble average: {analysis[‘ensemble_average’][‘mean’]:.2%}”) print(f”Survival rate: {analysis[‘survival_analysis’][‘survival_rate’]:.1%}”) print(f”Ergodic divergence: {analysis[‘ergodic_divergence’]:.3f}”)

Check Monte Carlo convergence:

if analysis['convergence']['converged']:
    print(f"✓ Results have converged (SE: {analysis['convergence']['standard_error']:.4f})")
    print("Analysis is reliable for decision making")
else:
    current_se = analysis['convergence']['standard_error']
    target_se = analysis['convergence']['threshold']

    print(f"⚠ Convergence not reached (SE: {current_se:.4f} > {target_se:.4f})")

    # Estimate additional simulations needed
    current_n = analysis['n_simulations']
    factor = (current_se / target_se) ** 2
    recommended_n = int(current_n * factor)
    additional_needed = recommended_n - current_n

    print(f"Recommend ~{additional_needed} additional simulations")

Compare growth rate distributions:

# Analyze distribution characteristics
time_avg = analysis['time_average']

print(f"
=== Growth Rate Distribution ===”)

print(f”Mean: {time_avg[‘mean’]:.2%}”) print(f”Median: {time_avg[‘median’]:.2%}”) print(f”Std Dev: {time_avg[‘std’]:.2%}”) print(f”Range: {time_avg[‘min’]:.2%} to {time_avg[‘max’]:.2%}”)

# Check for skewness if time_avg[‘mean’] > time_avg[‘median’]:

print(“Distribution is right-skewed (long tail of high growth)”)

elif time_avg[‘mean’] < time_avg[‘median’]:

print(“Distribution is left-skewed (long tail of poor performance)”)

else:

print(“Distribution appears roughly symmetric”)

Survival analysis insights:

survival = analysis['survival_analysis']
ensemble = analysis['ensemble_average']

print(f"
=== Survival Analysis ===”)

print(f”Survival rate: {survival[‘survival_rate’]:.1%}”) print(f”Survivors: {ensemble[‘n_survived’]}/{ensemble[‘n_total’]}”) print(f”Mean time to insolvency/end: {survival[‘mean_survival_time’]:.1f} years”)

# Risk assessment if survival[‘survival_rate’] < 0.9:

print(”⚠ High bankruptcy risk - consider more insurance”)

elif survival[‘survival_rate’] > 0.99:

print(”✓ Very low bankruptcy risk - insurance is effective”)

else:

print(”✓ Moderate bankruptcy risk - acceptable for most businesses”)

Ergodic divergence interpretation:

divergence = analysis['ergodic_divergence']

if abs(divergence) < 0.001:  # Less than 0.1%
    print("Minimal ergodic divergence - process is nearly ergodic")
elif divergence > 0:
    print(f"Positive ergodic divergence ({divergence:.3f})")
    print("Time-average exceeds ensemble average - favorable")
else:
    print(f"Negative ergodic divergence ({divergence:.3f})")
    print("Ensemble average exceeds time-average - volatility drag")
Use Cases:

Single Scenario Analysis: Understand the characteristics of one insurance configuration before comparing alternatives.

Convergence Diagnostics: Determine if enough simulations have been run for reliable conclusions.

Risk Assessment: Evaluate bankruptcy probabilities and growth rate distributions for risk management decisions.

Parameter Sensitivity: Analyze how changes in insurance parameters affect ergodic properties by comparing batch analyses.

Performance Notes:
  • Efficient processing of 1000+ simulation results

  • Memory-conscious handling of trajectory data

  • Automatic filtering of invalid/infinite growth rates

  • Vectorized calculations for speed

Warning:

Large numbers of bankruptcy scenarios may skew statistics. Check the survival rate and consider whether the scenario parameters are realistic for your analysis goals.

See Also:

compare_scenarios(): For comparing multiple scenario batches
check_convergence(): For detailed convergence analysis
SimulationResults: Expected format for simulation_results
ErgodicAnalysisResults: Alternative comprehensive results format

Return type:

Dict[str, Any]

integrate_loss_ergodic_analysis(loss_data: LossData, insurance_program: InsuranceProgram | None, manufacturer: Any, time_horizon: int, n_simulations: int = 100) ErgodicAnalysisResults[source]

Perform end-to-end integrated loss modeling and ergodic analysis.

This method provides a complete pipeline from loss generation through insurance application to final ergodic analysis. It demonstrates the full power of the ergodic framework by seamlessly connecting actuarial loss modeling with business financial modeling and ergodic growth analysis.

The integration pipeline follows these steps:
  1. Validate Input Data: Ensure loss data meets quality standards

  2. Apply Insurance Program: Calculate recoveries and net exposures

  3. Generate Annual Loss Aggregates: Convert to time-series format

  4. Run Monte Carlo Simulations: Execute business simulations with losses

  5. Calculate Ergodic Metrics: Analyze time-average vs ensemble behavior

  6. Validate Results: Ensure mathematical and business logic consistency

  7. Package Results: Return comprehensive analysis in standardized format

Parameters:
  • loss_data (LossData) – Standardized loss data object containing loss frequency and severity distributions. Must pass validation checks including proper distribution parameters and reasonable ranges.

  • insurance_program (Optional[InsuranceProgram]) – Insurance program to apply to losses. If None, analysis proceeds with no insurance coverage. Program should specify layers, deductibles, limits, and premium rates.

  • manufacturer (Any) – Manufacturer model instance for running business simulations. Should be configured with appropriate initial conditions and financial parameters. Must support claim processing and annual step operations.

  • time_horizon (int) – Analysis time horizon in years. Typical values:

    • 10-20 years: Standard analysis period

    • 50+ years: Long-term ergodic behavior

    • 5-10 years: Quick analysis for parameter exploration

  • n_simulations (int) – Number of Monte Carlo simulations to run. More simulations provide better statistical reliability:

    • 100: Quick analysis for development/testing

    • 1000: Standard analysis for decision making

    • 5000+: High-precision analysis for final recommendations

    Defaults to 100 for reasonable performance.

Returns:

ErgodicAnalysisResults: Comprehensive analysis results containing:

  • Time-average and ensemble-average growth rates

  • Survival rates and ergodic divergence

  • Insurance impact metrics (premiums, recoveries, net benefit)

  • Validation status and detailed metadata

  • All necessary information for decision making

Examples:

Basic integrated analysis:

from ergodic_insurance import LossData, InsuranceProgram, WidgetManufacturer, ManufacturerConfig

# Set up loss data
loss_data = LossData.from_poisson_lognormal(
    frequency_lambda=2.5,      # 2.5 claims per year on average
    severity_mean=1_000_000,   # $1M average claim
    severity_cv=2.0,           # High variability
    time_horizon=20
)

# Configure insurance program
insurance = InsuranceProgram([
    # (attachment, limit, rate)
    (0, 1_000_000, 0.015),           # $1M primary layer at 1.5%
    (1_000_000, 10_000_000, 0.008),  # $10M excess at 0.8%
    (11_000_000, 50_000_000, 0.004)  # $50M umbrella at 0.4%
])

# Set up manufacturer
config = ManufacturerConfig(
    initial_assets=25_000_000,
    base_operating_margin=0.08,
    asset_turnover_ratio=0.75
)
manufacturer = WidgetManufacturer(config)

# Run integrated analysis
results = analyzer.integrate_loss_ergodic_analysis(
    loss_data=loss_data,
    insurance_program=insurance,
    manufacturer=manufacturer,
    time_horizon=20,
    n_simulations=1000
)

# Interpret results
if results.validation_passed:
    print(f"Time-average growth: {results.time_average_growth:.2%}")
    print(f"Ensemble average: {results.ensemble_average_growth:.2%}")
    print(f"Survival rate: {results.survival_rate:.1%}")
    print(f"Ergodic divergence: {results.ergodic_divergence:.3f}")

    net_benefit = results.insurance_impact['net_benefit']
    print(f"Insurance net benefit: ${net_benefit:,.0f}")

    if results.ergodic_divergence > 0:
        print("✓ Insurance shows ergodic advantage")
else:
    print("⚠ Analysis validation failed - check inputs")

Compare insured vs uninsured scenarios:

# Run analysis with insurance
insured_results = analyzer.integrate_loss_ergodic_analysis(
    loss_data, insurance, manufacturer, 20, 1000
)

# Run analysis without insurance
uninsured_results = analyzer.integrate_loss_ergodic_analysis(
    loss_data, None, manufacturer, 20, 1000
)

# Compare outcomes
if insured_results.validation_passed and uninsured_results.validation_passed:
    growth_improvement = (insured_results.time_average_growth -
                        uninsured_results.time_average_growth)
    survival_improvement = (insured_results.survival_rate -
                          uninsured_results.survival_rate)

    print(f"Growth rate improvement: {growth_improvement:.2%}")
    print(f"Survival rate improvement: {survival_improvement:.1%}")

    if growth_improvement > 0 and survival_improvement > 0:
        print("✓ Insurance provides clear benefits")
    elif growth_improvement > 0:
        print("✓ Insurance improves growth despite survival costs")
    elif survival_improvement > 0:
        print("✓ Insurance improves survival despite growth costs")
    else:
        print("? Insurance benefits unclear - review parameters")

Parameter sensitivity analysis:

# Test different loss frequencies
frequencies = [1.0, 2.0, 3.0, 4.0, 5.0]
results = {}

for freq in frequencies:
    test_loss_data = LossData.from_poisson_lognormal(
        frequency_lambda=freq,
        severity_mean=1_000_000,
        severity_cv=2.0,
        time_horizon=20
    )

    result = analyzer.integrate_loss_ergodic_analysis(
        test_loss_data, insurance, manufacturer, 20, 500
    )

    results[freq] = result

# Find optimal frequency range for insurance benefit
for freq, result in results.items():
    if result.validation_passed:
        print(f"Frequency {freq}: Growth={result.time_average_growth:.2%}, "
              f"Survival={result.survival_rate:.1%}")

Detailed insurance impact analysis:

if results.validation_passed:
    impact = results.insurance_impact
    metadata = results.metadata

    print(f"
=== Insurance Impact Analysis ===”)

print(f”Total premiums paid: ${impact.get(‘premium_cost’, 0):,.0f}”) print(f”Total recoveries: ${impact.get(‘recovery_benefit’, 0):,.0f}”) print(f”Net financial benefit: ${impact.get(‘net_benefit’, 0):,.0f}”) print(f”Growth rate improvement: {impact.get(‘growth_improvement’, 0):.2%}”)

# Calculate benefit ratios premium_cost = impact.get(‘premium_cost’, 1) # Avoid division by zero if premium_cost > 0:

recovery_ratio = impact.get(‘recovery_benefit’, 0) / premium_cost benefit_ratio = impact.get(‘net_benefit’, 0) / premium_cost

print(f”

=== Efficiency Metrics ===”)

print(f”Recovery ratio: {recovery_ratio:.2f}x premiums”) print(f”Net benefit ratio: {benefit_ratio:.2f}x premiums”)

if benefit_ratio > 0:

print(”✓ Insurance provides positive net value”)

else:

print(”⚠ Insurance costs exceed benefits in expectation”) print(” (But may still provide ergodic advantages)”)

Validation and Error Handling:

The method includes comprehensive validation at multiple stages:

Input Validation:
  • Loss data consistency checks

  • Insurance program parameter validation

  • Manufacturer model state verification

Process Validation:
  • Simulation convergence monitoring

  • Mathematical consistency checks

  • Business logic validation

Output Validation:
  • Result reasonableness checks

  • Statistical significance assessment

  • Cross-validation with alternative methods

Performance Considerations:
  • Optimized for 100-5000 simulation runs

  • Memory-efficient trajectory storage

  • Parallel processing capabilities where available

  • Progress monitoring for long-running analyses

Error Conditions:

Returns results with validation_passed=False if:
  • Loss data fails validation checks

  • All simulation paths end in bankruptcy

  • Mathematical inconsistencies detected

  • Insufficient data for statistical analysis

See Also:

ErgodicAnalysisResults: Detailed results format
LossData: Loss data requirements
InsuranceProgram: Insurance setup
validate_insurance_ergodic_impact(): Additional validation methods

Return type:

ErgodicAnalysisResults

validate_insurance_ergodic_impact(base_scenario: SimulationResults, insurance_scenario: SimulationResults, insurance_program: InsuranceProgram | None = None) ValidationResults[source]

Comprehensively validate insurance effects in ergodic calculations.

This method performs detailed validation to ensure that insurance impacts are properly reflected in the ergodic analysis. It checks the mathematical consistency and business logic of insurance effects on cash flows, growth rates, and survival probabilities.

The validation is crucial for ensuring that ergodic analysis results are reliable and that observed insurance benefits (or costs) are genuine rather than artifacts of modeling errors or inconsistent implementations.

Validation Checks Performed:
  1. Premium Deduction Validation: Verifies that insurance premiums are properly deducted from cash flows and reflected in net income

  2. Recovery Credit Validation: Confirms that insurance recoveries are properly credited and improve financial outcomes

  3. Collateral Impact Validation: Checks that letter of credit costs and asset restrictions are properly modeled

  4. Growth Rate Consistency: Validates that time-average growth calculations properly reflect insurance benefits

Parameters:
  • base_scenario (SimulationResults) – Simulation results from baseline scenario without insurance coverage. Should represent the same business conditions and loss realizations as insurance_scenario but without insurance program applied.

  • insurance_scenario (SimulationResults) – Simulation results from scenario with insurance coverage. Should be directly comparable to base_scenario with only insurance coverage as the differentiating factor.

  • insurance_program (Optional[InsuranceProgram]) – The insurance program that was applied in insurance_scenario. If provided, enables more detailed validation of premium calculations and coverage effects. If None, performs validation based on observed differences only.

Returns:

ValidationResults: Comprehensive validation results containing:

  • premium_deductions_correct: Boolean indicating premium validation

  • recoveries_credited: Boolean indicating recovery validation

  • collateral_impacts_included: Boolean indicating collateral validation

  • time_average_reflects_benefit: Boolean indicating growth validation

  • overall_valid: Boolean indicating overall validation status

  • details: Dict with detailed validation information and metrics

Examples:

Basic validation after scenario comparison:

# Run paired simulations
base_sim = run_simulation(insurance_enabled=False, seed=12345)
insured_sim = run_simulation(insurance_enabled=True, seed=12345)

# Validate insurance effects
validation = analyzer.validate_insurance_ergodic_impact(
    base_sim,
    insured_sim,
    insurance_program
)

if validation.overall_valid:
    print("✓ Insurance effects properly modeled")
    print(f"  Premium deductions: {validation.premium_deductions_correct}")
    print(f"  Recoveries credited: {validation.recoveries_credited}")
    print(f"  Collateral impacts: {validation.collateral_impacts_included}")
    print(f"  Growth consistency: {validation.time_average_reflects_benefit}")
else:
    print("⚠ Validation issues detected")
    print("Review modeling implementation")

Detailed validation diagnostics:

validation = analyzer.validate_insurance_ergodic_impact(
    base_scenario, insurance_scenario, insurance_program
)

# Examine premium validation details
if 'premium_check' in validation.details:
    premium_info = validation.details['premium_check']
    print(f"
=== Premium Validation ===”)

print(f”Expected premium: ${premium_info[‘expected’]:,.0f}”) print(f”Actual cost difference: ${premium_info[‘actual_diff’]:,.0f}”) print(f”Validation passed: {premium_info[‘valid’]}”)

if not premium_info[‘valid’]:

diff = abs(premium_info[‘expected’] - premium_info[‘actual_diff’]) print(f”⚠ Premium discrepancy: ${diff:,.0f}”)

# Examine recovery validation details if ‘recovery_check’ in validation.details:

recovery_info = validation.details[‘recovery_check’] print(f”

=== Recovery Validation ===”)

print(f”Base scenario claims: ${recovery_info[‘base_claims’]:,.0f}”) print(f”Insured scenario claims: ${recovery_info[‘insured_claims’]:,.0f}”) print(f”Base final equity: ${recovery_info[‘base_final_equity’]:,.0f}”) print(f”Insured final equity: ${recovery_info[‘insured_final_equity’]:,.0f}”) print(f”Validation passed: {recovery_info[‘valid’]}”)

# Examine growth rate validation if ‘growth_check’ in validation.details:

growth_info = validation.details[‘growth_check’] print(f”

=== Growth Rate Validation ===”)

print(f”Base growth rate: {growth_info[‘base_growth’]:.2%}”) print(f”Insured growth rate: {growth_info[‘insured_growth’]:.2%}”) print(f”Growth improvement: {growth_info[‘improvement’]:.2%}”) print(f”Validation passed: {growth_info[‘valid’]}”)

if growth_info[‘improvement’] > 0:

print(”✓ Insurance improves time-average growth”)

elif np.isfinite(growth_info[‘insured_growth’]) and not np.isfinite(growth_info[‘base_growth’]):

print(”✓ Insurance prevents bankruptcy (infinite improvement)”)

Validation in Monte Carlo context:

# Validate across multiple random seeds
validation_results = []

for seed in range(10):  # Test 10 paired simulations
    base = run_simulation(insurance_enabled=False, seed=seed)
    insured = run_simulation(insurance_enabled=True, seed=seed)

    validation = analyzer.validate_insurance_ergodic_impact(
        base, insured, insurance_program
    )
    validation_results.append(validation.overall_valid)

# Check consistency across seeds
validation_rate = sum(validation_results) / len(validation_results)
print(f"Validation rate across seeds: {validation_rate:.1%}")

if validation_rate < 0.8:
    print("⚠ Inconsistent validation - check model implementation")
else:
    print("✓ Consistent validation across scenarios")

Integration with scenario comparison:

# Run comparison analysis
comparison = analyzer.compare_scenarios(
    [insured_sim], [base_sim], metric="equity"
)

# Validate the comparison
validation = analyzer.validate_insurance_ergodic_impact(
    base_sim, insured_sim, insurance_program
)

# Cross-check results
if validation.overall_valid and comparison['ergodic_advantage']['significant']:
    print("✓ Validated significant ergodic advantage from insurance")
    print(f"  Time-average improvement: {comparison['ergodic_advantage']['time_average_gain']:.2%}")
    print(f"  Statistical significance: p = {comparison['ergodic_advantage']['p_value']:.4f}")
elif validation.overall_valid:
    print("✓ Insurance effects validated but not statistically significant")
    print("Consider running more simulations or adjusting parameters")
else:
    print("⚠ Validation failed - results may be unreliable")
    print("Review model implementation before drawing conclusions")
Validation Logic Details:

Premium Validation: Compares expected premium costs (from insurance program) with actual observed difference in net income between scenarios. Allows for small numerical differences (<1% of expected premium).

Recovery Validation: Checks that insurance scenario shows better financial performance despite potential premium costs. Allows for 5% variance to account for timing differences and model approximations.

Collateral Validation: Verifies that letter of credit costs and asset restrictions are reflected in the financial calculations. Checks for non-zero differences in asset levels between scenarios.

Growth Rate Validation: Ensures that time-average growth calculations properly reflect insurance benefits, especially in scenarios with significant loss exposure. Handles bankruptcy cases appropriately.

Common Validation Failures:
  • Premium costs not properly deducted from cash flows

  • Insurance recoveries not credited to reduce net losses

  • Letter of credit collateral costs not included in expense calculations

  • Inconsistent treatment of bankruptcy scenarios

  • Timing mismatches between premium payments and loss occurrences

Troubleshooting:

If validation fails, check:
  1. Consistent random seed usage between base and insured scenarios

  2. Proper integration of insurance program with manufacturer model

  3. Correct timing of premium payments and loss recoveries

  4. Accurate letter of credit cost calculations

  5. Consistent handling of bankruptcy and survival scenarios

Performance Notes:
  • Fast execution for single scenario pairs

  • Efficient for batch validation across multiple seeds

  • Comprehensive diagnostics with minimal computational overhead

See Also:

ValidationResults: Detailed validation results format
compare_scenarios(): Main scenario comparison method
integrate_loss_ergodic_analysis(): End-to-end analysis pipeline
InsuranceProgram: Insurance modeling

Return type:

ValidationResults

ergodic_insurance.excel_reporter module

Excel report generation for financial statements and analysis.

This module provides comprehensive Excel report generation functionality, creating professional financial statements, diagnostic reports, and Monte Carlo aggregations with advanced formatting and validation.

Example

Generate Excel report from simulation:

from pathlib import Path

from ergodic_insurance.excel_reporter import ExcelReporter, ExcelReportConfig
from ergodic_insurance.manufacturer import WidgetManufacturer

# Configure report
config = ExcelReportConfig(
    output_path=Path("./reports"),
    include_balance_sheet=True,
    include_income_statement=True,
    include_cash_flow=True
)

# Generate report
reporter = ExcelReporter(config)
output_file = reporter.generate_trajectory_report(
    manufacturer,
    "financial_statements.xlsx"
)
class ExcelReportConfig(output_path: Path = <factory>, include_balance_sheet: bool = True, include_income_statement: bool = True, include_cash_flow: bool = True, include_reconciliation: bool = True, include_metrics_dashboard: bool = True, include_pivot_data: bool = True, formatting: Dict[str, Any] | None = None, engine: str = 'auto', currency_format: str = '$#,##0', decimal_places: int = 0, date_format: str = 'yyyy-mm-dd') None[source]

Bases: object

Configuration for Excel report generation.

output_path

Directory for output files

include_balance_sheet

Whether to include balance sheet

include_income_statement

Whether to include income statement

include_cash_flow

Whether to include cash flow statement

include_reconciliation

Whether to include reconciliation sheet

include_metrics_dashboard

Whether to include metrics dashboard

include_pivot_data

Whether to include pivot-ready data sheet

formatting

Custom formatting options

engine

Excel engine to use (‘xlsxwriter’, ‘openpyxl’, ‘auto’)

currency_format

Currency format string

decimal_places

Number of decimal places for numbers

date_format

Date format string

output_path: Path
include_balance_sheet: bool = True
include_income_statement: bool = True
include_cash_flow: bool = True
include_reconciliation: bool = True
include_metrics_dashboard: bool = True
include_pivot_data: bool = True
formatting: Dict[str, Any] | None = None
engine: str = 'auto'
currency_format: str = '$#,##0'
decimal_places: int = 0
date_format: str = 'yyyy-mm-dd'
class ExcelReporter(config: ExcelReportConfig | None = None)[source]

Bases: object

Main Excel report generation engine.

This class handles the creation of comprehensive Excel reports from simulation data, including financial statements, metrics dashboards, and reconciliation reports.

config

Report configuration

workbook

Excel workbook object

formats

Dictionary of Excel format objects

engine

Selected Excel engine

workbook: Any | None
formats: Dict[str, Any]
generate_trajectory_report(manufacturer: WidgetManufacturer, output_file: str, title: str | None = None) Path[source]

Generate Excel report for a single simulation trajectory.

Creates a comprehensive Excel workbook with financial statements, metrics, and reconciliation for a single simulation run.

Parameters:
  • manufacturer (WidgetManufacturer) – WidgetManufacturer with simulation data

  • output_file (str) – Name of output Excel file

  • title (Optional[str]) – Optional report title

Return type:

Path

Returns:

Path to generated Excel file

generate_monte_carlo_report(results: Any, output_file: str, title: str | None = None) Path[source]

Generate aggregated report from Monte Carlo simulations.

Creates Excel report with statistical summaries across multiple simulation trajectories.

Parameters:
  • results (Any) – Monte Carlo simulation results

  • output_file (str) – Name of output Excel file

  • title (Optional[str]) – Optional report title

Return type:

Path

Returns:

Path to generated Excel file

ergodic_insurance.exposure_base module

Exposure base module for dynamic frequency scaling in insurance losses.

This module provides a hierarchy of exposure classes that dynamically adjust loss frequencies based on actual business metrics from the simulation. The exposure bases now work with real financial state from the manufacturer, not artificial growth projections.

Key Concepts:
  • Exposure bases query actual financial metrics from a state provider

  • Frequency multipliers are calculated from actual vs. base metrics

  • No artificial growth rates or projections

  • Direct integration with WidgetManufacturer financial state

Example

Basic usage with state-driven revenue exposure:

from ergodic_insurance.exposure_base import RevenueExposure
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator

# Create manufacturer
manufacturer = WidgetManufacturer(config)

# Create exposure linked to manufacturer's actual state
exposure = RevenueExposure(state_provider=manufacturer)

# Create loss generator
generator = ManufacturingLossGenerator.create_simple(
    frequency=0.5,
    severity_mean=1_000_000
)

# Losses will be generated based on actual revenue during simulation
Since:

Version 0.3.0 - Complete refactor to state-driven approach

class FinancialStateProvider(*args, **kwargs)[source]

Bases: Protocol

Protocol for providing current financial state to exposure bases.

This protocol defines the interface that any class must implement to provide financial metrics to exposure bases. The WidgetManufacturer class implements this protocol to supply real-time financial data.

property current_revenue: Decimal

Get current revenue.

property current_assets: Decimal

Get current total assets.

property current_equity: Decimal

Get current equity value.

property base_revenue: Decimal

Get base (initial) revenue for comparison.

property base_assets: Decimal

Get base (initial) assets for comparison.

property base_equity: Decimal

Get base (initial) equity for comparison.
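Any object exposing these six properties satisfies the protocol. A minimal sketch of a hypothetical static provider (not part of the package, but convenient for exercising exposure bases without a full WidgetManufacturer):

from decimal import Decimal

class StaticStateProvider:
    """Hypothetical provider with a frozen financial state (testing aid)."""

    def __init__(self, revenue: Decimal, assets: Decimal, equity: Decimal):
        self._revenue, self._assets, self._equity = revenue, assets, equity

    # Current metrics (identical to base here, since the state never changes)
    @property
    def current_revenue(self) -> Decimal:
        return self._revenue

    @property
    def current_assets(self) -> Decimal:
        return self._assets

    @property
    def current_equity(self) -> Decimal:
        return self._equity

    # Base metrics: the denominators used in frequency multipliers
    @property
    def base_revenue(self) -> Decimal:
        return self._revenue

    @property
    def base_assets(self) -> Decimal:
        return self._assets

    @property
    def base_equity(self) -> Decimal:
        return self._equity

provider = StaticStateProvider(Decimal("10000000"), Decimal("25000000"), Decimal("8000000"))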

class ExposureBase[source]

Bases: ABC

Abstract base class for exposure calculations.

Exposure represents the underlying business metric that drives claim frequency. Common examples include revenue, assets, employee count, or production volume.

Subclasses must implement methods to calculate absolute exposure levels and frequency multipliers at different time points.

abstractmethod get_exposure(time: float) float[source]

Get absolute exposure level at given time.

Parameters:

time (float) – Time in years from simulation start (can be fractional).

Returns:

Exposure level (e.g., revenue in dollars, asset value, etc.).

Must be non-negative.

Return type:

float

abstractmethod get_frequency_multiplier(time: float) float[source]

Get frequency adjustment factor relative to base.

The multiplier is applied to the base frequency to determine the actual claim frequency at a given time.

Parameters:

time (float) – Time in years from simulation start (can be fractional).

Returns:

Multiplier to apply to base frequency. A value of 1.0

means no change from base frequency, 2.0 means double the base frequency, etc. Must be non-negative.

Return type:

float

abstractmethod reset() None[source]

Reset exposure to initial state.

This method should reset any internal state, cached values, or random number generators to their initial conditions. Useful for running multiple independent simulations with the same exposure configuration.

Return type:

None
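A minimal concrete subclass illustrating the abstract interface (a hypothetical constant exposure, not part of the package):

from ergodic_insurance.exposure_base import ExposureBase

class ConstantExposure(ExposureBase):
    """Hypothetical exposure that never changes over time."""

    def __init__(self, level: float):
        self.level = level

    def get_exposure(self, time: float) -> float:
        return self.level  # flat exposure at every time point

    def get_frequency_multiplier(self, time: float) -> float:
        return 1.0  # base frequency is never adjusted

    def reset(self) -> None:
        pass  # stateless: nothing to reset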

class RevenueExposure(state_provider: FinancialStateProvider) None[source]

Bases: ExposureBase

Revenue-based exposure using actual financial state.

Models claim frequency that scales with actual business revenue from the simulation, not artificial growth projections. The exposure directly queries the current revenue from the manufacturer’s financial state.

state_provider

Object providing current and base financial metrics. Typically a WidgetManufacturer instance.

Example

Revenue exposure with actual manufacturer state:

from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.config import ManufacturerConfig

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=10_000_000)
)
exposure = RevenueExposure(state_provider=manufacturer)

# Exposure reflects actual manufacturer revenue
current_rev = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
state_provider: FinancialStateProvider
get_exposure(time: float) float[source]

Return current actual revenue from manufacturer.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Calculate multiplier from actual revenue ratio.

Return type:

float

reset() None[source]

No internal state to reset for state-driven exposure.

Return type:

None

class AssetExposure(state_provider: FinancialStateProvider) None[source]

Bases: ExposureBase

Asset-based exposure using actual financial state.

Models claim frequency based on actual asset values from the simulation, tracking real asset changes from operations, claims, and business growth. Suitable for businesses where physical assets drive risk exposure.

Frequency scales linearly with assets as more assets generally mean more insurable items that can generate claims.

state_provider

Object providing current and base financial metrics. Typically a WidgetManufacturer instance.

Example

Asset exposure with actual manufacturer state:

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=50_000_000)
)
exposure = AssetExposure(state_provider=manufacturer)

# Exposure reflects actual asset changes
current_assets = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
state_provider: FinancialStateProvider
get_exposure(time: float) float[source]

Return current actual assets from manufacturer.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Calculate multiplier from actual asset ratio.

Return type:

float

reset() None[source]

No internal state to reset for state-driven exposure.

Return type:

None

class EquityExposure(state_provider: FinancialStateProvider) None[source]

Bases: ExposureBase

Equity-based exposure using actual financial state.

Models claim frequency based on actual equity values from the simulation, tracking real equity changes from profits, losses, and retained earnings. Suitable for financial analysis where equity represents business scale.

state_provider

Object providing current and base financial metrics. Typically a WidgetManufacturer instance.

Example

Equity exposure with actual manufacturer state:

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=20_000_000)
)
exposure = EquityExposure(state_provider=manufacturer)

# Exposure reflects actual equity changes
current_equity = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
state_provider: FinancialStateProvider
get_exposure(time: float) float[source]

Return current actual equity from manufacturer.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Higher equity implies larger operations.

Return type:

float

reset() None[source]

No internal state to reset for state-driven exposure.

Return type:

None

class EmployeeExposure(base_employees: int, hiring_rate: float = 0.0, automation_factor: float = 0.0) None[source]

Bases: ExposureBase

Exposure based on employee count.

Models claim frequency based on workforce size, accounting for hiring and automation effects. Suitable for businesses where employee-related risks dominate (workers comp, employment practices, etc.).

base_employees

Initial number of employees.

hiring_rate

Annual net hiring rate (can be negative for downsizing).

automation_factor

Annual reduction in exposure per employee due to automation.

Example

Employee exposure with automation:

exposure = EmployeeExposure(
    base_employees=500,
    hiring_rate=0.05,  # 5% annual growth
    automation_factor=0.02  # 2% automation improvement
)
base_employees: int
hiring_rate: float = 0.0
automation_factor: float = 0.0
__post_init__()[source]

Validate inputs.

get_exposure(time: float) float[source]

Calculate employee count with hiring and automation effects.

Return type:

float

get_frequency_multiplier(time: float) float[source]

More employees = more workplace incidents, but automation helps.

Return type:

float

reset() None[source]

No state to reset.

Return type:

None

class ProductionExposure(base_units: float, growth_rate: float = 0.0, seasonality: Callable[[float], float] | None = None, quality_improvement_rate: float = 0.0) None[source]

Bases: ExposureBase

Exposure based on production volume/units.

Models claim frequency based on production output, with support for seasonal patterns and quality improvements that reduce defect rates.

base_units

Initial production volume (units per year).

growth_rate

Annual production growth rate.

seasonality

Optional function returning seasonal multiplier.

quality_improvement_rate

Annual reduction in defect-related claims.

Example

Production exposure with seasonality:

import numpy as np

def seasonal_pattern(time):
    # ±30% seasonal swing across the year
    return 1.0 + 0.3 * np.sin(2 * np.pi * time)

exposure = ProductionExposure(
    base_units=100_000,
    growth_rate=0.08,
    seasonality=seasonal_pattern,
    quality_improvement_rate=0.03
)
base_units: float
growth_rate: float = 0.0
seasonality: Callable[[float], float] | None = None
quality_improvement_rate: float = 0.0
__post_init__()[source]

Validate inputs.

get_exposure(time: float) float[source]

Calculate production volume with growth and seasonality.

Return type:

float

get_frequency_multiplier(time: float) float[source]

More production = more potential defects, but quality improvements help.

Return type:

float

reset() None[source]

No state to reset.

Return type:

None

class CompositeExposure(exposures: Dict[str, ExposureBase], weights: Dict[str, float]) None[source]

Bases: ExposureBase

Weighted combination of multiple exposure bases.

Allows modeling complex businesses with multiple risk drivers by combining different exposure types with specified weights.

exposures

Dictionary of named exposure bases.

weights

Dictionary of weights for each exposure (will be normalized).

Example

Composite exposure for diversified business:

manufacturer = WidgetManufacturer(config)

composite = CompositeExposure(
    exposures={
        'revenue': RevenueExposure(state_provider=manufacturer),
        'assets': AssetExposure(state_provider=manufacturer),
        'employees': EmployeeExposure(base_employees=500)
    },
    weights={'revenue': 0.5, 'assets': 0.3, 'employees': 0.2}
)
exposures: Dict[str, ExposureBase]
weights: Dict[str, float]
__post_init__()[source]

Normalize weights to sum to 1.0.

get_exposure(time: float) float[source]

Weighted average of constituent exposures.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Weighted average of frequency multipliers.

Return type:

float

reset() None[source]

Reset all constituent exposures.

Return type:

None

class ScenarioExposure(scenarios: Dict[str, List[float]], selected_scenario: str, interpolation: str = 'linear') None[source]

Bases: ExposureBase

Predefined exposure scenarios for planning and stress testing.

Allows specification of exact exposure paths for scenario analysis, with interpolation between specified time points.

scenarios

Dictionary mapping scenario names to exposure paths.

selected_scenario

Currently active scenario name.

interpolation

Interpolation method (‘linear’, ‘cubic’, ‘nearest’).

Example

Scenario-based exposure planning:

scenarios = {
    'baseline': [100, 105, 110, 116, 122],
    'recession': [100, 95, 90, 92, 96],
    'expansion': [100, 112, 125, 140, 155]
}

exposure = ScenarioExposure(
    scenarios=scenarios,
    selected_scenario='recession',
    interpolation='linear'
)
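With linear interpolation, the exposure at a fractional time is a blend of the two neighboring path values. A sketch of the arithmetic, assuming one path value per year:

import numpy as np

path = [100, 95, 90, 92, 96]  # 'recession' scenario, one value per year

# Exposure at t = 1.5 years lies halfway between years 1 and 2
value = np.interp(1.5, np.arange(len(path)), path)
print(value)  # 92.5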
scenarios: Dict[str, List[float]]
selected_scenario: str
interpolation: str = 'linear'
__post_init__()[source]

Validate scenarios.

get_exposure(time: float) float[source]

Interpolate exposure from scenario path.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Derive multiplier from exposure level.

Return type:

float

reset() None[source]

Cache base exposure.

Return type:

None

class StochasticExposure(base_value: float, process_type: str, parameters: Dict[str, float], seed: int | None = None) None[source]

Bases: ExposureBase

Stochastic exposure evolution using various processes.

Supports multiple stochastic processes for advanced exposure modeling:
  • Geometric Brownian Motion (GBM)

  • Mean-reverting (Ornstein-Uhlenbeck)

  • Jump diffusion

base_value

Initial exposure value.

process_type

Type of stochastic process (‘gbm’, ‘mean_reverting’, ‘jump_diffusion’).

parameters

Process-specific parameters.

seed

Random seed for reproducibility.

Example

GBM exposure process:

exposure = StochasticExposure(
    base_value=100_000_000,
    process_type='gbm',
    parameters={
        'drift': 0.05,      # 5% drift
        'volatility': 0.20  # 20% volatility
    },
    seed=42
)
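For the GBM case, a path can be generated step by step with the standard log-Euler update. A sketch of the recursion under the parameters above (the package's internal path construction may differ):

import numpy as np

drift, volatility, dt = 0.05, 0.20, 1.0
rng = np.random.default_rng(42)

value = 100_000_000.0
for _ in range(10):  # ten annual steps
    z = rng.standard_normal()
    # Exact GBM update: X(t+dt) = X(t) * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
    value *= np.exp((drift - 0.5 * volatility**2) * dt
                    + volatility * np.sqrt(dt) * z)
print(f"Exposure after 10 years: {value:,.0f}")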
base_value: float
process_type: str
parameters: Dict[str, float]
seed: int | None = None
__post_init__()[source]

Initialize and validate.

reset()[source]

Reset stochastic paths.

get_exposure(time: float) float[source]

Generate or retrieve stochastic path.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Derive multiplier from exposure level.

Return type:

float

ergodic_insurance.financial_statements module

Financial statement compilation and generation.

This module provides classes for generating standard financial statements (Balance Sheet, Income Statement, Cash Flow Statement) from simulation data. It supports both single trajectory and Monte Carlo aggregated reports with reconciliation capabilities.

Example

Generate financial statements from a manufacturer simulation:

from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.financial_statements import FinancialStatementGenerator

# Run simulation
manufacturer = WidgetManufacturer(config)
for year in range(10):
    manufacturer.step()

# Generate statements
generator = FinancialStatementGenerator(manufacturer)
balance_sheet = generator.generate_balance_sheet(year=5)
income_statement = generator.generate_income_statement(year=5)
cash_flow = generator.generate_cash_flow_statement(year=5)
class FinancialStatementConfig(currency_symbol: str = '$', decimal_places: int = 0, include_yoy_change: bool = True, include_percentages: bool = True, fiscal_year_end: int | None = None, consolidate_monthly: bool = True, current_claims_ratio: float = 0.1) None[source]

Bases: object

Configuration for financial statement generation.

currency_symbol

Symbol to use for currency formatting

decimal_places

Number of decimal places for numeric values

include_yoy_change

Whether to include year-over-year changes

include_percentages

Whether to include percentage breakdowns

fiscal_year_end

Month of fiscal year end (1-12). If None, inherits from the central Config.simulation.fiscal_year_end setting. Defaults to 12 (December) if neither is set, for calendar year alignment.

consolidate_monthly

Whether to consolidate monthly data into annual

current_claims_ratio

Fraction of claim liabilities classified as current (due within one year). Defaults to 0.1 (10%). Should be derived from actual claim payment schedules when available.

currency_symbol: str = '$'
decimal_places: int = 0
include_yoy_change: bool = True
include_percentages: bool = True
fiscal_year_end: int | None = None
consolidate_monthly: bool = True
current_claims_ratio: float = 0.1
class CashFlowStatement(metrics_history: List[Dict[str, Decimal | float | int | bool]], config: Any | None = None, ledger: Ledger | None = None)[source]

Bases: object

Generates cash flow statements using indirect or direct method.

This class creates properly structured cash flow statements with three sections (Operating, Investing, Financing) following GAAP standards. Supports both the indirect method (starting from net income) and the direct method (summing ledger entries) for operating activities.

When a ledger is provided, the direct method is available, which provides perfect reconciliation and audit trail for all cash flows.

metrics_history

List of metrics dictionaries from simulation

config

Configuration object with business parameters

ledger

Optional Ledger for direct method cash flow generation

generate_statement(year: int, period: str = 'annual', method: str = 'indirect') DataFrame[source]

Generate cash flow statement for specified year.

Parameters:
  • year (int) – Year index (0-based) for statement

  • period (str) – ‘annual’ or ‘monthly’ period type

  • method (str) – ‘indirect’ (default) or ‘direct’. Direct method requires a ledger to be provided during initialization.

Return type:

DataFrame

Returns:

DataFrame containing formatted cash flow statement

Raises:
  • IndexError – If year is out of range

  • ValueError – If direct method requested but no ledger available

class FinancialStatementGenerator(manufacturer: WidgetManufacturer | None = None, manufacturer_data: Dict[str, Any] | None = None, config: FinancialStatementConfig | None = None, ledger: Ledger | None = None)[source]

Bases: object

Generates financial statements from simulation data.

This class compiles standard financial statements (Balance Sheet, Income Statement, Cash Flow) from manufacturer metrics history. It handles both annual and monthly data, performs reconciliation checks, and calculates derived financial metrics.

When a ledger is provided (either directly or via the manufacturer), direct method cash flow statements can be generated, providing perfect reconciliation and audit trail for all cash transactions.

manufacturer_data

Raw simulation data from manufacturer

config

Configuration for statement generation

metrics_history

List of metrics dictionaries from simulation

years_available

Number of years of data available

ledger

Optional Ledger for direct method cash flow generation

generate_balance_sheet(year: int, compare_years: List[int] | None = None) DataFrame[source]

Generate balance sheet for specified year.

Creates a standard balance sheet with assets, liabilities, and equity sections. Includes year-over-year comparisons if configured.

When a ledger is available, balances are derived directly from the ledger using get_balance() for each account, ensuring perfect reconciliation. Otherwise, falls back to metrics_history from the manufacturer.

Parameters:
  • year (int) – Year index (0-based) for balance sheet

  • compare_years (Optional[List[int]]) – Optional list of years to compare against

Return type:

DataFrame

Returns:

DataFrame containing balance sheet data

Raises:

IndexError – If year is out of range

generate_income_statement(year: int, compare_years: List[int] | None = None, monthly: bool = False) DataFrame[source]

Generate income statement for specified year with proper GAAP structure.

Creates a standard income statement following US GAAP with proper categorization of COGS, operating expenses, and non-operating items. Supports both annual and monthly statement generation.

When a ledger is available, revenue and expenses are derived from ledger period changes using get_period_change(), ensuring perfect reconciliation. Otherwise, falls back to metrics_history from the manufacturer.

Parameters:
  • year (int) – Year index (0-based) for income statement

  • compare_years (Optional[List[int]]) – Optional list of years to compare against

  • monthly (bool) – If True, generate monthly statement (divides annual by 12)

Return type:

DataFrame

Returns:

DataFrame containing income statement data with GAAP structure

Raises:

IndexError – If year is out of range

generate_cash_flow_statement(year: int, period: str = 'annual', method: str = 'indirect') DataFrame[source]

Generate cash flow statement for specified year using CashFlowStatement class.

Creates a cash flow statement with three distinct sections (Operating, Investing, Financing). Supports both indirect method (starting from net income) and direct method (summing ledger entries) for operating activities.

When a ledger is available, the direct method is preferred as it provides perfect reconciliation and audit trail for all cash transactions by summing actual ledger entries.

Parameters:
  • year (int) – Year index (0-based) for cash flow statement

  • period (str) – ‘annual’ or ‘monthly’ for period type

  • method (str) – ‘indirect’ (default) or ‘direct’. Direct method requires a ledger to be available. When ledger is present and no method specified, direct method may be preferred for better accuracy.

Return type:

DataFrame

Returns:

DataFrame containing cash flow statement data

Raises:
  • IndexError – If year is out of range

  • ValueError – If direct method requested but no ledger available

generate_reconciliation_report(year: int) DataFrame[source]

Generate reconciliation report for financial statements.

Validates that financial statements balance and reconcile properly, checking key accounting identities and relationships.

Parameters:

year (int) – Year index (0-based) for reconciliation

Return type:

DataFrame

Returns:

DataFrame containing reconciliation checks and results

class MonteCarloStatementAggregator(monte_carlo_results: List[Dict] | DataFrame, config: FinancialStatementConfig | None = None)[source]

Bases: object

Aggregates financial statements across Monte Carlo simulations.

This class processes multiple simulation trajectories to create statistical summaries of financial statements, showing means, percentiles, and confidence intervals.

results

Monte Carlo simulation results

config

Configuration for statement generation

aggregate_balance_sheets(year: int, percentiles: List[float] | None = None) DataFrame[source]

Aggregate balance sheets across simulations.

Parameters:
  • year (int) – Year index to aggregate

  • percentiles (Optional[List[float]]) – Percentiles to calculate (defaults to [5, 25, 50, 75, 95])

Return type:

DataFrame

Returns:

DataFrame with aggregated balance sheet statistics

aggregate_income_statements(year: int, percentiles: List[float] | None = None) DataFrame[source]

Aggregate income statements across simulations.

Parameters:
  • year (int) – Year index to aggregate

  • percentiles (Optional[List[float]]) – Percentiles to calculate (defaults to [5, 25, 50, 75, 95])

Return type:

DataFrame

Returns:

DataFrame with aggregated income statement statistics

generate_convergence_analysis() DataFrame[source]

Analyze convergence of financial metrics across simulations.

Return type:

DataFrame

Returns:

DataFrame showing convergence statistics

ergodic_insurance.hjb_solver module

Hamilton-Jacobi-Bellman solver for optimal insurance control.

This module implements a Hamilton-Jacobi-Bellman (HJB) partial differential equation solver for finding optimal insurance strategies through dynamic programming. The solver handles multi-dimensional state spaces and provides theoretically optimal control policies.

The HJB equation provides globally optimal solutions by solving:

∂V/∂t + max_u[L^u V + f(x,u)] = 0

where V is the value function, L^u is the controlled infinitesimal generator, and f(x,u) is the running cost/reward.
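For orientation, in the standard one-dimensional case of a controlled diffusion dX_t = μ(X_t, u) dt + σ(X_t, u) dW_t, the controlled generator takes the explicit form

L^u V = μ(x, u) ∂V/∂x + ½ σ²(x, u) ∂²V/∂x²

so the HJB equation becomes a nonlinear second-order PDE in V, which the time-stepping schemes below discretize.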

Author: Alex Filiakov
Date: 2025-01-26

class TimeSteppingScheme(*values)[source]

Bases: Enum

Time stepping schemes for PDE integration.

EXPLICIT = 'explicit'
IMPLICIT = 'implicit'
CRANK_NICOLSON = 'crank_nicolson'
class BoundaryCondition(*values)[source]

Bases: Enum

Types of boundary conditions.

DIRICHLET = 'dirichlet'
NEUMANN = 'neumann'
ABSORBING = 'absorbing'
REFLECTING = 'reflecting'
class StateVariable(name: str, min_value: float, max_value: float, num_points: int, boundary_lower: BoundaryCondition = BoundaryCondition.ABSORBING, boundary_upper: BoundaryCondition = BoundaryCondition.ABSORBING, log_scale: bool = False) None[source]

Bases: object

Definition of a state variable in the HJB problem.

name: str
min_value: float
max_value: float
num_points: int
boundary_lower: BoundaryCondition = 'absorbing'
boundary_upper: BoundaryCondition = 'absorbing'
log_scale: bool = False
__post_init__()[source]

Validate state variable configuration.

get_grid() ndarray[source]

Generate grid points for this variable.

Return type:

ndarray

Returns:

Array of grid points

class ControlVariable(name: str, min_value: float, max_value: float, num_points: int = 50, continuous: bool = True) None[source]

Bases: object

Definition of a control variable in the HJB problem.

name: str
min_value: float
max_value: float
num_points: int = 50
continuous: bool = True
__post_init__()[source]

Validate control variable configuration.

get_values() ndarray[source]

Get discrete control values for optimization.

Return type:

ndarray

Returns:

Array of control values

class StateSpace(state_variables: List[StateVariable]) None[source]

Bases: object

Multi-dimensional state space for HJB problem.

Handles arbitrary dimensionality with proper grid management and boundary condition enforcement.

state_variables: List[StateVariable]
__post_init__()[source]

Initialize derived attributes.

get_boundary_mask() ndarray[source]

Get boolean mask for boundary points.

Return type:

ndarray

Returns:

Boolean array where True indicates boundary points

interpolate_value(value_function: ndarray, points: ndarray) ndarray[source]

Interpolate value function at arbitrary points.

Parameters:
  • value_function (ndarray) – Value function on grid

  • points (ndarray) – Points to interpolate at (shape: [n_points, n_dims])

Return type:

ndarray

Returns:

Interpolated values

class UtilityFunction[source]

Bases: ABC

Abstract base class for utility functions.

Defines the interface for utility functions used in the HJB equation. Concrete implementations should provide both the utility value and its derivative.

abstractmethod evaluate(wealth: ndarray) ndarray[source]

Evaluate utility at given wealth levels.

Parameters:

wealth (ndarray) – Wealth values

Return type:

ndarray

Returns:

Utility values

abstractmethod derivative(wealth: ndarray) ndarray[source]

Compute marginal utility (first derivative).

Parameters:

wealth (ndarray) – Wealth values

Return type:

ndarray

Returns:

Marginal utility values

abstractmethod inverse_derivative(marginal_utility: ndarray) ndarray[source]

Compute inverse of marginal utility.

Used for finding optimal controls in some formulations.

Parameters:

marginal_utility (ndarray) – Marginal utility values

Return type:

ndarray

Returns:

Wealth values corresponding to given marginal utilities

class LogUtility(wealth_floor: float = 1e-06)[source]

Bases: UtilityFunction

Logarithmic utility function for ergodic optimization.

U(w) = log(w)

This utility function maximizes the long-term growth rate and is particularly suitable for ergodic analysis.

evaluate(wealth: ndarray) ndarray[source]

Evaluate log utility.

Return type:

ndarray

derivative(wealth: ndarray) ndarray[source]

Compute marginal utility: U’(w) = 1/w.

Return type:

ndarray

inverse_derivative(marginal_utility: ndarray) ndarray[source]

Compute inverse: (U’)^(-1)(m) = 1/m.

Return type:

ndarray

class PowerUtility(risk_aversion: float = 2.0, wealth_floor: float = 1e-06)[source]

Bases: UtilityFunction

Power (CRRA) utility function with risk aversion parameter.

U(w) = w^(1-γ)/(1-γ) for γ ≠ 1
U(w) = log(w) for γ = 1

where γ is the coefficient of relative risk aversion.

evaluate(wealth: ndarray) ndarray[source]

Evaluate power utility.

Return type:

ndarray

derivative(wealth: ndarray) ndarray[source]

Compute marginal utility: U’(w) = w^(-γ).

Return type:

ndarray

inverse_derivative(marginal_utility: ndarray) ndarray[source]

Compute inverse: (U’)^(-1)(m) = m^(-1/γ).

Return type:

ndarray
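
Example

A small numeric sketch comparing the built-in utilities (values illustrative):

import numpy as np

w = np.array([1e6, 5e6, 1e7])
log_u = LogUtility()
crra = PowerUtility(risk_aversion=2.0)

log_u.evaluate(w)     # log-wealth, suited to time-average growth analysis
crra.derivative(w)    # marginal utility w^(-γ)
crra.inverse_derivative(crra.derivative(w))  # recovers w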

class ExpectedWealth[source]

Bases: UtilityFunction

Linear utility function for risk-neutral wealth maximization.

U(w) = w

This represents risk-neutral preferences where the goal is to maximize expected wealth.

evaluate(wealth: ndarray) ndarray[source]

Evaluate linear utility.

Return type:

ndarray

derivative(wealth: ndarray) ndarray[source]

Compute marginal utility: U’(w) = 1.

Return type:

ndarray

inverse_derivative(marginal_utility: ndarray) ndarray[source]

Inverse is undefined for constant marginal utility.

Return type:

ndarray

class HJBProblem(state_space: StateSpace, control_variables: List[ControlVariable], utility_function: UtilityFunction, dynamics: Callable[[ndarray, ndarray, float], ndarray], running_cost: Callable[[ndarray, ndarray, float], ndarray], terminal_value: Callable[[ndarray], ndarray] | None = None, discount_rate: float = 0.0, time_horizon: float | None = None) None[source]

Bases: object

Complete specification of an HJB optimal control problem.

state_space: StateSpace
control_variables: List[ControlVariable]
utility_function: UtilityFunction
dynamics: Callable[[ndarray, ndarray, float], ndarray]
running_cost: Callable[[ndarray, ndarray, float], ndarray]
terminal_value: Callable[[ndarray], ndarray] | None = None
discount_rate: float = 0.0
time_horizon: float | None = None
__post_init__()[source]

Validate problem specification.

class HJBSolverConfig(time_step: float = 0.01, max_iterations: int = 1000, tolerance: float = 1e-06, scheme: TimeSteppingScheme = TimeSteppingScheme.IMPLICIT, use_sparse: bool = True, verbose: bool = True) None[source]

Bases: object

Configuration for HJB solver.

time_step: float = 0.01
max_iterations: int = 1000
tolerance: float = 1e-06
scheme: TimeSteppingScheme = 'implicit'
use_sparse: bool = True
verbose: bool = True
class HJBSolver(problem: HJBProblem, config: HJBSolverConfig)[source]

Bases: object

Hamilton-Jacobi-Bellman PDE solver for optimal control.

Implements finite difference methods with upwind schemes for solving HJB equations. Supports multi-dimensional state spaces and various boundary conditions.

value_function: ndarray | None
optimal_policy: dict[str, ndarray] | None
solve() Tuple[ndarray, Dict[str, ndarray]][source]

Solve the HJB equation using policy iteration.

Return type:

Tuple[ndarray, Dict[str, ndarray]]

Returns:

Tuple of (value_function, optimal_policy_dict)

extract_feedback_control(state: ndarray) Dict[str, float][source]

Extract feedback control law at given state.

Parameters:

state (ndarray) – Current state values

Return type:

Dict[str, float]

Returns:

Dictionary of control variable names to optimal values

compute_convergence_metrics() Dict[str, Any][source]

Compute metrics for assessing solution quality.

Return type:

Dict[str, Any]

Returns:

Dictionary of convergence metrics
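
Example

A minimal end-to-end sketch. The dynamics and running cost below are illustrative assumptions, not package defaults:

import numpy as np

# One-dimensional wealth state on a log-spaced grid
wealth = StateVariable(name="wealth", min_value=1e5, max_value=1e8,
                       num_points=100, log_scale=True)
retention = ControlVariable(name="retention", min_value=0.0, max_value=1.0)

def dynamics(x, u, t):
    # Illustrative drift: growth reduced by premium spend (an assumption)
    return 0.08 * x - 0.02 * u * x

def running_cost(x, u, t):
    # Reward enters through the utility function; no explicit running cost
    return np.zeros_like(x)

problem = HJBProblem(
    state_space=StateSpace(state_variables=[wealth]),
    control_variables=[retention],
    utility_function=LogUtility(),
    dynamics=dynamics,
    running_cost=running_cost,
    discount_rate=0.05,
    time_horizon=10.0,
)

solver = HJBSolver(problem, HJBSolverConfig(time_step=0.01, verbose=False))
value_function, optimal_policy = solver.solve()

# Feedback control at a specific wealth level
control = solver.extract_feedback_control(np.array([5_000_000.0]))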

create_custom_utility(evaluate_func: Callable[[ndarray], ndarray], derivative_func: Callable[[ndarray], ndarray], inverse_derivative_func: Callable[[ndarray], ndarray] | None = None) UtilityFunction[source]

Factory function for creating custom utility functions.

This function allows users to create custom utility functions by providing the evaluation and derivative functions. This is the recommended way to add new utility functions beyond the built-in ones.

Parameters:
  • evaluate_func (Callable[[ndarray], ndarray]) – Function mapping wealth values to utility values

  • derivative_func (Callable[[ndarray], ndarray]) – Function mapping wealth values to marginal utility

  • inverse_derivative_func (Optional[Callable[[ndarray], ndarray]]) – Optional inverse of the marginal utility function

Return type:

UtilityFunction

Returns:

Custom utility function instance

Example

>>> # Create exponential utility: U(w) = 1 - exp(-α*w)
>>> import numpy as np
>>> def exp_eval(w):
...     alpha = 0.01
...     return 1 - np.exp(-alpha * w)
>>> def exp_deriv(w):
...     alpha = 0.01
...     return alpha * np.exp(-alpha * w)
>>> exp_utility = create_custom_utility(exp_eval, exp_deriv)

ergodic_insurance.insurance module

Insurance policy structure and claim processing.

This module provides classes for modeling multi-layer insurance policies with configurable attachment points, limits, and premium rates. It supports complex insurance structures commonly used in commercial insurance including excess layers, umbrella coverage, and multi-layer towers.

The module integrates with pricing engines for dynamic premium calculation and supports both static and market-driven pricing models.

Key Features:
  • Multi-layer insurance towers with attachment points and limits

  • Deductible and self-insured retention handling

  • Dynamic pricing integration with market cycles

  • Claim allocation across multiple layers

  • Premium calculation with various rating methods

Examples

Simple single-layer policy:

from ergodic_insurance.insurance import InsurancePolicy, InsuranceLayer

# $5M excess $1M with 3% rate
layer = InsuranceLayer(
    attachment_point=1_000_000,
    limit=5_000_000,
    rate=0.03
)

policy = InsurancePolicy(
    layers=[layer],
    deductible=500_000
)

# Process a $3M claim
company_payment, insurance_recovery = policy.process_claim(3_000_000)

Multi-layer tower:

# Build an insurance tower
layers = [
    InsuranceLayer(1_000_000, 4_000_000, 0.025),  # Primary
    InsuranceLayer(5_000_000, 5_000_000, 0.015),  # First excess
    InsuranceLayer(10_000_000, 10_000_000, 0.01), # Second excess
]

tower = InsurancePolicy(layers, deductible=1_000_000)
annual_premium = tower.calculate_premium()

Note

For advanced features like reinstatements and complex multi-layer programs, see the insurance_program module which provides EnhancedInsuranceLayer and InsuranceProgram classes.

Since:

Version 0.1.0

class InsuranceLayer(attachment_point: float, limit: float, rate: float) None[source]

Bases: object

Represents a single insurance layer.

Each layer has an attachment point (where coverage starts), a limit (maximum coverage), and a rate (premium percentage). Insurance layers are the building blocks of complex insurance programs.

attachment_point

Dollar amount where this layer starts providing coverage. Also known as the retention or excess point.

limit

Maximum coverage amount from this layer. The layer covers losses from attachment_point to (attachment_point + limit).

rate

Premium rate as a percentage of the limit. For example, 0.03 means 3% of limit as annual premium.

Examples

Primary layer with $1M retention:

primary = InsuranceLayer(
    attachment_point=1_000_000,  # $1M retention
    limit=5_000_000,             # $5M limit
    rate=0.025                   # 2.5% rate
)

# This covers losses from $1M to $6M
# Annual premium = $5M × 2.5% = $125,000

Excess layer in a tower:

excess = InsuranceLayer(
    attachment_point=6_000_000,  # Attaches at $6M
    limit=10_000_000,            # $10M limit
    rate=0.01                    # 1% rate (lower for excess)
)

Note

Layers are typically structured in towers with each successive layer attaching where the previous layer exhausts.

attachment_point: float
limit: float
rate: float
__post_init__()[source]

Validate insurance layer parameters.

Raises:

ValueError – If attachment_point is negative, limit is non-positive, or rate is negative.

calculate_recovery(loss_amount: float) float[source]

Calculate recovery from this layer for a given loss.

Determines how much of a loss is covered by this specific layer based on its attachment point and limit.

Parameters:

loss_amount (float) – Total loss amount in dollars to recover.

Returns:

Amount recovered from this layer in dollars. Returns 0 if the loss is below the attachment point, a partial recovery if the loss partially penetrates the layer, or the full limit if the loss exceeds the layer’s exhaust point.

Return type:

float

Examples

Layer with $1M attachment, $5M limit:

layer = InsuranceLayer(1_000_000, 5_000_000, 0.02)

# Loss below attachment
recovery = layer.calculate_recovery(500_000)  # Returns 0

# Loss partially in layer
recovery = layer.calculate_recovery(3_000_000)  # Returns 2M

# Loss exceeds layer
recovery = layer.calculate_recovery(10_000_000)  # Returns 5M (full limit)
calculate_premium() float[source]

Calculate premium for this layer.

Returns:

Annual premium amount in dollars (rate × limit).

Return type:

float

Examples

Calculate annual cost:

layer = InsuranceLayer(1_000_000, 10_000_000, 0.015)
premium = layer.calculate_premium()  # Returns 150,000
print(f"Annual premium: ${premium:,.0f}")
class InsurancePolicy(layers: List[InsuranceLayer], deductible: float = 0.0, pricing_enabled: bool = False, pricer: InsurancePricer | None = None)[source]

Bases: object

Multi-layer insurance policy with deductible.

Manages multiple insurance layers and processes claims across them, handling proper allocation of losses to each layer in sequence. Supports both static and dynamic pricing models.

The policy structure follows standard commercial insurance practices:
  1. Insured pays deductible first

  2. Losses then penetrate layers in order of attachment

  3. Each layer pays up to its limit

  4. Insured bears losses exceeding all coverage

layers

List of InsuranceLayer objects sorted by attachment point.

deductible

Self-insured retention before insurance applies.

pricing_enabled

Whether to use dynamic pricing models.

pricer

Optional pricing engine for market-based premiums.

pricing_results

History of pricing calculations.

Examples

Standard commercial property program:

# Build insurance program
policy = InsurancePolicy(
    layers=[
        InsuranceLayer(500_000, 4_500_000, 0.03),   # Primary
        InsuranceLayer(5_000_000, 10_000_000, 0.02), # Excess
        InsuranceLayer(15_000_000, 25_000_000, 0.01) # Umbrella
    ],
    deductible=500_000  # $500K SIR
)

# Process various claims
small_claim = policy.process_claim(100_000)  # All on deductible
medium_claim = policy.process_claim(3_000_000)  # Hits primary
large_claim = policy.process_claim(20_000_000)  # Multiple layers

Note

Layers are automatically sorted by attachment point to ensure proper claim allocation regardless of input order.

pricing_results: List[Any]
process_claim(claim_amount: float) Tuple[float, float][source]

Process a claim through the insurance structure.

Allocates a loss across the deductible and insurance layers, calculating how much is paid by the company versus insurance. Total insurance recovery is capped at (claim_amount - deductible) to prevent over-recovery when layer configurations overlap with the deductible region.

Parameters:

claim_amount (float) – Total claim amount.

Return type:

Tuple[float, float]

Returns:

Tuple of (company_payment, insurance_recovery).

calculate_recovery(claim_amount: float) float[source]

Calculate total insurance recovery for a claim.

Recovery is capped at (claim_amount - deductible) to prevent over-recovery when layer configurations overlap with the deductible region.

Parameters:

claim_amount (float) – Total claim amount.

Return type:

float

Returns:

Total insurance recovery amount.
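
Example

A sketch of the capping rule described above, using a single-layer policy:

policy = InsurancePolicy(
    layers=[InsuranceLayer(500_000, 4_500_000, 0.03)],
    deductible=500_000,
)

# $2M claim: recovery is capped at claim - deductible = $1.5M
recovery = policy.calculate_recovery(2_000_000)      # 1_500_000
company, insured = policy.process_claim(2_000_000)   # (500_000, 1_500_000)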

calculate_premium() float[source]

Calculate total premium across all layers.

Return type:

float

Returns:

Total annual premium.

classmethod from_yaml(config_path: str) InsurancePolicy[source]

Load insurance policy from YAML configuration.

Parameters:

config_path (str) – Path to YAML configuration file.

Return type:

InsurancePolicy

Returns:

InsurancePolicy configured from YAML.

get_total_coverage() float[source]

Get total coverage across all layers.

Return type:

float

Returns:

Maximum possible insurance coverage.

to_enhanced_program() ergodic_insurance.insurance_program.InsuranceProgram | None[source]

Convert to enhanced InsuranceProgram for advanced features.

Return type:

Optional[ergodic_insurance.insurance_program.InsuranceProgram]

Returns:

InsuranceProgram instance with same configuration.

apply_pricing(expected_revenue: float, market_cycle: MarketCycle | None = None, loss_generator: ManufacturingLossGenerator | None = None) None[source]

Apply dynamic pricing to all layers in the policy.

Updates layer rates based on frequency/severity calculations.

Parameters:
  • expected_revenue (float) – Expected annual revenue used for pricing

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator override

Raises:

ValueError – If pricing not enabled or pricer not configured

Return type:

None

classmethod create_with_pricing(layers: List[InsuranceLayer], loss_generator: ManufacturingLossGenerator, expected_revenue: float, market_cycle: MarketCycle | None = None, deductible: float = 0.0) InsurancePolicy[source]

Create insurance policy with dynamic pricing.

Factory method that creates a policy with pricing already applied.

Parameters:
  • layers (List[InsuranceLayer]) – Insurance layers for the policy

  • loss_generator (ManufacturingLossGenerator) – Loss generator for frequency/severity pricing

  • expected_revenue (float) – Expected annual revenue

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • deductible (float) – Self-insured retention before insurance applies

Return type:

InsurancePolicy

Returns:

InsurancePolicy with pricing applied

ergodic_insurance.insurance_accounting module

Insurance premium accounting module.

This module provides proper insurance premium accounting with prepaid asset tracking and systematic monthly amortization following GAAP principles.

Uses Decimal for all currency amounts to prevent floating-point precision errors in iterative calculations.

class InsuranceRecovery(amount: Decimal, claim_id: str, year_approved: int, amount_received: Decimal = <factory>) None[source]

Bases: object

Represents an insurance claim recovery receivable.

amount

Recovery amount approved by insurance (Decimal)

claim_id

Unique identifier for the claim

year_approved

Year when recovery was approved

amount_received

Amount received to date (Decimal)

amount: Decimal
claim_id: str
year_approved: int
amount_received: Decimal
__post_init__() None[source]

Convert amounts to Decimal if needed (runtime check for backwards compatibility).

Return type:

None

property outstanding: Decimal

Calculate outstanding receivable amount.

__deepcopy__(memo: Dict[int, Any]) InsuranceRecovery[source]

Create a deep copy of this insurance recovery.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

InsuranceRecovery

Returns:

Independent copy of this InsuranceRecovery

class InsuranceAccounting(prepaid_insurance: Decimal = <factory>, monthly_expense: Decimal = <factory>, annual_premium: Decimal = <factory>, months_in_period: int = 12, current_month: int = 0, recoveries: List[InsuranceRecovery] = <factory>) None[source]

Bases: object

Manages insurance premium accounting with proper GAAP treatment.

This class tracks annual insurance premium payments as prepaid assets and amortizes them monthly over the coverage period using straight-line amortization. It also tracks insurance claim recoveries separately from claim liabilities.

All currency amounts use Decimal for precise financial calculations.

prepaid_insurance

Current prepaid insurance asset balance (Decimal)

monthly_expense

Calculated monthly insurance expense (Decimal)

annual_premium

Total annual premium amount (Decimal)

months_in_period

Number of months in coverage period (default 12)

current_month

Current month in coverage period

recoveries

List of insurance recoveries receivable

prepaid_insurance: Decimal
monthly_expense: Decimal
annual_premium: Decimal
months_in_period: int = 12
current_month: int = 0
recoveries: List[InsuranceRecovery]
__post_init__() None[source]

Convert amounts to Decimal if needed (runtime check for backwards compatibility).

Return type:

None

__deepcopy__(memo: Dict[int, Any]) InsuranceAccounting[source]

Create a deep copy of this insurance accounting instance.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

InsuranceAccounting

Returns:

Independent copy of this InsuranceAccounting with all recoveries

pay_annual_premium(premium_amount: Decimal | float | int) Dict[str, Decimal][source]

Record annual premium payment at start of coverage period.

Parameters:

premium_amount (Union[Decimal, float, int]) – Annual premium amount to pay (converted to Decimal)

Returns:

  • cash_outflow: Cash paid for premium

  • prepaid_asset: Prepaid insurance asset created

  • monthly_expense: Calculated monthly expense

Return type:

Dictionary with transaction details as Decimal

record_monthly_expense() Dict[str, Decimal][source]

Amortize monthly insurance expense from prepaid asset.

Records one month of insurance expense by reducing the prepaid asset and recognizing the expense. Uses straight-line amortization over the coverage period.

Returns:

  • insurance_expense: Monthly expense recognized

  • prepaid_reduction: Reduction in prepaid asset

  • remaining_prepaid: Remaining prepaid balance

Return type:

Dictionary with transaction details as Decimal
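
Example

A minimal amortization sketch over the default 12-month period (amounts illustrative):

from decimal import Decimal

accounting = InsuranceAccounting()
accounting.pay_annual_premium(Decimal("1200000"))  # $1.2M premium paid up front

# Straight-line amortization: $100K of expense recognized each month
for _ in range(12):
    entry = accounting.record_monthly_expense()

entry["remaining_prepaid"]  # Decimal("0") once the period is fully amortized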

record_claim_recovery(recovery_amount: Decimal | float | int, claim_id: str | None = None, year: int = 0) Dict[str, Decimal][source]

Record insurance claim recovery as receivable.

Parameters:
  • recovery_amount (Union[Decimal, float, int]) – Amount approved for recovery from insurance (converted to Decimal)

  • claim_id (Optional[str]) – Optional unique identifier for the claim

  • year (int) – Year when recovery was approved

Returns:

  • insurance_receivable: New receivable amount

  • total_receivables: Total outstanding receivables

Return type:

Dictionary with recovery details as Decimal

receive_recovery_payment(amount: Decimal | float | int, claim_id: str | None = None) Dict[str, Decimal][source]

Record receipt of insurance recovery payment.

Parameters:
  • amount (Union[Decimal, float, int]) – Amount received from insurance (converted to Decimal)

  • claim_id (Optional[str]) – Optional claim ID to apply payment to

Returns:

  • cash_received: Cash inflow amount

  • receivable_reduction: Reduction in receivables

  • remaining_receivables: Total remaining receivables

Return type:

Dictionary with payment details as Decimal

get_total_receivables() Decimal[source]

Calculate total outstanding insurance receivables.

Return type:

Decimal

Returns:

Total amount of outstanding insurance receivables as Decimal

get_amortization_schedule() List[Dict[str, int | Decimal]][source]

Generate remaining amortization schedule.

Return type:

List[Dict[str, Union[int, Decimal]]]

Returns:

List of monthly amortization entries remaining (amounts as Decimal)

reset_for_new_period() None[source]

Reset accounting for a new coverage period.

Clears current period data while preserving recoveries.

Return type:

None

get_summary() Dict[str, int | Decimal][source]

Get summary of current insurance accounting status.

Return type:

Dict[str, Union[int, Decimal]]

Returns:

Dictionary with key accounting metrics (amounts as Decimal)

ergodic_insurance.insurance_pricing module

Insurance pricing module with market cycle support.

This module implements realistic insurance premium calculation based on frequency and severity distributions, replacing hardcoded premium rates in simulations. It supports market cycle adjustments and integrates with existing loss generators and insurance structures.

Example

Basic usage for pricing an insurance program:

from ergodic_insurance.insurance_pricing import InsurancePricer, MarketCycle
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator

# Initialize loss generator and pricer
loss_gen = ManufacturingLossGenerator()
pricer = InsurancePricer(
    loss_generator=loss_gen,
    loss_ratio=0.70,
    market_cycle=MarketCycle.NORMAL
)

# Price an insurance program
program = InsuranceProgram(layers=[...])
priced_program = pricer.price_insurance_program(
    program,
    expected_revenue=15_000_000
)

# Get total premium
total_premium = priced_program.calculate_annual_premium()

class MarketCycle(*values)[source]

Bases: Enum

Market cycle states affecting insurance pricing.

Each state corresponds to a target loss ratio that insurers use to price coverage. Lower loss ratios (hard markets) result in higher premiums.

HARD

Seller’s market with limited capacity (60% loss ratio)

NORMAL

Balanced market conditions (70% loss ratio)

SOFT

Buyer’s market with excess capacity (80% loss ratio)

HARD = 0.6
NORMAL = 0.7
SOFT = 0.8
class PricingParameters(loss_ratio: float = 0.7, expense_ratio: float = 0.25, profit_margin: float = 0.05, risk_loading: float = 0.1, confidence_level: float = 0.95, simulation_years: int = 10, min_premium: float = 1000.0, max_rate_on_line: float = 0.5) None[source]

Bases: object

Parameters for insurance pricing calculations.

loss_ratio

Target loss ratio for pricing (claims/premium)

expense_ratio

Operating expense ratio (default 0.25)

profit_margin

Target profit margin (default 0.05)

risk_loading

Additional loading for uncertainty (default 0.10)

confidence_level

Confidence level for pricing (default 0.95)

simulation_years

Years to simulate for pricing (default 10)

min_premium

Minimum premium floor (default 1000)

max_rate_on_line

Maximum rate on line cap (default 0.50)

loss_ratio: float = 0.7
expense_ratio: float = 0.25
profit_margin: float = 0.05
risk_loading: float = 0.1
confidence_level: float = 0.95
simulation_years: int = 10
min_premium: float = 1000.0
max_rate_on_line: float = 0.5
class LayerPricing(attachment_point: float, limit: float, expected_frequency: float, expected_severity: float, pure_premium: float, technical_premium: float, market_premium: float, rate_on_line: float, confidence_interval: Tuple[float, float]) None[source]

Bases: object

Pricing details for a single insurance layer.

attachment_point

Where coverage starts

limit

Maximum coverage amount

expected_frequency

Expected claims per year hitting this layer

expected_severity

Average severity of claims in this layer

pure_premium

Expected loss cost

technical_premium

Pure premium with expenses and profit

market_premium

Final premium after market adjustments

rate_on_line

Premium as percentage of limit

confidence_interval

(lower, upper) bounds at confidence level

attachment_point: float
limit: float
expected_frequency: float
expected_severity: float
pure_premium: float
technical_premium: float
market_premium: float
rate_on_line: float
confidence_interval: Tuple[float, float]
class InsurancePricer(loss_generator: 'ManufacturingLossGenerator' | None = None, loss_ratio: float | None = None, market_cycle: MarketCycle | None = None, parameters: PricingParameters | None = None, exposure: 'ExposureBase' | None = None, seed: int | None = None)[source]

Bases: object

Calculate insurance premiums based on loss distributions and market conditions.

This class provides methods to price individual layers and complete insurance programs using frequency/severity distributions from loss generators. It supports market cycle adjustments and maintains backward compatibility with fixed rates.

Parameters:
  • loss_generator (Optional[ManufacturingLossGenerator]) – Generator providing frequency/severity distributions

  • loss_ratio (Optional[float]) – Target loss ratio for pricing (overrides the market cycle default)

  • market_cycle (Optional[MarketCycle]) – Market cycle state (HARD, NORMAL, SOFT)

  • parameters (Optional[PricingParameters]) – Detailed pricing parameters

  • exposure (Optional[ExposureBase]) – Optional exposure base used to derive expected revenue

  • seed (Optional[int]) – Random seed for reproducible simulations

Example

Pricing with different market conditions:

# Hard market pricing (higher premiums)
hard_pricer = InsurancePricer(
    loss_generator=loss_gen,
    market_cycle=MarketCycle.HARD
)

# Soft market pricing (lower premiums)
soft_pricer = InsurancePricer(
    loss_generator=loss_gen,
    market_cycle=MarketCycle.SOFT
)
calculate_pure_premium(attachment_point: float, limit: float, expected_revenue: float, simulation_years: int | None = None) Tuple[float, Dict[str, Any]][source]

Calculate pure premium for a layer using frequency/severity.

Pure premium represents the expected loss cost without expenses, profit, or risk loading.

Parameters:
  • attachment_point (float) – Where layer coverage starts

  • limit (float) – Maximum coverage from this layer

  • expected_revenue (float) – Expected annual revenue for scaling

  • simulation_years (Optional[int]) – Years to simulate (default from parameters)

Return type:

Tuple[float, Dict[str, Any]]

Returns:

Tuple of (pure_premium, statistics_dict) with detailed metrics

Raises:

ValueError – If loss_generator is not configured

calculate_technical_premium(pure_premium: float, limit: float) float[source]

Convert pure premium to technical premium with risk loading.

Technical premium adds a risk loading for parameter uncertainty to the pure premium. Expense and profit margins are applied separately via the loss ratio in calculate_market_premium() to avoid double-counting.

Parameters:
  • pure_premium (float) – Expected loss cost

  • limit (float) – Layer limit for rate capping

Return type:

float

Returns:

Technical premium amount

calculate_market_premium(technical_premium: float, market_cycle: MarketCycle | None = None) float[source]

Apply market cycle adjustment to technical premium.

Market premium = Technical premium / Loss ratio

Parameters:
  • technical_premium (float) – Premium with expenses and loadings

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

Return type:

float

Returns:

Market-adjusted premium
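
Example

A worked instance of the formula above, using the MarketCycle loss ratios (a sketch; constructing a pricer without a loss generator is assumed valid for this calculation):

pricer = InsurancePricer(loss_ratio=0.70)

pricer.calculate_market_premium(700_000)                    # 700_000 / 0.70 = 1_000_000
pricer.calculate_market_premium(700_000, MarketCycle.HARD)  # 700_000 / 0.60 ≈ 1_166_667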

price_layer(attachment_point: float, limit: float, expected_revenue: float, market_cycle: MarketCycle | None = None) LayerPricing[source]

Price a single insurance layer.

Complete pricing process from pure premium through market adjustment.

Parameters:
  • attachment_point (float) – Where coverage starts

  • limit (float) – Maximum coverage amount

  • expected_revenue (float) – Expected annual revenue

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

Return type:

LayerPricing

Returns:

LayerPricing object with all pricing details

price_insurance_program(program: ergodic_insurance.insurance_program.InsuranceProgram, expected_revenue: float | None = None, time: float = 0.0, market_cycle: MarketCycle | None = None, update_program: bool = True) ergodic_insurance.insurance_program.InsuranceProgram[source]

Price a complete insurance program.

Prices all layers in the program and optionally updates their rates.

Parameters:
  • program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to price

  • expected_revenue (Optional[float]) – Expected annual revenue (optional if using exposure)

  • time (float) – Time for exposure calculation (default 0.0)

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • update_program (bool) – Whether to update program layer rates

Return type:

ergodic_insurance.insurance_program.InsuranceProgram

Returns:

Program with updated pricing (original or copy based on update_program)

price_insurance_policy(policy: InsurancePolicy, expected_revenue: float, market_cycle: MarketCycle | None = None, update_policy: bool = True) InsurancePolicy[source]

Price a basic insurance policy.

Prices all layers in the policy and optionally updates their rates.

Parameters:
  • policy (InsurancePolicy) – Insurance policy to price

  • expected_revenue (float) – Expected annual revenue

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • update_policy (bool) – Whether to update policy layer rates

Return type:

InsurancePolicy

Returns:

Policy with updated pricing (original or copy based on update_policy)

compare_market_cycles(attachment_point: float, limit: float, expected_revenue: float) Dict[str, LayerPricing][source]

Compare pricing across different market cycles.

Useful for understanding market impact on premiums.

Parameters:
  • attachment_point (float) – Where coverage starts

  • limit (float) – Maximum coverage amount

  • expected_revenue (float) – Expected annual revenue

Return type:

Dict[str, LayerPricing]

Returns:

Dictionary mapping market cycle names to pricing results

simulate_cycle_transition(program: ergodic_insurance.insurance_program.InsuranceProgram, expected_revenue: float, years: int = 10, transition_probs: Dict[str, float] | None = None) List[Dict[str, Any]][source]

Simulate insurance pricing over market cycle transitions.

Models how premiums change as markets transition between states.

Parameters:
  • program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to simulate

  • expected_revenue (float) – Expected annual revenue

  • years (int) – Number of years to simulate

  • transition_probs (Optional[Dict[str, float]]) – Market transition probabilities

Return type:

List[Dict[str, Any]]

Returns:

List of annual results with cycle states and premiums

static create_from_config(config: Dict[str, Any], loss_generator: ManufacturingLossGenerator | None = None) InsurancePricer[source]

Create pricer from configuration dictionary.

Parameters:
  • config (Dict[str, Any]) – Configuration dictionary with pricing settings

  • loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator for frequency/severity pricing

Return type:

InsurancePricer

Returns:

Configured InsurancePricer instance

ergodic_insurance.insurance_program module

Multi-layer insurance program with reinstatements and advanced features.

This module provides comprehensive insurance program management including multi-layer structures, reinstatements, attachment points, and accurate loss allocation for manufacturing risk transfer optimization.

class ReinstatementType(*values)[source]

Bases: Enum

Types of reinstatement provisions.

NONE = 'none'
PRO_RATA = 'pro_rata'
FULL = 'full'
FREE = 'free'
class OptimizationConstraints(max_total_premium: float | None = None, min_total_coverage: float | None = None, max_layers: int = 5, min_layers: int = 3, max_attachment_gap: float = 0.0, min_roe_improvement: float = 0.15, max_iterations: int = 1000, convergence_tolerance: float = 1e-06) None[source]

Bases: object

Constraints for insurance program optimization.

max_total_premium: float | None = None
min_total_coverage: float | None = None
max_layers: int = 5
min_layers: int = 3
max_attachment_gap: float = 0.0
min_roe_improvement: float = 0.15
max_iterations: int = 1000
convergence_tolerance: float = 1e-06
class OptimalStructure(layers: List[EnhancedInsuranceLayer], deductible: float, total_premium: float, total_coverage: float, ergodic_benefit: float, roe_improvement: float, optimization_metrics: Dict[str, Any], convergence_achieved: bool, iterations_used: int) None[source]

Bases: object

Result of insurance structure optimization.

layers: List[EnhancedInsuranceLayer]
deductible: float
total_premium: float
total_coverage: float
ergodic_benefit: float
roe_improvement: float
optimization_metrics: Dict[str, Any]
convergence_achieved: bool
iterations_used: int
class EnhancedInsuranceLayer(attachment_point: float, limit: float, base_premium_rate: float, reinstatements: int = 0, reinstatement_premium: float = 1.0, reinstatement_type: ReinstatementType = ReinstatementType.PRO_RATA, aggregate_limit: float | None = None, participation_rate: float = 1.0, limit_type: str = 'per-occurrence', per_occurrence_limit: float | None = None, premium_rate_exposure: ExposureBase | None = None) None[source]

Bases: object

Insurance layer with reinstatement support and advanced features.

Extends basic layer functionality with reinstatements, tracking, and more sophisticated premium calculations.

attachment_point: float
limit: float
base_premium_rate: float
reinstatements: int = 0
reinstatement_premium: float = 1.0
reinstatement_type: ReinstatementType = 'pro_rata'
aggregate_limit: float | None = None
participation_rate: float = 1.0
limit_type: str = 'per-occurrence'
per_occurrence_limit: float | None = None
premium_rate_exposure: ExposureBase | None = None
__post_init__()[source]

Validate layer parameters.

calculate_base_premium(time: float = 0.0) float[source]

Calculate base premium for this layer.

Parameters:

time (float) – Time in years for exposure calculation (default 0.0).

Return type:

float

Returns:

Base premium amount (rate × limit × exposure_multiplier).

calculate_reinstatement_premium(timing_factor: float = 1.0) float[source]

Calculate premium for a single reinstatement.

Parameters:

timing_factor (float) – Pro-rata factor based on policy period remaining (0-1).

Return type:

float

Returns:

Reinstatement premium amount.

can_respond(loss_amount: float) bool[source]

Check if this layer can respond to a loss.

Parameters:

loss_amount (float) – Total loss amount.

Return type:

bool

Returns:

True if loss exceeds attachment point.

calculate_layer_loss(total_loss: float) float[source]

Calculate the portion of loss hitting this layer.

Parameters:

total_loss (float) – Total loss amount.

Return type:

float

Returns:

Amount of loss allocated to this layer (before applying limits).

class LayerState(layer: EnhancedInsuranceLayer, used_limit: float = 0.0, reinstatements_used: int = 0, total_claims_paid: float = 0.0, reinstatement_premiums_paid: float = 0.0, is_exhausted: bool = False, aggregate_used: float = 0.0) None[source]

Bases: object

Tracks the current state of an insurance layer during simulation.

Maintains utilization, reinstatement count, and exhaustion status for accurate multi-claim processing.

layer: EnhancedInsuranceLayer
current_limit: float
used_limit: float = 0.0
reinstatements_used: int = 0
total_claims_paid: float = 0.0
reinstatement_premiums_paid: float = 0.0
is_exhausted: bool = False
aggregate_used: float = 0.0
__post_init__()[source]

Initialize current limit to layer’s base limit.

process_claim(claim_amount: float, timing_factor: float = 1.0) Tuple[float, float][source]

Process a claim against this layer.

Parameters:
  • claim_amount (float) – Amount of loss allocated to this layer.

  • timing_factor (float) – Pro-rata factor for reinstatement premium.

Return type:

Tuple[float, float]

Returns:

Tuple of (amount_paid, reinstatement_premium).

reset()[source]

Reset layer state for new policy period.

get_available_limit() float[source]

Get currently available limit.

Return type:

float

Returns:

Available limit for claims.

get_utilization_rate() float[source]

Calculate layer utilization rate.

Return type:

float

Returns:

Utilization as percentage of total available limit.
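
Example

A sketch of claim processing with one reinstatement; the restoration mechanics noted in the comments are assumptions based on the descriptions above:

layer = EnhancedInsuranceLayer(
    attachment_point=1_000_000,
    limit=5_000_000,
    base_premium_rate=0.02,
    reinstatements=1,
    reinstatement_premium=1.0,  # 100% of base premium, pro-rated by timing
)
state = LayerState(layer=layer)

# A full-limit loss against the layer; mid-year timing pro-rates the
# reinstatement premium
paid, reinstatement_premium = state.process_claim(5_000_000, timing_factor=0.5)

state.get_available_limit()   # limit assumed restored by the reinstatement
state.reset()                 # fresh state for the next policy period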

class InsuranceProgram(layers: List[EnhancedInsuranceLayer], deductible: float = 0.0, name: str = 'Manufacturing Insurance Program', pricing_enabled: bool = False, pricer: InsurancePricer | None = None)[source]

Bases: object

Comprehensive multi-layer insurance program manager.

Handles complex insurance structures with multiple layers, reinstatements, and sophisticated claim allocation.

pricing_results: List[Any]
calculate_annual_premium(time: float = 0.0) float[source]

Calculate total annual premium for the program.

Parameters:

time (float) – Time in years for exposure calculation (default 0.0).

Return type:

float

Returns:

Total base premium across all layers.

process_claim(claim_amount: float, timing_factor: float = 1.0) Dict[str, Any][source]

Process a single claim through the insurance structure.

Parameters:
  • claim_amount (float) – Total claim amount.

  • timing_factor (float) – Pro-rata factor for reinstatement premiums.

Return type:

Dict[str, Any]

Returns:

Dictionary with claim allocation details.

process_annual_claims(claims: List[float], claim_times: List[float] | None = None) Dict[str, Any][source]

Process all claims for a policy year.

Parameters:
  • claims (List[float]) – List of claim amounts.

  • claim_times (Optional[List[float]]) – Optional list of claim times (0-1 for year fraction).

Return type:

Dict[str, Any]

Returns:

Dictionary with annual summary statistics.
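
Example

A minimal annual-cycle sketch using the standard program factory:

program = InsuranceProgram.create_standard_manufacturing_program(deductible=250_000)

premium = program.calculate_annual_premium()
annual = program.process_annual_claims(
    claims=[400_000, 2_500_000, 12_000_000],
    claim_times=[0.2, 0.5, 0.9],  # fractions of the policy year
)

program.reset_annual()  # start the next policy year with fresh layer states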

reset_annual()[source]

Reset program state for new policy year.

get_program_summary() Dict[str, Any][source]

Get current program state summary.

Return type:

Dict[str, Any]

Returns:

Dictionary with program statistics.

get_total_coverage() float[source]

Calculate maximum possible coverage.

Return type:

float

Returns:

Maximum claim amount that can be covered.

calculate_ergodic_benefit(loss_history: List[List[float]], manufacturer_profile: Dict[str, Any] | None = None, time_horizon: int = 100) Dict[str, float][source]

Calculate ergodic benefit of insurance structure.

Quantifies time-average growth improvement from insurance coverage versus ensemble-average cost.

Parameters:
  • loss_history (List[List[float]]) – Historical loss data (list of annual loss lists).

  • manufacturer_profile (Optional[Dict[str, Any]]) – Company profile with assets, revenue, etc.

  • time_horizon (int) – Time horizon for ergodic calculation.

Return type:

Dict[str, float]

Returns:

Dictionary with ergodic metrics.

find_optimal_attachment_points(loss_data: List[float], num_layers: int = 4, percentiles: List[float] | None = None) List[float][source]

Find optimal attachment points based on loss frequency/severity.

Uses data-driven approach to minimize gaps while optimizing cost.

Parameters:
  • loss_data (List[float]) – Historical loss amounts.

  • num_layers (int) – Number of layers to optimize.

  • percentiles (Optional[List[float]]) – Optional percentiles for attachment points.

Return type:

List[float]

Returns:

List of optimal attachment points.

optimize_layer_widths(attachment_points: List[float], total_budget: float, capacity_constraints: Dict[str, float] | None = None, loss_data: List[float] | None = None) List[float][source]

Optimize layer widths given attachment points and constraints.

Parameters:
  • attachment_points (List[float]) – Fixed attachment points for layers.

  • total_budget (float) – Total premium budget.

  • capacity_constraints (Optional[Dict[str, float]]) – Optional max capacity per layer.

  • loss_data (Optional[List[float]]) – Optional loss data for severity analysis.

Return type:

List[float]

Returns:

List of optimal layer widths.

optimize_layer_structure(loss_data: List[List[float]], company_profile: Dict[str, Any] | None = None, constraints: OptimizationConstraints | None = None) OptimalStructure[source]

Optimize complete insurance layer structure.

Main optimization method that orchestrates layer count, attachment points, and widths to maximize ergodic benefit.

Parameters:
  • loss_data (List[List[float]]) – Historical loss data (list of annual loss lists)

  • company_profile (Optional[Dict[str, Any]]) – Company profile with assets, revenue, etc.

  • constraints (Optional[OptimizationConstraints]) – Optimization constraints

Return type:

OptimalStructure

Returns:

Optimal insurance structure.

classmethod from_yaml(config_path: str) ergodic_insurance.insurance_program.InsuranceProgram[source]

Load insurance program from YAML configuration.

Parameters:

config_path (str) – Path to YAML configuration file.

Return type:

ergodic_insurance.insurance_program.InsuranceProgram

Returns:

Configured InsuranceProgram instance.

classmethod create_standard_manufacturing_program(deductible: float = 250000) ergodic_insurance.insurance_program.InsuranceProgram[source]

Create standard manufacturing insurance program.

Parameters:

deductible (float) – Self-insured retention amount.

Return type:

ergodic_insurance.insurance_program.InsuranceProgram

Returns:

Standard manufacturing insurance program.

apply_pricing(expected_revenue: float, market_cycle: MarketCycle | None = None, loss_generator: ManufacturingLossGenerator | None = None) None[source]

Apply dynamic pricing to all layers in the program.

Updates layer premium rates based on frequency/severity calculations.

Parameters:
  • expected_revenue (float) – Expected annual revenue used for pricing

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator override

Raises:

ValueError – If pricing not enabled or pricer not configured

Return type:

None

get_pricing_summary() Dict[str, Any][source]

Get summary of current pricing.

Return type:

Dict[str, Any]

Returns:

Dictionary with pricing details for each layer

classmethod create_with_pricing(layers: List[EnhancedInsuranceLayer], loss_generator: ManufacturingLossGenerator, expected_revenue: float, market_cycle: MarketCycle | None = None, deductible: float = 0.0, name: str = 'Priced Insurance Program') ergodic_insurance.insurance_program.InsuranceProgram[source]

Create insurance program with dynamic pricing.

Factory method that creates a program with pricing already applied.

Parameters:
  • layers (List[EnhancedInsuranceLayer]) – Insurance layers for the program

  • loss_generator (ManufacturingLossGenerator) – Loss generator for frequency/severity pricing

  • expected_revenue (float) – Expected annual revenue

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • deductible (float) – Self-insured retention amount

  • name (str) – Program name

Return type:

ergodic_insurance.insurance_program.InsuranceProgram

Returns:

InsuranceProgram with pricing applied

class ProgramState(program: ergodic_insurance.insurance_program.InsuranceProgram, years_simulated: int = 0, total_claims: List[float] = <factory>, total_recoveries: List[float] = <factory>, total_premiums: List[float] = <factory>, annual_results: List[Dict] = <factory>) None[source]

Bases: object

Tracks multi-year insurance program state for simulations.

Maintains historical data and statistics across multiple policy periods for long-term analysis.

program: InsuranceProgram
years_simulated: int = 0
total_claims: List[float]
total_recoveries: List[float]
total_premiums: List[float]
annual_results: List[Dict]
simulate_year(annual_claims: List[float], claim_times: List[float] | None = None) Dict[str, Any][source]

Simulate one year of the insurance program.

Parameters:
  • annual_claims (List[float]) – List of claim amounts for the year

  • claim_times (Optional[List[float]]) – Optional list of claim times (0-1 for year fraction)

Return type:

Dict[str, Any]

Returns:

Annual results dictionary.

get_summary_statistics() Dict[str, Any][source]

Calculate summary statistics across all simulated years.

Return type:

Dict[str, Any]

Returns:

Dictionary with multi-year statistics.

ergodic_insurance.ledger module

Event-sourcing ledger for financial transactions.

This module implements a simple ledger system that tracks individual financial transactions using double-entry accounting. This provides transaction-level detail that is lost when using only point-in-time metrics snapshots.

The ledger enables:
  • Perfect reconciliation between financial statements

  • Direct method cash flow statement generation

  • Audit trail for all financial changes

  • Understanding of WHY balances changed (e.g., “was this AR change a write-off or a payment?”)

Example

Record a sale on credit:

ledger = Ledger()
ledger.record_double_entry(
    date=5,  # Year 5
    debit_account="accounts_receivable",
    credit_account="revenue",
    amount=1_000_000,
    description="Annual sales on credit"
)

Generate cash flows for a period:

operating_cash_flows = ledger.get_cash_flows(period=5)
print(f"Cash from customers: ${operating_cash_flows['cash_from_customers']:,.0f}")
class AccountType(*values)[source]

Bases: Enum

Classification of accounts per GAAP chart of accounts.

ASSET

Resources owned by the company (debit normal balance)

LIABILITY

Obligations owed to others (credit normal balance)

EQUITY

Owner’s residual interest (credit normal balance)

REVENUE

Income from operations (credit normal balance)

EXPENSE

Costs of operations (debit normal balance)

ASSET = 'asset'
LIABILITY = 'liability'
EQUITY = 'equity'
REVENUE = 'revenue'
EXPENSE = 'expense'
class AccountName(*values)[source]

Bases: Enum

Standard account names for the chart of accounts.

Using this enum instead of raw strings prevents typos that would silently result in zero balances on financial statements. See Issue #260.

Account names are grouped by their AccountType:

Assets (debit normal balance):

CASH, ACCOUNTS_RECEIVABLE, INVENTORY, PREPAID_INSURANCE, INSURANCE_RECEIVABLES, GROSS_PPE, ACCUMULATED_DEPRECIATION, RESTRICTED_CASH, COLLATERAL, DEFERRED_TAX_ASSET

Liabilities (credit normal balance):

ACCOUNTS_PAYABLE, ACCRUED_EXPENSES, ACCRUED_WAGES, ACCRUED_TAXES, ACCRUED_INTEREST, CLAIM_LIABILITIES, UNEARNED_REVENUE

Equity (credit normal balance):

RETAINED_EARNINGS, COMMON_STOCK, DIVIDENDS

Revenue (credit normal balance):

REVENUE, SALES_REVENUE, INTEREST_INCOME, INSURANCE_RECOVERY

Expenses (debit normal balance):

COST_OF_GOODS_SOLD, OPERATING_EXPENSES, DEPRECIATION_EXPENSE, INSURANCE_EXPENSE, INSURANCE_LOSS, TAX_EXPENSE, INTEREST_EXPENSE, COLLATERAL_EXPENSE, WAGE_EXPENSE

Example

Use AccountName instead of strings to prevent typos:

from ergodic_insurance.ledger import AccountName, Ledger

ledger = Ledger()
ledger.record_double_entry(
    date=5,
    debit_account=AccountName.ACCOUNTS_RECEIVABLE,  # Safe
    credit_account=AccountName.REVENUE,
    amount=1_000_000,
    transaction_type=TransactionType.REVENUE,
)

# This would fail fast (AttributeError at runtime, flagged by linters):
# debit_account=AccountName.ACCOUNT_RECEIVABLE  # Typo caught!
CASH = 'cash'
ACCOUNTS_RECEIVABLE = 'accounts_receivable'
INVENTORY = 'inventory'
PREPAID_INSURANCE = 'prepaid_insurance'
INSURANCE_RECEIVABLES = 'insurance_receivables'
GROSS_PPE = 'gross_ppe'
ACCUMULATED_DEPRECIATION = 'accumulated_depreciation'
RESTRICTED_CASH = 'restricted_cash'
COLLATERAL = 'collateral'
DEFERRED_TAX_ASSET = 'deferred_tax_asset'
ACCOUNTS_PAYABLE = 'accounts_payable'
ACCRUED_EXPENSES = 'accrued_expenses'
ACCRUED_WAGES = 'accrued_wages'
ACCRUED_TAXES = 'accrued_taxes'
ACCRUED_INTEREST = 'accrued_interest'
CLAIM_LIABILITIES = 'claim_liabilities'
UNEARNED_REVENUE = 'unearned_revenue'
RETAINED_EARNINGS = 'retained_earnings'
COMMON_STOCK = 'common_stock'
DIVIDENDS = 'dividends'
REVENUE = 'revenue'
SALES_REVENUE = 'sales_revenue'
INTEREST_INCOME = 'interest_income'
INSURANCE_RECOVERY = 'insurance_recovery'
COST_OF_GOODS_SOLD = 'cost_of_goods_sold'
OPERATING_EXPENSES = 'operating_expenses'
DEPRECIATION_EXPENSE = 'depreciation_expense'
INSURANCE_EXPENSE = 'insurance_expense'
INSURANCE_LOSS = 'insurance_loss'
TAX_EXPENSE = 'tax_expense'
INTEREST_EXPENSE = 'interest_expense'
COLLATERAL_EXPENSE = 'collateral_expense'
WAGE_EXPENSE = 'wage_expense'
class EntryType(*values)[source]

Bases: Enum

Type of ledger entry - debit or credit.

In double-entry accounting:
  • DEBIT increases assets and expenses, decreases liabilities and equity

  • CREDIT decreases assets and expenses, increases liabilities and equity

DEBIT = 'debit'
CREDIT = 'credit'
class TransactionType(*values)[source]

Bases: Enum

Classification of transaction for cash flow statement mapping.

These types enable automatic classification into operating, investing, or financing activities for cash flow statement generation.

REVENUE = 'revenue'
COLLECTION = 'collection'
EXPENSE = 'expense'
PAYMENT = 'payment'
WAGE_PAYMENT = 'wage_payment'
INTEREST_PAYMENT = 'interest_payment'
INVENTORY_PURCHASE = 'inventory_purchase'
INVENTORY_SALE = 'inventory_sale'
INSURANCE_PREMIUM = 'insurance_premium'
INSURANCE_CLAIM = 'insurance_claim'
TAX_ACCRUAL = 'tax_accrual'
TAX_PAYMENT = 'tax_payment'
DTA_ADJUSTMENT = 'dta_adjustment'
DEPRECIATION = 'depreciation'
WORKING_CAPITAL = 'working_capital'
CAPEX = 'capex'
ASSET_SALE = 'asset_sale'
DIVIDEND = 'dividend'
EQUITY_ISSUANCE = 'equity_issuance'
DEBT_ISSUANCE = 'debt_issuance'
DEBT_REPAYMENT = 'debt_repayment'
ADJUSTMENT = 'adjustment'
ACCRUAL = 'accrual'
WRITE_OFF = 'write_off'
REVALUATION = 'revaluation'
LIQUIDATION = 'liquidation'
TRANSFER = 'transfer'
RETAINED_EARNINGS = 'retained_earnings'
class LedgerEntry(date: int, account: str, amount: Decimal, entry_type: EntryType, transaction_type: TransactionType, description: str = '', reference_id: str = <factory>, timestamp: datetime = <factory>, month: int = 0) None[source]

Bases: object

A single entry in the accounting ledger.

Each entry represents one side of a double-entry transaction. A complete transaction always has matching debits and credits.

date

Period (year) when the transaction occurred

account

Name of the account affected (e.g., “cash”, “accounts_receivable”)

amount

Dollar amount of the entry (always positive)

entry_type

DEBIT or CREDIT

transaction_type

Classification for cash flow mapping

description

Human-readable description of the transaction

reference_id

UUID linking both sides of a double-entry transaction

timestamp

Actual datetime when entry was recorded (for audit)

month

Optional month within the year (0-11)

date: int
account: str
amount: Decimal
entry_type: EntryType
transaction_type: TransactionType
description: str = ''
reference_id: str
timestamp: datetime
month: int = 0
__post_init__() None[source]

Validate entry after initialization.

Return type:

None

property signed_amount: Decimal

Return amount with sign based on entry type.

For balance calculations:
  • Assets/Expenses: Debit positive, Credit negative

  • Liabilities/Equity/Revenue: Credit positive, Debit negative

This method returns the raw signed amount for debits (+) and credits (-). The Ledger class handles account type normalization.

__deepcopy__(memo: Dict[int, Any]) LedgerEntry[source]

Create a deep copy of this ledger entry.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

LedgerEntry

Returns:

Independent copy of this LedgerEntry

class Ledger(strict_validation: bool = True) None[source]

Bases: object

Double-entry accounting ledger for event sourcing.

The Ledger tracks all financial transactions at the entry level, enabling perfect reconciliation and direct method cash flow generation.

entries

List of all ledger entries

chart_of_accounts

Mapping of account names to their types

Thread Safety:

This class is not thread-safe. Concurrent reads are safe, but concurrent writes (record, record_double_entry, prune_entries, clear) or a mix of reads and writes require external synchronization (e.g. a threading.Lock). Each simulation trial should use its own Ledger instance.

entries: List[LedgerEntry]
chart_of_accounts: Dict[str, AccountType]
record(entry: LedgerEntry) None[source]

Record a single ledger entry.

Parameters:

entry (LedgerEntry) – The LedgerEntry to add to the ledger

Raises:

ValueError – If strict_validation is True and the account name is not in the chart of accounts.

Return type:

None

Note

Prefer using record_double_entry() for complete transactions to ensure debits always equal credits.

record_double_entry(date: int, debit_account: AccountName | str, credit_account: AccountName | str, amount: Decimal | float | int, transaction_type: TransactionType, description: str = '', month: int = 0) Tuple[LedgerEntry | None, LedgerEntry | None][source]

Record a complete double-entry transaction.

Creates matching debit and credit entries with the same reference_id.

Parameters:
  • date (int) – Period (year) of the transaction

  • debit_account (Union[AccountName, str]) – Account to debit (increase assets/expenses). Can be AccountName enum (recommended) or string.

  • credit_account (Union[AccountName, str]) – Account to credit (increase liabilities/equity/revenue). Can be AccountName enum (recommended) or string.

  • amount (Union[Decimal, float, int]) – Dollar amount of the transaction (converted to Decimal)

  • transaction_type (TransactionType) – Classification for cash flow mapping

  • description (str) – Human-readable description

  • month (int) – Optional month within the year (0-11)

Return type:

Tuple[Optional[LedgerEntry], Optional[LedgerEntry]]

Returns:

Tuple of (debit_entry, credit_entry), or (None, None) for zero-amount transactions (Issue #315).

Raises:

ValueError – If amount is negative, or if account names are invalid (when strict_validation is True)

Example

Record a cash sale using AccountName enum (recommended):

debit, credit = ledger.record_double_entry(
    date=5,
    debit_account=AccountName.CASH,
    credit_account=AccountName.REVENUE,
    amount=500_000,
    transaction_type=TransactionType.REVENUE,
    description="Cash sales"
)

String account names still work but are validated:

debit, credit = ledger.record_double_entry(
    date=5,
    debit_account="cash",  # Validated against chart
    credit_account="revenue",
    amount=500_000,
    transaction_type=TransactionType.REVENUE,
)
get_balance(account: AccountName | str, as_of_date: int | None = None) Decimal[source]

Calculate the balance for an account.

Parameters:
  • account (Union[AccountName, str]) – Name of the account (AccountName enum recommended, string accepted)

  • as_of_date (Optional[int]) – Optional period to calculate balance as of (inclusive). When None, returns from cache in O(1). When specified, iterates through entries (O(N) for historical queries).

Return type:

Decimal

Returns:

Current balance of the account as Decimal, properly signed based on account type:
  • Assets/Expenses: positive = debit balance

  • Liabilities/Equity/Revenue: positive = credit balance

Example

Get current cash balance:

cash = ledger.get_balance(AccountName.CASH)
print(f"Cash: ${cash:,.0f}")

# String also works (validated)
cash = ledger.get_balance("cash")
get_period_change(account: AccountName | str, period: int, month: int | None = None) Decimal[source]

Calculate the change in account balance for a specific period.

Parameters:
  • account (Union[AccountName, str]) – Name of the account (AccountName enum recommended, string accepted)

  • period (int) – Year/period to calculate change for

  • month (Optional[int]) – Optional specific month within the period

Return type:

Decimal

Returns:

Net change in account balance during the period as Decimal
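
Example

Diagnose what moved during a year:

# Net change in claim liabilities during year 5
delta = ledger.get_period_change(AccountName.CLAIM_LIABILITIES, period=5)
print(f"Claim liabilities changed by ${delta:,.0f}")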

get_entries(account: AccountName | str | None = None, start_date: int | None = None, end_date: int | None = None, transaction_type: TransactionType | None = None) List[LedgerEntry][source]

Query ledger entries with optional filters.

Parameters:
  • account (Union[AccountName, str, None]) – Filter by account name (AccountName enum or string)

  • start_date (Optional[int]) – Filter by minimum period (inclusive)

  • end_date (Optional[int]) – Filter by maximum period (inclusive)

  • transaction_type (Optional[TransactionType]) – Filter by transaction classification

Return type:

List[LedgerEntry]

Returns:

List of matching LedgerEntry objects

Example

Get all cash entries for year 5:

cash_entries = ledger.get_entries(
    account=AccountName.CASH,
    start_date=5,
    end_date=5
)
sum_by_transaction_type(transaction_type: TransactionType, period: int, account: AccountName | str | None = None, entry_type: EntryType | None = None) Decimal[source]

Sum entries by transaction type for cash flow extraction.

Parameters:
  • transaction_type (TransactionType) – Transaction classification to sum

  • period (int) – Year/period to sum over

  • account (Union[AccountName, str, None]) – Optional account filter

  • entry_type (Optional[EntryType]) – Optional filter for debit or credit entries

Return type:

Decimal

Returns:

Sum of matching entries as Decimal (absolute value)

Example

Get total collections for year 5:

collections = ledger.sum_by_transaction_type(
    transaction_type=TransactionType.COLLECTION,
    period=5,
    account=AccountName.CASH,
    entry_type=EntryType.DEBIT
)
get_cash_flows(period: int) Dict[str, Decimal][source]

Extract cash flows for direct method cash flow statement.

Sums all cash-affecting transactions by category for the specified period.

Parameters:

period (int) – Year/period to extract cash flows for

Returns:

  • cash_from_customers: Collections on AR + cash sales

  • cash_to_suppliers: Inventory + expense payments

  • cash_for_insurance: Premium payments

  • cash_for_claim_losses: Claim-related asset reduction payments

  • cash_for_taxes: Tax payments

  • cash_for_wages: Wage payments

  • cash_for_interest: Interest payments

  • capital_expenditures: PP&E purchases

  • dividends_paid: Dividend payments

  • net_operating: Total operating cash flow

  • net_investing: Total investing cash flow

  • net_financing: Total financing cash flow

Return type:

Dictionary with cash flow categories as Decimal values

Example

Generate direct method cash flow:

flows = ledger.get_cash_flows(period=5)
print(f"Operating: ${flows['net_operating']:,.0f}")
print(f"Investing: ${flows['net_investing']:,.0f}")
print(f"Financing: ${flows['net_financing']:,.0f}")
verify_balance() Tuple[bool, Decimal][source]

Verify that debits equal credits (accounting equation).

Return type:

Tuple[bool, Decimal]

Returns:

Tuple of (is_balanced, difference):
  • is_balanced: True if debits exactly equal credits (using Decimal precision)

  • difference: Total debits minus total credits as Decimal

Example

Check ledger integrity:

balanced, diff = ledger.verify_balance()
if not balanced:
    print(f"Warning: Ledger out of balance by ${diff:,.2f}")
get_trial_balance(as_of_date: int | None = None) Dict[str, Decimal][source]

Generate a trial balance showing all account balances.

When as_of_date is None, reads directly from the O(1) balance cache. When a date is specified, performs a single O(N) pass over all entries instead of the previous O(N*M) approach (Issue #315).

Parameters:

as_of_date (Optional[int]) – Optional period to generate balance as of

Return type:

Dict[str, Decimal]

Returns:

Dictionary mapping account names to their balances as Decimal

Example

Review all balances:

trial = ledger.get_trial_balance()
for account, balance in trial.items():
    print(f"{account}: ${balance:,.0f}")
prune_entries(before_date: int) int[source]

Discard entries older than before_date to bound memory (Issue #315).

Before discarding, a per-account balance snapshot is computed so that get_balance(account, as_of_date) and get_trial_balance still return correct values for dates >= the prune point.

Entries with date < before_date are removed. The current balance cache (_balances) is unaffected because it already holds the cumulative totals.

Parameters:

before_date (int) – Entries with date strictly less than this value are pruned.

Return type:

int

Returns:

Number of entries removed.

Note

After pruning, historical queries for dates prior to before_date will reflect the snapshot balance at the prune boundary, not the true historical balance at that earlier date.
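
Example

Bound memory during a long simulation (a minimal sketch; assumes the ledger holds entries from earlier years):

removed = ledger.prune_entries(before_date=10)
print(f"Pruned {removed} entries; balances from year 10 onward remain exact")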

clear() None[source]

Clear all entries from the ledger.

Useful for resetting the ledger during simulation reset. Also resets the balance cache (Issue #259) and pruning state (Issue #315).

Return type:

None

__len__() int[source]

Return the number of entries in the ledger.

Return type:

int

__repr__() str[source]

Return string representation of the ledger.

Return type:

str

__deepcopy__(memo: Dict[int, Any]) Ledger[source]

Create a deep copy of this ledger.

Preserves all entries and the balance cache for O(1) balance queries.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

Ledger

Returns:

Independent copy of this Ledger with all entries and cached balances

ergodic_insurance.loss_distributions module

Enhanced loss distributions for manufacturing risk modeling.

This module provides parametric loss distributions for realistic insurance claim modeling, including attritional losses, large losses, and catastrophic events with revenue-dependent frequency scaling.

class LossDistribution(seed: int | SeedSequence | None = None)[source]

Bases: ABC

Abstract base class for loss severity distributions.

Provides a common interface for generating loss amounts and calculating statistical properties of the distribution.

abstractmethod generate_severity(n_samples: int) ndarray[source]

Generate loss severity samples.

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of loss amounts.

abstractmethod expected_value() float[source]

Calculate the analytical expected value of the distribution.

Return type:

float

Returns:

Expected value when analytically available, otherwise estimated.

reset_seed(seed) None[source]

Reset the random seed for reproducibility.

Parameters:

seed – New random seed to use (int or SeedSequence).

Return type:

None

class LognormalLoss(mean: float | None = None, cv: float | None = None, mu: float | None = None, sigma: float | None = None, seed: int | None = None)[source]

Bases: LossDistribution

Lognormal loss severity distribution.

Common for attritional and large losses in manufacturing. Parameters can be specified as either (mean, cv) or (mu, sigma).

generate_severity(n_samples: int) ndarray[source]

Generate lognormal loss samples.

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of loss amounts.

expected_value() float[source]

Calculate expected value of lognormal distribution.

Return type:

float

Returns:

Analytical expected value.
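
Example

Sample severities from a (mean, cv) parameterization (a minimal sketch; values illustrative):

dist = LognormalLoss(mean=25_000, cv=1.5, seed=42)
samples = dist.generate_severity(10_000)
print(f"Empirical mean: ${samples.mean():,.0f} vs analytical: ${dist.expected_value():,.0f}")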

class ParetoLoss(alpha: float, xm: float, seed: int | None = None)[source]

Bases: LossDistribution

Pareto loss severity distribution for catastrophic events.

Heavy-tailed distribution suitable for modeling extreme losses with potentially unbounded severity.

generate_severity(n_samples: int) ndarray[source]

Generate Pareto loss samples.

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of loss amounts.

expected_value() float[source]

Calculate expected value of Pareto distribution.

Return type:

float

Returns:

Analytical expected value if it exists (alpha > 1), else inf.
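
Example

Generate heavy-tailed catastrophic severities (a minimal sketch; values illustrative, and alpha > 1 keeps the mean finite):

dist = ParetoLoss(alpha=2.5, xm=1_000_000, seed=42)
losses = dist.generate_severity(100)
print(f"Expected severity: ${dist.expected_value():,.0f}")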

class GeneralizedParetoLoss(severity_shape: float, severity_scale: float, seed: int | SeedSequence | None = None)[source]

Bases: LossDistribution

Generalized Pareto distribution for modeling excesses over threshold.

Implements the GPD using scipy.stats.genpareto for Peaks Over Threshold (POT) extreme value modeling. According to the Pickands-Balkema-de Haan theorem, excesses over a sufficiently high threshold asymptotically follow a GPD.

The distribution models: P(X - u | X > u) ~ GPD(ξ, β)

Shape parameter interpretation:
  • ξ < 0: Bounded distribution (Type III - short-tailed)

  • ξ = 0: Exponential distribution (Type I - medium-tailed)

  • ξ > 0: Pareto-type distribution (Type II - heavy-tailed)

generate_severity(n_samples: int) ndarray[source]

Generate GPD samples (excesses above threshold).

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of excess amounts above threshold.

expected_value() float[source]

Calculate expected excess above threshold.

Return type:

float

Returns:

Analytical expected value if it exists (ξ < 1), else inf. E[X - u | X > u] = β / (1 - ξ) for ξ < 1
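
Example

Model excesses over a high threshold (a minimal sketch; ξ = 0.3 gives a heavy but integrable tail):

gpd = GeneralizedParetoLoss(severity_shape=0.3, severity_scale=500_000, seed=42)
excesses = gpd.generate_severity(1_000)
print(f"Mean excess: ${gpd.expected_value():,.0f}")  # 500_000 / (1 - 0.3)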

class LossEvent(amount: float, time: float = 0.0, loss_type: str = 'operational', timestamp: float | None = None, event_type: str | None = None, description: str | None = None) None[source]

Bases: object

Represents a single loss event with timing and amount.

amount: float
time: float = 0.0
loss_type: str = 'operational'
timestamp: float | None = None
event_type: str | None = None
description: str | None = None
__post_init__()[source]

Handle alternative parameter names.

__le__(other)[source]

Support ordering by amount.

__lt__(other)[source]

Support ordering by amount.

class LossData(timestamps: ndarray = <factory>, loss_amounts: ndarray = <factory>, loss_types: List[str] = <factory>, claim_ids: List[str] = <factory>, development_factors: ndarray | None = None, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Unified loss data structure for cross-module compatibility.

This dataclass provides a standardized interface for loss data that can be used consistently across all modules in the framework.

timestamps: ndarray
loss_amounts: ndarray
loss_types: List[str]
claim_ids: List[str]
development_factors: ndarray | None = None
metadata: Dict[str, Any]
validate() bool[source]

Validate data consistency.

Return type:

bool

Returns:

True if data is valid and consistent, False otherwise.

to_ergodic_format() ergodic_insurance.ergodic_analyzer.ErgodicData[source]

Convert to ergodic analyzer format.

Return type:

ergodic_insurance.ergodic_analyzer.ErgodicData

Returns:

Data formatted for ergodic analysis.

apply_insurance(program: ergodic_insurance.insurance_program.InsuranceProgram) LossData[source]

Apply insurance recoveries to losses.

Parameters:

program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to apply.

Return type:

LossData

Returns:

New LossData with insurance recoveries applied.

classmethod from_loss_events(events: List[LossEvent]) LossData[source]

Create LossData from a list of LossEvent objects.

Parameters:

events (List[LossEvent]) – List of LossEvent objects.

Return type:

LossData

Returns:

LossData instance with consolidated event data.

to_loss_events() List[LossEvent][source]

Convert LossData back to LossEvent list.

Return type:

List[LossEvent]

Returns:

List of LossEvent objects.
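
Example

Round-trip between event and array representations (a minimal sketch; values illustrative):

events = [LossEvent(amount=100_000, time=0.5), LossEvent(amount=250_000, time=1.2)]
data = LossData.from_loss_events(events)
assert data.validate()
recovered = data.to_loss_events()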

get_annual_aggregates(years: int) Dict[int, float][source]

Aggregate losses by year.

Parameters:

years (int) – Number of years to aggregate over.

Return type:

Dict[int, float]

Returns:

Dictionary mapping year to total loss amount.

calculate_statistics() Dict[str, float][source]

Calculate comprehensive statistics for the loss data.

Return type:

Dict[str, float]

Returns:

Dictionary of statistical metrics.

class FrequencyGenerator(base_frequency: float, revenue_scaling_exponent: float = 0.0, reference_revenue: float = 10000000, seed: int | None = None)[source]

Bases: object

Base class for generating loss event frequencies.

Supports revenue-dependent scaling of claim frequencies.

reseed(seed) None[source]

Re-seed the random state.

Parameters:

seed – New random seed (int or SeedSequence).

Return type:

None

get_scaled_frequency(revenue: float) float[source]

Calculate revenue-scaled frequency.

Parameters:

revenue (float) – Current revenue level (can be float or Decimal).

Return type:

float

Returns:

Scaled frequency parameter.

generate_event_times(duration: float, revenue: float) ndarray[source]

Generate event times using Poisson process.

Parameters:
  • duration (float) – Time period in years.

  • revenue (float) – Revenue level for frequency scaling.

Return type:

ndarray

Returns:

Array of event times.
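
Example

Revenue-scaled event generation (a minimal sketch; values illustrative, with a 0.5 exponent implying sub-linear frequency growth):

gen = FrequencyGenerator(
    base_frequency=5.0,
    revenue_scaling_exponent=0.5,
    reference_revenue=10_000_000,
    seed=42
)
print(f"Scaled frequency: {gen.get_scaled_frequency(40_000_000):.1f}")
times = gen.generate_event_times(duration=1.0, revenue=40_000_000)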

class AttritionalLossGenerator(base_frequency: float = 5.0, severity_mean: float = 25000, severity_cv: float = 1.5, revenue_scaling_exponent: float = 0.5, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Generator for high-frequency, low-severity attritional losses.

Typical for widget manufacturing: worker injuries, quality defects, minor property damage.

reseed(seed) None[source]

Re-seed all internal random states.

Parameters:

seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.

Return type:

None

generate_losses(duration: float, revenue: float) List[LossEvent][source]

Generate attritional loss events.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level.

Return type:

List[LossEvent]

Returns:

List of loss events.
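
Example

Generate one year of attritional losses (a minimal sketch; parameter values illustrative):

gen = AttritionalLossGenerator(base_frequency=5.0, severity_mean=25_000, seed=42)
events = gen.generate_losses(duration=1.0, revenue=15_000_000)
print(f"{len(events)} events totaling ${sum(e.amount for e in events):,.0f}")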

class LargeLossGenerator(base_frequency: float = 0.3, severity_mean: float = 2000000, severity_cv: float = 2.0, revenue_scaling_exponent: float = 0.7, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Generator for medium-frequency, medium-severity large losses.

Typical for manufacturing: product recalls, major equipment failures, litigation settlements.

reseed(seed) None[source]

Re-seed all internal random states.

Parameters:

seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.

Return type:

None

generate_losses(duration: float, revenue: float) List[LossEvent][source]

Generate large loss events.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level.

Return type:

List[LossEvent]

Returns:

List of loss events.

class CatastrophicLossGenerator(base_frequency: float = 0.03, severity_alpha: float = 2.5, severity_xm: float = 1000000, revenue_scaling_exponent: float = 0.0, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Generator for low-frequency, high-severity catastrophic losses.

Uses Pareto distribution for heavy-tailed severity modeling. Examples: major equipment failure, facility damage, environmental disasters.

reseed(seed) None[source]

Re-seed all internal random states.

Parameters:

seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.

Return type:

None

generate_losses(duration: float, revenue: float = 10000000) List[LossEvent][source]

Generate catastrophic loss events.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level (not used for scaling).

Return type:

List[LossEvent]

Returns:

List of loss events.

class ManufacturingLossGenerator(attritional_params: dict | None = None, large_params: dict | None = None, catastrophic_params: dict | None = None, extreme_params: dict | None = None, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Composite loss generator for widget manufacturing risks.

Combines attritional, large, and catastrophic loss generators to provide comprehensive risk modeling.

reseed(seed: int) None[source]

Re-seed all internal random states using SeedSequence.

Derives independent child seeds for each sub-generator so that parallel workers produce statistically distinct loss sequences.

Parameters:

seed (int) – New random seed.

Return type:

None

classmethod create_simple(frequency: float = 0.1, severity_mean: float = 5000000, severity_std: float = 2000000, seed: int | None = None) ManufacturingLossGenerator[source]

Create a simple loss generator (migration helper from ClaimGenerator).

This factory method provides a simplified interface similar to ClaimGenerator, making migration easier. It creates a generator with mostly attritional losses and minimal catastrophic risk.

Parameters:
  • frequency (float) – Annual frequency of losses (Poisson lambda).

  • severity_mean (float) – Mean loss amount in dollars.

  • severity_std (float) – Standard deviation of loss amount.

  • seed (Optional[int]) – Random seed for reproducibility.

Return type:

ManufacturingLossGenerator

Returns:

ManufacturingLossGenerator configured for simple use case.

Examples

Simple usage (equivalent to ClaimGenerator):

generator = ManufacturingLossGenerator.create_simple(
    frequency=0.1,
    severity_mean=5_000_000,
    severity_std=2_000_000,
    seed=42
)
losses, stats = generator.generate_losses(duration=10, revenue=10_000_000)

Accessing loss amounts:

total_loss = sum(loss.amount for loss in losses)
print(f"Total losses: ${total_loss:,.0f}")
print(f"Number of events: {stats['total_losses']}")

Note

For advanced features (multiple loss types, extreme value modeling), use the standard __init__ method with explicit parameters.

See also

Migration guide: docs/migration_guides/claim_generator_migration.md

generate_losses(duration: float, revenue: float, include_catastrophic: bool = True, time: float = 0.0) Tuple[List[LossEvent], Dict[str, Any]][source]

Generate all types of losses for manufacturing operations.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level.

  • include_catastrophic (bool) – Whether to include catastrophic events.

  • time (float) – Current time for exposure calculation (default 0.0).

Return type:

Tuple[List[LossEvent], Dict[str, Any]]

Returns:

Tuple of (all_losses, statistics_dict).

validate_distributions(n_simulations: int = 10000, duration: float = 1.0, revenue: float = 10000000) Dict[str, Dict[str, float]][source]

Validate distribution properties through simulation.

Parameters:
  • n_simulations (int) – Number of simulations to run.

  • duration (float) – Duration of each simulation.

  • revenue (float) – Revenue level for testing.

Return type:

Dict[str, Dict[str, float]]

Returns:

Dictionary of validation statistics.

perform_statistical_tests(samples: ndarray, distribution_type: str, params: Dict[str, Any], significance_level: float = 0.05) Dict[str, Any][source]

Perform statistical tests to validate distribution fit.

Parameters:
  • samples (ndarray) – Generated samples to test.

  • distribution_type (str) – Type of distribution (‘lognormal’ or ‘pareto’).

  • params (Dict[str, Any]) – Distribution parameters.

  • significance_level (float) – Significance level for tests.

Return type:

Dict[str, Any]

Returns:

Dictionary with test results.

ergodic_insurance.manufacturer module

Widget manufacturer financial model implementation.

This module implements the core financial model for a widget manufacturing company, providing comprehensive balance sheet management, insurance claim processing, and stochastic modeling capabilities. It serves as the central component of the ergodic insurance optimization framework.

The manufacturer model simulates realistic business operations including:
  • Asset-based revenue generation with configurable turnover ratios

  • Operating income calculations with industry-standard margins

  • Multi-layer insurance claim processing with deductibles and limits

  • Letter of credit collateral management for claim liabilities

  • Actuarial claim payment schedules over multiple years

  • Dynamic balance sheet evolution with growth and volatility

  • Integration with sophisticated stochastic processes

  • Comprehensive financial metrics and ratio analysis

Key Components:
  • WidgetManufacturer: Main financial model class

  • ClaimLiability: Actuarial claim payment tracking (re-exported)

  • TaxHandler: Tax calculation and accrual (re-exported)

Examples

Basic manufacturer setup and simulation:

from ergodic_insurance import ManufacturerConfig, WidgetManufacturer

config = ManufacturerConfig(
    initial_assets=10_000_000,
    asset_turnover_ratio=0.8,
    base_operating_margin=0.08,
    tax_rate=0.25,
    retention_ratio=0.7
)

manufacturer = WidgetManufacturer(config)

metrics = manufacturer.step(
    letter_of_credit_rate=0.015,
    growth_rate=0.05
)

print(f"ROE: {metrics['roe']:.1%}")
class WidgetManufacturer(config: ManufacturerConfig, stochastic_process: StochasticProcess | None = None)[source]

Bases: BalanceSheetMixin, ClaimProcessingMixin, IncomeCalculationMixin, SolvencyMixin, MetricsCalculationMixin

Financial model for a widget manufacturing company.

This class models the complete financial operations of a manufacturing company including revenue generation, claim processing, collateral management, and balance sheet evolution over time.

The manufacturer maintains a balance sheet with assets, equity, and tracks insurance-related collateral. It can process insurance claims with multi-year payment schedules and manages working capital requirements.

config

Manufacturing configuration parameters

stochastic_process

Optional stochastic process for revenue volatility

assets

Current total assets

collateral

Letter of credit collateral for insurance claims

restricted_assets

Assets restricted as collateral

equity

Current equity (assets minus liabilities)

year

Current simulation year

outstanding_liabilities

List of active claim liabilities

metrics_history

Historical metrics for each simulation period

bankruptcy

Whether the company has gone bankrupt

bankruptcy_year

Year when bankruptcy occurred (if applicable)

Example

Running a multi-year simulation:

manufacturer = WidgetManufacturer(config)

for year in range(10):
    losses, _ = loss_generator.generate_losses(duration=1, revenue=revenue)
    for loss in losses:
        manufacturer.process_insurance_claim(
            loss.amount, deductible, limit
        )
    metrics = manufacturer.step(letter_of_credit_rate=0.015)
    print(f"Year {year}: ROE={metrics['roe']:.1%}")
property current_revenue: Decimal

Get current revenue based on current assets and turnover ratio.

property current_assets: Decimal

Get current total assets.

property current_equity: Decimal

Get current equity value.

property base_revenue: Decimal

Get base (initial) revenue for comparison.

property base_assets: Decimal

Get base (initial) assets for comparison.

property base_equity: Decimal

Get base (initial) equity for comparison.

__deepcopy__(memo: Dict[int, Any]) WidgetManufacturer[source]

Create a deep copy preserving all state for Monte Carlo forking.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

WidgetManufacturer

Returns:

Independent copy of this WidgetManufacturer with all state preserved

__getstate__() Dict[str, Any][source]

Get state for pickling (required for Windows multiprocessing).

Return type:

Dict[str, Any]

Returns:

Dictionary of all instance attributes

__setstate__(state: Dict[str, Any]) None[source]

Restore state from pickle (required for Windows multiprocessing).

Parameters:

state (Dict[str, Any]) – Dictionary of instance attributes to restore

Return type:

None

process_accrued_payments(time_resolution: str = 'annual', max_payable: Decimal | float | None = None) Decimal[source]

Process due accrual payments for the current period.

Parameters:
  • time_resolution (str) – “annual” or “monthly” for determining current period

  • max_payable (Union[Decimal, float, None]) – Optional maximum amount that can be paid.

Return type:

Decimal

Returns:

Total cash payments made for accruals in this period
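
Example

Pay due accruals under a cash ceiling (a minimal sketch; the ceiling value is illustrative):

paid = manufacturer.process_accrued_payments(
    time_resolution="annual",
    max_payable=500_000
)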

record_wage_accrual(amount: float, payment_schedule: PaymentSchedule = PaymentSchedule.IMMEDIATE) None[source]

Record accrued wages to be paid later.

Parameters:
  • amount (float) – Wage amount to accrue

  • payment_schedule (PaymentSchedule) – When wages will be paid

Return type:

None

step(letter_of_credit_rate: Decimal | float = 0.015, growth_rate: Decimal | float = 0.0, time_resolution: str = 'annual', apply_stochastic: bool = False) Dict[str, Decimal | float | int | bool][source]

Execute one time step of the financial model simulation.

Parameters:
  • letter_of_credit_rate (Union[Decimal, float]) – Annual interest rate for letter of credit.

  • growth_rate (Union[Decimal, float]) – Revenue growth rate for the period.

  • time_resolution (str) – “annual” or “monthly”.

  • apply_stochastic (bool) – Whether to apply stochastic shocks.

Returns:

Comprehensive financial metrics dictionary.

Return type:

Dict[str, Union[Decimal, float, int, bool]]

reset() None[source]

Reset the manufacturer to initial state for new simulation.

This method restores all financial parameters to their configured initial values and clears historical data, enabling fresh simulation runs from the same starting point.

Bug Fixes (Issue #305):
  • FIX 1: Uses config.ppe_ratio directly instead of recalculating from margins

  • FIX 2: Initializes AR/Inventory to steady-state (matching __init__) instead of zero

Return type:

None

copy() WidgetManufacturer[source]

Create a deep copy of the manufacturer for parallel simulations.

Returns:

A new manufacturer instance with same configuration.

Return type:

WidgetManufacturer

classmethod create_fresh(config: ManufacturerConfig, stochastic_process: StochasticProcess | None = None) WidgetManufacturer[source]

Create a fresh manufacturer from configuration alone.

Factory method that avoids copy.deepcopy by constructing a new instance directly from its config. Use this in hot loops (e.g. Monte Carlo workers) where each simulation needs a clean initial state.

Parameters:
  • config (ManufacturerConfig) – Manufacturing configuration parameters.

  • stochastic_process (Optional[StochasticProcess]) – Optional stochastic process instance. The caller is responsible for ensuring independence (e.g. by deep-copying the process once before passing it in).

Return type:

WidgetManufacturer

Returns:

A new WidgetManufacturer in its initial state.
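
Example

Fresh instances in a Monte Carlo hot loop (a minimal sketch; assumes config is a ManufacturerConfig and the per-simulation logic is elided):

for sim_id in range(1_000):
    m = WidgetManufacturer.create_fresh(config)
    metrics = m.step(letter_of_credit_rate=0.015)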

ergodic_insurance.monte_carlo module

High-performance Monte Carlo simulation engine for insurance optimization.

class SimulationConfig(n_simulations: int = 100000, n_years: int = 10, n_chains: int = 4, parallel: bool = True, n_workers: int | None = None, chunk_size: int = 10000, use_float32: bool = False, cache_results: bool = True, checkpoint_interval: int | None = None, progress_bar: bool = True, seed: int | None = None, use_enhanced_parallel: bool = True, monitor_performance: bool = True, adaptive_chunking: bool = True, shared_memory: bool = True, enable_trajectory_storage: bool = False, trajectory_storage_config: StorageConfig | None = None, enable_advanced_aggregation: bool = True, aggregation_config: AggregationConfig | None = None, generate_summary_report: bool = False, summary_report_format: str = 'markdown', compute_bootstrap_ci: bool = False, bootstrap_confidence_level: float = 0.95, bootstrap_n_iterations: int = 10000, bootstrap_method: str = 'percentile', ruin_evaluation: List[int] | None = None, insolvency_tolerance: float = 10000, letter_of_credit_rate: float = 0.015, growth_rate: float = 0.0, time_resolution: str = 'annual', apply_stochastic: bool = False, enable_ledger_pruning: bool = False, crn_base_seed: int | None = None) None[source]

Bases: object

Configuration for Monte Carlo simulation.

n_simulations

Number of simulation paths

n_years

Number of years per simulation

n_chains

Number of parallel chains for convergence

parallel

Whether to use multiprocessing

n_workers

Number of parallel workers (None for auto)

chunk_size

Size of chunks for parallel processing

use_float32

Use float32 for memory efficiency

cache_results

Cache intermediate results

checkpoint_interval

Save checkpoint every N simulations

progress_bar

Show progress bar

seed

Random seed for reproducibility

use_enhanced_parallel

Use enhanced parallel executor for better performance

monitor_performance

Track detailed performance metrics

adaptive_chunking

Enable adaptive chunk sizing

shared_memory

Enable shared memory for read-only data

letter_of_credit_rate

Annual LoC rate for collateral costs (default 1.5%)

growth_rate

Revenue growth rate per period (default 0.0)

time_resolution

Time step resolution, “annual” or “monthly” (default “annual”)

apply_stochastic

Whether to apply stochastic shocks (default False)

enable_ledger_pruning

Prune old ledger entries each year to bound memory (default False)

crn_base_seed

Common Random Numbers base seed for cross-scenario comparison. When set, the loss generator and stochastic process are reseeded at each (sim_id, year) boundary using SeedSequence([crn_base_seed, sim_id, year]). This ensures that compared scenarios (e.g. different deductibles) experience the same underlying random draws each year, dramatically reducing estimator variance for growth-lift metrics. (default None, disabled)

n_simulations: int = 100000
n_years: int = 10
n_chains: int = 4
parallel: bool = True
n_workers: int | None = None
chunk_size: int = 10000
use_float32: bool = False
cache_results: bool = True
checkpoint_interval: int | None = None
progress_bar: bool = True
seed: int | None = None
use_enhanced_parallel: bool = True
monitor_performance: bool = True
adaptive_chunking: bool = True
shared_memory: bool = True
enable_trajectory_storage: bool = False
trajectory_storage_config: StorageConfig | None = None
enable_advanced_aggregation: bool = True
aggregation_config: AggregationConfig | None = None
generate_summary_report: bool = False
summary_report_format: str = 'markdown'
compute_bootstrap_ci: bool = False
bootstrap_confidence_level: float = 0.95
bootstrap_n_iterations: int = 10000
bootstrap_method: str = 'percentile'
ruin_evaluation: List[int] | None = None
insolvency_tolerance: float = 10000
letter_of_credit_rate: float = 0.015
growth_rate: float = 0.0
time_resolution: str = 'annual'
apply_stochastic: bool = False
enable_ledger_pruning: bool = False
crn_base_seed: int | None = None
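
Example

Common Random Numbers across compared scenarios (a minimal sketch; the shared crn_base_seed is the key detail):

config_low_ded = SimulationConfig(n_simulations=10_000, crn_base_seed=1234)
config_high_ded = SimulationConfig(n_simulations=10_000, crn_base_seed=1234)
# Engines built from these configs draw identical losses at each
# (sim_id, year), so outcome differences isolate the insurance structure.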
class SimulationResults(final_assets: ndarray, annual_losses: ndarray, insurance_recoveries: ndarray, retained_losses: ndarray, growth_rates: ndarray, ruin_probability: Dict[str, float], metrics: Dict[str, float], convergence: Dict[str, ConvergenceStats], execution_time: float, config: SimulationConfig, performance_metrics: PerformanceMetrics | None = None, aggregated_results: Dict[str, Any] | None = None, time_series_aggregation: Dict[str, Any] | None = None, statistical_summary: Any | None = None, summary_report: str | None = None, bootstrap_confidence_intervals: Dict[str, Tuple[float, float]] | None = None) None[source]

Bases: object

Results from Monte Carlo simulation.

final_assets

Final asset values for each simulation

annual_losses

Annual loss amounts

insurance_recoveries

Insurance recovery amounts

retained_losses

Retained loss amounts

growth_rates

Realized growth rates

ruin_probability

Probability of ruin

metrics

Risk metrics calculated from results

convergence

Convergence statistics

execution_time

Total execution time in seconds

config

Simulation configuration used

performance_metrics

Detailed performance metrics (if monitoring enabled)

aggregated_results

Advanced aggregation results (if enabled)

time_series_aggregation

Time series aggregation results (if enabled)

statistical_summary

Complete statistical summary (if enabled)

summary_report

Formatted summary report (if generated)

bootstrap_confidence_intervals

Bootstrap confidence intervals for key metrics

final_assets: ndarray
annual_losses: ndarray
insurance_recoveries: ndarray
retained_losses: ndarray
growth_rates: ndarray
ruin_probability: Dict[str, float]
metrics: Dict[str, float]
convergence: Dict[str, ConvergenceStats]
execution_time: float
config: SimulationConfig
performance_metrics: PerformanceMetrics | None = None
aggregated_results: Dict[str, Any] | None = None
time_series_aggregation: Dict[str, Any] | None = None
statistical_summary: Any | None = None
summary_report: str | None = None
bootstrap_confidence_intervals: Dict[str, Tuple[float, float]] | None = None
summary() str[source]

Generate summary of simulation results.

Return type:

str

class MonteCarloEngine(loss_generator: ManufacturingLossGenerator, insurance_program: InsuranceProgram, manufacturer: WidgetManufacturer, config: SimulationConfig | None = None)[source]

Bases: object

High-performance Monte Carlo simulation engine for insurance analysis.

Provides efficient Monte Carlo simulation with support for parallel processing, convergence monitoring, checkpointing, and comprehensive result aggregation. Optimized for both high-end and budget hardware configurations.

Examples

Basic Monte Carlo simulation:

from ergodic_insurance.monte_carlo import MonteCarloEngine, SimulationConfig
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
from ergodic_insurance.insurance_program import InsuranceProgram
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance import ManufacturerConfig

# Configure simulation
config = SimulationConfig(
    n_simulations=10000,
    n_years=20,
    parallel=True,
    n_workers=4
)

# Create components
loss_gen = ManufacturingLossGenerator()
insurance = InsuranceProgram.create_standard_program()
manufacturer = WidgetManufacturer(ManufacturerConfig())

# Run Monte Carlo
engine = MonteCarloEngine(
    loss_generator=loss_gen,
    insurance_program=insurance,
    manufacturer=manufacturer,
    config=config
)
results = engine.run()

print(results.summary())

Advanced simulation with convergence monitoring:

# Configure a large run; convergence is assessed via R-hat across chains
config = SimulationConfig(
    n_simulations=100000,
    n_chains=4
)

engine = MonteCarloEngine(
    loss_generator=loss_gen,
    insurance_program=insurance,
    manufacturer=manufacturer,
    config=config
)

# Run with automatic convergence monitoring and early stopping
results = engine.run_with_convergence_monitoring(
    target_r_hat=1.05,
    check_interval=10000
)

# Per-metric convergence statistics are available on the results
print(results.summary())
for metric, stats in results.convergence.items():
    print(metric, stats)
loss_generator

Generator for manufacturing loss events

insurance_program

Insurance coverage structure

manufacturer

Manufacturing company financial model

config

Simulation configuration parameters

convergence_diagnostics

Convergence monitoring tools

See also

SimulationConfig: Configuration parameters

SimulationResults: Simulation results container

ParallelExecutor: Enhanced parallel processing

ConvergenceDiagnostics: Convergence analysis tools

trajectory_storage: TrajectoryStorage | None
result_aggregator: ResultAggregator | None
time_series_aggregator: TimeSeriesAggregator | None
summary_statistics: SummaryStatistics | None
run() SimulationResults[source]

Execute Monte Carlo simulation.

Return type:

SimulationResults

Returns:

SimulationResults object with all outputs

export_results(results: SimulationResults, filepath: Path, file_format: str = 'csv') None[source]

Export simulation results to file.

Parameters:
  • results (SimulationResults) – Simulation results to export

  • filepath (Path) – Output file path

  • file_format (str) – Export format (‘csv’, ‘json’, ‘hdf5’)

Return type:

None
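
Example

Persist results for later analysis (a minimal sketch; the filename is illustrative):

from pathlib import Path

engine.export_results(results, Path("mc_results.csv"), file_format="csv")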

compute_bootstrap_confidence_intervals(results: SimulationResults, confidence_level: float = 0.95, n_bootstrap: int = 10000, method: str = 'percentile', show_progress: bool = False) Dict[str, Tuple[float, float]][source]

Compute bootstrap confidence intervals for key simulation metrics.

Parameters:
  • results (SimulationResults) – Simulation results to analyze.

  • confidence_level (float) – Confidence level for intervals (default 0.95).

  • n_bootstrap (int) – Number of bootstrap iterations (default 10000).

  • method (str) – Bootstrap method (‘percentile’ or ‘bca’).

  • show_progress (bool) – Whether to show progress bar.

Return type:

Dict[str, Tuple[float, float]]

Returns:

Dictionary mapping metric names to (lower, upper) confidence bounds.
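
Example

Bootstrap intervals for the key metrics (a minimal sketch; assumes results from a prior run):

ci = engine.compute_bootstrap_confidence_intervals(
    results,
    confidence_level=0.95,
    n_bootstrap=10_000
)
for metric, (low, high) in ci.items():
    print(f"{metric}: [{low:.4f}, {high:.4f}]")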

run_with_progress_monitoring(check_intervals: List[int] | None = None, convergence_threshold: float = 1.1, early_stopping: bool = True, show_progress: bool = True) SimulationResults[source]

Run simulation with progress tracking and convergence monitoring.

Return type:

SimulationResults

run_with_convergence_monitoring(target_r_hat: float = 1.05, check_interval: int = 10000, max_iterations: int | None = None) SimulationResults[source]

Run simulation with automatic convergence monitoring.

Parameters:
  • target_r_hat (float) – Target R-hat for convergence

  • check_interval (int) – Check convergence every N simulations

  • max_iterations (Optional[int]) – Maximum iterations (None for no limit)

Return type:

SimulationResults

Returns:

Converged simulation results

estimate_ruin_probability(config: RuinProbabilityConfig | None = None) RuinProbabilityResults[source]

Estimate ruin probability over multiple time horizons.

Delegates to RuinProbabilityAnalyzer for specialized analysis.

Parameters:

config (Optional[RuinProbabilityConfig]) – Configuration for ruin probability estimation

Return type:

RuinProbabilityResults

Returns:

RuinProbabilityResults with comprehensive bankruptcy analysis

ergodic_insurance.monte_carlo_worker module

Standalone worker function for multiprocessing Monte Carlo simulations.

run_chunk_standalone(chunk: Tuple[int, int, int | None], loss_generator: ManufacturingLossGenerator, insurance_program: InsuranceProgram, manufacturer: WidgetManufacturer, config_dict: Dict[str, Any]) Dict[str, ndarray | List[Dict[int, bool]]][source]

Standalone function to run a chunk of simulations for multiprocessing.

This function is independent of the MonteCarloEngine class and can be pickled for multiprocessing on all platforms including Windows.

Parameters:
  • chunk (Tuple[int, int, Optional[int]]) – Chunk specification (simulation index range and optional seed)

  • loss_generator (ManufacturingLossGenerator) – Generator for manufacturing loss events

  • insurance_program (InsuranceProgram) – Insurance coverage structure

  • manufacturer (WidgetManufacturer) – Manufacturing company financial model

  • config_dict (Dict[str, Any]) – Simulation configuration serialized as a dictionary

Return type:

Dict[str, Union[ndarray, List[Dict[int, bool]]]]

Returns:

Dictionary with simulation results for the chunk

ergodic_insurance.optimal_control module

Optimal control strategies for insurance decisions.

This module provides implementations of various control strategies derived from the HJB solver, including feedback control laws, state-dependent insurance limits, and integration with the simulation framework.

Key Components:
  • ControlSpace: Defines feasible insurance control parameters

  • ControlStrategy: Abstract base for control strategies

  • StaticControl: Fixed insurance parameters throughout simulation

  • HJBFeedbackControl: State-dependent optimal control from HJB solution

  • TimeVaryingControl: Predetermined time-based control schedule

  • OptimalController: Integrates control strategies with simulations

Typical Workflow:
  1. Solve HJB equation to get optimal policy

  2. Create control strategy (e.g., HJBFeedbackControl)

  3. Initialize OptimalController with strategy

  4. Apply controls in simulation loop

  5. Track and analyze performance

Example

>>> # Solve HJB problem
>>> solver = HJBSolver(problem, config)
>>> value_func, policy = solver.solve()
>>>
>>> # Create feedback control
>>> control_space = ControlSpace(
...     limits=[(1e6, 5e7)],
...     retentions=[(1e5, 1e7)]
... )
>>> strategy = HJBFeedbackControl(solver, control_space)
>>>
>>> # Apply in simulation
>>> controller = OptimalController(strategy, control_space)
>>> insurance = controller.apply_control(manufacturer, time=t)

Author:

Alex Filiakov

Date:

2025-01-26

class ControlMode(*values)[source]

Bases: Enum

Mode of control application.

STATIC

Fixed control parameters that never change.

STATE_FEEDBACK

Control depends on current system state.

TIME_VARYING

Control follows predetermined time schedule.

ADAPTIVE

Control adapts based on observed history.

STATIC = 'static'
STATE_FEEDBACK = 'state_feedback'
TIME_VARYING = 'time_varying'
ADAPTIVE = 'adaptive'
class ControlSpace(limits: List[Tuple[float, float]], retentions: List[Tuple[float, float]], coverage_percentages: List[Tuple[float, float]] = <factory>, reinsurance_limits: List[Tuple[float, float]] | None = None) None[source]

Bases: object

Definition of the control space for insurance decisions.

limits: List[Tuple[float, float]]
retentions: List[Tuple[float, float]]
coverage_percentages: List[Tuple[float, float]]
reinsurance_limits: List[Tuple[float, float]] | None = None
__post_init__()[source]

Validate control space configuration.

Raises:
  • ValueError – If limits and retentions have different number of layers.

  • ValueError – If coverage percentages don’t match number of layers.

  • ValueError – If any bounds are invalid (min >= max).

  • ValueError – If coverage percentages are outside [0, 1] range.

get_dimensions() int[source]

Get total number of control dimensions.

Returns:

Total number of control variables across all layers and control types.

Return type:

int

Note

Used for determining the size of control vectors in optimization algorithms.

to_array(limits: List[float], retentions: List[float], coverages: List[float] | None = None) ndarray[source]

Convert control parameters to array format.

Parameters:
  • limits (List[float]) – Insurance limits for each layer.

  • retentions (List[float]) – Retention levels for each layer.

  • coverages (Optional[List[float]]) – Optional coverage percentages. If None, defaults to full coverage.

Returns:

Flattened control array suitable for optimization algorithms.

Return type:

np.ndarray

Note

Array order is: [limits, retentions, coverages].

from_array(control_array: ndarray) Dict[str, List[float]][source]

Convert control array back to named parameters.

Parameters:

control_array (ndarray) – Flattened control array from optimization.

Returns:

Dictionary with keys ‘limits’, ‘retentions’, and ‘coverages’ mapping to lists of values for each layer.

Return type:

Dict[str, List[float]]

Note

Inverse operation of to_array().
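
Example

Round-trip controls through array form (a minimal sketch; omitted coverages default to full coverage):

space = ControlSpace(limits=[(1e6, 5e7)], retentions=[(1e5, 1e7)])
arr = space.to_array(limits=[10e6], retentions=[1e6])
params = space.from_array(arr)  # {'limits': [...], 'retentions': [...], 'coverages': [...]}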

class ControlStrategy[source]

Bases: ABC

Abstract base class for control strategies.

All control strategies must implement methods to:
  1. Determine control actions based on state/time

  2. Update internal parameters based on outcomes

abstractmethod get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Get control action for current state and time.

Parameters:
  • state (Dict[str, float]) – Current state dictionary containing keys like ‘wealth’, ‘assets’, ‘cumulative_losses’, etc.

  • time (float) – Current simulation time.

Returns:

Control actions with keys ‘limits’, ‘retentions’, and ‘coverages’, each mapping to lists of values.

Return type:

Dict[str, Any]

abstractmethod update(state: Dict[str, float], outcome: Dict[str, float])[source]

Update strategy based on observed outcome.

Parameters:
  • state (Dict[str, float]) – State where control was applied.

  • outcome (Dict[str, float]) – Observed outcome containing keys like ‘losses’, ‘premium_costs’, ‘claim_payments’, etc.

Note

May be no-op for non-adaptive strategies.

class StaticControl(limits: List[float], retentions: List[float], coverages: List[float] | None = None)[source]

Bases: ControlStrategy

Static control strategy with fixed parameters.

This is the simplest control strategy where insurance parameters remain constant throughout the simulation.

get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Return fixed control parameters.

Parameters:
  • state (Dict[str, float]) – Current state (ignored for static control).

  • time (float) – Current time (ignored for static control).

Returns:

Fixed control parameters.

Return type:

Dict[str, Any]

update(state: Dict[str, float], outcome: Dict[str, float])[source]

No updates for static control.

Parameters:
  • state (Dict[str, float]) – State where control was applied (ignored).

  • outcome (Dict[str, float]) – Observed outcome (ignored).

class HJBFeedbackControl(hjb_solver: HJBSolver, control_space: ControlSpace, state_mapping: Callable[[Dict[str, float]], ndarray] | None = None)[source]

Bases: ControlStrategy

State-feedback control derived from HJB solution.

This strategy uses the optimal policy computed by the HJB solver to determine insurance parameters based on the current state.

get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Get optimal control from HJB policy.

Parameters:
  • state (Dict[str, float]) – Current simulation state dictionary.

  • time (float) – Current time (may be included in state mapping).

Returns:

Optimal control parameters with keys ‘limits’, ‘retentions’, and ‘coverages’.

Return type:

Dict[str, Any]

Note

Uses linear interpolation of the HJB optimal policy between grid points.

update(state: Dict[str, float], outcome: Dict[str, float])[source]

No updates needed for HJB feedback control.

Parameters:
  • state (Dict[str, float]) – State where control was applied (ignored).

  • outcome (Dict[str, float]) – Observed outcome (ignored).

Note

HJB policy is precomputed and doesn’t adapt online.

class TimeVaryingControl(time_schedule: List[float], limits_schedule: List[List[float]], retentions_schedule: List[List[float]], coverages_schedule: List[List[float]] | None = None)[source]

Bases: ControlStrategy

Time-varying control strategy with predetermined schedule.

This strategy adjusts insurance parameters according to a predetermined time schedule, useful for seasonal or cyclical risks.

get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Get control parameters for current time.

Parameters:
  • state (Dict[str, float]) – Current state (ignored for time-based control).

  • time (float) – Current simulation time.

Returns:

Control parameters interpolated linearly between scheduled time points.

Return type:

Dict[str, Any]

Note

Uses nearest value for times outside the schedule range.

update(state: Dict[str, float], outcome: Dict[str, float])[source]

No updates for predetermined schedule.

Parameters:
  • state (Dict[str, float]) – State where control was applied (ignored).

  • outcome (Dict[str, float]) – Observed outcome (ignored).
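
Example

A predetermined schedule with linear interpolation between points (a minimal sketch; values illustrative):

schedule = TimeVaryingControl(
    time_schedule=[0.0, 5.0, 10.0],
    limits_schedule=[[10e6], [15e6], [20e6]],
    retentions_schedule=[[1e6], [1.5e6], [2e6]]
)
control = schedule.get_control(state={}, time=2.5)  # halfway between the first two points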

class OptimalController(strategy: ControlStrategy, control_space: ControlSpace)[source]

Bases: object

Controller that applies optimal strategies in simulation.

This class integrates control strategies with the simulation framework, managing the application of controls and tracking performance.

control_history: list[Dict[str, Any]]
state_history: list[Dict[str, float]]
outcome_history: list[Dict[str, float]]
apply_control(manufacturer: WidgetManufacturer, state: Dict[str, float] | None = None, time: float = 0.0) InsuranceProgram[source]

Apply control strategy to create insurance program.

Parameters:
  • manufacturer (WidgetManufacturer) – Current manufacturer instance for extracting state if not provided.

  • state (Optional[Dict[str, float]]) – Optional state override. If None, state is extracted from manufacturer using _extract_state().

  • time (float) – Current simulation time.

Returns:

Insurance program with layers configured according to the control strategy.

Return type:

InsuranceProgram

Note

Records control and state in history for later analysis.

update_outcome(outcome: Dict[str, float])[source]

Update controller with observed outcome.

Parameters:

outcome (Dict[str, float]) – Observed outcome dictionary with keys like ‘losses’, ‘premium_costs’, ‘claim_payments’, etc.

Note

Calls strategy.update() if strategy is adaptive.

get_performance_summary() DataFrame[source]

Get summary of controller performance.

Returns:

DataFrame with columns for step number, state variables (prefixed with state_), control variables (prefixed with control_), and outcomes (prefixed with outcome_).

Return type:

pd.DataFrame

Note

Useful for analyzing control strategy effectiveness and creating visualizations.

reset()[source]

Reset controller history.

Clears all recorded history to prepare for new simulation run.

create_hjb_controller(manufacturer: WidgetManufacturer, simulation_years: int = 10, utility_type: str = 'log', risk_aversion: float = 2.0) OptimalController[source]

Convenience function to create HJB-based controller.

Creates and solves a simplified HJB problem for insurance optimization, then returns a controller configured with the optimal policy.

Parameters:
  • manufacturer (WidgetManufacturer) – Manufacturer instance for extracting model parameters like growth rates and risk characteristics.

  • simulation_years (int) – Time horizon for optimization. Longer horizons may require more grid points for accuracy.

  • utility_type (str) – Type of utility function: ‘log’ (logarithmic utility, Kelly criterion), ‘power’ (power/CRRA utility with risk aversion), or ‘linear’ (risk-neutral expected wealth).

  • risk_aversion (float) – Coefficient of relative risk aversion for power utility. Higher values imply more conservative policies. Ignored for log and linear utilities.

Returns:

Controller with HJB feedback strategy configured for the specified problem.

Return type:

OptimalController

Raises:

ValueError – If utility_type is not recognized.

Example

>>> from ergodic_insurance.manufacturer import WidgetManufacturer
>>> from ergodic_insurance.config import ManufacturerConfig
>>>
>>> # Set up manufacturer
>>> config = ManufacturerConfig()
>>> manufacturer = WidgetManufacturer(config)
>>>
>>> # Create HJB controller with power utility
>>> controller = create_hjb_controller(
...     manufacturer,
...     simulation_years=10,
...     utility_type="power",
...     risk_aversion=2.0
... )
>>>
>>> # Apply control at current state
>>> insurance = controller.apply_control(manufacturer, time=0)
>>>
>>> # Run one simulation step with externally generated losses
>>> from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
>>> loss_gen = ManufacturingLossGenerator(seed=42)
>>> losses, _ = loss_gen.generate_losses(duration=1, revenue=10_000_000)
>>>
>>> # Update controller with outcome (keys match ControlStrategy.update)
>>> outcome = {'losses': sum(l.amount for l in losses),
...            'premium_costs': insurance.total_premium}
>>> controller.update_outcome(outcome)

Note

This function creates a simplified 2D state space (wealth, time) and single-layer insurance for demonstration. Production systems would use higher-dimensional state spaces and multiple layers.

ergodic_insurance.optimization module

Advanced optimization algorithms for constrained insurance decision making.

This module implements sophisticated optimization methods including trust-region, penalty methods, augmented Lagrangian, and multi-start techniques for finding global optima in complex insurance optimization problems.

class ConstraintType(*values)[source]

Bases: Enum

Types of constraints in optimization.

EQUALITY = 'eq'
INEQUALITY = 'ineq'
BOUNDS = 'bounds'
class ConstraintViolation(constraint_name: str, violation_amount: float, constraint_type: ConstraintType, current_value: float, limit_value: float, is_satisfied: bool) None[source]

Bases: object

Information about constraint violations.

constraint_name: str
violation_amount: float
constraint_type: ConstraintType
current_value: float
limit_value: float
is_satisfied: bool
__str__() str[source]

String representation of violation.

Return type:

str

class ConvergenceMonitor(max_iterations: int = 1000, tolerance: float = 1e-06, objective_history: List[float] = <factory>, constraint_violation_history: List[float] = <factory>, gradient_norm_history: List[float] = <factory>, step_size_history: List[float] = <factory>, iteration_count: int = 0, converged: bool = False, convergence_message: str = '') None[source]

Bases: object

Monitor and track convergence of optimization algorithms.

max_iterations: int = 1000
tolerance: float = 1e-06
objective_history: List[float]
constraint_violation_history: List[float]
gradient_norm_history: List[float]
step_size_history: List[float]
iteration_count: int = 0
converged: bool = False
convergence_message: str = ''
update(objective: float, constraint_violation: float = 0.0, gradient_norm: float = 0.0, step_size: float = 0.0)[source]

Update convergence history.

get_summary() Dict[str, Any][source]

Get convergence summary statistics.

Return type:

Dict[str, Any]

class AdaptivePenaltyParameters(initial_penalty: float = 10.0, penalty_increase_factor: float = 2.0, max_penalty: float = 1000000.0, constraint_tolerance: float = 0.0001, penalty_update_frequency: int = 10, current_penalties: Dict[str, float] = <factory>) None[source]

Bases: object

Parameters for adaptive penalty method.

initial_penalty: float = 10.0
penalty_increase_factor: float = 2.0
max_penalty: float = 1000000.0
constraint_tolerance: float = 0.0001
penalty_update_frequency: int = 10
current_penalties: Dict[str, float]
update_penalties(violations: List[ConstraintViolation])[source]

Update penalty parameters based on constraint violations.

class TrustRegionOptimizer(objective_fn: Callable, gradient_fn: Callable | None = None, hessian_fn: Callable | None = None, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None)[source]

Bases: object

Trust-region constrained optimization with adaptive radius adjustment.

optimize(x0: ndarray, initial_radius: float = 1.0, max_radius: float = 10.0, eta: float = 0.15, max_iter: int = 1000, tol: float = 1e-06) OptimizeResult[source]

Run trust-region optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • initial_radius (float) – Initial trust region radius

  • max_radius (float) – Maximum trust region radius

  • eta (float) – Minimum reduction ratio for accepting step

  • max_iter (int) – Maximum iterations

  • tol (float) – Convergence tolerance

Return type:

OptimizeResult

Returns:

Optimization result

class PenaltyMethodOptimizer(objective_fn: Callable, constraints: List[Dict[str, Any]], bounds: Bounds | None = None)[source]

Bases: object

Optimization using penalty method with adaptive penalty parameters.

optimize(x0: ndarray, method: str = 'L-BFGS-B', max_outer_iter: int = 50, max_inner_iter: int = 100, tol: float = 1e-06) OptimizeResult[source]

Run penalty method optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • method (str) – Inner optimization method

  • max_outer_iter (int) – Maximum outer iterations

  • max_inner_iter (int) – Maximum inner iterations per outer loop

  • tol (float) – Convergence tolerance

Return type:

OptimizeResult

Returns:

Optimization result

class AugmentedLagrangianOptimizer(objective_fn: Callable, constraints: List[Dict[str, Any]], bounds: Bounds | None = None)[source]

Bases: object

Augmented Lagrangian method for constrained optimization.

optimize(x0: ndarray, max_outer_iter: int = 50, max_inner_iter: int = 100, tol: float = 1e-06, rho_init: float = 1.0, rho_max: float = 10000.0) OptimizeResult[source]

Run augmented Lagrangian optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • max_outer_iter (int) – Maximum outer iterations

  • max_inner_iter (int) – Maximum inner iterations

  • tol (float) – Convergence tolerance

  • rho_init (float) – Initial penalty parameter

  • rho_max (float) – Maximum penalty parameter

Return type:

OptimizeResult

Returns:

Optimization result

class MultiStartOptimizer(objective_fn: Callable, bounds: Bounds, constraints: List[Dict[str, Any]] | None = None, base_optimizer: str = 'SLSQP')[source]

Bases: object

Multi-start optimization for finding global optima.

optimize(n_starts: int = 10, x0: ndarray | None = None, seed: int | None = None, parallel: bool = False) OptimizeResult[source]

Run multi-start optimization.

Parameters:
  • n_starts (int) – Number of random starts

  • x0 (Optional[ndarray]) – Optional initial point (included as first start)

  • seed (Optional[int]) – Random seed for reproducibility

  • parallel (bool) – Whether to run starts in parallel

Return type:

OptimizeResult

Returns:

Best optimization result across all starts
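
Example

Escape local optima with random restarts (a minimal sketch on a toy multimodal objective):

import numpy as np
from scipy.optimize import Bounds

opt = MultiStartOptimizer(
    objective_fn=lambda x: (x[0] - 1) ** 2 + np.sin(5 * x[0]),
    bounds=Bounds([-5.0], [5.0])
)
result = opt.optimize(n_starts=10, seed=42)
print(result.x, result.fun)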

class EnhancedSLSQPOptimizer(objective_fn: Callable, gradient_fn: Callable | None = None, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None)[source]

Bases: object

Enhanced SLSQP with adaptive step sizing and improved convergence.

step_size: float
prev_x: ndarray | None
prev_obj: float | None
optimize(x0: ndarray, adaptive_step: bool = True, line_search: str = 'armijo', max_iter: int = 1000, tol: float = 1e-06) OptimizeResult[source]

Run enhanced SLSQP optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • adaptive_step (bool) – Whether to use adaptive step sizing

  • line_search (str) – Line search method (“armijo” or “wolfe”)

  • max_iter (int) – Maximum iterations

  • tol (float) – Convergence tolerance

Return type:

OptimizeResult

Returns:

Optimization result

create_optimizer(method: str, objective_fn: Callable, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None, **kwargs) Any[source]

Factory function to create appropriate optimizer.

Parameters:
  • method (str) – Optimization method name

  • objective_fn (Callable) – Objective function

  • constraints (Optional[List[Dict[str, Any]]]) – Optional constraints

  • bounds (Optional[Bounds]) – Optional bounds

  • **kwargs – Additional optimizer-specific arguments

Return type:

Any

Returns:

Configured optimizer instance

ergodic_insurance.parallel_executor module

CPU-optimized parallel execution engine for Monte Carlo simulations.

This module provides enhanced parallel processing capabilities optimized for budget hardware (4-8 cores) with intelligent chunking, shared memory management, and minimal serialization overhead.

Features:
  • Smart dynamic chunking based on CPU resources and workload

  • Shared memory for read-only data structures

  • CPU affinity optimization for cache locality

  • Minimal IPC overhead (<5% target)

  • Memory-efficient execution (<4GB for 100K simulations)

Example

>>> from ergodic_insurance.parallel_executor import ParallelExecutor
>>> executor = ParallelExecutor(n_workers=4)
>>> results = executor.map_reduce(
...     work_function=simulate_path,
...     work_items=range(100000),
...     reduce_function=combine_results,
...     shared_data={'config': simulation_config}
... )
Author:

Alex Filiakov

Date:

2025-08-26

class CPUProfile(n_cores: int, n_threads: int, cache_sizes: Dict[str, int], available_memory: int, cpu_freq: float, system_load: float) None[source]

Bases: object

CPU performance profile for optimization decisions.

n_cores: int
n_threads: int
cache_sizes: Dict[str, int]
available_memory: int
cpu_freq: float
system_load: float
classmethod detect() CPUProfile[source]

Detect current CPU profile.

Returns:

Current system CPU profile

Return type:

CPUProfile

class ChunkingStrategy(initial_chunk_size: int = 1000, min_chunk_size: int = 100, max_chunk_size: int = 10000, target_chunks_per_worker: int = 10, adaptive: bool = True, profile_samples: int = 100) None[source]

Bases: object

Dynamic chunking strategy for parallel workloads.

initial_chunk_size: int = 1000
min_chunk_size: int = 100
max_chunk_size: int = 10000
target_chunks_per_worker: int = 10
adaptive: bool = True
profile_samples: int = 100
calculate_optimal_chunk_size(n_items: int, n_workers: int, item_complexity: float = 1.0, cpu_profile: CPUProfile | None = None) int[source]

Calculate optimal chunk size based on workload and resources.

Parameters:
  • n_items (int) – Total number of work items

  • n_workers (int) – Number of parallel workers

  • item_complexity (float) – Relative complexity of each item (1.0 = baseline)

  • cpu_profile (Optional[CPUProfile]) – CPU profile for optimization

Returns:

Optimal chunk size

Return type:

int

class SharedMemoryConfig(enable_shared_arrays: bool = True, enable_shared_objects: bool = True, compression: bool = False, cleanup_on_exit: bool = True) None[source]

Bases: object

Configuration for shared memory optimization.

enable_shared_arrays: bool = True
enable_shared_objects: bool = True
compression: bool = False
cleanup_on_exit: bool = True
class SharedMemoryManager(config: SharedMemoryConfig | None = None)[source]

Bases: object

Manager for shared memory resources.

Handles creation, access, and cleanup of shared memory segments for both numpy arrays and serialized objects.

shared_arrays: Dict[str, Tuple[SharedMemory, tuple, dtype]]
shared_objects: Dict[str, SharedMemory]
share_array(name: str, array: ndarray) str[source]

Share a numpy array via shared memory.

Parameters:
  • name (str) – Unique identifier for the array

  • array (ndarray) – Numpy array to share

Returns:

Shared memory name for retrieval

Return type:

str

get_array(shm_name: str, shape: tuple, dtype: dtype) ndarray[source]

Retrieve a shared numpy array.

Parameters:
  • shm_name (str) – Shared memory name

  • shape (tuple) – Array shape

  • dtype (dtype) – Array data type

Returns:

Shared array (view, not copy)

Return type:

np.ndarray

share_object(name: str, obj: Any) str[source]

Share a serialized object via shared memory.

Parameters:
  • name (str) – Unique identifier for the object

  • obj (Any) – Object to share

Returns:

Shared memory name for retrieval

Return type:

str

get_object(shm_name: str, size: int, compressed: bool = False) Any[source]

Retrieve a shared object.

Parameters:
  • shm_name (str) – Shared memory name

  • size (int) – Size of serialized data

  • compressed (bool) – Whether data is compressed

Returns:

Deserialized object

Return type:

Any

cleanup()[source]

Clean up all shared memory resources.

__del__()[source]

Cleanup on deletion.

class PerformanceMetrics(total_time: float = 0.0, setup_time: float = 0.0, computation_time: float = 0.0, serialization_time: float = 0.0, reduction_time: float = 0.0, memory_peak: int = 0, cpu_utilization: float = 0.0, items_per_second: float = 0.0, speedup: float = 1.0) None[source]

Bases: object

Performance metrics for parallel execution.

total_time: float = 0.0
setup_time: float = 0.0
computation_time: float = 0.0
serialization_time: float = 0.0
reduction_time: float = 0.0
memory_peak: int = 0
cpu_utilization: float = 0.0
items_per_second: float = 0.0
speedup: float = 1.0
summary() str[source]

Generate performance summary.

Returns:

Formatted performance summary

Return type:

str

class ParallelExecutor(n_workers: int | None = None, chunking_strategy: ChunkingStrategy | None = None, shared_memory_config: SharedMemoryConfig | None = None, monitor_performance: bool = True)[source]

Bases: object

CPU-optimized parallel executor for Monte Carlo simulations.

Provides intelligent work distribution, shared memory management, and performance monitoring for efficient parallel execution on budget hardware.

map_reduce(work_function: Callable, work_items: List | range, reduce_function: Callable | None = None, shared_data: Dict[str, Any] | None = None, progress_bar: bool = True) Any[source]

Execute parallel map-reduce operation.

Parameters:
  • work_function (Callable) – Function to apply to each work item

  • work_items (Union[List, range]) – List or range of work items

  • reduce_function (Optional[Callable]) – Function to combine results (None for list)

  • shared_data (Optional[Dict[str, Any]]) – Data to share across all workers

  • progress_bar (bool) – Show progress bar

Returns:

Combined results from reduce function or list of results

Return type:

Any

get_performance_report() str[source]

Get performance report.

Returns:

Formatted performance report

Return type:

str

__enter__()[source]

Context manager entry.

__exit__(exc_type, exc_val, exc_tb)[source]

Context manager exit with cleanup.

parallel_map(func: Callable, items: List | range, n_workers: int | None = None, progress: bool = True) List[Any][source]

Simple parallel map operation.

Parameters:
  • func (Callable) – Function to apply

  • items (Union[List, range]) – Items to process

  • n_workers (Optional[int]) – Number of workers

  • progress (bool) – Show progress bar

Returns:

Results

Return type:

List[Any]

parallel_aggregate(func: Callable, items: List | range, reducer: Callable, n_workers: int | None = None, shared_data: Dict | None = None, progress: bool = True) Any[source]

Parallel map-reduce operation.

Parameters:
  • func (Callable) – Function to apply to each item

  • items (Union[List, range]) – Items to process

  • reducer (Callable) – Function to combine results

  • n_workers (Optional[int]) – Number of workers

  • shared_data (Optional[Dict]) – Data to share across workers

  • progress (bool) – Show progress bar

Returns:

Aggregated result

Return type:

Any
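
A minimal sketch of parallel_map (the work function must be defined at module top level so worker processes can pickle it):

from ergodic_insurance.parallel_executor import parallel_map

def square(x: int) -> int:
    return x * x

if __name__ == "__main__":  # guard required when multiprocessing spawns workers
    results = parallel_map(square, range(10_000), n_workers=4, progress=False)
    print(results[:5])  # [0, 1, 4, 9, 16]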

ergodic_insurance.parameter_sweep module

Parameter sweep utilities for systematic exploration of parameter space.

This module provides utilities for systematic parameter sweeps across the full parameter space to identify optimal regions and validate robustness of recommendations across different scenarios.

Features:
  • Efficient grid search across parameter combinations

  • Parallel execution for large sweeps using multiprocessing

  • Result aggregation and storage with HDF5/Parquet support

  • Scenario comparison tools for side-by-side analysis

  • Optimal region identification using percentile-based methods

  • Pre-defined scenarios for company sizes, loss scenarios, and market conditions

  • Adaptive refinement near optima for efficient exploration

  • Progress tracking and resumption capabilities

Example

>>> from ergodic_insurance.parameter_sweep import ParameterSweeper, SweepConfig
>>> from ergodic_insurance.business_optimizer import BusinessOptimizer
>>>
>>> # Create optimizer
>>> optimizer = BusinessOptimizer(manufacturer)
>>>
>>> # Initialize sweeper
>>> sweeper = ParameterSweeper(optimizer)
>>>
>>> # Define parameter sweep
>>> config = SweepConfig(
...     parameters={
...         "initial_assets": [1e6, 10e6, 100e6],
...         "base_operating_margin": [0.05, 0.08, 0.12],
...         "loss_frequency": [3, 5, 8]
...     },
...     fixed_params={"time_horizon": 10},
...     metrics_to_track=["optimal_roe", "ruin_probability"]
... )
>>>
>>> # Execute sweep
>>> results = sweeper.sweep(config)
>>>
>>> # Find optimal regions
>>> optimal, summary = sweeper.find_optimal_regions(
...     results,
...     objective="optimal_roe",
...     constraints={"ruin_probability": (0, 0.01)}
... )
Author:

Alex Filiakov

Date:

2025-08-29

class SweepConfig(parameters: Dict[str, List[Any]], fixed_params: Dict[str, Any] = <factory>, metrics_to_track: List[str] = <factory>, n_workers: int | None = None, batch_size: int = 100, adaptive_refinement: bool = False, refinement_threshold: float = 90.0, save_intermediate: bool = True, cache_dir: str = './cache/sweeps') None[source]

Bases: object

Configuration for parameter sweep.

parameters

Dictionary mapping parameter names to lists of values to sweep

fixed_params

Fixed parameters that don’t vary across sweep

metrics_to_track

List of metric names to extract from results

n_workers

Number of parallel workers for execution

batch_size

Size of batches for parallel processing

adaptive_refinement

Whether to adaptively refine near optima

refinement_threshold

Percentile threshold for refinement (e.g., 90 for top 10%)

save_intermediate

Whether to save intermediate results

cache_dir

Directory for caching results

parameters: Dict[str, List[Any]]
fixed_params: Dict[str, Any]
metrics_to_track: List[str]
n_workers: int | None = None
batch_size: int = 100
adaptive_refinement: bool = False
refinement_threshold: float = 90.0
save_intermediate: bool = True
cache_dir: str = './cache/sweeps'
__post_init__()[source]

Validate configuration and set defaults.

generate_grid() List[Dict[str, Any]][source]

Generate parameter grid for sweep.

Return type:

List[Dict[str, Any]]

Returns:

List of dictionaries, each containing a complete parameter configuration

estimate_runtime(seconds_per_run: float = 1.0) str[source]

Estimate total runtime for sweep.

Parameters:

seconds_per_run (float) – Estimated seconds per single parameter configuration

Return type:

str

Returns:

Human-readable runtime estimate

class ParameterSweeper(optimizer: BusinessOptimizer | None = None, cache_dir: str = './cache/sweeps', use_parallel: bool = True)[source]

Bases: object

Systematic parameter sweep utilities for insurance optimization.

This class provides methods for exploring the parameter space through grid search, identifying optimal regions, and comparing scenarios.

optimizer

Business optimizer instance for running optimizations

cache_dir

Directory for storing cached results

results_cache

In-memory cache of optimization results

use_parallel

Whether to use parallel processing

results_cache: Dict[str, Dict[str, Any]]
sweep(config: SweepConfig, progress_callback: Callable | None = None) DataFrame[source]

Execute parameter sweep with parallel processing.

Parameters:
  • config (SweepConfig) – Sweep configuration defining the parameter grid and metrics

  • progress_callback (Optional[Callable]) – Optional callback for reporting sweep progress

Return type:

DataFrame

Returns:

DataFrame containing sweep results with all parameter combinations and metrics

create_scenarios() Dict[str, SweepConfig][source]

Create pre-defined scenario configurations.

Return type:

Dict[str, SweepConfig]

Returns:

Dictionary of scenario names to SweepConfig objects

find_optimal_regions(results: DataFrame, objective: str = 'optimal_roe', constraints: Dict[str, Tuple[float, float]] | None = None, top_percentile: float = 90) Tuple[DataFrame, DataFrame][source]

Identify optimal parameter regions.

Parameters:
  • results (DataFrame) – DataFrame of sweep results

  • objective (str) – Objective metric to optimize

  • constraints (Optional[Dict[str, Tuple[float, float]]]) – Dictionary mapping metric names to (min, max) constraint tuples

  • top_percentile (float) – Percentile threshold for optimal region (e.g., 90 for top 10%)

Return type:

Tuple[DataFrame, DataFrame]

Returns:

Tuple of (optimal results DataFrame, parameter statistics DataFrame)

compare_scenarios(results: Dict[str, DataFrame], metrics: List[str] | None = None, normalize: bool = False) DataFrame[source]

Compare results across multiple scenarios.

Parameters:
  • results (Dict[str, DataFrame]) – Dictionary mapping scenario names to result DataFrames

  • metrics (Optional[List[str]]) – List of metrics to compare (default: all common metrics)

  • normalize (bool) – Whether to normalize metrics to [0, 1] range

Return type:

DataFrame

Returns:

DataFrame with scenario comparison

load_results(sweep_hash: str) DataFrame | None[source]

Load cached sweep results.

Parameters:

sweep_hash (str) – Sweep configuration hash

Return type:

Optional[DataFrame]

Returns:

Results DataFrame if found, None otherwise

export_results(results: DataFrame, output_file: str, file_format: str = 'parquet') None[source]

Export results to specified format.

Parameters:
  • results (DataFrame) – Results DataFrame

  • output_file (str) – Output file path

  • file_format (str) – Export format (‘parquet’, ‘csv’, ‘excel’, ‘hdf5’)

Return type:

None
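
Continuing the module example above, results can be exported and scenarios compared side by side (the "stressed" results DataFrame and scenario names are illustrative):

sweeper.export_results(results, "sweep_results.parquet", file_format="parquet")

comparison = sweeper.compare_scenarios(
    {"baseline": results, "stressed": stressed_results},
    metrics=["optimal_roe", "ruin_probability"],
    normalize=True,
)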

ergodic_insurance.pareto_frontier module

Pareto frontier analysis for multi-objective optimization.

This module provides comprehensive tools for generating, analyzing, and visualizing Pareto frontiers in multi-objective optimization problems, particularly focused on insurance optimization trade-offs between ROE, risk, and costs.

class ObjectiveType(*values)[source]

Bases: Enum

Types of objectives in multi-objective optimization.

MAXIMIZE = 'maximize'
MINIMIZE = 'minimize'
class Objective(name: str, type: ObjectiveType, weight: float = 1.0, normalize: bool = True, bounds: Tuple[float, float] | None = None) None[source]

Bases: object

Definition of an optimization objective.

name

Name of the objective (e.g., ‘ROE’, ‘risk’, ‘cost’)

type

Whether to maximize or minimize this objective

weight

Weight for weighted sum method (0-1)

normalize

Whether to normalize this objective

bounds

Optional bounds for this objective as (min, max)

name: str
type: ObjectiveType
weight: float = 1.0
normalize: bool = True
bounds: Tuple[float, float] | None = None
class ParetoPoint(objectives: Dict[str, float], decision_variables: ndarray, is_dominated: bool = False, crowding_distance: float = 0.0, trade_offs: Dict[str, float] = <factory>) None[source]

Bases: object

A point on the Pareto frontier.

objectives

Dictionary of objective values

decision_variables

The decision variables that produce these objectives

is_dominated

Whether this point is dominated by another

crowding_distance

Crowding distance metric for this point

trade_offs

Trade-off ratios with neighboring points

objectives: Dict[str, float]
decision_variables: ndarray
is_dominated: bool = False
crowding_distance: float = 0.0
trade_offs: Dict[str, float]
dominates(other: ParetoPoint, objectives: List[Objective]) bool[source]

Check if this point dominates another point.

Parameters:
  • other (ParetoPoint) – Another Pareto point to compare

  • objectives (List[Objective]) – List of objectives to consider

Return type:

bool

Returns:

True if this point dominates the other

class ParetoFrontier(objectives: List[Objective], objective_function: Callable, bounds: List[Tuple[float, float]], constraints: List[Dict[str, Any]] | None = None, seed: int | None = None)[source]

Bases: object

Generator and analyzer for Pareto frontiers.

This class provides methods for generating Pareto frontiers using various algorithms and analyzing the resulting trade-offs.

frontier_points: List[ParetoPoint]
generate_weighted_sum(n_points: int = 50, method: str = 'SLSQP') List[ParetoPoint][source]

Generate Pareto frontier using weighted sum method.

Parameters:
  • n_points (int) – Number of points to generate on the frontier

  • method (str) – Optimization method to use

Return type:

List[ParetoPoint]

Returns:

List of Pareto points forming the frontier

generate_epsilon_constraint(n_points: int = 50, method: str = 'SLSQP') List[ParetoPoint][source]

Generate Pareto frontier using epsilon-constraint method.

Parameters:
  • n_points (int) – Number of points to generate

  • method (str) – Optimization method to use

Return type:

List[ParetoPoint]

Returns:

List of Pareto points forming the frontier

generate_evolutionary(n_generations: int = 100, population_size: int = 50) List[ParetoPoint][source]

Generate Pareto frontier using evolutionary algorithm.

Parameters:
  • n_generations (int) – Number of generations for evolution

  • population_size (int) – Size of population in each generation

Return type:

List[ParetoPoint]

Returns:

List of Pareto points forming the frontier

calculate_hypervolume(reference_point: Dict[str, float] | None = None) float[source]

Calculate hypervolume indicator for the Pareto frontier.

Parameters:

reference_point (Optional[Dict[str, float]]) – Reference point for hypervolume calculation

Return type:

float

Returns:

Hypervolume value

get_knee_points(n_knees: int = 1) List[ParetoPoint][source]

Find knee points on the Pareto frontier.

Knee points represent good trade-offs where small improvements in one objective require large sacrifices in others.

Parameters:

n_knees (int) – Number of knee points to identify

Return type:

List[ParetoPoint]

Returns:

List of knee points

to_dataframe() DataFrame[source]

Convert frontier points to pandas DataFrame.

Return type:

DataFrame

Returns:

DataFrame with objectives and decision variables
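
A minimal sketch of frontier generation. The assumption that objective_function returns one value per objective, keyed by objective name, is not confirmed by the signatures above:

from ergodic_insurance.pareto_frontier import Objective, ObjectiveType, ParetoFrontier

objectives = [
    Objective(name="ROE", type=ObjectiveType.MAXIMIZE),
    Objective(name="risk", type=ObjectiveType.MINIMIZE),
]

def evaluate(x):
    # Toy trade-off: raising x[0] improves ROE but also increases risk
    # (return convention assumed: dict keyed by objective name)
    return {"ROE": x[0], "risk": x[0] ** 2 + 0.5 * x[1]}

frontier = ParetoFrontier(
    objectives=objectives,
    objective_function=evaluate,
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    seed=42,
)
points = frontier.generate_weighted_sum(n_points=20)
print(frontier.to_dataframe().head())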

ergodic_insurance.performance_optimizer module

Performance optimization module for Monte Carlo simulations.

This module provides tools and strategies to optimize the performance of Monte Carlo simulations, targeting 100K simulations in under 60 seconds on budget hardware (4-core CPU, 8GB RAM).

Key features:
  • Execution profiling and bottleneck identification

  • Vectorized operations for loss generation and insurance calculations

  • Smart caching for repeated calculations

  • Memory optimization for large-scale simulations

  • Integration with parallel execution framework

Example

>>> from ergodic_insurance.performance_optimizer import PerformanceOptimizer
>>> from ergodic_insurance.monte_carlo import MonteCarloEngine
>>>
>>> optimizer = PerformanceOptimizer()
>>> engine = MonteCarloEngine(config=config)
>>>
>>> # Profile execution
>>> profile_results = optimizer.profile_execution(engine, n_simulations=1000)
>>> print(profile_results.bottlenecks)
>>>
>>> # Apply optimizations
>>> optimized_engine = optimizer.optimize_engine(engine)
>>> results = optimized_engine.run()

Google-style docstrings are used throughout for Sphinx documentation.

jit(*args, **kwargs)[source]
class ProfileResult(total_time: float, bottlenecks: List[str] = <factory>, function_times: Dict[str, float] = <factory>, memory_usage: float = 0.0, recommendations: List[str] = <factory>) None[source]

Bases: object

Results from performance profiling.

total_time

Total execution time in seconds

bottlenecks

List of performance bottlenecks identified

function_times

Dictionary mapping function names to execution times

memory_usage

Peak memory usage in MB

recommendations

List of optimization recommendations

total_time: float
bottlenecks: List[str]
function_times: Dict[str, float]
memory_usage: float = 0.0
recommendations: List[str]
summary() str[source]

Generate a summary of profiling results.

Return type:

str

Returns:

Formatted summary string.

class OptimizationConfig(enable_vectorization: bool = True, enable_caching: bool = True, cache_size: int = 1000, enable_numba: bool = True, memory_limit_mb: float = 4000.0, chunk_size: int = 10000) None[source]

Bases: object

Configuration for performance optimization.

enable_vectorization

Use vectorized operations

enable_caching

Use smart caching

cache_size

Maximum cache entries

enable_numba

Use Numba JIT compilation

memory_limit_mb

Memory usage limit in MB

chunk_size

Chunk size for batch processing

enable_vectorization: bool = True
enable_caching: bool = True
cache_size: int = 1000
enable_numba: bool = True
memory_limit_mb: float = 4000.0
chunk_size: int = 10000
class SmartCache(max_size: int = 1000)[source]

Bases: object

Smart caching system for repeated calculations.

Provides intelligent caching with memory management and hit rate tracking.

cache: Dict[Tuple, Any]
access_counts: Dict[Tuple, int]
get(key: Tuple) Any | None[source]

Get value from cache.

Parameters:

key (Tuple) – Cache key (must be hashable).

Return type:

Optional[Any]

Returns:

Cached value or None if not found.

set(key: Tuple, value: Any) None[source]

Set value in cache.

Parameters:
  • key (Tuple) – Cache key (must be hashable).

  • value (Any) – Value to cache.

Return type:

None

property hit_rate: float

Calculate cache hit rate.

Returns:

Hit rate as percentage.

clear() None[source]

Clear the cache.

Return type:

None
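
A minimal sketch of the cache in use (expensive_premium is an illustrative stand-in for any costly pure function):

from ergodic_insurance.performance_optimizer import SmartCache

def expensive_premium(limit: float, rate: float) -> float:
    return limit * rate  # placeholder for a genuinely expensive calculation

cache = SmartCache(max_size=1000)
key = (5_000_000.0, 0.015)  # cache keys must be hashable tuples
value = cache.get(key)
if value is None:
    value = expensive_premium(*key)
    cache.set(key, value)
print(f"hit rate: {cache.hit_rate:.1f}%")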

class VectorizedOperations[source]

Bases: object

Vectorized operations for performance optimization.

static calculate_growth_rates(final_assets: ndarray, initial_assets: float, n_years: float) ndarray[source]

Calculate growth rates using vectorized operations.

Parameters:
  • final_assets (ndarray) – Array of final asset values.

  • initial_assets (float) – Initial asset value.

  • n_years (float) – Number of years.

Return type:

ndarray

Returns:

Array of growth rates.

static apply_insurance_vectorized(losses: ndarray, attachment: float, limit: float) Tuple[ndarray, ndarray][source]

Apply insurance coverage using vectorized operations.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • attachment (float) – Insurance attachment point.

  • limit (float) – Insurance limit.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (retained_losses, recovered_amounts).

static calculate_premiums_vectorized(limits: ndarray, rates: ndarray) ndarray[source]

Calculate premiums using vectorized operations.

Parameters:
  • limits (ndarray) – Array of insurance limits.

  • rates (ndarray) – Array of premium rates.

Return type:

ndarray

Returns:

Array of premium amounts.
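
A short sketch of the vectorized layer calculation. The recovery formula in the comment reflects standard excess-of-loss mechanics and is an assumption about the implementation:

import numpy as np
from ergodic_insurance.performance_optimizer import VectorizedOperations

losses = np.array([0.5e6, 2.0e6, 12.0e6])
retained, recovered = VectorizedOperations.apply_insurance_vectorized(
    losses, attachment=1.0e6, limit=5.0e6
)
# Assumed mechanics: recovered = clip(losses - attachment, 0, limit),
# retained = losses - recovered
print(retained, recovered)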

class PerformanceOptimizer(config: OptimizationConfig | None = None)[source]

Bases: object

Main performance optimization engine.

Provides profiling, optimization, and monitoring capabilities for Monte Carlo simulations.

profile_execution(func: Callable, *args, **kwargs) ProfileResult[source]

Profile function execution to identify bottlenecks.

Parameters:
  • func (Callable) – Function to profile.

  • *args – Positional arguments for function.

  • **kwargs – Keyword arguments for function.

Return type:

ProfileResult

Returns:

ProfileResult with profiling data.

optimize_loss_generation(losses: List[float], batch_size: int = 10000) ndarray[source]

Optimize loss generation using vectorization.

Parameters:
  • losses (List[float]) – List of loss values.

  • batch_size (int) – Size of processing batches.

Return type:

ndarray

Returns:

Optimized loss array.

optimize_insurance_calculation(losses: ndarray, layers: List[Tuple[float, float, float]]) Dict[str, Any][source]

Optimize insurance calculations using vectorization and caching.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • layers (List[Tuple[float, float, float]]) – Insurance layer specifications as 3-tuples (e.g., attachment, limit, rate).

Return type:

Dict[str, Any]

Returns:

Dictionary with optimized results.

optimize_memory_usage() Dict[str, Any][source]

Optimize memory usage for large simulations.

Return type:

Dict[str, Any]

Returns:

Dictionary with memory optimization metrics.

get_optimization_summary() str[source]

Get summary of optimization status.

Return type:

str

Returns:

Formatted optimization summary.

cached_calculation(cache_size: int = 128)[source]

Decorator for caching expensive calculations.

Parameters:

cache_size (int) – Maximum cache size.

Returns:

Decorated function with caching.

profile_function(func: Callable) Callable[source]

Decorator to profile function execution.

Parameters:

func (Callable) – Function to profile.

Return type:

Callable

Returns:

Decorated function with profiling.

ergodic_insurance.progress_monitor module

Lightweight progress monitoring for Monte Carlo simulations.

This module provides efficient progress tracking with minimal performance overhead, including ETA estimation, convergence summaries, and console output.

class ProgressStats(current_iteration: int, total_iterations: int, start_time: float, elapsed_time: float, estimated_time_remaining: float, iterations_per_second: float, convergence_checks: List[Tuple[int, float]] = <factory>, converged: bool = False, converged_at: int | None = None) None[source]

Bases: object

Statistics for progress monitoring.

current_iteration: int
total_iterations: int
start_time: float
elapsed_time: float
estimated_time_remaining: float
iterations_per_second: float
convergence_checks: List[Tuple[int, float]]
converged: bool = False
converged_at: int | None = None
summary() str[source]

Generate progress summary.

Return type:

str

class ProgressMonitor(total_iterations: int, check_intervals: List[int] | None = None, update_frequency: int = 1000, show_console: bool = True, convergence_threshold: float = 1.1)[source]

Bases: object

Lightweight progress monitor for Monte Carlo simulations.

Provides real-time progress tracking with minimal performance overhead (<1%). Includes ETA estimation, convergence monitoring, and console output.

iteration_times: List[float]
convergence_checks: List[Tuple[int, float]]
converged_at: int | None
update(iteration: int, convergence_value: float | None = None) bool[source]

Update progress and check for convergence.

Parameters:
  • iteration (int) – Current iteration number

  • convergence_value (Optional[float]) – Optional convergence metric (e.g., R-hat)

Return type:

bool

Returns:

True if should continue, False if converged and should stop

get_stats() ProgressStats[source]

Get current progress statistics.

Return type:

ProgressStats

Returns:

ProgressStats object with current metrics

generate_convergence_summary() Dict[str, Any][source]

Generate detailed convergence summary.

Return type:

Dict[str, Any]

Returns:

Dictionary with convergence analysis results

finish() ProgressStats[source]

Finish progress monitoring and return final stats.

Return type:

ProgressStats

Returns:

Final progress statistics

get_overhead_percentage() float[source]

Get the monitoring overhead as a percentage of total elapsed time.

Return type:

float

Returns:

Overhead percentage (0-100)

reset() None[source]

Reset the monitor to initial state.

Return type:

None

__enter__() ProgressMonitor[source]

Enter context manager.

Return type:

ProgressMonitor

__exit__(exc_type, exc_val, exc_tb) None[source]

Exit context manager and finish monitoring.

Return type:

None

finalize() None[source]

Finalize progress monitoring and print summary.

Return type:

None
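
A minimal context-manager sketch. The synthetic R-hat series is illustrative, and whether every update() call evaluates convergence or only those at configured check intervals is implementation-defined:

from ergodic_insurance.progress_monitor import ProgressMonitor

n_iter = 100_000
with ProgressMonitor(total_iterations=n_iter, update_frequency=1000, show_console=False) as monitor:
    for i in range(n_iter):
        r_hat = 1.0 + 10.0 / (i + 1)  # synthetic convergence metric approaching 1.0
        if not monitor.update(i, convergence_value=r_hat):
            break  # converged; stop early
print(monitor.get_stats().summary())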

ergodic_insurance.result_aggregator module

Advanced result aggregation framework for Monte Carlo simulations.

This module provides comprehensive aggregation capabilities for simulation results, supporting hierarchical aggregation, time-series analysis, and memory-efficient processing of large datasets.

class AggregationConfig(percentiles: List[float] = <factory>, calculate_moments: bool = True, calculate_distribution_fit: bool = False, chunk_size: int = 10000, cache_results: bool = True, precision: int = 6) None[source]

Bases: object

Configuration for result aggregation.

percentiles: List[float]
calculate_moments: bool = True
calculate_distribution_fit: bool = False
chunk_size: int = 10000
cache_results: bool = True
precision: int = 6
class BaseAggregator(config: AggregationConfig | None = None)[source]

Bases: ABC

Abstract base class for result aggregation.

Provides common functionality for all aggregation types.

abstractmethod aggregate(data: ndarray) Dict[str, Any][source]

Perform aggregation on data.

Parameters:

data (ndarray) – Input data array

Return type:

Dict[str, Any]

Returns:

Dictionary of aggregated statistics

class ResultAggregator(config: AggregationConfig | None = None, custom_functions: Dict[str, Callable] | None = None)[source]

Bases: BaseAggregator

Main aggregator for simulation results.

Provides comprehensive aggregation of Monte Carlo simulation results with support for custom aggregation functions.

aggregate(data: ndarray) Dict[str, Any][source]

Aggregate simulation results.

Parameters:

data (ndarray) – Array of simulation results

Return type:

Dict[str, Any]

Returns:

Dictionary containing all aggregated statistics
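
A minimal sketch (the exact keys of the returned statistics dictionary are implementation-defined):

import numpy as np
from ergodic_insurance.result_aggregator import AggregationConfig, ResultAggregator

config = AggregationConfig(percentiles=[50.0, 95.0, 99.0], calculate_moments=True)
aggregator = ResultAggregator(config=config)

rng = np.random.default_rng(7)
data = rng.normal(loc=0.08, scale=0.03, size=50_000)  # e.g. simulated annual ROE
print(aggregator.aggregate(data))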

class TimeSeriesAggregator(config: AggregationConfig | None = None, window_size: int = 12)[source]

Bases: BaseAggregator

Aggregator for time-series data.

Supports annual, cumulative, and rolling window aggregations.

aggregate(data: ndarray) Dict[str, Any][source]

Aggregate time-series data.

Parameters:

data (ndarray) – 2D array where rows are time periods and columns are simulations

Return type:

Dict[str, Any]

Returns:

Dictionary of time-series aggregations

class PercentileTracker(percentiles: List[float], max_samples: int = 100000, seed: int | None = None)[source]

Bases: object

Efficient percentile tracking for streaming data.

Uses the t-digest algorithm (Dunning & Ertl, 2019) for memory-efficient percentile calculation on large datasets. The t-digest provides bounded memory usage and high accuracy, especially at tail percentiles relevant to insurance risk metrics (VaR, TVaR).

update(values: ndarray) None[source]

Update tracker with new values.

Parameters:

values (ndarray) – New values to add

Return type:

None

get_percentiles() Dict[str, float][source]

Get current percentile estimates.

Return type:

Dict[str, float]

Returns:

Dictionary of percentile values keyed as ‘pNN’.

merge(other: PercentileTracker) None[source]

Merge another tracker into this one.

Combines t-digest sketches from parallel simulation chunks without loss of accuracy.

Parameters:

other (PercentileTracker) – Another PercentileTracker to merge into this one.

Return type:

None

reset() None[source]

Reset tracker state.

Return type:

None
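
A streaming sketch: chunks are fed to the tracker one at a time, so the full dataset never needs to be held in memory:

import numpy as np
from ergodic_insurance.result_aggregator import PercentileTracker

tracker = PercentileTracker(percentiles=[50.0, 95.0, 99.0], seed=42)
rng = np.random.default_rng(0)
for _ in range(10):  # ten chunks of 10,000 simulated losses
    tracker.update(rng.lognormal(12.0, 1.2, size=10_000))
print(tracker.get_percentiles())  # keys like 'p50', 'p95', 'p99'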

class ResultExporter[source]

Bases: object

Export aggregated results to various formats.

static to_csv(results: Dict[str, Any], filepath: Path, index_label: str = 'metric') None[source]

Export results to CSV file.

Parameters:
  • results (Dict[str, Any]) – Aggregated results dictionary

  • filepath (Path) – Output file path

  • index_label (str) – Label for index column

Return type:

None

static to_json(results: Dict[str, Any], filepath: Path, indent: int = 2) None[source]

Export results to JSON file.

Parameters:
  • results (Dict[str, Any]) – Aggregated results dictionary

  • filepath (Path) – Output file path

  • indent (int) – JSON indentation level

Return type:

None

static to_hdf5(results: Dict[str, Any], filepath: Path, compression: str = 'gzip') None[source]

Export results to HDF5 file.

Parameters:
  • results (Dict[str, Any]) – Aggregated results dictionary

  • filepath (Path) – Output file path

  • compression (str) – Compression algorithm to use

Return type:

None

class HierarchicalAggregator(levels: List[str], config: AggregationConfig | None = None)[source]

Bases: object

Aggregator for hierarchical data structures.

Supports multi-level aggregation across different dimensions (e.g., scenario -> year -> simulation).

aggregate_hierarchy(data: Dict[str, Any], level: int = 0) Dict[str, Any][source]

Recursively aggregate hierarchical data.

Parameters:
  • data (Dict[str, Any]) – Hierarchical data dictionary

  • level (int) – Current level in hierarchy

Return type:

Dict[str, Any]

Returns:

Aggregated results at all levels

ergodic_insurance.risk_metrics module

Comprehensive risk metrics suite for tail risk analysis.

This module provides industry-standard risk metrics including VaR, TVaR, PML, and Expected Shortfall to quantify tail risk and support insurance optimization decisions.

class RiskMetricsResult(metric_name: str, value: float, confidence_level: float | None = None, confidence_interval: Tuple[float, float] | None = None, metadata: Dict[str, Any] | None = None) None[source]

Bases: object

Container for risk metric calculation results.

metric_name: str
value: float
confidence_level: float | None = None
confidence_interval: Tuple[float, float] | None = None
metadata: Dict[str, Any] | None = None
class RiskMetrics(losses: ndarray, weights: ndarray | None = None, seed: int | None = None)[source]

Bases: object

Calculate comprehensive risk metrics for loss distributions.

This class provides industry-standard risk metrics for analyzing tail risk in insurance and financial applications.

var(confidence: float = 0.99, method: str = 'empirical', bootstrap_ci: bool = False, n_bootstrap: int = 1000) float | RiskMetricsResult[source]

Calculate Value at Risk (VaR).

VaR represents the loss amount that will not be exceeded with a given confidence level over a specific time period.

Parameters:
  • confidence (float) – Confidence level (e.g., 0.99 for 99% VaR).

  • method (str) – ‘empirical’ or ‘parametric’ (assumes normal distribution).

  • bootstrap_ci (bool) – Whether to calculate bootstrap confidence intervals.

  • n_bootstrap (int) – Number of bootstrap samples for CI calculation.

Return type:

Union[float, RiskMetricsResult]

Returns:

VaR value or RiskMetricsResult with confidence intervals.

Raises:

ValueError – If confidence level is not in (0, 1).

tvar(confidence: float = 0.99, var_value: float | None = None) float[source]

Calculate Tail Value at Risk (TVaR/CVaR).

TVaR represents the expected loss given that the loss exceeds VaR. It’s a coherent risk measure that satisfies sub-additivity.

Parameters:
  • confidence (float) – Confidence level for VaR threshold.

  • var_value (Optional[float]) – Pre-calculated VaR value (if None, will calculate).

Return type:

float

Returns:

TVaR value.

expected_shortfall(threshold: float) float[source]

Calculate Expected Shortfall (ES) above a threshold.

ES is the average of all losses that exceed a given threshold. Delegates to tvar() with a pre-computed VaR value.

Parameters:

threshold (float) – Loss threshold.

Return type:

float

Returns:

Expected shortfall value, or 0.0 if no losses exceed threshold.

pml(return_period: int) float[source]

Calculate Probable Maximum Loss (PML) for a given return period.

PML represents the loss amount expected to be equaled or exceeded once every ‘return_period’ years on average.

Parameters:

return_period (int) – Return period in years (e.g., 100 for 100-year event).

Return type:

float

Returns:

PML value.

Raises:

ValueError – If return period is less than 1.

conditional_tail_expectation(confidence: float = 0.99) float[source]

Calculate Conditional Tail Expectation (CTE).

CTE is closely related to TVaR, differing only slightly in how the tail average is computed: it is the expected value of losses that exceed the VaR threshold.

Parameters:

confidence (float) – Confidence level.

Return type:

float

Returns:

CTE value.

maximum_drawdown() float[source]

Calculate Maximum Drawdown.

Maximum drawdown measures the largest peak-to-trough decline in cumulative value.

Return type:

float

Returns:

Maximum drawdown value.

economic_capital(confidence: float = 0.999, expected_loss: float | None = None) float[source]

Calculate Economic Capital requirement.

Economic capital is the amount of capital needed to cover unexpected losses at a given confidence level.

Parameters:
  • confidence (float) – Confidence level (typically 99.9% for regulatory).

  • expected_loss (Optional[float]) – Expected loss (if None, will calculate mean).

Return type:

float

Returns:

Economic capital requirement.

return_period_curve(return_periods: ndarray | None = None) Tuple[ndarray, ndarray][source]

Generate return period curve (exceedance probability curve).

Parameters:

return_periods (Optional[ndarray]) – Array of return periods to calculate. If None, uses standard periods.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (return_periods, loss_values).

tail_index(threshold: float | None = None) float[source]

Estimate tail index using Hill estimator.

The tail index characterizes the heaviness of the tail. Lower values indicate heavier tails.

Parameters:

threshold (Optional[float]) – Threshold for tail definition (if None, uses 90th percentile).

Return type:

float

Returns:

Estimated tail index.

risk_adjusted_metrics(returns: ndarray | None = None, risk_free_rate: float = 0.02) Dict[str, float][source]

Calculate risk-adjusted return metrics.

Parameters:
  • returns (Optional[ndarray]) – Array of returns (if None, uses negative of losses).

  • risk_free_rate (float) – Risk-free rate for Sharpe ratio calculation.

Return type:

Dict[str, float]

Returns:

Dictionary of risk-adjusted metrics.

coherence_test() Dict[str, bool][source]

Test coherence properties of risk measures.

A coherent risk measure satisfies: 1. Monotonicity 2. Sub-additivity 3. Positive homogeneity 4. Translation invariance

Return type:

Dict[str, bool]

Returns:

Dictionary indicating which properties are satisfied.

summary_statistics() Dict[str, float][source]

Calculate comprehensive summary statistics.

Return type:

Dict[str, float]

Returns:

Dictionary of summary statistics.

plot_distribution(bins: int = 50, show_metrics: bool = True, confidence_levels: List[float] | None = None, figsize: Tuple[int, int] = (12, 8)) Figure[source]

Plot loss distribution with risk metrics overlay.

Parameters:
  • bins (int) – Number of bins for histogram.

  • show_metrics (bool) – Whether to show VaR and TVaR lines.

  • confidence_levels (Optional[List[float]]) – Confidence levels for metrics to show.

  • figsize (Tuple[int, int]) – Figure size.

Return type:

Figure

Returns:

Matplotlib figure object.

compare_risk_metrics(scenarios: Dict[str, ndarray], confidence_levels: List[float] | None = None) DataFrame[source]

Compare risk metrics across multiple scenarios.

Parameters:
  • scenarios (Dict[str, ndarray]) – Dictionary mapping scenario names to loss arrays.

  • confidence_levels (Optional[List[float]]) – Confidence levels to evaluate.

Return type:

DataFrame

Returns:

DataFrame with comparative metrics.
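
A minimal sketch on a synthetic loss sample (the lognormal parameters are illustrative):

import numpy as np
from ergodic_insurance.risk_metrics import RiskMetrics

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=13.0, sigma=1.5, size=100_000)

metrics = RiskMetrics(losses, seed=42)
var_99 = metrics.var(confidence=0.99)  # empirical 99% VaR
tvar_99 = metrics.tvar(confidence=0.99)  # expected loss beyond the 99% VaR
pml_100 = metrics.pml(return_period=100)  # 100-year probable maximum loss
print(f"VaR: {var_99:,.0f}  TVaR: {tvar_99:,.0f}  PML(100y): {pml_100:,.0f}")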

class ROEAnalyzer(roe_series: ndarray, equity_series: ndarray | None = None)[source]

Bases: object

Comprehensive ROE analysis framework.

This class provides specialized metrics and analysis tools for Return on Equity (ROE) calculations, including time-weighted averages, component breakdowns, and volatility analysis.

time_weighted_average() float[source]

Calculate time-weighted average ROE using geometric mean.

Time-weighted average gives equal weight to each period regardless of the equity level, providing a measure of consistent performance.

Return type:

float

Returns:

Time-weighted average ROE.

equity_weighted_average() float[source]

Calculate equity-weighted average ROE.

Equity-weighted average gives more weight to periods with higher equity levels, reflecting the actual dollar impact.

Return type:

float

Returns:

Equity-weighted average ROE.

rolling_statistics(window: int) Dict[str, ndarray][source]

Calculate rolling window statistics for ROE.

Parameters:

window (int) – Window size in periods.

Return type:

Dict[str, ndarray]

Returns:

Dictionary with rolling mean, std, min, max arrays.

volatility_metrics() Dict[str, float][source]

Calculate comprehensive volatility metrics for ROE.

Return type:

Dict[str, float]

Returns:

Dictionary with volatility measures.

performance_ratios(risk_free_rate: float = 0.02) Dict[str, float][source]

Calculate performance ratios for ROE.

Parameters:

risk_free_rate (float) – Risk-free rate for Sharpe/Sortino calculations.

Return type:

Dict[str, float]

Returns:

Dictionary with performance ratios.

distribution_analysis() Dict[str, float][source]

Analyze the distribution of ROE values.

Return type:

Dict[str, float]

Returns:

Dictionary with distribution statistics.

stability_analysis(periods: List[int] | None = None) Dict[str, Any][source]

Analyze ROE stability across different time periods.

Parameters:

periods (Optional[List[int]]) – List of period lengths to analyze (default: [1, 3, 5, 10]).

Return type:

Dict[str, Any]

Returns:

Dictionary with stability metrics for each period.
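
A minimal sketch (the ROE series is illustrative):

import numpy as np
from ergodic_insurance.risk_metrics import ROEAnalyzer

roe_series = np.array([0.12, 0.08, -0.03, 0.15, 0.10, 0.07])
analyzer = ROEAnalyzer(roe_series)
print(analyzer.time_weighted_average())  # geometric-mean ROE across periods
print(analyzer.volatility_metrics())
print(analyzer.performance_ratios(risk_free_rate=0.02))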

ergodic_insurance.ruin_probability module

Ruin probability analysis for insurance optimization.

This module provides specialized classes and methods for analyzing bankruptcy and ruin probabilities in insurance scenarios.

class RuinProbabilityConfig(time_horizons: List[int] = <factory>, n_simulations: int = 10000, min_assets_threshold: float = 1000000, min_equity_threshold: float = 0.0, debt_service_coverage_ratio: float = 1.25, consecutive_negative_periods: int = 3, early_stopping: bool = True, parallel: bool = True, n_workers: int | None = None, seed: int | None = None, n_bootstrap: int = 1000, bootstrap_confidence_level: float = 0.95) None[source]

Bases: object

Configuration for ruin probability analysis.

time_horizons: List[int]
n_simulations: int = 10000
min_assets_threshold: float = 1000000
min_equity_threshold: float = 0.0
debt_service_coverage_ratio: float = 1.25
consecutive_negative_periods: int = 3
early_stopping: bool = True
parallel: bool = True
n_workers: int | None = None
seed: int | None = None
n_bootstrap: int = 1000
bootstrap_confidence_level: float = 0.95
class RuinProbabilityResults(time_horizons: ndarray, ruin_probabilities: ndarray, confidence_intervals: ndarray, bankruptcy_causes: Dict[str, ndarray], survival_curves: ndarray, execution_time: float, n_simulations: int, convergence_achieved: bool, mid_year_ruin_count: int = 0, ruin_month_distribution: Dict[int, int] | None = None) None[source]

Bases: object

Results from ruin probability analysis.

time_horizons

Array of time horizons analyzed (in years).

ruin_probabilities

Probability of ruin at each time horizon.

confidence_intervals

Bootstrap confidence intervals for probabilities.

bankruptcy_causes

Distribution of bankruptcy causes by horizon.

survival_curves

Survival probability curves over time.

execution_time

Total execution time in seconds.

n_simulations

Number of simulations run.

convergence_achieved

Whether convergence criteria were met.

mid_year_ruin_count

Number of simulations with mid-year ruin (Issue #279).

ruin_month_distribution

Distribution of ruin events by month (0-11).

time_horizons: ndarray
ruin_probabilities: ndarray
confidence_intervals: ndarray
bankruptcy_causes: Dict[str, ndarray]
survival_curves: ndarray
execution_time: float
n_simulations: int
convergence_achieved: bool
mid_year_ruin_count: int = 0
ruin_month_distribution: Dict[int, int] | None = None
summary() str[source]

Generate summary report.

Return type:

str

class RuinProbabilityAnalyzer(manufacturer, loss_generator, insurance_program, config)[source]

Bases: object

Analyzer for ruin probability calculations.

analyze_ruin_probability(config: RuinProbabilityConfig | None = None) RuinProbabilityResults[source]

Analyze ruin probability across multiple time horizons.

Parameters:

config (Optional[RuinProbabilityConfig]) – Configuration for analysis

Return type:

RuinProbabilityResults

Returns:

RuinProbabilityResults with analysis results
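
A minimal sketch; manufacturer, loss_generator, and insurance_program are assumed to be instances constructed elsewhere in the package:

from ergodic_insurance.ruin_probability import (
    RuinProbabilityAnalyzer,
    RuinProbabilityConfig,
)

config = RuinProbabilityConfig(time_horizons=[5, 10, 20], n_simulations=10_000, seed=42)
analyzer = RuinProbabilityAnalyzer(manufacturer, loss_generator, insurance_program, config)
results = analyzer.analyze_ruin_probability()
print(results.summary())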

ergodic_insurance.safe_pickle module

Safe pickle serialization with HMAC integrity validation.

This module provides HMAC-signed pickle operations to prevent arbitrary code execution from tampered cache files. All file-based pickle operations in the codebase should use these functions instead of raw pickle.load/pickle.dump.

The HMAC key is stored in a .pickle_hmac_key file within the cache directory (or a default location). Files written with safe_dump can only be loaded by safe_load if the HMAC signature matches, preventing deserialization of untrusted data.

Also provides deterministic_hash() as a replacement for Python’s non-deterministic built-in hash() function.

safe_dump(obj: Any, f, protocol: int = 5, key_dir: Path | None = None) None[source]

Pickle dump with HMAC signature prepended.

Parameters:
  • obj (Any) – Object to serialize

  • f – Writable binary file object

  • protocol (int) – Pickle protocol version

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

None

safe_load(f, key_dir: Path | None = None) Any[source]

Pickle load with HMAC verification.

Parameters:
  • f – Readable binary file object

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

Any

Returns:

Deserialized object

Raises:

ValueError – If HMAC verification fails or file is too short

safe_dumps(obj: Any, protocol: int = 5, key_dir: Path | None = None) bytes[source]

Pickle dumps with HMAC signature prepended.

Parameters:
  • obj (Any) – Object to serialize

  • protocol (int) – Pickle protocol version

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

bytes

Returns:

HMAC signature + pickled bytes

safe_loads(data: bytes, key_dir: Path | None = None) Any[source]

Pickle loads with HMAC verification.

Parameters:
  • data (bytes) – HMAC signature + pickled bytes

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

Any

Returns:

Deserialized object

Raises:

ValueError – If HMAC verification fails or data is too short

deterministic_hash(*args: str, length: int = 16) str[source]

Generate a deterministic hash from string arguments.

Uses SHA-256 instead of Python’s non-deterministic hash(). This produces the same result across process restarts regardless of PYTHONHASHSEED.

Parameters:
  • *args (str) – String values to hash

  • length (int) – Number of hex characters to return (max 64)

Return type:

str

Returns:

Hex digest string of specified length
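
A minimal round-trip sketch:

from pathlib import Path
from ergodic_insurance.safe_pickle import deterministic_hash, safe_dump, safe_load

cache_dir = Path("./cache")
cache_dir.mkdir(parents=True, exist_ok=True)

with open(cache_dir / "results.pkl", "wb") as f:
    safe_dump({"mean_roe": 0.11}, f, key_dir=cache_dir)
with open(cache_dir / "results.pkl", "rb") as f:
    data = safe_load(f, key_dir=cache_dir)  # raises ValueError if the file was tampered with

cache_key = deterministic_hash("scenario_a", "horizon_10", length=16)  # stable across runs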

ergodic_insurance.scenario_manager module

Scenario management system for batch processing simulations.

This module provides a framework for managing multiple simulation scenarios, parameter sweeps, and configuration variations for comprehensive analysis.

class ScenarioType(*values)[source]

Bases: Enum

Types of scenario generation methods.

SINGLE = 'single'
CUSTOM = 'custom'
SENSITIVITY = 'sensitivity'
class ParameterSpec(**data: Any) None[source]

Bases: BaseModel

Specification for parameter variations in scenarios.

name

Parameter name (dot notation for nested params)

values

List of values for grid search

min_value

Minimum value for random search

max_value

Maximum value for random search

n_samples

Number of samples for random search

distribution

Distribution type for random sampling

base_value

Base value for sensitivity analysis

variation_pct

Percentage variation for sensitivity

name: str
values: List[Any] | None
min_value: float | None
max_value: float | None
n_samples: int
distribution: str
base_value: Any | None
variation_pct: float
classmethod validate_name(v: str) str[source]

Validate parameter name format.

Return type:

str

generate_values(method: ScenarioType, rng: Generator | None = None) List[Any][source]

Generate parameter values based on method.

Parameters:
  • method (ScenarioType) – Scenario generation method

  • rng (Optional[Generator]) – Random number generator for sampling

Return type:

List[Any]

Returns:

List of parameter values

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to pydantic.config.ConfigDict.

class ScenarioConfig(scenario_id: str, name: str, description: str = '', base_config: Config | None = None, simulation_config: SimulationConfig | None = None, parameter_overrides: Dict[str, Any] = <factory>, tags: Set[str] = <factory>, priority: int = 100, created_at: datetime = <factory>, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Configuration for a single scenario.

scenario_id: str
name: str
description: str = ''
base_config: Config | None = None
simulation_config: SimulationConfig | None = None
parameter_overrides: Dict[str, Any]
tags: Set[str]
priority: int = 100
created_at: datetime
metadata: Dict[str, Any]
__post_init__()[source]

Initialize scenario with defaults.

generate_id() str[source]

Generate unique scenario ID from configuration.

Return type:

str

Returns:

Unique scenario identifier

apply_overrides(config: Any) Any[source]

Apply parameter overrides to configuration.

Parameters:

config (Any) – Configuration object to modify

Return type:

Any

Returns:

Modified configuration

to_dict() Dict[str, Any][source]

Convert scenario to dictionary representation.

Return type:

Dict[str, Any]

Returns:

Dictionary representation

class ScenarioManager[source]

Bases: object

Manager for creating and organizing simulation scenarios.

scenarios: List[ScenarioConfig]
scenario_index: Dict[str, ScenarioConfig]
create_scenario(name: str, base_config: Config | None = None, simulation_config: SimulationConfig | None = None, parameter_overrides: Dict[str, Any] | None = None, description: str = '', tags: Set[str] | None = None, priority: int = 100) ScenarioConfig[source]

Create a single scenario.

Parameters:
  • name (str) – Scenario name

  • base_config (Optional[Config]) – Base configuration

  • simulation_config (Optional[SimulationConfig]) – Simulation configuration

  • parameter_overrides (Optional[Dict[str, Any]]) – Parameter overrides (dot notation for nested parameters)

  • description (str) – Optional description

  • tags (Optional[Set[str]]) – Tags for filtering scenarios

  • priority (int) – Priority value for ordering

Return type:

ScenarioConfig

Returns:

Created scenario configuration

add_scenario(scenario: ScenarioConfig) None[source]

Add scenario to manager.

Parameters:

scenario (ScenarioConfig) – Scenario to add

Return type:

None

Create scenarios for grid search over parameters.

Parameters:
Return type:

List[ScenarioConfig]

Returns:

List of created scenarios

Create scenarios for random search over parameters.

Parameters:
Return type:

List[ScenarioConfig]

Returns:

List of created scenarios

create_sensitivity_analysis(base_name: str, parameter_specs: List[ParameterSpec], base_config: Config | None = None, simulation_config: SimulationConfig | None = None, tags: Set[str] | None = None) List[ScenarioConfig][source]

Create scenarios for sensitivity analysis.

Parameters:
  • base_name (str) – Base name for generated scenarios

  • parameter_specs (List[ParameterSpec]) – Parameter specifications to vary

  • base_config (Optional[Config]) – Base configuration

  • simulation_config (Optional[SimulationConfig]) – Simulation configuration

  • tags (Optional[Set[str]]) – Tags applied to created scenarios

Return type:

List[ScenarioConfig]

Returns:

List of created scenarios

get_scenarios_by_tag(tag: str) List[ScenarioConfig][source]

Get scenarios with specific tag.

Parameters:

tag (str) – Tag to filter by

Return type:

List[ScenarioConfig]

Returns:

List of matching scenarios

get_scenarios_by_priority(max_priority: int = 100) List[ScenarioConfig][source]

Get scenarios up to priority threshold.

Parameters:

max_priority (int) – Maximum priority value (inclusive)

Return type:

List[ScenarioConfig]

Returns:

Sorted list of scenarios

clear_scenarios() None[source]

Clear all scenarios.

Return type:

None

export_scenarios(path: str | Path) None[source]

Export scenarios to JSON file.

Parameters:

path (Union[str, Path]) – Output file path

Return type:

None

import_scenarios(path: str | Path) None[source]

Import scenarios from JSON file.

Parameters:

path (Union[str, Path]) – Input file path

Return type:

None
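
A minimal sketch. The dot-notation override path is illustrative, and whether create_scenario also registers the scenario with the manager is an assumption here:

from ergodic_insurance.scenario_manager import ScenarioManager

manager = ScenarioManager()
scenario = manager.create_scenario(
    name="high_retention",
    parameter_overrides={"insurance.deductible": 500_000},  # dot-notation path, illustrative
    tags={"retention_study"},
    priority=10,
)
for s in manager.get_scenarios_by_tag("retention_study"):
    print(s.scenario_id, s.name)
manager.export_scenarios("scenarios.json")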

ergodic_insurance.sensitivity module

Comprehensive sensitivity analysis tools for insurance optimization.

This module provides tools for analyzing how changes in key parameters affect optimization results, including one-at-a-time (OAT) analysis, tornado diagrams, and two-way sensitivity analysis with efficient caching.

Example

Basic sensitivity analysis for a single parameter:

from ergodic_insurance.sensitivity import SensitivityAnalyzer
from ergodic_insurance.business_optimizer import BusinessOptimizer
from ergodic_insurance.manufacturer import WidgetManufacturer

# Setup optimizer
manufacturer = WidgetManufacturer(initial_assets=10_000_000)
optimizer = BusinessOptimizer(manufacturer)

# Run sensitivity analysis
analyzer = SensitivityAnalyzer(base_config, optimizer)
result = analyzer.analyze_parameter(
    "frequency",
    param_range=(3, 8),
    n_points=11
)

# Generate tornado diagram
tornado_data = analyzer.create_tornado_diagram(
    parameters=["frequency", "severity_mean", "premium_rate"],
    metric="optimal_roe"
)

Author:

Alex Filiakov

Date:

2025-01-29

class SensitivityResult(parameter: str, baseline_value: float, variations: ndarray, metrics: Dict[str, ndarray], parameter_path: str | None = None, units: str | None = None) None[source]

Bases: object

Results from sensitivity analysis for a single parameter.

parameter

Name of the parameter being analyzed

baseline_value

Original value of the parameter

variations

Array of parameter values tested

metrics

Dictionary of metric arrays for each variation

parameter_path

Nested path to parameter (e.g., “manufacturer.base_operating_margin”)

units

Optional units for the parameter (e.g., “percentage”, “dollars”)

parameter: str
baseline_value: float
variations: ndarray
metrics: Dict[str, ndarray]
parameter_path: str | None = None
units: str | None = None
calculate_impact(metric: str) float[source]

Calculate standardized impact on a specific metric.

The impact is calculated as the elasticity of the metric with respect to the parameter, normalized by the baseline values.

Parameters:

metric (str) – Name of the metric to calculate impact for

Return type:

float

Returns:

Standardized impact coefficient (elasticity)

Raises:

KeyError – If metric not found in results

get_metric_bounds(metric: str) Tuple[float, float][source]

Get the minimum and maximum values for a metric.

Parameters:

metric (str) – Name of the metric

Return type:

Tuple[float, float]

Returns:

Tuple of (min_value, max_value)

Raises:

KeyError – If metric not found in results

to_dataframe() DataFrame[source]

Convert results to a pandas DataFrame.

Return type:

DataFrame

Returns:

DataFrame with variations and all metrics

class TwoWaySensitivityResult(parameter1: str, parameter2: str, values1: ndarray, values2: ndarray, metric_grid: ndarray, metric_name: str) None[source]

Bases: object

Results from two-way sensitivity analysis.

parameter1

Name of first parameter

parameter2

Name of second parameter

values1

Array of values for first parameter

values2

Array of values for second parameter

metric_grid

2D array of metric values [len(values1), len(values2)]

metric_name

Name of the metric analyzed

parameter1: str
parameter2: str
values1: ndarray
values2: ndarray
metric_grid: ndarray
metric_name: str
find_optimal_region(target_value: float, tolerance: float = 0.05) ndarray[source]

Find parameter combinations that achieve target metric value.

Parameters:
  • target_value (float) – Target value for the metric

  • tolerance (float) – Relative tolerance for matching (default 5%)

Return type:

ndarray

Returns:

Boolean mask array indicating satisfactory regions

to_dataframe() DataFrame[source]

Convert to DataFrame for easier manipulation.

Return type:

DataFrame

Returns:

DataFrame with multi-index for parameters and metric values

class SensitivityAnalyzer(base_config: Dict[str, Any], optimizer: Any, cache_dir: Path | None = None)[source]

Bases: object

Comprehensive sensitivity analysis tools for optimization.

This class provides methods for analyzing how parameter changes affect optimization outcomes, with built-in caching for efficiency.

base_config

Base configuration dictionary

optimizer

Optimizer object with an optimize() method

results_cache

Cache for optimization results

cache_dir

Directory for persistent cache storage

results_cache: Dict[str, Any]
analyze_parameter(param_name: str, param_range: Tuple[float, float] | None = None, n_points: int = 11, param_path: str | None = None, relative_range: float = 0.3) SensitivityResult[source]

Analyze sensitivity to a single parameter.

Parameters:
  • param_name (str) – Name of parameter to analyze

  • param_range (Optional[Tuple[float, float]]) – (min, max) range for parameter values

  • n_points (int) – Number of points to evaluate

  • param_path (Optional[str]) – Nested path to parameter (e.g., “manufacturer.tax_rate”)

  • relative_range (float) – If param_range not provided, use ±relative_range from baseline

Return type:

SensitivityResult

Returns:

SensitivityResult with analysis results

Raises:

KeyError – If parameter not found in base configuration

create_tornado_diagram(parameters: List[str | Tuple[str, str]], metric: str = 'optimal_roe', relative_range: float = 0.3, n_points: int = 11) DataFrame[source]

Create tornado diagram data for parameter impacts.

Parameters:
  • parameters (List[Union[str, Tuple[str, str]]]) – List of parameter names or (name, path) tuples

  • metric (str) – Metric to analyze

  • relative_range (float) – Relative range for parameter variations

  • n_points (int) – Number of points for analysis

Returns:

  • parameter: Parameter name

  • impact: Absolute impact value

  • direction: “positive” or “negative”

  • low_value: Metric value at parameter minimum

  • high_value: Metric value at parameter maximum

  • baseline: Metric value at baseline

  • baseline_param: Baseline parameter value

Return type:

DataFrame sorted by impact magnitude with columns

analyze_two_way(param1: str | Tuple[str, str], param2: str | Tuple[str, str], param1_range: Tuple[float, float] | None = None, param2_range: Tuple[float, float] | None = None, n_points1: int = 10, n_points2: int = 10, metric: str = 'optimal_roe', relative_range: float = 0.3) TwoWaySensitivityResult[source]

Perform two-way sensitivity analysis.

Parameters:
  • param1 (Union[str, Tuple[str, str]]) – First parameter name or (name, path) tuple

  • param2 (Union[str, Tuple[str, str]]) – Second parameter name or (name, path) tuple

  • param1_range (Optional[Tuple[float, float]]) – Range for first parameter

  • param2_range (Optional[Tuple[float, float]]) – Range for second parameter

  • n_points1 (int) – Number of points for first parameter

  • n_points2 (int) – Number of points for second parameter

  • metric (str) – Metric to analyze

  • relative_range (float) – Relative range if explicit ranges not provided

Return type:

TwoWaySensitivityResult

Returns:

TwoWaySensitivityResult with grid of metric values
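Example

A usage sketch with placeholder parameter names and paths:

result = analyzer.analyze_two_way(
    ("deductible", "insurance.deductible"),
    ("limit", "insurance.limit"),
    n_points1=10,
    n_points2=10,
    metric="optimal_roe",
)
print(result.metric_grid.shape)  # (10, 10)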

clear_cache() None[source]

Clear all cached results.

Return type:

None

analyze_parameter_group(parameter_group: Dict[str, Tuple[float, float]], n_points: int = 11, metric: str = 'optimal_roe') Dict[str, SensitivityResult][source]

Analyze sensitivity for a group of parameters.

Parameters:
  • parameter_group (Dict[str, Tuple[float, float]]) – Dictionary of parameter names to (min, max) ranges

  • n_points (int) – Number of points for each parameter

  • metric (str) – Primary metric for analysis

Return type:

Dict[str, SensitivityResult]

Returns:

Dictionary of parameter names to SensitivityResult objects
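Example

A usage sketch with placeholder ranges; the returned dictionary can be passed to plot_sensitivity_matrix() documented below:

group_results = analyzer.analyze_parameter_group(
    {"tax_rate": (0.15, 0.35), "base_premium_rate": (0.01, 0.05)},
    n_points=11,
    metric="optimal_roe",
)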

ergodic_insurance.sensitivity_visualization module

Visualization utilities for sensitivity analysis results.

This module provides publication-ready visualization functions for sensitivity analysis results, including tornado diagrams, two-way sensitivity heatmaps, and parameter impact charts.

Example

Creating a tornado diagram:

from ergodic_insurance.sensitivity_visualization import plot_tornado_diagram

# Assuming tornado_data is a DataFrame from SensitivityAnalyzer
fig = plot_tornado_diagram(
    tornado_data,
    title="Parameter Sensitivity Analysis",
    metric_label="ROE Impact"
)
fig.savefig("tornado_diagram.png", dpi=300, bbox_inches='tight')

Author: Alex Filiakov
Date: 2025-01-29

plot_tornado_diagram(tornado_data: DataFrame, title: str = 'Sensitivity Analysis - Tornado Diagram', metric_label: str = 'Impact on Objective', figsize: Tuple[float, float] = (10, 6), n_params: int | None = None, color_positive: str = '#2E7D32', color_negative: str = '#C62828', show_values: bool = True) Figure[source]

Create a tornado diagram for sensitivity analysis results.

Parameters:
  • tornado_data (DataFrame) – DataFrame with columns: parameter, impact, direction, low_value, high_value, baseline

  • title (str) – Plot title

  • metric_label (str) – Label for the x-axis

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • n_params (Optional[int]) – Number of top parameters to show (None for all)

  • color_positive (str) – Color for positive impacts

  • color_negative (str) – Color for negative impacts

  • show_values (bool) – Whether to show numeric values on bars

Return type:

Figure

Returns:

Matplotlib Figure object

plot_two_way_sensitivity(result: TwoWaySensitivityResult, title: str | None = None, cmap: str = 'RdYlGn', figsize: Tuple[float, float] = (10, 8), show_contours: bool = True, contour_levels: int | None = 10, optimal_point: Tuple[float, float] | None = None, fmt: str = '.2f') Figure[source]

Create a heatmap for two-way sensitivity analysis.

Parameters:
  • result (TwoWaySensitivityResult) – TwoWaySensitivityResult object

  • title (Optional[str]) – Plot title (auto-generated if None)

  • cmap (str) – Colormap name

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • show_contours (bool) – Whether to show contour lines

  • contour_levels (Optional[int]) – Number of contour levels

  • optimal_point (Optional[Tuple[float, float]]) – Optional (param1_value, param2_value) to mark

  • fmt (str) – Format string for contour labels. Can be: - a new-style format spec like '.2f' or '.2%' - an old-style format like '%.2f' - a callable that takes a number and returns a string

Return type:

Figure

Returns:

Matplotlib Figure object
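Example

A plotting sketch, assuming result is a TwoWaySensitivityResult from SensitivityAnalyzer.analyze_two_way():

fig = plot_two_way_sensitivity(
    result,
    cmap="RdYlGn",
    show_contours=True,
    fmt=".2%",  # contour labels as percentages
)
fig.savefig("two_way_heatmap.png", dpi=300, bbox_inches="tight")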

plot_parameter_sweep(result: SensitivityResult, metrics: List[str] | None = None, title: str | None = None, figsize: Tuple[float, float] = (12, 8), normalize: bool = False, mark_baseline: bool = True) Figure[source]

Plot multiple metrics against parameter variations.

Parameters:
  • result (SensitivityResult) – SensitivityResult object

  • metrics (Optional[List[str]]) – List of metrics to plot (None for all)

  • title (Optional[str]) – Plot title (auto-generated if None)

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • normalize (bool) – Whether to normalize metrics to [0, 1]

  • mark_baseline (bool) – Whether to mark the baseline value

Return type:

Figure

Returns:

Matplotlib Figure object
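Example

A plotting sketch, assuming result is a SensitivityResult from SensitivityAnalyzer.analyze_parameter():

fig = plot_parameter_sweep(
    result,
    metrics=["optimal_roe"],
    normalize=False,
    mark_baseline=True,
)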

create_sensitivity_report(analyzer: SensitivityAnalyzer, parameters: List[str | Tuple[str, str]], output_dir: str | None = None, metric: str = 'optimal_roe', formats: List[str] | None = None) Dict[str, Any][source]

Generate a complete sensitivity analysis report.

Parameters:
  • analyzer (SensitivityAnalyzer) – SensitivityAnalyzer object with results

  • parameters (List[Union[str, Tuple[str, str]]]) – List of parameters to analyze

  • output_dir (Optional[str]) – Directory to save figures (None for no saving)

  • metric (str) – Primary metric for analysis

  • formats (Optional[List[str]]) – File formats to save figures in

Return type:

Dict[str, Any]

Returns:

Dictionary with generated figures and analysis summary
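Example

A report-generation sketch with placeholder parameter names:

report = create_sensitivity_report(
    analyzer,
    parameters=["tax_rate", "base_premium_rate"],
    output_dir="./sensitivity_report",
    metric="optimal_roe",
    formats=["png", "pdf"],
)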

plot_sensitivity_matrix(results: Dict[str, SensitivityResult], metric: str = 'optimal_roe', figsize: Tuple[float, float] = (12, 10), cmap: str = 'coolwarm', show_values: bool = True) Figure[source]

Create a matrix plot showing sensitivity across multiple parameters.

Parameters:
  • results (Dict[str, SensitivityResult]) – Dictionary of parameter names to SensitivityResult objects

  • metric (str) – Metric to display

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • cmap (str) – Colormap name

  • show_values (bool) – Whether to show numeric values in cells

Return type:

Figure

Returns:

Matplotlib Figure object
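Example

A plotting sketch, assuming group_results is the dictionary returned by SensitivityAnalyzer.analyze_parameter_group():

fig = plot_sensitivity_matrix(group_results, metric="optimal_roe", show_values=True)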

ergodic_insurance.setup module

ergodic_insurance.simulation module

Simulation engine for time evolution of widget manufacturer model.

This module provides the main simulation engine that orchestrates the time evolution of the widget manufacturer financial model, managing loss events, financial calculations, and result collection.

The simulation framework supports both single-path and Monte Carlo simulations, enabling comprehensive analysis of insurance strategies and business outcomes under uncertainty. It tracks detailed financial metrics, processes insurance claims, and handles bankruptcy conditions appropriately.

Key Features:
  • Single-path trajectory simulation with detailed metrics

  • Monte Carlo simulation support through integration

  • Insurance claim processing with policy application

  • Financial statement tracking and ROE calculation

  • Bankruptcy detection and proper termination

  • Comprehensive result analysis and export capabilities

Examples

Basic simulation:

from ergodic_insurance import Simulation, Config, WidgetManufacturer, ManufacturingLossGenerator

config = Config()
manufacturer = WidgetManufacturer(config.manufacturer)
loss_generator = ManufacturingLossGenerator.create_simple(
    frequency=0.1, severity_mean=5_000_000, seed=42
)

sim = Simulation(
    manufacturer=manufacturer,
    loss_generator=loss_generator,
    time_horizon=50
)
results = sim.run()

print(f"Mean ROE: {results.summary_stats()['mean_roe']:.2%}")

Note

This module is thread-safe for parallel Monte Carlo simulations when each thread has its own Simulation instance.

Since:

Version 0.1.0

class SimulationResults(years: ndarray, assets: ndarray, equity: ndarray, roe: ndarray, revenue: ndarray, net_income: ndarray, claim_counts: ndarray, claim_amounts: ndarray, insolvency_year: int | None = None) None[source]

Bases: object

Container for simulation trajectory data.

Holds the complete time series of financial metrics and events from a single simulation run, with methods for analysis and export.

This dataclass provides comprehensive storage for all simulation outputs and includes utility methods for calculating derived metrics, performing statistical analysis, and exporting data for further processing.

years

Array of simulation years (0 to time_horizon-1).

assets

Total assets at each year.

equity

Shareholder equity at each year.

roe

Return on equity for each year.

revenue

Annual revenue for each year.

net_income

Annual net income for each year.

claim_counts

Number of claims in each year.

claim_amounts

Total claim amount in each year.

insolvency_year

Year when bankruptcy occurred (None if survived).

Examples

Analyzing simulation results:

results = simulation.run()

# Get summary statistics
stats = results.summary_stats()
print(f"Survival: {stats['survived']}")
print(f"Mean ROE: {stats['mean_roe']:.2%}")

# Export to DataFrame
df = results.to_dataframe()
df.to_csv('simulation_results.csv')

# Calculate volatility metrics
volatility = results.calculate_roe_volatility()
print(f"ROE Sharpe Ratio: {volatility['roe_sharpe']:.2f}")

Note

All financial values are in nominal dollars without inflation adjustment. ROE calculations handle edge cases like zero equity appropriately.

years: ndarray
assets: ndarray
equity: ndarray
roe: ndarray
revenue: ndarray
net_income: ndarray
claim_counts: ndarray
claim_amounts: ndarray
insolvency_year: int | None = None
to_dataframe() DataFrame[source]

Convert simulation results to pandas DataFrame.

Returns:

DataFrame with columns for year, assets, equity, roe, revenue, net_income, claim_count, and claim_amount.

Return type:

pd.DataFrame

Examples

Export to Excel:

df = results.to_dataframe()
df.to_excel('results.xlsx', index=False)
calculate_time_weighted_roe() float[source]

Calculate time-weighted average ROE.

Time-weighted ROE gives equal weight to each period regardless of the equity level, providing a better measure of consistent performance over time. Uses geometric mean for proper compounding.

Returns:

Time-weighted average ROE as a decimal (e.g., 0.08 for 8%).

Return type:

float

Note

This method uses geometric mean of growth factors (1 + ROE) to properly account for compounding effects. NaN values are excluded from the calculation.

Examples

Compare different ROE measures:

simple_avg = np.mean(results.roe)
time_weighted = results.calculate_time_weighted_roe()
print(f"Simple average: {simple_avg:.2%}")
print(f"Time-weighted: {time_weighted:.2%}")
calculate_rolling_roe(window: int) ndarray[source]

Calculate rolling window ROE.

Parameters:

window (int) – Window size in years (e.g., 1, 3, 5). Must be positive and not exceed the data length.

Returns:

Array of rolling ROE values. Values are NaN for positions where the full window is not available.

Return type:

np.ndarray

Raises:

ValueError – If window size exceeds data length.

Examples

Calculate and plot rolling ROE:

rolling_3yr = results.calculate_rolling_roe(3)
plt.plot(results.years, rolling_3yr, label='3-Year Rolling ROE')
plt.axhline(y=0.08, color='r', linestyle='--', label='Target')
calculate_roe_components(base_operating_margin: float = 0.08, tax_rate: float = 0.25) Dict[str, ndarray][source]

Calculate ROE component breakdown.

Decomposes ROE into operating, insurance, and tax components using DuPont-style analysis. This helps identify the drivers of ROE performance and the impact of insurance decisions.

Parameters:
  • base_operating_margin (float) – Baseline operating margin for the business. Defaults to 0.08 (8%). Can be sourced from manufacturer.config.base_operating_margin.

  • tax_rate (float) – Corporate tax rate. Defaults to 0.25 (25%). Can be sourced from manufacturer.config.tax_rate.

Returns:

Dictionary containing:
  • 'operating_roe': Base business ROE without claims

  • 'insurance_impact': ROE reduction from claims/premiums

  • 'tax_effect': Impact of taxes on ROE

  • 'total_roe': Actual ROE for reference

Return type:

Dict[str, np.ndarray]

Note

This is a simplified decomposition. Actual implementation would require more detailed financial data for precise attribution.

Examples

Analyze ROE drivers:

components = results.calculate_roe_components()
operating_avg = np.mean(components['operating_roe'])
insurance_drag = np.mean(components['insurance_impact'])
print(f"Operating ROE: {operating_avg:.2%}")
print(f"Insurance drag: {insurance_drag:.2%}")

Using manufacturer config values:

components = results.calculate_roe_components(
    base_operating_margin=manufacturer.config.base_operating_margin,
    tax_rate=manufacturer.config.tax_rate,
)
calculate_roe_volatility() Dict[str, float][source]

Calculate ROE volatility metrics.

Computes various risk-adjusted performance metrics for ROE, including standard deviation, downside deviation, Sharpe ratio, and coefficient of variation.

Returns:

Dictionary containing:
  • 'roe_std': Standard deviation of ROE

  • 'roe_downside_deviation': Downside deviation from mean

  • 'roe_sharpe': Sharpe ratio using 2% risk-free rate

  • 'roe_coefficient_variation': Coefficient of variation (std/mean)

Return type:

Dict[str, float]

Note

Returns zeros for all metrics if insufficient data (< 2 observations). Sharpe ratio uses a 2% risk-free rate assumption.

Examples

Risk-adjusted performance analysis:

volatility = results.calculate_roe_volatility()
if volatility['roe_sharpe'] > 1.0:
    print("Strong risk-adjusted performance")
print(f"Downside risk: {volatility['roe_downside_deviation']:.2%}")
summary_stats() Dict[str, float][source]

Calculate summary statistics for the simulation.

Computes comprehensive summary statistics including ROE metrics, rolling averages, volatility measures, and survival indicators.

Returns:

Dictionary containing:
  • Basic ROE metrics (mean, std, median, time-weighted)

  • Rolling averages (1, 3, 5 year)

  • Final state (assets, equity)

  • Claims statistics (total, frequency)

  • Survival indicators (survived, insolvency_year)

  • Volatility metrics (from calculate_roe_volatility)

Return type:

Dict[str, float]

Examples

Generate summary report:

stats = results.summary_stats()

print("Performance Summary:")
print(f"  Mean ROE: {stats['mean_roe']:.2%}")
print(f"  Volatility: {stats['std_roe']:.2%}")
print(f"  Sharpe Ratio: {stats['roe_sharpe']:.2f}")

print("\nRisk Summary:")
print(f"  Survived: {stats['survived']}")
print(f"  Total Claims: ${stats['total_claims']:,.0f}")

class Simulation(manufacturer: WidgetManufacturer, loss_generator: ManufacturingLossGenerator | List[ManufacturingLossGenerator] | None = None, insurance_policy: InsurancePolicy | None = None, time_horizon: int = 100, seed: int | None = None, growth_rate: float = 0.0, letter_of_credit_rate: float = 0.015)[source]

Bases: object

Simulation engine for widget manufacturer time evolution.

The main simulation class that coordinates the time evolution of the widget manufacturer model, processing losses and tracking financial performance over the specified time horizon.

Supports both single-path and Monte Carlo simulations, with comprehensive tracking of financial metrics, loss events, and bankruptcy conditions.

Examples

Basic simulation setup and execution:

from ergodic_insurance.config import ManufacturerConfig
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
from ergodic_insurance.insurance import InsurancePolicy
from ergodic_insurance.simulation import Simulation

# Create manufacturer
config = ManufacturerConfig(initial_assets=10_000_000)
manufacturer = WidgetManufacturer(config)

# Create insurance policy
policy = InsurancePolicy(
    deductible=500_000,
    limit=5_000_000,
    premium_rate=0.02
)

# Run simulation
sim = Simulation(
    manufacturer=manufacturer,
    loss_generator=ManufacturingLossGenerator.create_simple(seed=42),
    insurance_policy=policy,
    time_horizon=10
)
results = sim.run()

# Analyze results
print(f"Mean ROE: {results.summary_stats()['mean_roe']:.2%}")
print(f"Survived: {results.insolvency_year is None}")

Running Monte Carlo simulation:

# Use MonteCarloEngine for multiple paths
monte_carlo = MonteCarloEngine(
    base_simulation=sim,
    n_simulations=1000,
    parallel=True
)
mc_results = monte_carlo.run()

print(f"Survival rate: {mc_results.survival_rate:.1%}")
print(f"95% VaR: ${mc_results.var_95:,.0f}")
manufacturer

The widget manufacturer being simulated

loss_generator

Generator for loss events

insurance_policy

Optional insurance coverage

time_horizon

Simulation duration in years

seed

Random seed for reproducibility

See also

SimulationResults: Container for simulation output
MonteCarloEngine: For running multiple simulation paths
WidgetManufacturer: The core financial model
ManufacturingLossGenerator: For generating loss events

insolvency_year: int | None
step_annual(year: int, losses: List[LossEvent]) Dict[str, Any][source]

Execute single annual time step.

Processes losses for the year, applies insurance coverage, updates manufacturer financial state, and returns metrics.

Parameters:
  • year (int) – Current simulation year (0-indexed).

  • losses (List[LossEvent]) – List of LossEvent objects for this year.

Returns:

Dictionary containing metrics:
  • All metrics from manufacturer.step()

  • 'claim_count': Number of losses this year

  • 'claim_amount': Total loss amount before insurance

  • 'company_payment': Amount paid by company after deductible

  • 'insurance_recovery': Amount recovered from insurance

Return type:

Dict[str, float]

Note

This method modifies the manufacturer state in-place. Insurance premiums are deducted from both assets and equity to maintain balance sheet integrity.

Side Effects:
  • Modifies manufacturer.assets and manufacturer.equity

  • Updates manufacturer internal state via step() method

run(progress_interval: int = 100) SimulationResults[source]

Run the full simulation over the specified time horizon.

Executes a complete simulation trajectory, processing claims each year, updating the manufacturer’s financial state, and tracking all metrics. The simulation terminates early if the manufacturer becomes insolvent.

Parameters:

progress_interval (int) – How often to log progress (in years). Set to 0 to disable progress logging. Useful for long simulations.

Returns:

SimulationResults object containing:
  • Complete time series of financial metrics

  • Claim history and amounts

  • ROE trajectory

  • Insolvency year (if bankruptcy occurred)

Return type:

SimulationResults

Examples

Run simulation with progress updates:

sim = Simulation(manufacturer, time_horizon=1000)
results = sim.run(progress_interval=100)  # Log every 100 years

# Check if company survived
if results.insolvency_year is not None:
    print(f"Bankruptcy in year {results.insolvency_year}")
else:
    print(f"Survived {len(results.years)} years")

Analyze simulation results:

results = sim.run()
df = results.to_dataframe()

# Plot equity evolution
import matplotlib.pyplot as plt
plt.plot(df['year'], df['equity'])
plt.xlabel('Year')
plt.ylabel('Equity ($)')
plt.title('Company Equity Over Time')
plt.show()

Note

The simulation uses pre-generated claims for efficiency. All claims are generated at the start based on the configured loss distributions and random seed.

See also

step_annual(): Single-year simulation step
SimulationResults: Output data structure

run_with_loss_data(loss_data: LossData, validate: bool = True, progress_interval: int = 100) SimulationResults[source]

Run simulation using standardized LossData.

Parameters:
  • loss_data (LossData) – Standardized loss data.

  • validate (bool) – Whether to validate loss data before running.

  • progress_interval (int) – How often to log progress.

Return type:

SimulationResults

Returns:

SimulationResults object with full trajectory.

get_trajectory() DataFrame[source]

Get simulation trajectory as pandas DataFrame.

This is a convenience method that runs the simulation if needed and returns the results as a DataFrame.

Return type:

DataFrame

Returns:

DataFrame with simulation trajectory.

classmethod run_monte_carlo(config: Config, insurance_policy: InsurancePolicy, n_scenarios: int = 10000, batch_size: int = 1000, n_jobs: int = 7, checkpoint_dir: Path | None = None, checkpoint_frequency: int = 5000, seed: int | None = None, resume: bool = True) Dict[str, Any][source]

Run Monte Carlo simulation using the MonteCarloEngine.

This is a convenience class method for running large-scale Monte Carlo simulations with the optimized engine.

Parameters:
  • config (Config) – Configuration object.

  • insurance_policy (InsurancePolicy) – Insurance policy to simulate.

  • n_scenarios (int) – Number of scenarios to run.

  • batch_size (int) – Scenarios per batch.

  • n_jobs (int) – Number of parallel jobs.

  • checkpoint_dir (Optional[Path]) – Directory for checkpoints.

  • checkpoint_frequency (int) – Save checkpoint every N scenarios.

  • seed (Optional[int]) – Random seed.

  • resume (bool) – Whether to resume from checkpoint.

Return type:

Dict[str, Any]

Returns:

Dictionary of Monte Carlo results and statistics.

classmethod compare_insurance_strategies(config: Config, insurance_policies: Dict[str, InsurancePolicy], n_scenarios: int = 1000, n_jobs: int = 7, seed: int | None = None) DataFrame[source]

Compare multiple insurance strategies via Monte Carlo.

Parameters:
  • config (Config) – Configuration object.

  • insurance_policies (Dict[str, InsurancePolicy]) – Dictionary of policy name to InsurancePolicy.

  • n_scenarios (int) – Scenarios per policy.

  • n_jobs (int) – Number of parallel jobs.

  • seed (Optional[int]) – Random seed.

Return type:

DataFrame

Returns:

DataFrame comparing results across strategies.
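Example

A comparison sketch; config and the two policies come from your own setup:

comparison = Simulation.compare_insurance_strategies(
    config=config,
    insurance_policies={"conservative": policy_a, "aggressive": policy_b},
    n_scenarios=1000,
    seed=42,
)
print(comparison)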

ergodic_insurance.statistical_tests module

Statistical hypothesis testing utilities for simulation results.

This module provides bootstrap-based hypothesis testing functions for comparing strategies, validating performance differences, and assessing statistical significance of simulation outcomes.

Example

>>> from ergodic_insurance.statistical_tests import difference_in_means_test
>>> import numpy as np
>>> # Compare two strategies
>>> strategy_a_returns = np.random.normal(0.08, 0.02, 1000)
>>> strategy_b_returns = np.random.normal(0.10, 0.03, 1000)
>>> result = difference_in_means_test(
...     strategy_a_returns,
...     strategy_b_returns,
...     alternative='less'
... )
>>> print(f"P-value: {result.p_value:.4f}")
>>> print(f"Strategy B is better: {result.reject_null}")
DEFAULT_N_BOOTSTRAP

Default bootstrap iterations for tests (10000).

Type:

int

DEFAULT_ALPHA

Default significance level (0.05).

Type:

float

class HypothesisTestResult(test_statistic: float, p_value: float, reject_null: bool, confidence_interval: Tuple[float, float], null_hypothesis: str, alternative: str, alpha: float, method: str, bootstrap_distribution: ndarray | None = None, metadata: Dict[str, Any] | None = None) None[source]

Bases: object

Container for hypothesis test results.

test_statistic: float
p_value: float
reject_null: bool
confidence_interval: Tuple[float, float]
null_hypothesis: str
alternative: str
alpha: float
method: str
bootstrap_distribution: ndarray | None = None
metadata: Dict[str, Any] | None = None
summary() str[source]

Generate human-readable summary of test results.

Return type:

str

Returns:

Formatted string with test results and interpretation.

difference_in_means_test(sample1: ndarray, sample2: ndarray, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

Test difference in means between two samples using bootstrap.

Tests the null hypothesis that the means of two populations are equal against various alternatives using bootstrap resampling.

Parameters:
  • sample1 (ndarray) – First sample array.

  • sample2 (ndarray) – Second sample array.

  • alternative (str) – Type of alternative hypothesis: - 'two-sided': means are different - 'less': mean1 < mean2 - 'greater': mean1 > mean2

  • alpha (float) – Significance level (default 0.05).

  • n_bootstrap (int) – Number of bootstrap iterations (default 10000).

  • seed (Optional[int]) – Random seed for reproducibility.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult containing test statistics and decision.

Raises:

ValueError – If alternative is not valid.

Example

>>> # Test if Strategy A has lower returns than Strategy B
>>> result = difference_in_means_test(
...     returns_a, returns_b, alternative='less'
... )
>>> if result.reject_null:
...     print("Strategy B significantly outperforms Strategy A")
ratio_of_metrics_test(sample1: ~numpy.ndarray, sample2: ~numpy.ndarray, statistic: ~typing.Callable[[~numpy.ndarray], float] = <function mean>, null_ratio: float = 1.0, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

Test ratio of metrics between two samples using bootstrap.

Tests whether the ratio of a statistic (e.g., mean, median) between two samples equals a specified value (typically 1.0).

Parameters:
  • sample1 (ndarray) – First sample array.

  • sample2 (ndarray) – Second sample array.

  • statistic (Callable[[ndarray], float]) – Function to compute on each sample (default: mean).

  • null_ratio (float) – Null hypothesis ratio value (default: 1.0).

  • alternative (str) – Alternative hypothesis type.

  • alpha (float) – Significance level.

  • n_bootstrap (int) – Number of bootstrap iterations.

  • seed (Optional[int]) – Random seed.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult for the ratio test.

Example

>>> # Test if ROE ratio differs from 1.0
>>> result = ratio_of_metrics_test(
...     roe_strategy_a,
...     roe_strategy_b,
...     statistic=np.median,
...     null_ratio=1.0
... )
paired_comparison_test(paired_differences: ndarray, null_value: float = 0.0, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

Test paired differences using bootstrap.

Tests whether paired differences (e.g., from matched scenarios) have a mean equal to a specified value (typically 0).

Parameters:
  • paired_differences (ndarray) – Array of paired differences.

  • null_value (float) – Null hypothesis value for mean difference (default: 0).

  • alternative (str) – Alternative hypothesis type.

  • alpha (float) – Significance level.

  • n_bootstrap (int) – Number of bootstrap iterations.

  • seed (Optional[int]) – Random seed.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult for the paired test.

Example

>>> # Test if insurance improves outcomes
>>> differences = outcomes_with_insurance - outcomes_without_insurance
>>> result = paired_comparison_test(differences, alternative='greater')
bootstrap_hypothesis_test(data: ndarray, null_hypothesis: Callable[[ndarray], ndarray], test_statistic: Callable[[ndarray], float], alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

General bootstrap hypothesis testing framework.

Allows testing of custom hypotheses using any test statistic.

Parameters:
  • data (ndarray) – Input data array.

  • null_hypothesis (Callable[[ndarray], ndarray]) – Function that transforms data to satisfy null.

  • test_statistic (Callable[[ndarray], float]) – Function to compute test statistic.

  • alternative (str) – Alternative hypothesis type.

  • alpha (float) – Significance level.

  • n_bootstrap (int) – Number of bootstrap iterations.

  • seed (Optional[int]) – Random seed.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult for the custom test.

Example

>>> # Test if variance exceeds threshold
>>> def null_transform(x):
...     return x * np.sqrt(threshold_var / np.var(x))
>>> result = bootstrap_hypothesis_test(
...     data, null_transform, np.var, alternative='greater'
... )
multiple_comparison_correction(p_values: List[float], method: str = 'bonferroni', alpha: float = 0.05) Tuple[ndarray, ndarray][source]

Apply multiple comparison correction to p-values.

Adjusts p-values when multiple hypothesis tests are performed to control family-wise error rate or false discovery rate.

Parameters:
  • p_values (List[float]) – List of p-values from multiple tests.

  • method (str) – Correction method: - 'bonferroni': Bonferroni correction - 'holm': Holm-Bonferroni method - 'fdr': Benjamini-Hochberg FDR

  • alpha (float) – Overall significance level.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (adjusted_p_values, reject_decisions).

Example

>>> p_vals = [0.01, 0.04, 0.03, 0.20]
>>> adj_p, reject = multiple_comparison_correction(p_vals)
>>> print(f"Significant tests: {np.sum(reject)}")

ergodic_insurance.stochastic_processes module

Stochastic processes for financial modeling.

This module provides various stochastic process implementations for modeling financial volatility, including Geometric Brownian Motion, lognormal volatility, and mean-reverting processes. These are used to add realistic randomness to revenue and growth modeling in the manufacturing simulation.

class StochasticConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for stochastic processes.

Defines parameters common to all stochastic process implementations, including volatility, drift, random seed, and time step parameters.

volatility: float
drift: float
random_seed: int | None
time_step: float
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class StochasticProcess(config: StochasticConfig)[source]

Bases: ABC

Abstract base class for stochastic processes.

Provides common interface and functionality for all stochastic process implementations used in financial modeling. All concrete implementations must provide a generate_shock method.

abstractmethod generate_shock(current_value: float) float[source]

Generate a stochastic shock for the current time step.

Parameters:

current_value (float) – Current value of the process

Return type:

float

Returns:

Multiplicative shock to apply to the value

reset(seed: int | None = None) None[source]

Reset the random number generator.

Parameters:

seed (Optional[int]) – Optional new seed to use

Return type:

None

class GeometricBrownianMotion(config: StochasticConfig)[source]

Bases: StochasticProcess

Geometric Brownian Motion process for multiplicative growth shocks.

Implements GBM using the exact lognormal solution of the underlying SDE for high numerical accuracy. Commonly used for modeling asset prices and growth rates with constant relative volatility.

generate_shock(current_value: float) float[source]

Generate a multiplicative shock using GBM.

The underlying SDE is dS = μ*S*dt + σ*S*dW, whose exact lognormal solution gives the multiplicative shock:

S(t+dt)/S(t) = exp((μ - σ²/2)*dt + σ*√dt*Z), where Z ~ N(0,1)

Parameters:

current_value (float) – Current value (not used in GBM, included for interface)

Return type:

float

Returns:

Multiplicative shock factor

class LognormalVolatility(config: StochasticConfig)[source]

Bases: StochasticProcess

Simple lognormal volatility generator for revenue/sales.

Provides simpler alternative to full GBM by applying lognormal shocks centered around 1.0. Suitable for modeling revenue variations without drift components.

generate_shock(current_value: float) float[source]

Generate a lognormal multiplicative shock.

Simpler than full GBM: applies a lognormal shock around 1.0, shock = exp(σ*Z) where Z ~ N(0,1).

Note that E[shock] = exp(σ²/2), which is ≈ 1 for small σ.

Parameters:

current_value (float) – Current value (not used)

Return type:

float

Returns:

Multiplicative shock factor centered around 1.0

class MeanRevertingProcess(config: StochasticConfig, mean_level: float = 1.0, reversion_speed: float = 0.5)[source]

Bases: StochasticProcess

Ornstein-Uhlenbeck mean-reverting process for bounded variables.

Implements mean-reverting dynamics suitable for modeling variables that tend to revert to long-term average levels, such as operating margins or capacity utilization rates.

generate_shock(current_value: float) float[source]

Generate mean-reverting shock.

Uses Ornstein-Uhlenbeck process discretization: dx = θ*(μ - x)*dt + σ*dW

Parameters:

current_value (float) – Current value of the process

Return type:

float

Returns:

Multiplicative shock
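Example

A construction sketch, assuming StochasticConfig accepts its documented fields as keyword arguments:

from ergodic_insurance.stochastic_processes import MeanRevertingProcess, StochasticConfig

config = StochasticConfig(volatility=0.10, drift=0.0, random_seed=42, time_step=1.0)
margin_process = MeanRevertingProcess(config, mean_level=1.0, reversion_speed=0.5)
shock = margin_process.generate_shock(current_value=1.2)  # pulled back toward 1.0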

create_stochastic_process(process_type: str, volatility: float, drift: float = 0.0, random_seed: int | None = None, time_step: float = 1.0) StochasticProcess[source]

Factory function to create stochastic processes.

Parameters:
  • process_type (str) – Type of process ("gbm", "lognormal", "mean_reverting")

  • volatility (float) – Annual volatility

  • drift (float) – Annual drift rate (for GBM)

  • random_seed (Optional[int]) – Random seed for reproducibility

  • time_step (float) – Time step in years

Return type:

StochasticProcess

Returns:

StochasticProcess instance

Raises:

ValueError – If process_type is not recognized
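Example

A factory-usage sketch:

from ergodic_insurance.stochastic_processes import create_stochastic_process

# GBM with 2% drift and 15% annual volatility on monthly steps
gbm = create_stochastic_process(
    "gbm", volatility=0.15, drift=0.02, random_seed=42, time_step=1 / 12
)
shock = gbm.generate_shock(current_value=1.0)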

ergodic_insurance.strategy_backtester module

Strategy backtesting framework for insurance decision strategies.

This module provides base classes and implementations for various insurance strategies that can be tested and compared in walk-forward validation.

Example

>>> from ergodic_insurance.strategy_backtester import ConservativeFixedStrategy, StrategyBacktester
>>> # Create and configure a strategy
>>> strategy = ConservativeFixedStrategy(
...     primary_limit=5_000_000,
...     excess_limit=20_000_000,
...     deductible=100_000
... )
>>>
>>> # Run backtest (manufacturer and sim_config from your own setup)
>>> backtester = StrategyBacktester()
>>> results = backtester.test_strategy(
...     strategy=strategy,
...     manufacturer=manufacturer,
...     config=sim_config
... )
class InsuranceStrategy(name: str)[source]

Bases: ABC

Abstract base class for insurance strategies.

Defines the interface that all insurance strategies must implement for use in backtesting and walk-forward validation.

abstractmethod get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get insurance program for the current state.

Parameters:
  • manufacturer (WidgetManufacturer) – Current manufacturer state

  • historical_losses (Optional[ndarray]) – Past loss data for adaptive strategies

  • current_year (int) – Current year in simulation

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram or None for no insurance.

update(losses: ndarray, recoveries: ndarray, year: int)[source]

Update strategy based on recent experience.

Parameters:
  • losses (ndarray) – Recent loss amounts

  • recoveries (ndarray) – Recent recovery amounts

  • year (int) – Current year

reset()[source]

Reset strategy to initial state.

get_description() str[source]

Get strategy description.

Return type:

str

Returns:

Human-readable strategy description.

class NoInsuranceStrategy[source]

Bases: InsuranceStrategy

Baseline strategy with no insurance.

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Return no insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

None to indicate no insurance.

class ConservativeFixedStrategy(primary_limit: float = 5000000, excess_limit: float = 20000000, higher_limit: float = 25000000, deductible: float = 50000)[source]

Bases: InsuranceStrategy

Conservative strategy with high limits and low deductible.

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get conservative insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with high coverage.

class AggressiveFixedStrategy(primary_limit: float = 2000000, excess_limit: float = 5000000, deductible: float = 250000)[source]

Bases: InsuranceStrategy

Aggressive strategy with low limits and high deductible.

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get aggressive insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with limited coverage.

class OptimizedStaticStrategy(optimizer: PenaltyMethodOptimizer | None = None, target_roe: float = 0.15, max_ruin_prob: float = 0.01)[source]

Bases: InsuranceStrategy

Strategy using optimization to find best static limits.

optimize_limits(manufacturer: WidgetManufacturer, simulation_engine: Simulation)[source]

Run optimization to find best limits.

Parameters:
  • manufacturer (WidgetManufacturer) – Current manufacturer state

  • simulation_engine (Simulation) – Simulation engine used to evaluate candidate limits

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get optimized insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with optimized parameters.

class AdaptiveStrategy(base_deductible: float = 100000, base_primary: float = 3000000, base_excess: float = 10000000, adaptation_window: int = 3, adjustment_factor: float = 0.2)[source]

Bases: InsuranceStrategy

Strategy that adjusts based on recent loss experience.

update(losses: ndarray, recoveries: ndarray, year: int)[source]

Update strategy based on recent losses.

Parameters:
  • losses (ndarray) – Recent loss amounts

  • recoveries (ndarray) – Recent recovery amounts

  • year (int) – Current year

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get adaptive insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with adapted parameters.

reset()[source]

Reset strategy to initial state.

class BacktestResult(strategy_name: str, simulation_results: SimulationResults | SimulationResults, metrics: ValidationMetrics, execution_time: float, config: SimulationConfig) None[source]

Bases: object

Results from strategy backtesting.

strategy_name

Name of tested strategy

simulation_results

Raw simulation results (from either a single-path Simulation run or a Monte Carlo run)

metrics

Calculated performance metrics

execution_time

Time taken to run backtest

config

Configuration used for backtest

strategy_name: str
simulation_results: SimulationResults | SimulationResults
metrics: ValidationMetrics
execution_time: float
config: SimulationConfig
class StrategyBacktester(simulation_engine: Simulation | None = None, metric_calculator: MetricCalculator | None = None)[source]

Bases: object

Engine for backtesting insurance strategies.

results_cache: Dict[str, BacktestResult]
test_strategy(strategy: InsuranceStrategy, manufacturer: WidgetManufacturer, config: SimulationConfig, use_cache: bool = True) BacktestResult[source]

Test a single strategy.

Parameters:
  • strategy (InsuranceStrategy) – Strategy to test

  • manufacturer (WidgetManufacturer) – Manufacturer to simulate

  • config (SimulationConfig) – Simulation configuration

  • use_cache (bool) – Whether to reuse cached results

Return type:

BacktestResult

Returns:

BacktestResult with performance metrics.

test_multiple_strategies(strategies: List[InsuranceStrategy], manufacturer: WidgetManufacturer, config: SimulationConfig) DataFrame[source]

Test multiple strategies and compare.

Parameters:
  • strategies (List[InsuranceStrategy]) – Strategies to test

  • manufacturer (WidgetManufacturer) – Manufacturer to simulate

  • config (SimulationConfig) – Simulation configuration

Return type:

DataFrame

Returns:

DataFrame comparing strategy performance.
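Example

A comparison sketch using the built-in strategies; manufacturer and sim_config come from your own setup:

backtester = StrategyBacktester()
comparison = backtester.test_multiple_strategies(
    strategies=[
        NoInsuranceStrategy(),
        ConservativeFixedStrategy(),
        AggressiveFixedStrategy(),
    ],
    manufacturer=manufacturer,
    config=sim_config,
)
print(comparison)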

ergodic_insurance.summary_statistics module

Comprehensive summary statistics and report generation for simulation results.

This module provides statistical analysis tools, distribution fitting utilities, and formatted report generation for Monte Carlo simulation results.

format_quantile_key(q: float) str[source]

Format a quantile value as a dictionary key using per-mille resolution.

Uses per-mille (parts per thousand) to avoid key collisions for sub-percentile quantiles that are critical for insurance risk metrics.

Parameters:

q (float) – Quantile value in range [0, 1].

Return type:

str

Returns:

Formatted key string, e.g. q0250 for the 25th percentile, q0005 for the 0.5th percentile, q0001 for the 0.1th percentile.
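Example

The per-mille scheme described above gives, for instance:

>>> format_quantile_key(0.25)
'q0250'
>>> format_quantile_key(0.005)
'q0005'
>>> format_quantile_key(0.001)
'q0001'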

class StatisticalSummary(basic_stats: Dict[str, float], distribution_params: Dict[str, Dict[str, float]], confidence_intervals: Dict[str, Tuple[float, float]], hypothesis_tests: Dict[str, Dict[str, float]], extreme_values: Dict[str, float]) None[source]

Bases: object

Complete statistical summary of simulation results.

basic_stats: Dict[str, float]
distribution_params: Dict[str, Dict[str, float]]
confidence_intervals: Dict[str, Tuple[float, float]]
hypothesis_tests: Dict[str, Dict[str, float]]
extreme_values: Dict[str, float]
to_dataframe() DataFrame[source]

Convert summary to pandas DataFrame.

Return type:

DataFrame

Returns:

DataFrame with all summary statistics

class SummaryStatistics(confidence_level: float = 0.95, bootstrap_iterations: int = 1000, seed: int | None = None)[source]

Bases: object

Calculate comprehensive summary statistics for simulation results.

calculate_summary(data: ndarray, weights: ndarray | None = None) StatisticalSummary[source]

Calculate complete statistical summary.

Parameters:
  • data (ndarray) – Input data array

  • weights (Optional[ndarray]) – Optional weights for weighted statistics

Return type:

StatisticalSummary

Returns:

Complete statistical summary
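Example

A usage sketch, assuming data is a 1-D numpy array of simulation outcomes:

calc = SummaryStatistics(confidence_level=0.95, bootstrap_iterations=1000, seed=42)
summary = calc.calculate_summary(data)
print(summary.to_dataframe())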

class TDigest(compression: float = 200)[source]

Bases: object

T-digest data structure for streaming quantile estimation.

Implements the merging digest variant from Dunning & Ertl (2019). Provides accurate quantile estimates, especially at the tails, with bounded memory usage proportional to the compression parameter.

The t-digest maintains a sorted set of centroids (mean, weight) that adaptively cluster data points. Clusters near the tails (q->0 or q->1) are kept small for precision, while clusters near the median can be larger.

Parameters:

compression (float) – Controls accuracy vs memory tradeoff. Higher values give more accuracy but use more memory. Typical range: 100-300. Default 200 gives ~0.2-1% error at median, ~0.005-0.05% at q01/q99.

References

Dunning, T. & Ertl, O. (2019). “Computing Extremely Accurate Quantiles Using t-Digests.” arXiv:1902.04023.

update(value: float) None[source]

Add a single observation to the digest.

Parameters:

value (float) – The value to add.

Return type:

None

update_batch(values: ndarray) None[source]

Add an array of observations to the digest.

Parameters:

values (ndarray) – Array of values to add.

Return type:

None

merge(other: TDigest) None[source]

Merge another t-digest into this one.

After merging, this digest contains the combined information from both digests. The other digest is not modified.

Parameters:

other (TDigest) – Another TDigest to merge into this one.

Return type:

None

quantile(q: float) float[source]

Estimate a single quantile.

Parameters:

q (float) – Quantile to estimate, in range [0, 1].

Return type:

float

Returns:

Estimated value at the given quantile.

Raises:

ValueError – If the digest is empty.

quantiles(qs: List[float]) Dict[str, float][source]

Estimate multiple quantiles.

Parameters:

qs (List[float]) – List of quantiles to estimate, each in range [0, 1].

Return type:

Dict[str, float]

Returns:

Dictionary mapping per-mille quantile keys (e.g. q0250 for the 25th percentile) to estimated values.

cdf(value: float) float[source]

Estimate the cumulative distribution function at a value.

Parameters:

value (float) – The value at which to estimate the CDF.

Return type:

float

Returns:

Estimated probability P(X <= value).

Raises:

ValueError – If the digest is empty.

property centroid_count: int

Return the number of centroids currently stored.

__len__() int[source]

Return the total count of observations added.

Return type:

int
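Example

A streaming-quantile sketch using synthetic lognormal data:

import numpy as np
from ergodic_insurance.summary_statistics import TDigest

digest = TDigest(compression=200)
digest.update_batch(np.random.default_rng(42).lognormal(0.0, 1.0, 100_000))

print(digest.quantile(0.99))   # tail quantile estimate
print(digest.cdf(1.0))         # approx P(X <= 1.0); ~0.5 for lognormal(0, 1)
print(len(digest), digest.centroid_count)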

class QuantileCalculator(quantiles: List[float] | None = None, seed: int | None = None)[source]

Bases: object

Efficient quantile calculation for large datasets.

calculate_quantiles(data_hash: int, method: str = 'linear') Dict[str, float][source]

Calculate quantiles with caching.

Parameters:
  • data_hash (int) – Hash of data array for caching

  • method (str) – Interpolation method

Return type:

Dict[str, float]

Returns:

Dictionary of quantile values

calculate(data: ndarray, method: str = 'linear') Dict[str, float][source]

Calculate quantiles for data.

Parameters:
  • data (ndarray) – Input data array

  • method (str) – Interpolation method ('linear', 'nearest', 'lower', 'higher', 'midpoint')

Return type:

Dict[str, float]

Returns:

Dictionary of quantile values

streaming_quantiles(data_stream: ndarray, compression: float = 200) Dict[str, float][source]

Calculate quantiles for streaming data using the t-digest algorithm.

Uses the t-digest merging digest algorithm (Dunning & Ertl, 2019) for streaming quantile estimation with bounded memory and high accuracy, especially at tail quantiles relevant to insurance risk metrics.

Parameters:
  • data_stream (ndarray) – Streaming data array

  • compression (float) – Controls accuracy vs memory tradeoff. Higher values give more accuracy but use more memory. Typical range: 100-300. Default 200 gives ~0.2-1% error at median, ~0.005-0.05% at q01/q99. Passed directly to TDigest.

Return type:

Dict[str, float]

Returns:

Dictionary of approximate quantile values
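Example

A usage sketch, assuming data_stream is a 1-D numpy array:

calc = QuantileCalculator(quantiles=[0.01, 0.5, 0.99], seed=42)
approx = calc.streaming_quantiles(data_stream, compression=200)
# Keys follow the per-mille scheme, e.g. 'q0010', 'q0500', 'q0990'
print(approx)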

class DistributionFitter[source]

Bases: object

Fit and compare multiple probability distributions to data.

DISTRIBUTIONS = {'beta': <scipy.stats._continuous_distns.beta_gen object>, 'exponential': <scipy.stats._continuous_distns.expon_gen object>, 'gamma': <scipy.stats._continuous_distns.gamma_gen object>, 'lognormal': <scipy.stats._continuous_distns.lognorm_gen object>, 'normal': <scipy.stats._continuous_distns.norm_gen object>, 'pareto': <scipy.stats._continuous_distns.pareto_gen object>, 'uniform': <scipy.stats._continuous_distns.uniform_gen object>, 'weibull': <scipy.stats._continuous_distns.weibull_min_gen object>}
fit_all(data: ndarray, distributions: List[str] | None = None) DataFrame[source]

Fit multiple distributions and compare goodness of fit.

Parameters:
  • data (ndarray) – Input data

  • distributions (Optional[List[str]]) – List of distributions to fit (None for all)

Return type:

DataFrame

Returns:

DataFrame comparing distribution fits

get_best_distribution(criterion: str = 'aic') Tuple[str, Dict[str, float]][source]

Get the best-fitting distribution based on criterion.

Parameters:

criterion (str) – Selection criterion ('aic', 'bic', 'ks_pvalue')

Return type:

Tuple[str, Dict[str, float]]

Returns:

Tuple of (distribution name, parameters)

generate_qq_plot_data(data: ndarray, distribution: str) Tuple[ndarray, ndarray][source]

Generate data for Q-Q plot.

Parameters:
  • data (ndarray) – Original data

  • distribution (str) – Distribution name

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (theoretical quantiles, sample quantiles)
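Example

A fitting sketch, assuming data is a 1-D numpy array of positive values:

fitter = DistributionFitter()
comparison = fitter.fit_all(data, distributions=["lognormal", "gamma", "weibull"])
best_name, best_params = fitter.get_best_distribution(criterion="aic")
theoretical_q, sample_q = fitter.generate_qq_plot_data(data, best_name)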

class SummaryReportGenerator(style: str = 'markdown')[source]

Bases: object

Generate formatted summary reports for simulation results.

generate_report(summary: StatisticalSummary, title: str = 'Simulation Results Summary', metadata: Dict[str, Any] | None = None) str[source]

Generate formatted report.

Parameters:
  • summary (StatisticalSummary) – Statistical summary to report on

  • title (str) – Report title

  • metadata (Optional[Dict[str, Any]]) – Optional metadata to include in the report

Return type:

str

Returns:

Formatted report string

ergodic_insurance.trajectory_storage module

Memory-efficient storage system for simulation trajectories.

This module provides a lightweight storage system for Monte Carlo simulation trajectories that minimizes RAM usage while storing both partial time series data and comprehensive summary statistics.

Features:
  • Memory-mapped numpy arrays for efficient storage

  • Optional HDF5 backend with compression

  • Configurable time series sampling (store every Nth year)

  • Lazy loading to minimize memory footprint

  • Automatic disk space management

  • CSV/JSON export for analysis tools

  • <2GB RAM usage for 100K simulations

  • <1GB disk usage with sampling

Example

>>> from ergodic_insurance.trajectory_storage import StorageConfig, TrajectoryStorage
>>> storage = TrajectoryStorage(StorageConfig(
...     storage_dir="./trajectories",
...     sample_interval=5,  # Store every 5th year
...     max_disk_usage_gb=1.0
... ))
>>> # During simulation
>>> storage.store_simulation(
...     sim_id=0,
...     annual_losses=losses,
...     insurance_recoveries=recoveries,
...     retained_losses=retained,
...     final_assets=final_assets,
...     initial_assets=initial_assets
... )
>>> # Later retrieval
>>> data = storage.load_simulation(sim_id=0)
class StorageConfig(storage_dir: str = './trajectory_storage', backend: str = 'memmap', sample_interval: int = 10, max_disk_usage_gb: float = 1.0, compression: bool = True, compression_level: int = 4, chunk_size: int = 1000, enable_summary_stats: bool = True, enable_time_series: bool = True, dtype: Any = <class 'numpy.float32'>) None[source]

Bases: object

Configuration for trajectory storage system.

storage_dir: str = './trajectory_storage'
backend: str = 'memmap'
sample_interval: int = 10
max_disk_usage_gb: float = 1.0
compression: bool = True
compression_level: int = 4
chunk_size: int = 1000
enable_summary_stats: bool = True
enable_time_series: bool = True
dtype

alias of float32

class SimulationSummary(sim_id: int, final_assets: float, total_losses: float, total_recoveries: float, mean_annual_loss: float, max_annual_loss: float, min_annual_loss: float, growth_rate: float, ruin_occurred: bool, ruin_year: int | None = None, volatility: float | None = None) None[source]

Bases: object

Summary statistics for a single simulation.

sim_id: int
final_assets: float
total_losses: float
total_recoveries: float
mean_annual_loss: float
max_annual_loss: float
min_annual_loss: float
growth_rate: float
ruin_occurred: bool
ruin_year: int | None = None
volatility: float | None = None
to_dict() Dict[str, Any][source]

Convert to dictionary for export.

Return type:

Dict[str, Any]

class TrajectoryStorage(config: StorageConfig | None = None)[source]

Bases: object

Memory-efficient storage for simulation trajectories.

Provides lightweight storage using memory-mapped arrays or HDF5, with configurable sampling and automatic disk space management.

store_simulation(sim_id: int, annual_losses: ndarray, insurance_recoveries: ndarray, retained_losses: ndarray, final_assets: float, initial_assets: float, ruin_occurred: bool = False, ruin_year: int | None = None) None[source]

Store simulation trajectory with automatic sampling.

Parameters:
  • sim_id (int) – Simulation identifier

  • annual_losses (ndarray) – Array of annual losses

  • insurance_recoveries (ndarray) – Array of insurance recoveries

  • retained_losses (ndarray) – Array of retained losses

  • final_assets (float) – Final asset value

  • initial_assets (float) – Initial asset value

  • ruin_occurred (bool) – Whether ruin occurred

  • ruin_year (Optional[int]) – Year of ruin (if applicable)

Return type:

None

load_simulation(sim_id: int, load_time_series: bool = False) Dict[str, Any][source]

Load simulation data with lazy loading.

Parameters:
  • sim_id (int) – Simulation identifier

  • load_time_series (bool) – Whether to load time series data

Return type:

Dict[str, Any]

Returns:

Dictionary with simulation data

export_summaries_csv(output_path: str) None[source]

Export all summary statistics to CSV.

Parameters:

output_path (str) – Path for CSV output file

Return type:

None

export_summaries_json(output_path: str) None[source]

Export all summary statistics to JSON.

Parameters:

output_path (str) – Path for JSON output file

Return type:

None

get_storage_stats() Dict[str, Any][source]

Get storage statistics.

Return type:

Dict[str, Any]

Returns:

Dictionary with storage statistics

clear_storage() None[source]

Clear all stored data.

Return type:

None

__enter__()[source]

Context manager entry.

__exit__(exc_type, exc_val, exc_tb)[source]

Context manager exit - ensure data is persisted.
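Example

A context-manager sketch, assuming the three loss arrays come from a completed simulation and have equal length:

config = StorageConfig(storage_dir="./trajectories", sample_interval=5)
with TrajectoryStorage(config) as storage:
    storage.store_simulation(
        sim_id=0,
        annual_losses=annual_losses,
        insurance_recoveries=insurance_recoveries,
        retained_losses=retained_losses,
        final_assets=9_500_000.0,
        initial_assets=10_000_000.0,
    )
print(storage.get_storage_stats())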

ergodic_insurance.trends module

Trend module for insurance claim frequency and severity adjustments.

This module provides a hierarchy of trend classes that apply multiplicative adjustments to claim frequencies and severities over time. Trends model how insurance risks evolve due to inflation, exposure growth, regulatory changes, or other systematic factors.

Key Concepts:
  • All trends are multiplicative (1.0 = no change, 1.03 = 3% increase)

  • Support both annual and sub-annual (monthly) time steps

  • Seedable for reproducibility in stochastic trends

  • Time-based multipliers for dynamic risk evolution

Example

Basic usage with linear trend:

from ergodic_insurance.trends import LinearTrend, ScenarioTrend

# 3% annual inflation trend
inflation = LinearTrend(annual_rate=0.03)
multiplier_year5 = inflation.get_multiplier(5.0)  # ~1.159

# Custom scenario with varying rates
scenario = ScenarioTrend(
    factors=[1.0, 1.05, 1.08, 1.06, 1.10],
    time_unit="annual"
)
multiplier_mid = scenario.get_multiplier(3.5)  # Interpolated between years 3 and 4
Since:

Version 0.4.0 - Core trend infrastructure for ClaimGenerator

class Trend(seed: int | None = None)[source]

Bases: ABC

Abstract base class for all trend implementations.

Defines the interface that all trend classes must implement. Trends provide multiplicative adjustments over time for frequencies and severities in insurance claim modeling.

All trend implementations must provide:
  • get_multiplier(time): Returns multiplicative factor at given time

  • Proper handling of edge cases (negative time, etc.)

  • Reproducibility through seed support (if stochastic)

Examples

Implementing a custom trend:

class StepTrend(Trend):
    def __init__(self, step_time: float, step_factor: float):
        self.step_time = step_time
        self.step_factor = step_factor

    def get_multiplier(self, time: float) -> float:
        if time < 0:
            return 1.0
        return 1.0 if time < self.step_time else self.step_factor
abstractmethod get_multiplier(time: float) float[source]

Get the multiplicative adjustment factor at a given time.

Parameters:

time (float) – Time point (in years from start) to get multiplier for. Can be fractional for sub-annual precision.

Returns:

Multiplicative factor (1.0 = no change, >1.0 = increase, <1.0 = decrease).

Return type:

float

Note

Implementations should handle negative time gracefully, typically returning 1.0 or the initial value.

reset_seed(seed: int) None[source]

Reset random seed for stochastic trends.

Parameters:

seed (int) – New random seed to use.

Return type:

None

Note

This method allows re-running scenarios with different random outcomes while maintaining reproducibility.

class NoTrend(seed: int | None = None)[source]

Bases: Trend

Default trend implementation with no adjustment over time.

This trend always returns a multiplier of 1.0, representing no change in frequency or severity over time. Useful as a default or baseline.

Examples

Using NoTrend as baseline:

from ergodic_insurance.trends import NoTrend

baseline = NoTrend()

# Always returns 1.0
assert baseline.get_multiplier(0) == 1.0
assert baseline.get_multiplier(10) == 1.0
assert baseline.get_multiplier(-5) == 1.0
get_multiplier(time: float) float[source]

Return constant multiplier of 1.0.

Parameters:

time (float) – Time point (ignored).

Returns:

Always returns 1.0.

Return type:

float

class LinearTrend(annual_rate: float = 0.03, seed: int | None = None)[source]

Bases: Trend

Linear compound growth trend with constant annual rate.

Models exponential growth/decay with a fixed annual rate, similar to compound interest. Commonly used for inflation, exposure growth, or systematic risk changes.

The multiplier at time t is calculated as: (1 + annual_rate)^t

annual_rate

Annual growth rate (0.03 = 3% growth, -0.02 = 2% decay).

Examples

Modeling inflation:

from ergodic_insurance.trends import LinearTrend

# 3% annual inflation
inflation = LinearTrend(annual_rate=0.03)

# After 5 years: 1.03^5 ≈ 1.159
mult_5y = inflation.get_multiplier(5.0)
print(f"5-year inflation factor: {mult_5y:.3f}")

# After 6 months: 1.03^0.5 ≈ 1.015
mult_6m = inflation.get_multiplier(0.5)
print(f"6-month inflation factor: {mult_6m:.3f}")

Modeling exposure decay:

# 2% annual exposure reduction
reduction = LinearTrend(annual_rate=-0.02)
mult_10y = reduction.get_multiplier(10.0)  # 0.98^10 ≈ 0.817
get_multiplier(time: float) float[source]

Calculate compound growth multiplier at given time.

Parameters:

time (float) – Time in years from start. Can be fractional for sub-annual calculations. Negative times return 1.0.

Returns:

Multiplicative factor calculated as (1 + annual_rate)^time. Returns 1.0 for negative times.

Return type:

float

Examples

Calculating multipliers:

trend = LinearTrend(0.04)  # 4% annual

# Year 1: 1.04
mult_1 = trend.get_multiplier(1.0)

# Year 2.5: 1.04^2.5 ≈ 1.103
mult_2_5 = trend.get_multiplier(2.5)

# Negative time: 1.0
mult_neg = trend.get_multiplier(-1.0)
class RandomWalkTrend(drift: float = 0.0, volatility: float = 0.1, seed: int | None = None)[source]

Bases: Trend

Random walk trend with drift and volatility.

Models a geometric random walk (geometric Brownian motion) where the multiplier evolves as a cumulative product of random changes. Commonly used for modeling market indices, asset prices, or unpredictable long-term trends in insurance markets.

The multiplier at time t follows: M(t) = exp(drift * t + volatility * W(t)) where W(t) is a Brownian motion.

drift

Annual drift rate (expected growth rate).

volatility

Annual volatility (standard deviation of log returns).

cached_path

Cached random path for efficiency.

cached_times

Time points for the cached path.

Examples

Basic random walk with drift:

from ergodic_insurance.trends import RandomWalkTrend

# 2% drift with 10% volatility
trend = RandomWalkTrend(drift=0.02, volatility=0.10, seed=42)

# Generate multipliers
mult_1 = trend.get_multiplier(1.0)  # Random around e^0.02
mult_5 = trend.get_multiplier(5.0)  # More variation

Market-like volatility:

# High volatility market
volatile = RandomWalkTrend(drift=0.0, volatility=0.30)

# Low volatility with positive drift
stable = RandomWalkTrend(drift=0.03, volatility=0.05)
get_multiplier(time: float) float[source]

Get random walk multiplier at given time.

Parameters:

time (float) – Time in years from start. Negative times return 1.0.

Returns:

Multiplicative factor following geometric Brownian motion. Always positive due to the exponential transformation.

Return type:

float

Note

The path is cached on first call for efficiency. All subsequent calls will use the same random path, ensuring consistency within a simulation run.
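A short illustration of the caching behavior (a sketch; the exact values depend on the seed):

from ergodic_insurance.trends import RandomWalkTrend

trend = RandomWalkTrend(drift=0.02, volatility=0.10, seed=42)

# Repeated queries reuse the cached random path, so the same time
# always maps to the same multiplier within a run
m1 = trend.get_multiplier(3.0)
m2 = trend.get_multiplier(3.0)
assert m1 == m2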

reset_seed(seed: int) None[source]

Reset random seed and clear cached path.

Parameters:

seed (int) – New random seed to use.

Return type:

None

class MeanRevertingTrend(mean_level: float = 1.0, reversion_speed: float = 0.2, volatility: float = 0.1, initial_level: float = 1.0, seed: int | None = None)[source]

Bases: Trend

Mean-reverting trend using Ornstein-Uhlenbeck process.

Models a trend that tends to revert to a long-term mean level, commonly used for interest rates, insurance market cycles, or any process with cyclical behavior around a stable level.

The process follows: dX(t) = theta*(mu - X(t))*dt + sigma*dW(t) where the multiplier M(t) = exp(X(t))

mean_level

Long-term mean multiplier level.

reversion_speed

Speed of mean reversion (theta).

volatility

Volatility of the process (sigma).

initial_level

Starting multiplier level.

cached_path

Cached process path for efficiency.

cached_times

Time points for the cached path.

Examples

Insurance market cycle:

from ergodic_insurance.trends import MeanRevertingTrend

# Market cycles around 1.0 with 5-year half-life
market = MeanRevertingTrend(
    mean_level=1.0,
    reversion_speed=0.14,  # ln(2)/5 years
    volatility=0.10,
    initial_level=1.1,  # Start in hard market
    seed=42
)

# Will gradually revert to 1.0
mult_1 = market.get_multiplier(1.0)
mult_10 = market.get_multiplier(10.0)  # Closer to 1.0

Interest rate model:

# Interest rates reverting to 3% with high volatility
rates = MeanRevertingTrend(
    mean_level=1.03,
    reversion_speed=0.5,
    volatility=0.15
)
get_multiplier(time: float) float[source]

Get mean-reverting multiplier at given time.

Parameters:

time (float) – Time in years from start. Negative times return 1.0.

Returns:

Multiplicative factor following the OU process. Always positive; tends toward mean_level over time.

Return type:

float

Note

The path is cached on first call for efficiency. The process exhibits mean reversion: starting values far from the mean will tend to move toward it over time.
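The package internals are not shown here, but an Euler-Maruyama discretization of the OU process above gives a feel for the dynamics (an illustrative sketch, not the package's implementation):

import numpy as np

def simulate_ou_multiplier(mean_level=1.0, reversion_speed=0.2,
                           volatility=0.1, initial_level=1.0,
                           horizon=10.0, dt=1 / 12, seed=42):
    """Illustrative Euler-Maruyama scheme for M(t) = exp(X(t))."""
    rng = np.random.default_rng(seed)
    mu = np.log(mean_level)       # long-term mean of X(t)
    x = np.log(initial_level)
    path = [np.exp(x)]
    for _ in range(int(horizon / dt)):
        # dX = theta * (mu - X) * dt + sigma * dW
        x += reversion_speed * (mu - x) * dt
        x += volatility * np.sqrt(dt) * rng.standard_normal()
        path.append(np.exp(x))
    return path  # multipliers drift back toward mean_level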

reset_seed(seed: int) None[source]

Reset random seed and clear cached path.

Parameters:

seed (int) – New random seed to use.

Return type:

None

class RegimeSwitchingTrend(regimes: List[float] | None = None, transition_probs: List[List[float]] | None = None, initial_regime: int = 0, regime_persistence: float = 1.0, seed: int | None = None)[source]

Bases: Trend

Trend that switches between different market regimes.

Models discrete regime changes such as hard/soft insurance markets, economic cycles, or regulatory environments. Each regime has its own multiplier, and transitions occur stochastically based on probabilities.

regimes

List of regime multipliers.

transition_matrix

Matrix of transition probabilities between regimes.

initial_regime

Starting regime index.

regime_persistence

How long regimes tend to last.

cached_regimes

Cached regime path for efficiency.

cached_times

Time points for the cached path.

Examples

Hard/soft insurance market:

from ergodic_insurance.trends import RegimeSwitchingTrend

# Two regimes: soft (0.9x) and hard (1.2x) markets
market = RegimeSwitchingTrend(
    regimes=[0.9, 1.2],
    transition_probs=[[0.8, 0.2],   # Soft -> [80% stay, 20% to hard]
                      [0.3, 0.7]],  # Hard -> [30% to soft, 70% stay]
    initial_regime=0,  # Start in soft market
    seed=42
)

# Multiplier switches between 0.9 and 1.2
mult_5 = market.get_multiplier(5.0)

Three-regime economic cycle:

# Recession, normal, boom
economy = RegimeSwitchingTrend(
    regimes=[0.8, 1.0, 1.3],
    transition_probs=[
        [0.6, 0.4, 0.0],  # Recession
        [0.1, 0.7, 0.2],  # Normal
        [0.0, 0.5, 0.5],  # Boom
    ],
    initial_regime=1  # Start in normal
)
get_multiplier(time: float) float[source]

Get regime-based multiplier at given time.

Parameters:

time (float) – Time in years from start. Negative times return 1.0.

Returns:

Multiplicative factor for the active regime at time t. Changes discretely as regimes switch.

Return type:

float

Note

The regime path is cached on first call. Regime changes are stochastic but reproducible with the same seed. The actual regime durations depend on both transition probabilities and the regime_persistence parameter.
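The regime dynamics amount to sampling a discrete-time Markov chain. An illustrative sketch, not the package's internals (and ignoring regime_persistence):

import numpy as np

def simulate_regime_path(regimes, transition_probs, initial_regime=0,
                         n_years=10, seed=42):
    """Sample one regime multiplier per year from a Markov chain."""
    rng = np.random.default_rng(seed)
    state = initial_regime
    multipliers = []
    for _ in range(n_years):
        multipliers.append(regimes[state])
        # Next state is drawn from the current state's transition row
        state = rng.choice(len(regimes), p=transition_probs[state])
    return multipliers

multipliers = simulate_regime_path([0.9, 1.2], [[0.8, 0.2], [0.3, 0.7]])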

reset_seed(seed: int) None[source]

Reset random seed and clear cached regime path.

Parameters:

seed (int) – New random seed to use.

Return type:

None

class ScenarioTrend(factors: List[float] | Dict[float, float], time_unit: str = 'annual', interpolation: str = 'linear', seed: int | None = None)[source]

Bases: Trend

Trend based on explicit scenario factors with interpolation.

Allows specifying exact multiplicative factors at specific time points, with linear interpolation between points. Useful for modeling known future changes, regulatory impacts, or custom scenarios.

factors

List or dict of multiplicative factors.

time_unit

Time unit for the factors (“annual” or “monthly”).

interpolation

Interpolation method (“linear” or “step”).

Examples

Annual scenario with known rates:

from ergodic_insurance.trends import ScenarioTrend

# Year 0: 1.0, Year 1: 1.05, Year 2: 1.08, etc.
scenario = ScenarioTrend(
    factors=[1.0, 1.05, 1.08, 1.06, 1.10],
    time_unit="annual"
)

# Exact points
mult_1 = scenario.get_multiplier(1.0)  # 1.05
mult_2 = scenario.get_multiplier(2.0)  # 1.08

# Interpolated
mult_1_5 = scenario.get_multiplier(1.5)  # ≈1.065

Monthly scenario:

# Monthly adjustment factors
monthly = ScenarioTrend(
    factors=[1.0, 1.01, 1.02, 1.015, 1.025, 1.03],
    time_unit="monthly"
)

# Month 3 (0.25 years)
mult_3m = monthly.get_multiplier(0.25)

Using dictionary for specific times:

# Specific time points
custom = ScenarioTrend(
    factors={0: 1.0, 2: 1.1, 5: 1.2, 10: 1.5},
    interpolation="linear"
)
get_multiplier(time: float) float[source]

Get interpolated multiplier at given time.

Parameters:

time (float) – Time in years from start. Can be fractional. Negative times return 1.0.

Returns:

Multiplicative factor, interpolated from scenario points.
  • Before first point: returns 1.0

  • After last point: returns last factor

  • Between points: interpolated based on method

Return type:

float

Examples

Interpolation behavior:

scenario = ScenarioTrend([1.0, 1.1, 1.2, 1.15])

# Exact points
mult_0 = scenario.get_multiplier(0.0)  # 1.0
mult_2 = scenario.get_multiplier(2.0)  # 1.2

# Linear interpolation
mult_1_5 = scenario.get_multiplier(1.5)  # 1.15

# Beyond range
mult_neg = scenario.get_multiplier(-1.0)  # 1.0
mult_10 = scenario.get_multiplier(10.0)  # 1.15 (last)
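With interpolation="step", a step scheme presumably holds the most recent factor until the next point rather than blending between points (a hedged sketch; the exact step convention is implementation-defined):

from ergodic_insurance.trends import ScenarioTrend

step = ScenarioTrend([1.0, 1.1, 1.2], interpolation="step")

# A step scheme holds the most recent factor between points
mult_1_5 = step.get_multiplier(1.5)  # expected 1.1 under a left-continuous step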

ergodic_insurance.validation_metrics module

Validation metrics for walk-forward analysis and strategy backtesting.

This module provides performance metrics and comparison tools for evaluating insurance strategies across training and testing periods in walk-forward validation.

Example

>>> from ergodic_insurance.validation_metrics import ValidationMetrics, MetricCalculator
>>> import numpy as np
>>> # Calculate metrics for a strategy's performance
>>> returns = np.random.normal(0.08, 0.02, 1000)
>>> losses = np.random.exponential(100000, 1000)
>>>
>>> calculator = MetricCalculator()
>>> metrics = calculator.calculate_metrics(
...     returns=returns,
...     losses=losses,
...     final_assets=10000000
... )
>>>
>>> print(f"ROE: {metrics.roe:.2%}")
>>> print(f"Sharpe Ratio: {metrics.sharpe_ratio:.2f}")
class ValidationMetrics(roe: float, ruin_probability: float, growth_rate: float, volatility: float, sharpe_ratio: float = 0.0, max_drawdown: float = 0.0, var_95: float = 0.0, cvar_95: float = 0.0, win_rate: float = 0.0, profit_factor: float = 0.0, recovery_time: float = 0.0, stability: float = 0.0) None[source]

Bases: object

Container for validation performance metrics.

roe

Return on equity (annualized)

ruin_probability

Probability of insolvency

growth_rate

Compound annual growth rate

volatility

Standard deviation of returns

sharpe_ratio

Risk-adjusted return metric

max_drawdown

Maximum peak-to-trough decline

var_95

Value at Risk at 95% confidence

cvar_95

Conditional Value at Risk at 95% confidence

win_rate

Percentage of profitable periods

profit_factor

Ratio of gross profits to gross losses

recovery_time

Average time to recover from drawdown

stability

R-squared of equity curve

roe: float
ruin_probability: float
growth_rate: float
volatility: float
sharpe_ratio: float = 0.0
max_drawdown: float = 0.0
var_95: float = 0.0
cvar_95: float = 0.0
win_rate: float = 0.0
profit_factor: float = 0.0
recovery_time: float = 0.0
stability: float = 0.0
to_dict() Dict[str, float][source]

Convert metrics to dictionary.

Return type:

Dict[str, float]

Returns:

Dictionary of metric values.

compare(other: ValidationMetrics) Dict[str, float][source]

Compare metrics with another set.

Parameters:

other (ValidationMetrics) – Metrics to compare against.

Return type:

Dict[str, float]

Returns:

Dictionary of percentage differences.
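For example, comparing out-of-sample metrics against in-sample metrics (a minimal sketch; the field values are illustrative):

from ergodic_insurance.validation_metrics import ValidationMetrics

in_sample = ValidationMetrics(roe=0.12, ruin_probability=0.01,
                              growth_rate=0.08, volatility=0.15)
out_sample = ValidationMetrics(roe=0.09, ruin_probability=0.02,
                               growth_rate=0.06, volatility=0.18)

# Dictionary of percentage differences between the two metric sets
diffs = out_sample.compare(in_sample)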

class StrategyPerformance(strategy_name: str, in_sample_metrics: ValidationMetrics | None = None, out_sample_metrics: ValidationMetrics | None = None, degradation: Dict[str, float] = <factory>, overfitting_score: float = 0.0, consistency_score: float = 0.0, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Performance tracking for a single strategy.

strategy_name

Name of the strategy

in_sample_metrics

Metrics from training period

out_sample_metrics

Metrics from testing period

degradation

Performance degradation from in-sample to out-sample

overfitting_score

Degree of overfitting (0 = none, 1 = severe)

consistency_score

Consistency across multiple windows

metadata

Additional strategy-specific data

strategy_name: str
in_sample_metrics: ValidationMetrics | None = None
out_sample_metrics: ValidationMetrics | None = None
degradation: Dict[str, float]
overfitting_score: float = 0.0
consistency_score: float = 0.0
metadata: Dict[str, Any]
calculate_degradation()[source]

Calculate performance degradation from in-sample to out-of-sample.

to_dataframe() DataFrame[source]

Convert performance to DataFrame for reporting.

Return type:

DataFrame

Returns:

DataFrame with performance metrics.
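A typical workflow, reusing the in_sample and out_sample metrics from the compare example above (a sketch; the strategy name is hypothetical):

from ergodic_insurance.validation_metrics import StrategyPerformance

perf = StrategyPerformance(
    strategy_name="conservative_fixed",  # hypothetical name
    in_sample_metrics=in_sample,
    out_sample_metrics=out_sample,
)
perf.calculate_degradation()     # populates perf.degradation
report_df = perf.to_dataframe()  # DataFrame for reporting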

class MetricCalculator(risk_free_rate: float = 0.02)[source]

Bases: object

Calculator for performance metrics from simulation results.

calculate_metrics(returns: ndarray, losses: ndarray | None = None, final_assets: ndarray | None = None, initial_assets: float = 10000000, n_years: int | None = None) ValidationMetrics[source]

Calculate comprehensive performance metrics.

Parameters:
  • returns (ndarray) – Array of period returns

  • losses (Optional[ndarray]) – Array of loss amounts (optional)

  • final_assets (Optional[ndarray]) – Array of final asset values (optional)

  • initial_assets (float) – Initial asset value

  • n_years (Optional[int]) – Number of years for annualization

Return type:

ValidationMetrics

Returns:

ValidationMetrics object with calculated metrics.

calculate_rolling_metrics(returns: ndarray, window_size: int = 252) DataFrame[source]

Calculate rolling window metrics.

Parameters:
  • returns (ndarray) – Array of returns

  • window_size (int) – Size of rolling window

Return type:

DataFrame

Returns:

DataFrame with rolling metrics.
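A hedged sketch of the kind of computation a rolling window implies (illustrative only; the actual columns returned are implementation-defined):

import numpy as np
import pandas as pd

returns = pd.Series(np.random.default_rng(42).normal(0.0003, 0.01, 1000))

# Rolling annualized mean and volatility over a 252-day window
rolling = pd.DataFrame({
    "mean": returns.rolling(252).mean() * 252,
    "volatility": returns.rolling(252).std() * np.sqrt(252),
})
# Rolling Sharpe, using the calculator's default 2% risk-free rate
rolling["sharpe"] = (rolling["mean"] - 0.02) / rolling["volatility"]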

class PerformanceTargets(min_roe: float | None = None, max_ruin_probability: float | None = None, min_sharpe_ratio: float | None = None, max_drawdown: float | None = None, min_growth_rate: float | None = None)[source]

Bases: object

User-defined performance targets for strategy evaluation.

min_roe

Minimum acceptable ROE

max_ruin_probability

Maximum acceptable ruin probability

min_sharpe_ratio

Minimum acceptable Sharpe ratio

max_drawdown

Maximum acceptable drawdown

min_growth_rate

Minimum acceptable growth rate

evaluate(metrics: ValidationMetrics) Tuple[bool, List[str]][source]

Evaluate metrics against targets.

Parameters:

metrics (ValidationMetrics) – Metrics to evaluate

Return type:

Tuple[bool, List[str]]

Returns:

Tuple of (meets_all_targets, list_of_failures)
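Example usage (a minimal sketch):

from ergodic_insurance.validation_metrics import PerformanceTargets, ValidationMetrics

targets = PerformanceTargets(
    min_roe=0.10,
    max_ruin_probability=0.01,
    min_sharpe_ratio=0.5,
)

metrics = ValidationMetrics(roe=0.09, ruin_probability=0.02,
                            growth_rate=0.06, volatility=0.18)

meets_all, failures = targets.evaluate(metrics)
if not meets_all:
    print("Unmet targets:", failures)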

ergodic_insurance.visualization_legacy module

Visualization utilities for professional WSJ-style plots.

This module provides standardized plotting functions with Wall Street Journal aesthetic for insurance analysis and risk metrics visualization.

NOTE: This module now acts as a facade for the new modular visualization package. New code should import directly from ergodic_insurance.visualization.
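For example, new code should prefer the package-level import:

# Legacy facade (still works)
from ergodic_insurance.visualization_legacy import format_currency

# Preferred for new code
from ergodic_insurance.visualization import format_currency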

get_figure_factory(theme: Theme | None = None) FigureFactory | None[source]

Get or create global figure factory instance.

Parameters:

theme (Optional[Theme]) – Optional theme to use (defaults to DEFAULT)

Return type:

Optional[FigureFactory]

Returns:

FigureFactory instance if available, None otherwise

set_wsj_style(use_factory: bool = False, theme: Theme | None = None)[source]

Set matplotlib to use WSJ-style formatting.

Parameters:
  • use_factory (bool) – Whether to use new factory-based styling if available

  • theme (Optional[Theme]) – Optional theme to use with factory (defaults to DEFAULT)

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.set_wsj_style() instead.

format_currency(value: float, decimals: int = 0, abbreviate: bool = False) str[source]

Format value as currency.

Parameters:
  • value (float) – Numeric value to format

  • decimals (int) – Number of decimal places

  • abbreviate (bool) – If True, use K/M/B notation for large numbers

Return type:

str

Returns:

Formatted string (e.g., “$1,000” or “$1K” if abbreviate=True)

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.format_currency() instead.

format_percentage(value: float, decimals: int = 1) str[source]

Format value as percentage.

Parameters:
  • value (float) – Numeric value (0.05 = 5%)

  • decimals (int) – Number of decimal places

Return type:

str

Returns:

Formatted string (e.g., “5.0%”)

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.format_percentage() instead.

class WSJFormatter[source]

Bases: object

Formatter for WSJ-style axis labels.

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.WSJFormatter instead.

static currency_formatter(x, pos)[source]

Format axis values as currency.

static currency(x: float, decimals: int = 1) str[source]

Format value as currency (shortened method name).

Return type:

str

static percentage_formatter(x, pos)[source]

Format axis values as percentage.

static percentage(x: float, decimals: int = 1) str[source]

Format value as percentage (shortened method name).

Return type:

str

static number(x: float, decimals: int = 2) str[source]

Format large numbers with appropriate suffix.

Return type:

str

static millions_formatter(x, pos)[source]

Format axis values in millions.

create_styled_figure(size_type: str = 'medium', theme: Theme | None = None, use_factory: bool = True, **kwargs) Tuple[Figure, Axes | ndarray][source]

Create a figure with automatic styling applied.

Parameters:
  • size_type (str) – Size preset (small, medium, large, blog, technical, presentation)

  • theme (Optional[Theme]) – Optional theme to use

  • use_factory (bool) – Whether to use factory if available

  • **kwargs – Additional arguments for figure creation

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Tuple of (figure, axes)

plot_loss_distribution(losses: ndarray | DataFrame, title: str = 'Loss Distribution', bins: int = 50, show_metrics: bool = True, var_levels: List[float] | None = None, figsize: Tuple[int, int] = (12, 6), show_stats: bool = False, log_scale: bool = False, use_factory: bool = False, theme: Theme | None = None) Figure[source]

Create WSJ-style loss distribution plot.

Parameters:
  • losses (Union[ndarray, DataFrame]) – Array of loss values or DataFrame with ‘amount’ column

  • title (str) – Plot title

  • bins (int) – Number of histogram bins

  • show_metrics (bool) – Whether to show VaR/TVaR lines

  • var_levels (Optional[List[float]]) – VaR confidence levels to show

  • figsize (Tuple[int, int]) – Figure size

  • show_stats (bool) – Whether to show statistics

  • log_scale (bool) – Whether to use log scale

  • use_factory (bool) – Whether to use new visualization factory if available

  • theme (Optional[Theme]) – Optional theme to use with factory

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_loss_distribution() instead.

plot_return_period_curve(losses: ndarray | DataFrame, return_periods: ndarray | None = None, scenarios: Dict[str, ndarray] | None = None, title: str = 'Return Period Curves', figsize: Tuple[int, int] = (10, 6), confidence_level: float = 0.95, show_grid: bool = True) Figure[source]

Create WSJ-style return period curve.

Parameters:
  • losses (Union[ndarray, DataFrame]) – Loss amounts (array or DataFrame)

  • return_periods (Optional[ndarray]) – Array of return periods (years), optional

  • scenarios (Optional[Dict[str, ndarray]]) – Optional dict of scenario names to loss arrays

  • title (str) – Plot title

  • figsize (Tuple[int, int]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_return_period_curve() instead.

plot_insurance_layers(layers: List[Dict[str, float]] | DataFrame, total_limit: float | None = None, title: str = 'Insurance Program Structure', figsize: Tuple[int, int] = (10, 6), losses: ndarray | DataFrame | None = None, loss_data: ndarray | DataFrame | None = None, show_expected_loss: bool = False) Figure[source]

Create WSJ-style insurance layer visualization.

Parameters:
  • layers (Union[List[Dict[str, float]], DataFrame]) – List of layer dictionaries or DataFrame with ‘attachment’, ‘limit’ columns

  • total_limit (Optional[float]) – Total program limit (calculated from layers if not provided)

  • title (str) – Plot title

  • figsize (Tuple[int, int]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_insurance_layers() instead.

create_interactive_dashboard(results: Dict[str, Any] | DataFrame, title: str = 'Monte Carlo Simulation Dashboard', height: int = 600, show_distributions: bool = False) Figure[source]

Create interactive Plotly dashboard with WSJ styling.

Parameters:
  • results (Union[Dict[str, Any], DataFrame]) – Dictionary with simulation results or DataFrame

  • title (str) – Dashboard title

  • height (int) – Dashboard height in pixels

  • show_distributions (bool) – Whether to show distribution plots

Return type:

Figure

Returns:

Plotly figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.create_interactive_dashboard() instead.

plot_convergence_diagnostics(convergence_stats: Dict[str, Any], title: str = 'Convergence Diagnostics', figsize: Tuple[int, int] = (12, 8), r_hat_threshold: float = 1.1, show_threshold: bool = False) Figure[source]

Create comprehensive convergence diagnostics plot.

Parameters:
  • convergence_stats (Dict[str, Any]) – Dictionary with convergence statistics

  • title (str) – Plot title

  • figsize (Tuple[int, int]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_convergence_diagnostics() instead.

plot_pareto_frontier_2d(frontier_points: List[Any], x_objective: str, y_objective: str, x_label: str | None = None, y_label: str | None = None, title: str = 'Pareto Frontier', highlight_knees: bool = True, show_trade_offs: bool = False, figsize: Tuple[float, float] = (10, 6)) Figure[source]

Plot 2D Pareto frontier with WSJ styling.

Parameters:
  • frontier_points (List[Any]) – List of ParetoPoint objects

  • x_objective (str) – Name of objective for x-axis

  • y_objective (str) – Name of objective for y-axis

  • x_label (Optional[str]) – Optional custom label for x-axis

  • y_label (Optional[str]) – Optional custom label for y-axis

  • title (str) – Plot title

  • highlight_knees (bool) – Whether to highlight knee points

  • show_trade_offs (bool) – Whether to show trade-off annotations

  • figsize (Tuple[float, float]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

plot_pareto_frontier_3d(frontier_points: List[Any], x_objective: str, y_objective: str, z_objective: str, x_label: str | None = None, y_label: str | None = None, z_label: str | None = None, title: str = '3D Pareto Frontier', figsize: Tuple[float, float] = (12, 8)) Figure[source]

Plot 3D Pareto frontier surface.

Parameters:
  • frontier_points (List[Any]) – List of ParetoPoint objects

  • x_objective (str) – Name of objective for x-axis

  • y_objective (str) – Name of objective for y-axis

  • z_objective (str) – Name of objective for z-axis

  • x_label (Optional[str]) – Optional custom label for x-axis

  • y_label (Optional[str]) – Optional custom label for y-axis

  • z_label (Optional[str]) – Optional custom label for z-axis

  • title (str) – Plot title

  • figsize (Tuple[float, float]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

create_interactive_pareto_frontier(frontier_points: List[Any], objectives: List[str], title: str = 'Interactive Pareto Frontier', height: int = 600, show_dominated: bool = True) Figure[source]

Create interactive Plotly Pareto frontier visualization.

Parameters:
  • frontier_points (List[Any]) – List of ParetoPoint objects

  • objectives (List[str]) – List of objective names to display

  • title (str) – Plot title

  • height (int) – Plot height in pixels

  • show_dominated (bool) – Whether to show dominated region

Return type:

Figure

Returns:

Plotly figure

plot_scenario_comparison(aggregated_results: Any, metrics: List[str] | None = None, figsize: Tuple[float, float] = (14, 8), save_path: str | None = None) Figure[source]

Create comprehensive scenario comparison visualization.

Parameters:
  • aggregated_results (Any) – AggregatedResults object from batch processing

  • metrics (Optional[List[str]]) – List of metrics to compare (default: key metrics)

  • figsize (Tuple[float, float]) – Figure size (width, height)

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Matplotlib figure

plot_sensitivity_heatmap(aggregated_results: Any, metric: str = 'mean_growth_rate', figsize: Tuple[float, float] = (10, 8), save_path: str | None = None) Figure[source]

Create sensitivity analysis heatmap.

Parameters:
  • aggregated_results (Any) – AggregatedResults with sensitivity analysis

  • metric (str) – Metric to visualize

  • figsize (Tuple[float, float]) – Figure size

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Matplotlib figure

plot_parameter_sweep_3d(aggregated_results: Any, param1: str, param2: str, metric: str = 'mean_growth_rate', height: int = 600, save_path: str | None = None) Figure[source]

Create 3D surface plot for parameter sweep results.

Parameters:
  • aggregated_results (Any) – AggregatedResults from grid search

  • param1 (str) – First parameter name

  • param2 (str) – Second parameter name

  • metric (str) – Metric to plot on z-axis

  • height (int) – Figure height in pixels

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Plotly figure

plot_scenario_convergence(batch_results: List[Any], metric: str = 'mean_growth_rate', figsize: Tuple[float, float] = (12, 6), save_path: str | None = None) Figure[source]

Plot convergence of metric across scenarios.

Parameters:
  • batch_results (List[Any]) – List of BatchResult objects

  • metric (str) – Metric to track

  • figsize (Tuple[float, float]) – Figure size

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Matplotlib figure

ergodic_insurance.walk_forward_validator module

Walk-forward validation system for insurance strategy testing.

This module implements a rolling window validation framework that tests insurance strategies across multiple time periods to detect overfitting and ensure robustness of insurance decisions.

Example

>>> from ergodic_insurance.walk_forward_validator import WalkForwardValidator
>>> from ergodic_insurance.strategy_backtester import ConservativeFixedStrategy, AdaptiveStrategy
>>> # Create validator with 3-year windows
>>> validator = WalkForwardValidator(
...     window_size=3,
...     step_size=1,
...     test_ratio=0.3
... )
>>>
>>> # Define strategies to test
>>> strategies = [
...     ConservativeFixedStrategy(),
...     AdaptiveStrategy()
... ]
>>>
>>> # Run walk-forward validation
>>> results = validator.validate_strategies(
...     strategies=strategies,
...     n_years=10,
...     n_simulations=1000
... )
>>>
>>> # Generate reports
>>> validator.generate_report(results, output_dir="./reports")
class ValidationWindow(window_id: int, train_start: int, train_end: int, test_start: int, test_end: int) None[source]

Bases: object

Represents a single validation window.

window_id

Unique identifier for the window

train_start

Start year of training period

train_end

End year of training period

test_start

Start year of testing period

test_end

End year of testing period

window_id: int
train_start: int
train_end: int
test_start: int
test_end: int
get_train_years() int[source]

Get number of training years.

Return type:

int

get_test_years() int[source]

Get number of testing years.

Return type:

int

__str__() str[source]

String representation.

Return type:

str

class WindowResult(window: ValidationWindow, strategy_performances: Dict[str, StrategyPerformance] = <factory>, optimization_params: Dict[str, Dict[str, float]] = <factory>, execution_time: float = 0.0) None[source]

Bases: object

Results from a single validation window.

window

The validation window

strategy_performances

Performance by strategy name

optimization_params

Optimized parameters if applicable

execution_time

Time to process window

window: ValidationWindow
strategy_performances: Dict[str, StrategyPerformance]
optimization_params: Dict[str, Dict[str, float]]
execution_time: float = 0.0
class ValidationResult(window_results: List[WindowResult] = <factory>, strategy_rankings: DataFrame = <factory>, overfitting_analysis: Dict[str, float] = <factory>, consistency_scores: Dict[str, float] = <factory>, best_strategy: str | None = None, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Complete walk-forward validation results.

window_results

Results for each window

strategy_rankings

Overall strategy rankings

overfitting_analysis

Overfitting detection results

consistency_scores

Strategy consistency across windows

best_strategy

Recommended strategy based on validation

metadata

Additional validation metadata

window_results: List[WindowResult]
strategy_rankings: DataFrame
overfitting_analysis: Dict[str, float]
consistency_scores: Dict[str, float]
best_strategy: str | None = None
metadata: Dict[str, Any]
class WalkForwardValidator(window_size: int = 3, step_size: int = 1, test_ratio: float = 0.3, simulation_engine: Simulation | None = None, backtester: StrategyBacktester | None = None, performance_targets: PerformanceTargets | None = None)[source]

Bases: object

Walk-forward validation system for insurance strategies.

generate_windows(total_years: int) List[ValidationWindow][source]

Generate validation windows.

Parameters:

total_years (int) – Total years of data available

Return type:

List[ValidationWindow]

Returns:

List of validation windows.
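An illustrative sketch of rolling-window construction, assuming test years are appended after each training window (the validator's exact split convention may differ):

def sketch_windows(total_years, window_size=3, step_size=1, test_ratio=0.3):
    """Roll a training window forward, reserving a slice for testing."""
    test_years = max(1, round(window_size * test_ratio))
    windows = []
    start = 0
    while start + window_size + test_years <= total_years:
        windows.append({
            "train": (start, start + window_size),
            "test": (start + window_size, start + window_size + test_years),
        })
        start += step_size
    return windows

print(sketch_windows(10))  # seven 3-year train / 1-year test splits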

validate_strategies(strategies: List[InsuranceStrategy], n_years: int = 10, n_simulations: int = 1000, manufacturer: WidgetManufacturer | None = None, config: Config | None = None) ValidationResult[source]

Validate strategies using walk-forward analysis.

Parameters:
  • strategies (List[InsuranceStrategy]) – Strategies to validate

  • n_years (int) – Total years of data to simulate

  • n_simulations (int) – Number of Monte Carlo simulations per window

  • manufacturer (Optional[WidgetManufacturer]) – Manufacturer model to simulate (optional)

  • config (Optional[Config]) – Simulation configuration (optional)

Return type:

ValidationResult

Returns:

ValidationResult with complete analysis.

generate_report(validation_result: ValidationResult, output_dir: str = './reports', include_visualizations: bool = True) Dict[str, Any][source]

Generate validation reports.

Parameters:
  • validation_result (ValidationResult) – Validation results to report

  • output_dir (str) – Directory for output files

  • include_visualizations (bool) – Whether to include plots

Return type:

Dict[str, Any]

Returns:

Dictionary of generated file paths.

Module contents

Ergodic Insurance Limits - Core Package.

This module provides the main entry point for the Ergodic Insurance Limits package, exposing the key classes and functions for insurance simulation and analysis using ergodic theory. The framework helps optimize insurance retentions and limits for businesses by analyzing time-average outcomes rather than traditional ensemble approaches.

Key Features:
  • Ergodic analysis of insurance decisions

  • Business optimization with insurance constraints

  • Monte Carlo simulation with trajectory storage

  • Insurance strategy backtesting and validation

  • Performance optimization and benchmarking

  • Comprehensive visualization and reporting

Examples

One-call analysis (recommended starting point):

from ergodic_insurance import run_analysis

results = run_analysis(
    initial_assets=10_000_000,
    loss_frequency=2.5,
    loss_severity_mean=1_000_000,
    deductible=500_000,
    coverage_limit=10_000_000,
    premium_rate=0.025,
)
print(results.summary())
results.plot()

Quick start with defaults (creates a $10M manufacturer, 50-year horizon):

from ergodic_insurance import Config

config = Config()  # All defaults — just works

From basic company info:

from ergodic_insurance import Config

config = Config.from_company(
    initial_assets=50_000_000,
    operating_margin=0.12,
)

Note

This module uses lazy imports to avoid circular dependencies during test discovery. All public API classes are accessible through the module’s __all__ list.
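Lazy imports of this kind are commonly implemented with a module-level __getattr__ (PEP 562). A generic sketch, not necessarily this package's exact mechanism:

# In a package __init__.py
import importlib

_LAZY = {"Config": "ergodic_insurance.config"}  # name -> defining module (illustrative)

def __getattr__(name):
    """Resolve exported names on first access instead of at import time."""
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name])
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")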

Since:

Version 0.4.0