ergodic_insurance package

Submodules

ergodic_insurance.accrual_manager module

Accrual and timing management for financial operations.

This module provides functionality to track timing differences between cash movements and accounting recognition, following GAAP principles.

Uses Decimal for all currency amounts to prevent floating-point precision errors.

class AccrualType(*values)[source]

Bases: Enum

Types of accrued items.

WAGES = 'wages'
INTEREST = 'interest'
TAXES = 'taxes'
INSURANCE_CLAIMS = 'insurance_claims'
REVENUE = 'revenue'
OTHER = 'other'
class PaymentSchedule(*values)[source]

Bases: Enum

Payment schedule types.

IMMEDIATE = 'immediate'
QUARTERLY = 'quarterly'
ANNUAL = 'annual'
CUSTOM = 'custom'
class AccrualItem(item_type: AccrualType, amount: Decimal, period_incurred: int, payment_schedule: PaymentSchedule, payment_dates: List[int] = <factory>, amounts_paid: List[Decimal] = <factory>, description: str = '') None[source]

Bases: object

Individual accrual item with tracking information.

Uses Decimal for all currency amounts to ensure precise calculations.

item_type: AccrualType
amount: Decimal
period_incurred: int
payment_schedule: PaymentSchedule
payment_dates: List[int]
amounts_paid: List[Decimal]
description: str = ''
__post_init__() None[source]

Convert amounts to a consistent numeric type (Decimal or float, depending on mode).

Return type:

None

property remaining_balance: Decimal

Calculate remaining unpaid balance.

property is_fully_paid: bool

Check if accrual has been fully paid.

__deepcopy__(memo: Dict[int, Any]) AccrualItem[source]

Create a deep copy of this accrual item.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

AccrualItem

Returns:

Independent copy of this AccrualItem

class AccrualManager(fiscal_year_end: int = 12)[source]

Bases: object

Manages accruals and timing differences for financial operations.

Tracks accrued expenses and revenues with various payment schedules, particularly focusing on quarterly tax payments and multi-year claim settlements. Uses a FIFO approach for payment matching.

accrued_expenses: Dict[AccrualType, List[AccrualItem]]
accrued_revenues: List[AccrualItem]
current_period: int
__deepcopy__(memo: Dict[int, Any]) AccrualManager[source]

Create a deep copy of this accrual manager.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

AccrualManager

Returns:

Independent copy of this AccrualManager with all accruals

record_expense_accrual(item_type: AccrualType, amount: Decimal | float | int, payment_schedule: PaymentSchedule = PaymentSchedule.IMMEDIATE, payment_dates: List[int] | None = None, description: str = '') AccrualItem[source]

Record an accrued expense.

Parameters:
  • item_type (AccrualType) – Type of expense being accrued

  • amount (Union[Decimal, float, int]) – Total amount to be accrued (converted to Decimal)

  • payment_schedule (PaymentSchedule) – Schedule for payments

  • payment_dates (Optional[List[int]]) – Custom payment dates if schedule is CUSTOM

  • description (str) – Optional description of the accrual

Return type:

AccrualItem

Returns:

The created AccrualItem

record_revenue_accrual(amount: Decimal | float | int, collection_dates: List[int] | None = None, description: str = '') AccrualItem[source]

Record accrued revenue not yet collected.

Parameters:
  • amount (Union[Decimal, float, int]) – Amount of revenue accrued (converted to Decimal)

  • collection_dates (Optional[List[int]]) – Expected collection dates

  • description (str) – Optional description

Return type:

AccrualItem

Returns:

The created AccrualItem

process_payment(item_type: AccrualType, amount: Decimal | float | int, period: int | None = None) List[Tuple[AccrualItem, Decimal]][source]

Process a payment against accrued items using FIFO.

Parameters:
  • item_type (AccrualType) – Type of accrual being paid

  • amount (Union[Decimal, float, int]) – Payment amount (converted to Decimal)

  • period (Optional[int]) – Period when payment is made (defaults to current)

Return type:

List[Tuple[AccrualItem, Decimal]]

Returns:

List of (AccrualItem, amount_applied) tuples with Decimal amounts
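The FIFO matching used by process_payment can be sketched in plain Python. This is an illustrative stand-in only; apply_payment_fifo and the dict shape it operates on are hypothetical, not the package's internals:

```python
from decimal import Decimal

def apply_payment_fifo(accruals, payment):
    """Apply a payment across accruals oldest-first, returning
    (accrual, amount_applied) pairs. Each accrual is a dict with
    'amount' and 'paid' Decimal fields (hypothetical shape)."""
    applied = []
    remaining = Decimal(payment)
    for item in accruals:  # assumed ordered oldest-first
        if remaining <= 0:
            break
        balance = item["amount"] - item["paid"]
        if balance <= 0:
            continue  # already fully paid
        portion = min(balance, remaining)
        item["paid"] += portion
        remaining -= portion
        applied.append((item, portion))
    return applied

accruals = [
    {"amount": Decimal("100"), "paid": Decimal("0")},
    {"amount": Decimal("50"), "paid": Decimal("0")},
]
# A 120 payment exhausts the first accrual, then partially pays the second.
applied = apply_payment_fifo(accruals, "120")
```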

get_quarterly_tax_schedule(annual_tax: Decimal | float | int) List[Tuple[int, Decimal]][source]

Calculate quarterly tax payment schedule.

Parameters:

annual_tax (Union[Decimal, float, int]) – Total annual tax liability (converted to Decimal)

Return type:

List[Tuple[int, Decimal]]

Returns:

List of (period, amount) tuples for quarterly payments (Decimal amounts)
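The quarterly schedule amounts to splitting the annual liability into four installments. A minimal sketch with Decimal arithmetic; the rounding rule (cent-rounded quarters, remainder in the final installment) is an assumption, not necessarily the package's:

```python
from decimal import Decimal, ROUND_HALF_UP

def quarterly_tax_schedule(annual_tax, start_period=0):
    """Split an annual tax liability into four (period, amount) payments.
    Rounding remainder goes to the last installment (assumed rule)."""
    total = Decimal(str(annual_tax))
    quarter = (total / 4).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    schedule = [(start_period + q, quarter) for q in range(3)]
    schedule.append((start_period + 3, total - 3 * quarter))
    return schedule

schedule = quarterly_tax_schedule("1000.10")
```

Because every amount is a Decimal, the four installments sum back to the annual liability exactly, with no floating-point drift.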

get_claim_payment_schedule(claim_amount: Decimal | float | int, development_pattern: List[Decimal | float] | None = None) List[Tuple[int, Decimal]][source]

Calculate insurance claim payment schedule over multiple years.

Parameters:
  • claim_amount (Union[Decimal, float, int]) – Total claim amount (converted to Decimal)

  • development_pattern (Optional[List[Union[Decimal, float]]]) – Percentage paid each year (defaults to standard pattern)

Return type:

List[Tuple[int, Decimal]]

Returns:

List of (period, amount) tuples for claim payments (Decimal amounts)
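A development pattern spreads a claim across years as fractions of the total. The sketch below shows the mechanics; the four-year 40/30/20/10 default pattern here is illustrative, not the package's documented default:

```python
from decimal import Decimal

def claim_payment_schedule(claim_amount, pattern=None, start_period=0):
    """Spread a claim over years per a development pattern (fractions
    summing to 1). The default pattern below is illustrative only."""
    if pattern is None:
        pattern = [Decimal("0.4"), Decimal("0.3"), Decimal("0.2"), Decimal("0.1")]
    total = Decimal(str(claim_amount))
    return [(start_period + year, total * p) for year, p in enumerate(pattern)]

schedule = claim_payment_schedule("1000000")
```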

get_total_accrued_expenses() Decimal[source]

Get total outstanding accrued expenses as Decimal.

Return type:

Decimal

get_total_accrued_revenues() Decimal[source]

Get total outstanding accrued revenues as Decimal.

Return type:

Decimal

get_accruals_by_type(item_type: AccrualType) List[AccrualItem][source]

Get all accruals of a specific type.

Parameters:

item_type (AccrualType) – Type of accrual to retrieve

Return type:

List[AccrualItem]

Returns:

List of accruals of the specified type

get_payments_due(period: int | None = None) Dict[AccrualType, Decimal][source]

Get payments due in a specific period.

Parameters:

period (Optional[int]) – Period to check (defaults to current)

Return type:

Dict[AccrualType, Decimal]

Returns:

Dictionary of payment amounts by type (Decimal values)

advance_period(periods: int = 1)[source]

Advance the current period.

Parameters:

periods (int) – Number of periods to advance

get_balance_sheet_items() Dict[str, Decimal][source]

Get accrual items for balance sheet reporting.

Return type:

Dict[str, Decimal]

Returns:

Dictionary with balance sheet line items (Decimal values)

clear_fully_paid()[source]

Remove fully paid accruals to maintain performance.

ergodic_insurance.accuracy_validator module

Numerical accuracy validation for Monte Carlo simulations.

This module provides tools to validate the numerical accuracy of optimized Monte Carlo simulations against reference implementations, ensuring that performance optimizations don’t compromise result quality.

Key features:
  • High-precision reference implementations

  • Statistical validation of distributions

  • Edge case and boundary condition testing

  • Accuracy comparison metrics

  • Detailed validation reports

Example

>>> from ergodic_insurance.accuracy_validator import AccuracyValidator
>>> import numpy as np
>>>
>>> validator = AccuracyValidator()
>>>
>>> # Compare optimized vs reference implementation
>>> optimized_results = np.random.normal(0.08, 0.02, 10000)
>>> reference_results = np.random.normal(0.08, 0.02, 10000)
>>>
>>> validation = validator.compare_implementations(
...     optimized_results, reference_results
... )
>>> print(f"Accuracy: {validation.accuracy_score:.4f}")

Google-style docstrings are used throughout for Sphinx documentation.

class ValidationResult(accuracy_score: float, mean_error: float = 0.0, max_error: float = 0.0, relative_error: float = 0.0, ks_statistic: float = 0.0, ks_pvalue: float = 0.0, passed_tests: List[str] = <factory>, failed_tests: List[str] = <factory>, edge_cases: Dict[str, bool] = <factory>) None[source]

Bases: object

Results from accuracy validation.

accuracy_score: float

Overall accuracy score (0-1)

mean_error: float = 0.0

Mean absolute error

max_error: float = 0.0

Maximum absolute error

relative_error: float = 0.0

Mean relative error

ks_statistic: float = 0.0

Kolmogorov-Smirnov test statistic

ks_pvalue: float = 0.0

Kolmogorov-Smirnov test p-value

passed_tests: List[str]

List of passed validation tests

failed_tests: List[str]

List of failed validation tests

edge_cases: Dict[str, bool]

Results from edge case testing

is_valid(tolerance: float = 0.01) bool[source]

Check if validation passes within tolerance.

Parameters:

tolerance (float) – Maximum acceptable relative error.

Return type:

bool

Returns:

True if validation passes.

summary() str[source]

Generate validation summary.

Return type:

str

Returns:

Formatted summary string.

class ReferenceImplementations[source]

Bases: object

High-precision reference implementations for validation.

These implementations prioritize accuracy over speed and serve as the ground truth for validation.

static calculate_growth_rate_precise(final_assets: float, initial_assets: float, n_years: float) float[source]

Calculate growth rate with high precision.

Parameters:
  • final_assets (float) – Final asset value.

  • initial_assets (float) – Initial asset value.

  • n_years (float) – Number of years.

Return type:

float

Returns:

Precise growth rate.

static apply_insurance_precise(loss: float, attachment: float, limit: float) Tuple[float, float][source]

Apply insurance with precise calculations.

Parameters:
  • loss (float) – Loss amount.

  • attachment (float) – Insurance attachment point.

  • limit (float) – Insurance limit.

Return type:

Tuple[float, float]

Returns:

Tuple of (retained_loss, recovered_amount).

static calculate_var_precise(losses: ndarray, confidence: float) float[source]

Calculate Value at Risk with high precision.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • confidence (float) – Confidence level (e.g., 0.95).

Return type:

float

Returns:

VaR at specified confidence level.

static calculate_tvar_precise(losses: ndarray, confidence: float) float[source]

Calculate Tail Value at Risk with high precision.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • confidence (float) – Confidence level (e.g., 0.95).

Return type:

float

Returns:

TVaR at specified confidence level.

static calculate_ruin_probability_precise(paths: ndarray, threshold: float = 0.0) float[source]

Calculate ruin probability with high precision.

Parameters:
  • paths (ndarray) – Array of asset paths.

  • threshold (float) – Ruin threshold.

Return type:

float

Returns:

Probability of ruin.
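The reference calculations above reduce to standard layer and tail formulas. A minimal numpy sketch of the same quantities; the function names mirror, but are not, the package's static methods:

```python
import numpy as np

def apply_insurance(loss, attachment, limit):
    """Retained/recovered split for a single excess layer (limit xs attachment)."""
    recovered = min(max(loss - attachment, 0.0), limit)
    return loss - recovered, recovered

def var(losses, confidence):
    """Value at Risk: the `confidence` quantile of the loss distribution."""
    return float(np.quantile(losses, confidence))

def tvar(losses, confidence):
    """Tail Value at Risk: mean of losses at or beyond the VaR threshold."""
    losses = np.asarray(losses, dtype=float)
    return float(losses[losses >= var(losses, confidence)].mean())

# A loss of 15 against an 8 xs 5 layer: 8 recovered, 7 retained.
retained, recovered = apply_insurance(15.0, 5.0, 8.0)
losses = np.arange(1.0, 101.0)
v = var(losses, 0.95)
t = tvar(losses, 0.95)
```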

class StatisticalValidation[source]

Bases: object

Statistical tests for distribution validation.

static compare_distributions(data1: ndarray, data2: ndarray) Dict[str, Any][source]

Compare two distributions statistically.

Parameters:
  • data1 (ndarray) – First dataset.

  • data2 (ndarray) – Second dataset.

Return type:

Dict[str, Any]

Returns:

Dictionary of statistical test results.

static validate_statistical_properties(data: ndarray, expected_mean: float, expected_std: float, tolerance: float = 0.05) Dict[str, bool][source]

Validate statistical properties of data.

Parameters:
  • data (ndarray) – Data to validate.

  • expected_mean (float) – Expected mean value.

  • expected_std (float) – Expected standard deviation.

  • tolerance (float) – Relative tolerance for validation.

Return type:

Dict[str, bool]

Returns:

Dictionary of validation results.
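The property validation above is essentially a relative-tolerance check on sample moments. A hedged sketch of that idea (the returned key names are illustrative, not the package's):

```python
import numpy as np

def validate_properties(data, expected_mean, expected_std, tolerance=0.05):
    """Check sample mean and std against expected values within a
    relative tolerance; returns a dict of booleans (illustrative shape)."""
    data = np.asarray(data, dtype=float)
    mean_ok = abs(data.mean() - expected_mean) <= tolerance * abs(expected_mean)
    std_ok = abs(data.std(ddof=1) - expected_std) <= tolerance * abs(expected_std)
    return {"mean_valid": bool(mean_ok), "std_valid": bool(std_ok)}

rng = np.random.default_rng(42)
result = validate_properties(rng.normal(0.08, 0.02, 50_000), 0.08, 0.02)
```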

class EdgeCaseTester[source]

Bases: object

Test edge cases and boundary conditions.

static test_extreme_values() Dict[str, bool][source]

Test handling of extreme values.

Return type:

Dict[str, bool]

Returns:

Dictionary of test results.

static test_boundary_conditions() Dict[str, bool][source]

Test boundary conditions.

Return type:

Dict[str, bool]

Returns:

Dictionary of test results.

class AccuracyValidator(tolerance: float = 0.01)[source]

Bases: object

Main accuracy validation engine.

Provides comprehensive validation of numerical accuracy for Monte Carlo simulations.

compare_implementations(optimized_results: ndarray, reference_results: ndarray, test_name: str = 'Implementation Comparison') ValidationResult[source]

Compare optimized implementation against reference.

Parameters:
  • optimized_results (ndarray) – Results from optimized implementation.

  • reference_results (ndarray) – Results from reference implementation.

  • test_name (str) – Name of the test being performed.

Return type:

ValidationResult

Returns:

ValidationResult with comparison metrics.

validate_growth_rates(optimized_func: Callable, test_cases: List[Tuple] | None = None) ValidationResult[source]

Validate growth rate calculations.

Parameters:
  • optimized_func (Callable) – Optimized growth rate function.

  • test_cases (Optional[List[Tuple]]) – List of (final, initial, years) test cases.

Return type:

ValidationResult

Returns:

ValidationResult for growth rate calculations.

validate_insurance_calculations(optimized_func: Callable, test_cases: List[Tuple] | None = None) ValidationResult[source]

Validate insurance calculations.

Parameters:
  • optimized_func (Callable) – Optimized insurance function.

  • test_cases (Optional[List[Tuple]]) – List of (loss, attachment, limit) test cases.

Return type:

ValidationResult

Returns:

ValidationResult for insurance calculations.

validate_risk_metrics(optimized_var: Callable, optimized_tvar: Callable, test_data: ndarray | None = None) ValidationResult[source]

Validate risk metric calculations.

Parameters:
  • optimized_var (Callable) – Optimized VaR function.

  • optimized_tvar (Callable) – Optimized TVaR function.

  • test_data (Optional[ndarray]) – Test loss data.

Return type:

ValidationResult

Returns:

ValidationResult for risk metrics.

run_full_validation() ValidationResult[source]

Run comprehensive validation suite.

Return type:

ValidationResult

Returns:

Complete ValidationResult.

generate_validation_report(results: List[ValidationResult]) str[source]

Generate comprehensive validation report.

Parameters:

results (List[ValidationResult]) – List of validation results.

Return type:

str

Returns:

Formatted validation report.

ergodic_insurance.adaptive_stopping module

Adaptive stopping criteria for Monte Carlo simulations.

This module implements adaptive stopping rules based on convergence diagnostics, allowing simulations to terminate early when convergence criteria are met.

class StoppingRule(*values)[source]

Bases: Enum

Enumeration of available stopping rules.

R_HAT = 'r_hat'
ESS = 'ess'
RELATIVE_CHANGE = 'relative_change'
MCSE = 'mcse'
GEWEKE = 'geweke'
HEIDELBERGER = 'heidelberger'
COMBINED = 'combined'
CUSTOM = 'custom'
class StoppingCriteria(rule: StoppingRule = StoppingRule.COMBINED, r_hat_threshold: float = 1.05, min_ess: int = 1000, relative_tolerance: float = 0.01, mcse_relative_threshold: float = 0.05, min_iterations: int = 1000, max_iterations: int = 100000, check_interval: int = 100, patience: int = 3, confidence_level: float = 0.95) None[source]

Bases: object

Configuration for stopping criteria.

rule: StoppingRule = 'combined'
r_hat_threshold: float = 1.05
min_ess: int = 1000
relative_tolerance: float = 0.01
mcse_relative_threshold: float = 0.05
min_iterations: int = 1000
max_iterations: int = 100000
check_interval: int = 100
patience: int = 3
confidence_level: float = 0.95
__post_init__()[source]

Validate criteria after initialization.

class ConvergenceStatus(converged: bool, iteration: int, reason: str, diagnostics: Dict[str, float], should_stop: bool, estimated_remaining: int | None = None) None[source]

Bases: object

Container for convergence status information.

converged: bool
iteration: int
reason: str
diagnostics: Dict[str, float]
should_stop: bool
estimated_remaining: int | None = None
__str__() str[source]

String representation of convergence status.

Return type:

str

class AdaptiveStoppingMonitor(criteria: StoppingCriteria | None = None, custom_rule: Callable | None = None)[source]

Bases: object

Monitor for adaptive stopping based on convergence criteria.

Provides sophisticated adaptive stopping with multiple criteria, burn-in detection, and convergence rate estimation.

r_hat_history: List[float]
ess_history: List[float]
mean_history: List[float]
variance_history: List[float]
iteration_history: List[int]
convergence_rate: float | None
estimated_total_iterations: int | None
check_convergence(iteration: int, chains: ndarray, diagnostics: Dict[str, float] | None = None) ConvergenceStatus[source]

Check if convergence criteria are met.

Parameters:
  • iteration (int) – Current iteration number

  • chains (ndarray) – Array of chain values

  • diagnostics (Optional[Dict[str, float]]) – Pre-calculated diagnostics (optional)

Return type:

ConvergenceStatus

Returns:

ConvergenceStatus object with convergence information

detect_adaptive_burn_in(chains: ndarray, method: str = 'geweke') int[source]

Detect burn-in period adaptively.

Parameters:
  • chains (ndarray) – Array of chain values

  • method (str) – Method for burn-in detection

Return type:

int

Returns:

Estimated burn-in period

estimate_convergence_rate(diagnostic_history: List[float], target_value: float = 1.0) Tuple[float, int][source]

Estimate convergence rate and iterations to target.

Parameters:
  • diagnostic_history (List[float]) – History of diagnostic values

  • target_value (float) – Target value for convergence

Return type:

Tuple[float, int]

Returns:

Tuple of (convergence_rate, estimated_iterations_to_target)

get_stopping_summary() Dict[str, Any][source]

Get summary of stopping monitor state.

Return type:

Dict[str, Any]

Returns:

Dictionary with monitor summary information
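The R-hat criterion referenced throughout this module is the Gelman-Rubin statistic, which compares between-chain and within-chain variance; values near 1 suggest convergence. A minimal sketch of the classic (non-split) form, not the package's implementation:

```python
import numpy as np

def gelman_rubin_r_hat(chains):
    """Classic R-hat for an (n_chains, n_samples) array. Combines
    within-chain variance W and between-chain variance B into a
    pooled variance estimate; R-hat = sqrt(var_hat / W)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    within = chains.var(axis=1, ddof=1).mean()          # W
    between = n * chains.mean(axis=1).var(ddof=1)       # B
    var_hat = (n - 1) / n * within + between / n
    return float(np.sqrt(var_hat / within))

rng = np.random.default_rng(0)
converged = rng.normal(0.0, 1.0, size=(4, 1000))  # chains share one distribution
r = gelman_rubin_r_hat(converged)  # expect ~1, below a 1.05 threshold
```

A monitor would compare `r` against `r_hat_threshold` (1.05 by default above) and only stop once `min_iterations` and the other criteria are also satisfied.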

ergodic_insurance.batch_processor module

Batch processing engine for running multiple simulation scenarios.

This module provides a framework for executing multiple scenarios in parallel or serial, with support for checkpointing, resumption, and result aggregation.

class ProcessingStatus(*values)[source]

Bases: Enum

Status of scenario processing.

PENDING = 'pending'
RUNNING = 'running'
COMPLETED = 'completed'
FAILED = 'failed'
SKIPPED = 'skipped'
class BatchResult(scenario_id: str, scenario_name: str, status: ProcessingStatus, simulation_results: MonteCarloResults | None = None, execution_time: float = 0.0, error_message: str | None = None, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Result from a single scenario execution.

scenario_id

Unique scenario identifier

scenario_name

Human-readable scenario name

status

Processing status

simulation_results

Monte Carlo simulation results

execution_time

Time taken to execute scenario

error_message

Error message if failed

metadata

Additional result metadata

scenario_id: str
scenario_name: str
status: ProcessingStatus
simulation_results: MonteCarloResults | None = None
execution_time: float = 0.0
error_message: str | None = None
metadata: Dict[str, Any]
class AggregatedResults(batch_results: List[BatchResult], summary_statistics: DataFrame, comparison_metrics: Dict[str, DataFrame], sensitivity_analysis: DataFrame | None = None, execution_summary: Dict[str, Any] = <factory>) None[source]

Bases: object

Aggregated results from batch processing.

batch_results

Individual scenario results

summary_statistics

Summary stats across scenarios

comparison_metrics

Comparative metrics between scenarios

sensitivity_analysis

Sensitivity analysis results

execution_summary

Batch execution summary

batch_results: List[BatchResult]
summary_statistics: DataFrame
comparison_metrics: Dict[str, DataFrame]
sensitivity_analysis: DataFrame | None = None
execution_summary: Dict[str, Any]
get_successful_results() List[BatchResult][source]

Get only successful results.

Return type:

List[BatchResult]

to_dataframe() DataFrame[source]

Convert results to DataFrame for analysis.

Return type:

DataFrame

Returns:

DataFrame with scenario results

class CheckpointData(completed_scenarios: Set[str], failed_scenarios: Set[str], batch_results: List[BatchResult], timestamp: datetime, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Checkpoint data for resumable batch processing.

completed_scenarios: Set[str]
failed_scenarios: Set[str]
batch_results: List[BatchResult]
timestamp: datetime
metadata: Dict[str, Any]
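The checkpoint/resume pattern behind CheckpointData can be sketched generically: persist the sets of completed and failed scenario IDs so a rerun can skip work already done. This is an illustrative pattern (pickle format and field names assumed), not the package's actual serialization:

```python
import pickle
import tempfile
from datetime import datetime
from pathlib import Path

def save_checkpoint(path, completed, failed, results):
    """Write a resumable checkpoint; the payload shape loosely mirrors
    CheckpointData (illustrative only)."""
    payload = {
        "completed_scenarios": set(completed),
        "failed_scenarios": set(failed),
        "batch_results": list(results),
        "timestamp": datetime.now(),
    }
    with open(path, "wb") as fh:
        pickle.dump(payload, fh)

def load_checkpoint(path):
    """Load a checkpoint; callers skip scenarios already completed."""
    with open(path, "rb") as fh:
        return pickle.load(fh)

ckpt_path = Path(tempfile.mkdtemp()) / "batch.ckpt"
save_checkpoint(ckpt_path, {"s1", "s2"}, {"s3"}, [{"id": "s1"}])
state = load_checkpoint(ckpt_path)
# On resume, only scenarios not yet completed remain to be run.
remaining = {"s1", "s2", "s3", "s4"} - state["completed_scenarios"]
```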
class BatchProcessor(loss_generator: ManufacturingLossGenerator | None = None, insurance_program: InsuranceProgram | None = None, manufacturer: WidgetManufacturer | None = None, n_workers: int | None = None, checkpoint_dir: Path | None = None, use_parallel: bool = True, progress_bar: bool = True)[source]

Bases: object

Engine for batch processing multiple simulation scenarios.

batch_results: List[BatchResult]
completed_scenarios: Set[str]
failed_scenarios: Set[str]
process_batch(scenarios: List[ScenarioConfig], resume_from_checkpoint: bool = True, checkpoint_interval: int = 10, max_failures: int | None = None, priority_threshold: int | None = None) AggregatedResults[source]

Process a batch of scenarios.

Parameters:
  • scenarios (List[ScenarioConfig]) – List of scenarios to process

  • resume_from_checkpoint (bool) – Whether to resume from checkpoint

  • checkpoint_interval (int) – Save checkpoint every N scenarios

  • max_failures (Optional[int]) – Maximum allowed failures before stopping

  • priority_threshold (Optional[int]) – Only process scenarios up to this priority

Return type:

AggregatedResults

Returns:

Aggregated results from batch processing

clear_checkpoints() None[source]

Clear all checkpoints.

Return type:

None

export_results(path: str | Path, export_format: str = 'csv') None[source]

Export aggregated results to file.

Parameters:
  • path (Union[str, Path]) – Output file path

  • export_format (str) – Export format (csv, json, excel)

Return type:

None

export_financial_statements(path: str | Path) None[source]

Export comprehensive financial statements to Excel.

Generates detailed financial statements including balance sheets, income statements, cash flow statements, reconciliation reports, and metrics dashboards for each scenario.

Parameters:

path (Union[str, Path]) – Output directory path for Excel files

Return type:

None

ergodic_insurance.benchmarking module

Comprehensive benchmarking suite for Monte Carlo simulations.

This module provides tools for benchmarking Monte Carlo engine performance, targeting 100K simulations in under 60 seconds on 4-core CPUs with <4GB memory.

Key features:
  • Performance benchmarking at multiple scales (1K, 10K, 100K)

  • Memory usage tracking and profiling

  • CPU efficiency monitoring

  • Cache effectiveness measurement

  • Automated performance report generation

  • Comparison of optimization strategies

Example

>>> from ergodic_insurance.benchmarking import BenchmarkSuite, BenchmarkConfig
>>> from ergodic_insurance.monte_carlo import MonteCarloEngine
>>>
>>> suite = BenchmarkSuite()
>>> config = BenchmarkConfig(scales=[1000, 10000, 100000])
>>>
>>> # Run comprehensive benchmarks
>>> results = suite.run_comprehensive_benchmark(engine, config)
>>> print(results.summary())
>>>
>>> # Check if performance targets are met
>>> if results.meets_requirements():
...     print("✓ All performance targets achieved!")

Google-style docstrings are used throughout for Sphinx documentation.

class BenchmarkMetrics(execution_time: float, simulations_per_second: float, memory_peak_mb: float, memory_average_mb: float, cpu_utilization: float = 0.0, cache_hit_rate: float = 0.0, accuracy_score: float = 1.0, convergence_iterations: int = 0) None[source]

Bases: object

Metrics collected during benchmarking.

execution_time

Total execution time in seconds

simulations_per_second

Throughput metric

memory_peak_mb

Peak memory usage in MB

memory_average_mb

Average memory usage in MB

cpu_utilization

Average CPU utilization percentage

cache_hit_rate

Cache effectiveness percentage

accuracy_score

Numerical accuracy score

convergence_iterations

Iterations to convergence

execution_time: float
simulations_per_second: float
memory_peak_mb: float
memory_average_mb: float
cpu_utilization: float = 0.0
cache_hit_rate: float = 0.0
accuracy_score: float = 1.0
convergence_iterations: int = 0
to_dict() Dict[str, Any][source]

Convert metrics to dictionary.

Return type:

Dict[str, Any]

Returns:

Dictionary representation of metrics.

class BenchmarkResult(scale: int, metrics: BenchmarkMetrics, configuration: Dict[str, Any], timestamp: datetime, system_info: Dict[str, Any] = <factory>, optimizations: List[str] = <factory>) None[source]

Bases: object

Results from a benchmark run.

scale

Number of simulations

metrics

Performance metrics

configuration

Configuration used

timestamp

When benchmark was run

system_info

System information

optimizations

Optimizations applied

scale: int
metrics: BenchmarkMetrics
configuration: Dict[str, Any]
timestamp: datetime
system_info: Dict[str, Any]
optimizations: List[str]
meets_target(target_time: float, target_memory: float) bool[source]

Check if result meets performance targets.

Parameters:
  • target_time (float) – Maximum execution time in seconds.

  • target_memory (float) – Maximum memory usage in MB.

Return type:

bool

Returns:

True if targets are met.

summary() str[source]

Generate result summary.

Return type:

str

Returns:

Formatted summary string.

class BenchmarkConfig(scales: List[int] = <factory>, n_years: int = 10, n_workers: int = 4, memory_limit_mb: float = 4000.0, target_times: Dict[int, float] = <factory>, repetitions: int = 3, warmup_runs: int = 2, enable_profiling: bool = True) None[source]

Bases: object

Configuration for benchmarking.

scales

List of simulation counts to test

n_years

Years per simulation

n_workers

Number of parallel workers

memory_limit_mb

Memory limit for testing

target_times

Target execution times per scale

repetitions

Number of repetitions per test

warmup_runs

Number of warmup runs

enable_profiling

Enable detailed profiling

scales: List[int]
n_years: int = 10
n_workers: int = 4
memory_limit_mb: float = 4000.0
target_times: Dict[int, float]
repetitions: int = 3
warmup_runs: int = 2
enable_profiling: bool = True
class SystemProfiler[source]

Bases: object

Profile system resources during benchmarking.

start() None[source]

Start profiling.

Return type:

None

sample() None[source]

Take a resource sample.

Return type:

None

get_metrics() Tuple[float, float, float][source]

Get profiling metrics.

Return type:

Tuple[float, float, float]

Returns:

Tuple of (avg_cpu, peak_memory, avg_memory).

static get_system_info() Dict[str, Any][source]

Get system information.

Return type:

Dict[str, Any]

Returns:

Dictionary of system information.

class BenchmarkRunner(profiler: SystemProfiler | None = None)[source]

Bases: object

Run individual benchmarks with monitoring.

run_single_benchmark(func: Callable, args: Tuple = (), kwargs: Dict | None = None, monitor_interval: float = 0.1) BenchmarkMetrics[source]

Run a single benchmark with monitoring.

Parameters:
  • func (Callable) – Function to benchmark.

  • args (Tuple) – Positional arguments for function.

  • kwargs (Optional[Dict]) – Keyword arguments for function.

  • monitor_interval (float) – Monitoring interval in seconds.

Return type:

BenchmarkMetrics

Returns:

BenchmarkMetrics from the run.

run_with_warmup(func: Callable, args: Tuple = (), kwargs: Dict | None = None, warmup_runs: int = 2, benchmark_runs: int = 3) List[BenchmarkMetrics][source]

Run benchmark with warmup.

Parameters:
  • func (Callable) – Function to benchmark.

  • args (Tuple) – Positional arguments.

  • kwargs (Optional[Dict]) – Keyword arguments.

  • warmup_runs (int) – Number of warmup runs.

  • benchmark_runs (int) – Number of benchmark runs.

Return type:

List[BenchmarkMetrics]

Returns:

List of benchmark metrics.

class BenchmarkSuite[source]

Bases: object

Comprehensive benchmark suite for Monte Carlo simulations.

Provides tools to benchmark performance across different scales and configurations, generating detailed reports.

results: List[BenchmarkResult]
benchmark_scale(engine, scale: int, config: BenchmarkConfig, optimizations: List[str] | None = None) BenchmarkResult[source]

Benchmark at a specific scale.

Parameters:
  • engine – Monte Carlo engine to benchmark.

  • scale (int) – Number of simulations.

  • config (BenchmarkConfig) – Benchmark configuration.

  • optimizations (Optional[List[str]]) – List of applied optimizations.

Return type:

BenchmarkResult

Returns:

BenchmarkResult for this scale.

run_comprehensive_benchmark(engine, config: BenchmarkConfig | None = None) ComprehensiveBenchmarkResult[source]

Run comprehensive benchmark suite.

Parameters:
  • engine – Monte Carlo engine to benchmark.

  • config (Optional[BenchmarkConfig]) – Benchmark configuration (defaults to a standard configuration if None).

Return type:

ComprehensiveBenchmarkResult

Returns:

ComprehensiveBenchmarkResult with all results.

compare_configurations(engine_factory: Callable, configurations: List[Dict[str, Any]], scale: int = 10000) ConfigurationComparison[source]

Compare different configurations.

Parameters:
  • engine_factory (Callable) – Factory function to create engines.

  • configurations (List[Dict[str, Any]]) – List of configuration dictionaries.

  • scale (int) – Number of simulations to test.

Return type:

ConfigurationComparison

Returns:

ConfigurationComparison results.

class ComprehensiveBenchmarkResult(results: List[BenchmarkResult], config: BenchmarkConfig, system_info: Dict[str, Any]) None[source]

Bases: object

Results from comprehensive benchmark suite.

results

List of individual benchmark results

config

Configuration used

system_info

System information

results: List[BenchmarkResult]
config: BenchmarkConfig
system_info: Dict[str, Any]
meets_requirements() bool[source]

Check if all requirements are met.

Return type:

bool

Returns:

True if all performance requirements are satisfied.

summary() str[source]

Generate comprehensive summary.

Return type:

str

Returns:

Formatted summary string.

save_report(filepath: str) None[source]

Save benchmark report to file.

Parameters:

filepath (str) – Path to save report.

Return type:

None

class ConfigurationComparison(results: List[Dict[str, Any]]) None[source]

Bases: object

Results from configuration comparison.

results: List[Dict[str, Any]]
best_configuration() Dict[str, Any][source]

Find best configuration.

Return type:

Dict[str, Any]

Returns:

Best configuration based on execution time.

summary() str[source]

Generate comparison summary.

Return type:

str

Returns:

Formatted summary string.

run_quick_benchmark(engine, n_simulations: int = 10000) BenchmarkMetrics[source]

Run a quick benchmark.

Parameters:
  • engine – Monte Carlo engine to benchmark.

  • n_simulations (int) – Number of simulations.

Return type:

BenchmarkMetrics

Returns:

BenchmarkMetrics from the run.
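The core of a quick benchmark is a wall-clock timing of the simulation callable plus a throughput figure. A minimal stand-in (function names and the fake engine are hypothetical, not the package's):

```python
import time

def quick_benchmark(func, n_simulations):
    """Time a simulation callable and report throughput; a minimal
    stand-in for run_quick_benchmark (no memory/CPU monitoring)."""
    start = time.perf_counter()
    func(n_simulations)
    elapsed = time.perf_counter() - start
    return {
        "execution_time": elapsed,
        "simulations_per_second": n_simulations / elapsed if elapsed > 0 else float("inf"),
    }

def fake_engine_run(n):
    """Hypothetical workload standing in for an engine's run method."""
    total = 0.0
    for i in range(n):
        total += i * 1e-6
    return total

metrics = quick_benchmark(fake_engine_run, 10_000)
```

The real BenchmarkMetrics additionally tracks peak/average memory, CPU utilization, and cache hit rate via the SystemProfiler.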

ergodic_insurance.bootstrap_analysis module

Bootstrap confidence interval analysis for simulation results.

This module provides comprehensive bootstrap analysis capabilities for statistical significance testing and confidence interval calculation. Supports both percentile and BCa (bias-corrected and accelerated) methods with parallel processing for performance optimization.

Example

>>> import numpy as np
>>> from ergodic_insurance.bootstrap_analysis import BootstrapAnalyzer
>>> # Create sample data
>>> data = np.random.normal(100, 15, 1000)
>>> analyzer = BootstrapAnalyzer(n_bootstrap=10000, seed=42)
>>> # Calculate confidence interval for mean
>>> ci = analyzer.confidence_interval(data, np.mean)
>>> print(f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
>>> # Parallel bootstrap for faster computation
>>> ci_parallel = analyzer.confidence_interval(
...     data, np.mean, method='bca', parallel=True
... )
DEFAULT_N_BOOTSTRAP

Default number of bootstrap iterations (10000).

Type:

int

DEFAULT_CONFIDENCE

Default confidence level (0.95).

Type:

float

DEFAULT_N_WORKERS

Default number of parallel workers (4).

Type:

int

class BootstrapResult(statistic: float, confidence_level: float, confidence_interval: Tuple[float, float], bootstrap_distribution: ndarray, method: str, n_bootstrap: int, bias: float | None = None, acceleration: float | None = None, converged: bool = True, metadata: Dict[str, Any] | None = None) None[source]

Bases: object

Container for bootstrap analysis results.

statistic: float
confidence_level: float
confidence_interval: Tuple[float, float]
bootstrap_distribution: ndarray
method: str
n_bootstrap: int
bias: float | None = None
acceleration: float | None = None
converged: bool = True
metadata: Dict[str, Any] | None = None
summary() str[source]

Generate human-readable summary of bootstrap results.

Return type:

str

Returns:

Formatted string with key bootstrap statistics.

class BootstrapAnalyzer(n_bootstrap: int = 10000, confidence_level: float = 0.95, seed: int | None = None, n_workers: int = 4, show_progress: bool = True)[source]

Bases: object

Main class for bootstrap confidence interval analysis.

Provides methods for calculating bootstrap confidence intervals using various methods including percentile and BCa. Supports parallel processing for improved performance with large datasets.

Parameters:
  • n_bootstrap (int) – Number of bootstrap iterations (default 10000).

  • confidence_level (float) – Confidence level for intervals (default 0.95).

  • seed (Optional[int]) – Random seed for reproducibility (default None).

  • n_workers (int) – Number of parallel workers (default 4).

  • show_progress (bool) – Whether to show progress bar (default True).

Example

>>> analyzer = BootstrapAnalyzer(n_bootstrap=5000, confidence_level=0.99)
>>> data = np.random.exponential(2, 1000)
>>> result = analyzer.confidence_interval(data, np.median)
>>> print(result.summary())
DEFAULT_N_BOOTSTRAP = 10000
DEFAULT_CONFIDENCE = 0.95
DEFAULT_N_WORKERS = 4
bootstrap_sample(data: ndarray, statistic: Callable[[ndarray], float], n_samples: int = 1) ndarray[source]

Generate bootstrap samples and compute statistics.

Parameters:
  • data (ndarray) – Input data array.

  • statistic (Callable[[ndarray], float]) – Function to compute on each bootstrap sample.

  • n_samples (int) – Number of bootstrap samples to generate.

Return type:

ndarray

Returns:

Array of bootstrap statistics.
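The resampling logic behind bootstrap_sample can be sketched as follows (a minimal illustration under stated assumptions; the function body here is not the package's actual implementation):

```python
import numpy as np

def bootstrap_sample(data, statistic, n_samples=1, rng=None):
    """Resample `data` with replacement and apply `statistic` to each resample."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    # One row of resample indices per bootstrap sample, drawn with replacement.
    idx = rng.integers(0, n, size=(n_samples, n))
    return np.array([statistic(data[row]) for row in idx])

stats = bootstrap_sample(np.arange(100.0), np.mean, n_samples=500,
                         rng=np.random.default_rng(42))
```

The bootstrap means cluster around the sample mean (49.5 here), with spread governed by the standard error of the statistic.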

confidence_interval(data: ndarray, statistic: Callable[[ndarray], float], confidence_level: float | None = None, method: str = 'percentile', parallel: bool = False) BootstrapResult[source]

Calculate bootstrap confidence interval for a statistic.

Parameters:
  • data (ndarray) – Input data array.

  • statistic (Callable[[ndarray], float]) – Function to compute the statistic of interest.

  • confidence_level (Optional[float]) – Confidence level (uses default if None).

  • method (str) – ‘percentile’ or ‘bca’ (bias-corrected and accelerated).

  • parallel (bool) – Whether to use parallel processing.

Return type:

BootstrapResult

Returns:

BootstrapResult containing confidence interval and diagnostics.

Raises:

ValueError – If method is not ‘percentile’ or ‘bca’.

multi_confidence_interval(metrics: Dict[str, Tuple[ndarray, Callable[[ndarray], float]]], confidence_level: float | None = None, method: str = 'percentile') Dict[str, BootstrapResult][source]

Calculate bootstrap CIs for multiple metrics using shared resampling indices.

Instead of generating independent bootstrap resamples for each metric, this method generates one set of indices per iteration and applies all statistics, giving a speedup roughly proportional to the number of metrics.

Parameters:
  • metrics (Dict[str, Tuple[ndarray, Callable[[ndarray], float]]]) – Dictionary mapping metric names to (data, statistic) tuples. All data arrays must have the same length.

  • confidence_level (Optional[float]) – Confidence level (uses instance default if None).

  • method (str) – 'percentile' or 'bca'.

Return type:

Dict[str, BootstrapResult]

Returns:

Dictionary mapping metric names to BootstrapResult objects.

Raises:

ValueError – If data arrays differ in length or method is invalid.
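The shared-index approach can be sketched as below (a minimal percentile-only illustration; the name `shared_index_cis` is invented for this example and the real method also supports BCa):

```python
import numpy as np

def shared_index_cis(metrics, n_bootstrap=2000, confidence=0.95, seed=0):
    """Percentile CIs for several metrics, reusing one index draw per iteration.

    `metrics` maps name -> (data, statistic); all data arrays must share one
    length, since a single set of resample indices is applied to every metric.
    """
    lengths = {len(d) for d, _ in metrics.values()}
    if len(lengths) != 1:
        raise ValueError("All data arrays must have the same length")
    n = lengths.pop()
    rng = np.random.default_rng(seed)
    dists = {name: np.empty(n_bootstrap) for name in metrics}
    for i in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)  # one resample, shared by all metrics
        for name, (data, stat) in metrics.items():
            dists[name][i] = stat(data[idx])
    alpha = 100 * (1 - confidence) / 2
    return {name: tuple(np.percentile(d, [alpha, 100 - alpha]))
            for name, d in dists.items()}

data = np.random.default_rng(1).normal(100, 15, 1000)
cis = shared_index_cis({"mean": (data, np.mean), "std": (data, np.std)})
```

Each iteration costs one index draw regardless of how many statistics are evaluated, which is where the speedup comes from.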

compare_statistics(data1: ndarray, data2: ndarray, statistic: Callable[[ndarray], float], comparison: str = 'difference') BootstrapResult[source]

Compare statistics between two datasets using bootstrap.

Parameters:
  • data1 (ndarray) – First dataset.

  • data2 (ndarray) – Second dataset.

  • statistic (Callable[[ndarray], float]) – Function to compute on each dataset.

  • comparison (str) – Type of comparison (‘difference’ or ‘ratio’).

Return type:

BootstrapResult

Returns:

BootstrapResult for the comparison statistic.

Raises:

ValueError – If comparison type is not supported.
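The two-sample comparison can be illustrated by resampling each dataset independently and combining the statistics per iteration (a sketch; `bootstrap_compare` is an invented name, not the package API):

```python
import numpy as np

def bootstrap_compare(data1, data2, statistic, comparison="difference",
                      n_bootstrap=2000, seed=0):
    """Bootstrap distribution of the difference (or ratio) of a statistic."""
    if comparison not in ("difference", "ratio"):
        raise ValueError(f"Unsupported comparison: {comparison}")
    rng = np.random.default_rng(seed)
    out = np.empty(n_bootstrap)
    for i in range(n_bootstrap):
        # Resample each dataset independently, then combine the two statistics.
        s1 = statistic(rng.choice(data1, size=len(data1), replace=True))
        s2 = statistic(rng.choice(data2, size=len(data2), replace=True))
        out[i] = s1 - s2 if comparison == "difference" else s1 / s2
    return out

# Degenerate check: constant inputs give a constant comparison statistic.
diffs = bootstrap_compare(np.full(50, 5.0), np.full(50, 3.0), np.mean)
```

Percentiles of the resulting distribution then give the confidence interval for the difference or ratio.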

bootstrap_confidence_interval(data: ndarray | List[float], statistic: Callable[[ndarray], float] = <function mean>, confidence_level: float = 0.95, n_bootstrap: int = 10000, method: str = 'percentile', seed: int | None = None) Tuple[float, Tuple[float, float]][source]

Convenience function for simple bootstrap confidence interval calculation.

Parameters:
  • data (Union[ndarray, List[float]]) – Input data (array or list).

  • statistic (Callable[[ndarray], float]) – Function to compute statistic (default: mean).

  • confidence_level (float) – Confidence level (default: 0.95).

  • n_bootstrap (int) – Number of bootstrap iterations (default: 10000).

  • method (str) – ‘percentile’ or ‘bca’ (default: ‘percentile’).

  • seed (Optional[int]) – Random seed for reproducibility.

Return type:

Tuple[float, Tuple[float, float]]

Returns:

Tuple of (original_statistic, (lower_bound, upper_bound)).

Example

>>> data = np.random.normal(100, 15, 1000)
>>> stat, ci = bootstrap_confidence_interval(data, np.median)
>>> print(f"Median: {stat:.2f}, 95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")

ergodic_insurance.business_optimizer module

Business outcome optimization algorithms for insurance decisions.

This module implements sophisticated optimization algorithms focused on real business outcomes (ROE, growth rate, survival probability) rather than technical metrics. These algorithms maximize long-term company value through optimal insurance decisions.

Author: Alex Filiakov
Date: 2025-01-25

class OptimizationDirection(*values)[source]

Bases: Enum

Direction of optimization for objectives.

MAXIMIZE = 'maximize'
MINIMIZE = 'minimize'
class BusinessObjective(name: str, weight: float = 1.0, target_value: float | None = None, optimization_direction: OptimizationDirection = OptimizationDirection.MAXIMIZE, constraint_type: str | None = None, constraint_value: float | None = None) None[source]

Bases: object

Business optimization objective definition.

name

Name of the objective (e.g., ‘ROE’, ‘bankruptcy_risk’)

weight

Weight in multi-objective optimization (0-1)

target_value

Optional target value for the objective

optimization_direction

Whether to maximize or minimize

constraint_type

Optional constraint type (‘>=’, ‘<=’, ‘==’)

constraint_value

Optional constraint value

name: str
weight: float = 1.0
target_value: float | None = None
optimization_direction: OptimizationDirection = OptimizationDirection.MAXIMIZE
constraint_type: str | None = None
constraint_value: float | None = None
__post_init__()[source]

Validate objective configuration.

class BusinessConstraints(max_risk_tolerance: float = 0.01, min_roe_threshold: float = 0.1, max_leverage_ratio: float = 2.0, min_liquidity_ratio: float = 1.2, max_premium_budget: float = 0.02, min_coverage_ratio: float = 0.5, regulatory_requirements: Dict[str, float] = <factory>) None[source]

Bases: object

Business optimization constraints.

max_risk_tolerance

Maximum acceptable probability of bankruptcy

min_roe_threshold

Minimum required return on equity

max_leverage_ratio

Maximum debt-to-equity ratio

min_liquidity_ratio

Minimum liquidity requirements

max_premium_budget

Maximum insurance premium as % of revenue

min_coverage_ratio

Minimum coverage as % of assets

regulatory_requirements

Additional regulatory constraints

max_risk_tolerance: float = 0.01
min_roe_threshold: float = 0.1
max_leverage_ratio: float = 2.0
min_liquidity_ratio: float = 1.2
max_premium_budget: float = 0.02
min_coverage_ratio: float = 0.5
regulatory_requirements: Dict[str, float]
__post_init__()[source]

Validate constraint values.

class OptimalStrategy(coverage_limit: float, deductible: float, premium_rate: float, expected_roe: float, bankruptcy_risk: float, growth_rate: float, capital_efficiency: float, recommendations: List[str] = <factory>) None[source]

Bases: object

Optimal insurance strategy result.

coverage_limit

Optimal coverage limit amount

deductible

Optimal deductible amount

premium_rate

Optimal premium rate

expected_roe

Expected ROE with this strategy

bankruptcy_risk

Probability of bankruptcy

growth_rate

Expected growth rate

capital_efficiency

Capital efficiency ratio

recommendations

List of actionable recommendations

coverage_limit: float
deductible: float
premium_rate: float
expected_roe: float
bankruptcy_risk: float
growth_rate: float
capital_efficiency: float
recommendations: List[str]
to_dict() Dict[str, float | List[str]][source]

Convert to dictionary for serialization.

Return type:

Dict[str, Union[float, List[str]]]

class BusinessOptimizationResult(optimal_strategy: OptimalStrategy, objective_values: Dict[str, float], constraint_satisfaction: Dict[str, bool], convergence_info: Dict[str, bool | int | float], sensitivity_analysis: Dict[str, float] | None = None) None[source]

Bases: object

Result of business outcome optimization.

optimal_strategy

The optimal insurance strategy

objective_values

Values achieved for each objective

constraint_satisfaction

Status of constraint satisfaction

convergence_info

Optimization convergence information

sensitivity_analysis

Sensitivity to parameter changes

optimal_strategy: OptimalStrategy
objective_values: Dict[str, float]
constraint_satisfaction: Dict[str, bool]
convergence_info: Dict[str, bool | int | float]
sensitivity_analysis: Dict[str, float] | None = None
is_feasible() bool[source]

Check if all constraints are satisfied.

Return type:

bool

class BusinessOptimizer(manufacturer: WidgetManufacturer, decision_engine: InsuranceDecisionEngine | None = None, ergodic_analyzer: ErgodicAnalyzer | None = None, loss_distribution: LossDistribution | None = None, optimizer_config: BusinessOptimizerConfig | None = None, gpu_config: GPUConfig | None = None)[source]

Bases: object

Optimize business outcomes through insurance decisions.

This class implements sophisticated optimization algorithms focused on real business metrics like ROE, growth rate, and survival probability.

with_manufacturer(manufacturer: WidgetManufacturer) BusinessOptimizer[source]

Create a lightweight optimizer for a different manufacturer.

Reuses the optimizer config, loss distribution, decision engine, and ergodic analyzer from this instance, avoiding the overhead of reconstructing shared components (YAML I/O, engine initialization).

The decision engine retains a reference to the original manufacturer, which is correct because methods like maximize_roe_with_insurance read directly from self.manufacturer rather than through the decision engine.

Parameters:

manufacturer (WidgetManufacturer) – New manufacturer to optimize for.

Return type:

BusinessOptimizer

Returns:

A new BusinessOptimizer sharing this instance’s internal components.

maximize_roe_with_insurance(constraints: BusinessConstraints, time_horizon: int = 10, n_simulations: int = 1000) OptimalStrategy[source]

Maximize ROE subject to business constraints.

Objective: max(ROE_with_insurance - ROE_baseline)

Parameters:
  • constraints (BusinessConstraints) – Business constraints to satisfy

  • time_horizon (int) – Planning horizon in years

  • n_simulations (int) – Number of Monte Carlo simulations

Return type:

OptimalStrategy

Returns:

Optimal insurance strategy maximizing ROE

maximize_roe_gpu(constraints: BusinessConstraints, time_horizon: int = 10, n_simulations: int = 1000, method: str = 'scipy', n_starts: int = 10, top_k: int = 5, de_pop_size: int = 50, de_generations: int = 100) OptimalStrategy[source]

GPU-accelerated ROE maximization with batched objective evaluation.

Uses GPU-batched evaluation for faster optimization. Falls back to CPU-based maximize_roe_with_insurance if GPU is unavailable.

Parameters:
  • constraints (BusinessConstraints) – Business constraints to satisfy

  • time_horizon (int) – Planning horizon in years

  • n_simulations (int) – Number of Monte Carlo simulations per evaluation

  • method (str) – Optimization method - ‘scipy’ (SLSQP with GPU gradient), ‘multi_start’ (GPU-screened multi-start), or ‘de’ (GPU differential evolution)

  • n_starts (int) – Number of starting points for multi-start method

  • top_k (int) – Number of top starting points to optimize

  • de_pop_size (int) – Population size for differential evolution

  • de_generations (int) – Number of generations for DE

Return type:

OptimalStrategy

Returns:

Optimal insurance strategy maximizing ROE

Since:

Version 0.11.0 (Issue #966)

minimize_bankruptcy_risk(growth_targets: Dict[str, float], budget_constraint: float, time_horizon: int = 10) OptimalStrategy[source]

Minimize bankruptcy risk while achieving growth targets.

Objective: min(P(bankruptcy))

Parameters:
  • growth_targets (Dict[str, float]) – Target growth rates (e.g., {‘revenue’: 0.15, ‘assets’: 0.10})

  • budget_constraint (float) – Maximum premium budget

  • time_horizon (int) – Planning horizon in years

Return type:

OptimalStrategy

Returns:

Risk-minimizing insurance strategy

optimize_capital_efficiency(available_capital: float, investment_opportunities: Dict[str, float]) Dict[str, float][source]

Optimize capital allocation across insurance and investments.

Parameters:
  • available_capital (float) – Total capital available for allocation

  • investment_opportunities (Dict[str, float]) – Opportunities with expected returns

Return type:

Dict[str, float]

Returns:

Optimal capital allocation dictionary

analyze_time_horizon_impact(strategies: List[Dict[str, Any]], time_horizons: List[int] | None = None) DataFrame[source]

Analyze strategy performance across different time horizons.

Parameters:
  • strategies (List[Dict[str, Any]]) – Insurance strategies to evaluate.

  • time_horizons (Optional[List[int]]) – Time horizons in years to evaluate (uses default horizons if None).

Return type:

DataFrame

Returns:

DataFrame with performance metrics by time horizon

optimize_business_outcomes(objectives: List[BusinessObjective], constraints: BusinessConstraints, time_horizon: int = 10, method: str = 'weighted_sum') BusinessOptimizationResult[source]

Multi-objective optimization of business outcomes.

Parameters:
  • objectives (List[BusinessObjective]) – List of business objectives to optimize

  • constraints (BusinessConstraints) – Business constraints to satisfy

  • time_horizon (int) – Planning horizon in years

  • method (str) – Optimization method (‘weighted_sum’, ‘epsilon_constraint’, ‘pareto’)

Return type:

BusinessOptimizationResult

Returns:

Comprehensive optimization result
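The 'weighted_sum' method scalarizes the objectives into a single score; a minimal sketch of that scalarization (the function name `weighted_sum_score` and the tuple layout are invented for this example):

```python
def weighted_sum_score(objective_values, objectives):
    """Combine multiple objectives into one score to maximize.

    `objectives` is a list of (name, weight, direction) tuples; minimized
    objectives enter with a negative sign so a single maximization suffices.
    """
    score = 0.0
    for name, weight, direction in objectives:
        sign = 1.0 if direction == "maximize" else -1.0
        score += weight * sign * objective_values[name]
    return score

score = weighted_sum_score(
    {"ROE": 0.15, "bankruptcy_risk": 0.005},
    [("ROE", 0.7, "maximize"), ("bankruptcy_risk", 0.3, "minimize")],
)
```

Weighted-sum is simple but only recovers points on the convex hull of the Pareto front, which is why 'epsilon_constraint' and 'pareto' are offered as alternatives.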

ergodic_insurance.claim_development module

Claim development patterns for cash flow modeling.

This module provides classes for modeling realistic claim payment patterns, including immediate and long-tail development patterns typical for manufacturing liability claims. It supports IBNR estimation, reserve calculations, and cash flow projections.

class DevelopmentPatternType(*values)[source]

Bases: Enum

Standard claim development pattern types.

IMMEDIATE = 'immediate'
MEDIUM_TAIL_5YR = 'medium_tail_5yr'
LONG_TAIL_10YR = 'long_tail_10yr'
VERY_LONG_TAIL_15YR = 'very_long_tail_15yr'
CUSTOM = 'custom'
class ClaimDevelopment(pattern_name: str, development_factors: List[float], tail_factor: float = 0.0) None[source]

Bases: object

Claim development pattern for payment timing.

This class defines how claim payments develop over time, with development factors representing the percentage of total claim amount paid in each year.

pattern_name: str
development_factors: List[float]
tail_factor: float = 0.0
__post_init__()[source]

Validate development pattern.

Raises:

ValueError – If development factors are invalid or don’t sum to 1.0.

classmethod create_immediate() ClaimDevelopment[source]

Create immediate payment pattern (property damage).

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with immediate payment pattern.

classmethod create_medium_tail_5yr() ClaimDevelopment[source]

Create 5-year workers compensation pattern.

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with 5-year workers compensation pattern.

classmethod create_long_tail_10yr() ClaimDevelopment[source]

Create 10-year general liability pattern.

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with 10-year general liability pattern.

classmethod create_very_long_tail_15yr() ClaimDevelopment[source]

Create 15-year product liability pattern.

Return type:

ClaimDevelopment

Returns:

ClaimDevelopment with 15-year product liability pattern.

calculate_payments(claim_amount: float, accident_year: int, payment_year: int) float[source]

Calculate payment amount for a specific year.

Parameters:
  • claim_amount (float) – Total claim amount.

  • accident_year (int) – Year when claim occurred.

  • payment_year (int) – Year for which to calculate payment.

Return type:

float

Returns:

Payment amount for the specified year.

get_cumulative_paid(years_since_accident: int) float[source]

Get cumulative percentage paid by year.

Parameters:

years_since_accident (int) – Number of years since accident.

Return type:

float

Returns:

Cumulative percentage paid (0-1).

class DevelopmentPattern(pattern_name: str, cumulative_ldfs: List[float], tail_cdf: float = 1.0) None[source]

Bases: object

CDF-based development pattern for reserve projections.

Unlike ClaimDevelopment (which stores incremental payment percentages), this class stores cumulative development factors (CDFs) suitable for loss reserving calculations. CDF(age) >= 1.0 and is monotonically non-increasing (earlier ages are less developed).

pattern_name

Identifier for this development pattern.

cumulative_ldfs

CDF at each development age starting at age 1. Must be >= 1.0 and monotonically non-increasing.

tail_cdf

CDF beyond last explicit age. Must be >= 1.0.

pattern_name: str
cumulative_ldfs: List[float]
tail_cdf: float = 1.0
__post_init__()[source]

Validate CDF pattern.

pct_developed(development_age: int) float[source]

Return 1/CDF at given age, clamped to [0, 1].

Return type:

float

cdf_at(development_age: int) float[source]

Return raw CDF at given age.

Return type:

float

classmethod from_payment_pattern(payment_pattern: ClaimDevelopment) DevelopmentPattern[source]

Bridge: CDF(age) = 1 / cumulative_paid(age).

Converts a payment-percentage-based ClaimDevelopment pattern into a CDF-based DevelopmentPattern. At each development age the CDF is the reciprocal of the cumulative fraction paid.

Return type:

DevelopmentPattern

classmethod from_age_to_age_factors(name: str, ata_factors: List[float], tail_factor: float = 1.0) DevelopmentPattern[source]

Standard construction from link ratios (age-to-age factors).

CDF at age i = product of ata_factors[i:] * tail_factor. The resulting cumulative_ldfs list has one entry per ATA factor.

Return type:

DevelopmentPattern
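The CDF construction from link ratios can be sketched as a running product from the oldest age backward (a minimal illustration; `cdfs_from_ata` is an invented name):

```python
def cdfs_from_ata(ata_factors, tail_factor=1.0):
    """CDF at age i = product of remaining age-to-age factors times the tail.

    Mirrors the construction described for from_age_to_age_factors: one CDF
    entry per ATA factor, each >= 1.0 and monotonically non-increasing.
    """
    cdfs = []
    running = tail_factor
    for factor in reversed(ata_factors):
        running *= factor
        cdfs.append(running)
    return cdfs[::-1]

cdfs = cdfs_from_ata([1.5, 1.2, 1.05], tail_factor=1.02)
```

The earliest age carries the full product (least developed) and the last explicit age carries only its own factor times the tail.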

class StochasticClaimDevelopment(base_pattern: ClaimDevelopment, concentration: float = 50.0, seed: int | SeedSequence | None = None, stochastic: bool = True)[source]

Bases: ClaimDevelopment

Stochastic variant of ClaimDevelopment using Dirichlet perturbation.

Samples development factors from a Dirichlet distribution centered on the base pattern’s factors, introducing realistic cash flow timing uncertainty. The Dirichlet guarantees factors sum to 1.0 and introduces natural negative correlation between development periods (if one year pays more, others pay less).

Based on Sriram (2021) Dirichlet model for stochastic claims reserving.

Parameters:
  • base_pattern (ClaimDevelopment) – Deterministic ClaimDevelopment to perturb.

  • concentration (float) – Dirichlet concentration parameter (kappa). Higher values produce less noise. Recommended: 200 (very low noise) to 10 (very high noise). Default 50 suits general liability.

  • seed (Union[int, SeedSequence, None]) – Random seed for reproducibility. Accepts int or SeedSequence.

  • stochastic (bool) – If False, uses base pattern factors exactly (deterministic fallback).

__deepcopy__(memo)[source]

Deep copy without re-sampling; preserves the realized factors.

class Claim(claim_id: str, accident_year: int, reported_year: int, initial_estimate: float, claim_type: str = 'general_liability', development_pattern: ClaimDevelopment | None = None, payments_made: Dict[int, float] = <factory>) None[source]

Bases: object

Individual claim with development tracking.

claim_id: str
accident_year: int
reported_year: int
initial_estimate: float
claim_type: str = 'general_liability'
development_pattern: ClaimDevelopment | None = None
payments_made: Dict[int, float]
__post_init__()[source]

Set default development pattern if not provided.

Uses general liability pattern as default if no pattern is specified.

record_payment(year: int, amount: float)[source]

Record a payment made for this claim.

Parameters:
  • year (int) – Year of payment.

  • amount (float) – Payment amount.

get_total_paid() float[source]

Get total amount paid to date.

Return type:

float

Returns:

Sum of all payments made on this claim.

get_outstanding_reserve() float[source]

Calculate outstanding reserve requirement.

Return type:

float

Returns:

Outstanding reserve amount (initial estimate minus payments made).

class ClaimCohort(accident_year: int, claims: List[Claim] = <factory>) None[source]

Bases: object

Cohort of claims from the same accident year.

accident_year: int
claims: List[Claim]
add_claim(claim: Claim)[source]

Add a claim to the cohort.

Parameters:

claim (Claim) – Claim to add.

Raises:

ValueError – If claim is from different accident year.

calculate_payments(payment_year: int) float[source]

Calculate total payments for a specific year.

Parameters:

payment_year (int) – Year for which to calculate payments.

Return type:

float

Returns:

Total payment amount for the year.

get_total_incurred() float[source]

Get total incurred amount for the cohort.

Return type:

float

Returns:

Sum of initial estimates for all claims in the cohort.

get_total_paid() float[source]

Get total amount paid for the cohort.

Return type:

float

Returns:

Sum of all payments made for claims in the cohort.

get_outstanding_reserve() float[source]

Get total outstanding reserve for the cohort.

Return type:

float

Returns:

Sum of outstanding reserves for all claims in the cohort.

class CashFlowProjector(discount_rate: float = 0.03, a_priori_loss_ratio: float | None = None, ibnr_factors: Dict[str, float] | None = None, development_pattern: DevelopmentPattern | None = None, reserve_tail_factor: float | None = None)[source]

Bases: object

Project cash flows based on claim development patterns.

cohorts: Dict[int, ClaimCohort]
add_cohort(cohort: ClaimCohort)[source]

Add a claim cohort to the projector.

Parameters:

cohort (ClaimCohort) – Claim cohort to add.

project_payments(start_year: int, end_year: int) Dict[int, float][source]

Project claim payments for a range of years.

Parameters:
  • start_year (int) – First year of projection.

  • end_year (int) – Last year of projection.

Return type:

Dict[int, float]

Returns:

Dictionary mapping years to payment amounts.

calculate_present_value(payments: Dict[int, float], base_year: int) float[source]

Calculate present value of future payments.

Parameters:
  • payments (Dict[int, float]) – Dictionary of year to payment amount.

  • base_year (int) – Year to discount to.

Return type:

float

Returns:

Present value of all payments.
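The discounting here is standard present-value arithmetic; a minimal sketch (the default rate matches the constructor's discount_rate=0.03, everything else is illustrative):

```python
def present_value(payments, base_year, discount_rate=0.03):
    """Discount a {year: amount} payment stream back to base_year dollars."""
    return sum(amount / (1 + discount_rate) ** (year - base_year)
               for year, amount in payments.items())

# Three equal payments starting in the base year.
pv = present_value({2025: 100.0, 2026: 100.0, 2027: 100.0}, base_year=2025)
```

A payment in the base year itself is undiscounted; each later payment is divided by (1 + rate) per year of delay.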

build_triangle(evaluation_year: int) Dict[int, Dict[int, float]][source]

Build a paid-loss development triangle from actual payment data.

The triangle maps each accident year to a dictionary of {development_age: cumulative_paid}. Development age 0 corresponds to the accident year itself (i.e. payments made in the same calendar year as the accident).

Only cohorts whose accident year is <= evaluation_year are included. For each cohort the maximum observable development age is evaluation_year - accident_year.

Parameters:

evaluation_year (int) – Valuation date; only data through this calendar year is used.

Return type:

Dict[int, Dict[int, float]]

Returns:

Nested dict {accident_year: {dev_age: cumulative_paid}}.

fit_tail_factor(ata_factors: Dict[int, float], method: str = 'bondy') float[source]

Estimate a tail factor from observed age-to-age link ratios.

Parameters:
  • ata_factors (Dict[int, float]) – Dict of {dev_age: link_ratio} from _compute_age_to_age_factors().

  • method (str) – Estimation method. "bondy": tail = last observed LDF when it is between 1.0 and 2.0 (Bondy extrapolation). "inverse_power": fit LDF(d)-1 = a * d^(-b) on log-log scale and extrapolate one period beyond the last observed age.

Return type:

float

Returns:

Estimated tail factor (>= 1.0).
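Both estimation methods can be sketched numerically (an illustration of the documented formulas; `estimate_tail` is an invented name, and the Bondy out-of-range fallback to 1.0 is an assumption):

```python
import numpy as np

def estimate_tail(ata_factors, method="bondy"):
    """Tail-factor estimation following the two documented methods."""
    ages = np.array(sorted(ata_factors), dtype=float)
    ldfs = np.array([ata_factors[a] for a in sorted(ata_factors)])
    if method == "bondy":
        last = float(ldfs[-1])
        # Use the last observed LDF when it lies in (1.0, 2.0);
        # otherwise assume no tail (fallback behavior is an assumption).
        return last if 1.0 < last < 2.0 else 1.0
    if method == "inverse_power":
        # Fit LDF(d) - 1 = a * d**(-b) on log-log scale, then
        # extrapolate one period beyond the last observed age.
        slope, intercept = np.polyfit(np.log(ages), np.log(ldfs - 1.0), 1)
        next_age = ages[-1] + 1
        return float(1.0 + np.exp(intercept) * next_age**slope)
    raise ValueError(f"Unknown method: {method}")

# Link ratios generated exactly from LDF(d) = 1 + 0.5/d (so a=0.5, b=1):
ata = {1: 1.5, 2: 1.25, 3: 1.0 + 0.5 / 3}
```

On this exactly log-linear input, the inverse-power fit recovers a=0.5, b=1 and extrapolates the age-4 factor 1 + 0.5/4 = 1.125.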

estimate_ibnr(evaluation_year: int, earned_premium: Mapping[int, float] | None = None) float[source]

Estimate IBNR using maturity-adaptive Chain-Ladder / Bornhuetter-Ferguson blend.

Per-cohort logic:

  • Chain-Ladder (CL) projects ultimate losses using empirical age-to-age factors derived from a paid-loss development triangle built from actual cohort payment histories. When sufficient triangle data is available (>=2 cohorts contributing to each link ratio), the CL ultimate for a cohort at development age d is paid_to_date * CDF_to_ultimate(d) where the CDF is the cumulative product of volume-weighted link ratios from age d onward (Friedland §3, CAS Exam 7).

  • When empirical factors are unavailable (e.g. single cohort, no overlapping development periods) CL falls back to the assumed-pattern method: paid_to_date / pct_developed.

  • Bornhuetter-Ferguson (BF) IBNR = ELR * premium * (1 - pct). Requires both an ELR (via tiered fallback) and per-cohort earned_premium. When premium is unavailable, BF is skipped and the blend falls back to CL-only.

  • Blended ultimate uses maturity-adaptive credibility weights: CL weight = pct_developed, BF weight = 1 - pct_developed.

  • IBNR floored at 0 per cohort (E7).

Parameters:
  • evaluation_year (int) – Current evaluation year.

  • earned_premium (Optional[Mapping[int, float]]) – Per-cohort earned premium keyed by accident year, e.g. {2019: 1_000_000, 2020: 1_100_000}. Required for Bornhuetter-Ferguson and Cape Cod methods. When None, the blend falls back to CL-only.

Return type:

float

Returns:

Total estimated IBNR amount across all cohorts.
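The per-cohort blend arithmetic can be sketched for the assumed-pattern CL path (a simplification: the real method prefers empirical triangle factors when enough cohorts overlap, and `cohort_ibnr` is an invented name):

```python
def cohort_ibnr(paid_to_date, pct_developed, elr=None, premium=None):
    """Single-cohort CL/BF blend following the rules documented above."""
    # Chain-Ladder (assumed-pattern fallback): ultimate = paid / pct_developed.
    cl_ibnr = paid_to_date / pct_developed - paid_to_date
    if elr is None or premium is None:
        return max(cl_ibnr, 0.0)  # no premium -> CL-only fallback
    # Bornhuetter-Ferguson IBNR = ELR * premium * (1 - pct_developed).
    bf_ibnr = elr * premium * (1.0 - pct_developed)
    # Maturity-adaptive credibility: CL weight grows as the cohort matures.
    blended = pct_developed * cl_ibnr + (1.0 - pct_developed) * bf_ibnr
    return max(blended, 0.0)  # IBNR floored at 0 per cohort

# Half-developed cohort: CL IBNR = 400, BF IBNR = 0.7 * 1000 * 0.5 = 350,
# blended at 50/50 credibility = 375.
ibnr = cohort_ibnr(paid_to_date=400.0, pct_developed=0.5, elr=0.7, premium=1000.0)
```

Blending IBNRs with these weights is equivalent to blending ultimates, since paid_to_date is common to both methods.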

calculate_total_reserves(evaluation_year: int, earned_premium: Mapping[int, float] | None = None) Dict[str, float][source]

Calculate total reserve requirements.

Parameters:
  • evaluation_year (int) – Current evaluation year.

  • earned_premium (Optional[Mapping[int, float]]) – Per-cohort earned premium keyed by accident year, passed through to estimate_ibnr for Bornhuetter-Ferguson and Cape Cod calculations.

Return type:

Dict[str, float]

Returns:

Dictionary with case reserves, IBNR, and total.

load_ibnr_factors(file_path: str) Dict[str, float][source]

Load IBNR factors from YAML configuration.

Reads the ibnr_factors section from a development-patterns YAML file for use as Tier 3 industry-benchmark ELRs in CashFlowProjector.

Parameters:

file_path (str) – Path to YAML configuration file.

Return type:

Dict[str, float]

Returns:

Dictionary mapping pattern names to IBNR factor values.

load_development_patterns(file_path: str) Dict[str, ClaimDevelopment][source]

Load development patterns from YAML configuration.

Parameters:

file_path (str) – Path to YAML configuration file.

Return type:

Dict[str, ClaimDevelopment]

Returns:

Dictionary mapping pattern names to ClaimDevelopment objects.

ergodic_insurance.config module

Configuration management using Pydantic v2 models.

This package provides comprehensive configuration classes for the Ergodic Insurance simulation framework. It uses Pydantic models for validation, type safety, and automatic serialization/deserialization of configuration parameters.

The configuration system is hierarchical, with specialized configs for different aspects of the simulation (manufacturer, insurance, simulation parameters, etc.) that can be composed into a master configuration.

Sub-modules:

  • constants: Module-level financial constants (e.g., DEFAULT_RISK_FREE_RATE).

  • core: Master Config class that composes all sub-configs.

  • insurance: Insurance layer, program, and loss distribution configs.

  • manufacturer: Business entity, expense ratio, and industry profile configs.

  • market: Pricing scenarios, transition probabilities, and market cycles.

  • optimizer: BusinessOptimizer and DecisionEngine calibration parameters.

  • presets: Profile metadata, modules, presets, and preset libraries.

  • reporting: Output, logging, and Excel report generation configs.

  • simulation: Simulation execution, growth, debt, and working capital configs.

Key Features:
  • Type-safe configuration with automatic validation

  • Hierarchical configuration structure

  • Profile inheritance and module composition

  • Environment variable support

  • JSON/YAML serialization support

  • Default values with business logic constraints

  • Cross-field validation for consistency

Examples

Quick start with defaults:

from ergodic_insurance import Config

# All defaults — $10M manufacturer, 50-year horizon
config = Config()

From basic company info:

config = Config.from_company(
    initial_assets=50_000_000,
    operating_margin=0.12,
    industry="manufacturing",
)

Full control:

from ergodic_insurance import Config, ManufacturerConfig

config = Config(
    manufacturer=ManufacturerConfig(
        initial_assets=10_000_000,
        asset_turnover_ratio=0.8,
        base_operating_margin=0.08,
        tax_rate=0.25,
        retention_ratio=0.7,
    )
)

Loading from file:

config = Config.from_yaml(Path('config.yaml'))

Note

All monetary values are in nominal dollars unless otherwise specified. Rates and ratios are expressed as decimals (0.1 = 10%).

Since:

Version 0.1.0 (monolithic), refactored in 0.9.0 (Issue #458).
Version 0.10.0 (Issue #638) — Config and ConfigV2 merged into single Config.

class Config(**data: Any) None[source]

Bases: BaseModel

Complete configuration for the Ergodic Insurance simulation.

This is the unified configuration class that combines all sub-configurations and provides methods for loading, saving, and manipulating configurations. It supports both simple usage (all defaults) and advanced features like profile inheritance, module composition, and preset application.

All sub-configs have sensible defaults, so Config() with no arguments creates a valid configuration for a $10M widget manufacturer.

Examples

Minimal usage:

config = Config()

Override specific parameters:

config = Config(
    manufacturer=ManufacturerConfig(initial_assets=20_000_000)
)

From basic company info:

config = Config.from_company(initial_assets=50_000_000, operating_margin=0.12)

From a profile file:

config = Config.from_profile(Path("profiles/default.yaml"))

With profile inheritance:

config = Config.with_inheritance(
    Path("profiles/custom.yaml"),
    Path("config_dir"),
)
Since:

Version 0.9.0 — original Config + ConfigV2.
Version 0.10.0 (Issue #638) — merged into single class.

manufacturer: ManufacturerConfig
working_capital: WorkingCapitalConfig
growth: GrowthConfig
debt: DebtConfig
simulation: SimulationConfig
output: OutputConfig
logging: LoggingConfig
profile: ProfileMetadata | None
insurance: InsuranceConfig | None
losses: LossDistributionConfig | None
excel_reporting: ExcelReportConfig | None
working_capital_ratios: WorkingCapitalRatiosConfig | None
expense_ratios: ExpenseRatioConfig | None
depreciation: DepreciationConfig | None
industry_config: IndustryConfig | None
gpu: GPUConfig | None
custom_modules: Dict[str, ModuleConfig]
applied_presets: List[str]
overrides: Dict[str, Any]
classmethod from_company(initial_assets: float = 10000000, operating_margin: float = 0.08, industry: str = 'manufacturing', tax_rate: float = 0.25, growth_rate: float = 0.05, time_horizon_years: int = 50, **kwargs) Config[source]

Create a Config from basic company information.

This factory derives reasonable sub-config defaults from a small number of intuitive business parameters, so actuaries and risk managers can get started quickly without understanding every sub-config class.

Parameters:
  • initial_assets (float) – Starting asset value in dollars.

  • operating_margin (float) – Base operating margin (e.g. 0.08 for 8%).

  • industry (str) – Industry type for deriving defaults. Supported values: “manufacturing”, “service”, “retail”.

  • tax_rate (float) – Corporate tax rate.

  • growth_rate (float) – Annual growth rate.

  • time_horizon_years (int) – Simulation horizon in years.

  • **kwargs – Additional overrides passed to sub-configs.

Return type:

Config

Returns:

Config object with parameters derived from company info.

Examples

Minimal:

config = Config.from_company(initial_assets=50_000_000)

With industry defaults:

config = Config.from_company(
    initial_assets=25_000_000,
    operating_margin=0.15,
    industry="service",
)
classmethod from_yaml(path: Path) Config[source]

Load configuration from YAML file.

Parameters:

path (Path) – Path to YAML configuration file.

Return type:

Config

Returns:

Config object with validated parameters.

Raises:
  • FileNotFoundError – If config file doesn’t exist.

  • ValidationError – If configuration is invalid.

classmethod from_dict(data: dict, base_config: Config | None = None) Config[source]

Create config from dictionary, optionally overriding base config.

Parameters:
  • data (dict) – Dictionary with configuration parameters.

  • base_config (Optional[Config]) – Optional base configuration to override.

Return type:

Config

Returns:

Config object with validated parameters.
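When a base_config is supplied, the override behavior can be pictured as a recursive dictionary merge: nested sections are merged key by key while scalar values replace their counterparts. The sketch below is illustrative only — deep_merge and the sample keys are hypothetical, and the real method re-validates the result through Pydantic:

```python
from typing import Any, Dict

def deep_merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
    """Recursively merge ``override`` into ``base`` without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"manufacturer": {"initial_assets": 10_000_000, "tax_rate": 0.25}}
update = {"manufacturer": {"initial_assets": 20_000_000}}
result = deep_merge(base, update)
# tax_rate survives the merge; initial_assets is overridden
```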

classmethod from_profile(profile_path: Path) Config[source]

Load configuration from a profile file.

Parameters:

profile_path (Path) – Path to the profile YAML file.

Return type:

Config

Returns:

Loaded and validated Config instance.

Raises:
  • FileNotFoundError – If profile file doesn’t exist.

  • ValidationError – If configuration is invalid.

classmethod with_inheritance(profile_path: Path, config_dir: Path, _visited: frozenset | None = None) Config[source]

Load configuration with profile inheritance.

Parameters:
  • profile_path (Path) – Path to the profile YAML file.

  • config_dir (Path) – Root configuration directory.

  • _visited (Optional[frozenset]) – Internal set of already-visited profile paths for cycle detection. Callers should not pass this argument.

Return type:

Config

Returns:

Loaded Config with inheritance applied.

Raises:

ValueError – If circular inheritance is detected.
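The _visited cycle-detection pattern can be sketched with a toy extends chain. PROFILES and load_chain below are hypothetical stand-ins for the real profile loader, not the actual implementation:

```python
from typing import Dict, List, Optional

# Hypothetical profile graph: child -> parent (None = no "extends")
PROFILES: Dict[str, Optional[str]] = {
    "custom.yaml": "default.yaml",
    "default.yaml": None,
}

def load_chain(profile: str, _visited: frozenset = frozenset()) -> List[str]:
    """Walk the ``extends`` chain, raising on circular inheritance."""
    if profile in _visited:
        raise ValueError(f"Circular inheritance detected at {profile!r}")
    parent = PROFILES.get(profile)
    chain = [profile]
    if parent is not None:
        chain += load_chain(parent, _visited | {profile})
    return chain

chain = load_chain("custom.yaml")  # ["custom.yaml", "default.yaml"]
```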

override(overrides: Dict[str, Any]) Config[source]

Create a new config with overridden parameters.

Accepts a dictionary with dot-notation keys to override nested configuration values.

Parameters:

overrides (Dict[str, Any]) – Dictionary mapping dot-notation paths to values. Example: {"manufacturer.tax_rate": 0.21}

Return type:

Config

Returns:

New Config object with overrides applied.

Raises:

ValueError – If a path references an unknown config section or field.

Examples

Override a single parameter:

new_config = config.override({"manufacturer.tax_rate": 0.21})

Override multiple parameters:

new_config = config.override({
    "manufacturer.base_operating_margin": 0.1,
    "simulation.time_horizon_years": 200,
})
with_overrides(overrides: Dict[str, Any]) Config[source]

Create a new config with runtime overrides.

Accepts a dictionary with dot-notation keys to override nested configuration values. Section-level dictionaries are also supported for backward compatibility with ConfigManager.

Parameters:

overrides (Dict[str, Any]) – Dictionary mapping dot-notation paths to values, or section-level dictionaries. Example: {"manufacturer.initial_assets": 20_000_000}

Return type:

Config

Returns:

New Config instance with overrides applied.

Examples

Dot-notation overrides:

new = config.with_overrides({
    "manufacturer.initial_assets": 20_000_000,
    "simulation.time_horizon_years": 100,
})

Section-level dict overrides:

new = config.with_overrides({
    "manufacturer": {"initial_assets": 20_000_000},
})
with_module(module_path: Path) Config[source]

Return a new Config with a configuration module applied.

Merges module data via dict-dump-merge-reconstruct so that every field change goes through Pydantic validation. The original Config instance is not mutated.

Parameters:

module_path (Path) – Path to the module YAML file.

Return type:

Config

Returns:

New Config instance with the module applied.

Since:

Version 0.13.0 (Issue #1295) — replaces apply_module()

with_preset(preset_name: str, preset_data: Dict[str, Any]) Config[source]

Return a new Config with a preset applied.

Merges preset data via dict-dump-merge-reconstruct so that every field change goes through Pydantic validation. The original Config instance is not mutated.

Parameters:
  • preset_name (str) – Name of the preset.

  • preset_data (Dict[str, Any]) – Preset parameters to apply.

Return type:

Config

Returns:

New Config instance with the preset applied.

Since:

Version 0.13.0 (Issue #1295) — replaces apply_preset()

apply_module(module_path: Path) Config[source]

Apply a configuration module.

Deprecated since version 0.13.0: Use with_module() instead. apply_module will be removed in a future release.

Return type:

Config

apply_preset(preset_name: str, preset_data: Dict[str, Any]) Config[source]

Apply a preset to the configuration.

Deprecated since version 0.13.0: Use with_preset() instead. apply_preset will be removed in a future release.

Return type:

Config

validate_config() None[source]

Validate configuration, raising on critical issues.

Checks for missing required sections and logical inconsistencies. Raises ConfigurationError if any critical issues are found.

The method is named validate_config rather than validate to avoid conflicting with Pydantic’s deprecated BaseModel.validate classmethod.

Raises:

ConfigurationError – If the configuration has critical issues. The exception’s issues attribute contains the full list of problems found.

Return type:

None

Examples

Basic validation:

config = Config()
config.validate_config()  # OK — defaults are valid

Catching issues:

try:
    config.validate_config()
except ConfigurationError as e:
    for issue in e.issues:
        print(f"  - {issue}")
Since:

Version 0.14.0 (Issue #1299)

validate_completeness() List[str][source]

Validate configuration completeness (soft check).

Deprecated since version 0.14.0: Use validate_config() instead, which raises ConfigurationError for critical issues. validate_completeness will be removed in a future release.

Return type:

List[str]

Returns:

List of missing or invalid configuration items.

to_yaml(path: Path) None[source]

Save configuration to YAML file.

Parameters:

path (Path) – Path where to save the configuration.

Return type:

None

setup_logging() None[source]

Configure logging based on settings.

Sets up logging handlers for console and/or file output based on the logging configuration.

Return type:

None

ensure_output_dirs() None[source]

Create output directories if they don’t exist.

Ensures that the configured output directory exists, creating it if necessary.

Return type:

None

Since:

Version 0.14.0 (Issue #1299) — renamed from validate_paths

validate_paths() None[source]

Create output directories if they don’t exist.

Deprecated since version 0.14.0: Use ensure_output_dirs() instead. The old name suggested read-only validation but the method actually creates directories. validate_paths will be removed in a future release.

Return type:

None

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

exception ConfigurationError(issues: list[str]) None[source]

Bases: Exception

Raised when configuration validation finds critical issues.

This exception is raised by Config.validate_config() when the configuration contains critical issues that would lead to incorrect simulation results or runtime failures.

issues

List of specific configuration problems found.

Examples

Catching and inspecting issues:

try:
    config.validate_config()
except ConfigurationError as e:
    for issue in e.issues:
        print(f"  - {issue}")
Since:

Version 0.14.0 (Issue #1299)

ConfigV2

alias of Config

class InsuranceConfig(**data: Any) None[source]

Bases: BaseModel

Enhanced insurance configuration.

enabled: bool
layers: List[InsuranceLayerConfig]
deductible: float
coinsurance: float
waiting_period_days: int
claims_handling_cost: float
validate_layers()[source]

Ensure layers don’t overlap and are properly ordered.

Returns:

Validated insurance config.

Raises:

ValueError – If layers overlap or are misordered.
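The non-overlap invariant can be illustrated with plain tuples. check_layers is a hypothetical sketch, assuming each layer covers [attachment, attachment + limit) and each excess layer must attach at or above the exhaustion point of the layer below:

```python
# Each layer as (attachment, limit)
layers = [
    (0, 5_000_000),            # primary: $5M xs $0
    (5_000_000, 20_000_000),   # first excess: $20M xs $5M
    (25_000_000, 25_000_000),  # second excess: $25M xs $25M
]

def check_layers(layers):
    """Raise if layers are misordered or overlap (gaps are allowed)."""
    if sorted(layers) != layers:
        raise ValueError("Layers must be ordered by attachment point")
    for (att_lo, lim_lo), (att_hi, _) in zip(layers, layers[1:]):
        if att_hi < att_lo + lim_lo:
            raise ValueError("Layers overlap")

check_layers(layers)  # passes: the tower stacks cleanly
```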

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class InsuranceLayerConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for a single insurance layer.

name: str
limit: float
attachment: float
base_premium_rate: float
reinstatements: int
aggregate_limit: float | None
limit_type: str
per_occurrence_limit: float | None
validate_layer_structure()[source]

Ensure layer structure is valid.

Returns:

Validated layer config.

Raises:

ValueError – If layer structure is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class LossDistributionConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for loss distributions.

frequency_distribution: str
frequency_annual: float
severity_distribution: str
severity_mean: float
severity_std: float
correlation_factor: float
tail_alpha: float
classmethod validate_frequency_dist(v: str) str[source]

Validate frequency distribution type.

Parameters:

v (str) – Distribution type.

Return type:

str

Returns:

Validated distribution type.

Raises:

ValueError – If distribution type is invalid.

classmethod validate_severity_dist(v: str) str[source]

Validate severity distribution type.

Parameters:

v (str) – Distribution type.

Return type:

str

Returns:

Validated distribution type.

Raises:

ValueError – If distribution type is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class DepreciationConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for depreciation and amortization tracking.

Defines how fixed assets depreciate and prepaid expenses amortize over time.

ppe_useful_life_years: float
prepaid_insurance_amortization_months: int
initial_accumulated_depreciation: float
property annual_depreciation_rate: float

Calculate annual depreciation rate.

property monthly_insurance_amortization_rate: float

Calculate monthly insurance amortization rate.
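Assuming straight-line treatment (an assumption consistent with equal-period expensing, not confirmed by this reference), the two derived rates reduce to simple reciprocals of the configured lives:

```python
ppe_useful_life_years = 10.0
prepaid_insurance_amortization_months = 12

# Straight-line: an equal fraction is expensed each period
annual_depreciation_rate = 1.0 / ppe_useful_life_years  # 0.10 per year
monthly_insurance_amortization_rate = 1.0 / prepaid_insurance_amortization_months
```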

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ExpenseRatioConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for expense categorization and allocation.

Defines how revenue translates to expenses with proper GAAP categorization between COGS and operating expenses (SG&A).

Issue #255: COGS and SG&A breakdown ratios are now configurable to allow the Manufacturer to calculate these values explicitly, rather than having the Reporting layer estimate them with hardcoded ratios.

gross_margin_ratio: float
sga_expense_ratio: float
manufacturing_depreciation_allocation: float
admin_depreciation_allocation: float
direct_materials_ratio: float
direct_labor_ratio: float
manufacturing_overhead_ratio: float
selling_expense_ratio: float
general_admin_ratio: float
validate_depreciation_allocation()[source]

Ensure depreciation allocations sum to 100%.

validate_cogs_breakdown()[source]

Ensure COGS breakdown ratios sum to 100%.

validate_sga_breakdown()[source]

Ensure SG&A breakdown ratios sum to 100%.

property cogs_ratio: float

Calculate COGS as percentage of revenue.

property operating_margin_ratio: float

Calculate operating margin after all operating expenses.
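Assuming cogs_ratio is the complement of gross margin and operating margin is gross margin less SG&A (plausible identities, not confirmed by this reference), the derived properties reduce to:

```python
gross_margin_ratio = 0.35
sga_expense_ratio = 0.25

# COGS is the complement of gross margin
cogs_ratio = 1.0 - gross_margin_ratio  # 0.65 of revenue
# Operating margin is what remains after SG&A
operating_margin_ratio = gross_margin_ratio - sga_expense_ratio  # ~0.10
```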

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class IndustryConfig(**data: Any) None[source]

Bases: BaseModel

Base configuration for different industry types.

This class defines industry-specific financial parameters that determine how businesses operate, including working capital needs, margin structures, asset composition, and depreciation policies.

industry_type

Name of the industry (e.g., ‘manufacturing’, ‘services’)

Working capital ratios
days_sales_outstanding

Average collection period for receivables (days)

days_inventory_outstanding

Average inventory holding period (days)

days_payables_outstanding

Average payment period to suppliers (days)

Margin structure
gross_margin

Gross profit as percentage of revenue

operating_expense_ratio

Operating expenses as percentage of revenue

Asset composition
current_asset_ratio

Current assets as fraction of total assets

ppe_ratio

Property, Plant & Equipment as fraction of total assets

intangible_ratio

Intangible assets as fraction of total assets

Depreciation
ppe_useful_life

Average useful life of PP&E in years

depreciation_method

Method for calculating depreciation

industry_type: str
days_sales_outstanding: float
days_inventory_outstanding: float
days_payables_outstanding: float
gross_margin: float
operating_expense_ratio: float
current_asset_ratio: float
ppe_ratio: float
intangible_ratio: float
ppe_useful_life: int
depreciation_method: Literal['straight_line', 'declining_balance']
validate_asset_composition()[source]

Validate that asset ratios sum to 1.0.

property working_capital_days: float

Calculate net working capital cycle in days.

property operating_margin: float

Calculate operating margin (EBIT margin).
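The two derived properties follow the standard cash-conversion-cycle and margin identities; the figures below are illustrative, not defaults from the package:

```python
days_sales_outstanding = 45.0
days_inventory_outstanding = 60.0
days_payables_outstanding = 30.0
gross_margin = 0.35
operating_expense_ratio = 0.25

# Cash is tied up for DSO + DIO days, offset by the DPO days
# that suppliers effectively finance
working_capital_days = (
    days_sales_outstanding + days_inventory_outstanding - days_payables_outstanding
)  # 75 days
operating_margin = gross_margin - operating_expense_ratio  # ~0.10 EBIT margin
```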

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ManufacturerConfig(**data: Any) None[source]

Bases: BaseModel

Financial parameters for the widget manufacturer.

This class defines the core financial parameters used to initialize and configure a widget manufacturing company in the simulation. All parameters are validated to ensure realistic business constraints.

initial_assets

Starting asset value in dollars. Must be positive.

asset_turnover_ratio

Revenue per dollar of assets. Typically 0.5-2.0 for manufacturing companies.

base_operating_margin

Core operating margin before insurance costs (EBIT before insurance / Revenue). Typically 5-15% for healthy manufacturers.

tax_rate

Corporate tax rate. Typically 20-30% depending on jurisdiction.

retention_ratio

Portion of earnings retained vs distributed as dividends. Higher retention supports faster growth.

ppe_ratio

Property, Plant & Equipment allocation ratio as fraction of initial assets. Defaults based on operating margin if not specified.

Examples

Conservative manufacturer:

config = ManufacturerConfig(
    initial_assets=5_000_000,
    asset_turnover_ratio=0.6,    # Low turnover
    base_operating_margin=0.05,  # 5% base margin
    tax_rate=0.25,
    retention_ratio=0.9,         # High retention
)

Aggressive growth manufacturer:

config = ManufacturerConfig(
    initial_assets=20_000_000,
    asset_turnover_ratio=1.2,    # High turnover
    base_operating_margin=0.12,  # 12% base margin
    tax_rate=0.25,
    retention_ratio=1.0,         # Full retention
)

Custom PP&E allocation:

config = ManufacturerConfig(
    initial_assets=15_000_000,
    asset_turnover_ratio=0.9,
    base_operating_margin=0.10,
    tax_rate=0.25,
    retention_ratio=0.8,
    ppe_ratio=0.6  # Override default PP&E allocation
)

Note

The asset turnover ratio and base operating margin together determine the core return on assets (ROA) before insurance costs and taxes. Actual operating margins will be lower when insurance costs are included.

initial_assets: float
asset_turnover_ratio: float
base_operating_margin: float
tax_rate: float
nol_carryforward_enabled: bool
nol_limitation_pct: float
apply_tcja_limitation: bool
retention_ratio: float
ppe_ratio: float | None
insolvency_tolerance: float
expense_ratios: ExpenseRatioConfig | None
fiscal_year_end: int
premium_payment_month: int
revenue_pattern: Literal['uniform', 'seasonal', 'back_loaded']
check_intra_period_liquidity: bool
going_concern_min_current_ratio: float
going_concern_min_dscr: float
going_concern_min_equity_ratio: float
going_concern_min_cash_runway_months: float
going_concern_min_indicators_breached: int
enable_reserve_development: bool
reserve_noise_std: float
ppe_useful_life_years: float
tax_depreciation_life_years: float | None
capex_to_depreciation_ratio: float
working_capital_facility_limit: float | None
lae_ratio: float
set_default_ppe_ratio()[source]

Set default PPE ratio based on operating margin if not provided.

classmethod validate_margin(v: float) float[source]

Warn if base operating margin is unusually high or negative.

Parameters:

v (float) – Base operating margin value to validate (as decimal, e.g., 0.1 for 10%).

Returns:

The validated base operating margin value.

Return type:

float

Note

Margins above 30% are flagged as unusual for manufacturing. Negative margins indicate unprofitable operations before insurance.

classmethod from_industry_config(industry_config, **kwargs)[source]

Create ManufacturerConfig from an IndustryConfig instance.

Parameters:
  • industry_config – IndustryConfig instance with industry-specific parameters

  • **kwargs – Additional parameters to override or supplement

Returns:

ManufacturerConfig instance with parameters derived from industry config

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ManufacturingConfig(**data: Any) None[source]

Bases: IndustryConfig

Configuration for manufacturing companies.

Manufacturing businesses typically have:

  • Significant inventory holdings

  • Moderate to high PP&E requirements

  • Working capital needs for raw materials and WIP

  • Gross margins of 25-40%

industry_type: str
days_sales_outstanding: float
days_inventory_outstanding: float
days_payables_outstanding: float
gross_margin: float
operating_expense_ratio: float
current_asset_ratio: float
ppe_ratio: float
intangible_ratio: float
ppe_useful_life: int
depreciation_method: Literal['straight_line', 'declining_balance']
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class RetailConfig(**data: Any) None[source]

Bases: IndustryConfig

Configuration for retail companies.

Retail businesses typically have:

  • High inventory turnover

  • Moderate PP&E (stores, fixtures)

  • Fast cash collection (often immediate)

  • Lower gross margins but efficient operations

industry_type: str
days_sales_outstanding: float
days_inventory_outstanding: float
days_payables_outstanding: float
gross_margin: float
operating_expense_ratio: float
current_asset_ratio: float
ppe_ratio: float
intangible_ratio: float
ppe_useful_life: int
depreciation_method: Literal['straight_line', 'declining_balance']
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ServiceConfig(**data: Any) None[source]

Bases: IndustryConfig

Configuration for service companies.

Service businesses typically have:

  • Minimal or no inventory

  • Lower PP&E requirements

  • Faster cash conversion cycles

  • Higher gross margins but also higher operating expenses

industry_type: str
days_sales_outstanding: float
days_inventory_outstanding: float
days_payables_outstanding: float
gross_margin: float
operating_expense_ratio: float
current_asset_ratio: float
ppe_ratio: float
intangible_ratio: float
ppe_useful_life: int
depreciation_method: Literal['straight_line', 'declining_balance']
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class MarketCycles(**data: Any) None[source]

Bases: BaseModel

Market cycle configuration and dynamics.

average_duration_years: float
soft_market_duration: float
normal_market_duration: float
hard_market_duration: float
transition_probabilities: TransitionProbabilities
validate_cycle_duration() MarketCycles[source]

Validate that cycle durations are reasonable.

Return type:

MarketCycles

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PricingScenario(**data: Any) None[source]

Bases: BaseModel

Individual market pricing scenario configuration.

Represents a specific market condition (soft/normal/hard) with associated pricing parameters and market characteristics.

name: str
description: str
market_condition: Literal['soft', 'normal', 'hard']
primary_layer_rate: float
first_excess_rate: float
higher_excess_rate: float
capacity_factor: float
competition_level: Literal['low', 'moderate', 'high']
retention_discount: float
volume_discount: float
loss_ratio_target: float
expense_ratio: float
new_business_appetite: Literal['restrictive', 'selective', 'aggressive']
renewal_retention_focus: Literal['low', 'balanced', 'high']
coverage_enhancement_willingness: Literal['low', 'moderate', 'high']
validate_rate_ordering() PricingScenario[source]

Ensure premium rates follow expected ordering.

Primary rates should be higher than excess rates, and first excess should be higher than higher excess layers.

Return type:

PricingScenario

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PricingScenarioConfig(**data: Any) None[source]

Bases: BaseModel

Complete pricing scenario configuration.

Contains all market scenarios and cycle dynamics for insurance pricing sensitivity analysis.

scenarios: Dict[str, PricingScenario]
market_cycles: MarketCycles
get_scenario(scenario_name: str) PricingScenario[source]

Get a specific pricing scenario by name.

Parameters:

scenario_name (str) – Name of the scenario to retrieve

Return type:

PricingScenario

Returns:

PricingScenario configuration

Raises:

KeyError – If scenario_name not found

get_rate_multiplier(from_scenario: str, to_scenario: str) float[source]

Calculate rate change multiplier between scenarios.

Parameters:
  • from_scenario (str) – Starting scenario name

  • to_scenario (str) – Target scenario name

Return type:

float

Returns:

Multiplier for premium rates when transitioning
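One plausible definition of the multiplier (an assumption — the reference does not specify which layer's rate anchors it) is the ratio of the target scenario's primary-layer rate to the starting scenario's; the rates below are hypothetical:

```python
# Hypothetical primary-layer rates per market scenario
primary_rates = {"soft": 0.008, "normal": 0.012, "hard": 0.020}

def rate_multiplier(from_scenario: str, to_scenario: str) -> float:
    """Ratio of target to starting primary rate (illustrative definition)."""
    return primary_rates[to_scenario] / primary_rates[from_scenario]

m = rate_multiplier("normal", "hard")  # premiums rise ~67% as the market hardens
```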

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class TransitionProbabilities(**data: Any) None[source]

Bases: BaseModel

Market state transition probabilities.

soft_to_soft: float
soft_to_normal: float
soft_to_hard: float
normal_to_soft: float
normal_to_normal: float
normal_to_hard: float
hard_to_soft: float
hard_to_normal: float
hard_to_hard: float
validate_probabilities() TransitionProbabilities[source]

Ensure transition probabilities sum to 1.0 for each state.

Return type:

TransitionProbabilities
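Each state's outgoing probabilities must form a valid distribution over the three market states. A quick sanity check over a hypothetical transition matrix (the probabilities below are illustrative, not package defaults):

```python
# Rows of a Markov transition matrix over (soft, normal, hard) states
transitions = {
    "soft":   {"soft": 0.60, "normal": 0.30, "hard": 0.10},
    "normal": {"soft": 0.20, "normal": 0.60, "hard": 0.20},
    "hard":   {"soft": 0.10, "normal": 0.40, "hard": 0.50},
}

for state, row in transitions.items():
    total = sum(row.values())
    # Allow a small tolerance for floating-point accumulation
    assert abs(total - 1.0) < 1e-9, f"{state} row sums to {total}, not 1.0"
```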

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class BusinessOptimizerConfig(**data: Any) None[source]

Bases: BaseModel

Calibration parameters for BusinessOptimizer financial heuristics.

Issue #314 (C1): Consolidates all hardcoded financial multipliers from BusinessOptimizer into a single, documentable configuration object.

These are simplified model parameters used by the optimizer’s heuristic methods (_estimate_roe, _estimate_bankruptcy_risk, _estimate_growth_rate, etc.). They are NOT derived from manufacturer data—they are tuning knobs for the optimizer’s internal scoring functions.

base_roe: float
protection_benefit_factor: float
roe_noise_std: float
base_bankruptcy_risk: float
max_risk_reduction: float
premium_burden_risk_factor: float
time_risk_constant: float
base_growth_rate: float
growth_boost_factor: float
premium_drag_factor: float
asset_growth_factor: float
equity_growth_factor: float
risk_transfer_benefit_rate: float
risk_reduction_value: float
stability_value: float
growth_enablement_value: float
assumed_volatility: float
volatility_reduction_factor: float
min_volatility: float
seed: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class DecisionEngineConfig(**data: Any) None[source]

Bases: BaseModel

Calibration parameters for InsuranceDecisionEngine heuristics.

Issue #314 (C2): Consolidates hardcoded values from the decision engine’s growth estimation and simulation methods.

base_growth_rate: float
volatility_reduction_factor: float
max_volatility_reduction: float
growth_benefit_factor: float
loss_cv: float
default_optimization_weights: Dict[str, float]
layer_attachment_thresholds: Tuple[float, float]
metrics_n_simulations: int
metrics_time_horizon: int
use_crn: bool
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ModuleConfig(**data: Any) None[source]

Bases: BaseModel

Base class for configuration modules.

module_name: str
module_version: str
dependencies: List[str]
model_config: ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PresetConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for a preset.

preset_name: str
preset_type: str
description: str
parameters: Dict[str, Any]
classmethod validate_preset_type(v: str) str[source]

Validate preset type.

Parameters:

v (str) – Preset type.

Return type:

str

Returns:

Validated preset type.

Raises:

ValueError – If preset type is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class PresetLibrary(**data: Any) None[source]

Bases: BaseModel

Collection of presets for a specific type.

library_type: str
description: str
presets: Dict[str, PresetConfig]
classmethod from_yaml(path: Path) PresetLibrary[source]

Load preset library from YAML file.

Parameters:

path (Path) – Path to preset library YAML file.

Return type:

PresetLibrary

Returns:

Loaded PresetLibrary instance.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ProfileMetadata(**data: Any) None[source]

Bases: BaseModel

Metadata for configuration profiles.

name: str
description: str
version: str
extends: str | None
includes: List[str]
presets: Dict[str, str]
author: str | None
created: datetime | None
tags: List[str]
classmethod validate_name(v: str) str[source]

Ensure profile name is valid.

Parameters:

v (str) – Profile name to validate.

Return type:

str

Returns:

Validated profile name.

Raises:

ValueError – If name contains invalid characters.

classmethod validate_version(v: str) str[source]

Validate semantic version string.

Parameters:

v (str) – Version string to validate.

Return type:

str

Returns:

Validated version string.

Raises:

ValueError – If version format is invalid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class ExcelReportConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for Excel report generation.

This is the canonical definition used throughout the codebase. Both ExcelReporter and the unified config hierarchy (Config) use this class.

enabled

Whether Excel reporting is enabled.

output_path

Directory for output files.

include_balance_sheet

Whether to include balance sheet.

include_income_statement

Whether to include income statement.

include_cash_flow

Whether to include cash flow statement.

include_reconciliation

Whether to include reconciliation sheet.

include_metrics_dashboard

Whether to include metrics dashboard.

include_pivot_data

Whether to include pivot-ready data sheet.

formatting

Custom formatting options.

engine

Excel engine to use (‘xlsxwriter’, ‘openpyxl’, ‘auto’, ‘pandas’).

currency_format

Currency format string.

decimal_places

Number of decimal places for numbers.

date_format

Date format string.

enabled: bool
output_path: Path
include_balance_sheet: bool
include_income_statement: bool
include_cash_flow: bool
include_reconciliation: bool
include_metrics_dashboard: bool
include_pivot_data: bool
formatting: Dict[str, Any] | None
engine: str
currency_format: str
decimal_places: int
date_format: str
classmethod validate_engine(v: str) str[source]

Validate Excel engine selection.

Parameters:

v (str) – Engine name to validate.

Return type:

str

Returns:

Validated engine name.

Raises:

ValueError – If engine is not valid.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class LoggingConfig(**data: Any) None[source]

Bases: BaseModel

Logging configuration.

Controls logging behavior including level, output destinations, and message formatting.

enabled: bool
level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR']
log_file: str | None
console_output: bool
format: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class OutputConfig(**data: Any) None[source]

Bases: BaseModel

Output and results configuration.

Controls where and how simulation results are saved, including file formats and checkpoint frequencies.

output_directory: str
file_format: Literal['csv', 'parquet', 'json']
checkpoint_frequency: int
detailed_metrics: bool
property output_path: Path

Get output directory as Path object.

Returns:

Path object for the output directory.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class DebtConfig(**data: Any) None[source]

Bases: BaseModel

Debt financing parameters for insurance claims.

Configures debt financing options and constraints for handling large insurance claims and maintaining liquidity. Companies may need to borrow to cover deductibles or claims exceeding insurance limits.

interest_rate

Annual interest rate on debt (e.g., 0.05 for 5%).

max_leverage_ratio

Maximum debt-to-equity ratio allowed. Higher ratios increase financial risk.

minimum_cash_balance

Minimum cash balance to maintain for operations.

Examples

Conservative debt policy:

debt = DebtConfig(
    interest_rate=0.04,        # 4% borrowing cost
    max_leverage_ratio=1.0,    # Max 1:1 debt/equity
    minimum_cash_balance=1_000_000
)

Aggressive leverage:

debt = DebtConfig(
    interest_rate=0.06,        # Higher rate for risk
    max_leverage_ratio=3.0,    # 3:1 leverage allowed
    minimum_cash_balance=500_000
)

Note

Higher leverage increases return on equity but also increases bankruptcy risk during adverse claim events.
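The interaction of the leverage cap and the cash floor reduces to plain arithmetic; a sketch with hypothetical helpers (not part of the package):

```python
def max_new_borrowing(equity: float, current_debt: float,
                      max_leverage_ratio: float) -> float:
    """Additional debt available before hitting the debt/equity cap."""
    return max(0.0, max_leverage_ratio * equity - current_debt)

def cash_shortfall(cash: float, claim_payment: float,
                   minimum_cash_balance: float) -> float:
    """Cash that must be borrowed to pay a claim without breaching the floor."""
    return max(0.0, claim_payment - (cash - minimum_cash_balance))

# A firm with $10M equity, $5M debt, and a 1:1 cap can borrow $5M more.
headroom = max_new_borrowing(equity=10_000_000, current_debt=5_000_000,
                             max_leverage_ratio=1.0)

# Paying a $4M claim from $3M cash with a $1M floor needs $2M of financing.
needed = cash_shortfall(cash=3_000_000, claim_payment=4_000_000,
                        minimum_cash_balance=1_000_000)
```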

interest_rate: float
max_leverage_ratio: float
minimum_cash_balance: float
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class GrowthConfig(**data: Any) None[source]

Bases: BaseModel

Growth model parameters.

Configures whether the simulation uses deterministic or stochastic growth models, along with the associated parameters. Stochastic models add realistic business volatility to growth trajectories.

type

Growth model type - ‘deterministic’ for fixed growth or ‘stochastic’ for random variation.

annual_growth_rate

Base annual growth rate (e.g., 0.05 for 5%). Can be negative for declining businesses.

volatility

Growth rate volatility (standard deviation) for stochastic models. Zero for deterministic models.

Examples

Stable growth:

growth = GrowthConfig(
    type='deterministic',
    annual_growth_rate=0.03  # 3% steady growth
)

Volatile growth:

growth = GrowthConfig(
    type='stochastic',
    annual_growth_rate=0.05,  # 5% expected
    volatility=0.15           # 15% std dev
)

Note

Stochastic growth uses geometric Brownian motion to model realistic business volatility patterns.
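A minimal sketch of the annual geometric Brownian motion update the Note refers to (simplified; the package’s actual discretization may differ):

```python
import numpy as np

def gbm_path(x0: float, mu: float, sigma: float, n_years: int,
             rng: np.random.Generator) -> np.ndarray:
    """Annual GBM steps: x_{t+1} = x_t * exp((mu - sigma^2/2) + sigma * Z)."""
    z = rng.standard_normal(n_years)
    log_steps = (mu - 0.5 * sigma**2) + sigma * z
    return x0 * np.exp(np.cumsum(log_steps))

rng = np.random.default_rng(42)
# 5% expected growth with 15% volatility, matching the config example above.
path = gbm_path(x0=1.0, mu=0.05, sigma=0.15, n_years=50, rng=rng)
```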

type: Literal['deterministic', 'stochastic']
annual_growth_rate: float
volatility: float
validate_stochastic_params()[source]

Ensure volatility is set for stochastic models.

Returns:

The validated config object.

Return type:

GrowthConfig

Raises:

ValueError – If stochastic model is selected but volatility is zero, which would make it effectively deterministic.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class SimulationConfig(**data: Any) None[source]

Bases: BaseModel

Simulation execution parameters.

Controls how the simulation runs, including time resolution, horizon, and randomization settings. These parameters affect computational performance and result granularity.

time_resolution

Simulation time step - ‘annual’ or ‘monthly’. Monthly provides more granularity but increases computation.

time_horizon_years

Simulation horizon in years. Longer horizons reveal ergodic properties but require more computation.

max_horizon_years

Maximum supported horizon to prevent excessive memory usage.

random_seed

Random seed for reproducibility. None for random.

fiscal_year_end

Month of fiscal year end (1-12). Default is 12 (December) for calendar year alignment. Set to 6 for June, 3 for March, etc. to match different fiscal calendars.

Examples

Quick test simulation:

sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=10,
    random_seed=42  # Reproducible
)

Long-term ergodic analysis:

sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=500,
    max_horizon_years=1000,
    random_seed=None  # Random each run
)

Non-calendar fiscal year:

sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=50,
    fiscal_year_end=6  # June fiscal year end
)

Note

For ergodic analysis, horizons of 100+ years are recommended to observe long-term time averages.
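The time average the Note refers to is the per-period log-growth along a single trajectory; an illustrative sketch (not package code):

```python
import numpy as np

def time_average_growth(path: np.ndarray) -> float:
    """Per-period time-average growth rate of one trajectory."""
    return float(np.log(path[-1] / path[0]) / (len(path) - 1))

# A trajectory that doubles over 10 periods grows log(2)/10 ~ 6.93% per
# period in time-average terms, regardless of the path taken in between.
path = np.array([2.0 ** (t / 10) for t in range(11)])
g = time_average_growth(path)
```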

time_resolution: Literal['annual', 'monthly']
time_horizon_years: int
max_horizon_years: int
random_seed: int | None
fiscal_year_end: int
validate_horizons()[source]

Ensure time horizon doesn’t exceed maximum.

Returns:

The validated config object.

Return type:

SimulationConfig

Raises:

ValueError – If time horizon exceeds maximum allowed value, preventing potential memory issues.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class WorkingCapitalConfig(**data: Any) None[source]

Bases: BaseModel

Working capital management parameters.

This class configures how working capital requirements are calculated as a percentage of sales revenue. Working capital represents the funds tied up in day-to-day operations (inventory, receivables, etc.).

percent_of_sales

Working capital as percentage of sales. Typically 15-25% for manufacturers depending on payment terms and inventory turnover.

Examples

Efficient working capital:

wc_config = WorkingCapitalConfig(
    percent_of_sales=0.15  # 15% - lean operations
)

Conservative working capital:

wc_config = WorkingCapitalConfig(
    percent_of_sales=0.30  # 30% - higher inventory/receivables
)

Note

Higher working capital requirements reduce available cash for growth investments but provide operational cushion.

percent_of_sales: float
classmethod validate_working_capital(v: float) float[source]

Validate working capital percentage.

Parameters:

v (float) – Working capital percentage to validate (as decimal).

Returns:

The validated working capital percentage.

Return type:

float

Raises:

ValueError – If working capital percentage exceeds 50% of sales, which would indicate severe operational inefficiency.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class WorkingCapitalRatiosConfig(**data: Any) None[source]

Bases: BaseModel

Enhanced working capital configuration with detailed component ratios.

This extends the basic WorkingCapitalConfig to provide detailed control over individual working capital components using standard financial ratios.

days_sales_outstanding: float
days_inventory_outstanding: float
days_payable_outstanding: float
validate_cash_conversion_cycle()[source]

Validate that cash conversion cycle is reasonable.
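The quantity being validated is the standard identity CCC = DSO + DIO − DPO; illustrative arithmetic (not the package’s validator):

```python
def cash_conversion_cycle(days_sales_outstanding: float,
                          days_inventory_outstanding: float,
                          days_payable_outstanding: float) -> float:
    """Days of cash tied up between paying suppliers and collecting from customers."""
    return (days_sales_outstanding + days_inventory_outstanding
            - days_payable_outstanding)

# 45 days receivables + 60 days inventory - 30 days payables = 75-day cycle.
ccc = cash_conversion_cycle(45, 60, 30)
```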

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class GPUConfig(**data: Any) None[source]

Bases: BaseModel

GPU acceleration configuration.

Controls whether GPU acceleration is used and how GPU resources are managed. When enabled=True but CuPy is not installed, a warning is emitted and operations transparently fall back to NumPy.

enabled

Whether to attempt GPU acceleration.

device_id

CUDA device ordinal to use.

memory_pool

Whether to use CuPy’s memory pool allocator.

pin_memory

Whether to use pinned (page-locked) host memory for faster CPU↔GPU transfers.

random_seed

Optional seed for GPU random number generator.

synchronize

Whether to synchronize after each kernel launch. Useful for profiling but reduces throughput.

Examples

Default (GPU disabled):

gpu_cfg = GPUConfig()

Enable GPU:

gpu_cfg = GPUConfig(enabled=True, device_id=0)
Since:

Version 0.10.0 (Issue #960)

enabled: bool
device_id: int
memory_pool: bool
pin_memory: bool
random_seed: int | None
synchronize: bool
warn_if_unavailable()[source]

Warn when GPU is requested but CuPy is not installed.
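The transparent NumPy fallback described above follows the common try-import pattern; a sketch of the technique, not the package’s internals:

```python
import warnings

try:
    import cupy as xp  # GPU arrays when CuPy is installed
    GPU_AVAILABLE = True
except ImportError:
    import numpy as xp  # transparent CPU fallback with the same array API
    GPU_AVAILABLE = False

def make_buffer(enabled: bool, n: int):
    """Allocate on GPU only when requested *and* available."""
    if enabled and not GPU_AVAILABLE:
        warnings.warn("GPU requested but CuPy is not installed; using NumPy.")
    return xp.zeros(n)

out = make_buffer(enabled=True, n=4)
```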

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

ergodic_insurance.config_compat module (deprecated)

Deprecated: The config_compat module has been removed. Use Config from ergodic_insurance.config directly.

ergodic_insurance.config_loader module

Configuration loader with validation and override support.

This module provides utilities for loading, validating, and managing configuration files, with support for caching, overrides, and scenario-based configurations.

Deprecated: ConfigLoader is deprecated. Use ConfigManager for new code.

class ConfigLoader(config_dir: Path | None = None)[source]

Bases: object

Handles loading and managing configuration.

A comprehensive configuration management system that supports YAML file loading, validation, caching, and runtime overrides.

Deprecated: Use ConfigManager instead.

DEFAULT_CONFIG_DIR = PosixPath('/home/runner/work/Ergodic-Insurance-Limits/Ergodic-Insurance-Limits/ergodic_insurance/data/parameters')
DEFAULT_CONFIG_FILE = 'baseline.yaml'
load(config_name: str = 'baseline', overrides: Dict[str, Any] | None = None) Config[source]

Load configuration with optional overrides.

Parameters:
  • config_name (str) – Name of config file (without .yaml extension) or full path to config file.

  • overrides (Optional[Dict[str, Any]]) – Dictionary of overrides to apply. Supports dot-notation keys ({"manufacturer.tax_rate": 0.21}) and section-level dicts ({"manufacturer": {"tax_rate": 0.21}}).

Return type:

Config

Returns:

Loaded and validated configuration.

Raises:
  • FileNotFoundError – If config file doesn’t exist.

  • ValidationError – If configuration is invalid.
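The documented dot-notation and section-level override behaviour can be sketched as a nested-dict update (a hypothetical helper mirroring the description, not the package’s implementation):

```python
from typing import Any, Dict

def apply_overrides(config: Dict[str, Any],
                    overrides: Dict[str, Any]) -> Dict[str, Any]:
    """Apply {"a.b": v} and {"a": {...}} style overrides to a nested dict."""
    for key, value in overrides.items():
        if "." in key:  # dot-notation: walk down to the leaf key
            node = config
            *parents, leaf = key.split(".")
            for part in parents:
                node = node.setdefault(part, {})
            node[leaf] = value
        elif isinstance(value, dict):  # section-level dict: shallow merge
            config.setdefault(key, {}).update(value)
        else:
            config[key] = value
    return config

cfg = {"manufacturer": {"tax_rate": 0.25, "margin": 0.08}}
cfg = apply_overrides(cfg, {"manufacturer.tax_rate": 0.21})
```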

load_scenario(scenario: str, overrides: Dict[str, Any] | None = None) Config[source]

Load a predefined scenario configuration.

Parameters:
  • scenario (str) – Scenario name (“baseline”, “conservative”, “optimistic”).

  • overrides (Optional[Dict[str, Any]]) – Dictionary of overrides to apply.

Return type:

Config

Returns:

Loaded and validated configuration.

Raises:

ValueError – If scenario is not recognized.

compare_configs(config1: str | Config, config2: str | Config) Dict[str, Any][source]

Compare two configurations and return differences.

Parameters:
  • config1 (Union[str, Config]) – First config (name or Config object).

  • config2 (Union[str, Config]) – Second config (name or Config object).

Return type:

Dict[str, Any]

Returns:

Dictionary of differences between configurations.

validate_config(config: str | Config) bool[source]

Validate a configuration.

Parameters:

config (Union[str, Config]) – Configuration to validate (name or Config object).

Return type:

bool

Returns:

True if valid, raises exception otherwise.

Raises:

ValidationError – If configuration is invalid.

load_pricing_scenarios(scenario_file: str = 'insurance_pricing_scenarios') PricingScenarioConfig[source]

Load pricing scenario configuration.

Parameters:

scenario_file (str) – Name of scenario file (without .yaml extension) or full path to scenario file.

Return type:

PricingScenarioConfig

Returns:

Loaded and validated pricing scenario configuration.

Raises:
  • FileNotFoundError – If scenario file not found.

  • ValidationError – If scenario data is invalid.

switch_pricing_scenario(config: Config, scenario_name: str) Config[source]

Switch to a different pricing scenario.

Updates the configuration’s insurance parameters to use rates from the specified pricing scenario.

Parameters:
  • config (Config) – Current configuration.

  • scenario_name (str) – Name of scenario to switch to ("inexpensive", "baseline", or "expensive").

Return type:

Config

Returns:

Updated configuration with new pricing scenario.

list_available_configs() list[str][source]

List all available configuration files.

Return type:

list[str]

Returns:

List of configuration file names (without .yaml extension).

clear_cache() None[source]

Clear the configuration cache.

Removes all cached configurations, forcing fresh loads on subsequent requests.

Return type:

None

load_config(config_name: str = 'baseline', overrides: Dict[str, Any] | None = None) Config[source]

Quick helper to load a configuration.

Parameters:
  • config_name (str) – Name of config file or full path.

  • overrides (Optional[Dict[str, Any]]) – Dictionary of overrides. Supports dot-notation keys ({"manufacturer.tax_rate": 0.21}) and section-level dicts ({"manufacturer": {"tax_rate": 0.21}}).

Return type:

Config

Returns:

Loaded configuration.

ergodic_insurance.config_manager module

Configuration manager for the new 3-tier configuration system.

This module provides the main interface for loading and managing configurations using profiles, modules, and presets. It implements a modern configuration architecture that supports inheritance, composition, and runtime overrides.

The configuration system is organized into three tiers:
  1. Profiles: Complete configuration sets (default, conservative, aggressive)

  2. Modules: Reusable components (insurance, losses, stochastic, business)

  3. Presets: Quick-apply templates (market conditions, layer structures)

Example

Basic usage of ConfigManager:

from ergodic_insurance.config_manager import ConfigManager

# Initialize manager
manager = ConfigManager()

# Load a profile
config = manager.load_profile("default")

# Load with overrides
config = manager.load_profile(
    "conservative",
    manufacturer={"base_operating_margin": 0.12},
    growth={"annual_growth_rate": 0.08}
)

# Apply presets
config = manager.load_profile(
    "default",
    presets=["hard_market", "high_volatility"]
)

Note

This module replaces the legacy ConfigLoader. (The config_compat shim that previously provided backward compatibility has been removed.)

class ConfigManager(config_dir: Path | None = None)[source]

Bases: object

Manages configuration loading with profiles, modules, and presets.

This class provides a comprehensive configuration management system that supports profile inheritance, module composition, preset application, and runtime parameter overrides. It includes caching for performance and validation for correctness.

config_dir

Root configuration directory path

profiles_dir

Directory containing profile configurations

modules_dir

Directory containing module configurations

presets_dir

Directory containing preset libraries

_cache

Internal cache for loaded configurations

_preset_libraries

Cached preset library definitions

Example

Loading configurations with various options:

manager = ConfigManager()

# Simple profile load
config = manager.load_profile("default")

# With module selection
config = manager.load_profile(
    "default",
    modules=["insurance", "stochastic"]
)

# With inheritance chain
config = manager.load_profile("custom/client_abc")
load_profile(profile_name: str = 'default', use_cache: bool = True, **overrides) Config[source]

Load a configuration profile with optional overrides.

This method loads a configuration profile, applies any inheritance chain, includes specified modules, applies presets, and finally applies runtime overrides. The result is cached for performance.

Parameters:
  • profile_name (str) – Name of the profile to load. Can be a simple name (e.g., “default”) or a path to custom profiles (e.g., “custom/client_abc”).

  • use_cache (bool) – Whether to use cached configurations. Set to False when configuration files might have changed during runtime.

  • **overrides – Runtime overrides organized by section. Supports:
      – modules: List of module names to include
      – presets: List of preset names to apply
      – Any configuration section with nested parameters

Returns:

Fully loaded, validated, and merged configuration instance.

Return type:

Config

Raises:
  • FileNotFoundError – If the specified profile doesn’t exist.

  • ValueError – If configuration validation fails.

  • yaml.YAMLError – If YAML parsing fails.

Example

Various ways to load profiles:

# Basic load
config = manager.load_profile("default")

# With overrides
config = manager.load_profile(
    "conservative",
    manufacturer={"base_operating_margin": 0.12},
    simulation={"time_horizon_years": 50}
)

# With presets and modules
config = manager.load_profile(
    "default",
    modules=["insurance", "stochastic"],
    presets=["hard_market"]
)
with_preset(config: Config, preset_type: str, preset_name: str) Config[source]

Create a new configuration with a preset applied.

Parameters:
  • config (Config) – Base configuration (not mutated).

  • preset_type (str) – Type of preset.

  • preset_name (str) – Name of the preset.

Return type:

Config

Returns:

New Config instance with preset applied.

with_overrides(config: Config, overrides: Dict[str, Any]) Config[source]

Create a new configuration with runtime overrides.

Parameters:
  • config (Config) – Base configuration.

  • overrides (Dict[str, Any]) – Override parameters as a dictionary with dot-notation keys or section-level dictionaries.

Return type:

Config

Returns:

New Config instance with overrides applied.

validate(config: Config) List[str][source]

Validate a configuration for completeness and consistency.

Calls Config.validate_config() for critical issues (which raises ConfigurationError), then returns a list of additional advisory warnings.

Parameters:

config (Config) – Configuration to validate.

Return type:

List[str]

Returns:

List of advisory warnings (empty if none).

Raises:

ConfigurationError – If critical configuration issues are found.

list_profiles() List[str][source]

List all available configuration profiles.

Return type:

List[str]

Returns:

List of profile names.

list_modules() List[str][source]

List all available configuration modules.

Return type:

List[str]

Returns:

List of module names.

list_presets() Dict[str, List[str]][source]

List all available presets by type.

Return type:

Dict[str, List[str]]

Returns:

Dictionary mapping preset types to list of preset names.

clear_cache() None[source]

Clear the configuration cache.

Return type:

None

get_profile_metadata(profile_name: str) Dict[str, Any][source]

Get metadata for a profile without loading the full configuration.

Parameters:

profile_name (str) – Name of the profile.

Return type:

Dict[str, Any]

Returns:

Profile metadata dictionary.

create_profile(name: str, description: str, base_profile: str = 'default', custom: bool = True, **config_params) Path[source]

Create a new configuration profile.

Parameters:
  • name (str) – Profile name.

  • description (str) – Profile description.

  • base_profile (str) – Profile to extend from.

  • custom (bool) – Whether to save as custom profile.

  • **config_params – Configuration parameters.

Return type:

Path

Returns:

Path to the created profile file.

ergodic_insurance.config_migrator module

Configuration migration tools for converting legacy YAML files to new 3-tier system.

This module provides utilities to migrate from the old 12-file configuration system to the new profiles/modules/presets architecture.

class ConfigMigrator[source]

Bases: object

Handles migration from legacy configuration to new 3-tier system.

convert_baseline() Dict[str, Any][source]

Convert baseline.yaml to new profile format.

Return type:

Dict[str, Any]

Returns:

Converted configuration as a dictionary.

convert_conservative() Dict[str, Any][source]

Convert conservative.yaml to new profile format.

Return type:

Dict[str, Any]

Returns:

Converted configuration as a dictionary.

convert_optimistic() Dict[str, Any][source]

Convert optimistic.yaml to new profile format.

Return type:

Dict[str, Any]

Returns:

Converted configuration as a dictionary.

extract_modules() None[source]

Extract reusable modules from legacy configuration files.

Return type:

None

create_presets() None[source]

Generate preset libraries from existing configurations.

Return type:

None

validate_migration() bool[source]

Validate that all configurations were successfully migrated.

Return type:

bool

Returns:

True if validation passes, False otherwise.

generate_migration_report() str[source]

Generate a detailed migration report.

Return type:

str

Returns:

Formatted migration report as a string.

run_migration() bool[source]

Run the complete migration process.

Return type:

bool

Returns:

True if migration successful, False otherwise.

ergodic_insurance.convergence module

Convergence diagnostics for Monte Carlo simulations.

This module provides tools for assessing convergence of Monte Carlo simulations including Gelman-Rubin R-hat, effective sample size, and Monte Carlo standard error.

class ConvergenceStats(r_hat: float, ess: float, mcse: float, converged: bool, n_iterations: int, autocorrelation: float) None[source]

Bases: object

Container for convergence statistics.

r_hat: float
ess: float
mcse: float
converged: bool
n_iterations: int
autocorrelation: float
__str__() str[source]

String representation of convergence stats.

Return type:

str

class ConvergenceDiagnostics(r_hat_threshold: float = 1.1, min_ess: int = 1000, relative_mcse_threshold: float = 0.05)[source]

Bases: object

Convergence diagnostics for Monte Carlo simulations.

Provides methods for assessing convergence using multiple chains and calculating effective sample sizes.

calculate_r_hat(chains: ndarray) float[source]

Calculate Gelman-Rubin R-hat statistic.

Parameters:

chains (ndarray) – Array of shape (n_chains, n_iterations) or (n_chains, n_iterations, n_metrics)

Return type:

float

Returns:

R-hat statistic (values close to 1 indicate convergence)
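A minimal NumPy sketch of the Gelman-Rubin statistic this method computes (simplified relative to the package’s implementation):

```python
import numpy as np

def gelman_rubin_r_hat(chains: np.ndarray) -> float:
    """R-hat for chains of shape (n_chains, n_iterations)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)         # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    var_plus = (n - 1) / n * w + b / n      # pooled variance estimate
    return float(np.sqrt(var_plus / w))

rng = np.random.default_rng(0)
chains = rng.standard_normal((4, 2000))  # well-mixed chains give R-hat near 1
r_hat = gelman_rubin_r_hat(chains)
```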

calculate_ess(chain: ndarray, max_lag: int | None = None) float[source]

Calculate effective sample size using Geyer’s initial positive sequence.

Uses Geyer’s (1992, Theorem 3.1) initial positive sequence estimator: ESS = N / tau, where tau = 1 + 2 * sum of consecutive ACF pair sums (rho[2k-1] + rho[2k]) truncated at the first non-positive pair.

Individual autocorrelation values may be negative while pair sums remain positive — this is common for oscillating MCMC chains.

Parameters:
  • chain (ndarray) – 1D array of samples

  • max_lag (Optional[int]) – Maximum lag for autocorrelation calculation

Return type:

float

Returns:

Effective sample size
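The initial-positive-sequence estimator described above can be sketched in a few lines of NumPy (a simplified version of what this method does):

```python
import numpy as np

def geyer_ess(chain: np.ndarray) -> float:
    """ESS = N / tau, truncating at the first non-positive ACF pair sum."""
    n = len(chain)
    x = chain - chain.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)
    tau = 1.0
    for k in range(1, n // 2):
        pair = acf[2 * k - 1] + acf[2 * k]  # Geyer pair sum rho[2k-1] + rho[2k]
        if pair <= 0:
            break
        tau += 2.0 * pair
    return n / tau

rng = np.random.default_rng(1)
iid = rng.standard_normal(5000)  # independent samples: ESS close to N
ess = geyer_ess(iid)
```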

calculate_batch_ess(chains: ndarray, method: str = 'mean') float | ndarray[source]

Calculate ESS for multiple chains or metrics.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations) or (n_chains, n_iterations, n_metrics)

  • method (str) – How to combine ESS across chains (‘mean’, ‘min’, ‘all’)

Return type:

Union[float, ndarray]

Returns:

Combined ESS value(s)

calculate_ess_per_second(chain: ndarray, computation_time: float) float[source]

Calculate ESS per second of computation.

Useful for comparing efficiency of different sampling methods.

Parameters:
  • chain (ndarray) – 1D array of samples

  • computation_time (float) – Time in seconds taken to generate the chain

Return type:

float

Returns:

ESS per second

calculate_mcse(chain: ndarray, ess: float | None = None) float[source]

Calculate Monte Carlo standard error.

Parameters:
  • chain (ndarray) – 1D array of samples

  • ess (Optional[float]) – Effective sample size (calculated if not provided)

Return type:

float

Returns:

Monte Carlo standard error
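The relationship being computed is the standard formula MCSE = sd / sqrt(ESS); an illustrative sketch:

```python
import numpy as np

def mcse(chain: np.ndarray, ess: float) -> float:
    """Monte Carlo standard error of the chain mean."""
    return float(np.std(chain, ddof=1) / np.sqrt(ess))

rng = np.random.default_rng(2)
chain = rng.standard_normal(10_000)
se = mcse(chain, ess=len(chain))  # for iid samples, ESS = N, so se ~ 0.01
```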

check_convergence(chains: ndarray | List[ndarray], metric_names: List[str] | None = None) Dict[str, ConvergenceStats][source]

Check convergence for multiple chains and metrics.

Parameters:
  • chains (Union[ndarray, List[ndarray]]) – Array of shape (n_chains, n_iterations, n_metrics) or list of chains

  • metric_names (Optional[List[str]]) – Names of metrics (optional)

Return type:

Dict[str, ConvergenceStats]

Returns:

Dictionary mapping metric names to convergence statistics

geweke_test(chain: ndarray, first_fraction: float = 0.1, last_fraction: float = 0.5) Tuple[float, float][source]

Perform Geweke convergence test.

Compares means of first and last portions of chain using spectral density estimates at zero frequency per Geweke (1992). This correctly accounts for autocorrelation in MCMC and sequential Monte Carlo chains.

Parameters:
  • chain (ndarray) – 1D array of samples

  • first_fraction (float) – Fraction of chain to use for first portion

  • last_fraction (float) – Fraction of chain to use for last portion

Return type:

Tuple[float, float]

Returns:

Tuple of (z-score, p-value)

heidelberger_welch_test(chain: ndarray, alpha: float = 0.05) Dict[str, bool | float][source]

Perform Heidelberger-Welch stationarity and halfwidth tests.

Parameters:
  • chain (ndarray) – 1D array of samples

  • alpha (float) – Significance level

Return type:

Dict[str, Union[bool, float]]

Returns:

Dictionary with test results

ergodic_insurance.convergence_advanced module

Advanced convergence diagnostics for Monte Carlo simulations.

This module extends basic convergence diagnostics with advanced features including autocorrelation analysis, spectral density estimation, and sophisticated ESS calculations.

class SpectralDiagnostics(spectral_density: ndarray, frequencies: ndarray, integrated_autocorr_time: float, effective_sample_size: float) None[source]

Bases: object

Container for spectral analysis results.

spectral_density: ndarray
frequencies: ndarray
integrated_autocorr_time: float
effective_sample_size: float
__str__() str[source]

String representation of spectral diagnostics.

Return type:

str

class AutocorrelationAnalysis(acf_values: ndarray, lags: ndarray, integrated_time: float, initial_monotone_sequence: int, initial_positive_sequence: int) None[source]

Bases: object

Container for autocorrelation analysis results.

acf_values: ndarray
lags: ndarray
integrated_time: float
initial_monotone_sequence: int
initial_positive_sequence: int
__str__() str[source]

String representation of autocorrelation analysis.

Return type:

str

class AdvancedConvergenceDiagnostics(fft_size: int | None = None)[source]

Bases: object

Advanced convergence diagnostics for Monte Carlo simulations.

Provides sophisticated methods for assessing convergence including spectral density estimation, multiple ESS calculation methods, and advanced autocorrelation analysis.

calculate_autocorrelation_full(chain: ndarray, max_lag: int | None = None, method: str = 'fft') AutocorrelationAnalysis[source]

Calculate comprehensive autocorrelation analysis.

Parameters:
  • chain (ndarray) – 1D array of samples

  • max_lag (Optional[int]) – Maximum lag for autocorrelation (None for automatic)

  • method (str) – Method for calculation (“fft”, “direct”, or “biased”)

Return type:

AutocorrelationAnalysis

Returns:

AutocorrelationAnalysis object with detailed results
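FFT-based autocorrelation (the “fft” method above) exploits the Wiener-Khinchin theorem; a standalone sketch of the technique:

```python
import numpy as np

def acf_fft(chain: np.ndarray, max_lag: int) -> np.ndarray:
    """Autocorrelation via FFT: O(N log N) instead of O(N * max_lag)."""
    x = chain - chain.mean()
    n = len(x)
    m = 2 ** int(np.ceil(np.log2(2 * n)))  # zero-pad to avoid circular wrap
    spectrum = np.abs(np.fft.rfft(x, n=m)) ** 2
    acov = np.fft.irfft(spectrum)[: max_lag + 1] / n
    return acov / acov[0]  # normalize so acf[0] == 1

rng = np.random.default_rng(4)
acf = acf_fft(rng.standard_normal(4096), max_lag=20)
```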

calculate_spectral_density(chain: ndarray, method: str = 'welch', nperseg: int | None = None) SpectralDiagnostics[source]

Calculate spectral density and related diagnostics.

Parameters:
  • chain (ndarray) – 1D array of samples

  • method (str) – Method for spectral estimation (“welch”, “periodogram”, “multitaper”)

  • nperseg (Optional[int]) – Length of each segment for Welch’s method

Return type:

SpectralDiagnostics

Returns:

SpectralDiagnostics object with spectral analysis results

calculate_ess_batch_means(chain: ndarray, batch_size: int | None = None, n_batches: int | None = None) float[source]

Calculate ESS using batch means method.

Parameters:
  • chain (ndarray) – 1D array of samples

  • batch_size (Optional[int]) – Size of each batch (calculated if None)

  • n_batches (Optional[int]) – Number of batches (calculated if None)

Return type:

float

Returns:

Effective sample size estimate
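Batch means estimates the long-run variance from the spread of batch averages; a simplified sketch of the estimator:

```python
import numpy as np

def batch_means_ess(chain: np.ndarray, batch_size: int) -> float:
    """ESS = N * Var(chain) / (batch_size * Var(batch means))."""
    n_batches = len(chain) // batch_size
    trimmed = chain[: n_batches * batch_size]
    batch_means = trimmed.reshape(n_batches, batch_size).mean(axis=1)
    sigma2_bm = batch_size * batch_means.var(ddof=1)  # long-run variance estimate
    return len(trimmed) * trimmed.var(ddof=1) / sigma2_bm

rng = np.random.default_rng(3)
iid = rng.standard_normal(10_000)
ess = batch_means_ess(iid, batch_size=100)  # near N for independent draws
```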

calculate_ess_overlapping_batch(chain: ndarray, batch_size: int | None = None) float[source]

Calculate ESS using overlapping batch means (lower-variance than disjoint batches).

Parameters:
  • chain (ndarray) – 1D array of samples

  • batch_size (Optional[int]) – Size of each batch (calculated if None)

Return type:

float

Returns:

Effective sample size estimate

heidelberger_welch_advanced(chain: ndarray, alpha: float = 0.05, eps: float = 0.1, pvalue_threshold: float = 0.05) Dict[str, bool | int | float][source]

Advanced Heidelberger-Welch stationarity test.

Parameters:
  • chain (ndarray) – 1D array of samples

  • alpha (float) – Significance level for confidence intervals

  • eps (float) – Relative precision for halfwidth test

  • pvalue_threshold (float) – P-value threshold for stationarity

Return type:

Dict[str, Union[bool, int, float]]

Returns:

Dictionary with detailed test results

raftery_lewis_diagnostic(chain: ndarray, q: float = 0.025, r: float = 0.005, s: float = 0.95) Dict[str, float][source]

Raftery-Lewis diagnostic for required chain length.

Parameters:
  • chain (ndarray) – 1D array of samples

  • q (float) – Quantile of interest

  • r (float) – Desired accuracy

  • s (float) – Probability of achieving accuracy

Return type:

Dict[str, float]

Returns:

Dictionary with diagnostic results

ergodic_insurance.convergence_plots module

Real-time convergence visualization for Monte Carlo simulations.

This module provides real-time plotting capabilities for monitoring convergence during long-running simulations with minimal computational overhead.

class RealTimeConvergencePlotter(n_parameters: int = 1, n_chains: int = 1, buffer_size: int = 1000, update_interval: int = 100, figsize: Tuple[float, float] = (12, 8))[source]

Bases: object

Real-time convergence plotting with minimal overhead.

Provides animated visualization of convergence diagnostics during Monte Carlo simulations with efficient updating mechanisms.

setup_figure(parameter_names: List[str] | None = None, show_diagnostics: bool = True) Figure[source]

Setup the figure with subplots for real-time monitoring.

Parameters:
  • parameter_names (Optional[List[str]]) – Names of parameters being monitored

  • show_diagnostics (bool) – Whether to show diagnostic panels

Return type:

Figure

Returns:

Matplotlib figure object

update_data(iteration: int, chains_data: ndarray, diagnostics: Dict[str, List[float]] | None = None)[source]

Update data buffers with new samples.

Parameters:
  • iteration (int) – Current iteration number

  • chains_data (ndarray) – Array of shape (n_chains, n_parameters)

  • diagnostics (Optional[Dict[str, List[float]]]) – Optional dictionary with R-hat, ESS values

plot_static_convergence(chains: ndarray, burn_in: int | None = None, thin: int = 1) Figure[source]

Create static convergence plots for completed chains.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations, n_parameters)

  • burn_in (Optional[int]) – Burn-in period to highlight

  • thin (int) – Thinning interval for display

Return type:

Figure

Returns:

Figure with convergence plots

plot_ess_evolution(ess_values: List[float] | ndarray, iterations: ndarray | None = None, target_ess: float = 1000) Figure[source]

Plot evolution of effective sample size.

Parameters:
  • ess_values (Union[List[float], ndarray]) – ESS values over iterations

  • iterations (Optional[ndarray]) – Iteration numbers (generated if None)

  • target_ess (float) – Target ESS threshold

Return type:

Figure

Returns:

Figure with ESS evolution plot

plot_autocorrelation_surface(chains: ndarray, max_lag: int = 50, param_idx: int = 0) Figure[source]

Create 3D surface plot of autocorrelation over time.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations, n_parameters)

  • max_lag (int) – Maximum lag for ACF

  • param_idx (int) – Parameter index to plot

Return type:

Figure

Returns:

Figure with 3D autocorrelation surface

create_convergence_dashboard(chains: ndarray, diagnostics: Dict[str, Any], parameter_names: List[str] | None = None) Figure[source]

Create comprehensive convergence dashboard.

Parameters:
  • chains (ndarray) – Array of shape (n_chains, n_iterations, n_parameters)

  • diagnostics (Dict[str, Any]) – Dictionary with convergence diagnostics

  • parameter_names (Optional[List[str]]) – Names of parameters

Return type:

Figure

Returns:

Figure with comprehensive dashboard

ergodic_insurance.decimal_utils module

Decimal utilities for financial calculations.

This module provides utilities for precise financial calculations using Python’s decimal.Decimal type. Using Decimal instead of float prevents accumulation errors in iterative simulations and ensures accounting identities hold exactly.

A float mode (Issue #1142) can be activated per-thread via enable_float_mode(). When active, to_decimal() returns float instead of Decimal, eliminating the overhead of Decimal arithmetic in Monte Carlo hot paths while keeping the same numeric API.

Example

Convert a float to decimal for financial use:

from ergodic_insurance.decimal_utils import to_decimal, ZERO

amount = to_decimal(1234.56)
if amount != ZERO:
    print(f"Amount: {amount}")
enable_float_mode() None[source]

Enable float mode for the current thread.

When active, to_decimal() returns float and quantize_currency() rounds with round() instead of Decimal.quantize. This eliminates Decimal overhead in Monte Carlo hot paths (Issue #1142).

Return type:

None

disable_float_mode() None[source]

Disable float mode for the current thread (restore Decimal behaviour).

Return type:

None

is_float_mode() bool[source]

Return True if float mode is active in the current thread.

Return type:

bool
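The per-thread toggle can be pictured with a small `threading.local` sketch. This is illustrative only: `_mode` and the `*_sketch` names are hypothetical, not the package's actual internals.

```python
import threading
from decimal import Decimal

_mode = threading.local()  # hypothetical per-thread flag mirroring the documented behaviour

def enable_float_mode_sketch():
    _mode.active = True

def disable_float_mode_sketch():
    _mode.active = False

def is_float_mode_sketch():
    return getattr(_mode, "active", False)

def to_decimal_sketch(value):
    # In float mode return plain floats; otherwise convert via str() to
    # avoid binary floating-point artifacts (per the to_decimal docs).
    if is_float_mode_sketch():
        return float(value or 0)
    if value is None:
        return Decimal("0.00")
    return Decimal(str(value))
```

Because the flag lives in thread-local storage, a worker thread enabling float mode does not change the behaviour seen by other threads.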

to_decimal(value: float | int | str | Decimal | None) Decimal[source]

Convert a numeric value to Decimal (or float in float mode).

In normal mode, converts floats, ints, strings, or existing Decimals to a standardized Decimal value. Floats are converted via string representation to avoid binary floating point artifacts.

In float mode (Issue #1142), returns float directly, avoiding the cost of Decimal construction.

Parameters:

value (Union[float, int, str, Decimal, None]) – Numeric value to convert. None is converted to zero.

Return type:

Decimal

Returns:

Decimal (or float in float mode) representation of the value.

Example

>>> to_decimal(1234.56)
Decimal('1234.56')
>>> to_decimal(None)
Decimal('0.00')
quantize_currency(value: Decimal | float | int) Decimal[source]

Quantize a value to currency precision (2 decimal places).

Rounds using ROUND_HALF_UP (ties round away from zero), which is conventional for financial calculations. Note this is not banker's rounding (ROUND_HALF_EVEN), which rounds ties to the nearest even digit.

In float mode, uses round() for speed.

Parameters:

value (Union[Decimal, float, int]) – Numeric value to quantize.

Return type:

Decimal

Returns:

Decimal (or float) rounded to 2 decimal places.

Example

>>> quantize_currency(Decimal("1234.567"))
Decimal('1234.57')
>>> quantize_currency(1234.565)
Decimal('1234.57')
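The distinction matters for ties. The stdlib snippet below contrasts ROUND_HALF_UP with banker's rounding (ROUND_HALF_EVEN):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

cent = Decimal("0.01")

# ROUND_HALF_UP: ties move away from zero.
assert Decimal("1234.565").quantize(cent, rounding=ROUND_HALF_UP) == Decimal("1234.57")

# Banker's rounding (ROUND_HALF_EVEN): ties go to the nearest even digit.
assert Decimal("1234.565").quantize(cent, rounding=ROUND_HALF_EVEN) == Decimal("1234.56")
```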
is_zero(value: Decimal | float | int) bool[source]

Check if a value is effectively zero after quantization.

Useful for balance checks where we need exact equality after rounding to currency precision.

Parameters:

value (Union[Decimal, float, int]) – Numeric value to check.

Return type:

bool

Returns:

True if value rounds to zero at currency precision.

Example

>>> is_zero(Decimal("0.001"))
True
>>> is_zero(Decimal("0.01"))
False
sum_decimals(*values: Decimal | float | int) Decimal[source]

Sum multiple values with Decimal precision (or float in float mode).

Converts all values via to_decimal() before summing.

Parameters:

*values (Union[Decimal, float, int]) – Numeric values to sum.

Return type:

Decimal

Returns:

Sum of all values.

Example

>>> sum_decimals(0.1, 0.2, 0.3)
Decimal('0.6')
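Why the conversion matters: summing binary floats directly accumulates representation error, while summing string-converted Decimals does not. A stdlib demonstration:

```python
from decimal import Decimal

# Binary floats: 0.1 + 0.2 is not exactly 0.3.
assert 0.1 + 0.2 != 0.3

# String-converted Decimals sum exactly, matching sum_decimals' documented result.
total = sum((Decimal(str(v)) for v in (0.1, 0.2, 0.3)), Decimal("0"))
assert total == Decimal("0.6")
```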
safe_divide(numerator: Decimal | float | int, denominator: Decimal | float | int, default: Decimal | float | int = Decimal('0.00')) Decimal[source]

Safely divide two values, returning default if denominator is zero.

Parameters:
  • numerator (Union[Decimal, float, int]) – Value to divide.

  • denominator (Union[Decimal, float, int]) – Value to divide by.

  • default (Union[Decimal, float, int]) – Value returned when denominator is zero. Defaults to Decimal('0.00').

Return type:

Decimal

Returns:

Result of division, or default if denominator is zero.

Example

>>> safe_divide(100, 4)
Decimal('25')
>>> safe_divide(100, 0, default=Decimal("-1"))
Decimal('-1')

ergodic_insurance.decision_engine module

Algorithmic insurance decision engine for optimal coverage selection.

This module implements a comprehensive decision framework that optimizes insurance purchasing decisions using multi-objective optimization to balance growth targets with bankruptcy risk constraints.

class OptimizationMethod(*values)[source]

Bases: Enum

Available optimization methods.

SLSQP = 'SLSQP'
ENHANCED_SLSQP = 'enhanced_slsqp'
DIFFERENTIAL_EVOLUTION = 'differential_evolution'
WEIGHTED_SUM = 'weighted_sum'
TRUST_REGION = 'trust_region'
PENALTY_METHOD = 'penalty_method'
AUGMENTED_LAGRANGIAN = 'augmented_lagrangian'
MULTI_START = 'multi_start'
class DecisionOptimizationConstraints(max_premium_budget: float = 1000000, min_total_coverage: float = 5000000, max_total_coverage: float = 100000000, max_bankruptcy_probability: float = 0.01, min_retained_limit: float = 100000, max_retained_limit: float = 10000000, max_layers: int = 5, min_layers: int = 1, required_roi_improvement: float = 0.0, max_debt_to_equity: float = 2.0, max_insurance_cost_ratio: float = 0.03, min_coverage_requirement: float = 0.0, max_retention_limit: float = inf, min_coverage_limit: float | None = None, max_coverage_limit: float | None = None) None[source]

Bases: object

Constraints for insurance optimization.

min_total_coverage

Minimum total program coverage (sum of all layer limits). This is not a per-occurrence layer limit; it bounds the overall insurance program size.

max_total_coverage

Maximum total program coverage.

max_premium_budget: float = 1000000
min_total_coverage: float = 5000000
max_total_coverage: float = 100000000
max_bankruptcy_probability: float = 0.01
min_retained_limit: float = 100000
max_retained_limit: float = 10000000
max_layers: int = 5
min_layers: int = 1
required_roi_improvement: float = 0.0
max_debt_to_equity: float = 2.0
max_insurance_cost_ratio: float = 0.03
min_coverage_requirement: float = 0.0
max_retention_limit: float = inf
min_coverage_limit: float | None = None
max_coverage_limit: float | None = None
__post_init__()[source]

Resolve deprecated min_coverage_limit / max_coverage_limit aliases.

class InsuranceDecision(retained_limit: float, layers: List[EnhancedInsuranceLayer], total_premium: float, total_coverage: float, pricing_scenario: str, optimization_method: str, convergence_iterations: int = 0, objective_value: float = 0.0) None[source]

Bases: object

Represents an insurance purchasing decision.

retained_limit: float
layers: List[EnhancedInsuranceLayer]
total_premium: float
total_coverage: float
pricing_scenario: str
optimization_method: str
convergence_iterations: int = 0
objective_value: float = 0.0
__post_init__()[source]

Calculate derived fields.

class GrowthMetrics(ergodic_growth_rate: float) None[source]

Bases: object

Ergodic growth metrics for insurance decision evaluation.

The ergodic growth rate measures the time-average (geometric) growth of firm value under a given insurance program. Unlike the ensemble-average used in traditional expected-value analysis, the time-average captures the compounding effect of sequential outcomes and correctly penalizes variance — making it the appropriate objective for a single firm operating through time.

ergodic_growth_rate: float
class DecisionRiskMetrics(bankruptcy_probability: float, value_at_risk_95: float, conditional_value_at_risk: float) None[source]

Bases: object

Tail-risk metrics for an insurance decision.

bankruptcy_probability is the simulated ruin frequency, value_at_risk_95 is the 95th-percentile loss, and conditional_value_at_risk (Tail Value-at-Risk / Expected Shortfall) is the expected loss in the worst 5% of scenarios — a tail-risk measure widely used in regulatory solvency frameworks.
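The relationship between the two tail metrics can be illustrated with a minimal empirical sketch (a hypothetical helper, not the engine's implementation):

```python
def empirical_var_cvar(losses, level=0.95):
    """Empirical VaR at `level`, and CVaR as the mean loss at or beyond VaR."""
    ordered = sorted(losses)
    idx = int(level * len(ordered))   # index of the level-quantile loss
    var = ordered[idx]
    tail = ordered[idx:]              # worst (1 - level) share of scenarios
    return var, sum(tail) / len(tail)
```

CVaR is always at least VaR, since it averages the losses at or beyond the VaR threshold.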

bankruptcy_probability: float
value_at_risk_95: float
conditional_value_at_risk: float
class ROEComponentMetrics(operating_roe: float = 0.0, insurance_impact_roe: float = 0.0, tax_effect_roe: float = 0.0) None[source]

Bases: object

Decomposition of Return on Equity into operational drivers.

This DuPont-style breakdown isolates the ROE impact of underwriting operations, the insurance program cost, and the tax shield. Actuaries and CFOs use this decomposition to attribute value creation to the insurance purchasing decision versus organic business performance.

operating_roe: float = 0.0
insurance_impact_roe: float = 0.0
tax_effect_roe: float = 0.0
class ROEMetrics(expected_roe: float, roe_improvement: float, time_weighted_roe: float = 0.0, roe_volatility: float = 0.0, roe_sharpe_ratio: float = 0.0, roe_downside_deviation: float = 0.0, roe_1yr_rolling: float = 0.0, roe_3yr_rolling: float = 0.0, roe_5yr_rolling: float = 0.0, components: ROEComponentMetrics = <factory>) None[source]

Bases: object

Return-on-equity metrics for insurance decision evaluation.

Captures expected ROE, its improvement versus the uninsured baseline, risk-adjusted performance (Sharpe ratio, downside deviation), and rolling averages. The components sub-group isolates the operational, insurance, and tax contributions to ROE — useful for board-level attribution reporting.

expected_roe: float
roe_improvement: float
time_weighted_roe: float = 0.0
roe_volatility: float = 0.0
roe_sharpe_ratio: float = 0.0
roe_downside_deviation: float = 0.0
roe_1yr_rolling: float = 0.0
roe_3yr_rolling: float = 0.0
roe_5yr_rolling: float = 0.0
components: ROEComponentMetrics
class EfficiencyMetrics(premium_to_limit_ratio: float, coverage_adequacy: float, capital_efficiency: float) None[source]

Bases: object

Premium efficiency and coverage adequacy metrics.

premium_to_limit_ratio (rate-on-line) measures how many cents of premium are paid per dollar of limit — a standard broker metric for layer pricing comparisons. coverage_adequacy measures how well the program covers expected aggregate losses. capital_efficiency is the incremental firm value per dollar of premium, analogous to the ROIC of the insurance spend.

premium_to_limit_ratio: float
coverage_adequacy: float
capital_efficiency: float
class DecisionMetrics(*, ergodic_growth_rate: float, bankruptcy_probability: float, expected_roe: float, roe_improvement: float, premium_to_limit_ratio: float, coverage_adequacy: float, capital_efficiency: float, value_at_risk_95: float, conditional_value_at_risk: float, decision_score: float = 0.0, time_weighted_roe: float = 0.0, roe_volatility: float = 0.0, roe_sharpe_ratio: float = 0.0, roe_downside_deviation: float = 0.0, roe_1yr_rolling: float = 0.0, roe_3yr_rolling: float = 0.0, roe_5yr_rolling: float = 0.0, operating_roe: float = 0.0, insurance_impact_roe: float = 0.0, tax_effect_roe: float = 0.0)[source]

Bases: object

Comprehensive metrics for evaluating an insurance decision.

Metrics are organized into four logical groups accessible as sub-objects: growth (GrowthMetrics), risk (DecisionRiskMetrics), roe (ROEMetrics), and efficiency (EfficiencyMetrics).

For backward compatibility, all fields remain accessible as flat attributes (e.g. metrics.bankruptcy_probability) in addition to the grouped path (metrics.risk.bankruptcy_probability).

to_dict(group: str | None = None) Dict[str, Any][source]

Serialize metrics to a flat dictionary.

Parameters:

group (Optional[str]) – If None, returns all metrics as a flat dict. If a group name ("growth", "risk", "roe", "efficiency"), returns only that group’s fields. For "roe", component metrics are inlined into the same dict.

Return type:

Dict[str, Any]

Returns:

Dictionary of metric name to value.

Raises:

ValueError – If group is not a recognized group name.

calculate_score(weights: Dict[str, float] | None = None, targets: Dict[str, float] | None = None) float[source]

Calculate weighted decision score.

Normalizes each metric to a [0, 1] scale using configurable targets, then computes a weighted sum.

Parameters:
  • weights (Optional[Dict[str, float]]) – Weights for each metric component. Keys are "growth", "risk", "efficiency", and "adequacy". Values should sum to 1.0. Default: {"growth": 0.3, "risk": 0.3, "efficiency": 0.2, "adequacy": 0.2}.

  • targets (Optional[Dict[str, float]]) –

    Normalization targets that define what “perfect” looks like for growth and risk metrics. Keys:

    • "growth_target" — The growth rate that maps to a score of 1.0. Default 0.10 (10%). A mid-sized manufacturer typically targets 8–12% real growth; 10% represents a reasonable mid-range goal. Typical ranges: conservative 0.05–0.08, moderate 0.08–0.12, aggressive 0.12–0.20.

    • "max_acceptable_risk" — The bankruptcy probability at or above which the risk score is 0.0. Default 0.05 (5%). Below this threshold the score scales linearly to 1.0 (lower risk is better). Typical ranges: conservative 0.01–0.03, moderate 0.03–0.05, aggressive 0.05–0.10.

Return type:

float

Returns:

Weighted score between 0 and 1.
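The normalization described above can be sketched as follows. This is a hypothetical helper mirroring the documented defaults, not the actual method, and it omits the efficiency and adequacy terms for brevity:

```python
def sketch_score(growth_rate, bankruptcy_prob,
                 growth_target=0.10, max_acceptable_risk=0.05,
                 w_growth=0.3, w_risk=0.3):
    # Growth score: linear up to the target, clipped to [0, 1].
    g = min(max(growth_rate / growth_target, 0.0), 1.0)
    # Risk score: 1.0 at zero risk, 0.0 at or above the acceptable maximum.
    r = min(max(1.0 - bankruptcy_prob / max_acceptable_risk, 0.0), 1.0)
    # Weighted combination of the normalized components.
    return w_growth * g + w_risk * r
```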

class SensitivityReport(base_decision: InsuranceDecision, base_metrics: DecisionMetrics, parameter_sensitivities: Dict[str, Dict[str, float]], key_drivers: List[str], robust_range: Dict[str, Tuple[float, float]], stress_test_results: Dict[str, DecisionMetrics]) None[source]

Bases: object

Results of sensitivity analysis.

base_decision: InsuranceDecision
base_metrics: DecisionMetrics
parameter_sensitivities: Dict[str, Dict[str, float]]
key_drivers: List[str]
robust_range: Dict[str, Tuple[float, float]]
stress_test_results: Dict[str, DecisionMetrics]
class Recommendations(primary_recommendation: InsuranceDecision, primary_rationale: str, alternative_options: List[Tuple[InsuranceDecision, str]], implementation_timeline: List[str], risk_considerations: List[str], expected_benefits: Dict[str, float], confidence_level: float) None[source]

Bases: object

Executive-ready recommendations from the decision engine.

primary_recommendation: InsuranceDecision
primary_rationale: str
alternative_options: List[Tuple[InsuranceDecision, str]]
implementation_timeline: List[str]
risk_considerations: List[str]
expected_benefits: Dict[str, float]
confidence_level: float
class InsuranceDecisionEngine(manufacturer: WidgetManufacturer, loss_distribution: LossDistribution, pricing_scenario: str = 'baseline', config_manager: ConfigManager | None = None, engine_config: DecisionEngineConfig | None = None, config_loader: Any | None = None)[source]

Bases: object

Algorithmic engine for optimizing insurance decisions.

classmethod from_company(initial_assets: float = 10000000, loss_mean: float = 1000000, loss_cv: float = 1.5, operating_margin: float = 0.08, industry: str = 'manufacturing', tax_rate: float = 0.25, growth_rate: float = 0.05, pricing_scenario: str = 'baseline', seed: int | None = None) InsuranceDecisionEngine[source]

Create a decision engine from basic company parameters.

Factory method that mirrors Config.from_company() so actuaries and risk managers can start optimizing without constructing ManufacturerConfig, WidgetManufacturer, or LossDistribution objects manually.

Parameters:
  • initial_assets (float) – Starting asset value in dollars.

  • loss_mean (float) – Mean annual loss severity in dollars.

  • loss_cv (float) – Loss severity coefficient of variation (std / mean).

  • operating_margin (float) – Base operating margin (e.g. 0.08 for 8%).

  • industry (str) – Industry type for deriving config defaults. Supported values: "manufacturing", "service", "retail".

  • tax_rate (float) – Corporate tax rate.

  • growth_rate (float) – Annual revenue growth rate.

  • pricing_scenario (str) – Insurance market pricing scenario.

  • seed (Optional[int]) – Random seed for the loss distribution.

Return type:

InsuranceDecisionEngine

Returns:

Ready-to-use InsuranceDecisionEngine instance.

Examples

Minimal — uses all defaults:

engine = InsuranceDecisionEngine.from_company(
    initial_assets=10_000_000,
)

Specify loss distribution:

engine = InsuranceDecisionEngine.from_company(
    initial_assets=50_000_000,
    loss_mean=2_000_000,
    loss_cv=2.0,
    industry="service",
)

Full optimization workflow:

engine = InsuranceDecisionEngine.from_company(
    initial_assets=10_000_000,
    loss_mean=1_000_000,
    loss_cv=1.5,
)
decision = engine.optimize(max_premium=500_000)
optimize(max_premium: float | None = None, max_bankruptcy_probability: float = 0.01, method: OptimizationMethod = OptimizationMethod.SLSQP, weights: Dict[str, float] | None = None, **constraint_overrides) InsuranceDecision[source]

Optimize insurance purchasing with sensible defaults.

Convenience wrapper around optimize_insurance_decision() that builds DecisionOptimizationConstraints from keyword arguments, filling in reasonable defaults where omitted.

Parameters:
  • max_premium (Optional[float]) – Maximum annual premium budget. Defaults to 10% of expected annual revenue.

  • max_bankruptcy_probability (float) – Maximum acceptable probability of ruin (default 1%).

  • method (OptimizationMethod) – Optimization algorithm to use.

  • weights (Optional[Dict[str, float]]) – Objective function weights (growth, risk, cost).

  • **constraint_overrides – Additional overrides passed directly to DecisionOptimizationConstraints fields.

Return type:

InsuranceDecision

Returns:

Optimal InsuranceDecision.

Examples

Quick optimization:

decision = engine.optimize(max_premium=500_000)

With custom constraints:

decision = engine.optimize(
    max_premium=1_000_000,
    max_bankruptcy_probability=0.005,
    min_total_coverage=10_000_000,
)
optimize_insurance_decision(constraints: DecisionOptimizationConstraints, method: OptimizationMethod = OptimizationMethod.SLSQP, weights: Dict[str, float] | None = None, _attempted_methods: Set[OptimizationMethod] | None = None) InsuranceDecision[source]

Find optimal insurance structure given constraints.

Uses multi-objective optimization to balance growth, risk, and cost. Falls back through alternative methods if validation fails, tracking attempted methods to prevent infinite recursion.

Parameters:
  • constraints (DecisionOptimizationConstraints) – Constraints the optimal program must satisfy.

  • method (OptimizationMethod) – Optimization algorithm to use.

  • weights (Optional[Dict[str, float]]) – Objective function weights (growth, risk, cost).

  • _attempted_methods (Optional[Set[OptimizationMethod]]) – Methods already attempted; used internally to prevent infinite fallback recursion.

Return type:

InsuranceDecision

Returns:

Optimal insurance decision

calculate_decision_metrics(decision: InsuranceDecision) DecisionMetrics[source]

Calculate comprehensive metrics for a decision.

Parameters:

decision (InsuranceDecision) – Insurance decision to evaluate

Return type:

DecisionMetrics

Returns:

Comprehensive metrics

run_sensitivity_analysis(base_decision: InsuranceDecision, parameters: List[str] | None = None, variation_range: float = 0.2) SensitivityReport[source]

Analyze decision sensitivity to parameter changes.

Parameters:
  • base_decision (InsuranceDecision) – Base decision to analyze

  • parameters (Optional[List[str]]) – Parameters to test (default: key parameters)

  • variation_range (float) – ±% to vary parameters (default: 20%)

Return type:

SensitivityReport

Returns:

Comprehensive sensitivity report

generate_recommendations(analysis_results: List[Tuple[InsuranceDecision, DecisionMetrics]]) Recommendations[source]

Generate executive-ready recommendations.

Parameters:

analysis_results (List[Tuple[InsuranceDecision, DecisionMetrics]]) – List of (decision, metrics) tuples to analyze

Return type:

Recommendations

Returns:

Comprehensive recommendations

ergodic_insurance.ergodic_analyzer module

Ergodic analysis framework for comparing time-average vs ensemble-average growth.

Implements Ole Peters’ ergodic economics framework for insurance decision making. For multiplicative processes like business growth with volatile losses, ensemble averages and time averages diverge — insurance transforms growth dynamics in ways that traditional expected value analysis cannot capture.
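The divergence is easy to see numerically. A minimal multiplicative coin-flip example, using only the standard library:

```python
import math

# Each period wealth is multiplied by 1.5 (heads) or 0.6 (tails), equal odds.
up, down = 1.5, 0.6

# Ensemble-average growth factor: +5% per period in expectation.
ensemble = 0.5 * up + 0.5 * down                              # 1.05

# Time-average growth factor: geometric mean, sqrt(0.9) ~ 0.949 (decay).
time_avg = math.exp(0.5 * (math.log(up) + math.log(down)))

# The gamble grows on average across an ensemble, yet any single
# trajectory decays over time.
assert ensemble > 1.0 > time_avg
```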

Core class:

ErgodicAnalyzer — time-average growth, ensemble statistics, convergence analysis, and significance testing.

Data containers (re-exported from ergodic_types):

ErgodicData, ErgodicAnalysisResults, ValidationResults

Scenario and pipeline helpers (delegated to submodules):

compare_scenarios(), analyze_simulation_batch(), integrate_loss_ergodic_analysis(), validate_insurance_ergodic_impact()

For usage examples see the Analyzing Results tutorial.

class ErgodicAnalyzer(convergence_threshold: float = 0.01)[source]

Bases: object

Analyzer for ergodic properties of insurance strategies.

Computes time-average vs ensemble-average growth rates, demonstrating that insurance can improve time-average growth even when premiums exceed expected losses.

convergence_threshold

Standard error threshold for Monte Carlo convergence.

For detailed examples see the Analyzing Results tutorial.

calculate_time_average_growth(values: ndarray, time_horizon: int | None = None) float[source]

Calculate time-average growth rate for a single trajectory.

Computes g = (1/T) * ln(X(T)/X(0)), the actual compound growth experienced by a single entity over time.

Parameters:
  • values (ndarray) – Array of values over time (equity, assets, wealth). Length must be >= 2.

  • time_horizon (Optional[int]) – Override for time period T. If None, uses len(values) - 1.

Return type:

float

Returns:

Time-average growth rate per period. Returns -inf for bankrupt trajectories (final value <= 0) and 0.0 when time_horizon <= 0.

Note

Assumes uniform unit time steps. For non-uniform steps, pass an explicit time_horizon.
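The formula can be sketched directly. This is an illustrative stdlib version of the documented behaviour, not the package's implementation:

```python
import math

def time_average_growth(values, time_horizon=None):
    # g = (1/T) * ln(X(T) / X(0))
    T = time_horizon if time_horizon is not None else len(values) - 1
    if T <= 0:
        return 0.0
    if values[-1] <= 0:
        return float("-inf")   # bankrupt trajectory
    return math.log(values[-1] / values[0]) / T
```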

calculate_ensemble_average(trajectories: List[ndarray] | ndarray, metric: str = 'final_value') Dict[str, float][source]

Calculate ensemble average and statistics across multiple paths.

Parameters:
  • trajectories (Union[List[ndarray], ndarray]) – List of 1-D arrays (variable lengths) or 2-D array (n_paths, n_timesteps).

  • metric (str) – "final_value", "growth_rate", or "full".

Return type:

Dict[str, float]

Returns:

Dict with mean, std, median, survival_rate, n_survived, n_total (and mean_trajectory / std_trajectory for metric "full").

check_convergence(values: ndarray, window_size: int = 100) Tuple[bool, float][source]

Check Monte Carlo convergence using rolling standard error.

Parameters:
  • values (ndarray) – Array of metric values (e.g. time-average growth rates).

  • window_size (int) – Rolling window size (default 100).

Return type:

Tuple[bool, float]

Returns:

Tuple (converged, standard_error); converged is True when SE < convergence_threshold.
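A minimal sketch of the rolling standard-error check (illustrative, not the actual implementation):

```python
import statistics

def check_convergence_sketch(values, window_size=100, threshold=0.01):
    # Standard error of the most recent window: SE = s / sqrt(n).
    window = values[-window_size:]
    se = statistics.stdev(window) / len(window) ** 0.5
    return se < threshold, se
```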

significance_test(sample1: List[float] | ndarray, sample2: List[float] | ndarray, test_type: str = 'two-sided') Tuple[float, float][source]

Welch’s t-test between two growth rate samples.

Parameters:
  • sample1 (Union[List[float], ndarray]) – First sample (e.g. insured growth rates).

  • sample2 (Union[List[float], ndarray]) – Second sample (e.g. uninsured growth rates).

  • test_type (str) – "two-sided", "greater", or "less".

Return type:

Tuple[float, float]

Returns:

(t_statistic, p_value).
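Welch's test uses per-sample variances rather than a pooled variance, so it does not assume the two scenarios have equal spread. The helper below sketches just the t statistic (a hypothetical stand-in; the p-value additionally requires the t distribution with Welch-Satterthwaite degrees of freedom):

```python
import math
import statistics

def welch_t(sample1, sample2):
    # t = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2), no equal-variance assumption.
    m1, m2 = statistics.fmean(sample1), statistics.fmean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    return (m1 - m2) / math.sqrt(v1 / len(sample1) + v2 / len(sample2))
```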

compare_scenarios(insured_results: List[SimulationResults] | ndarray | MonteCarloResults, uninsured_results: List[SimulationResults] | ndarray | MonteCarloResults, metric: str = 'equity') ScenarioComparison[source]

Compare insured vs uninsured scenarios using ergodic analysis.

For detailed examples see the Advanced Scenarios tutorial.

Parameters:
  • insured_results (Union[List[SimulationResults], ndarray, MonteCarloResults]) – Results for the insured scenario.

  • uninsured_results (Union[List[SimulationResults], ndarray, MonteCarloResults]) – Results for the uninsured scenario.

  • metric (str) – Trajectory metric to compare (default "equity").

Return type:

ScenarioComparison

Returns:

ScenarioComparison with insured, uninsured, and ergodic_advantage fields. Supports dict-style access for backward compatibility (with deprecation warnings).

Example

Using MonteCarloResults directly:

from ergodic_insurance import ErgodicAnalyzer
from ergodic_insurance.monte_carlo import MonteCarloEngine

engine = MonteCarloEngine(manufacturer, config)
insured_mc = engine.run(insurance_program=program)
uninsured_mc = engine.run(insurance_program=None)

analyzer = ErgodicAnalyzer()
comparison = analyzer.compare_scenarios(insured_mc, uninsured_mc)
analyze_simulation_batch(simulation_results: List[SimulationResults], label: str = 'Scenario') BatchAnalysisResults[source]

Perform comprehensive ergodic analysis on a batch of simulations.

For detailed examples see the Analyzing Results tutorial.

Parameters:
  • simulation_results (List[SimulationResults]) – List of SimulationResults from Monte Carlo runs.

  • label (str) – Descriptive label for this batch.

Return type:

BatchAnalysisResults

Returns:

BatchAnalysisResults with time_average, ensemble_average, convergence, survival_analysis, and ergodic_divergence fields. Supports dict-style access for backward compatibility (with deprecation warnings).

integrate_loss_ergodic_analysis(loss_data: LossData, insurance_program: InsuranceProgram | None, manufacturer: Any, time_horizon: int, n_simulations: int = 100) ErgodicAnalysisResults[source]

End-to-end integrated loss modelling and ergodic analysis.

Pipeline: validate -> apply insurance -> aggregate losses -> Monte Carlo -> ergodic metrics -> validate -> package results.

For detailed examples see the Optimization Workflow tutorial.

Parameters:
  • loss_data (LossData) – Standardized loss data.

  • insurance_program (Optional[ergodic_insurance.insurance_program.InsuranceProgram]) – Insurance program or None for uninsured.

  • manufacturer (Any) – Manufacturer model instance.

  • time_horizon (int) – Analysis time horizon in years.

  • n_simulations (int) – Number of Monte Carlo runs (default 100).

Return type:

ErgodicAnalysisResults

Returns:

ErgodicAnalysisResults with growth, survival, insurance impact, and validation status.

validate_insurance_ergodic_impact(base_scenario: SimulationResults, insurance_scenario: SimulationResults, insurance_program: InsuranceProgram | None = None) ValidationResults[source]

Validate insurance effects in ergodic calculations.

Checks premium deductions, recovery credits, collateral impacts, and growth rate consistency.

For detailed examples see the Advanced Scenarios tutorial.

Parameters:
  • base_scenario (SimulationResults) – Simulation results without insurance.

  • insurance_scenario (SimulationResults) – Simulation results with insurance.

  • insurance_program (Optional[ergodic_insurance.insurance_program.InsuranceProgram]) – Insurance program (optional, for detailed premium checks).

Return type:

ValidationResults

Returns:

ValidationResults with individual check flags and diagnostics.

class BatchAnalysisResults(label: str, n_simulations: int, time_average: TimeAverageStats, ensemble_average: EnsembleAverageStats, convergence: ConvergenceStats, survival_analysis: SurvivalAnalysisStats, ergodic_divergence: float) None[source]

Bases: _DictAccessMixin

Typed result of ErgodicAnalyzer.analyze_simulation_batch().

label

Descriptive label for this batch.

n_simulations

Number of simulations in the batch.

time_average

Time-average growth rate statistics.

ensemble_average

Ensemble-average statistics.

convergence

Convergence diagnostics.

survival_analysis

Survival analysis metrics.

ergodic_divergence

time_average.mean - ensemble_average.mean. NaN when no valid growth rates exist.

label: str
n_simulations: int
time_average: TimeAverageStats
ensemble_average: EnsembleAverageStats
convergence: ConvergenceStats
survival_analysis: SurvivalAnalysisStats
ergodic_divergence: float
class ErgodicAnalysisResults(time_average_growth: float, ensemble_average_growth: float, survival_rate: float, ergodic_divergence: float, insurance_impact: Dict[str, float], validation_passed: bool, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Comprehensive results from integrated ergodic analysis.

time_average_growth

Mean time-average growth rate across all valid simulation paths. May be -inf if all paths ended in bankruptcy.

ensemble_average_growth

Ensemble average growth rate calculated from the mean of initial and final values across all paths.

survival_rate

Fraction of paths that remained solvent [0, 1].

ergodic_divergence

time_average_growth - ensemble_average_growth.

insurance_impact

Insurance-related metrics (premium_cost, recovery_benefit, net_benefit, growth_improvement).

validation_passed

Whether the analysis passed internal validation.

metadata

Additional analysis metadata (n_simulations, time_horizon, n_survived, loss_statistics).

Note

All growth rates are expressed as decimal values (0.05 = 5%). Always check validation_passed before interpreting results.

time_average_growth: float
ensemble_average_growth: float
survival_rate: float
ergodic_divergence: float
insurance_impact: Dict[str, float]
validation_passed: bool
metadata: Dict[str, Any]
class ErgodicData(time_series: ndarray = <factory>, values: ndarray = <factory>, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Standardized container for ergodic time series analysis.

time_series

Array of time points corresponding to values. Should be monotonically increasing.

values

Array of observed values (e.g. equity, assets) at each time point. Must have the same length as time_series.

metadata

Analysis metadata such as simulation parameters, data source, and units.

time_series: ndarray
values: ndarray
metadata: Dict[str, Any]
validate() bool[source]

Validate data consistency and integrity.

Return type:

bool

Returns:

True if arrays are non-empty and have matching lengths.

class ScenarioComparison(insured: ScenarioMetrics, uninsured: ScenarioMetrics, ergodic_advantage: ErgodicAdvantage) None[source]

Bases: _DictAccessMixin

Typed result of ErgodicAnalyzer.compare_scenarios().

insured

Metrics for the insured scenario.

uninsured

Metrics for the uninsured scenario.

ergodic_advantage

Ergodic advantage comparison.

insured: ScenarioMetrics
uninsured: ScenarioMetrics
ergodic_advantage: ErgodicAdvantage
class ValidationResults(premium_deductions_correct: bool, recoveries_credited: bool, collateral_impacts_included: bool, time_average_reflects_benefit: bool, overall_valid: bool, details: Dict[str, Any] = <factory>) None[source]

Bases: object

Results from insurance impact validation analysis.

premium_deductions_correct

Whether premiums are properly deducted from cash flows.

recoveries_credited

Whether recoveries are properly credited.

collateral_impacts_included

Whether collateral costs are modeled.

time_average_reflects_benefit

Whether growth rates reflect insurance benefits.

overall_valid

Master validation flag — all individual checks passed.

details

Detailed diagnostic information from each validation check, useful for troubleshooting failures.

premium_deductions_correct: bool
recoveries_credited: bool
collateral_impacts_included: bool
time_average_reflects_benefit: bool
overall_valid: bool
details: Dict[str, Any]

ergodic_insurance.excel_reporter module

Excel report generation for financial statements and analysis.

This module provides comprehensive Excel report generation functionality, creating professional financial statements, diagnostic reports, and Monte Carlo aggregations with advanced formatting and validation.

Example

Generate Excel report from simulation:

from ergodic_insurance.config import ExcelReportConfig
from ergodic_insurance.excel_reporter import ExcelReporter
from ergodic_insurance.manufacturer import WidgetManufacturer

# Configure report
config = ExcelReportConfig(
    output_path=Path("./reports"),
    include_balance_sheet=True,
    include_income_statement=True,
    include_cash_flow=True
)

# Generate report
reporter = ExcelReporter(config)
output_file = reporter.generate_trajectory_report(
    manufacturer,
    "financial_statements.xlsx"
)
class ExcelReporter(config: ExcelReportConfig | None = None)[source]

Bases: object

Main Excel report generation engine.

This class handles the creation of comprehensive Excel reports from simulation data, including financial statements, metrics dashboards, and reconciliation reports.

config

Report configuration

workbook

Excel workbook object

formats

Dictionary of Excel format objects

engine

Selected Excel engine

generate_trajectory_report(manufacturer: WidgetManufacturer, output_file: str, title: str | None = None) Path[source]

Generate Excel report for a single simulation trajectory.

Creates a comprehensive Excel workbook with financial statements, metrics, and reconciliation for a single simulation run.

Parameters:
  • manufacturer (WidgetManufacturer) – WidgetManufacturer with simulation data

  • output_file (str) – Name of output Excel file

  • title (Optional[str]) – Optional report title

Return type:

Path

Returns:

Path to generated Excel file

generate_monte_carlo_report(results: Any, output_file: str, title: str | None = None) Path[source]

Generate aggregated report from Monte Carlo simulations.

Creates Excel report with statistical summaries across multiple simulation trajectories.

Parameters:
  • results (Any) – Monte Carlo simulation results

  • output_file (str) – Name of output Excel file

  • title (Optional[str]) – Optional report title

Return type:

Path

Returns:

Path to generated Excel file

ergodic_insurance.exposure_base module

Exposure base module for dynamic frequency scaling in insurance losses.

This module provides a hierarchy of exposure classes that dynamically adjust loss frequencies based on actual business metrics from the simulation. The exposure bases now work with real financial state from the manufacturer, not artificial growth projections.

Key Concepts:
  • Exposure bases query actual financial metrics from a state provider

  • Frequency multipliers are calculated from actual vs. base metrics

  • No artificial growth rates or projections

  • Direct integration with WidgetManufacturer financial state

Example

Basic usage with state-driven revenue exposure:

from ergodic_insurance.exposure_base import RevenueExposure
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator

# Create manufacturer
manufacturer = WidgetManufacturer(config)

# Create exposure linked to manufacturer's actual state
exposure = RevenueExposure(state_provider=manufacturer)

# Create generator with exposure
generator = ManufacturingLossGenerator.create_simple(
    frequency=0.5,
    severity_mean=1_000_000
)

# Losses will be generated based on actual revenue during simulation
Since:

Version 0.3.0 - Complete refactor to state-driven approach

class FinancialStateProvider(*args, **kwargs)[source]

Bases: Protocol

Protocol for providing current financial state to exposure bases.

This protocol defines the interface that any class must implement to provide financial metrics to exposure bases. The WidgetManufacturer class implements this protocol to supply real-time financial data.

property current_revenue: Decimal

Get current revenue.

property current_assets: Decimal

Get current total assets.

property current_equity: Decimal

Get current equity value.

property base_revenue: Decimal

Get base (initial) revenue for comparison.

property base_assets: Decimal

Get base (initial) assets for comparison.

property base_equity: Decimal

Get base (initial) equity for comparison.

class ExposureBase[source]

Bases: ABC

Abstract base class for exposure calculations.

Exposure represents the underlying business metric that drives claim frequency. Common examples include revenue, assets, employee count, or production volume.

Subclasses must implement methods to calculate absolute exposure levels and frequency multipliers at different time points.

abstractmethod get_exposure(time: float) float[source]

Get absolute exposure level at given time.

Parameters:

time (float) – Time in years from simulation start (can be fractional).

Returns:

Exposure level (e.g., revenue in dollars, asset value, etc.).

Must be non-negative.

Return type:

float

abstractmethod get_frequency_multiplier(time: float) float[source]

Get frequency adjustment factor relative to base.

The multiplier is applied to the base frequency to determine the actual claim frequency at a given time.

Parameters:

time (float) – Time in years from simulation start (can be fractional).

Returns:

Multiplier to apply to base frequency. A value of 1.0

means no change from base frequency, 2.0 means double the base frequency, etc. Must be non-negative.

Return type:

float

abstractmethod reset() None[source]

Reset exposure to initial state.

This method should reset any internal state, cached values, or random number generators to their initial conditions. Useful for running multiple independent simulations with the same exposure configuration.

Return type:

None
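To sketch how this interface is meant to be implemented, here is a hypothetical `ConstantExposure` subclass (not part of the package) written against a minimal stand-in for the abstract base:

```python
from abc import ABC, abstractmethod

class ExposureBase(ABC):
    """Minimal stand-in for the package's abstract base (illustration only)."""

    @abstractmethod
    def get_exposure(self, time: float) -> float:
        """Absolute exposure level at a given time."""

    @abstractmethod
    def get_frequency_multiplier(self, time: float) -> float:
        """Multiplier applied to the base claim frequency."""

    @abstractmethod
    def reset(self) -> None:
        """Reset any internal state."""

class ConstantExposure(ExposureBase):
    """Hypothetical subclass: an exposure that never changes."""

    def __init__(self, level: float):
        self.level = level

    def get_exposure(self, time: float) -> float:
        return self.level  # same exposure at every time point

    def get_frequency_multiplier(self, time: float) -> float:
        return 1.0  # constant exposure implies no frequency adjustment

    def reset(self) -> None:
        pass  # stateless, nothing to reset
```

Real subclasses (RevenueExposure, AssetExposure, etc.) replace the constant with a live financial metric.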

class RevenueExposure(state_provider: FinancialStateProvider) None[source]

Bases: ExposureBase

Revenue-based exposure using actual financial state.

Models claim frequency that scales with actual business revenue from the simulation, not artificial growth projections. The exposure directly queries the current revenue from the manufacturer’s financial state.

state_provider

Object providing current and base financial metrics. Typically a WidgetManufacturer instance.

Example

Revenue exposure with actual manufacturer state:

from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.config import ManufacturerConfig

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=10_000_000)
)
exposure = RevenueExposure(state_provider=manufacturer)

# Exposure reflects actual manufacturer revenue
current_rev = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
state_provider: FinancialStateProvider
get_exposure(time: float) float[source]

Return current actual revenue from manufacturer.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Calculate multiplier from actual revenue ratio.

Return type:

float

reset() None[source]

No internal state to reset for state-driven exposure.

Return type:

None
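The state-driven multiplier is, in essence, the ratio of current to base revenue. A minimal sketch, assuming simple linear scaling (the package's exact implementation may differ in detail):

```python
def revenue_frequency_multiplier(current_revenue: float, base_revenue: float) -> float:
    """Sketch: claim frequency scales with the ratio of actual to base revenue."""
    if base_revenue <= 0:
        return 1.0  # degenerate base: fall back to no adjustment
    return current_revenue / base_revenue
```

So if revenue doubles during the simulation, the expected claim frequency doubles as well.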

class AssetExposure(state_provider: FinancialStateProvider) None[source]

Bases: ExposureBase

Asset-based exposure using actual financial state.

Models claim frequency based on actual asset values from the simulation, tracking real asset changes from operations, claims, and business growth. Suitable for businesses where physical assets drive risk exposure.

Frequency scales linearly with assets as more assets generally mean more insurable items that can generate claims.

state_provider

Object providing current and base financial metrics. Typically a WidgetManufacturer instance.

Example

Asset exposure with actual manufacturer state:

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=50_000_000)
)
exposure = AssetExposure(state_provider=manufacturer)

# Exposure reflects actual asset changes
current_assets = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
state_provider: FinancialStateProvider
get_exposure(time: float) float[source]

Return current actual assets from manufacturer.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Calculate multiplier from actual asset ratio.

Return type:

float

reset() None[source]

No internal state to reset for state-driven exposure.

Return type:

None

class EquityExposure(state_provider: FinancialStateProvider) None[source]

Bases: ExposureBase

Equity-based exposure using actual financial state.

Models claim frequency based on actual equity values from the simulation, tracking real equity changes from profits, losses, and retained earnings. Suitable for financial analysis where equity represents business scale.

state_provider

Object providing current and base financial metrics. Typically a WidgetManufacturer instance.

Example

Equity exposure with actual manufacturer state:

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=20_000_000)
)
exposure = EquityExposure(state_provider=manufacturer)

# Exposure reflects actual equity changes
current_equity = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
state_provider: FinancialStateProvider
get_exposure(time: float) float[source]

Return current actual equity from manufacturer.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Higher equity implies larger operations.

Return type:

float

reset() None[source]

No internal state to reset for state-driven exposure.

Return type:

None

class EmployeeExposure(base_employees: int, hiring_rate: float = 0.0, automation_factor: float = 0.0) None[source]

Bases: ExposureBase

Exposure based on employee count.

Models claim frequency based on workforce size, accounting for hiring and automation effects. Suitable for businesses where employee-related risks dominate (workers comp, employment practices, etc.).

base_employees

Initial number of employees.

hiring_rate

Annual net hiring rate (can be negative for downsizing).

automation_factor

Annual reduction in exposure per employee due to automation.

Example

Employee exposure with automation:

exposure = EmployeeExposure(
    base_employees=500,
    hiring_rate=0.05,  # 5% annual growth
    automation_factor=0.02  # 2% automation improvement
)
base_employees: int
hiring_rate: float = 0.0
automation_factor: float = 0.0
__post_init__()[source]

Validate inputs.

get_exposure(time: float) float[source]

Calculate employee count with hiring and automation effects.

Return type:

float

get_frequency_multiplier(time: float) float[source]

More employees = more workplace incidents, but automation helps.

Return type:

float

reset() None[source]

No state to reset.

Return type:

None
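Combining the documented attributes, the headcount-driven exposure can be sketched as compound hiring growth offset by automation; the exact functional form in the package may differ:

```python
def employee_exposure(base_employees: int, hiring_rate: float,
                      automation_factor: float, time: float) -> float:
    """Sketch of effective employee exposure at a given time.

    Headcount compounds at hiring_rate per year, while automation
    reduces the per-employee risk by automation_factor each year.
    """
    headcount = base_employees * (1 + hiring_rate) ** time
    risk_per_employee = (1 - automation_factor) ** time
    return headcount * risk_per_employee
```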

class ProductionExposure(base_units: float, growth_rate: float = 0.0, seasonality: Callable[[float], float] | None = None, quality_improvement_rate: float = 0.0) None[source]

Bases: ExposureBase

Exposure based on production volume/units.

Models claim frequency based on production output, with support for seasonal patterns and quality improvements that reduce defect rates.

base_units

Initial production volume (units per year).

growth_rate

Annual production growth rate.

seasonality

Optional function returning seasonal multiplier.

quality_improvement_rate

Annual reduction in defect-related claims.

Example

Production exposure with seasonality:

def seasonal_pattern(time):
    # Higher production in Q4
    return 1.0 + 0.3 * np.sin(2 * np.pi * time)

exposure = ProductionExposure(
    base_units=100_000,
    growth_rate=0.08,
    seasonality=seasonal_pattern,
    quality_improvement_rate=0.03
)
base_units: float
growth_rate: float = 0.0
seasonality: Callable[[float], float] | None = None
quality_improvement_rate: float = 0.0
__post_init__()[source]

Validate inputs.

get_exposure(time: float) float[source]

Calculate production volume with growth and seasonality.

Return type:

float

get_frequency_multiplier(time: float) float[source]

More production = more potential defects, but quality improvements help.

Return type:

float

reset() None[source]

No state to reset.

Return type:

None

class CompositeExposure(exposures: Dict[str, ExposureBase], weights: Dict[str, float]) None[source]

Bases: ExposureBase

Weighted combination of multiple exposure bases.

Allows modeling complex businesses with multiple risk drivers by combining different exposure types with specified weights.

exposures

Dictionary of named exposure bases.

weights

Dictionary of weights for each exposure (will be normalized).

Example

Composite exposure for diversified business:

composite = CompositeExposure(
    exposures={
        'revenue': RevenueExposure(state_provider=manufacturer),
        'assets': AssetExposure(state_provider=manufacturer),
        'employees': EmployeeExposure(base_employees=500)
    },
    weights={'revenue': 0.5, 'assets': 0.3, 'employees': 0.2}
)
exposures: Dict[str, ExposureBase]
weights: Dict[str, float]
__post_init__()[source]

Normalize weights to sum to 1.0.

get_exposure(time: float) float[source]

Weighted average of constituent exposures.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Weighted average of frequency multipliers.

Return type:

float

reset() None[source]

Reset all constituent exposures.

Return type:

None
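The weight normalization and weighted averaging described above can be sketched as follows (hypothetical helper names, not the package's internals):

```python
def normalize_weights(weights: dict) -> dict:
    """Scale weights so they sum to 1.0, as __post_init__ does."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def composite_multiplier(multipliers: dict, weights: dict) -> float:
    """Weighted average of per-exposure frequency multipliers."""
    norm = normalize_weights(weights)
    return sum(norm[name] * multipliers[name] for name in multipliers)
```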

class ScenarioExposure(scenarios: Dict[str, List[float]], selected_scenario: str, interpolation: str = 'linear') None[source]

Bases: ExposureBase

Predefined exposure scenarios for planning and stress testing.

Allows specification of exact exposure paths for scenario analysis, with interpolation between specified time points.

scenarios

Dictionary mapping scenario names to exposure paths.

selected_scenario

Currently active scenario name.

interpolation

Interpolation method (‘linear’, ‘cubic’, ‘nearest’).

Example

Scenario-based exposure planning:

scenarios = {
    'baseline': [100, 105, 110, 116, 122],
    'recession': [100, 95, 90, 92, 96],
    'expansion': [100, 112, 125, 140, 155]
}

exposure = ScenarioExposure(
    scenarios=scenarios,
    selected_scenario='recession',
    interpolation='linear'
)
scenarios: Dict[str, List[float]]
selected_scenario: str
interpolation: str = 'linear'
__post_init__()[source]

Validate scenarios.

get_exposure(time: float) float[source]

Interpolate exposure from scenario path.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Derive multiplier from exposure level.

Return type:

float

reset() None[source]

Cache base exposure.

Return type:

None
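Linear interpolation between the yearly points of a scenario path can be sketched with `np.interp` (the cubic and nearest options would presumably use a different interpolator):

```python
import numpy as np

def interpolate_scenario(path, time: float) -> float:
    """Linearly interpolate a yearly exposure path at a fractional time.

    path[i] is the exposure level at year i; times beyond the last
    year are clamped to the final value by np.interp.
    """
    years = np.arange(len(path))
    return float(np.interp(time, years, path))
```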

class StochasticExposure(base_value: float, process_type: str, parameters: Dict[str, float], seed: int | None = None) None[source]

Bases: ExposureBase

Stochastic exposure evolution using various processes.

Supports multiple stochastic processes for advanced exposure modeling:
  • Geometric Brownian Motion (GBM)

  • Mean-reverting (Ornstein-Uhlenbeck)

  • Jump diffusion

base_value

Initial exposure value.

process_type

Type of stochastic process (‘gbm’, ‘mean_reverting’, ‘jump_diffusion’).

parameters

Process-specific parameters.

seed

Random seed for reproducibility.

Example

GBM exposure process:

exposure = StochasticExposure(
    base_value=100_000_000,
    process_type='gbm',
    parameters={
        'drift': 0.05,      # 5% drift
        'volatility': 0.20  # 20% volatility
    },
    seed=42
)
base_value: float
process_type: str
parameters: Dict[str, float]
seed: int | None = None
__post_init__()[source]

Initialize and validate.

reset()[source]

Reset stochastic paths.

get_exposure(time: float) float[source]

Generate or retrieve stochastic path.

Return type:

float

get_frequency_multiplier(time: float) float[source]

Derive multiplier from exposure level.

Return type:

float
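A seeded GBM exposure path can be sketched as below; the discretization details inside the package may differ, but the exact log-space update is the standard approach:

```python
import numpy as np

def gbm_path(base_value: float, drift: float, volatility: float,
             n_steps: int, dt: float = 1.0, seed=None) -> np.ndarray:
    """Simulate geometric Brownian motion: dX = mu*X dt + sigma*X dW."""
    rng = np.random.default_rng(seed)
    shocks = rng.standard_normal(n_steps)
    # Exact log-space update for GBM over each step
    log_increments = (drift - 0.5 * volatility**2) * dt \
        + volatility * np.sqrt(dt) * shocks
    return base_value * np.exp(np.concatenate([[0.0], np.cumsum(log_increments)]))
```

Fixing the seed makes the path reproducible across runs, mirroring the `seed` attribute above.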

ergodic_insurance.financial_statements module

Financial statement compilation and generation.

This module provides classes for generating standard financial statements (Balance Sheet, Income Statement, Cash Flow Statement) from simulation data. It supports both single trajectory and Monte Carlo aggregated reports with reconciliation capabilities.

Example

Generate financial statements from a manufacturer simulation:

from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.financial_statements import FinancialStatementGenerator

# Run simulation
manufacturer = WidgetManufacturer(config)
for year in range(10):
    manufacturer.step()

# Generate statements
generator = FinancialStatementGenerator(manufacturer)
balance_sheet = generator.generate_balance_sheet(year=5)
income_statement = generator.generate_income_statement(year=5)
cash_flow = generator.generate_cash_flow_statement(year=5)
class FinancialStatementConfig(currency_symbol: str = '$', decimal_places: int = 0, include_yoy_change: bool = True, include_percentages: bool = True, fiscal_year_end: int | None = None, consolidate_monthly: bool = True) None[source]

Bases: object

Configuration for financial statement generation.

currency_symbol

Symbol to use for currency formatting

decimal_places

Number of decimal places for numeric values

include_yoy_change

Whether to include year-over-year changes

include_percentages

Whether to include percentage breakdowns

fiscal_year_end

Month of fiscal year end (1-12). If None, inherits from the central Config.simulation.fiscal_year_end setting. Defaults to 12 (December) if neither is set, for calendar year alignment.

consolidate_monthly

Whether to consolidate monthly data into annual

currency_symbol: str = '$'
decimal_places: int = 0
include_yoy_change: bool = True
include_percentages: bool = True
fiscal_year_end: int | None = None
consolidate_monthly: bool = True
class CashFlowStatement(metrics_history: List[Dict[str, Decimal | float | int | bool]], config: Any | None = None, ledger: Ledger | None = None)[source]

Bases: object

Generates cash flow statements using indirect or direct method.

This class creates properly structured cash flow statements with three sections (Operating, Investing, Financing) following GAAP standards. Supports both the indirect method (starting from net income) and the direct method (summing ledger entries) for operating activities.

When a ledger is provided, the direct method is available, which provides perfect reconciliation and audit trail for all cash flows.

metrics_history

List of metrics dictionaries from simulation

config

Configuration object with business parameters

ledger

Optional Ledger for direct method cash flow generation

generate_statement(year: int, period: str = 'annual', method: str = 'indirect') DataFrame[source]

Generate cash flow statement for specified year.

Parameters:
  • year (int) – Year index (0-based) for statement

  • period (str) – ‘annual’ or ‘monthly’ period type

  • method (str) – ‘indirect’ (default) or ‘direct’. Direct method requires a ledger to be provided during initialization.

Return type:

DataFrame

Returns:

DataFrame containing formatted cash flow statement

Raises:
  • IndexError – If year is out of range

  • ValueError – If direct method requested but no ledger available

class FinancialStatementGenerator(manufacturer: WidgetManufacturer | None = None, manufacturer_data: Dict[str, Any] | None = None, config: FinancialStatementConfig | None = None, ledger: Ledger | None = None)[source]

Bases: object

Generates financial statements from simulation data.

This class compiles standard financial statements (Balance Sheet, Income Statement, Cash Flow) from manufacturer metrics history. It handles both annual and monthly data, performs reconciliation checks, and calculates derived financial metrics.

When a ledger is provided (either directly or via the manufacturer), direct method cash flow statements can be generated, providing perfect reconciliation and audit trail for all cash transactions.

manufacturer_data

Raw simulation data from manufacturer

config

Configuration for statement generation

metrics_history

List of metrics dictionaries from simulation

years_available

Number of years of data available

ledger

Optional Ledger for direct method cash flow generation

generate_balance_sheet(year: int, compare_years: List[int] | None = None) DataFrame[source]

Generate balance sheet for specified year.

Creates a standard balance sheet with assets, liabilities, and equity sections. Includes year-over-year comparisons if configured.

When a ledger is available, balances are derived directly from the ledger using get_balance() for each account, ensuring perfect reconciliation. Otherwise, falls back to metrics_history from the manufacturer.

Parameters:
  • year (int) – Year index (0-based) for balance sheet

  • compare_years (Optional[List[int]]) – Optional list of years to compare against

Return type:

DataFrame

Returns:

DataFrame containing balance sheet data

Raises:

IndexError – If year is out of range

generate_income_statement(year: int, compare_years: List[int] | None = None, monthly: bool = False) DataFrame[source]

Generate income statement for specified year with proper GAAP structure.

Creates a standard income statement following US GAAP with proper categorization of COGS, operating expenses, and non-operating items. Supports both annual and monthly statement generation.

When a ledger is available, revenue and expenses are derived from ledger period changes using get_period_change(), ensuring perfect reconciliation. Otherwise, falls back to metrics_history from the manufacturer.

Parameters:
  • year (int) – Year index (0-based) for income statement

  • compare_years (Optional[List[int]]) – Optional list of years to compare against

  • monthly (bool) – If True, generate monthly estimates (annual ÷ 12, clearly labeled as averages when actual monthly data is unavailable)

Return type:

DataFrame

Returns:

DataFrame containing income statement data with GAAP structure

Raises:

IndexError – If year is out of range

generate_cash_flow_statement(year: int, period: str = 'annual', method: str = 'indirect') DataFrame[source]

Generate cash flow statement for specified year using CashFlowStatement class.

Creates a cash flow statement with three distinct sections (Operating, Investing, Financing). Supports both indirect method (starting from net income) and direct method (summing ledger entries) for operating activities.

When a ledger is available, the direct method is preferred as it provides perfect reconciliation and audit trail for all cash transactions by summing actual ledger entries.

Parameters:
  • year (int) – Year index (0-based) for cash flow statement

  • period (str) – ‘annual’ or ‘monthly’ for period type

  • method (str) – ‘indirect’ (default) or ‘direct’. The direct method requires a ledger to be available and offers better accuracy when one is present, since it sums actual ledger entries.

Return type:

DataFrame

Returns:

DataFrame containing cash flow statement data

Raises:
  • IndexError – If year is out of range

  • ValueError – If direct method requested but no ledger available

generate_reconciliation_report(year: int) DataFrame[source]

Generate reconciliation report for financial statements.

Validates that financial statements balance and reconcile properly, checking key accounting identities and relationships.

Parameters:

year (int) – Year index (0-based) for reconciliation

Return type:

DataFrame

Returns:

DataFrame containing reconciliation checks and results
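The core identity such a reconciliation verifies is the accounting equation. A minimal check, with a tolerance for floating-point data, might look like:

```python
def check_balance(assets: float, liabilities: float, equity: float,
                  tolerance: float = 0.01) -> bool:
    """Verify the accounting identity: Assets = Liabilities + Equity."""
    return abs(assets - (liabilities + equity)) <= tolerance
```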

class MonteCarloStatementAggregator(monte_carlo_results: List[Dict] | DataFrame, config: FinancialStatementConfig | None = None)[source]

Bases: object

Aggregates financial statements across Monte Carlo simulations.

This class processes multiple simulation trajectories to create statistical summaries of financial statements, showing means, percentiles, and confidence intervals.

results

Monte Carlo simulation results

config

Configuration for statement generation

aggregate_balance_sheets(year: int, percentiles: List[float] | None = None) DataFrame[source]

Aggregate balance sheets across simulations.

Parameters:
  • year (int) – Year index to aggregate

  • percentiles (Optional[List[float]]) – Percentiles to calculate (defaults to [5, 25, 50, 75, 95])

Return type:

DataFrame

Returns:

DataFrame with aggregated balance sheet statistics

aggregate_income_statements(year: int, percentiles: List[float] | None = None) DataFrame[source]

Aggregate income statements across simulations.

Parameters:
  • year (int) – Year index to aggregate

  • percentiles (Optional[List[float]]) – Percentiles to calculate (defaults to [5, 25, 50, 75, 95])

Return type:

DataFrame

Returns:

DataFrame with aggregated income statement statistics
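Per line item, the percentile aggregation across trajectories reduces to something like this sketch (hypothetical helper, using numpy directly):

```python
import numpy as np

def aggregate_line_item(values, percentiles=(5, 25, 50, 75, 95)) -> dict:
    """Summarize one statement line item across Monte Carlo trajectories."""
    arr = np.asarray(values, dtype=float)
    summary = {"mean": float(arr.mean())}
    for p in percentiles:
        summary[f"p{p}"] = float(np.percentile(arr, p))
    return summary
```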

generate_convergence_analysis() DataFrame[source]

Analyze convergence of financial metrics across simulations.

Return type:

DataFrame

Returns:

DataFrame showing convergence statistics

ergodic_insurance.hjb_solver module

Hamilton-Jacobi-Bellman solver for optimal insurance control.

This module implements a Hamilton-Jacobi-Bellman (HJB) partial differential equation solver for finding optimal insurance strategies through dynamic programming. The solver handles multi-dimensional state spaces and provides theoretically optimal control policies.

The HJB equation provides globally optimal solutions by solving:

∂V/∂t + max_u[L^u V + f(x,u)] = 0

where V is the value function, L^u is the controlled infinitesimal generator, and f(x,u) is the running cost/reward.
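For intuition only, here is a toy 1-D analogue: a discounted Bellman iteration over two controls (retain the risk vs. buy cover) on a wealth grid. This is a discrete sketch with invented parameters, not the package's finite-difference PDE scheme:

```python
import numpy as np

grid = np.linspace(0.5, 20.0, 200)           # wealth grid
gamma, p_loss, loss, premium = 0.9, 0.2, 2.0, 0.5
V = np.zeros_like(grid)

def cont(V, w):
    """Continuation value at (possibly off-grid) wealth w."""
    return np.interp(np.maximum(w, grid[0]), grid, V)

for _ in range(1000):
    # u = 0: retain the loss risk yourself
    no_ins = np.log(grid) + gamma * (
        (1 - p_loss) * cont(V, grid) + p_loss * cont(V, grid - loss)
    )
    # u = 1: pay the premium, loss fully covered
    ins = np.log(grid) + gamma * cont(V, grid - premium)
    V_new = np.maximum(no_ins, ins)          # max over controls, as in the HJB
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

insure = ins >= no_ins                       # feedback control on the grid
```

The `np.maximum` over controls plays the role of `max_u` in the HJB equation, and `insure` is the discrete analogue of the feedback control law extracted by the solver.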

Author: Alex Filiakov

Date: 2025-01-26

exception NumericalDivergenceError[source]

Bases: RuntimeError

Raised when the HJB solver detects NaN or Inf in the value function.

class TimeSteppingScheme(*values)[source]

Bases: Enum

Time stepping schemes for PDE integration.

EXPLICIT = 'explicit'
IMPLICIT = 'implicit'
CRANK_NICOLSON = 'crank_nicolson'
class BoundaryCondition(*values)[source]

Bases: Enum

Types of boundary conditions.

DIRICHLET = 'dirichlet'
NEUMANN = 'neumann'
ABSORBING = 'absorbing'
REFLECTING = 'reflecting'
class StateVariable(name: str, min_value: float, max_value: float, num_points: int, boundary_lower: BoundaryCondition = BoundaryCondition.ABSORBING, boundary_upper: BoundaryCondition = BoundaryCondition.ABSORBING, log_scale: bool = False) None[source]

Bases: object

Definition of a state variable in the HJB problem.

name: str
min_value: float
max_value: float
num_points: int
boundary_lower: BoundaryCondition = 'absorbing'
boundary_upper: BoundaryCondition = 'absorbing'
log_scale: bool = False
__post_init__()[source]

Validate state variable configuration.

get_grid() ndarray[source]

Generate grid points for this variable.

Return type:

ndarray

Returns:

Array of grid points

class ControlVariable(name: str, min_value: float, max_value: float, num_points: int = 50, continuous: bool = True, log_scale: bool = False) None[source]

Bases: object

Definition of a control variable in the HJB problem.

name: str
min_value: float
max_value: float
num_points: int = 50
continuous: bool = True
log_scale: bool = False
__post_init__()[source]

Validate control variable configuration.

get_values() ndarray[source]

Get discrete control values for optimization.

Return type:

ndarray

Returns:

Array of control values

class StateSpace(state_variables: List[StateVariable]) None[source]

Bases: object

Multi-dimensional state space for HJB problem.

Handles arbitrary dimensionality with proper grid management and boundary condition enforcement.

state_variables: List[StateVariable]
__post_init__()[source]

Initialize derived attributes.

get_boundary_mask() ndarray[source]

Get boolean mask for boundary points.

Return type:

ndarray

Returns:

Boolean array where True indicates boundary points

interpolate_value(value_function: ndarray, points: ndarray) ndarray[source]

Interpolate value function at arbitrary points.

Parameters:
  • value_function (ndarray) – Value function on grid

  • points (ndarray) – Points to interpolate at (shape: [n_points, n_dims])

Return type:

ndarray

Returns:

Interpolated values
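Multi-dimensional interpolation of a gridded value function can be sketched with scipy's `RegularGridInterpolator`, assuming linear interpolation and a toy 2-D value function:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy value function on a wealth x loss-ratio grid
wealth = np.linspace(1.0, 10.0, 10)
loss_ratio = np.linspace(0.0, 1.0, 5)
W, L = np.meshgrid(wealth, loss_ratio, indexing="ij")
value = np.log(W) - L                         # value on the grid

interp = RegularGridInterpolator((wealth, loss_ratio), value)
points = np.array([[5.5, 0.25], [2.0, 0.5]])  # shape [n_points, n_dims]
vals = interp(points)
```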

class UtilityFunction[source]

Bases: ABC

Abstract base class for utility functions.

Defines the interface for utility functions used in the HJB equation. Concrete implementations should provide both the utility value and its derivative.

abstractmethod evaluate(wealth: ndarray) ndarray[source]

Evaluate utility at given wealth levels.

Parameters:

wealth (ndarray) – Wealth values

Return type:

ndarray

Returns:

Utility values

abstractmethod derivative(wealth: ndarray) ndarray[source]

Compute marginal utility (first derivative).

Parameters:

wealth (ndarray) – Wealth values

Return type:

ndarray

Returns:

Marginal utility values

abstractmethod inverse_derivative(marginal_utility: ndarray) ndarray[source]

Compute inverse of marginal utility.

Used for finding optimal controls in some formulations.

Parameters:

marginal_utility (ndarray) – Marginal utility values

Return type:

ndarray

Returns:

Wealth values corresponding to given marginal utilities

class LogUtility(wealth_floor: float = 1e-06)[source]

Bases: UtilityFunction

Logarithmic utility function for ergodic optimization.

U(w) = log(w)

This utility function maximizes the long-term growth rate and is particularly suitable for ergodic analysis.

evaluate(wealth: ndarray) ndarray[source]

Evaluate log utility.

Return type:

ndarray

derivative(wealth: ndarray) ndarray[source]

Compute marginal utility: U’(w) = 1/w.

Return type:

ndarray

inverse_derivative(marginal_utility: ndarray) ndarray[source]

Compute inverse: (U’)^(-1)(m) = 1/m.

Return type:

ndarray

class PowerUtility(risk_aversion: float = 2.0, wealth_floor: float = 1e-06)[source]

Bases: UtilityFunction

Power (CRRA) utility function with risk aversion parameter.

U(w) = w^(1-γ)/(1-γ) for γ ≠ 1

U(w) = log(w) for γ = 1

where γ is the coefficient of relative risk aversion.

evaluate(wealth: ndarray) ndarray[source]

Evaluate power utility.

Return type:

ndarray

derivative(wealth: ndarray) ndarray[source]

Compute marginal utility: U’(w) = w^(-γ).

Return type:

ndarray

inverse_derivative(marginal_utility: ndarray) ndarray[source]

Compute inverse: (U’)^(-1)(m) = m^(-1/γ).

Return type:

ndarray
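The CRRA family, including its γ = 1 limit and a wealth floor guarding against non-positive wealth, can be sketched as:

```python
import numpy as np

def power_utility(wealth, risk_aversion: float = 2.0, wealth_floor: float = 1e-6):
    """CRRA utility: w^(1-gamma)/(1-gamma), reducing to log(w) at gamma = 1."""
    w = np.maximum(np.asarray(wealth, dtype=float), wealth_floor)
    if np.isclose(risk_aversion, 1.0):
        return np.log(w)  # the gamma -> 1 limit of the power form
    return w ** (1 - risk_aversion) / (1 - risk_aversion)
```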

class ExpectedWealth[source]

Bases: UtilityFunction

Linear utility function for risk-neutral wealth maximization.

U(w) = w

This represents risk-neutral preferences where the goal is to maximize expected wealth.

evaluate(wealth: ndarray) ndarray[source]

Evaluate linear utility.

Return type:

ndarray

derivative(wealth: ndarray) ndarray[source]

Compute marginal utility: U’(w) = 1.

Return type:

ndarray

inverse_derivative(marginal_utility: ndarray) ndarray[source]

Inverse is undefined for constant marginal utility.

Return type:

ndarray

class HJBProblem(state_space: StateSpace, control_variables: List[ControlVariable], utility_function: UtilityFunction, dynamics: Callable[[ndarray, ndarray, float], ndarray], running_cost: Callable[[ndarray, ndarray, float], ndarray], terminal_value: Callable[[ndarray], ndarray] | None = None, discount_rate: float = 0.0, time_horizon: float | None = None, diffusion: Callable[[ndarray, ndarray, float], ndarray] | None = None) None[source]

Bases: object

Complete specification of an HJB optimal control problem.

state_space: StateSpace
control_variables: List[ControlVariable]
utility_function: UtilityFunction
dynamics: Callable[[ndarray, ndarray, float], ndarray]
running_cost: Callable[[ndarray, ndarray, float], ndarray]
terminal_value: Callable[[ndarray], ndarray] | None = None
discount_rate: float = 0.0
time_horizon: float | None = None
diffusion: Callable[[ndarray, ndarray, float], ndarray] | None = None

Optional callback returning σ²(x,u,t) with same shape as dynamics output. When provided, the solver includes the ½σ²·∇²V diffusion term.

__post_init__()[source]

Validate problem specification.

class HJBSolverConfig(time_step: float = 0.01, max_iterations: int = 1000, tolerance: float = 1e-06, scheme: TimeSteppingScheme = TimeSteppingScheme.EXPLICIT, use_sparse: bool = True, verbose: bool = True, inner_max_iterations: int = 100, inner_tolerance_factor: float = 0.1, rannacher_steps: int = 2, control_search_strategy: str = 'auto', control_memory_budget_mb: int = 256) None[source]

Bases: object

Configuration for HJB solver.

time_step: float = 0.01
max_iterations: int = 1000
tolerance: float = 1e-06
scheme: TimeSteppingScheme = 'explicit'
use_sparse: bool = True
verbose: bool = True
inner_max_iterations: int = 100
inner_tolerance_factor: float = 0.1
rannacher_steps: int = 2
control_search_strategy: str = 'auto'
control_memory_budget_mb: int = 256
class HJBSolver(problem: HJBProblem, config: HJBSolverConfig)[source]

Bases: object

Hamilton-Jacobi-Bellman PDE solver for optimal control.

Implements finite difference methods with upwind schemes for solving HJB equations. Supports multi-dimensional state spaces and various boundary conditions.

value_function: ndarray | None
optimal_policy: dict[str, ndarray] | None
solve() Tuple[ndarray, Dict[str, ndarray]][source]

Solve the HJB equation using policy iteration.

Return type:

Tuple[ndarray, Dict[str, ndarray]]

Returns:

Tuple of (value_function, optimal_policy_dict)

extract_feedback_control(state: ndarray) Dict[str, float][source]

Extract feedback control law at given state.

Parameters:

state (ndarray) – Current state values

Return type:

Dict[str, float]

Returns:

Dictionary of control variable names to optimal values

compute_convergence_metrics() Dict[str, Any][source]

Compute metrics for assessing solution quality.

Return type:

Dict[str, Any]

Returns:

Dictionary of convergence metrics

create_custom_utility(evaluate_func: Callable[[ndarray], ndarray], derivative_func: Callable[[ndarray], ndarray], inverse_derivative_func: Callable[[ndarray], ndarray] | None = None) UtilityFunction[source]

Factory function for creating custom utility functions.

This function allows users to create custom utility functions by providing the evaluation and derivative functions. This is the recommended way to add new utility functions beyond the built-in ones.

Parameters:
  • evaluate_func (Callable[[ndarray], ndarray]) – Function computing utility values U(w) for an array of wealth values.

  • derivative_func (Callable[[ndarray], ndarray]) – Function computing marginal utility U'(w).

  • inverse_derivative_func (Optional[Callable[[ndarray], ndarray]]) – Optional inverse of the marginal utility, used when a closed-form inversion is available.
Return type:

UtilityFunction

Returns:

Custom utility function instance

Example

>>> # Create exponential utility: U(w) = 1 - exp(-α*w)
>>> def exp_eval(w):
...     alpha = 0.01
...     return 1 - np.exp(-alpha * w)
>>> def exp_deriv(w):
...     alpha = 0.01
...     return alpha * np.exp(-alpha * w)
>>> exp_utility = create_custom_utility(exp_eval, exp_deriv)

ergodic_insurance.insurance module

Insurance policy structure and claim processing.

Deprecated: The classes in this module (InsurancePolicy and InsuranceLayer) are deprecated. Use InsuranceProgram and EnhancedInsuranceLayer instead.

Migration examples:

# Before (deprecated):
from ergodic_insurance.insurance import InsurancePolicy
policy = InsurancePolicy.from_simple(deductible=1_000_000, limit=5_000_000, premium_rate=0.03)

# After (recommended):
from ergodic_insurance.insurance_program import InsuranceProgram
program = InsuranceProgram.simple(deductible=1_000_000, limit=5_000_000, rate=0.03)
Since:

Version 0.1.0

class InsuranceLayer(attachment_point: float, limit: float, rate: float) None[source]

Bases: object

Represents a single insurance layer.

Each layer has an attachment point (where coverage starts), a limit (maximum coverage), and a rate (premium percentage). Insurance layers are the building blocks of complex insurance programs.

attachment_point

Dollar amount where this layer starts providing coverage. Also known as the retention or excess point.

limit

Maximum coverage amount from this layer. The layer covers losses from attachment_point to (attachment_point + limit).

rate

Premium rate as a percentage of the limit. For example, 0.03 means 3% of limit as annual premium.

Examples

Primary layer with $1M retention:

primary = InsuranceLayer(
    attachment_point=1_000_000,  # $1M retention
    limit=5_000_000,             # $5M limit
    rate=0.025                   # 2.5% rate
)

# This covers losses from $1M to $6M
# Annual premium = $5M × 2.5% = $125,000

Excess layer in a tower:

excess = InsuranceLayer(
    attachment_point=6_000_000,  # Attaches at $6M
    limit=10_000_000,            # $10M limit
    rate=0.01                    # 1% rate (lower for excess)
)

Note

Layers are typically structured in towers with each successive layer attaching where the previous layer exhausts.

attachment_point: float
limit: float
rate: float
__post_init__()[source]

Validate insurance layer parameters.

Raises:

ValueError – If attachment_point is negative, limit is non-positive, or rate is negative.

calculate_recovery(loss_amount: float) float[source]

Calculate recovery from this layer for a given loss.

Determines how much of a loss is covered by this specific layer based on its attachment point and limit.

Parameters:

loss_amount (float) – Total loss amount in dollars to recover.

Returns:

Amount recovered from this layer in dollars. Returns 0 if the loss is below the attachment point, a partial recovery if the loss partially penetrates the layer, or the full limit if the loss exceeds the layer's exhaustion point.

Return type:

float

Examples

Layer with $1M attachment, $5M limit:

layer = InsuranceLayer(1_000_000, 5_000_000, 0.02)

# Loss below attachment
recovery = layer.calculate_recovery(500_000)  # Returns 0

# Loss partially in layer
recovery = layer.calculate_recovery(3_000_000)  # Returns 2M

# Loss exceeds layer
recovery = layer.calculate_recovery(10_000_000)  # Returns 5M (full limit)
calculate_premium() float[source]

Calculate premium for this layer.

Returns:

Annual premium amount in dollars (rate × limit).

Return type:

float

Examples

Calculate annual cost:

layer = InsuranceLayer(1_000_000, 10_000_000, 0.015)
premium = layer.calculate_premium()  # Returns 150,000
print(f"Annual premium: ${premium:,.0f}")
class InsurancePolicy(layers: List[InsuranceLayer], deductible: float = 0.0, pricing_enabled: bool = False, pricer: InsurancePricer | None = None)[source]

Bases: object

Multi-layer insurance policy with deductible.

Manages multiple insurance layers and processes claims across them, handling proper allocation of losses to each layer in sequence. Supports both static and dynamic pricing models.

The policy structure follows standard commercial insurance practices:

  1. Insured pays the deductible first

  2. Losses then penetrate layers in order of attachment

  3. Each layer pays up to its limit

  4. Insured bears losses exceeding all coverage
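This waterfall can be sketched as a standalone helper. The function name and tuple layout are illustrative; the per-layer formula min(max(loss - attachment, 0), limit) and the over-recovery cap mirror the behavior documented for process_claim() below.

```python
def allocate_claim(claim, deductible, layers):
    """layers: iterable of (attachment_point, limit) tuples.
    Returns (company_payment, insurance_recovery) per the documented waterfall.
    Illustrative sketch, not the package's implementation."""
    recovery = 0.0
    for attachment, limit in sorted(layers):
        # Each layer pays min(max(loss - attachment, 0), limit)
        recovery += min(max(claim - attachment, 0.0), limit)
    # Cap total recovery at (claim - deductible) to prevent over-recovery
    recovery = min(recovery, max(claim - deductible, 0.0))
    return claim - recovery, recovery
```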

layers

List of InsuranceLayer objects sorted by attachment point.

deductible

Self-insured retention before insurance applies.

pricing_enabled

Whether to use dynamic pricing models.

pricer

Optional pricing engine for market-based premiums.

pricing_results

History of pricing calculations.

Examples

Standard commercial property program:

# Build insurance program
policy = InsurancePolicy(
    layers=[
        InsuranceLayer(500_000, 4_500_000, 0.03),   # Primary
        InsuranceLayer(5_000_000, 10_000_000, 0.02), # Excess
        InsuranceLayer(15_000_000, 25_000_000, 0.01) # Umbrella
    ],
    deductible=500_000  # $500K SIR
)

# Process various claims
small_claim = policy.process_claim(100_000)  # All on deductible
medium_claim = policy.process_claim(3_000_000)  # Hits primary
large_claim = policy.process_claim(20_000_000)  # Multiple layers

Note

Layers are automatically sorted by attachment point to ensure proper claim allocation regardless of input order.

pricing_results: List[Any]
process_claim(claim_amount: float) Tuple[float, float][source]

Process a claim through the insurance structure.

Allocates a loss across the deductible and insurance layers, calculating how much is paid by the company versus insurance. Total insurance recovery is capped at (claim_amount - deductible) to prevent over-recovery when layer configurations overlap with the deductible region.

Parameters:

claim_amount (float) – Total claim amount.

Return type:

Tuple[float, float]

Returns:

Tuple of (company_payment, insurance_recovery).

calculate_recovery(claim_amount: float) float[source]

Calculate total insurance recovery for a claim.

Recovery is capped at (claim_amount - deductible) to prevent over-recovery when layer configurations overlap with the deductible region.

Parameters:

claim_amount (float) – Total claim amount.

Return type:

float

Returns:

Total insurance recovery amount.

calculate_premium() float[source]

Calculate total premium across all layers.

Return type:

float

Returns:

Total annual premium.

classmethod from_simple(deductible: float, limit: float, premium_rate: float, **kwargs) InsurancePolicy[source]

Create a single-layer insurance policy from basic parameters.

Convenience factory for the most common use case: a single primary layer where the attachment point equals the deductible.

Parameters:
  • deductible (float) – Self-insured retention in dollars. The insured pays this amount before coverage begins.

  • limit (float) – Maximum coverage amount in dollars above the deductible.

  • premium_rate (float) – Annual premium as a fraction of the limit (e.g. 0.025 for 2.5%).

  • **kwargs – Additional keyword arguments forwarded to the InsurancePolicy constructor (e.g. pricing_enabled, pricer).

Return type:

InsurancePolicy

Returns:

InsurancePolicy with a single layer whose attachment point equals the deductible.

Examples

Quick single-layer policy:

policy = InsurancePolicy.from_simple(
    deductible=500_000,
    limit=10_000_000,
    premium_rate=0.025,
)

This is equivalent to:

layer = InsuranceLayer(
    attachment_point=500_000,
    limit=10_000_000,
    rate=0.025,
)
policy = InsurancePolicy(layers=[layer], deductible=500_000)
classmethod from_yaml(config_path: str) InsurancePolicy[source]

Load insurance policy from YAML configuration.

Parameters:

config_path (str) – Path to YAML configuration file.

Return type:

InsurancePolicy

Returns:

InsurancePolicy configured from YAML.

get_total_coverage() float[source]

Get total coverage across all layers.

Return type:

float

Returns:

Maximum possible insurance coverage.

to_enhanced_program() ergodic_insurance.insurance_program.InsuranceProgram | None[source]

Convert to enhanced InsuranceProgram for advanced features.

Return type:

Optional[ergodic_insurance.insurance_program.InsuranceProgram]

Returns:

InsuranceProgram instance with same configuration.

apply_pricing(expected_revenue: float, market_cycle: MarketCycle | None = None, loss_generator: ManufacturingLossGenerator | None = None) None[source]

Apply dynamic pricing to all layers in the policy.

Updates layer rates based on frequency/severity calculations.

Parameters:
  • expected_revenue (float) – Expected annual revenue used to scale losses.

  • market_cycle (Optional[MarketCycle]) – Optional market cycle state for pricing.

  • loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator override.

Raises:

ValueError – If pricing not enabled or pricer not configured

Return type:

None

classmethod create_with_pricing(layers: List[InsuranceLayer], loss_generator: ManufacturingLossGenerator, expected_revenue: float, market_cycle: MarketCycle | None = None, deductible: float = 0.0) InsurancePolicy[source]

Create insurance policy with dynamic pricing.

Factory method that creates a policy with pricing already applied.

Parameters:
  • layers (List[InsuranceLayer]) – Insurance layers to include in the policy.

  • loss_generator (ManufacturingLossGenerator) – Loss generator used for pricing.

  • expected_revenue (float) – Expected annual revenue.

  • market_cycle (Optional[MarketCycle]) – Optional market cycle state.

  • deductible (float) – Self-insured retention (default 0.0).

Return type:

InsurancePolicy

Returns:

InsurancePolicy with pricing applied

ergodic_insurance.insurance_accounting module

Insurance premium accounting module.

This module provides proper insurance premium accounting with prepaid asset tracking and systematic monthly amortization following GAAP principles.

Uses Decimal for all currency amounts to prevent floating-point precision errors in iterative calculations.

class InsuranceRecovery(amount: Decimal, claim_id: str, year_approved: int, amount_received: Decimal = <factory>) None[source]

Bases: object

Represents an insurance claim recovery receivable.

amount

Recovery amount approved by insurance (Decimal)

claim_id

Unique identifier for the claim

year_approved

Year when recovery was approved

amount_received

Amount received to date (Decimal)

amount: Decimal
claim_id: str
year_approved: int
amount_received: Decimal
__post_init__() None[source]

Convert amounts to Decimal if needed (runtime check for backwards compatibility).

Return type:

None

property outstanding: Decimal

Calculate outstanding receivable amount.

__deepcopy__(memo: Dict[int, Any]) InsuranceRecovery[source]

Create a deep copy of this insurance recovery.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

InsuranceRecovery

Returns:

Independent copy of this InsuranceRecovery

class InsuranceAccounting(prepaid_insurance: Decimal = <factory>, monthly_expense: Decimal = <factory>, annual_premium: Decimal = <factory>, months_in_period: int = 12, current_month: int = 0, recoveries: List[InsuranceRecovery] = <factory>) None[source]

Bases: object

Manages insurance premium accounting with proper GAAP treatment.

This class tracks annual insurance premium payments as prepaid assets and amortizes them monthly over the coverage period using straight-line amortization. It also tracks insurance claim recoveries separately from claim liabilities.

All currency amounts use Decimal for precise financial calculations.

prepaid_insurance

Current prepaid insurance asset balance (Decimal)

monthly_expense

Calculated monthly insurance expense (Decimal)

annual_premium

Total annual premium amount (Decimal)

months_in_period

Number of months in coverage period (default 12)

current_month

Current month in coverage period

recoveries

List of insurance recoveries receivable

prepaid_insurance: Decimal
monthly_expense: Decimal
annual_premium: Decimal
months_in_period: int = 12
current_month: int = 0
recoveries: List[InsuranceRecovery]
__post_init__() None[source]

Convert amounts to Decimal if needed (runtime check for backwards compatibility).

Return type:

None

__deepcopy__(memo: Dict[int, Any]) InsuranceAccounting[source]

Create a deep copy of this insurance accounting instance.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

InsuranceAccounting

Returns:

Independent copy of this InsuranceAccounting with all recoveries

pay_annual_premium(premium_amount: Decimal | float | int) Dict[str, Decimal][source]

Record annual premium payment at start of coverage period.

Parameters:

premium_amount (Union[Decimal, float, int]) – Annual premium amount to pay (converted to Decimal)

Returns:

  • cash_outflow: Cash paid for premium

  • prepaid_asset: Prepaid insurance asset created

  • monthly_expense: Calculated monthly expense

Return type:

Dictionary with transaction details as Decimal

record_monthly_expense() Dict[str, Decimal][source]

Amortize monthly insurance expense from prepaid asset.

Records one month of insurance expense by reducing the prepaid asset and recognizing the expense. Uses straight-line amortization over the coverage period.

Returns:

  • insurance_expense: Monthly expense recognized

  • prepaid_reduction: Reduction in prepaid asset

  • remaining_prepaid: Remaining prepaid balance

Return type:

Dictionary with transaction details as Decimal
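The straight-line amortization described by pay_annual_premium() and record_monthly_expense() can be sketched as a standalone Decimal routine. This is an illustrative analogue of the accounting flow, not the package class itself.

```python
from decimal import Decimal

def amortize_premium(annual_premium, months=12):
    """Return a list of (monthly_expense, remaining_prepaid) tuples,
    one per month, using straight-line amortization with Decimal."""
    premium = Decimal(str(annual_premium))  # str() avoids float artifacts
    monthly = premium / Decimal(months)
    prepaid = premium
    schedule = []
    for _ in range(months):
        expense = min(monthly, prepaid)  # never amortize below zero
        prepaid -= expense
        schedule.append((expense, prepaid))
    return schedule
```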

record_claim_recovery(recovery_amount: Decimal | float | int, claim_id: str | None = None, year: int = 0) Dict[str, Decimal][source]

Record insurance claim recovery as receivable.

Parameters:
  • recovery_amount (Union[Decimal, float, int]) – Amount approved for recovery from insurance (converted to Decimal)

  • claim_id (Optional[str]) – Optional unique identifier for the claim

  • year (int) – Year when recovery was approved

Returns:

  • insurance_receivable: New receivable amount

  • total_receivables: Total outstanding receivables

Return type:

Dictionary with recovery details as Decimal

receive_recovery_payment(amount: Decimal | float | int, claim_id: str | None = None) Dict[str, Decimal][source]

Record receipt of insurance recovery payment.

Parameters:
  • amount (Union[Decimal, float, int]) – Amount received from insurance (converted to Decimal)

  • claim_id (Optional[str]) – Optional claim ID to apply payment to

Returns:

  • cash_received: Cash inflow amount

  • receivable_reduction: Reduction in receivables

  • remaining_receivables: Total remaining receivables

Return type:

Dictionary with payment details as Decimal

get_total_receivables() Decimal[source]

Calculate total outstanding insurance receivables.

Return type:

Decimal

Returns:

Total amount of outstanding insurance receivables as Decimal

get_amortization_schedule() List[Dict[str, int | Decimal]][source]

Generate remaining amortization schedule.

Return type:

List[Dict[str, Union[int, Decimal]]]

Returns:

List of monthly amortization entries remaining (amounts as Decimal)

reset_for_new_period() None[source]

Reset accounting for a new coverage period.

Clears current period data while preserving recoveries.

Return type:

None

get_summary() Dict[str, int | Decimal][source]

Get summary of current insurance accounting status.

Return type:

Dict[str, Union[int, Decimal]]

Returns:

Dictionary with key accounting metrics (amounts as Decimal)

ergodic_insurance.insurance_pricing module

Insurance pricing module with market cycle support.

This module implements realistic insurance premium calculation based on frequency and severity distributions, replacing hardcoded premium rates in simulations. It supports market cycle adjustments and integrates with existing loss generators and insurance structures.

Example

Basic usage for pricing an insurance program:

from ergodic_insurance.insurance_pricing import InsurancePricer, MarketCycle
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator

# Initialize loss generator and pricer
loss_gen = ManufacturingLossGenerator()
pricer = InsurancePricer(
    loss_generator=loss_gen,
    loss_ratio=0.70,
    market_cycle=MarketCycle.NORMAL
)

# Price an insurance program
program = InsuranceProgram(layers=[...])
priced_program = pricer.price_insurance_program(
    program,
    expected_revenue=15_000_000
)

# Get total premium
total_premium = priced_program.calculate_premium()

class MarketCycle(*values)[source]

Bases: Enum

Market cycle states affecting insurance pricing.

Each state corresponds to a target loss ratio that insurers use to price coverage. Lower loss ratios (hard markets) result in higher premiums.

HARD

Seller’s market with limited capacity (60% loss ratio)

NORMAL

Balanced market conditions (70% loss ratio)

SOFT

Buyer’s market with excess capacity (80% loss ratio)

HARD = 0.6
NORMAL = 0.7
SOFT = 0.8
class PricingParameters(loss_ratio: float = 0.7, expense_ratio: float = 0.25, profit_margin: float = 0.05, risk_loading: float = 0.1, confidence_level: float = 0.95, simulation_years: int = 10, min_premium: float = 1000.0, max_rate_on_line: float = 0.5, alae_ratio: float = 0.1, ulae_ratio: float = 0.05, development_pattern: ClaimDevelopment | None = None) None[source]

Bases: object

Parameters for insurance pricing calculations.

loss_ratio

Target loss ratio for pricing (claims/premium)

expense_ratio

Operating expense ratio excluding LAE (default 0.25). Covers commissions, overhead, admin, and other non-LAE expenses.

profit_margin

Target profit margin (default 0.05)

risk_loading

Additional loading for uncertainty (default 0.10)

confidence_level

Confidence level for pricing (default 0.95)

simulation_years

Years to simulate for pricing (default 10)

min_premium

Minimum premium floor (default 1000)

max_rate_on_line

Maximum rate on line cap (default 0.50)

alae_ratio

Allocated LAE ratio as fraction of pure premium (default 0.10). Covers claim-specific costs such as legal fees and expert witnesses. Typical range for commercial lines is 0.08-0.15.

ulae_ratio

Unallocated LAE ratio as fraction of pure premium (default 0.05). Covers general claims department overhead. Typical range is 0.03-0.08.

development_pattern

Optional claim development pattern for developing losses to ultimate (default None). When set, simulated losses are adjusted by age-to-ultimate factors per ASOP 25 so that immature accident years are brought to their expected ultimate value before pure premium calculation. Use None to skip development (equivalent to assuming all losses are already at ultimate).

loss_ratio: float = 0.7
expense_ratio: float = 0.25
profit_margin: float = 0.05
risk_loading: float = 0.1
confidence_level: float = 0.95
simulation_years: int = 10
min_premium: float = 1000.0
max_rate_on_line: float = 0.5
alae_ratio: float = 0.1
ulae_ratio: float = 0.05
development_pattern: ClaimDevelopment | None = None
property lae_ratio: float

Combined LAE ratio (ALAE + ULAE) as fraction of pure premium.

class LayerPricing(attachment_point: float, limit: float, expected_frequency: float, expected_severity: float, pure_premium: float, technical_premium: float, market_premium: float, rate_on_line: float, confidence_interval: Tuple[float, float], lae_loading: float = 0.0, development_factor: float = 1.0) None[source]

Bases: object

Pricing details for a single insurance layer.

attachment_point

Where coverage starts

limit

Maximum coverage amount

expected_frequency

Expected claims per year hitting this layer

expected_severity

Average severity of claims in this layer

pure_premium

Expected loss cost

technical_premium

Pure premium with expenses and profit

market_premium

Final premium after market adjustments

rate_on_line

Premium as percentage of limit

confidence_interval

(lower, upper) bounds at confidence level

lae_loading

LAE component calculated from dedicated ALAE/ULAE ratios

development_factor

Average age-to-ultimate LDF applied (1.0 = no development)

attachment_point: float
limit: float
expected_frequency: float
expected_severity: float
pure_premium: float
technical_premium: float
market_premium: float
rate_on_line: float
confidence_interval: Tuple[float, float]
lae_loading: float = 0.0
development_factor: float = 1.0
class LayerPricer(severity_distribution: LossDistribution, frequency: float = 1.0) None[source]

Bases: object

Analytical layer pricing using limited expected values.

Computes expected losses in an excess layer (attachment, attachment + limit) directly from a fitted severity distribution, without simulation. This replaces ad-hoc rate heuristics with actuarially sound formulas based on the limited expected value (LEV), increased limits factors (ILFs), loss elimination ratios (LERs), and exposure curves (Lee diagrams).

The fundamental identity is:

E[loss in layer (a, a+l)] = LEV(a + l) - LEV(a)

where LEV(d) = E[min(X, d)] is the limited expected value at d.

References

  • Lee (1988) — Loss Distributions (exposure curves)

  • Miccolis (1977) — On the Theory of Increased Limits and Excess of Loss Pricing

  • Klugman, Panjer, Willmot — Loss Models, Chapter 5

Parameters:
  • severity_distribution (LossDistribution) – Fitted severity distribution with a limited_expected_value(limit) method.

  • frequency (float) – Expected annual claim frequency for the distribution.

Example

Analytical pricing of an excess layer:

from ergodic_insurance.loss_distributions import ParetoLoss
from ergodic_insurance.insurance_pricing import LayerPricer

severity = ParetoLoss(alpha=2.5, xm=100_000)
pricer = LayerPricer(severity, frequency=5.0)

# Expected loss in the $1M xs $500K layer
layer_loss = pricer.expected_layer_loss(500_000, 1_000_000)

# Increased Limits Factor at $2M relative to $500K basic limit
ilf = pricer.increased_limits_factor(2_000_000, basic_limit=500_000)
expected_layer_loss(attachment: float, limit: float) float[source]

Calculate the expected loss in an excess layer.

E[loss in (a, a+l)] = LEV(a + l) - LEV(a)

This is the pure premium for a single occurrence in the layer, multiplied by frequency to get the annual expected layer loss.

Parameters:
  • attachment (float) – Layer attachment point (deductible).

  • limit (float) – Layer limit (width of coverage).

Return type:

float

Returns:

Annual expected loss cost for the layer.

increased_limits_factor(limit: float, basic_limit: float) float[source]

Calculate the Increased Limits Factor (ILF).

ILF(L) = LEV(L) / LEV(B)

where B is the basic limit and L is the desired limit. Used to price policies at higher limits relative to a base.

Reference: Miccolis (1977); CAS Study Note on ILF ratemaking.

Parameters:
  • limit (float) – Target policy limit.

  • basic_limit (float) – Basic (reference) limit.

Return type:

float

Returns:

ILF ratio (>= 1.0 when limit >= basic_limit).

loss_elimination_ratio(deductible: float) float[source]

Calculate the Loss Elimination Ratio (LER).

LER(d) = LEV(d) / E[X]

The proportion of total expected losses eliminated by a deductible d. An LER of 0.40 means the deductible removes 40% of expected losses.

Reference: Klugman, Panjer, Willmot — Loss Models, Definition 5.2.

Parameters:

deductible (float) – Deductible amount.

Return type:

float

Returns:

LER in [0, 1]. Returns 0.0 if E[X] is infinite.

exposure_curve(n_points: int = 100) Dict[str, List[float]][source]

Compute the exposure curve (Lee diagram).

The exposure curve relates the retention as a fraction of the policy limit to the proportion of expected losses retained. Formally:

G(r) = LEV(r * M) / LEV(M)

where M is the maximum possible loss (or a large practical upper bound) and r ranges from 0 to 1.

Useful for visualising how much loss is retained vs. ceded at different attachment points.

Reference: Lee (1988).

Parameters:

n_points (int) – Number of points on the curve (default 100).

Return type:

Dict[str, List[float]]

Returns:

Dictionary with ‘retention_pct’ and ‘loss_eliminated_pct’ lists, each of length n_points + 1 (including the origin).
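The LEV identities behind expected_layer_loss(), loss_elimination_ratio(), and the exposure curve can be reproduced numerically from the survival-function form LEV(d) = integral from 0 to d of S(x) dx. The sketch below assumes a Pareto severity with illustrative parameters and is independent of the package's implementation.

```python
import numpy as np

ALPHA, XM = 2.5, 100_000.0  # illustrative Pareto shape and scale

def survival(x):
    """Pareto survival function: S(x) = (xm / x)^alpha for x >= xm, else 1."""
    x = np.asarray(x, dtype=float)
    return np.where(x < XM, 1.0, (XM / np.maximum(x, XM)) ** ALPHA)

def lev(d, n=200_001):
    """Limited expected value via LEV(d) = integral_0^d S(x) dx (trapezoid rule)."""
    xs = np.linspace(0.0, float(d), n)
    ys = survival(xs)
    return float(np.sum((ys[1:] + ys[:-1]) * np.diff(xs)) / 2.0)

def expected_layer_loss(attachment, limit, frequency=1.0):
    """Annual expected layer loss: frequency * (LEV(a + l) - LEV(a))."""
    return frequency * (lev(attachment + limit) - lev(attachment))

def loss_elimination_ratio(deductible):
    """LER(d) = LEV(d) / E[X]; Pareto mean is alpha * xm / (alpha - 1)."""
    return lev(deductible) / (ALPHA * XM / (ALPHA - 1.0))
```

For this distribution the closed form LEV(d) = (xm / (alpha - 1)) * (alpha - (xm / d)^(alpha - 1)) agrees with the numeric integral, which is a useful sanity check when swapping in other severities.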

class InsurancePricer(loss_generator: 'ManufacturingLossGenerator' | None = None, loss_ratio: float | None = None, market_cycle: MarketCycle | None = None, parameters: PricingParameters | None = None, exposure: 'ExposureBase' | None = None, seed: int | None = None)[source]

Bases: object

Calculate insurance premiums based on loss distributions and market conditions.

This class provides methods to price individual layers and complete insurance programs using frequency/severity distributions from loss generators. It supports market cycle adjustments and maintains backward compatibility with fixed rates.

Parameters:
  • loss_generator (Optional[ManufacturingLossGenerator]) – Generator for frequency/severity simulation.

  • loss_ratio (Optional[float]) – Target loss ratio override.

  • market_cycle (Optional[MarketCycle]) – Market cycle state affecting pricing.

  • parameters (Optional[PricingParameters]) – Full pricing parameter set.

  • exposure (Optional[ExposureBase]) – Exposure base for revenue scaling.

  • seed (Optional[int]) – Random seed for reproducible pricing.

Example

Pricing with different market conditions:

# Hard market pricing (higher premiums)
hard_pricer = InsurancePricer(
    loss_generator=loss_gen,
    market_cycle=MarketCycle.HARD
)

# Soft market pricing (lower premiums)
soft_pricer = InsurancePricer(
    loss_generator=loss_gen,
    market_cycle=MarketCycle.SOFT
)
calculate_pure_premium(attachment_point: float, limit: float, expected_revenue: float, simulation_years: int | None = None) Tuple[float, Dict[str, Any]][source]

Calculate pure premium for a layer via mean annual aggregate.

Pure premium is the mean of the simulated annual aggregate losses in the layer (CAS Exam 5 / Werner & Modlin Chapter 4). When a development_pattern is configured on the pricing parameters, each simulation year’s aggregate losses are developed to ultimate using age-to-ultimate factors (ASOP 25 / CAS Ratemaking Chapter 4). Older simulation years are treated as more mature while the most recent year is treated as the least mature, mirroring standard experience-rating practice.

Parameters:
  • attachment_point (float) – Where layer coverage starts

  • limit (float) – Maximum coverage from this layer

  • expected_revenue (float) – Expected annual revenue for scaling

  • simulation_years (Optional[int]) – Years to simulate (default from parameters)

Return type:

Tuple[float, Dict[str, Any]]

Returns:

Tuple of (pure_premium, statistics_dict) with detailed metrics

Raises:

ValueError – If loss_generator is not configured
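The documented procedure (simulate annual aggregate layer losses, develop each year to ultimate with age-to-ultimate factors, then average) can be sketched as follows. The frequency, severity, and LDF values are illustrative assumptions, not the package's defaults.

```python
import numpy as np

def pure_premium(attachment, limit, years=10, freq=3.0,
                 sev_mu=12.0, sev_sigma=1.5, ldfs=None, seed=42):
    """Mean of simulated annual aggregate layer losses, developed to ultimate.
    Illustrative sketch of the documented approach."""
    rng = np.random.default_rng(seed)
    # Age-to-ultimate factors: oldest year most mature (LDF near 1.0),
    # most recent year least mature (largest LDF). Illustrative values.
    ldfs = np.linspace(1.0, 1.5, years) if ldfs is None else np.asarray(ldfs)
    annual = np.empty(years)
    for y in range(years):
        n = rng.poisson(freq)                                # claim count
        losses = rng.lognormal(sev_mu, sev_sigma, size=n)    # ground-up severities
        in_layer = np.clip(losses - attachment, 0.0, limit)  # layer slices
        annual[y] = in_layer.sum() * ldfs[y]                 # develop to ultimate
    return float(annual.mean())
```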

calculate_technical_premium(pure_premium: float, limit: float) float[source]

Convert pure premium to technical premium with risk and LAE loading.

Technical premium adds a risk loading for parameter uncertainty to the pure premium, plus LAE (loss adjustment expense) as a known cost component per ASOP 29. Expense and profit margins are applied separately via the actuarial pricing identity in calculate_market_premium() to avoid double-counting.

Formula:

technical_premium = pure_premium * (1 + risk_loading) + pure_premium * lae_ratio

Parameters:
  • pure_premium (float) – Expected loss cost

  • limit (float) – Layer limit for rate capping

Return type:

float

Returns:

Technical premium amount

calculate_market_premium(technical_premium: float, market_cycle: MarketCycle | None = None) float[source]

Apply expense, profit, and market cycle loadings to technical premium.

Uses the standard actuarial pricing identity:

Premium = Pure_Premium / (1 - V - Q)

where V is the expense ratio and Q is the profit margin (Werner & Modlin, Basic Ratemaking, Ch. 7). The market cycle then scales this indicated premium to reflect competitive pressure.

With default parameters (V=0.25, Q=0.05, loss_ratio=0.70) the formula reduces to technical_premium / cycle_loss_ratio, preserving backward compatibility.

Parameters:
  • technical_premium (float) – Premium with risk and LAE loadings

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

Return type:

float

Returns:

Market-adjusted premium incorporating expenses, profit, and competitive cycle effects
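The pricing identity and the cycle scaling can be checked with a short worked example. This is an illustrative reconstruction of the docstring's formulas, not the package's exact code path.

```python
def indicated_premium(pure_premium, expense_ratio=0.25, profit_margin=0.05):
    """Premium = Pure_Premium / (1 - V - Q), grossing up for expenses and profit."""
    return pure_premium / (1.0 - expense_ratio - profit_margin)

def cycle_adjusted(pure_premium, cycle_loss_ratio):
    """Harder markets (lower target loss ratios) imply higher premiums."""
    return pure_premium / cycle_loss_ratio

# With V=0.25 and Q=0.05, $700K of expected losses indicates a $1M premium,
# since 1 - 0.25 - 0.05 = 0.70.
```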

price_layer(attachment_point: float, limit: float, expected_revenue: float, market_cycle: MarketCycle | None = None) LayerPricing[source]

Price a single insurance layer.

Complete pricing process from pure premium through market adjustment.

Parameters:
  • attachment_point (float) – Where coverage starts

  • limit (float) – Maximum coverage amount

  • expected_revenue (float) – Expected annual revenue

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

Return type:

LayerPricing

Returns:

LayerPricing object with all pricing details

price_insurance_program(program: ergodic_insurance.insurance_program.InsuranceProgram, expected_revenue: float | None = None, time: float = 0.0, market_cycle: MarketCycle | None = None, update_program: bool = True) ergodic_insurance.insurance_program.InsuranceProgram[source]

Price a complete insurance program.

Prices all layers in the program and optionally updates their rates.

Parameters:
  • program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to price

  • expected_revenue (Optional[float]) – Expected annual revenue (optional if using exposure)

  • time (float) – Time for exposure calculation (default 0.0)

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • update_program (bool) – Whether to update program layer rates

Return type:

ergodic_insurance.insurance_program.InsuranceProgram

Returns:

Program with updated pricing (original or copy based on update_program)

price_insurance_policy(policy: InsurancePolicy, expected_revenue: float, market_cycle: MarketCycle | None = None, update_policy: bool = True) InsurancePolicy[source]

Price a basic insurance policy.

Deprecated: Use price_insurance_program() instead.

Prices all layers in the policy and optionally updates their rates.

Parameters:
  • policy (InsurancePolicy) – Insurance policy to price

  • expected_revenue (float) – Expected annual revenue

  • market_cycle (Optional[MarketCycle]) – Optional market cycle override

  • update_policy (bool) – Whether to update policy layer rates

Return type:

InsurancePolicy

Returns:

Policy with updated pricing (original or copy based on update_policy)

compare_market_cycles(attachment_point: float, limit: float, expected_revenue: float) Dict[str, LayerPricing][source]

Compare pricing across different market cycles.

Useful for understanding market impact on premiums.

Parameters:
  • attachment_point (float) – Where coverage starts

  • limit (float) – Maximum coverage amount

  • expected_revenue (float) – Expected annual revenue

Return type:

Dict[str, LayerPricing]

Returns:

Dictionary mapping market cycle names to pricing results

simulate_cycle_transition(program: ergodic_insurance.insurance_program.InsuranceProgram, expected_revenue: float, years: int = 10, transition_probs: Dict[str, float] | None = None) List[Dict[str, Any]][source]

Simulate insurance pricing over market cycle transitions.

Models how premiums change as markets transition between states.

Parameters:
  • program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to simulate

  • expected_revenue (float) – Expected annual revenue

  • years (int) – Number of years to simulate

  • transition_probs (Optional[Dict[str, float]]) – Market transition probabilities

Return type:

List[Dict[str, Any]]

Returns:

List of annual results with cycle states and premiums

static create_from_config(config: Dict[str, Any], loss_generator: ManufacturingLossGenerator | None = None) InsurancePricer[source]

Create pricer from configuration dictionary.

Parameters:
  • config (Dict[str, Any]) – Configuration dictionary. Supports an optional development_pattern key whose value is the name of a standard DevelopmentPatternType (e.g. "long_tail_10yr") or a dict with factors and optional tail_factor.

  • loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator

Return type:

InsurancePricer

Returns:

Configured InsurancePricer instance

ergodic_insurance.insurance_program module

Multi-layer insurance program with reinstatements and advanced features.

This module provides comprehensive insurance program management including multi-layer structures, reinstatements, attachment points, and accurate loss allocation for manufacturing risk transfer optimization.

class ReinstatementType(*values)[source]

Bases: Enum

Types of reinstatement provisions.

NONE = 'none'
PRO_RATA = 'pro_rata'
FULL = 'full'
FREE = 'free'
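For reference, a common market convention for reinstatement charges (assumed here for illustration; the package's own calculation may differ) prorates the charge by the fraction of the limit consumed.

```python
def reinstatement_charge(base_premium, limit, limit_used, factor=1.0,
                         kind="pro_rata"):
    """Charge to restore an exhausted layer limit. Illustrative convention:
    pro-rata charge = base_premium * factor * (limit_used / limit)."""
    if kind == "free":
        return 0.0
    if kind == "full":
        return base_premium * factor       # full charge regardless of usage
    if kind == "pro_rata":
        return base_premium * factor * min(limit_used / limit, 1.0)
    return 0.0                             # 'none': no reinstatement available
```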
class ProgramOptimizationConstraints(max_total_premium: float | None = None, min_total_coverage: float | None = None, max_layers: int = 5, min_layers: int = 3, max_attachment_gap: float = 0.0, min_roe_improvement: float = 0.15, max_iterations: int = 1000, convergence_tolerance: float = 1e-06) None[source]

Bases: object

Constraints for insurance program optimization.

max_total_premium: float | None = None
min_total_coverage: float | None = None
max_layers: int = 5
min_layers: int = 3
max_attachment_gap: float = 0.0
min_roe_improvement: float = 0.15
max_iterations: int = 1000
convergence_tolerance: float = 1e-06
class OptimalStructure(layers: List[EnhancedInsuranceLayer], deductible: float, total_premium: float, total_coverage: float, ergodic_benefit: float, roe_improvement: float, optimization_metrics: Dict[str, Any], convergence_achieved: bool, iterations_used: int) None[source]

Bases: object

Result of insurance structure optimization.

layers: List[EnhancedInsuranceLayer]
deductible: float
total_premium: float
total_coverage: float
ergodic_benefit: float
roe_improvement: float
optimization_metrics: Dict[str, Any]
convergence_achieved: bool
iterations_used: int
class EnhancedInsuranceLayer(attachment_point: float, limit: float, base_premium_rate: float, reinstatements: int = 0, reinstatement_premium: float = 1.0, reinstatement_type: ReinstatementType = ReinstatementType.PRO_RATA, aggregate_limit: float | None = None, participation_rate: float = 1.0, limit_type: str = 'per-occurrence', per_occurrence_limit: float | None = None, premium_rate_exposure: ExposureBase | None = None) None[source]

Bases: object

Insurance layer with reinstatement support and advanced features.

Extends basic layer functionality with reinstatements, tracking, and more sophisticated premium calculations.

attachment_point: float
limit: float
base_premium_rate: float
reinstatements: int = 0
reinstatement_premium: float = 1.0
reinstatement_type: ReinstatementType = 'pro_rata'
aggregate_limit: float | None = None
participation_rate: float = 1.0
limit_type: str = 'per-occurrence'
per_occurrence_limit: float | None = None
premium_rate_exposure: ExposureBase | None = None
exhausted: float = 0.0
__post_init__()[source]

Validate layer parameters.

calculate_base_premium(time: float = 0.0) float[source]

Calculate base premium for this layer.

Parameters:

time (float) – Time in years for exposure calculation (default 0.0).

Return type:

float

Returns:

Base premium amount (rate × limit × exposure_multiplier).

calculate_reinstatement_premium(timing_factor: float = 1.0) float[source]

Calculate premium for a single reinstatement.

Parameters:

timing_factor (float) – Pro-rata factor based on policy period remaining (0-1).

Return type:

float

Returns:

Reinstatement premium amount.
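
A common pro-rata convention prices a reinstatement as a fraction of the base premium, scaled by the portion of the policy period remaining. This sketch assumes that convention; the library's exact formula may differ:

```python
def reinstatement_premium(base_premium: float, factor: float,
                          timing_factor: float = 1.0) -> float:
    """Pro-rata reinstatement premium: a fraction of the base premium,
    scaled by the portion of the policy period remaining."""
    if not 0.0 <= timing_factor <= 1.0:
        raise ValueError("timing_factor must be in [0, 1]")
    return base_premium * factor * timing_factor

# A $150k base premium, 100% reinstatement factor, half the period remaining:
premium = reinstatement_premium(150_000, 1.0, 0.5)
```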

can_respond(loss_amount: float) bool[source]

Check if this layer can respond to a loss.

Parameters:

loss_amount (float) – Total loss amount.

Return type:

bool

Returns:

True if loss exceeds attachment point.

calculate_layer_loss(total_loss: float) float[source]

Calculate the portion of loss hitting this layer.

Parameters:

total_loss (float) – Total loss amount.

Return type:

float

Returns:

Amount of loss allocated to this layer (before applying limits).
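
The standard excess-of-loss allocation takes the slice of the loss between the attachment point and attachment + limit. A minimal sketch (aggregate limits and reinstatement erosion, which LayerState handles, are ignored here; where participation applies relative to limits is an assumption):

```python
def layer_loss(total_loss: float, attachment: float, limit: float,
               participation: float = 1.0) -> float:
    """Portion of a ground-up loss hitting an excess layer: the slice
    between attachment and attachment + limit, scaled by the layer share."""
    excess = max(total_loss - attachment, 0.0)
    return min(excess, limit) * participation

# An $8M loss against a $5M xs $5M layer puts $3M into the layer.
hit = layer_loss(8_000_000, 5_000_000, 5_000_000)
```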

class LayerState(layer: EnhancedInsuranceLayer, used_limit: float = 0.0, reinstatements_used: int = 0, total_claims_paid: float = 0.0, reinstatement_premiums_paid: float = 0.0, is_exhausted: bool = False, aggregate_used: float = 0.0) None[source]

Bases: object

Tracks the current state of an insurance layer during simulation.

Maintains utilization, reinstatement count, and exhaustion status for accurate multi-claim processing.

layer: EnhancedInsuranceLayer
current_limit: float
used_limit: float = 0.0
reinstatements_used: int = 0
total_claims_paid: float = 0.0
reinstatement_premiums_paid: float = 0.0
is_exhausted: bool = False
aggregate_used: float = 0.0
__post_init__()[source]

Initialize current limit to layer’s base limit.

process_claim(claim_amount: float, timing_factor: float = 1.0) Tuple[float, float][source]

Process a claim against this layer.

Parameters:
  • claim_amount (float) – Amount of loss allocated to this layer.

  • timing_factor (float) – Pro-rata factor for reinstatement premium.

Return type:

Tuple[float, float]

Returns:

Tuple of (amount_paid, reinstatement_premium).

reset()[source]

Reset layer state for new policy period.

get_available_limit() float[source]

Get currently available limit.

Return type:

float

Returns:

Available limit for claims.

get_utilization_rate() float[source]

Calculate layer utilization rate.

Return type:

float

Returns:

Utilization as percentage of total available limit.

class InsuranceProgram(layers: List[EnhancedInsuranceLayer], deductible: float = 0.0, name: str = 'Manufacturing Insurance Program', pricing_enabled: bool = False, pricer: InsurancePricer | None = None, max_history_years: int = 50)[source]

Bases: object

Comprehensive multi-layer insurance program manager.

Handles complex insurance structures with multiple layers, reinstatements, and sophisticated claim allocation.

pricing_results: List[Any]
classmethod create_fresh(source: ergodic_insurance.insurance_program.InsuranceProgram) ergodic_insurance.insurance_program.InsuranceProgram[source]

Create a fresh program from an existing program’s configuration.

Factory method that avoids copy.deepcopy by constructing a new instance directly from immutable layer definitions. Use this in hot loops (e.g. Monte Carlo workers) where each simulation needs clean initial state.

The new instance shares the same EnhancedInsuranceLayer objects (which are immutable after construction) but builds fresh LayerState wrappers with zeroed counters.

Parameters:

source (InsuranceProgram) – An existing program whose configuration is reused.

Return type:

InsuranceProgram

Returns:

A new InsuranceProgram with pristine mutable state.

classmethod from_config(insurance_config: InsuranceConfig, name: str = 'Insurance Program', **kwargs) ergodic_insurance.insurance_program.InsuranceProgram[source]

Create an insurance program from an InsuranceConfig object.

Bridges the config system (Pydantic models loaded from YAML/profiles) to the runtime simulation by converting each InsuranceLayerConfig into an EnhancedInsuranceLayer.

Parameters:
  • insurance_config (InsuranceConfig) – Validated insurance configuration, typically obtained via Config.from_yaml(...).insurance.

  • name (str) – Program identifier.

  • **kwargs – Additional keyword arguments forwarded to the InsuranceProgram constructor (e.g. pricing_enabled).

Return type:

InsuranceProgram

Returns:

Configured InsuranceProgram ready for simulation.

Raises:

TypeError – If insurance_config is not an InsuranceConfig.

Examples

From a YAML file:

config = Config.from_yaml(Path("my_config.yaml"))
program = InsuranceProgram.from_config(config.insurance)

Inline config:

from ergodic_insurance.config.insurance import (
    InsuranceConfig, InsuranceLayerConfig,
)
ic = InsuranceConfig(
    deductible=500_000,
    layers=[
        InsuranceLayerConfig(
            name="Primary",
            attachment=500_000,
            limit=5_000_000,
            base_premium_rate=0.015,
        ),
    ],
)
program = InsuranceProgram.from_config(ic)
classmethod simple(deductible: float, limit: float, rate: float, name: str = 'Simple Insurance Program', **kwargs) ergodic_insurance.insurance_program.InsuranceProgram[source]

Create a single-layer insurance program from basic parameters.

Convenience factory for the most common use case: a single primary layer where the attachment point equals the deductible.

Parameters:
  • deductible (float) – Self-insured retention in dollars.

  • limit (float) – Maximum coverage amount in dollars above the deductible.

  • rate (float) – Annual premium as a fraction of the limit (e.g. 0.025 for 2.5%).

  • name (str) – Program identifier.

  • **kwargs – Additional keyword arguments (e.g. pricing_enabled, pricer).

Return type:

InsuranceProgram

Returns:

InsuranceProgram with a single layer.

Examples

Quick single-layer program:

program = InsuranceProgram.simple(
    deductible=500_000,
    limit=10_000_000,
    rate=0.025,
)
calculate_premium(time: float = 0.0) float[source]

Calculate total annual premium for the program.

Parameters:

time (float) – Time in years for exposure calculation (default 0.0).

Return type:

float

Returns:

Total base premium across all layers.

calculate_annual_premium(time: float = 0.0) float[source]

Calculate total annual premium for the program.

Deprecated: Use calculate_premium() instead. This method will be removed in a future release.

Parameters:

time (float) – Time in years for exposure calculation (default 0.0).

Return type:

float

Returns:

Total base premium across all layers.

process_claim(claim_amount: float, timing_factor: float = 1.0) ClaimResult[source]

Process a single claim through the insurance structure.

Parameters:
  • claim_amount (float) – Total claim amount.

  • timing_factor (float) – Pro-rata factor for reinstatement premiums.

Return type:

ClaimResult

Returns:

Typed ClaimResult with claim allocation details.

process_annual_claims(claims: List[float], claim_times: List[float] | None = None) Dict[str, Any][source]

Process all claims for a policy year.

Parameters:
  • claims (List[float]) – List of claim amounts.

  • claim_times (Optional[List[float]]) – Optional list of claim times (0-1 for year fraction).

Return type:

Dict[str, Any]

Returns:

Dictionary with annual summary statistics.
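
The aggregate flow can be sketched as pushing each claim through the tower bottom-up while per-layer limits erode across claims. This is a simplified model (no reinstatements, no timing factors); the actual method also tracks premiums and per-layer detail:

```python
def process_annual_claims_sketch(claims, layers):
    """Allocate each claim through the tower bottom-up. `layers` is a list
    of (attachment, per_occurrence_limit) tuples; remaining limit erodes
    across claims (no reinstatements in this sketch)."""
    remaining = [limit for _, limit in layers]
    total_recovered = 0.0
    for claim in claims:
        for i, (attach, limit) in enumerate(layers):
            slice_ = min(max(claim - attach, 0.0), limit, remaining[i])
            remaining[i] -= slice_
            total_recovered += slice_
    total_losses = float(sum(claims))
    return {
        "total_losses": total_losses,
        "total_recoveries": total_recovered,
        "retained": total_losses - total_recovered,
    }

# $4.5M xs $500k primary, $5M xs $5M excess (attachment of the primary
# doubles as the deductible):
tower = [(500_000, 4_500_000), (5_000_000, 5_000_000)]
result = process_annual_claims_sketch([2_000_000, 7_000_000], tower)
```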

reset_annual()[source]

Reset program state for new policy year.

get_program_summary() Dict[str, Any][source]

Get current program state summary.

Return type:

Dict[str, Any]

Returns:

Dictionary with program statistics.

get_total_coverage() float[source]

Calculate maximum possible coverage.

Return type:

float

Returns:

Maximum claim amount that can be covered.

calculate_ergodic_benefit(loss_history: List[List[float]], manufacturer_profile: Dict[str, Any] | None = None, time_horizon: int = 50) Dict[str, float][source]

Calculate ergodic benefit of insurance structure.

Quantifies time-average growth improvement from insurance coverage versus ensemble-average cost.

Parameters:
  • loss_history (List[List[float]]) – Historical loss data (list of annual loss lists).

  • manufacturer_profile (Optional[Dict[str, Any]]) – Company profile with assets, revenue, etc.

  • time_horizon (int) – Time horizon for ergodic calculation (default 50, matching SimulationConfig.time_horizon_years).

Return type:

Dict[str, float]

Returns:

Dictionary with ergodic metrics.
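
The core idea behind the ergodic comparison is that volatility depresses the time-average (geometric) growth rate even when the arithmetic mean return is unchanged, which is why reducing volatility through insurance can be worth more than its expected cost. A minimal numeric illustration (not the library's calculation):

```python
import numpy as np

def time_average_growth(wealth_path):
    """Annualized time-average (geometric) growth rate of one wealth path."""
    w = np.asarray(wealth_path, dtype=float)
    years = len(w) - 1
    return (w[-1] / w[0]) ** (1.0 / years) - 1.0

# Same 10% arithmetic mean return per year, different volatility:
steady = [100, 110, 121]     # +10%, +10%
volatile = [100, 160, 96]    # +60%, -40%: mean 10%, but wealth shrinks

g_steady = time_average_growth(steady)
g_volatile = time_average_growth(volatile)
```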

find_optimal_attachment_points(loss_data: List[float], num_layers: int = 4, percentiles: List[float] | None = None) List[float][source]

Find optimal attachment points based on loss frequency/severity.

Uses data-driven approach to minimize gaps while optimizing cost.

Parameters:
  • loss_data (List[float]) – Historical loss amounts.

  • num_layers (int) – Number of layers to optimize.

  • percentiles (Optional[List[float]]) – Optional percentiles for attachment points.

Return type:

List[float]

Returns:

List of optimal attachment points.
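
One simple data-driven approach places attachments at loss percentiles, so higher layers attach at rarer, larger loss levels. A sketch under that assumption (the default percentiles here are illustrative, not the library's actual choices):

```python
import numpy as np

def attachment_points_from_percentiles(loss_data, percentiles=(50, 75, 90, 99)):
    """Attachment candidates at increasing loss percentiles."""
    return list(np.percentile(np.asarray(loss_data, dtype=float), percentiles))

# Synthetic lognormal losses for illustration:
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=13.0, sigma=1.5, size=10_000)
points = attachment_points_from_percentiles(losses)
```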

optimize_layer_widths(attachment_points: List[float], total_budget: float, capacity_constraints: Dict[str, float] | None = None, loss_data: List[float] | None = None) List[float][source]

Optimize layer widths given attachment points and constraints.

Parameters:
  • attachment_points (List[float]) – Fixed attachment points for layers.

  • total_budget (float) – Total premium budget.

  • capacity_constraints (Optional[Dict[str, float]]) – Optional max capacity per layer.

  • loss_data (Optional[List[float]]) – Optional loss data for severity analysis.

Return type:

List[float]

Returns:

List of optimal layer widths.

optimize_layer_structure(loss_data: List[List[float]], company_profile: Dict[str, Any] | None = None, constraints: ProgramOptimizationConstraints | None = None) OptimalStructure[source]

Optimize complete insurance layer structure.

Main optimization method that orchestrates layer count, attachment points, and widths to maximize ergodic benefit.

Parameters:
  • loss_data (List[List[float]]) – Historical loss data (list of annual loss lists).

  • company_profile (Optional[Dict[str, Any]]) – Company profile with assets, revenue, etc.

  • constraints (Optional[ProgramOptimizationConstraints]) – Optimization constraints.

Return type:

OptimalStructure

Returns:

Optimal insurance structure.

classmethod from_yaml(config_path: str) ergodic_insurance.insurance_program.InsuranceProgram[source]

Load insurance program from YAML configuration.

Parameters:

config_path (str) – Path to YAML configuration file.

Return type:

InsuranceProgram

Returns:

Configured InsuranceProgram instance.

classmethod create_standard_manufacturing_program(deductible: float = 250000) ergodic_insurance.insurance_program.InsuranceProgram[source]

Create standard manufacturing insurance program.

Builds a tower with up to 4 layers above the deductible: Primary ($deductible-$5M), 1st Excess ($5M-$25M), 2nd Excess ($25M-$50M), CAT ($50M-$100M).

Layers whose limit would be zero or negative (because the deductible exceeds their attachment boundary) are skipped.

Parameters:

deductible (float) – Self-insured retention amount.

Return type:

InsuranceProgram

Returns:

Standard manufacturing insurance program.
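
The layer-skipping rule described above can be sketched as follows: walk the tower boundaries, attach each layer at the greater of the previous boundary and the deductible, and drop any layer whose resulting limit is non-positive:

```python
def build_tower(deductible, boundaries=(5e6, 25e6, 50e6, 100e6)):
    """Build (attachment, limit) layers between successive boundaries,
    starting above the deductible; skip layers the deductible swallows."""
    layers = []
    bottom = deductible
    for top in boundaries:
        limit = top - bottom
        if limit > 0:
            layers.append((bottom, limit))
            bottom = top
    return layers

full = build_tower(250_000)        # all four layers survive
partial = build_tower(30_000_000)  # deductible swallows the first two
```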

apply_pricing(expected_revenue: float, market_cycle: MarketCycle | None = None, loss_generator: ManufacturingLossGenerator | None = None) None[source]

Apply dynamic pricing to all layers in the program.

Updates layer premium rates based on frequency/severity calculations.

Parameters:
  • expected_revenue (float) – Expected annual revenue.

  • market_cycle (Optional[MarketCycle]) – Optional market cycle state.

  • loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator.

Raises:

ValueError – If pricing not enabled or pricer not configured

Return type:

None

get_pricing_summary() Dict[str, Any][source]

Get summary of current pricing.

Return type:

Dict[str, Any]

Returns:

Dictionary with pricing details for each layer

classmethod create_with_pricing(layers: List[EnhancedInsuranceLayer], loss_generator: ManufacturingLossGenerator, expected_revenue: float, market_cycle: MarketCycle | None = None, deductible: float = 0.0, name: str = 'Priced Insurance Program') ergodic_insurance.insurance_program.InsuranceProgram[source]

Create insurance program with dynamic pricing.

Factory method that creates a program with pricing already applied.

Parameters:
  • layers (List[EnhancedInsuranceLayer]) – Insurance layers for the program.

  • loss_generator (ManufacturingLossGenerator) – Loss generator used for pricing.

  • expected_revenue (float) – Expected annual revenue.

  • market_cycle (Optional[MarketCycle]) – Optional market cycle state.

  • deductible (float) – Self-insured retention amount.

  • name (str) – Program identifier.

Return type:

InsuranceProgram

Returns:

InsuranceProgram with pricing applied

class ProgramState(program: ergodic_insurance.insurance_program.InsuranceProgram, max_history_years: int | None = None, years_simulated: int = 0, total_claims: deque = <factory>, total_recoveries: deque = <factory>, total_premiums: deque = <factory>, annual_results: deque = <factory>) None[source]

Bases: object

Tracks multi-year insurance program state for simulations.

Maintains historical data and statistics across multiple policy periods for long-term analysis.

History lists use bounded collections.deque instances to prevent unbounded memory growth during long simulations. Running totals maintain accurate lifetime statistics regardless of the history window size.

program: InsuranceProgram
max_history_years: int | None = None
years_simulated: int = 0
total_claims: deque
total_recoveries: deque
total_premiums: deque
annual_results: deque
__post_init__()[source]

Re-initialise history deques with the configured maxlen.

simulate_year(annual_claims: List[float], claim_times: List[float] | None = None) Dict[str, Any][source]

Simulate one year of the insurance program.

Parameters:
  • annual_claims (List[float]) – List of claim amounts for the year.

  • claim_times (Optional[List[float]]) – Optional list of claim times (0-1 for year fraction).

Return type:

Dict[str, Any]

Returns:

Annual results dictionary.

get_summary_statistics() Dict[str, Any][source]

Calculate summary statistics across all simulated years.

Uses running totals so that results remain accurate even after older entries have been evicted from the bounded history deques.

Return type:

Dict[str, Any]

Returns:

Dictionary with multi-year statistics.
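
The bounded-history-plus-running-totals pattern ProgramState describes can be sketched with a `collections.deque(maxlen=...)` alongside lifetime accumulators (a simplified illustration, not the class's actual fields):

```python
from collections import deque

class RunningHistory:
    """Bounded history plus lifetime totals: statistics stay correct even
    after old entries are evicted from the window."""
    def __init__(self, max_years=None):
        self.values = deque(maxlen=max_years)
        self.lifetime_total = 0.0
        self.count = 0

    def append(self, value):
        self.values.append(value)   # deque drops the oldest entry when full
        self.lifetime_total += value
        self.count += 1

    def lifetime_mean(self):
        return self.lifetime_total / self.count if self.count else 0.0

h = RunningHistory(max_years=3)
for v in [10, 20, 30, 40]:
    h.append(v)
# The window holds only the last 3 values, but the mean still uses all 4.
```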

ergodic_insurance.ledger module

Event-sourcing ledger for financial transactions.

This module implements a simple ledger system that tracks individual financial transactions using double-entry accounting. This provides transaction-level detail that is lost when using only point-in-time metrics snapshots.

The ledger enables:
  • Perfect reconciliation between financial statements

  • Direct method cash flow statement generation

  • Audit trail for all financial changes

  • Understanding of WHY balances changed (e.g., “was this AR change a write-off or a payment?”)

Example

Record a sale on credit:

ledger = Ledger()
ledger.record_double_entry(
    date=5,  # Year 5
    debit_account="accounts_receivable",
    credit_account="revenue",
    amount=1_000_000,
    description="Annual sales on credit"
)

Generate cash flows for a period:

operating_cash_flows = ledger.get_cash_flows(period=5)
print(f"Cash from customers: ${operating_cash_flows['cash_from_customers']:,.0f}")
class AccountType(*values)[source]

Bases: Enum

Classification of accounts per GAAP chart of accounts.

ASSET

Resources owned by the company (debit normal balance)

LIABILITY

Obligations owed to others (credit normal balance)

EQUITY

Owner’s residual interest (credit normal balance)

REVENUE

Income from operations (credit normal balance)

EXPENSE

Costs of operations (debit normal balance)

ASSET = 'asset'
LIABILITY = 'liability'
EQUITY = 'equity'
REVENUE = 'revenue'
EXPENSE = 'expense'
class AccountName(*values)[source]

Bases: Enum

Standard account names for the chart of accounts.

Using this enum instead of raw strings prevents typos that would silently result in zero balances on financial statements. See Issue #260.

Account names are grouped by their AccountType:

Assets (debit normal balance):

CASH, ACCOUNTS_RECEIVABLE, INVENTORY, PREPAID_INSURANCE, INSURANCE_RECEIVABLES, GROSS_PPE, ACCUMULATED_DEPRECIATION, RESTRICTED_CASH, COLLATERAL, DEFERRED_TAX_ASSET

Liabilities (credit normal balance):

ACCOUNTS_PAYABLE, ACCRUED_EXPENSES, ACCRUED_WAGES, ACCRUED_TAXES, ACCRUED_INTEREST, CLAIM_LIABILITIES, SHORT_TERM_BORROWINGS, UNEARNED_REVENUE

Equity (credit normal balance):

RETAINED_EARNINGS, COMMON_STOCK, DIVIDENDS

Revenue (credit normal balance):

REVENUE, SALES_REVENUE, INTEREST_INCOME, INSURANCE_RECOVERY

Expenses (debit normal balance):

COST_OF_GOODS_SOLD, OPERATING_EXPENSES, DEPRECIATION_EXPENSE, INSURANCE_EXPENSE, INSURANCE_LOSS, LAE_EXPENSE, TAX_EXPENSE, INTEREST_EXPENSE, COLLATERAL_EXPENSE, WAGE_EXPENSE

Example

Use AccountName instead of strings to prevent typos:

from ergodic_insurance.ledger import AccountName, Ledger

ledger = Ledger()
ledger.record_double_entry(
    date=5,
    debit_account=AccountName.ACCOUNTS_RECEIVABLE,  # Safe
    credit_account=AccountName.REVENUE,
    amount=1_000_000,
    transaction_type=TransactionType.REVENUE,
)

# This would be a compile/lint error:
# debit_account=AccountName.ACCOUNT_RECEIVABLE  # Typo caught!
CASH = 'cash'
ACCOUNTS_RECEIVABLE = 'accounts_receivable'
INVENTORY = 'inventory'
PREPAID_INSURANCE = 'prepaid_insurance'
INSURANCE_RECEIVABLES = 'insurance_receivables'
GROSS_PPE = 'gross_ppe'
ACCUMULATED_DEPRECIATION = 'accumulated_depreciation'
RESTRICTED_CASH = 'restricted_cash'
COLLATERAL = 'collateral'
DEFERRED_TAX_ASSET = 'deferred_tax_asset'
DTA_VALUATION_ALLOWANCE = 'dta_valuation_allowance'
ACCOUNTS_PAYABLE = 'accounts_payable'
ACCRUED_EXPENSES = 'accrued_expenses'
ACCRUED_WAGES = 'accrued_wages'
ACCRUED_TAXES = 'accrued_taxes'
ACCRUED_INTEREST = 'accrued_interest'
CLAIM_LIABILITIES = 'claim_liabilities'
SHORT_TERM_BORROWINGS = 'short_term_borrowings'
DEFERRED_TAX_LIABILITY = 'deferred_tax_liability'
UNEARNED_REVENUE = 'unearned_revenue'
RETAINED_EARNINGS = 'retained_earnings'
COMMON_STOCK = 'common_stock'
DIVIDENDS = 'dividends'
REVENUE = 'revenue'
SALES_REVENUE = 'sales_revenue'
INTEREST_INCOME = 'interest_income'
INSURANCE_RECOVERY = 'insurance_recovery'
COST_OF_GOODS_SOLD = 'cost_of_goods_sold'
OPERATING_EXPENSES = 'operating_expenses'
DEPRECIATION_EXPENSE = 'depreciation_expense'
INSURANCE_EXPENSE = 'insurance_expense'
INSURANCE_LOSS = 'insurance_loss'
TAX_EXPENSE = 'tax_expense'
INTEREST_EXPENSE = 'interest_expense'
COLLATERAL_EXPENSE = 'collateral_expense'
WAGE_EXPENSE = 'wage_expense'
LAE_EXPENSE = 'lae_expense'
RESERVE_DEVELOPMENT = 'reserve_development'
class EntryType(*values)[source]

Bases: Enum

Type of ledger entry - debit or credit.

In double-entry accounting:
  • DEBIT increases assets and expenses, decreases liabilities and equity

  • CREDIT decreases assets and expenses, increases liabilities and equity

DEBIT = 'debit'
CREDIT = 'credit'
class TransactionType(*values)[source]

Bases: Enum

Classification of transaction for cash flow statement mapping.

These types enable automatic classification into operating, investing, or financing activities for cash flow statement generation.

REVENUE = 'revenue'
COLLECTION = 'collection'
EXPENSE = 'expense'
PAYMENT = 'payment'
WAGE_PAYMENT = 'wage_payment'
INTEREST_PAYMENT = 'interest_payment'
INVENTORY_PURCHASE = 'inventory_purchase'
INVENTORY_SALE = 'inventory_sale'
INSURANCE_PREMIUM = 'insurance_premium'
INSURANCE_CLAIM = 'insurance_claim'
TAX_ACCRUAL = 'tax_accrual'
TAX_PAYMENT = 'tax_payment'
DTA_ADJUSTMENT = 'dta_adjustment'
DTL_ADJUSTMENT = 'dtl_adjustment'
RESERVE_DEVELOPMENT = 'reserve_development'
DEPRECIATION = 'depreciation'
WORKING_CAPITAL = 'working_capital'
CAPEX = 'capex'
ASSET_SALE = 'asset_sale'
DIVIDEND = 'dividend'
EQUITY_ISSUANCE = 'equity_issuance'
DEBT_ISSUANCE = 'debt_issuance'
DEBT_REPAYMENT = 'debt_repayment'
ADJUSTMENT = 'adjustment'
ACCRUAL = 'accrual'
WRITE_OFF = 'write_off'
REVALUATION = 'revaluation'
LIQUIDATION = 'liquidation'
TRANSFER = 'transfer'
RETAINED_EARNINGS = 'retained_earnings'
class LedgerEntry(date: int, account: str, amount: Decimal, entry_type: EntryType, transaction_type: TransactionType, description: str = '', reference_id: str = <factory>, timestamp: datetime | None = None, month: int = 0) None[source]

Bases: object

A single entry in the accounting ledger.

Each entry represents one side of a double-entry transaction. A complete transaction always has matching debits and credits.

date

Period (year) when the transaction occurred

account

Name of the account affected (e.g., “cash”, “accounts_receivable”)

amount

Dollar amount of the entry (always positive)

entry_type

DEBIT or CREDIT

transaction_type

Classification for cash flow mapping

description

Human-readable description of the transaction

reference_id

Lightweight ID linking both sides of a double-entry transaction

timestamp

Datetime when entry was recorded (None in simulation hot path)

month

Optional month within the year (0-11)

date: int
account: str
amount: Decimal
entry_type: EntryType
transaction_type: TransactionType
description: str
reference_id: str
timestamp: datetime | None
month: int
__post_init__() None[source]

Validate entry after initialization.

Return type:

None

property signed_amount: Decimal

Return amount with sign based on entry type.

For balance calculations:
  • Assets/Expenses: Debit positive, Credit negative

  • Liabilities/Equity/Revenue: Credit positive, Debit negative

This property returns the raw signed amount for debits (+) and credits (-). The Ledger class handles account type normalization.
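
The two-step sign convention can be sketched as follows (a standalone illustration that mirrors, but does not import, the module's EntryType enum):

```python
from decimal import Decimal
from enum import Enum

class EntryType(Enum):       # mirrors ergodic_insurance.ledger.EntryType
    DEBIT = "debit"
    CREDIT = "credit"

def signed_amount(amount: Decimal, entry_type: EntryType) -> Decimal:
    """Raw sign convention: debits positive, credits negative."""
    return amount if entry_type is EntryType.DEBIT else -amount

def normalized_balance(raw: Decimal, credit_normal: bool) -> Decimal:
    """Flip the sign for credit-normal accounts (liability/equity/revenue)
    so a healthy balance reads positive."""
    return -raw if credit_normal else raw

# A $1,000 credit to revenue: raw signed amount is -1000; after
# normalization for a credit-normal account it reads +1000.
raw = signed_amount(Decimal("1000"), EntryType.CREDIT)
bal = normalized_balance(raw, credit_normal=True)
```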

__deepcopy__(memo: Dict[int, Any]) LedgerEntry[source]

Create a deep copy of this ledger entry.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

LedgerEntry

Returns:

Independent copy of this LedgerEntry

class Ledger(strict_validation: bool = True, simulation_mode: bool = False) None[source]

Bases: object

Double-entry accounting ledger for event sourcing.

The Ledger tracks all financial transactions at the entry level, enabling perfect reconciliation and direct method cash flow generation.

entries

List of all ledger entries

chart_of_accounts

Mapping of account names to their types

Thread Safety:

This class is not thread-safe. Concurrent reads are safe, but concurrent writes (record, record_double_entry, prune_entries, clear) or a mix of reads and writes require external synchronisation (e.g. a threading.Lock). Each simulation trial should use its own Ledger instance.

entries: List[LedgerEntry]
chart_of_accounts: Dict[str, AccountType]
record(entry: LedgerEntry) None[source]

Record a single ledger entry.

Parameters:

entry (LedgerEntry) – The LedgerEntry to add to the ledger

Raises:

ValueError – If strict_validation is True and the account name is not in the chart of accounts.

Return type:

None

Note

Prefer using record_double_entry() for complete transactions to ensure debits always equal credits.

record_double_entry(date: int, debit_account: AccountName | str, credit_account: AccountName | str, amount: Decimal | float | int, transaction_type: TransactionType, description: str = '', month: int = 0) Tuple[LedgerEntry | None, LedgerEntry | None][source]

Record a complete double-entry transaction.

Creates matching debit and credit entries with the same reference_id.

Parameters:
  • date (int) – Period (year) of the transaction

  • debit_account (Union[AccountName, str]) – Account to debit (increase assets/expenses). Can be AccountName enum (recommended) or string.

  • credit_account (Union[AccountName, str]) – Account to credit (increase liabilities/equity/revenue). Can be AccountName enum (recommended) or string.

  • amount (Union[Decimal, float, int]) – Dollar amount of the transaction (converted to Decimal)

  • transaction_type (TransactionType) – Classification for cash flow mapping

  • description (str) – Human-readable description

  • month (int) – Optional month within the year (0-11)

Return type:

Tuple[Optional[LedgerEntry], Optional[LedgerEntry]]

Returns:

Tuple of (debit_entry, credit_entry), or (None, None) for zero-amount transactions (Issue #315).

Raises:

ValueError – If amount is negative, or if account names are invalid (when strict_validation is True)

Example

Record a cash sale using AccountName enum (recommended):

debit, credit = ledger.record_double_entry(
    date=5,
    debit_account=AccountName.CASH,
    credit_account=AccountName.REVENUE,
    amount=500_000,
    transaction_type=TransactionType.REVENUE,
    description="Cash sales"
)

String account names still work but are validated:

debit, credit = ledger.record_double_entry(
    date=5,
    debit_account="cash",  # Validated against chart
    credit_account="revenue",
    amount=500_000,
    transaction_type=TransactionType.REVENUE,
)
get_balance(account: AccountName | str, as_of_date: int | None = None) Decimal[source]

Calculate the balance for an account.

Parameters:
  • account (Union[AccountName, str]) – Name of the account (AccountName enum recommended, string accepted)

  • as_of_date (Optional[int]) – Optional period to calculate balance as of (inclusive). When None, returns from cache in O(1). When specified, iterates through entries (O(N) for historical queries).

Return type:

Decimal

Returns:

Current balance of the account as Decimal, properly signed based on account type:
  • Assets/Expenses: positive = debit balance

  • Liabilities/Equity/Revenue: positive = credit balance

Example

Get current cash balance:

cash = ledger.get_balance(AccountName.CASH)
print(f"Cash: ${cash:,.0f}")

# String also works (validated)
cash = ledger.get_balance("cash")
get_period_change(account: AccountName | str, period: int, month: int | None = None) Decimal[source]

Calculate the change in account balance for a specific period.

Parameters:
  • account (Union[AccountName, str]) – Name of the account (AccountName enum recommended, string accepted)

  • period (int) – Year/period to calculate change for

  • month (Optional[int]) – Optional specific month within the period

Return type:

Decimal

Returns:

Net change in account balance during the period as Decimal

get_entries(account: AccountName | str | None = None, start_date: int | None = None, end_date: int | None = None, transaction_type: TransactionType | None = None) List[LedgerEntry][source]

Query ledger entries with optional filters.

Parameters:
  • account (Union[AccountName, str, None]) – Filter by account name (AccountName enum or string)

  • start_date (Optional[int]) – Filter by minimum period (inclusive)

  • end_date (Optional[int]) – Filter by maximum period (inclusive)

  • transaction_type (Optional[TransactionType]) – Filter by transaction classification

Return type:

List[LedgerEntry]

Returns:

List of matching LedgerEntry objects

Example

Get all cash entries for year 5:

cash_entries = ledger.get_entries(
    account=AccountName.CASH,
    start_date=5,
    end_date=5
)
sum_by_transaction_type(transaction_type: TransactionType, period: int, account: AccountName | str | None = None, entry_type: EntryType | None = None) Decimal[source]

Sum entries by transaction type for cash flow extraction.

Parameters:
  • transaction_type (TransactionType) – Transaction classification to sum.

  • period (int) – Year/period to sum over.

  • account (Union[AccountName, str, None]) – Optional filter by account name.

  • entry_type (Optional[EntryType]) – Optional filter by DEBIT or CREDIT entries.

Return type:

Decimal

Returns:

Sum of matching entries as Decimal (absolute value)

Example

Get total collections for year 5:

collections = ledger.sum_by_transaction_type(
    transaction_type=TransactionType.COLLECTION,
    period=5,
    account=AccountName.CASH,
    entry_type=EntryType.DEBIT
)
get_cash_flows(period: int) Dict[str, Decimal][source]

Extract cash flows for direct method cash flow statement.

Sums all cash-affecting transactions by category for the specified period.

Parameters:

period (int) – Year/period to extract cash flows for

Returns:

  • cash_from_customers: Collections on AR + cash sales

  • cash_to_suppliers: Inventory + expense payments

  • cash_for_insurance: Premium payments

  • cash_for_claim_losses: Claim-related asset reduction payments

  • cash_for_taxes: Tax payments

  • cash_for_wages: Wage payments

  • cash_for_interest: Interest payments

  • capital_expenditures: PP&E purchases

  • dividends_paid: Dividend payments

  • net_operating: Total operating cash flow

  • net_investing: Total investing cash flow

  • net_financing: Total financing cash flow

Return type:

Dictionary with cash flow categories as Decimal values

Example

Generate direct method cash flow:

flows = ledger.get_cash_flows(period=5)
print(f"Operating: ${flows['net_operating']:,.0f}")
print(f"Investing: ${flows['net_investing']:,.0f}")
print(f"Financing: ${flows['net_financing']:,.0f}")
verify_balance() Tuple[bool, Decimal][source]

Verify that debits equal credits (accounting equation).

Return type:

Tuple[bool, Decimal]

Returns:

Tuple of (is_balanced, difference):
  • is_balanced: True if debits exactly equal credits (using Decimal precision)

  • difference: Total debits minus total credits as Decimal

Example

Check ledger integrity:

balanced, diff = ledger.verify_balance()
if not balanced:
    warnings.warn(
        f"Ledger out of balance by ${diff:,.2f}",
        stacklevel=2,
    )
get_trial_balance(as_of_date: int | None = None) Dict[str, Decimal][source]

Generate a trial balance showing all account balances.

When as_of_date is None, reads directly from the O(1) balance cache. When a date is specified, performs a single O(N) pass over all entries instead of the previous O(N*M) approach (Issue #315).

Parameters:

as_of_date (Optional[int]) – Optional period to generate balance as of

Return type:

Dict[str, Decimal]

Returns:

Dictionary mapping account names to their balances as Decimal

Example

Review all balances:

trial = ledger.get_trial_balance()
for account, balance in trial.items():
    print(f"{account}: ${balance:,.0f}")
prune_entries(before_date: int) int[source]

Discard entries older than before_date to bound memory (Issue #315).

Before discarding, a per-account balance snapshot is computed so that get_balance(account, as_of_date) and get_trial_balance still return correct values for dates >= the prune point.

Entries with date < before_date are removed. The current balance cache (_balances) is unaffected because it already holds the cumulative totals.

Parameters:

before_date (int) – Entries with date strictly less than this value are pruned.

Return type:

int

Returns:

Number of entries removed.

Note

After pruning, historical queries for dates prior to before_date will reflect the snapshot balance at the prune boundary, not the true historical balance at that earlier date.
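
The prune-with-snapshot idea can be sketched with a miniature ledger (illustrative only; the real Ledger uses Decimal amounts and a richer entry type):

```python
from collections import defaultdict

class MiniLedger:
    """Discard old entries but keep a per-account balance snapshot at the
    prune boundary, so as-of queries for later dates stay correct."""
    def __init__(self):
        self.entries = []                    # (date, account, signed_amount)
        self.snapshot = defaultdict(float)   # balances for pruned dates

    def record(self, date, account, signed_amount):
        self.entries.append((date, account, signed_amount))

    def prune(self, before_date):
        # Fold pruned entries into the snapshot before discarding them.
        for date, account, amt in self.entries:
            if date < before_date:
                self.snapshot[account] += amt
        self.entries = [e for e in self.entries if e[0] >= before_date]

    def balance(self, account, as_of_date):
        total = self.snapshot[account]       # 0.0 if never pruned
        total += sum(amt for d, a, amt in self.entries
                     if a == account and d <= as_of_date)
        return total

led = MiniLedger()
led.record(1, "cash", 100.0)
led.record(2, "cash", 50.0)
led.record(3, "cash", -30.0)
led.prune(before_date=3)   # years 1-2 folded into the snapshot
```

Queries at or after the prune boundary remain exact; queries before it return the boundary snapshot, matching the caveat in the Note above.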

clear() None[source]

Clear all entries from the ledger.

Useful for resetting the ledger during simulation reset. Also resets the balance cache (Issue #259) and pruning state (Issue #315).

Return type:

None

__len__() int[source]

Return the number of entries in the ledger.

Return type:

int

__repr__() str[source]

Return string representation of the ledger.

Return type:

str

__deepcopy__(memo: Dict[int, Any]) Ledger[source]

Create a deep copy of this ledger.

Preserves all entries and the balance cache for O(1) balance queries.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

Ledger

Returns:

Independent copy of this Ledger with all entries and cached balances

ergodic_insurance.loss_distributions module

Enhanced loss distributions for manufacturing risk modeling.

This module provides parametric loss distributions for realistic insurance claim modeling, including attritional losses, large losses, and catastrophic events with revenue-dependent frequency scaling.

class LossDistribution(seed: int | SeedSequence | None = None)[source]

Bases: ABC

Abstract base class for loss severity distributions.

Provides a common interface for generating loss amounts and calculating statistical properties of the distribution.

abstractmethod generate_severity(n_samples: int) ndarray[source]

Generate loss severity samples.

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of loss amounts.

abstractmethod expected_value() float[source]

Calculate the analytical expected value of the distribution.

Return type:

float

Returns:

Expected value when analytically available, otherwise estimated.

abstractmethod limited_expected_value(limit: float) float[source]

Calculate the limited expected value E[min(X, limit)].

The limited expected value (LEV) is the expected claim cost when losses are capped at a given limit. It is the fundamental building block for layer pricing, increased limits factors, and loss elimination ratios.

Reference: Klugman, Panjer, Willmot — Loss Models, Chapter 5.

Parameters:

limit (float) – The maximum value to which losses are capped.

Return type:

float

Returns:

E[min(X, limit)], the expected value of the loss limited to limit.

reset_seed(seed) None[source]

Reset the random seed for reproducibility.

Parameters:

seed – New random seed to use (int or SeedSequence).

Return type:

None

class LognormalLoss(mean: float | None = None, cv: float | None = None, mu: float | None = None, sigma: float | None = None, seed: int | None = None)[source]

Bases: LossDistribution

Lognormal loss severity distribution.

Common for attritional and large losses in manufacturing. Parameters can be specified as either (mean, cv) or (mu, sigma).
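When (mean, cv) is supplied, the underlying normal parameters presumably follow the standard lognormal identities; a standalone sketch of that conversion:

```python
import math

def lognormal_params_from_mean_cv(mean, cv):
    """Convert (mean, cv) to the underlying normal's (mu, sigma).

    Standard identities: sigma^2 = ln(1 + cv^2), mu = ln(mean) - sigma^2 / 2.
    """
    sigma2 = math.log(1.0 + cv * cv)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

mu, sigma = lognormal_params_from_mean_cv(mean=25_000, cv=1.5)
# Round trip: exp(mu + sigma^2/2) recovers the mean,
# sqrt(exp(sigma^2) - 1) recovers the cv
```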

generate_severity(n_samples: int) ndarray[source]

Generate lognormal loss samples.

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of loss amounts.

expected_value() float[source]

Calculate expected value of lognormal distribution.

Return type:

float

Returns:

Analytical expected value.

limited_expected_value(limit: float) float[source]

Calculate the limited expected value for the lognormal distribution.

LEV(d) = E[X] * Phi((ln(d) - mu - sigma^2) / sigma) + d * (1 - Phi((ln(d) - mu) / sigma))

where Phi is the standard normal CDF.

Reference: Klugman, Panjer, Willmot — Loss Models, Theorem 5.3.

Parameters:

limit (float) – The cap on individual losses.

Return type:

float

Returns:

E[min(X, d)] for the lognormal distribution.
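As a sanity check, the closed-form lognormal LEV can be compared against a direct Monte Carlo estimate (a standalone sketch using scipy, not the package's implementation):

```python
import numpy as np
from scipy.stats import norm

def lognormal_lev(d, mu, sigma):
    # LEV(d) = E[X] * Phi((ln d - mu - sigma^2)/sigma) + d * (1 - Phi((ln d - mu)/sigma))
    ex = np.exp(mu + sigma**2 / 2)
    z1 = (np.log(d) - mu - sigma**2) / sigma
    z2 = (np.log(d) - mu) / sigma
    return ex * norm.cdf(z1) + d * (1 - norm.cdf(z2))

mu, sigma, limit = 10.0, 1.5, 100_000.0
rng = np.random.default_rng(42)
mc = np.minimum(rng.lognormal(mu, sigma, 1_000_000), limit).mean()
analytic = lognormal_lev(limit, mu, sigma)
# The two estimates agree to well under 1% at this sample size
```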

class ParetoLoss(alpha: float, xm: float, seed: int | None = None)[source]

Bases: LossDistribution

Pareto loss severity distribution for catastrophic events.

Heavy-tailed distribution suitable for modeling extreme losses with potentially unbounded severity.

generate_severity(n_samples: int) ndarray[source]

Generate Pareto loss samples.

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of loss amounts.

expected_value() float[source]

Calculate expected value of Pareto distribution.

Return type:

float

Returns:

Analytical expected value if it exists (alpha > 1), else inf.

limited_expected_value(limit: float) float[source]

Calculate the limited expected value for the Pareto Type I distribution.

For X ~ Pareto(alpha, xm) with S(x) = (xm/x)^alpha, x >= xm:

LEV(d) = d                                                          if d <= xm
LEV(d) = alpha*xm/(alpha-1) - xm^alpha * d^(1-alpha) / (alpha-1)    if d > xm, alpha > 1
LEV(d) = xm * (1 + ln(d/xm))                                        if d > xm, alpha = 1

Derived via the survival function integral: LEV(d) = integral_0^d S(x) dx.

Reference: Klugman, Panjer, Willmot — Loss Models, Chapter 5.

Parameters:

limit (float) – The cap on individual losses.

Return type:

float

Returns:

E[min(X, d)] for the Pareto distribution.
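The closed form can be verified against numerical integration of the survival function (a standalone sketch, not the package's implementation):

```python
import numpy as np
from scipy.integrate import quad

def pareto_lev(d, alpha, xm):
    # Closed form for Pareto Type I, alpha > 1
    if d <= xm:
        return d
    return alpha * xm / (alpha - 1) - xm**alpha * d ** (1 - alpha) / (alpha - 1)

def pareto_lev_numeric(d, alpha, xm):
    # LEV(d) = integral_0^d S(x) dx, with S(x) = 1 for x < xm and (xm/x)^alpha above
    survival = lambda x: 1.0 if x < xm else (xm / x) ** alpha
    value, _ = quad(survival, 0.0, d, points=[xm])
    return value

alpha, xm, limit = 2.5, 1_000_000.0, 5_000_000.0
closed = pareto_lev(limit, alpha, xm)
numeric = pareto_lev_numeric(limit, alpha, xm)
```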

class GeneralizedParetoLoss(severity_shape: float, severity_scale: float, seed: int | SeedSequence | None = None)[source]

Bases: LossDistribution

Generalized Pareto distribution for modeling excesses over threshold.

Implements the GPD using scipy.stats.genpareto for Peaks Over Threshold (POT) extreme value modeling. According to the Pickands-Balkema-de Haan theorem, excesses over a sufficiently high threshold asymptotically follow a GPD.

The distribution models the conditional excess: (X - u) | X > u ~ GPD(ξ, β)

Shape parameter interpretation: - ξ < 0: Bounded distribution (Type III - short-tailed) - ξ = 0: Exponential distribution (Type I - medium-tailed) - ξ > 0: Pareto-type distribution (Type II - heavy-tailed)

generate_severity(n_samples: int) ndarray[source]

Generate GPD samples (excesses above threshold).

Parameters:

n_samples (int) – Number of samples to generate.

Return type:

ndarray

Returns:

Array of excess amounts above threshold.

expected_value() float[source]

Calculate expected excess above threshold.

Return type:

float

Returns:

Analytical expected value if it exists (ξ < 1), else inf. E[X - u | X > u] = β / (1 - ξ) for ξ < 1

limited_expected_value(limit: float) float[source]

Calculate the limited expected value for the Generalized Pareto distribution.

For X ~ GPD(xi, beta) with loc=0:

When xi != 0:

LEV(d) = (beta / (1 - xi)) * [1 - (1 + xi*d/beta)^(1 - 1/xi)]

valid for d < -beta/xi when xi < 0 (the upper bound of support).

When xi == 0 (exponential):

LEV(d) = beta * (1 - exp(-d/beta))

For d >= upper bound (xi < 0): LEV(d) = E[X] (full expected value).

Reference: Klugman, Panjer, Willmot — Loss Models, Chapter 5;

Hosking & Wallis (1987).

Parameters:

limit (float) – The cap on individual losses.

Return type:

float

Returns:

E[min(X, d)] for the GPD.
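A quick check of the GPD LEV against simulation with scipy.stats.genpareto (a standalone sketch, not the package's implementation):

```python
import numpy as np
from scipy.stats import genpareto

def gpd_lev(d, xi, beta):
    # Limited expected value of GPD(xi, beta) with loc=0
    if abs(xi) < 1e-12:
        return beta * (1.0 - np.exp(-d / beta))  # exponential special case
    return (beta / (1.0 - xi)) * (1.0 - (1.0 + xi * d / beta) ** (1.0 - 1.0 / xi))

xi, beta, limit = 0.3, 500_000.0, 2_000_000.0
samples = genpareto.rvs(c=xi, scale=beta, size=1_000_000,
                        random_state=np.random.default_rng(7))
mc = np.minimum(samples, limit).mean()
analytic = gpd_lev(limit, xi, beta)
```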

class LossEvent(amount: float, time: float = 0.0, loss_type: str = 'operational', timestamp: float | None = None, event_type: str | None = None, description: str | None = None) None[source]

Bases: object

Represents a single loss event with timing and amount.

amount: float
time: float = 0.0
loss_type: str = 'operational'
timestamp: float | None = None
event_type: str | None = None
description: str | None = None
__post_init__()[source]

Handle alternative parameter names.

__le__(other)[source]

Support ordering by amount.

__lt__(other)[source]

Support ordering by amount.

class LossData(timestamps: ndarray = <factory>, loss_amounts: ndarray = <factory>, loss_types: List[str] = <factory>, claim_ids: List[str] = <factory>, development_factors: ndarray | None = None, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Unified loss data structure for cross-module compatibility.

This dataclass provides a standardized interface for loss data that can be used consistently across all modules in the framework.

timestamps: ndarray
loss_amounts: ndarray
loss_types: List[str]
claim_ids: List[str]
development_factors: ndarray | None = None
metadata: Dict[str, Any]
validate() bool[source]

Validate data consistency.

Return type:

bool

Returns:

True if data is valid and consistent, False otherwise.

to_ergodic_format() ergodic_insurance.ergodic_analyzer.ErgodicData[source]

Convert to ergodic analyzer format.

Return type:

ergodic_insurance.ergodic_analyzer.ErgodicData

Returns:

Data formatted for ergodic analysis.

apply_insurance(program: ergodic_insurance.insurance_program.InsuranceProgram) LossData[source]

Apply insurance recoveries to losses.

Parameters:

program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to apply.

Return type:

LossData

Returns:

New LossData with insurance recoveries applied.

classmethod from_loss_events(events: List[LossEvent]) LossData[source]

Create LossData from a list of LossEvent objects.

Parameters:

events (List[LossEvent]) – List of LossEvent objects.

Return type:

LossData

Returns:

LossData instance with consolidated event data.

to_loss_events() List[LossEvent][source]

Convert LossData back to LossEvent list.

Return type:

List[LossEvent]

Returns:

List of LossEvent objects.

get_annual_aggregates(years: int) Dict[int, float][source]

Aggregate losses by year.

Parameters:

years (int) – Number of years to aggregate over.

Return type:

Dict[int, float]

Returns:

Dictionary mapping year to total loss amount.
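A plausible sketch of year-bucketed aggregation (illustrative only; the actual implementation may differ in edge-case handling):

```python
import numpy as np

def annual_aggregates(timestamps, amounts, years):
    # Bucket each loss into the integer year of its timestamp;
    # losses beyond the requested window are dropped
    totals = {year: 0.0 for year in range(years)}
    for t, amount in zip(timestamps, amounts):
        year = int(t)
        if 0 <= year < years:
            totals[year] += amount
    return totals

totals = annual_aggregates(
    timestamps=np.array([0.2, 0.9, 1.5, 3.1]),
    amounts=np.array([10_000.0, 5_000.0, 20_000.0, 1_000.0]),
    years=3,
)
# Year 0 collects the first two losses; the year-3.1 loss falls outside the window
```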

calculate_statistics() Dict[str, float][source]

Calculate comprehensive statistics for the loss data.

Return type:

Dict[str, float]

Returns:

Dictionary of statistical metrics.

class FrequencyGenerator(base_frequency: float, revenue_scaling_exponent: float = 0.0, reference_revenue: float = 10000000, seed: int | None = None)[source]

Bases: object

Base class for generating loss event frequencies.

Supports revenue-dependent scaling of claim frequencies.

reseed(seed) None[source]

Re-seed the random state.

Parameters:

seed – New random seed (int or SeedSequence).

Return type:

None

get_scaled_frequency(revenue: float) float[source]

Calculate revenue-scaled frequency.

Parameters:

revenue (float) – Current revenue level (can be float or Decimal).

Return type:

float

Returns:

Scaled frequency parameter.

generate_event_times(duration: float, revenue: float) ndarray[source]

Generate event times using Poisson process.

Parameters:
  • duration (float) – Time period in years.

  • revenue (float) – Revenue level for frequency scaling.

Return type:

ndarray

Returns:

Array of event times.
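A minimal sketch of the two ideas above: revenue-scaled frequency and Poisson event-time generation. The power-law form of scaled_frequency is an assumption for illustration; the actual formula used by get_scaled_frequency is defined in the source.

```python
import numpy as np

def scaled_frequency(base, revenue, reference_revenue, exponent):
    # Hypothetical power-law scaling: lambda = base * (revenue / reference)^exponent
    return base * (revenue / reference_revenue) ** exponent

def poisson_event_times(rate, duration, rng):
    # Homogeneous Poisson process: draw N ~ Poisson(rate * duration),
    # then place the N event times uniformly (sorted) on [0, duration)
    n = rng.poisson(rate * duration)
    return np.sort(rng.uniform(0.0, duration, n))

rng = np.random.default_rng(42)
lam = scaled_frequency(5.0, revenue=20_000_000, reference_revenue=10_000_000, exponent=0.5)
times = poisson_event_times(lam, duration=1.0, rng=rng)
# Doubling revenue with exponent 0.5 raises frequency by a factor of sqrt(2)
```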

class AttritionalLossGenerator(base_frequency: float = 5.0, severity_mean: float = 25000, severity_cv: float = 1.5, revenue_scaling_exponent: float = 0.5, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Generator for high-frequency, low-severity attritional losses.

Typical for widget manufacturing: worker injuries, quality defects, minor property damage.

reseed(seed) None[source]

Re-seed all internal random states.

Parameters:

seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.

Return type:

None

generate_losses(duration: float, revenue: float) List[LossEvent][source]

Generate attritional loss events.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level.

Return type:

List[LossEvent]

Returns:

List of loss events.

class LargeLossGenerator(base_frequency: float = 0.3, severity_mean: float = 2000000, severity_cv: float = 2.0, revenue_scaling_exponent: float = 0.7, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Generator for medium-frequency, medium-severity large losses.

Typical for manufacturing: product recalls, major equipment failures, litigation settlements.

reseed(seed) None[source]

Re-seed all internal random states.

Parameters:

seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.

Return type:

None

generate_losses(duration: float, revenue: float) List[LossEvent][source]

Generate large loss events.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level.

Return type:

List[LossEvent]

Returns:

List of loss events.

class CatastrophicLossGenerator(base_frequency: float = 0.03, severity_alpha: float = 2.5, severity_xm: float = 1000000, revenue_scaling_exponent: float = 0.0, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]

Bases: object

Generator for low-frequency, high-severity catastrophic losses.

Uses Pareto distribution for heavy-tailed severity modeling. Examples: major equipment failure, facility damage, environmental disasters.

reseed(seed) None[source]

Re-seed all internal random states.

Parameters:

seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.

Return type:

None

generate_losses(duration: float, revenue: float = 10000000) List[LossEvent][source]

Generate catastrophic loss events.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level (not used for scaling).

Return type:

List[LossEvent]

Returns:

List of loss events.

class ManufacturingLossGenerator(attritional_params: dict | None = None, large_params: dict | None = None, catastrophic_params: dict | None = None, extreme_params: dict | None = None, exposure: ExposureBase | None = None, seed: int | None = None, frequency_trend: Trend | None = None, severity_trend: Trend | None = None)[source]

Bases: object

Composite loss generator for widget manufacturing risks.

Combines attritional, large, and catastrophic loss generators to provide comprehensive risk modeling.

reseed(seed: int) None[source]

Re-seed all internal random states using SeedSequence.

Derives independent child seeds for each sub-generator so that parallel workers produce statistically distinct loss sequences.

Parameters:

seed (int) – New random seed.

Return type:

None
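The child-seed derivation can be illustrated with numpy's SeedSequence.spawn (a sketch of the mechanism, not the exact internal wiring):

```python
from numpy.random import SeedSequence, default_rng

# One parent seed spawns statistically independent child streams,
# e.g. one per sub-generator (attritional / large / catastrophic)
parent = SeedSequence(42)
children = parent.spawn(3)
rngs = [default_rng(child) for child in children]

draws = [rng.normal() for rng in rngs]
# Re-deriving from the same parent seed reproduces the same draws
draws_again = [default_rng(c).normal() for c in SeedSequence(42).spawn(3)]
```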

classmethod create_simple(frequency: float = 0.1, severity_mean: float = 5000000, severity_std: float = 2000000, seed: int | None = None, *, attritional_frequency_ratio: float = 0.9, attritional_severity_factor: float = 0.5, large_frequency_ratio: float = 0.1, large_severity_factor: float = 2.0, large_cv_factor: float = 1.5, catastrophic_frequency: float = 0.001, catastrophic_pareto_alpha: float = 2.5, catastrophic_severity_factor: float = 5.0, frequency_trend: Trend | None = None, severity_trend: Trend | None = None) ManufacturingLossGenerator[source]

Create a simple loss generator (migration helper from ClaimGenerator).

This factory method provides a simplified interface similar to ClaimGenerator, making migration easier. It creates a generator with mostly attritional losses and minimal catastrophic risk.

Parameters:
  • frequency (float) – Annual frequency of losses (Poisson lambda).

  • severity_mean (float) – Mean loss amount in dollars.

  • severity_std (float) – Standard deviation of loss amount.

  • seed (Optional[int]) – Random seed for reproducibility.

  • attritional_frequency_ratio (float) – Fraction of frequency allocated to attritional losses. Default 0.9 (90%).

  • attritional_severity_factor (float) – Multiplier on severity_mean for attritional loss mean. Default 0.5.

  • large_frequency_ratio (float) – Fraction of frequency allocated to large losses. Default 0.1 (10%).

  • large_severity_factor (float) – Multiplier on severity_mean for large loss mean. Default 2.0.

  • large_cv_factor (float) – Multiplier on CV for large loss variability. Default 1.5.

  • catastrophic_frequency (float) – Absolute annual frequency for catastrophic losses (not scaled by frequency). Default 0.001.

  • catastrophic_pareto_alpha (float) – Pareto shape parameter for catastrophic losses. Default 2.5.

  • catastrophic_severity_factor (float) – Multiplier on severity_mean for the Pareto xm (minimum catastrophic loss). Default 5.0.

Return type:

ManufacturingLossGenerator

Returns:

ManufacturingLossGenerator configured for simple use case.

Examples

Simple usage (equivalent to ClaimGenerator):

generator = ManufacturingLossGenerator.create_simple(
    frequency=0.1,
    severity_mean=5_000_000,
    severity_std=2_000_000,
    seed=42
)
losses, stats = generator.generate_losses(duration=10, revenue=10_000_000)

Accessing loss amounts:

total_loss = sum(loss.amount for loss in losses)
print(f"Total losses: ${total_loss:,.0f}")
print(f"Number of events: {stats['total_losses']}")

Note

For advanced features (multiple loss types, extreme value modeling), use the standard __init__ method with explicit parameters.

See also

Migration guide: docs/migration_guides/claim_generator_migration.md

generate_losses(duration: float, revenue: float, include_catastrophic: bool = True, time: float = 0.0) Tuple[List[LossEvent], Dict[str, Any]][source]

Generate all types of losses for manufacturing operations.

Parameters:
  • duration (float) – Simulation period in years.

  • revenue (float) – Current revenue level.

  • include_catastrophic (bool) – Whether to include catastrophic events.

  • time (float) – Current time for exposure calculation (default 0.0).

Return type:

Tuple[List[LossEvent], Dict[str, Any]]

Returns:

Tuple of (all_losses, statistics_dict).

validate_distributions(n_simulations: int = 10000, duration: float = 1.0, revenue: float = 10000000) Dict[str, Dict[str, float]][source]

Validate distribution properties through simulation.

Parameters:
  • n_simulations (int) – Number of simulations to run.

  • duration (float) – Duration of each simulation.

  • revenue (float) – Revenue level for testing.

Return type:

Dict[str, Dict[str, float]]

Returns:

Dictionary of validation statistics.

perform_statistical_tests(samples: ndarray, distribution_type: str, params: Dict[str, Any], significance_level: float = 0.05) Dict[str, Any][source]

Perform statistical tests to validate distribution fit.

Parameters:
  • samples (ndarray) – Generated samples to test.

  • distribution_type (str) – Type of distribution (‘lognormal’ or ‘pareto’).

  • params (Dict[str, Any]) – Distribution parameters.

  • significance_level (float) – Significance level for tests.

Return type:

Dict[str, Any]

Returns:

Dictionary with test results.

ergodic_insurance.manufacturer module

Widget manufacturer financial model implementation.

This module implements the core financial model for a widget manufacturing company, providing comprehensive balance sheet management, insurance claim processing, and stochastic modeling capabilities. It serves as the central component of the ergodic insurance optimization framework.

The manufacturer model simulates realistic business operations including:
  • Asset-based revenue generation with configurable turnover ratios

  • Operating income calculations with industry-standard margins

  • Multi-layer insurance claim processing with deductibles and limits

  • Letter of credit collateral management for claim liabilities

  • Actuarial claim payment schedules over multiple years

  • Dynamic balance sheet evolution with growth and volatility

  • Integration with sophisticated stochastic processes

  • Comprehensive financial metrics and ratio analysis

Key Components:
  • WidgetManufacturer: Main financial model class

  • ClaimLiability: Actuarial claim payment tracking (re-exported)

  • TaxHandler: Tax calculation and accrual (re-exported)

Examples

Basic manufacturer setup and simulation:

from ergodic_insurance import ManufacturerConfig
from ergodic_insurance.manufacturer import WidgetManufacturer

config = ManufacturerConfig(
    initial_assets=10_000_000,
    asset_turnover_ratio=0.8,
    base_operating_margin=0.08,
    tax_rate=0.25,
    retention_ratio=0.7
)

manufacturer = WidgetManufacturer(config)

metrics = manufacturer.step(
    letter_of_credit_rate=0.015,
    growth_rate=0.05
)

print(f"ROE: {metrics['roe']:.1%}")
class WidgetManufacturer(config: ManufacturerConfig, stochastic_process: StochasticProcess | None = None, use_float: bool = False, simulation_mode: bool = False)[source]

Bases: BalanceSheetMixin, ClaimProcessingMixin, IncomeCalculationMixin, SolvencyMixin, MetricsCalculationMixin

Financial model for a widget manufacturing company.

This class models the complete financial operations of a manufacturing company including revenue generation, claim processing, collateral management, and balance sheet evolution over time.

The manufacturer maintains a balance sheet with assets, equity, and tracks insurance-related collateral. It can process insurance claims with multi-year payment schedules and manages working capital requirements.

config

Manufacturing configuration parameters

stochastic_process

Optional stochastic process for revenue volatility

assets

Current total assets

collateral

Letter of credit collateral for insurance claims

restricted_assets

Assets restricted as collateral

equity

Current equity (assets minus liabilities)

year

Current simulation year

outstanding_liabilities

List of active claim liabilities

metrics_history

Historical metrics for each simulation period

bankruptcy

Whether the company has gone bankrupt

bankruptcy_year

Year when bankruptcy occurred (if applicable)

Example

Running a multi-year simulation:

manufacturer = WidgetManufacturer(config)

for year in range(10):
    losses, _ = loss_generator.generate_losses(duration=1, revenue=revenue)
    for loss in losses:
        manufacturer.process_insurance_claim(
            loss.amount, deductible, limit
        )
    metrics = manufacturer.step(letter_of_credit_rate=0.015)
    print(f"Year {year}: ROE={metrics['roe']:.1%}")
property current_revenue: Decimal

Get current revenue based on current assets and turnover ratio.

property current_assets: Decimal

Get current total assets.

property current_equity: Decimal

Get current equity value.

property base_revenue: Decimal

Get base (initial) revenue for comparison.

property base_assets: Decimal

Get base (initial) assets for comparison.

property base_equity: Decimal

Get base (initial) equity for comparison.

__deepcopy__(memo: Dict[int, Any]) WidgetManufacturer[source]

Create a deep copy preserving all state for Monte Carlo forking.

Parameters:

memo (Dict[int, Any]) – Dictionary of already copied objects (for cycle detection)

Return type:

WidgetManufacturer

Returns:

Independent copy of this WidgetManufacturer with all state preserved

__getstate__() Dict[str, Any][source]

Get state for pickling (required for Windows multiprocessing).

Return type:

Dict[str, Any]

Returns:

Dictionary of all instance attributes

__setstate__(state: Dict[str, Any]) None[source]

Restore state from pickle (required for Windows multiprocessing).

Parameters:

state (Dict[str, Any]) – Dictionary of instance attributes to restore

Return type:

None

process_accrued_payments(time_resolution: str = 'annual', max_payable: Decimal | float | None = None) Decimal[source]

Process due accrual payments for the current period.

Parameters:
  • time_resolution (str) – “annual” or “monthly” for determining current period

  • max_payable (Union[Decimal, float, None]) – Optional maximum amount that can be paid.

Return type:

Decimal

Returns:

Total cash payments made for accruals in this period

record_wage_accrual(amount: float, payment_schedule: PaymentSchedule = PaymentSchedule.IMMEDIATE) None[source]

Record accrued wages to be paid later.

Parameters:
  • amount (float) – Wage amount to accrue

  • payment_schedule (PaymentSchedule) – When wages will be paid

Return type:

None

step(letter_of_credit_rate: Decimal | float = 0.015, growth_rate: Decimal | float = 0.0, time_resolution: str = 'annual', apply_stochastic: bool = False) Dict[str, Decimal | float | int | bool][source]

Execute one time step of the financial model simulation.

Parameters:
  • letter_of_credit_rate (Decimal | float) – Annual interest rate for letter of credit.

  • growth_rate (Decimal | float) – Revenue growth rate for the period.

  • time_resolution (str) – “annual” or “monthly”.

  • apply_stochastic (bool) – Whether to apply stochastic shocks.

Returns:

Comprehensive financial metrics dictionary.

Return type:

Dict[str, Decimal | float | int | bool]

reset() None[source]

Reset the manufacturer to initial state for new simulation.

This method restores all financial parameters to their configured initial values and clears historical data, enabling fresh simulation runs from the same starting point.

Bug Fixes (Issue #305):
  • FIX 1: Uses config.ppe_ratio directly instead of recalculating from margins

  • FIX 2: Initializes AR/Inventory to steady-state (matching __init__) instead of zero

Return type:

None

copy() WidgetManufacturer[source]

Create a deep copy of the manufacturer for parallel simulations.

Returns:

A new manufacturer instance with same configuration.

Return type:

WidgetManufacturer

classmethod create_fresh(config: ManufacturerConfig, stochastic_process: StochasticProcess | None = None, use_float: bool = False, simulation_mode: bool = False) WidgetManufacturer[source]

Create a fresh manufacturer from configuration alone.

Factory method that avoids copy.deepcopy by constructing a new instance directly from its config. Use this in hot loops (e.g. Monte Carlo workers) where each simulation needs a clean initial state.

Parameters:
  • config (ManufacturerConfig) – Manufacturing configuration parameters.

  • stochastic_process (Optional[StochasticProcess]) – Optional stochastic process instance. The caller is responsible for ensuring independence (e.g. by deep-copying the process once before passing it in).

  • use_float (bool) – If True, enable float mode (Issue #1142).

  • simulation_mode (bool) – If True, ledger skips entry storage (Issue #1146).

Return type:

WidgetManufacturer

Returns:

A new WidgetManufacturer in its initial state.

ergodic_insurance.monte_carlo module

High-performance Monte Carlo simulation engine for insurance optimization.

class MonteCarloConfig(n_simulations: int = 100000, n_years: int = 10, n_chains: int = 4, convergence_mcse_threshold: float = 0.01, parallel: bool = True, n_workers: int | None = None, chunk_size: int = 10000, use_float32: bool = False, cache_results: bool = True, checkpoint_interval: int | None = None, progress_bar: bool = True, seed: int | None = None, use_enhanced_parallel: bool = True, monitor_performance: bool = True, adaptive_chunking: bool = True, shared_memory: bool = True, enable_trajectory_storage: bool = False, trajectory_storage_config: StorageConfig | None = None, enable_advanced_aggregation: bool = True, aggregation_config: AggregationConfig | None = None, generate_summary_report: bool = False, summary_report_format: str = 'markdown', compute_bootstrap_ci: bool = False, bootstrap_confidence_level: float = 0.95, bootstrap_n_iterations: int = 10000, bootstrap_method: str = 'percentile', ruin_evaluation: List[int] | None = None, insolvency_tolerance: float = 10000, letter_of_credit_rate: float = 0.015, growth_rate: float = 0.0, time_resolution: str = 'annual', apply_stochastic: bool = False, enable_ledger_pruning: bool = True, crn_base_seed: int | None = None, use_gpu: bool = False) None[source]

Bases: object

Configuration for Monte Carlo simulation.

n_simulations

Number of simulation paths

n_years

Number of years per simulation

n_chains

Number of chains (retained for backward compatibility; not used for IID Monte Carlo convergence diagnostics — see issue #1353)

convergence_mcse_threshold

Maximum relative MCSE (MCSE / |mean|) for declaring convergence. Default 0.01 (1% of mean).

parallel

Whether to use multiprocessing

n_workers

Number of parallel workers (None for auto)

chunk_size

Size of chunks for parallel processing

use_float32

Use float32 for memory efficiency

cache_results

Cache intermediate results

checkpoint_interval

Save checkpoint every N simulations

progress_bar

Show progress bar

seed

Random seed for reproducibility

use_enhanced_parallel

Use enhanced parallel executor for better performance

monitor_performance

Track detailed performance metrics

adaptive_chunking

Enable adaptive chunk sizing

shared_memory

Enable shared memory for read-only data

letter_of_credit_rate

Annual LoC rate for collateral costs (default 1.5%)

growth_rate

Revenue growth rate per period (default 0.0)

time_resolution

Time step resolution, “annual” or “monthly” (default “annual”)

apply_stochastic

Whether to apply stochastic shocks (default False)

enable_ledger_pruning

Prune old ledger entries each year to bound memory (default True)

crn_base_seed

Common Random Numbers base seed for cross-scenario comparison. When set, the loss generator and stochastic process are reseeded at each (sim_id, year) boundary using SeedSequence([crn_base_seed, sim_id, year]). This ensures that compared scenarios (e.g. different deductibles) experience the same underlying random draws each year, dramatically reducing estimator variance for growth-lift metrics. (default None, disabled)

n_simulations: int = 100000
n_years: int = 10
n_chains: int = 4
convergence_mcse_threshold: float = 0.01
parallel: bool = True
n_workers: int | None = None
chunk_size: int = 10000
use_float32: bool = False
cache_results: bool = True
checkpoint_interval: int | None = None
progress_bar: bool = True
seed: int | None = None
use_enhanced_parallel: bool = True
monitor_performance: bool = True
adaptive_chunking: bool = True
shared_memory: bool = True
enable_trajectory_storage: bool = False
trajectory_storage_config: StorageConfig | None = None
enable_advanced_aggregation: bool = True
aggregation_config: AggregationConfig | None = None
generate_summary_report: bool = False
summary_report_format: str = 'markdown'
compute_bootstrap_ci: bool = False
bootstrap_confidence_level: float = 0.95
bootstrap_n_iterations: int = 10000
bootstrap_method: str = 'percentile'
ruin_evaluation: List[int] | None = None
insolvency_tolerance: float = 10000
letter_of_credit_rate: float = 0.015
growth_rate: float = 0.0
time_resolution: str = 'annual'
apply_stochastic: bool = False
enable_ledger_pruning: bool = True
crn_base_seed: int | None = None
use_gpu: bool = False
__post_init__()[source]

Validate configuration parameters.

Raises:

ValueError – If any parameter is out of its valid range.
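The crn_base_seed reseeding rule described above can be sketched with numpy directly (illustrative; the engine applies the same idea to its loss generator and stochastic process):

```python
from numpy.random import SeedSequence, default_rng

def crn_rng(crn_base_seed, sim_id, year):
    # Same (base_seed, sim_id, year) coordinates always yield the same stream,
    # so two compared scenarios see identical random draws at each point in time
    return default_rng(SeedSequence([crn_base_seed, sim_id, year]))

a = crn_rng(123, sim_id=0, year=5).normal()
b = crn_rng(123, sim_id=0, year=5).normal()  # same coordinates: identical draw
c = crn_rng(123, sim_id=0, year=6).normal()  # different year: independent draw
```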

class MonteCarloResults(final_assets: ndarray, annual_losses: ndarray, insurance_recoveries: ndarray, retained_losses: ndarray, growth_rates: ndarray, ruin_probability: Dict[str, float], metrics: Dict[str, float], convergence: Dict[str, ConvergenceStats], execution_time: float, config: MonteCarloConfig, performance_metrics: PerformanceMetrics | None = None, aggregated_results: Dict[str, Any] | None = None, time_series_aggregation: Dict[str, Any] | None = None, statistical_summary: Any | None = None, summary_report: str | None = None, bootstrap_confidence_intervals: Dict[str, Tuple[float, float]] | None = None) None[source]

Bases: object

Results from Monte Carlo simulation.

final_assets

Final asset values for each simulation

annual_losses

Annual loss amounts

insurance_recoveries

Insurance recovery amounts

retained_losses

Retained loss amounts

growth_rates

Realized growth rates

ruin_probability

Probability of ruin

metrics

Risk metrics calculated from results

convergence

Convergence statistics

execution_time

Total execution time in seconds

config

Simulation configuration used

performance_metrics

Detailed performance metrics (if monitoring enabled)

aggregated_results

Advanced aggregation results (if enabled)

time_series_aggregation

Time series aggregation results (if enabled)

statistical_summary

Complete statistical summary (if enabled)

summary_report

Formatted summary report (if generated)

bootstrap_confidence_intervals

Bootstrap confidence intervals for key metrics

final_assets: ndarray
annual_losses: ndarray
insurance_recoveries: ndarray
retained_losses: ndarray
growth_rates: ndarray
ruin_probability: Dict[str, float]
metrics: Dict[str, float]
convergence: Dict[str, ConvergenceStats]
execution_time: float
config: MonteCarloConfig
performance_metrics: PerformanceMetrics | None = None
aggregated_results: Dict[str, Any] | None = None
time_series_aggregation: Dict[str, Any] | None = None
statistical_summary: Any | None = None
summary_report: str | None = None
bootstrap_confidence_intervals: Dict[str, Tuple[float, float]] | None = None
summary() str[source]

Generate summary of simulation results.

Return type:

str

class MonteCarloEngine(loss_generator: ManufacturingLossGenerator, insurance_program: InsuranceProgram, manufacturer: WidgetManufacturer, config: MonteCarloConfig | None = None)[source]

Bases: object

High-performance Monte Carlo simulation engine for insurance analysis.

Provides efficient Monte Carlo simulation with support for parallel processing, convergence monitoring, checkpointing, and comprehensive result aggregation. Optimized for both high-end and budget hardware configurations.

Examples

Basic Monte Carlo simulation:

from ergodic_insurance.monte_carlo import MonteCarloEngine, MonteCarloConfig
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
from ergodic_insurance.insurance_program import InsuranceProgram
from ergodic_insurance.manufacturer import WidgetManufacturer

# Configure simulation
config = MonteCarloConfig(
    n_simulations=10000,
    n_years=20,
    parallel=True,
    n_workers=4
)

# Create components
loss_gen = ManufacturingLossGenerator()
insurance = InsuranceProgram.create_standard_program()
manufacturer = WidgetManufacturer.from_config()

# Run Monte Carlo
engine = MonteCarloEngine(
    loss_generator=loss_gen,
    insurance_program=insurance,
    manufacturer=manufacturer,
    config=config
)
results = engine.run()

# Inspect results via the documented fields
print(results.summary())
print(f"Ruin probability: {results.ruin_probability}")

Advanced simulation with convergence monitoring:

# Enable convergence checking
config = MonteCarloConfig(
    n_simulations=100000,
    check_convergence=True,
    convergence_tolerance=0.001,
    min_iterations=1000
)

engine = MonteCarloEngine(
    loss_generator=loss_gen,
    insurance_program=insurance,
    manufacturer=manufacturer,
    config=config
)

# Run with progress tracking and early stopping on convergence
results = engine.run_with_progress_monitoring(show_progress=True)

# Inspect convergence statistics
for metric, stats in results.convergence.items():
    print(f"{metric}: {stats}")

loss_generator

Generator for manufacturing loss events

insurance_program

Insurance coverage structure

manufacturer

Manufacturing company financial model

config

Simulation configuration parameters

convergence_diagnostics

Convergence monitoring tools

See also

MonteCarloConfig: Configuration parameters MonteCarloResults: Simulation results container ParallelExecutor: Enhanced parallel processing ConvergenceDiagnostics: Convergence analysis tools

trajectory_storage: TrajectoryStorage | None
result_aggregator: ResultAggregator | None
time_series_aggregator: TimeSeriesAggregator | None
summary_statistics: SummaryStatistics | None
run(progress_callback: Callable[[int, int, float], None] | None = None, cancel_event: Event | None = None) MonteCarloResults[source]

Execute Monte Carlo simulation.

Parameters:
  • progress_callback (Optional[Callable[[int, int, float], None]]) – Optional callback invoked with (completed, total, elapsed_seconds) after each batch of simulations completes. Useful for GUI progress bars, web dashboards, or any non-terminal environment.

  • cancel_event (Optional[Event]) – Optional threading.Event. When set, the engine will stop after the current batch and return partial results.

Return type:

MonteCarloResults

Returns:

MonteCarloResults object with all outputs
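The two hooks follow a standard pattern; a minimal, package-independent sketch of wiring them up (the while loop stands in for the engine's batch loop, and all names here are illustrative):

```python
import threading
import time

def progress_callback(completed: int, total: int, elapsed: float) -> None:
    """Example callback: report percent complete (stand-in for a GUI update)."""
    print(f"{completed}/{total} ({100 * completed / total:.0f}%) in {elapsed:.2f}s")

cancel_event = threading.Event()

# Stand-in for the engine's batch loop: run batches until done or cancelled.
total, batch_size = 100, 25
start = time.monotonic()
completed = 0
while completed < total and not cancel_event.is_set():
    completed += batch_size                         # one batch of simulations
    progress_callback(completed, total, time.monotonic() - start)
    if completed >= 50:
        cancel_event.set()                          # e.g. user pressed "stop"
```

After the event is set, the loop exits at the next batch boundary, mirroring how the engine returns partial results.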

export_results(results: MonteCarloResults, filepath: Path, file_format: str = 'csv') None[source]

Export simulation results to file.

Parameters:
  • results (MonteCarloResults) – Simulation results to export

  • filepath (Path) – Output file path

  • file_format (str) – Export format (‘csv’, ‘json’, ‘hdf5’)

Return type:

None

compute_bootstrap_confidence_intervals(results: MonteCarloResults, confidence_level: float = 0.95, n_bootstrap: int = 10000, method: str = 'percentile', show_progress: bool = False) Dict[str, Tuple[float, float]][source]

Compute bootstrap confidence intervals for key simulation metrics.

All metrics share a single set of bootstrap resampling indices per iteration, reducing total work from 7 * n_bootstrap to n_bootstrap iterations.

Parameters:
  • results (MonteCarloResults) – Simulation results to analyze.

  • confidence_level (float) – Confidence level for intervals (default 0.95).

  • n_bootstrap (int) – Number of bootstrap iterations (default 10000).

  • method (str) – Bootstrap method (‘percentile’ or ‘bca’).

  • show_progress (bool) – Whether to show progress bar.

Return type:

Dict[str, Tuple[float, float]]

Returns:

Dictionary mapping metric names to (lower, upper) confidence bounds.
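The shared-resampling optimization can be illustrated without the engine; a minimal percentile-bootstrap sketch in plain NumPy (function and metric names are illustrative, not the package API):

```python
import numpy as np

def bootstrap_cis(metrics: dict, confidence_level: float = 0.95,
                  n_bootstrap: int = 2000, seed: int = 0) -> dict:
    """Percentile bootstrap CIs for the mean of each metric. One set of
    resampling indices per iteration is shared across all metrics, so the
    total work is n_bootstrap draws rather than len(metrics) * n_bootstrap.
    All metric arrays must have the same length."""
    rng = np.random.default_rng(seed)
    n = len(next(iter(metrics.values())))
    alpha = (1.0 - confidence_level) / 2.0
    # Draw all resampling indices once: shape (n_bootstrap, n).
    idx = rng.integers(0, n, size=(n_bootstrap, n))
    out = {}
    for name, values in metrics.items():
        stats = values[idx].mean(axis=1)    # bootstrap distribution of the mean
        out[name] = (float(np.quantile(stats, alpha)),
                     float(np.quantile(stats, 1.0 - alpha)))
    return out

rng = np.random.default_rng(42)
cis = bootstrap_cis({"growth_rate": rng.normal(0.05, 0.02, 1000)})
lo, hi = cis["growth_rate"]
```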

run_with_progress_monitoring(check_intervals: List[int] | None = None, convergence_threshold: float = 1.1, early_stopping: bool = True, show_progress: bool = True) MonteCarloResults[source]

Run simulation with progress tracking and convergence monitoring.

Return type:

MonteCarloResults

run_with_convergence_monitoring(target_r_hat: float = 1.05, check_interval: int = 10000, max_iterations: int | None = None) MonteCarloResults[source]

Run simulation with automatic convergence monitoring.

Parameters:
  • target_r_hat (float) – Target R-hat for convergence

  • check_interval (int) – Check convergence every N simulations

  • max_iterations (Optional[int]) – Maximum iterations (None for no limit)

Return type:

MonteCarloResults

Returns:

Converged simulation results
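As a rough sketch of what an R-hat check involves (this is the classic Gelman–Rubin statistic with split-chain refinements omitted, not necessarily the package's exact implementation):

```python
import numpy as np

def r_hat(chains: np.ndarray) -> float:
    """Gelman-Rubin potential scale reduction factor.

    chains: array of shape (n_chains, n_draws). Values near 1.0 indicate
    the chains agree, i.e. the estimate has converged."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)            # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * w + b / n         # pooled variance estimate
    return float(np.sqrt(var_plus / w))

rng = np.random.default_rng(0)
same = rng.normal(0.0, 1.0, size=(4, 5000))        # chains from one distribution
shifted = same + np.arange(4)[:, None]             # chains with different means
```

For `same`, R-hat sits just above 1.0; for `shifted` it is well above any reasonable target such as 1.05.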

estimate_ruin_probability(config: RuinProbabilityConfig | None = None) RuinProbabilityResults[source]

Estimate ruin probability over multiple time horizons.

Delegates to RuinProbabilityAnalyzer for specialized analysis.

Parameters:

config (Optional[RuinProbabilityConfig]) – Configuration for ruin probability estimation

Return type:

RuinProbabilityResults

Returns:

RuinProbabilityResults with comprehensive bankruptcy analysis

ergodic_insurance.monte_carlo_worker module

Standalone worker function for multiprocessing Monte Carlo simulations.

run_chunk_standalone(chunk: Tuple[int, int, int | None], loss_generator: ManufacturingLossGenerator, insurance_program: InsuranceProgram, manufacturer: WidgetManufacturer, config_dict: Dict[str, Any]) Dict[str, ndarray | List[Dict[int, bool]]][source]

Standalone function to run a chunk of simulations for multiprocessing.

This function is independent of the MonteCarloEngine class and can be pickled for multiprocessing on all platforms including Windows.

Parameters:
  • chunk (Tuple[int, int, int | None]) – Chunk specification for this worker

  • loss_generator (ManufacturingLossGenerator) – Generator for manufacturing loss events

  • insurance_program (InsuranceProgram) – Insurance coverage structure

  • manufacturer (WidgetManufacturer) – Manufacturing company financial model

  • config_dict (Dict[str, Any]) – Simulation configuration as a plain dictionary

Return type:

Dict[str, Union[ndarray, List[Dict[int, bool]]]]

Returns:

Dictionary with simulation results for the chunk
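The reason the worker must live at module level: Python's pickle serializes module-level functions by qualified name, while lambdas and locally defined closures cannot be pickled at all, which breaks multiprocessing under the Windows 'spawn' start method. A quick demonstration (using a stdlib function as the stand-in worker):

```python
import math
import pickle

# Module-level functions pickle by reference (qualified name), so they can
# cross process boundaries under the Windows 'spawn' start method.
restored = pickle.loads(pickle.dumps(math.sqrt))

# Lambdas and locally defined closures cannot be pickled, which is why the
# worker is a standalone module-level function rather than an engine method.
try:
    pickle.dumps(lambda x: x * 2)
    lambda_picklable = True
except (pickle.PicklingError, AttributeError, TypeError):
    lambda_picklable = False
```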

ergodic_insurance.optimal_control module

Optimal control strategies for insurance decisions.

This module provides implementations of various control strategies derived from the HJB solver, including feedback control laws, state-dependent insurance limits, and integration with the simulation framework.

Key Components:
  • ControlSpace: Defines feasible insurance control parameters

  • ControlStrategy: Abstract base for control strategies

  • StaticControl: Fixed insurance parameters throughout simulation

  • HJBFeedbackControl: State-dependent optimal control from HJB solution

  • TimeVaryingControl: Predetermined time-based control schedule

  • OptimalController: Integrates control strategies with simulations

Typical Workflow:
  1. Solve HJB equation to get optimal policy

  2. Create control strategy (e.g., HJBFeedbackControl)

  3. Initialize OptimalController with strategy

  4. Apply controls in simulation loop

  5. Track and analyze performance

Example

>>> # Solve HJB problem
>>> solver = HJBSolver(problem, config)
>>> value_func, policy = solver.solve()
>>>
>>> # Create feedback control
>>> control_space = ControlSpace(
...     limits=[(1e6, 5e7)],
...     retentions=[(1e5, 1e7)]
... )
>>> strategy = HJBFeedbackControl(solver, control_space)
>>>
>>> # Apply in simulation
>>> controller = OptimalController(strategy, control_space)
>>> insurance = controller.apply_control(manufacturer, time=t)

Author:

Alex Filiakov

Date:

2025-01-26

class HJBControllerConfig(wealth_min: float = 1000000.0, wealth_max: float = 100000000.0, wealth_points: int = 30, time_points: int = 15, limit_min: float = 1000000.0, limit_max: float = 50000000.0, limit_points: int = 10, retention_min: float = 100000.0, retention_max: float = 10000000.0, retention_points: int = 10, coverage_min: float = 0.9, coverage_max: float = 1.0, growth_rate: float | None = None, premium_rate_base: float = 0.02, premium_rate_scaling: float = 10000000.0, loss_volatility: float = 0.15, coverage_volatility_factor: float = 0.7, discount_rate: float = 0.05, solver_time_step: float = 0.05, solver_max_iterations: int = 50, solver_tolerance: float = 0.0001, solver_verbose: bool = False) None[source]

Bases: object

Configuration for create_hjb_controller().

Consolidates hardcoded numeric literals from the HJB controller factory into a single configurable object with backward-compatible defaults.

wealth_min: float = 1000000.0
wealth_max: float = 100000000.0
wealth_points: int = 30
time_points: int = 15
limit_min: float = 1000000.0
limit_max: float = 50000000.0
limit_points: int = 10
retention_min: float = 100000.0
retention_max: float = 10000000.0
retention_points: int = 10
coverage_min: float = 0.9
coverage_max: float = 1.0
growth_rate: float | None = None
premium_rate_base: float = 0.02
premium_rate_scaling: float = 10000000.0
loss_volatility: float = 0.15
coverage_volatility_factor: float = 0.7
discount_rate: float = 0.05
solver_time_step: float = 0.05
solver_max_iterations: int = 50
solver_tolerance: float = 0.0001
solver_verbose: bool = False
class PremiumEstimationConfig(base_rate: float = 0.02, max_retention_discount: float = 0.5, retention_scaling: float = 10000000.0, limit_scaling: float = 1000000.0, limit_log_factor: float = 0.1) None[source]

Bases: object

Configuration for OptimalController._estimate_premium_rate().

Extracts the heuristic premium-estimation magic numbers into a configurable object with backward-compatible defaults.

base_rate: float = 0.02
max_retention_discount: float = 0.5
retention_scaling: float = 10000000.0
limit_scaling: float = 1000000.0
limit_log_factor: float = 0.1
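One plausible way these knobs could combine — purely hypothetical, not the package's actual formula — is a retention discount on the base rate plus a logarithmic limit loading:

```python
import math

def estimate_premium_rate(retention: float, limit: float,
                          base_rate: float = 0.02,
                          max_retention_discount: float = 0.5,
                          retention_scaling: float = 1e7,
                          limit_scaling: float = 1e6,
                          limit_log_factor: float = 0.1) -> float:
    """Hypothetical premium-rate heuristic (illustrative only): higher
    retentions earn a discount on the base rate, capped at
    max_retention_discount; larger limits add a logarithmic loading."""
    retention_discount = max_retention_discount * min(1.0, retention / retention_scaling)
    limit_load = 1.0 + limit_log_factor * math.log1p(limit / limit_scaling)
    return base_rate * (1.0 - retention_discount) * limit_load

rate = estimate_premium_rate(retention=1e6, limit=1e7)
```

Whatever the exact functional form, the defaults above match the dataclass, and the qualitative behavior (rate decreasing in retention, increasing slowly in limit) is the point of the configuration.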
class ControlMode(*values)[source]

Bases: Enum

Mode of control application.

STATIC

Fixed control parameters that never change.

STATE_FEEDBACK

Control depends on current system state.

TIME_VARYING

Control follows predetermined time schedule.

ADAPTIVE

Control adapts based on observed history.

STATIC = 'static'
STATE_FEEDBACK = 'state_feedback'
TIME_VARYING = 'time_varying'
ADAPTIVE = 'adaptive'
class ControlSpace(limits: ~typing.List[~typing.Tuple[float, float]], retentions: ~typing.List[~typing.Tuple[float, float]], coverage_percentages: ~typing.List[~typing.Tuple[float, float]] = <factory>, reinsurance_limits: ~typing.List[~typing.Tuple[float, float]] | None = None) None[source]

Bases: object

Definition of the control space for insurance decisions.

limits: List[Tuple[float, float]]
retentions: List[Tuple[float, float]]
coverage_percentages: List[Tuple[float, float]]
reinsurance_limits: List[Tuple[float, float]] | None = None
__post_init__()[source]

Validate control space configuration.

Raises:
  • ValueError – If limits and retentions have different number of layers.

  • ValueError – If coverage percentages don’t match number of layers.

  • ValueError – If any bounds are invalid (min >= max).

  • ValueError – If coverage percentages are outside [0, 1] range.

get_dimensions() int[source]

Get total number of control dimensions.

Returns:

Total number of control variables across all layers and control types.

Return type:

int

Note

Used for determining the size of control vectors in optimization algorithms.

to_array(limits: List[float], retentions: List[float], coverages: List[float] | None = None) ndarray[source]

Convert control parameters to array format.

Parameters:
  • limits (List[float]) – Insurance limits for each layer.

  • retentions (List[float]) – Retention levels for each layer.

  • coverages (Optional[List[float]]) – Optional coverage percentages. If None, defaults to full coverage.

Returns:

Flattened control array suitable for optimization algorithms.

Return type:

np.ndarray

Note

Array order is: [limits, retentions, coverages].

from_array(control_array: ndarray) Dict[str, List[float]][source]

Convert control array back to named parameters.

Parameters:

control_array (ndarray) – Flattened control array from optimization.

Returns:

Dictionary with keys ‘limits’, ‘retentions’, and ‘coverages’ mapping to lists of values for each layer.

Return type:

Dict[str, List[float]]

Note

Inverse operation of to_array().
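The [limits, retentions, coverages] layout can be sketched independently of the class (a minimal stand-in, not the package implementation):

```python
import numpy as np

def to_array(limits, retentions, coverages=None):
    """Flatten per-layer controls in the documented order:
    [limits, retentions, coverages]. Coverages default to full coverage."""
    if coverages is None:
        coverages = [1.0] * len(limits)
    return np.concatenate([limits, retentions, coverages]).astype(float)

def from_array(control_array, n_layers):
    """Inverse of to_array: split the flat vector back into named lists."""
    a = np.asarray(control_array, dtype=float)
    return {
        "limits": list(a[:n_layers]),
        "retentions": list(a[n_layers:2 * n_layers]),
        "coverages": list(a[2 * n_layers:3 * n_layers]),
    }

# Two-layer example: a $5M and a $20M limit with their retentions.
x = to_array([5e6, 2e7], [2.5e5, 1e6])
params = from_array(x, n_layers=2)
```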

class ControlStrategy[source]

Bases: ABC

Abstract base class for control strategies.

All control strategies must implement methods to:
  1. Determine control actions based on state/time

  2. Update internal parameters based on outcomes

abstractmethod get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Get control action for current state and time.

Parameters:
  • state (Dict[str, float]) – Current state dictionary containing keys like ‘wealth’, ‘assets’, ‘cumulative_losses’, etc.

  • time (float) – Current simulation time.

Returns:

Control actions with keys ‘limits’, ‘retentions’, and ‘coverages’, each mapping to lists of values.

Return type:

Dict[str, Any]

abstractmethod update(state: Dict[str, float], outcome: Dict[str, float])[source]

Update strategy based on observed outcome.

Parameters:
  • state (Dict[str, float]) – State where control was applied.

  • outcome (Dict[str, float]) – Observed outcome containing keys like ‘losses’, ‘premium_costs’, ‘claim_payments’, etc.

Note

May be no-op for non-adaptive strategies.

class StaticControl(limits: List[float], retentions: List[float], coverages: List[float] | None = None)[source]

Bases: ControlStrategy

Static control strategy with fixed parameters.

This is the simplest control strategy where insurance parameters remain constant throughout the simulation.

get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Return fixed control parameters.

Parameters:
  • state (Dict[str, float]) – Current state (ignored for static control).

  • time (float) – Current time (ignored for static control).

Returns:

Fixed control parameters.

Return type:

Dict[str, Any]

update(state: Dict[str, float], outcome: Dict[str, float])[source]

No updates for static control.

Parameters:
  • state (Dict[str, float]) – State where control was applied (ignored).

  • outcome (Dict[str, float]) – Observed outcome (ignored).

class HJBFeedbackControl(hjb_solver: HJBSolver, control_space: ControlSpace, state_mapping: Callable[[Dict[str, float]], ndarray] | None = None)[source]

Bases: ControlStrategy

State-feedback control derived from HJB solution.

This strategy uses the optimal policy computed by the HJB solver to determine insurance parameters based on the current state.

get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Get optimal control from HJB policy.

Parameters:
  • state (Dict[str, float]) – Current simulation state dictionary.

  • time (float) – Current time (may be included in state mapping).

Returns:

Optimal control parameters with keys ‘limits’, ‘retentions’, and ‘coverages’.

Return type:

Dict[str, Any]

Note

Uses linear interpolation of the HJB optimal policy between grid points.

update(state: Dict[str, float], outcome: Dict[str, float])[source]

No updates needed for HJB feedback control.

Parameters:
  • state (Dict[str, float]) – State where control was applied (ignored).

  • outcome (Dict[str, float]) – Observed outcome (ignored).

Note

HJB policy is precomputed and doesn’t adapt online.

class TimeVaryingControl(time_schedule: List[float], limits_schedule: List[List[float]], retentions_schedule: List[List[float]], coverages_schedule: List[List[float]] | None = None)[source]

Bases: ControlStrategy

Time-varying control strategy with predetermined schedule.

This strategy adjusts insurance parameters according to a predetermined time schedule, useful for seasonal or cyclical risks.

get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]

Get control parameters for current time.

Parameters:
  • state (Dict[str, float]) – Current state (ignored for time-based control).

  • time (float) – Current simulation time.

Returns:

Control parameters interpolated linearly between scheduled time points.

Return type:

Dict[str, Any]

Note

Uses nearest value for times outside the schedule range.
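The interpolate-inside, clamp-outside behavior matches numpy.interp; a minimal sketch with a hypothetical two-point schedule (values are illustrative):

```python
import numpy as np

# Hypothetical schedule: the limit rises from $5M to $10M over years 0-10.
time_schedule = [0.0, 10.0]
limit_schedule = [5e6, 10e6]

def limit_at(t: float) -> float:
    """Linear interpolation between scheduled points; np.interp clamps to
    the nearest endpoint for times outside the schedule range."""
    return float(np.interp(t, time_schedule, limit_schedule))
```

Mid-schedule times interpolate (year 5 gives $7.5M); times before year 0 or after year 10 return the nearest scheduled value.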

update(state: Dict[str, float], outcome: Dict[str, float])[source]

No updates for predetermined schedule.

Parameters:
  • state (Dict[str, float]) – State where control was applied (ignored).

  • outcome (Dict[str, float]) – Observed outcome (ignored).

class OptimalController(strategy: ControlStrategy, control_space: ControlSpace, premium_config: PremiumEstimationConfig | None = None)[source]

Bases: object

Controller that applies optimal strategies in simulation.

This class integrates control strategies with the simulation framework, managing the application of controls and tracking performance.

control_history: list[Dict[str, Any]]
state_history: list[Dict[str, float]]
outcome_history: list[Dict[str, float]]
apply_control(manufacturer: WidgetManufacturer, state: Dict[str, float] | None = None, time: float = 0.0) InsuranceProgram[source]

Apply control strategy to create insurance program.

Parameters:
  • manufacturer (WidgetManufacturer) – Current manufacturer instance for extracting state if not provided.

  • state (Optional[Dict[str, float]]) – Optional state override. If None, state is extracted from manufacturer using _extract_state().

  • time (float) – Current simulation time.

Returns:

Insurance program with layers configured according to the control strategy.

Return type:

InsuranceProgram

Note

Records control and state in history for later analysis.

update_outcome(outcome: Dict[str, float])[source]

Update controller with observed outcome.

Parameters:

outcome (Dict[str, float]) – Observed outcome dictionary with keys like ‘losses’, ‘premium_costs’, ‘claim_payments’, etc.

Note

Calls strategy.update() if strategy is adaptive.

get_performance_summary() DataFrame[source]

Get summary of controller performance.

Returns:

DataFrame with columns for step number, state variables (prefixed with state_), control variables (prefixed with control_), and outcomes (prefixed with outcome_).

Return type:

pd.DataFrame

Note

Useful for analyzing control strategy effectiveness and creating visualizations.

reset()[source]

Reset controller history.

Clears all recorded history to prepare for new simulation run.

create_hjb_controller(manufacturer: WidgetManufacturer, simulation_years: int = 10, utility_type: str = 'log', risk_aversion: float = 2.0, config: HJBControllerConfig | None = None) OptimalController[source]

Convenience function to create HJB-based controller.

Creates and solves a simplified HJB problem for insurance optimization, then returns a controller configured with the optimal policy.

Parameters:
  • manufacturer (WidgetManufacturer) – Manufacturer instance for extracting model parameters like growth rates and risk characteristics.

  • simulation_years (int) – Time horizon for optimization. Longer horizons may require more grid points for accuracy.

  • utility_type (str) – Type of utility function: ‘log’ (logarithmic utility, Kelly criterion), ‘power’ (power/CRRA utility with risk aversion), or ‘linear’ (risk-neutral expected wealth).

  • risk_aversion (float) – Coefficient of relative risk aversion for power utility. Higher values imply more conservative policies. Ignored for log and linear utilities.

Returns:

Controller with HJB feedback strategy configured for the specified problem.

Return type:

OptimalController

Raises:

ValueError – If utility_type is not recognized.

Example

>>> from ergodic_insurance.manufacturer import WidgetManufacturer
>>> from ergodic_insurance.config import ManufacturerConfig
>>>
>>> # Set up manufacturer
>>> config = ManufacturerConfig()
>>> manufacturer = WidgetManufacturer(config)
>>>
>>> # Create HJB controller with power utility
>>> controller = create_hjb_controller(
...     manufacturer,
...     simulation_years=10,
...     utility_type="power",
...     risk_aversion=2.0
... )
>>>
>>> # Apply control at current state
>>> insurance = controller.apply_control(manufacturer, time=0)
>>>
>>> # Run simulation step
>>> losses = manufacturer.generate_losses()
>>> manufacturer.apply_losses(losses, insurance)
>>>
>>> # Update controller with outcome
>>> outcome = {'losses': losses, 'premium': insurance.total_premium}
>>> controller.update_outcome(outcome)

Note

This function creates a simplified 2D state space (wealth, time) and single-layer insurance for demonstration. Production systems would use higher-dimensional state spaces and multiple layers.

ergodic_insurance.optimization module

Advanced optimization algorithms for constrained insurance decision making.

This module implements sophisticated optimization methods including trust-region, penalty methods, augmented Lagrangian, and multi-start techniques for finding global optima in complex insurance optimization problems.

class ConstraintType(*values)[source]

Bases: Enum

Types of constraints in optimization.

EQUALITY = 'eq'
INEQUALITY = 'ineq'
BOUNDS = 'bounds'
class ConstraintViolation(constraint_name: str, violation_amount: float, constraint_type: ConstraintType, current_value: float, limit_value: float, is_satisfied: bool) None[source]

Bases: object

Information about constraint violations.

constraint_name: str
violation_amount: float
constraint_type: ConstraintType
current_value: float
limit_value: float
is_satisfied: bool
__str__() str[source]

String representation of violation.

Return type:

str

class ConvergenceMonitor(max_iterations: int = 1000, tolerance: float = 1e-06, objective_history: List[float] = <factory>, constraint_violation_history: List[float] = <factory>, gradient_norm_history: List[float] = <factory>, step_size_history: List[float] = <factory>, iteration_count: int = 0, converged: bool = False, convergence_message: str = '') None[source]

Bases: object

Monitor and track convergence of optimization algorithms.

max_iterations: int = 1000
tolerance: float = 1e-06
objective_history: List[float]
constraint_violation_history: List[float]
gradient_norm_history: List[float]
step_size_history: List[float]
iteration_count: int = 0
converged: bool = False
convergence_message: str = ''
update(objective: float, constraint_violation: float = 0.0, gradient_norm: float = 0.0, step_size: float = 0.0)[source]

Update convergence history.

get_summary() Dict[str, Any][source]

Get convergence summary statistics.

Return type:

Dict[str, Any]

class AdaptivePenaltyParameters(initial_penalty: float = 10.0, penalty_increase_factor: float = 2.0, max_penalty: float = 1000000.0, constraint_tolerance: float = 0.0001, penalty_update_frequency: int = 10, current_penalties: Dict[str, float] = <factory>) None[source]

Bases: object

Parameters for adaptive penalty method.

initial_penalty: float = 10.0
penalty_increase_factor: float = 2.0
max_penalty: float = 1000000.0
constraint_tolerance: float = 0.0001
penalty_update_frequency: int = 10
current_penalties: Dict[str, float]
update_penalties(violations: List[ConstraintViolation])[source]

Update penalty parameters based on constraint violations.

class TrustRegionOptimizer(objective_fn: Callable, gradient_fn: Callable | None = None, hessian_fn: Callable | None = None, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None)[source]

Bases: object

Trust-region constrained optimization with adaptive radius adjustment.

optimize(x0: ndarray, initial_radius: float = 1.0, max_radius: float = 10.0, eta: float = 0.15, max_iter: int = 1000, tol: float = 1e-06) OptimizeResult[source]

Run trust-region optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • initial_radius (float) – Initial trust region radius

  • max_radius (float) – Maximum trust region radius

  • eta (float) – Minimum reduction ratio for accepting step

  • max_iter (int) – Maximum iterations

  • tol (float) – Convergence tolerance

Return type:

OptimizeResult

Returns:

Optimization result

class PenaltyMethodOptimizer(objective_fn: Callable, constraints: List[Dict[str, Any]], bounds: Bounds | None = None, exact_penalty: bool = False)[source]

Bases: object

Optimization using penalty method with adaptive penalty parameters.

By default, uses a quadratic (L2) penalty: penalty * violation^2. The quadratic formulation only drives the solution to exact feasibility as the penalty parameter tends to infinity. For any finite penalty the converged solution is an interior approximation that systematically violates inequality constraints by O(1/penalty) (see Nocedal & Wright, Numerical Optimization, 2nd ed., Ch. 17.1).

When exact_penalty=True, uses an exact (L1) penalty instead: penalty * violation. For a sufficiently large (but finite) penalty the L1 formulation yields a solution that satisfies the constraints exactly (Fletcher, Practical Methods of Optimization, Ch. 12.2). The trade-off is that the L1 penalty is non-smooth, which may affect gradient-based inner solvers.

For problems that require exact constraint satisfaction with bounded penalty parameters, consider AugmentedLagrangianOptimizer (augmented Lagrangian with multiplier estimates) or TrustRegionOptimizer (wraps SciPy trust-constr).
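The O(1/penalty) violation of the quadratic penalty versus the exact feasibility of the L1 penalty is easy to verify on a one-dimensional problem; a self-contained sketch where a grid search stands in for the inner solver:

```python
import numpy as np

# Minimize f(x) = x subject to x >= 1 (true optimum x* = 1), via penalties.
mu = 100.0
x = np.linspace(-5.0, 5.0, 200001)             # grid with 5e-5 spacing
violation = np.maximum(0.0, 1.0 - x)           # inequality violation at each x

x_quad = x[np.argmin(x + mu * violation**2)]   # quadratic (L2) penalty
x_l1 = x[np.argmin(x + mu * violation)]        # exact (L1) penalty

# Quadratic penalty converges to x = 1 - 1/(2*mu): violation is O(1/mu).
quad_violation = 1.0 - x_quad                  # ~0.005 for mu = 100
# L1 penalty is exact for any mu > 1: the minimizer sits at the constraint.
l1_violation = max(0.0, 1.0 - x_l1)
```

For the quadratic case the penalized optimum is x = 1 − 1/(2μ), so the residual violation shrinks only as the penalty grows; the L1 case lands on the constraint for any μ above the relevant Lagrange multiplier.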

optimize(x0: ndarray, method: str = 'L-BFGS-B', max_outer_iter: int = 50, max_inner_iter: int = 100, tol: float = 1e-06) OptimizeResult[source]

Run penalty method optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • method (str) – Inner optimization method

  • max_outer_iter (int) – Maximum outer iterations

  • max_inner_iter (int) – Maximum inner iterations per outer loop

  • tol (float) – Convergence tolerance

Return type:

OptimizeResult

Returns:

Optimization result

class AugmentedLagrangianOptimizer(objective_fn: Callable, constraints: List[Dict[str, Any]], bounds: Bounds | None = None)[source]

Bases: object

Augmented Lagrangian method for constrained optimization.

optimize(x0: ndarray, max_outer_iter: int = 50, max_inner_iter: int = 100, tol: float = 1e-06, rho_init: float = 1.0, rho_max: float = 10000.0) OptimizeResult[source]

Run augmented Lagrangian optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • max_outer_iter (int) – Maximum outer iterations

  • max_inner_iter (int) – Maximum inner iterations

  • tol (float) – Convergence tolerance

  • rho_init (float) – Initial penalty parameter

  • rho_max (float) – Maximum penalty parameter

Return type:

OptimizeResult

Returns:

Optimization result

class MultiStartOptimizer(objective_fn: Callable, bounds: Bounds, constraints: List[Dict[str, Any]] | None = None, base_optimizer: str = 'SLSQP', gpu_config: GPUConfig | None = None)[source]

Bases: object

Multi-start optimization for finding global optima.

optimize(n_starts: int = 10, x0: ndarray | None = None, seed: int | None = None, parallel: bool = False) OptimizeResult[source]

Run multi-start optimization.

Parameters:
  • n_starts (int) – Number of random starts

  • x0 (Optional[ndarray]) – Optional initial point (included as first start)

  • seed (Optional[int]) – Random seed for reproducibility

  • parallel (bool) – Whether to run starts in parallel

Return type:

OptimizeResult

Returns:

Best optimization result across all starts

screen_starting_points_gpu(starting_points: List[ndarray], top_k: int = 5) List[ndarray][source]

Screen starting points by evaluating all in a GPU-accelerated batch.

Evaluates all starting points simultaneously using GPU array operations, then returns the top-k points with the best objective values.

Parameters:
  • starting_points (List[ndarray]) – List of starting points to evaluate

  • top_k (int) – Number of best starting points to return

Return type:

List[ndarray]

Returns:

List of top-k starting points sorted by objective value

Since:

Version 0.11.0 (Issue #966)
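The screening idea — one batched evaluation of every candidate, then keep the best few — can be sketched in plain NumPy standing in for the GPU array backend (all names here are illustrative):

```python
import numpy as np

def screen_starting_points(starting_points, objective_batch, top_k=5):
    """Evaluate all candidate starts in one vectorized batch and keep the
    top_k with the lowest objective values. NumPy stands in for the GPU
    array backend; the structure is the same either way."""
    points = np.stack(starting_points)         # shape (n_points, n_dims)
    values = objective_batch(points)           # one batched evaluation
    best = np.argsort(values)[:top_k]
    return [points[i] for i in best]

def sphere_batch(pts):
    """Row-wise sphere objective f(x) = sum(x_i**2)."""
    return np.sum(pts**2, axis=1)

rng = np.random.default_rng(0)
candidates = [rng.uniform(-10, 10, size=3) for _ in range(100)]
best = screen_starting_points(candidates, sphere_batch, top_k=5)
```

Only the surviving top-k points then receive a full (expensive) local optimization run.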

class EnhancedSLSQPOptimizer(objective_fn: Callable, gradient_fn: Callable | None = None, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None)[source]

Bases: object

Enhanced SLSQP with adaptive step sizing and improved convergence.

step_size: float
prev_x: ndarray | None
prev_obj: float | None
optimize(x0: ndarray, adaptive_step: bool = True, line_search: str = 'armijo', max_iter: int = 1000, tol: float = 1e-06) OptimizeResult[source]

Run enhanced SLSQP optimization.

Parameters:
  • x0 (ndarray) – Initial point

  • adaptive_step (bool) – Whether to use adaptive step sizing

  • line_search (str) – Line search method (“armijo” or “wolfe”)

  • max_iter (int) – Maximum iterations

  • tol (float) – Convergence tolerance

Return type:

OptimizeResult

Returns:

Optimization result

create_optimizer(method: str, objective_fn: Callable, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None, gpu_config: GPUConfig | None = None, **kwargs) Any[source]

Factory function to create appropriate optimizer.

Parameters:
  • method (str) – Optimization method name

  • objective_fn (Callable) – Objective function

  • constraints (Optional[List[Dict[str, Any]]]) – Optional constraints

  • bounds (Optional[Bounds]) – Optional bounds

  • gpu_config (Optional[GPUConfig]) – Optional GPU configuration for accelerated operations

  • **kwargs – Additional optimizer-specific arguments

Return type:

Any

Returns:

Configured optimizer instance

ergodic_insurance.parallel_executor module

CPU-optimized parallel execution engine for Monte Carlo simulations.

This module provides enhanced parallel processing capabilities optimized for budget hardware (4-8 cores) with intelligent chunking, shared memory management, and minimal serialization overhead.

Features:
  • Smart dynamic chunking based on CPU resources and workload

  • Shared memory for read-only data structures

  • CPU affinity optimization for cache locality

  • Minimal IPC overhead (<5% target)

  • Memory-efficient execution (<4GB for 100K simulations)

Example

>>> from ergodic_insurance.parallel_executor import ParallelExecutor
>>> executor = ParallelExecutor(n_workers=4)
>>> results = executor.map_reduce(
...     work_function=simulate_path,
...     work_items=range(100000),
...     reduce_function=combine_results,
...     shared_data={'config': simulation_config}
... )
Author:

Alex Filiakov

Date:

2025-08-26

class CPUProfile(n_cores: int, n_threads: int, cache_sizes: Dict[str, int], available_memory: int, cpu_freq: float, system_load: float) None[source]

Bases: object

CPU performance profile for optimization decisions.

n_cores: int
n_threads: int
cache_sizes: Dict[str, int]
available_memory: int
cpu_freq: float
system_load: float
classmethod detect() CPUProfile[source]

Detect current CPU profile.

Returns:

Current system CPU profile

Return type:

CPUProfile

class ChunkingStrategy(initial_chunk_size: int = 1000, min_chunk_size: int = 100, max_chunk_size: int = 10000, target_chunks_per_worker: int = 10, adaptive: bool = True, profile_samples: int = 100) None[source]

Bases: object

Dynamic chunking strategy for parallel workloads.

initial_chunk_size: int = 1000
min_chunk_size: int = 100
max_chunk_size: int = 10000
target_chunks_per_worker: int = 10
adaptive: bool = True
profile_samples: int = 100
calculate_optimal_chunk_size(n_items: int, n_workers: int, item_complexity: float = 1.0, cpu_profile: CPUProfile | None = None) int[source]

Calculate optimal chunk size based on workload and resources.

Parameters:
  • n_items (int) – Total number of work items

  • n_workers (int) – Number of parallel workers

  • item_complexity (float) – Relative complexity of each item (1.0 = baseline)

  • cpu_profile (Optional[CPUProfile]) – CPU profile for optimization

Returns:

Optimal chunk size

Return type:

int
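The clamp-to-bounds heuristic implied by these defaults can be sketched as follows (a simplified sketch; the actual method also weighs `item_complexity` and the detected `CPUProfile`):

```python
def optimal_chunk_size(n_items, n_workers, min_chunk_size=100,
                       max_chunk_size=10000, target_chunks_per_worker=10):
    # Aim for several chunks per worker so slow chunks can be rebalanced,
    # then clamp to the configured bounds.
    ideal = n_items // (n_workers * target_chunks_per_worker)
    return max(min_chunk_size, min(max_chunk_size, ideal))

size = optimal_chunk_size(100_000, n_workers=4)  # 100_000 // 40 = 2500
```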

class SharedMemoryConfig(enable_shared_arrays: bool = True, enable_shared_objects: bool = True, compression: bool = False, cleanup_on_exit: bool = True, skip_hmac: bool = False) None[source]

Bases: object

Configuration for shared memory optimization.

enable_shared_arrays: bool = True
enable_shared_objects: bool = True
compression: bool = False
cleanup_on_exit: bool = True
skip_hmac: bool = False

Bypass HMAC signing/verification for shared memory transfers.

HMAC integrity checks are valuable for persistent caches (file-based) but add unnecessary overhead for ephemeral shared memory that is created and consumed within the same process group. Set to True when shared data is purely in-process to avoid per-chunk SHA-256 signing/verification.

class SharedMemoryManager(config: SharedMemoryConfig | None = None)[source]

Bases: object

Manager for shared memory resources.

Handles creation, access, and cleanup of shared memory segments for both numpy arrays and serialized objects.

shared_arrays: Dict[str, Tuple[SharedMemory, tuple, dtype]]
shared_objects: Dict[str, SharedMemory]
share_array(name: str, array: ndarray) str[source]

Share a numpy array via shared memory.

Parameters:
  • name (str) – Unique identifier for the array

  • array (ndarray) – Numpy array to share

Returns:

Shared memory name for retrieval

Return type:

str

get_array(shm_name: str, shape: tuple, dtype: dtype) ndarray[source]

Retrieve a shared numpy array.

Parameters:
  • shm_name (str) – Shared memory name

  • shape (tuple) – Array shape

  • dtype (dtype) – Array data type

Returns:

Shared array (view, not copy)

Return type:

np.ndarray
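The zero-copy round trip that `share_array`/`get_array` provide is built on Python's `multiprocessing.shared_memory`. A stripped-down sketch of the same idea using only the standard library (plain packed floats standing in for a numpy array):

```python
import struct
from multiprocessing import shared_memory

# Producer: create a named segment and pack three float64 values into it.
values = [1.5, 2.5, 3.5]
shm = shared_memory.SharedMemory(create=True, size=8 * len(values))
struct.pack_into("3d", shm.buf, 0, *values)

# Consumer: attach by name and read without copying the payload.
attached = shared_memory.SharedMemory(name=shm.name)
roundtrip = list(struct.unpack_from("3d", attached.buf, 0))

attached.close()
shm.close()
shm.unlink()
```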

share_object(name: str, obj: Any) str[source]

Share a serialized object via shared memory.

Parameters:
  • name (str) – Unique identifier for the object

  • obj (Any) – Object to share

Returns:

Shared memory name for retrieval

Return type:

str

get_object(shm_name: str, size: int, compressed: bool = False) Any[source]

Retrieve a shared object.

Parameters:
  • shm_name (str) – Shared memory name

  • size (int) – Size of serialized data

  • compressed (bool) – Whether data is compressed

Returns:

Deserialized object

Return type:

Any

get_object_size(name: str) int[source]

Get the actual stored size of a shared object.

Parameters:

name (str) – Object identifier used in share_object

Returns:

Actual byte size stored in shared memory

Return type:

int

cleanup()[source]

Clean up all shared memory resources.

__del__()[source]

Cleanup on deletion.

class PerformanceMetrics(total_time: float = 0.0, setup_time: float = 0.0, computation_time: float = 0.0, serialization_time: float = 0.0, reduction_time: float = 0.0, memory_peak: int = 0, cpu_utilization: float = 0.0, items_per_second: float = 0.0, speedup: float = 1.0, total_items: int = 0, failed_items: int = 0) None[source]

Bases: object

Performance metrics for parallel execution.

total_time: float = 0.0
setup_time: float = 0.0
computation_time: float = 0.0
serialization_time: float = 0.0
reduction_time: float = 0.0
memory_peak: int = 0
cpu_utilization: float = 0.0
items_per_second: float = 0.0
speedup: float = 1.0
total_items: int = 0
failed_items: int = 0
summary() str[source]

Generate performance summary.

Returns:

Formatted performance summary

Return type:

str

class ParallelExecutor(n_workers: int | None = None, chunking_strategy: ChunkingStrategy | None = None, shared_memory_config: SharedMemoryConfig | None = None, monitor_performance: bool = True, max_failure_rate: float | None = None)[source]

Bases: object

CPU-optimized parallel executor for Monte Carlo simulations.

Provides intelligent work distribution, shared memory management, and performance monitoring for efficient parallel execution on budget hardware.

map_reduce(work_function: Callable, work_items: List | range, reduce_function: Callable | None = None, shared_data: Dict[str, Any] | None = None, progress_bar: bool = True, progress_callback: Callable[[int, int, float], None] | None = None, cancel_event: Event | None = None) Any[source]

Execute parallel map-reduce operation.

Parameters:
  • work_function (Callable) – Function to apply to each work item

  • work_items (Union[List, range]) – List or range of work items

  • reduce_function (Optional[Callable]) – Function to combine results (None for list)

  • shared_data (Optional[Dict[str, Any]]) – Data to share across all workers

  • progress_bar (bool) – Show progress bar

  • progress_callback (Optional[Callable[[int, int, float], None]]) – Optional callback invoked with (completed, total, elapsed_seconds) after each chunk.

  • cancel_event (Optional[Event]) – Optional threading.Event; when set the executor stops after the current chunk and returns partial results.

Returns:

Combined results from reduce function or list of results

Return type:

Any
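The map-reduce contract can be illustrated with a single-process sketch (no worker pool or shared memory; chunking shown only to mirror the executor's work distribution):

```python
from functools import reduce

def map_reduce(work_function, work_items, reduce_function=None, chunk_size=1000):
    # Process items chunk by chunk (mirroring the executor's distribution),
    # then fold the results, or return them as a list if no reducer is given.
    items = list(work_items)
    results = []
    for start in range(0, len(items), chunk_size):
        results.extend(work_function(i) for i in items[start:start + chunk_size])
    return reduce(reduce_function, results) if reduce_function else results

total = map_reduce(lambda x: x * x, range(10), reduce_function=lambda a, b: a + b)
# sum of squares of 0..9 is 285
```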

get_performance_report() str[source]

Get performance report.

Returns:

Formatted performance report

Return type:

str

__enter__()[source]

Context manager entry.

__exit__(exc_type, exc_val, exc_tb)[source]

Context manager exit with cleanup.

parallel_map(func: Callable, items: List | range, n_workers: int | None = None, progress: bool = True) List[Any][source]

Simple parallel map operation.

Parameters:
  • func (Callable) – Function to apply

  • items (Union[List, range]) – Items to process

  • n_workers (Optional[int]) – Number of workers

  • progress (bool) – Show progress bar

Returns:

Results

Return type:

List[Any]

parallel_aggregate(func: Callable, items: List | range, reducer: Callable, n_workers: int | None = None, shared_data: Dict | None = None, progress: bool = True) Any[source]

Parallel map-reduce operation.

Parameters:
  • func (Callable) – Function to apply to each item

  • items (Union[List, range]) – Items to process

  • reducer (Callable) – Function to combine results

  • n_workers (Optional[int]) – Number of workers

  • shared_data (Optional[Dict]) – Data to share across workers

  • progress (bool) – Show progress bar

Returns:

Aggregated result

Return type:

Any

ergodic_insurance.parameter_sweep module

Parameter sweep utilities for systematic exploration of parameter space.

This module provides utilities for systematic parameter sweeps across the full parameter space to identify optimal regions and validate robustness of recommendations across different scenarios.

Features:
  • Efficient grid search across parameter combinations

  • Parallel execution for large sweeps using multiprocessing

  • Result aggregation and storage with HDF5/Parquet support

  • Scenario comparison tools for side-by-side analysis

  • Optimal region identification using percentile-based methods

  • Pre-defined scenarios for company sizes, loss scenarios, and market conditions

  • Adaptive refinement near optima for efficient exploration

  • Progress tracking and resumption capabilities

Example

>>> from ergodic_insurance.parameter_sweep import ParameterSweeper, SweepConfig
>>> from ergodic_insurance.business_optimizer import BusinessOptimizer
>>>
>>> # Create optimizer
>>> optimizer = BusinessOptimizer(manufacturer)
>>>
>>> # Initialize sweeper
>>> sweeper = ParameterSweeper(optimizer)
>>>
>>> # Define parameter sweep
>>> config = SweepConfig(
...     parameters={
...         "initial_assets": [1e6, 10e6, 100e6],
...         "base_operating_margin": [0.05, 0.08, 0.12],
...         "loss_frequency": [3, 5, 8]
...     },
...     fixed_params={"time_horizon": 10},
...     metrics_to_track=["optimal_roe", "ruin_probability"]
... )
>>>
>>> # Execute sweep
>>> results = sweeper.sweep(config)
>>>
>>> # Find optimal regions
>>> optimal, summary = sweeper.find_optimal_regions(
...     results,
...     objective="optimal_roe",
...     constraints={"ruin_probability": (0, 0.01)}
... )
Author:

Alex Filiakov

Date:

2025-08-29

class SweepConfig(parameters: Dict[str, List[Any]], fixed_params: Dict[str, Any] = <factory>, metrics_to_track: List[str] = <factory>, n_workers: int | None = None, batch_size: int = 100, adaptive_refinement: bool = False, refinement_threshold: float = 90.0, save_intermediate: bool = True, cache_dir: str = './cache/sweeps') None[source]

Bases: object

Configuration for parameter sweep.

parameters

Dictionary mapping parameter names to lists of values to sweep

fixed_params

Fixed parameters that don’t vary across sweep

metrics_to_track

List of metric names to extract from results

n_workers

Number of parallel workers for execution

batch_size

Size of batches for parallel processing

adaptive_refinement

Whether to adaptively refine near optima

refinement_threshold

Percentile threshold for refinement (e.g., 90 for top 10%)

save_intermediate

Whether to save intermediate results

cache_dir

Directory for caching results

parameters: Dict[str, List[Any]]
fixed_params: Dict[str, Any]
metrics_to_track: List[str]
n_workers: int | None = None
batch_size: int = 100
adaptive_refinement: bool = False
refinement_threshold: float = 90.0
save_intermediate: bool = True
cache_dir: str = './cache/sweeps'
__post_init__()[source]

Validate configuration and set defaults.

generate_grid() List[Dict[str, Any]][source]

Generate parameter grid for sweep.

Return type:

List[Dict[str, Any]]

Returns:

List of dictionaries, each containing a complete parameter configuration
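At its core, grid generation is a Cartesian product of the parameter value lists, with each combination merged with the fixed parameters; a sketch of the equivalent logic:

```python
from itertools import product

def generate_grid(parameters, fixed_params=None):
    # Cartesian product of all parameter value lists; each combination
    # is merged with the fixed (non-varying) parameters.
    names = list(parameters)
    grid = []
    for combo in product(*(parameters[name] for name in names)):
        config = dict(zip(names, combo))
        config.update(fixed_params or {})
        grid.append(config)
    return grid

grid = generate_grid(
    {"initial_assets": [1e6, 10e6], "loss_frequency": [3, 5]},
    fixed_params={"time_horizon": 10},
)
# 2 values x 2 values = 4 configurations
```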

estimate_runtime(seconds_per_run: float = 1.0) str[source]

Estimate total runtime for sweep.

Parameters:

seconds_per_run (float) – Estimated seconds per single parameter configuration

Return type:

str

Returns:

Human-readable runtime estimate

class ParameterSweeper(optimizer: BusinessOptimizer | None = None, cache_dir: str = './cache/sweeps', use_parallel: bool = True)[source]

Bases: object

Systematic parameter sweep utilities for insurance optimization.

This class provides methods for exploring the parameter space through grid search, identifying optimal regions, and comparing scenarios.

optimizer

Business optimizer instance for running optimizations

cache_dir

Directory for storing cached results

results_cache

In-memory cache of optimization results

use_parallel

Whether to use parallel processing

results_cache: Dict[str, Dict[str, Any]]
sweep(config: SweepConfig, progress_callback: Callable | None = None) DataFrame[source]

Execute parameter sweep with parallel processing.

Parameters:
  • config (SweepConfig) – Sweep configuration defining the parameter grid, fixed parameters, and metrics to track

  • progress_callback (Optional[Callable]) – Optional callback for progress updates

Return type:

DataFrame

Returns:

DataFrame containing sweep results with all parameter combinations and metrics

create_scenarios() Dict[str, SweepConfig][source]

Create pre-defined scenario configurations.

Return type:

Dict[str, SweepConfig]

Returns:

Dictionary of scenario names to SweepConfig objects

find_optimal_regions(results: DataFrame, objective: str = 'optimal_roe', constraints: Dict[str, Tuple[float, float]] | None = None, top_percentile: float = 90) Tuple[DataFrame, DataFrame][source]

Identify optimal parameter regions.

Parameters:
  • results (DataFrame) – DataFrame of sweep results

  • objective (str) – Objective metric to optimize

  • constraints (Optional[Dict[str, Tuple[float, float]]]) – Dictionary mapping metric names to (min, max) constraint tuples

  • top_percentile (float) – Percentile threshold for optimal region (e.g., 90 for top 10%)

Return type:

Tuple[DataFrame, DataFrame]

Returns:

Tuple of (optimal results DataFrame, parameter statistics DataFrame)
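A simplified, pandas-free sketch of the selection logic: apply each (min, max) constraint, then keep rows at or above the objective's percentile cutoff (the actual method operates on DataFrames and also returns parameter statistics):

```python
def find_optimal(rows, objective, constraints=None, top_percentile=90):
    # Step 1: keep rows whose metrics fall inside every (min, max) constraint.
    feasible = [
        r for r in rows
        if all(lo <= r[m] <= hi for m, (lo, hi) in (constraints or {}).items())
    ]
    if not feasible:
        return []
    # Step 2: keep rows at or above the percentile cutoff on the objective.
    values = sorted(r[objective] for r in feasible)
    cutoff = values[min(int(len(values) * top_percentile / 100), len(values) - 1)]
    return [r for r in feasible if r[objective] >= cutoff]

rows = [
    {"optimal_roe": 0.10, "ruin_probability": 0.005},
    {"optimal_roe": 0.20, "ruin_probability": 0.020},
    {"optimal_roe": 0.30, "ruin_probability": 0.001},
]
best = find_optimal(rows, "optimal_roe",
                    constraints={"ruin_probability": (0, 0.01)},
                    top_percentile=50)
```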

compare_scenarios(results: Dict[str, DataFrame], metrics: List[str] | None = None, normalize: bool = False) DataFrame[source]

Compare results across multiple scenarios.

Parameters:
  • results (Dict[str, DataFrame]) – Dictionary mapping scenario names to result DataFrames

  • metrics (Optional[List[str]]) – List of metrics to compare (default: all common metrics)

  • normalize (bool) – Whether to normalize metrics to [0, 1] range

Return type:

DataFrame

Returns:

DataFrame with scenario comparison

load_results(sweep_hash: str) DataFrame | None[source]

Load cached sweep results.

Parameters:

sweep_hash (str) – Sweep configuration hash

Return type:

Optional[DataFrame]

Returns:

Results DataFrame if found, None otherwise

export_results(results: DataFrame, output_file: str, file_format: str = 'parquet') None[source]

Export results to specified format.

Parameters:
  • results (DataFrame) – Results DataFrame

  • output_file (str) – Output file path

  • file_format (str) – Export format (‘parquet’, ‘csv’, ‘excel’, ‘hdf5’)

Return type:

None

ergodic_insurance.pareto_frontier module

Pareto frontier analysis for multi-objective optimization.

This module provides comprehensive tools for generating, analyzing, and visualizing Pareto frontiers in multi-objective optimization problems, particularly focused on insurance optimization trade-offs between ROE, risk, and costs.

class ObjectiveType(*values)[source]

Bases: Enum

Types of objectives in multi-objective optimization.

MAXIMIZE = 'maximize'
MINIMIZE = 'minimize'
class Objective(name: str, type: ObjectiveType, weight: float = 1.0, normalize: bool = True, bounds: Tuple[float, float] | None = None) None[source]

Bases: object

Definition of an optimization objective.

name

Name of the objective (e.g., ‘ROE’, ‘risk’, ‘cost’)

type

Whether to maximize or minimize this objective

weight

Weight for weighted sum method (0-1)

normalize

Whether to normalize this objective

bounds

Optional bounds for this objective as (min, max)

name: str
type: ObjectiveType
weight: float = 1.0
normalize: bool = True
bounds: Tuple[float, float] | None = None
class ParetoPoint(objectives: Dict[str, float], decision_variables: ndarray, is_dominated: bool = False, crowding_distance: float = 0.0, trade_offs: Dict[str, float] = <factory>) None[source]

Bases: object

A point on the Pareto frontier.

objectives

Dictionary of objective values

decision_variables

The decision variables that produce these objectives

is_dominated

Whether this point is dominated by another

crowding_distance

Crowding distance metric for this point

trade_offs

Trade-off ratios with neighboring points

objectives: Dict[str, float]
decision_variables: ndarray
is_dominated: bool = False
crowding_distance: float = 0.0
trade_offs: Dict[str, float]
dominates(other: ParetoPoint, objectives: List[Objective]) bool[source]

Check if this point dominates another point.

Parameters:
  • other (ParetoPoint) – Another Pareto point to compare

  • objectives (List[Objective]) – List of objectives to consider

Return type:

bool

Returns:

True if this point dominates the other
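Pareto dominance can be sketched in a few lines (objective senses passed explicitly here; the real method reads them from `Objective.type`):

```python
def dominates(a, b, objectives):
    # a dominates b if it is no worse on every objective and strictly
    # better on at least one; senses map name -> "maximize"/"minimize".
    strictly_better = False
    for name, sense in objectives.items():
        x, y = (a[name], b[name]) if sense == "maximize" else (-a[name], -b[name])
        if x < y:
            return False
        if x > y:
            strictly_better = True
    return strictly_better

objs = {"ROE": "maximize", "risk": "minimize"}
```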

class ParetoFrontier(objectives: List[Objective], objective_function: Callable, bounds: List[Tuple[float, float]], constraints: List[Dict[str, Any]] | None = None, seed: int | None = None, gpu_config: GPUConfig | None = None)[source]

Bases: object

Generator and analyzer for Pareto frontiers.

This class provides methods for generating Pareto frontiers using various algorithms and analyzing the resulting trade-offs.

frontier_points: List[ParetoPoint]
generate_weighted_sum(n_points: int = 50, method: str = 'SLSQP') List[ParetoPoint][source]

Generate Pareto frontier using weighted sum method.

Parameters:
  • n_points (int) – Number of points to generate on the frontier

  • method (str) – Optimization method to use

Return type:

List[ParetoPoint]

Returns:

List of Pareto points forming the frontier

generate_epsilon_constraint(n_points: int = 50, method: str = 'SLSQP') List[ParetoPoint][source]

Generate Pareto frontier using epsilon-constraint method.

Parameters:
  • n_points (int) – Number of points to generate

  • method (str) – Optimization method to use

Return type:

List[ParetoPoint]

Returns:

List of Pareto points forming the frontier

generate_evolutionary(n_generations: int = 100, population_size: int = 50) List[ParetoPoint][source]

Generate Pareto frontier using evolutionary algorithm.

Parameters:
  • n_generations (int) – Number of generations for evolution

  • population_size (int) – Size of population in each generation

Return type:

List[ParetoPoint]

Returns:

List of Pareto points forming the frontier

calculate_hypervolume(reference_point: Dict[str, float] | None = None) float[source]

Calculate hypervolume indicator for the Pareto frontier.

Parameters:

reference_point (Optional[Dict[str, float]]) – Reference point for hypervolume calculation

Return type:

float

Returns:

Hypervolume value
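For two minimized objectives, the hypervolume is the area dominated by the frontier and bounded by the reference point; a sketch of the 2-D case (assumes a nondominated input set, both objectives minimized):

```python
def hypervolume_2d(points, reference):
    # Sweep the nondominated points in ascending order of the first
    # objective; each point contributes a rectangle up to the next point
    # (or the reference) in x, and up to the reference in y.
    pts = sorted(points)
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        next_x = pts[i + 1][0] if i + 1 < len(pts) else reference[0]
        hv += (next_x - x) * (reference[1] - y)
    return hv

hv = hypervolume_2d([(1, 3), (2, 2), (3, 1)], reference=(4, 4))
```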

get_knee_points(n_knees: int = 1, method: str = 'perpendicular_distance') List[ParetoPoint][source]

Find knee points on the Pareto frontier.

Knee points represent points of maximum curvature on the frontier, where small improvements in one objective require large sacrifices in others (the diminishing-returns inflection point).

Three methods are supported:

  • "perpendicular_distance" (default): Finds the point(s) with maximum perpendicular distance from the line (2-D) or hyperplane (n-D) connecting the extreme points of the frontier in normalized objective space (Das, 1999).

  • "angle": Finds the point(s) where adjacent frontier segments form the sharpest angle, indicating maximum curvature (Branke et al., 2004). Most reliable for bi-objective frontiers.

  • "topsis": Finds the point(s) closest to the ideal (utopia) point in normalized space (Hwang & Yoon, 1981). This was the default in earlier versions.

Parameters:
  • n_knees (int) – Number of knee points to identify.

  • method (str) – Knee detection method. One of "perpendicular_distance", "angle", or "topsis".

Return type:

List[ParetoPoint]

Returns:

List of knee points.

Raises:

ValueError – If method is not a recognised method name.

References

Das, I. (1999). On characterizing the ‘knee’ of the Pareto curve based on Normal-Boundary Intersection. Structural Optimization, 18(2-3), 107-115.

Branke, J., Deb, K., Dierolf, H., & Osswald, M. (2004). Finding knees in multi-objective optimization. PPSN VIII, Springer, 722-731.

Hwang, C.L., & Yoon, K. (1981). Multiple Attribute Decision Making. Springer.
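In the bi-objective case, the default perpendicular-distance method reduces to finding the point farthest from the chord joining the frontier's extreme points; a 2-D sketch (assumes points are ordered along the frontier):

```python
import math

def knee_point(points):
    # The knee is the interior frontier point with maximum perpendicular
    # distance from the chord joining the two extreme points.
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    chord = math.hypot(dx, dy)

    def distance(p):
        return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / chord

    return max(points[1:-1], key=distance)

knee = knee_point([(0.0, 1.0), (0.2, 0.35), (0.5, 0.2), (1.0, 0.0)])
```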

to_dataframe() DataFrame[source]

Convert frontier points to pandas DataFrame.

Return type:

DataFrame

Returns:

DataFrame with objectives and decision variables

ergodic_insurance.performance_optimizer module

Performance optimization module for Monte Carlo simulations.

This module provides tools and strategies to optimize the performance of Monte Carlo simulations, targeting 100K simulations in under 60 seconds on budget hardware (4-core CPU, 8GB RAM).

Key features:
  • Execution profiling and bottleneck identification

  • Vectorized operations for loss generation and insurance calculations

  • Smart caching for repeated calculations

  • Memory optimization for large-scale simulations

  • Integration with parallel execution framework

Example

>>> from ergodic_insurance.performance_optimizer import PerformanceOptimizer
>>> from ergodic_insurance.monte_carlo import MonteCarloEngine
>>>
>>> optimizer = PerformanceOptimizer()
>>> engine = MonteCarloEngine(config=config)
>>>
>>> # Profile execution
>>> profile_results = optimizer.profile_execution(engine, n_simulations=1000)
>>> print(profile_results.bottlenecks)
>>>
>>> # Apply optimizations
>>> optimized_engine = optimizer.optimize_engine(engine)
>>> results = optimized_engine.run()

Google-style docstrings are used throughout for Sphinx documentation.

jit(*args, **kwargs)[source]
class ProfileResult(total_time: float, bottlenecks: List[str] = <factory>, function_times: Dict[str, float] = <factory>, memory_usage: float = 0.0, recommendations: List[str] = <factory>) None[source]

Bases: object

Results from performance profiling.

total_time

Total execution time in seconds

bottlenecks

List of performance bottlenecks identified

function_times

Dictionary mapping function names to execution times

memory_usage

Peak memory usage in MB

recommendations

List of optimization recommendations

total_time: float
bottlenecks: List[str]
function_times: Dict[str, float]
memory_usage: float = 0.0
recommendations: List[str]
summary() str[source]

Generate a summary of profiling results.

Return type:

str

Returns:

Formatted summary string.

class OptimizationConfig(enable_vectorization: bool = True, enable_caching: bool = True, cache_size: int = 1000, enable_numba: bool = True, memory_limit_mb: float = 4000.0, chunk_size: int = 10000) None[source]

Bases: object

Configuration for performance optimization.

enable_vectorization

Use vectorized operations

enable_caching

Use smart caching

cache_size

Maximum cache entries

enable_numba

Use Numba JIT compilation

memory_limit_mb

Memory usage limit in MB

chunk_size

Chunk size for batch processing

enable_vectorization: bool = True
enable_caching: bool = True
cache_size: int = 1000
enable_numba: bool = True
memory_limit_mb: float = 4000.0
chunk_size: int = 10000
class SmartCache(max_size: int = 1000)[source]

Bases: object

Smart caching system for repeated calculations.

Provides intelligent caching with memory management and hit rate tracking. Uses a heap-based eviction strategy for O(log N) amortized eviction instead of O(N) scanning.

cache: Dict[Tuple, Any]
access_counts: Dict[Tuple, int]
get(key: Tuple) Any | None[source]

Get value from cache.

Parameters:

key (Tuple) – Cache key (must be hashable).

Return type:

Optional[Any]

Returns:

Cached value or None if not found.

set(key: Tuple, value: Any) None[source]

Set value in cache with O(log N) amortized eviction.

Parameters:
  • key (Tuple) – Cache key (must be hashable).

  • value (Any) – Value to cache.

Return type:

None

property hit_rate: float

Calculate cache hit rate.

Returns:

Hit rate as percentage.

clear() None[source]

Clear the cache.

Return type:

None
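The heap-based eviction idea behind SmartCache can be sketched with `heapq` and lazy invalidation: stale heap entries are skipped at pop time rather than updated in place, giving O(log N) amortized eviction instead of an O(N) scan (a simplified least-frequently-used variant; the actual eviction policy may differ):

```python
import heapq

class HeapEvictCache:
    # Simplified LFU-style cache: (access_count, key) pairs live in a heap;
    # entries whose count is out of date are discarded lazily when popped.
    def __init__(self, max_size=1000):
        self.max_size = max_size
        self.cache = {}
        self.counts = {}
        self.heap = []  # (access_count, key), possibly stale

    def get(self, key):
        if key in self.cache:
            self.counts[key] += 1
            heapq.heappush(self.heap, (self.counts[key], key))
            return self.cache[key]
        return None

    def set(self, key, value):
        while len(self.cache) >= self.max_size and key not in self.cache:
            count, victim = heapq.heappop(self.heap)
            # Skip stale entries: already evicted, or count has moved on.
            if victim in self.cache and self.counts[victim] == count:
                del self.cache[victim]
                del self.counts[victim]
        self.cache[key] = value
        self.counts[key] = self.counts.get(key, 1)
        heapq.heappush(self.heap, (self.counts[key], key))
```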

class VectorizedOperations[source]

Bases: object

Vectorized operations for performance optimization.

static calculate_growth_rates(final_assets: ndarray, initial_assets: float, n_years: float) ndarray[source]

Calculate growth rates using vectorized operations.

Parameters:
  • final_assets (ndarray) – Array of final asset values.

  • initial_assets (float) – Initial asset value.

  • n_years (float) – Number of years.

Return type:

ndarray

Returns:

Array of growth rates.
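A scalar sketch of one common convention, the annualized log (time-average) growth rate; the vectorized method may use a different convention:

```python
import math

def log_growth_rates(final_assets, initial_assets, n_years):
    # Annualized log growth per path: g = ln(final / initial) / n.
    return [math.log(f / initial_assets) / n_years for f in final_assets]

rates = log_growth_rates([math.e ** 2], initial_assets=1.0, n_years=2.0)
```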

static apply_insurance_vectorized(losses: ndarray, attachment: float, limit: float) Tuple[ndarray, ndarray][source]

Apply insurance coverage using vectorized operations.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • attachment (float) – Insurance attachment point.

  • limit (float) – Insurance limit.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (retained_losses, recovered_amounts).
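Per loss, the layer computation reduces to a standard excess-of-loss formula; a scalar sketch of what the vectorized version applies elementwise:

```python
def apply_insurance(loss, attachment, limit):
    # The recovery is the slice of the loss between the attachment point
    # and attachment + limit; everything else is retained.
    recovered = min(max(loss - attachment, 0.0), limit)
    return loss - recovered, recovered

# Loss of 15 against a layer of 8 excess of 5: recover 8, retain 7.
assert apply_insurance(15.0, attachment=5.0, limit=8.0) == (7.0, 8.0)
# Loss below the attachment point is fully retained.
assert apply_insurance(3.0, attachment=5.0, limit=8.0) == (3.0, 0.0)
```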

static calculate_premiums_vectorized(limits: ndarray, rates: ndarray) ndarray[source]

Calculate premiums using vectorized operations.

Parameters:
  • limits (ndarray) – Array of insurance limits.

  • rates (ndarray) – Array of premium rates.

Return type:

ndarray

Returns:

Array of premium amounts.

class PerformanceOptimizer(config: OptimizationConfig | None = None)[source]

Bases: object

Main performance optimization engine.

Provides profiling, optimization, and monitoring capabilities for Monte Carlo simulations.

profile_execution(func: Callable, *args, **kwargs) ProfileResult[source]

Profile function execution to identify bottlenecks.

Parameters:
  • func (Callable) – Function to profile.

  • *args – Positional arguments for function.

  • **kwargs – Keyword arguments for function.

Return type:

ProfileResult

Returns:

ProfileResult with profiling data.

optimize_loss_generation(losses: List[float], batch_size: int = 10000) ndarray[source]

Optimize loss generation using vectorization.

Parameters:
  • losses (List[float]) – List of loss values.

  • batch_size (int) – Size of processing batches.

Return type:

ndarray

Returns:

Optimized loss array.

optimize_insurance_calculation(losses: ndarray, layers: List[Tuple[float, float, float]]) Dict[str, Any][source]

Optimize insurance calculations using vectorization and caching.

Parameters:
  • losses (ndarray) – Array of loss amounts.

  • layers (List[Tuple[float, float, float]]) – Insurance layer definitions, one tuple of three floats per layer.

Return type:

Dict[str, Any]

Returns:

Dictionary with optimized results.

optimize_memory_usage() Dict[str, Any][source]

Optimize memory usage for large simulations.

Return type:

Dict[str, Any]

Returns:

Dictionary with memory optimization metrics.

get_optimization_summary() str[source]

Get summary of optimization status.

Return type:

str

Returns:

Formatted optimization summary.

cached_calculation(cache_size: int = 128)[source]

Decorator for caching expensive calculations.

Parameters:

cache_size (int) – Maximum cache size.

Returns:

Decorated function with caching.

profile_function(func: Callable) Callable[source]

Decorator to profile function execution.

Parameters:

func (Callable) – Function to profile.

Return type:

Callable

Returns:

Decorated function with profiling.

ergodic_insurance.progress_monitor module

Lightweight progress monitoring for Monte Carlo simulations.

This module provides efficient progress tracking with minimal performance overhead, including ETA estimation, convergence summaries, and console output.

class ProgressStats(current_iteration: int, total_iterations: int, start_time: float, elapsed_time: float, estimated_time_remaining: float, iterations_per_second: float, convergence_checks: List[Tuple[int, float]] = <factory>, converged: bool = False, converged_at: int | None = None) None[source]

Bases: object

Statistics for progress monitoring.

current_iteration: int
total_iterations: int
start_time: float
elapsed_time: float
estimated_time_remaining: float
iterations_per_second: float
convergence_checks: List[Tuple[int, float]]
converged: bool = False
converged_at: int | None = None
summary() str[source]

Generate progress summary.

Return type:

str

class ProgressMonitor(total_iterations: int, check_intervals: List[int] | None = None, update_frequency: int = 1000, show_console: bool = True, convergence_threshold: float = 1.1)[source]

Bases: object

Lightweight progress monitor for Monte Carlo simulations.

Provides real-time progress tracking with minimal performance overhead (<1%). Includes ETA estimation, convergence monitoring, and console output.

iteration_times: List[float]
convergence_checks: List[Tuple[int, float]]
converged_at: int | None
update(iteration: int, convergence_value: float | None = None) bool[source]

Update progress and check for convergence.

Parameters:
  • iteration (int) – Current iteration number

  • convergence_value (Optional[float]) – Optional convergence metric (e.g., R-hat)

Return type:

bool

Returns:

True if should continue, False if converged and should stop
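The ETA behind `estimated_time_remaining` is typically a straight-line extrapolation of the observed rate; a sketch:

```python
def eta_seconds(completed, total, elapsed):
    # Straight-line extrapolation: assume the observed rate persists.
    if completed == 0:
        return float("inf")
    rate = completed / elapsed  # iterations per second
    return (total - completed) / rate

# 2500 of 10000 iterations done in 5 s -> 500 it/s -> 15 s remaining.
assert eta_seconds(2500, total=10_000, elapsed=5.0) == 15.0
```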

get_stats() ProgressStats[source]

Get current progress statistics.

Return type:

ProgressStats

Returns:

ProgressStats object with current metrics

generate_convergence_summary() Dict[str, Any][source]

Generate detailed convergence summary.

Return type:

Dict[str, Any]

Returns:

Dictionary with convergence analysis results

finish() ProgressStats[source]

Finish progress monitoring and return final stats.

Return type:

ProgressStats

Returns:

Final progress statistics

get_overhead_percentage() float[source]

Get the monitoring overhead as a percentage of total elapsed time.

Return type:

float

Returns:

Overhead percentage (0-100)

reset() None[source]

Reset the monitor to initial state.

Return type:

None

__enter__() ProgressMonitor[source]

Enter context manager.

Return type:

ProgressMonitor

__exit__(exc_type, exc_val, exc_tb) None[source]

Exit context manager and finish monitoring.

Return type:

None

finalize() None[source]

Finalize progress monitoring and log summary.

Return type:

None

ergodic_insurance.result_aggregator module

Advanced result aggregation framework for Monte Carlo simulations.

This module provides comprehensive aggregation capabilities for simulation results, supporting hierarchical aggregation, time-series analysis, and memory-efficient processing of large datasets.

class AggregationConfig(percentiles: List[float] = <factory>, calculate_moments: bool = True, calculate_distribution_fit: bool = False, chunk_size: int = 10000, cache_results: bool = True, precision: int = 6) None[source]

Bases: object

Configuration for result aggregation.

percentiles: List[float]
calculate_moments: bool = True
calculate_distribution_fit: bool = False
chunk_size: int = 10000
cache_results: bool = True
precision: int = 6
class BaseAggregator(config: AggregationConfig | None = None)[source]

Bases: ABC

Abstract base class for result aggregation.

Provides common functionality for all aggregation types.

abstractmethod aggregate(data: ndarray) Dict[str, Any][source]

Perform aggregation on data.

Parameters:

data (ndarray) – Input data array

Return type:

Dict[str, Any]

Returns:

Dictionary of aggregated statistics

class ResultAggregator(config: AggregationConfig | None = None, custom_functions: Dict[str, Callable] | None = None)[source]

Bases: BaseAggregator

Main aggregator for simulation results.

Provides comprehensive aggregation of Monte Carlo simulation results with support for custom aggregation functions.

aggregate(data: ndarray) Dict[str, Any][source]

Aggregate simulation results.

Parameters:

data (ndarray) – Array of simulation results

Return type:

Dict[str, Any]

Returns:

Dictionary containing all aggregated statistics

class TimeSeriesAggregator(config: AggregationConfig | None = None, window_size: int = 12)[source]

Bases: BaseAggregator

Aggregator for time-series data.

Supports annual, cumulative, and rolling window aggregations.

aggregate(data: ndarray) Dict[str, Any][source]

Aggregate time-series data.

Parameters:

data (ndarray) – 2D array where rows are time periods and columns are simulations

Return type:

Dict[str, Any]

Returns:

Dictionary of time-series aggregations

class PercentileTracker(percentiles: List[float], max_samples: int = 100000, seed: int | None = None)[source]

Bases: object

Efficient percentile tracking for streaming data.

Uses the t-digest algorithm (Dunning & Ertl, 2019) for memory-efficient percentile calculation on large datasets. The t-digest provides bounded memory usage and high accuracy, especially at tail percentiles relevant to insurance risk metrics (VaR, TVaR).

update(values: ndarray) None[source]

Update tracker with new values.

Parameters:

values (ndarray) – New values to add

Return type:

None

get_percentiles() Dict[str, float][source]

Get current percentile estimates.

Return type:

Dict[str, float]

Returns:

Dictionary of percentile values keyed as ‘pNN’.

merge(other: PercentileTracker) None[source]

Merge another tracker into this one.

Combines t-digest sketches from parallel simulation chunks without loss of accuracy.

Parameters:

other (PercentileTracker) – Another PercentileTracker to merge into this one.

Return type:

None

reset() None[source]

Reset tracker state.

Return type:

None

class ResultExporter[source]

Bases: object

Export aggregated results to various formats.

static to_csv(results: Dict[str, Any], filepath: Path, index_label: str = 'metric') None[source]

Export results to CSV file.

Parameters:
  • results (Dict[str, Any]) – Aggregated results dictionary

  • filepath (Path) – Output file path

  • index_label (str) – Label for index column

Return type:

None

static to_json(results: Dict[str, Any], filepath: Path, indent: int = 2) None[source]

Export results to JSON file.

Parameters:
  • results (Dict[str, Any]) – Aggregated results dictionary

  • filepath (Path) – Output file path

  • indent (int) – JSON indentation level

Return type:

None

static to_hdf5(results: Dict[str, Any], filepath: Path, compression: str = 'gzip') None[source]

Export results to HDF5 file.

Parameters:
  • results (Dict[str, Any]) – Aggregated results dictionary

  • filepath (Path) – Output file path

  • compression (str) – Compression algorithm to use

Return type:

None

class HierarchicalAggregator(levels: List[str], config: AggregationConfig | None = None)[source]

Bases: object

Aggregator for hierarchical data structures.

Supports multi-level aggregation across different dimensions (e.g., scenario -> year -> simulation).

aggregate_hierarchy(data: Dict[str, Any], level: int = 0) Dict[str, Any][source]

Recursively aggregate hierarchical data.

Parameters:
  • data (Dict[str, Any]) – Hierarchical data dictionary

  • level (int) – Current level in hierarchy

Return type:

Dict[str, Any]

Returns:

Aggregated results at all levels
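
The recursion behind aggregate_hierarchy can be sketched in a few lines: walk nested dicts and reduce each leaf list into summary statistics. The statistics chosen here (mean, max) are illustrative stand-ins for whatever the AggregationConfig specifies.

```python
import statistics

def aggregate_hierarchy(data):
    """Recursively aggregate a nested dict whose leaves are value lists."""
    if isinstance(data, dict):
        return {key: aggregate_hierarchy(value) for key, value in data.items()}
    # Leaf: a list of simulation values -> reduce to summary statistics.
    return {"mean": statistics.mean(data), "max": max(data)}

# scenario -> year -> simulation values, as in the docstring example
nested = {"scenario_a": {"year_1": [1.0, 3.0], "year_2": [2.0, 4.0]}}
result = aggregate_hierarchy(nested)
```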

ergodic_insurance.risk_metrics module

Comprehensive risk metrics suite for tail risk analysis.

This module provides industry-standard risk metrics including VaR, TVaR, PML, and Expected Shortfall to quantify tail risk and support insurance optimization decisions.

class RiskMetricsResult(metric_name: str, value: float, confidence_level: float | None = None, confidence_interval: Tuple[float, float] | None = None, metadata: Dict[str, Any] | None = None) None[source]

Bases: object

Container for risk metric calculation results.

metric_name: str
value: float
confidence_level: float | None = None
confidence_interval: Tuple[float, float] | None = None
metadata: Dict[str, Any] | None = None
class RiskMetrics(losses: ndarray, weights: ndarray | None = None, seed: int | None = None, convention: Literal['loss', 'return'] = 'loss')[source]

Bases: object

Calculate comprehensive risk metrics for loss distributions.

This class provides industry-standard risk metrics for analyzing tail risk in insurance and financial applications.

Sign convention: By default, input values are treated as losses (positive values = losses, negative values = gains). All percentile-based metrics (VaR, TVaR, PML, Expected Shortfall) are computed on this loss distribution.

If your data represents returns (positive values = gains), pass convention="return" and the class will automatically negate the input internally so that the loss-based metrics remain correct.

A heuristic check is performed on construction: if more than 80% of finite values are negative under the "loss" convention, a warning is emitted because the data likely represents returns rather than losses.

var(confidence: float = 0.99, method: str = 'empirical', bootstrap_ci: bool = False, n_bootstrap: int = 1000) float | RiskMetricsResult[source]
Overloads:
  • self, confidence (float), method (str), bootstrap_ci (Literal[False]), n_bootstrap (int) → float

  • self, confidence (float), method (str), bootstrap_ci (Literal[True]), n_bootstrap (int) → RiskMetricsResult

Calculate Value at Risk (VaR).

VaR represents the loss amount that will not be exceeded with a given confidence level over a specific time period.

Parameters:
  • confidence (float) – Confidence level (e.g., 0.99 for 99% VaR).

  • method (str) – ‘empirical’ or ‘parametric’ (assumes normal distribution).

  • bootstrap_ci (bool) – Deprecated. Use var_with_ci() instead. When True, delegates to var_with_ci() and returns a RiskMetricsResult. Will be removed in a future release.

  • n_bootstrap (int) – Deprecated. Use var_with_ci(n_bootstrap=...) instead.

Returns:

VaR value as a float. (When the deprecated bootstrap_ci flag is True, returns RiskMetricsResult for backward compatibility.)

Raises:

ValueError – If confidence level is not in (0, 1).
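
Conceptually, the empirical method reduces to an order statistic of the loss sample. A minimal sketch (the package's exact interpolation rule may differ):

```python
def empirical_var(losses, confidence=0.99):
    """Loss not exceeded with the given confidence, as an order statistic."""
    ordered = sorted(losses)
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[idx]

# 95% VaR of losses 1..100
var_95 = empirical_var(list(range(1, 101)), confidence=0.95)
```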

var_with_ci(confidence: float = 0.99, method: str = 'empirical', n_bootstrap: int = 1000) RiskMetricsResult[source]

Calculate Value at Risk (VaR) with bootstrap confidence intervals.

Parameters:
  • confidence (float) – Confidence level (e.g., 0.99 for 99% VaR).

  • method (str) – ‘empirical’ or ‘parametric’ (assumes normal distribution).

  • n_bootstrap (int) – Number of bootstrap samples for CI calculation.

Return type:

RiskMetricsResult

Returns:

RiskMetricsResult containing the VaR value and confidence interval.

Raises:

ValueError – If confidence level is not in (0, 1).

tvar(confidence: float = 0.99, var_value: float | None = None) float[source]

Calculate Tail Value at Risk (TVaR/CVaR).

TVaR represents the expected loss given that the loss exceeds VaR. It’s a coherent risk measure that satisfies sub-additivity.

Parameters:
  • confidence (float) – Confidence level for VaR threshold.

  • var_value (Optional[float]) – Pre-calculated VaR value (if None, will calculate).

Return type:

float

Returns:

TVaR value as a float.
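
The definition — expected loss given that the loss reaches the VaR threshold — can be sketched directly. Tie handling here uses >=, which may differ from the implementation:

```python
def empirical_tvar(losses, confidence=0.99):
    """Average of the losses at or beyond the empirical VaR threshold."""
    ordered = sorted(losses)
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    var_value = ordered[idx]
    tail = [x for x in ordered if x >= var_value]
    return sum(tail) / len(tail)

# 95% TVaR of losses 1..100: mean of the tail {96, ..., 100}
tvar_95 = empirical_tvar(list(range(1, 101)), confidence=0.95)
```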

tvar_with_ci(confidence: float = 0.99, n_bootstrap: int = 1000) RiskMetricsResult[source]

Calculate Tail Value at Risk (TVaR/CVaR) with bootstrap confidence intervals.

Parameters:
  • confidence (float) – Confidence level for VaR threshold.

  • n_bootstrap (int) – Number of bootstrap samples for CI calculation.

Return type:

RiskMetricsResult

Returns:

RiskMetricsResult containing the TVaR value and confidence interval.

expected_shortfall(threshold: float) float[source]

Calculate Expected Shortfall (ES) above a threshold.

ES is the average of all losses that exceed a given threshold. Delegates to tvar() with a pre-computed VaR value.

Parameters:

threshold (float) – Loss threshold.

Return type:

float

Returns:

Expected shortfall value, or 0.0 if no losses exceed threshold.

pml(return_period: int) float[source]

Calculate Probable Maximum Loss (PML) for a given return period.

PML represents the loss amount expected to be equaled or exceeded once every ‘return_period’ years on average.

Parameters:

return_period (int) – Return period in years (e.g., 100 for 100-year event).

Return type:

float

Returns:

PML value.

Raises:

ValueError – If return period is less than 1.
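
PML is essentially VaR restated through a return period: a 1-in-N-year loss corresponds to the quantile at confidence 1 - 1/N. A sketch under that assumption:

```python
def pml(losses, return_period):
    """Loss equaled or exceeded once every `return_period` years on average."""
    if return_period < 1:
        raise ValueError("return period must be at least 1")
    confidence = 1.0 - 1.0 / return_period
    ordered = sorted(losses)
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[idx]

# 100-year PML of annual losses 1..100 is the 99% quantile
hundred_year = pml(list(range(1, 101)), 100)
```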

conditional_tail_expectation(confidence: float = 0.99) float[source]

Calculate Conditional Tail Expectation (CTE).

CTE is similar to TVaR but uses a slightly different calculation method. It’s the expected value of losses that exceed the VaR threshold.

Parameters:

confidence (float) – Confidence level.

Return type:

float

Returns:

CTE value.

maximum_drawdown() float[source]

Calculate Maximum Drawdown on cumulative losses.

Computes the largest peak-to-trough decline in the cumulative sum of losses, not in portfolio value. This measures the worst stretch of accumulated losses and is useful for sizing reserves. It is not the same as the standard portfolio-return drawdown commonly used in asset management.

Note

When convention="return" was used at construction, the stored losses are the negated returns. The drawdown is therefore computed on the cumulative negated returns, which represents the cumulative loss experienced by the portfolio.

Return type:

float

Returns:

Maximum drawdown value (non-negative).
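
One plausible reading of this cumulative-loss drawdown: track the running minimum of the cumulative loss path and measure the largest rise above it, so gains (negative losses) interrupt the accumulation. This is a sketch of the idea, not the package's exact algorithm:

```python
def max_drawdown(losses):
    """Largest rise of cumulative losses above their running minimum."""
    cum = 0.0
    trough = 0.0   # running minimum of the cumulative loss path
    worst = 0.0
    for loss in losses:
        cum += loss
        trough = min(trough, cum)
        worst = max(worst, cum - trough)
    return worst

# A gain of 3 interrupts the run; the worst stretch accumulates 8 in losses.
dd = max_drawdown([5.0, -3.0, 4.0, 2.0, -1.0])
```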

economic_capital(confidence: float = 0.999, expected_loss: float | None = None) float[source]

Calculate Economic Capital requirement.

Economic capital is the amount of capital needed to cover unexpected losses at a given confidence level.

Parameters:
  • confidence (float) – Confidence level (typically 99.9% for regulatory).

  • expected_loss (Optional[float]) – Expected loss (if None, will calculate mean).

Return type:

float

Returns:

Economic capital requirement.

return_period_curve(return_periods: ndarray | None = None) Tuple[ndarray, ndarray][source]

Generate return period curve (exceedance probability curve).

Parameters:

return_periods (Optional[ndarray]) – Array of return periods to calculate. If None, uses standard periods.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (return_periods, loss_values).

tail_index(threshold: float | None = None) float[source]

Estimate the Pareto tail index alpha via Hill’s method.

Computes the Pareto shape parameter alpha (= 1 / gamma), where gamma is the extreme value index from Hill (1975). Larger alpha means thinner tails; smaller alpha means heavier tails.

Note

The classical Hill estimator returns gamma = (1/k) * sum(ln(X_i/u)). This method returns its reciprocal, alpha = k / sum(ln(X_i/u)), which is the maximum-likelihood estimate of the Pareto shape parameter. To recover the Hill gamma, compute 1 / tail_index().

Parameters:

threshold (Optional[float]) – Threshold for tail definition (if None, uses 90th percentile).

Return type:

float

Returns:

Estimated Pareto shape parameter alpha (= 1 / Hill gamma).
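
The note above translates to a few lines of arithmetic. This sketch computes alpha = k / sum(ln(X_i/u)) over the k exceedances of a threshold u:

```python
import math

def pareto_alpha(values, threshold):
    """Reciprocal Hill estimate: alpha = k / sum(ln(X_i / u))."""
    logs = [math.log(x / threshold) for x in values if x > threshold]
    return len(logs) / sum(logs)

# Exceedances e and e^2 over u=1 give Hill gamma = (1+2)/2 = 1.5,
# so alpha = 1/gamma = 2/3.
alpha = pareto_alpha([math.e, math.e**2], 1.0)
```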

risk_adjusted_metrics(returns: ndarray | None = None, risk_free_rate: float = 0.02) Dict[str, float][source]

Calculate risk-adjusted return metrics.

When returns is None the method derives returns as -self.losses. Because the "return" convention negates the input on construction, -self.losses correctly recovers the original return values regardless of which convention was used.

Parameters:
  • returns (Optional[ndarray]) – Array of returns (if None, uses negative of losses).

  • risk_free_rate (float) – Risk-free rate for Sharpe ratio calculation.

Return type:

Dict[str, float]

Returns:

Dictionary of risk-adjusted metrics.

coherence_test() Dict[str, bool][source]

Test coherence properties of risk measures.

A coherent risk measure satisfies:
  • Monotonicity

  • Sub-additivity

  • Positive homogeneity

  • Translation invariance

Return type:

Dict[str, bool]

Returns:

Dictionary indicating which properties are satisfied.

summary_statistics() Dict[str, float][source]

Calculate comprehensive summary statistics.

Return type:

Dict[str, float]

Returns:

Dictionary of summary statistics.

plot_distribution(bins: int = 50, show_metrics: bool = True, confidence_levels: List[float] | None = None, figsize: Tuple[int, int] = (12, 8)) Figure[source]

Plot loss distribution with risk metrics overlay.

Parameters:
  • bins (int) – Number of bins for histogram.

  • show_metrics (bool) – Whether to show VaR and TVaR lines.

  • confidence_levels (Optional[List[float]]) – Confidence levels for metrics to show.

  • figsize (Tuple[int, int]) – Figure size.

Return type:

Figure

Returns:

Matplotlib figure object.

compare_risk_metrics(scenarios: Dict[str, ndarray], confidence_levels: List[float] | None = None) DataFrame[source]

Compare risk metrics across multiple scenarios.

Parameters:
  • scenarios (Dict[str, ndarray]) – Dictionary mapping scenario names to loss arrays.

  • confidence_levels (Optional[List[float]]) – Confidence levels to evaluate.

Return type:

DataFrame

Returns:

DataFrame with comparative metrics.

class ROEAnalyzer(roe_series: ndarray, equity_series: ndarray | None = None)[source]

Bases: object

Comprehensive ROE analysis framework.

This class provides specialized metrics and analysis tools for Return on Equity (ROE) calculations, including time-weighted averages, component breakdowns, and volatility analysis.

time_weighted_average() float[source]

Calculate time-weighted average ROE using geometric mean.

Time-weighted average gives equal weight to each period regardless of the equity level, providing a measure of consistent performance.

Return type:

float

Returns:

Time-weighted average ROE.
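
The geometric-mean formula can be written out directly. A sketch, assuming a plain series of per-period ROE values:

```python
def time_weighted_average(roe_series):
    """Geometric mean of growth factors, expressed back as a rate."""
    growth = 1.0
    for r in roe_series:
        growth *= (1.0 + r)
    # n-th root of cumulative growth, minus one
    return growth ** (1.0 / len(roe_series)) - 1.0

avg = time_weighted_average([0.10, -0.05, 0.08])
```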

equity_weighted_average() float[source]

Calculate equity-weighted average ROE.

Equity-weighted average gives more weight to periods with higher equity levels, reflecting the actual dollar impact.

Return type:

float

Returns:

Equity-weighted average ROE.

rolling_statistics(window: int) Dict[str, ndarray][source]

Calculate rolling window statistics for ROE.

Parameters:

window (int) – Window size in periods.

Return type:

Dict[str, ndarray]

Returns:

Dictionary with rolling mean, std, min, max arrays.

volatility_metrics() Dict[str, float][source]

Calculate comprehensive volatility metrics for ROE.

Return type:

Dict[str, float]

Returns:

Dictionary with volatility measures.

performance_ratios(risk_free_rate: float = 0.02) Dict[str, float][source]

Calculate performance ratios for ROE.

Parameters:

risk_free_rate (float) – Risk-free rate for Sharpe/Sortino calculations.

Return type:

Dict[str, float]

Returns:

Dictionary with performance ratios.

distribution_analysis() Dict[str, float][source]

Analyze the distribution of ROE values.

Return type:

Dict[str, float]

Returns:

Dictionary with distribution statistics.

stability_analysis(periods: List[int] | None = None) Dict[str, Any][source]

Analyze ROE stability across different time periods.

Parameters:

periods (Optional[List[int]]) – List of period lengths to analyze (default: [1, 3, 5, 10]).

Return type:

Dict[str, Any]

Returns:

Dictionary with stability metrics for each period.

ergodic_insurance.ruin_probability module

Ruin probability analysis for insurance optimization.

This module provides specialized classes and methods for analyzing bankruptcy and ruin probabilities in insurance scenarios.

class RuinProbabilityConfig(time_horizons: List[int] = <factory>, n_simulations: int = 10000, min_assets_threshold: float = 1000000, min_equity_threshold: float = 0.0, debt_service_coverage_ratio: float = 1.25, consecutive_negative_periods: int = 3, early_stopping: bool = True, parallel: bool = True, n_workers: int | None = None, seed: int | None = None, n_bootstrap: int = 1000, bootstrap_confidence_level: float = 0.95) None[source]

Bases: object

Configuration for ruin probability analysis.

time_horizons: List[int]
n_simulations: int = 10000
min_assets_threshold: float = 1000000
min_equity_threshold: float = 0.0
debt_service_coverage_ratio: float = 1.25
consecutive_negative_periods: int = 3
early_stopping: bool = True
parallel: bool = True
n_workers: int | None = None
seed: int | None = None
n_bootstrap: int = 1000
bootstrap_confidence_level: float = 0.95
class RuinProbabilityResults(time_horizons: ndarray, ruin_probabilities: ndarray, confidence_intervals: ndarray, bankruptcy_causes: Dict[str, ndarray], survival_curves: ndarray, execution_time: float, n_simulations: int, convergence_achieved: bool, mid_year_ruin_count: int = 0, ruin_month_distribution: Dict[int, int] | None = None) None[source]

Bases: object

Results from ruin probability analysis.

time_horizons

Array of time horizons analyzed (in years).

ruin_probabilities

Probability of ruin at each time horizon.

confidence_intervals

Bootstrap confidence intervals for probabilities.

bankruptcy_causes

Distribution of bankruptcy causes by horizon.

survival_curves

Survival probability curves over time.

execution_time

Total execution time in seconds.

n_simulations

Number of simulations run.

convergence_achieved

Whether convergence criteria were met.

mid_year_ruin_count

Number of simulations with mid-year ruin (Issue #279).

ruin_month_distribution

Distribution of ruin events by month (0-11).

time_horizons: ndarray
ruin_probabilities: ndarray
confidence_intervals: ndarray
bankruptcy_causes: Dict[str, ndarray]
survival_curves: ndarray
execution_time: float
n_simulations: int
convergence_achieved: bool
mid_year_ruin_count: int = 0
ruin_month_distribution: Dict[int, int] | None = None
summary() str[source]

Generate summary report.

Return type:

str

class RuinProbabilityAnalyzer(manufacturer, loss_generator, insurance_program, config)[source]

Bases: object

Analyzer for ruin probability calculations.

analyze_ruin_probability(config: RuinProbabilityConfig | None = None) RuinProbabilityResults[source]

Analyze ruin probability across multiple time horizons.

Parameters:

config (Optional[RuinProbabilityConfig]) – Configuration for analysis

Return type:

RuinProbabilityResults

Returns:

RuinProbabilityResults with analysis results

ergodic_insurance.safe_pickle module

Safe pickle serialization with HMAC integrity validation.

This module provides HMAC-signed pickle operations to prevent arbitrary code execution from tampered cache files. All file-based pickle operations in the codebase should use these functions instead of raw pickle.load/pickle.dump.

The HMAC key is stored in a .pickle_hmac_key file within the cache directory (or a default location). Files written with safe_dump can only be loaded by safe_load if the HMAC signature matches, preventing deserialization of untrusted data.

Also provides deterministic_hash() as a replacement for Python’s non-deterministic built-in hash() function.

class RestrictedUnpickler(file, *, fix_imports=True, encoding='ASCII', errors='strict', buffers=())[source]

Bases: Unpickler

Unpickler with a class allowlist to prevent arbitrary code execution.

Defense-in-depth: even if HMAC verification is bypassed (e.g. key compromise), only classes from explicitly allowed modules can be instantiated during deserialization. This blocks common RCE vectors such as os.system, subprocess.Popen, and builtins.exec.

See: https://docs.python.org/3/library/pickle.html#restricting-globals

find_class(module: str, name: str) Any[source]

Return an object from a specified module.

If necessary, the module will be imported. Subclasses may override this method (e.g. to restrict unpickling of arbitrary classes and functions).

This method is called whenever a class or a function object is needed. Both arguments passed are str objects.

Return type:

Any
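
The allowlist pattern find_class implements can be sketched with the standard library alone. The module set here is deliberately tiny and illustrative; the real class allows the package's own modules:

```python
import io
import os
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Blocks any global lookup outside an explicit module allowlist."""

    ALLOWED_MODULES = {"builtins"}  # illustrative; the real list is larger

    def find_class(self, module, name):
        if module not in self.ALLOWED_MODULES:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)

# Plain containers deserialize fine: no global lookups are needed.
obj = AllowlistUnpickler(io.BytesIO(pickle.dumps({"a": 1}))).load()

# A function reference triggers find_class and is rejected.
raised = False
try:
    AllowlistUnpickler(io.BytesIO(pickle.dumps(os.getcwd))).load()
except pickle.UnpicklingError:
    raised = True
```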

safe_dump(obj: Any, f, protocol: int = 5, key_dir: Path | None = None) None[source]

Pickle dump with HMAC signature prepended.

Parameters:
  • obj (Any) – Object to serialize

  • f – Writable binary file object

  • protocol (int) – Pickle protocol version

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

None

safe_load(f, key_dir: Path | None = None) Any[source]

Pickle load with HMAC verification.

Parameters:
  • f – Readable binary file object

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

Any

Returns:

Deserialized object

Raises:

ValueError – If HMAC verification fails or file is too short

safe_dumps(obj: Any, protocol: int = 5, key_dir: Path | None = None) bytes[source]

Pickle dumps with HMAC signature prepended.

Parameters:
  • obj (Any) – Object to serialize

  • protocol (int) – Pickle protocol version

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

bytes

Returns:

HMAC signature + pickled bytes

safe_loads(data: bytes, key_dir: Path | None = None) Any[source]

Pickle loads with HMAC verification.

Parameters:
  • data (bytes) – HMAC signature + pickled bytes

  • key_dir (Optional[Path]) – Directory containing the HMAC key

Return type:

Any

Returns:

Deserialized object

Raises:

ValueError – If HMAC verification fails or data is too short
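
The sign-then-verify scheme these functions describe can be sketched with the standard library. The key is inlined here for illustration only; the real functions read it from the .pickle_hmac_key file and also route loading through the restricted unpickler:

```python
import hashlib
import hmac
import pickle

KEY = b"demo-key"   # illustrative; the real key lives in .pickle_hmac_key
SIG_LEN = 32        # SHA-256 digest size

def sign_dumps(obj):
    """Pickle an object and prepend an HMAC-SHA256 signature."""
    payload = pickle.dumps(obj, protocol=5)
    return hmac.new(KEY, payload, hashlib.sha256).digest() + payload

def verify_loads(data):
    """Verify the signature before unpickling; reject tampered data."""
    if len(data) < SIG_LEN:
        raise ValueError("data too short")
    sig, payload = data[:SIG_LEN], data[SIG_LEN:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("HMAC verification failed")
    return pickle.loads(payload)

signed = sign_dumps({"limit": 1_000_000})
roundtrip = verify_loads(signed)

# Flipping a payload byte breaks the signature check.
tamper_rejected = False
try:
    verify_loads(signed[:-1] + b"\x00")
except ValueError:
    tamper_rejected = True
```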

deterministic_hash(*args: str, length: int = 16) str[source]

Generate a deterministic hash from string arguments.

Uses SHA-256 instead of Python’s non-deterministic hash(). This produces the same result across process restarts regardless of PYTHONHASHSEED.

Parameters:
  • *args (str) – String values to hash

  • length (int) – Number of hex characters to return (max 64)

Return type:

str

Returns:

Hex digest string of specified length
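
A sketch of the idea — SHA-256 over the joined arguments, truncated to the requested length. The separator and encoding chosen here are assumptions, not the package's exact scheme:

```python
import hashlib

def det_hash(*args, length=16):
    """Stable hex digest of string arguments across process restarts."""
    digest = hashlib.sha256("|".join(args).encode("utf-8")).hexdigest()
    return digest[:length]

h = det_hash("scenario", "42")
```

Unlike the built-in hash(), the result does not depend on PYTHONHASHSEED, so it is safe to use in cache-file names.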

ergodic_insurance.scenario_manager module

Scenario management system for batch processing simulations.

This module provides a framework for managing multiple simulation scenarios, parameter sweeps, and configuration variations for comprehensive analysis.

class ScenarioType(*values)[source]

Bases: Enum

Types of scenario generation methods.

SINGLE = 'single'
CUSTOM = 'custom'
SENSITIVITY = 'sensitivity'
class ParameterSpec(**data: Any) None[source]

Bases: BaseModel

Specification for parameter variations in scenarios.

name

Parameter name (dot notation for nested params)

values

List of values for grid search

min_value

Minimum value for random search

max_value

Maximum value for random search

n_samples

Number of samples for random search

distribution

Distribution type for random sampling

base_value

Base value for sensitivity analysis

variation_pct

Percentage variation for sensitivity

name: str
values: List[Any] | None
min_value: float | None
max_value: float | None
n_samples: int
distribution: str
base_value: Any | None
variation_pct: float
classmethod validate_name(v: str) str[source]

Validate parameter name format.

Return type:

str

generate_values(method: ScenarioType, rng: Generator | None = None) List[Any][source]

Generate parameter values based on method.

Parameters:
Return type:

List[Any]

Returns:

List of parameter values

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to pydantic's ConfigDict.

class ScenarioConfig(scenario_id: str, name: str, description: str = '', base_config: Config | None = None, simulation_config: MonteCarloConfig | None = None, parameter_overrides: Dict[str, Any] = <factory>, tags: Set[str] = <factory>, priority: int = 100, created_at: datetime = <factory>, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Configuration for a single scenario.

scenario_id: str
name: str
description: str = ''
base_config: Config | None = None
simulation_config: MonteCarloConfig | None = None
parameter_overrides: Dict[str, Any]
tags: Set[str]
priority: int = 100
created_at: datetime
metadata: Dict[str, Any]
__post_init__()[source]

Initialize scenario with defaults.

generate_id() str[source]

Generate unique scenario ID from configuration.

Return type:

str

Returns:

Unique scenario identifier

apply_overrides(config: Any) Any[source]

Apply parameter overrides to configuration.

Parameters:

config (Any) – Configuration object to modify

Return type:

Any

Returns:

Modified configuration
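
The dot-notation walk behind apply_overrides can be sketched for plain dicts. The real method also supports attribute access on config objects; the names below are illustrative:

```python
def apply_overrides(config, overrides):
    """Apply {'a.b.c': value} overrides to a nested dict in place."""
    for path, value in overrides.items():
        *parents, leaf = path.split(".")
        node = config
        for key in parents:
            node = node[key]   # walk into each nested level
        node[leaf] = value
    return config

cfg = {"manufacturer": {"tax_rate": 0.25, "margin": 0.08}}
apply_overrides(cfg, {"manufacturer.tax_rate": 0.21})
```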

to_dict() Dict[str, Any][source]

Convert scenario to dictionary representation.

Return type:

Dict[str, Any]

Returns:

Dictionary representation

class ScenarioManager[source]

Bases: object

Manager for creating and organizing simulation scenarios.

scenarios: List[ScenarioConfig]
scenario_index: Dict[str, ScenarioConfig]
create_scenario(name: str, base_config: Config | None = None, simulation_config: MonteCarloConfig | None = None, parameter_overrides: Dict[str, Any] | None = None, description: str = '', tags: Set[str] | None = None, priority: int = 100) ScenarioConfig[source]

Create a single scenario.

Parameters:
Return type:

ScenarioConfig

Returns:

Created scenario configuration

add_scenario(scenario: ScenarioConfig) None[source]

Add scenario to manager.

Parameters:

scenario (ScenarioConfig) – Scenario to add

Return type:

None

Create scenarios for grid search over parameters.

Parameters:
Return type:

List[ScenarioConfig]

Returns:

List of created scenarios

Create scenarios for random search over parameters.

Parameters:
Return type:

List[ScenarioConfig]

Returns:

List of created scenarios

create_sensitivity_analysis(base_name: str, parameter_specs: List[ParameterSpec], base_config: Config | None = None, simulation_config: MonteCarloConfig | None = None, tags: Set[str] | None = None) List[ScenarioConfig][source]

Create scenarios for sensitivity analysis.

Parameters:
Return type:

List[ScenarioConfig]

Returns:

List of created scenarios

get_scenarios_by_tag(tag: str) List[ScenarioConfig][source]

Get scenarios with specific tag.

Parameters:

tag (str) – Tag to filter by

Return type:

List[ScenarioConfig]

Returns:

List of matching scenarios

get_scenarios_by_priority(max_priority: int = 100) List[ScenarioConfig][source]

Get scenarios up to priority threshold.

Parameters:

max_priority (int) – Maximum priority value (inclusive)

Return type:

List[ScenarioConfig]

Returns:

Sorted list of scenarios

clear_scenarios() None[source]

Clear all scenarios.

Return type:

None

export_scenarios(path: str | Path) None[source]

Export scenarios to JSON file.

Parameters:

path (Union[str, Path]) – Output file path

Return type:

None

import_scenarios(path: str | Path) None[source]

Import scenarios from JSON file.

Parameters:

path (Union[str, Path]) – Input file path

Return type:

None

ergodic_insurance.sensitivity module

Comprehensive sensitivity analysis tools for insurance optimization.

This module provides tools for analyzing how changes in key parameters affect optimization results, including one-at-a-time (OAT) analysis, tornado diagrams, and two-way sensitivity analysis with efficient caching.

Example

Basic sensitivity analysis for a single parameter:

from ergodic_insurance.sensitivity import SensitivityAnalyzer
from ergodic_insurance.business_optimizer import BusinessOptimizer
from ergodic_insurance.manufacturer import WidgetManufacturer

# Setup optimizer
manufacturer = WidgetManufacturer(initial_assets=10_000_000)
optimizer = BusinessOptimizer(manufacturer)

# Run sensitivity analysis
analyzer = SensitivityAnalyzer(base_config, optimizer)
result = analyzer.analyze_parameter(
    "frequency",
    param_range=(3, 8),
    n_points=11
)

# Generate tornado diagram
tornado_data = analyzer.create_tornado_diagram(
    parameters=["frequency", "severity_mean", "premium_rate"],
    metric="optimal_roe"
)

Changed in version 0.7.0: Replaced bare print() warning calls with logging.warning(). See :issue:`382`.

Author: Alex Filiakov. Date: 2025-01-29.

class SensitivityResult(parameter: str, baseline_value: float, variations: ndarray, metrics: Dict[str, ndarray], parameter_path: str | None = None, units: str | None = None) None[source]

Bases: object

Results from sensitivity analysis for a single parameter.

parameter

Name of the parameter being analyzed

baseline_value

Original value of the parameter

variations

Array of parameter values tested

metrics

Dictionary of metric arrays for each variation

parameter_path

Nested path to parameter (e.g., “manufacturer.base_operating_margin”)

units

Optional units for the parameter (e.g., “percentage”, “dollars”)

parameter: str
baseline_value: float
variations: ndarray
metrics: Dict[str, ndarray]
parameter_path: str | None = None
units: str | None = None
calculate_impact(metric: str) float[source]

Calculate signed point elasticity of a metric w.r.t. this parameter.

Uses central finite differences at the baseline to estimate the derivative, then normalizes to a unit-free elasticity:

elasticity = (dM/dP) * (P_baseline / M_baseline)

A positive value means increasing the parameter increases the metric; a negative value means increasing the parameter decreases the metric.

Parameters:

metric (str) – Name of the metric to calculate impact for

Return type:

float

Returns:

Signed point elasticity at the baseline

Raises:

KeyError – If metric not found in results
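
The elasticity formula above, with the derivative estimated by a central difference across the two grid points bracketing the baseline, can be sketched as:

```python
def point_elasticity(p_lo, p_hi, m_lo, m_hi, p0, m0):
    """elasticity = (dM/dP) * (P0 / M0), dM/dP by central difference."""
    derivative = (m_hi - m_lo) / (p_hi - p_lo)
    return derivative * (p0 / m0)

# A metric that moves one-for-one with the parameter has elasticity 1.
e = point_elasticity(0.9, 1.1, 0.9, 1.1, 1.0, 1.0)
```

A positive result means the metric rises with the parameter; a negative result means it falls, matching the sign convention described above.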

get_metric_bounds(metric: str) Tuple[float, float][source]

Get the minimum and maximum values for a metric.

Parameters:

metric (str) – Name of the metric

Return type:

Tuple[float, float]

Returns:

Tuple of (min_value, max_value)

Raises:

KeyError – If metric not found in results

to_dataframe() DataFrame[source]

Convert results to a pandas DataFrame.

Return type:

DataFrame

Returns:

DataFrame with variations and all metrics

class TwoWaySensitivityResult(parameter1: str, parameter2: str, values1: ndarray, values2: ndarray, metric_grid: ndarray, metric_name: str) None[source]

Bases: object

Results from two-way sensitivity analysis.

parameter1

Name of first parameter

parameter2

Name of second parameter

values1

Array of values for first parameter

values2

Array of values for second parameter

metric_grid

2D array of metric values [len(values1), len(values2)]

metric_name

Name of the metric analyzed

parameter1: str
parameter2: str
values1: ndarray
values2: ndarray
metric_grid: ndarray
metric_name: str
find_optimal_region(target_value: float, tolerance: float = 0.05) ndarray[source]

Find parameter combinations that achieve target metric value.

Parameters:
  • target_value (float) – Target value for the metric

  • tolerance (float) – Relative tolerance for matching (default 5%)

Return type:

ndarray

Returns:

Boolean mask array indicating satisfactory regions

to_dataframe() DataFrame[source]

Convert to DataFrame for easier manipulation.

Return type:

DataFrame

Returns:

DataFrame with multi-index for parameters and metric values

class SensitivityAnalyzer(base_config: Dict[str, Any], optimizer: Any, cache_dir: Path | None = None)[source]

Bases: object

Comprehensive sensitivity analysis tools for optimization.

This class provides methods for analyzing how parameter changes affect optimization outcomes, with built-in caching for efficiency.

base_config

Base configuration dictionary

optimizer

Optimizer object with an optimize() method

results_cache

Cache for optimization results

cache_dir

Directory for persistent cache storage

results_cache: Dict[str, Any]
analyze_parameter(param_name: str, param_range: Tuple[float, float] | None = None, n_points: int = 11, param_path: str | None = None, relative_range: float = 0.3) SensitivityResult[source]

Analyze sensitivity to a single parameter.

Parameters:
  • param_name (str) – Name of parameter to analyze

  • param_range (Optional[Tuple[float, float]]) – (min, max) range for parameter values

  • n_points (int) – Number of points to evaluate

  • param_path (Optional[str]) – Nested path to parameter (e.g., “manufacturer.tax_rate”)

  • relative_range (float) – If param_range not provided, use ±relative_range from baseline

Return type:

SensitivityResult

Returns:

SensitivityResult with analysis results

Raises:

KeyError – If parameter not found in base configuration

create_tornado_diagram(parameters: List[str | Tuple[str, str]], metric: str = 'optimal_roe', relative_range: float = 0.3, n_points: int = 11) DataFrame[source]

Create tornado diagram data for parameter impacts.

Parameters:
  • parameters (List[Union[str, Tuple[str, str]]]) – List of parameter names or (name, path) tuples

  • metric (str) – Metric to analyze

  • relative_range (float) – Relative range for parameter variations

  • n_points (int) – Number of points for analysis

Returns:

  • parameter: Parameter name

  • impact: Absolute impact value

  • direction: “positive” or “negative”

  • low_value: Metric value at parameter minimum

  • high_value: Metric value at parameter maximum

  • baseline: Metric value at baseline

  • baseline_param: Baseline parameter value

Return type:

DataFrame sorted by impact magnitude with columns

analyze_two_way(param1: str | Tuple[str, str], param2: str | Tuple[str, str], param1_range: Tuple[float, float] | None = None, param2_range: Tuple[float, float] | None = None, n_points1: int = 10, n_points2: int = 10, metric: str = 'optimal_roe', relative_range: float = 0.3, max_workers: int | None = None) TwoWaySensitivityResult[source]

Perform two-way sensitivity analysis.

Parameters:
  • param1 (Union[str, Tuple[str, str]]) – First parameter name or (name, path) tuple

  • param2 (Union[str, Tuple[str, str]]) – Second parameter name or (name, path) tuple

  • param1_range (Optional[Tuple[float, float]]) – Range for first parameter

  • param2_range (Optional[Tuple[float, float]]) – Range for second parameter

  • n_points1 (int) – Number of points for first parameter

  • n_points2 (int) – Number of points for second parameter

  • metric (str) – Metric to analyze

  • relative_range (float) – Relative range if explicit ranges not provided

  • max_workers (Optional[int]) – Maximum number of threads for parallel optimization. If None or 1, optimizations run sequentially (default). Uses ThreadPoolExecutor so that NumPy/SciPy work can release the GIL for true parallelism without pickling overhead.

Return type:

TwoWaySensitivityResult

Returns:

TwoWaySensitivityResult with grid of metric values

clear_cache() None[source]

Clear all cached results.

Return type:

None

analyze_parameter_group(parameter_group: Dict[str, Tuple[float, float]], n_points: int = 11, metric: str = 'optimal_roe') Dict[str, SensitivityResult][source]

Analyze sensitivity for a group of parameters.

Parameters:
  • parameter_group (Dict[str, Tuple[float, float]]) – Dictionary of parameter names to (min, max) ranges

  • n_points (int) – Number of points for each parameter

  • metric (str) – Primary metric for analysis

Return type:

Dict[str, SensitivityResult]

Returns:

Dictionary of parameter names to SensitivityResult objects

ergodic_insurance.sensitivity_visualization module

Visualization utilities for sensitivity analysis results.

This module provides publication-ready visualization functions for sensitivity analysis results, including tornado diagrams, two-way sensitivity heatmaps, and parameter impact charts.

Example

Creating a tornado diagram:

from ergodic_insurance.sensitivity_visualization import plot_tornado_diagram

# Assuming tornado_data is a DataFrame from SensitivityAnalyzer
fig = plot_tornado_diagram(
    tornado_data,
    title="Parameter Sensitivity Analysis",
    metric_label="ROE Impact"
)
fig.savefig("tornado_diagram.png", dpi=300, bbox_inches='tight')

Author: Alex Filiakov
Date: 2025-01-29

plot_tornado_diagram(tornado_data: DataFrame, title: str = 'Sensitivity Analysis - Tornado Diagram', metric_label: str = 'Impact on Objective', figsize: Tuple[float, float] = (10, 6), n_params: int | None = None, color_positive: str = '#2E7D32', color_negative: str = '#C62828', show_values: bool = True) Figure[source]

Create a tornado diagram for sensitivity analysis results.

Parameters:
  • tornado_data (DataFrame) – DataFrame with columns: parameter, impact, direction, low_value, high_value, baseline

  • title (str) – Plot title

  • metric_label (str) – Label for the x-axis

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • n_params (Optional[int]) – Number of top parameters to show (None for all)

  • color_positive (str) – Color for positive impacts

  • color_negative (str) – Color for negative impacts

  • show_values (bool) – Whether to show numeric values on bars

Return type:

Figure

Returns:

Matplotlib Figure object

plot_two_way_sensitivity(result: TwoWaySensitivityResult, title: str | None = None, cmap: str = 'RdYlGn', figsize: Tuple[float, float] = (10, 8), show_contours: bool = True, contour_levels: int | None = 10, optimal_point: Tuple[float, float] | None = None, fmt: str = '.2f') Figure[source]

Create a heatmap for two-way sensitivity analysis.

Parameters:
  • result (TwoWaySensitivityResult) – TwoWaySensitivityResult object

  • title (Optional[str]) – Plot title (auto-generated if None)

  • cmap (str) – Colormap name

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • show_contours (bool) – Whether to show contour lines

  • contour_levels (Optional[int]) – Number of contour levels

  • optimal_point (Optional[Tuple[float, float]]) – Optional (param1_value, param2_value) to mark

  • fmt (str) – Format string for contour labels. Can be:
      - a new-style format like '.2f' or '.2%'
      - an old-style format like '%.2f'
      - a callable that takes a number and returns a string

Return type:

Figure

Returns:

Matplotlib Figure object

plot_parameter_sweep(result: SensitivityResult, metrics: List[str] | None = None, title: str | None = None, figsize: Tuple[float, float] = (12, 8), normalize: bool = False, mark_baseline: bool = True) Figure[source]

Plot multiple metrics against parameter variations.

Parameters:
  • result (SensitivityResult) – SensitivityResult object

  • metrics (Optional[List[str]]) – List of metrics to plot (None for all)

  • title (Optional[str]) – Plot title (auto-generated if None)

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • normalize (bool) – Whether to normalize metrics to [0, 1]

  • mark_baseline (bool) – Whether to mark the baseline value

Return type:

Figure

Returns:

Matplotlib Figure object

create_sensitivity_report(analyzer: SensitivityAnalyzer, parameters: List[str | Tuple[str, str]], output_dir: str | None = None, metric: str = 'optimal_roe', formats: List[str] | None = None) Dict[str, Any][source]

Generate a complete sensitivity analysis report.

Parameters:
  • analyzer (SensitivityAnalyzer) – SensitivityAnalyzer object with results

  • parameters (List[Union[str, Tuple[str, str]]]) – List of parameters to analyze

  • output_dir (Optional[str]) – Directory to save figures (None for no saving)

  • metric (str) – Primary metric for analysis

  • formats (Optional[List[str]]) – File formats to save figures in

Return type:

Dict[str, Any]

Returns:

Dictionary with generated figures and analysis summary

plot_sensitivity_matrix(results: Dict[str, SensitivityResult], metric: str = 'optimal_roe', figsize: Tuple[float, float] = (12, 10), cmap: str = 'coolwarm', show_values: bool = True) Figure[source]

Create a matrix plot showing sensitivity across multiple parameters.

Parameters:
  • results (Dict[str, SensitivityResult]) – Dictionary of parameter names to SensitivityResult objects

  • metric (str) – Metric to display

  • figsize (Tuple[float, float]) – Figure size as (width, height)

  • cmap (str) – Colormap name

  • show_values (bool) – Whether to show numeric values in cells

Return type:

Figure

Returns:

Matplotlib Figure object

ergodic_insurance.setup module

ergodic_insurance.simulation module

Simulation engine for time evolution of widget manufacturer model.

This module provides the main simulation engine that orchestrates the time evolution of the widget manufacturer financial model, managing loss events, financial calculations, and result collection.

The simulation framework supports both single-path and Monte Carlo simulations, enabling comprehensive analysis of insurance strategies and business outcomes under uncertainty. It tracks detailed financial metrics, processes insurance claims, and handles bankruptcy conditions appropriately.

Key Features:
  • Single-path trajectory simulation with detailed metrics

  • Monte Carlo simulation support through integration

  • Insurance claim processing with policy application

  • Financial statement tracking and ROE calculation

  • Bankruptcy detection and proper termination

  • Comprehensive result analysis and export capabilities

Examples

Basic simulation:

from ergodic_insurance import Simulation, Config
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator

config = Config()
manufacturer = WidgetManufacturer(config.manufacturer)
loss_generator = ManufacturingLossGenerator.create_simple(
    frequency=0.1, severity_mean=5_000_000, seed=42
)

sim = Simulation(
    manufacturer=manufacturer,
    loss_generator=loss_generator,
    time_horizon=50
)
results = sim.run()

print(f"Mean ROE: {results.summary_stats()['mean_roe']:.2%}")

Note

This module is thread-safe for parallel Monte Carlo simulations when each thread has its own Simulation instance.

Since:

Version 0.1.0

class SimulationResults(years: ndarray, assets: ndarray, equity: ndarray, roe: ndarray, revenue: ndarray, net_income: ndarray, claim_counts: ndarray, claim_amounts: ndarray, insolvency_year: int | None = None) None[source]

Bases: object

Container for simulation trajectory data.

Holds the complete time series of financial metrics and events from a single simulation run, with methods for analysis and export.

This dataclass provides comprehensive storage for all simulation outputs and includes utility methods for calculating derived metrics, performing statistical analysis, and exporting data for further processing.

years

Array of simulation years (0 to time_horizon-1).

assets

Total assets at each year.

equity

Shareholder equity at each year.

roe

Return on equity for each year.

revenue

Annual revenue for each year.

net_income

Annual net income for each year.

claim_counts

Number of claims in each year.

claim_amounts

Total claim amount in each year.

insolvency_year

Year when bankruptcy occurred (None if survived).

Examples

Analyzing simulation results:

results = simulation.run()

# Get summary statistics
stats = results.summary_stats()
print(f"Survival: {stats['survived']}")
print(f"Mean ROE: {stats['mean_roe']:.2%}")

# Export to DataFrame
df = results.to_dataframe()
df.to_csv('simulation_results.csv')

# Calculate volatility metrics
volatility = results.calculate_roe_volatility()
print(f"ROE Sharpe Ratio: {volatility['roe_sharpe']:.2f}")

Note

All financial values are in nominal dollars without inflation adjustment. ROE calculations handle edge cases like zero equity appropriately.

years: ndarray
assets: ndarray
equity: ndarray
roe: ndarray
revenue: ndarray
net_income: ndarray
claim_counts: ndarray
claim_amounts: ndarray
insolvency_year: int | None = None
property survived: bool

Whether the entity survived the full simulation without insolvency.

property n_years: int

Number of simulation years.

property mean_roe: float

Arithmetic mean of non-NaN ROE values.

property final_equity: float

Equity at the end of the simulation.

property final_assets: float

Total assets at the end of the simulation.

property total_claims: float

Cumulative claim amounts over the simulation.

to_dataframe() DataFrame[source]

Convert simulation results to pandas DataFrame.

Returns:

DataFrame with columns for year, assets, equity, roe,

revenue, net_income, claim_count, and claim_amount.

Return type:

pd.DataFrame

Examples

Export to Excel:

df = results.to_dataframe()
df.to_excel('results.xlsx', index=False)
calculate_time_weighted_roe() float[source]

Calculate time-weighted average ROE.

Time-weighted ROE gives equal weight to each period regardless of the equity level, providing a better measure of consistent performance over time. Uses geometric mean for proper compounding.

Returns:

Time-weighted average ROE as a decimal (e.g., 0.08 for 8%).

Return type:

float

Note

This method uses geometric mean of growth factors (1 + ROE) to properly account for compounding effects. NaN values are excluded from the calculation.

Examples

Compare different ROE measures:

simple_avg = np.mean(results.roe)
time_weighted = results.calculate_time_weighted_roe()
print(f"Simple average: {simple_avg:.2%}")
print(f"Time-weighted: {time_weighted:.2%}")
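The geometric-mean calculation described above can be sketched standalone; time_weighted_roe is an illustrative helper, not the package method:

```python
import math

def time_weighted_roe(roe_values):
    """Geometric mean of growth factors (1 + ROE), excluding NaN values,
    as described in the method docstring above."""
    factors = [1.0 + r for r in roe_values if not math.isnan(r)]
    if not factors:
        return float("nan")
    log_mean = sum(math.log(f) for f in factors) / len(factors)
    return math.exp(log_mean) - 1.0
```

For example, a +10% year followed by a -10% year gives sqrt(1.1 * 0.9) - 1 ≈ -0.5%, below the 0% simple average, reflecting the compounding penalty of volatility.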
calculate_rolling_roe(window: int) ndarray[source]

Calculate rolling window ROE.

Parameters:

window (int) – Window size in years (e.g., 1, 3, 5). Must be positive and not exceed the data length.

Returns:

Array of rolling ROE values. Values are NaN for

positions where the full window is not available.

Return type:

np.ndarray

Raises:

ValueError – If window size exceeds data length.

Examples

Calculate and plot rolling ROE:

rolling_3yr = results.calculate_rolling_roe(3)
plt.plot(results.years, rolling_3yr, label='3-Year Rolling ROE')
plt.axhline(y=0.08, color='r', linestyle='--', label='Target')
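The NaN-padding behavior described above can be sketched with a plain rolling mean. This is a simplified stand-in (the package method operates on ROE series and may differ in detail); rolling_mean is an illustrative name:

```python
def rolling_mean(values, window):
    """NaN-padded rolling mean: positions without a full window are NaN,
    matching the documented calculate_rolling_roe semantics."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and the data length")
    out = [float("nan")] * len(values)
    for i in range(window - 1, len(values)):
        out[i] = sum(values[i - window + 1 : i + 1]) / window
    return out
```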
calculate_roe_components(base_operating_margin: float = 0.08, tax_rate: float = 0.25) Dict[str, ndarray][source]

Calculate ROE component breakdown.

Decomposes ROE into operating, insurance, and tax components using DuPont-style analysis. This helps identify the drivers of ROE performance and the impact of insurance decisions.

Parameters:
  • base_operating_margin (float) – Baseline operating margin for the business. Defaults to 0.08 (8%). Can be sourced from manufacturer.config.base_operating_margin.

  • tax_rate (float) – Corporate tax rate. Defaults to 0.25 (25%). Can be sourced from manufacturer.config.tax_rate.

Returns:

Dictionary containing:
  • ’operating_roe’: Base business ROE without claims

  • ’insurance_impact’: ROE reduction from claims/premiums

  • ’tax_effect’: Impact of taxes on ROE

  • ’total_roe’: Actual ROE for reference

Return type:

Dict[str, np.ndarray]

Note

This is a simplified decomposition. Actual implementation would require more detailed financial data for precise attribution.

Examples

Analyze ROE drivers:

components = results.calculate_roe_components()
operating_avg = np.mean(components['operating_roe'])
insurance_drag = np.mean(components['insurance_impact'])
print(f"Operating ROE: {operating_avg:.2%}")
print(f"Insurance drag: {insurance_drag:.2%}")

Using manufacturer config values:

components = results.calculate_roe_components(
    base_operating_margin=manufacturer.config.base_operating_margin,
    tax_rate=manufacturer.config.tax_rate,
)
calculate_roe_volatility() Dict[str, float][source]

Calculate ROE volatility metrics.

Computes various risk-adjusted performance metrics for ROE, including standard deviation, downside deviation, Sharpe ratio, and coefficient of variation.

Returns:

Dictionary containing:
  • ’roe_std’: Standard deviation of ROE

  • ’roe_downside_deviation’: Downside deviation from mean

  • ’roe_sharpe’: Sharpe ratio using 2% risk-free rate

  • ’roe_coefficient_variation’: Coefficient of variation (std/mean)

Return type:

Dict[str, float]

Note

Returns zeros for all metrics if insufficient data (< 2 observations). Sharpe ratio uses a 2% risk-free rate assumption.

Examples

Risk-adjusted performance analysis:

volatility = results.calculate_roe_volatility()
if volatility['roe_sharpe'] > 1.0:
    print("Strong risk-adjusted performance")
print(f"Downside risk: {volatility['roe_downside_deviation']:.2%}")
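The four metrics described above can be sketched standalone. This is a simplified illustration under the documented assumptions (2% risk-free rate, zeros for fewer than 2 observations); the package implementation may differ in detail:

```python
import math
import statistics

def roe_volatility(roes, risk_free=0.02):
    """Volatility metrics for an ROE series: std, downside deviation
    from the mean, Sharpe ratio, and coefficient of variation."""
    keys = ["roe_std", "roe_downside_deviation",
            "roe_sharpe", "roe_coefficient_variation"]
    if len(roes) < 2:
        return dict.fromkeys(keys, 0.0)  # insufficient data
    mean = statistics.fmean(roes)
    std = statistics.stdev(roes)
    downside = [min(0.0, r - mean) for r in roes]
    dd = math.sqrt(sum(d * d for d in downside) / len(roes))
    return {
        "roe_std": std,
        "roe_downside_deviation": dd,
        "roe_sharpe": (mean - risk_free) / std if std else 0.0,
        "roe_coefficient_variation": std / mean if mean else 0.0,
    }
```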
summary_stats() Dict[str, float][source]

Calculate summary statistics for the simulation.

Computes comprehensive summary statistics including ROE metrics, rolling averages, volatility measures, and survival indicators.

Returns:
Dictionary containing:
  • Basic ROE metrics (mean, std, median, time-weighted)

  • Rolling averages (1, 3, 5 year)

  • Final state (assets, equity)

  • Claims statistics (total, frequency)

  • Survival indicators (survived, insolvency_year)

  • Volatility metrics (from calculate_roe_volatility)

Examples

Generate summary report:

stats = results.summary_stats()

print("Performance Summary:")
print(f"  Mean ROE: {stats['mean_roe']:.2%}")
print(f"  Volatility: {stats['std_roe']:.2%}")
print(f"  Sharpe Ratio: {stats['roe_sharpe']:.2f}")

print("\nRisk Summary:")
print(f"  Survived: {stats['survived']}")
print(f"  Total Claims: ${stats['total_claims']:,.0f}")

Return type:

Dict[str, float]

class Simulation(manufacturer: WidgetManufacturer, loss_generator: ManufacturingLossGenerator | List[ManufacturingLossGenerator] | None = None, insurance_policy: InsuranceProgram | InsurancePolicy | None = None, time_horizon: int = 50, seed: int | None = None, growth_rate: float = 0.0, letter_of_credit_rate: float = 0.015, copy: bool = True)[source]

Bases: object

Simulation engine for widget manufacturer time evolution.

The main simulation class that coordinates the time evolution of the widget manufacturer model, processing losses and tracking financial performance over the specified time horizon.

Supports both single-path and Monte Carlo simulations, with comprehensive tracking of financial metrics, loss events, and bankruptcy conditions.

Examples

Basic simulation setup and execution:

from ergodic_insurance.config import ManufacturerConfig
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
from ergodic_insurance.insurance_program import InsuranceProgram
from ergodic_insurance.simulation import Simulation

# Create manufacturer
config = ManufacturerConfig(initial_assets=10_000_000)
manufacturer = WidgetManufacturer(config)

# Create insurance program
program = InsuranceProgram.simple(
    deductible=500_000,
    limit=5_000_000,
    rate=0.02,
)

# Run simulation
sim = Simulation(
    manufacturer=manufacturer,
    loss_generator=ManufacturingLossGenerator.create_simple(seed=42),
    insurance_policy=program,
    time_horizon=10
)
results = sim.run()

# Analyze results
print(f"Mean ROE: {results.summary_stats()['mean_roe']:.2%}")
print(f"Survived: {results.insolvency_year is None}")

Running Monte Carlo simulation:

# Use MonteCarloEngine for multiple paths
monte_carlo = MonteCarloEngine(
    base_simulation=sim,
    n_simulations=1000,
    parallel=True
)
mc_results = monte_carlo.run()

print(f"Survival rate: {mc_results.survival_rate:.1%}")
print(f"95% VaR: ${mc_results.var_95:,.0f}")
manufacturer

The widget manufacturer being simulated

loss_generator

Generator for loss events

insurance_policy

Optional insurance coverage

time_horizon

Simulation duration in years

seed

Random seed for reproducibility

See also

SimulationResults: Container for simulation output
MonteCarloEngine: For running multiple simulation paths
WidgetManufacturer: The core financial model
ManufacturingLossGenerator: For generating loss events

insolvency_year: int | None
classmethod from_config(config: Config, seed: int | None = None, **kwargs) Simulation[source]

Create a fully configured simulation from a Config object.

Builds the manufacturer, loss generator, and insurance program automatically from the unified configuration, closing the gap between YAML/profile configuration and runtime simulation.

Parameters:
  • config (Config) – Complete simulation configuration, typically loaded via Config.from_yaml() or Config.from_company().

  • seed (Optional[int]) – Random seed for reproducibility. Passed to the loss generator and the simulation.

  • **kwargs – Additional keyword arguments forwarded to the Simulation constructor.

Return type:

Simulation

Returns:

Ready-to-run Simulation instance.

Examples

End-to-end from YAML:

config = Config.from_yaml(Path("my_config.yaml"))
sim = Simulation.from_config(config, seed=42)
results = sim.run()

From company info:

config = Config.from_company(initial_assets=50_000_000)
sim = Simulation.from_config(config)
step_annual(year: int, losses: List[LossEvent]) Dict[str, Any][source]

Execute single annual time step.

Processes losses for the year, applies insurance coverage, updates manufacturer financial state, and returns metrics.

Parameters:
  • year (int) – Current simulation year (0-indexed).

  • losses (List[LossEvent]) – List of LossEvent objects for this year.

Returns:

Dictionary containing metrics:
  • All metrics from manufacturer.step()

  • ’claim_count’: Number of losses this year

  • ’claim_amount’: Total loss amount before insurance

  • ’company_payment’: Amount paid by company after deductible

  • ’insurance_recovery’: Amount recovered from insurance

Return type:

Dict[str, float]

Note

This method modifies the manufacturer state in-place. Insurance premiums are deducted from both assets and equity to maintain balance sheet integrity.

Side Effects:
  • Modifies manufacturer.assets and manufacturer.equity

  • Updates manufacturer internal state via step() method

run(progress_interval: int = 100, progress_callback: Callable[[int, int, float], None] | None = None, cancel_event: Event | None = None) SimulationResults[source]

Run the full simulation over the specified time horizon.

Executes a complete simulation trajectory, processing claims each year, updating the manufacturer’s financial state, and tracking all metrics. The simulation terminates early if the manufacturer becomes insolvent.

Parameters:
  • progress_interval (int) – How often to log progress (in years). Set to 0 to disable progress logging. Useful for long simulations.

  • progress_callback (Optional[Callable[[int, int, float], None]]) – Optional callback invoked with (completed_years, total_years, elapsed_seconds) after each year completes. Useful for GUI progress bars, web dashboards, or any non-terminal environment.

  • cancel_event (Optional[Event]) – Optional threading.Event. When set, the simulation will stop after the current year and return partial results (same pattern as insolvency early-stop).

Returns:

  • Complete time series of financial metrics

  • Claim history and amounts

  • ROE trajectory

  • Insolvency year (if bankruptcy occurred)

Return type:

SimulationResults object containing

Examples

Run simulation with progress updates:

sim = Simulation(manufacturer, time_horizon=1000)
results = sim.run(progress_interval=100)  # Log every 100 years

# Check if company survived
if results.insolvency_year:
    print(f"Bankruptcy in year {results.insolvency_year}")
else:
    print(f"Survived {len(results.years)} years")

Analyze simulation results:

results = sim.run()
df = results.to_dataframe()

# Plot equity evolution
import matplotlib.pyplot as plt
plt.plot(df['year'], df['equity'])
plt.xlabel('Year')
plt.ylabel('Equity ($)')
plt.title('Company Equity Over Time')
plt.show()

Note

The simulation uses pre-generated claims for efficiency. All claims are generated at the start based on the configured loss distributions and random seed.

See also

step_annual(): Single year simulation step
SimulationResults: Output data structure
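The progress_callback and cancel_event hooks can be wired together as sketched below. The on_progress name and the 60-second cutoff are illustrative; the commented-out run() call shows the intended usage:

```python
import threading

cancel = threading.Event()

def on_progress(completed_years, total_years, elapsed_seconds):
    """Matches the (completed_years, total_years, elapsed_seconds) signature."""
    pct = 100.0 * completed_years / total_years
    print(f"{pct:5.1f}% ({completed_years}/{total_years} years, "
          f"{elapsed_seconds:.1f}s)")
    if elapsed_seconds > 60.0:  # request a clean early stop after one minute
        cancel.set()

# results = sim.run(progress_callback=on_progress, cancel_event=cancel)
```

When the event is set, run() returns partial results after the current year, following the same pattern as the insolvency early-stop.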

run_with_loss_data(loss_data: LossData, validate: bool = True, progress_interval: int = 100) SimulationResults[source]

Run simulation using standardized LossData.

Parameters:
  • loss_data (LossData) – Standardized loss data.

  • validate (bool) – Whether to validate loss data before running.

  • progress_interval (int) – How often to log progress.

Return type:

SimulationResults

Returns:

SimulationResults object with full trajectory.

get_trajectory() DataFrame[source]

Get simulation trajectory as pandas DataFrame.

This is a convenience method that runs the simulation if needed and returns the results as a DataFrame.

Return type:

DataFrame

Returns:

DataFrame with simulation trajectory.

classmethod run_monte_carlo(config: Config, insurance_policy: InsuranceProgram | InsurancePolicy | None = None, n_scenarios: int = 10000, batch_size: int = 1000, n_jobs: int = 7, checkpoint_dir: Path | None = None, checkpoint_frequency: int = 5000, seed: int | None = None, resume: bool = True) Dict[str, Any][source]

Run Monte Carlo simulation using the MonteCarloEngine.

Deprecated: use ergodic_insurance.run_monte_carlo() instead.

Return type:

Dict[str, Any]

classmethod compare_insurance_strategies(config: Config, insurance_policies: Mapping[str, InsuranceProgram | InsurancePolicy], n_scenarios: int = 1000, n_jobs: int = 7, seed: int | None = None) StrategyComparisonResult[source]

Compare multiple insurance strategies via Monte Carlo.

Deprecated: use ergodic_insurance.compare_strategies() instead.

Return type:

StrategyComparisonResult

ergodic_insurance.statistical_tests module

Statistical hypothesis testing utilities for simulation results.

This module provides bootstrap-based hypothesis testing functions for comparing strategies, validating performance differences, and assessing statistical significance of simulation outcomes.

Example

>>> from ergodic_insurance.statistical_tests import difference_in_means_test
>>> import numpy as np
>>> # Compare two strategies
>>> strategy_a_returns = np.random.normal(0.08, 0.02, 1000)
>>> strategy_b_returns = np.random.normal(0.10, 0.03, 1000)
>>> result = difference_in_means_test(
...     strategy_a_returns,
...     strategy_b_returns,
...     alternative='less'
... )
>>> print(f"P-value: {result.p_value:.4f}")
>>> print(f"Strategy B is better: {result.reject_null}")
DEFAULT_N_BOOTSTRAP

Default bootstrap iterations for tests (10000).

Type:

int

DEFAULT_ALPHA

Default significance level (0.05).

Type:

float

class HypothesisTestResult(test_statistic: float, p_value: float, reject_null: bool, confidence_interval: Tuple[float, float], null_hypothesis: str, alternative: str, alpha: float, method: str, bootstrap_distribution: ndarray | None = None, metadata: Dict[str, Any] | None = None) None[source]

Bases: object

Container for hypothesis test results.

test_statistic: float
p_value: float
reject_null: bool
confidence_interval: Tuple[float, float]
null_hypothesis: str
alternative: str
alpha: float
method: str
bootstrap_distribution: ndarray | None = None
metadata: Dict[str, Any] | None = None
summary() str[source]

Generate human-readable summary of test results.

Return type:

str

Returns:

Formatted string with test results and interpretation.

difference_in_means_test(sample1: ndarray, sample2: ndarray, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

Test difference in means between two samples using bootstrap.

Tests the null hypothesis that the means of two populations are equal against various alternatives using bootstrap resampling.

Parameters:
  • sample1 (ndarray) – First sample array.

  • sample2 (ndarray) – Second sample array.

  • alternative (str) – Type of alternative hypothesis:
      - 'two-sided': means are different
      - 'less': mean1 < mean2
      - 'greater': mean1 > mean2

  • alpha (float) – Significance level (default 0.05).

  • n_bootstrap (int) – Number of bootstrap iterations (default 10000).

  • seed (Optional[int]) – Random seed for reproducibility.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult containing test statistics and decision.

Raises:

ValueError – If alternative is not valid.

Example

>>> # Test if Strategy A has lower returns than Strategy B
>>> result = difference_in_means_test(
...     returns_a, returns_b, alternative='less'
... )
>>> if result.reject_null:
...     print("Strategy B significantly outperforms Strategy A")
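The shift-method bootstrap behind such a test can be sketched standalone. This is a minimal two-sided version for illustration (diff_in_means_pvalue is a hypothetical helper; the package implementation may differ in detail):

```python
import random
import statistics

def diff_in_means_pvalue(s1, s2, n_bootstrap=2000, seed=0):
    """Two-sided bootstrap p-value for H0: mean(s1) == mean(s2).

    Centers both samples on the pooled mean so resampled data satisfy
    the null, then counts resampled differences at least as extreme
    as the observed one."""
    rng = random.Random(seed)
    observed = statistics.fmean(s1) - statistics.fmean(s2)
    pooled = statistics.fmean(list(s1) + list(s2))
    c1 = [x - statistics.fmean(s1) + pooled for x in s1]
    c2 = [x - statistics.fmean(s2) + pooled for x in s2]
    extreme = 0
    for _ in range(n_bootstrap):
        d = (statistics.fmean([rng.choice(c1) for _ in c1])
             - statistics.fmean([rng.choice(c2) for _ in c2]))
        if abs(d) >= abs(observed):
            extreme += 1
    return (extreme + 1) / (n_bootstrap + 1)  # add-one to avoid p == 0
```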
ratio_of_metrics_test(sample1: ~numpy.ndarray, sample2: ~numpy.ndarray, statistic: ~typing.Callable[[~numpy.ndarray], float] = <function mean>, null_ratio: float = 1.0, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

Test ratio of metrics between two samples using bootstrap.

Tests whether the ratio of a statistic (e.g., mean, median) between two samples equals a specified value (typically 1.0).

Parameters:
  • sample1 (ndarray) – First sample array.

  • sample2 (ndarray) – Second sample array.

  • statistic (Callable[[ndarray], float]) – Function to compute on each sample (default: mean).

  • null_ratio (float) – Null hypothesis ratio value (default: 1.0).

  • alternative (str) – Alternative hypothesis type.

  • alpha (float) – Significance level.

  • n_bootstrap (int) – Number of bootstrap iterations.

  • seed (Optional[int]) – Random seed.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult for the ratio test.

Example

>>> # Test if ROE ratio differs from 1.0
>>> result = ratio_of_metrics_test(
...     roe_strategy_a,
...     roe_strategy_b,
...     statistic=np.median,
...     null_ratio=1.0
... )
paired_comparison_test(paired_differences: ndarray, null_value: float = 0.0, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

Test paired differences using bootstrap.

Tests whether paired differences (e.g., from matched scenarios) have a mean equal to a specified value (typically 0).

Parameters:
  • paired_differences (ndarray) – Array of paired differences.

  • null_value (float) – Null hypothesis value for mean difference (default: 0).

  • alternative (str) – Alternative hypothesis type.

  • alpha (float) – Significance level.

  • n_bootstrap (int) – Number of bootstrap iterations.

  • seed (Optional[int]) – Random seed.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult for the paired test.

Example

>>> # Test if insurance improves outcomes
>>> differences = outcomes_with_insurance - outcomes_without_insurance
>>> result = paired_comparison_test(differences, alternative='greater')
bootstrap_hypothesis_test(data: ndarray, null_hypothesis: Callable[[ndarray], ndarray], test_statistic: Callable[[ndarray], float], alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]

General bootstrap hypothesis testing framework.

Allows testing of custom hypotheses using any test statistic.

Parameters:
  • data (ndarray) – Input data array.

  • null_hypothesis (Callable[[ndarray], ndarray]) – Function that transforms data to satisfy null.

  • test_statistic (Callable[[ndarray], float]) – Function to compute test statistic.

  • alternative (str) – Alternative hypothesis type.

  • alpha (float) – Significance level.

  • n_bootstrap (int) – Number of bootstrap iterations.

  • seed (Optional[int]) – Random seed.

Return type:

HypothesisTestResult

Returns:

HypothesisTestResult for the custom test.

Example

>>> # Test if variance exceeds threshold
>>> def null_transform(x):
...     return x * np.sqrt(threshold_var / np.var(x))
>>> result = bootstrap_hypothesis_test(
...     data, null_transform, np.var, alternative='greater'
... )
multiple_comparison_correction(p_values: List[float], method: str = 'bonferroni', alpha: float = 0.05) Tuple[ndarray, ndarray][source]

Apply multiple comparison correction to p-values.

Adjusts p-values when multiple hypothesis tests are performed to control family-wise error rate or false discovery rate.

Parameters:
  • p_values (List[float]) – List of p-values from multiple tests.

  • method (str) – Correction method:
      - 'bonferroni': Bonferroni correction
      - 'holm': Holm-Bonferroni method
      - 'fdr': Benjamini-Hochberg FDR

  • alpha (float) – Overall significance level.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (adjusted_p_values, reject_decisions).

Example

>>> p_vals = [0.01, 0.04, 0.03, 0.20]
>>> adj_p, reject = multiple_comparison_correction(p_vals)
>>> print(f"Significant tests: {np.sum(reject)}")
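The simplest of the three methods, Bonferroni, can be sketched in a few lines (bonferroni is an illustrative helper, not the package function):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each p-value by the number of
    tests (capped at 1.0), then reject where adjusted p <= alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    reject = [p <= alpha for p in adjusted]
    return adjusted, reject
```

With the p-values from the example above, only the first test survives: 0.01 * 4 = 0.04 <= 0.05, while 0.04, 0.03, and 0.20 all fail after adjustment.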

ergodic_insurance.stochastic_processes module

Stochastic processes for financial modeling.

This module provides various stochastic process implementations for modeling financial volatility, including Geometric Brownian Motion, lognormal volatility, and mean-reverting processes. These are used to add realistic randomness to revenue and growth modeling in the manufacturing simulation.

class StochasticConfig(**data: Any) None[source]

Bases: BaseModel

Configuration for stochastic processes.

Defines parameters common to all stochastic process implementations, including volatility, drift, random seed, and time step parameters.

volatility: float
drift: float
random_seed: int | None
time_step: float
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

class StochasticProcess(config: StochasticConfig)[source]

Bases: ABC

Abstract base class for stochastic processes.

Provides common interface and functionality for all stochastic process implementations used in financial modeling. All concrete implementations must provide a generate_shock method.

abstractmethod generate_shock(current_value: float) float[source]

Generate a stochastic shock for the current time step.

Parameters:

current_value (float) – Current value of the process

Return type:

float

Returns:

Multiplicative shock to apply to the value

reset(seed: int | None = None) None[source]

Reset the random number generator.

Parameters:

seed (Optional[int]) – Optional new seed to use

Return type:

None

class GeometricBrownianMotion(config: StochasticConfig)[source]

Bases: StochasticProcess

Geometric Brownian Motion process.

Implements GBM using the exact lognormal solution of the SDE for high numerical accuracy at any time step. Commonly used for modeling asset prices and growth rates with constant relative volatility.

generate_shock(current_value: float) float[source]

Generate a multiplicative shock using GBM.

Simulates the SDE dS = μ*S*dt + σ*S*dW using its exact solution, which gives the multiplicative shock: S(t+dt)/S(t) = exp((μ - σ²/2)*dt + σ*√dt*Z)

where Z ~ N(0,1)

Parameters:

current_value (float) – Current value (not used in GBM, included for interface)

Return type:

float

Returns:

Multiplicative shock factor
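The shock formula above can be sketched standalone (gbm_shock is an illustrative function, not the class method):

```python
import math
import random

def gbm_shock(mu, sigma, dt, rng):
    """Multiplicative GBM shock via the exact lognormal solution:
    exp((mu - sigma**2 / 2) * dt + sigma * sqrt(dt) * Z), Z ~ N(0, 1)."""
    z = rng.gauss(0.0, 1.0)
    return math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
```

Applying the shock multiplicatively, S_next = S * gbm_shock(mu, sigma, dt, rng), keeps the path strictly positive regardless of dt.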

class LognormalVolatility(config: StochasticConfig)[source]

Bases: StochasticProcess

Simple lognormal volatility generator for revenue/sales.

Provides simpler alternative to full GBM by applying lognormal shocks centered around 1.0. Suitable for modeling revenue variations without drift components.

generate_shock(current_value: float) float[source]

Generate a lognormal multiplicative shock.

Simpler than full GBM - just applies lognormal volatility around 1.0. Shock = exp(σ*Z) where Z ~ N(0,1)

The expected shock is exp(σ²/2), which is approximately 1 for small σ

Parameters:

current_value (float) – Current value (not used)

Return type:

float

Returns:

Multiplicative shock factor centered around 1.0
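The E[shock] = exp(σ²/2) property noted above can be checked empirically with a quick stdlib sketch (illustrative only, not the package code):

```python
import math
import random

def lognormal_shock(sigma: float, rng: random.Random) -> float:
    """Shock = exp(sigma * Z), Z ~ N(0, 1); E[shock] = exp(sigma**2 / 2)."""
    return math.exp(sigma * rng.gauss(0.0, 1.0))

rng = random.Random(0)
sigma = 0.2
# Sample mean is close to exp(0.02) ~= 1.0202, slightly above 1.0
mean_shock = sum(lognormal_shock(sigma, rng) for _ in range(100_000)) / 100_000
```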

class MeanRevertingProcess(config: StochasticConfig, mean_level: float = 1.0, reversion_speed: float = 0.5)[source]

Bases: StochasticProcess

Exponential Ornstein-Uhlenbeck mean-reverting process.

Implements geometric (exponential) mean-reverting dynamics that produce strictly positive multiplicative shocks. Suitable for modeling variables that revert to long-term average levels, such as operating margins or capacity utilization rates.

The process operates in log-space:

d(log x) = θ*(log(μ) - log(x))*dt + σ*dW

This guarantees positive shocks and state-independent volatility.

References

Dixit & Pindyck (1994), Investment Under Uncertainty, Ch. 3.

generate_shock(current_value: float) float[source]

Generate mean-reverting multiplicative shock via exponential OU.

Uses the exact discrete-time transition for the Ornstein-Uhlenbeck process in log-space, which is unbiased for any time step dt:

log(X_{t+dt}) | log(X_t) ~ N(conditional_mean, conditional_var)

where:

conditional_mean = log(μ) + (log(X_t) - log(μ)) * exp(-θ*dt)

conditional_var = σ² * (1 - exp(-2θ*dt)) / (2θ)

The multiplicative shock is X_{t+dt} / X_t.

Parameters:

current_value (float) – Current value of the process (must be positive)

Return type:

float

Returns:

Strictly positive multiplicative shock
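The exact log-space transition above can be written out directly; the following stdlib sketch (not the package's implementation) passes the standard normal draw in as `z` so the deterministic part is easy to inspect:

```python
import math
import random

def ou_shock(x: float, mu: float, theta: float, sigma: float,
             dt: float, z: float) -> float:
    """Exact exponential-OU multiplicative shock X(t+dt)/X(t), z ~ N(0, 1)."""
    decay = math.exp(-theta * dt)
    cond_mean = math.log(mu) + (math.log(x) - math.log(mu)) * decay
    cond_var = sigma**2 * (1.0 - math.exp(-2.0 * theta * dt)) / (2.0 * theta)
    return math.exp(cond_mean + math.sqrt(cond_var) * z) / x

rng = random.Random(7)
shock = ou_shock(x=1.5, mu=1.0, theta=0.5, sigma=0.1, dt=1.0, z=rng.gauss(0.0, 1.0))
# With zero noise and x above mu, mean reversion pulls the value down (shock < 1)
deterministic = ou_shock(x=1.5, mu=1.0, theta=0.5, sigma=0.1, dt=1.0, z=0.0)
```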

create_stochastic_process(process_type: str, volatility: float, drift: float = 0.0, random_seed: int | None = None, time_step: float = 1.0) StochasticProcess[source]

Factory function to create stochastic processes.

Parameters:
  • process_type (str) – Type of process (“gbm”, “lognormal”, “mean_reverting”)

  • volatility (float) – Annual volatility

  • drift (float) – Annual drift rate (for GBM)

  • random_seed (Optional[int]) – Random seed for reproducibility

  • time_step (float) – Time step in years

Return type:

StochasticProcess

Returns:

StochasticProcess instance

Raises:

ValueError – If process_type is not recognized

ergodic_insurance.strategy_backtester module

Strategy backtesting framework for insurance decision strategies.

This module provides base classes and implementations for various insurance strategies that can be tested and compared in walk-forward validation.

Example

>>> from ergodic_insurance.strategy_backtester import ConservativeFixedStrategy, StrategyBacktester
>>> from ergodic_insurance.simulation import SimulationEngine
>>> # Create and configure a strategy
>>> strategy = ConservativeFixedStrategy(
...     primary_limit=5000000,
...     excess_limit=20000000,
...     deductible=100000
... )
>>>
>>> # Run backtest
>>> backtester = StrategyBacktester(simulation_engine)
>>> results = backtester.test_strategy(
...     strategy=strategy,
...     manufacturer=manufacturer,
...     config=monte_carlo_config
... )
class InsuranceStrategy(name: str)[source]

Bases: ABC

Abstract base class for insurance strategies.

Defines the interface that all insurance strategies must implement for use in backtesting and walk-forward validation.

abstractmethod get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get insurance program for the current state.

Parameters:
  • manufacturer (WidgetManufacturer) – Current manufacturer state

  • historical_losses (Optional[ndarray]) – Past loss data for adaptive strategies

  • current_year (int) – Current year in simulation

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram or None for no insurance.

update(losses: ndarray, recoveries: ndarray, year: int)[source]

Update strategy based on recent experience.

Parameters:
  • losses (ndarray) – Recent loss amounts

  • recoveries (ndarray) – Recent recovery amounts

  • year (int) – Current year

reset()[source]

Reset strategy to initial state.

get_description() str[source]

Get strategy description.

Return type:

str

Returns:

Human-readable strategy description.
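The interface pattern above can be illustrated with a simplified, self-contained sketch. The stub `Program` dataclass and `equity` parameter stand in for the package's `InsuranceProgram` and `WidgetManufacturer` types, which are assumed here, not reproduced:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class Program:
    """Stand-in for InsuranceProgram."""
    limit: float
    deductible: float

class Strategy(ABC):
    """Mirrors the InsuranceStrategy interface in simplified form."""

    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def get_insurance_program(self, equity: float, year: int = 0) -> Optional[Program]:
        """Return a program for the current state, or None for no insurance."""

class FixedStrategy(Strategy):
    """Static strategy: the same program regardless of state."""

    def __init__(self, limit: float, deductible: float):
        super().__init__("fixed")
        self._program = Program(limit, deductible)

    def get_insurance_program(self, equity: float, year: int = 0) -> Optional[Program]:
        return self._program

s = FixedStrategy(limit=5_000_000, deductible=100_000)
program = s.get_insurance_program(equity=10_000_000)
```

A no-insurance strategy would simply return `None` from `get_insurance_program`, matching `NoInsuranceStrategy` below.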

class NoInsuranceStrategy[source]

Bases: InsuranceStrategy

Baseline strategy with no insurance.

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Return no insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

None to indicate no insurance.

class ConservativeFixedStrategy(primary_limit: float = 5000000, excess_limit: float = 20000000, higher_limit: float = 25000000, deductible: float = 50000)[source]

Bases: InsuranceStrategy

Conservative strategy with high limits and low deductible.

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get conservative insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with high coverage.

class AggressiveFixedStrategy(primary_limit: float = 2000000, excess_limit: float = 5000000, deductible: float = 250000)[source]

Bases: InsuranceStrategy

Aggressive strategy with low limits and high deductible.

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get aggressive insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with limited coverage.

class OptimizedStaticStrategy(optimizer: PenaltyMethodOptimizer | None = None, target_roe: float = 0.15, max_ruin_prob: float = 0.01)[source]

Bases: InsuranceStrategy

Strategy using optimization to find best static limits.

optimize_limits(manufacturer: WidgetManufacturer, simulation_engine: Simulation)[source]

Run optimization to find best limits.

Parameters:
  • manufacturer (WidgetManufacturer) – Manufacturer whose coverage limits are optimized

  • simulation_engine (Simulation) – Simulation engine used to evaluate candidate limits

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get optimized insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with optimized parameters.

class AdaptiveStrategy(base_deductible: float = 100000, base_primary: float = 3000000, base_excess: float = 10000000, adaptation_window: int = 3, adjustment_factor: float = 0.2)[source]

Bases: InsuranceStrategy

Strategy that adjusts based on recent loss experience.

update(losses: ndarray, recoveries: ndarray, year: int)[source]

Update strategy based on recent losses.

Parameters:
  • losses (ndarray) – Recent loss amounts

  • recoveries (ndarray) – Recent recovery amounts

  • year (int) – Current year

get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]

Get adaptive insurance program.

Return type:

Optional[InsuranceProgram]

Returns:

InsuranceProgram with adapted parameters.

reset()[source]

Reset strategy to initial state.

class BacktestResult(strategy_name: str, simulation_results: SimulationResults | MonteCarloResults, metrics: ValidationMetrics, execution_time: float, config: MonteCarloConfig) None[source]

Bases: object

Results from strategy backtesting.

strategy_name

Name of tested strategy

simulation_results

Raw simulation results (either Simulation or MC results)

metrics

Calculated performance metrics

execution_time

Time taken to run backtest

config

Configuration used for backtest

strategy_name: str
simulation_results: SimulationResults | MonteCarloResults
metrics: ValidationMetrics
execution_time: float
config: MonteCarloConfig
class StrategyBacktester(simulation_engine: Simulation | None = None, metric_calculator: MetricCalculator | None = None)[source]

Bases: object

Engine for backtesting insurance strategies.

results_cache: Dict[str, BacktestResult]
test_strategy(strategy: InsuranceStrategy, manufacturer: WidgetManufacturer, config: MonteCarloConfig, use_cache: bool = True) BacktestResult[source]

Test a single strategy.

Parameters:
  • strategy (InsuranceStrategy) – Strategy to test

  • manufacturer (WidgetManufacturer) – Manufacturer model to simulate

  • config (MonteCarloConfig) – Monte Carlo configuration for the run

  • use_cache (bool) – Whether to reuse a cached result for this strategy

Return type:

BacktestResult

Returns:

BacktestResult with performance metrics.

test_multiple_strategies(strategies: List[InsuranceStrategy], manufacturer: WidgetManufacturer, config: MonteCarloConfig) DataFrame[source]

Test multiple strategies and compare.

Parameters:
  • strategies (List[InsuranceStrategy]) – Strategies to test

  • manufacturer (WidgetManufacturer) – Manufacturer model to simulate

  • config (MonteCarloConfig) – Monte Carlo configuration for the runs

Return type:

DataFrame

Returns:

DataFrame comparing strategy performance.

ergodic_insurance.summary_statistics module

Comprehensive summary statistics and report generation for simulation results.

This module provides statistical analysis tools, distribution fitting utilities, and formatted report generation for Monte Carlo simulation results.

format_quantile_key(q: float) str[source]

Format a quantile value as a dictionary key using per-mille resolution.

Uses per-mille (parts per thousand) to avoid key collisions for sub-percentile quantiles that are critical for insurance risk metrics.

Parameters:

q (float) – Quantile value in range [0, 1].

Return type:

str

Returns:

Formatted key string, e.g. q0250 for the 25th percentile, q0005 for the 0.5th percentile, q0001 for the 0.1th percentile.
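The per-mille key scheme can be reproduced with a one-line formatter. This is an illustrative sketch matching the documented examples; the package's exact rounding rules may differ:

```python
def quantile_key(q: float) -> str:
    """Per-mille quantile key: q in [0, 1], e.g. 0.25 -> 'q0250'."""
    return f"q{round(q * 1000):04d}"

# 25th percentile, 0.5th percentile, 0.1th percentile
keys = [quantile_key(q) for q in (0.25, 0.005, 0.001)]
```

Per-mille resolution keeps sub-percentile keys distinct: percent-based keys would collapse 0.005 and 0.001 into the same "q00" bucket.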

class StatisticalSummary(basic_stats: Dict[str, float], distribution_params: Dict[str, Dict[str, float]], confidence_intervals: Dict[str, Tuple[float, float]], hypothesis_tests: Dict[str, Dict[str, float]], extreme_values: Dict[str, float]) None[source]

Bases: object

Complete statistical summary of simulation results.

basic_stats: Dict[str, float]
distribution_params: Dict[str, Dict[str, float]]
confidence_intervals: Dict[str, Tuple[float, float]]
hypothesis_tests: Dict[str, Dict[str, float]]
extreme_values: Dict[str, float]
to_dataframe() DataFrame[source]

Convert summary to pandas DataFrame.

Return type:

DataFrame

Returns:

DataFrame with all summary statistics

class SummaryStatistics(confidence_level: float = 0.95, bootstrap_iterations: int = 1000, seed: int | None = None, assume_iid: bool = True)[source]

Bases: object

Calculate comprehensive summary statistics for simulation results.

calculate_summary(data: ndarray, weights: ndarray | None = None) StatisticalSummary[source]

Calculate complete statistical summary.

Parameters:
  • data (ndarray) – Input data array

  • weights (Optional[ndarray]) – Optional weights for weighted statistics

Return type:

StatisticalSummary

Returns:

Complete statistical summary

class TDigest(compression: float = 200)[source]

Bases: object

T-digest data structure for streaming quantile estimation.

Implements the merging digest variant from Dunning & Ertl (2019). Provides accurate quantile estimates, especially at the tails, with bounded memory usage proportional to the compression parameter.

The t-digest maintains a sorted set of centroids (mean, weight) that adaptively cluster data points. Clusters near the tails (q->0 or q->1) are kept small for precision, while clusters near the median can be larger.

Parameters:

compression (float) – Controls accuracy vs memory tradeoff. Higher values give more accuracy but use more memory. Typical range: 100-300. Default 200 gives ~0.2-1% error at median, ~0.005-0.05% at q01/q99.

References

Dunning, T. & Ertl, O. (2019). “Computing Extremely Accurate Quantiles Using t-Digests.” arXiv:1902.04023.

update(value: float) None[source]

Add a single observation to the digest.

Parameters:

value (float) – The value to add.

Raises:

ValueError – If the value is NaN or infinity.

Return type:

None

update_batch(values: ndarray) None[source]

Add an array of observations to the digest.

Parameters:

values (ndarray) – Array of values to add.

Raises:

ValueError – If any value is NaN or infinity.

Return type:

None

merge(other: TDigest) None[source]

Merge another t-digest into this one.

After merging, this digest contains the combined information from both digests. The other digest is not modified.

Parameters:

other (TDigest) – Another TDigest to merge into this one.

Return type:

None

quantile(q: float) float[source]

Estimate a single quantile.

Parameters:

q (float) – Quantile to estimate, in range [0, 1].

Return type:

float

Returns:

Estimated value at the given quantile.

Raises:

ValueError – If the digest is empty.

quantiles(qs: List[float]) Dict[str, float][source]

Estimate multiple quantiles.

Parameters:

qs (List[float]) – List of quantiles to estimate, each in range [0, 1].

Return type:

Dict[str, float]

Returns:

Dictionary mapping per-mille quantile keys (e.g. q0250 for the 25th percentile) to estimated values.

cdf(value: float) float[source]

Estimate the cumulative distribution function at a value.

Parameters:

value (float) – The value at which to estimate the CDF.

Return type:

float

Returns:

Estimated probability P(X <= value).

Raises:

ValueError – If the digest is empty.

property centroid_count: int

Return the number of centroids currently stored.

__len__() int[source]

Return the total count of observations added.

Return type:

int
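The tail-accuracy behavior described above comes from the t-digest scale function, which limits how wide a cluster may be at each quantile. This standalone sketch uses the k1 scale function from Dunning & Ertl (an assumption about which variant the package uses) to show that permitted cluster widths shrink toward the tails:

```python
import math

def k1(q: float, compression: float) -> float:
    """k1 scale function; a cluster may span at most one unit of k."""
    return compression / (2.0 * math.pi) * math.asin(2.0 * q - 1.0)

def max_cluster_width(q: float, compression: float) -> float:
    """Bisect for the widest quantile range [q, q+w] with k1 span <= 1."""
    lo, hi = q, min(q + 0.5, 1.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if k1(mid, compression) - k1(q, compression) < 1.0:
            lo = mid
        else:
            hi = mid
    return lo - q

# Clusters near the median may span a much wider quantile range
# than clusters near the tails, giving tail-heavy accuracy.
wide = max_cluster_width(0.5, compression=200)
narrow = max_cluster_width(0.01, compression=200)
```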

class QuantileCalculator(quantiles: List[float] | None = None, seed: int | None = None)[source]

Bases: object

Efficient quantile calculation for large datasets.

calculate_quantiles(data_hash: int, method: str = 'linear') Dict[str, float][source]

Calculate quantiles with caching.

Parameters:
  • data_hash (int) – Hash of data array for caching

  • method (str) – Interpolation method

Return type:

Dict[str, float]

Returns:

Dictionary of quantile values

calculate(data: ndarray, method: str = 'linear') Dict[str, float][source]

Calculate quantiles for data.

Parameters:
  • data (ndarray) – Input data array

  • method (str) – Interpolation method (‘linear’, ‘nearest’, ‘lower’, ‘higher’, ‘midpoint’)

Return type:

Dict[str, float]

Returns:

Dictionary of quantile values

streaming_quantiles(data_stream: ndarray, compression: float = 200) Dict[str, float][source]

Calculate quantiles for streaming data using the t-digest algorithm.

Uses the t-digest merging digest algorithm (Dunning & Ertl, 2019) for streaming quantile estimation with bounded memory and high accuracy, especially at tail quantiles relevant to insurance risk metrics.

Parameters:
  • data_stream (ndarray) – Streaming data array

  • compression (float) – Controls accuracy vs memory tradeoff. Higher values give more accuracy but use more memory. Typical range: 100-300. Default 200 gives ~0.2-1% error at median, ~0.005-0.05% at q01/q99. Passed directly to TDigest.

Return type:

Dict[str, float]

Returns:

Dictionary of approximate quantile values

class DistributionFitter[source]

Bases: object

Fit and compare multiple probability distributions to data.

DISTRIBUTIONS = {'beta': <scipy.stats._continuous_distns.beta_gen object>, 'exponential': <scipy.stats._continuous_distns.expon_gen object>, 'gamma': <scipy.stats._continuous_distns.gamma_gen object>, 'lognormal': <scipy.stats._continuous_distns.lognorm_gen object>, 'normal': <scipy.stats._continuous_distns.norm_gen object>, 'pareto': <scipy.stats._continuous_distns.pareto_gen object>, 'uniform': <scipy.stats._continuous_distns.uniform_gen object>, 'weibull': <scipy.stats._continuous_distns.weibull_min_gen object>}
fit_all(data: ndarray, distributions: List[str] | None = None) DataFrame[source]

Fit multiple distributions and compare goodness of fit.

Parameters:
  • data (ndarray) – Input data

  • distributions (Optional[List[str]]) – List of distributions to fit (None for all)

Return type:

DataFrame

Returns:

DataFrame comparing distribution fits

get_best_distribution(criterion: str = 'aic') Tuple[str, Dict[str, float]][source]

Get the best-fitting distribution based on criterion.

Parameters:

criterion (str) – Selection criterion (‘aic’, ‘bic’, ‘ks_pvalue’)

Return type:

Tuple[str, Dict[str, float]]

Returns:

Tuple of (distribution name, parameters)

generate_qq_plot_data(data: ndarray, distribution: str) Tuple[ndarray, ndarray][source]

Generate data for Q-Q plot.

Parameters:
  • data (ndarray) – Original data

  • distribution (str) – Distribution name

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (theoretical quantiles, sample quantiles)

class SummaryReportGenerator(style: str = 'markdown')[source]

Bases: object

Generate formatted summary reports for simulation results.

generate_report(summary: StatisticalSummary, title: str = 'Simulation Results Summary', metadata: Dict[str, Any] | None = None) str[source]

Generate formatted report.

Parameters:
  • summary (StatisticalSummary) – Statistical summary to report

  • title (str) – Report title

  • metadata (Optional[Dict[str, Any]]) – Optional metadata to include in the report

Return type:

str

Returns:

Formatted report string

ergodic_insurance.trajectory_storage module

Memory-efficient storage system for simulation trajectories.

This module provides a lightweight storage system for Monte Carlo simulation trajectories that minimizes RAM usage while storing both partial time series data and comprehensive summary statistics.

Features:
  • Memory-mapped numpy arrays for efficient storage

  • Optional HDF5 backend with compression

  • Configurable time series sampling (store every Nth year)

  • Lazy loading to minimize memory footprint

  • Automatic disk space management

  • CSV/JSON export for analysis tools

  • <2GB RAM usage for 100K simulations

  • <1GB disk usage with sampling

Example

>>> from ergodic_insurance.trajectory_storage import TrajectoryStorage
>>> storage = TrajectoryStorage(
...     storage_dir="./trajectories",
...     sample_interval=5,  # Store every 5th year
...     max_disk_usage_gb=1.0
... )
>>> # During simulation
>>> storage.store_simulation(
...     sim_id=0,
...     annual_losses=losses,
...     insurance_recoveries=recoveries,
...     retained_losses=retained,
...     final_assets=final_assets,
...     initial_assets=initial_assets
... )
>>> # Later retrieval
>>> data = storage.load_simulation(sim_id=0)
class StorageConfig(storage_dir: str = './trajectory_storage', backend: str = 'memmap', sample_interval: int = 10, max_disk_usage_gb: float = 1.0, compression: bool = True, compression_level: int = 4, chunk_size: int = 1000, enable_summary_stats: bool = True, enable_time_series: bool = True, dtype: Any = <class 'numpy.float32'>) None[source]

Bases: object

Configuration for trajectory storage system.

storage_dir: str = './trajectory_storage'
backend: str = 'memmap'
sample_interval: int = 10
max_disk_usage_gb: float = 1.0
compression: bool = True
compression_level: int = 4
chunk_size: int = 1000
enable_summary_stats: bool = True
enable_time_series: bool = True
dtype

alias of float32

class SimulationSummary(sim_id: int, final_assets: float, total_losses: float, total_recoveries: float, mean_annual_loss: float, max_annual_loss: float, min_annual_loss: float, growth_rate: float, ruin_occurred: bool, ruin_year: int | None = None, volatility: float | None = None) None[source]

Bases: object

Summary statistics for a single simulation.

sim_id: int
final_assets: float
total_losses: float
total_recoveries: float
mean_annual_loss: float
max_annual_loss: float
min_annual_loss: float
growth_rate: float
ruin_occurred: bool
ruin_year: int | None = None
volatility: float | None = None
to_dict() Dict[str, Any][source]

Convert to dictionary for export.

Return type:

Dict[str, Any]

class TrajectoryStorage(config: StorageConfig | None = None)[source]

Bases: object

Memory-efficient storage for simulation trajectories.

Provides lightweight storage using memory-mapped arrays or HDF5, with configurable sampling and automatic disk space management.

store_simulation(sim_id: int, annual_losses: ndarray, insurance_recoveries: ndarray, retained_losses: ndarray, final_assets: float, initial_assets: float, ruin_occurred: bool = False, ruin_year: int | None = None) None[source]

Store simulation trajectory with automatic sampling.

Parameters:
  • sim_id (int) – Simulation identifier

  • annual_losses (ndarray) – Array of annual losses

  • insurance_recoveries (ndarray) – Array of insurance recoveries

  • retained_losses (ndarray) – Array of retained losses

  • final_assets (float) – Final asset value

  • initial_assets (float) – Initial asset value

  • ruin_occurred (bool) – Whether ruin occurred

  • ruin_year (Optional[int]) – Year of ruin (if applicable)

Return type:

None

load_simulation(sim_id: int, load_time_series: bool = False) Dict[str, Any][source]

Load simulation data with lazy loading.

Parameters:
  • sim_id (int) – Simulation identifier

  • load_time_series (bool) – Whether to load time series data

Return type:

Dict[str, Any]

Returns:

Dictionary with simulation data

export_summaries_csv(output_path: str) None[source]

Export all summary statistics to CSV.

Parameters:

output_path (str) – Path for CSV output file

Return type:

None

export_summaries_json(output_path: str) None[source]

Export all summary statistics to JSON.

Parameters:

output_path (str) – Path for JSON output file

Return type:

None

get_storage_stats() Dict[str, Any][source]

Get storage statistics.

Return type:

Dict[str, Any]

Returns:

Dictionary with storage statistics

clear_storage() None[source]

Clear all stored data.

Return type:

None

__enter__()[source]

Context manager entry.

__exit__(exc_type, exc_val, exc_tb)[source]

Context manager exit - ensure data is persisted.

ergodic_insurance.trends module

Trend module for insurance claim frequency and severity adjustments.

This module provides a hierarchy of trend classes that apply multiplicative adjustments to claim frequencies and severities over time. Trends model how insurance risks evolve due to inflation, exposure growth, regulatory changes, or other systematic factors.

Key Concepts:
  • All trends are multiplicative (1.0 = no change, 1.03 = 3% increase)

  • Support both annual and sub-annual (monthly) time steps

  • Seedable for reproducibility in stochastic trends

  • Time-based multipliers for dynamic risk evolution

Example

Basic usage with linear trend:

from ergodic_insurance.trends import LinearTrend, ScenarioTrend

# 3% annual inflation trend
inflation = LinearTrend(annual_rate=0.03)
multiplier_year5 = inflation.get_multiplier(5.0)  # ~1.159

# Custom scenario with varying rates
scenario = ScenarioTrend(
    factors=[1.0, 1.05, 1.08, 1.06, 1.10],
    time_unit="annual"
)
multiplier_year3_5 = scenario.get_multiplier(3.5)  # Interpolated
Since:

Version 0.4.0 - Core trend infrastructure for ClaimGenerator

class Trend(seed: int | None = None)[source]

Bases: ABC

Abstract base class for all trend implementations.

Defines the interface that all trend classes must implement. Trends provide multiplicative adjustments over time for frequencies and severities in insurance claim modeling.

All trend implementations must provide:
  • get_multiplier(time): Returns multiplicative factor at given time

  • Proper handling of edge cases (negative time, etc.)

  • Reproducibility through seed support (if stochastic)

Examples

Implementing a custom trend:

class StepTrend(Trend):
    def __init__(self, step_time: float, step_factor: float):
        super().__init__()
        self.step_time = step_time
        self.step_factor = step_factor

    def get_multiplier(self, time: float) -> float:
        if time < 0:
            return 1.0
        return 1.0 if time < self.step_time else self.step_factor
abstractmethod get_multiplier(time: float) float[source]

Get the multiplicative adjustment factor at a given time.

Parameters:

time (float) – Time point (in years from start) to get multiplier for. Can be fractional for sub-annual precision.

Returns:

Multiplicative factor (1.0 = no change, >1.0 = increase,

<1.0 = decrease).

Return type:

float

Note

Implementations should handle negative time gracefully, typically returning 1.0 or the initial value.

reset_seed(seed: int) None[source]

Reset random seed for stochastic trends.

Parameters:

seed (int) – New random seed to use.

Return type:

None

Note

This method allows re-running scenarios with different random outcomes while maintaining reproducibility.

class NoTrend(seed: int | None = None)[source]

Bases: Trend

Default trend implementation with no adjustment over time.

This trend always returns a multiplier of 1.0, representing no change in frequency or severity over time. Useful as a default or baseline.

Examples

Using NoTrend as baseline:

from ergodic_insurance.trends import NoTrend

baseline = NoTrend()

# Always returns 1.0
assert baseline.get_multiplier(0) == 1.0
assert baseline.get_multiplier(10) == 1.0
assert baseline.get_multiplier(-5) == 1.0
get_multiplier(time: float) float[source]

Return constant multiplier of 1.0.

Parameters:

time (float) – Time point (ignored).

Returns:

Always returns 1.0.

Return type:

float

class LinearTrend(annual_rate: float = 0.03, seed: int | None = None)[source]

Bases: Trend

Linear compound growth trend with constant annual rate.

Models exponential growth/decay with a fixed annual rate, similar to compound interest. Commonly used for inflation, exposure growth, or systematic risk changes.

The multiplier at time t is calculated as: (1 + annual_rate)^t

annual_rate

Annual growth rate (0.03 = 3% growth, -0.02 = 2% decay).

Examples

Modeling inflation:

from ergodic_insurance.trends import LinearTrend

# 3% annual inflation
inflation = LinearTrend(annual_rate=0.03)

# After 5 years: 1.03^5 ≈ 1.159
mult_5y = inflation.get_multiplier(5.0)
print(f"5-year inflation factor: {mult_5y:.3f}")

# After 6 months: 1.03^0.5 ≈ 1.015
mult_6m = inflation.get_multiplier(0.5)
print(f"6-month inflation factor: {mult_6m:.3f}")

Modeling exposure decay:

# 2% annual exposure reduction
reduction = LinearTrend(annual_rate=-0.02)
mult_10y = reduction.get_multiplier(10.0)  # 0.98^10 ≈ 0.817
get_multiplier(time: float) float[source]

Calculate compound growth multiplier at given time.

Parameters:

time (float) – Time in years from start. Can be fractional for sub-annual calculations. Negative times return 1.0.

Returns:

Multiplicative factor calculated as (1 + annual_rate)^time.

Returns 1.0 for negative times.

Return type:

float

Examples

Calculating multipliers:

trend = LinearTrend(0.04)  # 4% annual

# Year 1: 1.04
mult_1 = trend.get_multiplier(1.0)

# Year 2.5: 1.04^2.5 ≈ 1.103
mult_2_5 = trend.get_multiplier(2.5)

# Negative time: 1.0
mult_neg = trend.get_multiplier(-1.0)
class RandomWalkTrend(drift: float = 0.0, volatility: float = 0.1, seed: int | None = None)[source]

Bases: Trend

Random walk trend with drift and volatility.

Models a geometric random walk (geometric Brownian motion) where the multiplier evolves as a cumulative product of random changes. Commonly used for modeling market indices, asset prices, or unpredictable long-term trends in insurance markets.

The multiplier at time t follows: M(t) = exp(drift * t + volatility * W(t)) where W(t) is a Brownian motion.

drift

Annual drift rate (expected growth rate).

volatility

Annual volatility (standard deviation of log returns).

cached_path

Cached random path for efficiency.

cached_times

Time points for the cached path.

Examples

Basic random walk with drift:

from ergodic_insurance.trends import RandomWalkTrend

# 2% drift with 10% volatility
trend = RandomWalkTrend(drift=0.02, volatility=0.10, seed=42)

# Generate multipliers
mult_1 = trend.get_multiplier(1.0)  # Random around e^0.02
mult_5 = trend.get_multiplier(5.0)  # More variation

Market-like volatility:

# High volatility market
volatile = RandomWalkTrend(drift=0.0, volatility=0.30)

# Low volatility with positive drift
stable = RandomWalkTrend(drift=0.03, volatility=0.05)
get_multiplier(time: float) float[source]

Get random walk multiplier at given time.

Parameters:

time (float) – Time in years from start. Negative times return 1.0.

Returns:

Multiplicative factor following geometric Brownian motion.

Always positive due to exponential transformation.

Return type:

float

Note

The path is cached on first call for efficiency. All subsequent calls will use the same random path, ensuring consistency within a simulation run.

reset_seed(seed: int) None[source]

Reset random seed and clear cached path.

Parameters:

seed (int) – New random seed to use.

Return type:

None

class MeanRevertingTrend(mean_level: float = 1.0, reversion_speed: float = 0.2, volatility: float = 0.1, initial_level: float = 1.0, seed: int | None = None)[source]

Bases: Trend

Mean-reverting trend using Ornstein-Uhlenbeck process.

Models a trend that tends to revert to a long-term mean level, commonly used for interest rates, insurance market cycles, or any process with cyclical behavior around a stable level.

The process follows: dX(t) = theta*(mu - X(t))*dt + sigma*dW(t) where the multiplier M(t) = exp(X(t))

mean_level

Long-term mean multiplier level.

reversion_speed

Speed of mean reversion (theta).

volatility

Volatility of the process (sigma).

initial_level

Starting multiplier level.

cached_path

Cached process path for efficiency.

cached_times

Time points for the cached path.

Examples

Insurance market cycle:

from ergodic_insurance.trends import MeanRevertingTrend

# Market cycles around 1.0 with 5-year half-life
market = MeanRevertingTrend(
    mean_level=1.0,
    reversion_speed=0.14,  # ln(2)/5 years
    volatility=0.10,
    initial_level=1.1,  # Start in hard market
    seed=42
)

# Will gradually revert to 1.0
mult_1 = market.get_multiplier(1.0)
mult_10 = market.get_multiplier(10.0)  # Closer to 1.0

Interest rate model:

# Interest rates reverting to 3% with high volatility
rates = MeanRevertingTrend(
    mean_level=1.03,
    reversion_speed=0.5,
    volatility=0.15
)
get_multiplier(time: float) float[source]

Get mean-reverting multiplier at given time.

Parameters:

time (float) – Time in years from start. Negative times return 1.0.

Returns:

Multiplicative factor following OU process.

Always positive. Tends toward mean_level over time.

Return type:

float

Note

The path is cached on first call for efficiency. The process exhibits mean reversion: starting values far from the mean will tend to move toward it over time.

reset_seed(seed: int) None[source]

Reset random seed and clear cached path.

Parameters:

seed (int) – New random seed to use.

Return type:

None

class RegimeSwitchingTrend(regimes: List[float] | None = None, transition_probs: List[List[float]] | None = None, initial_regime: int = 0, regime_persistence: float = 1.0, seed: int | None = None)[source]

Bases: Trend

Trend that switches between different market regimes.

Models discrete regime changes such as hard/soft insurance markets, economic cycles, or regulatory environments. Each regime has its own multiplier, and transitions occur stochastically based on probabilities.

regimes

List of regime multipliers.

transition_matrix

Matrix of transition probabilities between regimes.

initial_regime

Starting regime index.

regime_persistence

How long regimes tend to last.

cached_regimes

Cached regime path for efficiency.

cached_times

Time points for the cached path.

Examples

Hard/soft insurance market:

from ergodic_insurance.trends import RegimeSwitchingTrend

# Two regimes: soft (0.9x) and hard (1.2x) markets
market = RegimeSwitchingTrend(
    regimes=[0.9, 1.2],
    transition_probs=[[0.8, 0.2],   # Soft -> [80% stay, 20% to hard]
                      [0.3, 0.7]],  # Hard -> [30% to soft, 70% stay]
    initial_regime=0,  # Start in soft market
    seed=42
)

# Multiplier switches between 0.9 and 1.2
mult_5 = market.get_multiplier(5.0)

Three-regime economic cycle:

# Recession, normal, boom
economy = RegimeSwitchingTrend(
    regimes=[0.8, 1.0, 1.3],
    transition_probs=[
        [0.6, 0.4, 0.0],  # Recession
        [0.1, 0.7, 0.2],  # Normal
        [0.0, 0.5, 0.5],  # Boom
    ],
    initial_regime=1  # Start in normal
)
get_multiplier(time: float) float[source]

Get regime-based multiplier at given time.

Parameters:

time (float) – Time in years from start. Negative times return 1.0.

Returns:

Multiplicative factor for the active regime at time t.

Changes discretely as regimes switch.

Return type:

float

Note

The regime path is cached on first call. Regime changes are stochastic but reproducible with the same seed. The actual regime durations depend on both transition probabilities and the regime_persistence parameter.
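The stochastic regime path can be sketched as a plain Markov chain simulation with the standard library. This is illustrative only (the package's handling of `regime_persistence` and sub-annual steps is not reproduced here):

```python
import random

def simulate_regime_path(transition_probs, initial_regime, n_years, seed=42):
    """Sample a yearly regime-index path from a Markov transition matrix."""
    rng = random.Random(seed)
    path = [initial_regime]
    for _ in range(n_years - 1):
        row = transition_probs[path[-1]]
        # Draw the next regime according to the current regime's row
        path.append(rng.choices(range(len(row)), weights=row)[0])
    return path

# Soft (0.9x) and hard (1.2x) markets, as in the two-regime example above
transition = [[0.8, 0.2], [0.3, 0.7]]
path = simulate_regime_path(transition, initial_regime=0, n_years=20)
multipliers = [[0.9, 1.2][r] for r in path]
```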

reset_seed(seed: int) None[source]

Reset random seed and clear cached regime path.

Parameters:

seed (int) – New random seed to use.

Return type:

None
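The mechanics behind the cached regime path can be illustrated as a plain Markov chain: at each step, the next regime is drawn from the current regime's row of the transition matrix. This is a standalone sketch of that idea, not the package's internal implementation; the function name and step granularity are hypothetical:

```python
import random

def sample_regime_path(regimes, transition_probs, initial_regime, n_steps, seed=None):
    """Sample a discrete regime path from a Markov transition matrix."""
    rng = random.Random(seed)
    path = [initial_regime]
    state = initial_regime
    for _ in range(n_steps):
        # Draw the next regime index from the current regime's transition row
        state = rng.choices(range(len(regimes)), weights=transition_probs[state])[0]
        path.append(state)
    return path

# Soft (0.9x) and hard (1.2x) markets, as in the example above
regimes = [0.9, 1.2]
probs = [[0.8, 0.2], [0.3, 0.7]]
path = sample_regime_path(regimes, probs, initial_regime=0, n_steps=10, seed=42)
multipliers = [regimes[s] for s in path]
```

Because the generator is seeded, the same seed always reproduces the same path, which is the property reset_seed relies on when clearing the cache.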

class ScenarioTrend(factors: List[float] | Dict[float, float], time_unit: str = 'annual', interpolation: str = 'linear', seed: int | None = None)[source]

Bases: Trend

Trend based on explicit scenario factors with interpolation.

Allows specifying exact multiplicative factors at specific time points, with linear interpolation between points. Useful for modeling known future changes, regulatory impacts, or custom scenarios.

factors

List or dict of multiplicative factors.

time_unit

Time unit for the factors (“annual” or “monthly”).

interpolation

Interpolation method (“linear” or “step”).

Examples

Annual scenario with known rates:

from ergodic_insurance.trends import ScenarioTrend

# Year 0: 1.0, Year 1: 1.05, Year 2: 1.08, etc.
scenario = ScenarioTrend(
    factors=[1.0, 1.05, 1.08, 1.06, 1.10],
    time_unit="annual"
)

# Exact points
mult_1 = scenario.get_multiplier(1.0)  # 1.05
mult_2 = scenario.get_multiplier(2.0)  # 1.08

# Interpolated
mult_1_5 = scenario.get_multiplier(1.5)  # ≈1.065

Monthly scenario:

# Monthly adjustment factors
monthly = ScenarioTrend(
    factors=[1.0, 1.01, 1.02, 1.015, 1.025, 1.03],
    time_unit="monthly"
)

# Month 3 (0.25 years)
mult_3m = monthly.get_multiplier(0.25)

Using dictionary for specific times:

# Specific time points
custom = ScenarioTrend(
    factors={0: 1.0, 2: 1.1, 5: 1.2, 10: 1.5},
    interpolation="linear"
)
get_multiplier(time: float) float[source]

Get interpolated multiplier at given time.

Parameters:

time (float) – Time in years from start. Can be fractional. Negative times return 1.0.

Returns:

Multiplicative factor, interpolated from scenario points.
  • Before first point: returns 1.0

  • After last point: returns last factor

  • Between points: interpolated based on method

Return type:

float

Examples

Interpolation behavior:

scenario = ScenarioTrend([1.0, 1.1, 1.2, 1.15])

# Exact points
mult_0 = scenario.get_multiplier(0.0)  # 1.0
mult_2 = scenario.get_multiplier(2.0)  # 1.2

# Linear interpolation
mult_1_5 = scenario.get_multiplier(1.5)  # 1.15

# Beyond range
mult_neg = scenario.get_multiplier(-1.0)  # 1.0
mult_10 = scenario.get_multiplier(10.0)  # 1.15 (last)
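The interpolation rules above (1.0 before the first point, the last factor after the last point, linear in between) can be sketched in a few lines of standalone Python. This is an illustration of the documented behavior, not the package's actual code:

```python
def scenario_multiplier(times, factors, t):
    """Piecewise-linear multiplier: 1.0 before the first point, last factor after the last."""
    if t < times[0]:
        return 1.0
    if t >= times[-1]:
        return factors[-1]
    # Find the surrounding scenario points and interpolate linearly between them
    for (t0, f0), (t1, f1) in zip(zip(times, factors), zip(times[1:], factors[1:])):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)

# Matches the ScenarioTrend([1.0, 1.1, 1.2, 1.15]) example above
mult = scenario_multiplier([0, 1, 2, 3], [1.0, 1.1, 1.2, 1.15], 1.5)
```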

ergodic_insurance.validation_metrics module

Validation metrics for walk-forward analysis and strategy backtesting.

This module provides performance metrics and comparison tools for evaluating insurance strategies across training and testing periods in walk-forward validation.

Example

>>> from ergodic_insurance.validation_metrics import ValidationMetrics, MetricCalculator
>>> import numpy as np
>>> # Calculate metrics for a strategy's performance
>>> returns = np.random.normal(0.08, 0.02, 1000)
>>> losses = np.random.exponential(100000, 1000)
>>>
>>> calculator = MetricCalculator()
>>> metrics = calculator.calculate_metrics(
...     returns=returns,
...     losses=losses,
...     final_assets=10000000
... )
>>>
>>> print(f"ROE: {metrics.roe:.2%}")
>>> print(f"Sharpe Ratio: {metrics.sharpe_ratio:.2f}")
class ValidationMetrics(roe: float, ruin_probability: float, growth_rate: float, volatility: float, sharpe_ratio: float = 0.0, max_drawdown: float = 0.0, var_95: float = 0.0, cvar_95: float = 0.0, win_rate: float = 0.0, profit_factor: float = 0.0, recovery_time: float = 0.0, stability: float = 0.0) None[source]

Bases: object

Container for validation performance metrics.

roe

Return on equity (annualized)

ruin_probability

Probability of insolvency

growth_rate

Compound annual growth rate

volatility

Standard deviation of returns

sharpe_ratio

Risk-adjusted return metric

max_drawdown

Maximum peak-to-trough decline

var_95

Value at Risk at 95% confidence

cvar_95

Conditional Value at Risk at 95% confidence

win_rate

Percentage of profitable periods

profit_factor

Ratio of gross profits to gross losses

recovery_time

Average time to recover from drawdown

stability

R-squared of equity curve

roe: float
ruin_probability: float
growth_rate: float
volatility: float
sharpe_ratio: float = 0.0
max_drawdown: float = 0.0
var_95: float = 0.0
cvar_95: float = 0.0
win_rate: float = 0.0
profit_factor: float = 0.0
recovery_time: float = 0.0
stability: float = 0.0
to_dict() Dict[str, float][source]

Convert metrics to dictionary.

Return type:

Dict[str, float]

Returns:

Dictionary of metric values.

compare(other: ValidationMetrics) Dict[str, float][source]

Compare metrics with another set.

Parameters:

other (ValidationMetrics) – Metrics to compare against.

Return type:

Dict[str, float]

Returns:

Dictionary of percentage differences.
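A percentage-difference comparison of two metric sets can be sketched as below. The exact sign and normalization convention compare() uses is not specified here; this sketch assumes differences are measured relative to the base (self) metrics, with function and variable names chosen for illustration:

```python
def compare_metrics(base, other):
    """Percentage difference of each metric relative to `base` (-0.25 = 25% lower)."""
    diffs = {}
    for name, base_val in base.items():
        if base_val != 0:
            diffs[name] = (other[name] - base_val) / abs(base_val)
    return diffs

in_sample = {"roe": 0.12, "sharpe_ratio": 1.5}
out_sample = {"roe": 0.09, "sharpe_ratio": 1.2}
diffs = compare_metrics(in_sample, out_sample)  # e.g. ROE degraded by 25%
```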

class StrategyPerformance(strategy_name: str, in_sample_metrics: ValidationMetrics | None = None, out_sample_metrics: ValidationMetrics | None = None, degradation: Dict[str, float] = <factory>, overfitting_score: float = 0.0, consistency_score: float = 0.0, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Performance tracking for a single strategy.

strategy_name

Name of the strategy

in_sample_metrics

Metrics from training period

out_sample_metrics

Metrics from testing period

degradation

Performance degradation from in-sample to out-of-sample

overfitting_score

Degree of overfitting (0 = none, 1 = severe)

consistency_score

Consistency across multiple windows

metadata

Additional strategy-specific data

strategy_name: str
in_sample_metrics: ValidationMetrics | None = None
out_sample_metrics: ValidationMetrics | None = None
degradation: Dict[str, float]
overfitting_score: float = 0.0
consistency_score: float = 0.0
metadata: Dict[str, Any]
calculate_degradation()[source]

Calculate performance degradation from in-sample to out-of-sample.

to_dataframe() DataFrame[source]

Convert performance to DataFrame for reporting.

Return type:

DataFrame

Returns:

DataFrame with performance metrics.

class MetricCalculator(risk_free_rate: float = 0.02)[source]

Bases: object

Calculator for performance metrics from simulation results.

calculate_metrics(returns: ndarray, losses: ndarray | None = None, final_assets: ndarray | None = None, initial_assets: float = 10000000, n_years: int | None = None) ValidationMetrics[source]

Calculate comprehensive performance metrics.

Parameters:
  • returns (ndarray) – Array of period returns

  • losses (Optional[ndarray]) – Array of loss amounts (optional)

  • final_assets (Optional[ndarray]) – Array of final asset values (optional)

  • initial_assets (float) – Initial asset value

  • n_years (Optional[int]) – Number of years for annualization

Return type:

ValidationMetrics

Returns:

ValidationMetrics object with calculated metrics.

calculate_rolling_metrics(returns: ndarray, window_size: int = 252) DataFrame[source]

Calculate rolling window metrics.

Parameters:
  • returns (ndarray) – Array of returns

  • window_size (int) – Size of rolling window

Return type:

DataFrame

Returns:

DataFrame with rolling metrics.
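The core of one such rolling column, an annualized rolling Sharpe ratio over a sliding window of daily returns, can be sketched as follows. This is an illustrative standalone computation; the annualization factor of 252 trading days and the handling of zero-variance windows are assumptions, not necessarily what calculate_rolling_metrics does internally:

```python
import numpy as np

def rolling_sharpe(returns, window_size=252, risk_free_rate=0.02):
    """Annualized Sharpe ratio computed over a sliding window of daily returns."""
    daily_rf = risk_free_rate / 252
    out = []
    for i in range(window_size, len(returns) + 1):
        window = returns[i - window_size:i]
        excess = window - daily_rf
        std = excess.std(ddof=1)
        # Guard against zero-variance windows rather than dividing by zero
        out.append(np.sqrt(252) * excess.mean() / std if std > 0 else 0.0)
    return np.array(out)

rng = np.random.default_rng(0)
returns = rng.normal(0.0004, 0.01, 600)
sharpes = rolling_sharpe(returns, window_size=252)
```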

class PerformanceTargets(min_roe: float | None = None, max_ruin_probability: float | None = None, min_sharpe_ratio: float | None = None, max_drawdown: float | None = None, min_growth_rate: float | None = None)[source]

Bases: object

User-defined performance targets for strategy evaluation.

min_roe

Minimum acceptable ROE

max_ruin_probability

Maximum acceptable ruin probability

min_sharpe_ratio

Minimum acceptable Sharpe ratio

max_drawdown

Maximum acceptable drawdown

min_growth_rate

Minimum acceptable growth rate

evaluate(metrics: ValidationMetrics) Tuple[bool, List[str]][source]

Evaluate metrics against targets.

Parameters:

metrics (ValidationMetrics) – Metrics to evaluate

Return type:

Tuple[bool, List[str]]

Returns:

Tuple of (meets_all_targets, list_of_failures)
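The evaluation pattern, checking each metric against its optional target and collecting human-readable failures, can be sketched as below. The message wording and which targets are checked here are illustrative assumptions, not the package's exact output:

```python
def evaluate_targets(metrics, min_roe=None, max_ruin_probability=None, min_sharpe_ratio=None):
    """Check metrics against optional targets; return (meets_all, list_of_failures)."""
    failures = []
    # Each target is only enforced when it was actually set
    if min_roe is not None and metrics["roe"] < min_roe:
        failures.append(f"ROE {metrics['roe']:.2%} below target {min_roe:.2%}")
    if max_ruin_probability is not None and metrics["ruin_probability"] > max_ruin_probability:
        failures.append(
            f"Ruin probability {metrics['ruin_probability']:.2%} "
            f"above target {max_ruin_probability:.2%}"
        )
    if min_sharpe_ratio is not None and metrics["sharpe_ratio"] < min_sharpe_ratio:
        failures.append(f"Sharpe {metrics['sharpe_ratio']:.2f} below target {min_sharpe_ratio:.2f}")
    return (len(failures) == 0, failures)

ok, why = evaluate_targets(
    {"roe": 0.08, "ruin_probability": 0.02, "sharpe_ratio": 0.9},
    min_roe=0.10, max_ruin_probability=0.01,
)
```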

ergodic_insurance.visualization_legacy module

Visualization utilities for professional WSJ-style plots.

This module provides standardized plotting functions with Wall Street Journal aesthetic for insurance analysis and risk metrics visualization.

NOTE: This module now acts as a facade for the new modular visualization package. New code should import directly from ergodic_insurance.visualization.

get_figure_factory(theme: Theme | None = None) FigureFactory | None[source]

Get or create global figure factory instance.

Parameters:

theme (Optional[Theme]) – Optional theme to use (defaults to DEFAULT)

Return type:

Optional[FigureFactory]

Returns:

FigureFactory instance if available, None otherwise

set_wsj_style(use_factory: bool = False, theme: Theme | None = None)[source]

Set matplotlib to use WSJ-style formatting.

Parameters:
  • use_factory (bool) – Whether to use new factory-based styling if available

  • theme (Optional[Theme]) – Optional theme to use with factory (defaults to DEFAULT)

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.set_wsj_style() instead.

format_currency(value: float, decimals: int = 0, abbreviate: bool = False) str[source]

Format value as currency.

Parameters:
  • value (float) – Numeric value to format

  • decimals (int) – Number of decimal places

  • abbreviate (bool) – If True, use K/M/B notation for large numbers

Return type:

str

Returns:

Formatted string (e.g., “$1,000” or “$1K” if abbreviate=True)

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.format_currency() instead.
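The K/M/B abbreviation behavior can be sketched as a small standalone formatter matching the documented examples ("$1,000" plain, "$1K" abbreviated). This is an illustrative reimplementation; the package's own function may differ in edge cases such as negative values or rounding near thresholds:

```python
def format_currency_sketch(value, decimals=0, abbreviate=False):
    """Format a number as currency, optionally abbreviating with K/M/B suffixes."""
    sign = "-" if value < 0 else ""
    v = abs(value)
    if abbreviate:
        # Largest matching threshold wins: billions before millions before thousands
        for threshold, suffix in ((1e9, "B"), (1e6, "M"), (1e3, "K")):
            if v >= threshold:
                return f"{sign}${v / threshold:.{decimals}f}{suffix}"
        return f"{sign}${v:.{decimals}f}"
    return f"{sign}${v:,.{decimals}f}"
```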

format_percentage(value: float, decimals: int = 1) str[source]

Format value as percentage.

Parameters:
  • value (float) – Numeric value (0.05 = 5%)

  • decimals (int) – Number of decimal places

Return type:

str

Returns:

Formatted string (e.g., “5.0%”)

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.format_percentage() instead.

class WSJFormatter[source]

Bases: object

Formatter for WSJ-style axis labels.

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.WSJFormatter instead.

static currency_formatter(x, pos)[source]

Format axis values as currency.

static currency(x: float, decimals: int = 1) str[source]

Format value as currency (shortened method name).

Return type:

str

static percentage_formatter(x, pos)[source]

Format axis values as percentage.

static percentage(x: float, decimals: int = 1) str[source]

Format value as percentage (shortened method name).

Return type:

str

static number(x: float, decimals: int = 2) str[source]

Format large numbers with appropriate suffix.

Return type:

str

static millions_formatter(x, pos)[source]

Format axis values in millions.

create_styled_figure(size_type: str = 'medium', theme: Theme | None = None, use_factory: bool = True, **kwargs) Tuple[Figure, Axes | ndarray][source]

Create a figure with automatic styling applied.

Parameters:
  • size_type (str) – Size preset (small, medium, large, blog, technical, presentation)

  • theme (Optional[Theme]) – Optional theme to use

  • use_factory (bool) – Whether to use factory if available

  • **kwargs – Additional arguments for figure creation

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Tuple of (figure, axes)

plot_loss_distribution(losses: ndarray | DataFrame, title: str = 'Loss Distribution', bins: int = 50, show_metrics: bool = True, var_levels: List[float] | None = None, figsize: Tuple[int, int] = (12, 6), show_stats: bool = False, log_scale: bool = False, use_factory: bool = False, theme: Theme | None = None) Figure[source]

Create WSJ-style loss distribution plot.

Parameters:
  • losses (Union[ndarray, DataFrame]) – Array of loss values or DataFrame with ‘amount’ column

  • title (str) – Plot title

  • bins (int) – Number of histogram bins

  • show_metrics (bool) – Whether to show VaR/TVaR lines

  • var_levels (Optional[List[float]]) – VaR confidence levels to show

  • figsize (Tuple[int, int]) – Figure size

  • show_stats (bool) – Whether to show statistics

  • log_scale (bool) – Whether to use log scale

  • use_factory (bool) – Whether to use new visualization factory if available

  • theme (Optional[Theme]) – Optional theme to use with factory

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_loss_distribution() instead.

plot_return_period_curve(losses: ndarray | DataFrame, return_periods: ndarray | None = None, scenarios: Dict[str, ndarray] | None = None, title: str = 'Return Period Curves', figsize: Tuple[int, int] = (10, 6), confidence_level: float = 0.95, show_grid: bool = True) Figure[source]

Create WSJ-style return period curve.

Parameters:
  • losses (Union[ndarray, DataFrame]) – Array of loss values or DataFrame of losses

  • return_periods (Optional[ndarray]) – Return periods to plot

  • scenarios (Optional[Dict[str, ndarray]]) – Named loss scenarios to compare

  • title (str) – Plot title

  • figsize (Tuple[int, int]) – Figure size

  • confidence_level (float) – Confidence level for the curve

  • show_grid (bool) – Whether to show grid lines

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_return_period_curve() instead.

plot_insurance_layers(layers: List[Dict[str, float]] | DataFrame, total_limit: float | None = None, title: str = 'Insurance Program Structure', figsize: Tuple[int, int] = (10, 6), losses: ndarray | DataFrame | None = None, loss_data: ndarray | DataFrame | None = None, show_expected_loss: bool = False) Figure[source]

Create WSJ-style insurance layer visualization.

Parameters:
  • layers (Union[List[Dict[str, float]], DataFrame]) – List of layer dictionaries or DataFrame with ‘attachment’, ‘limit’ columns

  • total_limit (Optional[float]) – Total program limit (calculated from layers if not provided)

  • title (str) – Plot title

  • figsize (Tuple[int, int]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_insurance_layers() instead.

create_interactive_dashboard(results: Dict[str, Any] | DataFrame, title: str = 'Monte Carlo Simulation Dashboard', height: int = 600, show_distributions: bool = False) Figure[source]

Create interactive Plotly dashboard with WSJ styling.

Parameters:
  • results (Union[Dict[str, Any], DataFrame]) – Dictionary with simulation results or DataFrame

  • title (str) – Dashboard title

  • height (int) – Dashboard height in pixels

  • show_distributions (bool) – Whether to show distribution plots

Return type:

Figure

Returns:

Plotly figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.create_interactive_dashboard() instead.

plot_convergence_diagnostics(convergence_stats: Dict[str, Any], title: str = 'Convergence Diagnostics', figsize: Tuple[int, int] = (12, 8), r_hat_threshold: float = 1.1, show_threshold: bool = False) Figure[source]

Create comprehensive convergence diagnostics plot.

Parameters:
  • convergence_stats (Dict[str, Any]) – Dictionary with convergence statistics

  • title (str) – Plot title

  • figsize (Tuple[int, int]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_convergence_diagnostics() instead.

plot_pareto_frontier_2d(frontier_points: List[Any], x_objective: str, y_objective: str, x_label: str | None = None, y_label: str | None = None, title: str = 'Pareto Frontier', highlight_knees: bool = True, show_trade_offs: bool = False, figsize: Tuple[float, float] = (10, 6)) Figure[source]

Plot 2D Pareto frontier with WSJ styling.

Parameters:
  • frontier_points (List[Any]) – List of ParetoPoint objects

  • x_objective (str) – Name of objective for x-axis

  • y_objective (str) – Name of objective for y-axis

  • x_label (Optional[str]) – Optional custom label for x-axis

  • y_label (Optional[str]) – Optional custom label for y-axis

  • title (str) – Plot title

  • highlight_knees (bool) – Whether to highlight knee points

  • show_trade_offs (bool) – Whether to show trade-off annotations

  • figsize (Tuple[float, float]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

plot_pareto_frontier_3d(frontier_points: List[Any], x_objective: str, y_objective: str, z_objective: str, x_label: str | None = None, y_label: str | None = None, z_label: str | None = None, title: str = '3D Pareto Frontier', figsize: Tuple[float, float] = (12, 8)) Figure[source]

Plot 3D Pareto frontier surface.

Parameters:
  • frontier_points (List[Any]) – List of ParetoPoint objects

  • x_objective (str) – Name of objective for x-axis

  • y_objective (str) – Name of objective for y-axis

  • z_objective (str) – Name of objective for z-axis

  • x_label (Optional[str]) – Optional custom label for x-axis

  • y_label (Optional[str]) – Optional custom label for y-axis

  • z_label (Optional[str]) – Optional custom label for z-axis

  • title (str) – Plot title

  • figsize (Tuple[float, float]) – Figure size

Return type:

Figure

Returns:

Matplotlib figure

create_interactive_pareto_frontier(frontier_points: List[Any], objectives: List[str], title: str = 'Interactive Pareto Frontier', height: int = 600, show_dominated: bool = True) Figure[source]

Create interactive Plotly Pareto frontier visualization.

Parameters:
  • frontier_points (List[Any]) – List of ParetoPoint objects

  • objectives (List[str]) – List of objective names to display

  • title (str) – Plot title

  • height (int) – Plot height in pixels

  • show_dominated (bool) – Whether to show dominated region

Return type:

Figure

Returns:

Plotly figure

plot_scenario_comparison(aggregated_results: Any, metrics: List[str] | None = None, figsize: Tuple[float, float] = (14, 8), save_path: str | None = None) Figure[source]

Create comprehensive scenario comparison visualization.

Parameters:
  • aggregated_results (Any) – AggregatedResults object from batch processing

  • metrics (Optional[List[str]]) – List of metrics to compare (default: key metrics)

  • figsize (Tuple[float, float]) – Figure size (width, height)

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Matplotlib figure

plot_sensitivity_heatmap(aggregated_results: Any, metric: str = 'mean_growth_rate', figsize: Tuple[float, float] = (10, 8), save_path: str | None = None) Figure[source]

Create sensitivity analysis heatmap.

Parameters:
  • aggregated_results (Any) – AggregatedResults with sensitivity analysis

  • metric (str) – Metric to visualize

  • figsize (Tuple[float, float]) – Figure size

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Matplotlib figure

plot_parameter_sweep_3d(aggregated_results: Any, param1: str, param2: str, metric: str = 'mean_growth_rate', height: int = 600, save_path: str | None = None) Figure[source]

Create 3D surface plot for parameter sweep results.

Parameters:
  • aggregated_results (Any) – AggregatedResults from grid search

  • param1 (str) – First parameter name

  • param2 (str) – Second parameter name

  • metric (str) – Metric to plot on z-axis

  • height (int) – Figure height in pixels

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Plotly figure

plot_scenario_convergence(batch_results: List[Any], metric: str = 'mean_growth_rate', figsize: Tuple[float, float] = (12, 6), save_path: str | None = None) Figure[source]

Plot convergence of metric across scenarios.

Parameters:
  • batch_results (List[Any]) – List of BatchResult objects

  • metric (str) – Metric to track

  • figsize (Tuple[float, float]) – Figure size

  • save_path (Optional[str]) – Path to save figure

Return type:

Figure

Returns:

Matplotlib figure

ergodic_insurance.walk_forward_validator module

Walk-forward validation system for insurance strategy testing.

This module implements a rolling window validation framework that tests insurance strategies across multiple time periods to detect overfitting and ensure robustness of insurance decisions.

Example

>>> from ergodic_insurance.walk_forward_validator import WalkForwardValidator
>>> from ergodic_insurance.strategy_backtester import ConservativeFixedStrategy, AdaptiveStrategy
>>> # Create validator with 3-year windows
>>> validator = WalkForwardValidator(
...     window_size=3,
...     step_size=1,
...     test_ratio=0.3
... )
>>>
>>> # Define strategies to test
>>> strategies = [
...     ConservativeFixedStrategy(),
...     AdaptiveStrategy()
... ]
>>>
>>> # Run walk-forward validation
>>> results = validator.validate_strategies(
...     strategies=strategies,
...     n_years=10,
...     n_simulations=1000
... )
>>>
>>> # Generate reports
>>> validator.generate_report(results, output_dir="./reports")
class ValidationWindow(window_id: int, train_start: int, train_end: int, test_start: int, test_end: int) None[source]

Bases: object

Represents a single validation window.

window_id

Unique identifier for the window

train_start

Start year of training period

train_end

End year of training period

test_start

Start year of testing period

test_end

End year of testing period

window_id: int
train_start: int
train_end: int
test_start: int
test_end: int
get_train_years() int[source]

Get number of training years.

Return type:

int

get_test_years() int[source]

Get number of testing years.

Return type:

int

__str__() str[source]

String representation.

Return type:

str

class WindowResult(window: ValidationWindow, strategy_performances: Dict[str, StrategyPerformance] = <factory>, optimization_params: Dict[str, Dict[str, float]] = <factory>, execution_time: float = 0.0) None[source]

Bases: object

Results from a single validation window.

window

The validation window

strategy_performances

Performance by strategy name

optimization_params

Optimized parameters if applicable

execution_time

Time to process window

window: ValidationWindow
strategy_performances: Dict[str, StrategyPerformance]
optimization_params: Dict[str, Dict[str, float]]
execution_time: float = 0.0
class ValidationResult(window_results: List[WindowResult] = <factory>, strategy_rankings: DataFrame = <factory>, overfitting_analysis: Dict[str, float] = <factory>, consistency_scores: Dict[str, float] = <factory>, best_strategy: str | None = None, metadata: Dict[str, Any] = <factory>) None[source]

Bases: object

Complete walk-forward validation results.

window_results

Results for each window

strategy_rankings

Overall strategy rankings

overfitting_analysis

Overfitting detection results

consistency_scores

Strategy consistency across windows

best_strategy

Recommended strategy based on validation

metadata

Additional validation metadata

window_results: List[WindowResult]
strategy_rankings: DataFrame
overfitting_analysis: Dict[str, float]
consistency_scores: Dict[str, float]
best_strategy: str | None = None
metadata: Dict[str, Any]
class WalkForwardValidator(window_size: int = 3, step_size: int = 1, test_ratio: float = 0.3, simulation_engine: Simulation | None = None, backtester: StrategyBacktester | None = None, performance_targets: PerformanceTargets | None = None, max_workers: int | None = None, seed: int | None = None)[source]

Bases: object

Walk-forward validation system for insurance strategies.

generate_windows(total_years: int) List[ValidationWindow][source]

Generate validation windows.

Parameters:

total_years (int) – Total years of data available

Return type:

List[ValidationWindow]

Returns:

List of validation windows.
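The rolling-window construction can be sketched as below: a window of fixed length slides across the horizon by step_size, and each window is split into training and testing years by test_ratio. The rounding and boundary conventions here (test years rounded, at least one test year, windows must fit entirely within the horizon) are illustrative assumptions, not necessarily the validator's exact rules:

```python
def generate_windows_sketch(total_years, window_size=3, step_size=1, test_ratio=0.3):
    """Slide a fixed-size window across the horizon, splitting each into train/test."""
    windows = []
    window_id = 0
    start = 0
    while start + window_size <= total_years:
        # Split the window: later years are held out for testing
        test_years = max(1, round(window_size * test_ratio))
        train_years = window_size - test_years
        windows.append({
            "window_id": window_id,
            "train_start": start,
            "train_end": start + train_years,
            "test_start": start + train_years,
            "test_end": start + window_size,
        })
        window_id += 1
        start += step_size
    return windows

# 10 years, 3-year windows stepping by 1 year -> 8 overlapping windows
windows = generate_windows_sketch(total_years=10, window_size=3, step_size=1, test_ratio=0.3)
```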

validate_strategies(strategies: List[InsuranceStrategy], n_years: int = 10, n_simulations: int = 1000, manufacturer: WidgetManufacturer | None = None, config: Config | None = None) ValidationResult[source]

Validate strategies using walk-forward analysis.

Parameters:
  • strategies (List[InsuranceStrategy]) – Strategies to validate

  • n_years (int) – Total years of data to simulate

  • n_simulations (int) – Number of simulations per window

  • manufacturer (Optional[WidgetManufacturer]) – Manufacturer model to use

  • config (Optional[Config]) – Configuration for the simulations

Return type:

ValidationResult

Returns:

ValidationResult with complete analysis.

generate_report(validation_result: ValidationResult, output_dir: str = './reports', include_visualizations: bool = True) Dict[str, Any][source]

Generate validation reports.

Parameters:
  • validation_result (ValidationResult) – Validation results to report

  • output_dir (str) – Directory for output files

  • include_visualizations (bool) – Whether to include plots

Return type:

Dict[str, Any]

Returns:

Dictionary of generated file paths.

Module contents

Ergodic Insurance Limits - Core Package.

This module provides the main entry point for the Ergodic Insurance Limits package, exposing the key classes and functions for insurance simulation and analysis using ergodic theory. The framework helps optimize insurance retentions and limits for businesses by analyzing time-average outcomes rather than traditional ensemble approaches.

Key Features:
  • Ergodic analysis of insurance decisions

  • Business optimization with insurance constraints

  • Monte Carlo simulation with trajectory storage

  • Insurance strategy backtesting and validation

  • Performance optimization and benchmarking

  • Comprehensive visualization and reporting

Top-level Exports:

The top-level __all__ exposes the essential classes for most workflows:

  • run_analysis / AnalysisResults — one-call analysis entry point

  • Config / ManufacturerConfig — configuration

  • InsuranceProgram / EnhancedInsuranceLayer — insurance modeling

  • Simulation / SimulationResults — running simulations

All other classes remain importable from their respective submodules (see Import Recipes below) and via from ergodic_insurance import <name> for backward compatibility.

Examples

One-call analysis (recommended starting point):

from ergodic_insurance import run_analysis

results = run_analysis(
    initial_assets=10_000_000,
    loss_frequency=2.5,
    loss_severity_mean=1_000_000,
    deductible=500_000,
    limit=10_000_000,
    premium_rate=0.025,
)
print(results.summary())
results.plot()

Quick start with defaults (creates a $10M manufacturer, 50-year horizon):

from ergodic_insurance import Config

config = Config()  # All defaults — just works

From basic company info:

from ergodic_insurance import Config

config = Config.from_company(
    initial_assets=50_000_000,
    operating_margin=0.12,
)
Import Recipes:

Loss modeling:

from ergodic_insurance.loss_distributions import (
    LossEvent, LossData, ManufacturingLossGenerator,
)

Business simulation:

from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.monte_carlo import MonteCarloEngine, MonteCarloResults

Ergodic & risk analysis:

from ergodic_insurance.ergodic_analyzer import ErgodicAnalyzer
from ergodic_insurance.ergodic_types import (
    ErgodicData, ErgodicAnalysisResults, ValidationResults,
    ScenarioComparison, BatchAnalysisResults,
)
from ergodic_insurance.scenario_analysis import (
    compare_scenarios, analyze_simulation_batch,
)
from ergodic_insurance.integrated_analysis import (
    integrate_loss_ergodic_analysis,
    validate_insurance_ergodic_impact,
)
from ergodic_insurance.risk_metrics import RiskMetrics
from ergodic_insurance.ruin_probability import RuinProbabilityAnalyzer

Insurance pricing:

from ergodic_insurance.insurance_pricing import InsurancePricer, MarketCycle

Business optimization:

from ergodic_insurance.business_optimizer import (
    BusinessOptimizer, BusinessObjective, BusinessConstraints,
    OptimalStrategy, BusinessOptimizationResult,
)

Strategies & backtesting:

from ergodic_insurance.strategy_backtester import (
    InsuranceStrategy, NoInsuranceStrategy, ConservativeFixedStrategy,
    AggressiveFixedStrategy, OptimizedStaticStrategy, AdaptiveStrategy,
    StrategyBacktester,
)
from ergodic_insurance.walk_forward_validator import WalkForwardValidator

Sensitivity analysis:

from ergodic_insurance.sensitivity import (
    SensitivityAnalyzer, SensitivityResult, TwoWaySensitivityResult,
)

Visualization:

from ergodic_insurance.visualization import StyleManager, Theme, FigureFactory

Validation & performance:

from ergodic_insurance.validation_metrics import (
    ValidationMetrics, MetricCalculator, PerformanceTargets,
)
from ergodic_insurance.accuracy_validator import AccuracyValidator, ValidationResult
from ergodic_insurance.performance_optimizer import (
    PerformanceOptimizer, OptimizationConfig,
)

Ledger (event sourcing):

from ergodic_insurance.ledger import (
    Ledger, LedgerEntry, TransactionType, EntryType, AccountType, AccountName,
)

Note

This module uses lazy imports to avoid circular dependencies during test discovery. All classes listed in the Import Recipes above are also accessible as from ergodic_insurance import <name> for backward compatibility, but they are not included in __all__ and will not appear in IDE auto-complete at the top level.

Since:

Version 0.4.0