ergodic_insurance package
Subpackages
- ergodic_insurance.reporting package
- Submodules
- ergodic_insurance.reporting.cache_manager module
StorageBackend, CacheConfig, CacheStats, CacheKey, BaseStorageBackend, LocalStorageBackend, CacheManager
- ergodic_insurance.reporting.config module
FigureConfig, TableConfig, SectionConfig, ReportMetadata, ReportStyle, ReportConfig, create_executive_config(), create_technical_config()
- ergodic_insurance.reporting.executive_report module
ExecutiveReport
- ergodic_insurance.reporting.formatters module
- ergodic_insurance.reporting.insight_extractor module
- ergodic_insurance.reporting.report_builder module
- ergodic_insurance.reporting.scenario_comparator module
ScenarioComparison, ScenarioComparator
- ergodic_insurance.reporting.table_generator module
TableGenerator, create_performance_table(), create_parameter_table(), create_sensitivity_table()
- ergodic_insurance.reporting.technical_report module
TechnicalReport
- ergodic_insurance.reporting.validator module
- Module contents
CacheManager, CacheConfig, CacheStats, StorageBackend, CacheKey, ReportConfig, ReportMetadata, ReportStyle, SectionConfig, FigureConfig, TableConfig, create_executive_config(), create_technical_config(), NumberFormatter, ColorCoder, TableFormatter, format_for_export(), TableGenerator, create_performance_table(), create_parameter_table(), create_sensitivity_table(), ReportBuilder, ExecutiveReport, TechnicalReport, ReportValidator, validate_results_data(), validate_parameters()
- ergodic_insurance.visualization package
- Submodules
- ergodic_insurance.visualization.annotations module
- ergodic_insurance.visualization.batch_plots module
- ergodic_insurance.visualization.core module
set_wsj_style(), format_currency(), format_percentage(), WSJFormatter
- ergodic_insurance.visualization.executive_plots module
safe_tight_layout(), plot_loss_distribution(), plot_return_period_curve(), plot_insurance_layers(), plot_roe_ruin_frontier(), plot_ruin_cliff(), plot_simulation_architecture(), plot_sample_paths(), plot_optimal_coverage_heatmap(), plot_sensitivity_tornado(), plot_robustness_heatmap(), plot_premium_multiplier(), plot_breakeven_timeline()
- ergodic_insurance.visualization.export module
- ergodic_insurance.visualization.figure_factory module
FigureFactory
- ergodic_insurance.visualization.improved_tower_plot module
- ergodic_insurance.visualization.interactive_plots module
- ergodic_insurance.visualization.style_manager module
Theme, ColorPalette, FontConfig, FigureConfig, GridConfig, StyleManager
- ergodic_insurance.visualization.technical_plots module
plot_convergence_diagnostics(), plot_pareto_frontier_2d(), plot_pareto_frontier_3d(), create_interactive_pareto_frontier(), plot_trace_plots(), plot_loss_distribution_validation(), plot_monte_carlo_convergence(), plot_enhanced_convergence_diagnostics(), plot_ergodic_divergence(), plot_path_dependent_wealth(), plot_correlation_structure(), plot_premium_decomposition(), plot_capital_efficiency_frontier_3d()
- Module contents
set_wsj_style(), format_currency(), format_percentage(), WSJFormatter, plot_loss_distribution(), plot_return_period_curve(), plot_insurance_layers(), plot_roe_ruin_frontier(), plot_ruin_cliff(), plot_convergence_diagnostics(), plot_enhanced_convergence_diagnostics(), plot_ergodic_divergence(), plot_trace_plots(), plot_loss_distribution_validation(), plot_monte_carlo_convergence(), plot_pareto_frontier_2d(), plot_pareto_frontier_3d(), plot_path_dependent_wealth(), create_interactive_pareto_frontier(), plot_correlation_structure(), plot_premium_decomposition(), plot_capital_efficiency_frontier_3d(), create_interactive_dashboard(), create_time_series_dashboard(), create_correlation_heatmap(), create_risk_dashboard(), plot_scenario_comparison(), plot_sensitivity_heatmap(), plot_parameter_sweep_3d(), plot_scenario_convergence(), plot_parallel_scenarios(), add_value_labels(), add_trend_annotation(), add_callout(), add_benchmark_line(), add_shaded_region(), add_data_source(), add_footnote(), save_figure(), save_for_publication(), save_for_presentation(), save_for_web(), batch_export(), FigureFactory, StyleManager, Theme
- ergodic_insurance.visualization_infra package
- Submodules
- ergodic_insurance.visualization_infra.figure_factory module
FigureFactory
- ergodic_insurance.visualization_infra.style_manager module
Theme, ColorPalette, FontConfig, FigureConfig, GridConfig, StyleManager
- Module contents
StyleManager, Theme, FigureFactory
Submodules
ergodic_insurance.accrual_manager module
Accrual and timing management for financial operations.
This module provides functionality to track timing differences between cash movements and accounting recognition, following GAAP principles.
Uses Decimal for all currency amounts to prevent floating-point precision errors.
- class AccrualType(*values)[source]
Bases: Enum
Types of accrued items.
- WAGES = 'wages'
- INTEREST = 'interest'
- TAXES = 'taxes'
- INSURANCE_CLAIMS = 'insurance_claims'
- REVENUE = 'revenue'
- OTHER = 'other'
- class PaymentSchedule(*values)[source]
Bases: Enum
Payment schedule types.
- IMMEDIATE = 'immediate'
- QUARTERLY = 'quarterly'
- ANNUAL = 'annual'
- CUSTOM = 'custom'
- class AccrualItem(item_type: AccrualType, amount: Decimal, period_incurred: int, payment_schedule: PaymentSchedule, payment_dates: List[int] = <factory>, amounts_paid: List[Decimal] = <factory>, description: str = '') → None[source]
Bases: object
Individual accrual item with tracking information.
Uses Decimal for all currency amounts to ensure precise calculations.
- item_type: AccrualType
- payment_schedule: PaymentSchedule
- __post_init__() → None[source]
Convert amounts to Decimal if needed (runtime check for backwards compatibility).
- Return type: None
- class AccrualManager[source]
Bases: object
Manages accruals and timing differences for financial operations.
Tracks accrued expenses and revenues with various payment schedules, particularly focusing on quarterly tax payments and multi-year claim settlements. Uses FIFO approach for payment matching.
- accrued_expenses: Dict[AccrualType, List[AccrualItem]]
- accrued_revenues: List[AccrualItem]
- __deepcopy__(memo: Dict[int, Any]) → AccrualManager[source]
Create a deep copy of this accrual manager.
- record_expense_accrual(item_type: AccrualType, amount: Decimal | float | int, payment_schedule: PaymentSchedule = PaymentSchedule.IMMEDIATE, payment_dates: List[int] | None = None, description: str = '') → AccrualItem[source]
Record an accrued expense.
- Parameters:
item_type (AccrualType) – Type of expense being accrued
amount (Union[Decimal, float, int]) – Total amount to be accrued (converted to Decimal)
payment_schedule (PaymentSchedule) – Schedule for payments
payment_dates (Optional[List[int]]) – Custom payment dates if schedule is CUSTOM
description (str) – Optional description of the accrual
- Return type: AccrualItem
- Returns:
The created AccrualItem
- record_revenue_accrual(amount: Decimal | float | int, collection_dates: List[int] | None = None, description: str = '') → AccrualItem[source]
Record accrued revenue not yet collected.
- process_payment(item_type: AccrualType, amount: Decimal | float | int, period: int | None = None) → List[Tuple[AccrualItem, Decimal]][source]
Process a payment against accrued items using FIFO.
- Parameters:
- Return type: List[Tuple[AccrualItem, Decimal]]
- Returns:
List of (AccrualItem, amount_applied) tuples with Decimal amounts
- get_quarterly_tax_schedule(annual_tax: Decimal | float | int) → List[Tuple[int, Decimal]][source]
Calculate quarterly tax payment schedule.
- get_claim_payment_schedule(claim_amount: Decimal | float | int, development_pattern: List[Decimal | float] | None = None) → List[Tuple[int, Decimal]][source]
Calculate insurance claim payment schedule over multiple years.
- Parameters:
- Return type: List[Tuple[int, Decimal]]
- Returns:
List of (period, amount) tuples for claim payments (Decimal amounts)
- get_total_accrued_expenses() → Decimal[source]
Get total outstanding accrued expenses as Decimal.
- Return type: Decimal
- get_total_accrued_revenues() → Decimal[source]
Get total outstanding accrued revenues as Decimal.
- Return type: Decimal
- get_accruals_by_type(item_type: AccrualType) → List[AccrualItem][source]
Get all accruals of a specific type.
- Parameters:
item_type (AccrualType) – Type of accrual to retrieve
- Return type: List[AccrualItem]
- Returns:
List of accruals of the specified type
- get_payments_due(period: int | None = None) → Dict[AccrualType, Decimal][source]
Get payments due in a specific period.
- advance_period(periods: int = 1)[source]
Advance the current period.
- Parameters:
periods (int) – Number of periods to advance
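Example
A minimal usage sketch based on the signatures above (the dollar amounts and the single-period payment flow are illustrative assumptions, not package defaults):
from decimal import Decimal
from ergodic_insurance.accrual_manager import AccrualManager, AccrualType, PaymentSchedule

manager = AccrualManager()

# Accrue $400K of annual taxes, paid on the standard quarterly schedule
manager.record_expense_accrual(
    item_type=AccrualType.TAXES,
    amount=Decimal("400000"),
    payment_schedule=PaymentSchedule.QUARTERLY,
)

# Pay whatever is due this period, then advance the clock
for item_type, amount in manager.get_payments_due().items():
    manager.process_payment(item_type, amount)
manager.advance_period()

print(manager.get_total_accrued_expenses())  # remaining accrued balance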
ergodic_insurance.accuracy_validator module
Numerical accuracy validation for Monte Carlo simulations.
This module provides tools to validate the numerical accuracy of optimized Monte Carlo simulations against reference implementations, ensuring that performance optimizations don’t compromise result quality.
- Key features:
High-precision reference implementations
Statistical validation of distributions
Edge case and boundary condition testing
Accuracy comparison metrics
Detailed validation reports
Example
>>> from ergodic_insurance.accuracy_validator import AccuracyValidator
>>> import numpy as np
>>>
>>> validator = AccuracyValidator()
>>>
>>> # Compare optimized vs reference implementation
>>> optimized_results = np.random.normal(0.08, 0.02, 10000)
>>> reference_results = np.random.normal(0.08, 0.02, 10000)
>>>
>>> validation = validator.compare_implementations(
... optimized_results, reference_results
... )
>>> print(f"Accuracy: {validation.accuracy_score:.4f}")
Google-style docstrings are used throughout for Sphinx documentation.
- class ValidationResult(accuracy_score: float, mean_error: float = 0.0, max_error: float = 0.0, relative_error: float = 0.0, ks_statistic: float = 0.0, ks_pvalue: float = 0.0, passed_tests: List[str] = <factory>, failed_tests: List[str] = <factory>, edge_cases: Dict[str, bool] = <factory>) → None[source]
Bases: object
Results from accuracy validation.
- class ReferenceImplementations[source]
Bases: object
High-precision reference implementations for validation.
These implementations prioritize accuracy over speed and serve as the ground truth for validation.
- static calculate_growth_rate_precise(final_assets: float, initial_assets: float, n_years: float) → float[source]
Calculate growth rate with high precision.
- static apply_insurance_precise(loss: float, attachment: float, limit: float) → Tuple[float, float][source]
Apply insurance with precise calculations.
- static calculate_var_precise(losses: ndarray, confidence: float) → float[source]
Calculate Value at Risk with high precision.
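For orientation, standard excess-of-loss layer math splits a loss at the attachment point and caps the recovery at the limit. The self-contained helper below illustrates that calculation; the return ordering of the real apply_insurance_precise method is not documented here, so treat this as a sketch:
def apply_layer(loss: float, attachment: float, limit: float) -> tuple[float, float]:
    # Recovery is the part of the loss above the attachment, capped at the limit
    recovered = min(max(loss - attachment, 0.0), limit)
    retained = loss - recovered
    return retained, recovered

# A $7M loss against a $5M xs $1M layer: $1M retained below the attachment,
# $5M recovered in the layer, $1M retained above exhaustion
print(apply_layer(7_000_000, 1_000_000, 5_000_000))  # (2000000.0, 5000000.0)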
- class StatisticalValidation[source]
Bases: object
Statistical tests for distribution validation.
- static compare_distributions(data1: ndarray, data2: ndarray) → Dict[str, Any][source]
Compare two distributions statistically.
- class EdgeCaseTester[source]
Bases: object
Test edge cases and boundary conditions.
- class AccuracyValidator(tolerance: float = 0.01)[source]
Bases: object
Main accuracy validation engine.
Provides comprehensive validation of numerical accuracy for Monte Carlo simulations.
- compare_implementations(optimized_results: ndarray, reference_results: ndarray, test_name: str = 'Implementation Comparison') → ValidationResult[source]
Compare optimized implementation against reference.
- Parameters:
- Return type: ValidationResult
- Returns:
ValidationResult with comparison metrics.
- validate_growth_rates(optimized_func: Callable, test_cases: List[Tuple] | None = None) → ValidationResult[source]
Validate growth rate calculations.
- Parameters:
- Return type: ValidationResult
- Returns:
ValidationResult for growth rate calculations.
- validate_insurance_calculations(optimized_func: Callable, test_cases: List[Tuple] | None = None) → ValidationResult[source]
Validate insurance calculations.
- Parameters:
- Return type: ValidationResult
- Returns:
ValidationResult for insurance calculations.
- validate_risk_metrics(optimized_var: Callable, optimized_tvar: Callable, test_data: ndarray | None = None) → ValidationResult[source]
Validate risk metric calculations.
- Parameters:
- Return type: ValidationResult
- Returns:
ValidationResult for risk metrics.
- run_full_validation() → ValidationResult[source]
Run comprehensive validation suite.
- Return type: ValidationResult
- Returns:
Complete ValidationResult.
- generate_validation_report(results: List[ValidationResult]) → str[source]
Generate comprehensive validation report.
- Parameters:
results (List[ValidationResult]) – List of validation results.
- Return type: str
- Returns:
Formatted validation report.
ergodic_insurance.adaptive_stopping module
Adaptive stopping criteria for Monte Carlo simulations.
This module implements adaptive stopping rules based on convergence diagnostics, allowing simulations to terminate early when convergence criteria are met.
- class StoppingRule(*values)[source]
Bases: Enum
Enumeration of available stopping rules.
- R_HAT = 'r_hat'
- ESS = 'ess'
- RELATIVE_CHANGE = 'relative_change'
- MCSE = 'mcse'
- GEWEKE = 'geweke'
- HEIDELBERGER = 'heidelberger'
- COMBINED = 'combined'
- CUSTOM = 'custom'
- class StoppingCriteria(rule: StoppingRule = StoppingRule.COMBINED, r_hat_threshold: float = 1.05, min_ess: int = 1000, relative_tolerance: float = 0.01, mcse_relative_threshold: float = 0.05, min_iterations: int = 1000, max_iterations: int = 100000, check_interval: int = 100, patience: int = 3, confidence_level: float = 0.95) → None[source]
Bases: object
Configuration for stopping criteria.
- rule: StoppingRule = 'combined'
- class ConvergenceStatus(converged: bool, iteration: int, reason: str, diagnostics: Dict[str, float], should_stop: bool, estimated_remaining: int | None = None) → None[source]
Bases: object
Container for convergence status information.
- class AdaptiveStoppingMonitor(criteria: StoppingCriteria | None = None, custom_rule: Callable | None = None)[source]
Bases: object
Monitor for adaptive stopping based on convergence criteria.
Provides sophisticated adaptive stopping with multiple criteria, burn-in detection, and convergence rate estimation.
- check_convergence(iteration: int, chains: ndarray, diagnostics: Dict[str, float] | None = None) → ConvergenceStatus[source]
Check if convergence criteria are met.
- detect_adaptive_burn_in(chains: ndarray, method: str = 'geweke') → int[source]
Detect burn-in period adaptively.
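Example
A hedged sketch of using the monitor inside a simulation loop. The (n_chains, n_samples) layout of the chains array and the toy normal draws are assumptions for illustration:
import numpy as np
from ergodic_insurance.adaptive_stopping import AdaptiveStoppingMonitor, StoppingCriteria, StoppingRule

criteria = StoppingCriteria(rule=StoppingRule.COMBINED, r_hat_threshold=1.05, min_ess=1000, check_interval=100)
monitor = AdaptiveStoppingMonitor(criteria)
rng = np.random.default_rng(0)

for iteration in range(100, 10_001, 100):
    chains = rng.normal(size=(4, iteration))  # stand-in for real simulation output
    status = monitor.check_convergence(iteration, chains)
    if status.should_stop:
        print(f"Stopping at iteration {iteration}: {status.reason}")
        break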
ergodic_insurance.batch_processor module
Batch processing engine for running multiple simulation scenarios.
This module provides a framework for executing multiple scenarios in parallel or serial, with support for checkpointing, resumption, and result aggregation.
- class ProcessingStatus(*values)[source]
Bases: Enum
Status of scenario processing.
- PENDING = 'pending'
- RUNNING = 'running'
- COMPLETED = 'completed'
- FAILED = 'failed'
- SKIPPED = 'skipped'
- class BatchResult(scenario_id: str, scenario_name: str, status: ProcessingStatus, simulation_results: SimulationResults | None = None, execution_time: float = 0.0, error_message: str | None = None, metadata: Dict[str, Any] = <factory>) → None[source]
Bases: object
Result from a single scenario execution.
- scenario_id
Unique scenario identifier
- scenario_name
Human-readable scenario name
- status
Processing status
- simulation_results
Monte Carlo simulation results
- execution_time
Time taken to execute scenario
- error_message
Error message if failed
- metadata
Additional result metadata
- status: ProcessingStatus
- simulation_results: SimulationResults | None = None
- class AggregatedResults(batch_results: List[BatchResult], summary_statistics: DataFrame, comparison_metrics: Dict[str, DataFrame], sensitivity_analysis: DataFrame | None = None, execution_summary: Dict[str, Any] = <factory>) → None[source]
Bases: object
Aggregated results from batch processing.
- batch_results
Individual scenario results
- summary_statistics
Summary stats across scenarios
- comparison_metrics
Comparative metrics between scenarios
- sensitivity_analysis
Sensitivity analysis results
- execution_summary
Batch execution summary
- batch_results: List[BatchResult]
- summary_statistics: DataFrame
- get_successful_results() → List[BatchResult][source]
Get only successful results.
- Return type: List[BatchResult]
- class CheckpointData(completed_scenarios: Set[str], failed_scenarios: Set[str], batch_results: List[BatchResult], timestamp: datetime, metadata: Dict[str, Any] = <factory>) → None[source]
Bases: object
Checkpoint data for resumable batch processing.
- batch_results: List[BatchResult]
- class BatchProcessor(loss_generator: ManufacturingLossGenerator | None = None, insurance_program: InsuranceProgram | None = None, manufacturer: WidgetManufacturer | None = None, n_workers: int | None = None, checkpoint_dir: Path | None = None, use_parallel: bool = True, progress_bar: bool = True)[source]
Bases: object
Engine for batch processing multiple simulation scenarios.
- batch_results: List[BatchResult]
- process_batch(scenarios: List[ScenarioConfig], resume_from_checkpoint: bool = True, checkpoint_interval: int = 10, max_failures: int | None = None, priority_threshold: int | None = None) → AggregatedResults[source]
Process a batch of scenarios.
- Parameters:
scenarios (List[ScenarioConfig]) – List of scenarios to process
resume_from_checkpoint (bool) – Whether to resume from checkpoint
checkpoint_interval (int) – Save checkpoint every N scenarios
max_failures (Optional[int]) – Maximum allowed failures before stopping
priority_threshold (Optional[int]) – Only process scenarios up to this priority
- Return type: AggregatedResults
- Returns:
Aggregated results from batch processing
- export_results(path: str | Path, export_format: str = 'csv') → None[source]
Export aggregated results to file.
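Example
A minimal sketch of a checkpointed batch run (assumes scenarios is a list of ScenarioConfig objects built elsewhere; paths are placeholders):
from pathlib import Path
from ergodic_insurance.batch_processor import BatchProcessor

processor = BatchProcessor(
    n_workers=4,
    checkpoint_dir=Path("checkpoints"),
    use_parallel=True,
)

aggregated = processor.process_batch(
    scenarios,
    resume_from_checkpoint=True,
    checkpoint_interval=10,  # save a checkpoint every 10 scenarios
)
print(aggregated.summary_statistics)
processor.export_results("batch_results.csv", export_format="csv")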
ergodic_insurance.benchmarking module
Comprehensive benchmarking suite for Monte Carlo simulations.
This module provides tools for benchmarking Monte Carlo engine performance, targeting 100K simulations in under 60 seconds on 4-core CPUs with <4GB memory.
- Key features:
Performance benchmarking at multiple scales (1K, 10K, 100K)
Memory usage tracking and profiling
CPU efficiency monitoring
Cache effectiveness measurement
Automated performance report generation
Comparison of optimization strategies
Example
>>> from ergodic_insurance.benchmarking import BenchmarkSuite, BenchmarkConfig
>>> from ergodic_insurance.monte_carlo import MonteCarloEngine
>>>
>>> suite = BenchmarkSuite()
>>> config = BenchmarkConfig(scales=[1000, 10000, 100000])
>>>
>>> # Run comprehensive benchmarks
>>> results = suite.run_comprehensive_benchmark(engine, config)
>>> print(results.summary())
>>>
>>> # Check if performance targets are met
>>> if results.meets_requirements():
... print("✓ All performance targets achieved!")
Google-style docstrings are used throughout for Sphinx documentation.
- class BenchmarkMetrics(execution_time: float, simulations_per_second: float, memory_peak_mb: float, memory_average_mb: float, cpu_utilization: float = 0.0, cache_hit_rate: float = 0.0, accuracy_score: float = 1.0, convergence_iterations: int = 0) → None[source]
Bases: object
Metrics collected during benchmarking.
- execution_time
Total execution time in seconds
- simulations_per_second
Throughput metric
- memory_peak_mb
Peak memory usage in MB
- memory_average_mb
Average memory usage in MB
- cpu_utilization
Average CPU utilization percentage
- cache_hit_rate
Cache effectiveness percentage
- accuracy_score
Numerical accuracy score
- convergence_iterations
Iterations to convergence
- class BenchmarkResult(scale: int, metrics: BenchmarkMetrics, configuration: Dict[str, Any], timestamp: datetime, system_info: Dict[str, Any] = <factory>, optimizations: List[str] = <factory>) → None[source]
Bases: object
Results from a benchmark run.
- scale
Number of simulations
- metrics
Performance metrics
- configuration
Configuration used
- timestamp
When benchmark was run
- system_info
System information
- optimizations
Optimizations applied
- metrics: BenchmarkMetrics
- class BenchmarkConfig(scales: List[int] = <factory>, n_years: int = 10, n_workers: int = 4, memory_limit_mb: float = 4000.0, target_times: Dict[int, float] = <factory>, repetitions: int = 3, warmup_runs: int = 2, enable_profiling: bool = True) → None[source]
Bases: object
Configuration for benchmarking.
- scales
List of simulation counts to test
- n_years
Years per simulation
- n_workers
Number of parallel workers
- memory_limit_mb
Memory limit for testing
- target_times
Target execution times per scale
- repetitions
Number of repetitions per test
- warmup_runs
Number of warmup runs
- enable_profiling
Enable detailed profiling
- class SystemProfiler[source]
Bases: object
Profile system resources during benchmarking.
- class BenchmarkRunner(profiler: SystemProfiler | None = None)[source]
Bases: object
Run individual benchmarks with monitoring.
- run_single_benchmark(func: Callable, args: Tuple = (), kwargs: Dict | None = None, monitor_interval: float = 0.1) → BenchmarkMetrics[source]
Run a single benchmark with monitoring.
- class BenchmarkSuite[source]
Bases: object
Comprehensive benchmark suite for Monte Carlo simulations.
Provides tools to benchmark performance across different scales and configurations, generating detailed reports.
- results: List[BenchmarkResult]
- benchmark_scale(engine, scale: int, config: BenchmarkConfig, optimizations: List[str] | None = None) → BenchmarkResult[source]
Benchmark at a specific scale.
- Parameters:
engine – Monte Carlo engine to benchmark.
scale (int) – Number of simulations.
config (BenchmarkConfig) – Benchmark configuration.
optimizations (Optional[List[str]]) – List of applied optimizations.
- Return type: BenchmarkResult
- Returns:
BenchmarkResult for this scale.
- run_comprehensive_benchmark(engine, config: BenchmarkConfig | None = None) → ComprehensiveBenchmarkResult[source]
Run comprehensive benchmark suite.
- Parameters:
engine – Monte Carlo engine to benchmark.
config (Optional[BenchmarkConfig]) – Benchmark configuration.
- Return type: ComprehensiveBenchmarkResult
- Returns:
ComprehensiveBenchmarkResult with all results.
- class ComprehensiveBenchmarkResult(results: List[BenchmarkResult], config: BenchmarkConfig, system_info: Dict[str, Any]) → None[source]
Bases: object
Results from comprehensive benchmark suite.
- results
List of individual benchmark results
- config
Configuration used
- system_info
System information
- results: List[BenchmarkResult]
- config: BenchmarkConfig
- meets_requirements() → bool[source]
Check if all requirements are met.
- Return type: bool
- Returns:
True if all performance requirements are satisfied.
- class ConfigurationComparison(results: List[Dict[str, Any]]) → None[source]
Bases: object
Results from configuration comparison.
- run_quick_benchmark(engine, n_simulations: int = 10000) → BenchmarkMetrics[source]
Run a quick benchmark.
- Parameters:
engine – Monte Carlo engine to benchmark.
n_simulations (int) – Number of simulations.
- Return type: BenchmarkMetrics
- Returns:
BenchmarkMetrics from the run.
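For a fast smoke test without the full suite, run_quick_benchmark() returns a single BenchmarkMetrics object. A brief sketch (assumes engine is a configured MonteCarloEngine):
from ergodic_insurance.benchmarking import run_quick_benchmark

metrics = run_quick_benchmark(engine, n_simulations=10_000)
print(f"{metrics.simulations_per_second:,.0f} sims/sec, peak memory {metrics.memory_peak_mb:.0f} MB")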
ergodic_insurance.bootstrap_analysis module
Bootstrap confidence interval analysis for simulation results.
This module provides comprehensive bootstrap analysis capabilities for statistical significance testing and confidence interval calculation. Supports both percentile and BCa (bias-corrected and accelerated) methods with parallel processing for performance optimization.
Example
>>> import numpy as np
>>> from ergodic_insurance.bootstrap_analysis import BootstrapAnalyzer
>>> # Create sample data
>>> data = np.random.normal(100, 15, 1000)
>>> analyzer = BootstrapAnalyzer(n_bootstrap=10000, seed=42)
>>> # Calculate confidence interval for mean
>>> ci = analyzer.confidence_interval(data, np.mean)
>>> print(f"95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
>>> # Parallel bootstrap for faster computation
>>> ci_parallel = analyzer.confidence_interval(
... data, np.mean, method='bca', parallel=True
... )
- class BootstrapResult(statistic: float, confidence_level: float, confidence_interval: Tuple[float, float], bootstrap_distribution: ndarray, method: str, n_bootstrap: int, bias: float | None = None, acceleration: float | None = None, converged: bool = True, metadata: Dict[str, Any] | None = None) → None[source]
Bases: object
Container for bootstrap analysis results.
- class BootstrapAnalyzer(n_bootstrap: int = 10000, confidence_level: float = 0.95, seed: int | None = None, n_workers: int = 4, show_progress: bool = True)[source]
Bases: object
Main class for bootstrap confidence interval analysis.
Provides methods for calculating bootstrap confidence intervals using various methods including percentile and BCa. Supports parallel processing for improved performance with large datasets.
- Parameters:
n_bootstrap (int) – Number of bootstrap iterations (default 10000).
confidence_level (float) – Confidence level for intervals (default 0.95).
seed (Optional[int]) – Random seed for reproducibility (default None).
n_workers (int) – Number of parallel workers (default 4).
show_progress (bool) – Whether to show progress bar (default True).
Example
>>> analyzer = BootstrapAnalyzer(n_bootstrap=5000, confidence_level=0.99)
>>> data = np.random.exponential(2, 1000)
>>> result = analyzer.confidence_interval(data, np.median)
>>> print(result.summary())
- DEFAULT_N_BOOTSTRAP = 10000
- DEFAULT_CONFIDENCE = 0.95
- DEFAULT_N_WORKERS = 4
- bootstrap_sample(data: ndarray, statistic: Callable[[ndarray], float], n_samples: int = 1) → ndarray[source]
Generate bootstrap samples and compute statistics.
- confidence_interval(data: ndarray, statistic: Callable[[ndarray], float], confidence_level: float | None = None, method: str = 'percentile', parallel: bool = False) → BootstrapResult[source]
Calculate bootstrap confidence interval for a statistic.
- Parameters:
data (ndarray) – Input data array.
statistic (Callable[[ndarray], float]) – Function to compute the statistic of interest.
confidence_level (Optional[float]) – Confidence level (uses default if None).
method (str) – 'percentile' or 'bca' (bias-corrected and accelerated).
parallel (bool) – Whether to use parallel processing.
- Return type: BootstrapResult
- Returns:
BootstrapResult containing confidence interval and diagnostics.
- Raises:
ValueError – If method is not 'percentile' or 'bca'.
- compare_statistics(data1: ndarray, data2: ndarray, statistic: Callable[[ndarray], float], comparison: str = 'difference') → BootstrapResult[source]
Compare statistics between two datasets using bootstrap.
- Parameters:
- Return type: BootstrapResult
- Returns:
BootstrapResult for the comparison statistic.
- Raises:
ValueError – If comparison type is not supported.
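A brief sketch of compare_statistics (the ROE samples and the sign convention of the difference are illustrative assumptions):
import numpy as np
from ergodic_insurance.bootstrap_analysis import BootstrapAnalyzer

rng = np.random.default_rng(42)
roe_low_retention = rng.normal(0.10, 0.03, 1000)   # hypothetical ROE samples
roe_high_retention = rng.normal(0.12, 0.05, 1000)

analyzer = BootstrapAnalyzer(n_bootstrap=5000, seed=42)
result = analyzer.compare_statistics(roe_low_retention, roe_high_retention, np.mean, comparison='difference')
lo, hi = result.confidence_interval
print(f"Mean-difference CI: [{lo:.4f}, {hi:.4f}]")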
- bootstrap_confidence_interval(data: ndarray | List[float], statistic: Callable[[ndarray], float] = <function mean>, confidence_level: float = 0.95, n_bootstrap: int = 10000, method: str = 'percentile', seed: int | None = None) → Tuple[float, Tuple[float, float]][source]
Convenience function for simple bootstrap confidence interval calculation.
- Parameters:
data (Union[ndarray, List[float]]) – Input data (array or list).
statistic (Callable[[ndarray], float]) – Function to compute statistic (default: mean).
confidence_level (float) – Confidence level (default: 0.95).
n_bootstrap (int) – Number of bootstrap iterations (default: 10000).
method (str) – 'percentile' or 'bca' (default: 'percentile').
seed (Optional[int]) – Random seed for reproducibility (default: None).
- Return type: Tuple[float, Tuple[float, float]]
- Returns:
Tuple of (original_statistic, (lower_bound, upper_bound)).
Example
>>> data = np.random.normal(100, 15, 1000)
>>> stat, ci = bootstrap_confidence_interval(data, np.median)
>>> print(f"Median: {stat:.2f}, 95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
ergodic_insurance.business_optimizer module
Business outcome optimization algorithms for insurance decisions.
This module implements sophisticated optimization algorithms focused on real business outcomes (ROE, growth rate, survival probability) rather than technical metrics. These algorithms maximize long-term company value through optimal insurance decisions.
Author: Alex Filiakov
Date: 2025-01-25
- class OptimizationDirection(*values)[source]
Bases: Enum
Direction of optimization for objectives.
- MAXIMIZE = 'maximize'
- MINIMIZE = 'minimize'
- class BusinessObjective(name: str, weight: float = 1.0, target_value: float | None = None, optimization_direction: OptimizationDirection = OptimizationDirection.MAXIMIZE, constraint_type: str | None = None, constraint_value: float | None = None) → None[source]
Bases: object
Business optimization objective definition.
- name
Name of the objective (e.g., ‘ROE’, ‘bankruptcy_risk’)
- weight
Weight in multi-objective optimization (0-1)
- target_value
Optional target value for the objective
- optimization_direction
Whether to maximize or minimize
- constraint_type
Optional constraint type (‘>=’, ‘<=’, ‘==’)
- constraint_value
Optional constraint value
- optimization_direction: OptimizationDirection = 'maximize'
- class BusinessConstraints(max_risk_tolerance: float = 0.01, min_roe_threshold: float = 0.1, max_leverage_ratio: float = 2.0, min_liquidity_ratio: float = 1.2, max_premium_budget: float = 0.02, min_coverage_ratio: float = 0.5, regulatory_requirements: Dict[str, float] = <factory>) → None[source]
Bases: object
Business optimization constraints.
- max_risk_tolerance
Maximum acceptable probability of bankruptcy
- min_roe_threshold
Minimum required return on equity
- max_leverage_ratio
Maximum debt-to-equity ratio
- min_liquidity_ratio
Minimum liquidity requirements
- max_premium_budget
Maximum insurance premium as % of revenue
- min_coverage_ratio
Minimum coverage as % of assets
- regulatory_requirements
Additional regulatory constraints
- class OptimalStrategy(coverage_limit: float, deductible: float, premium_rate: float, expected_roe: float, bankruptcy_risk: float, growth_rate: float, capital_efficiency: float, recommendations: List[str] = <factory>) → None[source]
Bases: object
Optimal insurance strategy result.
- coverage_limit
Optimal coverage limit amount
- deductible
Optimal deductible amount
- premium_rate
Optimal premium rate
- expected_roe
Expected ROE with this strategy
- bankruptcy_risk
Probability of bankruptcy
- growth_rate
Expected growth rate
- capital_efficiency
Capital efficiency ratio
- recommendations
List of actionable recommendations
- class BusinessOptimizationResult(optimal_strategy: OptimalStrategy, objective_values: Dict[str, float], constraint_satisfaction: Dict[str, bool], convergence_info: Dict[str, bool | int | float], sensitivity_analysis: Dict[str, float] | None = None) → None[source]
Bases: object
Result of business outcome optimization.
- optimal_strategy
The optimal insurance strategy
- objective_values
Values achieved for each objective
- constraint_satisfaction
Status of constraint satisfaction
- convergence_info
Optimization convergence information
- sensitivity_analysis
Sensitivity to parameter changes
- optimal_strategy: OptimalStrategy
- class BusinessOptimizer(manufacturer: WidgetManufacturer, decision_engine: InsuranceDecisionEngine | None = None, ergodic_analyzer: ErgodicAnalyzer | None = None, loss_distribution: LossDistribution | None = None, optimizer_config: BusinessOptimizerConfig | None = None)[source]
Bases: object
Optimize business outcomes through insurance decisions.
This class implements sophisticated optimization algorithms focused on real business metrics like ROE, growth rate, and survival probability.
- maximize_roe_with_insurance(constraints: BusinessConstraints, time_horizon: int = 10, n_simulations: int = 1000) → OptimalStrategy[source]
Maximize ROE subject to business constraints.
Objective: max(ROE_with_insurance - ROE_baseline)
- Parameters:
constraints (BusinessConstraints) – Business constraints to satisfy
time_horizon (int) – Planning horizon in years
n_simulations (int) – Number of Monte Carlo simulations
- Return type: OptimalStrategy
- Returns:
Optimal insurance strategy maximizing ROE
- minimize_bankruptcy_risk(growth_targets: Dict[str, float], budget_constraint: float, time_horizon: int = 10) → OptimalStrategy[source]
Minimize bankruptcy risk while achieving growth targets.
Objective: min(P(bankruptcy))
- optimize_capital_efficiency(available_capital: float, investment_opportunities: Dict[str, float]) → Dict[str, float][source]
Optimize capital allocation across insurance and investments.
- analyze_time_horizon_impact(strategies: List[Dict[str, Any]], time_horizons: List[int] | None = None) → DataFrame[source]
Analyze strategy performance across different time horizons.
- optimize_business_outcomes(objectives: List[BusinessObjective], constraints: BusinessConstraints, time_horizon: int = 10, method: str = 'weighted_sum') → BusinessOptimizationResult[source]
Multi-objective optimization of business outcomes.
- Parameters:
objectives (List[BusinessObjective]) – List of business objectives to optimize
constraints (BusinessConstraints) – Business constraints to satisfy
time_horizon (int) – Planning horizon in years
method (str) – Optimization method ('weighted_sum', 'epsilon_constraint', 'pareto')
- Return type: BusinessOptimizationResult
- Returns:
Comprehensive optimization result
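Example
A hedged end-to-end sketch (assumes manufacturer is a configured WidgetManufacturer; the constraint values are illustrative):
from ergodic_insurance.business_optimizer import BusinessConstraints, BusinessOptimizer

optimizer = BusinessOptimizer(manufacturer)

constraints = BusinessConstraints(
    max_risk_tolerance=0.01,  # at most 1% bankruptcy probability
    min_roe_threshold=0.10,   # require at least 10% ROE
    max_premium_budget=0.02,  # premiums capped at 2% of revenue
)

strategy = optimizer.maximize_roe_with_insurance(constraints, time_horizon=10, n_simulations=1000)
print(strategy.expected_roe, strategy.bankruptcy_risk)
for rec in strategy.recommendations:
    print("-", rec)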
ergodic_insurance.claim_development module
Claim development patterns for cash flow modeling.
This module provides classes for modeling realistic claim payment patterns, including immediate and long-tail development patterns typical for manufacturing liability claims. It supports IBNR estimation, reserve calculations, and cash flow projections.
- class DevelopmentPatternType(*values)[source]
Bases: Enum
Standard claim development pattern types.
- IMMEDIATE = 'immediate'
- MEDIUM_TAIL_5YR = 'medium_tail_5yr'
- LONG_TAIL_10YR = 'long_tail_10yr'
- VERY_LONG_TAIL_15YR = 'very_long_tail_15yr'
- CUSTOM = 'custom'
- class ClaimDevelopment(pattern_name: str, development_factors: List[float], tail_factor: float = 0.0) → None[source]
Bases: object
Claim development pattern for payment timing.
This class defines how claim payments develop over time, with development factors representing the percentage of total claim amount paid in each year.
- __post_init__()[source]
Validate development pattern.
- Raises:
ValueError – If development factors are invalid or don’t sum to 1.0.
- classmethod create_immediate() → ClaimDevelopment[source]
Create immediate payment pattern (property damage).
- Return type: ClaimDevelopment
- Returns:
ClaimDevelopment with immediate payment pattern.
- classmethod create_medium_tail_5yr() → ClaimDevelopment[source]
Create 5-year workers compensation pattern.
- Return type: ClaimDevelopment
- Returns:
ClaimDevelopment with 5-year workers compensation pattern.
- classmethod create_long_tail_10yr() → ClaimDevelopment[source]
Create 10-year general liability pattern.
- Return type: ClaimDevelopment
- Returns:
ClaimDevelopment with 10-year general liability pattern.
- classmethod create_very_long_tail_15yr() → ClaimDevelopment[source]
Create 15-year product liability pattern.
- Return type: ClaimDevelopment
- Returns:
ClaimDevelopment with 15-year product liability pattern.
- class Claim(claim_id: str, accident_year: int, reported_year: int, initial_estimate: float, claim_type: str = 'general_liability', development_pattern: ClaimDevelopment | None = None, payments_made: Dict[int, float] = <factory>) → None[source]
Bases: object
Individual claim with development tracking.
- development_pattern: ClaimDevelopment | None = None
- __post_init__()[source]
Set default development pattern if not provided.
Uses general liability pattern as default if no pattern is specified.
- class ClaimCohort(accident_year: int, claims: List[Claim] = <factory>) → None[source]
Bases: object
Cohort of claims from the same accident year.
- add_claim(claim: Claim)[source]
Add a claim to the cohort.
- Parameters:
claim (Claim) – Claim to add.
- Raises:
ValueError – If claim is from different accident year.
- get_total_incurred() → float[source]
Get total incurred amount for the cohort.
- Return type: float
- Returns:
Sum of initial estimates for all claims in the cohort.
- class CashFlowProjector(discount_rate: float = 0.03)[source]
Bases: object
Project cash flows based on claim development patterns.
- cohorts: Dict[int, ClaimCohort]
- add_cohort(cohort: ClaimCohort)[source]
Add a claim cohort to the projector.
- Parameters:
cohort (ClaimCohort) – Claim cohort to add.
- project_payments(start_year: int, end_year: int) → Dict[int, float][source]
Project claim payments for a range of years.
- calculate_present_value(payments: Dict[int, float], base_year: int) → float[source]
Calculate present value of future payments.
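Example
A minimal sketch tying the pieces together (claim values and years are illustrative):
from ergodic_insurance.claim_development import CashFlowProjector, Claim, ClaimCohort, ClaimDevelopment

# A $1M general liability claim paid out over 10 years
claim = Claim(
    claim_id="GL-2025-001",
    accident_year=2025,
    reported_year=2025,
    initial_estimate=1_000_000,
    development_pattern=ClaimDevelopment.create_long_tail_10yr(),
)

cohort = ClaimCohort(accident_year=2025)
cohort.add_claim(claim)

projector = CashFlowProjector(discount_rate=0.03)
projector.add_cohort(cohort)

payments = projector.project_payments(start_year=2025, end_year=2035)
pv = projector.calculate_present_value(payments, base_year=2025)
print(f"Present value of future payments: ${pv:,.0f}")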
ergodic_insurance.config module
Configuration management using Pydantic v2 models.
This module provides comprehensive configuration classes for the Ergodic Insurance simulation framework. It uses Pydantic models for validation, type safety, and automatic serialization/deserialization of configuration parameters.
The configuration system is hierarchical, with specialized configs for different aspects of the simulation (manufacturer, insurance, simulation parameters, etc.) that can be composed into a master configuration.
- Key Features:
Type-safe configuration with automatic validation
Hierarchical configuration structure
Environment variable support
JSON/YAML serialization support
Default values with business logic constraints
Cross-field validation for consistency
Examples
Quick start with defaults:
from ergodic_insurance import Config
# All defaults — $10M manufacturer, 50-year horizon
config = Config()
From basic company info:
config = Config.from_company(
initial_assets=50_000_000,
operating_margin=0.12,
industry="manufacturing",
)
Full control:
from ergodic_insurance import Config, ManufacturerConfig
config = Config(
manufacturer=ManufacturerConfig(
initial_assets=10_000_000,
asset_turnover_ratio=0.8,
base_operating_margin=0.08,
tax_rate=0.25,
retention_ratio=0.7,
)
)
Loading from file:
config = Config.from_yaml(Path('config.yaml'))
Note
All monetary values are in nominal dollars unless otherwise specified. Rates and ratios are expressed as decimals (0.1 = 10%).
- Since:
Version 0.1.0
- DEFAULT_RISK_FREE_RATE: float = 0.02
Default risk-free rate (2%) used for Sharpe ratio and risk-adjusted calculations.
- class BusinessOptimizerConfig(base_roe: float = 0.15, protection_benefit_factor: float = 0.05, roe_noise_std: float = 0.1, base_bankruptcy_risk: float = 0.02, max_risk_reduction: float = 0.015, premium_burden_risk_factor: float = 0.5, time_risk_constant: float = 20.0, base_growth_rate: float = 0.1, growth_boost_factor: float = 0.03, premium_drag_factor: float = 0.5, asset_growth_factor: float = 0.8, equity_growth_factor: float = 1.1, risk_transfer_benefit_rate: float = 0.05, risk_reduction_value: float = 0.03, stability_value: float = 0.02, growth_enablement_value: float = 0.03, assumed_volatility: float = 0.2, volatility_reduction_factor: float = 0.05, min_volatility: float = 0.05) → None[source]
Bases: object
Calibration parameters for BusinessOptimizer financial heuristics.
Issue #314 (C1): Consolidates all hardcoded financial multipliers from BusinessOptimizer into a single, documentable configuration object.
These are simplified model parameters used by the optimizer’s heuristic methods (_estimate_roe, _estimate_bankruptcy_risk, _estimate_growth_rate, etc.). They are NOT derived from manufacturer data—they are tuning knobs for the optimizer’s internal scoring functions.
- protection_benefit_factor: float = 0.05
Coverage-to-assets ratio multiplier for protection benefit.
- premium_burden_risk_factor: float = 0.5
Multiplier converting premium burden ratio to risk increase.
- premium_drag_factor: float = 0.5
Multiplier for premium-to-revenue drag on growth.
- class DecisionEngineConfig(base_growth_rate: float = 0.08, volatility_reduction_factor: float = 0.3, max_volatility_reduction: float = 0.15, growth_benefit_factor: float = 0.5) → None[source]
Bases: object
Calibration parameters for InsuranceDecisionEngine heuristics.
Issue #314 (C2): Consolidates hardcoded values from the decision engine’s growth estimation and simulation methods.
- class ManufacturerConfig(**data: Any) None[source]
Bases:
BaseModel
Financial parameters for the widget manufacturer.
This class defines the core financial parameters used to initialize and configure a widget manufacturing company in the simulation. All parameters are validated to ensure realistic business constraints.
- initial_assets
Starting asset value in dollars. Must be positive.
- asset_turnover_ratio
Revenue per dollar of assets. Typically 0.5-2.0 for manufacturing companies.
- base_operating_margin
Core operating margin before insurance costs (EBIT before insurance / Revenue). Typically 5-15% for healthy manufacturers.
- tax_rate
Corporate tax rate. Typically 20-30% depending on jurisdiction.
- retention_ratio
Portion of earnings retained vs distributed as dividends. Higher retention supports faster growth.
- ppe_ratio
Property, Plant & Equipment allocation ratio as fraction of initial assets. Defaults based on operating margin if not specified.
Examples
Conservative manufacturer:
config = ManufacturerConfig(
    initial_assets=5_000_000,
    asset_turnover_ratio=0.6,    # Low turnover
    base_operating_margin=0.05,  # 5% base margin
    tax_rate=0.25,
    retention_ratio=0.9          # High retention
)
Aggressive growth manufacturer:
config = ManufacturerConfig(
    initial_assets=20_000_000,
    asset_turnover_ratio=1.2,    # High turnover
    base_operating_margin=0.12,  # 12% base margin
    tax_rate=0.25,
    retention_ratio=1.0          # Full retention
)
Custom PP&E allocation:
config = ManufacturerConfig(
    initial_assets=15_000_000,
    asset_turnover_ratio=0.9,
    base_operating_margin=0.10,
    tax_rate=0.25,
    retention_ratio=0.8,
    ppe_ratio=0.6  # Override default PP&E allocation
)
Note
The asset turnover ratio and base operating margin together determine the core return on assets (ROA) before insurance costs and taxes. Actual operating margins will be lower when insurance costs are included.
- expense_ratios: ExpenseRatioConfig | None
- classmethod validate_margin(v: float) float[source]
Warn if base operating margin is unusually high or negative.
- Parameters:
v (float) – Base operating margin value to validate (as decimal, e.g., 0.1 for 10%).
- Returns:
The validated base operating margin value.
- Return type:
float
Note
Margins above 30% are flagged as unusual for manufacturing. Negative margins indicate unprofitable operations before insurance.
- classmethod from_industry_config(industry_config, **kwargs)[source]
Create ManufacturerConfig from an IndustryConfig instance.
- Parameters:
industry_config – IndustryConfig instance with industry-specific parameters
**kwargs – Additional parameters to override or supplement
- Returns:
ManufacturerConfig instance with parameters derived from industry config
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class WorkingCapitalConfig(**data: Any) None[source]
Bases:
BaseModel
Working capital management parameters.
This class configures how working capital requirements are calculated as a percentage of sales revenue. Working capital represents the funds tied up in day-to-day operations (inventory, receivables, etc.).
- percent_of_sales
Working capital as percentage of sales. Typically 15-25% for manufacturers depending on payment terms and inventory turnover.
Examples
Efficient working capital:
wc_config = WorkingCapitalConfig(
    percent_of_sales=0.15  # 15% - lean operations
)
Conservative working capital:
wc_config = WorkingCapitalConfig(
    percent_of_sales=0.30  # 30% - higher inventory/receivables
)
Note
Higher working capital requirements reduce available cash for growth investments but provide operational cushion.
- classmethod validate_working_capital(v: float) float[source]
Validate working capital percentage.
- Parameters:
v (float) – Working capital percentage to validate (as decimal).
- Returns:
The validated working capital percentage.
- Return type:
float
- Raises:
ValueError – If working capital percentage exceeds 50% of sales, which would indicate severe operational inefficiency.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class GrowthConfig(**data: Any) None[source]
Bases:
BaseModel
Growth model parameters.
Configures whether the simulation uses deterministic or stochastic growth models, along with the associated parameters. Stochastic models add realistic business volatility to growth trajectories.
- type
Growth model type - ‘deterministic’ for fixed growth or ‘stochastic’ for random variation.
- annual_growth_rate
Base annual growth rate (e.g., 0.05 for 5%). Can be negative for declining businesses.
- volatility
Growth rate volatility (standard deviation) for stochastic models. Zero for deterministic models.
Examples
Stable growth:
growth = GrowthConfig(
    type='deterministic',
    annual_growth_rate=0.03  # 3% steady growth
)
Volatile growth:
growth = GrowthConfig(
    type='stochastic',
    annual_growth_rate=0.05,  # 5% expected
    volatility=0.15           # 15% std dev
)
Note
Stochastic growth uses geometric Brownian motion to model realistic business volatility patterns.
- validate_stochastic_params()[source]
Ensure volatility is set for stochastic models.
- Returns:
The validated config object.
- Return type:
GrowthConfig
- Raises:
ValueError – If stochastic model is selected but volatility is zero, which would make it effectively deterministic.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class DebtConfig(**data: Any) None[source]
Bases:
BaseModel
Debt financing parameters for insurance claims.
Configures debt financing options and constraints for handling large insurance claims and maintaining liquidity. Companies may need to borrow to cover deductibles or claims exceeding insurance limits.
- interest_rate
Annual interest rate on debt (e.g., 0.05 for 5%).
- max_leverage_ratio
Maximum debt-to-equity ratio allowed. Higher ratios increase financial risk.
- minimum_cash_balance
Minimum cash balance to maintain for operations.
Examples
Conservative debt policy:
debt = DebtConfig(
    interest_rate=0.04,      # 4% borrowing cost
    max_leverage_ratio=1.0,  # Max 1:1 debt/equity
    minimum_cash_balance=1_000_000
)
Aggressive leverage:
debt = DebtConfig(
    interest_rate=0.06,      # Higher rate for risk
    max_leverage_ratio=3.0,  # 3:1 leverage allowed
    minimum_cash_balance=500_000
)
Note
Higher leverage increases return on equity but also increases bankruptcy risk during adverse claim events.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class SimulationConfig(**data: Any) None[source]
Bases:
BaseModel
Simulation execution parameters.
Controls how the simulation runs, including time resolution, horizon, and randomization settings. These parameters affect computational performance and result granularity.
- time_resolution
Simulation time step - ‘annual’ or ‘monthly’. Monthly provides more granularity but increases computation.
- time_horizon_years
Simulation horizon in years. Longer horizons reveal ergodic properties but require more computation.
- max_horizon_years
Maximum supported horizon to prevent excessive memory usage.
- random_seed
Random seed for reproducibility. None for random.
- fiscal_year_end
Month of fiscal year end (1-12). Default is 12 (December) for calendar year alignment. Set to 6 for June, 3 for March, etc. to match different fiscal calendars.
Examples
Quick test simulation:
sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=10,
    random_seed=42  # Reproducible
)
Long-term ergodic analysis:
sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=500,
    max_horizon_years=1000,
    random_seed=None  # Random each run
)
Non-calendar fiscal year:
sim = SimulationConfig(
    time_resolution='annual',
    time_horizon_years=50,
    fiscal_year_end=6  # June fiscal year end
)
Note
For ergodic analysis, horizons of 100+ years are recommended to observe long-term time averages.
- validate_horizons()[source]
Ensure time horizon doesn’t exceed maximum.
- Returns:
The validated config object.
- Return type:
SimulationConfig
- Raises:
ValueError – If time horizon exceeds maximum allowed value, preventing potential memory issues.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class OutputConfig(**data: Any) None[source]
Bases:
BaseModel
Output and results configuration.
Controls where and how simulation results are saved, including file formats and checkpoint frequencies.
- property output_path: Path
Get output directory as Path object.
- Returns:
Path object for the output directory.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class LoggingConfig(**data: Any) None[source]
Bases:
BaseModel
Logging configuration.
Controls logging behavior including level, output destinations, and message formatting.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class Config(**data: Any) None[source]
Bases:
BaseModel
Complete configuration for the Ergodic Insurance simulation.
This is the main configuration class that combines all sub-configurations and provides methods for loading, saving, and manipulating configurations.
All sub-configs have sensible defaults, so Config() with no arguments creates a valid configuration for a $10M widget manufacturer.
Examples
Minimal usage:
config = Config()
Override specific parameters:
config = Config(
    manufacturer=ManufacturerConfig(initial_assets=20_000_000)
)
From basic company info:
config = Config.from_company(initial_assets=50_000_000, operating_margin=0.12)
- manufacturer: ManufacturerConfig
- working_capital: WorkingCapitalConfig
- growth: GrowthConfig
- debt: DebtConfig
- simulation: SimulationConfig
- output: OutputConfig
- logging: LoggingConfig
- classmethod from_company(initial_assets: float = 10000000, operating_margin: float = 0.08, industry: str = 'manufacturing', tax_rate: float = 0.25, growth_rate: float = 0.05, time_horizon_years: int = 50, **kwargs) Config[source]
Create a Config from basic company information.
This factory derives reasonable sub-config defaults from a small number of intuitive business parameters, so actuaries and risk managers can get started quickly without understanding every sub-config class.
- Parameters:
initial_assets (float) – Starting asset value in dollars.
operating_margin (float) – Base operating margin (e.g. 0.08 for 8%).
industry (str) – Industry type for deriving defaults. Supported values: “manufacturing”, “service”, “retail”.
tax_rate (float) – Corporate tax rate.
growth_rate (float) – Annual growth rate.
time_horizon_years (int) – Simulation horizon in years.
**kwargs – Additional overrides passed to sub-configs.
- Return type:
Config
- Returns:
Config object with parameters derived from company info.
Examples
Minimal:
config = Config.from_company(initial_assets=50_000_000)
With industry defaults:
config = Config.from_company(
    initial_assets=25_000_000,
    operating_margin=0.15,
    industry="service",
)
- classmethod from_yaml(path: Path) Config[source]
Load configuration from YAML file.
- Parameters:
path (Path) – Path to YAML configuration file.
- Return type:
Config
- Returns:
Config object with validated parameters.
- Raises:
FileNotFoundError – If config file doesn’t exist.
ValidationError – If configuration is invalid.
- classmethod from_dict(data: dict, base_config: Config | None = None) Config[source]
Create config from dictionary, optionally overriding base config.
- override(**kwargs) Config[source]
Create a new config with overridden parameters.
- Parameters:
**kwargs – Parameters to override, using double-underscore dot notation, e.g. manufacturer__operating_margin=0.1.
- Return type:
Config
- Returns:
New Config object with overrides applied.
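Example
A usage sketch of dot-notation overrides; the first path mirrors the parameter description above, and the second is assumed to follow the same pattern:

from ergodic_insurance import Config

base = Config()
# Double underscores traverse into nested sub-configs
tuned = base.override(
    manufacturer__operating_margin=0.1,   # path from the parameter description
    simulation__time_horizon_years=100,   # assumed analogous path
)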
- setup_logging() None[source]
Configure logging based on settings.
Sets up logging handlers for console and/or file output based on the logging configuration.
- Return type:
None
- validate_paths() None[source]
Create output directories if they don’t exist.
Ensures that the configured output directory exists, creating it if necessary.
- Return type:
None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class PricingScenario(**data: Any) None[source]
Bases:
BaseModel
Individual market pricing scenario configuration.
Represents a specific market condition (soft/normal/hard) with associated pricing parameters and market characteristics.
- validate_rate_ordering() PricingScenario[source]
Ensure premium rates follow expected ordering.
Primary-layer rates should be higher than excess rates, and the first excess layer’s rate should be higher than those of higher excess layers.
- Return type:
PricingScenario
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class TransitionProbabilities(**data: Any) None[source]
Bases:
BaseModel
Market state transition probabilities.
- validate_probabilities() TransitionProbabilities[source]
Ensure transition probabilities sum to 1.0 for each state.
- Return type:
TransitionProbabilities
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class MarketCycles(**data: Any) None[source]
Bases:
BaseModel
Market cycle configuration and dynamics.
- transition_probabilities: TransitionProbabilities
- validate_cycle_duration() MarketCycles[source]
Validate that cycle durations are reasonable.
- Return type:
MarketCycles
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class PricingScenarioConfig(**data: Any) None[source]
Bases:
BaseModel
Complete pricing scenario configuration.
Contains all market scenarios and cycle dynamics for insurance pricing sensitivity analysis.
- scenarios: Dict[str, PricingScenario]
- market_cycles: MarketCycles
- get_scenario(scenario_name: str) PricingScenario[source]
Get a specific pricing scenario by name.
- get_rate_multiplier(from_scenario: str, to_scenario: str) float[source]
Calculate rate change multiplier between scenarios.
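Example
A usage sketch; the scenario names “soft” and “hard” are assumed to exist in the loaded configuration:

# pricing is a previously loaded PricingScenarioConfig
hard = pricing.get_scenario("hard")
multiplier = pricing.get_rate_multiplier("soft", "hard")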
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class ProfileMetadata(**data: Any) None[source]
Bases:
BaseModel
Metadata for configuration profiles.
- classmethod validate_name(v: str) str[source]
Ensure profile name is valid.
- Parameters:
v (str) – Profile name to validate.
- Return type:
str
- Returns:
Validated profile name.
- Raises:
ValueError – If name contains invalid characters.
- classmethod validate_version(v: str) str[source]
Validate semantic version string.
- Parameters:
v (str) – Version string to validate.
- Return type:
str
- Returns:
Validated version string.
- Raises:
ValueError – If version format is invalid.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class InsuranceLayerConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for a single insurance layer.
- validate_layer_structure()[source]
Ensure layer structure is valid.
- Returns:
Validated layer config.
- Raises:
ValueError – If layer structure is invalid.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class InsuranceConfig(**data: Any) None[source]
Bases:
BaseModel
Enhanced insurance configuration.
- layers: List[InsuranceLayerConfig]
- validate_layers()[source]
Ensure layers don’t overlap and are properly ordered.
- Returns:
Validated insurance config.
- Raises:
ValueError – If layers overlap or are misordered.
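The ordering rule can be illustrated with plain attachment/limit pairs; a minimal sketch with hypothetical values, not the package’s layer API:

# Each tuple is (attachment_point, limit); the next attachment must sit
# at or above the previous layer's exhaustion point
layers = [(0, 5_000_000), (5_000_000, 20_000_000), (25_000_000, 25_000_000)]
for (a1, l1), (a2, _) in zip(layers, layers[1:]):
    assert a2 >= a1 + l1, "layers overlap or are misordered"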
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class LossDistributionConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for loss distributions.
- classmethod validate_frequency_dist(v: str) str[source]
Validate frequency distribution type.
- Parameters:
v (str) – Distribution type.
- Return type:
str
- Returns:
Validated distribution type.
- Raises:
ValueError – If distribution type is invalid.
- classmethod validate_severity_dist(v: str) str[source]
Validate severity distribution type.
- Parameters:
v (str) – Distribution type.
- Return type:
str
- Returns:
Validated distribution type.
- Raises:
ValueError – If distribution type is invalid.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class ModuleConfig(**data: Any) None[source]
Bases:
BaseModel
Base class for configuration modules.
- model_config: ClassVar[ConfigDict] = {'extra': 'allow'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class PresetConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for a preset.
- classmethod validate_preset_type(v: str) str[source]
Validate preset type.
- Parameters:
v (str) – Preset type.
- Return type:
str
- Returns:
Validated preset type.
- Raises:
ValueError – If preset type is invalid.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class WorkingCapitalRatiosConfig(**data: Any) None[source]
Bases:
BaseModel
Enhanced working capital configuration with detailed component ratios.
This extends the basic WorkingCapitalConfig to provide detailed control over individual working capital components using standard financial ratios.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class ExpenseRatioConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for expense categorization and allocation.
Defines how revenue translates to expenses with proper GAAP categorization between COGS and operating expenses (SG&A).
Issue #255: COGS and SG&A breakdown ratios are now configurable to allow the Manufacturer to calculate these values explicitly, rather than having the Reporting layer estimate them with hardcoded ratios.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class DepreciationConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for depreciation and amortization tracking.
Defines how fixed assets depreciate and prepaid expenses amortize over time.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class ExcelReportConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for Excel report generation.
- classmethod validate_engine(v: str) str[source]
Validate Excel engine selection.
- Parameters:
v (str) – Engine name to validate.
- Return type:
str
- Returns:
Validated engine name.
- Raises:
ValueError – If engine is not valid.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class IndustryConfig(industry_type: str = 'manufacturing', days_sales_outstanding: float = 45, days_inventory_outstanding: float = 60, days_payables_outstanding: float = 30, gross_margin: float = 0.35, operating_expense_ratio: float = 0.25, current_asset_ratio: float = 0.4, ppe_ratio: float = 0.5, intangible_ratio: float = 0.1, ppe_useful_life: int = 10, depreciation_method: str = 'straight_line') None[source]
Bases:
object
Base configuration for different industry types.
This class defines industry-specific financial parameters that determine how businesses operate, including working capital needs, margin structures, asset composition, and depreciation policies.
- industry_type
Name of the industry (e.g., ‘manufacturing’, ‘services’)
- Working capital ratios
- days_sales_outstanding
Average collection period for receivables (days)
- days_inventory_outstanding
Average inventory holding period (days)
- days_payables_outstanding
Average payment period to suppliers (days)
- Margin structure
- gross_margin
Gross profit as percentage of revenue
- operating_expense_ratio
Operating expenses as percentage of revenue
- Asset composition
- current_asset_ratio
Current assets as fraction of total assets
- ppe_ratio
Property, Plant & Equipment as fraction of total assets
- intangible_ratio
Intangible assets as fraction of total assets
- Depreciation
- ppe_useful_life
Average useful life of PP&E in years
- depreciation_method
Method for calculating depreciation
- class ManufacturingConfig(**kwargs: Any) None[source]
Bases:
IndustryConfig
Configuration for manufacturing companies.
Manufacturing businesses typically have:
- Significant inventory holdings
- Moderate to high PP&E requirements
- Working capital needs for raw materials and WIP
- Gross margins of 25-40%
- class ServiceConfig(**kwargs: Any) None[source]
Bases:
IndustryConfig
Configuration for service companies.
Service businesses typically have:
- Minimal or no inventory
- Lower PP&E requirements
- Faster cash conversion cycles
- Higher gross margins but also higher operating expenses
- class RetailConfig(**kwargs: Any) None[source]
Bases:
IndustryConfig
Configuration for retail companies.
Retail businesses typically have:
- High inventory turnover
- Moderate PP&E (stores, fixtures)
- Fast cash collection (often immediate)
- Lower gross margins but efficient operations
- class ConfigV2(**data: Any) None[source]
Bases:
BaseModel
Enhanced unified configuration model for the 3-tier system.
- profile: ProfileMetadata
- manufacturer: ManufacturerConfig
- working_capital: WorkingCapitalConfig
- growth: GrowthConfig
- debt: DebtConfig
- simulation: SimulationConfig
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- output: OutputConfig
- logging: LoggingConfig
- insurance: InsuranceConfig | None
- losses: LossDistributionConfig | None
- excel_reporting: ExcelReportConfig | None
- working_capital_ratios: WorkingCapitalRatiosConfig | None
- expense_ratios: ExpenseRatioConfig | None
- depreciation: DepreciationConfig | None
- industry_config: IndustryConfig | None
- custom_modules: Dict[str, ModuleConfig]
- classmethod from_profile(profile_path: Path) ConfigV2[source]
Load configuration from a profile file.
- Parameters:
profile_path (Path) – Path to the profile YAML file.
- Return type:
ConfigV2
- Returns:
Loaded and validated ConfigV2 instance.
- Raises:
FileNotFoundError – If profile file doesn’t exist.
ValidationError – If configuration is invalid.
- classmethod with_inheritance(profile_path: Path, config_dir: Path) ConfigV2[source]
Load configuration with profile inheritance.
- apply_preset(preset_name: str, preset_data: Dict[str, Any]) None[source]
Apply a preset to the configuration.
- class PresetLibrary(**data: Any) None[source]
Bases:
BaseModel
Collection of presets for a specific type.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- presets: Dict[str, PresetConfig]
- classmethod from_yaml(path: Path) PresetLibrary[source]
Load preset library from YAML file.
- Parameters:
path (Path) – Path to preset library YAML file.
- Return type:
PresetLibrary
- Returns:
Loaded PresetLibrary instance.
ergodic_insurance.config_compat module
Backward compatibility layer for the legacy configuration system.
This module provides adapters and shims to ensure existing code continues to work while transitioning to the new 3-tier configuration system.
- class LegacyConfigAdapter[source]
Bases:
object
Adapter to support old ConfigLoader interface using new ConfigManager.
- load(config_name: str = 'baseline', override_params: Dict[str, Any] | None = None, **kwargs) Config[source]
Load configuration using legacy interface.
- load_config(config_name: str = 'baseline', override_params: Dict[str, Any] | None = None, **kwargs) Config[source]
Legacy function interface for loading configurations.
- migrate_config_usage(file_path: Path) None[source]
Helper to migrate old config usage in a Python file.
- class ConfigTranslator[source]
Bases:
object
Utilities for translating between old and new configuration formats.
- static legacy_to_v2(legacy_config: Config) Dict[str, Any][source]
Convert legacy Config to ConfigV2 format.
- static v2_to_legacy(config_v2: ConfigV2) Dict[str, Any][source]
Convert ConfigV2 to legacy Config format.
ergodic_insurance.config_loader module
Configuration loader with validation and override support.
This module provides utilities for loading, validating, and managing configuration files, with support for caching, overrides, and scenario-based configurations.
NOTE: This module now uses the new ConfigManager through the compatibility layer. It maintains the same interface for backward compatibility.
- class ConfigLoader(config_dir: Path | None = None)[source]
Bases:
object
Handles loading and managing configuration.
A comprehensive configuration management system that supports YAML file loading, validation, caching, and runtime overrides.
NOTE: This class now delegates to LegacyConfigAdapter for backward compatibility while using the new ConfigManager internally.
- DEFAULT_CONFIG_DIR = PosixPath('/home/runner/work/Ergodic-Insurance-Limits/Ergodic-Insurance-Limits/ergodic_insurance/data/parameters')
- DEFAULT_CONFIG_FILE = 'baseline.yaml'
- load(config_name: str = 'baseline', overrides: Dict[str, Any] | None = None, **kwargs: Any) Config[source]
Load configuration with optional overrides.
- Parameters:
- Return type:
Config
- Returns:
Loaded and validated configuration.
- Raises:
FileNotFoundError – If config file doesn’t exist.
ValidationError – If configuration is invalid.
- load_scenario(scenario: str, overrides: Dict[str, Any] | None = None, **kwargs: Any) Config[source]
Load a predefined scenario configuration.
- Parameters:
- Return type:
Config
- Returns:
Loaded and validated configuration.
- Raises:
ValueError – If scenario is not recognized.
- compare_configs(config1: str | Config, config2: str | Config) Dict[str, Any][source]
Compare two configurations and return differences.
- load_pricing_scenarios(scenario_file: str = 'insurance_pricing_scenarios') PricingScenarioConfig[source]
Load pricing scenario configuration.
- Parameters:
scenario_file (str) – Name of scenario file (without .yaml extension) or full path to scenario file.
- Return type:
PricingScenarioConfig
- Returns:
Loaded and validated pricing scenario configuration.
- Raises:
FileNotFoundError – If scenario file not found.
ValidationError – If scenario data is invalid.
- switch_pricing_scenario(config: Config, scenario_name: str) Config[source]
Switch to a different pricing scenario.
Updates the configuration’s insurance parameters to use rates from the specified pricing scenario.
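Example
A usage sketch; the scenario name “hard” is assumed to be defined in the pricing scenario file:

loader = ConfigLoader()
config = loader.load("baseline")
hard_config = loader.switch_pricing_scenario(config, "hard")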
ergodic_insurance.config_manager module
Configuration manager for the new 3-tier configuration system.
This module provides the main interface for loading and managing configurations using profiles, modules, and presets. It implements a modern configuration architecture that supports inheritance, composition, and runtime overrides.
- The configuration system is organized into three tiers:
Profiles: Complete configuration sets (default, conservative, aggressive)
Modules: Reusable components (insurance, losses, stochastic, business)
Presets: Quick-apply templates (market conditions, layer structures)
Example
Basic usage of ConfigManager:
from ergodic_insurance.config_manager import ConfigManager
# Initialize manager
manager = ConfigManager()
# Load a profile
config = manager.load_profile("default")
# Load with overrides
config = manager.load_profile(
"conservative",
manufacturer={"base_operating_margin": 0.12},
growth={"annual_growth_rate": 0.08}
)
# Apply presets
config = manager.load_profile(
"default",
presets=["hard_market", "high_volatility"]
)
Note
This module replaces the legacy ConfigLoader and provides full backward compatibility through the config_compat module.
- class ConfigManager(config_dir: Path | None = None)[source]
Bases:
object
Manages configuration loading with profiles, modules, and presets.
This class provides a comprehensive configuration management system that supports profile inheritance, module composition, preset application, and runtime parameter overrides. It includes caching for performance and validation for correctness.
- config_dir
Root configuration directory path
- profiles_dir
Directory containing profile configurations
- modules_dir
Directory containing module configurations
- presets_dir
Directory containing preset libraries
- _cache
Internal cache for loaded configurations
- _preset_libraries
Cached preset library definitions
Example
Loading configurations with various options:
manager = ConfigManager()

# Simple profile load
config = manager.load_profile("default")

# With module selection
config = manager.load_profile(
    "default",
    modules=["insurance", "stochastic"]
)

# With inheritance chain
config = manager.load_profile("custom/client_abc")
- load_profile(profile_name: str = 'default', use_cache: bool = True, **overrides) ConfigV2[source]
Load a configuration profile with optional overrides.
This method loads a configuration profile, applies any inheritance chain, includes specified modules, applies presets, and finally applies runtime overrides. The result is cached for performance.
- Parameters:
profile_name (str) – Name of the profile to load. Can be a simple name (e.g., “default”) or a path to custom profiles (e.g., “custom/client_abc”).
use_cache (bool) – Whether to use cached configurations. Set to False when configuration files might have changed during runtime.
**overrides – Runtime overrides organized by section. Supports: modules (list of module names to include), presets (list of preset names to apply), and any configuration section with nested parameters.
- Returns:
Fully loaded, validated, and merged configuration instance.
- Return type:
ConfigV2
- Raises:
FileNotFoundError – If the specified profile doesn’t exist.
ValueError – If configuration validation fails.
yaml.YAMLError – If YAML parsing fails.
Example
Various ways to load profiles:
# Basic load
config = manager.load_profile("default")

# With overrides
config = manager.load_profile(
    "conservative",
    manufacturer={"base_operating_margin": 0.12},
    simulation={"time_horizon_years": 50}
)

# With presets and modules
config = manager.load_profile(
    "default",
    modules=["insurance", "stochastic"],
    presets=["hard_market"]
)
- with_preset(config: ConfigV2, preset_type: str, preset_name: str) ConfigV2[source]
Create a new configuration with a preset applied.
- with_overrides(config: ConfigV2, **overrides) ConfigV2[source]
Create a new configuration with runtime overrides.
- validate(config: ConfigV2) List[str][source]
Validate a configuration for completeness and consistency.
- get_profile_metadata(profile_name: str) Dict[str, Any][source]
Get metadata for a profile without loading the full configuration.
ergodic_insurance.config_migrator module
Configuration migration tools for converting legacy YAML files to new 3-tier system.
This module provides utilities to migrate from the old 12-file configuration system to the new profiles/modules/presets architecture.
- class ConfigMigrator[source]
Bases:
object
Handles migration from legacy configuration to new 3-tier system.
- extract_modules() None[source]
Extract reusable modules from legacy configuration files.
- Return type:
None
- validate_migration() bool[source]
Validate that all configurations were successfully migrated.
- Return type:
bool
- Returns:
True if validation passes, False otherwise.
ergodic_insurance.convergence module
Convergence diagnostics for Monte Carlo simulations.
This module provides tools for assessing convergence of Monte Carlo simulations including Gelman-Rubin R-hat, effective sample size, and Monte Carlo standard error.
- class ConvergenceStats(r_hat: float, ess: float, mcse: float, converged: bool, n_iterations: int, autocorrelation: float) None[source]
Bases:
object
Container for convergence statistics.
- class ConvergenceDiagnostics(r_hat_threshold: float = 1.1, min_ess: int = 1000, relative_mcse_threshold: float = 0.05)[source]
Bases:
object
Convergence diagnostics for Monte Carlo simulations.
Provides methods for assessing convergence using multiple chains and calculating effective sample sizes.
- calculate_ess(chain: ndarray, max_lag: int | None = None) float[source]
Calculate effective sample size using autocorrelation.
Uses the formula: ESS = N / (1 + 2 * sum(autocorrelations)) where the sum is truncated at the first negative autocorrelation.
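The truncation rule can be written directly. A minimal standalone sketch of the same estimator (not the class method itself):

import numpy as np

def ess_sketch(chain: np.ndarray) -> float:
    """ESS = N / (1 + 2 * sum(rho_k)), truncated at the first negative rho_k."""
    n = len(chain)
    x = chain - chain.mean()
    autocov = np.correlate(x, x, mode="full")[n - 1:] / n  # lags 0..n-1
    rho = autocov / autocov[0]
    total = 0.0
    for r in rho[1:]:
        if r < 0:  # truncate at first negative autocorrelation
            break
        total += r
    return n / (1 + 2 * total)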
- calculate_batch_ess(chains: ndarray, method: str = 'mean') float | ndarray[source]
Calculate ESS for multiple chains or metrics.
- calculate_ess_per_second(chain: ndarray, computation_time: float) float[source]
Calculate ESS per second of computation.
Useful for comparing efficiency of different sampling methods.
- calculate_mcse(chain: ndarray, ess: float | None = None) float[source]
Calculate Monte Carlo standard error.
- check_convergence(chains: ndarray | List[ndarray], metric_names: List[str] | None = None) Dict[str, ConvergenceStats][source]
Check convergence for multiple chains and metrics.
- geweke_test(chain: ndarray, first_fraction: float = 0.1, last_fraction: float = 0.5) Tuple[float, float][source]
Perform Geweke convergence test.
Compares means of first and last portions of chain.
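A simplified sketch of the test statistic; the production method likely uses spectral variance estimates, so plain sample variances are used here only for brevity:

import numpy as np

def geweke_z_sketch(chain: np.ndarray, first_fraction: float = 0.1,
                    last_fraction: float = 0.5) -> float:
    n = len(chain)
    a = chain[: int(first_fraction * n)]  # early segment
    b = chain[-int(last_fraction * n):]   # late segment
    # z-score for the difference in segment means
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    )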
ergodic_insurance.convergence_advanced module
Advanced convergence diagnostics for Monte Carlo simulations.
This module extends basic convergence diagnostics with advanced features including autocorrelation analysis, spectral density estimation, and sophisticated ESS calculations.
- class SpectralDiagnostics(spectral_density: ndarray, frequencies: ndarray, integrated_autocorr_time: float, effective_sample_size: float) None[source]
Bases:
object
Container for spectral analysis results.
- class AutocorrelationAnalysis(acf_values: ndarray, lags: ndarray, integrated_time: float, initial_monotone_sequence: int, initial_positive_sequence: int) None[source]
Bases:
object
Container for autocorrelation analysis results.
- class AdvancedConvergenceDiagnostics(fft_size: int | None = None)[source]
Bases:
object
Advanced convergence diagnostics for Monte Carlo simulations.
Provides sophisticated methods for assessing convergence including spectral density estimation, multiple ESS calculation methods, and advanced autocorrelation analysis.
- calculate_autocorrelation_full(chain: ndarray, max_lag: int | None = None, method: str = 'fft') AutocorrelationAnalysis[source]
Calculate comprehensive autocorrelation analysis.
- Parameters:
- Return type:
AutocorrelationAnalysis
- Returns:
AutocorrelationAnalysis object with detailed results
- calculate_spectral_density(chain: ndarray, method: str = 'welch', nperseg: int | None = None) SpectralDiagnostics[source]
Calculate spectral density and related diagnostics.
- Parameters:
- Return type:
SpectralDiagnostics
- Returns:
SpectralDiagnostics object with spectral analysis results
- calculate_ess_batch_means(chain: ndarray, batch_size: int | None = None, n_batches: int | None = None) float[source]
Calculate ESS using batch means method.
- calculate_ess_overlapping_batch(chain: ndarray, batch_size: int | None = None) float[source]
Calculate ESS using overlapping batch means (more efficient).
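The batch-means idea: the long-run variance is estimated from the variance of batch means, and ESS is the sample size deflated by the ratio of long-run to marginal variance. A minimal sketch under that assumption (the batch count is illustrative, not the method's default):

import numpy as np

def ess_batch_means_sketch(chain: np.ndarray, n_batches: int = 20) -> float:
    batch_size = len(chain) // n_batches
    trimmed = chain[: batch_size * n_batches]
    batch_means = trimmed.reshape(n_batches, batch_size).mean(axis=1)
    long_run_var = batch_size * batch_means.var(ddof=1)  # sigma^2 estimate
    return len(trimmed) * trimmed.var(ddof=1) / long_run_var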
- heidelberger_welch_advanced(chain: ndarray, alpha: float = 0.05, eps: float = 0.1, pvalue_threshold: float = 0.05) Dict[str, bool | int | float][source]
Advanced Heidelberger-Welch stationarity test.
- Parameters:
- Return type:
Dict[str, bool | int | float]
- Returns:
Dictionary with detailed test results
ergodic_insurance.convergence_plots module
Real-time convergence visualization for Monte Carlo simulations.
This module provides real-time plotting capabilities for monitoring convergence during long-running simulations with minimal computational overhead.
- class RealTimeConvergencePlotter(n_parameters: int = 1, n_chains: int = 1, buffer_size: int = 1000, update_interval: int = 100, figsize: Tuple[float, float] = (12, 8))[source]
Bases:
objectReal-time convergence plotting with minimal overhead.
Provides animated visualization of convergence diagnostics during Monte Carlo simulations with efficient updating mechanisms.
- setup_figure(parameter_names: List[str] | None = None, show_diagnostics: bool = True) Figure[source]
Setup the figure with subplots for real-time monitoring.
- update_data(iteration: int, chains_data: ndarray, diagnostics: Dict[str, List[float]] | None = None)[source]
Update data buffers with new samples.
- plot_static_convergence(chains: ndarray, burn_in: int | None = None, thin: int = 1) Figure[source]
Create static convergence plots for completed chains.
- plot_ess_evolution(ess_values: List[float] | ndarray, iterations: ndarray | None = None, target_ess: float = 1000) Figure[source]
Plot evolution of effective sample size.
- plot_autocorrelation_surface(chains: ndarray, max_lag: int = 50, param_idx: int = 0) Figure[source]
Create 3D surface plot of autocorrelation over time.
ergodic_insurance.decimal_utils module
Decimal utilities for financial calculations.
This module provides utilities for precise financial calculations using Python’s decimal.Decimal type. Using Decimal instead of float prevents accumulation errors in iterative simulations and ensures accounting identities hold exactly.
Example
Convert a float to decimal for financial use:
from ergodic_insurance.decimal_utils import to_decimal, ZERO
amount = to_decimal(1234.56)
if amount != ZERO:
    print(f"Amount: {amount}")
- to_decimal(value: float | int | str | Decimal | None) Decimal[source]
Convert a numeric value to Decimal with proper handling.
Converts floats, ints, strings, or existing Decimals to a standardized Decimal value. Floats are converted via string representation to avoid binary floating point artifacts.
- Parameters:
value (Union[float, int, str, Decimal, None]) – Numeric value to convert. None is converted to ZERO.
- Return type:
Decimal
- Returns:
Decimal representation of the value.
Example
>>> to_decimal(1234.56)
Decimal('1234.56')
>>> to_decimal(None)
Decimal('0.00')
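The string-conversion rationale is visible with the standard library alone:

from decimal import Decimal

Decimal(0.1)       # Decimal('0.1000000000000000055511151231257827021181583404541015625')
Decimal(str(0.1))  # Decimal('0.1') - what to_decimal effectively does for floats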
- quantize_currency(value: Decimal | float | int) Decimal[source]
Quantize a value to currency precision (2 decimal places).
Rounds using ROUND_HALF_UP (ties round away from zero; this is conventional commercial rounding, not banker’s rounding), which is standard for financial calculations.
- Parameters:
value (Union[Decimal, float, int]) – Numeric value to quantize.
- Return type:
Decimal
- Returns:
Decimal rounded to 2 decimal places.
Example
>>> quantize_currency(Decimal("1234.567"))
Decimal('1234.57')
>>> quantize_currency(1234.565)
Decimal('1234.57')
- is_zero(value: Decimal | float | int) bool[source]
Check if a value is effectively zero after quantization.
Useful for balance checks where we need exact equality after rounding to currency precision.
- Parameters:
value (Union[Decimal, float, int]) – Numeric value to check.
- Return type:
bool
- Returns:
True if value rounds to zero at currency precision.
Example
>>> is_zero(Decimal("0.001"))
True
>>> is_zero(Decimal("0.01"))
False
- sum_decimals(*values: Decimal | float | int) Decimal[source]
Sum multiple values with Decimal precision.
Converts all values to Decimal before summing to maintain precision.
- Parameters:
*values (Union[Decimal, float, int]) – Numeric values to sum.
- Return type:
Decimal
- Returns:
Decimal sum of all values.
Example
>>> sum_decimals(0.1, 0.2, 0.3)
Decimal('0.6')
- safe_divide(numerator: Decimal | float | int, denominator: Decimal | float | int, default: Decimal | float | int = Decimal('0.00')) Decimal[source]
Safely divide two values, returning default if denominator is zero.
- Parameters:
numerator (Union[Decimal, float, int]) – Value to divide.
denominator (Union[Decimal, float, int]) – Value to divide by.
default (Union[Decimal, float, int]) – Value returned when the denominator is zero.
- Return type:
Decimal
- Returns:
Result of division, or default if denominator is zero.
Example
>>> safe_divide(100, 4)
Decimal('25')
>>> safe_divide(100, 0, default=Decimal("-1"))
Decimal('-1')
ergodic_insurance.decision_engine module
Algorithmic insurance decision engine for optimal coverage selection.
This module implements a comprehensive decision framework that optimizes insurance purchasing decisions using multi-objective optimization to balance growth targets with bankruptcy risk constraints.
- class OptimizationMethod(*values)[source]
Bases:
Enum
Available optimization methods.
- SLSQP = 'SLSQP'
- ENHANCED_SLSQP = 'enhanced_slsqp'
- DIFFERENTIAL_EVOLUTION = 'differential_evolution'
- WEIGHTED_SUM = 'weighted_sum'
- TRUST_REGION = 'trust_region'
- PENALTY_METHOD = 'penalty_method'
- AUGMENTED_LAGRANGIAN = 'augmented_lagrangian'
- MULTI_START = 'multi_start'
- class OptimizationConstraints(max_premium_budget: float = 1000000, min_coverage_limit: float = 5000000, max_coverage_limit: float = 100000000, max_bankruptcy_probability: float = 0.01, min_retained_limit: float = 100000, max_retained_limit: float = 10000000, max_layers: int = 5, min_layers: int = 1, required_roi_improvement: float = 0.0, max_debt_to_equity: float = 2.0, max_insurance_cost_ratio: float = 0.03, min_coverage_requirement: float = 0.0, max_retention_limit: float = inf) None[source]
Bases:
object
Constraints for insurance optimization.
- class InsuranceDecision(retained_limit: float, layers: List[EnhancedInsuranceLayer], total_premium: float, total_coverage: float, pricing_scenario: str, optimization_method: str, convergence_iterations: int = 0, objective_value: float = 0.0) None[source]
Bases:
object
Represents an insurance purchasing decision.
- layers: List[EnhancedInsuranceLayer]
- class DecisionMetrics(ergodic_growth_rate: float, bankruptcy_probability: float, expected_roe: float, roe_improvement: float, premium_to_limit_ratio: float, coverage_adequacy: float, capital_efficiency: float, value_at_risk_95: float, conditional_value_at_risk: float, decision_score: float = 0.0, time_weighted_roe: float = 0.0, roe_volatility: float = 0.0, roe_sharpe_ratio: float = 0.0, roe_downside_deviation: float = 0.0, roe_1yr_rolling: float = 0.0, roe_3yr_rolling: float = 0.0, roe_5yr_rolling: float = 0.0, operating_roe: float = 0.0, insurance_impact_roe: float = 0.0, tax_effect_roe: float = 0.0) None[source]
Bases:
object
Comprehensive metrics for evaluating an insurance decision.
- class SensitivityReport(base_decision: InsuranceDecision, base_metrics: DecisionMetrics, parameter_sensitivities: Dict[str, Dict[str, float]], key_drivers: List[str], robust_range: Dict[str, Tuple[float, float]], stress_test_results: Dict[str, DecisionMetrics]) None[source]
Bases:
object
Results of sensitivity analysis.
- base_decision: InsuranceDecision
- base_metrics: DecisionMetrics
- stress_test_results: Dict[str, DecisionMetrics]
- class Recommendations(primary_recommendation: InsuranceDecision, primary_rationale: str, alternative_options: List[Tuple[InsuranceDecision, str]], implementation_timeline: List[str], risk_considerations: List[str], expected_benefits: Dict[str, float], confidence_level: float) None[source]
Bases:
object
Executive-ready recommendations from the decision engine.
- primary_recommendation: InsuranceDecision
- alternative_options: List[Tuple[InsuranceDecision, str]]
- class InsuranceDecisionEngine(manufacturer: WidgetManufacturer, loss_distribution: LossDistribution, pricing_scenario: str = 'baseline', config_loader: ConfigLoader | None = None, engine_config: DecisionEngineConfig | None = None)[source]
Bases:
object
Algorithmic engine for optimizing insurance decisions.
- optimize_insurance_decision(constraints: OptimizationConstraints, method: OptimizationMethod = OptimizationMethod.SLSQP, weights: Dict[str, float] | None = None, _attempted_methods: Set[OptimizationMethod] | None = None) InsuranceDecision[source]
Find optimal insurance structure given constraints.
Uses multi-objective optimization to balance growth, risk, and cost. Falls back through alternative methods if validation fails, tracking attempted methods to prevent infinite recursion.
- Parameters:
constraints (OptimizationConstraints) – Optimization constraints
method (OptimizationMethod) – Optimization method to use
weights (Optional[Dict[str, float]]) – Objective function weights (default: balanced)
_attempted_methods (Optional[Set[OptimizationMethod]]) – Internal set tracking methods already tried (prevents infinite recursion). Callers should not set this.
- Return type:
InsuranceDecision
- Returns:
Optimal insurance decision
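Example
A usage sketch, assuming a manufacturer and loss distribution have already been constructed elsewhere:

engine = InsuranceDecisionEngine(
    manufacturer=manufacturer,
    loss_distribution=loss_distribution,
    pricing_scenario="baseline",
)
constraints = OptimizationConstraints(max_premium_budget=500_000)
decision = engine.optimize_insurance_decision(
    constraints,
    method=OptimizationMethod.ENHANCED_SLSQP,
)
metrics = engine.calculate_decision_metrics(decision)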
- calculate_decision_metrics(decision: InsuranceDecision) DecisionMetrics[source]
Calculate comprehensive metrics for a decision.
- Parameters:
decision (InsuranceDecision) – Insurance decision to evaluate
- Return type:
DecisionMetrics
- Returns:
Comprehensive metrics
- run_sensitivity_analysis(base_decision: InsuranceDecision, parameters: List[str] | None = None, variation_range: float = 0.2) SensitivityReport[source]
Analyze decision sensitivity to parameter changes.
- Parameters:
base_decision (InsuranceDecision) – Base decision to analyze
parameters (Optional[List[str]]) – Parameters to test (default: key parameters)
variation_range (float) – ±% to vary parameters (default: 20%)
- Return type:
SensitivityReport
- Returns:
Comprehensive sensitivity report
- generate_recommendations(analysis_results: List[Tuple[InsuranceDecision, DecisionMetrics]]) Recommendations[source]
Generate executive-ready recommendations.
- Parameters:
analysis_results (List[Tuple[InsuranceDecision, DecisionMetrics]]) – List of (decision, metrics) tuples to analyze
- Return type:
Recommendations
- Returns:
Comprehensive recommendations
ergodic_insurance.ergodic_analyzer module
Ergodic analysis framework for comparing time-average vs ensemble-average growth.
This module provides the theoretical foundation and computational tools for applying ergodic economics to insurance decision making. It implements Ole Peters’ framework for distinguishing between ensemble averages (what we expect to happen across many parallel scenarios) and time averages (what actually happens to a single entity over time).
The key insight is that for multiplicative processes like business growth with volatile losses, the ensemble average and time average diverge significantly. Insurance transforms the growth process in ways that traditional expected value analysis cannot capture, often making insurance optimal even when premiums exceed expected losses by substantial margins.
- Key Concepts:
Time Average Growth Rate: The growth rate experienced by a single business entity over time, calculated as g = (1/T) * ln(X(T)/X(0)). This captures the actual compound growth experience (a direct computation is sketched after this list).
Ensemble Average Growth Rate: The expected growth rate calculated across many parallel scenarios at each time point. This represents the traditional expected value approach.
Ergodic Divergence: The difference between time and ensemble averages, indicating non-ergodic behavior where individual experience differs from statistical expectations.
Survival Rate: The fraction of simulation paths that remain solvent, capturing the probability dimension ignored by pure growth metrics.
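Both averages can be computed directly from simulated trajectories. A minimal sketch of the two definitions above, using the ensemble-average form documented in ErgodicAnalysisResults (the dt argument and trajectory data are illustrative):

import numpy as np

def time_average_growth(path: np.ndarray, dt: float = 1.0) -> float:
    # g = (1/T) * ln(X(T)/X(0)); a path ending at 0 (bankruptcy) yields -inf
    T = (len(path) - 1) * dt
    return float(np.log(path[-1] / path[0]) / T)

def ensemble_average_growth(paths: list, dt: float = 1.0) -> float:
    # Growth rate of the ensemble mean: ln(mean(X_T) / mean(X_0)) / T
    finals = np.array([p[-1] for p in paths])
    initials = np.array([p[0] for p in paths])
    T = (len(paths[0]) - 1) * dt
    return float(np.log(finals.mean() / initials.mean()) / T)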
- Theoretical Foundation:
Based on Ole Peters’ ergodic economics framework (Peters, 2019; Peters & Gell-Mann, 2016), this module demonstrates that:
Multiplicative Growth: Business equity follows multiplicative dynamics where losses compound over time in non-linear ways.
Jensen’s Inequality: For concave utility functions (log wealth), the expected value of a function differs from the function of expected values.
Path Dependence: The order and timing of losses matters critically, making time-average analysis essential for decision making.
Insurance as Growth Optimization: Insurance can increase time-average growth rates even when premiums appear “expensive” from ensemble perspective.
- Core Classes:
ErgodicAnalyzer: Main analysis engine with comparison methods
ErgodicData: Standardized data container for time series analysis
ErgodicAnalysisResults: Comprehensive results from integrated analysis
ValidationResults: Insurance impact validation results
Examples
Basic ergodic comparison between insured and uninsured scenarios:
import numpy as np
from ergodic_insurance import ErgodicAnalyzer
# Initialize analyzer
analyzer = ErgodicAnalyzer(convergence_threshold=0.01)
# Simulate equity trajectories (example data)
insured_trajectories = [
np.array([10e6, 10.2e6, 10.5e6, 10.8e6, 11.1e6]), # Stable growth
np.array([10e6, 10.1e6, 10.3e6, 10.6e6, 10.9e6]), # Stable growth
np.array([10e6, 10.3e6, 10.7e6, 11.0e6, 11.4e6]) # Stable growth
]
uninsured_trajectories = [
np.array([10e6, 10.5e6, 8.2e6, 12.1e6, 13.5e6]), # Volatile
np.array([10e6, 9.8e6, 5.1e6, 0]), # Bankruptcy
np.array([10e6, 10.8e6, 11.2e6, 14.8e6, 16.2e6]) # High growth
]
# Compare scenarios
comparison = analyzer.compare_scenarios(
insured_trajectories,
uninsured_trajectories,
metric="equity"
)
print(f"Insured time-average growth: {comparison['insured']['time_average_mean']:.1%}")
print(f"Uninsured time-average growth: {comparison['uninsured']['time_average_mean']:.1%}")
print(f"Ergodic advantage: {comparison['ergodic_advantage']['time_average_gain']:.1%}")
print(f"Survival rate improvement: {comparison['ergodic_advantage']['survival_gain']:.1%}")
Monte Carlo analysis with convergence checking:
from ergodic_insurance.simulation import run_monte_carlo
# Run Monte Carlo simulations (pseudo-code)
simulation_results = run_monte_carlo(
n_simulations=1000,
time_horizon=20,
insurance_enabled=True
)
# Analyze batch results
analysis = analyzer.analyze_simulation_batch(
simulation_results,
label="Insured Scenario"
)
print(f"Time-average growth: {analysis['time_average']['mean']:.2%} ± {analysis['time_average']['std']:.2%}")
print(f"Ensemble average: {analysis['ensemble_average']['mean']:.2%}")
print(f"Ergodic divergence: {analysis['ergodic_divergence']:.2%}")
print(f"Convergence: {analysis['convergence']['converged']} (SE: {analysis['convergence']['standard_error']:.4f})")
Integration with loss modeling:
from ergodic_insurance import LossData, InsuranceProgram, WidgetManufacturer
# Set up integrated analysis
loss_data = LossData.from_distribution(
frequency_lambda=2.5,
severity_mean=1_000_000,
severity_cv=2.0
)
insurance = InsuranceProgram(
layers=[(0, 1_000_000, 0.015), (1_000_000, 10_000_000, 0.008)]
)
manufacturer = WidgetManufacturer(config)
# Run integrated ergodic analysis
results = analyzer.integrate_loss_ergodic_analysis(
loss_data=loss_data,
insurance_program=insurance,
manufacturer=manufacturer,
time_horizon=20,
n_simulations=1000
)
print(f"Time-average growth rate: {results.time_average_growth:.2%}")
print(f"Ensemble average growth: {results.ensemble_average_growth:.2%}")
print(f"Survival rate: {results.survival_rate:.1%}")
print(f"Insurance benefit: ${results.insurance_impact['net_benefit']:,.0f}")
print(f"Analysis valid: {results.validation_passed}")
- Implementation Notes:
All growth rate calculations use natural logarithms for mathematical consistency
Infinite values (from bankruptcy) are handled gracefully in statistical calculations
Convergence checking uses standard error to determine Monte Carlo adequacy
Significance testing employs t-tests for comparing growth rate distributions
Variable-length trajectories (due to insolvency) are supported throughout
- Performance Optimization:
Vectorized numpy operations for large Monte Carlo batches
Efficient handling of mixed-length trajectory data
Memory-conscious processing of large simulation datasets
Configurable convergence thresholds to balance accuracy and computation time
References
Peters, O. (2019). “The ergodicity problem in economics.” Nature Physics, 15(12), 1216-1221.
Peters, O., & Gell-Mann, M. (2016). “Evaluating gambles using dynamics.” Chaos, 26(2), 023103.
Kelly, J. L. (1956). “A new interpretation of information rate.” Bell System Technical Journal, 35(4), 917-926.
See also
simulation: Monte Carlo simulation framework
manufacturer: Financial model for business dynamics
insurance_program: Insurance structure modeling
optimization: Optimization algorithms using ergodic metrics
- class ErgodicData(time_series: ndarray = <factory>, values: ndarray = <factory>, metadata: Dict[str, Any] = <factory>) None[source]
Bases:
object
Standardized data container for ergodic time series analysis.
This class provides a consistent format for storing and validating time series data used in ergodic calculations. It ensures data integrity and provides metadata tracking for analysis reproducibility.
- time_series
Array of time points corresponding to values. Should be monotonically increasing for meaningful analysis.
- Type:
np.ndarray
- values
Array of observed values (e.g., equity, assets) at each time point. Must have same length as time_series.
- Type:
np.ndarray
- metadata
Dictionary containing analysis metadata such as simulation parameters, data source, units, etc.
- Type:
Dict[str, Any]
Examples
Create ergodic data for analysis:
import numpy as np

# Equity trajectory over 10 years
data = ErgodicData(
    time_series=np.arange(11),  # Years 0-10
    values=np.array([10e6, 10.2e6, 10.5e6, 10.1e6, 10.8e6, 11.2e6,
                     10.9e6, 11.5e6, 12.1e6, 12.8e6, 13.2e6]),
    metadata={
        'units': 'USD',
        'metric': 'equity',
        'simulation_id': 'run_001',
        'scenario': 'insured'
    }
)

# Validate data consistency
assert data.validate(), "Data validation failed"
Handle validation failures:
# Mismatched lengths will fail validation
invalid_data = ErgodicData(
    time_series=np.arange(10),
    values=np.arange(5),  # Wrong length
    metadata={'note': 'This will fail validation'}
)
if not invalid_data.validate():
    print("Data validation failed - fix before analysis")
Note
The validate() method should be called before using data in ergodic calculations to ensure mathematical operations will succeed.
See also
ErgodicAnalyzer: Main analysis class that uses ErgodicData
ErgodicAnalysisResults: Results format for ergodic calculations
- validate() bool[source]
Validate data consistency and integrity.
Performs comprehensive validation of the ergodic data to ensure it meets requirements for mathematical analysis. This includes checking array lengths, data types, and basic reasonableness of values.
- Returns:
True if all validation checks pass, False otherwise. False indicates the data should not be used in ergodic calculations without correction.
- Return type:
bool
Examples
Validate data before analysis:
data = ErgodicData(
    time_series=np.arange(10),
    values=np.random.randn(10) + 100,
    metadata={'units': 'USD'}
)
if data.validate():
    print("Data validated - ready for analysis")
else:
    print("Data validation failed - check inputs")
- Validation Checks:
Arrays have matching lengths
Arrays are not empty
Time series is monotonic (if more than one point)
Values are numeric (not NaN in inappropriate places)
- class ErgodicAnalysisResults(time_average_growth: float, ensemble_average_growth: float, survival_rate: float, ergodic_divergence: float, insurance_impact: Dict[str, float], validation_passed: bool, metadata: Dict[str, Any] = <factory>) None[source]
Bases:
object
Comprehensive results from integrated ergodic analysis.
This class encapsulates all results from a complete ergodic analysis, including growth metrics, survival statistics, insurance impacts, and validation status. It provides a standardized format for reporting and comparing different insurance strategies.
- time_average_growth
Mean time-average growth rate across all valid simulation paths. Calculated as the average of individual path growth rates: mean(ln(X_final/X_initial)/T). May be -inf if all paths resulted in bankruptcy.
- Type:
float
- ensemble_average_growth
Ensemble average growth rate calculated from the mean of initial and final values across all paths: ln(mean(X_final)/mean(X_initial))/T. Always finite for valid data.
- Type:
float
- survival_rate
Fraction of simulation paths that remained solvent throughout the analysis period. Range: [0.0, 1.0].
- Type:
float
- ergodic_divergence
Difference between time-average and ensemble average growth rates (time_average_growth - ensemble_average_growth). Positive values indicate time-average exceeds ensemble average.
- Type:
float
- insurance_impact
Dictionary containing insurance-related metrics such as:
- ‘premium_cost’: Total premium payments
- ‘recovery_benefit’: Total insurance recoveries
- ‘net_benefit’: Net financial benefit of insurance
- ‘growth_improvement’: Improvement in growth rate from insurance
- Type:
Dict[str, float]
- validation_passed
Whether the analysis passed internal validation checks for data consistency and mathematical validity.
- Type:
bool
- metadata
Additional analysis metadata including:
- ‘n_simulations’: Number of Monte Carlo simulations
- ‘time_horizon’: Analysis time horizon
- ‘n_survived’: Absolute number of paths that survived
- ‘loss_statistics’: Statistics about loss distributions
- Type:
Dict[str, Any]
Examples
Interpret analysis results:
# Example results from ergodic analysis
results = ErgodicAnalysisResults(
    time_average_growth=0.045,      # 4.5% annual growth
    ensemble_average_growth=0.052,  # 5.2% ensemble average
    survival_rate=0.95,             # 95% survival rate
    ergodic_divergence=-0.007,      # -0.7% divergence
    insurance_impact={
        'premium_cost': 2_500_000,
        'recovery_benefit': 8_200_000,
        'net_benefit': 5_700_000,
        'growth_improvement': 0.012
    },
    validation_passed=True,
    metadata={
        'n_simulations': 1000,
        'time_horizon': 20,
        'n_survived': 950
    }
)

# Interpret results
if results.validation_passed:
    print(f"Time-average growth: {results.time_average_growth:.1%}")
    print(f"Ensemble average: {results.ensemble_average_growth:.1%}")
    if results.ergodic_divergence < 0:
        print("Insurance reduces volatility drag (ergodic benefit)")
    if results.insurance_impact['net_benefit'] > 0:
        print(f"Insurance provides net benefit: ${results.insurance_impact['net_benefit']:,.0f}")
else:
    print("Analysis validation failed - results may be unreliable")
Compare multiple scenarios:
def compare_results(results_a, results_b, label_a="Scenario A", label_b="Scenario B"):
    print(f"{label_a} vs {label_b}:")
    print(f"  Time-average growth: {results_a.time_average_growth:.2%} vs {results_b.time_average_growth:.2%}")
    print(f"  Survival rate: {results_a.survival_rate:.1%} vs {results_b.survival_rate:.1%}")
    print(f"  Ergodic divergence: {results_a.ergodic_divergence:.3f} vs {results_b.ergodic_divergence:.3f}")

    growth_advantage = results_a.time_average_growth - results_b.time_average_growth
    survival_advantage = results_a.survival_rate - results_b.survival_rate
    print(f"  Advantages: Growth={growth_advantage:.2%}, Survival={survival_advantage:.1%}")
Note
All growth rates are expressed as decimal values (0.05 = 5% annual growth). Negative ergodic_divergence indicates insurance reduces “volatility drag”. Always check validation_passed before interpreting results.
See also
ErgodicAnalyzer: Class that generates these results
ErgodicAnalyzer.integrate_loss_ergodic_analysis(): Method producing these results
ValidationResults: Detailed validation information
- class ValidationResults(premium_deductions_correct: bool, recoveries_credited: bool, collateral_impacts_included: bool, time_average_reflects_benefit: bool, overall_valid: bool, details: Dict[str, Any] = <factory>) → None[source]
Bases: object

Comprehensive results from insurance impact validation analysis.
This class encapsulates the results of detailed validation checks performed on insurance effects in ergodic analysis. It provides both high-level validation status and detailed diagnostic information to help identify and resolve any modeling inconsistencies.
- premium_deductions_correct
Whether insurance premiums are properly deducted from cash flows. True indicates expected premium costs match observed differences in net income between scenarios.
- Type:
bool
- recoveries_credited
Whether insurance recoveries are properly credited to improve financial outcomes. True indicates insured scenarios show appropriate financial benefit from loss recoveries.
- Type:
bool
- collateral_impacts_included
Whether letter of credit costs and asset restrictions are properly modeled. True indicates collateral requirements are reflected in financial calculations.
- Type:
bool
- time_average_reflects_benefit
Whether time-average growth rate calculations properly reflect insurance benefits. True indicates growth improvements are consistent with insurance effects.
- Type:
bool
- overall_valid
Master validation flag indicating whether all individual checks passed. True means the ergodic analysis results are reliable and properly reflect insurance impacts.
- Type:
bool
- details
Detailed diagnostic information from each validation check, including specific metrics, calculations, and discrepancy measurements. Used for troubleshooting validation failures.
- Type:
Dict[str, Any]
Examples
Interpret validation results:
validation = analyzer.validate_insurance_ergodic_impact(
    base_scenario, insurance_scenario, insurance_program
)

if validation.overall_valid:
    print("✓ All validation checks passed")
    print("Ergodic analysis results are reliable")
else:
    print("⚠ Validation issues detected:")
    if not validation.premium_deductions_correct:
        print("  - Premium deduction mismatch")
    if not validation.recoveries_credited:
        print("  - Recovery crediting issue")
    if not validation.collateral_impacts_included:
        print("  - Collateral impact missing")
    if not validation.time_average_reflects_benefit:
        print("  - Growth calculation inconsistency")
    print("Review model implementation before using results")
Access detailed diagnostics:
if 'premium_check' in validation.details:
    premium_info = validation.details['premium_check']
    expected = premium_info['expected']
    actual = premium_info['actual_diff']
    print(f"Premium validation: Expected ${expected:,.0f}, Got ${actual:,.0f}")
    if abs(expected - actual) > expected * 0.05:  # 5% tolerance
        print("⚠ Significant premium discrepancy detected")
Note
A failed overall validation doesn’t necessarily mean the analysis is wrong - it may indicate edge cases or modeling assumptions that need review. Always examine the details for specific guidance on issues.
See also
ErgodicAnalyzer.validate_insurance_ergodic_impact(): Method generating these results
ErgodicAnalysisResults: Main analysis results that this validation supports
- class ErgodicAnalyzer(convergence_threshold: float = 0.01)[source]
Bases: object

Advanced analyzer for ergodic properties of insurance strategies.
This class implements the core computational engine for ergodic economics analysis in insurance contexts. It provides methods to calculate and compare time-average versus ensemble-average growth rates, demonstrating the fundamental difference between traditional expected-value thinking and actual experienced growth over time.
The analyzer addresses the key ergodic insight that for multiplicative processes (like business growth with volatile losses), what happens to an ensemble of businesses differs from what happens to any individual business over time. Insurance can improve time-average growth even when it appears costly from an ensemble (expected value) perspective.
- Key Capabilities:
Time-average growth rate calculation for individual trajectories
Ensemble average computation across multiple simulation paths
Statistical significance testing of insurance benefits
Monte Carlo convergence analysis
Integrated loss modeling and insurance impact assessment
Comprehensive validation of insurance effects
- convergence_threshold
Standard error threshold for determining Monte Carlo convergence. Lower values require more simulations but provide higher confidence in results.
- Type:
float
- Mathematical Foundation:
- Time-Average Growth: For a trajectory X(t), the time-average growth rate is:
g_time = (1/T) * ln(X(T)/X(0))
- Ensemble Average Growth: Across N paths, the ensemble growth rate is:
g_ensemble = (1/T) * ln(⟨X(T)⟩/⟨X(0)⟩)
- Ergodic Divergence: The difference g_time - g_ensemble indicates
non-ergodic behavior where individual experience differs from statistical expectations.
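The divergence is easy to demonstrate numerically. The following self-contained sketch (an illustrative toy process, not package code) simulates a multiplicative coin-flip and computes both averages:

import numpy as np

rng = np.random.default_rng(42)

# Toy multiplicative process: wealth is multiplied by 1.5 or 0.6 each year
# with equal probability. Starting wealth is 1, so X(0) drops out of the logs.
n_paths, n_years = 10_000, 30
factors = rng.choice([1.5, 0.6], size=(n_paths, n_years))
X_final = np.prod(factors, axis=1)

g_time = np.mean(np.log(X_final)) / n_years        # time-average growth
g_ensemble = np.log(np.mean(X_final)) / n_years    # ensemble-average growth

# Expect g_time ≈ 0.5*(ln 1.5 + ln 0.6) ≈ -0.053 (the typical path shrinks),
# while g_ensemble ≈ ln(1.05) ≈ +0.049 (the ensemble mean grows).
print(f"g_time     = {g_time:+.3f}")
print(f"g_ensemble = {g_ensemble:+.3f}")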
Examples
Basic analyzer setup and usage:
from ergodic_insurance import ErgodicAnalyzer
import numpy as np

# Initialize with tight convergence criteria
analyzer = ErgodicAnalyzer(convergence_threshold=0.005)

# Calculate time-average growth for a single trajectory
equity_path = np.array([10e6, 10.5e6, 9.8e6, 11.2e6, 12.1e6])
time_avg_growth = analyzer.calculate_time_average_growth(equity_path)
print(f"Time-average growth: {time_avg_growth:.2%} annually")
Ensemble analysis with multiple trajectories:
# Multiple simulation paths (some ending in bankruptcy)
trajectories = [
    np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6]),  # Survivor
    np.array([10e6, 9.2e6, 8.1e6, 6.8e6, 0]),          # Bankruptcy
    np.array([10e6, 10.8e6, 11.5e6, 12.8e6, 14.2e6]),  # High growth
    np.array([10e6, 9.8e6, 10.2e6, 10.6e6, 11.1e6])    # Stable growth
]

# Calculate ensemble statistics
ensemble_stats = analyzer.calculate_ensemble_average(
    trajectories, metric="growth_rate"
)
print(f"Ensemble growth rate: {ensemble_stats['mean']:.2%}")
print(f"Survival rate: {ensemble_stats['survival_rate']:.1%}")
print(f"Growth rate std dev: {ensemble_stats['std']:.2%}")
Insurance scenario comparison:
# Compare insured vs uninsured scenarios
insured_paths = generate_insured_trajectories()      # Your simulation code
uninsured_paths = generate_uninsured_trajectories()  # Your simulation code

comparison = analyzer.compare_scenarios(
    insured_paths, uninsured_paths, metric="equity"
)

# Extract key insights
time_avg_benefit = comparison['ergodic_advantage']['time_average_gain']
survival_benefit = comparison['ergodic_advantage']['survival_gain']
is_significant = comparison['ergodic_advantage']['significant']

print(f"Time-average growth improvement: {time_avg_benefit:.2%}")
print(f"Survival rate improvement: {survival_benefit:.1%}")
print(f"Statistically significant: {is_significant}")
Monte Carlo convergence analysis:
# Run large Monte Carlo study
simulation_results = run_monte_carlo_study(n_sims=2000)
analysis = analyzer.analyze_simulation_batch(
    simulation_results, label="High-Coverage Insurance"
)

# Check if we have enough simulations
if analysis['convergence']['converged']:
    print("Monte Carlo has converged - results are reliable")
    print(f"Standard error: {analysis['convergence']['standard_error']:.4f}")
else:
    print("Need more simulations for convergence")
    needed_se = analyzer.convergence_threshold
    current_se = analysis['convergence']['standard_error']
    factor = (current_se / needed_se) ** 2
    print(f"Suggest ~{int(2000 * factor)} simulations")
- Advanced Features:
The analyzer provides several advanced capabilities for robust analysis:
Variable-Length Trajectories: Handles paths that end early due to bankruptcy, maintaining proper statistics across mixed survival scenarios.
Significance Testing: Built-in t-tests to determine if observed differences between scenarios are statistically meaningful.
Convergence Monitoring: Automated checking of Monte Carlo convergence using rolling standard error calculations.
Integrated Validation: Comprehensive validation of insurance effects to ensure results accurately reflect premium costs, recoveries, and collateral impacts.
- Performance Notes:
Optimized for large Monte Carlo datasets (1000+ simulations)
Memory-efficient processing of variable-length trajectories
Vectorized calculations where possible for speed
Graceful handling of edge cases (bankruptcy, infinite values)
See also
ErgodicAnalysisResults: Comprehensive results format
ValidationResults: Insurance impact validation results
integrate_loss_ergodic_analysis(): End-to-end analysis pipeline
compare_scenarios(): Core scenario comparison functionality

- calculate_time_average_growth(values: ndarray, time_horizon: int | None = None) → float[source]
Calculate time-average growth rate for a single trajectory.
This method implements the core ergodic calculation for individual path growth rates using the logarithmic growth formula. It handles edge cases gracefully, including bankruptcy scenarios and invalid data.
The time-average growth rate represents the actual compound growth experienced by a single entity over time, which differs fundamentally from ensemble averages in multiplicative processes.
- Parameters:
values (np.ndarray) – Array of values over time (e.g., equity, assets, wealth). Should be monotonic in time with positive values for meaningful growth calculations. Length must be >= 2 for growth calculation.
time_horizon (Optional[int]) – Specific time horizon to use for calculation. If None, uses the full trajectory length minus 1. Useful for comparing trajectories of different lengths or analyzing partial periods.
- Returns:
- Time-average growth rate as decimal (0.05 = 5% annual growth).
Special return values:
- -inf: Trajectory ended in bankruptcy (final value <= 0)
- 0.0: Single time point or zero time horizon
- Finite value: Calculated growth rate
- Return type:
float
Examples
Calculate growth for successful trajectory:
import numpy as np

# 5-year equity trajectory
equity = np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6])
growth = analyzer.calculate_time_average_growth(equity)
print(f"Growth rate: {growth:.2%} annually")
# Output: Growth rate: 5.58% annually  (ln(12.5/10)/4 ≈ 0.0558)
Handle bankruptcy scenario:
# Trajectory ending in bankruptcy
failed_equity = np.array([10e6, 9.2e6, 7.1e6, 4.8e6, 0])
growth = analyzer.calculate_time_average_growth(failed_equity)
print(f"Growth rate: {growth}")
# Output: Growth rate: -inf
Analyze partial trajectory:
# Long trajectory, analyze first 3 years only
long_equity = np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6, 13.1e6])
partial_growth = analyzer.calculate_time_average_growth(
    long_equity, time_horizon=3
)  # Analyzes first 4 points (years 0-3)
Handle trajectories with initial zeros:
# Trajectory starting from zero (invalid)
invalid_equity = np.array([0, 1e6, 2e6, 3e6, 4e6])
growth = analyzer.calculate_time_average_growth(invalid_equity)
# Will find first valid positive value and calculate from there
- Mathematical Details:
- The calculation uses the formula:
g = (1/T) * ln(X(T) / X(0))
Where:
- g: time-average growth rate
- T: time horizon
- X(T): final value
- X(0): initial value
- ln: natural logarithm
This formula gives the constant compound growth rate that would produce the observed change from initial to final value.
- Edge Cases:
Empty array: Returns -inf
Single value: Returns 0.0
Final value <= 0: Returns -inf (bankruptcy)
All values <= 0: Returns -inf
Zero or negative time horizon: Returns 0.0 for zero, -inf for negative (see the sketch below)
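Putting the formula and the edge cases together, a minimal sketch of the calculation might look like this (a hypothetical re-implementation for illustration; the actual method may differ in details):

import numpy as np

def time_average_growth(values, time_horizon=None):
    """Hypothetical sketch mirroring the documented edge cases."""
    values = np.asarray(values, dtype=float)
    if values.size == 0:
        return -np.inf                   # empty array
    if values.size == 1:
        return 0.0                       # single time point
    if values[-1] <= 0:
        return -np.inf                   # bankruptcy: final value <= 0
    positive = np.nonzero(values > 0)[0]
    start = positive[0]                  # first positive value (see Warning below)
    T = time_horizon if time_horizon is not None else values.size - 1 - start
    if T == 0:
        return 0.0
    if T < 0:
        return -np.inf
    return float(np.log(values[-1] / values[start]) / T)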
Note
This is the fundamental calculation in ergodic economics, representing the growth rate that a single entity actually experiences over time, as opposed to what we might expect from ensemble averages.
Warning
The method filters out non-positive values when finding the initial value, which may skip early periods of the trajectory. Ensure your data represents meaningful business values (positive equity/assets).
See also
calculate_ensemble_average(): For ensemble growth calculations
compare_scenarios(): For comparing time vs ensemble averages
- calculate_ensemble_average(trajectories: List[ndarray] | ndarray, metric: str = 'final_value') → Dict[str, float][source]
Calculate ensemble average and statistics across multiple simulation paths.
This method computes ensemble statistics representing the traditional expected value approach to analyzing multiple parallel scenarios. It handles variable-length trajectories (due to bankruptcy) and provides comprehensive statistics for comparison with time-average calculations.
The ensemble perspective answers: “What would happen on average across many parallel businesses?” This differs from the time-average perspective of “What happens to one business over time?”
- Parameters:
trajectories (Union[List[np.ndarray], np.ndarray]) – Multiple simulation trajectories. Can be:
- List of 1D numpy arrays (supports variable lengths)
- 2D numpy array with shape (n_paths, n_timesteps)
Each trajectory represents values over time (equity, assets, etc.)
metric (str) – Type of ensemble statistic to compute:
- "final_value": Statistics of final values across paths
- "growth_rate": Statistics of growth rates across paths
- "full": Average trajectory at each time step (fixed-length only)
Defaults to "final_value".
- Returns:
- Dictionary containing ensemble statistics:
’mean’: Mean of the selected metric across all valid paths
’std’: Standard deviation of the metric
’median’: Median value of the metric
’survival_rate’: Fraction of paths avoiding bankruptcy
’n_survived’: Absolute number of surviving paths
’n_total’: Total number of input paths
For metric="full", additionally:
- 'mean_trajectory': Mean values at each time step
- 'std_trajectory': Standard deviations at each time step
- Return type:
Dict[str, float]
Examples
Analyze final equity values:
import numpy as np

# Multiple simulation results
trajectories = [
    np.array([10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6]),  # Success
    np.array([10e6, 9.2e6, 8.1e6, 6.8e6, 0]),          # Bankruptcy
    np.array([10e6, 10.8e6, 11.5e6, 12.8e6, 14.2e6]),  # High growth
]

final_stats = analyzer.calculate_ensemble_average(
    trajectories, metric="final_value"
)
print(f"Average final equity: ${final_stats['mean']:,.0f}")
print(f"Survival rate: {final_stats['survival_rate']:.1%}")
print(f"Standard deviation: ${final_stats['std']:,.0f}")
Analyze growth rate distribution:
growth_stats = analyzer.calculate_ensemble_average(
    trajectories, metric="growth_rate"
)
print(f"Average growth rate: {growth_stats['mean']:.2%}")
print(f"Growth rate volatility: {growth_stats['std']:.2%}")
print(f"Median growth: {growth_stats['median']:.2%}")
Full trajectory analysis (fixed-length only):
# Convert to fixed-length array
fixed_trajectories = np.array([
    [10e6, 10.5e6, 11.2e6, 11.8e6, 12.5e6],
    [10e6, 9.8e6, 10.1e6, 10.6e6, 11.1e6],
    [10e6, 10.8e6, 11.5e6, 12.8e6, 14.2e6]
])

full_stats = analyzer.calculate_ensemble_average(
    fixed_trajectories, metric="full"
)
mean_path = full_stats['mean_trajectory']
print(f"Mean trajectory: {mean_path}")
# Shows average value at each time step
Handle mixed survival scenarios:
mixed_trajectories = [
    np.array([10e6, 11e6, 12e6]),        # Short survivor
    np.array([10e6, 9e6, 0]),            # Early bankruptcy
    np.array([10e6, 11e6, 12e6, 13e6]),  # Long survivor
]

stats = analyzer.calculate_ensemble_average(mixed_trajectories)
print(f"{stats['n_survived']}/{stats['n_total']} paths survived")
print(f"Survival rate: {stats['survival_rate']:.1%}")
- Statistical Interpretation:
Mean: The expected value under the ensemble perspective. For multiplicative processes, this may differ significantly from what any individual entity experiences.
Standard Deviation: Measures the spread of outcomes across the ensemble, indicating the uncertainty in individual results.
Survival Rate: Critical metric often ignored in traditional expected value analysis. Shows the probability of avoiding bankruptcy.
Median: Often more representative than the mean for the skewed distributions common in financial modeling; the short example below illustrates this.
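A quick illustration of the mean/median gap on skewed data (synthetic numbers for illustration only):

import numpy as np

rng = np.random.default_rng(0)
# Log-normal final values: heavily right-skewed, as is common in simulations
final_values = rng.lognormal(mean=np.log(10e6), sigma=1.0, size=10_000)

print(f"mean:   ${np.mean(final_values):,.0f}")    # pulled up by the long right tail (~$16.5M)
print(f"median: ${np.median(final_values):,.0f}")  # close to the typical outcome (~$10M)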
- Edge Cases:
Empty trajectory list: Returns zeros/NaN appropriately
All paths end in bankruptcy: survival_rate=0, mean/median may be 0
Single trajectory: Statistics reduce to that trajectory’s values
Mixed lengths: Handled gracefully with proper filtering
- Performance Notes:
Optimized for large numbers of trajectories (1000+ paths)
Memory efficient for mixed-length trajectory lists
Vectorized calculations where possible
See also
calculate_time_average_growth(): Individual trajectory analysis
compare_scenarios(): Ensemble vs time-average comparison
analyze_simulation_batch(): Comprehensive batch analysis
- check_convergence(values: ndarray, window_size: int = 100) → Tuple[bool, float][source]
Check Monte Carlo convergence using rolling standard error analysis.
This method determines whether a Monte Carlo simulation has run enough iterations to provide statistically reliable results. It uses rolling standard error calculations to assess whether adding more simulations would significantly change the estimated mean.
Convergence analysis is crucial for ergodic analysis because insufficient simulations can lead to misleading conclusions about insurance benefits. The method provides both a binary convergence decision and quantitative standard error metrics for informed decision making.
- Parameters:
values (np.ndarray) – Array of values to check for convergence, typically time-average growth rates from Monte Carlo simulations. Should contain at least window_size values for meaningful analysis. Infinite values (from bankruptcy) are handled appropriately.
window_size (int) – Size of rolling window for convergence assessment. Larger windows provide more stable convergence detection but require more data points. Typical values:
- 50: Quick convergence check for small samples
- 100: Standard convergence analysis (default)
- 200: Conservative convergence for high precision
Must be <= len(values) for analysis to proceed.
- Returns:
- Convergence assessment results:
converged (bool): Whether the series has converged according to the specified threshold. True indicates sufficient simulations.
standard_error (float): Current standard error of the mean based on the last window_size observations. Lower values indicate higher precision and greater confidence in results.
- Return type:
Tuple[bool, float]
Examples
Check convergence during Monte Carlo analysis:
import numpy as np

# Simulate running Monte Carlo with growth rate collection
growth_rates = []
converged = False  # Initialize so the final check is always defined
for i in range(2000):  # Up to 2000 simulations
    # Run single simulation (pseudo-code)
    result = run_single_simulation()
    growth_rate = analyzer.calculate_time_average_growth(result.equity)
    growth_rates.append(growth_rate)

    # Check convergence every 100 simulations
    if (i + 1) % 100 == 0 and i >= 100:
        converged, se = analyzer.check_convergence(
            np.array(growth_rates), window_size=100
        )
        print(f"Simulation {i+1}: SE={se:.4f}, Converged={converged}")
        if converged:
            print(f"✓ Convergence achieved after {i+1} simulations")
            break

if not converged:
    print(f"⚠ Convergence not achieved after {len(growth_rates)} simulations")
    print(f"Current standard error: {se:.4f}")
    print(f"Target threshold: {analyzer.convergence_threshold:.4f}")
Adaptive Monte Carlo with convergence monitoring:
def run_adaptive_monte_carlo(target_precision=0.01, max_sims=5000):
    '''Run Monte Carlo until convergence or maximum simulations.'''
    results = []
    for i in range(max_sims):
        # Run simulation
        sim_result = run_single_simulation()
        results.append(sim_result)

        # Extract growth rates for convergence check
        growth_rates = [analyzer.calculate_time_average_growth(r.equity)
                        for r in results]

        # Check convergence (need at least 100 for stability)
        if i >= 100:
            converged, se = analyzer.check_convergence(
                np.array([g for g in growth_rates if np.isfinite(g)])
            )
            if converged and se <= target_precision:
                print(f"Achieved target precision after {i+1} simulations")
                return results, True

    print("Maximum simulations reached without convergence")
    return results, False

# Run adaptive analysis
results, converged = run_adaptive_monte_carlo()
if converged:
    print("Analysis complete with sufficient precision")
else:
    print("Consider increasing maximum simulations")
Convergence diagnostics and troubleshooting:
# Analyze convergence pattern
growth_rates = np.array([...])  # Your Monte Carlo results

# Check convergence with different window sizes
window_sizes = [50, 100, 150, 200]
print("=== Convergence Analysis ===")
for ws in window_sizes:
    if len(growth_rates) >= ws:
        converged, se = analyzer.check_convergence(growth_rates, ws)
        print(f"Window {ws:3d}: SE={se:.5f}, Converged={converged}")

# Track the convergence pattern over the run (conceptual)
rolling_means = []
rolling_ses = []
for i in range(100, len(growth_rates), 10):
    subset = growth_rates[:i]
    converged, se = analyzer.check_convergence(subset)
    rolling_means.append(np.mean(subset[np.isfinite(subset)]))
    rolling_ses.append(se)

# Analyze convergence stability
recent_se_trend = np.diff(rolling_ses[-10:])  # Last 10 points
if np.mean(recent_se_trend) < 0:
    print("✓ Standard error decreasing - convergence improving")
else:
    print("⚠ Standard error not decreasing - may need more simulations")
Compare convergence across scenarios:
# Check convergence for both insured and uninsured scenarios
scenarios = {
    'insured': insured_growth_rates,
    'uninsured': uninsured_growth_rates
}

convergence_status = {}
for name, rates in scenarios.items():
    converged, se = analyzer.check_convergence(rates)
    convergence_status[name] = {
        'converged': converged,
        'standard_error': se,
        'n_simulations': len(rates)
    }

print("=== Scenario Convergence Status ===")
for name, status in convergence_status.items():
    print(f"{name:10}: {status['converged']} "
          f"(SE={status['standard_error']:.4f}, n={status['n_simulations']})")

# Determine if comparison is valid
both_converged = all(s['converged'] for s in convergence_status.values())
if both_converged:
    print("✓ Both scenarios converged - comparison is reliable")
else:
    print("⚠ Incomplete convergence - results may be unreliable")
- Mathematical Background:
The method calculates the standard error of the mean for the most recent window_size observations:
SE = σ / √n
Where:
- σ = standard deviation of the sample
- n = sample size (window_size)
Convergence is achieved when SE < convergence_threshold, indicating that the sample mean is stable within the desired precision.
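In code, the check described above might be sketched as follows (a hypothetical re-implementation, not the package source):

import numpy as np

def check_convergence_sketch(values, window_size=100, threshold=0.01):
    """Rolling standard-error convergence check, as described above."""
    values = np.asarray(values, dtype=float)
    finite = values[np.isfinite(values)]       # drop -inf from bankrupt paths
    if finite.size < window_size:
        return False, float("inf")             # too few observations
    window = finite[-window_size:]             # most recent window_size values
    se = np.std(window, ddof=1) / np.sqrt(window_size)
    return bool(se < threshold), float(se)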
- Convergence Guidelines:
Standard Error Thresholds:
- SE < 0.005: High precision (recommended for final analysis)
- SE < 0.01: Standard precision (adequate for most decisions)
- SE < 0.02: Low precision (suitable for initial exploration)
- SE > 0.02: Insufficient precision (run more simulations)

Sample Size Rules of Thumb:
- n < 100: Generally insufficient for convergence assessment
- n = 100-500: May achieve convergence for low-volatility scenarios
- n = 500-2000: Standard range for most insurance analyses
- n > 2000: High-precision analysis or high-volatility scenarios
- Edge Cases:
Fewer observations than window_size: Returns (False, inf)
All infinite values: Returns (False, inf)
High volatility data: May require very large samples for convergence
Bimodal distributions: Standard error may not capture full uncertainty
- Performance Notes:
Fast execution even for large arrays (10,000+ observations)
Memory efficient rolling window calculations
Robust handling of infinite and missing values
See also
convergence_threshold: Threshold used for convergence decision
analyze_simulation_batch(): Includes automatic convergence analysis
calculate_ensemble_average(): Ensemble statistics that benefit from convergence
- compare_scenarios(insured_results: List[SimulationResults] | ndarray, uninsured_results: List[SimulationResults] | ndarray, metric: str = 'equity') → Dict[str, Any][source]
Compare insured vs uninsured scenarios using comprehensive ergodic analysis.
This is the core method for demonstrating ergodic advantages of insurance. It performs side-by-side comparison of insured and uninsured scenarios, calculating both time-average and ensemble-average growth rates to reveal the fundamental difference between expected value thinking and actual experienced growth.
The comparison reveals how insurance can be optimal from a time-average perspective even when it appears costly from an ensemble (expected value) perspective - the key insight of ergodic economics applied to insurance.
- Parameters:
insured_results (Union[List[SimulationResults], np.ndarray]) – Simulation results from insured scenarios. Can be:
- List of SimulationResults objects from Monte Carlo runs
- List of numpy arrays representing trajectories
- 2D numpy array with shape (n_simulations, n_timesteps)
uninsured_results (Union[List[SimulationResults], np.ndarray]) – Simulation results from uninsured scenarios, same format as insured_results. Should have the same number of simulations for a valid comparison.
metric (str) – Financial metric to analyze for comparison:
- "equity": Company equity over time (recommended)
- "assets": Total assets over time
- "cash": Available cash over time
- Any attribute available in SimulationResults objects
Defaults to "equity".
- Returns:
Dict[str, Any]: Comprehensive comparison results with nested structure:
- 'insured' (Dict): Insured scenario statistics:
  - 'time_average_mean': Mean time-average growth rate
  - 'time_average_median': Median time-average growth rate
  - 'time_average_std': Standard deviation of growth rates
  - 'ensemble_average': Ensemble average growth rate
  - 'survival_rate': Fraction avoiding bankruptcy
  - 'n_survived': Absolute number of survivors
- 'uninsured' (Dict): Uninsured scenario statistics, same structure as 'insured'
- 'ergodic_advantage' (Dict): Comparative metrics:
  - 'time_average_gain': Difference in time-average growth
  - 'ensemble_average_gain': Difference in ensemble averages
  - 'survival_gain': Improvement in survival rate
  - 't_statistic': t-test statistic for significance
  - 'p_value': p-value for statistical significance
  - 'significant': Boolean indicating significance (p < 0.05)
Examples
Basic insurance vs no insurance comparison:
# Run Monte Carlo simulations (pseudo-code)
insured_sims = run_simulations(insurance_enabled=True, n_sims=1000)
uninsured_sims = run_simulations(insurance_enabled=False, n_sims=1000)

# Compare scenarios
comparison = analyzer.compare_scenarios(
    insured_sims, uninsured_sims, metric="equity"
)

# Extract key insights
time_avg_gain = comparison['ergodic_advantage']['time_average_gain']
survival_gain = comparison['ergodic_advantage']['survival_gain']
is_significant = comparison['ergodic_advantage']['significant']

print(f"Time-average growth improvement: {time_avg_gain:.2%}")
print(f"Survival rate improvement: {survival_gain:.1%}")
print(f"Statistical significance: {is_significant}")
Detailed analysis of results:
# Examine both perspectives
insured = comparison['insured']
uninsured = comparison['uninsured']
advantage = comparison['ergodic_advantage']

print("\n=== ENSEMBLE PERSPECTIVE (Traditional Analysis) ===")
print(f"Insured ensemble growth: {insured['ensemble_average']:.2%}")
print(f"Uninsured ensemble growth: {uninsured['ensemble_average']:.2%}")
print(f"Ensemble advantage: {advantage['ensemble_average_gain']:.2%}")

print("\n=== TIME-AVERAGE PERSPECTIVE (Ergodic Analysis) ===")
print(f"Insured time-average growth: {insured['time_average_mean']:.2%}")
print(f"Uninsured time-average growth: {uninsured['time_average_mean']:.2%}")
print(f"Time-average advantage: {advantage['time_average_gain']:.2%}")

print("\n=== SURVIVAL ANALYSIS ===")
print(f"Insured survival rate: {insured['survival_rate']:.1%}")
print(f"Uninsured survival rate: {uninsured['survival_rate']:.1%}")
print(f"Survival improvement: {advantage['survival_gain']:.1%}")

# Interpret ergodic vs ensemble difference
if advantage['time_average_gain'] > advantage['ensemble_average_gain']:
    print("\n✓ Insurance shows ergodic advantage!")
    print("  Time-average benefit exceeds ensemble expectation")
else:
    print("\n! No clear ergodic advantage detected")
Statistical significance analysis:
if comparison['ergodic_advantage']['significant']:
    p_val = comparison['ergodic_advantage']['p_value']
    t_stat = comparison['ergodic_advantage']['t_statistic']
    print(f"Results are statistically significant:")
    print(f"  t-statistic: {t_stat:.3f}")
    print(f"  p-value: {p_val:.4f}")
    print(f"  Confidence level: {(1-p_val)*100:.1f}%")
else:
    print("Results not statistically significant")
    print("Consider running more simulations")
Multiple metric analysis:
# Compare different financial metrics
metrics_to_analyze = ['equity', 'assets', 'cash']
results = {}
for metric in metrics_to_analyze:
    results[metric] = analyzer.compare_scenarios(
        insured_sims, uninsured_sims, metric=metric
    )

# Find metric showing strongest insurance advantage
best_metric = max(
    metrics_to_analyze,
    key=lambda m: results[m]['ergodic_advantage']['time_average_gain']
)
print(f"Strongest insurance advantage in: {best_metric}")
gain = results[best_metric]['ergodic_advantage']['time_average_gain']
print(f"Time-average improvement: {gain:.2%}")
- Mathematical Background:
The comparison reveals the ergodic/non-ergodic nature of financial processes by calculating:
- Time-Average Growth: Mean of individual trajectory growth rates:
g_time = mean([ln(X_i(T)/X_i(0))/T for each path i])
- Ensemble Average Growth: Growth of the ensemble mean:
g_ensemble = ln(mean([X_i(T)])/mean([X_i(0)]))/T
Ergodic Divergence: g_time - g_ensemble
For multiplicative processes with volatility, these typically differ, with insurance often improving time-average more than ensemble average.
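As a two-path illustration of the formulas above: suppose over one year (T = 1) one path doubles ($10M → $20M) while the other halves ($10M → $5M). Then g_time = mean(ln 2, ln 0.5) = 0, while g_ensemble = ln(12.5/10) ≈ 0.223, so the ergodic divergence is about -22.3%: the ensemble statistic is far more optimistic than the growth a typical individual path actually experiences.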
- Interpretation Guidelines:
Positive Time-Average Gain: Insurance improves actual experienced growth rates, even if ensemble analysis suggests otherwise.
Survival Rate Improvement: Critical for long-term viability, often the primary benefit of insurance in high-volatility scenarios.
Statistical Significance: p < 0.05 indicates results are unlikely due to random chance, supporting reliability of conclusions.
- Edge Cases:
All paths bankrupt in one scenario: Handled with -inf growth rates
Mismatched simulation counts: Statistics calculated on available data
Identical scenarios: All advantages will be zero
High volatility: May require more simulations for significance
- Performance Notes:
Handles thousands of simulation paths efficiently
Memory-conscious processing of large trajectory datasets
Automatic handling of variable-length trajectories
See also
calculate_time_average_growth(): Individual path analysis
calculate_ensemble_average(): Ensemble statistics
significance_test(): Statistical testing details
ErgodicAnalysisResults: Comprehensive results format
- significance_test(sample1: List[float] | ndarray, sample2: List[float] | ndarray, test_type: str = 'two-sided') → Tuple[float, float][source]
Perform statistical significance test between two growth rate samples.
This method conducts a two-sample t-test to determine whether observed differences between insured and uninsured scenarios are statistically significant or could reasonably be attributed to random variation. Statistical significance provides confidence that ergodic advantages are genuine rather than artifacts of sampling variability.
- Parameters:
sample1 (Union[List[float], np.ndarray]) – First sample of growth rates, typically from insured scenarios. Should contain time-average growth rates from individual simulation paths. Infinite values (from bankruptcy) are automatically handled.
sample2 (Union[List[float], np.ndarray]) – Second sample of growth rates, typically from uninsured scenarios. Should be comparable to sample1 with same underlying business conditions but different insurance coverage.
test_type (str) – Type of statistical test to perform:
- "two-sided": Tests if samples have different means (default)
- "greater": Tests if sample1 mean > sample2 mean
- "less": Tests if sample1 mean < sample2 mean
Defaults to "two-sided" for general hypothesis testing.
- Returns:
- Statistical test results:
t_statistic (float): t-test statistic value. Positive values indicate sample1 has higher mean than sample2.
p_value (float): Probability of observing the data under the null hypothesis of no difference. Lower values indicate stronger evidence against the null hypothesis.
- Return type:
Tuple[float, float]
Examples
Test insurance benefit significance:
import numpy as np

# Growth rates from Monte Carlo simulations
insured_growth = np.array([0.048, 0.051, 0.047, 0.049, 0.052, ...])
uninsured_growth = np.array([0.038, -np.inf, 0.042, 0.035, 0.041, ...])

# Two-sided test for any difference
t_stat, p_value = analyzer.significance_test(
    insured_growth, uninsured_growth, test_type="two-sided"
)

print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_value:.4f}")

if p_value < 0.05:
    print("✓ Statistically significant difference at 5% level")
else:
    print("No significant difference detected")
One-sided test for insurance superiority:
# Test if insurance provides superior growth rates
t_stat, p_value = analyzer.significance_test(
    insured_growth, uninsured_growth, test_type="greater"
)

print(f"Testing if insured > uninsured:")
print(f"t-statistic: {t_stat:.3f}")
print(f"p-value: {p_value:.4f}")

if p_value < 0.01:
    print("✓ Strong evidence that insurance improves growth (p < 0.01)")
elif p_value < 0.05:
    print("✓ Moderate evidence that insurance improves growth (p < 0.05)")
elif p_value < 0.10:
    print("? Weak evidence that insurance improves growth (p < 0.10)")
else:
    print("No significant evidence that insurance improves growth")
Comprehensive significance analysis:
# Test multiple hypotheses
tests = [
    ("two-sided", "Any difference"),
    ("greater", "Insurance superior"),
    ("less", "Insurance inferior")
]

print("=== Statistical Significance Analysis ===")
for test_type, description in tests:
    t_stat, p_value = analyzer.significance_test(
        insured_growth, uninsured_growth, test_type
    )
    significance = ("***" if p_value < 0.001 else
                    "**" if p_value < 0.01 else
                    "*" if p_value < 0.05 else
                    "" if p_value < 0.10 else "n.s.")
    print(f"{description:20}: t={t_stat:6.3f}, p={p_value:.4f} {significance}")
Sample size and power analysis:
# Check if samples are large enough for reliable testing
n1, n2 = len(insured_growth), len(uninsured_growth)
if n1 < 30 or n2 < 30:
    print(f"⚠ Small sample sizes (n1={n1}, n2={n2})")
    print("Consider running more simulations for robust results")

# Calculate effect size (Cohen's d) using unbiased sample variances
mean1, mean2 = np.mean(insured_growth), np.mean(uninsured_growth)
pooled_std = np.sqrt(((n1-1)*np.var(insured_growth, ddof=1) +
                      (n2-1)*np.var(uninsured_growth, ddof=1)) / (n1+n2-2))
cohens_d = (mean1 - mean2) / pooled_std

print(f"Effect size (Cohen's d): {cohens_d:.3f}")
if abs(cohens_d) > 0.8:
    print("Large effect size")
elif abs(cohens_d) > 0.5:
    print("Medium effect size")
elif abs(cohens_d) > 0.2:
    print("Small effect size")
else:
    print("Very small effect size")
- Statistical Interpretation:
p-value Guidelines:
- p < 0.001: Very strong evidence against null hypothesis (***)
- p < 0.01: Strong evidence (**)
- p < 0.05: Moderate evidence (*)
- p < 0.10: Weak evidence
- p >= 0.10: No significant evidence

t-statistic Guidelines:
- |t| > 3: Very large effect
- |t| > 2: Large effect
- |t| > 1: Moderate effect
- |t| < 1: Small effect

- Assumptions and Limitations:
t-test Assumptions:
1. Samples are independent
2. Data approximately normally distributed (robust to violations with large n)
3. Equal variances (Welch's t-test used automatically if needed)

Handling of Infinite Values: The method automatically excludes infinite values (from bankruptcy scenarios) using scipy's nan_policy='omit'. This is appropriate since infinite values represent qualitatively different outcomes (business failure) rather than extreme but finite growth rates.
Multiple Testing: If performing multiple significance tests, consider adjusting significance levels (e.g., Bonferroni correction) to account for increased Type I error probability.
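As a hedged sketch of the points above, the snippet below converts -inf bankruptcy markers to NaN (so scipy's nan_policy="omit" can drop them) and applies a Bonferroni-adjusted significance level when several tests are run; the sample arrays are made up for illustration:

import numpy as np
from scipy import stats

insured = np.array([0.048, 0.051, 0.047, 0.049, 0.052, 0.046])
uninsured = np.array([0.038, -np.inf, 0.042, 0.035, 0.041, 0.039])

# Convert -inf (bankruptcy) to NaN so nan_policy="omit" can exclude it
clean_uninsured = np.where(np.isfinite(uninsured), uninsured, np.nan)

alternatives = ["two-sided", "greater", "less"]
adjusted_alpha = 0.05 / len(alternatives)  # Bonferroni correction for 3 tests

for alternative in alternatives:
    t_stat, p_value = stats.ttest_ind(
        insured, clean_uninsured, alternative=alternative, nan_policy="omit"
    )
    verdict = "significant" if p_value < adjusted_alpha else "n.s."
    print(f"{alternative:>9}: t={t_stat:6.3f}, p={p_value:.4f} ({verdict} at alpha={adjusted_alpha:.3f})")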
- Performance Notes:
Efficient for samples up to 10,000+ observations
Automatic handling of missing/infinite values
Uses scipy.stats for robust statistical calculations
See also
compare_scenarios(): Includes automatic significance testing
check_convergence(): For determining adequate sample sizes
scipy.stats.ttest_ind: Underlying statistical test implementation
- analyze_simulation_batch(simulation_results: List[SimulationResults], label: str = 'Scenario') → Dict[str, Any][source]
Perform comprehensive ergodic analysis on a batch of simulation results.
This method provides a complete analysis of a single scenario (e.g., all insured simulations or all uninsured simulations), including time-average and ensemble statistics, convergence analysis, and survival metrics. It serves as a comprehensive diagnostic tool for understanding the ergodic properties of a particular insurance strategy.
- Parameters:
simulation_results (List[SimulationResults]) – List of simulation result objects from Monte Carlo runs. Each should contain trajectory data including equity, assets, years, and potential insolvency information. Typically 100-2000 simulations for robust analysis.
label (str) – Descriptive label for this batch of simulations, used in reporting and metadata. Examples: "High Deductible", "Full Coverage", "Base Case", "Stress Scenario". Defaults to "Scenario".
- Returns:
Dict[str, Any]: Comprehensive analysis dictionary with nested structure:
- 'label' (str): The provided label for identification
- 'n_simulations' (int): Number of simulations analyzed
- 'time_average' (Dict): Time-average growth statistics:
  - 'mean': Mean time-average growth rate across all paths
  - 'median': Median time-average growth rate
  - 'std': Standard deviation of growth rates
  - 'min': Minimum growth rate observed
  - 'max': Maximum growth rate observed
- 'ensemble_average' (Dict): Ensemble statistics:
  - 'mean': Ensemble average growth rate
  - 'std': Standard deviation of ensemble
  - 'median': Median of ensemble
  - 'survival_rate': Fraction avoiding bankruptcy
  - 'n_survived': Absolute number of survivors
  - 'n_total': Total number of simulations
- 'convergence' (Dict): Monte Carlo convergence analysis:
  - 'converged': Boolean indicating if results have converged
  - 'standard_error': Current standard error of the estimates
  - 'threshold': Convergence threshold used
- 'survival_analysis' (Dict): Survival metrics:
  - 'survival_rate': Fraction avoiding bankruptcy (duplicate)
  - 'mean_survival_time': Average time to insolvency or end
- 'ergodic_divergence' (float): Difference between time-average and ensemble-average growth rates
Examples
Analyze a batch of insured simulations:
# Run Monte Carlo simulations
insured_results = []
for i in range(1000):
    sim = run_single_simulation(insurance_enabled=True, seed=i)
    insured_results.append(sim)

# Comprehensive analysis
analysis = analyzer.analyze_simulation_batch(
    insured_results,
    label="Full Insurance Coverage"
)

# Report key findings
print(f"\n=== {analysis['label']} Analysis ===")
print(f"Simulations: {analysis['n_simulations']}")
print(f"Time-average growth: {analysis['time_average']['mean']:.2%} ± {analysis['time_average']['std']:.2%}")
print(f"Ensemble average: {analysis['ensemble_average']['mean']:.2%}")
print(f"Survival rate: {analysis['survival_analysis']['survival_rate']:.1%}")
print(f"Ergodic divergence: {analysis['ergodic_divergence']:.3f}")
Check Monte Carlo convergence:
if analysis['convergence']['converged']:
    print(f"✓ Results have converged (SE: {analysis['convergence']['standard_error']:.4f})")
    print("Analysis is reliable for decision making")
else:
    current_se = analysis['convergence']['standard_error']
    target_se = analysis['convergence']['threshold']
    print(f"⚠ Convergence not reached (SE: {current_se:.4f} > {target_se:.4f})")

    # Estimate additional simulations needed
    current_n = analysis['n_simulations']
    factor = (current_se / target_se) ** 2
    recommended_n = int(current_n * factor)
    additional_needed = recommended_n - current_n
    print(f"Recommend ~{additional_needed} additional simulations")
Compare growth rate distributions:
# Analyze distribution characteristics
time_avg = analysis['time_average']

print(f"\n=== Growth Rate Distribution ===")
print(f"Mean: {time_avg['mean']:.2%}")
print(f"Median: {time_avg['median']:.2%}")
print(f"Std Dev: {time_avg['std']:.2%}")
print(f"Range: {time_avg['min']:.2%} to {time_avg['max']:.2%}")

# Check for skewness
if time_avg['mean'] > time_avg['median']:
    print("Distribution is right-skewed (long tail of high growth)")
elif time_avg['mean'] < time_avg['median']:
    print("Distribution is left-skewed (long tail of poor performance)")
else:
    print("Distribution appears roughly symmetric")
Survival analysis insights:
survival = analysis['survival_analysis']
ensemble = analysis['ensemble_average']

print(f"\n=== Survival Analysis ===")
print(f"Survival rate: {survival['survival_rate']:.1%}")
print(f"Survivors: {ensemble['n_survived']}/{ensemble['n_total']}")
print(f"Mean time to insolvency/end: {survival['mean_survival_time']:.1f} years")

# Risk assessment
if survival['survival_rate'] < 0.9:
    print("⚠ High bankruptcy risk - consider more insurance")
elif survival['survival_rate'] > 0.99:
    print("✓ Very low bankruptcy risk - insurance is effective")
else:
    print("✓ Moderate bankruptcy risk - acceptable for most businesses")
Ergodic divergence interpretation:
divergence = analysis['ergodic_divergence']

if abs(divergence) < 0.001:  # Less than 0.1%
    print("Minimal ergodic divergence - process is nearly ergodic")
elif divergence > 0:
    print(f"Positive ergodic divergence ({divergence:.3f})")
    print("Time-average exceeds ensemble average - favorable")
else:
    print(f"Negative ergodic divergence ({divergence:.3f})")
    print("Ensemble average exceeds time-average - volatility drag")
- Use Cases:
Single Scenario Analysis: Understand the characteristics of one insurance configuration before comparing alternatives.
Convergence Diagnostics: Determine if enough simulations have been run for reliable conclusions.
Risk Assessment: Evaluate bankruptcy probabilities and growth rate distributions for risk management decisions.
Parameter Sensitivity: Analyze how changes in insurance parameters affect ergodic properties by comparing batch analyses.
- Performance Notes:
Efficient processing of 1000+ simulation results
Memory-conscious handling of trajectory data
Automatic filtering of invalid/infinite growth rates
Vectorized calculations for speed
Warning
Large numbers of bankruptcy scenarios may skew statistics. Check the survival rate and consider whether the scenario parameters are realistic for your analysis goals.
See also
compare_scenarios(): For comparing multiple scenario batches
check_convergence(): For detailed convergence analysis
SimulationResults: Expected format for simulation_results
ErgodicAnalysisResults: Alternative comprehensive results format
- integrate_loss_ergodic_analysis(loss_data: LossData, insurance_program: InsuranceProgram | None, manufacturer: Any, time_horizon: int, n_simulations: int = 100) → ErgodicAnalysisResults[source]
Perform end-to-end integrated loss modeling and ergodic analysis.
This method provides a complete pipeline from loss generation through insurance application to final ergodic analysis. It demonstrates the full power of the ergodic framework by seamlessly connecting actuarial loss modeling with business financial modeling and ergodic growth analysis.
The integration pipeline follows these steps (sketched in code after this list):
1. Validate Input Data: Ensure loss data meets quality standards
2. Apply Insurance Program: Calculate recoveries and net exposures
3. Generate Annual Loss Aggregates: Convert to time-series format
4. Run Monte Carlo Simulations: Execute business simulations with losses
5. Calculate Ergodic Metrics: Analyze time-average vs ensemble behavior
6. Validate Results: Ensure mathematical and business logic consistency
7. Package Results: Return comprehensive analysis in standardized format
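To make the seven steps concrete, here is a self-contained toy version of the pipeline. All parameters and the simple equity dynamics are illustrative assumptions, not the package's models:

import numpy as np

rng = np.random.default_rng(7)

def toy_integrated_analysis(frequency_lambda=2.5, severity_median=1e6,
                            deductible=0.5e6, layer_limit=5e6, premium=150_000,
                            initial_equity=25e6, operating_margin=0.08,
                            time_horizon=20, n_simulations=500):
    """Toy end-to-end pipeline following the seven steps above (illustrative only)."""
    finals = np.zeros(n_simulations)
    for s in range(n_simulations):                    # 4. Monte Carlo loop
        equity = initial_equity
        for _ in range(time_horizon):
            n_claims = rng.poisson(frequency_lambda)  # 1./3. annual loss generation
            losses = rng.lognormal(np.log(severity_median), 1.0, n_claims)
            # 2. apply a single insurance layer per claim
            recovered = np.clip(losses - deductible, 0.0, layer_limit).sum()
            retained = losses.sum() - recovered
            equity += operating_margin * equity - retained - premium
            if equity <= 0:                           # bankruptcy ends the path
                equity = 0.0
                break
        finals[s] = equity
    survived = finals > 0
    # 5. ergodic metrics (bankrupt paths excluded from g_time for simplicity)
    g_time = np.mean(np.log(finals[survived] / initial_equity)) / time_horizon
    g_ensemble = np.log(np.mean(finals) / initial_equity) / time_horizon
    return {                                          # 6./7. validate and package
        "time_average_growth": float(g_time),
        "ensemble_average_growth": float(g_ensemble),
        "survival_rate": float(survived.mean()),
        "validation_passed": bool(np.isfinite(g_time)),
    }

print(toy_integrated_analysis())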
- Parameters:
loss_data (LossData) – Standardized loss data object containing loss frequency and severity distributions. Must pass validation checks including proper distribution parameters and reasonable ranges.
insurance_program (Optional[InsuranceProgram]) – Insurance program to apply to losses. If None, analysis proceeds with no insurance coverage. The program should specify layers, deductibles, limits, and premium rates.
manufacturer (Any) – Manufacturer model instance for running business simulations. Should be configured with appropriate initial conditions and financial parameters. Must support claim processing and annual step operations.
time_horizon (int) – Analysis time horizon in years. Typical values:
- 10-20 years: Standard analysis period
- 50+ years: Long-term ergodic behavior
- 5-10 years: Quick analysis for parameter exploration
n_simulations (int) – Number of Monte Carlo simulations to run. More simulations provide better statistical reliability:
- 100: Quick analysis for development/testing
- 1000: Standard analysis for decision making
- 5000+: High-precision analysis for final recommendations
Defaults to 100 for reasonable performance.
- Returns:
ErgodicAnalysisResults: Comprehensive analysis results containing:
Time-average and ensemble-average growth rates
Survival rates and ergodic divergence
Insurance impact metrics (premiums, recoveries, net benefit)
Validation status and detailed metadata
All necessary information for decision making
Examples
Basic integrated analysis:
from ergodic_insurance import LossData, InsuranceProgram, WidgetManufacturer, ManufacturerConfig

# Set up loss data
loss_data = LossData.from_poisson_lognormal(
    frequency_lambda=2.5,     # 2.5 claims per year on average
    severity_mean=1_000_000,  # $1M average claim
    severity_cv=2.0,          # High variability
    time_horizon=20
)

# Configure insurance program: (attachment, limit, rate)
insurance = InsuranceProgram([
    (0, 1_000_000, 0.015),           # $1M primary layer at 1.5%
    (1_000_000, 10_000_000, 0.008),  # $10M excess at 0.8%
    (11_000_000, 50_000_000, 0.004)  # $50M umbrella at 0.4%
])

# Set up manufacturer
config = ManufacturerConfig(
    initial_assets=25_000_000,
    base_operating_margin=0.08,
    asset_turnover_ratio=0.75
)
manufacturer = WidgetManufacturer(config)

# Run integrated analysis
results = analyzer.integrate_loss_ergodic_analysis(
    loss_data=loss_data,
    insurance_program=insurance,
    manufacturer=manufacturer,
    time_horizon=20,
    n_simulations=1000
)

# Interpret results
if results.validation_passed:
    print(f"Time-average growth: {results.time_average_growth:.2%}")
    print(f"Ensemble average: {results.ensemble_average_growth:.2%}")
    print(f"Survival rate: {results.survival_rate:.1%}")
    print(f"Ergodic divergence: {results.ergodic_divergence:.3f}")

    net_benefit = results.insurance_impact['net_benefit']
    print(f"Insurance net benefit: ${net_benefit:,.0f}")

    if results.ergodic_divergence > 0:
        print("✓ Insurance shows ergodic advantage")
else:
    print("⚠ Analysis validation failed - check inputs")
Compare insured vs uninsured scenarios:
# Run analysis with insurance
insured_results = analyzer.integrate_loss_ergodic_analysis(
    loss_data, insurance, manufacturer, 20, 1000
)

# Run analysis without insurance
uninsured_results = analyzer.integrate_loss_ergodic_analysis(
    loss_data, None, manufacturer, 20, 1000
)

# Compare outcomes
if insured_results.validation_passed and uninsured_results.validation_passed:
    growth_improvement = (insured_results.time_average_growth -
                          uninsured_results.time_average_growth)
    survival_improvement = (insured_results.survival_rate -
                            uninsured_results.survival_rate)

    print(f"Growth rate improvement: {growth_improvement:.2%}")
    print(f"Survival rate improvement: {survival_improvement:.1%}")

    if growth_improvement > 0 and survival_improvement > 0:
        print("✓ Insurance provides clear benefits")
    elif growth_improvement > 0:
        print("✓ Insurance improves growth despite survival costs")
    elif survival_improvement > 0:
        print("✓ Insurance improves survival despite growth costs")
    else:
        print("? Insurance benefits unclear - review parameters")
Parameter sensitivity analysis:
# Test different loss frequencies
frequencies = [1.0, 2.0, 3.0, 4.0, 5.0]
results = {}
for freq in frequencies:
    test_loss_data = LossData.from_poisson_lognormal(
        frequency_lambda=freq,
        severity_mean=1_000_000,
        severity_cv=2.0,
        time_horizon=20
    )
    result = analyzer.integrate_loss_ergodic_analysis(
        test_loss_data, insurance, manufacturer, 20, 500
    )
    results[freq] = result

# Find optimal frequency range for insurance benefit
for freq, result in results.items():
    if result.validation_passed:
        print(f"Frequency {freq}: Growth={result.time_average_growth:.2%}, "
              f"Survival={result.survival_rate:.1%}")
Detailed insurance impact analysis:
if results.validation_passed:
    impact = results.insurance_impact
    metadata = results.metadata

    print(f"\n=== Insurance Impact Analysis ===")
    print(f"Total premiums paid: ${impact.get('premium_cost', 0):,.0f}")
    print(f"Total recoveries: ${impact.get('recovery_benefit', 0):,.0f}")
    print(f"Net financial benefit: ${impact.get('net_benefit', 0):,.0f}")
    print(f"Growth rate improvement: {impact.get('growth_improvement', 0):.2%}")

    # Calculate benefit ratios
    premium_cost = impact.get('premium_cost', 1)  # Avoid division by zero
    if premium_cost > 0:
        recovery_ratio = impact.get('recovery_benefit', 0) / premium_cost
        benefit_ratio = impact.get('net_benefit', 0) / premium_cost

        print(f"\n=== Efficiency Metrics ===")
        print(f"Recovery ratio: {recovery_ratio:.2f}x premiums")
        print(f"Net benefit ratio: {benefit_ratio:.2f}x premiums")

        if benefit_ratio > 0:
            print("✓ Insurance provides positive net value")
        else:
            print("⚠ Insurance costs exceed benefits in expectation")
            print("  (But may still provide ergodic advantages)")
- Validation and Error Handling:
The method includes comprehensive validation at multiple stages:
Input Validation:
- Loss data consistency checks
- Insurance program parameter validation
- Manufacturer model state verification

Process Validation:
- Simulation convergence monitoring
- Mathematical consistency checks
- Business logic validation

Output Validation:
- Result reasonableness checks
- Statistical significance assessment
- Cross-validation with alternative methods
- Performance Considerations:
Optimized for 100-5000 simulation runs
Memory-efficient trajectory storage
Parallel processing capabilities where available
Progress monitoring for long-running analyses
- Error Conditions:
Returns results with validation_passed=False if:
- Loss data fails validation checks
- All simulation paths end in bankruptcy
- Mathematical inconsistencies are detected
- There is insufficient data for statistical analysis
See also
ErgodicAnalysisResults: Detailed results format
LossData: Loss data requirements
InsuranceProgram: Insurance setup
validate_insurance_ergodic_impact(): Additional validation methods
- Return type:
ErgodicAnalysisResults
- validate_insurance_ergodic_impact(base_scenario: SimulationResults, insurance_scenario: SimulationResults, insurance_program: InsuranceProgram | None = None) → ValidationResults[source]
Comprehensively validate insurance effects in ergodic calculations.
This method performs detailed validation to ensure that insurance impacts are properly reflected in the ergodic analysis. It checks the mathematical consistency and business logic of insurance effects on cash flows, growth rates, and survival probabilities.
The validation is crucial for ensuring that ergodic analysis results are reliable and that observed insurance benefits (or costs) are genuine rather than artifacts of modeling errors or inconsistent implementations.
- Validation Checks Performed:
Premium Deduction Validation: Verifies that insurance premiums are properly deducted from cash flows and reflected in net income
Recovery Credit Validation: Confirms that insurance recoveries are properly credited and improve financial outcomes
Collateral Impact Validation: Checks that letter of credit costs and asset restrictions are properly modeled
Growth Rate Consistency: Validates that time-average growth calculations properly reflect insurance benefits
- Parameters:
base_scenario (SimulationResults) – Simulation results from baseline scenario without insurance coverage. Should represent the same business conditions and loss realizations as insurance_scenario but without the insurance program applied.
insurance_scenario (SimulationResults) – Simulation results from scenario with insurance coverage. Should be directly comparable to base_scenario with only insurance coverage as the differentiating factor.
insurance_program (Optional[InsuranceProgram]) – The insurance program that was applied in insurance_scenario. If provided, enables more detailed validation of premium calculations and coverage effects. If None, performs validation based on observed differences only.
- Returns:
ValidationResults: Comprehensive validation results containing:
premium_deductions_correct: Boolean indicating premium validation
recoveries_credited: Boolean indicating recovery validation
collateral_impacts_included: Boolean indicating collateral validation
time_average_reflects_benefit: Boolean indicating growth validation
overall_valid: Boolean indicating overall validation status
details: Dict with detailed validation information and metrics
Examples
Basic validation after scenario comparison:
# Run paired simulations
base_sim = run_simulation(insurance_enabled=False, seed=12345)
insured_sim = run_simulation(insurance_enabled=True, seed=12345)

# Validate insurance effects
validation = analyzer.validate_insurance_ergodic_impact(
    base_sim, insured_sim, insurance_program
)

if validation.overall_valid:
    print("✓ Insurance effects properly modeled")
    print(f"  Premium deductions: {validation.premium_deductions_correct}")
    print(f"  Recoveries credited: {validation.recoveries_credited}")
    print(f"  Collateral impacts: {validation.collateral_impacts_included}")
    print(f"  Growth consistency: {validation.time_average_reflects_benefit}")
else:
    print("⚠ Validation issues detected")
    print("Review modeling implementation")
Detailed validation diagnostics:
validation = analyzer.validate_insurance_ergodic_impact(
    base_scenario, insurance_scenario, insurance_program
)

# Examine premium validation details
if 'premium_check' in validation.details:
    premium_info = validation.details['premium_check']
    print(f"\n=== Premium Validation ===")
    print(f"Expected premium: ${premium_info['expected']:,.0f}")
    print(f"Actual cost difference: ${premium_info['actual_diff']:,.0f}")
    print(f"Validation passed: {premium_info['valid']}")

    if not premium_info['valid']:
        diff = abs(premium_info['expected'] - premium_info['actual_diff'])
        print(f"⚠ Premium discrepancy: ${diff:,.0f}")

# Examine recovery validation details
if 'recovery_check' in validation.details:
    recovery_info = validation.details['recovery_check']
    print(f"\n=== Recovery Validation ===")
    print(f"Base scenario claims: ${recovery_info['base_claims']:,.0f}")
    print(f"Insured scenario claims: ${recovery_info['insured_claims']:,.0f}")
    print(f"Base final equity: ${recovery_info['base_final_equity']:,.0f}")
    print(f"Insured final equity: ${recovery_info['insured_final_equity']:,.0f}")
    print(f"Validation passed: {recovery_info['valid']}")

# Examine growth rate validation
if 'growth_check' in validation.details:
    growth_info = validation.details['growth_check']
    print(f"\n=== Growth Rate Validation ===")
    print(f"Base growth rate: {growth_info['base_growth']:.2%}")
    print(f"Insured growth rate: {growth_info['insured_growth']:.2%}")
    print(f"Growth improvement: {growth_info['improvement']:.2%}")
    print(f"Validation passed: {growth_info['valid']}")

    if growth_info['improvement'] > 0:
        print("✓ Insurance improves time-average growth")
    elif np.isfinite(growth_info['insured_growth']) and not np.isfinite(growth_info['base_growth']):
        print("✓ Insurance prevents bankruptcy (infinite improvement)")
Validation in Monte Carlo context:
# Validate across multiple random seeds
validation_results = []
for seed in range(10):  # Test 10 paired simulations
    base = run_simulation(insurance_enabled=False, seed=seed)
    insured = run_simulation(insurance_enabled=True, seed=seed)
    validation = analyzer.validate_insurance_ergodic_impact(
        base, insured, insurance_program
    )
    validation_results.append(validation.overall_valid)
# Check consistency across seeds
validation_rate = sum(validation_results) / len(validation_results)
print(f"Validation rate across seeds: {validation_rate:.1%}")
if validation_rate < 0.8:
    print("⚠ Inconsistent validation - check model implementation")
else:
    print("✓ Consistent validation across scenarios")
Integration with scenario comparison:
# Run comparison analysis
comparison = analyzer.compare_scenarios(
    [insured_sim], [base_sim], metric="equity"
)
# Validate the comparison
validation = analyzer.validate_insurance_ergodic_impact(
    base_sim, insured_sim, insurance_program
)
# Cross-check results
if validation.overall_valid and comparison['ergodic_advantage']['significant']:
    print("✓ Validated significant ergodic advantage from insurance")
    print(f"  Time-average improvement: {comparison['ergodic_advantage']['time_average_gain']:.2%}")
    print(f"  Statistical significance: p = {comparison['ergodic_advantage']['p_value']:.4f}")
elif validation.overall_valid:
    print("✓ Insurance effects validated but not statistically significant")
    print("Consider running more simulations or adjusting parameters")
else:
    print("⚠ Validation failed - results may be unreliable")
    print("Review model implementation before drawing conclusions")
- Validation Logic Details:
Premium Validation: Compares expected premium costs (from the insurance program) with the actual observed difference in net income between scenarios. Allows for small numerical differences (<1% of expected premium); a minimal sketch of this tolerance check appears after this list.
Recovery Validation: Checks that insurance scenario shows better financial performance despite potential premium costs. Allows for 5% variance to account for timing differences and model approximations.
Collateral Validation: Verifies that letter of credit costs and asset restrictions are reflected in the financial calculations. Checks for non-zero differences in asset levels between scenarios.
Growth Rate Validation: Ensures that time-average growth calculations properly reflect insurance benefits, especially in scenarios with significant loss exposure. Handles bankruptcy cases appropriately.
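A minimal sketch of the premium tolerance check described above, using illustrative names (expected_premium, base_net_income, insured_net_income) that are assumptions for this sketch, not the analyzer's actual internals:
def premium_deduction_valid(expected_premium: float,
                            base_net_income: float,
                            insured_net_income: float,
                            rel_tol: float = 0.01) -> bool:
    # Pass when the observed net-income gap matches the expected
    # premium within the 1% tolerance noted above.
    if expected_premium <= 0:
        return True  # nothing to validate without a positive premium
    actual_diff = base_net_income - insured_net_income
    return abs(actual_diff - expected_premium) <= rel_tol * expected_premium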
- Common Validation Failures:
Premium costs not properly deducted from cash flows
Insurance recoveries not credited to reduce net losses
Letter of credit collateral costs not included in expense calculations
Inconsistent treatment of bankruptcy scenarios
Timing mismatches between premium payments and loss occurrences
- Troubleshooting:
If validation fails, check:
1. Consistent random seed usage between base and insured scenarios
2. Proper integration of the insurance program with the manufacturer model
3. Correct timing of premium payments and loss recoveries
4. Accurate letter of credit cost calculations
5. Consistent handling of bankruptcy and survival scenarios
- Performance Notes:
Fast execution for single scenario pairs
Efficient for batch validation across multiple seeds
Comprehensive diagnostics with minimal computational overhead
- See Also:
ValidationResults: Detailed validation results format
compare_scenarios(): Main scenario comparison method
integrate_loss_ergodic_analysis(): End-to-end analysis pipeline
InsuranceProgram: Insurance modeling
- Return type:
ValidationResults
ergodic_insurance.excel_reporter module
Excel report generation for financial statements and analysis.
This module provides comprehensive Excel report generation functionality, creating professional financial statements, diagnostic reports, and Monte Carlo aggregations with advanced formatting and validation.
Example
Generate Excel report from simulation:
from ergodic_insurance.excel_reporter import ExcelReporter, ExcelReportConfig
from ergodic_insurance.manufacturer import WidgetManufacturer
# Configure report
config = ExcelReportConfig(
output_path=Path("./reports"),
include_balance_sheet=True,
include_income_statement=True,
include_cash_flow=True
)
# Generate report
reporter = ExcelReporter(config)
output_file = reporter.generate_trajectory_report(
manufacturer,
"financial_statements.xlsx"
)
- class ExcelReportConfig(output_path: Path = <factory>, include_balance_sheet: bool = True, include_income_statement: bool = True, include_cash_flow: bool = True, include_reconciliation: bool = True, include_metrics_dashboard: bool = True, include_pivot_data: bool = True, formatting: Dict[str, Any] | None = None, engine: str = 'auto', currency_format: str = '$#,##0', decimal_places: int = 0, date_format: str = 'yyyy-mm-dd') None[source]
Bases: object
Configuration for Excel report generation.
- output_path
Directory for output files
- include_balance_sheet
Whether to include balance sheet
- include_income_statement
Whether to include income statement
- include_cash_flow
Whether to include cash flow statement
- include_reconciliation
Whether to include reconciliation sheet
- include_metrics_dashboard
Whether to include metrics dashboard
- include_pivot_data
Whether to include pivot-ready data sheet
- formatting
Custom formatting options
- engine
Excel engine to use (‘xlsxwriter’, ‘openpyxl’, ‘auto’)
- currency_format
Currency format string
- decimal_places
Number of decimal places for numbers
- date_format
Date format string
- class ExcelReporter(config: ExcelReportConfig | None = None)[source]
Bases: object
Main Excel report generation engine.
This class handles the creation of comprehensive Excel reports from simulation data, including financial statements, metrics dashboards, and reconciliation reports.
- config
Report configuration
- workbook
Excel workbook object
- formats
Dictionary of Excel format objects
- engine
Selected Excel engine
- generate_trajectory_report(manufacturer: WidgetManufacturer, output_file: str, title: str | None = None) Path[source]
Generate Excel report for a single simulation trajectory.
Creates a comprehensive Excel workbook with financial statements, metrics, and reconciliation for a single simulation run.
- Parameters:
manufacturer (WidgetManufacturer) – WidgetManufacturer with simulation data
output_file (str) – Name of output Excel file
title (Optional[str]) – Optional report title
- Return type:
Path
- Returns:
Path to generated Excel file
ergodic_insurance.exposure_base module
Exposure base module for dynamic frequency scaling in insurance losses.
This module provides a hierarchy of exposure classes that dynamically adjust loss frequencies based on actual business metrics from the simulation. The exposure bases now work with real financial state from the manufacturer, not artificial growth projections.
- Key Concepts:
Exposure bases query actual financial metrics from a state provider
Frequency multipliers are calculated from actual vs. base metrics (see the sketch after this list)
No artificial growth rates or projections
Direct integration with WidgetManufacturer financial state
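As a sketch of the multiplier calculation noted in this list, assuming illustrative property names (current_revenue, base_revenue) that may not match the protocol's exact members:
def frequency_multiplier(state_provider) -> float:
    # Ratio of the actual metric to its baseline; 1.0 leaves
    # claim frequency unscaled.
    if state_provider.base_revenue <= 0:
        return 1.0
    return state_provider.current_revenue / state_provider.base_revenue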
Example
Basic usage with state-driven revenue exposure:
from ergodic_insurance.exposure_base import RevenueExposure
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
# Create manufacturer
manufacturer = WidgetManufacturer(config)
# Create exposure linked to manufacturer's actual state
exposure = RevenueExposure(state_provider=manufacturer)
# Create generator with exposure
generator = ManufacturingLossGenerator.create_simple(
frequency=0.5,
severity_mean=1_000_000
)
# Losses will be generated based on actual revenue during simulation
- Since:
Version 0.3.0 - Complete refactor to state-driven approach
- class FinancialStateProvider(*args, **kwargs)[source]
Bases: Protocol
Protocol for providing current financial state to exposure bases.
This protocol defines the interface that any class must implement to provide financial metrics to exposure bases. The WidgetManufacturer class implements this protocol to supply real-time financial data.
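Since the protocol's members are not enumerated here, the following stub is only a hedged sketch of a conforming provider (useful for unit tests); the property names are assumptions:
class FrozenStateProvider:
    # Returns fixed financial metrics instead of live simulation state.
    def __init__(self, base_revenue: float, current_revenue: float):
        self.base_revenue = base_revenue
        self.current_revenue = current_revenue
Because Protocol matching is structural, any object exposing the expected attributes can stand in for a WidgetManufacturer when exercising exposure classes.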
- class ExposureBase[source]
Bases: ABC
Abstract base class for exposure calculations.
Exposure represents the underlying business metric that drives claim frequency. Common examples include revenue, assets, employee count, or production volume.
Subclasses must implement methods to calculate absolute exposure levels and frequency multipliers at different time points.
- class RevenueExposure(state_provider: FinancialStateProvider) None[source]
Bases: ExposureBase
Revenue-based exposure using actual financial state.
Models claim frequency that scales with actual business revenue from the simulation, not artificial growth projections. The exposure directly queries the current revenue from the manufacturer’s financial state.
- state_provider
Object providing current and base financial metrics. Typically a WidgetManufacturer instance.
Example
Revenue exposure with actual manufacturer state:
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.config import ManufacturerConfig

manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=10_000_000)
)
exposure = RevenueExposure(state_provider=manufacturer)

# Exposure reflects actual manufacturer revenue
current_rev = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
- state_provider: FinancialStateProvider
- get_exposure(time: float) float[source]
Return current actual revenue from manufacturer.
- Return type:
float
- class AssetExposure(state_provider: FinancialStateProvider) None[source]
Bases: ExposureBase
Asset-based exposure using actual financial state.
Models claim frequency based on actual asset values from the simulation, tracking real asset changes from operations, claims, and business growth. Suitable for businesses where physical assets drive risk exposure.
Frequency scales linearly with assets as more assets generally mean more insurable items that can generate claims.
- state_provider
Object providing current and base financial metrics. Typically a WidgetManufacturer instance.
Example
Asset exposure with actual manufacturer state:
manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=50_000_000)
)
exposure = AssetExposure(state_provider=manufacturer)

# Exposure reflects actual asset changes
current_assets = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
- state_provider: FinancialStateProvider
- get_exposure(time: float) float[source]
Return current actual assets from manufacturer.
- Return type:
float
- class EquityExposure(state_provider: FinancialStateProvider) None[source]
Bases: ExposureBase
Equity-based exposure using actual financial state.
Models claim frequency based on actual equity values from the simulation, tracking real equity changes from profits, losses, and retained earnings. Suitable for financial analysis where equity represents business scale.
- state_provider
Object providing current and base financial metrics. Typically a WidgetManufacturer instance.
Example
Equity exposure with actual manufacturer state:
manufacturer = WidgetManufacturer(
    ManufacturerConfig(initial_assets=20_000_000)
)
exposure = EquityExposure(state_provider=manufacturer)

# Exposure reflects actual equity changes
current_equity = exposure.get_exposure(1.0)
multiplier = exposure.get_frequency_multiplier(1.0)
- state_provider: FinancialStateProvider
- get_exposure(time: float) float[source]
Return current actual equity from manufacturer.
- Return type:
float
- class EmployeeExposure(base_employees: int, hiring_rate: float = 0.0, automation_factor: float = 0.0) None[source]
Bases: ExposureBase
Exposure based on employee count.
Models claim frequency based on workforce size, accounting for hiring and automation effects. Suitable for businesses where employee-related risks dominate (workers comp, employment practices, etc.).
- base_employees
Initial number of employees.
- hiring_rate
Annual net hiring rate (can be negative for downsizing).
- automation_factor
Annual reduction in exposure per employee due to automation.
Example
Employee exposure with automation:
exposure = EmployeeExposure(
    base_employees=500,
    hiring_rate=0.05,       # 5% annual growth
    automation_factor=0.02  # 2% automation improvement
)
- get_exposure(time: float) float[source]
Calculate employee count with hiring and automation effects.
- Return type:
float
- class ProductionExposure(base_units: float, growth_rate: float = 0.0, seasonality: Callable[[float], float] | None = None, quality_improvement_rate: float = 0.0) None[source]
Bases: ExposureBase
Exposure based on production volume/units.
Models claim frequency based on production output, with support for seasonal patterns and quality improvements that reduce defect rates.
- base_units
Initial production volume (units per year).
- growth_rate
Annual production growth rate.
- seasonality
Optional function returning seasonal multiplier.
- quality_improvement_rate
Annual reduction in defect-related claims.
Example
Production exposure with seasonality:
def seasonal_pattern(time):
    # Higher production in Q4
    return 1.0 + 0.3 * np.sin(2 * np.pi * time)

exposure = ProductionExposure(
    base_units=100_000,
    growth_rate=0.08,
    seasonality=seasonal_pattern,
    quality_improvement_rate=0.03
)
- get_exposure(time: float) float[source]
Calculate production volume with growth and seasonality.
- Return type:
float
- class CompositeExposure(exposures: Dict[str, ExposureBase], weights: Dict[str, float]) None[source]
Bases: ExposureBase
Weighted combination of multiple exposure bases.
Allows modeling complex businesses with multiple risk drivers by combining different exposure types with specified weights.
- exposures
Dictionary of named exposure bases.
- weights
Dictionary of weights for each exposure (will be normalized).
Example
Composite exposure for diversified business:
# Note: RevenueExposure and AssetExposure now take a state provider
# (see their signatures above), not base values or growth rates.
manufacturer = WidgetManufacturer(ManufacturerConfig(initial_assets=100_000_000))
composite = CompositeExposure(
    exposures={
        'revenue': RevenueExposure(state_provider=manufacturer),
        'assets': AssetExposure(state_provider=manufacturer),
        'employees': EmployeeExposure(base_employees=500)
    },
    weights={'revenue': 0.5, 'assets': 0.3, 'employees': 0.2}
)
- exposures: Dict[str, ExposureBase]
- class ScenarioExposure(scenarios: Dict[str, List[float]], selected_scenario: str, interpolation: str = 'linear') None[source]
Bases: ExposureBase
Predefined exposure scenarios for planning and stress testing.
Allows specification of exact exposure paths for scenario analysis, with interpolation between specified time points.
- scenarios
Dictionary mapping scenario names to exposure paths.
- selected_scenario
Currently active scenario name.
- interpolation
Interpolation method (‘linear’, ‘cubic’, ‘nearest’).
Example
Scenario-based exposure planning:
scenarios = {
    'baseline': [100, 105, 110, 116, 122],
    'recession': [100, 95, 90, 92, 96],
    'expansion': [100, 112, 125, 140, 155]
}
exposure = ScenarioExposure(
    scenarios=scenarios,
    selected_scenario='recession',
    interpolation='linear'
)
- class StochasticExposure(base_value: float, process_type: str, parameters: Dict[str, float], seed: int | None = None) None[source]
Bases: ExposureBase
Stochastic exposure evolution using various processes.
Supports multiple stochastic processes for advanced exposure modeling:
- Geometric Brownian Motion (GBM)
- Mean-reverting (Ornstein-Uhlenbeck)
- Jump diffusion
- base_value
Initial exposure value.
- process_type
Type of stochastic process (‘gbm’, ‘mean_reverting’, ‘jump_diffusion’).
- parameters
Process-specific parameters.
- seed
Random seed for reproducibility.
Example
GBM exposure process:
exposure = StochasticExposure(
    base_value=100_000_000,
    process_type='gbm',
    parameters={
        'drift': 0.05,      # 5% drift
        'volatility': 0.20  # 20% volatility
    },
    seed=42
)
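The 'gbm' process presumably follows standard geometric Brownian motion; one simulation step under that assumption (the class's internal scheme may differ) looks like:
import numpy as np

def gbm_step(value: float, drift: float, volatility: float,
             dt: float, rng: np.random.Generator) -> float:
    # Exact log-normal update for GBM over a step of length dt.
    z = rng.standard_normal()
    return value * np.exp((drift - 0.5 * volatility**2) * dt
                          + volatility * np.sqrt(dt) * z)
A reproducible generator such as np.random.default_rng(42) mirrors the seed parameter above.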
ergodic_insurance.financial_statements module
Financial statement compilation and generation.
This module provides classes for generating standard financial statements (Balance Sheet, Income Statement, Cash Flow Statement) from simulation data. It supports both single trajectory and Monte Carlo aggregated reports with reconciliation capabilities.
Example
Generate financial statements from a manufacturer simulation:
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.financial_statements import FinancialStatementGenerator
# Run simulation
manufacturer = WidgetManufacturer(config)
for year in range(10):
manufacturer.step()
# Generate statements
generator = FinancialStatementGenerator(manufacturer)
balance_sheet = generator.generate_balance_sheet(year=5)
income_statement = generator.generate_income_statement(year=5)
cash_flow = generator.generate_cash_flow_statement(year=5)
- class FinancialStatementConfig(currency_symbol: str = '$', decimal_places: int = 0, include_yoy_change: bool = True, include_percentages: bool = True, fiscal_year_end: int | None = None, consolidate_monthly: bool = True, current_claims_ratio: float = 0.1) None[source]
Bases: object
Configuration for financial statement generation.
- currency_symbol
Symbol to use for currency formatting
- decimal_places
Number of decimal places for numeric values
- include_yoy_change
Whether to include year-over-year changes
- include_percentages
Whether to include percentage breakdowns
- fiscal_year_end
Month of fiscal year end (1-12). If None, inherits from the central Config.simulation.fiscal_year_end setting. Defaults to 12 (December) if neither is set, for calendar year alignment.
- consolidate_monthly
Whether to consolidate monthly data into annual
- current_claims_ratio
Fraction of claim liabilities classified as current (due within one year). Defaults to 0.1 (10%). Should be derived from actual claim payment schedules when available.
- class CashFlowStatement(metrics_history: List[Dict[str, Decimal | float | int | bool]], config: Any | None = None, ledger: Ledger | None = None)[source]
Bases: object
Generates cash flow statements using indirect or direct method.
This class creates properly structured cash flow statements with three sections (Operating, Investing, Financing) following GAAP standards. Supports both the indirect method (starting from net income) and the direct method (summing ledger entries) for operating activities.
When a ledger is provided, the direct method is available, which provides perfect reconciliation and audit trail for all cash flows.
- metrics_history
List of metrics dictionaries from simulation
- config
Configuration object with business parameters
- ledger
Optional Ledger for direct method cash flow generation
- generate_statement(year: int, period: str = 'annual', method: str = 'indirect') DataFrame[source]
Generate cash flow statement for specified year.
- Parameters:
- Return type:
DataFrame
- Returns:
DataFrame containing formatted cash flow statement
- Raises:
IndexError – If year is out of range
ValueError – If direct method requested but no ledger available
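A hedged usage sketch of the method selection this implies (metrics_history, config, and ledger are assumed to be in scope):
stmt = CashFlowStatement(metrics_history, config=config, ledger=ledger)
try:
    # Direct method: sums actual ledger entries for perfect reconciliation
    df = stmt.generate_statement(year=5, method="direct")
except ValueError:
    # No ledger available; fall back to the indirect method
    df = stmt.generate_statement(year=5, method="indirect")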
- class FinancialStatementGenerator(manufacturer: WidgetManufacturer | None = None, manufacturer_data: Dict[str, Any] | None = None, config: FinancialStatementConfig | None = None, ledger: Ledger | None = None)[source]
Bases: object
Generates financial statements from simulation data.
This class compiles standard financial statements (Balance Sheet, Income Statement, Cash Flow) from manufacturer metrics history. It handles both annual and monthly data, performs reconciliation checks, and calculates derived financial metrics.
When a ledger is provided (either directly or via the manufacturer), direct method cash flow statements can be generated, providing perfect reconciliation and audit trail for all cash transactions.
- manufacturer_data
Raw simulation data from manufacturer
- config
Configuration for statement generation
- metrics_history
List of metrics dictionaries from simulation
- years_available
Number of years of data available
- ledger
Optional Ledger for direct method cash flow generation
- generate_balance_sheet(year: int, compare_years: List[int] | None = None) DataFrame[source]
Generate balance sheet for specified year.
Creates a standard balance sheet with assets, liabilities, and equity sections. Includes year-over-year comparisons if configured.
When a ledger is available, balances are derived directly from the ledger using get_balance() for each account, ensuring perfect reconciliation. Otherwise, falls back to metrics_history from the manufacturer.
- Parameters:
- Return type:
DataFrame
- Returns:
DataFrame containing balance sheet data
- Raises:
IndexError – If year is out of range
- generate_income_statement(year: int, compare_years: List[int] | None = None, monthly: bool = False) DataFrame[source]
Generate income statement for specified year with proper GAAP structure.
Creates a standard income statement following US GAAP with proper categorization of COGS, operating expenses, and non-operating items. Supports both annual and monthly statement generation.
When a ledger is available, revenue and expenses are derived from ledger period changes using get_period_change(), ensuring perfect reconciliation. Otherwise, falls back to metrics_history from the manufacturer.
- Parameters:
- Return type:
DataFrame
- Returns:
DataFrame containing income statement data with GAAP structure
- Raises:
IndexError – If year is out of range
- generate_cash_flow_statement(year: int, period: str = 'annual', method: str = 'indirect') DataFrame[source]
Generate cash flow statement for specified year using CashFlowStatement class.
Creates a cash flow statement with three distinct sections (Operating, Investing, Financing). Supports both indirect method (starting from net income) and direct method (summing ledger entries) for operating activities.
When a ledger is available, the direct method is preferred as it provides perfect reconciliation and audit trail for all cash transactions by summing actual ledger entries.
- Parameters:
year (int) – Year index (0-based) for cash flow statement
period (str) – 'annual' or 'monthly' for period type
method (str) – 'indirect' (default) or 'direct'. The direct method requires a ledger to be available. When a ledger is present and no method is specified, the direct method may be preferred for better accuracy.
- Return type:
DataFrame
- Returns:
DataFrame containing cash flow statement data
- Raises:
IndexError – If year is out of range
ValueError – If direct method requested but no ledger available
- generate_reconciliation_report(year: int) DataFrame[source]
Generate reconciliation report for financial statements.
Validates that financial statements balance and reconcile properly, checking key accounting identities and relationships.
- Parameters:
year (int) – Year index (0-based) for reconciliation
- Return type:
DataFrame
- Returns:
DataFrame containing reconciliation checks and results
- class MonteCarloStatementAggregator(monte_carlo_results: List[Dict] | DataFrame, config: FinancialStatementConfig | None = None)[source]
Bases: object
Aggregates financial statements across Monte Carlo simulations.
This class processes multiple simulation trajectories to create statistical summaries of financial statements, showing means, percentiles, and confidence intervals.
- results
Monte Carlo simulation results
- config
Configuration for statement generation
- aggregate_balance_sheets(year: int, percentiles: List[float] | None = None) DataFrame[source]
Aggregate balance sheets across simulations.
ergodic_insurance.hjb_solver module
Hamilton-Jacobi-Bellman solver for optimal insurance control.
This module implements a Hamilton-Jacobi-Bellman (HJB) partial differential equation solver for finding optimal insurance strategies through dynamic programming. The solver handles multi-dimensional state spaces and provides theoretically optimal control policies.
The HJB equation provides globally optimal solutions by solving:
∂V/∂t + max_u[L^u V + f(x,u)] = 0
where V is the value function, L^u is the controlled infinitesimal generator, and f(x,u) is the running cost/reward.
Author: Alex Filiakov
Date: 2025-01-26
- class TimeSteppingScheme(*values)[source]
Bases: Enum
Time stepping schemes for PDE integration.
- EXPLICIT = 'explicit'
- IMPLICIT = 'implicit'
- CRANK_NICOLSON = 'crank_nicolson'
- class BoundaryCondition(*values)[source]
Bases: Enum
Types of boundary conditions.
- DIRICHLET = 'dirichlet'
- NEUMANN = 'neumann'
- ABSORBING = 'absorbing'
- REFLECTING = 'reflecting'
- class StateVariable(name: str, min_value: float, max_value: float, num_points: int, boundary_lower: BoundaryCondition = BoundaryCondition.ABSORBING, boundary_upper: BoundaryCondition = BoundaryCondition.ABSORBING, log_scale: bool = False) None[source]
Bases: object
Definition of a state variable in the HJB problem.
- boundary_lower: BoundaryCondition = 'absorbing'
- boundary_upper: BoundaryCondition = 'absorbing'
- class ControlVariable(name: str, min_value: float, max_value: float, num_points: int = 50, continuous: bool = True) None[source]
Bases: object
Definition of a control variable in the HJB problem.
- class StateSpace(state_variables: List[StateVariable]) None[source]
Bases: object
Multi-dimensional state space for HJB problem.
Handles arbitrary dimensionality with proper grid management and boundary condition enforcement.
- state_variables: List[StateVariable]
- get_boundary_mask() ndarray[source]
Get boolean mask for boundary points.
- Return type:
ndarray
- Returns:
Boolean array where True indicates boundary points
- class UtilityFunction[source]
Bases: ABC
Abstract base class for utility functions.
Defines the interface for utility functions used in the HJB equation. Concrete implementations should provide both the utility value and its derivative.
- abstractmethod derivative(wealth: ndarray) ndarray[source]
Compute marginal utility (first derivative).
- class LogUtility(wealth_floor: float = 1e-06)[source]
Bases: UtilityFunction
Logarithmic utility function for ergodic optimization.
U(w) = log(w)
This utility function maximizes the long-term growth rate and is particularly suitable for ergodic analysis.
- class PowerUtility(risk_aversion: float = 2.0, wealth_floor: float = 1e-06)[source]
Bases: UtilityFunction
Power (CRRA) utility function with risk aversion parameter.
U(w) = w^(1-γ)/(1-γ) for γ ≠ 1
U(w) = log(w) for γ = 1
where γ is the coefficient of relative risk aversion.
- class ExpectedWealth[source]
Bases: UtilityFunction
Linear utility function for risk-neutral wealth maximization.
U(w) = w
This represents risk-neutral preferences where the goal is to maximize expected wealth.
- class HJBProblem(state_space: StateSpace, control_variables: List[ControlVariable], utility_function: UtilityFunction, dynamics: Callable[[ndarray, ndarray, float], ndarray], running_cost: Callable[[ndarray, ndarray, float], ndarray], terminal_value: Callable[[ndarray], ndarray] | None = None, discount_rate: float = 0.0, time_horizon: float | None = None) None[source]
Bases: object
Complete specification of an HJB optimal control problem.
- state_space: StateSpace
- control_variables: List[ControlVariable]
- utility_function: UtilityFunction
- class HJBSolverConfig(time_step: float = 0.01, max_iterations: int = 1000, tolerance: float = 1e-06, scheme: TimeSteppingScheme = TimeSteppingScheme.IMPLICIT, use_sparse: bool = True, verbose: bool = True) None[source]
Bases: object
Configuration for HJB solver.
- scheme: TimeSteppingScheme = 'implicit'
- class HJBSolver(problem: HJBProblem, config: HJBSolverConfig)[source]
Bases: object
Hamilton-Jacobi-Bellman PDE solver for optimal control.
Implements finite difference methods with upwind schemes for solving HJB equations. Supports multi-dimensional state spaces and various boundary conditions.
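The pieces documented above wire together as follows. This is an illustrative sketch only: the dynamics and running cost are toy placeholders, not a calibrated model, and the solver's solve entry point is not shown in this reference:
import numpy as np

wealth = StateVariable(name="wealth", min_value=1e5, max_value=1e8,
                       num_points=200, log_scale=True)
retention = ControlVariable(name="retention", min_value=0.0, max_value=1.0)

problem = HJBProblem(
    state_space=StateSpace([wealth]),
    control_variables=[retention],
    utility_function=LogUtility(),
    dynamics=lambda x, u, t: 0.06 * x * (1.0 - 0.02 * u),  # toy drift
    running_cost=lambda x, u, t: np.zeros_like(x),  # reward enters via utility
    discount_rate=0.0,
    time_horizon=10.0,
)
solver = HJBSolver(problem, HJBSolverConfig(scheme=TimeSteppingScheme.IMPLICIT))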
- create_custom_utility(evaluate_func: Callable[[ndarray], ndarray], derivative_func: Callable[[ndarray], ndarray], inverse_derivative_func: Callable[[ndarray], ndarray] | None = None) UtilityFunction[source]
Factory function for creating custom utility functions.
This function allows users to create custom utility functions by providing the evaluation and derivative functions. This is the recommended way to add new utility functions beyond the built-in ones.
- Parameters:
- Return type:
UtilityFunction
- Returns:
Custom utility function instance
Example
>>> # Create exponential utility: U(w) = 1 - exp(-α*w)
>>> def exp_eval(w):
...     alpha = 0.01
...     return 1 - np.exp(-alpha * w)
>>> def exp_deriv(w):
...     alpha = 0.01
...     return alpha * np.exp(-alpha * w)
>>> exp_utility = create_custom_utility(exp_eval, exp_deriv)
ergodic_insurance.insurance module
Insurance policy structure and claim processing.
This module provides classes for modeling multi-layer insurance policies with configurable attachment points, limits, and premium rates. It supports complex insurance structures commonly used in commercial insurance including excess layers, umbrella coverage, and multi-layer towers.
The module integrates with pricing engines for dynamic premium calculation and supports both static and market-driven pricing models.
- Key Features:
Multi-layer insurance towers with attachment points and limits
Deductible and self-insured retention handling
Dynamic pricing integration with market cycles
Claim allocation across multiple layers
Premium calculation with various rating methods
Examples
Simple single-layer policy:
from ergodic_insurance.insurance import InsurancePolicy, InsuranceLayer
# $5M excess $1M with 3% rate
layer = InsuranceLayer(
attachment_point=1_000_000,
limit=5_000_000,
rate=0.03
)
policy = InsurancePolicy(
layers=[layer],
deductible=500_000
)
# Process a $3M claim
company_payment, insurance_recovery = policy.process_claim(3_000_000)
Multi-layer tower:
# Build an insurance tower
layers = [
InsuranceLayer(1_000_000, 4_000_000, 0.025), # Primary
InsuranceLayer(5_000_000, 5_000_000, 0.015), # First excess
InsuranceLayer(10_000_000, 10_000_000, 0.01), # Second excess
]
tower = InsurancePolicy(layers, deductible=1_000_000)
annual_premium = tower.calculate_premium()
Note
For advanced features like reinstatements and complex multi-layer programs, see the insurance_program module which provides EnhancedInsuranceLayer and InsuranceProgram classes.
- Since:
Version 0.1.0
- class InsuranceLayer(attachment_point: float, limit: float, rate: float) None[source]
Bases: object
Represents a single insurance layer.
Each layer has an attachment point (where coverage starts), a limit (maximum coverage), and a rate (premium percentage). Insurance layers are the building blocks of complex insurance programs.
- attachment_point
Dollar amount where this layer starts providing coverage. Also known as the retention or excess point.
- limit
Maximum coverage amount from this layer. The layer covers losses from attachment_point to (attachment_point + limit).
- rate
Premium rate as a percentage of the limit. For example, 0.03 means 3% of limit as annual premium.
Examples
Primary layer with $1M retention:
primary = InsuranceLayer(
    attachment_point=1_000_000,  # $1M retention
    limit=5_000_000,             # $5M limit
    rate=0.025                   # 2.5% rate
)
# This covers losses from $1M to $6M
# Annual premium = $5M × 2.5% = $125,000
Excess layer in a tower:
excess = InsuranceLayer(
    attachment_point=6_000_000,  # Attaches at $6M
    limit=10_000_000,            # $10M limit
    rate=0.01                    # 1% rate (lower for excess)
)
Note
Layers are typically structured in towers with each successive layer attaching where the previous layer exhausts.
- __post_init__()[source]
Validate insurance layer parameters.
- Raises:
ValueError – If attachment_point is negative, limit is non-positive, or rate is negative.
- calculate_recovery(loss_amount: float) float[source]
Calculate recovery from this layer for a given loss.
Determines how much of a loss is covered by this specific layer based on its attachment point and limit.
- Parameters:
loss_amount (float) – Total loss amount in dollars to recover.
- Returns:
Amount recovered from this layer in dollars. Returns 0 if the loss is below the attachment point, a partial recovery if the loss partially penetrates the layer, or the full limit if the loss exceeds the layer exhaust point.
- Return type:
float
Examples
Layer with $1M attachment, $5M limit:
layer = InsuranceLayer(1_000_000, 5_000_000, 0.02)

# Loss below attachment
recovery = layer.calculate_recovery(500_000)  # Returns 0

# Loss partially in layer
recovery = layer.calculate_recovery(3_000_000)  # Returns 2M

# Loss exceeds layer
recovery = layer.calculate_recovery(10_000_000)  # Returns 5M (full limit)
- calculate_premium() float[source]
Calculate premium for this layer.
- Returns:
Annual premium amount in dollars (rate × limit).
- Return type:
float
Examples
Calculate annual cost:
layer = InsuranceLayer(1_000_000, 10_000_000, 0.015)
premium = layer.calculate_premium()  # Returns 150,000
print(f"Annual premium: ${premium:,.0f}")
- class InsurancePolicy(layers: List[InsuranceLayer], deductible: float = 0.0, pricing_enabled: bool = False, pricer: InsurancePricer | None = None)[source]
Bases: object
Multi-layer insurance policy with deductible.
Manages multiple insurance layers and processes claims across them, handling proper allocation of losses to each layer in sequence. Supports both static and dynamic pricing models.
The policy structure follows standard commercial insurance practices:
1. Insured pays the deductible first
2. Losses then penetrate layers in order of attachment
3. Each layer pays up to its limit
4. Insured bears losses exceeding all coverage
- layers
List of InsuranceLayer objects sorted by attachment point.
- deductible
Self-insured retention before insurance applies.
- pricing_enabled
Whether to use dynamic pricing models.
- pricer
Optional pricing engine for market-based premiums.
- pricing_results
History of pricing calculations.
Examples
Standard commercial property program:
# Build insurance program
policy = InsurancePolicy(
    layers=[
        InsuranceLayer(500_000, 4_500_000, 0.03),     # Primary
        InsuranceLayer(5_000_000, 10_000_000, 0.02),  # Excess
        InsuranceLayer(15_000_000, 25_000_000, 0.01)  # Umbrella
    ],
    deductible=500_000  # $500K SIR
)

# Process various claims
small_claim = policy.process_claim(100_000)     # All on deductible
medium_claim = policy.process_claim(3_000_000)  # Hits primary
large_claim = policy.process_claim(20_000_000)  # Multiple layers
Note
Layers are automatically sorted by attachment point to ensure proper claim allocation regardless of input order.
- process_claim(claim_amount: float) Tuple[float, float][source]
Process a claim through the insurance structure.
Allocates a loss across the deductible and insurance layers, calculating how much is paid by the company versus insurance. Total insurance recovery is capped at (claim_amount - deductible) to prevent over-recovery when layer configurations overlap with the deductible region.
- calculate_recovery(claim_amount: float) float[source]
Calculate total insurance recovery for a claim.
Recovery is capped at (claim_amount - deductible) to prevent over-recovery when layer configurations overlap with the deductible region.
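A numeric illustration of the cap (values chosen for the sketch; expected figures follow the allocation rules described above):
layer = InsuranceLayer(attachment_point=500_000, limit=4_500_000, rate=0.03)
policy = InsurancePolicy(layers=[layer], deductible=500_000)

recovered = policy.calculate_recovery(3_000_000)
# The layer spans $500K-$5M, so it pays $2.5M of a $3M claim; the cap
# (claim - deductible = $2.5M) leaves that recovery intact.
assert recovered <= 3_000_000 - 500_000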
- calculate_premium() float[source]
Calculate total premium across all layers.
- Return type:
float
- Returns:
Total annual premium.
- classmethod from_yaml(config_path: str) InsurancePolicy[source]
Load insurance policy from YAML configuration.
- Parameters:
config_path (str) – Path to YAML configuration file.
- Return type:
InsurancePolicy
- Returns:
InsurancePolicy configured from YAML.
- get_total_coverage() float[source]
Get total coverage across all layers.
- Return type:
float
- Returns:
Maximum possible insurance coverage.
- to_enhanced_program() ergodic_insurance.insurance_program.InsuranceProgram | None[source]
Convert to enhanced InsuranceProgram for advanced features.
- Return type:
Optional[ergodic_insurance.insurance_program.InsuranceProgram]
- Returns:
InsuranceProgram instance with same configuration.
- apply_pricing(expected_revenue: float, market_cycle: MarketCycle | None = None, loss_generator: ManufacturingLossGenerator | None = None) None[source]
Apply dynamic pricing to all layers in the policy.
Updates layer rates based on frequency/severity calculations.
- Parameters:
expected_revenue (float) – Expected annual revenue for scaling
market_cycle (Optional[MarketCycle]) – Optional market cycle state
loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator (uses pricer's if not provided)
- Raises:
ValueError – If pricing not enabled or pricer not configured
- Return type:
None
- classmethod create_with_pricing(layers: List[InsuranceLayer], loss_generator: ManufacturingLossGenerator, expected_revenue: float, market_cycle: MarketCycle | None = None, deductible: float = 0.0) InsurancePolicy[source]
Create insurance policy with dynamic pricing.
Factory method that creates a policy with pricing already applied.
- Parameters:
layers (List[InsuranceLayer]) – Initial layer structure
loss_generator (ManufacturingLossGenerator) – Loss generator for pricing
expected_revenue (float) – Expected annual revenue
market_cycle (Optional[MarketCycle]) – Market cycle state
deductible (float) – Self-insured retention
- Return type:
InsurancePolicy
- Returns:
InsurancePolicy with pricing applied
ergodic_insurance.insurance_accounting module
Insurance premium accounting module.
This module provides proper insurance premium accounting with prepaid asset tracking and systematic monthly amortization following GAAP principles.
Uses Decimal for all currency amounts to prevent floating-point precision errors in iterative calculations.
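For intuition, the straight-line amortization arithmetic behind this treatment, shown standalone with Decimal (this is the underlying math, not the class API; the premium is chosen to divide evenly):
from decimal import Decimal

annual_premium = Decimal("1200000")
monthly_expense = annual_premium / 12  # exact: Decimal("100000")

prepaid = annual_premium
for _ in range(12):
    prepaid -= monthly_expense  # recognize one month of expense
assert prepaid == Decimal("0")  # no floating-point drift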
- class InsuranceRecovery(amount: Decimal, claim_id: str, year_approved: int, amount_received: Decimal = <factory>) None[source]
Bases: object
Represents an insurance claim recovery receivable.
- amount
Recovery amount approved by insurance (Decimal)
- claim_id
Unique identifier for the claim
- year_approved
Year when recovery was approved
- amount_received
Amount received to date (Decimal)
- __post_init__() None[source]
Convert amounts to Decimal if needed (runtime check for backwards compatibility).
- Return type:
None
- class InsuranceAccounting(prepaid_insurance: Decimal = <factory>, monthly_expense: Decimal = <factory>, annual_premium: Decimal = <factory>, months_in_period: int = 12, current_month: int = 0, recoveries: List[InsuranceRecovery] = <factory>) None[source]
Bases: object
Manages insurance premium accounting with proper GAAP treatment.
This class tracks annual insurance premium payments as prepaid assets and amortizes them monthly over the coverage period using straight-line amortization. It also tracks insurance claim recoveries separately from claim liabilities.
All currency amounts use Decimal for precise financial calculations.
- prepaid_insurance
Current prepaid insurance asset balance (Decimal)
- monthly_expense
Calculated monthly insurance expense (Decimal)
- annual_premium
Total annual premium amount (Decimal)
- months_in_period
Number of months in coverage period (default 12)
- current_month
Current month in coverage period
- recoveries
List of insurance recoveries receivable
- recoveries: List[InsuranceRecovery]
- __post_init__() None[source]
Convert amounts to Decimal if needed (runtime check for backwards compatibility).
- Return type:
None
- __deepcopy__(memo: Dict[int, Any]) InsuranceAccounting[source]
Create a deep copy of this insurance accounting instance.
Record annual premium payment at start of coverage period.
- Parameters:
premium_amount (Union[Decimal, float, int]) – Annual premium amount to pay (converted to Decimal)
- Returns:
Dictionary with transaction details as Decimal:
cash_outflow: Cash paid for premium
prepaid_asset: Prepaid insurance asset created
monthly_expense: Calculated monthly expense
- Return type:
Dict[str, Decimal]
- record_monthly_expense() Dict[str, Decimal][source]
Amortize monthly insurance expense from prepaid asset.
Records one month of insurance expense by reducing the prepaid asset and recognizing the expense. Uses straight-line amortization over the coverage period.
- Returns:
Dictionary with transaction details as Decimal:
insurance_expense: Monthly expense recognized
prepaid_reduction: Reduction in prepaid asset
remaining_prepaid: Remaining prepaid balance
- Return type:
Dict[str, Decimal]
- record_claim_recovery(recovery_amount: Decimal | float | int, claim_id: str | None = None, year: int = 0) Dict[str, Decimal][source]
Record insurance claim recovery as receivable.
- Parameters:
- Returns:
Dictionary with recovery details as Decimal:
insurance_receivable: New receivable amount
total_receivables: Total outstanding receivables
- Return type:
Dict[str, Decimal]
- receive_recovery_payment(amount: Decimal | float | int, claim_id: str | None = None) Dict[str, Decimal][source]
Record receipt of insurance recovery payment.
- Parameters:
- Returns:
Dictionary with payment details as Decimal:
cash_received: Cash inflow amount
receivable_reduction: Reduction in receivables
remaining_receivables: Total remaining receivables
- Return type:
Dict[str, Decimal]
- get_total_receivables() Decimal[source]
Calculate total outstanding insurance receivables.
- Return type:
Decimal
- Returns:
Total amount of outstanding insurance receivables as Decimal
- get_amortization_schedule() List[Dict[str, int | Decimal]][source]
Generate remaining amortization schedule.
ergodic_insurance.insurance_pricing module
Insurance pricing module with market cycle support.
This module implements realistic insurance premium calculation based on frequency and severity distributions, replacing hardcoded premium rates in simulations. It supports market cycle adjustments and integrates with existing loss generators and insurance structures.
Example
Basic usage for pricing an insurance program:
from ergodic_insurance.insurance_pricing import InsurancePricer, MarketCycle
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
# Initialize loss generator and pricer
loss_gen = ManufacturingLossGenerator()
pricer = InsurancePricer(
loss_generator=loss_gen,
loss_ratio=0.70,
market_cycle=MarketCycle.NORMAL
)
# Price an insurance program
program = InsuranceProgram(layers=[...])
priced_program = pricer.price_insurance_program(
program,
expected_revenue=15_000_000
)
# Get total premium
total_premium = priced_program.calculate_annual_premium()
- class MarketCycle(*values)[source]
Bases: Enum
Market cycle states affecting insurance pricing.
Each state corresponds to a target loss ratio that insurers use to price coverage. Lower loss ratios (hard markets) result in higher premiums.
- HARD
Seller’s market with limited capacity (60% loss ratio)
- NORMAL
Balanced market conditions (70% loss ratio)
- SOFT
Buyer’s market with excess capacity (80% loss ratio)
- HARD = 0.6
- NORMAL = 0.7
- SOFT = 0.8
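Because the enum values are target loss ratios, the same technical premium maps to different market premiums (market = technical / loss ratio, per calculate_market_premium below); a quick illustration:
technical_premium = 700_000.0
for cycle in MarketCycle:
    market_premium = technical_premium / cycle.value
    print(f"{cycle.name:>6}: ${market_premium:,.0f}")
# HARD (0.60) prices highest; SOFT (0.80) prices lowest.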
- class PricingParameters(loss_ratio: float = 0.7, expense_ratio: float = 0.25, profit_margin: float = 0.05, risk_loading: float = 0.1, confidence_level: float = 0.95, simulation_years: int = 10, min_premium: float = 1000.0, max_rate_on_line: float = 0.5) None[source]
Bases: object
Parameters for insurance pricing calculations.
- loss_ratio
Target loss ratio for pricing (claims/premium)
- expense_ratio
Operating expense ratio (default 0.25)
- profit_margin
Target profit margin (default 0.05)
- risk_loading
Additional loading for uncertainty (default 0.10)
- confidence_level
Confidence level for pricing (default 0.95)
- simulation_years
Years to simulate for pricing (default 10)
- min_premium
Minimum premium floor (default 1000)
- max_rate_on_line
Maximum rate on line cap (default 0.50)
- class LayerPricing(attachment_point: float, limit: float, expected_frequency: float, expected_severity: float, pure_premium: float, technical_premium: float, market_premium: float, rate_on_line: float, confidence_interval: Tuple[float, float]) None[source]
Bases: object
Pricing details for a single insurance layer.
- attachment_point
Where coverage starts
- limit
Maximum coverage amount
- expected_frequency
Expected claims per year hitting this layer
- expected_severity
Average severity of claims in this layer
- pure_premium
Expected loss cost
- technical_premium
Pure premium with risk loading
- market_premium
Final premium after market adjustments
- rate_on_line
Premium as percentage of limit
- confidence_interval
(lower, upper) bounds at confidence level
- class InsurancePricer(loss_generator: 'ManufacturingLossGenerator' | None = None, loss_ratio: float | None = None, market_cycle: MarketCycle | None = None, parameters: PricingParameters | None = None, exposure: 'ExposureBase' | None = None, seed: int | None = None)[source]
Bases: object
Calculate insurance premiums based on loss distributions and market conditions.
This class provides methods to price individual layers and complete insurance programs using frequency/severity distributions from loss generators. It supports market cycle adjustments and maintains backward compatibility with fixed rates.
- Parameters:
loss_generator (Optional[ManufacturingLossGenerator]) – Manufacturing loss generator for frequency/severity data
loss_ratio (Optional[float]) – Target loss ratio for pricing (or use market_cycle)
market_cycle (Optional[MarketCycle]) – Market cycle state (overrides loss_ratio if provided)
parameters (Optional[PricingParameters]) – Additional pricing parameters
exposure (Optional[ExposureBase]) – Optional exposure base supplying expected revenue when it is not passed explicitly
seed (Optional[int]) – Random seed for reproducible simulations
Example
Pricing with different market conditions:
# Hard market pricing (higher premiums)
hard_pricer = InsurancePricer(
    loss_generator=loss_gen,
    market_cycle=MarketCycle.HARD
)

# Soft market pricing (lower premiums)
soft_pricer = InsurancePricer(
    loss_generator=loss_gen,
    market_cycle=MarketCycle.SOFT
)
Calculate pure premium for a layer using frequency/severity.
Pure premium represents the expected loss cost without expenses, profit, or risk loading.
- Parameters:
- Return type:
- Returns:
Tuple of (pure_premium, statistics_dict) with detailed metrics
- Raises:
ValueError – If loss_generator is not configured
Convert pure premium to technical premium with risk loading.
Technical premium adds a risk loading for parameter uncertainty to the pure premium. Expense and profit margins are applied separately via the loss ratio in calculate_market_premium() to avoid double-counting.
- calculate_market_premium(technical_premium: float, market_cycle: MarketCycle | None = None) float[source]
Apply market cycle adjustment to technical premium.
Market premium = Technical premium / Loss ratio
- Parameters:
technical_premium (float) – Premium with expenses and loadings
market_cycle (Optional[MarketCycle]) – Optional market cycle override
- Return type:
float
- Returns:
Market-adjusted premium
- price_layer(attachment_point: float, limit: float, expected_revenue: float, market_cycle: MarketCycle | None = None) LayerPricing[source]
Price a single insurance layer.
Complete pricing process from pure premium through market adjustment.
- Parameters:
attachment_point (float) – Where coverage starts
limit (float) – Maximum coverage amount
expected_revenue (float) – Expected annual revenue
market_cycle (Optional[MarketCycle]) – Optional market cycle override
- Return type:
LayerPricing
- Returns:
LayerPricing object with all pricing details
- price_insurance_program(program: ergodic_insurance.insurance_program.InsuranceProgram, expected_revenue: float | None = None, time: float = 0.0, market_cycle: MarketCycle | None = None, update_program: bool = True) ergodic_insurance.insurance_program.InsuranceProgram[source]
Price a complete insurance program.
Prices all layers in the program and optionally updates their rates.
- Parameters:
program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to price
expected_revenue (Optional[float]) – Expected annual revenue (optional if using exposure)
time (float) – Time for exposure calculation (default 0.0)
market_cycle (Optional[MarketCycle]) – Optional market cycle override
update_program (bool) – Whether to update program layer rates
- Return type:
ergodic_insurance.insurance_program.InsuranceProgram
- Returns:
Program with updated pricing (original or copy based on update_program)
- price_insurance_policy(policy: InsurancePolicy, expected_revenue: float, market_cycle: MarketCycle | None = None, update_policy: bool = True) InsurancePolicy[source]
Price a basic insurance policy.
Prices all layers in the policy and optionally updates their rates.
- Parameters:
policy (InsurancePolicy) – Insurance policy to price
expected_revenue (float) – Expected annual revenue
market_cycle (Optional[MarketCycle]) – Optional market cycle override
update_policy (bool) – Whether to update policy layer rates
- Return type:
InsurancePolicy
- Returns:
Policy with updated pricing (original or copy based on update_policy)
- compare_market_cycles(attachment_point: float, limit: float, expected_revenue: float) Dict[str, LayerPricing][source]
Compare pricing across different market cycles.
Useful for understanding market impact on premiums.
- simulate_cycle_transition(program: ergodic_insurance.insurance_program.InsuranceProgram, expected_revenue: float, years: int = 10, transition_probs: Dict[str, float] | None = None) List[Dict[str, Any]][source]
Simulate insurance pricing over market cycle transitions.
Models how premiums change as markets transition between states.
- Parameters:
- Return type:
List[Dict[str, Any]]
- Returns:
List of annual results with cycle states and premiums
- static create_from_config(config: Dict[str, Any], loss_generator: ManufacturingLossGenerator | None = None) InsurancePricer[source]
Create pricer from configuration dictionary.
- Parameters:
config (Dict[str, Any]) – Configuration dictionary
loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator
- Return type:
InsurancePricer
- Returns:
Configured InsurancePricer instance
ergodic_insurance.insurance_program module
Multi-layer insurance program with reinstatements and advanced features.
This module provides comprehensive insurance program management including multi-layer structures, reinstatements, attachment points, and accurate loss allocation for manufacturing risk transfer optimization.
- class ReinstatementType(*values)[source]
Bases: Enum
Types of reinstatement provisions.
- NONE = 'none'
- PRO_RATA = 'pro_rata'
- FULL = 'full'
- FREE = 'free'
- class OptimizationConstraints(max_total_premium: float | None = None, min_total_coverage: float | None = None, max_layers: int = 5, min_layers: int = 3, max_attachment_gap: float = 0.0, min_roe_improvement: float = 0.15, max_iterations: int = 1000, convergence_tolerance: float = 1e-06) None[source]
Bases: object
Constraints for insurance program optimization.
- class OptimalStructure(layers: List[EnhancedInsuranceLayer], deductible: float, total_premium: float, total_coverage: float, ergodic_benefit: float, roe_improvement: float, optimization_metrics: Dict[str, Any], convergence_achieved: bool, iterations_used: int) None[source]
Bases: object
Result of insurance structure optimization.
- layers: List[EnhancedInsuranceLayer]
- class EnhancedInsuranceLayer(attachment_point: float, limit: float, base_premium_rate: float, reinstatements: int = 0, reinstatement_premium: float = 1.0, reinstatement_type: ReinstatementType = ReinstatementType.PRO_RATA, aggregate_limit: float | None = None, participation_rate: float = 1.0, limit_type: str = 'per-occurrence', per_occurrence_limit: float | None = None, premium_rate_exposure: ExposureBase | None = None) None[source]
Bases: object
Insurance layer with reinstatement support and advanced features.
Extends basic layer functionality with reinstatements, tracking, and more sophisticated premium calculations.
- reinstatement_type: ReinstatementType = 'pro_rata'
Calculate base premium for this layer.
Calculate premium for a single reinstatement.
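For intuition, the conventional market formula for a pro-rata reinstatement premium, assumed here for illustration (the module's exact implementation may differ):
def pro_rata_reinstatement_premium(base_premium: float,
                                   limit_restored: float,
                                   limit: float,
                                   reinstatement_factor: float = 1.0) -> float:
    # Pay the stated fraction of base premium, scaled by the share
    # of the limit being reinstated.
    return base_premium * reinstatement_factor * (limit_restored / limit)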
- class LayerState(layer: EnhancedInsuranceLayer, used_limit: float = 0.0, reinstatements_used: int = 0, total_claims_paid: float = 0.0, reinstatement_premiums_paid: float = 0.0, is_exhausted: bool = False, aggregate_used: float = 0.0) None[source]
Bases: object
Tracks the current state of an insurance layer during simulation.
Maintains utilization, reinstatement count, and exhaustion status for accurate multi-claim processing.
- layer: EnhancedInsuranceLayer
- process_claim(claim_amount: float, timing_factor: float = 1.0) Tuple[float, float][source]
Process a claim against this layer.
- class InsuranceProgram(layers: List[EnhancedInsuranceLayer], deductible: float = 0.0, name: str = 'Manufacturing Insurance Program', pricing_enabled: bool = False, pricer: InsurancePricer | None = None)[source]
Bases: object
Comprehensive multi-layer insurance program manager.
Handles complex insurance structures with multiple layers, reinstatements, and sophisticated claim allocation.
- calculate_annual_premium()[source]
Calculate total annual premium for the program.
- process_claim(claim_amount: float, timing_factor: float = 1.0) Dict[str, Any][source]
Process a single claim through the insurance structure.
- process_annual_claims(claims: List[float], claim_times: List[float] | None = None) Dict[str, Any][source]
Process all claims for a policy year.
- get_total_coverage() float[source]
Calculate maximum possible coverage.
- Return type:
float
- Returns:
Maximum claim amount that can be covered.
- calculate_ergodic_benefit(loss_history: List[List[float]], manufacturer_profile: Dict[str, Any] | None = None, time_horizon: int = 100) Dict[str, float][source]
Calculate ergodic benefit of insurance structure.
Quantifies time-average growth improvement from insurance coverage versus ensemble-average cost.
- Parameters:
- Return type:
Dict[str, float]
- Returns:
Dictionary with ergodic metrics.
- find_optimal_attachment_points(loss_data: List[float], num_layers: int = 4, percentiles: List[float] | None = None) List[float][source]
Find optimal attachment points based on loss frequency/severity.
Uses data-driven approach to minimize gaps while optimizing cost.
- optimize_layer_widths(attachment_points: List[float], total_budget: float, capacity_constraints: Dict[str, float] | None = None, loss_data: List[float] | None = None) List[float][source]
Optimize layer widths given attachment points and constraints.
- Parameters:
- Return type:
List[float]
- Returns:
List of optimal layer widths.
- optimize_layer_structure(loss_data: List[List[float]], company_profile: Dict[str, Any] | None = None, constraints: OptimizationConstraints | None = None) OptimalStructure[source]
Optimize complete insurance layer structure.
Main optimization method that orchestrates layer count, attachment points, and widths to maximize ergodic benefit.
- Parameters:
- Return type:
OptimalStructure
- Returns:
Optimal insurance structure.
- classmethod from_yaml(config_path: str) ergodic_insurance.insurance_program.InsuranceProgram[source]
Load insurance program from YAML configuration.
- Parameters:
config_path (str) – Path to YAML configuration file.
- Return type:
ergodic_insurance.insurance_program.InsuranceProgram
- Returns:
Configured InsuranceProgram instance.
- classmethod create_standard_manufacturing_program(deductible: float = 250000) ergodic_insurance.insurance_program.InsuranceProgram[source]
Create standard manufacturing insurance program.
- Parameters:
deductible (float) – Self-insured retention amount.
- Return type:
ergodic_insurance.insurance_program.InsuranceProgram
- Returns:
Standard manufacturing insurance program.
- apply_pricing(expected_revenue: float, market_cycle: MarketCycle | None = None, loss_generator: ManufacturingLossGenerator | None = None) None[source]
Apply dynamic pricing to all layers in the program.
Updates layer premium rates based on frequency/severity calculations.
- Parameters:
expected_revenue (float) – Expected annual revenue for scaling
market_cycle (Optional[MarketCycle]) – Optional market cycle state
loss_generator (Optional[ManufacturingLossGenerator]) – Optional loss generator (uses pricer's if not provided)
- Raises:
ValueError – If pricing not enabled or pricer not configured
- Return type:
None
- classmethod create_with_pricing(layers: List[EnhancedInsuranceLayer], loss_generator: ManufacturingLossGenerator, expected_revenue: float, market_cycle: MarketCycle | None = None, deductible: float = 0.0, name: str = 'Priced Insurance Program') ergodic_insurance.insurance_program.InsuranceProgram[source]
Create insurance program with dynamic pricing.
Factory method that creates a program with pricing already applied.
- Parameters:
layers (List[EnhancedInsuranceLayer]) – Initial layer structure
loss_generator (ManufacturingLossGenerator) – Loss generator for pricing
expected_revenue (float) – Expected annual revenue
market_cycle (Optional[MarketCycle]) – Market cycle state
deductible (float) – Self-insured retention
name (str) – Program name
- Return type:
ergodic_insurance.insurance_program.InsuranceProgram
- Returns:
InsuranceProgram with pricing applied
- class ProgramState(program: ergodic_insurance.insurance_program.InsuranceProgram, years_simulated: int = 0, total_claims: List[float] = <factory>, total_recoveries: List[float] = <factory>, total_premiums: List[float] = <factory>, annual_results: List[Dict] = <factory>) None[source]
Bases:
object
Tracks multi-year insurance program state for simulations.
Maintains historical data and statistics across multiple policy periods for long-term analysis.
- program: InsuranceProgram
- annual_results: List[Dict]
ergodic_insurance.ledger module
Event-sourcing ledger for financial transactions.
This module implements a simple ledger system that tracks individual financial transactions using double-entry accounting. This provides transaction-level detail that is lost when using only point-in-time metrics snapshots.
The ledger enables:
- Perfect reconciliation between financial statements
- Direct method cash flow statement generation
- Audit trail for all financial changes
- Understanding of WHY balances changed (e.g., “was this AR change a write-off or a payment?”)
Example
Record a sale on credit:
ledger = Ledger()
ledger.record_double_entry(
    date=5,  # Year 5
    debit_account="accounts_receivable",
    credit_account="revenue",
    amount=1_000_000,
    description="Annual sales on credit"
)
Generate cash flows for a period:
operating_cash_flows = ledger.get_cash_flows(period=5)
print(f"Cash from customers: ${operating_cash_flows['cash_from_customers']:,.0f}")
- class AccountType(*values)[source]
Bases:
Enum
Classification of accounts per GAAP chart of accounts.
- ASSET
Resources owned by the company (debit normal balance)
- LIABILITY
Obligations owed to others (credit normal balance)
- EQUITY
Owner’s residual interest (credit normal balance)
- REVENUE
Income from operations (credit normal balance)
- EXPENSE
Costs of operations (debit normal balance)
- ASSET = 'asset'
- LIABILITY = 'liability'
- EQUITY = 'equity'
- REVENUE = 'revenue'
- EXPENSE = 'expense'
- class AccountName(*values)[source]
Bases:
Enum
Standard account names for the chart of accounts.
Using this enum instead of raw strings prevents typos that would silently result in zero balances on financial statements. See Issue #260.
Account names are grouped by their AccountType:
- Assets (debit normal balance):
CASH, ACCOUNTS_RECEIVABLE, INVENTORY, PREPAID_INSURANCE, INSURANCE_RECEIVABLES, GROSS_PPE, ACCUMULATED_DEPRECIATION, RESTRICTED_CASH, COLLATERAL, DEFERRED_TAX_ASSET
- Liabilities (credit normal balance):
ACCOUNTS_PAYABLE, ACCRUED_EXPENSES, ACCRUED_WAGES, ACCRUED_TAXES, ACCRUED_INTEREST, CLAIM_LIABILITIES, UNEARNED_REVENUE
- Equity (credit normal balance):
RETAINED_EARNINGS, COMMON_STOCK, DIVIDENDS
- Revenue (credit normal balance):
REVENUE, SALES_REVENUE, INTEREST_INCOME, INSURANCE_RECOVERY
- Expenses (debit normal balance):
COST_OF_GOODS_SOLD, OPERATING_EXPENSES, DEPRECIATION_EXPENSE, INSURANCE_EXPENSE, INSURANCE_LOSS, TAX_EXPENSE, INTEREST_EXPENSE, COLLATERAL_EXPENSE, WAGE_EXPENSE
Example
Use AccountName instead of strings to prevent typos:
from ergodic_insurance.ledger import AccountName, Ledger, TransactionType

ledger = Ledger()
ledger.record_double_entry(
    date=5,
    debit_account=AccountName.ACCOUNTS_RECEIVABLE,  # Safe
    credit_account=AccountName.REVENUE,
    amount=1_000_000,
    transaction_type=TransactionType.REVENUE,
)
# This would be a compile/lint error:
# debit_account=AccountName.ACCOUNT_RECEIVABLE  # Typo caught!
- CASH = 'cash'
- ACCOUNTS_RECEIVABLE = 'accounts_receivable'
- INVENTORY = 'inventory'
- PREPAID_INSURANCE = 'prepaid_insurance'
- INSURANCE_RECEIVABLES = 'insurance_receivables'
- GROSS_PPE = 'gross_ppe'
- ACCUMULATED_DEPRECIATION = 'accumulated_depreciation'
- RESTRICTED_CASH = 'restricted_cash'
- COLLATERAL = 'collateral'
- DEFERRED_TAX_ASSET = 'deferred_tax_asset'
- ACCOUNTS_PAYABLE = 'accounts_payable'
- ACCRUED_EXPENSES = 'accrued_expenses'
- ACCRUED_WAGES = 'accrued_wages'
- ACCRUED_TAXES = 'accrued_taxes'
- ACCRUED_INTEREST = 'accrued_interest'
- CLAIM_LIABILITIES = 'claim_liabilities'
- UNEARNED_REVENUE = 'unearned_revenue'
- RETAINED_EARNINGS = 'retained_earnings'
- COMMON_STOCK = 'common_stock'
- DIVIDENDS = 'dividends'
- REVENUE = 'revenue'
- SALES_REVENUE = 'sales_revenue'
- INTEREST_INCOME = 'interest_income'
- INSURANCE_RECOVERY = 'insurance_recovery'
- COST_OF_GOODS_SOLD = 'cost_of_goods_sold'
- OPERATING_EXPENSES = 'operating_expenses'
- DEPRECIATION_EXPENSE = 'depreciation_expense'
- INSURANCE_EXPENSE = 'insurance_expense'
- INSURANCE_LOSS = 'insurance_loss'
- TAX_EXPENSE = 'tax_expense'
- INTEREST_EXPENSE = 'interest_expense'
- COLLATERAL_EXPENSE = 'collateral_expense'
- WAGE_EXPENSE = 'wage_expense'
- class EntryType(*values)[source]
Bases:
Enum
Type of ledger entry - debit or credit.
In double-entry accounting:
- DEBIT increases assets and expenses, decreases liabilities and equity
- CREDIT decreases assets and expenses, increases liabilities and equity
- DEBIT = 'debit'
- CREDIT = 'credit'
- class TransactionType(*values)[source]
Bases:
Enum
Classification of transaction for cash flow statement mapping.
These types enable automatic classification into operating, investing, or financing activities for cash flow statement generation.
- REVENUE = 'revenue'
- COLLECTION = 'collection'
- EXPENSE = 'expense'
- PAYMENT = 'payment'
- WAGE_PAYMENT = 'wage_payment'
- INTEREST_PAYMENT = 'interest_payment'
- INVENTORY_PURCHASE = 'inventory_purchase'
- INVENTORY_SALE = 'inventory_sale'
- INSURANCE_PREMIUM = 'insurance_premium'
- INSURANCE_CLAIM = 'insurance_claim'
- TAX_ACCRUAL = 'tax_accrual'
- TAX_PAYMENT = 'tax_payment'
- DTA_ADJUSTMENT = 'dta_adjustment'
- DEPRECIATION = 'depreciation'
- WORKING_CAPITAL = 'working_capital'
- CAPEX = 'capex'
- ASSET_SALE = 'asset_sale'
- DIVIDEND = 'dividend'
- EQUITY_ISSUANCE = 'equity_issuance'
- DEBT_ISSUANCE = 'debt_issuance'
- DEBT_REPAYMENT = 'debt_repayment'
- ADJUSTMENT = 'adjustment'
- ACCRUAL = 'accrual'
- WRITE_OFF = 'write_off'
- REVALUATION = 'revaluation'
- LIQUIDATION = 'liquidation'
- TRANSFER = 'transfer'
- RETAINED_EARNINGS = 'retained_earnings'
- class LedgerEntry(date: int, account: str, amount: Decimal, entry_type: EntryType, transaction_type: TransactionType, description: str = '', reference_id: str = <factory>, timestamp: datetime = <factory>, month: int = 0) None[source]
Bases:
object
A single entry in the accounting ledger.
Each entry represents one side of a double-entry transaction. A complete transaction always has matching debits and credits.
- date
Period (year) when the transaction occurred
- account
Name of the account affected (e.g., “cash”, “accounts_receivable”)
- amount
Dollar amount of the entry (always positive)
- entry_type
DEBIT or CREDIT
- transaction_type
Classification for cash flow mapping
- description
Human-readable description of the transaction
- reference_id
UUID linking both sides of a double-entry transaction
- timestamp
Actual datetime when entry was recorded (for audit)
- month
Optional month within the year (0-11)
- transaction_type: TransactionType
- property signed_amount: Decimal
Return amount with sign based on entry type.
For balance calculations:
- Assets/Expenses: Debit positive, Credit negative
- Liabilities/Equity/Revenue: Credit positive, Debit negative
This property returns the raw signed amount for debits (+) and credits (-). The Ledger class handles account type normalization.
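Example
A brief sketch of the sign convention, assuming the dataclass fields shown in the signature above:
from decimal import Decimal
from ergodic_insurance.ledger import EntryType, LedgerEntry, TransactionType

# Debits carry a positive signed amount; credits a negative one.
debit = LedgerEntry(
    date=5,
    account="cash",
    amount=Decimal("100"),
    entry_type=EntryType.DEBIT,
    transaction_type=TransactionType.COLLECTION,
)
assert debit.signed_amount == Decimal("100")  # debit => +amount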
- class Ledger(strict_validation: bool = True) None[source]
Bases:
object
Double-entry accounting ledger for event sourcing.
The Ledger tracks all financial transactions at the entry level, enabling perfect reconciliation and direct method cash flow generation.
- entries
List of all ledger entries
- chart_of_accounts
Mapping of account names to their types
- Thread Safety:
This class is not thread-safe. Concurrent reads are safe, but concurrent writes (record, record_double_entry, prune_entries, clear) or a mix of reads and writes require external synchronization (e.g. a threading.Lock). Each simulation trial should use its own Ledger instance.
- entries: List[LedgerEntry]
- chart_of_accounts: Dict[str, AccountType]
- record(entry: LedgerEntry) None[source]
Record a single ledger entry.
- Parameters:
entry (LedgerEntry) – The LedgerEntry to add to the ledger
- Raises:
ValueError – If strict_validation is True and the account name is not in the chart of accounts.
- Return type:
None
Note
Prefer using record_double_entry() for complete transactions to ensure debits always equal credits.
- record_double_entry(date: int, debit_account: AccountName | str, credit_account: AccountName | str, amount: Decimal | float | int, transaction_type: TransactionType, description: str = '', month: int = 0) Tuple[LedgerEntry | None, LedgerEntry | None][source]
Record a complete double-entry transaction.
Creates matching debit and credit entries with the same reference_id.
- Parameters:
date (int) – Period (year) of the transaction
debit_account (Union[AccountName, str]) – Account to debit (increase assets/expenses). Can be AccountName enum (recommended) or string.
credit_account (Union[AccountName, str]) – Account to credit (increase liabilities/equity/revenue). Can be AccountName enum (recommended) or string.
amount (Union[Decimal, float, int]) – Dollar amount of the transaction (converted to Decimal)
transaction_type (TransactionType) – Classification for cash flow mapping
description (str) – Human-readable description
month (int) – Optional month within the year (0-11)
- Return type:
Tuple[LedgerEntry | None, LedgerEntry | None]
- Returns:
Tuple of (debit_entry, credit_entry), or (None, None) for zero-amount transactions (Issue #315).
- Raises:
ValueError – If amount is negative, or if account names are invalid (when strict_validation is True)
Example
Record a cash sale using AccountName enum (recommended):
debit, credit = ledger.record_double_entry(
    date=5,
    debit_account=AccountName.CASH,
    credit_account=AccountName.REVENUE,
    amount=500_000,
    transaction_type=TransactionType.REVENUE,
    description="Cash sales"
)
String account names still work but are validated:
debit, credit = ledger.record_double_entry(
    date=5,
    debit_account="cash",  # Validated against chart
    credit_account="revenue",
    amount=500_000,
    transaction_type=TransactionType.REVENUE,
)
- get_balance(account: AccountName | str, as_of_date: int | None = None) Decimal[source]
Calculate the balance for an account.
- Parameters:
account (Union[AccountName, str]) – Name of the account (AccountName enum recommended, string accepted)
as_of_date (Optional[int]) – Optional period to calculate balance as of (inclusive). When None, returns from cache in O(1). When specified, iterates through entries (O(N) for historical queries).
- Return type:
Decimal
- Returns:
Current balance of the account as Decimal, properly signed based on account type:
- Assets/Expenses: positive = debit balance
- Liabilities/Equity/Revenue: positive = credit balance
Example
Get current cash balance:
cash = ledger.get_balance(AccountName.CASH)
print(f"Cash: ${cash:,.0f}")

# String also works (validated)
cash = ledger.get_balance("cash")
- get_period_change(account: AccountName | str, period: int, month: int | None = None) Decimal[source]
Calculate the change in account balance for a specific period.
- Parameters:
account (Union[AccountName, str]) – Name of the account (AccountName enum recommended, string accepted)
period (int) – Year/period to calculate change for
month (Optional[int]) – Optional specific month within the period
- Return type:
Decimal
- Returns:
Net change in account balance during the period as Decimal
- get_entries(account: AccountName | str | None = None, start_date: int | None = None, end_date: int | None = None, transaction_type: TransactionType | None = None) List[LedgerEntry][source]
Query ledger entries with optional filters.
- Parameters:
account (Union[AccountName, str, None]) – Filter by account name (AccountName enum or string)
start_date (Optional[int]) – Filter by minimum period (inclusive)
end_date (Optional[int]) – Filter by maximum period (inclusive)
transaction_type (Optional[TransactionType]) – Filter by transaction classification
- Return type:
List[LedgerEntry]
- Returns:
List of matching LedgerEntry objects
Example
Get all cash entries for year 5:
cash_entries = ledger.get_entries(
    account=AccountName.CASH,
    start_date=5,
    end_date=5
)
- sum_by_transaction_type(transaction_type: TransactionType, period: int, account: AccountName | str | None = None, entry_type: EntryType | None = None) Decimal[source]
Sum entries by transaction type for cash flow extraction.
- Parameters:
transaction_type (TransactionType) – Classification to sum
period (int) – Year/period to sum
account (Union[AccountName, str, None]) – Optional account filter (AccountName enum or string)
entry_type (Optional[EntryType]) – Optional debit/credit filter
- Return type:
Decimal
- Returns:
Sum of matching entries as Decimal (absolute value)
Example
Get total collections for year 5:
collections = ledger.sum_by_transaction_type(
    transaction_type=TransactionType.COLLECTION,
    period=5,
    account=AccountName.CASH,
    entry_type=EntryType.DEBIT
)
- get_cash_flows(period: int) Dict[str, Decimal][source]
Extract cash flows for direct method cash flow statement.
Sums all cash-affecting transactions by category for the specified period.
- Parameters:
period (int) – Year/period to extract cash flows for
- Returns:
Dictionary with cash flow categories as Decimal values:
cash_from_customers: Collections on AR + cash sales
cash_to_suppliers: Inventory + expense payments
cash_for_insurance: Premium payments
cash_for_claim_losses: Claim-related asset reduction payments
cash_for_taxes: Tax payments
cash_for_wages: Wage payments
cash_for_interest: Interest payments
capital_expenditures: PP&E purchases
dividends_paid: Dividend payments
net_operating: Total operating cash flow
net_investing: Total investing cash flow
net_financing: Total financing cash flow
- Return type:
Dict[str, Decimal]
Example
Generate direct method cash flow:
flows = ledger.get_cash_flows(period=5)
print(f"Operating: ${flows['net_operating']:,.0f}")
print(f"Investing: ${flows['net_investing']:,.0f}")
print(f"Financing: ${flows['net_financing']:,.0f}")
- verify_balance() Tuple[bool, Decimal][source]
Verify that debits equal credits (accounting equation).
- Return type:
Tuple[bool, Decimal]
- Returns:
Tuple of (is_balanced, difference):
- is_balanced: True if debits exactly equal credits (using Decimal precision)
- difference: Total debits minus total credits as Decimal
Example
Check ledger integrity:
balanced, diff = ledger.verify_balance()
if not balanced:
    print(f"Warning: Ledger out of balance by ${diff:,.2f}")
- get_trial_balance(as_of_date: int | None = None) Dict[str, Decimal][source]
Generate a trial balance showing all account balances.
When as_of_date is None, reads directly from the O(1) balance cache. When a date is specified, performs a single O(N) pass over all entries instead of the previous O(N*M) approach (Issue #315).
- Parameters:
as_of_date (Optional[int]) – Optional period to generate balance as of
- Return type:
Dict[str, Decimal]
- Returns:
Dictionary mapping account names to their balances as Decimal
Example
Review all balances:
trial = ledger.get_trial_balance()
for account, balance in trial.items():
    print(f"{account}: ${balance:,.0f}")
- prune_entries(before_date: int) int[source]
Discard entries older than before_date to bound memory (Issue #315).
Before discarding, a per-account balance snapshot is computed so that get_balance(account, as_of_date) and get_trial_balance still return correct values for dates >= the prune point.
Entries with date < before_date are removed. The current balance cache (_balances) is unaffected because it already holds the cumulative totals.
before_date (int) – Entries with date strictly less than this value are pruned.
- Return type:
int
- Returns:
Number of entries removed.
Note
After pruning, historical queries for dates prior to before_date will reflect the snapshot balance at the prune boundary, not the true historical balance at that earlier date.
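Example
An illustrative sketch of the memory-bounding workflow (entry counts and amounts are made up for the example):
from ergodic_insurance.ledger import AccountName, Ledger, TransactionType

ledger = Ledger()
for year in range(20):
    ledger.record_double_entry(
        date=year,
        debit_account=AccountName.CASH,
        credit_account=AccountName.REVENUE,
        amount=1_000,
        transaction_type=TransactionType.REVENUE,
    )

removed = ledger.prune_entries(before_date=10)  # drop entries for years 0-9
# Current balances are unchanged; only entry-level history is discarded.
assert ledger.get_balance(AccountName.CASH) == 20_000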
- clear() None[source]
Clear all entries from the ledger.
Useful for resetting the ledger during simulation reset. Also resets the balance cache (Issue #259) and pruning state (Issue #315).
- Return type:
None
ergodic_insurance.loss_distributions module
Enhanced loss distributions for manufacturing risk modeling.
This module provides parametric loss distributions for realistic insurance claim modeling, including attritional losses, large losses, and catastrophic events with revenue-dependent frequency scaling.
- class LossDistribution(seed: int | SeedSequence | None = None)[source]
Bases:
ABC
Abstract base class for loss severity distributions.
Provides a common interface for generating loss amounts and calculating statistical properties of the distribution.
- class LognormalLoss(mean: float | None = None, cv: float | None = None, mu: float | None = None, sigma: float | None = None, seed: int | None = None)[source]
Bases:
LossDistribution
Lognormal loss severity distribution.
Common for attritional and large losses in manufacturing. Parameters can be specified as either (mean, cv) or (mu, sigma).
- class ParetoLoss(alpha: float, xm: float, seed: int | None = None)[source]
Bases:
LossDistribution
Pareto loss severity distribution for catastrophic events.
Heavy-tailed distribution suitable for modeling extreme losses with potentially unbounded severity.
- class GeneralizedParetoLoss(severity_shape: float, severity_scale: float, seed: int | SeedSequence | None = None)[source]
Bases:
LossDistribution
Generalized Pareto distribution for modeling excesses over threshold.
Implements the GPD using scipy.stats.genpareto for Peaks Over Threshold (POT) extreme value modeling. According to the Pickands-Balkema-de Haan theorem, excesses over a sufficiently high threshold asymptotically follow a GPD.
The distribution models: P(X - u | X > u) ~ GPD(ξ, β)
Shape parameter interpretation:
- ξ < 0: Bounded distribution (Type III - short-tailed)
- ξ = 0: Exponential distribution (Type I - medium-tailed)
- ξ > 0: Pareto-type distribution (Type II - heavy-tailed)
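Example
An illustrative Peaks Over Threshold sketch using scipy.stats.genpareto (the threshold, shape, and scale values are assumptions, not package defaults):
import numpy as np
from scipy.stats import genpareto

xi, beta, threshold = 0.3, 500_000.0, 1_000_000.0  # shape, scale, threshold u

gpd = genpareto(c=xi, scale=beta)  # scipy calls the shape parameter c
excesses = gpd.rvs(size=10_000, random_state=42)
losses = threshold + excesses  # reattach the threshold to get full losses

# With xi > 0 the tail is Pareto-like, so high quantiles grow rapidly.
print(f"99th percentile loss: {np.quantile(losses, 0.99):,.0f}")
print(f"P(excess > 5 * beta): {gpd.sf(5 * beta):.4f}")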
- class LossEvent(amount: float, time: float = 0.0, loss_type: str = 'operational', timestamp: float | None = None, event_type: str | None = None, description: str | None = None) None[source]
Bases:
object
Represents a single loss event with timing and amount.
- class LossData(timestamps: ndarray = <factory>, loss_amounts: ndarray = <factory>, loss_types: List[str] = <factory>, claim_ids: List[str] = <factory>, development_factors: ndarray | None = None, metadata: Dict[str, Any] = <factory>) None[source]
Bases:
object
Unified loss data structure for cross-module compatibility.
This dataclass provides a standardized interface for loss data that can be used consistently across all modules in the framework.
- validate() bool[source]
Validate data consistency.
- Return type:
bool
- Returns:
True if data is valid and consistent, False otherwise.
- to_ergodic_format() ergodic_insurance.ergodic_analyzer.ErgodicData[source]
Convert to ergodic analyzer format.
- Return type:
ergodic_insurance.ergodic_analyzer.ErgodicData
- Returns:
Data formatted for ergodic analysis.
- apply_insurance(program: ergodic_insurance.insurance_program.InsuranceProgram) LossData[source]
Apply insurance recoveries to losses.
- Parameters:
program (ergodic_insurance.insurance_program.InsuranceProgram) – Insurance program to apply.
- Return type:
LossData
- Returns:
New LossData with insurance recoveries applied.
- classmethod from_loss_events(events: List[LossEvent]) LossData[source]
Create LossData from a list of LossEvent objects.
- class FrequencyGenerator(base_frequency: float, revenue_scaling_exponent: float = 0.0, reference_revenue: float = 10000000, seed: int | None = None)[source]
Bases:
object
Base class for generating loss event frequencies.
Supports revenue-dependent scaling of claim frequencies.
- reseed(seed) None[source]
Re-seed the random state.
- Parameters:
seed – New random seed (int or SeedSequence).
- Return type:
None
- class AttritionalLossGenerator(base_frequency: float = 5.0, severity_mean: float = 25000, severity_cv: float = 1.5, revenue_scaling_exponent: float = 0.5, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]
Bases:
object
Generator for high-frequency, low-severity attritional losses.
Typical for widget manufacturing: worker injuries, quality defects, minor property damage.
- class LargeLossGenerator(base_frequency: float = 0.3, severity_mean: float = 2000000, severity_cv: float = 2.0, revenue_scaling_exponent: float = 0.7, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]
Bases:
object
Generator for medium-frequency, medium-severity large losses.
Typical for manufacturing: product recalls, major equipment failures, litigation settlements.
- class CatastrophicLossGenerator(base_frequency: float = 0.03, severity_alpha: float = 2.5, severity_xm: float = 1000000, revenue_scaling_exponent: float = 0.0, reference_revenue: float = 10000000, exposure: ExposureBase | None = None, seed: int | None = None)[source]
Bases:
object
Generator for low-frequency, high-severity catastrophic losses.
Uses Pareto distribution for heavy-tailed severity modeling. Examples: major equipment failure, facility damage, environmental disasters.
- reseed(seed) None[source]
Re-seed all internal random states.
- Parameters:
seed – New random seed (int or SeedSequence). A SeedSequence is used internally to derive independent child seeds for frequency and severity.
- Return type:
None
- class ManufacturingLossGenerator(attritional_params: dict | None = None, large_params: dict | None = None, catastrophic_params: dict | None = None, extreme_params: dict | None = None, exposure: ExposureBase | None = None, seed: int | None = None)[source]
Bases:
object
Composite loss generator for widget manufacturing risks.
Combines attritional, large, and catastrophic loss generators to provide comprehensive risk modeling.
- reseed(seed: int) None[source]
Re-seed all internal random states using SeedSequence.
Derives independent child seeds for each sub-generator so that parallel workers produce statistically distinct loss sequences.
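Example
The child-seed pattern described here can be sketched with numpy's SeedSequence (the sub-generator names are illustrative):
import numpy as np

parent = np.random.SeedSequence(42)
# One independent child seed per sub-generator, as the docstring describes.
attritional_ss, large_ss, catastrophic_ss = parent.spawn(3)
rngs = [np.random.default_rng(ss) for ss in (attritional_ss, large_ss, catastrophic_ss)]

# The three streams are statistically independent.
print([rng.standard_normal() for rng in rngs])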
- classmethod create_simple(frequency: float = 0.1, severity_mean: float = 5000000, severity_std: float = 2000000, seed: int | None = None) ManufacturingLossGenerator[source]
Create a simple loss generator (migration helper from ClaimGenerator).
This factory method provides a simplified interface similar to ClaimGenerator, making migration easier. It creates a generator with mostly attritional losses and minimal catastrophic risk.
- Parameters:
- Return type:
ManufacturingLossGenerator
- Returns:
ManufacturingLossGenerator configured for simple use case.
Examples
Simple usage (equivalent to ClaimGenerator):
generator = ManufacturingLossGenerator.create_simple(
    frequency=0.1,
    severity_mean=5_000_000,
    severity_std=2_000_000,
    seed=42
)
losses, stats = generator.generate_losses(duration=10, revenue=10_000_000)
Accessing loss amounts:
total_loss = sum(loss.amount for loss in losses)
print(f"Total losses: ${total_loss:,.0f}")
print(f"Number of events: {stats['total_losses']}")
Note
For advanced features (multiple loss types, extreme value modeling), use the standard __init__ method with explicit parameters.
See also
Migration guide: docs/migration_guides/claim_generator_migration.md
- generate_losses(duration: float, revenue: float, include_catastrophic: bool = True, time: float = 0.0) Tuple[List[LossEvent], Dict[str, Any]][source]
Generate all types of losses for manufacturing operations.
- Parameters:
- Return type:
Tuple[List[LossEvent], Dict[str, Any]]
- Returns:
Tuple of (all_losses, statistics_dict).
ergodic_insurance.manufacturer module
Widget manufacturer financial model implementation.
This module implements the core financial model for a widget manufacturing company, providing comprehensive balance sheet management, insurance claim processing, and stochastic modeling capabilities. It serves as the central component of the ergodic insurance optimization framework.
- The manufacturer model simulates realistic business operations including:
Asset-based revenue generation with configurable turnover ratios
Operating income calculations with industry-standard margins
Multi-layer insurance claim processing with deductibles and limits
Letter of credit collateral management for claim liabilities
Actuarial claim payment schedules over multiple years
Dynamic balance sheet evolution with growth and volatility
Integration with sophisticated stochastic processes
Comprehensive financial metrics and ratio analysis
- Key Components:
WidgetManufacturer: Main financial model class
ClaimLiability: Actuarial claim payment tracking (re-exported)
TaxHandler: Tax calculation and accrual (re-exported)
Examples
Basic manufacturer setup and simulation:
from ergodic_insurance import ManufacturerConfig, WidgetManufacturer
config = ManufacturerConfig(
    initial_assets=10_000_000,
    asset_turnover_ratio=0.8,
    base_operating_margin=0.08,
    tax_rate=0.25,
    retention_ratio=0.7
)
manufacturer = WidgetManufacturer(config)
metrics = manufacturer.step(
    letter_of_credit_rate=0.015,
    growth_rate=0.05
)
print(f"ROE: {metrics['roe']:.1%}")
- class WidgetManufacturer(config: ManufacturerConfig, stochastic_process: StochasticProcess | None = None)[source]
Bases:
BalanceSheetMixin, ClaimProcessingMixin, IncomeCalculationMixin, SolvencyMixin, MetricsCalculationMixin
Financial model for a widget manufacturing company.
This class models the complete financial operations of a manufacturing company including revenue generation, claim processing, collateral management, and balance sheet evolution over time.
The manufacturer maintains a balance sheet with assets, equity, and tracks insurance-related collateral. It can process insurance claims with multi-year payment schedules and manages working capital requirements.
- config
Manufacturing configuration parameters
- stochastic_process
Optional stochastic process for revenue volatility
- assets
Current total assets
- collateral
Letter of credit collateral for insurance claims
- restricted_assets
Assets restricted as collateral
- equity
Current equity (assets minus liabilities)
- year
Current simulation year
- outstanding_liabilities
List of active claim liabilities
- metrics_history
Historical metrics for each simulation period
- bankruptcy
Whether the company has gone bankrupt
- bankruptcy_year
Year when bankruptcy occurred (if applicable)
Example
Running a multi-year simulation:
manufacturer = WidgetManufacturer(config)
for year in range(10):
    losses, _ = loss_generator.generate_losses(duration=1, revenue=revenue)
    for loss in losses:
        manufacturer.process_insurance_claim(
            loss.amount, deductible, limit
        )
    metrics = manufacturer.step(letter_of_credit_rate=0.015)
    print(f"Year {year}: ROE={metrics['roe']:.1%}")
- __deepcopy__(memo: Dict[int, Any]) WidgetManufacturer[source]
Create a deep copy preserving all state for Monte Carlo forking.
- __getstate__() Dict[str, Any][source]
Get state for pickling (required for Windows multiprocessing).
- __setstate__(state: Dict[str, Any]) None[source]
Restore state from pickle (required for Windows multiprocessing).
- process_accrued_payments(time_resolution: str = 'annual', max_payable: Decimal | float | None = None) Decimal[source]
Process due accrual payments for the current period.
- record_wage_accrual(amount: float, payment_schedule: PaymentSchedule = PaymentSchedule.IMMEDIATE) None[source]
Record accrued wages to be paid later.
- Parameters:
amount (float) – Wage amount to accrue
payment_schedule (PaymentSchedule) – When wages will be paid
- Return type:
None
- step(letter_of_credit_rate: Decimal | float = 0.015, growth_rate: Decimal | float = 0.0, time_resolution: str = 'annual', apply_stochastic: bool = False) Dict[str, Decimal | float | int | bool][source]
Execute one time step of the financial model simulation.
- Parameters:
- Returns:
Comprehensive financial metrics dictionary.
- Return type:
Dict[str, Decimal | float | int | bool]
- reset() None[source]
Reset the manufacturer to initial state for new simulation.
This method restores all financial parameters to their configured initial values and clears historical data, enabling fresh simulation runs from the same starting point.
Bug Fixes (Issue #305):
- FIX 1: Uses config.ppe_ratio directly instead of recalculating from margins
- FIX 2: Initializes AR/Inventory to steady-state (matching __init__) instead of zero
- Return type:
None
- copy() WidgetManufacturer[source]
Create a deep copy of the manufacturer for parallel simulations.
- Returns:
A new manufacturer instance with same configuration.
- Return type:
WidgetManufacturer
- classmethod create_fresh(config: ManufacturerConfig, stochastic_process: StochasticProcess | None = None) WidgetManufacturer[source]
Create a fresh manufacturer from configuration alone.
Factory method that avoids
copy.deepcopy by constructing a new instance directly from its config. Use this in hot loops (e.g. Monte Carlo workers) where each simulation needs a clean initial state.
- Parameters:
config (ManufacturerConfig) – Manufacturing configuration parameters.
stochastic_process (Optional[StochasticProcess]) – Optional stochastic process instance. The caller is responsible for ensuring independence (e.g. by deep-copying the process once before passing it in).
- Return type:
WidgetManufacturer
- Returns:
A new WidgetManufacturer in its initial state.
ergodic_insurance.monte_carlo module
High-performance Monte Carlo simulation engine for insurance optimization.
- class SimulationConfig(n_simulations: int = 100000, n_years: int = 10, n_chains: int = 4, parallel: bool = True, n_workers: int | None = None, chunk_size: int = 10000, use_float32: bool = False, cache_results: bool = True, checkpoint_interval: int | None = None, progress_bar: bool = True, seed: int | None = None, use_enhanced_parallel: bool = True, monitor_performance: bool = True, adaptive_chunking: bool = True, shared_memory: bool = True, enable_trajectory_storage: bool = False, trajectory_storage_config: StorageConfig | None = None, enable_advanced_aggregation: bool = True, aggregation_config: AggregationConfig | None = None, generate_summary_report: bool = False, summary_report_format: str = 'markdown', compute_bootstrap_ci: bool = False, bootstrap_confidence_level: float = 0.95, bootstrap_n_iterations: int = 10000, bootstrap_method: str = 'percentile', ruin_evaluation: List[int] | None = None, insolvency_tolerance: float = 10000, letter_of_credit_rate: float = 0.015, growth_rate: float = 0.0, time_resolution: str = 'annual', apply_stochastic: bool = False, enable_ledger_pruning: bool = False, crn_base_seed: int | None = None) None[source]
Bases:
objectConfiguration for Monte Carlo simulation.
- n_simulations
Number of simulation paths
- n_years
Number of years per simulation
- n_chains
Number of parallel chains for convergence
- parallel
Whether to use multiprocessing
- n_workers
Number of parallel workers (None for auto)
- chunk_size
Size of chunks for parallel processing
- use_float32
Use float32 for memory efficiency
- cache_results
Cache intermediate results
- checkpoint_interval
Save checkpoint every N simulations
- progress_bar
Show progress bar
- seed
Random seed for reproducibility
- use_enhanced_parallel
Use enhanced parallel executor for better performance
- monitor_performance
Track detailed performance metrics
- adaptive_chunking
Enable adaptive chunk sizing
- shared_memory
Enable shared memory for read-only data
- letter_of_credit_rate
Annual LoC rate for collateral costs (default 1.5%)
- growth_rate
Revenue growth rate per period (default 0.0)
- time_resolution
Time step resolution, “annual” or “monthly” (default “annual”)
- apply_stochastic
Whether to apply stochastic shocks (default False)
- enable_ledger_pruning
Prune old ledger entries each year to bound memory (default False)
- crn_base_seed
Common Random Numbers base seed for cross-scenario comparison. When set, the loss generator and stochastic process are reseeded at each (sim_id, year) boundary using SeedSequence([crn_base_seed, sim_id, year]). This ensures that compared scenarios (e.g. different deductibles) experience the same underlying random draws each year, dramatically reducing estimator variance for growth-lift metrics. (default None, disabled) See the sketch after the attribute list below.
- trajectory_storage_config: StorageConfig | None = None
- aggregation_config: AggregationConfig | None = None
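A minimal sketch of the SeedSequence([crn_base_seed, sim_id, year]) derivation described for crn_base_seed above (the yearly_rng helper is hypothetical, not part of the package):
import numpy as np

def yearly_rng(crn_base_seed: int, sim_id: int, year: int) -> np.random.Generator:
    # Derive the per-(simulation, year) stream from the composite entropy.
    return np.random.default_rng(np.random.SeedSequence([crn_base_seed, sim_id, year]))

# Two scenarios (e.g. different deductibles) see identical draws for the
# same simulation and year, so their difference has low variance.
draws_a = yearly_rng(7, sim_id=0, year=3).standard_normal(5)
draws_b = yearly_rng(7, sim_id=0, year=3).standard_normal(5)
assert np.allclose(draws_a, draws_b)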
- class SimulationResults(final_assets: ndarray, annual_losses: ndarray, insurance_recoveries: ndarray, retained_losses: ndarray, growth_rates: ndarray, ruin_probability: Dict[str, float], metrics: Dict[str, float], convergence: Dict[str, ConvergenceStats], execution_time: float, config: SimulationConfig, performance_metrics: PerformanceMetrics | None = None, aggregated_results: Dict[str, Any] | None = None, time_series_aggregation: Dict[str, Any] | None = None, statistical_summary: Any | None = None, summary_report: str | None = None, bootstrap_confidence_intervals: Dict[str, Tuple[float, float]] | None = None) None[source]
Bases:
object
Results from Monte Carlo simulation.
- final_assets
Final asset values for each simulation
- annual_losses
Annual loss amounts
- insurance_recoveries
Insurance recovery amounts
- retained_losses
Retained loss amounts
- growth_rates
Realized growth rates
- ruin_probability
Probability of ruin
- metrics
Risk metrics calculated from results
- convergence
Convergence statistics
- execution_time
Total execution time in seconds
- config
Simulation configuration used
- performance_metrics
Detailed performance metrics (if monitoring enabled)
- aggregated_results
Advanced aggregation results (if enabled)
- time_series_aggregation
Time series aggregation results (if enabled)
- statistical_summary
Complete statistical summary (if enabled)
- summary_report
Formatted summary report (if generated)
- bootstrap_confidence_intervals
Bootstrap confidence intervals for key metrics
- convergence: Dict[str, ConvergenceStats]
- config: SimulationConfig
- performance_metrics: PerformanceMetrics | None = None
- class MonteCarloEngine(loss_generator: ManufacturingLossGenerator, insurance_program: InsuranceProgram, manufacturer: WidgetManufacturer, config: SimulationConfig | None = None)[source]
Bases:
object
High-performance Monte Carlo simulation engine for insurance analysis.
Provides efficient Monte Carlo simulation with support for parallel processing, convergence monitoring, checkpointing, and comprehensive result aggregation. Optimized for both high-end and budget hardware configurations.
Examples
Basic Monte Carlo simulation:
from .monte_carlo import MonteCarloEngine, SimulationConfig
from .loss_distributions import ManufacturingLossGenerator
from .insurance_program import InsuranceProgram
from .manufacturer import WidgetManufacturer

# Configure simulation
config = SimulationConfig(
    n_simulations=10000,
    n_years=20,
    parallel=True,
    n_workers=4
)

# Create components
loss_gen = ManufacturingLossGenerator()
insurance = InsuranceProgram.create_standard_program()
manufacturer = WidgetManufacturer.from_config()

# Run Monte Carlo
engine = MonteCarloEngine(
    loss_generator=loss_gen,
    insurance_program=insurance,
    manufacturer=manufacturer,
    config=config
)
results = engine.run()
print(f"Survival rate: {results.survival_rate:.1%}")
print(f"Mean ROE: {results.mean_roe:.2%}")
Advanced simulation with convergence monitoring:
# Enable convergence checking
config = SimulationConfig(
    n_simulations=100000,
    check_convergence=True,
    convergence_tolerance=0.001,
    min_iterations=1000
)

engine = MonteCarloEngine(
    loss_generator=loss_gen,
    insurance_program=insurance,
    manufacturer=manufacturer,
    config=config
)

# Run with progress tracking
results = engine.run(show_progress=True)

# Check convergence
if results.converged:
    print(f"Converged after {results.iterations} iterations")
    print(f"Standard error: {results.standard_error:.4f}")
- loss_generator
Generator for manufacturing loss events
- insurance_program
Insurance coverage structure
- manufacturer
Manufacturing company financial model
- config
Simulation configuration parameters
- convergence_diagnostics
Convergence monitoring tools
See also
SimulationConfig: Configuration parameters
MonteCarloResults: Simulation results container
ParallelExecutor: Enhanced parallel processing
ConvergenceDiagnostics: Convergence analysis tools
- trajectory_storage: TrajectoryStorage | None
- result_aggregator: ResultAggregator | None
- time_series_aggregator: TimeSeriesAggregator | None
- summary_statistics: SummaryStatistics | None
- run() SimulationResults[source]
Execute Monte Carlo simulation.
- Return type:
SimulationResults
- Returns:
SimulationResults object with all outputs
- export_results(results: SimulationResults, filepath: Path, file_format: str = 'csv') None[source]
Export simulation results to file.
- Parameters:
results (SimulationResults) – Simulation results to export
filepath (Path) – Output file path
file_format (str) – Export format (‘csv’, ‘json’, ‘hdf5’)
- Return type:
None
- compute_bootstrap_confidence_intervals(results: SimulationResults, confidence_level: float = 0.95, n_bootstrap: int = 10000, method: str = 'percentile', show_progress: bool = False) Dict[str, Tuple[float, float]][source]
Compute bootstrap confidence intervals for key simulation metrics.
- Parameters:
results (SimulationResults) – Simulation results to analyze.
confidence_level (float) – Confidence level for intervals (default 0.95).
n_bootstrap (int) – Number of bootstrap iterations (default 10000).
method (str) – Bootstrap method (‘percentile’ or ‘bca’).
show_progress (bool) – Whether to show progress bar.
- Return type:
Dict[str, Tuple[float, float]]
- Returns:
Dictionary mapping metric names to (lower, upper) confidence bounds.
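Example
For the ‘percentile’ method, the underlying resampling idea can be sketched in plain numpy (percentile_bootstrap_ci is an illustrative helper, not the engine's implementation):
import numpy as np

def percentile_bootstrap_ci(data, confidence_level=0.95, n_bootstrap=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Resample with replacement and record the statistic of each resample.
    means = np.array([
        rng.choice(data, size=len(data), replace=True).mean()
        for _ in range(n_bootstrap)
    ])
    alpha = (1 - confidence_level) / 2
    return np.quantile(means, alpha), np.quantile(means, 1 - alpha)

growth_rates = np.random.default_rng(1).normal(0.04, 0.10, size=500)
low, high = percentile_bootstrap_ci(growth_rates)
print(f"95% CI for mean growth: [{low:.4f}, {high:.4f}]")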
- run_with_progress_monitoring(check_intervals: List[int] | None = None, convergence_threshold: float = 1.1, early_stopping: bool = True, show_progress: bool = True) SimulationResults[source]
Run simulation with progress tracking and convergence monitoring.
- Return type:
SimulationResults
- run_with_convergence_monitoring(target_r_hat: float = 1.05, check_interval: int = 10000, max_iterations: int | None = None) SimulationResults[source]
Run simulation with automatic convergence monitoring.
- Parameters:
- Return type:
SimulationResults
- Returns:
Converged simulation results
- estimate_ruin_probability(config: RuinProbabilityConfig | None = None) RuinProbabilityResults[source]
Estimate ruin probability over multiple time horizons.
Delegates to RuinProbabilityAnalyzer for specialized analysis.
- Parameters:
config (Optional[RuinProbabilityConfig]) – Configuration for ruin probability estimation
- Return type:
RuinProbabilityResults
- Returns:
RuinProbabilityResults with comprehensive bankruptcy analysis
ergodic_insurance.monte_carlo_worker module
Standalone worker function for multiprocessing Monte Carlo simulations.
- run_chunk_standalone(chunk: Tuple[int, int, int | None], loss_generator: ManufacturingLossGenerator, insurance_program: InsuranceProgram, manufacturer: WidgetManufacturer, config_dict: Dict[str, Any]) Dict[str, ndarray | List[Dict[int, bool]]][source]
Standalone function to run a chunk of simulations for multiprocessing.
This function is independent of the MonteCarloEngine class and can be pickled for multiprocessing on all platforms including Windows.
- Parameters:
chunk (Tuple[int, int, Optional[int]]) – Tuple of (start_idx, end_idx, seed)
loss_generator (ManufacturingLossGenerator) – Loss generator instance
insurance_program (InsuranceProgram) – Insurance program instance
manufacturer (WidgetManufacturer) – Manufacturer instance
config_dict (Dict[str, Any]) – Configuration dictionary with necessary parameters
- Return type:
Dict[str, ndarray | List[Dict[int, bool]]]
- Returns:
Dictionary with simulation results for the chunk
ergodic_insurance.optimal_control module
Optimal control strategies for insurance decisions.
This module provides implementations of various control strategies derived from the HJB solver, including feedback control laws, state-dependent insurance limits, and integration with the simulation framework.
- Key Components:
ControlSpace: Defines feasible insurance control parameters
ControlStrategy: Abstract base for control strategies
StaticControl: Fixed insurance parameters throughout simulation
HJBFeedbackControl: State-dependent optimal control from HJB solution
TimeVaryingControl: Predetermined time-based control schedule
OptimalController: Integrates control strategies with simulations
- Typical Workflow:
Solve HJB equation to get optimal policy
Create control strategy (e.g., HJBFeedbackControl)
Initialize OptimalController with strategy
Apply controls in simulation loop
Track and analyze performance
Example
>>> # Solve HJB problem
>>> solver = HJBSolver(problem, config)
>>> value_func, policy = solver.solve()
>>>
>>> # Create feedback control
>>> control_space = ControlSpace(
... limits=[(1e6, 5e7)],
... retentions=[(1e5, 1e7)]
... )
>>> strategy = HJBFeedbackControl(solver, control_space)
>>>
>>> # Apply in simulation
>>> controller = OptimalController(strategy, control_space)
>>> insurance = controller.apply_control(manufacturer, time=t)
- Author:
Alex Filiakov
- Date:
2025-01-26
- class ControlMode(*values)[source]
Bases:
Enum
Mode of control application.
- STATIC
Fixed control parameters that never change.
- STATE_FEEDBACK
Control depends on current system state.
- TIME_VARYING
Control follows predetermined time schedule.
- ADAPTIVE
Control adapts based on observed history.
- STATIC = 'static'
- STATE_FEEDBACK = 'state_feedback'
- TIME_VARYING = 'time_varying'
- ADAPTIVE = 'adaptive'
- class ControlSpace(limits: List[Tuple[float, float]], retentions: List[Tuple[float, float]], coverage_percentages: List[Tuple[float, float]] = <factory>, reinsurance_limits: List[Tuple[float, float]] | None = None) None[source]
Bases:
object
Definition of the control space for insurance decisions.
- __post_init__()[source]
Validate control space configuration.
- Raises:
ValueError – If limits and retentions have different number of layers.
ValueError – If coverage percentages don’t match number of layers.
ValueError – If any bounds are invalid (min >= max).
ValueError – If coverage percentages are outside [0, 1] range.
- get_dimensions() int[source]
Get total number of control dimensions.
- Returns:
Total number of control variables across all layers and control types.
- Return type:
int
Note
Used for determining the size of control vectors in optimization algorithms.
- to_array(limits: List[float], retentions: List[float], coverages: List[float] | None = None) ndarray[source]
Convert control parameters to array format.
- Parameters:
- Returns:
Flattened control array suitable for optimization algorithms.
- Return type:
np.ndarray
Note
Array order is: [limits, retentions, coverages].
- from_array(control_array: ndarray) Dict[str, List[float]][source]
Convert control array back to named parameters.
- Parameters:
control_array (ndarray) – Flattened control array from optimization.
- Returns:
Dictionary with keys ‘limits’, ‘retentions’, and ‘coverages’ mapping to lists of values for each layer.
- Return type:
Dict[str, List[float]]
Note
Inverse operation of to_array().
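Example
The documented [limits, retentions, coverages] array order can be illustrated in plain numpy (this mirrors the convention described above, not the class's actual implementation):
import numpy as np

limits = [20_000_000.0]
retentions = [500_000.0]
coverages = [1.0]

# to_array-style flattening: one slice per control type, in documented order.
control_array = np.concatenate([limits, retentions, coverages])

# from_array-style unpacking reverses the slicing.
n = len(limits)
decoded = {
    "limits": list(control_array[:n]),
    "retentions": list(control_array[n:2 * n]),
    "coverages": list(control_array[2 * n:]),
}
assert decoded["limits"] == limits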
- class ControlStrategy[source]
Bases:
ABC
Abstract base class for control strategies.
All control strategies must implement methods to: 1. Determine control actions based on state/time 2. Update internal parameters based on outcomes
- abstractmethod get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]
Get control action for current state and time.
- Parameters:
- Returns:
Control actions with keys ‘limits’, ‘retentions’, and ‘coverages’, each mapping to lists of values.
- Return type:
Dict[str, Any]
- class StaticControl(limits: List[float], retentions: List[float], coverages: List[float] | None = None)[source]
Bases:
ControlStrategy
Static control strategy with fixed parameters.
This is the simplest control strategy where insurance parameters remain constant throughout the simulation.
- class HJBFeedbackControl(hjb_solver: HJBSolver, control_space: ControlSpace, state_mapping: Callable[[Dict[str, float]], ndarray] | None = None)[source]
Bases:
ControlStrategy
State-feedback control derived from HJB solution.
This strategy uses the optimal policy computed by the HJB solver to determine insurance parameters based on the current state.
- get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]
Get optimal control from HJB policy.
- Parameters:
- Returns:
Optimal control parameters with keys ‘limits’, ‘retentions’, and ‘coverages’.
- Return type:
Dict[str, Any]
Note
Uses linear interpolation of the HJB optimal policy between grid points.
- class TimeVaryingControl(time_schedule: List[float], limits_schedule: List[List[float]], retentions_schedule: List[List[float]], coverages_schedule: List[List[float]] | None = None)[source]
Bases:
ControlStrategy
Time-varying control strategy with predetermined schedule.
This strategy adjusts insurance parameters according to a predetermined time schedule, useful for seasonal or cyclical risks.
- get_control(state: Dict[str, float], time: float = 0.0) Dict[str, Any][source]
Get control parameters for current time.
- Parameters:
- Returns:
Control parameters interpolated linearly between scheduled time points.
- Return type:
Dict[str, Any]
Note
Uses nearest value for times outside the schedule range.
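Example
The stated rule maps onto numpy's np.interp, which interpolates linearly inside the schedule and clamps to the nearest endpoint outside it. A single-layer sketch (schedule values are assumptions):
import numpy as np

time_schedule = [0.0, 5.0, 10.0]
limits_schedule = [10e6, 20e6, 15e6]  # one limit per scheduled time

for t in [2.5, 7.0, 12.0]:
    limit = np.interp(t, time_schedule, limits_schedule)
    print(f"t={t}: limit={limit:,.0f}")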
- class OptimalController(strategy: ControlStrategy, control_space: ControlSpace)[source]
Bases:
object
Controller that applies optimal strategies in simulation.
This class integrates control strategies with the simulation framework, managing the application of controls and tracking performance.
- apply_control(manufacturer: WidgetManufacturer, state: Dict[str, float] | None = None, time: float = 0.0) InsuranceProgram[source]
Apply control strategy to create insurance program.
- Parameters:
- Returns:
Insurance program with layers configured according to the control strategy.
- Return type:
Note
Records control and state in history for later analysis.
- update_outcome(outcome: Dict[str, float])[source]
Update controller with observed outcome.
- Parameters:
outcome (Dict[str, float]) – Observed outcome dictionary with keys like ‘losses’, ‘premium_costs’, ‘claim_payments’, etc.
Note
Calls strategy.update() if strategy is adaptive.
- get_performance_summary() DataFrame[source]
Get summary of controller performance.
- Returns:
DataFrame with columns for step number, state variables (prefixed with state_), control variables (prefixed with control_), and outcomes (prefixed with outcome_).
- Return type:
pd.DataFrame
Note
Useful for analyzing control strategy effectiveness and creating visualizations.
- create_hjb_controller(manufacturer: WidgetManufacturer, simulation_years: int = 10, utility_type: str = 'log', risk_aversion: float = 2.0) OptimalController[source]
Convenience function to create HJB-based controller.
Creates and solves a simplified HJB problem for insurance optimization, then returns a controller configured with the optimal policy.
- Parameters:
manufacturer (WidgetManufacturer) – Manufacturer instance for extracting model parameters like growth rates and risk characteristics.
simulation_years (int) – Time horizon for optimization. Longer horizons may require more grid points for accuracy.
utility_type (str) – Type of utility function: ‘log’ for logarithmic utility (Kelly criterion), ‘power’ for power/CRRA utility with risk aversion, ‘linear’ for risk-neutral expected wealth.
risk_aversion (float) – Coefficient of relative risk aversion for power utility. Higher values imply more conservative policies. Ignored for log and linear utilities.
- Returns:
Controller with HJB feedback strategy configured for the specified problem.
- Return type:
OptimalController
- Raises:
ValueError – If utility_type is not recognized.
Example
>>> from ergodic_insurance.manufacturer import WidgetManufacturer
>>> from ergodic_insurance.config import ManufacturerConfig
>>>
>>> # Set up manufacturer
>>> config = ManufacturerConfig()
>>> manufacturer = WidgetManufacturer(config)
>>>
>>> # Create HJB controller with power utility
>>> controller = create_hjb_controller(
...     manufacturer,
...     simulation_years=10,
...     utility_type="power",
...     risk_aversion=2.0
... )
>>>
>>> # Apply control at current state
>>> insurance = controller.apply_control(manufacturer, time=0)
>>>
>>> # Run simulation step
>>> losses = manufacturer.generate_losses()
>>> manufacturer.apply_losses(losses, insurance)
>>>
>>> # Update controller with outcome
>>> outcome = {'losses': losses, 'premium': insurance.total_premium}
>>> controller.update_outcome(outcome)
Note
This function creates a simplified 2D state space (wealth, time) and single-layer insurance for demonstration. Production systems would use higher-dimensional state spaces and multiple layers.
ergodic_insurance.optimization module
Advanced optimization algorithms for constrained insurance decision making.
This module implements sophisticated optimization methods including trust-region, penalty methods, augmented Lagrangian, and multi-start techniques for finding global optima in complex insurance optimization problems.
- class ConstraintType(*values)[source]
Bases:
Enum
Types of constraints in optimization.
- EQUALITY = 'eq'
- INEQUALITY = 'ineq'
- BOUNDS = 'bounds'
- class ConstraintViolation(constraint_name: str, violation_amount: float, constraint_type: ConstraintType, current_value: float, limit_value: float, is_satisfied: bool) None[source]
Bases:
object
Information about constraint violations.
- constraint_type: ConstraintType
- class ConvergenceMonitor(max_iterations: int = 1000, tolerance: float = 1e-06, objective_history: List[float] = <factory>, constraint_violation_history: List[float] = <factory>, gradient_norm_history: List[float] = <factory>, step_size_history: List[float] = <factory>, iteration_count: int = 0, converged: bool = False, convergence_message: str = '') None[source]
Bases:
object
Monitor and track convergence of optimization algorithms.
- class AdaptivePenaltyParameters(initial_penalty: float = 10.0, penalty_increase_factor: float = 2.0, max_penalty: float = 1000000.0, constraint_tolerance: float = 0.0001, penalty_update_frequency: int = 10, current_penalties: Dict[str, float] = <factory>) None[source]
Bases:
object
Parameters for adaptive penalty method.
- update_penalties(violations: List[ConstraintViolation])[source]
Update penalty parameters based on constraint violations.
- class TrustRegionOptimizer(objective_fn: Callable, gradient_fn: Callable | None = None, hessian_fn: Callable | None = None, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None)[source]
Bases:
object
Trust-region constrained optimization with adaptive radius adjustment.
- class PenaltyMethodOptimizer(objective_fn: Callable, constraints: List[Dict[str, Any]], bounds: Bounds | None = None)[source]
Bases:
object
Optimization using penalty method with adaptive penalty parameters.
- class AugmentedLagrangianOptimizer(objective_fn: Callable, constraints: List[Dict[str, Any]], bounds: Bounds | None = None)[source]
Bases:
object
Augmented Lagrangian method for constrained optimization.
- class MultiStartOptimizer(objective_fn: Callable, bounds: Bounds, constraints: List[Dict[str, Any]] | None = None, base_optimizer: str = 'SLSQP')[source]
Bases:
object
Multi-start optimization for finding global optima.
- class EnhancedSLSQPOptimizer(objective_fn: Callable, gradient_fn: Callable | None = None, constraints: List[Dict[str, Any]] | None = None, bounds: Bounds | None = None)[source]
Bases:
object
Enhanced SLSQP with adaptive step sizing and improved convergence.
ergodic_insurance.parallel_executor module
CPU-optimized parallel execution engine for Monte Carlo simulations.
This module provides enhanced parallel processing capabilities optimized for budget hardware (4-8 cores) with intelligent chunking, shared memory management, and minimal serialization overhead.
- Features:
Smart dynamic chunking based on CPU resources and workload
Shared memory for read-only data structures
CPU affinity optimization for cache locality
Minimal IPC overhead (<5% target)
Memory-efficient execution (<4GB for 100K simulations)
Example
>>> from ergodic_insurance.parallel_executor import ParallelExecutor
>>> executor = ParallelExecutor(n_workers=4)
>>> results = executor.map_reduce(
... work_function=simulate_path,
... work_items=range(100000),
... reduce_function=combine_results,
... shared_data={'config': simulation_config}
... )
- Author:
Alex Filiakov
- Date:
2025-08-26
- class CPUProfile(n_cores: int, n_threads: int, cache_sizes: Dict[str, int], available_memory: int, cpu_freq: float, system_load: float) None[source]
Bases:
object
CPU performance profile for optimization decisions.
- classmethod detect() CPUProfile[source]
Detect current CPU profile.
- Returns:
Current system CPU profile
- Return type:
CPUProfile
- class ChunkingStrategy(initial_chunk_size: int = 1000, min_chunk_size: int = 100, max_chunk_size: int = 10000, target_chunks_per_worker: int = 10, adaptive: bool = True, profile_samples: int = 100) None[source]
Bases:
object
Dynamic chunking strategy for parallel workloads.
Bases:
object
Configuration for shared memory optimization.
Bases:
object
Manager for shared memory resources.
Handles creation, access, and cleanup of shared memory segments for both numpy arrays and serialized objects.
Share a numpy array via shared memory.
Retrieve a shared numpy array.
Share a serialized object via shared memory.
Retrieve a shared object.
Clean up all shared memory resources.
Cleanup on deletion.
- class PerformanceMetrics(total_time: float = 0.0, setup_time: float = 0.0, computation_time: float = 0.0, serialization_time: float = 0.0, reduction_time: float = 0.0, memory_peak: int = 0, cpu_utilization: float = 0.0, items_per_second: float = 0.0, speedup: float = 1.0) None[source]
Bases:
object
Performance metrics for parallel execution.
- class ParallelExecutor(n_workers: int | None = None, chunking_strategy: ChunkingStrategy | None = None, shared_memory_config: SharedMemoryConfig | None = None, monitor_performance: bool = True)[source]
Bases:
object
CPU-optimized parallel executor for Monte Carlo simulations.
Provides intelligent work distribution, shared memory management, and performance monitoring for efficient parallel execution on budget hardware.
- map_reduce(work_function: Callable, work_items: List | range, reduce_function: Callable | None = None, shared_data: Dict[str, Any] | None = None, progress_bar: bool = True) Any[source]
Execute parallel map-reduce operation.
- Parameters:
work_function (Callable) – Function to apply to each work item
work_items (Union[List, range]) – List or range of work items
reduce_function (Optional[Callable]) – Function to combine results (None for list)
shared_data (Optional[Dict[str, Any]]) – Data to share across all workers
progress_bar (bool) – Show progress bar
- Returns:
Combined results from reduce function or list of results
- Return type:
Any
- parallel_map(func: Callable, items: List | range, n_workers: int | None = None, progress: bool = True) List[Any][source]
Simple parallel map operation.
ergodic_insurance.parameter_sweep module
Parameter sweep utilities for systematic exploration of parameter space.
This module provides utilities for systematic parameter sweeps across the full parameter space to identify optimal regions and validate robustness of recommendations across different scenarios.
- Features:
Efficient grid search across parameter combinations
Parallel execution for large sweeps using multiprocessing
Result aggregation and storage with HDF5/Parquet support
Scenario comparison tools for side-by-side analysis
Optimal region identification using percentile-based methods
Pre-defined scenarios for company sizes, loss scenarios, and market conditions
Adaptive refinement near optima for efficient exploration
Progress tracking and resumption capabilities
Example
>>> from ergodic_insurance.parameter_sweep import ParameterSweeper, SweepConfig
>>> from ergodic_insurance.business_optimizer import BusinessOptimizer
>>>
>>> # Create optimizer
>>> optimizer = BusinessOptimizer(manufacturer)
>>>
>>> # Initialize sweeper
>>> sweeper = ParameterSweeper(optimizer)
>>>
>>> # Define parameter sweep
>>> config = SweepConfig(
... parameters={
... "initial_assets": [1e6, 10e6, 100e6],
... "base_operating_margin": [0.05, 0.08, 0.12],
... "loss_frequency": [3, 5, 8]
... },
... fixed_params={"time_horizon": 10},
... metrics_to_track=["optimal_roe", "ruin_probability"]
... )
>>>
>>> # Execute sweep
>>> results = sweeper.sweep(config)
>>>
>>> # Find optimal regions
>>> optimal, summary = sweeper.find_optimal_regions(
... results,
... objective="optimal_roe",
... constraints={"ruin_probability": (0, 0.01)}
... )
- Author:
Alex Filiakov
- Date:
2025-08-29
- class SweepConfig(parameters: Dict[str, List[Any]], fixed_params: Dict[str, Any] = <factory>, metrics_to_track: List[str] = <factory>, n_workers: int | None = None, batch_size: int = 100, adaptive_refinement: bool = False, refinement_threshold: float = 90.0, save_intermediate: bool = True, cache_dir: str = './cache/sweeps') None[source]
Bases:
object
Configuration for parameter sweep.
- parameters
Dictionary mapping parameter names to lists of values to sweep
- fixed_params
Fixed parameters that don’t vary across sweep
- metrics_to_track
List of metric names to extract from results
- n_workers
Number of parallel workers for execution
- batch_size
Size of batches for parallel processing
- adaptive_refinement
Whether to adaptively refine near optima
- refinement_threshold
Percentile threshold for refinement (e.g., 90 for top 10%)
- save_intermediate
Whether to save intermediate results
- cache_dir
Directory for caching results
- class ParameterSweeper(optimizer: BusinessOptimizer | None = None, cache_dir: str = './cache/sweeps', use_parallel: bool = True)[source]
Bases:
object
Systematic parameter sweep utilities for insurance optimization.
This class provides methods for exploring the parameter space through grid search, identifying optimal regions, and comparing scenarios.
- optimizer
Business optimizer instance for running optimizations
- cache_dir
Directory for storing cached results
- results_cache
In-memory cache of optimization results
- use_parallel
Whether to use parallel processing
- sweep(config: SweepConfig, progress_callback: Callable | None = None) DataFrame[source]
Execute parameter sweep with parallel processing.
- Parameters:
config (SweepConfig) – Sweep configuration
progress_callback (Optional[Callable]) – Optional callback for progress updates
- Return type:
DataFrame
- Returns:
DataFrame containing sweep results with all parameter combinations and metrics
- create_scenarios() Dict[str, SweepConfig][source]
Create pre-defined scenario configurations.
- Return type:
- Returns:
Dictionary of scenario names to SweepConfig objects
- find_optimal_regions(results: DataFrame, objective: str = 'optimal_roe', constraints: Dict[str, Tuple[float, float]] | None = None, top_percentile: float = 90) Tuple[DataFrame, DataFrame][source]
Identify optimal parameter regions.
- Parameters:
results (DataFrame) – DataFrame of sweep results
objective (str) – Objective metric to optimize
constraints (Optional[Dict[str, Tuple[float, float]]]) – Dictionary mapping metric names to (min, max) constraint tuples
top_percentile (float) – Percentile threshold for optimal region (e.g., 90 for top 10%)
- Return type:
Tuple[DataFrame, DataFrame]
- Returns:
Tuple of (optimal results DataFrame, parameter statistics DataFrame)
- compare_scenarios(results: Dict[str, DataFrame], metrics: List[str] | None = None, normalize: bool = False) DataFrame[source]
Compare results across multiple scenarios.
- Parameters:
- Return type:
DataFrame
- Returns:
DataFrame with scenario comparison
ergodic_insurance.pareto_frontier module
Pareto frontier analysis for multi-objective optimization.
This module provides comprehensive tools for generating, analyzing, and visualizing Pareto frontiers in multi-objective optimization problems, particularly focused on insurance optimization trade-offs between ROE, risk, and costs.
- class ObjectiveType(*values)[source]
Bases:
Enum
Types of objectives in multi-objective optimization.
- MAXIMIZE = 'maximize'
- MINIMIZE = 'minimize'
- class Objective(name: str, type: ObjectiveType, weight: float = 1.0, normalize: bool = True, bounds: Tuple[float, float] | None = None) None[source]
Bases:
object
Definition of an optimization objective.
- name
Name of the objective (e.g., ‘ROE’, ‘risk’, ‘cost’)
- type
Whether to maximize or minimize this objective
- weight
Weight for weighted sum method (0-1)
- normalize
Whether to normalize this objective
- bounds
Optional bounds for this objective as (min, max)
- type: ObjectiveType
- class ParetoPoint(objectives: Dict[str, float], decision_variables: ndarray, is_dominated: bool = False, crowding_distance: float = 0.0, trade_offs: Dict[str, float] = <factory>) None[source]
Bases:
object
A point on the Pareto frontier.
- objectives
Dictionary of objective values
- decision_variables
The decision variables that produce these objectives
- is_dominated
Whether this point is dominated by another
- crowding_distance
Crowding distance metric for this point
- trade_offs
Trade-off ratios with neighboring points
- dominates(other: ParetoPoint, objectives: List[Objective]) bool[source]
Check if this point dominates another point.
- Parameters:
other (ParetoPoint) – Another Pareto point to compare
objectives (List[Objective]) – List of objectives to consider
- Return type:
- Returns:
True if this point dominates the other
- class ParetoFrontier(objectives: List[Objective], objective_function: Callable, bounds: List[Tuple[float, float]], constraints: List[Dict[str, Any]] | None = None, seed: int | None = None)[source]
Bases:
object
Generator and analyzer for Pareto frontiers.
This class provides methods for generating Pareto frontiers using various algorithms and analyzing the resulting trade-offs.
- frontier_points: List[ParetoPoint]
- generate_weighted_sum(n_points: int = 50, method: str = 'SLSQP') List[ParetoPoint][source]
Generate Pareto frontier using weighted sum method.
- Parameters:
- Return type:
- Returns:
List of Pareto points forming the frontier
- generate_epsilon_constraint(n_points: int = 50, method: str = 'SLSQP') List[ParetoPoint][source]
Generate Pareto frontier using epsilon-constraint method.
- Parameters:
- Return type:
- Returns:
List of Pareto points forming the frontier
- generate_evolutionary(n_generations: int = 100, population_size: int = 50) List[ParetoPoint][source]
Generate Pareto frontier using evolutionary algorithm.
- Parameters:
- Return type:
- Returns:
List of Pareto points forming the frontier
- calculate_hypervolume(reference_point: Dict[str, float] | None = None) float[source]
Calculate hypervolume indicator for the Pareto frontier.
- get_knee_points(n_knees: int = 1) List[ParetoPoint][source]
Find knee points on the Pareto frontier.
Knee points represent good trade-offs where small improvements in one objective require large sacrifices in others.
- Parameters:
n_knees (int) – Number of knee points to identify
- Return type:
- Returns:
List of knee points
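Example
A minimal construction sketch. The two-objective trade-off function, its dict-shaped return value, and the bounds are illustrative assumptions; only the constructor and method signatures above are taken from the API:
from ergodic_insurance.pareto_frontier import Objective, ObjectiveType, ParetoFrontier

objectives = [
    Objective(name="ROE", type=ObjectiveType.MAXIMIZE),
    Objective(name="risk", type=ObjectiveType.MINIMIZE),
]

def objective_function(x):
    # Toy trade-off: more spend (x[0]) reduces risk but drags ROE.
    # Returning a dict keyed by objective name is an assumption.
    return {"ROE": 0.15 - 0.10 * x[0], "risk": 0.05 / (x[0] + 0.1)}

frontier = ParetoFrontier(
    objectives=objectives,
    objective_function=objective_function,
    bounds=[(0.0, 1.0)],
    seed=42,
)
points = frontier.generate_weighted_sum(n_points=20)
knees = frontier.get_knee_points(n_knees=1)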
ergodic_insurance.performance_optimizer module
Performance optimization module for Monte Carlo simulations.
This module provides tools and strategies to optimize the performance of Monte Carlo simulations, targeting 100K simulations in under 60 seconds on budget hardware (4-core CPU, 8GB RAM).
- Key features:
Execution profiling and bottleneck identification
Vectorized operations for loss generation and insurance calculations
Smart caching for repeated calculations
Memory optimization for large-scale simulations
Integration with parallel execution framework
Example
>>> from ergodic_insurance.performance_optimizer import PerformanceOptimizer
>>> from ergodic_insurance.monte_carlo import MonteCarloEngine
>>>
>>> optimizer = PerformanceOptimizer()
>>> engine = MonteCarloEngine(config=config)
>>>
>>> # Profile execution
>>> profile_results = optimizer.profile_execution(engine, n_simulations=1000)
>>> print(profile_results.bottlenecks)
>>>
>>> # Apply optimizations
>>> optimized_engine = optimizer.optimize_engine(engine)
>>> results = optimized_engine.run()
Google-style docstrings are used throughout for Sphinx documentation.
- class ProfileResult(total_time: float, bottlenecks: List[str] = <factory>, function_times: Dict[str, float] = <factory>, memory_usage: float = 0.0, recommendations: List[str] = <factory>) None[source]
Bases:
object
Results from performance profiling.
- total_time
Total execution time in seconds
- bottlenecks
List of performance bottlenecks identified
- function_times
Dictionary mapping function names to execution times
- memory_usage
Peak memory usage in MB
- recommendations
List of optimization recommendations
- class OptimizationConfig(enable_vectorization: bool = True, enable_caching: bool = True, cache_size: int = 1000, enable_numba: bool = True, memory_limit_mb: float = 4000.0, chunk_size: int = 10000) None[source]
Bases:
object
Configuration for performance optimization.
- enable_vectorization
Use vectorized operations
- enable_caching
Use smart caching
- cache_size
Maximum cache entries
- enable_numba
Use Numba JIT compilation
- memory_limit_mb
Memory usage limit in MB
- chunk_size
Chunk size for batch processing
- class SmartCache(max_size: int = 1000)[source]
Bases:
object
Smart caching system for repeated calculations.
Provides intelligent caching with memory management and hit rate tracking.
- class VectorizedOperations[source]
Bases:
object
Vectorized operations for performance optimization.
- static calculate_growth_rates(final_assets: ndarray, initial_assets: float, n_years: float) ndarray[source]
Calculate growth rates using vectorized operations.
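For reference, the equivalent NumPy computation under the assumption that growth rates are annualized log growth rates (the exact convention used by the implementation is not stated here):
import numpy as np

final_assets = np.array([12e6, 9e6, 15e6])
initial_assets = 10e6
n_years = 10.0

# Annualized log growth rate per path (assumed convention)
growth_rates = np.log(final_assets / initial_assets) / n_years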
- class PerformanceOptimizer(config: OptimizationConfig | None = None)[source]
Bases:
object
Main performance optimization engine.
Provides profiling, optimization, and monitoring capabilities for Monte Carlo simulations.
- profile_execution(func: Callable, *args, **kwargs) ProfileResult[source]
Profile function execution to identify bottlenecks.
- Parameters:
func (Callable) – Function to profile.
*args – Positional arguments for function.
**kwargs – Keyword arguments for function.
- Return type:
- Returns:
ProfileResult with profiling data.
- optimize_loss_generation(losses: List[float], batch_size: int = 10000) ndarray[source]
Optimize loss generation using vectorization.
- optimize_insurance_calculation(losses: ndarray, layers: List[Tuple[float, float, float]]) Dict[str, Any][source]
Optimize insurance calculations using vectorization and caching.
ergodic_insurance.progress_monitor module
Lightweight progress monitoring for Monte Carlo simulations.
This module provides efficient progress tracking with minimal performance overhead, including ETA estimation, convergence summaries, and console output.
- class ProgressStats(current_iteration: int, total_iterations: int, start_time: float, elapsed_time: float, estimated_time_remaining: float, iterations_per_second: float, convergence_checks: List[Tuple[int, float]] = <factory>, converged: bool = False, converged_at: int | None = None) None[source]
Bases:
object
Statistics for progress monitoring.
- class ProgressMonitor(total_iterations: int, check_intervals: List[int] | None = None, update_frequency: int = 1000, show_console: bool = True, convergence_threshold: float = 1.1)[source]
Bases:
object
Lightweight progress monitor for Monte Carlo simulations.
Provides real-time progress tracking with minimal performance overhead (<1%). Includes ETA estimation, convergence monitoring, and console output.
- update(iteration: int, convergence_value: float | None = None) bool[source]
Update progress and check for convergence.
- get_stats() ProgressStats[source]
Get current progress statistics.
- Return type:
- Returns:
ProgressStats object with current metrics
- finish() ProgressStats[source]
Finish progress monitoring and return final stats.
- Return type:
- Returns:
Final progress statistics
- get_overhead_percentage() float[source]
Get the monitoring overhead as a percentage of total elapsed time.
- Return type:
- Returns:
Overhead percentage (0-100)
- __enter__() ProgressMonitor[source]
Enter context manager.
- Return type:
ProgressMonitor
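Example
A minimal sketch of monitoring a long loop; the per-iteration work is a placeholder, and treating a True return from update() as a convergence signal follows the method description above:
from ergodic_insurance.progress_monitor import ProgressMonitor

n = 100_000
with ProgressMonitor(total_iterations=n, update_frequency=1000) as monitor:
    for i in range(n):
        # ... one simulation iteration (placeholder) ...
        if monitor.update(i):
            break  # convergence detected
    stats = monitor.get_stats()
print(f"Overhead: {monitor.get_overhead_percentage():.2f}%")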
ergodic_insurance.result_aggregator module
Advanced result aggregation framework for Monte Carlo simulations.
This module provides comprehensive aggregation capabilities for simulation results, supporting hierarchical aggregation, time-series analysis, and memory-efficient processing of large datasets.
- class AggregationConfig(percentiles: List[float] = <factory>, calculate_moments: bool = True, calculate_distribution_fit: bool = False, chunk_size: int = 10000, cache_results: bool = True, precision: int = 6) None[source]
Bases:
object
Configuration for result aggregation.
- class BaseAggregator(config: AggregationConfig | None = None)[source]
Bases:
ABC
Abstract base class for result aggregation.
Provides common functionality for all aggregation types.
- class ResultAggregator(config: AggregationConfig | None = None, custom_functions: Dict[str, Callable] | None = None)[source]
Bases:
BaseAggregator
Main aggregator for simulation results.
Provides comprehensive aggregation of Monte Carlo simulation results with support for custom aggregation functions.
- class TimeSeriesAggregator(config: AggregationConfig | None = None, window_size: int = 12)[source]
Bases:
BaseAggregator
Aggregator for time-series data.
Supports annual, cumulative, and rolling window aggregations.
- class PercentileTracker(percentiles: List[float], max_samples: int = 100000, seed: int | None = None)[source]
Bases:
object
Efficient percentile tracking for streaming data.
Uses the t-digest algorithm (Dunning & Ertl, 2019) for memory-efficient percentile calculation on large datasets. The t-digest provides bounded memory usage and high accuracy, especially at tail percentiles relevant to insurance risk metrics (VaR, TVaR).
- merge(other: PercentileTracker) None[source]
Merge another tracker into this one.
Combines t-digest sketches from parallel simulation chunks without loss of accuracy.
- Parameters:
other (PercentileTracker) – Another PercentileTracker to merge into this one.
- Return type:
None
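Example
A streaming sketch across two parallel chunks. Only merge() is documented in this section; the sample-ingestion method name update() below is hypothetical:
import numpy as np
from ergodic_insurance.result_aggregator import PercentileTracker

t1 = PercentileTracker(percentiles=[50.0, 95.0, 99.0], seed=1)
t2 = PercentileTracker(percentiles=[50.0, 95.0, 99.0], seed=2)

rng = np.random.default_rng(7)
t1.update(rng.lognormal(12, 1.5, size=50_000))  # hypothetical ingestion call
t2.update(rng.lognormal(12, 1.5, size=50_000))  # hypothetical ingestion call

# Combine the two t-digest sketches without loss of accuracy
t1.merge(t2)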
- class ResultExporter[source]
Bases:
object
Export aggregated results to various formats.
- static to_csv(results: Dict[str, Any], filepath: Path, index_label: str = 'metric') None[source]
Export results to CSV file.
ergodic_insurance.risk_metrics module
Comprehensive risk metrics suite for tail risk analysis.
This module provides industry-standard risk metrics including VaR, TVaR, PML, and Expected Shortfall to quantify tail risk and support insurance optimization decisions.
- class RiskMetricsResult(metric_name: str, value: float, confidence_level: float | None = None, confidence_interval: Tuple[float, float] | None = None, metadata: Dict[str, Any] | None = None) None[source]
Bases:
object
Container for risk metric calculation results.
- class RiskMetrics(losses: ndarray, weights: ndarray | None = None, seed: int | None = None)[source]
Bases:
object
Calculate comprehensive risk metrics for loss distributions.
This class provides industry-standard risk metrics for analyzing tail risk in insurance and financial applications.
- var(confidence: float = 0.99, method: str = 'empirical', bootstrap_ci: bool = False, n_bootstrap: int = 1000) float | RiskMetricsResult[source]
Calculate Value at Risk (VaR).
VaR represents the loss amount that will not be exceeded with a given confidence level over a specific time period.
- Parameters:
- Return type:
Union[float, RiskMetricsResult]
- Returns:
VaR value or RiskMetricsResult with confidence intervals.
- Raises:
ValueError – If confidence level is not in (0, 1).
- tvar(confidence: float = 0.99, var_value: float | None = None) float[source]
Calculate Tail Value at Risk (TVaR/CVaR).
TVaR represents the expected loss given that the loss exceeds VaR. It’s a coherent risk measure that satisfies sub-additivity.
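Example
A short sketch computing the core tail metrics on a simulated loss sample:
import numpy as np
from ergodic_insurance.risk_metrics import RiskMetrics

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=12, sigma=1.5, size=100_000)

metrics = RiskMetrics(losses, seed=42)
var_99 = metrics.var(confidence=0.99)     # loss not exceeded with 99% confidence
tvar_99 = metrics.tvar(confidence=0.99)   # expected loss given VaR is exceeded
pml_100 = metrics.pml(return_period=100)  # 100-year-event loss level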
- expected_shortfall(threshold: float) float[source]
Calculate Expected Shortfall (ES) above a threshold.
ES is the average of all losses that exceed a given threshold. Delegates to tvar() with a pre-computed VaR value.
- pml(return_period: int) float[source]
Calculate Probable Maximum Loss (PML) for a given return period.
PML represents the loss amount expected to be equaled or exceeded once every ‘return_period’ years on average.
- Parameters:
return_period (int) – Return period in years (e.g., 100 for 100-year event).
- Return type:
- Returns:
PML value.
- Raises:
ValueError – If return period is less than 1.
- conditional_tail_expectation(confidence: float = 0.99) float[source]
Calculate Conditional Tail Expectation (CTE).
CTE is similar to TVaR but uses a slightly different calculation method. It’s the expected value of losses that exceed the VaR threshold.
- maximum_drawdown() float[source]
Calculate Maximum Drawdown.
Maximum drawdown measures the largest peak-to-trough decline in cumulative value.
- Return type:
- Returns:
Maximum drawdown value.
- economic_capital(confidence: float = 0.999, expected_loss: float | None = None) float[source]
Calculate Economic Capital requirement.
Economic capital is the amount of capital needed to cover unexpected losses at a given confidence level.
- return_period_curve(return_periods: ndarray | None = None) Tuple[ndarray, ndarray][source]
Generate return period curve (exceedance probability curve).
- tail_index(threshold: float | None = None) float[source]
Estimate tail index using Hill estimator.
The tail index characterizes the heaviness of the tail. Lower values indicate heavier tails.
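For reference, the standard Hill estimator over exceedances of a threshold u is the reciprocal of the mean log excess; a sketch under that textbook definition (how the implementation selects its default threshold is not stated here):
import numpy as np

def hill_tail_index(losses: np.ndarray, threshold: float) -> float:
    """Textbook Hill estimator: alpha = 1 / mean(log(x / u)) over x > u."""
    exceedances = losses[losses > threshold]
    return 1.0 / np.mean(np.log(exceedances / threshold))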
- risk_adjusted_metrics(returns: ndarray | None = None, risk_free_rate: float = 0.02) Dict[str, float][source]
Calculate risk-adjusted return metrics.
- coherence_test() Dict[str, bool][source]
Test coherence properties of risk measures.
A coherent risk measure satisfies:
1. Monotonicity
2. Sub-additivity
3. Positive homogeneity
4. Translation invariance
- compare_risk_metrics(scenarios: Dict[str, ndarray], confidence_levels: List[float] | None = None) DataFrame[source]
Compare risk metrics across multiple scenarios.
- class ROEAnalyzer(roe_series: ndarray, equity_series: ndarray | None = None)[source]
Bases:
object
Comprehensive ROE analysis framework.
This class provides specialized metrics and analysis tools for Return on Equity (ROE) calculations, including time-weighted averages, component breakdowns, and volatility analysis.
- time_weighted_average() float[source]
Calculate time-weighted average ROE using geometric mean.
Time-weighted average gives equal weight to each period regardless of the equity level, providing a measure of consistent performance.
- Return type:
- Returns:
Time-weighted average ROE.
- equity_weighted_average() float[source]
Calculate equity-weighted average ROE.
Equity-weighted average gives more weight to periods with higher equity levels, reflecting the actual dollar impact.
- Return type:
- Returns:
Equity-weighted average ROE.
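In formula form, the time-weighted average is the geometric mean of growth factors, (prod(1 + ROE_t))^(1/n) - 1, while the equity-weighted average weights each period's ROE by its equity level. A sketch under those standard definitions:
import numpy as np

roe = np.array([0.10, -0.05, 0.12, 0.08])
equity = np.array([10e6, 11e6, 10.5e6, 11.8e6])

# Time-weighted: geometric mean of (1 + ROE) growth factors
time_weighted = np.prod(1 + roe) ** (1 / len(roe)) - 1

# Equity-weighted: each period's ROE weighted by its equity level
equity_weighted = np.sum(roe * equity) / np.sum(equity)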
- rolling_statistics(window: int) Dict[str, ndarray][source]
Calculate rolling window statistics for ROE.
- performance_ratios(risk_free_rate: float = 0.02) Dict[str, float][source]
Calculate performance ratios for ROE.
ergodic_insurance.ruin_probability module
Ruin probability analysis for insurance optimization.
This module provides specialized classes and methods for analyzing bankruptcy and ruin probabilities in insurance scenarios.
- class RuinProbabilityConfig(time_horizons: List[int] = <factory>, n_simulations: int = 10000, min_assets_threshold: float = 1000000, min_equity_threshold: float = 0.0, debt_service_coverage_ratio: float = 1.25, consecutive_negative_periods: int = 3, early_stopping: bool = True, parallel: bool = True, n_workers: int | None = None, seed: int | None = None, n_bootstrap: int = 1000, bootstrap_confidence_level: float = 0.95) None[source]
Bases:
object
Configuration for ruin probability analysis.
- class RuinProbabilityResults(time_horizons: ndarray, ruin_probabilities: ndarray, confidence_intervals: ndarray, bankruptcy_causes: Dict[str, ndarray], survival_curves: ndarray, execution_time: float, n_simulations: int, convergence_achieved: bool, mid_year_ruin_count: int = 0, ruin_month_distribution: Dict[int, int] | None = None) None[source]
Bases:
object
Results from ruin probability analysis.
- time_horizons
Array of time horizons analyzed (in years).
- ruin_probabilities
Probability of ruin at each time horizon.
- confidence_intervals
Bootstrap confidence intervals for probabilities.
- bankruptcy_causes
Distribution of bankruptcy causes by horizon.
- survival_curves
Survival probability curves over time.
- execution_time
Total execution time in seconds.
- n_simulations
Number of simulations run.
- convergence_achieved
Whether convergence criteria were met.
- mid_year_ruin_count
Number of simulations with mid-year ruin (Issue #279).
- ruin_month_distribution
Distribution of ruin events by month (0-11).
- class RuinProbabilityAnalyzer(manufacturer, loss_generator, insurance_program, config)[source]
Bases:
object
Analyzer for ruin probability calculations.
- analyze_ruin_probability(config: RuinProbabilityConfig | None = None) RuinProbabilityResults[source]
Analyze ruin probability across multiple time horizons.
- Parameters:
config (Optional[RuinProbabilityConfig]) – Configuration for analysis
- Return type:
- Returns:
RuinProbabilityResults with analysis results
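Example
A construction sketch in the style of the package's other examples; manufacturer, loss_generator, and insurance_program stand for already-configured objects from the corresponding modules:
from ergodic_insurance.ruin_probability import (
    RuinProbabilityAnalyzer,
    RuinProbabilityConfig,
)

config = RuinProbabilityConfig(time_horizons=[5, 10, 20], n_simulations=10_000, seed=42)
analyzer = RuinProbabilityAnalyzer(manufacturer, loss_generator, insurance_program, config)
results = analyzer.analyze_ruin_probability()

for horizon, prob in zip(results.time_horizons, results.ruin_probabilities):
    print(f"{horizon}-year ruin probability: {prob:.2%}")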
ergodic_insurance.safe_pickle module
Safe pickle serialization with HMAC integrity validation.
This module provides HMAC-signed pickle operations to prevent arbitrary code execution from tampered cache files. All file-based pickle operations in the codebase should use these functions instead of raw pickle.load/pickle.dump.
The HMAC key is stored in a .pickle_hmac_key file within the cache directory (or a default location). Files written with safe_dump can only be loaded by safe_load if the HMAC signature matches, preventing deserialization of untrusted data.
Also provides deterministic_hash() as a replacement for Python’s non-deterministic built-in hash() function.
- safe_dump(obj: Any, f, protocol: int = 5, key_dir: Path | None = None) None[source]
Pickle dump with HMAC signature prepended.
- safe_load(f, key_dir: Path | None = None) Any[source]
Pickle load with HMAC verification.
- Parameters:
- Return type:
- Returns:
Deserialized object
- Raises:
ValueError – If HMAC verification fails or file is too short
- safe_dumps(obj: Any, protocol: int = 5, key_dir: Path | None = None) bytes[source]
Pickle dumps with HMAC signature prepended.
- safe_loads(data: bytes, key_dir: Path | None = None) Any[source]
Pickle loads with HMAC verification.
- Parameters:
- Return type:
- Returns:
Deserialized object
- Raises:
ValueError – If HMAC verification fails or data is too short
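Example
A round-trip sketch; the cache path is illustrative:
from pathlib import Path
from ergodic_insurance.safe_pickle import safe_dump, safe_load

cache = Path("./cache")
cache.mkdir(exist_ok=True)
data = {"var_99": 1_250_000.0, "n_sims": 100_000}

# Write with an HMAC signature prepended (key stored under key_dir)
with open(cache / "results.pkl", "wb") as f:
    safe_dump(data, f, key_dir=cache)

# Load verifies the signature first; tampered files raise ValueError
with open(cache / "results.pkl", "rb") as f:
    restored = safe_load(f, key_dir=cache)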
ergodic_insurance.scenario_manager module
Scenario management system for batch processing simulations.
This module provides a framework for managing multiple simulation scenarios, parameter sweeps, and configuration variations for comprehensive analysis.
- class ScenarioType(*values)[source]
Bases:
Enum
Types of scenario generation methods.
- SINGLE = 'single'
- GRID_SEARCH = 'grid_search'
- RANDOM_SEARCH = 'random_search'
- CUSTOM = 'custom'
- SENSITIVITY = 'sensitivity'
- class ParameterSpec(**data: Any) None[source]
Bases:
BaseModel
Specification for parameter variations in scenarios.
- name
Parameter name (dot notation for nested params)
- values
List of values for grid search
- min_value
Minimum value for random search
- max_value
Maximum value for random search
- n_samples
Number of samples for random search
- distribution
Distribution type for random sampling
- base_value
Base value for sensitivity analysis
- variation_pct
Percentage variation for sensitivity
- generate_values(method: ScenarioType, rng: Generator | None = None) List[Any][source]
Generate parameter values based on method.
- Parameters:
method (ScenarioType) – Scenario generation method
rng (Optional[Generator]) – Random number generator instance (created if None)
- Return type:
- Returns:
List of parameter values
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.
- class ScenarioConfig(scenario_id: str, name: str, description: str = '', base_config: Config | None = None, simulation_config: SimulationConfig | None = None, parameter_overrides: Dict[str, Any] = <factory>, tags: Set[str] = <factory>, priority: int = 100, created_at: datetime = <factory>, metadata: Dict[str, Any] = <factory>) None[source]
Bases:
object
Configuration for a single scenario.
- simulation_config: SimulationConfig | None = None
- generate_id() str[source]
Generate unique scenario ID from configuration.
- Return type:
- Returns:
Unique scenario identifier
- class ScenarioManager[source]
Bases:
object
Manager for creating and organizing simulation scenarios.
- scenarios: List[ScenarioConfig]
- scenario_index: Dict[str, ScenarioConfig]
- create_scenario(name: str, base_config: Config | None = None, simulation_config: SimulationConfig | None = None, parameter_overrides: Dict[str, Any] | None = None, description: str = '', tags: Set[str] | None = None, priority: int = 100) ScenarioConfig[source]
Create a single scenario.
- Parameters:
- Return type:
- Returns:
Created scenario configuration
- add_scenario(scenario: ScenarioConfig) None[source]
Add scenario to manager.
- Parameters:
scenario (ScenarioConfig) – Scenario to add
- Return type:
None
- create_grid_search(name_template: str, parameter_specs: List[ParameterSpec], base_config: Config | None = None, simulation_config: SimulationConfig | None = None, tags: Set[str] | None = None) List[ScenarioConfig][source]
Create scenarios for grid search over parameters.
- Parameters:
name_template (str) – Template for scenario names
parameter_specs (List[ParameterSpec]) – Parameter specifications
simulation_config (Optional[SimulationConfig]) – Simulation configuration
- Return type:
- Returns:
List of created scenarios
- create_random_search(name_template: str, parameter_specs: List[ParameterSpec], n_scenarios: int, base_config: Config | None = None, simulation_config: SimulationConfig | None = None, tags: Set[str] | None = None, seed: int | None = None) List[ScenarioConfig][source]
Create scenarios for random search over parameters.
- Parameters:
name_template (str) – Template for scenario names
parameter_specs (List[ParameterSpec]) – Parameter specifications
n_scenarios (int) – Number of scenarios to generate
simulation_config (Optional[SimulationConfig]) – Simulation configuration
- Return type:
- Returns:
List of created scenarios
- create_sensitivity_analysis(base_name: str, parameter_specs: List[ParameterSpec], base_config: Config | None = None, simulation_config: SimulationConfig | None = None, tags: Set[str] | None = None) List[ScenarioConfig][source]
Create scenarios for sensitivity analysis.
- Parameters:
base_name (str) – Base name for scenarios
parameter_specs (List[ParameterSpec]) – Parameters to vary
simulation_config (Optional[SimulationConfig]) – Simulation configuration
- Return type:
- Returns:
List of created scenarios
- get_scenarios_by_tag(tag: str) List[ScenarioConfig][source]
Get scenarios with specific tag.
- Parameters:
tag (str) – Tag to filter by
- Return type:
- Returns:
List of matching scenarios
- get_scenarios_by_priority(max_priority: int = 100) List[ScenarioConfig][source]
Get scenarios up to priority threshold.
- Parameters:
max_priority (int) – Maximum priority value (inclusive)
- Return type:
- Returns:
Sorted list of scenarios
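Example
A grid-search sketch; the dotted parameter paths and the template string are illustrative assumptions:
from ergodic_insurance.scenario_manager import ParameterSpec, ScenarioManager

manager = ScenarioManager()
specs = [
    ParameterSpec(name="manufacturer.base_operating_margin", values=[0.05, 0.08, 0.12]),
    ParameterSpec(name="insurance.deductible", values=[100_000, 500_000]),
]

# 3 x 2 = 6 scenarios, one per parameter combination
scenarios = manager.create_grid_search(
    name_template="margin_sweep",
    parameter_specs=specs,
    tags={"grid", "margins"},
)
for s in scenarios:
    print(s.scenario_id, s.parameter_overrides)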
ergodic_insurance.sensitivity module
Comprehensive sensitivity analysis tools for insurance optimization.
This module provides tools for analyzing how changes in key parameters affect optimization results, including one-at-a-time (OAT) analysis, tornado diagrams, and two-way sensitivity analysis with efficient caching.
Example
Basic sensitivity analysis for a single parameter:
from ergodic_insurance.sensitivity import SensitivityAnalyzer
from ergodic_insurance.business_optimizer import BusinessOptimizer
from ergodic_insurance.manufacturer import WidgetManufacturer
# Setup optimizer
manufacturer = WidgetManufacturer(initial_assets=10_000_000)
optimizer = BusinessOptimizer(manufacturer)
# Run sensitivity analysis
analyzer = SensitivityAnalyzer(base_config, optimizer)
result = analyzer.analyze_parameter(
"frequency",
param_range=(3, 8),
n_points=11
)
# Generate tornado diagram
tornado_data = analyzer.create_tornado_diagram(
parameters=["frequency", "severity_mean", "premium_rate"],
metric="optimal_roe"
)
- Author:
Alex Filiakov
- Date:
2025-01-29
- class SensitivityResult(parameter: str, baseline_value: float, variations: ndarray, metrics: Dict[str, ndarray], parameter_path: str | None = None, units: str | None = None) None[source]
Bases:
object
Results from sensitivity analysis for a single parameter.
- parameter
Name of the parameter being analyzed
- baseline_value
Original value of the parameter
- variations
Array of parameter values tested
- metrics
Dictionary of metric arrays for each variation
- parameter_path
Nested path to parameter (e.g., “manufacturer.base_operating_margin”)
- units
Optional units for the parameter (e.g., “percentage”, “dollars”)
- calculate_impact(metric: str) float[source]
Calculate standardized impact on a specific metric.
The impact is calculated as the elasticity of the metric with respect to the parameter, normalized by the baseline values.
- class TwoWaySensitivityResult(parameter1: str, parameter2: str, values1: ndarray, values2: ndarray, metric_grid: ndarray, metric_name: str) None[source]
Bases:
object
Results from two-way sensitivity analysis.
- parameter1
Name of first parameter
- parameter2
Name of second parameter
- values1
Array of values for first parameter
- values2
Array of values for second parameter
- metric_grid
2D array of metric values [len(values1), len(values2)]
- metric_name
Name of the metric analyzed
- class SensitivityAnalyzer(base_config: Dict[str, Any], optimizer: Any, cache_dir: Path | None = None)[source]
Bases:
object
Comprehensive sensitivity analysis tools for optimization.
This class provides methods for analyzing how parameter changes affect optimization outcomes, with built-in caching for efficiency.
- base_config
Base configuration dictionary
- optimizer
Optimizer object with an optimize() method
- results_cache
Cache for optimization results
- cache_dir
Directory for persistent cache storage
- analyze_parameter(param_name: str, param_range: Tuple[float, float] | None = None, n_points: int = 11, param_path: str | None = None, relative_range: float = 0.3) SensitivityResult[source]
Analyze sensitivity to a single parameter.
- Parameters:
param_name (str) – Name of parameter to analyze
param_range (Optional[Tuple[float, float]]) – (min, max) range for parameter values
n_points (int) – Number of points to evaluate
param_path (Optional[str]) – Nested path to parameter (e.g., “manufacturer.tax_rate”)
relative_range (float) – If param_range is not provided, use ±relative_range around the baseline
- Return type:
- Returns:
SensitivityResult with analysis results
- Raises:
KeyError – If parameter not found in base configuration
- create_tornado_diagram(parameters: List[str | Tuple[str, str]], metric: str = 'optimal_roe', relative_range: float = 0.3, n_points: int = 11) DataFrame[source]
Create tornado diagram data for parameter impacts.
- Parameters:
- Returns:
DataFrame sorted by impact magnitude with columns:
parameter: Parameter name
impact: Absolute impact value
direction: "positive" or "negative"
low_value: Metric value at parameter minimum
high_value: Metric value at parameter maximum
baseline: Metric value at baseline
baseline_param: Baseline parameter value
- Return type:
DataFrame
- analyze_two_way(param1: str | Tuple[str, str], param2: str | Tuple[str, str], param1_range: Tuple[float, float] | None = None, param2_range: Tuple[float, float] | None = None, n_points1: int = 10, n_points2: int = 10, metric: str = 'optimal_roe', relative_range: float = 0.3) TwoWaySensitivityResult[source]
Perform two-way sensitivity analysis.
- Parameters:
param1 (Union[str, Tuple[str, str]]) – First parameter name or (name, path) tuple
param2 (Union[str, Tuple[str, str]]) – Second parameter name or (name, path) tuple
param1_range (Optional[Tuple[float, float]]) – Range for first parameter
param2_range (Optional[Tuple[float, float]]) – Range for second parameter
n_points1 (int) – Number of points for first parameter
n_points2 (int) – Number of points for second parameter
metric (str) – Metric to analyze
relative_range (float) – Relative range if explicit ranges not provided
- Return type:
- Returns:
TwoWaySensitivityResult with grid of metric values
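Example
Continuing the module-level example above (analyzer as constructed there; the parameter names and path are illustrative):
result = analyzer.analyze_two_way(
    param1="frequency",
    param2=("premium_rate", "insurance.premium_rate"),
    n_points1=8,
    n_points2=8,
    metric="optimal_roe",
)
print(result.metric_grid.shape)  # (8, 8) grid of optimal_roe values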
ergodic_insurance.sensitivity_visualization module
Visualization utilities for sensitivity analysis results.
This module provides publication-ready visualization functions for sensitivity analysis results, including tornado diagrams, two-way sensitivity heatmaps, and parameter impact charts.
Example
Creating a tornado diagram:
from ergodic_insurance.sensitivity_visualization import plot_tornado_diagram
# Assuming tornado_data is a DataFrame from SensitivityAnalyzer
fig = plot_tornado_diagram(
tornado_data,
title="Parameter Sensitivity Analysis",
metric_label="ROE Impact"
)
fig.savefig("tornado_diagram.png", dpi=300, bbox_inches='tight')
- Author:
Alex Filiakov
- Date:
2025-01-29
- plot_tornado_diagram(tornado_data: DataFrame, title: str = 'Sensitivity Analysis - Tornado Diagram', metric_label: str = 'Impact on Objective', figsize: Tuple[float, float] = (10, 6), n_params: int | None = None, color_positive: str = '#2E7D32', color_negative: str = '#C62828', show_values: bool = True) Figure[source]
Create a tornado diagram for sensitivity analysis results.
- Parameters:
tornado_data (DataFrame) – DataFrame with columns: parameter, impact, direction, low_value, high_value, baseline
title (str) – Plot title
metric_label (str) – Label for the x-axis
figsize (Tuple[float, float]) – Figure size as (width, height)
n_params (Optional[int]) – Number of top parameters to show (None for all)
color_positive (str) – Color for positive impacts
color_negative (str) – Color for negative impacts
show_values (bool) – Whether to show numeric values on bars
- Return type:
- Returns:
Matplotlib Figure object
- plot_two_way_sensitivity(result: TwoWaySensitivityResult, title: str | None = None, cmap: str = 'RdYlGn', figsize: Tuple[float, float] = (10, 8), show_contours: bool = True, contour_levels: int | None = 10, optimal_point: Tuple[float, float] | None = None, fmt: str = '.2f') Figure[source]
Create a heatmap for two-way sensitivity analysis.
- Parameters:
result (TwoWaySensitivityResult) – TwoWaySensitivityResult object
cmap (str) – Colormap name
figsize (Tuple[float, float]) – Figure size as (width, height)
show_contours (bool) – Whether to show contour lines
optimal_point (Optional[Tuple[float, float]]) – Optional (param1_value, param2_value) to mark
fmt (str) – Format string for contour labels. Can be a new-style format like '.2f' or '.2%', an old-style format like '%.2f', or a callable that takes a number and returns a string
- Return type:
- Returns:
Matplotlib Figure object
- plot_parameter_sweep(result: SensitivityResult, metrics: List[str] | None = None, title: str | None = None, figsize: Tuple[float, float] = (12, 8), normalize: bool = False, mark_baseline: bool = True) Figure[source]
Plot multiple metrics against parameter variations.
- Parameters:
result (SensitivityResult) – SensitivityResult object
metrics (Optional[List[str]]) – List of metrics to plot (None for all)
figsize (Tuple[float, float]) – Figure size as (width, height)
normalize (bool) – Whether to normalize metrics to [0, 1]
mark_baseline (bool) – Whether to mark the baseline value
- Return type:
- Returns:
Matplotlib Figure object
- create_sensitivity_report(analyzer: SensitivityAnalyzer, parameters: List[str | Tuple[str, str]], output_dir: str | None = None, metric: str = 'optimal_roe', formats: List[str] | None = None) Dict[str, Any][source]
Generate a complete sensitivity analysis report.
- Parameters:
analyzer (SensitivityAnalyzer) – SensitivityAnalyzer object with results
parameters (List[Union[str, Tuple[str, str]]]) – List of parameters to analyze
output_dir (Optional[str]) – Directory to save figures (None for no saving)
metric (str) – Primary metric for analysis
formats (Optional[List[str]]) – File formats to save figures in
- Return type:
- Returns:
Dictionary with generated figures and analysis summary
- plot_sensitivity_matrix(results: Dict[str, SensitivityResult], metric: str = 'optimal_roe', figsize: Tuple[float, float] = (12, 10), cmap: str = 'coolwarm', show_values: bool = True) Figure[source]
Create a matrix plot showing sensitivity across multiple parameters.
- Parameters:
- Return type:
- Returns:
Matplotlib Figure object
ergodic_insurance.setup module
ergodic_insurance.simulation module
Simulation engine for time evolution of widget manufacturer model.
This module provides the main simulation engine that orchestrates the time evolution of the widget manufacturer financial model, managing loss events, financial calculations, and result collection.
The simulation framework supports both single-path and Monte Carlo simulations, enabling comprehensive analysis of insurance strategies and business outcomes under uncertainty. It tracks detailed financial metrics, processes insurance claims, and handles bankruptcy conditions appropriately.
- Key Features:
Single-path trajectory simulation with detailed metrics
Monte Carlo simulation support through integration
Insurance claim processing with policy application
Financial statement tracking and ROE calculation
Bankruptcy detection and proper termination
Comprehensive result analysis and export capabilities
Examples
Basic simulation:
from ergodic_insurance import Simulation, Config, WidgetManufacturer, ManufacturingLossGenerator
config = Config()
manufacturer = WidgetManufacturer(config.manufacturer)
loss_generator = ManufacturingLossGenerator.create_simple(
frequency=0.1, severity_mean=5_000_000, seed=42
)
sim = Simulation(
manufacturer=manufacturer,
loss_generator=loss_generator,
time_horizon=50
)
results = sim.run()
print(f"Mean ROE: {results.summary_stats()['mean_roe']:.2%}")
Note
This module is thread-safe for parallel Monte Carlo simulations when each thread has its own Simulation instance.
- Since:
Version 0.1.0
- class SimulationResults(years: ndarray, assets: ndarray, equity: ndarray, roe: ndarray, revenue: ndarray, net_income: ndarray, claim_counts: ndarray, claim_amounts: ndarray, insolvency_year: int | None = None) None[source]
Bases:
object
Container for simulation trajectory data.
Holds the complete time series of financial metrics and events from a single simulation run, with methods for analysis and export.
This dataclass provides comprehensive storage for all simulation outputs and includes utility methods for calculating derived metrics, performing statistical analysis, and exporting data for further processing.
- years
Array of simulation years (0 to time_horizon-1).
- assets
Total assets at each year.
- equity
Shareholder equity at each year.
- roe
Return on equity for each year.
- revenue
Annual revenue for each year.
- net_income
Annual net income for each year.
- claim_counts
Number of claims in each year.
- claim_amounts
Total claim amount in each year.
- insolvency_year
Year when bankruptcy occurred (None if survived).
Examples
Analyzing simulation results:
results = simulation.run()

# Get summary statistics
stats = results.summary_stats()
print(f"Survival: {stats['survived']}")
print(f"Mean ROE: {stats['mean_roe']:.2%}")

# Export to DataFrame
df = results.to_dataframe()
df.to_csv('simulation_results.csv')

# Calculate volatility metrics
volatility = results.calculate_roe_volatility()
print(f"ROE Sharpe Ratio: {volatility['roe_sharpe']:.2f}")
Note
All financial values are in nominal dollars without inflation adjustment. ROE calculations handle edge cases like zero equity appropriately.
- to_dataframe() DataFrame[source]
Convert simulation results to pandas DataFrame.
- Returns:
DataFrame with columns for year, assets, equity, roe, revenue, net_income, claim_count, and claim_amount.
- Return type:
pd.DataFrame
Examples
Export to Excel:
df = results.to_dataframe()
df.to_excel('results.xlsx', index=False)
- calculate_time_weighted_roe() float[source]
Calculate time-weighted average ROE.
Time-weighted ROE gives equal weight to each period regardless of the equity level, providing a better measure of consistent performance over time. Uses geometric mean for proper compounding.
- Returns:
Time-weighted average ROE as a decimal (e.g., 0.08 for 8%).
- Return type:
float
Note
This method uses geometric mean of growth factors (1 + ROE) to properly account for compounding effects. NaN values are excluded from the calculation.
Examples
Compare different ROE measures:
simple_avg = np.mean(results.roe)
time_weighted = results.calculate_time_weighted_roe()
print(f"Simple average: {simple_avg:.2%}")
print(f"Time-weighted: {time_weighted:.2%}")
- calculate_rolling_roe(window: int) ndarray[source]
Calculate rolling window ROE.
- Parameters:
window (int) – Window size in years (e.g., 1, 3, 5). Must be positive and not exceed the data length.
- Returns:
Array of rolling ROE values. Values are NaN for positions where the full window is not available.
- Return type:
np.ndarray
- Raises:
ValueError – If window size exceeds data length.
Examples
Calculate and plot rolling ROE:
rolling_3yr = results.calculate_rolling_roe(3)
plt.plot(results.years, rolling_3yr, label='3-Year Rolling ROE')
plt.axhline(y=0.08, color='r', linestyle='--', label='Target')
- calculate_roe_components(base_operating_margin: float = 0.08, tax_rate: float = 0.25) Dict[str, ndarray][source]
Calculate ROE component breakdown.
Decomposes ROE into operating, insurance, and tax components using DuPont-style analysis. This helps identify the drivers of ROE performance and the impact of insurance decisions.
- Parameters:
- Returns:
- Dictionary containing:
'operating_roe': Base business ROE without claims
'insurance_impact': ROE reduction from claims/premiums
'tax_effect': Impact of taxes on ROE
'total_roe': Actual ROE for reference
- Return type:
Dict[str, np.ndarray]
Note
This is a simplified decomposition. Actual implementation would require more detailed financial data for precise attribution.
Examples
Analyze ROE drivers:
components = results.calculate_roe_components()
operating_avg = np.mean(components['operating_roe'])
insurance_drag = np.mean(components['insurance_impact'])
print(f"Operating ROE: {operating_avg:.2%}")
print(f"Insurance drag: {insurance_drag:.2%}")
Using manufacturer config values:
components = results.calculate_roe_components(
    base_operating_margin=manufacturer.config.base_operating_margin,
    tax_rate=manufacturer.config.tax_rate,
)
- calculate_roe_volatility() Dict[str, float][source]
Calculate ROE volatility metrics.
Computes various risk-adjusted performance metrics for ROE, including standard deviation, downside deviation, Sharpe ratio, and coefficient of variation.
- Returns:
- Dictionary containing:
'roe_std': Standard deviation of ROE
'roe_downside_deviation': Downside deviation from mean
'roe_sharpe': Sharpe ratio using 2% risk-free rate
'roe_coefficient_variation': Coefficient of variation (std/mean)
- Return type:
Dict[str, float]
Note
Returns zeros for all metrics if insufficient data (< 2 observations). Sharpe ratio uses a 2% risk-free rate assumption.
Examples
Risk-adjusted performance analysis:
volatility = results.calculate_roe_volatility()
if volatility['roe_sharpe'] > 1.0:
    print("Strong risk-adjusted performance")
print(f"Downside risk: {volatility['roe_downside_deviation']:.2%}")
- summary_stats() Dict[str, float][source]
Calculate summary statistics for the simulation.
Computes comprehensive summary statistics including ROE metrics, rolling averages, volatility measures, and survival indicators.
- Returns:
- Dict[str, float]: Dictionary containing:
Basic ROE metrics (mean, std, median, time-weighted)
Rolling averages (1, 3, 5 year)
Final state (assets, equity)
Claims statistics (total, frequency)
Survival indicators (survived, insolvency_year)
Volatility metrics (from calculate_roe_volatility)
Examples
Generate summary report:
stats = results.summary_stats()
print("Performance Summary:")
print(f" Mean ROE: {stats['mean_roe']:.2%}")
print(f" Volatility: {stats['std_roe']:.2%}")
print(f" Sharpe Ratio: {stats['roe_sharpe']:.2f}")
print("\nRisk Summary:")
print(f" Survived: {stats['survived']}")
print(f" Total Claims: ${stats['total_claims']:,.0f}")
- class Simulation(manufacturer: WidgetManufacturer, loss_generator: ManufacturingLossGenerator | List[ManufacturingLossGenerator] | None = None, insurance_policy: InsurancePolicy | None = None, time_horizon: int = 100, seed: int | None = None, growth_rate: float = 0.0, letter_of_credit_rate: float = 0.015)[source]
Bases:
objectSimulation engine for widget manufacturer time evolution.
The main simulation class that coordinates the time evolution of the widget manufacturer model, processing losses and tracking financial performance over the specified time horizon.
Supports both single-path and Monte Carlo simulations, with comprehensive tracking of financial metrics, loss events, and bankruptcy conditions.
Examples
Basic simulation setup and execution:
from ergodic_insurance.config import ManufacturerConfig
from ergodic_insurance.manufacturer import WidgetManufacturer
from ergodic_insurance.loss_distributions import ManufacturingLossGenerator
from ergodic_insurance.insurance import InsurancePolicy
from ergodic_insurance.simulation import Simulation

# Create manufacturer
config = ManufacturerConfig(initial_assets=10_000_000)
manufacturer = WidgetManufacturer(config)

# Create insurance policy
policy = InsurancePolicy(
    deductible=500_000,
    limit=5_000_000,
    premium_rate=0.02
)

# Run simulation
sim = Simulation(
    manufacturer=manufacturer,
    loss_generator=ManufacturingLossGenerator.create_simple(seed=42),
    insurance_policy=policy,
    time_horizon=10
)
results = sim.run()

# Analyze results
print(f"Mean ROE: {results.summary_stats()['mean_roe']:.2%}")
print(f"Survived: {results.insolvency_year is None}")
Running Monte Carlo simulation:
# Use MonteCarloEngine for multiple paths
monte_carlo = MonteCarloEngine(
    base_simulation=sim,
    n_simulations=1000,
    parallel=True
)
mc_results = monte_carlo.run()
print(f"Survival rate: {mc_results.survival_rate:.1%}")
print(f"95% VaR: ${mc_results.var_95:,.0f}")
- manufacturer
The widget manufacturer being simulated
- loss_generator
Generator for loss events
- insurance_policy
Optional insurance coverage
- time_horizon
Simulation duration in years
- seed
Random seed for reproducibility
See also
SimulationResults: Container for simulation output
MonteCarloEngine: For running multiple simulation paths
WidgetManufacturer: The core financial model
ManufacturingLossGenerator: For generating loss events
- step_annual(year: int, losses: List[LossEvent]) Dict[str, Any][source]
Execute single annual time step.
Processes losses for the year, applies insurance coverage, updates manufacturer financial state, and returns metrics.
- Parameters:
- Returns:
- Dictionary containing metrics:
All metrics from manufacturer.step()
'claim_count': Number of losses this year
'claim_amount': Total loss amount before insurance
'company_payment': Amount paid by company after deductible
'insurance_recovery': Amount recovered from insurance
- Return type:
Note
This method modifies the manufacturer state in-place. Insurance premiums are deducted from both assets and equity to maintain balance sheet integrity.
- Side Effects:
Modifies manufacturer.assets and manufacturer.equity
Updates manufacturer internal state via step() method
- run(progress_interval: int = 100) SimulationResults[source]
Run the full simulation over the specified time horizon.
Executes a complete simulation trajectory, processing claims each year, updating the manufacturer’s financial state, and tracking all metrics. The simulation terminates early if the manufacturer becomes insolvent.
- Parameters:
progress_interval (int) – How often to log progress (in years). Set to 0 to disable progress logging. Useful for long simulations.
- Returns:
SimulationResults object containing:
Complete time series of financial metrics
Claim history and amounts
ROE trajectory
Insolvency year (if bankruptcy occurred)
- Return type:
SimulationResults
Examples
Run simulation with progress updates:
sim = Simulation(manufacturer, time_horizon=1000)
results = sim.run(progress_interval=100)  # Log every 100 years

# Check if company survived
if results.insolvency_year:
    print(f"Bankruptcy in year {results.insolvency_year}")
else:
    print(f"Survived {len(results.years)} years")
Analyze simulation results:
results = sim.run()
df = results.to_dataframe()

# Plot equity evolution
import matplotlib.pyplot as plt
plt.plot(df['year'], df['equity'])
plt.xlabel('Year')
plt.ylabel('Equity ($)')
plt.title('Company Equity Over Time')
plt.show()
Note
The simulation uses pre-generated claims for efficiency. All claims are generated at the start based on the configured loss distributions and random seed.
See also
step_annual(): Single year simulation step
SimulationResults: Output data structure
- run_with_loss_data(loss_data: LossData, validate: bool = True, progress_interval: int = 100) SimulationResults[source]
Run simulation using standardized LossData.
- Parameters:
- Return type:
- Returns:
SimulationResults object with full trajectory.
- get_trajectory() DataFrame[source]
Get simulation trajectory as pandas DataFrame.
This is a convenience method that runs the simulation if needed and returns the results as a DataFrame.
- Return type:
DataFrame
- Returns:
DataFrame with simulation trajectory.
- classmethod run_monte_carlo(config: Config, insurance_policy: InsurancePolicy, n_scenarios: int = 10000, batch_size: int = 1000, n_jobs: int = 7, checkpoint_dir: Path | None = None, checkpoint_frequency: int = 5000, seed: int | None = None, resume: bool = True) Dict[str, Any][source]
Run Monte Carlo simulation using the MonteCarloEngine.
This is a convenience class method for running large-scale Monte Carlo simulations with the optimized engine.
- Parameters:
config (Config) – Configuration object.
insurance_policy (InsurancePolicy) – Insurance policy to simulate.
n_scenarios (int) – Number of scenarios to run.
batch_size (int) – Scenarios per batch.
n_jobs (int) – Number of parallel jobs.
checkpoint_dir (Optional[Path]) – Directory for checkpoints.
checkpoint_frequency (int) – Save checkpoint every N scenarios.
resume (bool) – Whether to resume from checkpoint.
- Return type:
- Returns:
Dictionary of Monte Carlo results and statistics.
- classmethod compare_insurance_strategies(config: Config, insurance_policies: Dict[str, InsurancePolicy], n_scenarios: int = 1000, n_jobs: int = 7, seed: int | None = None) DataFrame[source]
Compare multiple insurance strategies via Monte Carlo.
- Parameters:
- Return type:
DataFrame
- Returns:
DataFrame comparing results across strategies.
ergodic_insurance.statistical_tests module
Statistical hypothesis testing utilities for simulation results.
This module provides bootstrap-based hypothesis testing functions for comparing strategies, validating performance differences, and assessing statistical significance of simulation outcomes.
Example
>>> from ergodic_insurance.statistical_tests import difference_in_means_test
>>> import numpy as np
>>> # Compare two strategies
>>> strategy_a_returns = np.random.normal(0.08, 0.02, 1000)
>>> strategy_b_returns = np.random.normal(0.10, 0.03, 1000)
>>> result = difference_in_means_test(
... strategy_a_returns,
... strategy_b_returns,
... alternative='less'
... )
>>> print(f"P-value: {result.p_value:.4f}")
>>> print(f"Strategy B is better: {result.reject_null}")
- class HypothesisTestResult(test_statistic: float, p_value: float, reject_null: bool, confidence_interval: Tuple[float, float], null_hypothesis: str, alternative: str, alpha: float, method: str, bootstrap_distribution: ndarray | None = None, metadata: Dict[str, Any] | None = None) None[source]
Bases:
object
Container for hypothesis test results.
- difference_in_means_test(sample1: ndarray, sample2: ndarray, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]
Test difference in means between two samples using bootstrap.
Tests the null hypothesis that the means of two populations are equal against various alternatives using bootstrap resampling.
- Parameters:
sample1 (ndarray) – First sample array.
sample2 (ndarray) – Second sample array.
alternative (str) – Type of alternative hypothesis: 'two-sided' (means are different), 'less' (mean1 < mean2), or 'greater' (mean1 > mean2).
alpha (float) – Significance level (default 0.05).
n_bootstrap (int) – Number of bootstrap iterations (default 10000).
- Return type:
- Returns:
HypothesisTestResult containing test statistics and decision.
- Raises:
ValueError – If alternative is not valid.
Example
>>> # Test if Strategy A has lower returns than Strategy B
>>> result = difference_in_means_test(
...     returns_a, returns_b, alternative='less'
... )
>>> if result.reject_null:
...     print("Strategy B significantly outperforms Strategy A")
- ratio_of_metrics_test(sample1: ndarray, sample2: ndarray, statistic: Callable[[ndarray], float] = <function mean>, null_ratio: float = 1.0, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]
Test ratio of metrics between two samples using bootstrap.
Tests whether the ratio of a statistic (e.g., mean, median) between two samples equals a specified value (typically 1.0).
- Parameters:
sample1 (ndarray) – First sample array.
sample2 (ndarray) – Second sample array.
statistic (Callable[[ndarray], float]) – Function to compute on each sample (default: mean).
null_ratio (float) – Null hypothesis ratio value (default: 1.0).
alternative (str) – Alternative hypothesis type.
alpha (float) – Significance level.
n_bootstrap (int) – Number of bootstrap iterations.
- Return type:
- Returns:
HypothesisTestResult for the ratio test.
Example
>>> # Test if ROE ratio differs from 1.0
>>> result = ratio_of_metrics_test(
...     roe_strategy_a,
...     roe_strategy_b,
...     statistic=np.median,
...     null_ratio=1.0
... )
- paired_comparison_test(paired_differences: ndarray, null_value: float = 0.0, alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]
Test paired differences using bootstrap.
Tests whether paired differences (e.g., from matched scenarios) have a mean equal to a specified value (typically 0).
- Parameters:
- Return type:
- Returns:
HypothesisTestResult for the paired test.
Example
>>> # Test if insurance improves outcomes
>>> differences = outcomes_with_insurance - outcomes_without_insurance
>>> result = paired_comparison_test(differences, alternative='greater')
- bootstrap_hypothesis_test(data: ndarray, null_hypothesis: Callable[[ndarray], ndarray], test_statistic: Callable[[ndarray], float], alternative: str = 'two-sided', alpha: float = 0.05, n_bootstrap: int = 10000, seed: int | None = None) HypothesisTestResult[source]
General bootstrap hypothesis testing framework.
Allows testing of custom hypotheses using any test statistic.
- Parameters:
data (ndarray) – Input data array.
null_hypothesis (Callable[[ndarray], ndarray]) – Function that transforms data to satisfy the null.
test_statistic (Callable[[ndarray], float]) – Function to compute the test statistic.
alternative (str) – Alternative hypothesis type.
alpha (float) – Significance level.
n_bootstrap (int) – Number of bootstrap iterations.
- Return type:
- Returns:
HypothesisTestResult for the custom test.
Example
>>> # Test if variance exceeds threshold
>>> def null_transform(x):
...     return x * np.sqrt(threshold_var / np.var(x))
>>> result = bootstrap_hypothesis_test(
...     data, null_transform, np.var, alternative='greater'
... )
- multiple_comparison_correction(p_values: List[float], method: str = 'bonferroni', alpha: float = 0.05) Tuple[ndarray, ndarray][source]
Apply multiple comparison correction to p-values.
Adjusts p-values when multiple hypothesis tests are performed to control family-wise error rate or false discovery rate.
- Parameters:
- Return type:
- Returns:
Tuple of (adjusted_p_values, reject_decisions).
Example
>>> p_vals = [0.01, 0.04, 0.03, 0.20]
>>> adj_p, reject = multiple_comparison_correction(p_vals)
>>> print(f"Significant tests: {np.sum(reject)}")
ergodic_insurance.stochastic_processes module
Stochastic processes for financial modeling.
This module provides various stochastic process implementations for modeling financial volatility, including Geometric Brownian Motion, lognormal volatility, and mean-reverting processes. These are used to add realistic randomness to revenue and growth modeling in the manufacturing simulation.
- class StochasticConfig(**data: Any) None[source]
Bases:
BaseModel
Configuration for stochastic processes.
Defines parameters common to all stochastic process implementations, including volatility, drift, random seed, and time step parameters.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.
- class StochasticProcess(config: StochasticConfig)[source]
Bases:
ABC
Abstract base class for stochastic processes.
Provides common interface and functionality for all stochastic process implementations used in financial modeling. All concrete implementations must provide a generate_shock method.
- class GeometricBrownianMotion(config: StochasticConfig)[source]
Bases: StochasticProcess
Geometric Brownian Motion process using Euler-Maruyama discretization.
Implements GBM with exact lognormal solution for high numerical accuracy. Commonly used for modeling asset prices and growth rates with constant relative volatility.
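For intuition, the exact lognormal update that makes GBM sampling accurate can be sketched in a few lines of NumPy. This illustrates the mathematics only, not the class's internal API; the drift, volatility, and time step values are illustrative:
import numpy as np

# Exact GBM solution: S(t+dt) = S(t) * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
rng = np.random.default_rng(42)
mu, sigma, dt = 0.05, 0.20, 1.0  # illustrative drift, volatility, and time step
s = 1.0
for _ in range(10):
    z = rng.standard_normal()
    s *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)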
- class LognormalVolatility(config: StochasticConfig)[source]
Bases: StochasticProcess
Simple lognormal volatility generator for revenue/sales.
Provides a simpler alternative to full GBM by applying lognormal shocks centered around 1.0. Suitable for modeling revenue variations without drift components.
- class MeanRevertingProcess(config: StochasticConfig, mean_level: float = 1.0, reversion_speed: float = 0.5)[source]
Bases: StochasticProcess
Ornstein-Uhlenbeck mean-reverting process for bounded variables.
Implements mean-reverting dynamics suitable for modeling variables that tend to revert to long-term average levels, such as operating margins or capacity utilization rates.
- create_stochastic_process(process_type: str, volatility: float, drift: float = 0.0, random_seed: int | None = None, time_step: float = 1.0) StochasticProcess[source]
Factory function to create stochastic processes.
- Parameters:
- Return type: StochasticProcess
- Returns:
StochasticProcess instance
- Raises:
ValueError – If process_type is not recognized
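A minimal usage sketch of the factory; the process_type string below is a guess at one of the accepted names (the exact recognized strings are defined in the module, and unrecognized ones raise ValueError):
from ergodic_insurance.stochastic_processes import create_stochastic_process

# "gbm" is a hypothetical type name; check the module for the accepted strings
process = create_stochastic_process(
    process_type="gbm",
    volatility=0.15,
    drift=0.02,
    random_seed=42,
)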
ergodic_insurance.strategy_backtester module
Strategy backtesting framework for insurance decision strategies.
This module provides base classes and implementations for various insurance strategies that can be tested and compared in walk-forward validation.
Example
>>> from strategy_backtester import ConservativeFixedStrategy, StrategyBacktester
>>> from simulation import SimulationEngine
>>> # Create and configure a strategy
>>> strategy = ConservativeFixedStrategy(
... primary_limit=5000000,
... excess_limit=20000000,
... deductible=100000
... )
>>>
>>> # Run backtest
>>> backtester = StrategyBacktester(simulation_engine)
>>> results = backtester.test_strategy(
... strategy=strategy,
... n_simulations=1000,
... n_years=10
... )
- class InsuranceStrategy(name: str)[source]
Bases: ABC
Abstract base class for insurance strategies.
Defines the interface that all insurance strategies must implement for use in backtesting and walk-forward validation.
- abstractmethod get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]
Get insurance program for the current state.
- Parameters:
- manufacturer (WidgetManufacturer) – Current manufacturer state
- historical_losses (Optional[ndarray]) – Past loss data for adaptive strategies
- current_year (int) – Current year in simulation
- Return type: InsuranceProgram | None
- Returns:
InsuranceProgram or None for no insurance.
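To illustrate the interface, here is a minimal hypothetical subclass that self-insures for the first few years and then delegates to the conservative fixed strategy defined below; it uses only the documented constructor and method signatures:
from ergodic_insurance.strategy_backtester import ConservativeFixedStrategy, InsuranceStrategy

class SkipEarlyYearsStrategy(InsuranceStrategy):
    """Hypothetical strategy: self-insure early, then buy conservative coverage."""

    def __init__(self, start_year: int = 3):
        super().__init__(name="skip_early_years")
        self.start_year = start_year
        self._inner = ConservativeFixedStrategy()  # delegate once coverage begins

    def get_insurance_program(self, manufacturer, historical_losses=None, current_year=0):
        if current_year < self.start_year:
            return None  # no insurance in the early years
        return self._inner.get_insurance_program(manufacturer, historical_losses, current_year)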
- class NoInsuranceStrategy[source]
Bases: InsuranceStrategy
Baseline strategy with no insurance.
- get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]
Return no insurance program.
- Return type: InsuranceProgram | None
- Returns:
None to indicate no insurance.
- class ConservativeFixedStrategy(primary_limit: float = 5000000, excess_limit: float = 20000000, higher_limit: float = 25000000, deductible: float = 50000)[source]
Bases: InsuranceStrategy
Conservative strategy with high limits and low deductible.
- get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]
Get conservative insurance program.
- Return type: InsuranceProgram | None
- Returns:
InsuranceProgram with high coverage.
- class AggressiveFixedStrategy(primary_limit: float = 2000000, excess_limit: float = 5000000, deductible: float = 250000)[source]
Bases: InsuranceStrategy
Aggressive strategy with low limits and high deductible.
- get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]
Get aggressive insurance program.
- Return type: InsuranceProgram | None
- Returns:
InsuranceProgram with limited coverage.
- class OptimizedStaticStrategy(optimizer: PenaltyMethodOptimizer | None = None, target_roe: float = 0.15, max_ruin_prob: float = 0.01)[source]
Bases: InsuranceStrategy
Strategy using optimization to find best static limits.
- optimize_limits(manufacturer: WidgetManufacturer, simulation_engine: Simulation)[source]
Run optimization to find best limits.
- Parameters:
- manufacturer (WidgetManufacturer) – Manufacturer instance
- simulation_engine (Simulation) – Simulation engine for evaluation
- get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]
Get optimized insurance program.
- Return type: InsuranceProgram | None
- Returns:
InsuranceProgram with optimized parameters.
- class AdaptiveStrategy(base_deductible: float = 100000, base_primary: float = 3000000, base_excess: float = 10000000, adaptation_window: int = 3, adjustment_factor: float = 0.2)[source]
Bases: InsuranceStrategy
Strategy that adjusts based on recent loss experience.
- update(losses: ndarray, recoveries: ndarray, year: int)[source]
Update strategy based on recent losses.
- get_insurance_program(manufacturer: WidgetManufacturer, historical_losses: ndarray | None = None, current_year: int = 0) InsuranceProgram | None[source]
Get adaptive insurance program.
- Return type: InsuranceProgram | None
- Returns:
InsuranceProgram with adapted parameters.
- class BacktestResult(strategy_name: str, simulation_results: SimulationResults | SimulationResults, metrics: ValidationMetrics, execution_time: float, config: SimulationConfig) None[source]
Bases: object
Results from strategy backtesting.
- strategy_name
Name of tested strategy
- simulation_results
Raw simulation results (either Simulation or MC results)
- metrics
Calculated performance metrics
- execution_time
Time taken to run backtest
- config
Configuration used for backtest
- simulation_results: SimulationResults | SimulationResults
- metrics: ValidationMetrics
- config: SimulationConfig
- class StrategyBacktester(simulation_engine: Simulation | None = None, metric_calculator: MetricCalculator | None = None)[source]
Bases: object
Engine for backtesting insurance strategies.
- results_cache: Dict[str, BacktestResult]
- test_strategy(strategy: InsuranceStrategy, manufacturer: WidgetManufacturer, config: SimulationConfig, use_cache: bool = True) BacktestResult[source]
Test a single strategy.
- Parameters:
- strategy (InsuranceStrategy) – Strategy to test
- manufacturer (WidgetManufacturer) – Manufacturer instance
- config (SimulationConfig) – Simulation configuration
- use_cache (bool) – Whether to use cached results
- Return type: BacktestResult
- Returns:
BacktestResult with performance metrics.
- test_multiple_strategies(strategies: List[InsuranceStrategy], manufacturer: WidgetManufacturer, config: SimulationConfig) DataFrame[source]
Test multiple strategies and compare.
- Parameters:
- strategies (List[InsuranceStrategy]) – List of strategies to test
- manufacturer (WidgetManufacturer) – Manufacturer instance
- config (SimulationConfig) – Simulation configuration
- Return type: DataFrame
- Returns:
DataFrame comparing strategy performance.
ergodic_insurance.summary_statistics module
Comprehensive summary statistics and report generation for simulation results.
This module provides statistical analysis tools, distribution fitting utilities, and formatted report generation for Monte Carlo simulation results.
- format_quantile_key(q: float) str[source]
Format a quantile value as a dictionary key using per-mille resolution.
Uses per-mille (parts per thousand) to avoid key collisions for sub-percentile quantiles that are critical for insurance risk metrics.
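A quick doctest-style sketch of why per-mille resolution matters: 0.995 and 0.999 would collide under whole-percent keys but stay distinct here (the exact key strings are defined by the function):
>>> from ergodic_insurance.summary_statistics import format_quantile_key
>>> key_995 = format_quantile_key(0.995)
>>> key_999 = format_quantile_key(0.999)
>>> key_995 != key_999  # distinct at per-mille resolution
True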
- class StatisticalSummary(basic_stats: Dict[str, float], distribution_params: Dict[str, Dict[str, float]], confidence_intervals: Dict[str, Tuple[float, float]], hypothesis_tests: Dict[str, Dict[str, float]], extreme_values: Dict[str, float]) None[source]
Bases: object
Complete statistical summary of simulation results.
- class SummaryStatistics(confidence_level: float = 0.95, bootstrap_iterations: int = 1000, seed: int | None = None)[source]
Bases: object
Calculate comprehensive summary statistics for simulation results.
- class TDigest(compression: float = 200)[source]
Bases: object
T-digest data structure for streaming quantile estimation.
Implements the merging digest variant from Dunning & Ertl (2019). Provides accurate quantile estimates, especially at the tails, with bounded memory usage proportional to the compression parameter.
The t-digest maintains a sorted set of centroids (mean, weight) that adaptively cluster data points. Clusters near the tails (q->0 or q->1) are kept small for precision, while clusters near the median can be larger.
- Parameters:
- compression (float) – Controls accuracy vs memory tradeoff. Higher values give more accuracy but use more memory. Typical range: 100-300. Default 200 gives ~0.2-1% error at median, ~0.005-0.05% at q01/q99.
References
Dunning, T. & Ertl, O. (2019). “Computing Extremely Accurate Quantiles Using t-Digests.” arXiv:1902.04023.
- merge(other: TDigest) None[source]
Merge another t-digest into this one.
After merging, this digest contains the combined information from both digests. The other digest is not modified.
- quantile(q: float) float[source]
Estimate a single quantile.
- Parameters:
- q (float) – Quantile to estimate, in range [0, 1].
- Return type: float
- Returns:
Estimated value at the given quantile.
- Raises:
ValueError – If the digest is empty.
- class QuantileCalculator(quantiles: List[float] | None = None, seed: int | None = None)[source]
Bases: object
Efficient quantile calculation for large datasets.
- calculate_quantiles(data_hash: int, method: str = 'linear') Dict[str, float][source]
Calculate quantiles with caching.
- calculate(data: ndarray, method: str = 'linear') Dict[str, float][source]
Calculate quantiles for data.
- streaming_quantiles(data_stream: ndarray, compression: float = 200) Dict[str, float][source]
Calculate quantiles for streaming data using the t-digest algorithm.
Uses the t-digest merging digest algorithm (Dunning & Ertl, 2019) for streaming quantile estimation with bounded memory and high accuracy, especially at tail quantiles relevant to insurance risk metrics.
- Parameters:
- Return type: Dict[str, float]
- Returns:
Dictionary of approximate quantile values
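A minimal usage sketch of the streaming path, assuming the default quantile set is acceptable when none is passed to the constructor:
import numpy as np
from ergodic_insurance.summary_statistics import QuantileCalculator

rng = np.random.default_rng(0)
data_stream = rng.lognormal(mean=12.0, sigma=1.5, size=1_000_000)

calc = QuantileCalculator(seed=0)
quantiles = calc.streaming_quantiles(data_stream, compression=200)
# Dictionary of approximate quantile values, keyed via format_quantile_key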
- class DistributionFitter[source]
Bases: object
Fit and compare multiple probability distributions to data.
- DISTRIBUTIONS = {'beta': scipy.stats.beta, 'exponential': scipy.stats.expon, 'gamma': scipy.stats.gamma, 'lognormal': scipy.stats.lognorm, 'normal': scipy.stats.norm, 'pareto': scipy.stats.pareto, 'uniform': scipy.stats.uniform, 'weibull': scipy.stats.weibull_min}
- fit_all(data: ndarray, distributions: List[str] | None = None) DataFrame[source]
Fit multiple distributions and compare goodness of fit.
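A usage sketch, restricting the fit to a few of the DISTRIBUTIONS keys listed above (the data here is synthetic):
import numpy as np
from ergodic_insurance.summary_statistics import DistributionFitter

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=11.0, sigma=1.0, size=5000)

fitter = DistributionFitter()
comparison = fitter.fit_all(losses, distributions=["lognormal", "gamma", "weibull"])
# DataFrame comparing goodness of fit across the candidate distributions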
- class SummaryReportGenerator(style: str = 'markdown')[source]
Bases: object
Generate formatted summary reports for simulation results.
ergodic_insurance.trajectory_storage module
Memory-efficient storage system for simulation trajectories.
This module provides a lightweight storage system for Monte Carlo simulation trajectories that minimizes RAM usage while storing both partial time series data and comprehensive summary statistics.
- Features:
Memory-mapped numpy arrays for efficient storage
Optional HDF5 backend with compression
Configurable time series sampling (store every Nth year)
Lazy loading to minimize memory footprint
Automatic disk space management
CSV/JSON export for analysis tools
<2GB RAM usage for 100K simulations
<1GB disk usage with sampling
Example
>>> from ergodic_insurance.trajectory_storage import TrajectoryStorage
>>> storage = TrajectoryStorage(
... storage_dir="./trajectories",
... sample_interval=5, # Store every 5th year
... max_disk_usage_gb=1.0
... )
>>> # During simulation
>>> storage.store_simulation(
... sim_id=0,
... annual_losses=losses,
... final_assets=assets,
... summary_stats=stats
... )
>>> # Later retrieval
>>> data = storage.load_simulation(sim_id=0)
- class StorageConfig(storage_dir: str = './trajectory_storage', backend: str = 'memmap', sample_interval: int = 10, max_disk_usage_gb: float = 1.0, compression: bool = True, compression_level: int = 4, chunk_size: int = 1000, enable_summary_stats: bool = True, enable_time_series: bool = True, dtype: Any = <class 'numpy.float32'>) None[source]
Bases: object
Configuration for trajectory storage system.
- dtype
alias of float32
- class SimulationSummary(sim_id: int, final_assets: float, total_losses: float, total_recoveries: float, mean_annual_loss: float, max_annual_loss: float, min_annual_loss: float, growth_rate: float, ruin_occurred: bool, ruin_year: int | None = None, volatility: float | None = None) None[source]
Bases: object
Summary statistics for a single simulation.
- class TrajectoryStorage(config: StorageConfig | None = None)[source]
Bases: object
Memory-efficient storage for simulation trajectories.
Provides lightweight storage using memory-mapped arrays or HDF5, with configurable sampling and automatic disk space management.
- store_simulation(sim_id: int, annual_losses: ndarray, insurance_recoveries: ndarray, retained_losses: ndarray, final_assets: float, initial_assets: float, ruin_occurred: bool = False, ruin_year: int | None = None) None[source]
Store simulation trajectory with automatic sampling.
- Parameters:
- sim_id (int) – Simulation identifier
- annual_losses (ndarray) – Array of annual losses
- insurance_recoveries (ndarray) – Array of insurance recoveries
- retained_losses (ndarray) – Array of retained losses
- final_assets (float) – Final asset value
- initial_assets (float) – Initial asset value
- ruin_occurred (bool) – Whether ruin occurred
- Return type: None
- load_simulation(sim_id: int, load_time_series: bool = False) Dict[str, Any][source]
Load simulation data with lazy loading.
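A retrieval sketch to pair with the storage example above; passing load_time_series=True additionally pulls the sampled trajectory arrays (the keys of the returned dictionary are defined by the module):
from ergodic_insurance.trajectory_storage import TrajectoryStorage

storage = TrajectoryStorage()  # default config: ./trajectory_storage, memmap backend
summary_only = storage.load_simulation(sim_id=0)
with_series = storage.load_simulation(sim_id=0, load_time_series=True)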
ergodic_insurance.trends module
Trend module for insurance claim frequency and severity adjustments.
This module provides a hierarchy of trend classes that apply multiplicative adjustments to claim frequencies and severities over time. Trends model how insurance risks evolve due to inflation, exposure growth, regulatory changes, or other systematic factors.
- Key Concepts:
All trends are multiplicative (1.0 = no change, 1.03 = 3% increase)
Support both annual and sub-annual (monthly) time steps
Seedable for reproducibility in stochastic trends
Time-based multipliers for dynamic risk evolution
Example
Basic usage with linear trend:
from ergodic_insurance.trends import LinearTrend, ScenarioTrend
# 3% annual inflation trend
inflation = LinearTrend(annual_rate=0.03)
multiplier_year5 = inflation.get_multiplier(5.0) # ~1.159
# Custom scenario with varying rates
scenario = ScenarioTrend(
factors=[1.0, 1.05, 1.08, 1.06, 1.10],
time_unit="annual"
)
multiplier_year3 = scenario.get_multiplier(3.5) # Interpolated
- Since:
Version 0.4.0 - Core trend infrastructure for ClaimGenerator
- class Trend(seed: int | None = None)[source]
Bases: ABC
Abstract base class for all trend implementations.
Defines the interface that all trend classes must implement. Trends provide multiplicative adjustments over time for frequencies and severities in insurance claim modeling.
- All trend implementations must provide:
get_multiplier(time): Returns multiplicative factor at given time
Proper handling of edge cases (negative time, etc.)
Reproducibility through seed support (if stochastic)
Examples
Implementing a custom trend:
class StepTrend(Trend):
    def __init__(self, step_time: float, step_factor: float):
        self.step_time = step_time
        self.step_factor = step_factor

    def get_multiplier(self, time: float) -> float:
        if time < 0:
            return 1.0
        return 1.0 if time < self.step_time else self.step_factor
- abstractmethod get_multiplier(time: float) float[source]
Get the multiplicative adjustment factor at a given time.
- Parameters:
- time (float) – Time point (in years from start) to get multiplier for. Can be fractional for sub-annual precision.
- Returns:
Multiplicative factor (1.0 = no change, >1.0 = increase, <1.0 = decrease).
- Return type: float
Note
Implementations should handle negative time gracefully, typically returning 1.0 or the initial value.
- class NoTrend(seed: int | None = None)[source]
Bases: Trend
Default trend implementation with no adjustment over time.
This trend always returns a multiplier of 1.0, representing no change in frequency or severity over time. Useful as a default or baseline.
Examples
Using NoTrend as baseline:
from ergodic_insurance.trends import NoTrend

baseline = NoTrend()
# Always returns 1.0
assert baseline.get_multiplier(0) == 1.0
assert baseline.get_multiplier(10) == 1.0
assert baseline.get_multiplier(-5) == 1.0
- class LinearTrend(annual_rate: float = 0.03, seed: int | None = None)[source]
Bases: Trend
Linear compound growth trend with constant annual rate.
Models exponential growth/decay with a fixed annual rate, similar to compound interest. Commonly used for inflation, exposure growth, or systematic risk changes.
The multiplier at time t is calculated as: (1 + annual_rate)^t
- annual_rate
Annual growth rate (0.03 = 3% growth, -0.02 = 2% decay).
Examples
Modeling inflation:
from ergodic_insurance.trends import LinearTrend

# 3% annual inflation
inflation = LinearTrend(annual_rate=0.03)

# After 5 years: 1.03^5 ≈ 1.159
mult_5y = inflation.get_multiplier(5.0)
print(f"5-year inflation factor: {mult_5y:.3f}")

# After 6 months: 1.03^0.5 ≈ 1.015
mult_6m = inflation.get_multiplier(0.5)
print(f"6-month inflation factor: {mult_6m:.3f}")
Modeling exposure decay:
# 2% annual exposure reduction
reduction = LinearTrend(annual_rate=-0.02)
mult_10y = reduction.get_multiplier(10.0)  # 0.98^10 ≈ 0.817
- get_multiplier(time: float) float[source]
Calculate compound growth multiplier at given time.
- Parameters:
- time (float) – Time in years from start. Can be fractional for sub-annual calculations. Negative times return 1.0.
- Returns:
Multiplicative factor calculated as (1 + annual_rate)^time. Returns 1.0 for negative times.
- Return type: float
Examples
Calculating multipliers:
trend = LinearTrend(0.04)  # 4% annual

# Year 1: 1.04
mult_1 = trend.get_multiplier(1.0)

# Year 2.5: 1.04^2.5 ≈ 1.104
mult_2_5 = trend.get_multiplier(2.5)

# Negative time: 1.0
mult_neg = trend.get_multiplier(-1.0)
- class RandomWalkTrend(drift: float = 0.0, volatility: float = 0.1, seed: int | None = None)[source]
Bases: Trend
Random walk trend with drift and volatility.
Models a geometric random walk (geometric Brownian motion) where the multiplier evolves as a cumulative product of random changes. Commonly used for modeling market indices, asset prices, or unpredictable long-term trends in insurance markets.
The multiplier at time t follows: M(t) = exp(drift * t + volatility * W(t)) where W(t) is a Brownian motion.
- drift
Annual drift rate (expected growth rate).
- volatility
Annual volatility (standard deviation of log returns).
- cached_path
Cached random path for efficiency.
- cached_times
Time points for the cached path.
Examples
Basic random walk with drift:
from ergodic_insurance.trends import RandomWalkTrend

# 2% drift with 10% volatility
trend = RandomWalkTrend(drift=0.02, volatility=0.10, seed=42)

# Generate multipliers
mult_1 = trend.get_multiplier(1.0)  # Random around e^0.02
mult_5 = trend.get_multiplier(5.0)  # More variation
Market-like volatility:
# High volatility market
volatile = RandomWalkTrend(drift=0.0, volatility=0.30)

# Low volatility with positive drift
stable = RandomWalkTrend(drift=0.03, volatility=0.05)
- get_multiplier(time: float) float[source]
Get random walk multiplier at given time.
- Parameters:
- time (float) – Time in years from start. Negative times return 1.0.
- Returns:
Multiplicative factor following geometric Brownian motion. Always positive due to exponential transformation.
- Return type: float
Note
The path is cached on first call for efficiency. All subsequent calls will use the same random path, ensuring consistency within a simulation run.
- class MeanRevertingTrend(mean_level: float = 1.0, reversion_speed: float = 0.2, volatility: float = 0.1, initial_level: float = 1.0, seed: int | None = None)[source]
Bases: Trend
Mean-reverting trend using Ornstein-Uhlenbeck process.
Models a trend that tends to revert to a long-term mean level, commonly used for interest rates, insurance market cycles, or any process with cyclical behavior around a stable level.
The process follows: dX(t) = theta*(mu - X(t))*dt + sigma*dW(t) where the multiplier M(t) = exp(X(t))
- mean_level
Long-term mean multiplier level.
- reversion_speed
Speed of mean reversion (theta).
- volatility
Volatility of the process (sigma).
- initial_level
Starting multiplier level.
- cached_path
Cached process path for efficiency.
- cached_times
Time points for the cached path.
Examples
Insurance market cycle:
from ergodic_insurance.trends import MeanRevertingTrend

# Market cycles around 1.0 with 5-year half-life
market = MeanRevertingTrend(
    mean_level=1.0,
    reversion_speed=0.14,  # ln(2)/5 years
    volatility=0.10,
    initial_level=1.1,  # Start in hard market
    seed=42
)

# Will gradually revert to 1.0
mult_1 = market.get_multiplier(1.0)
mult_10 = market.get_multiplier(10.0)  # Closer to 1.0
Interest rate model:
# Interest rates reverting to 3% with high volatility
rates = MeanRevertingTrend(
    mean_level=1.03,
    reversion_speed=0.5,
    volatility=0.15
)
- get_multiplier(time: float) float[source]
Get mean-reverting multiplier at given time.
- Parameters:
- time (float) – Time in years from start. Negative times return 1.0.
- Returns:
Multiplicative factor following OU process. Always positive. Tends toward mean_level over time.
- Return type: float
Note
The path is cached on first call for efficiency. The process exhibits mean reversion: starting values far from the mean will tend to move toward it over time.
- class RegimeSwitchingTrend(regimes: List[float] | None = None, transition_probs: List[List[float]] | None = None, initial_regime: int = 0, regime_persistence: float = 1.0, seed: int | None = None)[source]
Bases: Trend
Trend that switches between different market regimes.
Models discrete regime changes such as hard/soft insurance markets, economic cycles, or regulatory environments. Each regime has its own multiplier, and transitions occur stochastically based on probabilities.
- regimes
List of regime multipliers.
- transition_matrix
Matrix of transition probabilities between regimes.
- initial_regime
Starting regime index.
- regime_persistence
How long regimes tend to last.
- cached_regimes
Cached regime path for efficiency.
- cached_times
Time points for the cached path.
Examples
Hard/soft insurance market:
from ergodic_insurance.trends import RegimeSwitchingTrend

# Two regimes: soft (0.9x) and hard (1.2x) markets
market = RegimeSwitchingTrend(
    regimes=[0.9, 1.2],
    transition_probs=[[0.8, 0.2],   # Soft -> [80% stay, 20% to hard]
                      [0.3, 0.7]],  # Hard -> [30% to soft, 70% stay]
    initial_regime=0,  # Start in soft market
    seed=42
)

# Multiplier switches between 0.9 and 1.2
mult_5 = market.get_multiplier(5.0)
Three-regime economic cycle:
# Recession, normal, boom
economy = RegimeSwitchingTrend(
    regimes=[0.8, 1.0, 1.3],
    transition_probs=[
        [0.6, 0.4, 0.0],  # Recession
        [0.1, 0.7, 0.2],  # Normal
        [0.0, 0.5, 0.5],  # Boom
    ],
    initial_regime=1  # Start in normal
)
- get_multiplier(time: float) float[source]
Get regime-based multiplier at given time.
- Parameters:
- time (float) – Time in years from start. Negative times return 1.0.
- Returns:
Multiplicative factor for the active regime at time t. Changes discretely as regimes switch.
- Return type: float
Note
The regime path is cached on first call. Regime changes are stochastic but reproducible with the same seed. The actual regime durations depend on both transition probabilities and the regime_persistence parameter.
- class ScenarioTrend(factors: List[float] | Dict[float, float], time_unit: str = 'annual', interpolation: str = 'linear', seed: int | None = None)[source]
Bases: Trend
Trend based on explicit scenario factors with interpolation.
Allows specifying exact multiplicative factors at specific time points, with linear interpolation between points. Useful for modeling known future changes, regulatory impacts, or custom scenarios.
- factors
List or dict of multiplicative factors.
- time_unit
Time unit for the factors (“annual” or “monthly”).
- interpolation
Interpolation method (“linear” or “step”).
Examples
Annual scenario with known rates:
from ergodic_insurance.trends import ScenarioTrend

# Year 0: 1.0, Year 1: 1.05, Year 2: 1.08, etc.
scenario = ScenarioTrend(
    factors=[1.0, 1.05, 1.08, 1.06, 1.10],
    time_unit="annual"
)

# Exact points
mult_1 = scenario.get_multiplier(1.0)  # 1.05
mult_2 = scenario.get_multiplier(2.0)  # 1.08

# Interpolated
mult_1_5 = scenario.get_multiplier(1.5)  # ≈1.065
Monthly scenario:
# Monthly adjustment factors
monthly = ScenarioTrend(
    factors=[1.0, 1.01, 1.02, 1.015, 1.025, 1.03],
    time_unit="monthly"
)

# Month 3 (0.25 years)
mult_3m = monthly.get_multiplier(0.25)
Using dictionary for specific times:
# Specific time points
custom = ScenarioTrend(
    factors={0: 1.0, 2: 1.1, 5: 1.2, 10: 1.5},
    interpolation="linear"
)
- get_multiplier(time: float) float[source]
Get interpolated multiplier at given time.
- Parameters:
- time (float) – Time in years from start. Can be fractional. Negative times return 1.0.
- Returns:
Multiplicative factor, interpolated from scenario points:
- Before first point: returns 1.0
- After last point: returns last factor
- Between points: interpolated based on method
- Return type: float
Examples
Interpolation behavior:
scenario = ScenarioTrend([1.0, 1.1, 1.2, 1.15])

# Exact points
mult_0 = scenario.get_multiplier(0.0)  # 1.0
mult_2 = scenario.get_multiplier(2.0)  # 1.2

# Linear interpolation
mult_1_5 = scenario.get_multiplier(1.5)  # 1.15

# Beyond range
mult_neg = scenario.get_multiplier(-1.0)  # 1.0
mult_10 = scenario.get_multiplier(10.0)   # 1.15 (last)
ergodic_insurance.validation_metrics module
Validation metrics for walk-forward analysis and strategy backtesting.
This module provides performance metrics and comparison tools for evaluating insurance strategies across training and testing periods in walk-forward validation.
Example
>>> from validation_metrics import ValidationMetrics, MetricCalculator
>>> import numpy as np
>>> # Calculate metrics for a strategy's performance
>>> returns = np.random.normal(0.08, 0.02, 1000)
>>> losses = np.random.exponential(100000, 1000)
>>>
>>> calculator = MetricCalculator()
>>> metrics = calculator.calculate_metrics(
... returns=returns,
... losses=losses,
... final_assets=10000000
... )
>>>
>>> print(f"ROE: {metrics.roe:.2%}")
>>> print(f"Sharpe Ratio: {metrics.sharpe_ratio:.2f}")
- class ValidationMetrics(roe: float, ruin_probability: float, growth_rate: float, volatility: float, sharpe_ratio: float = 0.0, max_drawdown: float = 0.0, var_95: float = 0.0, cvar_95: float = 0.0, win_rate: float = 0.0, profit_factor: float = 0.0, recovery_time: float = 0.0, stability: float = 0.0) None[source]
Bases: object
Container for validation performance metrics.
- roe
Return on equity (annualized)
- ruin_probability
Probability of insolvency
- growth_rate
Compound annual growth rate
- volatility
Standard deviation of returns
- sharpe_ratio
Risk-adjusted return metric
- max_drawdown
Maximum peak-to-trough decline
- var_95
Value at Risk at 95% confidence
- cvar_95
Conditional Value at Risk at 95% confidence
- win_rate
Percentage of profitable periods
- profit_factor
Ratio of gross profits to gross losses
- recovery_time
Average time to recover from drawdown
- stability
R-squared of equity curve
- compare(other: ValidationMetrics) Dict[str, float][source]
Compare metrics with another set.
- Parameters:
- other (ValidationMetrics) – Metrics to compare against.
- Return type: Dict[str, float]
- Returns:
Dictionary of percentage differences.
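Continuing the module example above, a quick sketch of compare; whether the returned dictionary is keyed by attribute names such as 'roe' is an assumption here, hence the defensive .get():
>>> metrics_a = calculator.calculate_metrics(returns=returns, final_assets=10000000)
>>> metrics_b = calculator.calculate_metrics(returns=returns * 1.1, final_assets=10000000)
>>> diffs = metrics_b.compare(metrics_a)
>>> print(f"ROE difference: {diffs.get('roe', 0.0):+.1f}%")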
- class StrategyPerformance(strategy_name: str, in_sample_metrics: ValidationMetrics | None = None, out_sample_metrics: ValidationMetrics | None = None, degradation: Dict[str, float] = <factory>, overfitting_score: float = 0.0, consistency_score: float = 0.0, metadata: Dict[str, Any] = <factory>) None[source]
Bases: object
Performance tracking for a single strategy.
- strategy_name
Name of the strategy
- in_sample_metrics
Metrics from training period
- out_sample_metrics
Metrics from testing period
- degradation
Performance degradation from in-sample to out-sample
- overfitting_score
Degree of overfitting (0 = none, 1 = severe)
- consistency_score
Consistency across multiple windows
- metadata
Additional strategy-specific data
- in_sample_metrics: ValidationMetrics | None = None
- out_sample_metrics: ValidationMetrics | None = None
- class MetricCalculator(risk_free_rate: float = 0.02)[source]
Bases: object
Calculator for performance metrics from simulation results.
- calculate_metrics(returns: ndarray, losses: ndarray | None = None, final_assets: ndarray | None = None, initial_assets: float = 10000000, n_years: int | None = None) ValidationMetrics[source]
Calculate comprehensive performance metrics.
- Parameters:
- Return type: ValidationMetrics
- Returns:
ValidationMetrics object with calculated metrics.
- class PerformanceTargets(min_roe: float | None = None, max_ruin_probability: float | None = None, min_sharpe_ratio: float | None = None, max_drawdown: float | None = None, min_growth_rate: float | None = None)[source]
Bases: object
User-defined performance targets for strategy evaluation.
- min_roe
Minimum acceptable ROE
- max_ruin_probability
Maximum acceptable ruin probability
- min_sharpe_ratio
Minimum acceptable Sharpe ratio
- max_drawdown
Maximum acceptable drawdown
- min_growth_rate
Minimum acceptable growth rate
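For instance, targets requiring at least 10% ROE, at most a 1% ruin probability, and a Sharpe ratio of 1.0 or better could be declared as below (thresholds are illustrative):
>>> from ergodic_insurance.validation_metrics import PerformanceTargets
>>> targets = PerformanceTargets(
...     min_roe=0.10,
...     max_ruin_probability=0.01,
...     min_sharpe_ratio=1.0
... )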
ergodic_insurance.visualization_legacy module
Visualization utilities for professional WSJ-style plots.
This module provides standardized plotting functions with Wall Street Journal aesthetic for insurance analysis and risk metrics visualization.
NOTE: This module now acts as a facade for the new modular visualization package. New code should import directly from ergodic_insurance.visualization.
- get_figure_factory(theme: Theme | None = None) FigureFactory | None[source]
Get or create global figure factory instance.
- set_wsj_style(use_factory: bool = False, theme: Theme | None = None)[source]
Set matplotlib to use WSJ-style formatting.
- Parameters:
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.set_wsj_style() instead.
- format_currency(value: float, decimals: int = 0, abbreviate: bool = False) str[source]
Format value as currency.
- Parameters:
- Return type: str
- Returns:
Formatted string (e.g., “$1,000” or “$1K” if abbreviate=True)
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.format_currency() instead.
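Illustrative calls based on the docstring's own examples; the exact output strings assume the documented defaults:
>>> format_currency(1000)
'$1,000'
>>> format_currency(1000, abbreviate=True)
'$1K'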
- format_percentage(value: float, decimals: int = 1) str[source]
Format value as percentage.
- Parameters:
- Return type: str
- Returns:
Formatted string (e.g., “5.0%”)
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.format_percentage() instead.
- class WSJFormatter[source]
Bases: object
Formatter for WSJ-style axis labels.
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.WSJFormatter instead.
- static currency(x: float, decimals: int = 1) str[source]
Format value as currency (shortened method name).
- Return type: str
- static percentage(x: float, decimals: int = 1) str[source]
Format value as percentage (shortened method name).
- Return type: str
- create_styled_figure(size_type: str = 'medium', theme: Theme | None = None, use_factory: bool = True, **kwargs) Tuple[Figure, Axes | ndarray][source]
Create a figure with automatic styling applied.
- Parameters:
- Return type: Tuple[Figure, Axes | ndarray]
- Returns:
Tuple of (figure, axes)
- plot_loss_distribution(losses: ndarray | DataFrame, title: str = 'Loss Distribution', bins: int = 50, show_metrics: bool = True, var_levels: List[float] | None = None, figsize: Tuple[int, int] = (12, 6), show_stats: bool = False, log_scale: bool = False, use_factory: bool = False, theme: Theme | None = None) Figure[source]
Create WSJ-style loss distribution plot.
- Parameters:
- losses (Union[ndarray, DataFrame]) – Array of loss values or DataFrame with ‘amount’ column
- title (str) – Plot title
- bins (int) – Number of histogram bins
- show_metrics (bool) – Whether to show VaR/TVaR lines
- var_levels (Optional[List[float]]) – VaR confidence levels to show
- show_stats (bool) – Whether to show statistics
- log_scale (bool) – Whether to use log scale
- use_factory (bool) – Whether to use new visualization factory if available
- theme (Optional[Theme]) – Optional theme to use with factory
- Return type: Figure
- Returns:
Matplotlib figure
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_loss_distribution() instead.
- plot_return_period_curve(losses: ndarray | DataFrame, return_periods: ndarray | None = None, scenarios: Dict[str, ndarray] | None = None, title: str = 'Return Period Curves', figsize: Tuple[int, int] = (10, 6), confidence_level: float = 0.95, show_grid: bool = True) Figure[source]
Create WSJ-style return period curve.
- Parameters:
- Return type: Figure
- Returns:
Matplotlib figure
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_return_period_curve() instead.
- plot_insurance_layers(layers: List[Dict[str, float]] | DataFrame, total_limit: float | None = None, title: str = 'Insurance Program Structure', figsize: Tuple[int, int] = (10, 6), losses: ndarray | DataFrame | None = None, loss_data: ndarray | DataFrame | None = None, show_expected_loss: bool = False) Figure[source]
Create WSJ-style insurance layer visualization.
- Parameters:
- Return type: Figure
- Returns:
Matplotlib figure
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_insurance_layers() instead.
- create_interactive_dashboard(results: Dict[str, Any] | DataFrame, title: str = 'Monte Carlo Simulation Dashboard', height: int = 600, show_distributions: bool = False) Figure[source]
Create interactive Plotly dashboard with WSJ styling.
- Parameters:
- Return type: Figure
- Returns:
Plotly figure
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.create_interactive_dashboard() instead.
- plot_convergence_diagnostics(convergence_stats: Dict[str, Any], title: str = 'Convergence Diagnostics', figsize: Tuple[int, int] = (12, 8), r_hat_threshold: float = 1.1, show_threshold: bool = False) Figure[source]
Create comprehensive convergence diagnostics plot.
- Parameters:
- Return type: Figure
- Returns:
Matplotlib figure
Deprecated since version 2.0.0: Use ergodic_insurance.visualization.plot_convergence_diagnostics() instead.
- plot_pareto_frontier_2d(frontier_points: List[Any], x_objective: str, y_objective: str, x_label: str | None = None, y_label: str | None = None, title: str = 'Pareto Frontier', highlight_knees: bool = True, show_trade_offs: bool = False, figsize: Tuple[float, float] = (10, 6)) Figure[source]
Plot 2D Pareto frontier with WSJ styling.
- plot_pareto_frontier_3d(frontier_points: List[Any], x_objective: str, y_objective: str, z_objective: str, x_label: str | None = None, y_label: str | None = None, z_label: str | None = None, title: str = '3D Pareto Frontier', figsize: Tuple[float, float] = (12, 8)) Figure[source]
Plot 3D Pareto frontier surface.
- create_interactive_pareto_frontier(frontier_points: List[Any], objectives: List[str], title: str = 'Interactive Pareto Frontier', height: int = 600, show_dominated: bool = True) Figure[source]
Create interactive Plotly Pareto frontier visualization.
- plot_scenario_comparison(aggregated_results: Any, metrics: List[str] | None = None, figsize: Tuple[float, float] = (14, 8), save_path: str | None = None) Figure[source]
Create comprehensive scenario comparison visualization.
- Parameters:
- Return type: Figure
- Returns:
Matplotlib figure
- plot_sensitivity_heatmap(aggregated_results: Any, metric: str = 'mean_growth_rate', figsize: Tuple[float, float] = (10, 8), save_path: str | None = None) Figure[source]
Create sensitivity analysis heatmap.
- plot_parameter_sweep_3d(aggregated_results: Any, param1: str, param2: str, metric: str = 'mean_growth_rate', height: int = 600, save_path: str | None = None) Figure[source]
Create 3D surface plot for parameter sweep results.
- Parameters:
- Return type: Figure
- Returns:
Plotly figure
ergodic_insurance.walk_forward_validator module
Walk-forward validation system for insurance strategy testing.
This module implements a rolling window validation framework that tests insurance strategies across multiple time periods to detect overfitting and ensure robustness of insurance decisions.
Example
>>> from walk_forward_validator import WalkForwardValidator
>>> from strategy_backtester import ConservativeFixedStrategy, AdaptiveStrategy
>>> # Create validator with 3-year windows
>>> validator = WalkForwardValidator(
... window_size=3,
... step_size=1,
... test_ratio=0.3
... )
>>>
>>> # Define strategies to test
>>> strategies = [
... ConservativeFixedStrategy(),
... AdaptiveStrategy()
... ]
>>>
>>> # Run walk-forward validation
>>> results = validator.validate_strategies(
... strategies=strategies,
... n_years=10,
... n_simulations=1000
... )
>>>
>>> # Generate reports
>>> validator.generate_report(results, output_dir="./reports")
- class ValidationWindow(window_id: int, train_start: int, train_end: int, test_start: int, test_end: int) None[source]
Bases: object
Represents a single validation window.
- window_id
Unique identifier for the window
- train_start
Start year of training period
- train_end
End year of training period
- test_start
Start year of testing period
- test_end
End year of testing period
- class WindowResult(window: ValidationWindow, strategy_performances: Dict[str, StrategyPerformance] = <factory>, optimization_params: Dict[str, Dict[str, float]] = <factory>, execution_time: float = 0.0) None[source]
Bases: object
Results from a single validation window.
- window
The validation window
- strategy_performances
Performance by strategy name
- optimization_params
Optimized parameters if applicable
- execution_time
Time to process window
- window: ValidationWindow
- strategy_performances: Dict[str, StrategyPerformance]
- class ValidationResult(window_results: List[WindowResult] = <factory>, strategy_rankings: DataFrame = <factory>, overfitting_analysis: Dict[str, float] = <factory>, consistency_scores: Dict[str, float] = <factory>, best_strategy: str | None = None, metadata: Dict[str, Any] = <factory>) None[source]
Bases: object
Complete walk-forward validation results.
- window_results
Results for each window
- strategy_rankings
Overall strategy rankings
- overfitting_analysis
Overfitting detection results
- consistency_scores
Strategy consistency across windows
- best_strategy
Recommended strategy based on validation
- metadata
Additional validation metadata
- window_results: List[WindowResult]
- strategy_rankings: DataFrame
- class WalkForwardValidator(window_size: int = 3, step_size: int = 1, test_ratio: float = 0.3, simulation_engine: Simulation | None = None, backtester: StrategyBacktester | None = None, performance_targets: PerformanceTargets | None = None)[source]
Bases: object
Walk-forward validation system for insurance strategies.
- generate_windows(total_years: int) List[ValidationWindow][source]
Generate validation windows.
- Parameters:
- total_years (int) – Total years of data available
- Return type: List[ValidationWindow]
- Returns:
List of validation windows.
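A quick sketch of inspecting the generated windows under the constructor defaults above; the exact boundary arithmetic is the validator's, so this simply prints whatever it produces:
>>> validator = WalkForwardValidator(window_size=3, step_size=1, test_ratio=0.3)
>>> windows = validator.generate_windows(total_years=10)
>>> for w in windows:
...     print(w.window_id, w.train_start, w.train_end, w.test_start, w.test_end)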
- validate_strategies(strategies: List[InsuranceStrategy], n_years: int = 10, n_simulations: int = 1000, manufacturer: WidgetManufacturer | None = None, config: Config | None = None) ValidationResult[source]
Validate strategies using walk-forward analysis.
- Parameters:
- strategies (List[InsuranceStrategy]) – List of strategies to validate
- n_years (int) – Total years for validation
- n_simulations (int) – Number of simulations per test
- manufacturer (Optional[WidgetManufacturer]) – Manufacturer instance
- Return type: ValidationResult
- Returns:
ValidationResult with complete analysis.
Module contents
Ergodic Insurance Limits - Core Package.
This module provides the main entry point for the Ergodic Insurance Limits package, exposing the key classes and functions for insurance simulation and analysis using ergodic theory. The framework helps optimize insurance retentions and limits for businesses by analyzing time-average outcomes rather than traditional ensemble approaches.
- Key Features:
Ergodic analysis of insurance decisions
Business optimization with insurance constraints
Monte Carlo simulation with trajectory storage
Insurance strategy backtesting and validation
Performance optimization and benchmarking
Comprehensive visualization and reporting
Examples
One-call analysis (recommended starting point):
from ergodic_insurance import run_analysis
results = run_analysis(
initial_assets=10_000_000,
loss_frequency=2.5,
loss_severity_mean=1_000_000,
deductible=500_000,
coverage_limit=10_000_000,
premium_rate=0.025,
)
print(results.summary())
results.plot()
Quick start with defaults (creates a $10M manufacturer, 50-year horizon):
from ergodic_insurance import Config
config = Config() # All defaults — just works
From basic company info:
from ergodic_insurance import Config
config = Config.from_company(
initial_assets=50_000_000,
operating_margin=0.12,
)
Note
This module uses lazy imports to avoid circular dependencies during test discovery. All public API classes are accessible through the module’s __all__ list.
- Since:
Version 0.4.0