Past natural catastrophes offer valuable information for present-day risk assessment, provided the loss data can be accurately transferred to the present. The trends in these data are subject to a range of influences that vary according to time and place. These influences need to be filtered out.
Socio-economic developments in exposed values and changes in the natural hazards themselves, for example as a result of climate variability and climate change, have a fundamental impact on these trends. Economic factors on the exposure side generally play the greater role in this context. A further component affecting the trend is the increasing number of very small loss events recorded, a consequence of steadily improving reporting, especially in industrialised and emerging countries. In order to assess these different factors, loss data need to be made comparable in terms of place and time on the basis of a global economic assessment.
Inflation adjustment and normalisation
Two similar, but fundamentally different, questions can be asked to assess past loss events according to today's standards: (a) What would event X cost in today's money? (b) What damage would event X cause today? To answer the first question, we simply take the extent of the damage and track the development of the monetary value of the loss amount. To answer (b), however, the loss must be re-evaluated under present-day conditions, in other words taking into account any changes in the exposed assets and in vulnerability. In the first case, it is enough to apply an inflation adjustment to the historically determined loss data with the help of an established price index. It is important to ensure that the index represents the actual development of prices in the region in question and is based on the currency of the country concerned. To investigate the second question regarding the scale of economic loss that a historical event could reach today, an additional adjustment has to be made for the development of values in the area affected. Such an adjustment is known as normalisation. Indexing is the term used if insured losses are being examined and the changes in insurance penetration are taken into account. Macroeconomic data such as gross domestic product (GDP) have become the established economic reference values for the normalisation of loss data (see Topics Geo 2012).
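Question (a), the pure inflation adjustment, can be sketched as a simple price-index ratio. The function name and the index values below are illustrative, not real CPI figures:

```python
# Minimal sketch of inflation adjustment (question (a) above):
# express a historical loss in today's money via a price-index ratio.
# Index values are placeholders, not actual CPI data.

def inflation_adjust(loss, index_event_year, index_today):
    """Scale a historical loss amount by the change in the price index."""
    return loss * index_today / index_event_year

# e.g. a loss of 500 (in millions, local currency), price index 60 at the
# time of the event and 105 today
adjusted = inflation_adjust(500.0, 60.0, 105.0)
print(adjusted)  # 875.0
```

As the text notes, the index must reflect price development in the affected region and be denominated in the local currency; converting to another currency before adjusting would mix in exchange-rate effects.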
These data are available in high quality and are easy to access. The historical loss amount is multiplied by a normalisation factor that is equal to the ratio of current GDP to the GDP at the time of the historical event. Assuming that this GDP ratio accurately reflects the local changes in values, we can calculate the anticipated loss amount that would result if the event were to occur again today. Influences resulting from a change in vulnerability are not factored in.
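The GDP-based normalisation step described above reduces to one ratio. The figures in this sketch are hypothetical placeholders, not actual GDP data:

```python
# Sketch of GDP-based normalisation (question (b) above): the historical
# loss is multiplied by the ratio of current GDP to GDP in the event year.
# All numbers are illustrative.

def normalise_loss(historical_loss, gdp_event_year, gdp_current):
    """Estimate the loss a historical event would cause today."""
    factor = gdp_current / gdp_event_year  # normalisation factor
    return historical_loss * factor

# e.g. a US$ 1,000m loss in a year when GDP was 2,500, with GDP now 6,000
print(normalise_loss(1_000.0, 2_500.0, 6_000.0))  # 2400.0
```

As stated in the text, this assumes the GDP ratio is a faithful proxy for local changes in values, and it deliberately leaves changes in vulnerability out of the calculation.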
New approach: Hazard-specific cell-based normalisation
If the GDP data relate to an entire country or a region that is significantly larger than the area affected by the natural catastrophe, one cannot automatically assume a proportional correlation between national GDP and changes in value in the area affected. To smooth out this distortion, we have developed a method that we call hazard-specific regionalised normalisation. A global 1°x1° grid forms the centrepiece of this version of normalisation. The annual proportion of the country's GDP is calculated for each cell, beginning with the year 1980. A weighting is performed using the population trend in the cell, in some cases interpolated or extrapolated (Fig. 1). The special feature of this approach is that each individual cell contains a time series with the GDP share attributable to it since 1980. Cells that cross national borders are recorded once for each country, along with the corresponding share.
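The population-weighted disaggregation of national GDP onto grid cells can be sketched as follows. This is an illustrative reconstruction of the idea, not Munich Re's actual implementation, and all input numbers are invented:

```python
# Illustrative sketch: distribute a country's annual GDP across grid cells
# in proportion to each cell's population, giving every cell a GDP-share
# time series. Not the production NatCatSERVICE code; data are invented.

def cell_gdp_shares(national_gdp_by_year, cell_population_by_year):
    """national_gdp_by_year: {year: GDP}
    cell_population_by_year: {cell_id: {year: population}}
    Returns {cell_id: {year: GDP share attributed to that cell}}."""
    shares = {}
    for year, gdp in national_gdp_by_year.items():
        total_pop = sum(p[year] for p in cell_population_by_year.values())
        for cell, pops in cell_population_by_year.items():
            shares.setdefault(cell, {})[year] = gdp * pops[year] / total_pop
    return shares

gdp = {1980: 100.0, 2020: 400.0}
pop = {"c1": {1980: 30, 2020: 60}, "c2": {1980: 70, 2020: 40}}
print(cell_gdp_shares(gdp, pop)["c1"][2020])  # 240.0 (60% of 400)
```

A cell lying on a national border would simply appear in each country's `cell_population_by_year` input with the population falling inside that country, matching the multiple-recording described above.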
NatCatSERVICE, Munich Re’s global loss database, includes the geographic coordinates for the locations and regions that are worst affected in a loss event. These form the basis for what is known as the loss footprint for an event. In addition, each natural hazard – whether thunderstorm, flash flood or winter storm – has its individual geographic extent, which is known as the hazard footprint.
Footprints
A winter storm normally covers an area many times bigger than that of a thunderstorm. In turn, the geographical extent of a thunderstorm is typically much bigger than that of flash floods following torrential rain. The aim is to achieve a kind of geometric compromise between the hazard footprint and the loss footprint on the 1°x1° grid. An individual normalisation footprint is obtained for each event from the geocoded loss-location information and the hazard-specific selection pattern derived from it. This specifies which cells should be used to calculate the normalisation factor. Munich Re’s NatCatSERVICE has calculated the typical footprints for five basic types of loss event. When sorted according to extent, these are:
Small-scale events (including flash floods, landslides and lightning strikes)
Local events (including severe thunderstorms, earthquakes, bushfires and forest fires)
Large-scale events (including winter storms, droughts and heatwaves)
Some of these hazard-specific cell-selection patterns can be seen in Fig. 2. These graphs are available for the 28,000 or so country-based events since 1980 that are included in the NatCatSERVICE.
To determine the particular normalisation factor, we take the sum of the cell values of the whole footprint for the year in which the event occurred, and compare this value with the sum of the cell values of the footprint for the current year.
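The footprint-based factor described above can be sketched in a few lines. The cell values and years here are hypothetical:

```python
# Sketch of the footprint-based normalisation factor: sum the per-cell
# GDP values over the event's normalisation footprint for the event year
# and for the current year, then take the ratio. Values are invented.

def footprint_factor(cell_values, footprint, event_year, current_year):
    """cell_values: {cell_id: {year: GDP share}}
    footprint: iterable of cell ids forming the normalisation footprint."""
    sum_then = sum(cell_values[c][event_year] for c in footprint)
    sum_now = sum(cell_values[c][current_year] for c in footprint)
    return sum_now / sum_then

cells = {"a": {2002: 10.0, 2024: 25.0}, "b": {2002: 5.0, 2024: 20.0}}
print(footprint_factor(cells, ["a", "b"], 2002, 2024))  # 3.0
```

Multiplying the historical loss by this factor yields the normalised loss, exactly as in the national-GDP variant, but restricted to the cells the event actually touched.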
Two examples of loss amount trends, for severe thunderstorm losses in North America and flood losses in Europe, are displayed in Fig. 3. The increase in normalised severe thunderstorm losses is in line with meteorological observations made in the USA during the same period: an increase in the intensity of severe, and consequently costly, thunderstorms with tornado outbreaks and severe hail. When assessing the diminishing trend in normalised flood losses in Europe, it must be remembered that a lot of money was invested in improving protection measures immediately after the devastating floods of 2002. These measures have borne fruit: despite the similar hydrological scale, the loss from the 2013 flooding was significantly below the normalised value for the 2002 event.
Number of events has a negligible impact
The normalisation method described here allows us to establish how the risk for any region has changed over time in terms of the loss amounts. As well as economic development, a further criterion for the risk assessment is that the recording of loss events must have remained constant over the period under consideration. However, this is not the case for most regions. For example, the internet has made a substantial contribution to ensuring that smaller events in particular are better recorded today than they were 30 years ago. This effect accounts for a substantial portion of the trend in increasing numbers of loss events, as shown in Fig. 4 (left-hand column, top row). However, this reporting trend has no notable impact on the loss amount trend, since annual loss amounts across all types of natural hazard depend on just a few major loss events which have always been recorded.
Improved comparability thanks to differentiated classification
It is important to have sensible graduations between the loss events in order to analyse the influence of small and major loss events on the loss statistics. One way would be simply to apply three globally applicable thresholds to the normalised loss data (such as 10, 100 and 1,000 million US dollars), in order to organise the events according to the degree of economic severity. But such a global distribution fails to take account of the fact that a loss of US$ 100m is of quite different significance for countries like Haiti or Bangladesh than it is, say, for the USA or Germany. Allowance can be made for these geographic and economic differences by spreading out the thresholds. The four income classes used each year by the World Bank to define every country can be adopted for this purpose. With each income class, the per capita gross national income increases by a factor of between 3 and 4. The metric proposed for classifying catastrophes in the table on page 62 uses this distribution, whereby the degree of severity of an event, as measured by the loss amount, depends on the particular income group. The number of victims is also incorporated into the measurement of the degree of severity. The normalised loss amount and income category for a country in the current year, in conjunction with the number of victims, provide the catastrophe class. This procedure represents the most robust method of making the economic influences of natural catastrophes comparable in terms of time and location. When this catastrophe class metric is applied to all loss events in the NatCatSERVICE database, it becomes clear that only the severe events in a particular year are of significance for the development of loss amount statistics (Fig. 4, bottom row, right). The growing number of small loss events resulting from improved reporting, particularly in recent years, has a negligible influence on loss amount statistics (in contrast to the frequency statistics). 
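The income-group-dependent classification can be sketched as a threshold lookup. The threshold values below are invented for illustration; the article's actual class boundaries are given in its table (page 62), and for simplicity this sketch uses the loss amount only, leaving out the number of victims:

```python
# Hypothetical sketch of income-group-dependent severity classification.
# Threshold values are invented, not the article's actual boundaries, and
# the victim count used in the real metric is omitted for brevity.

THRESHOLDS = {  # normalised loss thresholds in US$ m per income group
    "high":         (100.0, 1_000.0, 10_000.0),
    "upper-middle": (30.0, 300.0, 3_000.0),
    "lower-middle": (10.0, 100.0, 1_000.0),
    "low":          (3.0, 30.0, 300.0),
}

def catastrophe_class(normalised_loss, income_group):
    """Return a severity class from 1 (minor) to 4 (major)."""
    t1, t2, t3 = THRESHOLDS[income_group]
    if normalised_loss < t1:
        return 1
    if normalised_loss < t2:
        return 2
    if normalised_loss < t3:
        return 3
    return 4

print(catastrophe_class(100.0, "low"))   # 3: severe for a low-income country
print(catastrophe_class(100.0, "high"))  # 2: moderate for a high-income one
```

The point of the shifted thresholds is visible in the example: the same US$ 100m loss lands in a higher severity class for a low-income country than for a high-income one.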
Even if the number of small loss events recorded is many times higher, the influence on the total loss amount remains insignificant. After normalisation and filtering using the catastrophe classes, what remains are residual trends and fluctuations. Attributing these then shifts the focus to changes in vulnerability (for example improved flood protection, stricter building codes or more efficient early warning systems), as well as to changes on the natural hazards side (decreases and increases in the intensity and frequency of natural hazard events). To make a further distinction here, we need to analyse regionalised and hazard-specific statistics. The method presented here is a suitable basis for this kind of further analysis.