


Monte Carlo analysis has become nearly ubiquitous since its introduction, now more than 65 years ago. It is an important tool in many assessments of the reliability and robustness of systems, structures or solutions. As the deterministic core simulation can be lengthy, the computational cost of Monte Carlo can be a limiting factor. To reduce that computational expense as far as possible, this paper investigates the sampling efficiency and sampling convergence of Monte Carlo.

The first section shows that non-collapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, greatly enhance the sampling efficiency and render a desired level of accuracy of the outcomes attainable with far fewer runs. The second section demonstrates that standard sampling statistics are inapplicable to Latin hypercube strategies. A sample-splitting approach is therefore put forward which, in combination with replicated Latin hypercube sampling, allows the accuracy of Monte Carlo outcomes to be assessed. That assessment in turn permits halting the Monte Carlo simulation once the desired level of accuracy is reached. Both measures are fairly simple extensions of the current state of the art in Monte Carlo based uncertainty analysis, yet they substantially advance its applicability.

The importance of identifying, characterising and displaying the uncertainties in the results of analyses of complex systems is increasingly recognised: many regulatory standards and guidelines explicitly demand an uncertainty appraisal as part of a performance assessment of structures, systems and solutions. This appraisal of uncertainties is naturally connected to the concepts of ‘reliability’ and ‘robustness’. Reliability can be defined as the probability that a solution functions without failure during a given interval of time, while robustness can be described as the persistence of the characteristic behaviour of a solution under uncertain conditions. Reliability, in essence, focuses on the probability of failure, while robustness more generally targets the probability of a certain performance level. Both concepts fundamentally require the assessment of probabilities, calling for probabilistic rather than deterministic methodologies. Or, quoting Oberkampf and co-authors, ‘realistic modelling and simulation of complex systems must include the non-deterministic features of the system and the environment’. Sampling-based uncertainty analysis, via Monte Carlo approaches, plays a central role in this characterisation and quantification of uncertainty.

In reliability, three methodology levels are commonly distinguished. Level I techniques only evaluate whether the reliability is sufficient, without quantifying the actual failure probability.
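
To make the link between reliability and probability concrete: a crude Monte Carlo estimate of a failure probability simply counts the fraction of sampled inputs for which a limit-state function signals failure. The sketch below is a minimal illustration under assumed inputs; the limit-state function `g`, the standard-normal input model and the sample size are hypothetical choices, not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical limit-state function: failure whenever g(x) <= 0
    # (a toy 'capacity minus demand' model in two variables).
    return 3.0 - x[:, 0] - x[:, 1] ** 2

n = 100_000
x = rng.normal(size=(n, 2))           # assumed standard-normal inputs

fails = g(x) <= 0.0
p_f = fails.mean()                    # crude Monte Carlo failure probability
se = np.sqrt(p_f * (1.0 - p_f) / n)  # binomial standard error (independent sampling)
print(f"P_f ~ {p_f:.4f} +/- {1.96 * se:.4f} (95% confidence)")
```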
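
The non-collapsing property of a Latin hypercube design means that every one-dimensional projection of the sample covers all strata exactly once, while the maximin criterion additionally spreads the points through the design space. As a rough, hedged sketch of both ideas, the following code builds a plain Latin hypercube in NumPy and then performs a naive random-restart maximin selection; the optimised maximin and uniform designs used in the paper are constructed with more sophisticated optimisers.

```python
import numpy as np
from scipy.spatial.distance import pdist

def latin_hypercube(n, d, rng):
    """Plain Latin hypercube on [0, 1)^d: each dimension is split into n
    equal strata and every stratum receives exactly one point, which is
    what makes the design non-collapsing in its 1-D projections."""
    # Random permutation of strata per dimension, jittered within each stratum.
    strata = np.argsort(rng.random((n, d)), axis=0)
    return (strata + rng.random((n, d))) / n

def maximin_lhs(n, d, rng, restarts=200):
    """Naive maximin selection: among random Latin hypercubes, keep the one
    whose smallest pairwise point distance is largest (a simple stand-in
    for a proper maximin optimiser)."""
    best, best_score = None, -np.inf
    for _ in range(restarts):
        candidate = latin_hypercube(n, d, rng)
        score = pdist(candidate).min()
        if score > best_score:
            best, best_score = candidate, score
    return best

design = maximin_lhs(n=50, d=2, rng=np.random.default_rng(1))
```

The non-collapsing property is what pays off when some inputs turn out to be unimportant: the projection of the design onto the remaining dimensions still contains no duplicated points, so no model runs are wasted.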
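
The sample-splitting idea mentioned in the outline can be read as follows: rather than drawing one large Latin hypercube, several independent Latin hypercube replicates are run, the quantity of interest is estimated once per replicate, and the scatter between the replicate estimates yields a confidence interval that standard independent-sampling statistics cannot provide for a single stratified sample. Once that interval is tight enough, the simulation is halted. The sketch below is one plausible reading of that scheme under assumed inputs, not the paper's exact procedure; `model`, the tolerance and the replicate sizes are hypothetical.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(2)

def lhs(n, d, rng):
    # Plain Latin hypercube on [0, 1)^d (see the previous sketch).
    return (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n

def model(u):
    # Hypothetical quantity of interest: failure fraction of the toy limit
    # state, with the uniform design points mapped to standard-normal inputs.
    x = norm.ppf(u)
    return np.mean(3.0 - x[:, 0] - x[:, 1] ** 2 <= 0.0)

def run_until_converged(n_per_rep=500, tol=0.005, min_reps=5, max_reps=200):
    """Add independent Latin hypercube replicates until the 95% confidence
    half-width of the replicate mean falls below `tol` (the halting rule)."""
    estimates = []
    while len(estimates) < max_reps:
        estimates.append(model(lhs(n_per_rep, 2, rng)))
        if len(estimates) >= min_reps:
            e = np.asarray(estimates)
            half_width = t.ppf(0.975, e.size - 1) * e.std(ddof=1) / np.sqrt(e.size)
            if half_width < tol:
                break
    return e.mean(), half_width, e.size

mean, half_width, reps = run_until_converged()
print(f"estimate {mean:.4f} +/- {half_width:.4f} after {reps} replicates")
```

The design point worth noting is that each replicate is itself a complete Latin hypercube, so the replicate estimates are independent and identically distributed and admit ordinary t-based confidence intervals, whereas the points within a single Latin hypercube are not independent.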
