Thursday, August 20, 2009

Blind vs Open fire modelling

I have always wanted to start a debate on this topic, and I now think the blog is a better way to do it.

Three years after The Dalmarnock Fire Tests, the 'a priori' vs. 'a posteriori' debate is still not very visible in the fire modelling community. The debate seems to take place mostly in personal communications and during the peer review of papers. Unfortunately, not much is happening publicly or within reach of the fire community as a whole.

Figure 1: Dalmarnock Fire Test One as seen from the outside, 18.5 min into the fire (from [Abecassis-Empis et al., Experimental Thermal and Fluid Science, 2008]).


The problem, in summary, is the following. When comparing modelling results to experimental measurements, there are two general approaches that can be followed: a priori (aka blind) and a posteriori (aka open). In a priori simulations, the modeller knows only a description of the initial scenario. The modeller has no access to the experimental measurements of the event and thus provides a true forecast of the quantities of interest. In a posteriori simulations, before the simulation is run the modeller knows the initial scenario and also how the fire developed (i.e. via the experimental measurements). Most fire model validations in fire engineering have been conducted a posteriori.
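
To make the comparison concrete, below is a minimal sketch, in Python, of how a predicted time series might be scored against experimental measurements. The normalised-error metric, the variable names and all the numbers are illustrative assumptions only, not the metric or data used in the Dalmarnock study.

import numpy as np

# Normalised L2 error between a predicted and a measured time series
# (hypothetical metric, for illustration only).
def normalised_error(predicted, measured):
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.linalg.norm(predicted - measured) / np.linalg.norm(measured)

# Hypothetical heat release rate curves (kW) sampled at the same instants.
measured_hrr     = [0.0, 150.0, 400.0, 550.0, 300.0]
blind_prediction = [0.0, 100.0, 250.0, 700.0, 500.0]   # a priori
open_prediction  = [0.0, 140.0, 380.0, 560.0, 320.0]   # a posteriori

print(normalised_error(blind_prediction, measured_hrr))  # larger error expected
print(normalised_error(open_prediction, measured_hrr))   # smaller error expected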

Only the comparison of a priori and a posteriori simulations of the same event makes it possible to investigate the effect that may be introduced by prior knowledge of how the event developed. The importance of this effect in fire safety engineering is an advanced research topic currently under study by different research groups.

The 2006 Dalmarnock Fire Tests, conducted in a high-rise building, were used to look into the problem. An international study of fire modelling was conducted prior to Dalmarnock Fire Test One. The philosophy behind the tests was to provide measurements in a realistic fire scenario with very high instrumentation density (more than 450 sensors were installed in a 3.50 m by 4.75 m by 2.45 m compartment). Each of the seven participating teams independently simulated the test scenario a priori using a common detailed description. Comparison of the modelling results shows a large scatter and considerable disparity among the predictions and between the predictions and the experimental measurements. These results tend to shock, please and anger the audience in equal parts. See Figure 2 below.

Figure 2: Evolution of the global heat release rate within the compartment. Comparison of predictions and experimental measurements (from [Rein et al., Fire Safety Journal, 2009]).


An exception to the relative silence of the fire community is the pair of magazine articles by Dr Alan Beard of Heriot-Watt University. These can be accessed here:

First, note that I disagree with blanket statements like "a predicted result from a model cannot be assumed to be accurate; ie to reflect the real world". Our work also shows that fire simulations provide predictions of fire features that may be good enough to be applied to engineering problems if a robust and conservative methodology is defined. A prerequisite for this methodology is that it can use predictions with crude levels of accuracy and that it applies appropriate safety factors.

But Dr Beard has an important point in that 'real world' fire engineering applications most frequently simulate events whose real behaviour has not been (and never will be) measured. These simulations are a priori simulations, not a posteriori. However, most fire model validations in fire engineering have been conducted a posteriori. I certainly agree with Dr Beard on this one; we need more a priori comparisons of fire modelling in order to address full model validation. What is the effect of prior knowledge of the fire development? Would the validations reach the same conclusions if conducted a priori? The problem is not unique to fire engineering, and any discipline dealing with complex simulation tools should be facing this question. I do not know how other disciplines cope with it.

The differences between a priori and a posteriori modelling become evident when comparing the round-robin results with the work conducted after the Dalmarnock data were publicly disseminated. Subsequent studies (Jahn et al. 2007, Jahn et al. 2008 and Lazaro et al. 2008) show that it is possible to conduct a posteriori fire simulations that reproduce the general fire behaviour to a satisfactory level. This was achieved thanks to the availability of experimental data on the real behaviour for reference, which allowed iterating until an adequate input file was found.
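
For readers unfamiliar with how such an a posteriori iteration might look in practice, here is a minimal sketch. The fire_model() stand-in, the relaxation update and all the numbers are hypothetical placeholders, not the actual models or procedures used in the studies cited above.

# Stand-in for a full fire simulation: returns a 'predicted' peak heat
# release rate (kW) for a given input value. Hypothetical placeholder.
def fire_model(peak_hrr_input):
    return 0.8 * peak_hrr_input  # pretend the model under-predicts by 20%

measured_peak_hrr = 500.0   # kW, hypothetical measurement used as reference
guess = 400.0               # initial input, kW

for iteration in range(20):
    predicted = fire_model(guess)
    error = predicted - measured_peak_hrr
    if abs(error) < 5.0:    # accept once within 5 kW of the measurement
        break
    guess -= 0.5 * error    # simple relaxation update of the input

print(f"calibrated input: {guess:.1f} kW after {iteration + 1} iterations")

The point is simply that, with the measurements at hand, the input can be tuned until the output matches them, something that is impossible in a blind simulation.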

I would like to finish with the same final words I use when presenting the results in conferences and seminars. We, the authors of the Dalmarnock round-robin, are professionals of, and supporters of, fire modelling. We want fire modelling to improve and be developed further. Our daily work goes in that direction.


I am interested in hearing your comments.
Guillermo Rein




NOTE #1: All the relevant information, the book and the papers about The Dalmarnock Fire Tests are accessible in open access here.

NOTE #2: There are two points about Dalmarnock that need to be emphasised since they are often misunderstood. These are:
  • The aim of our a priori work was to forecast the test results as accurately as possible, and not to conduct an engineering design with adequate conservative assumptions or safety factors.
  • Experimental variability was one of our greatest concerns, and that is why the scenario was designed for maximum test reproducibility. Dalmarnock Fire Test One was benchmarked against a second test to establish the potential experimental variability. The results show that the scatter of the a priori simulations is much larger than the experimental error and the experimental variability combined (a small illustrative sketch of this comparison follows the list).
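
As a rough illustration of the comparison in the second bullet, the sketch below combines a hypothetical measurement error and a hypothetical test-to-test variability (in quadrature, an assumption made here only for illustration) and sets them against the spread of a set of invented blind predictions; none of these numbers come from the Dalmarnock data.

import math
import statistics

# Invented peak heat release rate predictions (kW) from several blind teams.
predicted_peak_hrr = [300.0, 450.0, 700.0, 900.0, 1200.0, 520.0, 820.0]
prediction_scatter = statistics.stdev(predicted_peak_hrr)

experimental_error = 50.0        # kW, hypothetical instrument uncertainty
experimental_variability = 80.0  # kW, hypothetical test-to-test difference
combined_experimental = math.hypot(experimental_error, experimental_variability)

print(f"scatter of blind predictions:      {prediction_scatter:.0f} kW")
print(f"combined experimental uncertainty: {combined_experimental:.0f} kW")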

NOTE #3: No matter how useful and interesting the results from blind simulations are, only three blind round-robins on fire modelling can be found in the historical records of the discipline. The other two are the 1999 CIB and the 2008 French PROMESIS project. All three round-robins overwhelmingly agree in their results, but the Dalmarnock one was the first to be publicly communicated and the one providing, by far, the largest instrumentation density.

NOTE #4: I initiated a related discussion on this topic in April 2008 in the FDS forum. See here.

8 comments:

  1. As one of the participants in the study and as a frequent user of fire/explosion simulation tools, I believe this is a great subject to address, and kudos to you, Guillermo, for bringing it up in a public way for all to benefit.

    Random Thoughts on the subject:
    I believe that, prior to any type of modeling, what must be clearly defined is the purpose of running the model, as this makes a huge difference in the approach, the accuracy required, the tool utilized, etc. Often this is overlooked and someone runs a model because they believe it is the correct thing to do or because someone has asked them to.

    There is no doubt that modeling can provide far more information spatially and temporally about an event than other tools, and often when compared to empirical or scaled predictions they do equally well or better given the same information.

    As someone who uses models frequently after an event for reconstruction purposes, I find that often the approach taken is to use the model for comparative purposes or a "quantitative qualitative" assessment (i.e. given all the same parameters except one, how do the model runs compare to each other). In this way you are not relying on a specific value and stating that "this is the answer", but rather using the model to support other evidence or analysis and to provide a range of reasonable answers given an appropriate sensitivity analysis.

    In the performance based design world models have become significantly more important and have been shown to be fairly good especially when looking at a design fire, smoke movement, etc. When getting into more detailed issues such as flame spread, structural integrity, etc. a full engineering analysis with safety factors should be done that does not rely solely on the modeling itself. The model should be a tool that is chosen because it best suits a particular need that has been identified in a broader scope of work not as a magic bullet that will provide all of the answers.

    The real world is a tough thing to model and fire invariably is one of the more difficult phenomena to model. Even "identical" experiments can generate varying results, yet a model will produce the same result repeatedly given the same inputs. Thus we must recognize the inherent limitations in even the best models.

    Whether modeling is done blind or open, the simple fact, I believe, is that the approach taken and the manner in which the results are used is the core issue, as with any engineering problem and engineered solution.

    Unless we are striving towards a black box approach out of which always comes the "right answer" then there is a use for both types of studies.

    The larger concern is that modeling will be used in an inappropriate way as the justification for a design/decision, and an incident will occur that casts a shadow over fire modeling in general, so that the technical progress that has been made, as well as its acceptance, will be lost. Here again is why it is important to identify upfront what the model will be used for, whether it is appropriate in isolation, and what needs to be done in conjunction with it.

    I once heard someone far more knowledgeable than me state "it looks so right it must be right" in reference to some model results; this attitude/mind-set, I believe, is what can get us into trouble. We must always remember that we are engineers and that inappropriate reliance on a tool, whether it be a model or a screwdriver, is bound to get us in trouble.

  2. Dear Guillermo, thank you for opening this public debate.

    I also strongly disagree with Mr Beard's sentence "a predicted result from a model cannot be assumed to be accurate". On the contrary, giving accurate results is the ultimate purpose of any model!

    We should be careful about the confusion between assumptions (input data) and model predictions (output data). Each model is just a tool which has to be held in good hands, as Mr Ryder said. I mean the model should be used by a really skilled user. It is obvious that running an advanced model is no substitute for thinking and for having a strong knowledge of fire safety engineering (FSE). Otherwise, it is like driving Lewis Hamilton's Formula One car to go shopping and finally saying "it is not a good car!". There are many more bad users than bad models.

    I think there is a fundamental difference between forecasting test results a priori as accurately as possible and conducting an engineering design. These are two completely different worlds. It makes no sense to draw conclusions concerning FSE from the Dalmarnock Round Robin study, because this study was not aimed at engineering design at all. In fact, the robustness of the model is the key factor for engineering design purposes; accuracy is of secondary interest.

    On the other hand, we have to humbly admit that we are not able to predict the evolution of a real fire involving real furniture with a high level of accuracy. This is an extremely difficult task which is going to give us intense research work for a long time! So, to me it is not a shame to get such scatter between predictions when performing an a priori comparison. The question is then what we do with it: can we explain the differences observed?

    There is no doubt that the use of models (from the simplest to the most advanced) has largely contributed to improving our knowledge of fire dynamics. Nowadays, modeling is a major link in the scientific approach. So, please do not shoot at the models.

    It is very important to make people not specialized in modeling understand the use and limitations of any model. Scattered results like Dalmarnock's scare only those who believe that the use of models is like a perfect crystal-ball prediction. It is also the role of FSE experts to educate the general public on this point.

    Sylvain DESANGHERE

  3. I am posting this on behalf of Scot Deal.

    ----

    I tried adding to the blog.
    Server kept flogging my attempts.

    Modelling the structure is usually easy, as far as our worry about introducing a priori uncertainties goes. The design fire is the rub. I have seen more than I care to bear of modeling that charges the customer scores of thousands of (name your denomination), documenting hundreds of thousands of (name your denomination) in fire protection equipment, without due diligence in identifying 'what has to happen to fail the fire protection systems?' Part of engineering, certainly fire engineering, is investigating failure. I don't know what other disciplines do, but structural engineers reference ultimate strength. Too few of the analyses I see have shown any sensitivity as to what the fire size needs to be to reach a building tenability failure point. Publishing a serious-failure endpoint provides one basis for comparison *across building types*; a valid benchmark for performance-based design. The Dutch government has published acceptance probabilities on dike failures ten years on. Probabilities of failure fires are often lower than these Dutch tolerances, yet this Dutch hazard often exposes one to two orders of magnitude more souls than the counterpart fire modeling hazard. It is time the fire profession (including AHJs) took the blinds off and openly discussed sensitivity to fire failure. Besides, identifying a serious fire failure is a relatively easy thing to do with a model... a priori.


    Respectfully appreciative of the open debate,
    Scot Deal (EurekaIgnem@gmail.com)

  4. Consider these three issues from the developer or regulator point of view:

    1. The Dalmarnock exercise was focused on "user effects" -- that is, how different modelers can choose a wide variety of input parameters and then get a wide range of results. It tells us little about whether the math and physics of the models are right. We made the decision in the NRC V&V study (NUREG 1824) to eliminate as much as possible the variations in input parameters from model to model, to better assess the accuracy of the models themselves. To me, that is what V&V is all about.

    2. Blind or "a priori" exercises rarely provide the modelers with enough information about the test. I've never seen a fire experiment conducted that was exactly as specified in the test plan. Practical considerations the day of the experiment usually nullify the usefulness of simulations run prior to the test. This was true of all the attempts during the International Collaborative Fire Modeling Project (ICFMP) whose experiments were included in NUREG 1824.

    3. We re-run the NUREG 1824 calcs, plus about 200 other fire tests, each time we release a new "minor" revision of FDS. Those results are the only ones that matter to anyone using that particular version. A "blind" exercise conducted in 2003, like ICFMP BE #3, is of little value for a regulatory authority who is being asked to evaluate a fire modeling analysis with a newer version of the model.

    I am not opposed to blind modeling exercises. If the opportunity arises, great, take advantage. But I would hesitate to place a greater value on so-called blind studies over "open" because the vast majority of our validation database is open. We cannot just throw away thirty years of experimental measurements because they no longer provide us with "blind" results.

  5. I am posting this on behalf of
    Mark Salley, NRC.

    ----

    Please let me jump in and try to explain our logic/thought process. If you charge off doing "blind simulations" you make the unvalidated assumption that: 1) the fire models are perfect tools, and 2) your variance in predictions will be based solely on the user's skill/input values. Nothing could be further from the truth. The models are not perfect; all the reasonable developers recognize this fact. This was the whole key to the NRC and EPRI creating NUREG-1824 ( http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1824/ ), where we asked the questions "How good are the models?" and "Are different models (hand calculations/zone/CFD) better at some things than others?" (if the only tool you own is a hammer...). NUREG-1824 now provides us with a baseline within the applicable bounding limits. We have a better handle on the "uncertainty" of a model prediction, whether conservative or non-conservative. To charge off doing blind predictions with models that have not been through V&V is like trying to solve one equation with two unknowns: the model and the modeler. Let me ask you this: if you run a blind calculation and get unacceptable answers, is it the fault of the model, or of the modeler?
    Likewise, if you get the exact answer, I guess you assume the model and the modeler are perfect? Could one not compensate for the other? We have a much better grasp on this today. As for the quality (or lack thereof) of the modeler, we are working on the users' guide for NPP applications that Jose has peer-reviewed for us. Education will always play a key role in this. I believe a much better approach would be to take the V&V'ed fire models, understand their uncertainties and limitations, then take the completed users' guide, educate the users, and only then begin to run the blind calculations. By doing this, in this manner, you would be better able to see where problems or additional work exist (e.g. with the model, the modeler, the input data, etc.).

    Mark Henry Salley P.E.
    Chief, Fire Research Branch
    U.S. Nuclear Regulatory Commission
    Office of Nuclear Regulatory Research
    Division of Risk Analysis
    Washington, D.C.

  6. Dear colleagues, I agree that Guillermo raises an extremely important issue. Improved models and modelers are exactly what we, as engineers, should hope and strive for.

    Clearly, we are, in my opinion, nowhere near complete trust in blind simulations, and we need not be ashamed of this. The same is true in other communities.

    I will write this comment, biased towards CFD, as I am most familiar with that area, but I assume many aspects also hold for structural engineering.

    I would like to use the international workshop series on turbulent non-premixed flames (TNF, web site: http://public.ca.sandia.gov/TNF/9thWorkshop/TNF9.html) as an enlightening example from the combustion community. Over the years, simulation results for a limited set of test cases have improved a lot, mainly due to better and better descriptions of the test cases and more stringent choices of computational meshes, model options and constants.

    Yet, with every new test case, scatter appears again among the results of different groups all over the world. In particular, now with the relatively recent breakthrough of LES, where some aspects appear more sensitive to numerics than in RANS turbulence modelling, there are new issues. Indeed, quality assessment of LES is, after decades of use, now a real focus in the TNF community.

    To make my point clear:
    1. I do not think it is a shame that we are not at the stage of trustworthy blind fire simulations yet. It is not even possible for well-controlled flames with a well-known fuel and heat release rate, so why would it already be possible for a far more complex fire?
    2. I do not think this should be used as an argument against blind simulations. The point, to me, is that blind simulations should always be accompanied by a sensitivity study of the crucial parameters. As cumbersome and time-consuming as this might be, it is necessary. In that sense, many simulations on an affordable mesh can be more valuable than one 'heroic' simulation on a very fine mesh (although, of course, the mesh must not be too coarse).
    3. I think, from a research point of view, a major step forward could be to initiate a workshop series focusing on fires that are seemingly simple to simulate (e.g. 'simple' pool fires), to learn from the deviations between several groups, and then to move forward to more complex configurations, step by step.

    Finally, being where we are, I think it is utterly important to educate as many people as possible. This might, in the short term, even be more efficient and fruitful than progress in research (which is, beyond any doubt to me, necessary for the long-term development of FSE). Increasing the number of well-educated practitioners will sharpen our sense of how sensitive model results can be to certain options/choices.

    I hope I have not offended anyone with my comment. All the best,
    Bart.

  7. Dear colleagues,

    one more thought came across my mind that I would like to throw into the discussion.

    Suppose we had perfect numerics, perfect models and perfect model users; then, for specified initial and boundary conditions, there would be a solution. There could still be some randomness in the solution due to numerical round-off, but let us assume this would be small. The solution from the model simulations could then be considered the 'reference solution'.

    Then: if we had the perfect experimentalist, the best he/she could do is perform many, many experiments; after statistical averaging of the results, the 'reference solution' should be obtained.

    Now we all know that a developing fire is prone to large differences due to apparently minor details in the real world. So the question then becomes how large the experimental scatter is.

    My point is: if a single experiment (or a few) is performed, how do we know how far the measurements deviate from the statistically expected 'average' solution? If we are unfortunate, we are far off this average solution, and then it is hard to expect that model simulations will reproduce the 'odd' experimental results.

    Just to avoid any misunderstandings:
    1. I do not intend to say that we are close to using 'model simulation results' as 'reference solutions' in case of fire.
    2. I do not intend to say that 'blind simulations' are no good, on the contrary.
    3. I do think (see my previous comment on this blog) that it is crucial to have an idea on the sensitivity of model simulation results to (small?) variations in model options, but also to variations in initial and boundary conditions. This is cumbersome, but necessary.
    4. I also think that we should try to get funding and time for repeated, seemingly identical experiments, in order to grasp some feeling on how much scatter we can really expect. This is a true challenge for the experimentalists among us, I think.

    That's it.

    Best regards,

    Bart.

  8. It is clear, also from the comments, that this need not be a debate about which approach is better, a priori or a posteriori simulation. Not only do the two serve different purposes, but it is often the analysis of the differences in results between the two approaches, applied to the same case, that makes it possible to outline better methodologies for applying a given model to a class of similar cases and so achieve more confidence. In fact, there is no need to pretend that an unreliable a priori simulation harms the model or the user, just as good results from an a posteriori simulation cannot demonstrate the general reliability of the model or the expertise of the user. Because of the outstanding challenge of fire simulation tasks, including the complexity of fire models and the general lack of experimental data, we think the main focus has to be on obtaining validated tools coupled with correct methodologies for applying those tools. These methodologies should be strengthened by sensitivity analysis, of course, but also by any previous simulation experience, whether open or blind, reported with a sufficient amount of information to understand the code behaviour.

    Luca Iannantuoni
    Giovanni Manzini

    Dipartimento di Energia
    Politecnico di Milano
