- Analytic perspective
- Open Access
Trends in parameterization, economics and host behaviour in influenza pandemic modelling: a review and reporting protocol
© Carrasco et al.; licensee BioMed Central Ltd. 2013
Received: 1 November 2012
Accepted: 26 April 2013
Published: 7 May 2013
The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals.
We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic.
We find that many influenza pandemic models rely on parameters from previous modelling studies, that models are rarely validated using observed data and that they are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically.
We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results, and transparency in their assessment and comparison.
Influenza pandemics are large-scale phenomena that may result in high morbidity, high mortality and large economic impacts worldwide. The influenza pandemic of 1918–9 is believed to have caused excess mortality of 20–40 million people. Influenza pandemics have occurred during the 20th century and the beginning of the 21st century at intervals of between 10 and 40 years, with the latest pandemics occurring in 1918–9, 1957–8, 1968–9 and 2009–10. Pharmaceutical and public health measures can help mitigate the impacts of pandemics [4, 5] and were implemented by many governments during the last pandemic in 2009–10 [6, 7].
Because empirical or field studies of population-level strategies to control or mitigate influenza pandemics are generally either infeasible (e.g. controlling movement of people within a city) or unethical (e.g. withholding vaccination of subpopulations to assess the effect on transmission), modelling is one of the few suitable methodologies to enable multiple hypothetical pandemic preparedness and mitigation scenarios to be assessed. Epidemic models are especially useful to address epidemiological, economic, and individuals’ behavioural questions [8–10]. The usefulness of epidemic models in directing mitigation efforts has been supported by empirical findings that have echoed previous modelling predictions. For instance, models predicted that reduced international air travel would be unlikely to stop an influenza pandemic, a finding later verified empirically during the 2009 H1N1 pandemic [12, 13]; other models predicted the potential of antiviral prophylaxis and contact tracing to control small outbreaks, a prediction also verified in real-life outbreaks in semi-closed army camps.
Definitions of model types
Compartmental epidemic models
Models that divide the population according to states relevant to the disease studied and represent the rates at which individuals change state. These models are widely used in epidemic modelling and can be represented by systems of differential or difference equations or by stochastic rates. For instance, an SIR compartmental model divides the population according to whether individuals are susceptible (S), infectious (I) or recovered (R). Basic compartmental models assume perfect mixing between homogeneous individuals but can be expanded to account, for instance, for different transmission rates between age groups (age-structured compartmental models) or for other heterogeneities.
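As an illustrative sketch of such a model (with hypothetical parameter values, not taken from any study reviewed here), the deterministic SIR equations can be integrated numerically:

```python
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Right-hand side of the SIR equations, with S, I, R as population fractions."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

# Hypothetical values: R0 = beta/gamma = 2, mean infectious period 1/gamma = 4 days
beta, gamma = 0.5, 0.25
sol = solve_ivp(sir, [0, 300], [0.999, 0.001, 0.0], args=(beta, gamma))

final_size = sol.y[2, -1]  # fraction of the population ever infected
print(f"Final epidemic size: {final_size:.2f}")
```

For these values the final size approaches the solution of the standard final-size relation for R0 = 2 (roughly 80% of the population infected).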
Network or random graph models
Network (graph) models are models that characterize the relationships between individuals. Infection occurs only between individuals (nodes) that have a connection between them (arcs or edges).
Agent-based models
These models simulate the actions and interactions of autonomous agents with the aim of observing patterns of aggregation resulting from such interaction. Their relevance in epidemic modelling stems from their capacity to represent interactions and decisions at the individual level.
Metapopulation models
Metapopulation models originate from ecology and are used to represent distinct populations distributed in separate, discrete habitat patches. The populations can interact through migration. These models are useful in epidemic modelling because the patches can represent cities or other levels of spatial aggregation, thus allowing for the consideration of spatial structure. Although in their original application in ecology they did not consider the dynamics within patches, they are amenable to incorporating the epidemic dynamics within each patch, e.g. using compartmental models.
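A minimal sketch of this idea (hypothetical parameters), with two identical SIR patches coupled by a small fraction of cross-patch contacts:

```python
import numpy as np

def two_patch_sir(beta, gamma, m, days=250, dt=0.05):
    """Two coupled SIR patches; a fraction m of each patch's contacts occur in the other patch."""
    s = np.array([0.999, 1.0])
    i = np.array([0.001, 0.0])                 # the epidemic is seeded only in patch 1
    mix = np.array([[1 - m, m], [m, 1 - m]])   # cross-patch contact matrix
    for _ in range(int(days / dt)):
        force = beta * (mix @ i)               # force of infection felt in each patch
        new_inf = force * s * dt
        s = s - new_inf
        i = i + new_inf - gamma * i * dt
    return 1.0 - s                              # attack rate in each patch

attack = two_patch_sir(beta=0.5, gamma=0.25, m=0.05)
print(f"Attack rates: patch 1 = {attack[0]:.2f}, patch 2 = {attack[1]:.2f}")
```

Even weak coupling lets the epidemic invade the initially uninfected patch, which is the qualitative behaviour metapopulation structure is meant to capture.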
Game theoretic models
Models that study the decisions of an individual when the outcome of such decisions depends on the decisions of other individuals. These models study when cooperation or defection would arise from the interaction between individuals given certain circumstances. They can be useful in epidemic modelling to explore the incentives that humans face regarding vaccination, wearing face masks or adopting other preventative behaviour.
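As a hedged illustration (hypothetical cost ratio and R0), the equilibrium vaccination coverage at which an individual's risk of infection just equals the relative cost of vaccinating can be computed numerically; in this toy example it falls below the herd-immunity threshold, reflecting the free-riding incentive:

```python
import numpy as np

def infection_risk(p, r0=2.0):
    """Per-(unvaccinated)-person infection probability at vaccine coverage p."""
    if r0 * (1 - p) <= 1:
        return 0.0                              # below threshold: no epidemic
    z = 0.5
    for _ in range(2000):                       # fixed point of z = (1-p)(1 - exp(-r0 z))
        z = (1 - p) * (1 - np.exp(-r0 * z))
    return 1 - np.exp(-r0 * z)                  # risk for a remaining susceptible

rel_cost = 0.2                                  # hypothetical vaccination/illness cost ratio
# Nash equilibrium: coverage at which infection risk equals the relative cost
lo, hi = 0.0, 1 - 1 / 2.0
for _ in range(60):                             # bisection on coverage
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if infection_risk(mid) > rel_cost else (lo, mid)

print(f"Equilibrium coverage: {lo:.2f} (herd-immunity threshold 0.50)")
```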
Optimal control and stochastic programming models
These are dynamic optimization techniques that aim to find the optimal way to control a system over time. In epidemic modelling, they are useful to investigate, for instance, the optimal deployment of vaccines or antivirals over time to minimize the disease burden or the overall costs generated by the epidemic. These models differ from the other model types, which assume a level of control that is independent of the state of the system. By contrast, these models allow control to vary depending on the final outcome or the state of the system.
Partial or general computable equilibrium models
Partial equilibrium models are economic models based on the equilibrium of supply and demand in a market, assuming that the prices and quantities traded in other markets do not vary. Computable general equilibrium (CGE) models, by contrast, consider the interactions between the markets composing an economy and study the price equilibrium in all the markets considered.
Pandemic preparedness, control and mitigation modelling has heretofore been reviewed [21–25]. These reviews show a bewildering array of models that have been introduced, especially since the 2009 pandemic, with different purposes, outcomes and structures. Despite the usefulness of modelling, few public health practitioners or decision makers undergo explicit training in modelling techniques. When combined with the rapid growth in modelling capabilities driven by increasing computing power, and the multitude of different disciplines – e.g. economics, psychology, genetics – that contribute to modelling epidemics, this makes it daunting to keep abreast of all that modelling is capable of. To facilitate model understanding, this review focuses on three pandemic modelling aspects that have recently undergone substantial innovation: parameterization and validation, economic aspects and the behaviour of hosts. Given the diversity of new techniques in these aspects of modelling, a review of common traits would be very helpful for non-expert users to determine which modelling techniques are most useful to address the decisions they face. In addition, a protocol to guide the reporting of these aspects together with model construction would help modellers and policy makers compare and evaluate models. To this end, we review and classify models for influenza pandemic preparedness from January 2000 to September 2011, and use the resulting analysis to develop a simple guiding protocol for reporting modelling decisions.
Search strategy and selection criteria
We searched Google Scholar, PubMed and ISI Web of Knowledge to identify articles focusing on influenza pandemic modelling to inform management strategies (see Additional file 1: Figure S1 in the electronic supplementary material (ESM) for a PRISMA flow diagram ). Our search criterion was: contains pandemic AND model* AND influenza AND policy OR policies. Our eligibility criteria were articles that: (i) were published in peer reviewed journals from January 2000 to September 2011; (ii) aimed to advise policy makers and made policy recommendations about pandemic influenza preparedness, mitigation or control; and (iii) employed mechanistic models to derive those insights. We further excluded cost-effectiveness and decision tree studies that did not incorporate disease transmission dynamics. The search in PubMed retrieved 72 articles, ISI Web of Knowledge 128, and Google Scholar 19,200 results. After an additional query refinement in Google Scholar (adding to the previous query the terms: AND preparedness OR strateg* AND simulation OR compartment*), screening of articles and further full-text assessment for their eligibility (Additional file 1: Figure S1 in ESM), 91 articles were selected for the analysis.
Classification and evaluation of modelling traits
We classify models into several major groups: compartmental epidemic models, network models, agent-based models, metapopulation models, game theoretic models, optimal control models and partial or general computable equilibrium models (definitions of the models can be found in Table 1). In some instances models can conform to several categories: e.g. compartmental models combined with metapopulation models.
Processes for model construction and validation
Parameterization
The process of selecting the values or distributions of the model parameters based on empirical data, usually with a random component. Rigorous parameterization is fundamental since the values of the parameters largely determine the behaviour and predictions of the model.
Sensitivity and uncertainty analysis
The study of the influence of the parameter values of the models on the model outcomes. Sensitivity analysis can vary one parameter at a time (univariate) or multiple parameters (multivariate). The comparison of the model predictions under the baseline parameter values and the modified values gives an idea of how sensitive the model is to a certain parameter. Sensitivity analysis is useful because it enhances the communication of the model, tests the robustness of the results (allowing the evaluation of our confidence in the predictions), increases our understanding of the system and allows the detection of implementation errors.
Uncertainty analysis evaluates the model response over the plausible range of the parameters. It provides information on which variables generate more uncertainty in the model and can help to direct data collection efforts.
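A univariate sensitivity sweep can be sketched as follows (hypothetical baseline values): each parameter of a simple SIR simulation is varied in turn while the others are held at baseline, and the change in an outcome such as peak prevalence is recorded:

```python
import numpy as np

def peak_prevalence(beta, gamma, days=400, dt=0.05):
    """Peak infectious fraction from a deterministic SIR integrated with Euler steps."""
    s, i, peak = 0.999, 0.001, 0.001
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        peak = max(peak, i)
    return peak

base = dict(beta=0.5, gamma=0.25)        # hypothetical baseline (R0 = 2)
for name in ("beta", "gamma"):
    for factor in (0.8, 1.0, 1.2):       # univariate +/-20% sweeps
        pars = dict(base)
        pars[name] *= factor
        print(f"{name} x{factor}: peak prevalence = {peak_prevalence(**pars):.3f}")
```

Comparing the swept outcomes against the baseline run shows which parameter the peak is most sensitive to over this (hypothetical) range.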
Validation
The process of investigating whether model predictions are likely to be accurate. Two main types of validation can be distinguished: structural and predictive validation. Structural validity requires that the model reproduces the observed system behaviour and is constructed in accordance with the way the real system operates, i.e. is consistent and based on theory. Predictive validation requires that the model accurately predicts data that were not used in its construction. It has also been argued that the credibility of a model might be provided by the credentials of the model-building techniques, which sometimes involve contrary-to-fact principles that increase the reliability of the results.
Least squares
Standard data-fitting procedure that consists of minimizing the sum of the squared differences between the observed data points and the fitted values provided by the model.
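A minimal sketch with synthetic data: during early exponential growth, log case counts are approximately linear in time, so the growth rate can be estimated by ordinary least squares on the log scale:

```python
import numpy as np

# Synthetic daily case counts growing at rate r = 0.2/day, with multiplicative noise
rng = np.random.default_rng(0)
t = np.arange(20)
true_r = 0.2
cases = 10 * np.exp(true_r * t) * rng.lognormal(0.0, 0.1, t.size)

# Least squares on the log scale: fit log(cases) = log(c0) + r * t
r_hat, log_c0 = np.polyfit(t, np.log(cases), 1)
print(f"Fitted growth rate: {r_hat:.3f} (true {true_r})")
```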
Maximum likelihood estimation
Method to estimate the parameters of a model based on data. This method chooses values for which the probability of generating the observed data is highest, given the model.
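A small sketch with synthetic data: assuming exponentially distributed infectious periods with rate gamma, the maximum likelihood estimate is the reciprocal of the sample mean, which can be cross-checked by numerically maximizing the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
periods = rng.exponential(scale=4.0, size=500)  # synthetic infectious periods, mean 4 days

# Closed form: the MLE of the exponential rate is 1 / sample mean
gamma_closed = 1.0 / periods.mean()

# Numerical check: maximize the log-likelihood n*log(g) - g*sum(x)
neg_loglik = lambda g: -(periods.size * np.log(g) - g * periods.sum())
gamma_num = minimize_scalar(neg_loglik, bounds=(1e-3, 10), method="bounded").x

print(f"Closed form: {gamma_closed:.3f}, numerical: {gamma_num:.3f}")
```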
Bayesian inference
Method of statistical inference to estimate the parameters of a model by combining prior belief and the observed evidence. As more evidence is gathered, the prior distribution is updated into the posterior distribution, which represents the uncertainty over the parameter values.
Markov chain Monte Carlo (MCMC)
MCMC algorithms can be used to sample the posterior distribution for Bayesian inference and are useful because they allow sampling from multi-dimensional distributions.
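A toy Metropolis sampler illustrates the idea (hypothetical survey data): the posterior of an attack rate given k infections among n sampled individuals, under a uniform prior, can be sampled and compared against the known analytic posterior mean:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 60  # hypothetical serological survey: 60 of 200 infected

def log_post(p):
    """Log posterior of attack rate p with a uniform prior (binomial likelihood)."""
    if not 0 < p < 1:
        return -np.inf
    return k * np.log(p) + (n - k) * np.log(1 - p)

samples, p = [], 0.5
for _ in range(20000):
    prop = p + rng.normal(0, 0.05)             # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop                                # accept; otherwise keep current p
    samples.append(p)

post_mean = np.mean(samples[5000:])             # discard burn-in
print(f"MCMC mean {post_mean:.3f} vs analytic {(k + 1) / (n + 2):.3f}")
```

With a uniform prior the posterior is Beta(k+1, n-k+1), so the sampler can be checked against its exact mean.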
Particle filtering
Particle filtering is a parameterization technique based on the simulation and sequential weighting of a sample of parameter values according to their consistency with the observed data. Particle filters are normally used to parameterize Bayesian models in which variables that cannot be observed directly are inferred by the model through their connection in a Markov chain.
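A toy sketch of the sequential weighting idea (synthetic data, hypothetical observation model): a sample of candidate epidemic growth rates is weighted by its consistency with each new case count and resampled:

```python
import numpy as np

rng = np.random.default_rng(3)
true_r, sigma_obs = 0.15, 5.0
t = np.arange(15)
obs = 10 * np.exp(true_r * t) + rng.normal(0, sigma_obs, t.size)  # noisy case counts

n_particles = 2000
particles = rng.uniform(0.0, 0.5, n_particles)  # prior sample of growth rates

for ti, yi in zip(t, obs):
    pred = 10 * np.exp(particles * ti)                      # each particle's prediction
    w = np.exp(-0.5 * ((yi - pred) / sigma_obs) ** 2)       # Gaussian observation weights
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)         # resample by weight
    particles = particles[idx] + rng.normal(0, 0.002, n_particles)  # jitter against collapse

r_est = particles.mean()
print(f"Particle filter estimate of r: {r_est:.3f} (true {true_r})")
```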
Calibration
Here we define calibration as an iterative comparison between model predictions and observed data (e.g. attack rates, R0) without the use of standard statistical inference methods. After comparison, the model is simulated for different parameter values and compared with the former predictions to see whether an improvement in their agreement is obtained.
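Calibration in this sense can be sketched as a parameter sweep (hypothetical observed value): candidate R0 values are run through the standard final-size relation and the value whose predicted attack rate best matches the observed one is retained:

```python
import numpy as np

def attack_rate(r0, n_steps=1000):
    """Predicted attack rate from the final-size relation z = 1 - exp(-r0*z)."""
    z = 0.5
    for _ in range(n_steps):
        z = 1.0 - np.exp(-r0 * z)
    return z

observed = 0.33  # hypothetical observed attack rate from a past pandemic
grid = np.arange(1.05, 3.0, 0.01)
errors = [abs(attack_rate(r0) - observed) for r0 in grid]
best_r0 = grid[int(np.argmin(errors))]
print(f"Calibrated R0: {best_r0:.2f}")
```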
To classify models by their construction and validation techniques, we evaluated whether they (i) incorporated an assessment of the sensitivity of their results to model parameters and assumptions, (ii) were parameterized using parameters directly from other models, and (iii) were validated from empirical data. To that end, the articles were categorized according to the type of model used (Table 1), population heterogeneity level considered, parameterization procedure, consideration of economic impacts, inclusion of human behaviour and performance of validation or sensitivity analysis (see the ESM for a full list of the models and their characteristics).
Standard reporting protocol
Characteristics, construction, parameterization and validation aspects protocol (CCPV protocol) for influenza pandemic model reporting
Aim of the model
What questions is the model trying to address? Is the model based on past influenza pandemics?
Is the model aimed at generating predictions for future pandemics used to inform policy making? Are the predictions intended to generate quantitative or qualitative policy insights?
What are the underlying assumptions that support the construction of the model or parts of the model? E.g. the law of mass action, rational choice theory.
Scale, structure and model type.
What are the geographical and temporal scales of the model? What are the state and control variables and the parameters? Is the model solved analytically through mathematical methods or simulated? What type of model is it?
Is time modelled as discrete or continuous?
What variables and processes occur or are updated at each time step?
How is the model initialized? E.g. what proportion of individuals is initially infected?
Is the model informed by data from previous pandemics? If so, what are the main sources of data in the model?
Is the model spatially explicit or implicit? What is the spatial structure of the model?
Are the expected heterogeneities of transmission reflected by this structure?
Is the model stochastic or deterministic? How is stochasticity modelled?
What interventions are modelled (e.g. antivirals, vaccination or isolation)? How do the interventions modify epidemiological or clinical parameters in the model?
Are individuals modelled as discrete or continuous entities?
Are individuals grouped by some characteristic? (e.g. age, risk of infection).
Interactions leading to transmission
How is interaction between individuals modelled? Are interactions heterogeneous among individuals or locations?
Does the model consider the cost of the intervention and/or the economic impact of the disease?
Does the model seek to guide decision making that will optimise net benefit? Are there groups whose infection would lead to higher economic impacts? Was this distinction considered? Are costs per reduction of disease burden provided?
Are changes in the behaviour of individuals as a result of pandemic processes being modelled? What are the assumptions made regarding behaviour? Has the model been run without assumptions about pandemic-related changes to behaviour? How do results differ from the model considering such changes?
Have model results been compared with simplified versions of the model? How did results differ?
To what extent has the increase in complexity in the model hindered its interpretability?
Parameterization and Validation aspects
Sensitivity and uncertainty analysis
Have sensitivity and uncertainty analyses been undertaken? What types of analyses were done, what were the outputs and parameter ranges considered? Were there sensitive or uncertain parameters that were taken directly from previous modelling studies and that might entail a risk of bias to the predictions? Are there alternative data sets to obtain those parameters? Have alternative scenarios for values of those parameters been considered?
Describe which parameters were parameterized from: (i) previous parameters used in other pandemic models in the literature; (ii) data published in the literature, e.g. clinical trials, cohort studies; and (iii) pandemic data, e.g. time series of number of cases, attack rates.
For parameters taken directly from previous pandemic modelling studies, how were these derived? Do they apply to the case being studied? Is there a risk of model overfitting, e.g. by using epidemic case data to fit both transmission and infectious rate parameters?
Has the model undergone standard simulation verification tests? How are results from the model observed to evaluate its functioning? E.g. production of dynamic maps of spread during the simulation.
Has the model been tested for structural and/or predictive validity?
What type of data, independent of model parameterization, was used to test predictive validity? If data were not available for the specific strain of study, were alternative strains or diseases considered? E.g. seasonal instead of pandemic influenza.
Was the model able to reproduce the validation data set? If not, what changes to the structure of the model were considered? Did the updated model obtain an improved prediction?
Was this model developed in parallel with other independent research teams?
Review of influenza pandemic modelling
Most models that were applied to a specific geographic region focused on high-income countries. Studies not focusing on high-income economies were scarce (5/91, 6% applied to upper-middle income countries like Thailand or Mexico, and none applied exclusively to low-income or lower-middle income countries), despite the higher case fatality rates expected in those countries. The majority of studies were not intended to study impact in specific, localised settings such as schools or hospitals and instead represented the national or international level. A few exceptions did, on the other hand, concentrate on the effects of school closures [4, 6, 33–37] and on hospitals or hospital staff [38–40].
In reality, many models utilized multiple parameterization strategies, for instance combining estimates from the literature, censuses and maximum likelihood. For simplicity, models were categorized by the least common and most sophisticated technique used: for instance, a model using literature estimates and Bayesian inference was categorized as using Bayesian methods for parameterization. Among all models, the dominant parameterization strategy (used by 47% of the models) was to adopt parameters from previous studies, especially other modelling studies, perpetuating the use of parameters chosen by other modellers (Figure 1A “parameterization”). 25% of the studies utilized information or parameters derived from epidemiological or laboratory data (e.g. viral shedding duration, cohort studies) or case data (e.g. epidemic curves, attack rates) from other sources to parameterize the model. It was common (60%, Figure 1C) to use some sort of sensitivity analysis, and this was more frequent in models that did not directly adopt parameters from previous models, suggesting that sensitivity analysis was not used as a complement to reusing parameters from previous models. ABMs were more frequently built using parameters chosen by modellers in previous studies (70%, Figure 1C) and constructed from population demographic data, for instance from decennial censuses, rather than using empirical data or parameters obtained from epidemiological or laboratory studies.
Although the most common approach was to parameterise models using parameter values chosen by previous modelling studies, several exceptions used alternative parameterization methods (Figure 1B shows the distribution of parameterization methods and Table 2 defines them), ranging from calibration through simulation [33, 41, 42] and maximum likelihood [12, 36] to least squares [1, 11] and Bayesian computational methods such as Markov chain Monte Carlo (MCMC) (Figure 1A).
Several real-time pandemic modelling articles involved sophisticated methods of parameterization employing on-going observed case data, such as maximum likelihood estimation or sequential particle filtering within a Bayesian framework. Their real-time nature enabled continuous open validation of predictions of pandemic characteristics such as the timing and height of the peak, and indeed Ong et al. report posting real-time predictions on the internet.
There were several non-real-time examples of modelling papers that parameterized compartmental models using disaggregated epidemic data such as: questionnaire or survey results [44, 45]; serological data [36, 46]; epidemic case or mortality time series [1, 47–49]; and observed times of pandemic peaks [11, 12]. Examples of parameterization from historical epidemic data in ABMs included calibration to reproduce attack or serological infection rates from previous pandemics [33, 41, 42, 50]. Parameterization from case data can be used to investigate policy effectiveness. For instance, Cauchemez et al. used MCMC Bayesian computational methods to fit an age-structured, household-based compartmental model to influenza surveillance data, evaluating the effectiveness of school closures for pandemic control in France and showing that prolonged school closures could reduce the attack rate of a pandemic by 13–17%.
Most of the reviewed models reproduced parameter choices from previous studies. This is to be expected as deriving parameters from outbreak data is complex. As a result, some articles specialize in the statistical analysis that leads to parameter derivation and others specialize in the analysis of broad policy questions. There is, however, the risk that this approach may perpetuate faulty parameterisations from previous studies, or apply a valid parameter value to an inappropriate setting. On the other hand, informing too many parameters in the model by fitting to epidemic time series may run the risk of overfitting or non-identifiability. It may be most credible to inform model parameters using a combination of field or laboratory study data (e.g. to fit or even directly inform parameters such as recovery rates) and epidemic case data (e.g. to fit transmission-related parameters), and then compare the fitted parameter values to those obtained from previous studies. One possible explanation why this combination of data sources is not common is data paucity, which renders the use of parameters chosen in other modelling studies one of the few alternatives. One way to increase the pool of available data for model parameterization is to establish international data sharing mechanisms among governments and researchers, especially regarding disease transmission between individuals and surveys of population contact patterns, to facilitate the construction of robust models.
Even if epidemic data are available, the small number of models parameterized from such data might also reflect statistical difficulties brought about by censoring in the data: some processes cannot be observed, and many influenza infections are not virologically confirmed, have symptoms indistinguishable from those of other illnesses, or are asymptomatic. Such censoring combines with non-independence between observations to prevent the use of standard statistical techniques. While such difficulties can be overcome, for instance using maximum likelihood estimation methods, particle filtering or other likelihood-based computational methods (Table 3), these require at least some mastery of modern statistical techniques and may be computationally intensive. For instance, Bayesian methods that use MCMC algorithms or approximate Bayesian computation can be particularly powerful and flexible tools (Table 2). These methods allow the merging of prior knowledge on the epidemic parameters (such as that derived from datasets described in the literature) with observed data from the outbreak in question. In addition, they allow rigorous parameterisation of models of the processes underlying highly censored data. Bayesian computational methods can thus be used as a flexible and powerful way to perform inference on unobserved parameters. Software such as OpenBUGS and JAGS is making the use of MCMC algorithms for model fitting accessible to non-specialists.
Parameterization becomes more difficult for large-scale simulation models like ABMs, not only because ABMs present many more parameters to be fitted but also because it is harder to derive an explicit likelihood function for them, preventing the use of MCMC within Bayesian computational methods or of maximum likelihood estimation. One promising technique that does not require a full, explicit likelihood function, and that is used in statistical ecology and DNA sequencing, is sequential importance sampling. Sequential importance sampling, particle filtering or the sequential Monte Carlo method can be performed using the R package POMP.
Implications for CCPV protocol
Reporting the combination of data used for parameterization would allow model users to evaluate the reliability of the models, reduce the risk of model overfitting and allow assessing the adequacy of the parameter for a specific setting (Table 3 “model parameterization”). Sensitivity and uncertainty analysis are other ways to evaluate the influence of individual parameters and their uncertainty range on model predictions (Table 2). They can be used to direct data collection efforts and should ideally be reported (Table 3 “sensitivity and uncertainty analysis”).
The review demonstrated the rarity of model validation (only 16% of compartmental models and 22% of ABMs were validated, Figure 1C), despite the importance of two types of validation – structural and predictive (Table 2) – in developing model credibility. Structural validity, which concerns the consistency of a model with theory, may be easier to establish for compartmental models as they are (usually) based on epidemic theory for which results have been derived analytically, provided they are not so oversimplified that they fail to capture the salient features of the pandemic. In some instances, modellers may use these analytically soluble models to generate qualitative insights rather than quantitative predictions to inform policy. Structural validity will thus be more relevant for these models than comparisons with observed quantitative data.
Predictive validity, on the other hand, is established by comparing model predictions to independently observed outcomes during a pandemic to help assess whether the model appropriately reflects reality, i.e. is capable of capturing the salient mechanisms governing the dynamics of the pandemic. If the agreement with validation data is poor, structural or parametric changes to the model might be needed until adequate validation can be obtained (Table 3). Compartmental models, by aggregating individuals in homogeneous compartments, are amenable to structural changes, accounting, for instance, for spatial and host structure by adding further compartments (e.g. only 64% of the models reviewed were exclusively compartmental with extensions including a metapopulation approach (15%), dynamic optimization (10%) and game theory (3%) (Figure 1A)).
When making structural changes, modellers have to deal with a fundamental trade-off between realism and interpretability, with additional complexity increasing the opacity of the model at the same time it adds realism, potentially up to a point where the model becomes a black box. An example of a structural change is the need to capture spatial hierarchies, such as cities and countries, if space is expected to influence transmission dynamics or the roll out or effectiveness of an intervention. Often such structure is captured using ABMs that represent individuals in different countries, provinces, cities and even districts within a city, but such finely grained structure makes analytical interpretation of model operation virtually impossible. One possible compromise between ease of interpretation and complexity of spatial structure — e.g. between compartmental and ABMs — for populations clustered in cities or countries is the metapopulation model [11, 57].
As part of the assessment of predictive validity, it might also be useful to compare models with analogous simplified or extended versions. For example, the predictions of a spatially explicit ABM can be compared to those of its “equivalent” spatially implicit compartmental model. Because complex models, such as ABMs, will only be more realistic than compartmental models provided there are data to support their added realism, comparisons of ABMs with their simplified compartmental ‘analogue’ will demonstrate whether the added realism of the ABM is justified by improved predictive power and whether the complexity brought about by the ABM leads to substantial losses in model interpretability (“complexity” in the CCPV protocol, Table 3).
Comparison between models developed by different groups is another interesting way to investigate model validity. Parallel model development – by different groups working on the same problem – allows the identification of inconsistencies between model results, thus highlighting aspects of the system that are insufficiently understood or outcomes that are not robust to the decisions made in model construction. Parallel model development has been applied, for instance, to malaria eradication, rheumatoid arthritis and HIV antiretroviral treatment effectiveness.
If data for validation are non-existent, reporting of the alternative verification techniques used would enhance credibility. These might involve simulation-based observation techniques such as animation (e.g. reproducing maps of model predictions to identify malfunctions), degeneration tests (deactivating model functions to evaluate changes in predictions), extreme-conditions tests (checking that model predictions are logical even under unusually extreme inputs) or face validation (showing results to experts), and can be very useful to detect anomalies in the models (“model verification”, Table 3).
Implications for CCPV protocol
Reporting the underlying assumptions governing the model, as well as their justification, would help model users evaluate the structural validity of the model (Table 3 “characteristics, theoretical basis”). Validation processes will show if the models are oversimplified and do not capture the salient features of the pandemic. In addition, reporting structural and predictive validity together with subsequent structural changes (e.g. spatial explicitness) to models would allow policy makers to assess the reliability of model predictions, and other analysts to assess the robustness of model construction and parameterisation (Table 3 “construction aspects, space” and “model validation”). Further assistance in evaluating the validity of the model can be obtained through reporting model verification techniques, whether the model has been compared with simpler versions or with other models developed in parallel (Table 3 “model verification” and “complexity”).
Very few pandemic preparedness models integrate transmission dynamics and economic analysis . Most models reviewed could quantify the time course of an outbreak and the associated disease and health care endpoints. Metrics such as the reduction in the number infected or dying were commonly used to evaluate the effectiveness of any interventions considered. However, only a minority of studies (17% and 26% of compartmental models and ABMs respectively, Figure 1C) sought to address economic questions, either related to the economic impacts of the pandemic or the value for money of the control or mitigation measures in question. In some cases, this may be because epidemiological modellers lack the expertise to identify and model economic aspects. Collaboration between epidemiological modellers and health economists may thus be mutually beneficial to explore new interdisciplinary modelling approaches.
While evaluation of the effectiveness of interventions such as social distancing or antiviral prophylaxis is useful in itself, and may be enough to rule an intervention out or guide policy when costs are uncertain, in many circumstances being able to integrate effectiveness with economic concerns is critical in deciding whether to support the intervention. One way to elucidate whether economic aspects would enhance the usefulness of the model for policy makers is to ask whether the relative costs of the intervention would condition its selection. For instance, school closures of more than four weeks—although identified as effective strategies [4, 34, 64, 65]—have been shown to burden the economy and even treble the costs arising from an influenza pandemic . In addition, individuals who are economically active incur a much higher economic burden through job absenteeism due to illness or care giving . Considering the economic impacts of such heterogeneities at a social and individual level may change the optimal implementation of an intervention from what would be recommended based on epidemiological considerations alone (i.e. minimising disease burden). The inclusion of a cost-effectiveness outcome (e.g. cost per quality-adjusted life year (QALY) gained or per case averted) is a common approach which allows comparison of the value for money of different interventions for the same health problem (or even with other health problems when generic measures such as QALYs are used as the denominator).
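The cost-effectiveness outcome described above reduces to a simple incremental ratio; the sketch below uses purely hypothetical per-capita costs and QALY totals:

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: the extra cost incurred per
    additional QALY gained when the new strategy replaces the baseline."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Hypothetical per-capita figures for a baseline ("do nothing") strategy
# and an intervention (e.g. antiviral stockpiling):
cost_per_qaly = icer(cost_new=120.0, qaly_new=29.95,
                     cost_base=40.0, qaly_base=29.90)
print(cost_per_qaly)  # compared against a willingness-to-pay threshold
```

An intervention whose ratio falls below the decision maker's willingness-to-pay threshold per QALY would be judged good value for money.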
Few of the reviewed studies incorporated economic aspects but, of those that did, several novel approaches were taken. One such approach was to couple estimates of the cost-effectiveness of vaccinating specific age and risk groups to real-time predictions . These types of real-time outputs of the model, refined as the pandemic progressed, are helpful for decision makers who need to decide the number of vaccine doses to purchase and distribute, and to whom they will be allocated, based on the latest country-specific data.
Novel insights on the optimal allocation of economic resources were also obtained from approaches embedding compartmental models into optimization frameworks such as optimal control theory or dynamic programming [39, 45, 68–71]. For instance, Lee et al. , using optimal control theory, identified the optimal way to dynamically allocate control measures such as antiviral allocation and isolation, subject to the dynamics of the pandemic and the effects of the control measures on those dynamics. Their analysis identified aggressive allocation of antivirals at the beginning of the pandemic as an optimal strategy. Accounting for the dynamic nature of the pandemic and allowing control efforts to vary produces new dynamic insights for interventions, a fundamental difference from epidemic models that keep control efforts constant (Table 1).
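The difference between time-varying and constant control effort can be illustrated with a toy SIR model carrying an extra treatment-removal rate u(t). This is only a sketch with arbitrary parameters and hypothetical control schedules, not Lee et al.'s formulation; it shows merely that the timing of allocation changes the predicted attack rate, which a constant-control model cannot capture:

```python
# Toy SIR model with an additional treatment-removal rate u(t); all
# parameter values are arbitrary and the control schedules hypothetical.
def attack_rate(control):
    beta, gamma, dt = 0.5, 1/3, 0.01
    s, i, r = 0.999, 0.001, 0.0
    for step in range(int(365 / dt)):
        u = control(step * dt)                 # treatment effort at time t
        new_inf = beta * s * i * dt
        new_rem = (gamma + u) * i * dt         # recovery plus treatment
        s, i, r = s - new_inf, i + new_inf - new_rem, r + new_rem
    return 1.0 - s  # fraction ever infected

# Two schedules with the same total effort (the integral of u over time is 6):
aggressive_early = lambda t: 0.2 if t < 30 else 0.0    # front-loaded
constant_effort = lambda t: 0.05 if t < 120 else 0.0   # spread thinly

print(attack_rate(aggressive_early), attack_rate(constant_effort))
```

Which schedule performs better depends on the parameters and the full optimisation; the point of the sketch is only that the two allocations of identical total effort yield clearly different outcomes.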
Few compartmental models were used to perform cost-effectiveness analysis; those that were included cost-effectiveness analyses of antiviral prophylaxis and vaccination [72, 73]. Cost-effectiveness analyses were also incorporated into ABMs [6, 7, 74–76]. For instance, Sander et al.  estimated the number of QALYs lost and economic costs due to pandemic influenza using a detailed ABM structured by age and infection risk. This model represented people interacting in known contact groups such as households, neighbourhoods, communities, schools and work groups. QALYs were obtained from clinical trial data. Direct costs such as visits to physicians and indirect costs such as job absenteeism were also computed. As a result the cost-effectiveness of different antiviral, school closure and pre-vaccination strategies could be estimated and compared to inform policy making.
The integration of economic and epidemic models for pandemic preparedness does not yet appear to have explored all possible model combinations, with a large scope for modelling innovation. For instance, although advanced economic models such as CGE models have been applied to influenza pandemics and were able to capture the effects of job absenteeism or deaths on the affected sectors of various economies [66, 78, 79], our review did not identify any study that combined such models with dynamic epidemic models in a way that the two models feed back into each other. Not allowing feedback is reasonable if job absenteeism can be approximated as a sudden shock to the production systems—though in reality the shock might be progressive or present several peaks—or if feedback from the economy into the epidemic is not expected. Examples of such feedback could be changes in individuals’ commuting patterns or behaviour as the economy is affected, or a potential loss of the financial capacity to mitigate the epidemic at the individual and government levels.
Implications for CCPV protocol
Reporting the economic aspects considered in the model, the type of analysis employed, heterogeneity of impacts in different groups and disease burden metrics employed, would facilitate model users understanding the capabilities of the model and the adequacy of the economic analysis undertaken (Table 3, “model construction, economic aspects”).
Behavioural aspects of infection transmission have been studied in the context of the control of other diseases (a general review is provided by ). The inclusion of the behaviour of individuals during an influenza pandemic has heretofore been uncommon among compartmental models and has only recently started to receive attention [10, 44, 80, 81]. Although most pandemic models represent individuals as entities whose behaviour remains invariant, in reality human behaviour might hinder or foster pandemic mitigation efforts, especially during severe pandemics like that of 1918. Very few of the compartmental models reviewed considered the effect of changes in behaviour on the impact of the pandemic (7%). New insights have been obtained by integrating compartmental models with game theory [44, 80]. For instance, Galvani et al.  parameterized an epidemiological game-theoretic model from questionnaires on perceptions of influenza. The model was employed to compare self-interested vaccination behaviour among the elderly with the socially optimal behaviour, which would involve vaccinating children to reduce overall transmission. The model showed that the individual and social equilibria differed more for seasonal influenza than for pandemic influenza – because pandemic influenza might also pose a substantial risk to the young. This study illustrates how, as a result of including human behaviour in the model, the need to incentivize individuals to reduce overall influenza transmission can be identified.
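The gap between self-interested and socially optimal vaccination can be sketched using the standard final-size relation for an epidemic under vaccine coverage p. This is an illustrative calculation with hypothetical values of R0 and relative cost, not Galvani et al.'s parameterization:

```python
import math

# Illustrative values (hypothetical): basic reproduction number and the
# cost of vaccination relative to the cost of illness.
R0 = 1.5
REL_COST = 0.1

def infection_risk(p):
    """Attack rate among unvaccinated individuals at vaccine coverage p,
    iterating the final-size relation z = 1 - exp(-R0 * (1 - p) * z)."""
    z = 0.5
    for _ in range(2000):
        z = 1.0 - math.exp(-R0 * (1.0 - p) * z)
    return z

grid = [k / 200 for k in range(201)]

# Nash coverage: lowest coverage at which an unvaccinated individual's
# expected cost of illness no longer exceeds the cost of vaccinating.
p_nash = min(p for p in grid if infection_risk(p) <= REL_COST)

# Social optimum: coverage minimising total expected cost per capita.
def social_cost(p):
    return p * REL_COST + (1.0 - p) * infection_risk(p)

p_social = min(grid, key=social_cost)
print(p_nash, p_social)
```

Because unvaccinated individuals free-ride on herd immunity, voluntary (Nash) coverage falls short of the social optimum in this sketch, which is the kind of divergence that motivates incentivizing vaccination.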
In our review, the inclusion of individuals’ behaviour was more common among simulation models although, instead of basing behaviour representation on game or microeconomic theory, it was usually based on simple rules and assumptions. Different kinds of behaviours were considered in several models, including voluntary isolation, increased social distancing once infected, and preventive behaviour [33, 42, 66, 74, 79, 82–85]. The inclusion of behaviour can lead to substantially different conclusions. For instance, if individuals perceive an epidemic as life-threatening, they might change their commuting patterns, wear masks and take more extreme precautions  and as a result, a model not considering these behavioural changes would overestimate the attack rate and the number of fatalities that eventually would result from the epidemic. In a similar fashion, if individuals perceive an epidemic to be benign, vaccination rates and adoption of precautions may drop, undermining the effectiveness of control measures (evidence of both kinds of responses has been observed during the H1N1 2009 pandemic ).
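A minimal sketch of such a prevalence-dependent rule, assuming an arbitrary responsiveness parameter k (everything here is hypothetical), shows how a model that ignores protective behaviour can overestimate the attack rate:

```python
# Toy SIR in which individuals cut contacts as current prevalence i rises;
# k (behavioural responsiveness) and all other parameters are arbitrary.
def final_size(behaviour):
    beta0, gamma, dt, k = 0.5, 1/3, 0.01, 50.0
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(int(365 / dt)):
        beta = beta0 / (1.0 + k * i) if behaviour else beta0
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r  # fraction recovered by day 365

print(final_size(True), final_size(False))  # behaviour lowers the attack rate
```

The converse case from the text—individuals perceiving the epidemic as benign and relaxing precautions—would correspond to a rule that raises rather than lowers the contact rate.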
The extent to which human behaviour can affect model predictions is, however, poorly understood, and further research is necessary to gauge when behaviour should be included in models. A useful practice would be to report systematically the behavioural assumptions made in the model, including assumptions of homogeneity, and how the incorporation of individuals’ behaviour affects model predictions relative to the model without behaviour (Table 3). Data availability is also a major obstacle to the incorporation of human behaviour into models, and again data sharing mechanisms would facilitate model development.
Implications for CCPV protocol
Reporting the assumptions on how behaviour is modelled would help model users interpret model results. Reporting comparisons of model results with and without behaviour would further facilitate the understanding of the role of behaviour in the model (Table 3, “construction aspects, behaviour”).
Influenza pandemic models have, over the last decade, proliferated dramatically. In parallel to the rapid increase in the number of models, many now incorporate sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Techniques such as Bayesian inference, agent-based modelling and the application of game theory are being newly applied to influenza, answering a more diverse set of public health questions.
This increase in modelling diversity stems from an increase in diversity of research questions and policy strategies. Ultimately, however, the choices made in model construction will depend critically on the data available, the research question and the consideration of the trade-off between realism and interpretability of the model. Even though models need to be fit for purpose, it is noteworthy that many influenza pandemic models rely on parameters from previous modelling studies and are rarely validated using observed data.
Although model validation is not expected in influenza pandemic modelling, it is considered a basic prerequisite for publication in other fields, such as the related discipline of ecological modelling. For instance, the editorial policy of the journal Ecological Modelling states: “Papers that only present a model without support of ecological data for calibration and hopefully also validation of the model will not be accepted because a model has in most cases no interest before it has been held up to ecological reality” , and a standardised ODD protocol (overview, design concepts, and details) for documenting ABMs more generally in that field has been published in the same journal [31, 88]. Guidelines also exist in the field of health economics. Examples are guidelines from the National Institute for Clinical Excellence (NICE) in the UK , the Drummond Checklist that is required for economic submissions by the British Medical Journal , guidelines for cost-effectiveness analysis  and modelling guidelines from the International Society for Pharmacoeconomics and Outcomes Research .
Given the large variety in modelling approaches for influenza pandemic management and to facilitate comparison between models, we developed a simple general modelling Characteristics, Construction, Parameterization and Validation aspects (CCPV) reporting protocol (Table 3). The use of the protocol together with international data sharing mechanisms would facilitate comparability between models, transparency in decisions about the kinds of models to use, and ultimately increase the confidence in the use of modelling in formulating influenza pandemic policies.
M.I.C. and L.R.C. gratefully acknowledge research funding from the research grants NMRC/CSA/011/2009 and NMRC/H1N1R/005/2009 respectively. L.R.C. acknowledges support from the grant WBS R-154-000-527-133.
- Mills C, Robins J, Lipsitch M: Transmissibility of 1918 pandemic influenza. Nature. 2004, 432: 904-906.View ArticlePubMedGoogle Scholar
- Potter CW: A history of influenza. J Appl Microbiol. 2001, 91: 572-579.View ArticlePubMedGoogle Scholar
- Fraser C, Donnelly C, Cauchemez S, Hanage W, Van Kerkhove M, Hollingsworth T, Griffin J, Baggaley R, Jenkins H, Lyons E, et al: Pandemic potential of a strain of influenza A (H1N1): early findings. Science. 2009, 324: 1557-1561.PubMed CentralView ArticlePubMedGoogle Scholar
- Cauchemez S, Valleron A, Boelle P, Flahault A, Ferguson N: Estimating the impact of school closure on influenza transmission from Sentinel data. Nature. 2008, 452: 750-754.View ArticlePubMedGoogle Scholar
- Ferguson N, Cummings D, Cauchemez S, Fraser C, Riley S, Aronrag M, Iamsirithaworn S, Burke D: Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005, 437: 209-214.View ArticlePubMedGoogle Scholar
- Brown ST, Tai JHY, Bailey RR, Cooley PC, Wheaton WD, Potter MA, Voorhees RE, LeJeune M, Grefenstette JJ, Burke DS, et al: Would school closure for the 2009 H1N1 influenza epidemic have been worth the cost?: a computational simulation of Pennsylvania. BMC Publ Health. 2011, 11: 353.View ArticleGoogle Scholar
- Andradottir S, Chiu W, Goldsman D, Lee M, Tsui K-L, Sander B, Fisman D, Nizam A: Reactive strategies for containing developing outbreaks of pandemic influenza. BMC Publ Health. 2011, 11: S1.View ArticleGoogle Scholar
- Anderson RM, May RM: Infectious Diseases of Humans: Dynamics & Control. New York, USA: Oxford University Press; 1992.Google Scholar
- Baguelin M, Hoek AJV, Jit M, Flasche S, White PJ, Edmunds WJ: Vaccination against pandemic influenza A/H1N1v in England: a real-time economic evaluation. Vaccine. 2010, 28: 2370-2384.View ArticlePubMedGoogle Scholar
- Fenichel EP, Castillo-Chavez C, Ceddia MG, Chowell G, Parra PAG, Hickling GJ, Holloway G, Horan R, Morin B, Perrings C, et al: Adaptive human behavior in epidemiological models. Proc Natl Acad Sci U S A. 2011, 108: 6306-6311.PubMed CentralView ArticlePubMedGoogle Scholar
- Cooper BS, Pitman RJ, Edmunds WJ, Gay NJ: Delaying the international spread of pandemic influenza. PLoS Med. 2006, 3: e212.PubMed CentralView ArticlePubMedGoogle Scholar
- Bajardi P, Poletto C, Ramasco JJ, Tizzoni M, Colizza V, Vespignani A: Human mobility networks, travel restrictions, and the global spread of 2009 H1N1 pandemic. PLoS One. 2011, 6: e16591.PubMed CentralView ArticlePubMedGoogle Scholar
- Yu H, Cauchemez S, Donnelly CA, Zhou L, Feng L, Xiang N, Zheng J, Ye M, Huai Y, Liao Q: Transmission dynamics, border entry screening, and school holidays during the 2009 influenza A (H1N1) pandemic, China. Emerg Infect Dis. 2012, 18: 758.PubMed CentralView ArticlePubMedGoogle Scholar
- Lee VJ, Yap J, Cook AR, Chen MI, Tay JK, Tan BH, Loh JP, Chew SW, Koh WH, Lin R, et al: Oseltamivir ring prophylaxis for containment of 2009 H1N1 influenza outbreaks. N Engl J Med. 2010, 2010 (362): 2166-2174.View ArticleGoogle Scholar
- Daley DJ, Gani J: Epidemic Modelling: An Introduction. Page 13. Cambridge: Cambridge University Press; 2001.Google Scholar
- Keeling MJ, Rohani P: Modeling Infectious Diseases in Humans and Animals. Page 10. Princeton Univ Press: Princeton; 2007.Google Scholar
- Sargent RG: Verification and validation of simulation models. Proceedings of the 2005 Winter Simulation Conference. Edited by: Kuhl ME, Steiger NM, Armstrong FB, Joines JA. Piscataway, New Jersey, USA; 2005, 130-143.View ArticleGoogle Scholar
- Kermack WO, McKendrick AG: A contribution to the mathematical theory of epidemics. Proc R Soc Lond A. 1927, 115: 700-721.View ArticleGoogle Scholar
- Gumel A, Ruan S, Day T, Watmough J, Brauer F, van den Driessche P, Gabrielson D, Bowman C, Alexander M, Ardal A, et al: Modelling strategies for controlling SARS outbreaks. Proc R Soc B. 2004, 271: 2223-2232.PubMed CentralView ArticlePubMedGoogle Scholar
- Fraser C: Factors that make an infectious disease outbreak controllable. Proc Natl Acad Sci. 2004, 101: 6146-6151.PubMed CentralView ArticlePubMedGoogle Scholar
- Riley S: Large-scale spatial-transmission models of infectious disease. Science. 2007, 316: 1298-1301.View ArticlePubMedGoogle Scholar
- Coburn B, Wagner B, Blower S: Modeling influenza epidemics and pandemics: insights into the future of swine flu (H1N1). BMC Med. 2009, 7: 30.PubMed CentralView ArticlePubMedGoogle Scholar
- Arino J, Bauch CT, Brauer F, Driedger SM, Greer AL, Moghadas SM, Pizzi NJ, Sander B, Tuite A, van den Driessche P, et al: Pandemic influenza: modelling and public health perspectives. Math Biosci Eng. 2011, 8: 1-20.View ArticlePubMedGoogle Scholar
- Grassly NC, Fraser C: Mathematical models of infectious disease transmission. Nat Rev Microbiol. 2008, 6: 477-487.PubMedGoogle Scholar
- Lee V, Lye D, Wilder-Smith A: Combination strategies for pandemic influenza response—a systematic review of mathematical modeling studies. BMC Med. 2009, 7: 76.PubMed CentralView ArticlePubMedGoogle Scholar
- Pérez Velasco R, Praditsitthikorn N, Wichmann K, Mohara A, Kotirum S, Tantivess S, Vallenas C, Harmanci H, Teerawattananon Y: Systematic review of economic evaluations of preparedness strategies and interventions against influenza pandemics. PLoS One. 2012, 7: e30333.PubMed CentralView ArticlePubMedGoogle Scholar
- Funk S, Salathe M, Jansen VAA: Modelling the influence of human behaviour on the spread of infectious diseases: a review. J R Soc Interface. 2010, 7: 1247-1256.PubMed CentralView ArticlePubMedGoogle Scholar
- Moher D, Liberati A, Tetzlaff J, Altman DG, The PG: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009, 6: e1000097.PubMed CentralView ArticlePubMedGoogle Scholar
- Grüne-Yanoff T, Weirich P: The philosophy and epistemology of simulation: a review. Simul Gaming. 2010, 41: 20-50.View ArticleGoogle Scholar
- Winsberg E: Models of success versus the success of models: reliability without truth. Synthese. 2006, 152: 1-19.View ArticleGoogle Scholar
- Grimm V, Berger U, Bastiansen F, Eliassen S, Ginot V, Giske J, Goss-Custard J, Grand T, Heinz SK, Huse G, et al: A standard protocol for describing individual-based and agent-based models. Ecol Model. 2006, 198: 115-126.View ArticleGoogle Scholar
- Fedson DS: Meeting the challenge of influenza pandemic preparedness in developing countries. Emerg Infect Dis. 2009, 15: 365-371.PubMed CentralView ArticlePubMedGoogle Scholar
- Milne G, Kelso J, Kelly H, Huband S, McVernon J: A small community model for the transmission of infectious diseases: comparison of school closure as an intervention in individual-based models of an influenza pandemic. PLoS One. 2008, 3: e4005.PubMed CentralView ArticlePubMedGoogle Scholar
- Glass K, Barnes B: How much would closing schools reduce transmission during an influenza pandemic?. Epidemiology. 2007, 18: 623-628.View ArticlePubMedGoogle Scholar
- House T, Baguelin M, Van Hoek AJ, White PJ, Sadique Z, Eames K, Read JM, Hens N, Melegaro A, Edmunds WJ, Keeling MJ: Modelling the impact of local reactive school closures on critical care provision during an influenza pandemic. Proc R Soc B Biol Sci. 2011, 278: 2753-2760.View ArticleGoogle Scholar
- Vynnycky E, Edmunds W: Analyses of the 1957 (Asian) influenza pandemic in the United Kingdom and the impact of school closures. Epidemiol Infect. 2008, 136: 166-179.PubMed CentralPubMedGoogle Scholar
- Chen S, Liao C: Modelling control measures to reduce the impact of pandemic influenza among schoolchildren. Epidemiol Infect. 2008, 136: 1035-1045.PubMed CentralView ArticlePubMedGoogle Scholar
- Lee VJ, Chen MI: Effectiveness of neuraminidase inhibitors for preventing staff absenteeism during pandemic influenza. Emerg Infect Dis. 2007, 13: 449-457.PubMed CentralView ArticlePubMedGoogle Scholar
- Lee S, Chowell G, Castillo-Chávez C: Optimal control for pandemic influenza: the role of limited antiviral treatment and isolation. J Theor Biol. 2010, 265: 136-150.View ArticlePubMedGoogle Scholar
- Cooley P, Lee BY, Brown S, Cajka J, Chasteen B, Ganapathi L, Stark JH, Wheaton WD, Wagener DK, Burke DS: Protecting health care workers: a pandemic simulation based on Allegheny County. Influenza Other Respir Viruses. 2010, 4: 61-72.PubMed CentralView ArticlePubMedGoogle Scholar
- Chao DL, Halloran ME, Obenchain VJ, Longini IM Jr: FluTE, a publicly available stochastic influenza epidemic simulation model. PLoS Comput Biol. 2010, 6: e1000656.PubMed CentralView ArticlePubMedGoogle Scholar
- Savachkin A, Uribe A: Dynamic redistribution of mitigation resources during influenza pandemics. Socio Econ Plan Sci. In Press.Google Scholar
- Ong JBS, Chen MI, Cook AR, Lee HC, Lee VJ, Lin RTP, Tambyah PA, Goh LG: Real-time epidemic monitoring and forecasting of H1N1-2009 using influenza-like illness from general practice and family doctor clinics in Singapore. PLoS One. 2010, 5: e10036.PubMed CentralView ArticlePubMedGoogle Scholar
- Galvani A, Reluga T, Chapman G: Long-standing influenza vaccination policy is in accord with individual self-interest but not with the utilitarian optimum. Proc Natl Acad Sci U S A. 2007, 104: 5692-5697.PubMed CentralView ArticlePubMedGoogle Scholar
- Medlock J, Galvani A: Optimizing influenza vaccine distribution. Science. 2009, 325: 1705-1708.View ArticlePubMedGoogle Scholar
- Mylius SD, Hagenaars TJ, Lugner AK, Wallinga J: Optimal allocation of pandemic influenza vaccine depends on age, risk and timing. Vaccine. 2008, 26: 3742-3749.View ArticlePubMedGoogle Scholar
- Tuite AR, Fisman DN, Kwong JC, Greer AL: Optimal pandemic influenza vaccine allocation strategies for the Canadian population. PLoS One. 2010, 5: e10520.PubMed CentralView ArticlePubMedGoogle Scholar
- Krumkamp R, Kretzschmar M, Rudge JW, Ahmad A, Hanvoravongchai P, Westenhoefer J, Stein M, Putthasri W, Coker R: Health service resource needs for pandemic influenza in developing countries: a linked transmission dynamics, interventions and resource demand model. Epidemiol Infect. 2011, 139: 59-67.View ArticlePubMedGoogle Scholar
- Matrajt L, Longini IM Jr: Optimizing vaccine allocation at different points in time during an epidemic. PLoS One. 2010, 5: e13767.PubMed CentralView ArticlePubMedGoogle Scholar
- Halloran M, Ferguson N, Eubank S, Longini I, Cummings D, Lewis B, Xu S, Fraser C, Vullikanti A, Germann T, et al: Modeling targeted layered containment of an influenza pandemic in the United States. Proc Natl Acad Sci U S A. 2008, 105: 4639-4644.PubMed CentralView ArticlePubMedGoogle Scholar
- Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Scalia Tomba G, Wallinga J, et al: Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008, 5: 381-391.View ArticleGoogle Scholar
- Lee PM: Bayesian Statistics: An Introduction. 3rd edition. London: Arnold; 2004.Google Scholar
- Thomas A, O'Hara B, Ligges U, Sturtz S: Making BUGS open. R News. 2006, 6: 12-17.Google Scholar
- Plummer M: JAGS: a program for analysis of Bayesian graphical models using Gibbs sampling. Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), March 20-22. Edited by: Hornik K, Leisch F, Zeileis A. Vienna, Austria: Technische Universität Wien; 2003, ISSN 1609-395X 2003.Google Scholar
- Doucet A, Godsill S, Andrieu C: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput. 2000, 10: 197-208.View ArticleGoogle Scholar
- King AA, Ionides EL, Bretó CM, Ellner SP, Kendall BE, Wearing H, Ferrari MJ, Lavine M, Reuman DC: pomp: Statistical inference for partially observed Markov processes (R package). 2010, http://pomp.r-forge.r-project.org
- Colizza V, Barrat A, Barthelemy M, Valleron A, Vespignani A: Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions. PLoS Med. 2007, 4: e13.PubMed CentralView ArticlePubMedGoogle Scholar
- Debarre F, Bonhoeffer S, Regoes R: The effect of population structure on the emergence of drug-resistance during pandemic influenza. J R Soc Interface. 2007, 4: 893-906.PubMed CentralView ArticlePubMedGoogle Scholar
- Alonso PL, Brown G, Arevalo-Herrera M, Binka F, Chitnis C, Collins F, Doumbo OK, Greenwood B, Hall BF, Levine MM, et al: A research agenda to underpin malaria eradication. PLoS Med. 2011, 8: e1000406.PubMed CentralView ArticlePubMedGoogle Scholar
- Drummond MF, Barbieri M, Wong J: Analytic choices in economic models of treatments for rheumatoid arthritis: what makes a difference?. Med Dec Making. 2005, 25: 520-533.View ArticleGoogle Scholar
- Eaton JW, Johnson LF, Salomon JA, Bärnighausen T, Bendavid E, Bershteyn A, Bloom DE, Cambiano V, Fraser C, Hontelez JAC, et al: HIV treatment as prevention: systematic comparison of mathematical models of the potential impact of antiretroviral therapy on HIV incidence in South Africa. PLoS Med. 2012, 9: e1001245.PubMed CentralView ArticlePubMedGoogle Scholar
- Sargent RG: A tutorial on validation and verification of simulation models. 1988 Winter Simulation Conference. San Diego, USA; 1988.Google Scholar
- Lugnér AK, Mylius SD, Wallinga J: Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic. Health Econ. 2009, 19: 518-531.Google Scholar
- Ferguson N, Cummings D, Fraser C, Cajka J, Cooley P, Burke D: Strategies for mitigating an influenza pandemic. Nature. 2006, 442: 448-452.View ArticlePubMedGoogle Scholar
- Germann T, Kadau K, Longini I, Macken C: Mitigation strategies for pandemic influenza in the United States. Proc Natl Acad Sci U S A. 2006, 103: 5935-5940.PubMed CentralView ArticlePubMedGoogle Scholar
- Keogh-Brown MR, Smith RD, Edmunds JW, Beutels P: The macroeconomic impact of pandemic influenza: estimates from models of the United Kingdom, France, Belgium and The Netherlands. Eur J Health Econ. 2010, 11: 543-554.View ArticleGoogle Scholar
- Szucs T: The socio-economic burden of influenza. J Antimicrob Chemother. 1999, 44: 11-15.View ArticlePubMedGoogle Scholar
- Jung E, Iwami S, Takeuchi Y, Jo T-C: Optimal control strategy for prevention of avian influenza pandemic. J Theor Biol. 2009, 260: 220-229.View ArticlePubMedGoogle Scholar
- Lin F, Muthuraman K, Lawley M: An optimal control theory approach to non-pharmaceutical interventions. BMC Infect Dis. 2010, 10: 32.PubMed CentralView ArticlePubMedGoogle Scholar
- Tanner MW, Sattenspiel L, Ntaimo L: Finding optimal vaccination strategies under parameter uncertainty using stochastic programming. Math Biosci. 2008, 215: 144-151.View ArticlePubMedGoogle Scholar
- Prosper O, Saucedo O, Thompson D, Torres-Garcia G, Wang XH, Castillo-Chavez C: Modeling control strategies for concurrent epidemics of seasonal and pandemic H1N1 influenza. Math Biosci Eng. 2011, 8: 141-170.View ArticlePubMedGoogle Scholar
- Khazeni N, Hutton DW, Garber AM, Owens DK: Effectiveness and cost-effectiveness of expanded antiviral prophylaxis and adjuvanted vaccination strategies for an influenza A (H5N1) pandemic. Ann Intern Med. 2009, 151: 840-853.PubMed CentralView ArticlePubMedGoogle Scholar
- Carrasco LR, Lee VJ, Chen MI, Matchar DB, Thompson JP, Cook AR: Strategies for antiviral stockpiling for future influenza pandemics: a global epidemic-economic perspective. J R Soc Interface. 2011, 8: 1307-1313.PubMed CentralView ArticlePubMedGoogle Scholar
- Barrett C, Bisset K, Leidig J, Marathe A, Marathe M: Economic and social impact of influenza mitigation strategies by demographic class. Epidemics. 2011, 3: 19-31.PubMed CentralView ArticlePubMedGoogle Scholar
- Lee BY, Brown ST, Korch GW, Cooley PC, Zimmerman RK, Wheaton WD, Zimmer SM, Grefenstette JJ, Bailey RR, Assi T-M, Burke DS: A computer simulation of vaccine prioritization, allocation, and rationing during the 2009 H1N1 influenza pandemic. Vaccine. 2010, 28: 4875-4879.PubMed CentralView ArticlePubMedGoogle Scholar
- Epstein J, Goedecke D, Yu F, Morris R, Wagener D, Bobashev G: Controlling pandemic flu: the value of international air travel restrictions. PLoS One. 2007, 2: e401.PubMed CentralView ArticlePubMedGoogle Scholar
- Sander B, Nizam A, Garrison L, Postma M, Halloran M, Longini I: Economic evaluation of influenza pandemic mitigation strategies in the US using a stochastic microsimulation transmission model. Value Health. 2008, 12: 226-233.PubMed CentralView ArticlePubMedGoogle Scholar
- Dixon PB, Lee B, Muehlenbeck T, Rimmer MT, Rose A, Verikios G: Effects on the U.S. of an H1N1 epidemic: analysis with a quarterly CGE model. J Homel Secur Emerg Manag. 2010, 7: article75.Google Scholar
- Smith RD, Keogh-Brown MR, Barnett T: Estimating the economic impact of pandemic influenza: an application of the computable general equilibrium model to the UK. Soc Sci Med. 2011, 73: 235-244.View ArticlePubMedGoogle Scholar
- Shim E, Meyers LA, Galvani AP: Optimal H1N1 vaccination strategies based on self-interest versus group interest. BMC Publ Health. 2011, 11 (Suppl 1): S4.View ArticleGoogle Scholar
- Poletti P, Ajelli M, Merler S: The effect of risk perception on the 2009 H1N1 pandemic influenza dynamics. PLoS One. 2011, 6 (2): e16460.PubMed CentralView ArticlePubMedGoogle Scholar
- Morimoto T, Ishikawa H: Assessment of intervention strategies against a novel influenza epidemic using an individual-based model. Environ Health Prev Med. 2010, 15: 151-161.PubMed CentralView ArticlePubMedGoogle Scholar
- Aleman DM, Wibisono TG, Schwartz B: A nonhomogeneous agent-based simulation approach to modeling the spread of disease in a pandemic outbreak. Interfaces. 2011, 41: 301-315.View ArticleGoogle Scholar
- Kelso J, Milne G, Kelly H: Simulation suggests that rapid activation of social distancing can arrest epidemic development due to a novel strain of influenza. BMC Publ Health. 2009, 9: 117.View ArticleGoogle Scholar
- Loganathan P, Sundaramoorthy S, Lakshminarayanan S: Modeling information feedback during H1N1 outbreak using stochastic agent-based models. Asia Pac J Chem Eng. 2011, 6: 391-397.View ArticleGoogle Scholar
- Lau J, Yang X, Pang E, Tsui H, Wong E, Wing Y: SARS-related perceptions in Hong Kong. Emerg Infect Dis. 2005, 11: 417-424.PubMed CentralPubMedGoogle Scholar
- Jørgensen SE, Fath BD, Grant W, Nielsen SN: The editorial policy of ecological modelling. Ecol Model. 2006, 199: 1-3.View ArticleGoogle Scholar
- Grimm V, Berger U, DeAngelis DL, Polhill JG, Giske J, Railsback SF: The ODD protocol: a review and first update. Ecol Model. 2010, 221: 2760-2768.View ArticleGoogle Scholar
- Birch S, Gafni A: On being NICE in the UK: guidelines for technology appraisal for the NHS in England and Wales. Health Econ. 2002, 11: 185-191.View ArticleGoogle Scholar
- Drummond MF, Jefferson TO: Guidelines for authors and peer reviewers of economic submissions to the BMJ. BMJ. 1996, 313: 275-283.PubMed CentralView ArticlePubMedGoogle Scholar
- Murray C, Evans DB, Acharya A, Baltussen R: Development of WHO guidelines on generalized cost-effectiveness analysis. Health Econ. 2000, 9: 235-251.View ArticleGoogle Scholar
- Weinstein MC, O'Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, Luce BR: Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices—Modeling Studies. Value Health. 2003, 6: 9-17.View ArticlePubMedGoogle Scholar
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.