Maternal exposure to environmental chemicals may cause adverse effects on child development, especially during early gestation. Studies have mostly focused on prenatal exposure to individual chemicals of interest, ignoring the fact that women are exposed to multiple chemicals on a daily basis. In this study, we investigate the patterns of maternal biomonitoring data for twenty-nine chemicals by maternal characteristics from the Maternal-Infant Research on Environmental Chemicals (MIREC) Study. Principal component analysis (PCA) was used to extract the chemicals that have similar patterns as well as to reduce the dimension of the dataset. Cluster analysis was subsequently implemented to categorize participants based on their socio-demographic variables, followed by hypothesis testing to determine whether the mean converted concentrations of chemical substances differ significantly among women with different characteristics. Eleven components were retained, which explained approximately 70% of the variance, and six main clusters of participants were identified. In particular, the results showed that for pregnant women, one component is dominated by persistent organic pollutants, while another is dominated by phthalates. The results demonstrated that mixtures of chemical concentrations have a strong association with the characteristics of the participants. As a result, future studies may benefit from analyzing multiple exposures to environmental chemicals in relation to health outcomes in pregnant women and children.
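The PCA step described above can be sketched as follows. This is a minimal illustration on simulated data (the MIREC biomonitoring data are not public), with the sample size, correlation structure, and 70% retention threshold chosen only to mirror the abstract's description:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for biomonitoring data: 500 participants x 29 chemicals.
# A few latent factors mimic correlated chemical classes (e.g. POPs, phthalates).
n, p = 500, 29
latent = rng.normal(size=(n, 5))
loadings = rng.normal(size=(5, p))
X = latent @ loadings + rng.normal(scale=0.5, size=(n, p))

# Standardize, then PCA via eigendecomposition of the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain enough components to explain ~70% of the total variance,
# then project participants onto them (the scores feed the cluster analysis).
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.70)) + 1
scores = Z @ eigvecs[:, :k]
print(k, round(explained[k - 1], 2))
```

The component loadings (columns of `eigvecs`) are what reveal which chemicals dominate each component; the scores are then the low-dimensional input to the clustering step.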
Methods based on the propensity score allow one to reduce the effects of measured confounding when using observational data to estimate the effects of treatments, exposures and interventions. During this introductory talk, I will define the propensity score, describe different methods in which it can be used, and discuss good practice when using propensity score methods.
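One of the methods the talk will cover, inverse-probability-of-treatment weighting, can be illustrated with a small simulation. Everything here (the data-generating model, the true effect of 2.0, the Newton-Raphson logistic fit) is a hypothetical sketch, not the speaker's material:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observational data: a confounder x affects both treatment and outcome.
n = 2000
x = rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))   # confounded assignment
y = 2.0 * treat + 1.5 * x + rng.normal(size=n)        # true treatment effect = 2.0

# Estimate the propensity score e(x) = P(treat = 1 | x) by logistic regression,
# fitted with a few Newton-Raphson steps.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (treat - p))
e = 1 / (1 + np.exp(-X @ beta))

# Weighting by 1/e (treated) and 1/(1-e) (controls) balances the confounder.
w = treat / e + (1 - treat) / (1 - e)
naive = y[treat == 1].mean() - y[treat == 0].mean()
iptw = (np.sum(w * treat * y) / np.sum(w * treat)
        - np.sum(w * (1 - treat) * y) / np.sum(w * (1 - treat)))
print(round(naive, 2), round(iptw, 2))
```

The naive difference in means is biased upward by the confounder, while the weighted estimate recovers something close to the true effect, which is the core idea behind all propensity score methods (matching, stratification, and weighting alike).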
Spatial statistics concerns solving scientific problems involving spatially dependent data, where two data points near one another tend to be more similar than two data points separated by a long distance. In the health sciences, spatial statistics is commonly used for modelling and understanding spatially varying risk factors such as air pollution or smoking prevalence, and for making inference about the spatial distribution of disease risk. A short introduction to spatial statistics and geostatistics will be given in this talk, describing: Gaussian random fields and spatial correlation structures; statistical models for continuous-valued risk factor data and integer-valued health outcome data; and inference methods for fitting geostatistical models to data. Examples shown will include arsenic in groundwater and locations of murders in Toronto.
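The first topic listed, a Gaussian random field with a spatial correlation structure, can be sketched in a few lines. The exponential covariance and the parameter values below are illustrative choices, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stationary Gaussian random field at irregular 2-D locations,
# with an exponential covariance: Cov(s_i, s_j) = sigma2 * exp(-d_ij / phi).
n = 400
coords = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sigma2, phi = 1.0, 2.0
C = sigma2 * np.exp(-d / phi)

# Draw one realization via the Cholesky factor of the covariance matrix
# (a small jitter on the diagonal keeps the factorization stable).
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
z = L @ rng.normal(size=n)

# "Nearby points are more similar": implied correlation at distance 1 vs 5.
print(round(np.exp(-1 / phi), 2), round(np.exp(-5 / phi), 2))  # → 0.61 0.08
```

The decaying correlation is exactly the property the abstract opens with: dependence is strong at short range and fades with distance, and fitting `sigma2` and `phi` to observed data is what geostatistical inference does.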
In the analysis of time-to-event data, the possibility of more than one type of event changes both the statistical analysis and the interpretation of the results. For example, in oncology, patients could experience local or distant relapse of their disease. When the primary cancer is controlled, patients live long enough to experience side effects of the treatment such as heart disease or second cancers. It is desirable to be able to draw conclusions not only about overall event-free survival but also about each type of event separately. Thus, competing risks situations arise. The type of analysis presented will be based on modeling the subdistribution hazard, after an introduction of the specific terminology. Examples will be provided to enhance the understanding of the intricacies of this type of analysis.
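A descriptive companion to the modeling the talk describes is the nonparametric cumulative incidence function, which quantifies each event type separately while accounting for the competing one. The sketch below uses simulated data with made-up hazards, roughly mirroring the relapse-versus-other-causes example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate competing risks: relapse (cause 1) vs death from other causes (cause 2),
# with independent censoring.
n = 1000
t1 = rng.exponential(10, n)   # latent time to relapse
t2 = rng.exponential(20, n)   # latent time to competing event
c = rng.exponential(25, n)    # censoring time
time = np.minimum(np.minimum(t1, t2), c)
cause = np.where(time == c, 0, np.where(t1 <= t2, 1, 2))  # 0 = censored

# Aalen-Johansen cumulative incidence: at each event time, cause k gains
# S(t-) / n_at_risk, where S is overall (all-cause) event-free survival.
order = np.argsort(time)
surv, cif1, cif2 = 1.0, 0.0, 0.0
at_risk = n
for k in cause[order]:
    if k == 1:
        cif1 += surv / at_risk
    elif k == 2:
        cif2 += surv / at_risk
    if k != 0:
        surv *= 1 - 1 / at_risk
    at_risk -= 1
print(round(cif1, 2), round(cif2, 2), round(cif1 + cif2 + surv, 2))
```

Note that the two incidences and the event-free survival sum exactly to one; naively applying one-minus-Kaplan-Meier to each cause separately breaks this accounting, which is one of the intricacies such analyses must handle.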
Network meta-analysis is a technique that can be used to summarize the evidence in fields where multiple options exist for the treatment of a condition or disease. Although this is a relatively new technique, with the first serious applications published only ten years ago, it is an increasingly popular one. A recent commentary in the Lancet even suggested that network meta-analysis might become "the norm for comparative effectiveness." I will present the basics of network meta-analysis, so that at the end of this talk you will:
(1) Understand what a network meta-analysis is and the key assumption(s) it makes
(2) Understand similarities and differences between network meta-analyses and traditional meta-analyses with respect to
(a) Literature search
(b) Data abstraction and analyses
(c) Presentation of results
(3) Be aware of software to fit standard network meta-analysis models
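The simplest building block of the models in point (3) is the Bucher-style indirect comparison, shown below with hypothetical effect estimates (the numbers are invented for illustration):

```python
import math

# Suppose pairwise meta-analyses give pooled log-odds-ratio estimates
# (with standard errors) for A vs B and B vs C -- hypothetical numbers.
d_ab, se_ab = -0.40, 0.15   # A vs B
d_bc, se_bc = -0.25, 0.20   # B vs C

# Under the consistency assumption -- the key assumption of network
# meta-analysis -- the indirect A-vs-C effect is the sum of the two direct
# effects, and (with independent sources) the variances add.
d_ac = d_ab + d_bc
se_ac = math.sqrt(se_ab**2 + se_bc**2)
print(round(d_ac, 2), round(se_ac, 2))  # → -0.65 0.25
```

Full network meta-analysis models generalize this idea to networks with many treatments and both direct and indirect evidence on the same comparison, which is where the consistency assumption becomes testable.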
Neutral zone classifiers include 'no-decision' as a classification outcome. In this talk, neutral zone classifiers will be extended to sequential contexts for analyzing longitudinal data. Applications could include medical diagnosis, where a decision variable is repeatedly measured on each subject with the expectation of being able to ultimately identify a patient's disease status. Sequential classifiers monitor the sequence of measurements and decide when to stop sampling and how to classify the subject. Decision boundaries for the posterior probability of class membership are derived to minimize the overall expected cost. Misclassification rates and expected sample sizes are investigated, and the results are compared with non-sequential classifiers. A recursive optimization algorithm is derived for Gaussian contexts that overcomes the computational complexity of backward induction.
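The stopping logic can be illustrated with a toy two-class Gaussian example. The fixed thresholds below are arbitrary placeholders, not the cost-optimal boundaries the talk derives:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sequential neutral-zone classifier. A decision variable is measured
# repeatedly; class 0 ~ N(0,1), class 1 ~ N(1,1). After each measurement we
# update the posterior P(class 1 | data); while it stays inside the neutral
# zone (lower, upper) we keep sampling, otherwise we stop and classify.
def classify(measurements, prior=0.5, lower=0.05, upper=0.95):
    log_odds = np.log(prior / (1 - prior))
    for i, x in enumerate(measurements, start=1):
        # Log-likelihood ratio of N(1,1) vs N(0,1) for one observation.
        log_odds += x - 0.5
        post = 1 / (1 + np.exp(-log_odds))
        if post >= upper:
            return 1, i
        if post <= lower:
            return 0, i
    return None, len(measurements)   # still in the neutral zone: no decision

# One simulated subject whose true class is 1.
xs = rng.normal(loc=1.0, scale=1.0, size=50)
label, n_used = classify(xs)
print(label, n_used)
```

The quantities the abstract mentions fall straight out of such a procedure: `label` errors across simulated subjects give misclassification rates, and averaging `n_used` gives the expected sample size to compare against a fixed-sample classifier.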
Policy makers deciding on the allocation of healthcare resources need to be informed a priori about the impact of their decisions on population health and the healthcare budget. Decision analysis is one of the tools that health economists and decision scientists use to inform such policy decisions. There is a large spectrum of decision analytic models being applied in health to inform policy making; they span from simple decision trees to agent-based microsimulations and stochastic infectious disease models. These models rely on input from different levels of evidence (data, expert opinion) that need to be synthesized, often using Bayesian methods.
This lecture will provide a brief introduction to the methods of decision-analytic modeling used in health economics and policy. It will present the basic decision models used (decision trees, Markov models, agent-based microsimulations) and will briefly discuss a few more sophisticated methods of healthcare decision modeling (multistate modeling, discrete event simulations, etc.). Finally, the lecture will outline statistical methods commonly used in decision modeling (survival analysis, multilevel/hierarchical models, etc.) and illustrate our attempts at developing a unified framework for healthcare decision making.
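The Markov cohort model mentioned above can be sketched in a few lines. All numbers here (states, transition probabilities, costs, utilities, discount rate) are invented for illustration:

```python
import numpy as np

# Minimal three-state Markov cohort model (healthy / sick / dead) run over
# annual cycles, with discounted costs and quality-adjusted life years (QALYs).
P = np.array([[0.90, 0.08, 0.02],    # from healthy
              [0.00, 0.85, 0.15],    # from sick
              [0.00, 0.00, 1.00]])   # dead is absorbing
cost = np.array([100.0, 2000.0, 0.0])   # annual cost per state
utility = np.array([1.0, 0.6, 0.0])     # QALY weight per state

state = np.array([1.0, 0.0, 0.0])  # whole cohort starts healthy
total_cost = total_qaly = 0.0
discount = 0.03
for year in range(40):
    df = 1 / (1 + discount) ** year
    total_cost += df * state @ cost     # expected discounted cost this cycle
    total_qaly += df * state @ utility  # expected discounted QALYs this cycle
    state = state @ P                   # advance the cohort one cycle
print(round(total_cost), round(total_qaly, 1))
```

Running such a model once per treatment strategy, and comparing incremental costs against incremental QALYs, is the basic workflow a decision tree or microsimulation elaborates on.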
While modern social epidemiology has largely staked its reputation on producing strong evidence about the effects of social exposures on individual health, questions about the health of populations and societies are central to the historical origins of the field. Over time, however, the ecological methods used in the field's earliest studies fell out of favour and were replaced with methods which, while arguably offering greater rigour, lend themselves primarily to testing individual-level hypotheses. Lost in the process has been a central focus on pressing questions about the health of populations and the conditions of societies. A newer suite of statistical methods offers an opportunity for the field to return more of its attention to such questions, by enabling researchers to draw ecological inferences with the rigour for which social epidemiology has come to be known.
The impact of genomic technologies on the care of patients with rare inherited diseases and their families is broadly accepted. Clear demonstration of their usefulness in the health care domain is illustrated in the management of disorders such as phenylketonuria, familial hypercholesterolemia and many others. The impact of genomic technologies on the population health domain, that is, common disease management, however, is more controversial. Given these successes and controversies, how can the goal of precision medicine be reconciled with a population health impact? I would argue that the field of genetic epidemiology, with the support of innovative statistical approaches for genetics, makes this reconciliation possible and paves the way to precision medicine and improved population health.
Chronic wounds have a huge social and economic impact on both patients and the health care system. Multiple studies have been conducted to evaluate various types of interventions and treatments. The choice of outcomes in these studies is inconsistent, including the proportion of wounds healed, time to healing and rate of healing. Here I present a number of statistical challenges and complications around the analysis of these outcomes, based on examples from the literature as well as on experience from two pragmatic clinical trials. Important lessons learned regarding the choice of study design, outcomes and statistical methods applied will be presented.