However, MI software for this is very limited. Further, analysis of a data set whose values are Not Missing At Random (NMAR) is complicated by the need to extend the MAR imputation model to include a model for the reason for dropout. Here, we propose a simple alternative. We first impute under MAR and obtain parameter estimates for each imputed data set. The overall NMAR parameter estimate is a weighted average of these parameter estimates, where the weights depend on the assumed degree of departure from MAR. In some settings, this approach gives results that closely agree with joint modelling as the number of imputations increases. In others, it provides ball-park estimates of the results of full NMAR modelling, indicating the extent to which such modelling is necessary and providing a check on its results.
We illustrate our approach with a small simulation study, and the analysis of data from a trial of interventions to improve the quality of peer review.
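The weighted-average step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the imputed data, the per-imputation summaries, and the exponential-tilting form of the weights are all assumptions made here for concreteness, with `delta` standing in for the assumed degree of departure from MAR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical results of a MAR multiple-imputation analysis:
# one parameter estimate per imputed data set.
M = 50                                            # number of imputations
estimates = rng.normal(loc=1.0, scale=0.2, size=M)

# A summary of the imputed values in each completed data set
# (e.g. their mean), used here to tilt the weights.
imputed_means = rng.normal(loc=0.0, scale=1.0, size=M)

def nmar_estimate(estimates, summaries, delta):
    """Weighted average of per-imputation estimates.

    delta = 0 recovers the ordinary MI point estimate (equal weights);
    nonzero delta up- or down-weights imputations according to their
    imputed-value summaries, reflecting an assumed departure from MAR.
    """
    w = np.exp(delta * summaries)
    w /= w.sum()                                  # normalise weights to 1
    return float(np.sum(w * estimates))

print(nmar_estimate(estimates, imputed_means, delta=0.0))  # plain MAR average
print(nmar_estimate(estimates, imputed_means, delta=1.0))  # tilted estimate
```

With `delta = 0` the call reduces to the simple mean of the per-imputation estimates, so the ordinary MAR analysis is a special case of the weighted one.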
This is the traditional Christmas meeting, to be accompanied by cheese ("cheez" for vegans) and wine.
Reference will be made to some uses of statistical ideas in the courts. Cases discussed will include (a) the Daubert, Joiner and Kumho Tire trilogy from the US Supreme Court, where the requirements for the admissibility of expert evidence were expressed, (b) the English Court of Appeal cases R v. Adams, D.J. and R. v. Doheny and Adams, where comments were made about the role of statistics in legal argument, and (c) a more recent English Court of Appeal case, R v. Grey, where comments were made about the requirements for the use of frequency data. The relevance of these for recent cases concerning sudden unexplained deaths of infants will be discussed.
This presentation involves three case studies describing the unit's recent collaborative work with:
This talk is concerned with a Bayesian approach to parameter estimation and prediction for smooth transition models.
Reconstruction of images is a nonlinear inverse problem. To yield a stable solution it is necessary to make considerable use of prior information. The role of a Bayesian approach is therefore of fundamental importance, especially when coupled with MCMC sampling to provide information about solution behaviour.
This talk will describe the techniques used to model regulatory networks, and the computational tools needed for simulation and analysis. An overview will also be given of the MCMC algorithms which can in principle be used for carrying out Bayesian inference for the parameters underlying the network models, and the problems associated with applying such techniques in practice.
The project's web page gives more details.