A common task in statistical practice is the estimation of unknown parameters from available data. When proposing a model, one can rely on unbiased estimators, for which the expected value of the estimator equals the true parameter being estimated. Historically, the absence of bias has often been considered an attractive property of an estimator, because it allows intuitive interpretation of the results. However, a fixation on bias reduction increases the variance, which deteriorates the predictive performance of a model. It may also happen that the data do not provide enough information to produce well-behaved unbiased estimates. In these so-called ill-conditioned problems, unbiased estimators over-fit the data, or are impossibleible to obtain.

This work focusses on ill-conditioned problems in high-dimensional data analysis. Typical examples of high-dimensional data are signals and spectra in analytical chemistry, image data in computer vision, or microarray data in biology. In all chapters, penalized estimators, often called shrinkage estimators, are used to obtain estimates. This means that the usual loss function is augmented with some type of penalty function that constrains or shrinks the coefficients in the model.

The concept of ill-posedness is further introduced in the next section. A short introduction to penalized estimation is given in Section 1.2. Section 1.3 provides a chapter-by-chapter introduction to the different topics within this thesis; in addition, it provides some more details concerning the data and its technological aspects not treated in the particular chapters.
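As a minimal sketch of the idea (not taken from the thesis itself), ridge regression illustrates penalized estimation: the least-squares loss is augmented with a quadratic penalty that shrinks the coefficients, which makes an ill-conditioned problem (here, more predictors than observations) solvable at the cost of some bias.

```python
import numpy as np

# Illustrative example: ridge regression as a penalized estimator.
# The loss ||y - Xb||^2 is augmented with lam * ||b||^2; this shrinks
# the coefficients and stabilizes an ill-conditioned problem.

rng = np.random.default_rng(0)
n, p = 30, 50                      # fewer observations than predictors
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0                # only a few coefficients are nonzero
y = X @ beta_true + rng.standard_normal(n)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# With lam = 0 the normal equations are singular (p > n), so no
# unbiased least-squares estimate exists; a small positive penalty
# makes the problem well-posed. Larger lam shrinks harder.
b_mild = ridge(X, y, lam=1.0)
b_strong = ridge(X, y, lam=100.0)
print(np.linalg.norm(b_mild), np.linalg.norm(b_strong))
```

Increasing the penalty weight `lam` pulls the estimate further toward zero, trading bias for a reduction in variance.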

Additional Metadata
Keywords high-dimensional data analysis, penalized estimation
Promotor P.H.C. Eilers (Paul), P.J. van der Spek (Peter)
Publisher Erasmus University Rotterdam
ISBN 978-94-6191-898-7
Persistent URL hdl.handle.net/1765/51414
de Rooi, J.J. (2013, October 24). Penalized Estimation in High-Dimensional Data Analysis. Erasmus University Rotterdam. Retrieved from http://hdl.handle.net/1765/51414