Simple proof of robustness for Bayesian heavy-tailed linear regression models
Abstract
In the Bayesian literature, a line of research known as resolution of conflict characterizes the robustness of statistical models against outliers. A model's robustness is characterized by establishing the limiting behaviour of the posterior distribution under an asymptotic framework in which the outliers move away from the bulk of the data. The proofs of these robustness characterization results, especially the recent ones for regression models, are technical and unintuitive, which limits their accessibility and hinders the development of theory in this line of research. We highlight that the complexity of the proofs stems from the generality of the assumptions on the prior distribution. To address this accessibility issue, we present a significantly simpler proof for a linear regression model with a specific prior distribution corresponding to the one typically used. The proof is intuitive and relies on classical results of probability theory. To promote the development of theory in resolution of conflict, we highlight which steps are valid only for linear regression and which hold in greater generality. The generality of the assumption on the error distribution is also appealing: essentially, it can be any distribution with regularly varying or log-regularly varying tails. To date, no result of such generality exists for models with regularly varying distributions. Finally, we analyse the necessity of the assumptions.
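As a rough illustration of the robustness phenomenon the abstract describes (not the paper's actual model, prior, or proof), the sketch below compares posterior inference for a simple location parameter under a light-tailed (normal) likelihood versus a heavy-tailed Student-t likelihood with regularly varying tails, as a single outlier moves away from the bulk of the data. All data values and settings are hypothetical, and the posterior is computed by naive grid approximation under a flat prior on a bounded interval.

```python
import math

def log_normal(r):
    # log-density kernel of a standard normal error (light tails)
    return -0.5 * r * r

def log_student(r, nu=2.0):
    # log-density kernel of a Student-t error with nu degrees of freedom;
    # its tails are regularly varying, the heavy-tailed case in the abstract
    return -0.5 * (nu + 1.0) * math.log(1.0 + r * r / nu)

def posterior_mean(data, loglik, lo=-10.0, hi=10.0, n=4001):
    # grid approximation of the posterior mean of a location parameter
    # under a flat prior on [lo, hi]
    step = (hi - lo) / (n - 1)
    thetas, logs = [], []
    for i in range(n):
        t = lo + i * step
        thetas.append(t)
        logs.append(sum(loglik(y - t) for y in data))
    m = max(logs)  # subtract the max log-posterior for numerical stability
    num = den = 0.0
    for t, lp in zip(thetas, logs):
        w = math.exp(lp - m)
        num += t * w
        den += w
    return num / den

bulk = [0.1, -0.2, 0.3, 0.0, 0.2]  # hypothetical well-behaved observations
for c in (5.0, 50.0):              # the outlier moves away from the bulk
    data = bulk + [c]
    m_n = posterior_mean(data, log_normal)
    m_t = posterior_mean(data, log_student)
    print(f"outlier={c:5.1f}  normal-tail mean={m_n:6.2f}  t-tail mean={m_t:6.2f}")
```

Under the normal likelihood the posterior mean tracks the outlier, while under the heavy-tailed likelihood it stabilizes near the bulk of the data; this wholesale rejection of the conflicting observation in the limit is the behaviour that resolution-of-conflict results make precise.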
- Publication: arXiv e-prints
- Pub Date: January 2025
- arXiv: arXiv:2501.06349
- Bibcode: 2025arXiv250106349G
- Keywords: Mathematics - Statistics Theory