We consider the problem of estimating a latent signal from a lossy compressed version of the data. We assume the data are generated by an underlying signal and compressed using a lossy compression scheme that is agnostic to this signal; in reconstruction, the underlying signal is estimated so as to minimize a prescribed loss measure. For this setting and an arbitrary distortion measure between the data and its compressed version, we define the rate-distortion (RD) risk of an estimator as its risk under the distribution achieving Shannon's RD function for that distortion. We derive conditions under which the RD risk describes the risk of estimating from the compressed data. The main theoretical tools for obtaining these conditions are transportation-cost inequalities combined with properties of source codes achieving Shannon's RD function. We show that these conditions hold in various settings, including when the alphabet of the underlying signal is finite and when the RD-achieving distribution is multivariate normal, and we evaluate the RD risk in special cases under these settings. This risk provides an achievable loss in compress-and-estimate settings, i.e., when the data is first compressed, communicated, or stored using a procedure that is agnostic to the underlying signal, which is later estimated from the compressed version of the data. Our results imply the following general procedure for designing estimators for datasets undergoing lossy compression without specifying the actual compression technique: train the estimator on a perturbation of the data drawn according to the RD-achieving distribution. Under general conditions, this estimator achieves the RD risk when applied to the lossy compressed version of the data.
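The training procedure described above can be sketched numerically in the scalar Gaussian case. The sketch below is illustrative, not the paper's method in full generality: it assumes squared-error distortion, for which the RD-achieving distribution of a Gaussian source is the standard forward test channel Y = aX + W with a = 1 - D/sigma_X^2 and W ~ N(0, D*a). The signal model, noise levels, and the choice of a linear estimator are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian instance: latent signal theta, data X = theta + noise.
n = 100_000
theta = rng.normal(0.0, 1.0, n)        # latent signal, theta ~ N(0, 1)
X = theta + rng.normal(0.0, 0.5, n)    # observed data, so sigma_X^2 = 1.25

# RD-achieving (forward test channel) distribution for a Gaussian source
# under squared-error distortion at distortion level D:
#   Y = a*X + W,   a = 1 - D/sigma_X^2,   W ~ N(0, D*a).
sigma2_X = 1.25
D = 0.4
a = 1.0 - D / sigma2_X
Y = a * X + rng.normal(0.0, np.sqrt(D * a), n)

# Train the estimator on the RD perturbation of the data rather than on any
# particular compressed representation: here, the least-squares linear
# estimator of theta from Y.
b = np.cov(theta, Y)[0, 1] / np.var(Y)
estimate = b * Y
rd_risk = np.mean((theta - estimate) ** 2)   # empirical RD risk
print(f"empirical RD risk at D={D}: {rd_risk:.3f}")
```

The fitted coefficient `b` would then be applied to the output of an actual (signal-agnostic) compressor operating at distortion D; the claim of the paper is that, under its conditions, the resulting loss matches the RD risk estimated here.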