In computational imaging, hardware for signal sampling and software for object reconstruction are designed in tandem for improved capability. Examples of such systems include computed tomography (CT), magnetic resonance imaging (MRI), and super-resolution microscopy. In contrast to traditional cameras, these devices take indirect measurements of the object and rely on computational algorithms for reconstruction. This enables advanced capabilities such as super-resolution or three-dimensional imaging, pushing forward the frontier of scientific discovery. However, these techniques generally require a large number of measurements, leading to low throughput, motion artifacts, and/or radiation damage, which limits their application. Data-driven approaches to reducing the number of required measurements have been proposed, but they predominantly rely on a ground-truth or reference dataset, which may be impossible to collect. This work outlines a self-supervised approach and explores the future work necessary to make such a technique usable in real applications. Light-emitting diode (LED) array microscopy, a modality that enables high-resolution, wide-field-of-view visualization of transparent objects in two and three dimensions, is used as an illustrative example. We release our code at https://github.com/vganapati/LED_PVAE and our experimental data at https://doi.org/10.6084/m9.figshare.21232088.
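To make the "indirect measurement plus computational reconstruction" idea concrete, the following is a minimal toy sketch (not the paper's method, and all sizes and the linear forward model are illustrative assumptions): an unknown object is observed through an underdetermined linear sensing operator, and a reconstruction is computed via the pseudoinverse.

```python
import numpy as np

# Toy illustration (hypothetical, not the LED_PVAE method): measurements y
# are an indirect function of the object x, modeled here as y = A @ x + noise.
rng = np.random.default_rng(0)
n_pixels = 64        # size of the unknown object (assumed for illustration)
n_measurements = 32  # fewer measurements than unknowns (underdetermined)

x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, size=5, replace=False)] = 1.0  # sparse object
A = rng.standard_normal((n_measurements, n_pixels))        # sensing model
y = A @ x_true + 0.01 * rng.standard_normal(n_measurements)

# Reconstruction: minimum-norm least-squares solution via the pseudoinverse.
x_hat = np.linalg.pinv(A) @ y

# With full row rank, the measurement residual is near zero even though the
# problem is underdetermined; recovering x_true itself requires additional
# priors, e.g., ones learned from data as in the approaches discussed above.
residual = np.linalg.norm(A @ x_hat - y)
print(f"measurement residual: {residual:.2e}")
```

This captures why reducing the number of measurements is hard: many objects explain the same data, so the reconstruction algorithm must supply prior information, which data-driven methods typically learn from a reference dataset.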