The use of an n-bit signed analog-to-digital converter for quantizing an analog sample is considered, taking into account the error variance introduced by the conversion process. Companders have been employed successfully to reduce the error variance in cases where the amplitude density is known a priori. However, companders add to the cost and complexity of a digital system, and analog data compression is often given a higher priority than precision. In the present investigation, a noncompanding method is presented which optimizes precision when a uniform quantizer is used. The optimal threshold for the uniform quantization of two important classes of inputs is derived and tested. It is shown that for a uniformly distributed input, the threshold should be set equal to the maximal value of the signal. For normally distributed processes, the threshold is chosen as a function of the prespecified wordlength.
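The trade-off described above can be illustrated numerically. The sketch below (all names and parameter values are illustrative, not taken from the paper) implements an n-bit signed uniform quantizer with clipping threshold T and empirically measures the quantization error variance as T is swept: a small T incurs overload (clipping) noise, a large T incurs granular noise, and the optimum lies between. For a uniformly distributed input the minimum falls at the signal's maximal value, consistent with the stated result.

```python
import numpy as np

def uniform_quantize(x, n_bits, threshold):
    """Mid-rise n-bit signed uniform quantizer on [-threshold, threshold].
    Samples outside that range are clipped to the outermost levels."""
    levels = 2 ** n_bits
    step = 2.0 * threshold / levels
    idx = np.clip(np.floor((x + threshold) / step), 0, levels - 1)
    return -threshold + (idx + 0.5) * step

def error_variance(x, n_bits, thresholds):
    """Empirical mean-square quantization error for each candidate threshold."""
    return np.array([np.mean((x - uniform_quantize(x, n_bits, t)) ** 2)
                     for t in thresholds])

rng = np.random.default_rng(0)

# Uniform input on [-1, 1]: the best threshold coincides with the maximum.
x_uni = rng.uniform(-1.0, 1.0, 100_000)
t_uni = np.linspace(0.5, 2.0, 31)
mse_uni = error_variance(x_uni, 8, t_uni)
best_t_uni = t_uni[int(np.argmin(mse_uni))]

# Unit-variance Gaussian input: the best threshold depends on the wordlength
# (here 8 bits), since longer words push the overload/granular balance outward.
x_gauss = rng.standard_normal(100_000)
t_gauss = np.linspace(1.0, 6.0, 51)
mse_gauss = error_variance(x_gauss, 8, t_gauss)
best_t_gauss = t_gauss[int(np.argmin(mse_gauss))]
```

Repeating the Gaussian sweep for several wordlengths shows the optimal loading factor growing slowly with n, which is the qualitative behavior the abstract attributes to normally distributed processes.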