The consistent use of a general statistical theory eliminates ambiguities and idealizing assumptions from the interpretation of delayed-coincidence experiments. Introduction of the concept of the "total coincidence counting rate" (which can be determined experimentally) permits an unambiguous definition of resolving time, eliminating discrepancies between earlier definitions; it also provides a means of relating the coincidence efficiencies directly to the number of source events. The effect of random time lags on coincidence curves is calculated, and experimental methods for the determination of time lags are derived. The statistical errors in the determination of the moments of a coincidence curve are calculated and worked out in detail for first-moment investigations. It is shown that: (1) the best choice of the resolving time is an (experimentally measurable) weighted rms of all time delays present in the measurement; (2) with this best choice of the resolving time, the standard error of the centroid, obtained by successive measurements of the points of a coincidence curve, is approximately twice the least theoretical standard error that could be obtained for the total time of observation; (3) the moment method can be applied generally for the determination of mean time delays; other methods, while applicable with some restrictions, lead to similar or greater statistical errors.
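The first-moment analysis summarized in results (1) and (2) can be sketched numerically: the centroid of a measured coincidence curve is its count-weighted mean delay, and the weighted rms spread of the delays about that centroid is the quantity identified above as the best choice of resolving time. The function name and sample data below are illustrative assumptions, not taken from the paper.

```python
import math

def centroid_and_rms(delays, counts):
    """First moment (centroid) and weighted rms spread of a
    coincidence curve.  `delays` are the delay settings at which the
    curve was measured; `counts` are the coincidence counting rates
    observed at each delay.  (Illustrative sketch only.)"""
    total = sum(counts)
    # First moment: count-weighted mean of the delay settings.
    centroid = sum(t * n for t, n in zip(delays, counts)) / total
    # Weighted rms of all time delays about the centroid -- the
    # experimentally measurable quantity proposed as the best
    # resolving time.
    variance = sum(n * (t - centroid) ** 2
                   for t, n in zip(delays, counts)) / total
    return centroid, math.sqrt(variance)

# Example: a symmetric curve centered at zero delay.
delays = [-2.0, -1.0, 0.0, 1.0, 2.0]
counts = [1, 4, 6, 4, 1]
centroid, rms = centroid_and_rms(delays, counts)
```

For the symmetric example the centroid falls at zero delay and the weighted rms equals 1.0 time units; for a curve shifted by a mean decay delay, the centroid shifts by the same amount, which is the basis of the moment method for determining mean time delays.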