Six methods are described for correcting the response-time distortions that occur in the outputs of linear ratemeters when measuring time-dependent systems. Such corrections are necessary for valid interpretation of the physical system under investigation. The methods have been tested on a variety of input-output sequences, using both theoretical output values calculated from analytical input functions and experimental outputs from ratemeters under laboratory and field conditions. A finite-difference solution and an iterative method are found to be the best procedures, and both give adequate accuracy in most practical circumstances. The conditions that determine which of these two methods is preferable in a given situation are discussed in detail.