We present a method for estimating the ego motion of a vehicle from the data of high-resolution (207x204 pixels) Time-of-Flight cameras using visual odometry techniques. The translation and rotation of the camera motion are computed in six degrees of freedom. Point correspondences are established between consecutive amplitude images; by incorporating the camera's depth image, these two-dimensional correspondences are lifted to 3D point correspondences. The camera motion between the two images is then computed by registering the two resulting point clouds. The process is further improved by outlier removal and a multi-sensor setup. These optimizations increase the precision and robustness of the method and make visual odometry from Time-of-Flight camera data a viable alternative to common odometry systems in low-speed scenarios.
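The registration step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the 3D correspondences have already been established and estimates the rigid transform between the two point clouds with the standard Kabsch/SVD solution; all names (`estimate_rigid_transform`, `src`, `dst`) are hypothetical.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid registration of corresponding 3D point sets
    (Kabsch algorithm): find R, t with R @ src[i] + t ~ dst[i].

    src, dst: (N, 3) arrays of corresponding 3D points."""
    # Center both point clouds on their centroids.
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered clouds.
    H = (src - src_c).T @ (dst - dst_c)
    # SVD yields the optimal rotation; the diagonal correction
    # guards against a reflection (det = -1) solution.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation follows from the centroids.
    t = dst_c - R @ src_c
    return R, t
```

In a visual odometry pipeline, `R` and `t` give the camera motion between the two frames; chaining these per-frame transforms yields the trajectory. Outlier removal (e.g. via RANSAC) would precede this step, since least-squares registration is sensitive to false correspondences.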