To date, machine learning for human action recognition in video has been widely applied to sports activities. Although previous studies have achieved some success, precision remains the most significant concern. In this study, we present a high-accuracy framework that automatically clips sports video streams using a three-level prediction algorithm built on two classical open-source architectures, YOLO-v3 and OpenPose. We find that, with a modest amount of sports video training data, our method can clip sports activity highlights accurately. Compared with previous systems, our method shows clear advantages in accuracy. This study may serve as a new clipping system that extends the potential applications of video summarization in the sports field, as well as facilitating the development of match analysis systems.