Visual Summary of Egocentric Photostreams by Representative Keyframes
Abstract
Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events, and then extracts the most relevant keyframe for each event. We assess the results through a blind taste test in which a group of 20 people rated the quality of the summaries.
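The abstract outlines a three-step pipeline: extract CNN descriptors per photo, cluster them into events, and pick one keyframe per event. Below is a minimal sketch of that pipeline, with several assumptions since the abstract does not name the specific networks or algorithms: agglomerative clustering stands in for the paper's unsupervised clustering, the descriptors are precomputed stand-ins for CNN activations, and "most relevant keyframe" is interpreted as the frame closest to its event centroid. The function name `summarize_photostream` is hypothetical.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def summarize_photostream(features, n_events):
    """Cluster frame descriptors into events and pick one keyframe per event.

    features: (n_frames, d) array of per-photo CNN descriptors in
              temporal order. Returns sorted keyframe indices.
    """
    # Event segmentation via unsupervised clustering (agglomerative
    # clustering is an assumption; the abstract does not name the method).
    labels = AgglomerativeClustering(n_clusters=n_events).fit_predict(features)

    keyframes = []
    for event in range(n_events):
        idx = np.flatnonzero(labels == event)
        centroid = features[idx].mean(axis=0)
        # Keyframe = frame whose descriptor is nearest the event centroid,
        # one plausible reading of "most relevant".
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        keyframes.append(int(idx[np.argmin(dists)]))
    return sorted(keyframes)

# Example with random stand-in descriptors; a real pipeline would use
# activations from a pretrained CNN, one vector per photo.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 512))
print(summarize_photostream(feats, n_events=8))
```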
- Publication: arXiv e-prints
- Pub Date: May 2015
- DOI: 10.48550/arXiv.1505.01130
- arXiv: arXiv:1505.01130
- Bibcode: 2015arXiv150501130B
- Keywords: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Information Retrieval
- E-Print: Paper accepted at the IEEE First International Workshop on Wearable and Ego-vision Systems for Augmented Experience (WEsAX), Turin, Italy, July 3, 2015