Text classification with pixel embedding
Abstract
We propose a novel framework for understanding text by converting sentences or articles into video-like 3-dimensional tensors. Each frame, corresponding to a slice of the tensor, is a word image rendered from the word's shape. The length of the tensor equals the number of words in the sentence or article. The proposed transformation from text to a 3-dimensional tensor makes it convenient to implement an $n$-gram model with convolutional neural networks for text analysis. Concretely, we apply a 3-dimensional convolutional kernel to the 3-dimensional text tensor. The first two dimensions of the kernel size equal the size of the word image, and the last dimension of the kernel size is $n$. That is, each time we slide the 3-dimensional kernel along the word sequence, the convolution covers $n$ word images and outputs a scalar. Iterating this process over every $n$-gram in the sentence or article with multiple kernels yields a 2-dimensional feature map. A 1-dimensional max-over-time pooling is then applied to this feature map, and three fully-connected layers finally perform the text classification. Experiments on several text classification datasets demonstrate the superior performance of the proposed model in comparison with existing methods.
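The pipeline described above can be sketched numerically: render each word as a small image, stack the images into a 3-dimensional tensor, slide a kernel whose spatial size matches the word image so each step covers $n$ whole word images and emits one scalar, then max-pool over time. This is a minimal NumPy sketch, not the authors' implementation; the word-image size, the renderer (a stand-in pattern rather than actual glyph rendering), and the kernel count are all illustrative assumptions.

```python
import numpy as np

H, W = 16, 16  # assumed word-image size; the paper's exact size may differ


def render_word(word):
    # Stand-in for glyph rendering: a reproducible binary pattern per word.
    seed = sum(ord(c) for c in word)
    rng = np.random.default_rng(seed)
    return (rng.random((H, W)) > 0.5).astype(np.float32)


def text_to_tensor(words):
    # Stack word images along the "time" axis -> (T, H, W) video-like tensor.
    return np.stack([render_word(w) for w in words])


def ngram_conv(tensor, kernels, n):
    """3-D convolution where each slide covers n whole word images.

    tensor:  (T, H, W) text tensor
    kernels: (K, n, H, W) -- the two spatial dims equal the word-image size,
             so every position along the text axis yields one scalar per kernel.
    Returns a (K, T - n + 1) 2-dimensional feature map.
    """
    T = tensor.shape[0]
    K = kernels.shape[0]
    fmap = np.empty((K, T - n + 1), dtype=np.float32)
    for t in range(T - n + 1):
        window = tensor[t:t + n]  # n consecutive word images (one n-gram)
        fmap[:, t] = np.tensordot(kernels, window,
                                  axes=([1, 2, 3], [0, 1, 2]))
    return fmap


words = "the quick brown fox jumps over the lazy dog".split()
x = text_to_tensor(words)                  # (9, 16, 16)
n, K = 3, 8                                # trigram model, 8 kernels (assumed)
kernels = np.random.default_rng(0).standard_normal((K, n, H, W)).astype(np.float32)
fmap = ngram_conv(x, kernels, n)           # (8, 7) feature map
pooled = fmap.max(axis=1)                  # 1-D max-over-time pooling -> (8,)
print(fmap.shape, pooled.shape)
```

The pooled vector would then feed the fully-connected classifier; with 9 words and $n=3$ the feature map has $9-3+1=7$ positions per kernel.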
Publication: arXiv e-prints
Pub Date: November 2019
arXiv: arXiv:1911.04115
Bibcode: 2019arXiv191104115L
Keywords: Computer Science - Computation and Language