1 Introduction
Deep neural networks have achieved great success in solving many practical problems. Deep learning methods are based on multiple levels of representation, where each level consists of simple but nonlinear units. Many deep networks have been developed and successfully applied in various applications. For example, convolutional neural networks (CNNs) [17, 21, 29] have been widely applied to computer vision problems, while recurrent neural networks (RNNs) [9, 12, 25] are used in audio and natural language processing. For more detailed discussions, see [22] and its references.
In recent years, more and more works have focused on theoretical explanations of neural networks. One important topic is expressive power, i.e., comparing the expressive ability of different neural network architectures. In the literature [7, 8, 15, 16, 24, 26, 27, 28, 30], the depth efficiency of neural networks has been investigated. It is natural to claim that a deep network can be more powerful in expressiveness than a shallow one. Recently, Khrulkov et al. [19] applied the tensor train decomposition to study the expressive power of RNNs. In [4], Cohen et al. theoretically analyzed a specific shallow convolutional network using the CP decomposition, and a specific deep convolutional network based on hierarchical tensor decomposition. Their result is that the expressive power of such deep convolutional networks is significantly better than that of shallow networks. Cohen et al. [5] generalized convolutional arithmetic circuits into convolutional rectifier networks to handle activation functions such as ReLU. They showed that the depth efficiency of convolutional rectifier networks is weaker than that of convolutional arithmetic circuits.
Although many theoretical analyses have been successful, the understanding of expressiveness still needs to be developed. The main contribution of this paper is a new deep network based on the Tucker tensor decomposition. We analyze the expressive power of the new network and show that a shallow network requires an exponential number of nodes to represent a Tucker network. Moreover, we compare the performance of the proposed Tucker network, the hierarchical tensor (HT) network and the shallow network on two datasets (MNIST and CIFAR-10), and demonstrate that the proposed Tucker network outperforms the other two networks.
The rest of this paper is organized as follows. In Section 2, we briefly review tensor decompositions. We present the proposed Tucker network and show its expressive power in Section 3. In Section 4, experimental results are presented to demonstrate the performance of the Tucker network. Some concluding remarks are given in Section 5.
2 Tensor Decomposition
An $N$-dimensional tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$ is a multidimensional array. Its $k$-th unfolding matrix $A_{(k)} \in \mathbb{R}^{n_k \times \prod_{j \neq k} n_j}$ is obtained by taking mode $k$ as the row index and merging all remaining modes into the column index. Given an index subset $t \subset \{1, \dots, N\}$ and the corresponding complement set $t^c = \{1, \dots, N\} \setminus t$, the matricization of $\mathcal{A}$ is denoted as the matrix $A_t \in \mathbb{R}^{\prod_{j \in t} n_j \times \prod_{j \in t^c} n_j}$, obtained by reshaping the tensor so that the modes in $t$ index the rows and the modes in $t^c$ index the columns.
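Unfolding and matricization are just reshapes of the underlying array. The following is a minimal NumPy sketch; the function names `unfold` and `matricize` and the example shapes are ours, chosen for illustration:

```python
import numpy as np

def unfold(A, k):
    """Mode-k unfolding: move axis k to the front, then flatten the
    remaining modes into columns, giving an n_k x (prod of other dims) matrix."""
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def matricize(A, rows):
    """General matricization: the axes listed in `rows` index the rows,
    the complement axes index the columns."""
    rows = list(rows)
    cols = [ax for ax in range(A.ndim) if ax not in rows]
    m = int(np.prod([A.shape[ax] for ax in rows]))
    return np.transpose(A, rows + cols).reshape(m, -1)

A = np.arange(24.0).reshape(2, 3, 4)
print(unfold(A, 1).shape)          # (3, 8)
print(matricize(A, [0, 2]).shape)  # (8, 3)
```

Note that the mode-0 unfolding coincides with a plain reshape, since mode 0 is already the leading axis.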
We also introduce two important operators in tensor analysis: the tensor product and the Kronecker product. Given tensors $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_P}$ and $\mathcal{B} \in \mathbb{R}^{m_1 \times \cdots \times m_Q}$ of order $P$ and $Q$ respectively, the tensor product $\mathcal{A} \otimes \mathcal{B} \in \mathbb{R}^{n_1 \times \cdots \times n_P \times m_1 \times \cdots \times m_Q}$ is defined by $(\mathcal{A} \otimes \mathcal{B})_{i_1 \dots i_P j_1 \dots j_Q} = \mathcal{A}_{i_1 \dots i_P} \mathcal{B}_{j_1 \dots j_Q}$. Note that when $P = Q = 1$, the tensor product is the outer product of vectors.
The symbol $\otimes$ also denotes the Kronecker product, an operation on two matrices: for $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$, the matrix $A \otimes B \in \mathbb{R}^{mp \times nq}$ is the block matrix whose $(i,j)$ block is $a_{ij} B$. Moreover, we use $[n]$ to denote the set $\{1, 2, \dots, n\}$ for simplicity.
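Both operators are available in NumPy; a quick illustration with arbitrarily chosen shapes (the names here are ours):

```python
import numpy as np

def tensor_product(A, B):
    """Tensor (outer) product: the result has order P + Q when the
    factors have orders P and Q."""
    return np.tensordot(A, B, axes=0)

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0, 5.0])
C = tensor_product(a, b)   # for two vectors this is the outer product
print(C.shape)             # (2, 3)

# Kronecker product of matrices: blocks a_ij * B.
M = np.kron(np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]))
print(M.shape)             # (4, 4)
```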
In the following, we review some well-known tensor decomposition methods and related convolutional networks.
CP decomposition [3, 14]: Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_N}$, the CANDECOMP/PARAFAC (CP) decomposition is defined as follows:
$$\mathcal{A} = \sum_{z=1}^{Z} a^{(1)}_z \otimes a^{(2)}_z \otimes \cdots \otimes a^{(N)}_z, \qquad (1)$$
where $a^{(k)}_z \in \mathbb{R}^{n_k}$, $z \in [Z]$, $k \in [N]$. The minimal value of $Z$ such that the CP decomposition (1) exists is called the CP rank of $\mathcal{A}$, denoted as $\mathrm{rank}_{CP}(\mathcal{A})$.
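A CP decomposition is a sum of outer products of the factor columns; the following NumPy sketch (our own helper, with illustrative shapes) rebuilds the full tensor from factor matrices $U^{(k)} \in \mathbb{R}^{n_k \times Z}$:

```python
import numpy as np

def cp_to_tensor(factors):
    """Rebuild A = sum_{z=1}^Z u_z^(1) o ... o u_z^(N) from CP factor
    matrices, each of shape (n_k, Z)."""
    Z = factors[0].shape[1]
    A = np.zeros(tuple(U.shape[0] for U in factors))
    for z in range(Z):
        term = factors[0][:, z]
        for U in factors[1:]:
            term = np.tensordot(term, U[:, z], axes=0)  # outer product
        A += term
    return A

rng = np.random.default_rng(0)
factors = [rng.standard_normal((n, 3)) for n in (4, 5, 6)]
A = cp_to_tensor(factors)
print(A.shape)  # (4, 5, 6)
```

For a third-order tensor the same reconstruction is a one-line `einsum`, which makes a convenient correctness check.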
Tucker decomposition [6, 31]: Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_N}$, the Tucker decomposition is defined as follows:
$$\mathcal{A}_{i_1 \dots i_N} = \sum_{j_1=1}^{r_1} \cdots \sum_{j_N=1}^{r_N} \mathcal{G}_{j_1 \dots j_N}\, U^{(1)}_{i_1 j_1} \cdots U^{(N)}_{i_N j_N},$$
which can be written as
$$\mathcal{A} = \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}, \qquad (2)$$
where $\mathcal{G} \in \mathbb{R}^{r_1 \times \cdots \times r_N}$ is the core tensor, $U^{(k)} \in \mathbb{R}^{n_k \times r_k}$ are the factor matrices, and $\times_k$ denotes the mode-$k$ product. The minimal tuple $(r_1, \dots, r_N)$ such that (2) holds is called the Tucker rank of $\mathcal{A}$, denoted as $\mathrm{rank}_{Tu}(\mathcal{A})$. If $r_1 = \cdots = r_N = r$, we simply denote it as $r$.
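The mode products in (2) can be sketched in NumPy as follows; the helpers `mode_product` and `tucker_to_tensor` and the example shapes are ours:

```python
import numpy as np

def mode_product(A, U, k):
    """Mode-k product A x_k U: contract axis k of A with the columns of U,
    leaving the result of U's row dimension in position k."""
    return np.moveaxis(np.tensordot(U, A, axes=(1, k)), 0, k)

def tucker_to_tensor(core, factors):
    """A = G x_1 U^(1) x_2 ... x_N U^(N), with core G of shape
    (r_1, ..., r_N) and factors U^(k) of shape (n_k, r_k)."""
    A = core
    for k, U in enumerate(factors):
        A = mode_product(A, U, k)
    return A

rng = np.random.default_rng(0)
G = rng.standard_normal((2, 3, 2))
Us = [rng.standard_normal((n, r)) for n, r in zip((4, 5, 6), G.shape)]
print(tucker_to_tensor(G, Us).shape)  # (4, 5, 6)
```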
HT decomposition: The hierarchical Tucker (HT) tensor format is a multilevel variant of tensor decomposition. Its definition requires the introduction of a dimension tree; for a detailed discussion, see [10, 11, 13]. Given a tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times \cdots \times n_N}$ with $N = 2^L$, the hierarchical tensor decomposition has the following recursive form:
$$u^{t}_{j} = \sum_{j_1=1}^{r_{t_1}} \sum_{j_2=1}^{r_{t_2}} (B_t)_{j, j_1, j_2}\; u^{t_1}_{j_1} \otimes u^{t_2}_{j_2}, \qquad \mathcal{A} = u^{\mathrm{root}}_{1}, \qquad (3)$$
where $t_1$ and $t_2$ are the children of node $t$ in the tree, the $u^{t}_{j}$ are the vectors generated at node $t$, and the recursion terminates at leaf vectors $u^{\{k\}}_{j} \in \mathbb{R}^{n_k}$. The ranks $r_t$ of the nodes at level $l$ are referred to as the level-$l$ ranks. We denote the collection of these ranks by $\mathrm{rank}_{HT}(\mathcal{A})$. If all the ranks are equal to $r$, we write $r$ for simplicity.
2.1 Convolutional Networks
Given a dataset of pairs $\{(X^{(i)}, y^{(i)})\}$, each object $X$ is represented as a set of $N$ vectors $(x_1, \dots, x_N)$ with $x_j \in \mathbb{R}^{s}$. By applying parameter-dependent functions $f_{\theta_1}, \dots, f_{\theta_M} : \mathbb{R}^{s} \to \mathbb{R}$, we construct a representation map $f_\theta : \mathbb{R}^{s} \to \mathbb{R}^{M}$. Each object is classified into one of the categories $\{1, \dots, Y\}$ through the maximization of the following score functions:
$$h_y(x_1, \dots, x_N) = \sum_{d_1=1}^{M} \cdots \sum_{d_N=1}^{M} \mathcal{A}^y_{d_1 \dots d_N} \prod_{j=1}^{N} f_{\theta_{d_j}}(x_j), \qquad (4)$$
where $\mathcal{A}^y \in \mathbb{R}^{M \times \cdots \times M}$ is a trainable coefficient tensor.
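The score function (4) is the inner product of the coefficient tensor with the outer product of the $N$ representation vectors. A naive NumPy sketch, which deliberately materializes all $M^N$ coefficients to make the exponential cost visible (the neuron-type map with ReLU used here is one of the choices discussed below; all names and shapes are illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def representations(X, W, b):
    """Neuron-type representation map: row n holds
    (f_1(x_n), ..., f_M(x_n)) with f_d(x) = relu(w_d^T x + b_d)."""
    return relu(X @ W.T + b)

def score(A_y, F):
    """Naive score h_y: contract the order-N coefficient tensor A^y
    with the outer product of the rows of F, one mode at a time."""
    out = A_y
    for n in range(F.shape[0]):
        # always contract the current leading axis with the next representation
        out = np.tensordot(out, F[n], axes=(0, 0))
    return float(out)

N, s, M = 3, 5, 4
rng = np.random.default_rng(1)
X = rng.standard_normal((N, s))      # N input vectors (patches) of size s
W = rng.standard_normal((M, s))
b = rng.standard_normal(M)
A_y = rng.standard_normal((M,) * N)  # M^N entries: exponential in N
print(score(A_y, representations(X, W, b)))
```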
The representation functions $f_{\theta_d}$ can be chosen in many ways. For example, neuron-type functions $f_{\theta_d}(x) = \sigma(w_d^\top x + b_d)$ for parameters $\theta_d = (w_d, b_d)$ and a pointwise nonlinear activation $\sigma$. Commonly used activation functions include the hard threshold, $\sigma(z) = 1$ for $z > 0$ and $0$ otherwise; the rectified linear unit (ReLU), $\sigma(z) = \max\{z, 0\}$; and the sigmoid, $\sigma(z) = 1/(1 + e^{-z})$.
The main task is to estimate the parameters $\theta$ and the coefficient tensors $\mathcal{A}^y$. The computational challenge is that each coefficient tensor has an exponential number ($M^N$) of entries. We can utilize tensor decompositions to address this issue. If the coefficient tensor is in CP decomposition, the network corresponding to the CP decomposition is called the shallow network (or CP network), see Figure 1. We obtain its score function:
$$h_y(x_1, \dots, x_N) = \sum_{z=1}^{Z} a^y_z \prod_{j=1}^{N} \big\langle a^{(j)}_z, f_\theta(x_j) \big\rangle, \qquad (5)$$
Note that the same vectors $a^{(j)}_z$ are shared across all classes $y$; only the weights $a^y_z$ depend on the class. If $Z$ is taken sufficiently large, the model is universal, i.e., any tensor can be represented.
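The factorized form (5) never materializes the $M^N$ coefficients: each patch only needs the $Z$ inner products with the shared vectors. A minimal sketch under the same illustrative shapes as before (the helper name `cp_score` is ours):

```python
import numpy as np

def cp_score(a_y, Us, F):
    """Shallow (CP) network score:
    h_y = sum_z a^y_z * prod_n <u_z^(n), f(x_n)>.
    Us[n] has shape (M, Z); F[n] is the length-M representation of patch n."""
    # inner products <u_z^(n), f(x_n)> for every component z and patch n
    P = np.stack([F[n] @ Us[n] for n in range(len(Us))])  # shape (N, Z)
    return float(a_y @ P.prod(axis=0))

N, M, Z = 3, 4, 2
rng = np.random.default_rng(0)
Us = [rng.standard_normal((M, Z)) for _ in range(N)]
a_y = rng.standard_normal(Z)
F = rng.standard_normal((N, M))
print(cp_score(a_y, Us, F))
```

As a check, assembling the full CP tensor and contracting it with the representations gives the same value.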
If the coefficient tensors are in the HT format (8), the network is referred to as the HT network. An example of an HT network is shown in Figure 2. Cohen et al. [4] analyzed the expressive power of the HT network and proved that a shallow network of exponentially large width is required to emulate an HT network.
3 Tucker Network
In this section, we propose the Tucker network. If the coefficient tensors $\mathcal{A}^y$ in (4) are in the Tucker format (2), we refer to the resulting network as the Tucker network, i.e.,
$$\mathcal{A}^y = \mathcal{G}^y \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}. \qquad (6)$$
Suppose the factor matrices $U^{(j)} \in \mathbb{R}^{M \times r_j}$ in (6) are shared across all classes. If we set $r_1 = \cdots = r_N = r$, the number of parameters is $Y r^N + N M r$. If we set $r = M$, the model is universal: any tensor can be represented in Tucker format, and $Y M^N + N M^2$ parameters are needed. Note that the score function for the Tucker network is
$$h_y(x_1, \dots, x_N) = \big\langle \mathcal{G}^y, \big(U^{(1)\top} f_\theta(x_1)\big) \otimes \cdots \otimes \big(U^{(N)\top} f_\theta(x_N)\big) \big\rangle.$$
The Tucker network architecture is given in Figure 3. The outputs from the convolution layer are $v_j = U^{(j)\top} f_\theta(x_j) \in \mathbb{R}^{r_j}$, $j \in [N]$. The last output, i.e., the score value, is given as $h_y = \langle \mathcal{G}^y, v_1 \otimes v_2 \otimes \cdots \otimes v_N \rangle$, where $\langle \cdot, \cdot \rangle$ is the tensor scalar product, i.e., the sum of the entrywise products of two tensors. Because $\mathcal{G}^y$ is an order-$N$ tensor of smaller dimensions $r_1 \times \cdots \times r_N$, it can be further decomposed with a deeper network. In this sense, the Tucker network is also a kind of deep network.
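A minimal sketch of the Tucker-network score: each representation is projected through its factor matrix, and only the small core is contracted with the projected vectors (the helper name `tucker_score` and the shapes are ours):

```python
import numpy as np

def tucker_score(G_y, Us, F):
    """Tucker-network score: project each representation f(x_n) through
    U^(n), then contract the small core G^y with the outer product of
    the projected vectors, one mode at a time."""
    out = G_y
    for n, U in enumerate(Us):
        v = U.T @ F[n]                        # length r_n projection
        out = np.tensordot(out, v, axes=(0, 0))
    return float(out)

rng = np.random.default_rng(0)
M, ranks = 6, (2, 3, 2)
Us = [rng.standard_normal((M, r)) for r in ranks]
G_y = rng.standard_normal(ranks)
F = rng.standard_normal((len(ranks), M))
print(tucker_score(G_y, Us, F))
```

Assembling the full coefficient tensor from (6) and contracting it with the representations should give the same score.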
The following theorem demonstrates the expressive power of Tucker network.
Theorem 1.
Let $\mathcal{A}^y$ be a tensor of order $N$ and dimension $M$ in each mode, generated by the Tucker form in (6) with all Tucker ranks equal to $r$. Consider the space of all possible configurations of the parameters (the core tensor and the factor matrices). In this space, $\mathcal{A}^y$ has CP rank at least $r^{\lfloor N/2 \rfloor}$ almost everywhere, i.e., the Lebesgue measure of the set of configurations whose CP rank is less than $r^{\lfloor N/2 \rfloor}$ is zero.
The proof can be found in the supplementary section. We remark that when $N$ is even, the Lebesgue measure of the Tucker format space whose CP rank is less than $r^{N/2}$ is zero; when $N$ is odd, the Lebesgue measure of the Tucker format space whose CP rank is less than $r^{(N-1)/2}$ is also zero.
3.1 Connection with HT Network
In this subsection, to compare the expressive power of the HT and Tucker networks, we first discuss the relationship between the Tucker format and the hierarchical Tucker tensor format. Here we only consider the hierarchical tensor format; its corresponding HT network (8) has been well discussed in [4].
We start from a hierarchical Tucker tensor, whose HT network architecture is shown in Figure 2. Given an order-$N$ tensor with $N = 2^L$, its hierarchical tensor format can always be written level by level in terms of the leaf vectors and the transfer tensors of the dimension tree. Denote by $\mathrm{vec}(\cdot)$ the linear transformation that converts a matrix into a column vector, and by $\mathrm{diag}(\cdot)$ the operator that transforms a vector into a diagonal matrix; with these operators, each level of the decomposition can be written in matrix form.
From the mixed-product property of the Kronecker product, $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$, the matrix forms of successive levels can be merged. Carrying this out level by level, we arrive at a representation of the form (7), which expresses the hierarchical tensor as a core tensor multiplied in each mode by a factor matrix. It implies that a hierarchical tensor format can be written as an order-$N$ Tucker tensor. It is worth noting that, from (7), the Tucker rank of the tensor is bounded by the ranks of its factor matrices; because of the Kronecker structure of these factors, and since $\mathrm{rank}(A \otimes B) = \mathrm{rank}(A)\,\mathrm{rank}(B)$, these ranks are controlled by the HT level ranks.
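The mixed-product property of the Kronecker product used in this derivation is easy to verify numerically; the matrix shapes below are arbitrary compatible choices:

```python
import numpy as np

# Check (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) on random matrices.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((4, 2))
C, D = rng.standard_normal((3, 5)), rng.standard_normal((2, 3))

lhs = np.kron(A, B) @ np.kron(C, D)   # (8, 6) @ (6, 15) -> (8, 15)
rhs = np.kron(A @ C, B @ D)           # (2, 5) ⊗ (4, 3)  -> (8, 15)
print(np.allclose(lhs, rhs))  # True
```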
When the hierarchical tensor has $L$ layers, we can similarly deduce the following results.
Theorem 2.
Any hierarchical tensor can be represented as an order-$N$ Tucker tensor, and vice versa.
Theorem 3.
For any tensor $\mathcal{A}$, if $\mathrm{rank}_{HT}(\mathcal{A}) \le r$, then each component of $\mathrm{rank}_{Tu}(\mathcal{A})$ is at most $r$.
According to Theorem 3, given a hierarchical Tucker network of width $r$, the width of the corresponding Tucker network cannot be larger than $r$.
4 Experimental Results
We designed experiments to compare the performance of three networks: the Tucker network, the HT network and the shallow network. The results illustrate the usefulness of the Tucker network. We implement the shallow network, the Tucker network and the HT network with the TensorFlow [1] backend, and test the three networks on two different datasets: MNIST [23] and CIFAR-10 [20]. All three networks are trained using the backpropagation algorithm. In all three networks, we choose ReLU as the activation function in the representation layer and apply batch normalization [18] between the convolution layer and the pooling layer to eliminate numerical overflow and underflow.
We choose the neuron-type map with ReLU activation as the representation map: $f_{\theta_d}(x) = \max\{w_d^\top x + b_d, 0\}$. The representation map thus acts as a convolution layer in a general CNN: each image patch is transformed through a representation function with parameter sharing across all image patches. The convolution layer in Figure 3 can be seen as a locally connected layer in a CNN, i.e., a convolution layer without parameter sharing, which means that the filter parameters differ when sliding across different spatial positions. In the hidden layer, a non-overlapping 3D convolution operator is applied. It is followed by a product pooling layer that realizes the outer product computation. This can be interpreted as a pooling layer with local connectivity, which connects a neuron only with some of the neurons in the previous layer; the output of a neuron is the product of the entries of the neurons connected to it. The fully-connected layer simply applies a linear mapping to the output of the pooling layer. The output of the Tucker network is a vector of class scores.
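Product pooling differs from average or max pooling only in the reduction it applies. A minimal NumPy sketch of non-overlapping product pooling over consecutive positions (the layout and function name are our own simplification; the paper's layers operate on image patches):

```python
import numpy as np

def product_pool(H, window=2):
    """Non-overlapping product pooling: multiply (rather than average or
    max) the activations inside each window of `window` consecutive
    positions, channel by channel. H has shape (positions, channels)."""
    P, M = H.shape
    assert P % window == 0, "positions must tile evenly into windows"
    return H.reshape(P // window, window, M).prod(axis=1)

H = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])
print(product_pool(H))  # [[ 3.  8.]
                        #  [35. 48.]]
```

Applied repeatedly, this reduction realizes the product over all $N$ positions that appears in the score functions.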
4.1 MNIST
The MNIST database of handwritten digits has a training set of 60000 examples and a test set of 10000 examples, with 10 categories from 0 to 9. Each image has $28 \times 28$ pixels. In the experiment, we select the gradient descent optimizer for backpropagation with batch size 200, and use an exponentially decaying learning rate with initial learning rate 0.2, decay step 6000 and decay rate 0.1. Figure 4 shows the training and test accuracy of the three networks, each with 3834 learned parameters. The parameters include the four parameters of batch normalization (mean, std, alpha, beta). We list the filter sizes, stride sizes and ranks in Table 1. It is obvious that the Tucker network outperforms the shallow network and the HT network. Moreover, we test the sensitivity of the Tucker network to the choice of rank, and compare its performance with the other two networks at the same number of parameters. Figure 5 illustrates the sensitivity results; each value records the highest accuracy on the training or test data. The Tucker network achieves the highest accuracy most of the time.
4.2 CIFAR-10
CIFAR-10 [20] is a more complicated dataset consisting of 60000 color images of size $32 \times 32$ in 10 classes. Here, we use the gradient descent optimizer with learning rate 0.05 and batch size 200 for training. In Figure 6 we report the training and test accuracy with 23790 trained parameters. Table 2 shows the parameter details of the sensitivity test, whose results are displayed in Figure 7. From Figures 6 and 7, the Tucker network still performs best when fitting this more complicated dataset.
5 Conclusion
In this paper, we presented the Tucker network and proved an expressive power theorem: a shallow network of exponentially large width is required to mimic the Tucker network. A connection between the Tucker network and the HT network was discussed. The experiments on the MNIST and CIFAR-10 data show the usefulness of the proposed Tucker network.
Table 1: Network settings for the MNIST experiments. Networks in the same group share the total number of parameters (second column); the last column is the rank.

Network | #Params | Width | Filter size | Stride size | Rank
Tucker  | 3478 | 10 | 14, 23 | 14, 5  | 2
HT      | 3478 | 14 | 14, 14 | 14, 14 | 8
Shallow | 3478 | 10 | 16, 21 | 12, 7  | 2
Tucker  | 3834 | 12 | 14, 17 | 14, 11 | 3
HT      | 3834 | 18 | 14, 14 | 14, 14 | 3
Shallow | 3834 | 16 | 14, 16 | 14, 12 | 3
Tucker  | 5300 | 12 | 14, 15 | 14, 13 | 4
HT      | 5300 | 12 | 16, 26 | 12, 2  | 4
Shallow | 5300 | 10 | 20, 21 | 8, 7   | 4
Tucker  | 8657 | 11 | 14, 14 | 14, 14 | 5
HT      | 8657 | 11 | 26, 27 | 2, 1   | 11
Shallow | 8657 | 17 | 20, 23 | 8, 5   | 10
Table 2: Network settings for the CIFAR-10 experiments, in the same format as Table 1.

Network | #Params | Width | Filter size | Stride size | Rank
Tucker  | 13432 | 10 | 16, 26 | 16, 6  | 3
HT      | 13432 | 10 | 21, 21 | 11, 11 | 3
Shallow | 13432 | 10 | 17, 26 | 15, 6  | 3
Tucker  | 17626 | 10 | 16, 31 | 16, 1  | 4
HT      | 17626 | 22 | 16, 16 | 16, 16 | 6
Shallow | 17626 | 10 | 20, 29 | 12, 3  | 4
Tucker  | 23970 | 20 | 16, 18 | 16, 14 | 5
HT      | 23970 | 30 | 16, 16 | 16, 16 | 6
Shallow | 23970 | 12 | 25, 26 | 7, 6   | 19
Tucker  | 32016 | 24 | 16, 16 | 16, 16 | 6
HT      | 32016 | 12 | 28, 31 | 4, 1   | 9
Shallow | 32016 | 20 | 17, 31 | 15, 1  | 4
Tucker  | 50233 | 31 | 16, 17 | 16, 15 | 7
HT      | 50233 | 43 | 18, 21 | 14, 11 | 7
Shallow | 50233 | 37 | 17, 26 | 15, 6  | 7
References
 [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
 [2] R. Caron and T. Traynor. The zero set of a polynomial. 2005.
 [3] J. D. Carroll and J.-J. Chang. Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart–Young” decomposition. Psychometrika, 35(3):283–319, 1970.
 [4] N. Cohen, O. Sharir, and A. Shashua. On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, pages 698–728, 2016.
 [5] N. Cohen and A. Shashua. Convolutional rectifier networks as generalized tensor decompositions. In International Conference on Machine Learning, pages 955–963, 2016.

 [6] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.
 [7] O. Delalleau and Y. Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems, pages 666–674, 2011.
 [8] R. Eldan and O. Shamir. The power of depth for feedforward neural networks. In Conference on learning theory, pages 907–940, 2016.
 [9] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. 1999.
 [10] L. Grasedyck. Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications, 31(4):2029–2054, 2010.
 [11] L. Grasedyck and W. Hackbusch. An introduction to hierarchical (H-) rank and TT-rank of tensors with examples. Computational Methods in Applied Mathematics, 11(3):291–304, 2011.
 [12] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645–6649. IEEE, 2013.
 [13] W. Hackbusch. Tensor spaces and numerical tensor calculus, volume 42. Springer Science & Business Media, 2012.
 [14] R. A. Harshman et al. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. 1970.

 [15] J. Håstad. Almost optimal lower bounds for small depth circuits. In Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, pages 6–20, 1986.
 [16] J. Håstad and M. Goldmann. On the power of small-depth threshold circuits. Computational Complexity, 1(2):113–129, 1991.

 [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 [18] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
 [19] V. Khrulkov, A. Novikov, and I. Oseledets. Expressive power of recurrent neural networks. arXiv preprint arXiv:1711.00811, 2017.
 [20] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
 [21] Y. LeCun, Y. Bengio, et al. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
 [22] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
 [23] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel. Handwritten digit recognition with a backpropagation network. In Advances in neural information processing systems, pages 396–404, 1990.
 [24] J. Martens and V. Medabalimi. On the expressive efficiency of sum product networks. arXiv preprint arXiv:1411.7717, 2014.
 [25] T. Mikolov, S. Kombrink, L. Burget, J. Černocký, and S. Khudanpur. Extensions of recurrent neural network language model. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5528–5531. IEEE, 2011.
 [26] G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pages 2924–2932, 2014.
 [27] R. Pascanu, G. Montufar, and Y. Bengio. On the number of response regions of deep feed forward networks with piecewise linear activations. arXiv preprint arXiv:1312.6098, 2013.
 [28] T. Poggio, F. Anselmi, and L. Rosasco. I-theory on depth vs. width: hierarchical function composition. Technical report, Center for Brains, Minds and Machines (CBMM), 2015.
 [29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
 [30] M. Telgarsky. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485, 2016.
 [31] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
Appendix A. Proofs
A.1 Proof of Theorem 1
In Section 3, we presented the Tucker network and showed its expressive power. To prove Theorem 1, we first state and prove three lemmas that will be needed.
Lemma 1.
For any matricization $A_t$ of a tensor $\mathcal{A}$ whose CP rank is $Z$, $\mathrm{rank}(A_t) \le Z$.
Proof.
Each rank-one term of the CP decomposition matricizes to a rank-one matrix, so $A_t$ is a sum of $Z$ rank-one matrices and its matrix rank is at most $Z$. ∎
Lemma 1 gives a lower bound on the CP rank: if some matricization of a tensor $\mathcal{A}$ has matrix rank $R$, then the CP rank of $\mathcal{A}$ is at least $R$.
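The inequality in Lemma 1 is easy to illustrate numerically: a tensor built as a sum of $Z$ rank-one terms has every matricization of matrix rank at most $Z$ (the shapes below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = 2
# Build a tensor of CP rank at most Z = 2 as a sum of rank-one terms.
A = sum(np.einsum('i,j,k->ijk',
                  rng.standard_normal(4),
                  rng.standard_normal(5),
                  rng.standard_normal(6)) for _ in range(Z))

# Any matricization (here: modes {1, 3} as rows) has matrix rank <= Z.
M = np.transpose(A, (0, 2, 1)).reshape(4 * 6, 5)
print(np.linalg.matrix_rank(M) <= Z)  # True
```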
For an order-$N$ tensor in Tucker format, its matricization has the following form.
Lemma 2.
Given an order-$N$ tensor $\mathcal{A}$ whose Tucker format is $\mathcal{G} \times_1 U^{(1)} \times_2 \cdots \times_N U^{(N)}$, and index subsets $t$ and $t^c$, then
$$A_t = \Big(\bigotimes_{j \in t} U^{(j)}\Big)\, G_t\, \Big(\bigotimes_{j \in t^c} U^{(j)}\Big)^{\top},$$
where $G_t$ is the corresponding matricization of the core tensor $\mathcal{G}$ and the Kronecker products are taken in the order consistent with the matricization.
Proof.
Writing out the $(i_t, i_{t^c})$ entry of both sides and using the definitions of the mode products and of the Kronecker product, the two expressions coincide term by term. ∎
For simplicity, we denote $U_t = \bigotimes_{j \in t} U^{(j)}$ and $U_{t^c} = \bigotimes_{j \in t^c} U^{(j)}$, so that $A_t = U_t\, G_t\, U_{t^c}^{\top}$. We then obtain the following.
Lemma 3.
If each factor matrix $U^{(j)}$ of the tensor has full column rank, then $\mathrm{rank}(A_t) = \mathrm{rank}(G_t)$.
Proof.
If each $U^{(j)}$ has full column rank, then so do the Kronecker products $U_t$ and $U_{t^c}$. Multiplying $G_t$ on the left by a full-column-rank matrix and on the right by the transpose of a full-column-rank matrix preserves its rank, hence $\mathrm{rank}(A_t) = \mathrm{rank}(U_t\, G_t\, U_{t^c}^{\top}) = \mathrm{rank}(G_t)$. ∎
Proof of Theorem 1
Proof.
According to Lemma 1, it suffices to prove that the rank of the matricization $A_t$, for a balanced index subset $t$ with $|t| = \lfloor N/2 \rfloor$, is at least $r^{\lfloor N/2 \rfloor}$ almost everywhere. From Lemma 3, it is equivalent to prove that the rank of the core matricization $G_t$ is at least $r^{\lfloor N/2 \rfloor}$ almost everywhere.
For such a subset $t$ and the corresponding complement set $t^c$, $G_t$ is an $r^{\lfloor N/2 \rfloor} \times r^{\lceil N/2 \rceil}$ matrix which simply holds the entries of $\mathcal{G}$. In the following, we prove that the Lebesgue measure of the set of cores for which $\mathrm{rank}(G_t) < r^{\lfloor N/2 \rfloor}$ is zero.
Let $D$ be the determinant of the top-left $r^{\lfloor N/2 \rfloor} \times r^{\lfloor N/2 \rfloor}$ submatrix of $G_t$. As $D$ is a polynomial in the entries of $\mathcal{G}$, and is not the zero polynomial (it equals one when this submatrix is the identity), by the theorem in [2] it vanishes only on a set of measure zero. It follows that the Lebesgue measure of the set where $D = 0$ is zero, i.e., the Lebesgue measure of the set of cores whose rank is less than $r^{\lfloor N/2 \rfloor}$ is zero. The result thus follows. ∎
A.2 Proof of Theorem 2
In this subsection, we prove Theorem 2, the connection between the Tucker tensor format and the hierarchical Tucker tensor format. The expressive power of the hierarchical Tucker network has been well discussed in [4].
In Section 2, we defined a general notion of matricization. In the following, we simply consider matricizations in which the row modes form a contiguous block in the natural ordering of the modes.
The hierarchical tensor decomposition format is given as follows:
$$u^{t}_{j} = \sum_{j_1=1}^{r_{t_1}} \sum_{j_2=1}^{r_{t_2}} (B_t)_{j, j_1, j_2}\; u^{t_1}_{j_1} \otimes u^{t_2}_{j_2}, \qquad \mathcal{A} = u^{\mathrm{root}}_{1}, \qquad (8)$$
where $t_1$ and $t_2$ are the children of node $t$ in the dimension tree and the recursion terminates at the leaf vectors.