
Minimizing low-rank models of high-order tensors: Hardness, span, tight relaxation, and applications (2210.11413v3)

Published 16 Oct 2022 in eess.SP, cs.DS, cs.LG, and math.OC

Abstract: We consider the problem of finding the smallest or largest entry of a tensor of order N that is specified via its rank decomposition. Stated in a different way, we are given N sets of R-dimensional vectors and we wish to select one vector from each set such that the sum of the Hadamard product of the selected vectors is minimized or maximized. We show that this fundamental tensor problem is NP-hard for any tensor rank higher than one, and polynomial-time solvable in the rank-one case. We also propose a continuous relaxation and prove that it is tight for any rank. For low-enough ranks, the proposed continuous reformulation is amenable to low-complexity gradient-based optimization, and we propose a suite of gradient-based optimization algorithms drawing from projected gradient descent, Frank-Wolfe, or explicit parametrization of the relaxed constraints. We also show that our core results remain valid no matter what kind of polyadic tensor model is used to represent the tensor of interest, including Tucker, HOSVD/MLSVD, tensor train, or tensor ring. Next, we consider the class of problems that can be posed as special instances of the problem of interest. We show that this class includes the partition problem (and thus all NP-complete problems via polynomial-time transformation), integer least squares, integer linear programming, integer quadratic programming, sign retrieval (a special kind of mixed integer programming / restricted version of phase retrieval), and maximum likelihood decoding of parity check codes. We demonstrate promising experimental results on a number of hard problems, including state-of-the-art performance in decoding low-density parity check codes and general parity check codes.
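To make the core objective concrete, below is a minimal NumPy sketch (not the authors' code or their relaxation) of the problem the abstract defines: each entry of a rank-R, order-N tensor given by CPD factor matrices A_1, ..., A_N equals the sum of the Hadamard product of one row selected from each factor, and finding the smallest entry means searching over those row selections. The toy dimensions, rank, and brute-force search are illustrative assumptions; the exhaustive loop is exponential in N, consistent with the NP-hardness of the general rank > 1 case.

```python
# Illustrative sketch only: evaluate entries of a CPD-represented tensor and
# find the smallest entry by brute force (not the paper's relaxation method).
import itertools
import numpy as np

def cpd_entry(factors, index):
    """Entry of the rank-R tensor at `index`, from CPD factors A_n (shape I_n x R)."""
    prod = np.ones(factors[0].shape[1])      # length-R accumulator
    for A_n, i_n in zip(factors, index):
        prod *= A_n[i_n]                     # Hadamard product of the selected rows
    return prod.sum()                        # sum over the R rank-one components

def min_entry_bruteforce(factors):
    """Exhaustive search over all index tuples (exponential in the tensor order N)."""
    best_idx, best_val = None, np.inf
    for index in itertools.product(*(range(A.shape[0]) for A in factors)):
        val = cpd_entry(factors, index)
        if val < best_val:
            best_idx, best_val = index, val
    return best_idx, best_val

# Toy example: an order-3, rank-2 tensor of size 4 x 3 x 5 (hypothetical data).
rng = np.random.default_rng(0)
factors = [rng.standard_normal((I, 2)) for I in (4, 3, 5)]
idx, val = min_entry_bruteforce(factors)

# Sanity check against materializing the full tensor explicitly.
T = np.einsum('ir,jr,kr->ijk', *factors)
assert np.isclose(val, T.min())
```

The point of the sketch is the cost gap it exposes: the brute-force search touches every one of the prod_n I_n entries, whereas the paper's contribution is a tight continuous relaxation that allows gradient-based optimization over the selections instead.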
