Solving Regularized Exp, Cosh and Sinh Regression Problems

(2303.15725)
Published Mar 28, 2023 in cs.LG

Abstract

In modern machine learning, attention computation is a fundamental task for training LLMs such as Transformer, GPT-4 and ChatGPT. In this work, we study the exponential regression problem, which is inspired by the softmax/exp unit in the attention mechanism of LLMs. The standard exponential regression is non-convex. We study a regularized version of the exponential regression problem, which is convex, and use an approximate Newton method to solve it in input-sparsity time. Formally, one is given a matrix $A \in \mathbb{R}^{n \times d}$, vectors $b \in \mathbb{R}^n$ and $w \in \mathbb{R}^n$, and one of the functions $\exp$, $\cosh$ and $\sinh$, denoted $f$. The goal is to find the optimal $x$ that minimizes $0.5 \| f(Ax) - b \|_2^2 + 0.5 \| \mathrm{diag}(w) A x \|_2^2$. The straightforward approach is the naive Newton's method. Let $\mathrm{nnz}(A)$ denote the number of non-zero entries in matrix $A$, let $\omega$ denote the exponent of matrix multiplication (currently $\omega \approx 2.373$), and let $\epsilon$ denote the accuracy error. In this paper, we make use of the input sparsity and propose an algorithm that uses $\log ( \| x_0 - x^* \|_2 / \epsilon)$ iterations and $\widetilde{O}(\mathrm{nnz}(A) + d^{\omega})$ time per iteration to solve the problem.
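To make the objective concrete, below is a minimal sketch of the naive Newton baseline mentioned in the abstract, not the paper's sketching-based approximate Newton algorithm. It minimizes $0.5 \| f(Ax) - b \|_2^2 + 0.5 \| \mathrm{diag}(w) A x \|_2^2$ for $f \in \{\exp, \cosh, \sinh\}$ by forming the exact gradient and Hessian, which costs roughly $O(n d^2)$ per step instead of the paper's $\widetilde{O}(\mathrm{nnz}(A) + d^{\omega})$. The function name `newton_regression` and its signature are illustrative, not from the paper.

```python
# Minimal sketch (assumed implementation, not the paper's algorithm): naive
# Newton's method for 0.5*||f(Ax) - b||_2^2 + 0.5*||diag(w) A x||_2^2.
import numpy as np

# Elementwise f together with its first and second derivatives.
FUNCS = {
    "exp":  (np.exp,  np.exp,  np.exp),
    "cosh": (np.cosh, np.sinh, np.cosh),
    "sinh": (np.sinh, np.cosh, np.sinh),
}

def newton_regression(A, b, w, f_name="exp", x0=None, tol=1e-8, max_iter=50):
    n, d = A.shape
    f, df, d2f = FUNCS[f_name]
    x = np.zeros(d) if x0 is None else x0.copy()
    for _ in range(max_iter):
        u = A @ x
        fu, dfu, d2fu = f(u), df(u), d2f(u)
        r = fu - b                          # residual of the fitting term
        # Gradient: A^T (f'(u) * r) + A^T (w^2 * u)
        g = A.T @ (dfu * r) + A.T @ (w**2 * u)
        # Hessian: A^T diag(f''(u)*r + f'(u)^2 + w^2) A; the regularization
        # term contributes the w^2 part, which helps keep it well-conditioned
        # under the paper's convexity assumptions.
        D = d2fu * r + dfu**2 + w**2
        H = A.T @ (D[:, None] * A)
        step = np.linalg.solve(H, g)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x
```

On synthetic data this could be exercised as, e.g., `x = newton_regression(A, b, w, f_name="exp")`; the paper's contribution is to replace the exact Hessian solve in each iteration with a sketched, input-sparsity-time approximation.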

