
Robust, scalable and fast bootstrap method for analyzing large scale data

(1504.02382)
Published Apr 9, 2015 in stat.ME, cs.IR, cs.IT, math.IT, stat.CO, and stat.ML

Abstract

In this paper we address the problem of performing statistical inference for large-scale data sets, i.e., Big Data. The volume and dimensionality of the data may be so high that they cannot be processed or stored on a single computing node. We propose a scalable, statistically robust, and computationally efficient bootstrap method that is compatible with distributed processing and storage systems. Bootstrap resamples are constructed with a smaller number of distinct data points on multiple disjoint subsets of the data, similarly to the bag of little bootstraps (BLB) method [1]. Significant savings in computation are then achieved by avoiding the re-computation of the estimator for each bootstrap sample: instead, a computationally efficient fixed-point estimating equation is solved analytically via a smart approximation, following the fast and robust bootstrap (FRB) method [2]. The proposed bootstrap method facilitates the use of highly robust statistical methods in analyzing large-scale data sets. The favorable statistical properties of the method are established analytically, and numerical examples demonstrate its scalability, low complexity, and robust statistical performance on large data sets.
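To make the two ingredients of the abstract concrete, the sketch below combines a BLB-style scheme (bootstrap weights drawn over disjoint subsets, each holding only a small number of distinct points) with a single weighted fixed-point step in place of a full refit, loosely in the spirit of FRB. This is a rough illustration only: the Huber regression estimator, the IRLS fixed-point map, the tuning constant, and all function names (e.g. `blb_frb_sketch`, `irls_huber`) are assumptions for the example and are not the estimators or correction terms used in the paper.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """IRLS weights for the Huber loss: w(r) = min(1, c/|r|). (Illustrative choice.)"""
    a = np.abs(r)
    w = np.ones_like(a)
    w[a > c] = c / a[a > c]
    return w

def irls_huber(X, y, n_iter=50, c=1.345):
    """Huber M-estimate of regression coefficients via iteratively reweighted LS."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ theta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12      # MAD-based robust scale
        w = huber_weights(r / scale, c)
        WX = w[:, None] * X
        theta = np.linalg.solve(X.T @ WX, WX.T @ y)          # weighted normal equations
    return theta

def blb_frb_sketch(X, y, n_subsets=4, n_boot=200, c=1.345, rng=None):
    """BLB-style disjoint-subset bootstrap with one weighted fixed-point step
    per resample instead of an iterative refit (FRB-flavoured approximation)."""
    rng = np.random.default_rng(rng)
    n = len(y)
    reps = []
    for sub in np.array_split(rng.permutation(n), n_subsets):
        Xs, ys = X[sub], y[sub]
        b = len(sub)
        theta_hat = irls_huber(Xs, ys, c=c)                  # full fit, once per subset
        r = ys - Xs @ theta_hat
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12
        w_rob = huber_weights(r / scale, c)                  # robustness weights at theta_hat
        for _ in range(n_boot):
            # Multinomial weights emulate resampling n points from b distinct ones.
            m = rng.multinomial(n, np.full(b, 1.0 / b)).astype(float)
            w = m * w_rob
            WX = w[:, None] * Xs
            # Evaluate the IRLS fixed-point map once at theta_hat; no iterative refit.
            reps.append(np.linalg.solve(Xs.T @ WX, WX.T @ ys))
    return np.asarray(reps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=2000)
    reps = blb_frb_sketch(X, y, rng=rng)
    print("bootstrap standard errors:", reps.std(axis=0))
```

Because each bootstrap replicate only requires one weighted linear solve on a small subset, the per-resample cost is low and the subsets can be processed independently on separate nodes, which is the computational point the abstract emphasizes.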
