|Frontiers of Scientific Computing Lecture Series|
|Approximation Algorithms for Big Data|
|Dongbin Xiu, The Ohio State University|
|Professor, Ohio Eminent Scholar|
|Digital Media Center 1034|
|October 25, 2016 - 03:30 pm|
One of the central tasks in scientific computing is to accurately approximate unknown target functions. This is typically done with the help of data: samples of the unknown functions. In statistics this falls into the realm of regression and machine learning; in mathematics, it is the central theme of approximation theory. The emergence of Big Data presents both opportunities and challenges. On one hand, big data carries more information about the unknowns and, in principle, allows us to create more accurate models. On the other hand, data storage and processing become highly challenging. Moreover, data often contain corruption errors in addition to standard random noise. In this talk, we present some new developments on certain aspects of big data approximation. More specifically, we present numerical algorithms that address two issues: (1) how to automatically eliminate corruption/biased errors in data; and (2) how to create accurate approximation models in very high-dimensional spaces using streaming/live data, without the need to store the entire data set. We present both the numerical algorithms, which are easy to implement, and rigorous analysis of their theoretical foundation.
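To make issue (2) concrete, recursive least squares (RLS) is one textbook way to fit a linear approximation model from a data stream while storing only the current coefficients and a small matrix, never the samples themselves. This is a generic illustration of the streaming idea, not the specific algorithm presented in the talk; the basis `{1, x}` and the target function below are made up for the example.

```python
import numpy as np

def rls_init(dim, delta=1e6):
    """Start with zero coefficients and a large covariance (weak prior)."""
    return np.zeros(dim), delta * np.eye(dim)

def rls_update(c, P, phi, y):
    """Absorb one sample (phi, y); O(dim^2) work, no data stored."""
    Pphi = P @ phi
    gain = Pphi / (1.0 + phi @ Pphi)   # Kalman-style gain vector
    c = c + gain * (y - phi @ c)       # correct the prediction error
    P = P - np.outer(gain, Pphi)       # shrink the covariance
    return c, P

# Example: recover f(x) = 2 + 3x from a stream of noiseless samples.
rng = np.random.default_rng(0)
c, P = rls_init(2)
for _ in range(200):
    x = rng.uniform(-1, 1)
    phi = np.array([1.0, x])           # basis functions {1, x} at x
    c, P = rls_update(c, P, phi, 2.0 + 3.0 * x)
```

After the loop, `c` is close to the true coefficients `[2, 3]` even though no sample was retained, which is the storage-free property the abstract refers to; the algorithms in the talk additionally target very high dimensions and corrupted data.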
Dongbin Xiu is a Professor and Ohio Eminent Scholar in the Department of Mathematics at The Ohio State University. He received his PhD from the Division of Applied Mathematics at Brown University. His research interests include multivariate approximation theory, stochastic computation and uncertainty quantification, design and optimization under uncertainty, data assimilation, and high-order numerical methods.
|This lecture is preceded by a reception at 03:00 pm|