Consistency of Penalized Convex Regression
Eunji Lim
Abstract

We consider the problem of estimating an unknown convex function f_*: (0, 1)^d → ℝ from data (X_1, Y_1), …, (X_n, Y_n). A simple approach is to find the convex function closest to the data points by minimizing the sum of squared errors over all convex functions. The convex regression estimator, computed this way, suffers from a drawback: it can have extremely large subgradients near the boundary of its domain. To remedy this situation, the penalized convex regression estimator, which minimizes the sum of squared errors plus the sum of squared norms of the subgradients over all convex functions, was recently proposed. In this paper, we prove that the penalized convex regression estimator and its subgradient converge with probability one to f_* and its subgradient, respectively, as n → ∞, thereby establishing the legitimacy of the penalized convex regression estimator.
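The estimator described in the abstract reduces to a finite-dimensional quadratic program over fitted values θ_i and subgradients ξ_i, subject to the convexity constraints θ_j ≥ θ_i + ξ_i·(X_j − X_i). Below is a minimal illustrative sketch in one dimension, assuming synthetic data, an arbitrary penalty weight `lam`, and SciPy's general-purpose SLSQP solver; the paper itself does not prescribe this particular implementation or these names.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic 1-d data (illustrative assumption, not from the paper).
rng = np.random.default_rng(0)
n = 8
x = np.sort(rng.uniform(0.0, 1.0, n))
y = (x - 0.5) ** 2 + 0.05 * rng.standard_normal(n)
lam = 1e-3  # penalty weight on squared subgradient norms (assumed value)

def objective(z):
    # z packs fitted values theta (first n) and subgradients xi (last n).
    theta, xi = z[:n], z[n:]
    return np.sum((y - theta) ** 2) + lam * np.sum(xi ** 2)

# Convexity constraints: theta[j] >= theta[i] + xi[i] * (x[j] - x[i]).
cons = [{"type": "ineq",
         "fun": (lambda z, i=i, j=j:
                 z[j] - z[i] - z[n + i] * (x[j] - x[i]))}
        for i in range(n) for j in range(n) if i != j]

z0 = np.concatenate([y, np.zeros(n)])
res = minimize(objective, z0, constraints=cons, method="SLSQP")
theta_hat, xi_hat = res.x[:n], res.x[n:]
```

The penalty term lam * sum(xi**2) is exactly what distinguishes this estimator from plain convex regression: it discourages the large boundary subgradients that the unpenalized estimator exhibits.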


This work is licensed under a Creative Commons Attribution 4.0 License.