Consistency of Penalized Convex Regression


Eunji Lim

Abstract

We consider the problem of estimating an unknown convex function f_*: (0, 1)^d → R from data (X_1, Y_1), …, (X_n, Y_n). A simple approach is to find the convex function closest to the data points by minimizing the sum of squared errors over all convex functions. The convex regression estimator computed this way suffers from the drawback of having extremely large subgradients near the boundary of its domain. To remedy this situation, the penalized convex regression estimator, which minimizes the sum of squared errors plus the sum of squared norms of the subgradients over all convex functions, was recently proposed. In this paper, we prove that the penalized convex regression estimator and its subgradient converge with probability one to f_* and its subgradient, respectively, as n → ∞, thereby establishing the legitimacy of the penalized convex regression estimator.
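For reference, estimators of this kind are typically computed from a finite-dimensional quadratic program over fitted values and subgradients. The sketch below shows one standard formulation; the penalty weight λ and the subgradient variables ξ_i are introduced here for illustration and are not spelled out in the abstract.

% Sketch of the finite-dimensional quadratic program behind a penalized
% convex regression estimator (standard formulation; the penalty weight
% \lambda and the variables \xi_i are illustrative assumptions):
\[
  \min_{\substack{g_1,\dots,g_n \in \mathbb{R}\\ \xi_1,\dots,\xi_n \in \mathbb{R}^d}}
  \ \sum_{i=1}^{n} \bigl(Y_i - g_i\bigr)^2
  \;+\; \lambda \sum_{i=1}^{n} \lVert \xi_i \rVert^2
  \quad \text{subject to} \quad
  g_j \ \ge\ g_i + \xi_i^{\top}(X_j - X_i), \quad 1 \le i, j \le n.
\]
% Here g_i plays the role of the fitted value at X_i and \xi_i the fitted
% subgradient at X_i; any feasible solution extends to a convex function on
% (0,1)^d, e.g. via x \mapsto \max_i \{ g_i + \xi_i^{\top}(x - X_i) \}.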

