Riemannian Proximal Policy Optimization


  •  Shijun Wang    
  •  Baocheng Zhu    
  •  Chen Li    
  •  Mingzhe Wu    
  •  James Zhang    
  •  Wei Chu    
  •  Yuan Qi    

Abstract

In this paper, we propose a general Riemannian proximal optimization algorithm with guaranteed convergence for solving Markov decision process (MDP) problems. To model the policy function in an MDP, we employ a Gaussian mixture model (GMM) and formulate policy optimization as a non-convex optimization problem over the Riemannian space of positive semidefinite matrices. For two given policy functions, we also derive a lower bound on the policy improvement by using bounds on the Wasserstein distance between GMMs. Preliminary experiments show the efficacy of the proposed Riemannian proximal policy optimization algorithm.
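The policy-improvement bound mentioned above rests on the Wasserstein distance between GMM policies. As a hedged illustration only (not the paper's implementation), the sketch below computes the closed-form 2-Wasserstein distance between two Gaussians and a standard upper bound on the distance between two GMMs obtained by optimally coupling the mixture weights against pairwise Gaussian costs; the function names w2_gaussian and gmm_w2_upper_bound and the (weight, mean, covariance) component representation are assumptions introduced for this example.

    import numpy as np
    from scipy.linalg import sqrtm
    from scipy.optimize import linprog

    def w2_gaussian(m1, S1, m2, S2):
        """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
        # Bures term: tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
        S2_half = np.real(sqrtm(S2))
        cross = np.real(sqrtm(S2_half @ S1 @ S2_half))
        bures = np.trace(S1 + S2 - 2.0 * cross)
        mean_term = np.sum((np.asarray(m1) - np.asarray(m2)) ** 2)
        return np.sqrt(mean_term + max(bures, 0.0))

    def gmm_w2_upper_bound(gmm_a, gmm_b):
        """Upper bound on W2 between two GMMs (each a list of (weight, mean, cov))
        via an optimal coupling of the mixture weights over pairwise Gaussian costs."""
        wa = np.array([c[0] for c in gmm_a])
        wb = np.array([c[0] for c in gmm_b])
        # Pairwise squared Gaussian-to-Gaussian transport costs.
        cost = np.array([[w2_gaussian(ma, Sa, mb, Sb) ** 2 for (_, mb, Sb) in gmm_b]
                         for (_, ma, Sa) in gmm_a])
        K, L = cost.shape
        # Coupling pi >= 0 with row sums wa and column sums wb (row-major flattening).
        A_eq = np.zeros((K + L, K * L))
        for i in range(K):
            A_eq[i, i * L:(i + 1) * L] = 1.0
        for j in range(L):
            A_eq[K + j, j::L] = 1.0
        b_eq = np.concatenate([wa, wb])
        res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        return np.sqrt(res.fun)

The discrete coupling restricts transport plans to mixtures of Gaussian couplings, so the value it returns can only overestimate the true GMM Wasserstein distance; how such bounds enter the policy-improvement lower bound is specified in the paper itself.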


