Riemannian Proximal Policy Optimization


  •  Shijun Wang    
  •  Baocheng Zhu    
  •  Chen Li    
  •  Mingzhe Wu    
  •  James Zhang    
  •  Wei Chu    
  •  Yuan Qi    

Abstract

In this paper, we propose a general Riemannian proximal optimization algorithm with guaranteed convergence for solving Markov decision process (MDP) problems. To model policy functions in MDPs, we employ a Gaussian mixture model (GMM) and formulate policy optimization as a non-convex optimization problem in the Riemannian space of positive semidefinite matrices. For two given policy functions, we also derive a lower bound on the policy improvement using bounds based on the Wasserstein distance between GMMs. Preliminary experiments show the efficacy of the proposed Riemannian proximal policy optimization algorithm.
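The Wasserstein-distance bounds between GMMs mentioned in the abstract build on the fact that the 2-Wasserstein distance between two single Gaussians has a closed form: W2²(N(m₁, Σ₁), N(m₂, Σ₂)) = ‖m₁ − m₂‖² + tr(Σ₁ + Σ₂ − 2(Σ₂^{1/2} Σ₁ Σ₂^{1/2})^{1/2}). The sketch below computes this quantity; it is an illustrative standalone implementation (the function name `gaussian_w2` and the eigendecomposition-based matrix square root are choices made here, not part of the paper's algorithm).

```python
import numpy as np

def _sqrtm_psd(A):
    """Matrix square root of a symmetric positive semidefinite matrix
    via eigendecomposition (eigenvalues clipped at zero for stability)."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)
    return (V * np.sqrt(w)) @ V.T

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    mean_term = np.sum((np.asarray(m1) - np.asarray(m2)) ** 2)
    S2_half = _sqrtm_psd(S2)
    # S2_half @ S1 @ S2_half is symmetric PSD, so _sqrtm_psd applies.
    cross = _sqrtm_psd(S2_half @ S1 @ S2_half)
    cov_term = np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(mean_term + cov_term, 0.0)))
```

For two Gaussians with identical covariances, the covariance term vanishes and the distance reduces to the Euclidean distance between the means, which gives a quick sanity check.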



This work is licensed under a Creative Commons Attribution 4.0 License.