
Abstract by Samuel Neff

Personal Information


Presenter's Name

Samuel Neff

Degree Level

Master's

Abstract Information


Department

Computer Science

Faculty Advisor

Sean Warnick

Title

Regularization Techniques in Reinforcement Learning Based on Bellman Residual Minimization and Least Squares Temporal Difference Learning

Abstract

Deep reinforcement learning has received increased attention recently due to its promising performance on a variety of control tasks. Yet conventional techniques for regularizing neural networks have largely been avoided in state-of-the-art reinforcement learning algorithms, perhaps because agents are typically trained and evaluated in the same environments. This work combines older regularization techniques for Markov decision processes with newer actor-critic formulations built on deep neural networks to understand the effectiveness of regularization in deep reinforcement learning.
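To make concrete how regularization enters the two value-estimation methods named in the title, the following is a minimal sketch of L2-regularized least squares temporal difference learning and (naive, single-sample) Bellman residual minimization over a linear feature representation. The feature matrices, synthetic transition data, and the penalty weight lam are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic batch of transitions in a linear feature space:
# phi holds features of current states, phi_next of successor states.
n_samples, n_features = 500, 8
phi = rng.normal(size=(n_samples, n_features))
phi_next = rng.normal(size=(n_samples, n_features))
rewards = rng.normal(size=n_samples)
gamma, lam = 0.99, 0.1  # discount factor and L2 penalty weight

# L2-regularized LSTD: solve (A + lam*I) theta = b for the fixed point,
# where A = Phi^T (Phi - gamma * Phi') and b = Phi^T r.
A = phi.T @ (phi - gamma * phi_next)
b = phi.T @ rewards
theta_lstd = np.linalg.solve(A + lam * np.eye(n_features), b)

# L2-regularized Bellman residual minimization (naive single-sample form):
# ridge regression on the residual (Phi - gamma * Phi') theta ~ r.
D = phi - gamma * phi_next
theta_brm = np.linalg.solve(D.T @ D + lam * np.eye(n_features), D.T @ rewards)

print("LSTD weights:", theta_lstd)
print("BRM weights: ", theta_brm)
```

Both estimators add the same ridge penalty, but they regularize different objectives: LSTD regularizes the temporal-difference fixed-point equations, while Bellman residual minimization regularizes a direct least-squares fit of the Bellman residual.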