Efficient Reinforcement Learning Through Uncertainties [electronic resource]
Material type
Dissertation file (foreign)
Last processed
2024-02-14 10:12:41
ISBN  
9798379721909
DDC  
004
Author
Zhou, Dongruo.
Title/Author
Efficient Reinforcement Learning Through Uncertainties [electronic resource]
Publication
[S.l.] : University of California, Los Angeles, 2023
Publication
Ann Arbor : ProQuest Dissertations & Theses, 2023
Physical description
1 online resource (167 p.)
Note
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Note
Advisor: Gu, Quanquan.
Dissertation note
Thesis (Ph.D.)--University of California, Los Angeles, 2023.
Use restrictions note
This item must not be sold to any third party vendors.
Abstract
This dissertation is centered around the concept of uncertainty-aware reinforcement learning (RL), which seeks to enhance the efficiency of RL by incorporating uncertainty. RL is a vital mathematical framework in the field of artificial intelligence (AI) for creating autonomous agents that can learn optimal behaviors through interaction with their environments. However, RL is often criticized for being sample inefficient and computationally demanding. To tackle these challenges, the primary goals of this dissertation are twofold: to offer theoretical understanding of uncertainty-aware RL and to develop practical algorithms that utilize uncertainty to enhance the efficiency of RL.

Our first objective is to develop an RL approach that is sample efficient for Markov Decision Processes (MDPs) with large state and action spaces. We present an uncertainty-aware RL algorithm that incorporates function approximation, and we prove that this algorithm achieves near minimax optimal statistical complexity when learning the optimal policy. In our second objective, we address two specific scenarios: the batch learning setting and the rare policy switch setting. For both settings, we propose uncertainty-aware RL algorithms with limited adaptivity. These algorithms significantly reduce the number of policy switches compared to previous baseline algorithms while maintaining a similar level of statistical complexity. Lastly, we focus on estimating uncertainties in neural network-based estimation models. We introduce a gradient-based method that computes these uncertainties efficiently, and the resulting uncertainty estimates are both valid and reliable.

The methods and techniques presented in this dissertation contribute to the advancement of our understanding of the fundamental limits of RL. These findings pave the way for further exploration and development in the design of decision-making algorithms.
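The abstract does not spell out how the uncertainty is quantified. As a rough illustration only, the sketch below shows the elliptical confidence bonus commonly used by UCB-style RL algorithms with linear function approximation; it is not taken from the dissertation, and all names and parameter values (d, lam, beta, uncertainty_bonus) are hypothetical.

import numpy as np

def uncertainty_bonus(phi, cov_inv, beta):
    # Optimistic exploration bonus: beta * sqrt(phi^T Sigma^{-1} phi),
    # large for feature directions that have been visited rarely.
    return beta * np.sqrt(phi @ cov_inv @ phi)

d = 4                        # feature dimension (assumed)
lam, beta = 1.0, 0.1         # ridge regularization and confidence radius (assumed)
cov = lam * np.eye(d)        # regularized feature covariance Sigma

rng = np.random.default_rng(0)
for _ in range(100):
    phi = rng.standard_normal(d)   # feature of a visited state-action pair
    cov += np.outer(phi, phi)      # rank-one update of Sigma

cov_inv = np.linalg.inv(cov)
query = rng.standard_normal(d)     # feature of a candidate state-action pair
print(uncertainty_bonus(query, cov_inv, beta))

In UCB-style methods of this kind, such a bonus is added to the estimated value of each state-action pair, so that poorly explored pairs appear optimistically attractive and get visited.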
General subject
Computer science.
Keyword
Machine learning
Keyword
Reinforcement learning
Keyword
Markov Decision Processes
Keyword
Optimal policy
Added author
University of California, Los Angeles Computer Science 0201
Source record
Dissertations Abstracts International. 84-12B.
Electronic location and access
Full text available after login.

Holdings

Registration no. TF06247 (electronic book)