
Robust Machine Learning for the Control of Real-world Robotic Systems [electronic resource]
Material Type
Dissertation file (foreign)
Last Processed
2024-02-14 10:04:48
ISBN  
9798380380713
DDC  
620
Author
Westenbroek, Tyler.
Title/Author
Robust Machine Learning for the Control of Real-world Robotic Systems [electronic resource]
Publication
[S.l.] : University of California, Berkeley, 2023
Publication
Ann Arbor : ProQuest Dissertations & Theses, 2023
Physical Description
1 online resource (129 p.)
Note
Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
Note
Advisor: Sastry, S. Shankar.
Dissertation Note
Thesis (Ph.D.)--University of California, Berkeley, 2023.
Use Restriction Note
This item must not be sold to any third party vendors.
Abstract
Optimal control is a powerful paradigm for controller design, as it can be used to implicitly encode complex stabilizing behaviors using cost functions which are relatively simple to specify. On the other hand, the curse of dimensionality and the presence of non-convex optimization landscapes can make it challenging to reliably obtain stabilizing controllers for complex high-dimensional systems. Recently, sampling-based reinforcement learning approaches have enabled roboticists to obtain approximately optimal feedback controllers for high-dimensional systems even when the dynamics are unknown. However, these methods remain too unreliable for practical deployment in many application domains. This dissertation argues that the key to reliable optimization-based controller synthesis is obtaining a deeper understanding of how the cost functions we write down and the algorithms we design interact with the underlying feedback geometry of the control system. First, we investigate how to accelerate model-free reinforcement learning by embedding control Lyapunov functions, which are energy-like functions for the system, into the objective. Next, we introduce a novel data-driven policy optimization framework which embeds structural information from an approximate dynamics model and a family of low-level feedback controllers into the update scheme. We then turn to a dynamic programming perspective and investigate how the geometric structure of the system places fundamental limitations on how much computation is required to compute or learn a stabilizing controller. Finally, we consider derivative-based search algorithms and investigate how to design 'good' cost functions for model predictive control schemes, ones which ensure that these methods stabilize the system even when gradient-based methods are used to search over a non-convex objective. Throughout, an emphasis is placed on how structural insights gleaned from a simple analytical model can guide our design decisions, and we discuss applications to dynamic walking, flight control, and autonomous driving.
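
As a rough illustration of the first idea described in the abstract (embedding a control Lyapunov function into a reinforcement-learning objective), the Python sketch below shapes a reward with the one-step decrease of a quadratic CLF on a linearized inverted pendulum. The dynamics, the matrix P, the feedback gains, and the reward weighting are assumptions chosen only for this example; they are not taken from the dissertation.

    # Illustrative sketch (assumptions, not the dissertation's formulation): shaping an
    # RL reward with a control Lyapunov function (CLF) for a linearized, Euler-discretized
    # inverted pendulum. All numerical values are made up for the example.
    import numpy as np

    dt = 0.05                                   # integration step
    A = np.array([[1.0, dt],                    # state: [angle, angular velocity]
                  [9.81 * dt, 1.0]])
    B = np.array([[0.0],
                  [dt]])
    P = np.array([[2.0, 0.5],                   # positive-definite matrix defining the
                  [0.5, 1.0]])                  # quadratic CLF candidate V(x) = x^T P x

    def clf(x):
        """Energy-like function: zero at the upright equilibrium, positive elsewhere."""
        return (x.T @ P @ x).item()

    def shaped_reward(x, x_next, task_reward=0.0, clf_weight=1.0):
        """Reward term that pays the agent for decreasing the CLF along a transition.

        Rewarding V(x) - V(x_next) > 0 encodes the decrease condition a stabilizing
        controller must satisfy, which is one way to embed a CLF into the objective.
        """
        return task_reward + clf_weight * (clf(x) - clf(x_next))

    # One transition under a hypothetical stabilizing linear feedback u = -K x.
    x = np.array([[0.2], [0.0]])
    u = np.array([[-25.0 * x[0, 0] - 5.0 * x[1, 0]]])
    x_next = A @ x + B @ u
    print("V(x) =", clf(x), " V(x') =", clf(x_next), " shaped reward =", shaped_reward(x, x_next))
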
Subject
Engineering.
Subject
Computer engineering.
Subject
Robotics.
Keyword
Control theory
Keyword
Machine learning
Keyword
Autonomous driving
Keyword
Reinforcement learning
Keyword
Dynamic programming
Added Author
University of California, Berkeley Electrical Engineering & Computer Sciences
Source Record
Dissertations Abstracts International. 85-03B.
Electronic Location and Access
Full text is available after login.

Holdings

Registration No.
TF05924
Call No.
E-book