
Stochastic Optimization and Learning: An Adaptive and Resource-Efficient Approach - [electronic resource]
Material type  
 Dissertation file (overseas)
Date last processed  
2024-02-14 10:12:43
ISBN  
9798380314299
DDC  
621.3
Author  
Salgia, Sudeep.
Title/Author  
Stochastic Optimization and Learning: An Adaptive and Resource-Efficient Approach - [electronic resource]
Publication  
[S.l.] : Cornell University, 2023
Publication  
Ann Arbor : ProQuest Dissertations & Theses, 2023
Physical description  
1 online resource (502 p.)
Note  
Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
Note  
Advisor: Zhao, Qing.
Dissertation note  
Thesis (Ph.D.)--Cornell University, 2023.
Use restriction note  
This item must not be sold to any third party vendors.
Abstract  
This dissertation focuses on stochastic optimization and learning where the underlying probabilistic models are unknown and the decision maker optimizes their actions over time through sequential interactions with the unknown environment.

The first part of this dissertation is on stochastic optimization, where the learner aims to optimize an unknown random loss function in expectation, the fundamental building block of training any machine learning model today. In the centralized setting, we study three classes of stochastic optimization problems, categorized by the structural assumptions on the unknown function. For stochastic convex optimization, we propose the first extension of the coordinate minimization (CM) approach to stochastic optimization. The proposed approach provides a universal framework for extending low-dimensional optimization routines to high-dimensional problems and inherits the scalability and parallelizability of CM. For kernel-based optimization, we propose the first algorithm with order-optimal regret guarantees, closing the existing gap between upper and lower bounds. Third, for neural-net-based optimization in contextual bandits, we explore the setting where the neural nets are equipped with a smooth activation function, in contrast to existing work that focuses primarily on ReLU neural nets.

The second part of this dissertation focuses on challenges unique to distributed stochastic optimization. For distributed linear bandits, we investigate the regret-communication trade-off by establishing information-theoretic lower bounds on the communication (in bits) required to achieve a sublinear regret order. We also develop an efficient algorithm that is the first to achieve both order-optimal regret and an order-optimal communication cost (in bits). We further extend our algorithm to stochastic convex optimization, where it continues to enjoy order-optimal regret and communication performance. We also study the impact of statistical heterogeneity among clients in a distributed kernel-based bandit framework. We adopt a personalization approach to tackle the heterogeneity among users and propose an algorithm that achieves order-optimal regret through a novel design that carefully balances personalized exploration with collaborative exploration.

The third part of the dissertation focuses on problems arising in active learning and active hypothesis testing. For online active learning for classifying streaming instances, we develop a disagreement-based learning algorithm for a general hypothesis space and noise model that incurs a bounded regret and has order-optimal label complexity. We also study noisy group testing under unknown noise models, in contrast to existing studies that assume perfect knowledge of the probabilistic model of the noise. We propose a novel algorithm that is agnostic to the noise distribution and offers a sample complexity that adapts to the noise level and is order-optimal in both the population size and the error rate. In the last chapter, we consider uniformity testing of Lipschitz continuous distributions with bounded support. We propose a sequential test that adapts to the unknown $\ell_1$ distance to the uniform distribution, allowing quicker identification of more anomalous (non-uniform) distributions.
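The CM idea the abstract describes, updating one coordinate at a time with stochastic feedback, can be illustrated with a minimal sketch. This is not the dissertation's algorithm: the quadratic toy objective, the noise level, the fixed step size, and the step count below are all illustrative assumptions.

```python
import random

def noisy_partial_grad(x, i, sigma=0.1):
    # Noisy partial derivative of the toy objective
    # f(x) = sum_j (x_j - j)^2, whose minimizer is x_j = j.
    # Stands in for stochastic first-order feedback on one coordinate.
    return 2.0 * (x[i] - i) + random.gauss(0.0, sigma)

def stochastic_coordinate_descent(dim=5, steps=2000, lr=0.05):
    # Coordinate-wise stochastic updates: each step touches a single
    # randomly chosen coordinate, so each update is a cheap 1-D routine.
    x = [0.0] * dim
    for _ in range(steps):
        i = random.randrange(dim)              # pick one coordinate
        x[i] -= lr * noisy_partial_grad(x, i)  # 1-D stochastic update
    return x

x = stochastic_coordinate_descent()
# Each x[i] drifts toward its minimizer i, up to gradient noise.
```

The per-step cost is independent of the remaining coordinates, which is the property that makes coordinate methods scalable and parallelizable in high dimensions.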
Subject  
Electrical engineering.
Subject  
Computer science.
Subject  
Statistics.
Keyword  
Adaptive group
Keyword  
Bandits
Keyword  
Communication efficient algorithms
Keyword  
Computationally efficient algorithm
Keyword  
Machine learning
Keyword  
Stochastic optimization
Added author  
Cornell University Electrical and Computer Engineering
Source record  
Dissertations Abstracts International. 85-03B.
Source record  
Dissertations Abstracts International
Electronic location and access  
Full text is available after login.

Holdings

Registration no.: TF07819
Location: E-book
