Dissertation/Thesis Abstract

Scheduling policy design using stochastic dynamic programming
by Glaubius, Robert, Ph.D., Washington University in St. Louis, 2009, 159 pages; publication number 3386643
Abstract (Summary)

Scheduling policies for open soft real-time systems must balance competing concerns: meeting objectives under exceptional conditions while achieving good performance in the average case. Balancing these concerns requires modeling strategies that capture the range of possible task behaviors, and solution techniques that manage uncertainty well enough to discover scheduling policies that perform well across the range of system modes. We develop methods for solving a particular class of task scheduling problems in an open soft real-time setting involving repeating, non-preemptable tasks that contend for a single shared resource. We enforce timeliness by optimizing performance with respect to the proportional progress of tasks in the system.
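
For illustration only, the sketch below shows one way a proportional-progress criterion of this kind might be expressed: cumulative per-task resource usage is compared against target utilization shares, and the cost is the total deviation. The state encoding, the shares vector, and the absolute-deviation cost are assumptions made for this example, not the dissertation's exact formulation.

    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class SchedulerState:
        # Hypothetical state encoding: cumulative resource time granted to each task.
        usage: np.ndarray


    def proportional_progress_cost(state: SchedulerState, shares: np.ndarray) -> float:
        """Penalize deviation of each task's cumulative usage from its target share.

        `shares` is assumed to sum to 1 and gives each task's intended fraction
        of the shared resource.
        """
        elapsed = state.usage.sum()
        if elapsed == 0:
            return 0.0
        target = shares * elapsed              # where each task "should" be by now
        return float(np.abs(state.usage - target).sum())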

We model this scheduling problem as an infinite-state Markov decision process, and provide guarantees regarding the existence of optimal solutions to this problem. We derive several methods for approximating optimal scheduling policies and provide theoretical justification and empirical evidence that these solutions are good approximations to the optimal solution. We consider cases in which task models are known, and adapt reinforcement learning methods to learn task models when they are not available.
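
As a hedged illustration of the model-free case, the sketch below applies tabular Q-learning to a coarsely discretized version of such a scheduling state, choosing which task to dispatch next when task duration models are unknown. The discretization, cost signal, and learning parameters are assumptions for this example and do not reproduce the approximation techniques developed in the dissertation.

    import random
    from collections import defaultdict


    def q_learning_schedule(num_tasks, sample_duration, shares, steps=10000,
                            alpha=0.1, gamma=0.95, epsilon=0.1, bucket=5):
        # Tabular Q-learning sketch for dispatching non-preemptable tasks when
        # duration models are unknown. States are coarse buckets of each task's
        # deviation from its target utilization share (an assumption made for
        # this illustration).
        Q = defaultdict(lambda: [0.0] * num_tasks)
        usage = [0.0] * num_tasks

        def encode(usage):
            total = sum(usage) or 1.0
            return tuple(int((u / total - s) * 100) // bucket
                         for u, s in zip(usage, shares))

        state = encode(usage)
        for _ in range(steps):
            if random.random() < epsilon:
                a = random.randrange(num_tasks)            # explore
            else:
                a = max(range(num_tasks), key=lambda i: Q[state][i])
            usage[a] += sample_duration(a)                 # run task a to completion
            total = sum(usage)
            cost = sum(abs(u - s * total) for u, s in zip(usage, shares))
            nxt = encode(usage)
            Q[state][a] += alpha * (-cost + gamma * max(Q[nxt]) - Q[state][a])
            state = nxt
        return Q


    # Example: two tasks with equal target shares and exponentially distributed durations.
    # Q = q_learning_schedule(2, lambda i: random.expovariate(1.0), [0.5, 0.5])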

Indexing (document details)
Advisor: Smart, William D.
Committee: Chen, Yixin; Gill, Christopher; Goldman, Sally; Szepesvari, Csaba; Thoroughman, Kurt
School: Washington University in St. Louis
Department: Computer Science
School Location: United States -- Missouri
Source: DAI-B 70/12, Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Computer science
Keywords: Markov decision processes, Real-time systems, Reinforcement learning, Scheduling
Publication Number: 3386643
ISBN: 9781109522556