Matroid-Constrained Approximately Supermodular Optimization for Near-Optimal Actuator Scheduling
Published in the proceedings of 2019 IEEE 58th Conference on Decision and Control (CDC), 2019
Published in Transactions on Automatic Control, 2020
Access paper here
Control scheduling refers to the problem of assigning agents or actuators to act upon a dynamical system at specific times so as to minimize a quadratic control cost, such as the Linear-Quadratic-Gaussian (LQG) or Linear-Quadratic-Regulator (LQR) objective. When budget or operational constraints are imposed on the schedule, this problem is in general NP-hard, and its solution can therefore only be approximated even for moderately large systems. The quality of this approximation depends on the structure of both the constraints and the objective. This work shows that greedy scheduling is near-optimal when the constraints can be represented as an intersection of matroids, algebraic structures that encode requirements such as limits on the number of agents deployed per time slot, the total number of actuator uses, and duty-cycle restrictions. To do so, it proves that the LQG cost function is α-supermodular and provides new α/(α + P)-optimality certificates for the greedy minimization of such functions over an intersection of P matroids. These certificates are shown to approach the 1/(1 + P) guarantee of supermodular functions in relevant settings. These results support the use of greedy algorithms in non-supermodular quadratic control problems, as opposed to typical heuristics such as convex relaxations and surrogate figures of merit, e.g., the log det of the controllability Gramian.
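The greedy scheduling idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the gains below are hypothetical modular stand-ins for LQG cost reductions, and the partition matroid encodes an assumed limit of one actuator per time slot.

```python
# Sketch: greedy selection under a partition-matroid constraint
# ("at most one actuator per time slot"). Toy weights stand in for
# the marginal reduction in an alpha-supermodular LQG cost.

def greedy_matroid(ground_set, gain, independent, budget_steps):
    """Repeatedly add the element with the best marginal gain whose
    addition keeps the selection independent in the matroid."""
    selected = set()
    for _ in range(budget_steps):
        best, best_gain = None, 0.0
        for e in ground_set - selected:
            if independent(selected | {e}):
                g = gain(selected, e)
                if g > best_gain:
                    best, best_gain = e, g
        if best is None:  # no feasible element improves the objective
            break
        selected.add(best)
    return selected

# Elements are (time_slot, actuator) pairs; the partition matroid
# allows at most one actuator per slot.
def independent(S, per_slot_limit=1):
    slots = [t for (t, _) in S]
    return all(slots.count(t) <= per_slot_limit for t in set(slots))

# Hypothetical per-element gains.
weights = {(0, 'a'): 3.0, (0, 'b'): 1.0, (1, 'a'): 2.0, (1, 'b'): 2.5}
gain = lambda S, e: weights[e]

S = greedy_matroid(set(weights), gain, independent, budget_steps=2)
print(sorted(S))  # one actuator per slot: [(0, 'a'), (1, 'b')]
```

The single partition matroid here is the P = 1 case; intersecting it with further matroids (e.g. a uniform matroid capping total actuator uses) only changes the `independent` test.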
Published in 2020 IEEE 59th Conference on Decision and Control (CDC), 2020
Published in 2022 American Control Conference (ACC), 2022
Access paper here
The notion of approximate information states (AIS) was introduced in [1] as a methodology for learning task-relevant state representations for control in partially observable systems. The authors proposed particular learning objectives that attempt to reconstruct the cost and next state and provide a bound on the suboptimality of the closed-loop performance, but it is unclear whether these bounds are tight or actually lead to good performance in practice. Here we study this methodology by examining the special case of discrete approximate information states (DAIS). In this setting, we can solve for the globally optimal policy using value iteration, allowing us to disambiguate the performance of the AIS objective from the policy search. Going further, for small problems with finite information states, we reformulate the DAIS learning problem as a novel mixed-integer program (MIP) and solve it to its global optimum; in the infinite-information-states case, we introduce clustering-based and end-to-end gradient-based optimization methods for minimizing the DAIS construction loss. We study DAIS in three partially observable environments and find that the AIS objective offers relatively loose bounds for guaranteeing monotonic performance improvement and is sufficient but not necessary for implementing optimal controllers. DAIS may nonetheless prove useful in practice, by itself or as part of mixed discrete- and continuous-state representations, due to its ability to represent logical state, its potential interpretability, and the availability of these stronger optimization algorithms.
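For intuition on the value-iteration step, here is a minimal sketch on a toy finite MDP. This is an assumed setup, not the paper's DAIS pipeline (which first compresses observations into discrete information states); the transition and reward numbers are hypothetical.

```python
import numpy as np

# Sketch: once observations are compressed into a finite set of discrete
# information states, the control problem is a small MDP and value
# iteration recovers the globally optimal policy over that representation.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a] is an (S, S) transition matrix for action a; R is (S, A)."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q(s, a) = R(s, a) + gamma * sum_s' P[a][s, s'] V(s')
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)],
                     axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy 2-state, 2-action MDP (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.1, 0.9], [0.9, 0.1]]])  # action 1
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
V, policy = value_iteration(P, R)
print(policy)  # action 0 in state 0, action 1 in state 1: [0 1]
```

Because the information-state set is finite, this global solve is exact, which is what lets the AIS objective's quality be separated from any policy-search error.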
Published in Workshop on the Algorithmic Foundations of Robotics 2022, to appear, 2022
Graduate Course, MIT, 2020