Learning-enabled navigation and nonlinear control for resource-constrained robots
Abstract
Robots with limited on-board resources face fundamental challenges when performing semantic navigation tasks, such as object search or exploration in unknown environments. Recent advances in deep learning for robotics enable more efficient task execution by leveraging past experience, thereby reducing sensing, computation, and storage demands. However, existing approaches often rely on specialized, task-specific models or detailed geometric representations, which can be costly in resource-constrained settings. Meanwhile, deep reinforcement learning solutions for control typically lack the stability guarantees required in safety-critical applications. This dissertation addresses these issues by proposing (1) a resource-efficient semantic navigation component and (2) a control-theoretic component with provable stability and safety guarantees.

Within semantic navigation, we develop multi-task architectures that predict environmental abstractions directly from sparse, non-semantic measurements (e.g., 2-D laser data). A U-Net–inspired approach enables simultaneous object search and topological mapping, outperforming traditional frontier-based methods in residential environments. To eliminate grid-level prediction (and the post-processing it requires to produce topological maps), as well as the redundant storage of geometric maps, we introduce BoxMap, an end-to-end Detection-Transformer–based module that encodes rooms and doors as a topological graph. While storing only an amount of information that scales quadratically with the number of rooms, BoxMap enables more efficient graph-based exploration strategies than conventional geometric approaches.

For controller synthesis, we propose a method that learns Lyapunov certificates and control laws for piecewise-linear dynamics via Mixed-Integer Linear Programming (MILP). By designing the Lyapunov neural network to enforce non-negativity, monotonicity over half-spaces, and a unique global minimum, we shorten training time and expand the valid region of attraction. We further present an extension that synthesizes an output-feedback controller enforcing both control Lyapunov and control barrier constraints, thereby ensuring stability and safety when exiting a room. This serves as a step toward unifying robust semantic navigation (via BoxMap) with provably safe and stable control for robotic systems.

Overall, this work lays the groundwork for a principled, resource-efficient approach to semantic navigation and safe control under real-world constraints. Future work includes fully integrating these components and extending obstacle handling to random or dynamic obstacles, offering a more robust approach for robots operating in partially unknown structured environments.
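For readers unfamiliar with learned Lyapunov certificates, the sketch below illustrates one common way to structure a neural Lyapunov candidate so that it is non-negative and has a unique global minimum at the origin. This is a generic construction from the neural-Lyapunov literature, written in PyTorch with illustrative layer sizes and an assumed regularization term eps; it is not the dissertation's exact architecture, and it omits the MILP verification step and the half-space monotonicity structure described in the abstract.

```python
import torch
import torch.nn as nn

class LyapunovCandidate(nn.Module):
    """Generic ReLU Lyapunov candidate V with V(0) = 0 and V(x) > 0 for x != 0.

    A common construction (not the dissertation's specific design): take a small
    ReLU network phi, subtract its value at the origin, and add a scaled L1 term
    so the origin is the unique global minimum. Because every operation is
    piecewise linear, the candidate is amenable to MILP-based verification.
    """

    def __init__(self, state_dim: int, hidden_dim: int = 16, eps: float = 0.01):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        self.eps = eps  # weight of the L1 term; an illustrative assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # |phi(x) - phi(0)| vanishes at the origin; eps * ||x||_1 guarantees
        # strict positivity away from it.
        zero = torch.zeros_like(x)
        return (self.phi(x) - self.phi(zero)).abs().squeeze(-1) + self.eps * x.abs().sum(dim=-1)

if __name__ == "__main__":
    V = LyapunovCandidate(state_dim=2)
    x = torch.randn(5, 2)
    print(V(x))                   # strictly positive for nonzero states
    print(V(torch.zeros(1, 2)))   # exactly zero at the origin
```

In a full pipeline of this kind, such a candidate is trained jointly with a control law, and the decrease condition along the piecewise-linear closed-loop dynamics is checked (or falsified with counterexamples) by a mixed-integer solver.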
Description
2025
License
Attribution 4.0 International