This course offers a theoretical exploration of optimal control theory and its applications, with an emphasis on problem-solving. It equips students with the skills to formulate, analyze, and solve a wide range of optimal control problems arising in diverse domains.
Upon successful completion of this course, students will be able to:
(1) Describe the role of the calculus of variations in optimization and the significance of optimality conditions for dynamic systems,
(2) Explain the insights provided by the Pontryagin minimum principle, time-optimal control problems, and the distinction between bang-bang and singular control,
(3) Apply the Hamilton-Jacobi-Bellman (HJB) method to continuous-time optimal control problems and solve the associated boundary value problems,
(4) Analyze discrete-time control systems, including state transitions and control inputs, and the implications of the Pontryagin principle for control strategies in various dynamic systems,
(5) Evaluate the trade-offs between bang-bang and singular control with respect to energy consumption, stability, and practicality,
(6) Propose solutions to complex time-optimal control problems using dynamic programming and the Hamilton-Jacobi-Bellman approach, accounting for singular controls (the two core optimality conditions are stated compactly after this list).
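For reference, the two central conditions named in these outcomes can be stated compactly. The sketch below uses standard notation (state $x$, control $u$, costate $\lambda$, running cost $L$, dynamics $\dot{x} = f(x, u, t)$, Hamiltonian $H = L + \lambda^{\top} f$, and optimal cost-to-go $J^{*}$), as in texts such as Kirk (2004):

```latex
% Pontryagin minimum principle: the optimal control u*(t) minimizes the
% Hamiltonian pointwise along the optimal trajectory:
H\bigl(x^{*}(t), u^{*}(t), \lambda^{*}(t), t\bigr)
  \le H\bigl(x^{*}(t), u(t), \lambda^{*}(t), t\bigr)
  \quad \text{for all admissible } u(t).

% Hamilton-Jacobi-Bellman equation: the optimal cost-to-go J*(x, t)
% satisfies this partial differential equation:
-\frac{\partial J^{*}}{\partial t}
  = \min_{u}\left[ L(x, u, t)
  + \left(\frac{\partial J^{*}}{\partial x}\right)^{\!\top} f(x, u, t) \right].
```

When the Hamiltonian is affine in a bounded control, the minimization drives $u^{*}$ to the boundary of its admissible set (bang-bang control); intervals on which the minimization fails to determine $u^{*}$ uniquely give rise to singular control.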
(1) Kirk, D. E. (2004). Optimal Control Theory: An Introduction. Dover Publications. (Original work published 1970 by Prentice-Hall).
(2) Gelfand, I. M., & Fomin, S. V. (2000). Calculus of Variations. Dover Publications. (Original work published 1963 by Prentice-Hall).
(3) Athans, M., & Falb, P. L. (2006). Optimal Control: An Introduction to the Theory and Its Applications. Dover Publications. (Original work published 1966 by McGraw-Hill).
Test/Exam (70%), Performance Project (Written and Oral) (30%)
| Workload | Hours |
|---|---|
| Lectures | 42 |
| Course Readings | 70 |
| Exams/Quizzes | 70 |
| Resource Review | 10 |
| Report on a Topic | 20 |
| Oral Presentation | 13 |
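The listed activities sum to a total workload of 225 hours (a derived total, not stated in the table itself):

```latex
42 + 70 + 70 + 10 + 20 + 13 = 225 \ \text{hours}
```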