This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. It also covers dynamic programming for deterministic optimal control problems and the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, as well as two-controller, zero-sum differential games.
My reading of this book was inspired by a particular (mathematical) modeling challenge in my academic work. The word “challenge” is one of those useful euphemisms that mask our inadequacies, which we nobly try to remedy. My challenge is attenuated, but not resolved. This useful book has helped me frame my specific work questions better and begin to find the answers. I’ll let my friends here know when (if) I have it all sorted.
For the interested, “viscosity solutions” extend the classical concept of a solution to a partial differential equation. They have diverse applications, including first-order equations arising in optimal control (the Hamilton–Jacobi–Bellman equation), differential games (the Hamilton–Jacobi–Isaacs equation), and front evolution problems, as well as second-order equations such as those arising in stochastic optimal control or stochastic differential games. The book provides an exposition of the general concept together with a range of practically oriented applications, including finance.
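To give a flavor of the equations involved (a minimal sketch in my own notation, not the book’s): for a finite-horizon problem with dynamics driven by a drift b(x, u) and running cost f(x, u), the value function V formally satisfies a first-order HJB equation; adding a diffusion term with coefficient sigma(x, u) produces its second-order counterpart.

```latex
% First-order HJB equation (deterministic control); V(t,x) is the value function
-\partial_t V(t,x) = \inf_{u}\Big\{ f(x,u) + b(x,u)\cdot \nabla_x V(t,x) \Big\}

% Second-order HJB equation (stochastic control with diffusion coefficient \sigma)
-\partial_t V(t,x) = \inf_{u}\Big\{ f(x,u) + b(x,u)\cdot \nabla_x V(t,x)
  + \tfrac{1}{2}\,\operatorname{tr}\!\big(\sigma(x,u)\,\sigma(x,u)^{\top} D^2_x V(t,x)\big) \Big\}
```

Since the value function is typically not differentiable everywhere, the viscosity solution is precisely the weak notion of solution that makes equations like these well posed.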