Dynamic Programming and Optimal Control

The first of the two volumes of the leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also extensively addresses the practical application of the methodology, possibly through the use of approximations, and provides an introduction to the far-reaching methodology of Neuro-Dynamic Programming. The first volume is oriented towards modeling, conceptualization, and finite-horizon problems, but also includes a substantive introduction to infinite-horizon problems that is suitable for classroom use. The second volume is oriented towards mathematical analysis and computation, and treats infinite-horizon problems extensively. The text contains many illustrations, worked-out examples, and exercises.
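
The finite-horizon methodology the description refers to is the standard backward recursion J_N(x) = g_N(x), J_k(x) = min_u [ g_k(x, u) + J_{k+1}(f_k(x, u)) ]. As a rough illustration of what that recursion looks like in code, here is a minimal Python sketch on a toy deterministic problem; the state space, control set, cost functions, and horizon are assumptions made up for this example, not material from the book.

# A minimal sketch of finite-horizon dynamic programming (backward induction)
# on a toy deterministic problem. The state space, control set, costs, and
# horizon are illustrative assumptions, not an example taken from the book.

N = 3                      # horizon (number of stages)
states = [0, 1, 2]         # toy state space, the same at every stage
controls = [-1, 0, 1]      # toy control set

def f(x, u):
    """Deterministic transition: next state, clipped to the state space."""
    return min(max(x + u, 0), 2)

def g(k, x, u):
    """Stage cost at stage k (an illustrative quadratic cost)."""
    return x**2 + u**2

def g_N(x):
    """Terminal cost."""
    return 3 * x**2

# Backward recursion: J[k][x] is the optimal cost-to-go from state x at
# stage k, and policy[k][x] is a minimizing control at that state and stage.
J = [dict() for _ in range(N + 1)]
policy = [dict() for _ in range(N)]
for x in states:
    J[N][x] = g_N(x)
for k in reversed(range(N)):
    for x in states:
        best_u, best_cost = min(
            ((u, g(k, x, u) + J[k + 1][f(x, u)]) for u in controls),
            key=lambda t: t[1],
        )
        J[k][x] = best_cost
        policy[k][x] = best_u

print("Optimal cost-to-go from each initial state:", J[0])
print("Optimal first-stage controls:", policy[0])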

691 pages, Hardcover

First published January 1, 1995

4 people are currently reading
103 people want to read

About the author

Dimitri P. Bertsekas

30 books · 19 followers

Ratings & Reviews

Community Reviews

5 stars: 8 (29%)
4 stars: 13 (48%)
3 stars: 4 (14%)
2 stars: 1 (3%)
1 star: 1 (3%)
No one has reviewed this book yet.
