## Dynamic Programming

An introduction to the mathematical theory of multistage decision processes, this text takes a "functional equation" approach to the discovery of optimum policies. Written by a leading developer of such policies, it presents a series of methods, uniqueness and existence theorems, and examples for solving the relevant equations. The text examines existence and uniqueness theorems, the optimal inventory equation, bottleneck problems in multistage production processes, a new formalism in the calculus of variations, strategies behind multistage games, and Markovian decision processes. Each chapter concludes with a problem set that Eric V. Denardo of Yale University, in his informative new introduction, calls "a rich lode of applications and research topics." 1957 edition. 37 figures.
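As a small illustration of the "functional equation" approach the description refers to, the sketch below solves the Bellman equation f(s) = max_a [ r(s, a) + γ · Σ p(s′ | s, a) · f(s′) ] by successive approximations for a tiny two-state Markov decision process. The states, actions, rewards, and transition probabilities are invented for illustration and are not taken from the book.

```python
GAMMA = 0.9  # discount factor (illustrative choice)

# Hypothetical two-state MDP: rewards[s][a] is the immediate return,
# trans[s][a][s'] the probability of moving from state s to s' under action a.
rewards = {0: {"stay": 1.0, "switch": 0.0},
           1: {"stay": 2.0, "switch": 0.5}}
trans = {0: {"stay": [1.0, 0.0], "switch": [0.2, 0.8]},
         1: {"stay": [0.1, 0.9], "switch": [0.7, 0.3]}}

def value_iteration(tol=1e-8):
    """Apply the Bellman operator repeatedly until the values stabilize."""
    f = [0.0, 0.0]
    while True:
        # One application of the functional equation to the current guess f.
        g = [max(rewards[s][a]
                 + GAMMA * sum(p * f[t] for t, p in enumerate(trans[s][a]))
                 for a in rewards[s])
             for s in (0, 1)]
        if max(abs(g[s] - f[s]) for s in (0, 1)) < tol:
            return g
        f = g

f = value_iteration()
print(f)  # approximate optimal total returns from states 0 and 1
```

Because the Bellman operator is a contraction for γ < 1, the successive approximations converge to the unique fixed point, which is the substance of the existence and uniqueness theorems the blurb mentions.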
