# Controlled Markov Processes and Viscosity Solutions

This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization, in pricing derivatives in incomplete markets, and in two-controller, zero-sum differential games.

We approach stochastic control problems by the method of dynamic programming. The fundamental equation of dynamic programming is a nonlinear evolution equation for the value function. For controlled Markov diffusion processes on n-dimensional Euclidean space, the dynamic programming equation becomes a nonlinear partial differential equation of second order, called a Hamilton-Jacobi-Bellman (HJB) partial differential equation. The theory of viscosity solutions, first introduced by M. G. Crandall and P.-L. Lions, provides a convenient framework in which to study HJB equations. Typically, the value function is not smooth enough to satisfy the HJB equation in a classical sense. However, under quite general assumptions the value function is the unique viscosity solution of the HJB equation with appropriate boundary conditions. In addition, the viscosity solution framework is well suited to proving continuous dependence of solutions on problem data.

The book begins with an introduction to dynamic programming for deterministic optimal control problems in Chapter I, and to the corresponding theory of viscosity solutions in Chapter II. A rather elementary introduction to dynamic programming for controlled Markov processes is provided in Chapter III.
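To fix notation, the finite-horizon HJB equation described above can be sketched in a standard form (illustrative notation assumed here, not quoted from the book):

```latex
% State dynamics: dX_s = b(X_s, u_s)\,ds + \sigma(X_s, u_s)\,dW_s on \mathbb{R}^n,
% with control u_s \in U and cost
%   V(t,x) = \inf_{u(\cdot)} \mathbb{E}\Big[\int_t^T f(X_s, u_s)\,ds + g(X_T)\Big].
% Then V formally satisfies the second-order HJB equation
\frac{\partial V}{\partial t}
  + \min_{u \in U}\left[\, b(x,u)\cdot D_x V
  + \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma(x,u)\,\sigma(x,u)^{\mathsf T}\, D_x^2 V\right)
  + f(x,u) \,\right] = 0,
\qquad V(T,x) = g(x).
```

When the diffusion degenerates or the value function fails to be twice differentiable, this equation is interpreted in the viscosity sense: sub- and supersolution inequalities are tested against smooth functions touching V from above and below.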
This is followed by the more technical Chapters IV and V, which are concerned with controlled Markov diffusions and viscosity solutions of HJB equations. We have tried, through illustrative examples in early chapters and the selection of material in Chapters VI-VII, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. Chapter VIII is an introduction to singular stochastic control.

Contents:

- Deterministic Optimal Control
- Viscosity Solutions
- Optimal Control of Markov Processes: Classical Solutions
- Controlled Markov Diffusions in ℝn
- Viscosity Solutions: Second-Order Case
- Logarithmic Transformations and Risk Sensitivity
- Singular Perturbations
- Singular Stochastic Control
- Finite Difference Numerical Approximations
- Applications to Finance
- Differential Games

This book is intended as an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. Stochastic control problems are treated using the dynamic programming approach. The fundamental equation of dynamic programming is a nonlinear evolution equation for the value function. For controlled Markov diffusion processes, this becomes a nonlinear partial differential equation of second order, called a Hamilton-Jacobi-Bellman (HJB) equation. Typically, the value function is not smooth enough to satisfy the HJB equation in a classical sense. Viscosity solutions provide a framework in which to study HJB equations and to prove continuous dependence of solutions on problem data. The theory is illustrated by applications from engineering, management science, and financial economics. In this second edition, new material on applications to mathematical finance has been added.
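The chapter on finite difference numerical approximations points to how HJB equations are solved in practice. As a minimal sketch (not the book's scheme; the dynamics, cost, grid, and parameters below are invented for illustration), the following applies a Kushner-style Markov chain approximation with value iteration to a one-dimensional discounted control problem: minimize E ∫ e^{-βt}(X_t² + u_t²) dt for dX_t = u_t dt + σ dW_t with u_t ∈ [-1, 1].

```python
import numpy as np

beta, sigma = 0.5, 0.4                 # discount rate, diffusion coefficient
n = 81
x = np.linspace(-2.0, 2.0, n)          # truncated state grid, x[40] = 0
h = x[1] - x[0]                        # grid spacing (0.05)
controls = np.linspace(-1.0, 1.0, 21)  # discretized control set

V = np.zeros_like(x)
for it in range(2000):
    V_new = np.full_like(V, np.inf)
    for u in controls:
        # Upwind transition probabilities of the approximating Markov chain
        Q = sigma**2 + h * abs(u)              # normalizer
        dt = h**2 / Q                          # interpolation time step
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q
        # Neighboring values, with reflection at the truncated boundary
        Vu = np.roll(V, -1); Vu[-1] = V[-1]
        Vd = np.roll(V, 1);  Vd[0] = V[0]
        # One dynamic programming step for this control
        cost = (x**2 + u**2) * dt + np.exp(-beta * dt) * (p_up * Vu + p_dn * Vd)
        V_new = np.minimum(V_new, cost)        # minimize over controls
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(round(float(V[40]), 4))  # approximate value at x = 0
```

The upwind probabilities satisfy p_up + p_dn = 1 and reproduce the drift and variance of the diffusion to leading order in h, so the iteration is a contraction (factor e^{-β dt}) and converges to the discrete value function; refining h recovers the viscosity solution of the stationary HJB equation.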
Concise introductions to risk-sensitive control theory, nonlinear H-infinity control and differential games are also included.

Review of the earlier edition: "This book is highly recommended to anyone who wishes to learn the dynamic programming principle applied to optimal stochastic control for diffusion processes. Without any doubt, this is a fine book and most likely it is going to become a classic in the area..." SIAM Review, 1994

- Provides a lucid introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions
- Also offers a concise introduction to risk-sensitive control theory, nonlinear H-infinity control and differential games
- Several all-new chapters have been added, and others completely rewritten
- For the second edition, new material has been added on applications to mathematical finance


436 pages




Publisher: Springer New York
Copyright: © Springer Basel AG
DOI: 10.1007/0-387-31071-1


Published: Feb 4, 2006