I. Karatzas, S. Shreve (1984)
Connections between Optimal Stopping and Singular Stochastic Control I. Monotone Follower Problems. SIAM Journal on Control and Optimization, 22
M. James, J. Baras, R. Elliott (1994)
Risk-sensitive control and dynamic games for partially observed discrete-time nonlinear systems. IEEE Trans. Autom. Control, 39
M. Nishio (1981)
Lectures on stochastic control theory
D. Stroock, S. Varadhan (1979)
Multidimensional Diffusion Processes
J. Lehoczky, S. Sethi, H. Soner, M. Taksar (1991)
An Asymptotic Analysis of Hierarchical Control of Manufacturing Systems Under Uncertainty
R. Courant, D. Hilbert (1962)
Methods of Mathematical Physics
M. Crandall, H. Ishii (1990)
The maximum principle for semicontinuous functions. Differential and Integral Equations
R. Jensen, P. Lions, P. Souganidis (1988)
A uniqueness result for viscosity solutions of second order fully nonlinear partial differential equations
S. Sheu (1985)
Stochastic Control and Exit Probabilities of Jump Processes. SIAM Journal on Control and Optimization, 23
H. Soner (1993)
Motion of a set by the curvature of its boundary. Journal of Differential Equations, 101
Avner Friedman (1971)
Differential games
I. Karatzas, S. Shreve (1986)
Equivalent models for finite-fuel stochastic control. Stochastics: An International Journal of Probability and Stochastic Processes, 18
L. Evans, H. Ishii (1985)
A PDE approach to some asymptotic problems concerning random differential equations with small noise intensities. Annales de l'Institut Henri Poincaré, Analyse Non Linéaire, 2
M. Bardi, M. Falcone (1990)
An approximation scheme for the minimum time function. SIAM Journal on Control and Optimization, 28
H. Ishii (1993)
Viscosity solutions of nonlinear second-order partial differential equations in Hilbert spaces. Communications in Partial Differential Equations, 18
F. Clarke (1989)
Methods of dynamic and nonsmooth optimization
W. Feller (1959)
An Introduction to Probability Theory and Its Applications
S. Varadhan (1984)
Large Deviations and Applications
P. Lions, P. Souganidis (1985)
Differential Games, Optimal Control and Directional Derivatives of Viscosity Solutions of Bellman's and Isaacs' Equations. SIAM Journal on Control and Optimization, 23
Jose-Luis Menaldi (1989)
Some estimates for finite difference approximations. SIAM Journal on Control and Optimization, 27
O. Ladyženskaja (1968)
Linear and Quasilinear Equations of Parabolic Type, 23
J. Aubin (1979)
Mathematical methods of game and economic theory
H. Mete Soner, Steven E. Shreve (1989)
Regularity of the value function for a two-dimensional singular stochastic control problem. SIAM Journal on Control and Optimization, 27
H. Frankowska (1993)
Lower semicontinuous solutions of Hamilton-Jacobi-Bellman equations. SIAM Journal on Control and Optimization, 31
A. Bensoussan, J. van Schuppen (1985)
Optimal control of partially observable stochastic systems with an exponential-of-integral performance index. Theoretical Computer Science
J. Bather, H. Chernoff (1967)
Sequential decisions in the control of a spaceship
O. Ladyzhenskaya, N. Uraltseva (1968)
Linear and quasilinear elliptic equations
Eduardo Sontag (1990)
Mathematical Control Theory: Deterministic Finite Dimensional Systems
W. Fleming, D. Vermes (1989)
Convex duality approach to the optimal control of diffusions. SIAM Journal on Control and Optimization, 27
R. Merton, P. Samuelson (1990)
Continuous-Time Finance
H. Ishii (1984)
Uniqueness of unbounded viscosity solution of Hamilton-Jacobi equations. Indiana University Mathematics Journal, 33
A. Bensoussan (1988)
Perturbation Methods in Optimal Control
N. Krasovskii, A. Subbotin, S. Kotz (1987)
Game-Theoretical Control Problems
J. Harrison (1988)
Brownian Models of Queueing Networks with Heterogeneous Customer Populations, 10
A. Brodsky, M. Krasnoselskii, Richard Flaherty, L. Boron (1964)
Positive solutions of operator equations
L. Cesari (1983)
Optimization-Theory And Applications
G. Barles, B. Perthame (1987)
Discontinuous solutions of deterministic optimal stopping time problems. Mathematical Modelling and Numerical Analysis, 21
B. Øksendal (1985)
Stochastic Differential Equations. The Mathematical Gazette, 77
R. Vinter, R. Lewis (1978)
The Equivalence of Strong and Weak Formulations for Certain Problems in Optimal Control. SIAM Journal on Control and Optimization, 16
R. Akella, P. Kumar (1985)
Optimal control of production rate in a failure prone manufacturing system. 1985 24th IEEE Conference on Decision and Control
W. Fleming (1977)
Exit probabilities and optimal stochastic control. Applied Mathematics and Optimization, 4
W. Ziemer (1989)
Weakly differentiable functions
I. Karatzas (1987)
Brownian Motion and Stochastic Calculus
F. Baccelli, G. Cohen, G. Olsder, J. Quadrat (1994)
Synchronization and Linearity: An Algebra for Discrete Event Systems. Journal of the Operational Research Society, 45
W. Fleming, H. Soner (1989)
Asymptotic expansions for Markov processes with Lévy generators. Applied Mathematics and Optimization, 19
M. Davis, A. Norman (1990)
Portfolio Selection with Transaction Costs. Math. Oper. Res., 15
Mark Davis (1984)
Piecewise-Deterministic Markov Processes: A General Class of Non-Diffusion Stochastic Models. Journal of the Royal Statistical Society, Series B (Methodological), 46
M. Day (1980)
On a stochastic control problem with exit constraints. Applied Mathematics and Optimization, 6
E. Barron, R. Jensen (1986)
The Pontryagin maximum principle from dynamic programming and viscosity solutions to first-order partial differential equations. Transactions of the American Mathematical Society, 298
W. Fleming, R. Rishel (1975)
Deterministic and Stochastic Optimal Control
Robert Elliott (1982)
Stochastic calculus and applications
H. Ishii (1987)
Perron's method for Hamilton-Jacobi equations. Duke Mathematical Journal, 55
Jakša Cvitanić, I. Karatzas (1993)
Hedging Contingent Claims with Constrained Portfolios. Annals of Applied Probability, 3
H. Ishii, S. Koike (1983)
Boundary regularity and uniqueness for an elliptic equation with gradient constraint. Communications in Partial Differential Equations, 8
O. Oaks, G. Cook (1976)
Piecewise Linear Control of Nonlinear Systems. IEEE Transactions on Industrial Electronics and Control Instrumentation, IECI-23
H. Soner (1993)
Singular perturbations in manufacturing. SIAM Journal on Control and Optimization, 31
T. Zariphopoulou (1989)
Optimal Investment-Consumption Models with Constraints
I. Karatzas (1985)
Probabilistic aspects of finite-fuel stochastic control. Proceedings of the National Academy of Sciences of the United States of America, 82(17)
W. Fleming (1971)
Stochastic Control for Small Noise Intensities. SIAM Journal on Control, 9
I. Karatzas (1983)
A class of singular stochastic control problems. Advances in Applied Probability, 15
L. Evans (1979)
A second order elliptic equation with gradient constraint. Communications in Partial Differential Equations, 4
N. Krylov (1987)
Nonlinear Elliptic and Parabolic Equations of the Second Order
S. Shreve, H. Soner, Ganlin Xu (1991)
Optimal Investment and Consumption with Two Bonds and Transaction Costs
Yun-Gang Chen, Y. Giga, S. Goto (1989)
Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations, 65
M. Taksar, M. Klass, David Assaf (1988)
A Diffusion Model for Optimal Portfolio Selection in the Presence of Brokerage Fees. Math. Oper. Res., 13
P. Hartman (1965)
Ordinary Differential Equations. Journal of the American Statistical Association, 60
Mark Davis (1977)
Linear estimation and stochastic control
I. Karatzas, S. Shreve (1985)
Connections Between Optimal Stopping and Singular Stochastic Control II. Reflected Follower Problems. SIAM Journal on Control and Optimization, 23
L. Rogers (1982)
Stochastic differential equations and diffusion processes, by Nobuyuki Ikeda and Shinzo Watanabe (North-Holland, Amsterdam, 1981). European Journal of Operational Research, 10
D. Tataru (1992)
Viscosity solutions of Hamilton-Jacobi equations with unbounded nonlinear terms. Journal of Mathematical Analysis and Applications, 163
K. Glover, J. Doyle (1988)
State-space formulae for all stabilizing controllers that satisfy an H∞ norm bound and relations to risk sensitivity. Systems & Control Letters, 11
L. Evans (1992)
Measure theory and fine properties of functions
M. Crandall, L. Evans, P. Lions (1984)
Some Properties of Viscosity Solutions of Hamilton-Jacobi Equations. Transactions of the American Mathematical Society, 282
N. Krylov (1980)
Controlled Diffusion Processes
M. Bardi, Pierpaolo Soravia (1991)
Hamilton-Jacobi equations with singular boundary conditions on a free boundary and applications to differential games. Transactions of the American Mathematical Society, 325
K. Chung (1961)
Markov Chains with Stationary Transition Probabilities
P. Cannarsa, H. Soner (1985)
On the Singularities of the Viscosity Solutions to Hamilton-Jacobi-Bellman Equations
P. Lions (1982)
Generalized Solutions of Hamilton-Jacobi Equations
E. Barron, R. Jensen (1990)
Semicontinuous Viscosity Solutions for Hamilton-Jacobi Equations with Convex Hamiltonians. Communications in Partial Differential Equations, 15
F. Clarke (1983)
Optimization And Nonsmooth Analysis
P. Cannarsa, H. Frankowska (1990)
Some characterizations of optimal trajectories in control theory. 29th IEEE Conference on Decision and Control
M. Akian (1988)
Résolution numérique d'équations d'Hamilton-Jacobi-Bellman au moyen d'algorithmes multigrilles et d'itérations sur les politiques. Analysis and Optimization of Systems
H. Soner (1986)
Optimal control with state-space constraint I. SIAM Journal on Control and Optimization, 24
S. Pliska (1986)
A Stochastic Calculus Model of Continuous Trading: Optimal Portfolios. Math. Oper. Res., 11
S. Shreve, H. Soner (1994)
Optimal Investment and Consumption with Transaction Costs. Annals of Applied Probability, 4
Ruth Williams (1985)
Brownian motion in a wedge with oblique reflection at the boundary
P. Cannarsa, H. Frankowska (1992)
Value function and optimality conditions for semilinear control problems. Applied Mathematics and Optimization, 26
L. Pontryagin, V. Boltyanskii, R. Gamkrelidze, E. Mishchenko (1962)
Mathematical Theory of Optimal Processes
D. Stroock (1984)
An Introduction to the Theory of Large Deviations
S. Sheu (1991)
Some Estimates of the Transition Density of a Nondegenerate Diffusion Markov Process. Annals of Probability, 19
I. Gikhman (1969)
Introduction to the theory of random processes
M. Hestenes (1966)
Calculus of variations and optimal control theory
P. Whittle (1990)
Risk-Sensitive Optimal Control
P. Lions (1983)
Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, Part 2: viscosity solutions and uniqueness. Communications in Partial Differential Equations, 8
N. Karoui (1981)
Les Aspects Probabilistes du Contrôle Stochastique
T. Zariphopoulou (1992)
Investment-consumption models with transaction fees and Markov-chain parameters. SIAM Journal on Control and Optimization, 30
R. Elliott, N. Kalton (1972)
The Existence Of Value In Differential Games
J. Menaldi, M. Robin (1983)
On some cheap control problems for diffusion processes. Transactions of the American Mathematical Society, 278
F. Clarke (1991)
Necessary conditions for nonsmooth problems in-optimal control and the calculus of variations
R. Liptser, A. Shiryaev (1977)
Statistics of random processes
W. Rudin (1968)
Real and complex analysis
F. Black, Myron Scholes (1973)
The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 81
L. Evans (1983)
Classical solutions of the Hamilton-Jacobi-Bellman equation for uniformly elliptic operators. Transactions of the American Mathematical Society, 275
H. Ishii (1989)
On uniqueness and existence of viscosity solutions of fully nonlinear second-order elliptic PDE's. Communications on Pure and Applied Mathematics, 42
P. Chow, J. Menaldi, M. Robin (1984)
Additive control of stochastic linear systems with finite horizon. The 23rd IEEE Conference on Decision and Control
P. Lions (1981)
Control of diffusion processes in R^N. Communications on Pure and Applied Mathematics, 34
A. Subbotin (1991)
Existence and uniqueness results for Hamilton-Jacobi equations. Nonlinear Analysis: Theory, Methods & Applications, 16
W. Fleming (1965)
Functions of Several Variables
L. Evans (1982)
Classical solutions of fully nonlinear, convex, second-order elliptic equations. Communications on Pure and Applied Mathematics, 35
R. Bellman (1957)
Dynamic Programming. Science, 153
H. Soner, S. Shreve (1991)
A free boundary problem related to singular stochastic control: the parabolic case. Communications in Partial Differential Equations, 16
U. Haussmann (1986)
A stochastic maximum principle for optimal control of diffusions
P. Lions (1983)
Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, Part I, 8
W. Fleming, S. Sethi, H. Soner (1987)
An Optimal Stochastic Production Planning Problem with Randomly Fluctuating Demand
T. Mikami (1990)
Variational processes from the weak forward equation. Communications in Mathematical Physics, 135
H. Ishii, Shigeaki Koike (1991)
Remarks on elliptic singular perturbation problems. Applied Mathematics and Optimization, 23
This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The fundamental equation of dynamic programming is a nonlinear evolution equation for the value function. For controlled Markov diffusion processes on n-dimensional Euclidean space, the dynamic programming equation becomes a nonlinear second-order partial differential equation, called a Hamilton-Jacobi-Bellman (HJB) equation. The theory of viscosity solutions, first introduced by M. G. Crandall and P.-L. Lions, provides a convenient framework in which to study HJB equations. Typically, the value function is not smooth enough to satisfy the HJB equation in a classical sense; under quite general assumptions, however, the value function is the unique viscosity solution of the HJB equation with appropriate boundary conditions. The viscosity solution framework is also well suited to proving continuous dependence of solutions on problem data.

The book begins with an introduction to dynamic programming for deterministic optimal control problems in Chapter I, and to the corresponding theory of viscosity solutions in Chapter II. A rather elementary introduction to dynamic programming for controlled Markov processes is provided in Chapter III. This is followed by the more technical Chapters IV and V, which are concerned with controlled Markov diffusions and viscosity solutions of HJB equations. Through illustrative examples in the early chapters and the selection of material in Chapters VI and VII, the authors connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. Chapter VIII is an introduction to singular stochastic control.

Contents: Deterministic Optimal Control; Viscosity Solutions; Optimal Control of Markov Processes: Classical Solutions; Controlled Markov Diffusions in R^n; Viscosity Solutions: Second-Order Case; Logarithmic Transformations and Risk Sensitivity; Singular Perturbations; Singular Stochastic Control; Finite Difference Numerical Approximations; Applications to Finance; Differential Games.

For this second edition, new material on applications to mathematical finance has been added: new chapters introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets. Concise introductions to risk-sensitive control theory, nonlinear H-infinity control, and two-controller, zero-sum differential games are also included.

Review of the earlier edition: "This book is highly recommended to anyone who wishes to learn the dynamic programming principle applied to optimal stochastic control for diffusion processes. Without any doubt, this is a fine book and most likely it is going to become a classic in the area..." SIAM Review, 1994
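To make the description above concrete, here is a sketch of the HJB equation for a discounted infinite-horizon problem; the notation (drift b, diffusion σ, running reward f, discount rate β, control set U) is illustrative and not quoted from the book:

```latex
% Controlled diffusion: dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t.
% The value function V maximizes E \int_0^\infty e^{-\beta t} f(X_t, u_t)\,dt,
% and (formally) satisfies the second-order HJB equation
\[
  \beta V(x)
  \;=\;
  \sup_{u \in U}
  \Big\{
    f(x,u)
    \;+\; b(x,u) \cdot DV(x)
    \;+\; \tfrac{1}{2}\,
      \operatorname{tr}\!\big(
        \sigma(x,u)\,\sigma(x,u)^{\top} D^{2}V(x)
      \big)
  \Big\},
  \qquad x \in \mathbb{R}^{n}.
\]
```

As the blurb notes, V is typically not twice differentiable, so DV and D²V cannot be read classically; the viscosity solution framework interprets this equation through smooth test functions touching V from above or below.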
Published: Feb 4, 2006