Markov Decision Process (MDP)

Last updated: July 2, 2020

What Does Markov Decision Process (MDP) Mean?

A Markov decision process (MDP) is what professionals call a "discrete-time stochastic control process": a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision maker. It builds on the theory of Markov chains pioneered by the Russian mathematician Andrey Markov in the late 19th and early 20th centuries.


Techopedia Explains Markov Decision Process (MDP)

One way to explain a Markov decision process and its associated Markov chains is that they build on mathematical research published by Markov roughly a hundred years ago, and now underpin parts of modern decision and game theory. A Markov decision process describes a system that occupies one of a given set of states; at each step, a decision maker chooses an action, and the system moves to another state with a probability that depends only on the current state and the chosen action, typically yielding a reward along the way.
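The state-action-transition structure described above can be sketched as a small MDP solved by value iteration. The two states, the actions, and all probabilities and rewards below are invented purely for illustration; this is a minimal sketch, not a canonical formulation.

```python
# A hypothetical two-state MDP for illustration.
# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "idle": {
        "wait": [(1.0, "idle", 0.0)],
        "work": [(0.8, "busy", 5.0), (0.2, "idle", 0.0)],
    },
    "busy": {
        "wait": [(1.0, "idle", 1.0)],
        "work": [(0.6, "busy", 5.0), (0.4, "idle", 2.0)],
    },
}

def value_iteration(transitions, gamma=0.9, tol=1e-6):
    """Estimate the optimal value of each state by repeated Bellman backups:
    the value of a state is the best expected reward-plus-discounted-future
    value over the available actions."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for state, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * values[nxt]) for p, nxt, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[state]))
            values[state] = best
        if delta < tol:  # stop once values have effectively converged
            return values

print(value_iteration(transitions))
```

Because the discount factor `gamma` is below 1, each sweep is a contraction, so the loop is guaranteed to converge.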

A Markov chain models a sequence of events in which the probability of each event depends only on the state attained in the previous one. Professionals may talk about a "countable state space" in describing the Markov decision process, and some associate the Markov decision model with a "random walk" or other stochastic models based on probabilities (the random walk model, often cited on Wall Street, describes an equity moving up or down in a market as a sequence of probabilistic steps).
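The random-walk model mentioned above can be sketched as a minimal Markov chain simulation. The unit step size and 50/50 up-down probability are illustrative assumptions; the key point is that each step depends only on the current position, which is the Markov property.

```python
import random

def random_walk(steps, p_up=0.5, seed=0):
    """Simulate a simple random walk: from the current position, move up
    by 1 with probability p_up, otherwise down by 1. The next position
    depends only on the current one (the Markov property)."""
    rng = random.Random(seed)  # seeded for reproducibility
    position = 0
    path = [position]
    for _ in range(steps):
        position += 1 if rng.random() < p_up else -1
        path.append(position)
    return path

print(random_walk(10))
```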

In general, Markov decision processes are applied in some of the most sophisticated technologies that professionals work on today, for example in robotics, automation, reinforcement learning and operations research models.

