Monads in functional programming are most often associated with the Haskell language, where they play a central role in I/O and have found numerous other uses. Most introductions to monads are currently written for Haskell programmers. However, monads can be used with any functional language, even languages quite different from Haskell. Here I want to explain monads in the context of Clojure, a modern Lisp dialect with strong support for functional programming. A monad implementation for Clojure is available in the library clojure.contrib.monads. Before trying out the examples given in this tutorial, type (use 'clojure.contrib.monads) into your Clojure REPL.
Monads are about composing computational steps into a bigger multi-step computation. Let’s start with the simplest monad, known as the identity monad in the Haskell world. It’s actually built into the Clojure language, and you have certainly used it: it’s the let form.
Consider the following piece of code:
(let [a 1
      b (inc a)]
  (* a b))
This can be seen as a three-step calculation:
1. Compute 1 (a constant), and call the result a.
2. Compute (inc a), and call the result b.
3. Compute (* a b), which is the result of the multi-step computation.
Each step has access to the results of all previous steps through the symbols to which their results have been bound.
Now suppose that Clojure didn’t have a let form. Could you still compose computations by binding intermediate results to symbols? The answer is yes, using functions. The following expression is in fact equivalent to the previous one:
( (fn [a] ( (fn [b] (* a b)) (inc a) ) ) 1 )
The outermost level defines an anonymous function of a and calls it with the argument 1 - this is how we bind 1 to the symbol a. Inside the function of a, the same construct is used once more: the body of (fn [a] ...) is a function of b called with argument (inc a). If you don’t believe that this somewhat convoluted expression is equivalent to the original let form, just paste both into Clojure!
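Evaluating them confirms it:

(let [a 1
      b (inc a)]
  (* a b))
; => 2

((fn [a] ((fn [b] (* a b)) (inc a))) 1)
; => 2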
Of course the functional equivalent of the let form is not something you would want to work with. The computational steps appear in reverse order, and the whole construct is nearly unreadable even for this very small example. But we can clean it up and put the steps in the right order with a small helper function, bind. We will call it m-bind (for monadic bind) right away, because that’s the name it has in Clojure’s monad library. First, its definition:
(defn m-bind [value function]
  (function value))
As you can see, it does almost nothing, but it lets us write a value before the function that is applied to it. Using m-bind, we can write our example as
(m-bind 1 (fn [a]
  (m-bind (inc a) (fn [b]
    (* a b)))))
That’s still not as nice as the let form, but it comes a lot closer. In fact, all it takes to convert a let form into a chain of computations linked by m-bind is a little macro. This macro is called domonad, and it permits us to write our example as
(domonad identity-m
  [a 1
   b (inc a)]
  (* a b))
This looks exactly like our original let form. Running macroexpand-1 on it yields
(clojure.contrib.monads/with-monad identity-m
  (m-bind 1 (fn [a] (m-bind (inc a) (fn [b] (m-result (* a b)))))))
This is the expression you have seen above, wrapped in a (with-monad identity-m ...) block (to tell Clojure that you want to evaluate it in the identity monad) and with an additional call to m-result that I will explain later. For the identity monad, m-result is just identity - hence its name.
As you might guess from all this, monads are generalizations of the let form that replace the simple m-bind function shown above by something more complex. Each monad is defined by an implementation of m-bind and an associated implementation of m-result. A with-monad block simply binds (using a let form!) these implementations to the names m-bind and m-result, so that you can use a single syntax for composing computations in any monad. Most frequently, you will use the domonad macro for this.
As our second example, we will look at another very simple monad, but one that adds something useful that you don’t get in a let form. Suppose your computations can fail in some way, and signal failure by producing nil as a result. Let’s take our example expression again and wrap it into a function:
(defn f [x]
  (let [a x
        b (inc a)]
    (* a b)))
In the new setting of possibly-failing computations, you want this to return nil when x is nil, or when (inc a) yields nil. (Of course (inc a) will never yield nil, but that’s the nature of examples…) Anyway, the idea is that whenever a computational step yields nil, the final result of the computation is nil, and the remaining steps are never executed. All it takes to get this behaviour is a small change:
(defn f [x]
  (domonad maybe-m
    [a x
     b (inc a)]
    (* a b)))
The maybe monad represents computations whose result is maybe a valid value, but maybe nil. Its m-result function is still identity, so we don’t have to discuss m-result yet (be patient, we will get there in the second part of this tutorial). All the magic is in the m-bind function:
(defn m-bind [value function]
  (if (nil? value)
    nil
    (function value)))
If its input value is non-nil, it calls the supplied function, just as in the identity monad. Recall that this function represents the rest of the computation, i.e. all following steps. If the value is nil, then m-bind returns nil and the rest of the computation is never called. You can thus call (f 1), yielding 2 as before, but also (f nil) yielding nil, without having to add nil-detecting code after every step of your computation, because m-bind does it behind the scenes.
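For example:

(f 1)    ; => 2
(f nil)  ; => nil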
In part 2, I will introduce some more monads, and look at some generic functions that can be used in any monad to aid in composing computations.
In the first part of this tutorial, I have introduced the two most basic monads: the identity monad and the maybe monad. In this part, I will continue with the sequence monad, which will be the occasion to explain the role of the mysterious m-result function. I will also show a couple of useful generic monad operations.
One of the most frequently used monads is the sequence monad (known in the Haskell world as the list monad). It is in fact so common that it is built into Clojure as well, in the form of the for form. Let’s look at an example:
(for [a (range 5)
      b (range a)]
  (* a b))
A for form resembles a let form not only syntactically. It has the same structure: a list of binding expressions, in which each expression can use the bindings from the preceding ones, and a final result expression that typically depends on all the bindings as well. The difference between let and for is that let binds a single value to each symbol, whereas for binds several values in sequence. The expressions in the binding list must therefore evaluate to sequences, and the result is a sequence as well. The for form can also contain conditions in the form of :when and :while clauses, which I will discuss later. From the monad point of view of composable computations, the sequences are seen as the results of non-deterministic computations, i.e. computations that have more than one result.
Using the monad library, the above loop is written as
(domonad sequence-m
  [a (range 5)
   b (range a)]
  (* a b))
Since we already know that the domonad macro expands into a chain of m-bind calls ending in an expression that calls m-result, all that remains to be explained is how m-bind and m-result are defined to obtain the desired looping effect.
As we have seen before, m-bind calls a function of one argument that represents the rest of the computation, with the function argument representing the bound variable. To get a loop, we have to call this function repeatedly. A first attempt at such an m-bind function would be
(defn m-bind-first-try [sequence function]
  (map function sequence))
Let’s see what this does for our example:
(m-bind-first-try (range 5) (fn [a]
  (m-bind-first-try (range a) (fn [b]
    (* a b)))))
This yields (() (0) (0 2) (0 3 6) (0 4 8 12)), whereas the for form given above yields (0 0 2 0 3 6 0 4 8 12). Something is not yet quite right. We want a single flat result sequence, but what we get is a nested sequence whose nesting level equals the number of m-bind calls. Since m-bind introduces one level of nesting, it must also remove one. That sounds like a job for concat. So let’s try again:
(defn m-bind-second-try [sequence function]
  (apply concat (map function sequence)))

(m-bind-second-try (range 5) (fn [a]
  (m-bind-second-try (range a) (fn [b]
    (* a b)))))
This is worse: we get an exception. Clojure tells us:
java.lang.IllegalArgumentException: Don't know how to create ISeq from: Integer
Back to thinking!
Our current m-bind introduces a level of sequence nesting and also takes one away. Its result therefore has as many levels of nesting as the return value of the function that is called. The final result of our expression has as many nesting levels as (* a b) - which means none at all. If we want one level of nesting in the result, no matter how many calls to m-bind we have, the only solution is to introduce one level of nesting at the end. Let’s try a quick fix:
(m-bind-second-try (range 5) (fn [a]
  (m-bind-second-try (range a) (fn [b]
    (list (* a b))))))
This works! Our (fn [b] ...) always returns a one-element list. The inner m-bind thus creates a sequence of one-element lists, one for each value of b, and concatenates them to make a flat list. The outermost m-bind then creates such a list for each value of a and concatenates them to make another flat list. The result of each m-bind thus is a flat list, as it should be. And that illustrates nicely why we need m-result to make a monad work. The final definition of the sequence monad is thus given by
(defn m-bind [sequence function]
  (apply concat (map function sequence)))

(defn m-result [value]
  (list value))
The role of m-result is to turn a bare value into the expression that, when appearing on the right-hand side in a monadic binding, binds the symbol to that value. This is one of the conditions that a pair of m-bind and m-result functions must fulfill in order to define a monad. Expressed as Clojure code, this condition reads
(= (m-bind (m-result value) function)
   (function value))
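We can check this for the sequence monad defined above, where m-result is list and m-bind maps the function over the sequence and concatenates the results (duplicate is just an illustrative function, not part of the library):

(defn duplicate [x] (list x x))

(with-monad sequence-m
  [(m-bind (m-result 3) duplicate) (duplicate 3)])
; => [(3 3) (3 3)]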
There are two more conditions that complete the three monad laws. One of them is
(= (m-bind monadic-expression m-result)
   monadic-expression)
with monadic-expression standing for any expression valid in the monad under consideration, e.g. a sequence expression for the sequence monad. This condition becomes clearer when expressed using the domonad macro:
(= (domonad
     [x monadic-expression]
     x)
   monadic-expression)
The final monad law postulates associativity:
(= (m-bind (m-bind monadic-expression
                   function1)
           function2)
   (m-bind monadic-expression
           (fn [x] (m-bind (function1 x)
                           function2))))
Again this becomes a bit clearer using domonad syntax:
(= (domonad
     [y (domonad
          [x monadic-expression]
          (function1 x))]
     (function2 y))
   (domonad
     [x monadic-expression
      y (function1 x)]
     (function2 y)))
It is not necessary to remember the monad laws for using monads, they are of importance only when you start to define your own monads. What you should remember about m-result is that (m-result x) represents the monadic computation whose result is x. For the sequence monad, this means a sequence with the single element x. For the identity monad and the maybe monad, which I have presented in the first part of the tutorial, there is no particular structure to monadic expressions, and therefore m-result is just the identity function.
Now it’s time to relax: the most difficult material has been covered. I will return to monad theory in the next part, where I will tell you more about the :when clauses in for loops. The rest of this part will be of a more pragmatic nature.
You may have wondered what the point of the identity and sequence monads is, given that Clojure already contains fully equivalent forms. The answer is that there are generic operations on computations that have an interpretation in any monad. Using the monad library, you can write functions that take a monad as an argument and compose computations in the given monad. I will come back to this later with a concrete example. The monad library also contains some useful predefined operations for use with any monad, which I will explain now. They all have names starting with the prefix m-.
Perhaps the most frequently used generic monad function is m-lift. It converts a function of n standard value arguments into a function of n monadic expressions that returns a monadic expression. The new function contains implicit m-bind and m-result calls. As a simple example, take
(def nil-respecting-addition
  (with-monad maybe-m
    (m-lift 2 +)))
This is a function that returns the sum of two arguments, just like + does, except that it automatically returns nil when either of its arguments is nil. Note that m-lift needs to know the number of arguments that the function has, as there is no way to obtain this information by inspecting the function itself.
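For example:

(nil-respecting-addition 1 2)    ; => 3
(nil-respecting-addition 1 nil)  ; => nil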
To illustrate how m-lift works, I will show you an equivalent definition in terms of domonad:
(defn nil-respecting-addition
  [x y]
  (domonad maybe-m
    [a x
     b y]
    (+ a b)))
This shows that m-lift implies one call to m-result and one m-bind call per argument. The same definition using the sequence monad would yield a function that returns a sequence of all possible sums of pairs from the two input sequences.
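Here is a minimal sketch of that sequence-monad variant; the name all-sums is mine, chosen just for illustration:

(def all-sums
  (with-monad sequence-m
    (m-lift 2 +)))

(all-sums [1 2] [10 20])  ; => (11 21 12 22)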
Exercise: The following function is equivalent to a well-known built-in Clojure function. Which one?
(with-monad sequence-m
  (defn mystery
    [f xs]
    ((m-lift 1 f) xs)))
Another popular monad operation is m-seq. It takes a sequence of monadic expressions, and returns a sequence of their result values. In terms of domonad, the expression (m-seq [a b c]) becomes
(domonad
  [x a
   y b
   z c]
  (list x y z))
Here is an example of how you might want to use it:
(with-monad sequence-m
  (defn ntuples [n xs]
    (m-seq (replicate n xs))))
Try it out for yourself!
The final monad operation I want to mention is m-chain. It takes a list of one-argument computations, and chains them together by calling each element of this list with the result of the preceding one. For example, (m-chain [a b c]) is equivalent to
(fn [arg]
  (domonad
    [x (a arg)
     y (b x)
     z (c y)]
    z))
A usage example is the traversal of hierarchies. The Clojure function parents yields the parents of a given class or type in the hierarchy used for multimethod dispatch. When given a Java class, it returns its base classes. The following function builds on parents to find the n-th generation ascendants of a class:
(with-monad sequence-m
  (defn n-th-generation
    [n cls]
    ((m-chain (replicate n parents)) cls)))

(n-th-generation 0 (class []))
(n-th-generation 1 (class []))
(n-th-generation 2 (class []))
You may notice that some classes can occur more than once in the result, because they are the base class of more than one class in the generation below. In fact, we ought to use sets instead of sequences for representing the ascendants at each generation. Well… that’s easy. Just replace sequence-m by set-m and run it again!
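As a sketch of that replacement (n-th-generation-set is just an illustrative name), each generation then becomes a set, so duplicates disappear automatically:

(with-monad set-m
  (defn n-th-generation-set
    [n cls]
    ((m-chain (replicate n parents)) cls)))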
In part 3, I will come back to the :when clause in for loops, and show how it is implemented and generalized in terms of monads. I will also explain another monad or two. Stay tuned!
Before moving on to the more advanced aspects of monads, let’s recapitulate what defines a monad (see part 1 and part 2 for explanations):
1. A data structure that represents the result of a computation, or the computation itself. We haven’t seen an example of the latter case yet, but it will come soon.
2. A function m-result that converts an arbitrary value to a monadic data structure equivalent to that value.
3. A function m-bind that binds the result of a computation, represented by the monadic data structure, to a name (using a function of one argument) to make it available in the following computational step.
Taking the sequence monad as an example, the data structure is the sequence, representing the outcome of a non-deterministic computation, m-result is the function list, which converts any value into a list containing just that value, and m-bind is a function that executes the remaining steps once for each element in a sequence, and removes one level of nesting in the result.
The three ingredients above are what defines a monad, under the condition that the three monad laws are respected. Some monads have two additional definitions that make it possible to perform additional operations. These two definitions have the names m-zero and m-plus. m-zero represents a special monadic value that corresponds to a computation with no result. One example is nil in the maybe monad, which typically represents a failure of some kind. Another example is the empty sequence in the sequence monad. The identity monad is an example of a monad that has no m-zero.
m-plus is a function that combines the results of two or more computations into a single one. For the sequence monad, it is the concatenation of several sequences. For the maybe monad, it is a function that returns the first of its arguments that is not nil.
There is a condition that has to be satisfied by the definitions of m-zero and m-plus for any monad:
(= (m-plus m-zero monadic-expression)
   (m-plus monadic-expression m-zero)
   monadic-expression)
In words, combining m-zero with any monadic expression must yield the same expression. You can easily verify that this is true for the two examples (maybe and sequence) given above.
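For instance, a quick check in both monads, using nil and the first non-nil value for maybe, and the empty sequence and concatenation for sequences:

(with-monad maybe-m
  [(m-plus m-zero 42) (m-plus 42 m-zero)])
; => [42 42]

(with-monad sequence-m
  [(m-plus m-zero '(1 2)) (m-plus '(1 2) m-zero)])
; => [(1 2) (1 2)]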
One benefit of having an m-zero in a monad is the possibility to use conditions. In the first part, I promised to return to the :when clauses in Clojure’s for forms, and now the time has come to discuss them. A simple example is
(for [a (range 5)
      :when (odd? a)]
  (* 2 a))
The same construction is possible with domonad:
(domonad sequence-m
  [a (range 5)
   :when (odd? a)]
  (* 2 a))
Recall that domonad is a macro that translates a let-like syntax into a chain of calls to m-bind ending in a call to m-result. The clause a (range 5) becomes
(m-bind (range 5) (fn [a] remaining-steps))
where remaining-steps is the transformation of the rest of the domonad form. A :when clause is of course treated specially: it becomes
(if predicate remaining-steps m-zero)
Our small example thus expands to
(m-bind (range 5) (fn [a]
  (if (odd? a) (m-result (* 2 a)) m-zero)))
Inserting the definitions of m-bind, m-result, and m-zero, we finally get
(apply concat (map (fn [a]
                     (if (odd? a) (list (* 2 a)) (list)))
                   (range 5)))
The result of map is a sequence of lists that have zero or one elements: zero for even values (the value of m-zero) and one for odd values (produced by m-result). concat makes a single flat list out of this, which contains only the elements that satisfy the :when clause.
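Both the original for loop and its expansion therefore evaluate to the same flat result:

(for [a (range 5) :when (odd? a)] (* 2 a))
; => (2 6)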
As for m-plus, it is in practice used mostly with the maybe and sequence monads, or with variations of them. A typical use would be a search algorithm (think of a parser, a regular expression search, a database query) that can succeed (with one or more results) or fail (no results). m-plus would then be used to pursue alternative searches and combine the results into one (sequence monad), or to continue searching until a result is found (maybe monad). Note that it is perfectly possible in principle to have a monad with an m-zero but no m-plus, though in all common cases an m-plus can be defined as well if an m-zero is known.
After this bit of theory, let’s get acquainted with more monads. In the beginning of this part, I mentioned that the data structure used in a monad does not always represent the result(s) of a computational step, but sometimes the computation itself. An example of such a monad is the state monad, whose data structure is a function.
The state monad’s purpose is to facilitate the implementation of stateful algorithms in a purely functional way. Stateful algorithms are algorithms that require updating some variables. They are of course very common in imperative languages, but not compatible with the basic principle of pure functional programs which should not have mutable data structures. One way to simulate state changes while remaining purely functional is to have a special data item (in Clojure that would typically be a map) that stores the current values of all mutable variables that the algorithm refers to. A function that in an imperative program would modify a variable now takes the current state as an additional input argument and returns an updated state along with its usual result. The changing state thus becomes explicit in the form of a data item that is passed from function to function as the algorithm’s execution progresses. The state monad is a way to hide the state-passing behind the scenes and write an algorithm in an imperative style that consults and modifies the state.
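As a minimal sketch of such explicit state passing (inc-counter and the key :counter are made up for illustration), a ‘statement’ is a function that takes the current state and returns its result together with the updated state:

(defn inc-counter [state]
  (let [n (inc (get state :counter 0))]
    [n (assoc state :counter n)]))   ; [result new-state]

(inc-counter {:counter 41})  ; => [42 {:counter 42}]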
The state monad differs from the monads that we have seen before in that its data structure is a function. This is thus a case of a monad whose data structure represents not the result of a computation, but the computation itself. A state monad value is a function that takes a single argument, the current state of the computation, and returns a vector of length two containing the result of the computation and the updated state after the computation. In practice, these functions are typically closures, and what you use in your program code are functions that create these closures. Such state-monad-value-generating functions are the equivalent of statements in imperative languages. As you will see, the state monad allows you to compose such functions in a way that makes your code look perfectly imperative, even though it is still purely functional!
Let’s start with a simple but frequent situation: the state that your code deals with takes the form of a map. You may consider that map to be a namespace in an imperative language, with each key defining a variable. Two basic operations are reading the value of a variable, and modifying that value. They are already provided in the Clojure monad library, but I will show them here anyway because they make nice examples.
First, we look at fetch-val, which retrieves the value of a variable:
(defn fetch-val [key]
  (fn [s]
    [(key s) s]))
Here we have a simple state-monad-value-generating function. It returns a function of a state variable s which, when executed, returns a vector of the return value and the new state. The return value is the value corresponding to the key in the map that is the state value. The new state is just the old one - a lookup should not change the state of course.
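For example:

((fetch-val :a) {:a 1 :b 2})  ; => [1 {:a 1, :b 2}]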
Next, let’s look at set-val, which modifies the value of a variable and returns the previous value:
(defn set-val [key val]
  (fn [s]
    (let [old-val (get s key)
          new-s   (assoc s key val)]
      [old-val new-s])))
The pattern is the same again: set-val returns a function of state s that, when executed, returns the old value of the variable plus an updated state map in which the new value is the given one.
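For example:

((set-val :a 10) {:a 1 :b 2})  ; => [1 {:a 10, :b 2}]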
With these two ingredients, we can start composing statements. Let’s define a statement that copies the value of one variable into another one and returns the previous value of the modified variable:
(defn copy-val [from to]
  (domonad state-m
    [from-val   (fetch-val from)
     old-to-val (set-val to from-val)]
    old-to-val))
What is the result of copy-val? A state-monad value, of course: a function of a state variable s that, when executed, returns the old value of variable to plus the state in which the copy has taken place. Let’s try it out:
(let [initial-state         {:a 1 :b 2}
      computation           (copy-val :b :a)
      [result final-state]  (computation initial-state)]
  final-state)
We get {:a 2, :b 2}, as expected. But how does it work? To understand the state monad, we need to look at its definitions for m-result and m-bind, of course.
First, m-result, which does not contain any surprises: it returns a function of a state variable s that, when executed, returns the result value v and the unchanged state s:
(defn m-result [v] (fn [s] [v s]))
The definition of m-bind is more interesting:
(defn m-bind [mv f]
  (fn [s]
    (let [[v ss] (mv s)]
      ((f v) ss))))
Obviously, it returns a function of a state variable s. When that function is executed, it first runs the computation described by mv (the first ’statement’ in the chain set up by m-bind) by applying it to the state s. The return value is decomposed into result v and new state ss. The result of the first step, v, is injected into the rest of the computation by calling f on it (like for the other m-bind functions that we have seen). The result of that call is of course another state-monad value, and thus a function of a state variable. When we are inside our (fn [s] ...), we are already at the execution stage, so we have to call that function on the state ss, the one that resulted from the execution of the first computational step.
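To see this at work in copy-val, here is roughly what its domonad form expands to inside the function body (from and to being the parameters):

(m-bind (fetch-val from)
        (fn [from-val]
          (m-bind (set-val to from-val)
                  (fn [old-to-val]
                    (m-result old-to-val)))))

The outer m-bind runs the fetch-val closure on the incoming state, hands its result to the rest of the computation, and threads the updated state along - exactly the explicit state passing described earlier, now hidden behind the scenes.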
The state monad is one of the most basic monads, of which many variants are in use. Usually such a variant adds something to m-bind that is specific to the kind of state being handled. An example is the stream monad in clojure.contrib.stream-utils. Its state describes a stream of data items, and the m-bind function checks for invalid values and for the end-of-stream condition in addition to what the basic m-bind of the state monad does.
A variant of the state monad that is so frequently used that it has itself become one of the standard monads is the writer monad. Its state is an accumulator (anything defined in clojure.contrib.accumulators), to which computations can add something by calling the function write. The name comes from a particularly popular application: logging. Take a basic computation in the identity monad, for example (remember that the identity monad is just Clojure’s built-in let). Now assume you want to add a protocol of the computation in the form of a list or a string that accumulates information about the progress of the computation. Just change the identity monad to the writer monad, and add calls to write where required!
Here is a concrete example: the well-known Fibonacci function in its most straightforward (and most inefficient) implementation:
(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)]
      (+ (fib n1) (fib n2)))))
Let’s add some protocol of the computation in order to see which calls are made to arrive at the final result. First, we rewrite the above example a bit to make every computational step explicit:
(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)
          f1 (fib n1)
          f2 (fib n2)]
      (+ f1 f2))))
Second, we replace let by domonad and choose the writer monad with a vector accumulator:
(require ['clojure.contrib.accumulators :as 'accu])
(with-monad (writer-m accu/empty-vector)
  (defn fib-trace [n]
    (if (< n 2)
      (m-result n)
      (domonad
        [n1 (m-result (dec n))
         n2 (m-result (dec n1))
         f1 (fib-trace n1)
         _  (write [n1 f1])
         f2 (fib-trace n2)
         _  (write [n2 f2])]
        (+ f1 f2)))))
Finally, we run fib-trace and look at the result:
(fib-trace 3)
[2 [[1 1] [0 0] [2 1] [1 1]]]
The first element of the return value, 2, is the result of the function fib. The second element is the protocol vector containing the arguments and results of the recursive calls.
Note that it is sufficient to comment out the lines with the calls to write and change the monad to identity-m to obtain a standard fib function with no protocol - try it out for yourself!
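That stripped-down version would look roughly like this (fib-plain is just an illustrative name):

(with-monad identity-m
  (defn fib-plain [n]
    (if (< n 2)
      (m-result n)
      (domonad
        [n1 (m-result (dec n))
         n2 (m-result (dec n1))
         f1 (fib-plain n1)
         f2 (fib-plain n2)]
        (+ f1 f2)))))

(fib-plain 3)  ; => 2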
Part 4 will show you how to define your own monads by combining monad building blocks called monad transformers. As an illustration, I will explain the probability monad and how it can be used for Bayesian estimates when combined with the maybe-transformer.
In this fourth and last part of my monad tutorial, I will write about monad transformers. I will deal with only one of them, but it’s a start. I will also cover the probability monad, and how it can be extended using a monad transformer.
Basically, a monad transformer is a function that takes a monad argument and returns another monad. The returned monad is a variant of the one passed in to which some functionality has been added. The monad transformer defines that added functionality. Many of the common monads that I have presented before have monad transformer analogs that add the monad’s functionality to another monad. This makes monads modular by permitting client code to assemble monad building blocks into a customized monad that is just right for the task at hand.
Consider two monads that I have discussed before: the maybe monad and the sequence monad. The maybe monad is for computations that can fail to produce a valid value, and return nil in that case. The sequence monad is for computations that return multiple results, in the form of monadic values that are sequences. A monad combining the two can take two forms: 1) computations yielding multiple results, any of which could be nil indicating failure; 2) computations yielding either a sequence of results or nil in the case of failure. The more interesting combination is 1), because 2) is of little practical use: failure can be represented more easily and with no additional effort by returning an empty result sequence.
So how can we create a monad that puts the maybe monad functionality inside sequence monad values? Is there a way we can reuse the existing implementations of the maybe monad and the sequence monad? It turns out that this is not possible, but we can keep one and rewrite the other one as a monad transformer, which we can then apply to the sequence monad (or in fact some other monad) to get the desired result. To get the combination we want, we need to turn the maybe monad into a transformer and apply it to the sequence monad.
First, as a reminder, the definitions of the maybe and the sequence monads:
(defmonad maybe-m
  [m-zero   nil
   m-result (fn [v] v)
   m-bind   (fn [mv f]
              (if (nil? mv) nil (f mv)))
   m-plus   (fn [& mvs]
              (first (drop-while nil? mvs)))
   ])
(defmonad sequence-m
  [m-result (fn [v]
              (list v))
   m-bind   (fn [mv f]
              (apply concat (map f mv)))
   m-zero   (list)
   m-plus   (fn [& mvs]
              (apply concat mvs))
   ])
And now the definition of the maybe monad transformer:
(defn maybe-t
  [m]
  (monad [m-result (with-monad m m-result)
          m-bind   (with-monad m
                     (fn [mv f]
                       (m-bind mv
                               (fn [x]
                                 (if (nil? x)
                                   (m-result nil)
                                   (f x))))))
          m-zero   (with-monad m m-zero)
          m-plus   (with-monad m m-plus)
          ]))
The real definition in clojure.contrib.monads is a bit more complicated, and I will explain the differences later, but for now this basic version is good enough. The combined monad is constructed by
(def maybe-in-sequence-m (maybe-t sequence-m))
which is a straightforward function call, the result of which is a monad. Let’s first look at what m-result does. The m-result of maybe-m is the identity function, so we’d expect that our combined monad m-result is just the one from sequence-m. This is indeed the case, as (with-monad m m-result) returns the m-result function from monad m. We see the same construct for m-zero and m-plus, meaning that all we need to understand is m-bind.
The combined m-bind calls the m-bind of the base monad (sequence-m in our case), but it modifies the function argument, i.e. the function that represents the rest of the computation. Before calling it, it first checks whether its argument is nil. If it isn’t, the original function is called, meaning that the combined monad behaves just like the base monad as long as no computation ever returns nil. If there is a nil value, the maybe monad says that no further computation should take place and that the final result should immediately be nil. However, we can’t just return nil, as we must return a valid monadic value in the combined monad (in our example, a sequence of possibly-nil values). So we feed nil into the base monad’s m-result, which takes care of wrapping up nil in the required data structure.
Let’s see it in action:
(domonad maybe-in-sequence-m
  [x [1 2 nil 4]
   y [10 nil 30 40]]
  (+ x y))
The output is:
(11 nil 31 41 12 nil 32 42 nil 14 nil 34 44)
As expected, there are all the combinations of non-nil values in both input sequences. However, it is surprising at first sight that there are four nil entries. Shouldn’t there be eight, resulting from the combinations of a nil in one sequence with the four values in the other sequence?
To understand why there are four nils, let’s look again at how the m-bind definition in maybe-t handles them. At the top level, it will be called with the vector [1 2 nil 4] as the monadic value. It hands this to the m-bind of sequence-m, which calls the anonymous function in maybe-t’s m-bind four times, once for each element of the vector. For the three non-nil values, no special treatment is added. For the one nil value, the net result of the computation is nil and the rest of the computation is never called. The nil in the first input vector thus accounts for one nil in the result, and the rest of the computation is called three times. Each of these three rounds then produces three valid results and one nil. We thus have 3×3 valid results, 3×1 nils from the second vector, plus the one nil from the first vector. That makes nine valid results and four nils.
Is there a way to get all sixteen combinations, with all the possible nil results in the result? Yes, but not using the maybe-t transformer. You have to use the maybe and the sequence monads separately, for example like this:
(with-monad maybe-m
  (def maybe-+ (m-lift 2 +)))

(domonad sequence-m
  [x [1 2 nil 4]
   y [10 nil 30 40]]
  (maybe-+ x y))
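Evaluating this should give one entry per combination, sixteen in total, with a nil wherever at least one of the two inputs is nil:
(11 nil 31 41 12 nil 32 42 nil nil nil nil 14 nil 34 44)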
When you use maybe-t, you always get the shortcutting behaviour seen above: as soon as there is a nil, the total result is nil and the rest of the computation is never executed. In most situations, that’s what you want.
The combination of maybe-t and sequence-m is not so useful in practice because a much easier (and more efficient) way to handle invalid results is to remove them from the sequences before any further processing happens. But the example is simple and thus fine for explaining the basics. You are now ready for a more realistic example: the use of maybe-t with the probability distribution monad.
The probability distribution monad is made for working with finite probability distributions, i.e. probability distributions in which a finite set of values has a non-zero probability. Such a distribution is represented by a map from the values to their probabilities. The monad and various useful functions for working with finite distributions are defined in the library clojure.contrib.probabilities.finite-distributions.
A simple example of a finite distribution:
(use 'clojure.contrib.probabilities.finite-distributions)
(def die (uniform #{1 2 3 4 5 6}))
(prob odd? die)
This prints 1/2, the probability that throwing a single die yields an odd number. The value of die is the probability distribution of the outcome of throwing a die:
{6 1/6, 5 1/6, 4 1/6, 3 1/6, 2 1/6, 1 1/6}
Suppose we throw the die twice and look at the sum of the two values. What is its probability distribution? That’s where the monad comes in:
(domonad dist-m
  [d1 die
   d2 die]
  (+ d1 d2))
The result is:
{2 1/36, 3 1/18, 4 1/12, 5 1/9, 6 5/36, 7 1/6, 8 5/36, 9 1/9, 10 1/12, 11 1/18, 12 1/36}
You can read the above domonad block as ‘draw a value from the distribution die and call it d1, draw a value from the distribution die and call it d2, then give me the distribution of (+ d1 d2)‘. This is a very simple example; in general, each distribution can depend on the values drawn from the preceding ones, thus creating the joint distribution of several variables. This approach is known as ‘ancestral sampling’.
The monad dist-m applies the basic rule of combining probabilities: if event A has probability p and event B has probability q, and if the events are independent (or at least uncorrelated), then the probability of the combined event (A and B) is p*q. Here is the definition of dist-m:
(defmonad dist-m
  [m-result (fn [v] {v 1})
   m-bind   (fn [mv f]
              (letfn [(add-prob [dist [x p]]
                        (assoc dist x (+ (get dist x 0) p)))]
                (reduce add-prob {}
                        (for [[x p] mv  [y q] (f x)]
                          [y (* q p)]))))
   ])
As usual, the interesting stuff happens in m-bind. Its first argument, mv, is a map representing a probability distribution. Its second argument, f, is a function representing the rest of the calculation. It is called for each possible value in the probability distribution in the for form. This for form iterates over both the possible values of the input distribution and the possible values of the distribution returned by (f x), combining the probabilities by multiplication and putting them into the output distribution. This is done by reducing over the helper function add-prob, which checks if the value is already present in the map, and if so, adds the probability to the previously obtained one. This is necessary because the samples from the (f x) distribution can contain the same value more than once if they were obtained for different x.
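A tiny example of this merging, with a made-up fair-coin distribution: drawing twice and asking for the number of :heads, the two one-head outcomes are collapsed into a single entry of probability 1/2:

(def coin {:heads 1/2 :tails 1/2})

(domonad dist-m
  [c1 coin
   c2 coin]
  (count (filter #{:heads} [c1 c2])))
; => {2 1/4, 1 1/2, 0 1/4}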
For a more interesting example, let’s consider the famous Monty Hall problem. In a game show, the player faces three doors. A prize is waiting for him behind one of them, but there is nothing behind the two other ones. If he picks the right door, he gets the prize. Up to there, the problem is simple: the probability of winning is 1/3.
But there is a twist. After the player makes his choice, the game host opens one of the two other doors, which shows an empty space. He then asks the player if he wants to change his mind and choose the last remaining door instead of his initial choice. Is this a good strategy?
To make this a well-defined problem, we have to assume that the game host knows where the prize is and that he would not open the corresponding door. Then we can start coding:
(def doors #{:A :B :C})

(domonad dist-m
  [prize  (uniform doors)
   choice (uniform doors)]
  (if (= choice prize) :win :loose))
Let’s go through this step by step. First, we choose the prize door by drawing from a uniform distribution over the three doors :A, :B, and :C. That represents what happens before the player comes in. Then the player’s initial choice is made, drawing from the same distribution. Finally, we ask for the distribution of the outcome of the game, :win or :loose. The answer is, unsurprisingly, {:win 1/3, :loose 2/3}.
This covers the case in which the player does not accept the host's proposition to change his mind. If he does, the game becomes more complicated:
(domonad dist-m
  [prize  (uniform doors)
   choice (uniform doors)
   opened (uniform (disj doors prize choice))
   choice (uniform (disj doors opened choice))]
  (if (= choice prize) :win :loose))
The third step is the most interesting one: the game host opens a door which is neither the prize door nor the initial choice of the player. We model this by removing both prize and choice from the set of doors, and draw uniformly from the resulting set, which can have one or two elements depending on prize and choice. The player then changes his mind and chooses from the set of doors other than the open one and his initial choice. With the standard three-door game, that set has exactly one element, but the code above also works for a larger number of doors - try it out yourself!
Evaluating this piece of code yields {:loose 1/3, :win 2/3}, indicating that the change-your-mind strategy is indeed the better one.
Back to the maybe-t transformer. The finite-distribution library defines a second monad by
(def cond-dist-m (maybe-t dist-m))
This makes nil a special value in distributions, which is used to represent events that we don't want to consider as possible ones. With the definitions of maybe-t and dist-m, you can guess how nil values are propagated when distributions are combined: for any nil value, the distributions that potentially depend on it are never evaluated, and the nil value's probability is transferred entirely to the probability of nil in the output distribution. But how does nil ever get into a distribution? And, most of all, what is that good for?
Let's start with the last question. The goal of these nil-containing distributions is to eliminate certain values. Once the final distribution is obtained, the nil value is removed, and the remaining distribution is normalized to make the sum of the probabilities of the remaining values equal to one. This nil-removal and normalization is performed by the utility function normalize-cond. The cond-dist-m monad is thus a sophisticated way to compute conditional probabilities, and in particular to facilitate Bayesian inference, which is an important technique in all kinds of data analysis.
As a first exercise, let's calculate a simple conditional probability from an input distribution and a predicate. The output distribution should contain only the values satisfying the predicate, but be normalized to one:
(defn cond-prob [pred dist]
  (normalize-cond (domonad cond-dist-m
                    [v dist
                     :when (pred v)]
                    v)))
The important line is the one with the :when condition. As I have explained in parts 1 and 2, the domonad form becomes
(m-bind dist
        (fn [v]
          (if (pred v)
            (m-result v)
            m-zero)))
If you have been following carefully, you should complain now: with the definitions of dist-m and maybe-t I have given above, cond-dist-m should not have any m-zero! But as I said earlier, the maybe-t shown here is a simplified version. The real one checks if the base monad has m-zero, and if it hasn't, it substitutes its own, which is (with-monad m (m-result nil)). Therefore the m-zero of cond-dist-m is {nil 1}, the distribution whose only value is nil.
The net effect of the domonad form in this example is thus to keep all values that satisfy the predicate with their initial probabilities, but to transfer the probability of all other values to nil. The call to normalize-cond then takes out the nil and re-distributes its probability to the other values. Example:
(cond-prob odd? die)
-> {5 1/3, 3 1/3, 1 1/3}
The cond-dist-m monad really becomes interesting for Bayesian inference problems. Bayesian inference is a technique for drawing conclusions from incomplete observations. It has a wide range of applications, from spam filters to weather forecasts. For an introduction to the technique and its mathematical basis, you can start with the Wikipedia article.
Here I will discuss a very simple inference problem and its solution in Clojure. Suppose someone has three dice, one with six faces, one with eight, and one with twelve. This person picks one die, throws it a few times, and gives us the numbers, but doesn't tell us which die it was. Given these observations, we would like to infer the probabilities for each of the three dice to have been picked. We start by defining a function that returns the distribution of a die with n faces:
(defn die-n [n] (uniform (range 1 (inc n))))
Next, we come to the core of Bayesian inference. One central ingredient is the probability for throwing a given number under the assumption that die X was used. We thus need the probability distributions for each of our three dice:
(def dice {:six    (die-n 6)
           :eight  (die-n 8)
           :twelve (die-n 12)})
The other central ingredient is a distribution representing our 'prior knowledge' about the chosen die. We actually know nothing at all, so each die has the same weight in this distribution:
(def prior (uniform (keys dice)))
Now we can write the inference function. It takes as input the prior-knowledge distribution and a number that was obtained from the die. It returns the a posteriori distribution that combines the prior information with the information from the observation.
(defn add-observation [prior observation]
  (normalize-cond
    (domonad cond-dist-m
      [die    prior
       number (get dice die)
       :when  (= number observation)]
      die)))
Let's look at the domonad form. The first step picks one die according to the prior knowledge. The second line "throws" that die, obtaining a number. The third line eliminates the numbers that don't match the observation. And then we ask for the distribution of the die.
It is instructive to compare this function with the mathematical formula for Bayes' theorem, which is the basis of Bayesian inference. Bayes' theorem is P(H|E) = P(E|H) P(H) / P(E), where H stands for the hypothesis ("the die chosen was X") and E stands for the evidence ("the number thrown was N"). P(H) is the prior knowledge. The formula must be evaluated for a fixed value of E, which is the observation.
The first line of our domonad form implements P(H), the second line implements P(E|H). These two lines together thus sample P(E, H) using ancestral sampling, as we have seen before. The :when line represents the observation; we wish to apply Bayes' theorem for a fixed value of E. Once E has been fixed, P(E) is just a number, required for normalization. This is handled by normalize-cond in our code.
Let's see what happens when we add a single observation:
(add-observation prior 1)
-> {:twelve 2/9, :eight 1/3, :six 4/9}
We see that the highest probability is given to :six, then :eight, and finally :twelve. This happens because 1 is a possible value for all dice, but it is more probable as a result of throwing a six-faced die (1/6) than as a result of throwing an eight-faced die (1/8) or a twelve-faced die (1/12). The observation thus favours a die with a small number of faces.
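These numbers are easy to check by hand: before normalization, the probability of picking a given die and then throwing a 1 with it is 1/3 × 1/6, 1/3 × 1/8 and 1/3 × 1/12 respectively; their sum is 1/8, and dividing by it yields the distribution above:

(let [unnormalized {:six    (* 1/3 1/6)
                    :eight  (* 1/3 1/8)
                    :twelve (* 1/3 1/12)}
      total (apply + (vals unnormalized))]
  (into {} (for [[die p] unnormalized] [die (/ p total)])))
; => {:six 4/9, :eight 1/3, :twelve 2/9}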
If we have three observations, we can call add-observation repeatedly:
(-> prior
    (add-observation 1)
    (add-observation 3)
    (add-observation 7))
-> {:twelve 8/35, :eight 27/35}
Now we see that the candidate :six has disappeared. In fact, the observed value of 7 rules it out completely. Moreover, the observed numbers strongly favour :eight over :twelve, which is again due to the preference for the smallest possible die in the game.
This inference problem is very similar to how a spam filter works. In that case, the three dice are replaced by the choices :spam or :no-spam. For each of them, we have a distribution of words, obtained by analyzing large quantities of e-mail messages. The function add-observation is strictly the same, we'd just pick different variable names. And then we'd call it for each word in the message we wish to evaluate, starting from a prior distribution defined by the total number of :spam and :no-spam messages in our database.
To end this introduction to monad transformers, I will explain the m-zero problem in maybe-t. As you know, the maybe monad has an m-zero definition (nil) and an m-plus definition, and those two can be carried over into a monad created by applying maybe-t to some base monad. This is what we have seen in the case of cond-dist-m. However, the base monad might have its own m-zero and m-plus, as we have seen in the case of sequence-m. Which set of definitions should the combined monad have? Only the user of maybe-t can make that decision, so maybe-t has an optional parameter for this (see its documentation for the details). The only clear case is a base monad without m-zero and m-plus; in that case, nothing is lost if maybe-t imposes its own.
from onclojure.com