layout: post
title: "Using map, apply, bind and sequence in practice"
description: "A real-world example that uses all the techniques"
categories: ["Patterns"]
seriesId: "Map and Bind and Apply, Oh my!"
seriesOrder: 5

This post is the fifth in a series.
In the first two posts, I described some of the core functions for dealing with generic data types: map, bind, and so on.
In the third post, I discussed “applicative” vs “monadic” style, and how to lift values and functions to be consistent with each other.
In the previous post, I introduced traverse and sequence as a way of working with lists of elevated values.

In this post, we’ll finish up by working through a practical example that uses all the techniques that have been discussed so far.

Series contents



Part 5: A real-world example that uses all the techniques


Example: Downloading and processing a list of websites

The example will be a variant of the one mentioned at the beginning of the third post:

  • Given a list of websites, create an action that finds the site with the largest home page.

Let’s break this down into steps:

First we’ll need to transform the urls into a list of actions, where each action downloads the page and gets the size of the content.

And then we need to find the largest content, but in order to do this we’ll have to convert the list of actions into a single action containing a list of sizes.
And that’s where traverse or sequence will come in.

Let’s get started!

The downloader

First we need to create a downloader. I would use the built-in System.Net.WebClient class, but for some reason it doesn't allow the timeout to be overridden.
I'm going to want a small timeout for the later tests on bad URIs, so this is important.

One trick is to just subclass WebClient and intercept the method that builds a request. So here it is:

```fsharp
// define a millisecond Unit of Measure
type [<Measure>] ms

/// Custom implementation of WebClient with settable timeout
type WebClientWithTimeout(timeout:int<ms>) =
    inherit System.Net.WebClient()

    override this.GetWebRequest(address) =
        let result = base.GetWebRequest(address)
        result.Timeout <- int timeout
        result
```

Notice that I'm using units of measure for the timeout value. I find that units of measure are invaluable to distinguish seconds from milliseconds.
I once accidentally set a timeout to 2000 seconds rather than 2000 milliseconds and I don’t want to make that mistake again!
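As a small illustration of why this helps, here is a sketch showing how a second unit of measure makes that seconds/milliseconds mix-up a compile-time error (the names `sec`, `msPerSec`, and `toMs` are mine, just for this example; `ms` is repeated so the snippet stands alone):

```fsharp
[<Measure>] type ms
[<Measure>] type sec

// conversion factor between the two units
let msPerSec = 1000<ms/sec>

// convert seconds to milliseconds
let toMs (x:int<sec>) = x * msPerSec

let timeout = toMs 2<sec>       // 2000<ms>

// mixing up the units no longer compiles:
// let bad : int<ms> = 2<sec>   // compiler error: unit mismatch
```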

The next bit of code defines our domain types. We want to be able to keep the url and the size together as we process them. We could use a tuple,
but I am a proponent of using types to model your domain, if only for documentation.

```fsharp
// The content of a downloaded page
type UriContent =
    UriContent of System.Uri * string

// The content size of a downloaded page
type UriContentSize =
    UriContentSize of System.Uri * int
```

Yes, this might be overkill for a trivial example like this, but in a more serious project I think it is very much worth doing.

Now for the code that does the downloading:

```fsharp
/// Get the contents of the page at the given Uri
/// Uri -> Async<Result<UriContent>>
let getUriContent (uri:System.Uri) =
    async {
        use client = new WebClientWithTimeout(1000<ms>) // 1 sec timeout
        try
            printfn "  [%s] Started ..." uri.Host
            let! html = client.AsyncDownloadString(uri)
            printfn "  [%s] ... finished" uri.Host
            let uriContent = UriContent (uri, html)
            return (Result.Success uriContent)
        with
        | ex ->
            printfn "  [%s] ... exception" uri.Host
            let err = sprintf "[%s] %A" uri.Host ex.Message
            return Result.Failure [err]
        }
```

Notes:

  • The .NET library will throw on various errors, so I am catching that and turning it into a Failure.
  • The use client = section ensures that the client will be correctly disposed at the end of the block.
  • The whole operation is wrapped in an async workflow, and the let! html = client.AsyncDownloadString is where the download happens asynchronously.
  • I’ve added some printfns for tracing, just for this example. In real code, I wouldn’t do this of course!

Before moving on, let’s test this code interactively. First we need a helper to print the result:

```fsharp
let showContentResult result =
    match result with
    | Success (UriContent (uri, html)) ->
        printfn "SUCCESS: [%s] First 100 chars: %s" uri.Host (html.Substring(0,100))
    | Failure errs ->
        printfn "FAILURE: %A" errs
```

And then we can try it out on a good site:

```fsharp
System.Uri ("http://google.com")
|> getUriContent
|> Async.RunSynchronously
|> showContentResult

//  [google.com] Started ...
//  [google.com] ... finished
// SUCCESS: [google.com] First 100 chars: <!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en-GB"><head><meta cont
```

and a bad one:

```fsharp
System.Uri ("http://example.bad")
|> getUriContent
|> Async.RunSynchronously
|> showContentResult

//  [example.bad] Started ...
//  [example.bad] ... exception
// FAILURE: ["[example.bad] "The remote name could not be resolved: 'example.bad'""]
```

Extending the Async type with map and apply and bind

At this point, we know that we are going to be dealing with the world of Async, so before we go any further, let’s make sure that we have our four core functions available:

```fsharp
module Async =

    let map f xAsync = async {
        // get the contents of xAsync
        let! x = xAsync
        // apply the function and lift the result
        return f x
        }

    let retn x = async {
        // lift x to an Async
        return x
        }

    let apply fAsync xAsync = async {
        // start the two asyncs in parallel
        let! fChild = Async.StartChild fAsync
        let! xChild = Async.StartChild xAsync
        // wait for the results
        let! f = fChild
        let! x = xChild
        // apply the function to the results
        return f x
        }

    let bind f xAsync = async {
        // get the contents of xAsync
        let! x = xAsync
        // apply the function but don't lift the result
        // as f will return an Async
        return! f x
        }
```

These implementations are straightforward:

  • I’m using the async workflow to work with Async values.
  • The let! syntax in map extracts the content from the Async (meaning run it and await the result).
  • The return syntax in map, retn, and apply lifts the value to an Async using return.
  • The apply function runs the two parameters in parallel using a fork/join pattern.
    If I had instead written let! fChild = ... followed by a let! xChild = ...
    that would have been monadic and sequential, which is not what I wanted.
  • The return! syntax in bind means that the value is already lifted, so we don't call return on it.
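For contrast, here is a sketch of what that monadic, sequential version of apply would look like (the name `applyM` is mine, just for illustration — it is not part of the series' code):

```fsharp
module Async =

    /// A monadic (sequential) version of apply:
    /// xAsync is not started until fAsync has completed,
    /// so the two asyncs do NOT run in parallel
    let applyM fAsync xAsync = async {
        let! f = fAsync     // run fAsync to completion first
        let! x = xAsync     // only then run xAsync
        return f x
        }
```

With this version, the downloads later in this post would run one after another instead of simultaneously.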

Getting the size of the downloaded page

Getting back on track, we can continue from the downloading step and move on to the process of converting the result to a UriContentSize:

```fsharp
/// Make a UriContentSize from a UriContent
/// UriContent -> Result<UriContentSize>
let makeContentSize (UriContent (uri, html)) =
    if System.String.IsNullOrEmpty(html) then
        Result.Failure ["empty page"]
    else
        let uriContentSize = UriContentSize (uri, html.Length)
        Result.Success uriContentSize
```

If the input html is null or empty, we'll treat this as an error; otherwise we'll return a UriContentSize.

Now we have two functions and we want to combine them into one “get UriContentSize given a Uri” function. The problem is that the outputs and inputs don’t match:

  • getUriContent is Uri -> Async<Result<UriContent>>
  • makeContentSize is UriContent -> Result<UriContentSize>

The answer is to transform makeContentSize from a function that takes a UriContent as input into
a function that takes an Async<Result<UriContent>> as input. How can we do that?

First, use Result.bind to convert it from an a -> Result<b> function to a Result<a> -> Result<b> function.
In this case, UriContent -> Result<UriContentSize> becomes Result<UriContent> -> Result<UriContentSize>.

Next, use Async.map to convert it from an a -> b function to an Async<a> -> Async<b> function.
In this case, Result<UriContent> -> Result<UriContentSize> becomes Async<Result<UriContent>> -> Async<Result<UriContentSize>>.

(diagram: makeContentSize lifted first with Result.bind, then with Async.map)
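The two lifting steps can be spelled out as explicit intermediate values. In this sketch I use a simplified stand-in for makeContentSize, and repeat the Result type and Async.map from earlier in the series so the snippet stands alone (the names `contentLength`, `step1`, and `step2` are mine):

```fsharp
type Result<'a> =
    | Success of 'a
    | Failure of string list

module Result =
    let bind f xResult =
        match xResult with
        | Success x -> f x
        | Failure errs -> Failure errs

module Async =
    let map f xAsync = async {
        let! x = xAsync
        return f x
        }

// a simplified stand-in for makeContentSize: string -> Result<int>
let contentLength (html:string) =
    if System.String.IsNullOrEmpty(html) then Failure ["empty page"]
    else Success html.Length

// step 1: Result.bind lifts the input side into the Result world
// string -> Result<int>  becomes  Result<string> -> Result<int>
let step1 = Result.bind contentLength

// step 2: Async.map wraps both sides in an Async
// Result<string> -> Result<int>  becomes  Async<Result<string>> -> Async<Result<int>>
let step2 = Async.map step1
```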

Now that it has the right kind of input, we can compose it with getUriContent:

```fsharp
/// Get the size of the contents of the page at the given Uri
/// Uri -> Async<Result<UriContentSize>>
let getUriContentSize uri =
    getUriContent uri
    |> Async.map (Result.bind makeContentSize)
```

That's a gnarly type signature, and it's only going to get worse! It's at times like these that I really appreciate type inference.

Let’s test again. First a helper to format the result:

```fsharp
let showContentSizeResult result =
    match result with
    | Success (UriContentSize (uri, len)) ->
        printfn "SUCCESS: [%s] Content size is %i" uri.Host len
    | Failure errs ->
        printfn "FAILURE: %A" errs
```

And then we can try it out on a good site:

```fsharp
System.Uri ("http://google.com")
|> getUriContentSize
|> Async.RunSynchronously
|> showContentSizeResult

//  [google.com] Started ...
//  [google.com] ... finished
// SUCCESS: [google.com] Content size is 44293
```

and a bad one:

```fsharp
System.Uri ("http://example.bad")
|> getUriContentSize
|> Async.RunSynchronously
|> showContentSizeResult

//  [example.bad] Started ...
//  [example.bad] ... exception
// FAILURE: ["[example.bad] "The remote name could not be resolved: 'example.bad'""]
```

Getting the largest size from a list

The last step in the process is to find the largest page size.

That’s easy. Once we have a list of UriContentSize, we can easily find the largest one using List.maxBy:

```fsharp
/// Get the largest UriContentSize from a list
/// UriContentSize list -> UriContentSize
let maxContentSize list =
    // extract the len field from a UriContentSize
    let contentSize (UriContentSize (_, len)) = len
    // use maxBy to find the largest
    list |> List.maxBy contentSize
```

Putting it all together

We’re ready to assemble all the pieces now, using the following algorithm:

  • Start with a list of urls
  • Turn the list of strings into a list of uris (Uri list)
  • Turn the list of Uris into a list of actions (Async<Result<UriContentSize>> list)
  • Next we need to swap the top two parts of the stack. That is, transform a List<Async> into an Async<List>.

(diagram: swapping List<Async> to Async<List> with sequenceAsyncA)

  • Next we need to swap the bottom two parts of the stack — transform a List<Result> into a Result<List>.
    But the two bottom parts of the stack are wrapped in an Async so we need to use Async.map to do this.

(diagram: swapping List<Result> to Result<List> inside the Async, using Async.map and sequenceResultA)

  • Finally we need to use List.maxBy on the bottom List to convert it into a single value. That is, transform a List<UriContentSize> into a UriContentSize.
    But the bottom of the stack is wrapped in a Result wrapped in an Async so we need to use Async.map and Result.map to do this.

(diagram: reducing the List<UriContentSize> to a single UriContentSize with Async.map and Result.map)

Here’s the complete code:

```fsharp
/// Get the largest page size from a list of websites
let largestPageSizeA urls =
    urls

    // turn the list of strings into a list of Uris
    // (In F# v4, we can call System.Uri directly!)
    |> List.map (fun s -> System.Uri(s))

    // turn the list of Uris into a "Async<Result<UriContentSize>> list"
    |> List.map getUriContentSize

    // turn the "Async<Result<UriContentSize>> list"
    // into an "Async<Result<UriContentSize> list>"
    |> List.sequenceAsyncA

    // turn the "Async<Result<UriContentSize> list>"
    // into a "Async<Result<UriContentSize list>>"
    |> Async.map List.sequenceResultA

    // find the largest in the inner list to get
    // a "Async<Result<UriContentSize>>"
    |> Async.map (Result.map maxContentSize)
```

This function has signature string list -> Async<Result<UriContentSize>>, which is just what we wanted!

There are two sequence functions involved here: sequenceAsyncA and sequenceResultA. The implementations are as you would expect from
all the previous discussion, but I’ll show the code anyway:

```fsharp
module List =

    /// Map an Async producing function over a list to get a new Async
    /// using applicative style
    /// ('a -> Async<'b>) -> 'a list -> Async<'b list>
    let rec traverseAsyncA f list =

        // define the applicative functions
        let (<*>) = Async.apply
        let retn = Async.retn

        // define a "cons" function
        let cons head tail = head :: tail

        // right fold over the list
        let initState = retn []
        let folder head tail =
            retn cons <*> (f head) <*> tail

        List.foldBack folder list initState

    /// Transform a "list<Async>" into a "Async<list>"
    /// and collect the results using apply.
    let sequenceAsyncA x = traverseAsyncA id x

    /// Map a Result producing function over a list to get a new Result
    /// using applicative style
    /// ('a -> Result<'b>) -> 'a list -> Result<'b list>
    let rec traverseResultA f list =

        // define the applicative functions
        let (<*>) = Result.apply
        let retn = Result.Success

        // define a "cons" function
        let cons head tail = head :: tail

        // right fold over the list
        let initState = retn []
        let folder head tail =
            retn cons <*> (f head) <*> tail

        List.foldBack folder list initState

    /// Transform a "list<Result>" into a "Result<list>"
    /// and collect the results using apply.
    let sequenceResultA x = traverseResultA id x
```
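The payoff of the applicative style is that every error is kept, not just the first. Here's a quick check of that behavior (the Result type and Result.apply are repeated from earlier in the series so the snippet stands alone):

```fsharp
type Result<'a> =
    | Success of 'a
    | Failure of string list

module Result =
    /// applicative apply: failures on both sides are concatenated
    let apply fResult xResult =
        match fResult, xResult with
        | Success f, Success x -> Success (f x)
        | Failure errs, Success _ -> Failure errs
        | Success _, Failure errs -> Failure errs
        | Failure errs1, Failure errs2 -> Failure (errs1 @ errs2)

module List =
    let traverseResultA f list =
        let (<*>) = Result.apply
        let retn = Success
        let cons head tail = head :: tail
        let folder head tail = retn cons <*> (f head) <*> tail
        List.foldBack folder list (retn [])
    let sequenceResultA x = traverseResultA id x

// all successes: the list is turned inside out
List.sequenceResultA [Success 1; Success 2]
// => Success [1; 2]

// two failures: BOTH error messages are collected
List.sequenceResultA [Success 1; Failure ["a"]; Failure ["b"]]
// => Failure ["a"; "b"]
```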

Adding a timer

It will be interesting to see how long the download takes for different scenarios,
so let’s create a little timer that runs a function a certain number of times and takes the average:

```fsharp
/// Do countN repetitions of the function f and print the time per run
let time countN label f =

    let stopwatch = System.Diagnostics.Stopwatch()

    // do a full GC at the start but not thereafter
    // allow garbage to collect for each iteration
    System.GC.Collect()

    printfn "======================="
    printfn "%s" label
    printfn "======================="

    let mutable totalMs = 0L

    for iteration in [1..countN] do
        stopwatch.Restart()
        f()
        stopwatch.Stop()
        printfn "#%2i elapsed:%6ims " iteration stopwatch.ElapsedMilliseconds
        totalMs <- totalMs + stopwatch.ElapsedMilliseconds

    let avgTimePerRun = totalMs / int64 countN
    printfn "%s: Average time per run:%6ims " label avgTimePerRun
```

Ready to download at last

Let’s download some sites for real!

We’ll define two lists of sites: a “good” one, where all the sites should be accessible, and a “bad” one, containing invalid sites.

```fsharp
let goodSites = [
    "http://google.com"
    "http://bbc.co.uk"
    "http://fsharp.org"
    "http://microsoft.com"
    ]

let badSites = [
    "http://example.com/nopage"
    "http://bad.example.com"
    "http://verybad.example.com"
    "http://veryverybad.example.com"
    ]
```

Let’s start by running largestPageSizeA 10 times with the good sites list:

```fsharp
let f() =
    largestPageSizeA goodSites
    |> Async.RunSynchronously
    |> showContentSizeResult
time 10 "largestPageSizeA_Good" f
```

The output is something like this:

```text
[google.com] Started ...
[bbc.co.uk] Started ...
[fsharp.org] Started ...
[microsoft.com] Started ...
[bbc.co.uk] ... finished
[fsharp.org] ... finished
[google.com] ... finished
[microsoft.com] ... finished
SUCCESS: [bbc.co.uk] Content size is 108983
largestPageSizeA_Good: Average time per run:   533ms
```

We can see immediately that the downloads are happening in parallel — they have all started before the first one has finished.

Now what about if some of the sites are bad?

```fsharp
let f() =
    largestPageSizeA badSites
    |> Async.RunSynchronously
    |> showContentSizeResult
time 10 "largestPageSizeA_Bad" f
```

The output is something like this:

```text
[example.com] Started ...
[bad.example.com] Started ...
[verybad.example.com] Started ...
[veryverybad.example.com] Started ...
[verybad.example.com] ... exception
[veryverybad.example.com] ... exception
[example.com] ... exception
[bad.example.com] ... exception
FAILURE: [
 "[example.com] "The remote server returned an error: (404) Not Found."";
 "[bad.example.com] "The remote name could not be resolved: 'bad.example.com'"";
 "[verybad.example.com] "The remote name could not be resolved: 'verybad.example.com'"";
 "[veryverybad.example.com] "The remote name could not be resolved: 'veryverybad.example.com'""]
largestPageSizeA_Bad: Average time per run:  2252ms
```

Again, all the downloads are happening in parallel, and all four failures are returned.

Optimizations

largestPageSizeA contains a series of maps and sequences, which means that the list is iterated over three times and the async mapped over twice.

As I said earlier, I prefer clarity over micro-optimizations unless there is proof otherwise, and so this does not bother me.

However, let’s look at what you could do if you wanted to.

Here’s the original version, with comments removed:

```fsharp
let largestPageSizeA urls =
    urls
    |> List.map (fun s -> System.Uri(s))
    |> List.map getUriContentSize
    |> List.sequenceAsyncA
    |> Async.map List.sequenceResultA
    |> Async.map (Result.map maxContentSize)
```

The first two List.maps could be combined:

```fsharp
let largestPageSizeA urls =
    urls
    |> List.map (fun s -> System.Uri(s) |> getUriContentSize)
    |> List.sequenceAsyncA
    |> Async.map List.sequenceResultA
    |> Async.map (Result.map maxContentSize)
```

The map-sequence can be replaced with a traverse:

```fsharp
let largestPageSizeA urls =
    urls
    |> List.traverseAsyncA (fun s -> System.Uri(s) |> getUriContentSize)
    |> Async.map List.sequenceResultA
    |> Async.map (Result.map maxContentSize)
```

and finally the two Async.maps can be combined too:

```fsharp
let largestPageSizeA urls =
    urls
    |> List.traverseAsyncA (fun s -> System.Uri(s) |> getUriContentSize)
    |> Async.map (List.sequenceResultA >> Result.map maxContentSize)
```

Personally, I think we’ve gone too far here. I prefer the original version to this one!

As an aside, one way to get the best of both worlds is to use a “streams” library that automatically merges the maps for you.
In F#, a good one is Nessos Streams. Here is a blog post showing the difference between streams and
the standard seq.

Downloading the monadic way

Let’s reimplement the downloading logic using monadic style and see what difference it makes.

First we need a monadic version of the downloader:

```fsharp
let largestPageSizeM urls =
    urls
    |> List.map (fun s -> System.Uri(s))
    |> List.map getUriContentSize
    |> List.sequenceAsyncM              // <= "M" version
    |> Async.map List.sequenceResultM   // <= "M" version
    |> Async.map (Result.map maxContentSize)
```

This one uses the monadic sequence functions (I won’t show them — the implementation is as you expect).
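For reference, here is a sketch of what the elided monadic Async versions might look like — the same right fold as the applicative versions, but chaining with bind so each async only starts after the previous one finishes (the Async helpers are repeated so the snippet stands alone):

```fsharp
module Async =
    let retn x = async { return x }
    let bind f xAsync = async {
        let! x = xAsync
        return! f x
        }

module List =

    /// Monadic traverse for Async: each async runs only after
    /// the previous one has completed, so there is no parallelism
    let traverseAsyncM f list =
        let (>>=) x f = Async.bind f x
        let retn = Async.retn
        let cons head tail = head :: tail
        let folder head tail =
            f head >>= (fun h ->
            tail >>= (fun t ->
            retn (cons h t)))
        List.foldBack folder list (retn [])

    /// Transform a "list<Async>" into a "Async<list>" monadically
    let sequenceAsyncM x = traverseAsyncM id x
```

sequenceResultM follows the same pattern with Result.bind in place of Async.bind.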

Let’s run largestPageSizeM 10 times with the good sites list and see if there is any difference from the applicative version:

```fsharp
let f() =
    largestPageSizeM goodSites
    |> Async.RunSynchronously
    |> showContentSizeResult
time 10 "largestPageSizeM_Good" f
```

The output is something like this:

```text
[google.com] Started ...
[google.com] ... finished
[bbc.co.uk] Started ...
[bbc.co.uk] ... finished
[fsharp.org] Started ...
[fsharp.org] ... finished
[microsoft.com] Started ...
[microsoft.com] ... finished
SUCCESS: [bbc.co.uk] Content size is 108695
largestPageSizeM_Good: Average time per run:   955ms
```

There is a big difference now — it is obvious that the downloads are happening in series — each one starts only when the previous one has finished.

As a result, the average time is 955ms per run, almost twice that of the applicative version.

Now what about if some of the sites are bad? What should we expect? Well, because it’s monadic, we should expect that after the first error,
the remaining sites are skipped, right? Let’s see if that happens!

```fsharp
let f() =
    largestPageSizeM badSites
    |> Async.RunSynchronously
    |> showContentSizeResult
time 10 "largestPageSizeM_Bad" f
```

The output is something like this:

```text
[example.com] Started ...
[example.com] ... exception
[bad.example.com] Started ...
[bad.example.com] ... exception
[verybad.example.com] Started ...
[verybad.example.com] ... exception
[veryverybad.example.com] Started ...
[veryverybad.example.com] ... exception
FAILURE: ["[example.com] "The remote server returned an error: (404) Not Found.""]
largestPageSizeM_Bad: Average time per run:  2371ms
```

Well, that was unexpected! All of the sites were visited in series, even though the first one had an error. But in that case, why is only the first error returned,
rather than all the errors?

Can you see what went wrong?

Explaining the problem

The reason why the implementation did not work as expected is that the chaining of the Asyncs was independent of the chaining of the Results.

If you step through this in a debugger you can see what is happening:

  • The first Async in the list was run, resulting in a failure.
  • Async.bind was used with the next Async in the list. But Async.bind has no concept of error, so the next Async was run, producing another failure.
  • In this way, all the Asyncs were run, producing a list of failures.
  • This list of failures was then traversed using Result.bind. Of course, because of the bind, only the first one was processed and the rest ignored.
  • The final result was that all the Asyncs were run but only the first failure was returned.


Treating two worlds as one

The fundamental problem is that we are treating the Async list and Result list as separate things to be traversed over.
But that means that a failed Result has no influence on whether the next Async is run.

What we want to do, then, is tie them together so that a bad result does determine whether the next Async is run.

And in order to do that, we need to treat the Async and the Result as a single type — let’s imaginatively call it AsyncResult.

If they are a single type, then bind looks like this:

(diagram: bind for the combined AsyncResult type)

meaning that the previous value will determine the next value.

And also, the “swapping” becomes much simpler:

(diagram: sequencing a list of AsyncResult values in a single step)

Defining the AsyncResult type

OK, let's define the AsyncResult type and its associated map, return, apply and bind functions.

```fsharp
/// type alias (optional)
type AsyncResult<'a> = Async<Result<'a>>

/// functions for AsyncResult
module AsyncResult =

    let map f =
        f |> Result.map |> Async.map

    let retn x =
        x |> Result.retn |> Async.retn

    let apply fAsyncResult xAsyncResult =
        fAsyncResult |> Async.bind (fun fResult ->
        xAsyncResult |> Async.map (fun xResult ->
        Result.apply fResult xResult))

    let bind f xAsyncResult = async {
        let! xResult = xAsyncResult
        match xResult with
        | Success x -> return! f x
        | Failure err -> return (Failure err)
        }
```

Notes:

  • The type alias is optional. We can use Async<Result<'a>> directly in the code and it will work fine. The point is that conceptually AsyncResult is a separate type.
  • The bind implementation is new. The continuation function f is now crossing two worlds, and has the signature 'a -> Async<Result<'b>>.
    • If the inner Result is successful, the continuation function f is evaluated with the result. The return! syntax means that the return value is already lifted.
    • If the inner Result is a failure, we have to lift the failure to an Async.
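To see the short-circuiting in action, here's a small demonstration (the Result type and bind are repeated from earlier so the snippet stands alone; the names `firstStep`, `secondStep`, and the `secondRan` flag are mine, just to prove the second step never runs):

```fsharp
type Result<'a> =
    | Success of 'a
    | Failure of string list

module AsyncResult =
    let bind f xAsyncResult = async {
        let! xResult = xAsyncResult
        match xResult with
        | Success x -> return! f x
        | Failure err -> return (Failure err)
        }

let mutable secondRan = false

// a step that always fails
let firstStep : Async<Result<int>> =
    async { return Failure ["boom"] }

// a step that records that it was run
let secondStep x = async {
    secondRan <- true
    return Success (x + 1)
    }

let result =
    firstStep
    |> AsyncResult.bind secondStep
    |> Async.RunSynchronously

// result is Failure ["boom"] and secondRan is still false:
// the failure stopped the chain before the second step started
```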

Defining the traverse and sequence functions

With bind and return in place, we can create the appropriate traverse and sequence functions for AsyncResult:

```fsharp
module List =

    /// Map an AsyncResult producing function over a list to get a new AsyncResult
    /// using monadic style
    /// ('a -> AsyncResult<'b>) -> 'a list -> AsyncResult<'b list>
    let rec traverseAsyncResultM f list =

        // define the monadic functions
        let (>>=) x f = AsyncResult.bind f x
        let retn = AsyncResult.retn

        // define a "cons" function
        let cons head tail = head :: tail

        // right fold over the list
        let initState = retn []
        let folder head tail =
            f head >>= (fun h ->
            tail >>= (fun t ->
            retn (cons h t) ))

        List.foldBack folder list initState

    /// Transform a "list<AsyncResult>" into a "AsyncResult<list>"
    /// and collect the results using bind.
    let sequenceAsyncResultM x = traverseAsyncResultM id x
```

Defining and testing the downloading functions

Finally, the largestPageSize function is simpler now, with only one sequence needed.

```fsharp
let largestPageSizeM_AR urls =
    urls
    |> List.map (fun s -> System.Uri(s) |> getUriContentSize)
    |> List.sequenceAsyncResultM
    |> AsyncResult.map maxContentSize
```

Let’s run largestPageSizeM_AR 10 times with the good sites list and see if there is any difference from the applicative version:

```fsharp
let f() =
    largestPageSizeM_AR goodSites
    |> Async.RunSynchronously
    |> showContentSizeResult
time 10 "largestPageSizeM_AR_Good" f
```

The output is something like this:

```text
[google.com] Started ...
[google.com] ... finished
[bbc.co.uk] Started ...
[bbc.co.uk] ... finished
[fsharp.org] Started ...
[fsharp.org] ... finished
[microsoft.com] Started ...
[microsoft.com] ... finished
SUCCESS: [bbc.co.uk] Content size is 108510
largestPageSizeM_AR_Good: Average time per run:  1026ms
```

Again, the downloads are happening in series. And again, the time per run is almost twice that of the applicative version.

And now the moment we’ve been waiting for! Will it skip the downloading after the first bad site?

```fsharp
let f() =
    largestPageSizeM_AR badSites
    |> Async.RunSynchronously
    |> showContentSizeResult
time 10 "largestPageSizeM_AR_Bad" f
```

The output is something like this:

```text
[example.com] Started ...
[example.com] ... exception
FAILURE: ["[example.com] "The remote server returned an error: (404) Not Found.""]
largestPageSizeM_AR_Bad: Average time per run:   117ms
```

Success! The error from the first bad site prevented the rest of the downloads, and the short run time is proof of that.

Summary

In this post, we worked through a small practical example. I hope that this example demonstrated that
map, apply, bind, traverse, and sequence are not just academic abstractions but essential tools in your toolbelt.

In the next post we'll work through another practical example, but this time
we will end up creating our own elevated world. See you then!