Parsing URLs

In a realistic web app, we want to show different content for different URLs:

  • /search
  • /search?q=seiza
  • /settings

How do we do that? We use the elm/url package to parse raw URL strings into nice Elm data structures. This package makes the most sense when you just look at examples, so that is what we will do!

Example 1

Say we have an art website where the following addresses should be valid:

  • /topic/architecture
  • /topic/painting
  • /topic/sculpture
  • /blog/42
  • /blog/123
  • /blog/451
  • /user/tom
  • /user/sue
  • /user/sue/comment/11
  • /user/sue/comment/51

So we have topic pages, blog posts, user information, and a way to look up individual user comments. We would use the Url.Parser module to write a URL parser like this:

  import Url.Parser exposing (Parser, (</>), int, map, oneOf, s, string)

  type Route
    = Topic String
    | Blog Int
    | User String
    | Comment String Int

  routeParser : Parser (Route -> a) a
  routeParser =
    oneOf
      [ map Topic   (s "topic" </> string)
      , map Blog    (s "blog" </> int)
      , map User    (s "user" </> string)
      , map Comment (s "user" </> string </> s "comment" </> int)
      ]

  -- /topic/pottery        ==>  Just (Topic "pottery")
  -- /topic/collage        ==>  Just (Topic "collage")
  -- /topic/               ==>  Nothing

  -- /blog/42              ==>  Just (Blog 42)
  -- /blog/123             ==>  Just (Blog 123)
  -- /blog/mosaic          ==>  Nothing

  -- /user/tom/            ==>  Just (User "tom")
  -- /user/sue/            ==>  Just (User "sue")
  -- /user/bob/comment/42  ==>  Just (Comment "bob" 42)
  -- /user/sam/comment/35  ==>  Just (Comment "sam" 35)
  -- /user/sam/comment/    ==>  Nothing
  -- /user/                ==>  Nothing

The Url.Parser module makes it quite concise to fully turn valid URLs into nice Elm data!
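
To actually run a parser like this, the Url.Parser.parse function takes the parser and a Url and gives back a Maybe Route. Here is a small sketch of a helper built on the code above; the toRoute name and the example.com address are just for illustration:

  import Url

  -- run routeParser against a full Url (Url.Parser is already imported above);
  -- Nothing means no route matched
  toRoute : Url.Url -> Maybe Route
  toRoute url =
    Url.Parser.parse routeParser url

  -- Url.fromString "https://example.com/blog/42"
  --   |> Maybe.andThen toRoute
  --     ==>  Just (Blog 42)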

Example 2

Now say we have a personal blog where addresses like this are valid:

  • /blog/12/the-history-of-chairs
  • /blog/13/the-endless-september
  • /blog/14/whale-facts
  • /blog/
  • /blog?q=whales
  • /blog?q=seiza

In this case we have individual blog posts and a blog overview with an optional query parameter. We need to add the Url.Parser.Query module to write our URL parser this time:

  import Url.Parser exposing (Parser, (</>), (<?>), int, map, oneOf, s, string)
  import Url.Parser.Query as Query

  type Route
    = BlogPost Int String
    | BlogQuery (Maybe String)

  routeParser : Parser (Route -> a) a
  routeParser =
    oneOf
      [ map BlogPost  (s "blog" </> int </> string)
      , map BlogQuery (s "blog" <?> Query.string "q")
      ]

  -- /blog/14/whale-facts  ==>  Just (BlogPost 14 "whale-facts")
  -- /blog/14              ==>  Nothing
  -- /blog/whale-facts     ==>  Nothing

  -- /blog/                ==>  Just (BlogQuery Nothing)
  -- /blog                 ==>  Just (BlogQuery Nothing)
  -- /blog?q=chabudai      ==>  Just (BlogQuery (Just "chabudai"))
  -- /blog/?q=whales       ==>  Just (BlogQuery (Just "whales"))
  -- /blog/?query=whales   ==>  Just (BlogQuery Nothing)

The </> and <?> operators let us write parsers that look quite like the actual URLs we want to parse. And adding Url.Parser.Query allowed us to handle query parameters like ?q=seiza.
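
Query parsers can also be combined when an address has more than one parameter. As a sketch beyond the example above (the page parameter and the BlogFilters record are hypothetical additions of our own), something like ?q=whales&page=2 could be handled with Query.map2:

  -- combine two query parsers into one record
  type alias BlogFilters =
    { query : Maybe String
    , page : Maybe Int
    }

  blogFiltersParser : Query.Parser BlogFilters
  blogFiltersParser =
    Query.map2 BlogFilters (Query.string "q") (Query.int "page")

  -- this slots in after <?> exactly where Query.string "q" was used above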

Example 3

Okay, now we have a documentation website with addresses like this:

  • /Basics
  • /Maybe
  • /List
  • /List#map
  • /List#filter
  • /List#foldl

We can use the fragment parser from Url.Parser to handle these addresses like this:

  import Url.Parser exposing (Parser, (</>), fragment, map, string)

  type alias Docs =
    (String, Maybe String)

  docsParser : Parser (Docs -> a) a
  docsParser =
    map Tuple.pair (string </> fragment identity)

  -- /Basics     ==>  Just ("Basics", Nothing)
  -- /Maybe      ==>  Just ("Maybe", Nothing)
  -- /List       ==>  Just ("List", Nothing)
  -- /List#map   ==>  Just ("List", Just "map")
  -- /List#      ==>  Just ("List", Just "")
  -- /List/map   ==>  Nothing
  -- /           ==>  Nothing

So now we can handle URL fragments as well!
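
Going the other direction can be handy too. As a quick sketch (the docsToString helper is a name of our own, not something from the package), a Docs value can be turned back into an address:

  -- ("List", Just "map") ==> "/List#map"   ("Basics", Nothing) ==> "/Basics"
  docsToString : Docs -> String
  docsToString ( moduleName, maybeFragment ) =
    case maybeFragment of
      Nothing ->
        "/" ++ moduleName

      Just frag ->
        "/" ++ moduleName ++ "#" ++ frag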

Synthesis

Now that we have seen a few parsers, we should look at how this fits into a Browser.application program. Rather than just saving the current URL like last time, can we parse it into useful data and show that instead?

  TODO
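
That program is left as a TODO here, but as a rough sketch of the shape it could take, the following Browser.application reuses a trimmed-down routeParser from Example 1; the NotFound variant, the page title, and the placeholder text in view are our own additions:

  module Main exposing (main)

  import Browser
  import Browser.Navigation as Nav
  import Html exposing (text)
  import Url
  import Url.Parser exposing (Parser, (</>), int, map, oneOf, s, string)

  -- ROUTES

  type Route
    = Topic String
    | Blog Int
    | NotFound

  routeParser : Parser (Route -> a) a
  routeParser =
    oneOf
      [ map Topic (s "topic" </> string)
      , map Blog (s "blog" </> int)
      ]

  toRoute : Url.Url -> Route
  toRoute url =
    Maybe.withDefault NotFound (Url.Parser.parse routeParser url)

  -- MODEL

  type alias Model =
    { key : Nav.Key
    , route : Route
    }

  init : () -> Url.Url -> Nav.Key -> ( Model, Cmd Msg )
  init _ url key =
    ( Model key (toRoute url), Cmd.none )

  -- UPDATE

  type Msg
    = LinkClicked Browser.UrlRequest
    | UrlChanged Url.Url

  update : Msg -> Model -> ( Model, Cmd Msg )
  update msg model =
    case msg of
      LinkClicked (Browser.Internal url) ->
        ( model, Nav.pushUrl model.key (Url.toString url) )

      LinkClicked (Browser.External href) ->
        ( model, Nav.load href )

      UrlChanged url ->
        -- (1) parse the URL into a Route whenever it changes
        ( { model | route = toRoute url }, Cmd.none )

  -- VIEW

  view : Model -> Browser.Document Msg
  view model =
    { title = "My Site"
    , body =
        -- (2) show different content for different addresses
        [ case model.route of
            Topic name ->
              text ("Topic: " ++ name)

            Blog id ->
              text ("Blog post " ++ String.fromInt id)

            NotFound ->
              text "Page not found!"
        ]
    }

  -- MAIN

  main : Program () Model Msg
  main =
    Browser.application
      { init = init
      , view = view
      , update = update
      , subscriptions = \_ -> Sub.none
      , onUrlRequest = LinkClicked
      , onUrlChange = UrlChanged
      }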

The major new things are:

  1. Our update parses the URL when it gets a UrlChanged message.
  2. Our view function shows different content for different addresses!

It is really not too fancy. Nice!

But what happens when you have 10 or 20 or 100 different pages? Does it all go in this one view function? Surely it cannot be all in one file. How many files should it be in? What should be the directory structure? That is what we will discuss next!