UPDATED: 11th April 2019

Sponsor me on Patreon to support more content like this.

Read this post in Chinese

Introduction - Part 2: Docker and go-micro

In the previous post, we covered the basics of writing a gRPC-based microservice. In this part, we will cover the basics of Dockerising a service; we will also update our service to use go-micro, and finally introduce a second service.

Introducing Docker

With the advent of cloud computing and the birth of microservices, the pressure to deploy more, but smaller, chunks of code at a time has led to some interesting new ideas and technologies, one of which being the concept of containers.

Traditionally, teams would deploy a monolith to static servers, running a set operating system with a predefined set of dependencies to keep track of, or perhaps to a VM provisioned by Chef or Puppet, for example. Scaling was expensive and not all that effective. The most common option was vertical scaling, i.e. throwing more and more resources at static servers.

Tools like Vagrant came along and made provisioning VMs fairly trivial. But running a VM was still a fairly hefty operation. You were running a full operating system in all its glory, kernel and all, within your host machine. In terms of resources, this is pretty expensive. So when microservices came along, it became infeasible to run so many separate codebases in their own environments.

Along came containers

Containers are slimmed-down versions of an operating system. Containers don't contain a kernel, a guest OS, or any of the lower-level components which would typically make up an OS.

Containers contain only the top-level libraries and run-time components; the kernel is shared with the host machine. So the host machine runs a single Unix kernel, which is then shared by any number of containers, each running very different sets of run-times.

Under the hood, containers utilise various kernel features, such as namespaces and cgroups, in order to share resources and network functionality across the container space.

Further reading

This means you can run the run-time and the dependencies your code needs, without booting up several complete operating systems. This is a game changer, because the overall size of a container vs a VM is orders of magnitude smaller. Ubuntu, for example, is typically a little under 1GB in size, whereas its Docker image counterpart is a mere 188MB.

You will notice I spoke more broadly of containers in that introduction, rather than 'Docker containers'. It's common to think that Docker and containers are the same thing. However, containers are more of a concept or set of capabilities within Linux. Docker is just one flavour of containers, which became popular largely due to its ease of use. There are others, too. But we'll be sticking with Docker, as it's in my opinion the best supported, and the simplest for newcomers.

So now that you hopefully see the value in containerisation, we can start Dockerising our first service. Let's create a Dockerfile: $ touch consignment-service/Dockerfile.

In that file, add the following:

```
FROM alpine:latest

RUN mkdir /app
WORKDIR /app
ADD consignment-service /app/consignment-service

CMD ["./consignment-service"]
```

If you're running on Linux, you might run into issues using Alpine. So if you're following this article on a Linux machine, simply replace alpine with debian, and you should be good to go. We'll touch on an even better way to build our binaries later on.

First of all, we pull in the latest Linux Alpine image. Linux Alpine is a light-weight Linux distribution, developed and optimised for running Dockerised web applications. In other words, Linux Alpine has just enough dependencies and run-time functionality to run most applications. This means its image size is around 8MB(!). Compared with, say, an Ubuntu VM at around 1GB, you can start to see why Docker images became a more natural fit for microservices and cloud computing.

Next we create a new directory to house our application, and set the working directory to our new directory so that it becomes the default directory. We then add our compiled binary into our Docker container, and run it.

Now let's update the build entry of our Makefile to build our Docker image.

```
build:
	...
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .
```

We've added two more steps here, and I'd like to explain them in a little more detail. First of all, we're building our Go binary. You will notice two environment variables are set before we run $ go build, however. GOOS and GOARCH allow you to cross-compile your Go binary for another operating system. Since I'm developing on a Macbook, I can't compile a Go binary natively and then run it within a Docker container, which uses Linux. The binary would be completely meaningless within your Docker container and would throw an error.
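For example, if you have the common file utility installed, you can confirm that the cross-compiled output is a Linux binary rather than a macOS one:

```
$ GOOS=linux GOARCH=amd64 go build
$ file consignment-service
# should report something like: ELF 64-bit LSB executable, x86-64
```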

The second step I added is the docker build process. This reads your Dockerfile and builds an image with the tag shippy-service-consignment; the period denotes a directory path, so here we just want the build process to look in the current directory.

I'm going to add a new entry in our Makefile:

```
run:
	docker run -p 50051:50051 shippy-service-consignment
```

Here, we run our shippy-service-consignment Docker image, forwarding port 50051. Because Docker runs on a separate networking layer, you need to forward the port used within your Docker container to your host. You can forward the internal port to a new port on the host by changing the first segment. For example, if you wanted to run this service on port 8080, you would change the -p argument to 8080:50051. You can also run a container in the background by including a -d flag, for example docker run -d -p 50051:50051 shippy-service-consignment.
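For instance, running the container detached and mapped to a different host port looks like this:

```
# Map host port 8080 to the container's port 50051, running in the background
$ docker run -d -p 8080:50051 shippy-service-consignment

# List running containers to confirm the port mapping
$ docker ps
```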

You can read more about how Docker's networking works here.

Run $ make run, then in a separate terminal pane, run your CLI client again with $ go run main.go, and double-check it still works.

When you run $ docker build, you are building your code and run-time environment into an image. Docker images are portable snapshots of your environment and its dependencies. You can share Docker images by publishing them to Docker Hub, which is a sort of npm or yum repo for Docker images. When you define a FROM in your Dockerfile, you are telling Docker to pull that image from Docker Hub to use as your base. You can then extend and override parts of that base file by re-defining them in your own. We won't be publishing our Docker images, but feel free to peruse Docker Hub, and note how just about any piece of software has been containerised already. Some really remarkable things have been Dockerised.
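If you want to explore Docker Hub from your terminal, the standard Docker CLI lets you pull and inspect images directly:

```
$ docker pull alpine:latest   # fetch a base image from Docker Hub
$ docker images               # list local images, along with their sizes
```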

Each declaration within a Dockerfile is cached when it's first built. This saves having to re-build the entire run-time each time you make a change. Docker is clever enough to work out which parts have changed and which parts need re-building. This makes the build process incredibly quick.
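You can exploit this cache by ordering your Dockerfile so the layers that change least often come first. A hypothetical sketch for a Go service using Go modules:

```
FROM golang:alpine as builder
WORKDIR /app

# Copy only the dependency manifests first; the `go mod download` layer
# below is only re-built when go.mod or go.sum change.
COPY go.mod go.sum ./
RUN go mod download

# Source changes only invalidate the cache from this point onwards.
COPY . .
RUN go build -o app
```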

Enough about containers! Let's get back to our code.

When creating a gRPC service, there's quite a lot of boilerplate code for creating connections, and you have to hard-code the location of the service address into a client, or another service, in order for it to connect. This is tricky, because when you are running services in the cloud, they may not share the same host, and the address or IP may change after a service is re-deployed.

This is where service discovery comes into play. Service discovery keeps an up-to-date catalogue of all your services and their locations. Each service registers itself at runtime, and de-registers itself on closure. Each service then has a name or ID assigned to it, so that even though it may have a new IP address or host address, as long as the service name remains the same, you don't need to update calls to this service from your other services.
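Conceptually, a service registry boils down to something like this hypothetical Go interface (a sketch for illustration only, not go-micro's actual API):

```go
// Registry - a hypothetical service registry. Services register an address
// against a stable name at startup; clients look the name up at call time.
type Registry interface {
	Register(name, address string) error  // called by a service when it starts
	Deregister(name string) error         // called by a service on shutdown
	Lookup(name string) ([]string, error) // returns the current addresses for a name
}
```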

There are many approaches to this problem, but like most things in programming, if someone has tackled it already, there's no point re-inventing the wheel. One person who has tackled these problems with fantastic clarity and ease of use is @chuhnk (Asim Aslam), creator of Go-micro. He is very much a one-man army, producing some fantastic software. Please consider helping him out if you like what you see!

Go-micro

Go-micro is a powerful microservice framework written in Go, for use, for the most part, with Go. However, you can use its sidecar in order to interface with other languages as well.

Go-micro has useful features which make writing microservices in Go trivial. But we'll start with probably the most common problem it solves: service discovery.

We will need to make a few updates to our service in order to work with go-micro. Go-micro integrates as a protoc plugin, in this case replacing the standard gRPC plugin we're currently using. So let's start by replacing that in our Makefile.

Be sure to install the go-micro dependencies:

```
$ go get -u github.com/micro/protobuf/{proto,protoc-gen-go}
```

Then update the build entry in your Makefile:

```
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	...
	...
```
We have updated our Makefile to use the go-micro plugin instead of the gRPC plugin. Now we will need to update our shippy-service-consignment/main.go file to use go-micro. This will abstract away much of our previous gRPC code. It handles registering and spinning up our service with ease.

```go
// shippy-service-consignment/main.go
package main

import (
	"context"
	"fmt"

	// Import the generated protobuf code
	pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
	"github.com/micro/go-micro"
)

type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - Dummy repository, this simulates the use of a datastore
// of some kind. We'll replace this with a real implementation later on.
type Repository struct {
	consignments []*pb.Consignment
}

func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	return consignment, nil
}

func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// service should implement all of the methods to satisfy the service
// we defined in our protobuf definition. You can check the interface
// in the generated code itself for the exact method signatures etc
// to give you a better idea.
type service struct {
	repo repository
}

// CreateConsignment - we created just one method on our service,
// which is a create method, which takes a context and a request as an
// argument; these are handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	// Return matching the `Response` message we created in our
	// protobuf definition.
	res.Created = true
	res.Consignment = consignment
	return nil
}

func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {
	repo := &Repository{}

	// Create a new service. Optionally include some options here.
	srv := micro.NewService(
		// This name must match the package name given in your protobuf definition
		micro.Name("shippy.service.consignment"),
	)

	// Init will parse the command line flags.
	srv.Init()

	// Register handler
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo})

	// Run the server
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
```

The main change here is the way in which we instantiate our gRPC server, which has been abstracted neatly behind a micro.NewService() method that handles registering our service, and finally the srv.Run() function, which handles the connection itself. As before, we register our implementation, but this time using a slightly different method.

The second biggest change is to the service methods themselves: the arguments and response types have changed slightly, so that each method takes both the request and the response structs as arguments, and now returns only an error. Within our methods, we set the response, which is then handled by go-micro.
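To make that concrete, here are the two handler shapes side by side; the first is the plain gRPC form from part one, the second is the go-micro form used above:

```go
// Plain gRPC: the handler builds and returns the response itself.
// CreateConsignment(ctx context.Context, req *pb.Consignment) (*pb.Response, error)

// go-micro: the response is passed in to be populated; only an error is returned.
// CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error
```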

Finally, we are no longer hard-coding the port. Go-micro should be configured using environment variables or command line arguments. To set the address, use MICRO_SERVER_ADDRESS=:50051. By default, Micro utilises mdns (multicast DNS) as the service discovery broker for local use. You wouldn't typically use mdns for service discovery in production, but we want to avoid having to run something like Consul or etcd locally for the sake of testing. More on this in a later post.
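For example, you could run the service directly on your host, outside Docker, like so:

```
# mdns is go-micro's default registry locally, so only the address needs setting
$ MICRO_SERVER_ADDRESS=:50051 go run main.go
```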

Let's update our Makefile to reflect this.

```
run:
	docker run -p 50051:50051 \
		-e MICRO_SERVER_ADDRESS=:50051 \
		shippy-service-consignment
```

The -e flag allows you to pass environment variables into your Docker container. You must have one flag per variable, for example -e ENV=staging -e DB_HOST=localhost, etc.

Now if you run $ make run, you will have a Dockerised service, with service discovery. So let's update our cli tool to utilise this.

```go
import (
	...
	micro "github.com/micro/go-micro"
)

func main() {
	service := micro.NewService(micro.Name("shippy.consignment.cli"))
	service.Init()

	client := pb.NewShippingServiceClient("shippy.service.consignment", service.Client())
	...
}
```

See here for the full file

Here we've imported the go-micro libraries for creating clients, and replaced our existing connection code with the go-micro client code, which uses service resolution instead of connecting directly to an address.
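With the client in hand, making a call looks much as it did before. A minimal sketch, assuming a pb.Consignment value named consignment has been loaded from our JSON file:

```go
// The micro client resolves "shippy.service.consignment" by name via
// service discovery, so no host or port is needed here.
resp, err := client.CreateConsignment(context.Background(), consignment)
if err != nil {
	log.Fatalf("Could not create: %v", err)
}
log.Printf("Created: %t", resp.Created)
```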

However, if you run this, it won't work. This is because we're running our service in a Docker container now, which has its own mdns, separate from the host mdns we are currently using. The easiest way to fix this is to ensure both service and client are running in "dockerland", so that they are both running on the same host and using the same network layer. So let's create a Makefile, consignment-cli/Makefile, and add some entries.

```
build:
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-cli-consignment .

run:
	docker run shippy-cli-consignment
```

Similar to before, we want to build our binary for Linux. When we run our Docker image, mdns is already go-micro's default locally, so there is no extra discovery configuration to pass in.

Now let's create a Dockerfile for our CLI tool:

```
FROM alpine:latest

RUN mkdir -p /app
WORKDIR /app

ADD consignment.json /app/consignment.json
ADD consignment-cli /app/consignment-cli

CMD ["./consignment-cli"]
```

This is very similar to our service's Dockerfile, except that it also copies in our JSON data file.

Now when you run $ make run in your consignment-cli directory, you should see Created: true, the same as before.

Earlier, I mentioned that those of you using Linux should switch to use the Debian base image. Now seems like a good time to take a look at a new feature from Docker: Multi-stage builds. This allows us to use multiple Docker images in one Dockerfile.

This is especially useful in our case, as we can use one image to build our binary, with all the correct dependencies etc., then use a second image to run it. Let's try this out; I will leave detailed comments alongside the code:

```
# shippy-service-consignment/Dockerfile

# We use the official golang image, which contains all the
# correct build tools and libraries. Notice `as builder`,
# this gives this container a name that we can reference later on.
FROM golang:alpine as builder

RUN apk --no-cache add git

# Set our workdir to our current service in the gopath
WORKDIR /app/shippy-service-consignment

# Copy the current code into our workdir
COPY . .

RUN go mod download

# Build the binary, with a few flags which will allow
# us to run this binary in Alpine.
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment

# Here we're using a second FROM statement, which is strange,
# but this tells Docker to start a new build process with this
# image.
FROM alpine:latest

# Security related package, good to have.
RUN apk --no-cache add ca-certificates

# Same as before, create a directory for our app.
RUN mkdir /app
WORKDIR /app

# Here, instead of copying the binary from our host machine,
# we pull the binary from the container named `builder`, within
# this build context. This reaches into our previous image, finds
# the binary we built, and pulls it into this container. Amazing!
COPY --from=builder /app/shippy-service-consignment/shippy-service-consignment .

# Run the binary as per usual! This time with a binary built in a
# separate container, with all of the correct dependencies and
# run time libraries.
CMD ["./shippy-service-consignment"]
```

I will now go through our other Dockerfiles and apply this new approach. Oh, and remember to remove $ go build from your Makefiles!
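For example, the consignment service's build entry might end up looking something like this (a sketch; the multi-stage Docker build now handles compilation):

```
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	docker build -t shippy-service-consignment .
```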

More on multi-stage builds here.

Vessel service

Let's create a second service. We already have a consignment service; our new vessel service will deal with matching a consignment of containers to a vessel which is best suited to that consignment. In order to match our consignment, we need to send the weight and the number of containers to our new vessel service, which will then find a vessel capable of handling that consignment.

Create a new directory in your root directory: $ mkdir shippy-service-vessel. Now create a sub-directory for our new service's protobuf definition, $ mkdir -p shippy-service-vessel/proto/vessel, and a new protobuf file, $ touch shippy-service-vessel/proto/vessel/vessel.proto.

Since the protobuf definition is really the core of our domain design, let's start there.

```
// shippy-service-vessel/proto/vessel/vessel.proto
syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
}
```

As you can see, this is very similar to our first service. We create a service, with a single rpc method called FindAvailable. This takes a Specification type and returns a Response type. The Response type returns either a Vessel type or multiple Vessels, using the repeated field.
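Roughly speaking, protoc turns these messages into Go structs. The generated code contains extra fields and methods, but the shape is approximately this (a simplified sketch, not the actual generated code):

```go
// Response - a simplified sketch of the generated struct.
type Response struct {
	Vessel  *Vessel   // a single matched vessel
	Vessels []*Vessel // the `repeated Vessel` field becomes a slice
}
```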

Now we need to create a Makefile to handle our build logic and our run script. $ touch shippy-service-vessel/Makefile. Open that file and add the following:

```
# shippy-service-vessel/Makefile
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/vessel/vessel.proto
	docker build -t shippy-service-vessel .

run:
	docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 shippy-service-vessel
```

This is almost identical to the first Makefile we created for our consignment service; however, notice the service names and ports have changed a little. We can't run two Docker containers on the same port, so we make use of Docker's port forwarding here to map the container's port 50051 to port 50052 on the host network.

Now we need a Dockerfile, using our new multi-stage format:

```
# shippy-service-vessel/Dockerfile
FROM golang:alpine as builder

RUN apk --no-cache add git

WORKDIR /app/shippy-service-vessel
COPY . .

RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-vessel

FROM alpine:latest

RUN apk --no-cache add ca-certificates

RUN mkdir /app
WORKDIR /app
COPY --from=builder /app/shippy-service-vessel/shippy-service-vessel .

CMD ["./shippy-service-vessel"]
```

Finally, we can start on our implementation:

```go
// shippy-service-vessel/main.go
package main

import (
	"context"
	"errors"
	"fmt"

	pb "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro"
)

type Repository interface {
	FindAvailable(*pb.Specification) (*pb.Vessel, error)
}

type VesselRepository struct {
	vessels []*pb.Vessel
}

// FindAvailable - checks a specification against a list of vessels;
// if the spec's capacity and max weight are below a vessel's capacity
// and max weight, then return that vessel.
func (repo *VesselRepository) FindAvailable(spec *pb.Specification) (*pb.Vessel, error) {
	for _, vessel := range repo.vessels {
		if spec.Capacity <= vessel.Capacity && spec.MaxWeight <= vessel.MaxWeight {
			return vessel, nil
		}
	}
	return nil, errors.New("no vessel found by that spec")
}

// Our grpc service handler
type service struct {
	repo Repository
}

func (s *service) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {
	// Find the next available vessel
	vessel, err := s.repo.FindAvailable(req)
	if err != nil {
		return err
	}

	// Set the vessel as part of the response message type
	res.Vessel = vessel
	return nil
}

func main() {
	// Seed the repository with a single dummy vessel
	vessels := []*pb.Vessel{
		&pb.Vessel{Id: "vessel001", Name: "Boaty McBoatface", MaxWeight: 200000, Capacity: 500},
	}
	repo := &VesselRepository{vessels}

	srv := micro.NewService(
		micro.Name("shippy.service.vessel"),
	)

	srv.Init()

	// Register our implementation with the service
	pb.RegisterVesselServiceHandler(srv.Server(), &service{repo})

	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
```

Now let's get to the interesting part. When we create a consignment, we need to alter our consignment service to call our new vessel service, find a vessel, and update the vessel_id in the created consignment:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"sync"

	pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro"
)

const (
	port = ":50051"
)

type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - Dummy repository, this simulates the use of a datastore
// of some kind. We'll replace this with a real implementation later on.
type Repository struct {
	mu           sync.RWMutex
	consignments []*pb.Consignment
}

// Create a new consignment
func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	repo.mu.Lock()
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	repo.mu.Unlock()
	return consignment, nil
}

// GetAll consignments
func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// service should implement all of the methods to satisfy the service
// we defined in our protobuf definition. You can check the interface
// in the generated code itself for the exact method signatures etc
// to give you a better idea.
type service struct {
	repo         repository
	vesselClient vesselProto.VesselServiceClient
}

// CreateConsignment - we created just one method on our service,
// which is a create method, which takes a context and a request as an
// argument; these are handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	// Here we call a client instance of our vessel service with our consignment weight,
	// and the amount of containers as the capacity value
	vesselResponse, err := s.vesselClient.FindAvailable(context.Background(), &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if err != nil {
		return err
	}
	log.Printf("Found vessel: %s \n", vesselResponse.Vessel.Name)

	// We set the VesselId as the vessel we got back from our
	// vessel service
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	res.Created = true
	res.Consignment = consignment
	return nil
}

// GetConsignments -
func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {
	repo := &Repository{}

	// Set-up micro instance
	srv := micro.NewService(
		micro.Name("shippy.service.consignment"),
	)

	srv.Init()

	// Create a client for the vessel service, resolved by name via service discovery
	vesselClient := vesselProto.NewVesselServiceClient("shippy.service.vessel", srv.Client())

	// Register handlers
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo, vesselClient})

	// Run the server
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
```

Here we've created a client instance for our vessel service. This allows us to use the service name, i.e. shippy.service.vessel, to call the vessel service as a client and interact with its methods, in this case just the one method (FindAvailable). We send our consignment weight, along with the number of containers we want to ship, as a specification to the vessel service, which then returns an appropriate vessel.

Update the consignment-cli/consignment.json file: remove the hardcoded vessel_id, as we want to confirm that our own service is setting it. Let's also add a few more containers and up the weight. For example:

```
{
  "description": "This is a test consignment",
  "weight": 55000,
  "containers": [
    { "customer_id": "cust001", "user_id": "user001", "origin": "Manchester, United Kingdom" },
    { "customer_id": "cust002", "user_id": "user001", "origin": "Derby, United Kingdom" },
    { "customer_id": "cust005", "user_id": "user001", "origin": "Sheffield, United Kingdom" }
  ]
}
```

Now run $ make build && make run in consignment-cli. You should see a response, with a list of created consignments. In your consignments, you should now see a vessel_id has been set.

So there we have it: two inter-connected microservices and a command line interface! In the next part of the series, we will look at persisting some of this data using MongoDB. We will also add a third service, and use docker-compose to manage our growing ecosystem of containers locally.

Check out the repo here for the full example.

Full repos: shippy-service-consignment, shippy-service-vessel, shippy-cli-consignment.

As ever, if you have any feedback, please send it over to [email protected]. Much appreciated!

If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine

Or, sponsor me on Patreon to support more content like this.

Accolades: Docker Newsletter (22nd November 2017).