Go Microservices, part 8 - centralized configuration with Viper and Spring Cloud Config

15 May 2017 // Erik Lupander

In part 8 of the blog series, we’ll explore centralized configuration for Go microservices with Spring Cloud Config.

Introduction

Centralizing something when dealing with microservices may seem a bit off, given that microservices are, after all, about decomposing your system into separate, independent pieces of software. However, what we're typically after is isolation of processes. Other aspects of microservice operations should be dealt with in a centralized way. For example, logs should end up in your logging solution such as the ELK stack, monitoring goes into a dedicated monitoring solution - and in this part of the blog series, we'll deal with externalized and centralized configuration using Spring Cloud Config and git.

Handling configuration for the various microservices that our application consists of in a centralized manner is actually quite natural as well. Especially when running in a containerized environment on an unknown number of underlying hardware nodes, managing config files built into each microservice image or from mounted volumes can quickly become a real headache. There are a number of proven projects that help deal with this, for example etcd, consul and zookeeper. However, it should be noted that those projects provide a lot more than just serving configuration. Since this blog series focuses on integrating Go microservices with the Spring Cloud / Netflix OSS ecosystem of supporting services, we'll be basing our centralized configuration on Spring Cloud Configuration, a piece of software dedicated to providing exactly that.

Spring Cloud Config

The Spring Cloud ecosystem provides a solution for centralized configuration not-so-creatively named Spring Cloud Config. The Spring Cloud Config server can be viewed as a proxy between your services and their actual configuration, providing a number of really neat features such as:

  • Support for several different configuration backends such as git (default), file systems and plugins for using etcd, consul and zookeeper as stores.
  • Transparent decryption of encrypted properties.
  • Pluggable security
  • Push mechanism using git hooks / REST API and Spring Cloud Bus (e.g. RabbitMQ) to propagate changes in config files to services, making live reload of configuration possible.

For a more in-depth article about Spring Cloud Config in particular, take a look at my colleague Magnus's recent blog post.

In this blog post, we will integrate our “accountservice” with a Spring Cloud Config server backed by a public git repository on github, from which we’ll fetch configuration, encrypt/decrypt a property and also implement live-reload of config properties.

Here’s a simple overview of the overall solution we’re aiming for:

configserver.png

Overview

Since we’re running Docker in Swarm mode, we’ll continue using Docker mechanics in various ways. Inside the Swarm, we should run at least one (preferably more) instance of the Spring Cloud Configuration server. When one of our microservices starts up, all it needs to know is the following:

  • The logical service name and port of the config server. I.e. - since we’re deploying our config servers on Docker Swarm as services, let’s say we name that service “configserver”. That name is then the only addressing information a microservice needs in order to make a request for its configuration.
  • What their name is, e.g. “accountservice”
  • What execution profile it is running, e.g. “dev”, “test” or “prod”. If you’re familiar with the concept of spring.profiles.active, this is a home-brewed counterpart we can use for Go.
  • (Optional) If we’re using git as backend and want to fetch configuration from a particular branch, that needs to be known up front.

Given the four criteria above, a sample GET request for configuration could look like this in Go code:

  resp, err := http.Get("http://configserver:8888/accountservice/dev/P8")

I.e:

  protocol://url:port/applicationName/profile/branch

Setting up a Spring Cloud Configuration server in your Swarm

For part 8, you’ll probably want to clone branch P8 since it includes the source for the config server:

  git clone https://github.com/callistaenterprise/goblog.git
  cd goblog
  git checkout P8

You could probably set up and deploy the config server in other ways. However, for simplicity I’ve prepared a /support folder in the root /goblog folder of the source code repository of the blog series which will contain the requisite 3rd party services we’ll need further on.

Typically, each required support component will either be a simple Dockerfile for conveniently building and deploying components which we can use out of the box, or it will be (java) source code and configuration (Spring Cloud applications are usually based on Spring Boot) we’ll need to build ourselves using gradle. (No worries, all you need is to have a JDK installed).

(Most of these Spring Cloud applications were prepared by my colleague Magnus for his microservices blog series.)

Let’s get started with the config server, shall we?

RabbitMQ

What? Weren’t we about to install Spring Cloud Configuration server? Well - that piece of software depends on having a message broker to propagate configuration changes using Spring Cloud Bus backed by RabbitMQ. Having RabbitMQ around is a very good thing anyway which we’ll be using in a later blog post so we’ll start by getting RabbitMQ up and running as a service in our Swarm.

I’ve prepared a Dockerfile inside /goblog/support/rabbitmq to use a pre-baked image which we’ll deploy as a Docker Swarm service.

We’ll create a new bash (.sh) script to automate things for us if/when we need to update things.

In the root /goblog folder, create a new file support.sh:

  #!/bin/bash
  # RabbitMQ
  docker service rm rabbitmq
  docker build -t someprefix/rabbitmq support/rabbitmq/
  docker service create --name=rabbitmq --replicas=1 --network=my_network -p 1883:1883 -p 5672:5672 -p 15672:15672 someprefix/rabbitmq

(You may need to chmod it to make it executable)

Run it and wait while Docker downloads the necessary images and deploys RabbitMQ into your Swarm. When it’s done, you should be able to open the RabbitMQ Admin GUI and log in using guest/guest at:

  open http://$ManagerIP:15672/#/

Your web browser should open and display something like this:

(screenshot: RabbitMQ admin GUI)

If you see the RabbitMQ admin GUI, we can be fairly sure it works as advertised.

Spring Cloud Configuration server

In /support/config-server you’ll find a Spring Boot application pre-configured to run the config server. We’ll be using a git repository for storing and accessing our configuration using yaml files.

Feel free to take a look at /goblog/support/config-server/src/main/resources/application.yml which is the config file of the config server:

  ---
  # For deployment in Docker containers
  spring:
    profiles: docker
    cloud:
      config:
        server:
          git:
            uri: https://github.com/eriklupander/go-microservice-config.git

  # Home-baked keystore for encryption. Of course, a real environment wouldn't expose passwords in a blog...
  encrypt:
    key-store:
      location: file:/server.jks
      password: letmein
      alias: goblogkey
      secret: changeme

  # Since we're running in Docker Swarm mode, disable Eureka Service Discovery
  eureka:
    client:
      enabled: false

  # Spring Cloud Config requires rabbitmq, use the service name.
  spring.rabbitmq.host: rabbitmq
  spring.rabbitmq.port: 5672

We see a few things:

  • We’re telling the config-server to fetch configuration from our git-repo at the specified URI.
  • A keystore for encryption (self-signed) and decryption (we’ll get back to that)
  • Since we’re running in Docker Swarm mode, Eureka Service Discovery is disabled.
  • The config server is expecting to find a rabbitmq host at “rabbitmq”, which just happens to be the Docker Swarm service name we just gave our RabbitMQ service.

The Dockerfile for the config-server is quite simple:

  FROM davidcaste/alpine-java-unlimited-jce
  EXPOSE 8888
  ADD ./build/libs/*.jar app.jar
  ADD ./server.jks /
  ENTRYPOINT ["java","-Dspring.profiles.active=docker","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

(Never mind that java.security.egd stuff, it’s a workaround for a problem we don’t care about in this blog series)

A few things of note here:

  • We’re using a base Docker image based on Alpine Linux that has the Java unlimited cryptography extension installed; this is a requirement if we want to use the encryption/decryption features of Spring Cloud Config.
  • A home-baked keystore is added to the root folder of the container image.

Build the keystore

To use encrypted properties later on, we’ll configure the config server with a self-signed certificate. (You’ll need to have keytool on your PATH).

In the /goblog/support/config-server/ folder, run:

  keytool -genkeypair -alias goblogkey -keyalg RSA \
    -dname "CN=Go Blog,OU=Unit,O=Organization,L=City,S=State,C=SE" \
    -keypass changeme -keystore server.jks -storepass letmein \
    -validity 730

This should create server.jks. Feel free to modify any properties/passwords, just remember to update application.yml accordingly!

Build and deploy

Time to build and deploy the server. Let’s create a shell script to save us time if or when we need to do this again. Remember - you need a Java Runtime Environment to build this! In the /goblog folder, create a file named springcloud.sh. We will put everything that actually needs building (and that may take some time) in there:

  #!/bin/bash
  cd support/config-server
  ./gradlew build
  cd ../..
  docker build -t someprefix/configserver support/config-server/
  docker service rm configserver
  docker service create --replicas 1 --name configserver -p 8888:8888 --network my_network --update-delay 10s --with-registry-auth --update-parallelism 1 someprefix/configserver

Run it from the /goblog folder (you may need to chmod +x first):

  > ./springcloud.sh

This may take a while. Give it a minute or two and then check whether it is up and running using docker service ls:

  > docker service ls
  ID            NAME            MODE        REPLICAS  IMAGE
  39d26cc3zeor  rabbitmq        replicated  1/1       someprefix/rabbitmq
  eu00ii1zoe76  viz             replicated  1/1       manomarks/visualizer:latest
  q36gw6ee6wry  accountservice  replicated  1/1       someprefix/accountservice
  t105u5bw2cld  quotes-service  replicated  1/1       eriklupander/quotes-service:latest
  urrfsu262e9i  dvizz           replicated  1/1       eriklupander/dvizz:latest
  w0jo03yx79mu  configserver    replicated  1/1       someprefix/configserver

Try to manually load the “accountservice” configuration as JSON using curl:

  > curl http://$ManagerIP:8888/accountservice/dev/master
  {"name":"accountservice","profiles":["dev"],"label":"master","version":"b8cfe2779e9604804e625135b96b4724ea378736",
   "propertySources":[
     {"name":"https://github.com/eriklupander/go-microservice-config.git/accountservice-dev.yml",
      "source":
        {"server_port":6767,"server_name":"Accountservice DEV"}
     }]
  }

(Formatted for brevity)

The actual configuration is stored within the “source” property where all values from the .yml file will appear as key-value pairs. Loading and parsing the “source” property into usable configuration in Go is the centerpiece of this blog post.

The YAML config files

Before moving on to Go code, let’s take a look inside the root folder of the P8 branch of the configuration-repo:

  accountservice-dev.yml
  accountservice-test.yml

Both these files are currently very sparsely populated:

  server_port: 6767
  server_name: Accountservice TEST
  the_password: (we'll get back to this one)

Not much is configured at this point beyond the HTTP port we want our service to bind to. A real service will probably have a lot more stuff in it.

Using encryption/decryption

One really neat thing about Spring Cloud Config is its built-in support for transparently decrypting values encrypted directly in the configuration files. For example, take a look at accountservice-test.yml where we have a dummy “the_password” property:

  server_port: 6767
  server_name: Accountservice TEST
  the_password: '{cipher}AQB1BMFCu5UsCcTWUwEQt293nPq0ElEFHHp5B2SZY8m4kUzzqxOFsMXHaH7SThNNjOUDGxRVkpPZEkdgo6aJFSPRzVF04SXOVZ6Rjg6hml1SAkLy/k1R/E0wp0RrgySbgh9nNEbhzqJz8OgaDvRdHO5VxzZGx8uj5KN+x6nrQobbIv6xTyVj9CSqJ/Btf/u1T8/OJ54vHwi5h1gSvdox67teta0vdpin2aSKKZ6w5LyQocRJbONUuHyP5roCONw0pklP+2zhrMCy0mXhCJSnjoHvqazmPRUkyGcjcY3LHjd39S2eoyDmyz944TKheI6rWtCfozLcIr/wAZwOTD5sIuA9q8a9nG2GppclGK7X649aYQynL+RUy1q7T7FbW/TzSBg='

By prefixing the encrypted string with {cipher}, our Spring Cloud configuration server will know how to automatically decrypt the value for us before passing the result to the service. In a running instance with everything configured correctly, a curl request to the REST API to fetch this config would return:

  ...
  "source": {
      "server_port": 6767,
      "server_name": "Accountservice TEST",
      "the_password": "password"
  ....

Pretty neat, right? The “the_password” property can be stored as an encrypted string in clear text on a public server (if you trust the encryption algorithm and the integrity of your signing key), and the Spring Cloud Config server (which must not, under any circumstances, be made available unsecured and/or visible outside of your internal cluster!!) transparently decrypts the property into the actual value ‘password’.

Of course, you need to encrypt the value using the same key as Spring Cloud Config is using for decryption, something that can be done over the config server’s HTTP API:

  curl http://$ManagerIP:8888/encrypt -d 'password'
  AQClKEMzqsGiVpKx+Vx6vz+7ww00n... (rest omitted for brevity)
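If you want to sanity-check the round trip, the config server also exposes a corresponding /decrypt endpoint (assuming the keystore is configured as shown above); feed it the ciphertext you got back from /encrypt and you should get the plaintext back:

  curl http://$ManagerIP:8888/decrypt -d 'AQClKEMzqsGiVpKx+Vx6vz+7ww00n...'
  password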

Viper

Our Go-based configuration framework of choice is Viper. Viper has a nice API to work with, is extensible and doesn’t get in the way of our normal application code. While Viper doesn’t natively support loading configuration from Spring Cloud Configuration servers, we’ll write a short snippet of code that does this for us. Viper also handles many file types as config sources - for example JSON, YAML and plain properties files - and it can read environment variables from the OS for us, which can be quite neat. Once initialized and populated, our configuration is always available through the various viper.Get* functions. Very convenient, indeed.
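To get a feel for the Viper API before we wire it up to Spring Cloud Config, here’s a minimal, self-contained sketch (not part of the accountservice code) showing how programmatically set values and OS environment variables both end up behind the same viper.Get* calls:

  package main

  import (
      "fmt"

      "github.com/spf13/viper"
  )

  func main() {
      // Values can be put into Viper programmatically...
      viper.Set("server_port", "6767")

      // ...and Viper can also pick up OS environment variables automatically,
      // e.g. an exported SERVER_NAME becomes available under the key "server_name".
      viper.AutomaticEnv()

      fmt.Println(viper.GetString("server_port")) // "6767"
      fmt.Println(viper.GetString("server_name")) // whatever SERVER_NAME is set to, or ""
  }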

Remember the picture at the top of this blog post? Well, if not - here it is again:

configserver.png

We’ll make our microservices do an HTTP request on start, extract the “source” part of the JSON response and stuff that into Viper so we can get the HTTP port for our web server there. Let’s go!

Loading the configuration

As already demonstrated using curl, we can do a plain HTTP request to the config server where we just need to know our name and our “profile”. We’ll start by adding some parsing of flags to our “accountservice” main.go so we can specify an environment “profile” when starting as well as an optional URI to the config server:

  var appName = "accountservice"

  // Init function, runs before main()
  func init() {
      // Read command line flags
      profile := flag.String("profile", "test", "Environment profile, something similar to spring profiles")
      configServerUrl := flag.String("configServerUrl", "http://configserver:8888", "Address to config server")
      configBranch := flag.String("configBranch", "master", "git branch to fetch configuration from")
      flag.Parse()

      // Pass the flag values into viper.
      viper.Set("profile", *profile)
      viper.Set("configServerUrl", *configServerUrl)
      viper.Set("configBranch", *configBranch)
  }

  func main() {
      fmt.Printf("Starting %v\n", appName)

      // NEW - load the config
      config.LoadConfigurationFromBranch(
          viper.GetString("configServerUrl"),
          appName,
          viper.GetString("profile"),
          viper.GetString("configBranch"))

      initializeBoltClient()
      service.StartWebServer(viper.GetString("server_port")) // NEW, use port from loaded config
  }

The config.LoadConfigurationFromBranch(..) function goes into a new package we’re calling config. Create /goblog/accountservice/config and the following file named loader.go:

  package config

  import (
      "encoding/json"
      "fmt"
      "io/ioutil"
      "net/http"

      "github.com/spf13/viper"
  )

  // Loads config from for example http://configserver:8888/accountservice/test/P8
  func LoadConfigurationFromBranch(configServerUrl string, appName string, profile string, branch string) {
      url := fmt.Sprintf("%s/%s/%s/%s", configServerUrl, appName, profile, branch)
      fmt.Printf("Loading config from %s\n", url)
      body, err := fetchConfiguration(url)
      if err != nil {
          panic("Couldn't load configuration, cannot start. Terminating. Error: " + err.Error())
      }
      parseConfiguration(body)
  }

  // Make HTTP request to fetch configuration from config server
  func fetchConfiguration(url string) ([]byte, error) {
      resp, err := http.Get(url)
      if err != nil {
          panic("Couldn't load configuration, cannot start. Terminating. Error: " + err.Error())
      }
      defer resp.Body.Close()
      body, err := ioutil.ReadAll(resp.Body)
      return body, err
  }

  // Pass JSON bytes into struct and then into Viper
  func parseConfiguration(body []byte) {
      var cloudConfig springCloudConfig
      err := json.Unmarshal(body, &cloudConfig)
      if err != nil {
          panic("Cannot parse configuration, message: " + err.Error())
      }

      for key, value := range cloudConfig.PropertySources[0].Source {
          viper.Set(key, value)
          fmt.Printf("Loading config property %v => %v\n", key, value)
      }
      if viper.IsSet("server_name") {
          fmt.Printf("Successfully loaded configuration for service %s\n", viper.GetString("server_name"))
      }
  }

  // Structs having same structure as response from Spring Cloud Config
  type springCloudConfig struct {
      Name            string           `json:"name"`
      Profiles        []string         `json:"profiles"`
      Label           string           `json:"label"`
      Version         string           `json:"version"`
      PropertySources []propertySource `json:"propertySources"`
  }

  type propertySource struct {
      Name   string                 `json:"name"`
      Source map[string]interface{} `json:"source"`
  }

Basically, we’re doing that HTTP GET to the config server with our appName, profile and git branch, then unmarshalling the response JSON into the springCloudConfig struct we’re declaring in the same file. Finally, we’re simply iterating over all the key-value pairs in the cloudConfig.PropertySources[0] and stuffing each pair into viper so we can access them whenever we want using viper.GetString(key) or another of the typed getters the Viper API provides.

Note that if we have an issue contacting the configuration server, or parsing its response, we panic() the entire microservice, which will kill it. Docker Swarm will detect this and try to deploy a new instance within a few seconds. The typical reason for this behaviour is starting your cluster from cold, where the Go-based microservice starts much faster than the Spring Boot-based config server does. Let Swarm retry a few times and things should sort themselves out.
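If you would rather have the service wait for the config server instead of relying on Swarm restarts, a simple retry loop is one option. Here’s a minimal sketch (not part of the blog’s code; the function name, retry count and interval are arbitrary, and it needs the "time" package in addition to the imports already in loader.go):

  // fetchConfigurationWithRetry is a sketch of an alternative to panicking right away:
  // retry the HTTP GET a few times to survive the cold-start race where the Go service
  // comes up before the Spring Boot-based config server does.
  func fetchConfigurationWithRetry(url string, retries int) ([]byte, error) {
      var lastErr error
      for i := 0; i < retries; i++ {
          resp, err := http.Get(url)
          if err == nil {
              defer resp.Body.Close()
              return ioutil.ReadAll(resp.Body)
          }
          lastErr = err
          fmt.Printf("Config server not reachable (%v), retrying in 5 seconds...\n", err)
          time.Sleep(5 * time.Second)
      }
      return nil, lastErr
  }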

We’ve split the actual work up into one public function and a few package-scoped ones for easier unit testing. The unit test that checks that we can transform JSON into actual Viper properties looks like this, using the GoConvey style of tests:

  func TestParseConfiguration(t *testing.T) {
      Convey("Given a JSON configuration response body", t, func() {
          var body = `{"name":"accountservice-dev","profiles":["dev"],"label":null,"version":null,"propertySources":[{"name":"file:/config-repo/accountservice-dev.yml","source":{"server_port":6767}}]}`

          Convey("When parsed", func() {
              parseConfiguration([]byte(body))

              Convey("Then Viper should have been populated with values from Source", func() {
                  So(viper.GetString("server_port"), ShouldEqual, "6767")
              })
          })
      })
  }

Run from goblog/accountservice if you want to:

  > go test ./...

Updates to the Dockerfile

Given that we’re loading the configuration from an external source, our service needs a hint about where to find it. That’s done by passing flags as command-line arguments when starting the container and service:

goblog/accountservice/Dockerfile:

  FROM iron/base
  EXPOSE 6767
  ADD accountservice-linux-amd64 /
  ADD healthchecker-linux-amd64 /
  HEALTHCHECK --interval=3s --timeout=3s CMD ["./healthchecker-linux-amd64", "-port=6767"] || exit 1
  ENTRYPOINT ["./accountservice-linux-amd64", "-configServerUrl=http://configserver:8888", "-profile=test", "-configBranch=P8"]

Our ENTRYPOINT now supplies values making it possible to configure from where to load configuration.

Into the Swarm

You probably noted that we’re not using 6767 as a hard-coded port number anymore, i.e.:

  service.StartWebServer(viper.GetString("server_port"))

Use the copyall.sh script to build and redeploy the updated “accountservice” into Docker Swarm:

  > ./copyall.sh

After everything’s finished, the service should still be running exactly as it did before you started on this part of the blog series, with the exception that it actually picked its HTTP port from an external and centralized configuration server rather than being hard-coded into the compiled binary.

(Do note that the ports exposed in Dockerfiles, HEALTHCHECK CMDs and Docker Swarm “docker service create” statements don’t know anything about config servers. In a CI/CD pipeline, you’d probably externalize the relevant properties so they can be injected by the build server at build time.)

Let’s take a look at the log output of our accountservice:

  > docker logs -f [containerid]
  Starting accountservice
  Loading config from http://configserver:8888/accountservice/test/P8
  Loading config property the_password => password
  Loading config property server_port => 6767
  Loading config property server_name => Accountservice TEST
  Successfully loaded configuration for service Accountservice TEST

(Actually printing config values like this is bad practice; the output above is just for educational purposes!)

Live configuration updates

  - "Oh, did that external service we're using for [some purpose] change their URL?"
  - "Darn. No one told us!!"

I assume many of us have encountered situations where we need to either rebuild an entire application or at least restart it to update some invalid or changed configuration value. Spring Cloud has the concept of @RefreshScope, where beans can be live-updated with changed configuration propagated from a git commit hook.

This figure provides an overview of how a push to a git repo is propagated to our Go-based microservices:

/assets/blogg/goblog/part8-springcloudpush.png

In this blog post, we’re using a GitHub repo which has absolutely no way of knowing how to perform a post-commit hook operation against my laptop’s Spring Cloud Config server, so we’ll emulate a commit hook push using the built-in /monitor endpoint of our Spring Cloud Config server.

  curl -H "X-Github-Event: push" -H "Content-Type: application/json" -X POST -d '{"commits": [{"modified": ["accountservice.yml"]}],"name":"some name..."}' -ki http://$ManagerIP:8888/monitor

The Spring Cloud Config server will know what to do with this POST and send out a RefreshRemoteApplicationEvent on an exchange on RabbitMQ (abstracted by Spring Cloud Bus). If we take a look at the RabbitMQ admin GUI after having booted Spring Cloud Config successfully, that exchange should have been created:

(screenshot: the springCloudBus exchange listed in the RabbitMQ admin GUI)

How does an exchange relate to more traditional messaging constructs such as publisher, consumer and queue?

  Publisher -> Exchange -> (Routing) -> Queue -> Consumer

I.e - a message is published to an exchange, which then distributes message copies to queue(s) based on routing rules and bindings which may have registered consumers.

So in order to consume RefreshRemoteApplicationEvent messages (I prefer to call them refresh tokens), all we have to do now is make our Go service listen for such messages on the springCloudBus exchange and if we are the targeted application, perform a configuration reload. Let’s do that.

Using the AMQP protocol to consume messages in Go

The RabbitMQ broker can be accessed using the AMQP protocol. There’s a good Go AMQP client we’re going to use called streadway/amqp. Most of the AMQP / RabbitMQ plumbing code should go into some reusable utility; perhaps we’ll refactor that later on. The plumbing code is based on this example from the streadway/amqp repo.
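For context, here’s a rough sketch of the essential streadway/amqp calls that the plumbing wraps: declare the exchange, declare and bind a queue, then consume deliveries and hand them to a handler (the handleRefreshEvent function shown further down). This is not the blog’s actual NewConsumer code - the function name is made up, the names otherwise match the ones used in this post, and the declare flags may need tweaking to match how Spring Cloud Bus originally declared the exchange:

  import (
      "log"

      "github.com/streadway/amqp"
  )

  // consumeConfigEvents is a sketch, not the blog's NewConsumer.
  func consumeConfigEvents(amqpServerUrl string, exchangeName string, appName string) error {
      conn, err := amqp.Dial(amqpServerUrl)
      if err != nil {
          return err
      }
      ch, err := conn.Channel()
      if err != nil {
          return err
      }

      // Topic exchange that Spring Cloud Bus publishes refresh events to.
      if err := ch.ExchangeDeclare(exchangeName, "topic", true, false, false, false, nil); err != nil {
          return err
      }

      // Our own queue, bound to the exchange with a catch-all routing key.
      q, err := ch.QueueDeclare("config-event-queue", false, false, false, false, nil)
      if err != nil {
          return err
      }
      if err := ch.QueueBind(q.Name, "#", exchangeName, false, nil); err != nil {
          return err
      }

      // Consume deliveries; each Body holds the JSON event.
      deliveries, err := ch.Consume(q.Name, appName, true, false, false, false, nil)
      if err != nil {
          return err
      }
      for d := range deliveries {
          log.Printf("received %d bytes on %s", len(d.Body), exchangeName)
          handleRefreshEvent(d.Body, appName)
      }
      return nil
  }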

In /goblog/accountservice/main.go, add a new line inside the main() function that will start an AMQP consumer for us:

  func main() {
      fmt.Printf("Starting %v\n", appName)

      config.LoadConfigurationFromBranch(
          viper.GetString("configServerUrl"),
          appName,
          viper.GetString("profile"),
          viper.GetString("configBranch"))

      initializeBoltClient()

      // NEW
      go config.StartListener(appName, viper.GetString("amqp_server_url"), viper.GetString("config_event_bus"))

      service.StartWebServer(viper.GetString("server_port"))
  }

Note the new amqp_server_url and config_event_bus properties; they’re loaded from the accountservice-test.yml configuration file we’re loading.

The StartListener function goes into a new file /goblog/accountservice/config/events.go. This file has a lot of AMQP boilerplate which we’ll skip, so we can concentrate on the interesting parts:

  func StartListener(appName string, amqpServer string, exchangeName string) {
      err := NewConsumer(amqpServer, exchangeName, "topic", "config-event-queue", exchangeName, appName)
      if err != nil {
          log.Fatalf("%s", err)
      }

      log.Printf("running forever")
      select {} // Yet another way to stop a Goroutine from finishing...
  }

The NewConsumer function is where all the boilerplate goes. We’ll skip down to the code that actually processes an incoming message:

  func handleRefreshEvent(body []byte, consumerTag string) {
      updateToken := &UpdateToken{}
      err := json.Unmarshal(body, updateToken)
      if err != nil {
          log.Printf("Problem parsing UpdateToken: %v", err.Error())
      } else {
          if strings.Contains(updateToken.DestinationService, consumerTag) {
              log.Println("Reloading Viper config from Spring Cloud Config server")

              // Consumertag is same as application name.
              LoadConfigurationFromBranch(
                  viper.GetString("configServerUrl"),
                  consumerTag,
                  viper.GetString("profile"),
                  viper.GetString("configBranch"))
          }
      }
  }

  type UpdateToken struct {
      Type               string `json:"type"`
      Timestamp          int    `json:"timestamp"`
      OriginService      string `json:"originService"`
      DestinationService string `json:"destinationService"`
      Id                 string `json:"id"`
  }

This code tries to parse the inbound message into an UpdateToken struct and if the destinationService matches our consumerTag (i.e. the appName “accountservice”), we’ll call the same LoadConfigurationFromBranch function initially called when the service started.

Please note that in a real-life scenario, the NewConsumer function and general message handling code would need more work with error handling, making sure only the appropriate messages are processed etc.

Unit testing

Let’s write a unit test for the handleRefreshEvent() function. Create a new test file /goblog/accountservice/config/events_test.go:

  var SERVICE_NAME = "accountservice"

  func TestHandleRefreshEvent(t *testing.T) {
      // Configure initial viper values
      viper.Set("configServerUrl", "http://configserver:8888")
      viper.Set("profile", "test")
      viper.Set("configBranch", "master")

      // Mock the expected outgoing request for new config
      defer gock.Off()
      gock.New("http://configserver:8888").
          Get("/accountservice/test/master").
          Reply(200).
          BodyString(`{"name":"accountservice-test","profiles":["test"],"label":null,"version":null,"propertySources":[{"name":"file:/config-repo/accountservice-test.yml","source":{"server_port":6767,"server_name":"Accountservice RELOADED"}}]}`)

      Convey("Given a refresh event received, targeting our application", t, func() {
          var body = `{"type":"RefreshRemoteApplicationEvent","timestamp":1494514362123,"originService":"config-server:docker:8888","destinationService":"accountservice:**","id":"53e61c71-cbae-4b6d-84bb-d0dcc0aeb4dc"}`

          Convey("When handled", func() {
              handleRefreshEvent([]byte(body), SERVICE_NAME)

              Convey("Then Viper should have been re-populated with values from Source", func() {
                  So(viper.GetString("server_name"), ShouldEqual, "Accountservice RELOADED")
              })
          })
      })
  }

I hope the BDD style of GoConvey conveys (pun intended!) how the test works. Note, though, how we use gock to intercept the outgoing HTTP request for new configuration, and that we pre-populate Viper with some initial values.

Running it

Time to test this. Redeploy using our trusty copyall.sh script:

  > ./copyall.sh

Check the log of the accountservice:

  > docker logs -f [containerid]
  Starting accountservice
  ... [truncated for brevity] ...
  Successfully loaded configuration for service Accountservice TEST    <-- LOOK HERE!!!!
  ... [truncated for brevity] ...
  2017/05/12 12:06:36 dialing amqp://guest:guest@rabbitmq:5672/
  2017/05/12 12:06:36 got Connection, getting Channel
  2017/05/12 12:06:36 got Channel, declaring Exchange (springCloudBus)
  2017/05/12 12:06:36 declared Exchange, declaring Queue (config-event-queue)
  2017/05/12 12:06:36 declared Queue (0 messages, 0 consumers), binding to Exchange (key 'springCloudBus')
  2017/05/12 12:06:36 Queue bound to Exchange, starting Consume (consumer tag 'accountservice')
  2017/05/12 12:06:36 running forever

Now, we’ll make a change to the accountservice-test.yml file on my git repo and then fake a commit hook using the /monitor API POST shown earlier in this blog post:

I’m changing accountservice-test.yml and its server_name property, from Accountservice TEST to Temporary test string!, and pushing the change.

Next, use curl to let our Spring Cloud Config server know about the update:

  > curl -H "X-Github-Event: push" -H "Content-Type: application/json" -X POST -d '{"commits": [{"modified": ["accountservice.yml"]}],"name":"what is this?"}' -ki http://192.168.99.100:8888/monitor

If everything works, this should trigger a refresh token from the Config server which our accountservice picks up. Check the log again:

  > docker logs -f [containerid]
  2017/05/12 12:13:22 got 195B consumer: [accountservice] delivery: [1] routingkey: [springCloudBus] {"type":"RefreshRemoteApplicationEvent","timestamp":1494591202057,"originService":"config-server:docker:8888","destinationService":"accountservice:**","id":"1f421f58-cdd6-44c8-b5c4-fbf1e2839baa"}
  2017/05/12 12:13:22 Reloading Viper config from Spring Cloud Config server
  Loading config from http://configserver:8888/accountservice/test/P8
  Loading config property server_port => 6767
  Loading config property server_name => Temporary test string!
  Loading config property amqp_server_url => amqp://guest:guest@rabbitmq:5672/
  Loading config property config_event_bus => springCloudBus
  Loading config property the_password => password
  Successfully loaded configuration for service Temporary test string!    <-- LOOK HERE!!!!

As you can see, the final line now prints “Successfully loaded configuration for service Temporary test string!”. The source code for that line:

  if viper.IsSet("server_name") {
      fmt.Printf("Successfully loaded configuration for service %s\n", viper.GetString("server_name"))
  }

I.e. - we’ve dynamically changed a property value previously stored in Viper at runtime, without touching our service! This IS really cool!!

Important note: While updating properties dynamically is very cool, that in itself won’t update things like the port of our running web server, existing connection objects in pools or (for example) the active connection to the RabbitMQ broker. Those kinds of “already-running” things take a lot more care to restart with new config values and are out of scope for this particular blog post.

(Unless you’ve set things up with your own git repo, this demo isn’t reproducible, but I hope you enjoyed it anyway.)

Footprint and performance

Adding loading of configuration at startup shouldn’t affect runtime performance at all, and it doesn’t: 1K req/s yields the same latencies, CPU and memory use as before. Just take my word for it or try it yourself. We’ll just take a quick peek at memory use after the first startup:

  CONTAINER                                   CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
  accountservice.1.pi7wt0wmh2quwm8kcw4e82ay4  0.02%   4.102MiB / 1.955GiB   0.20%   18.8kB / 16.5kB   0B / 1.92MB      6
  configserver.1.3joav3m6we6oimg28879gii79    0.13%   568.7MiB / 1.955GiB   28.41%  171kB / 130kB     72.9MB / 225kB   50
  rabbitmq.1.kfmtsqp5fnw576btraq19qel9        0.19%   125.5MiB / 1.955GiB   6.27%   6.2MB / 5.18MB    31MB / 414kB     75
  quotes-service.1.q81deqxl50n3xmj0gw29mp7jy  0.05%   340.1MiB / 1.955GiB   16.99%  2.97kB / 0B       48.1MB / 0B      30

Even with AMQP integration and Viper as configuration framework, we have an initial footprint of ~4 MB. Our Spring Boot-based config server uses over 500 MB of RAM, while RabbitMQ (which is written in Erlang) uses 125 MB.

I’m fairly certain we could starve the config server down to a 256 MB initial heap using a standard JVM -Xmx argument, but it’s nevertheless definitely a lot of RAM. However, in a production environment I would expect us to run ~2 config server instances, not tens or hundreds. When it comes to the supporting services from the Spring Cloud ecosystem, memory use isn’t such a big deal, as we usually won’t have more than one or a few instances of any such service.
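If you want to experiment with capping the heap, the -Xmx argument can be added to the config server’s ENTRYPOINT in /goblog/support/config-server/Dockerfile. A sketch (the 256m value is a guess; verify that the server still starts and serves configuration afterwards):

  ENTRYPOINT ["java","-Xmx256m","-Dspring.profiles.active=docker","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]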

Summary

In this part of the Go microservices blog series, we deployed a Spring Cloud Config server and its RabbitMQ dependency into our Swarm. Then we wrote a bit of Go code that, using plain HTTP, JSON and the Viper framework, loads config from the config server on startup and feeds it into Viper for convenient access throughout our microservice codebase.

In the next part, we’ll continue to explore AMQP and RabbitMQ, going into more detail and taking a look at sending some messages ourselves.