Contributing

Thank you for your time and effort to help us improve Rook! Here are a few steps to get started. If you have any questions, don’t hesitate to reach out to us on our Slack dev channel.

Prerequisites

  1. Go 1.11 or greater installed
  2. Git client installed
  3. GitHub account
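
To confirm the prerequisites are in place, you can check the installed versions from a console:

  # Verify the Go and Git versions
  go version
  git version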

Initial Setup

Create a Fork

From your browser navigate to https://github.com/rook/rook and click the “Fork” button.

Clone Your Fork

Open a console window and do the following:

  # Create the rook repo path
  mkdir -p $GOPATH/src/github.com/rook
  # Navigate to the local repo path and clone your fork
  cd $GOPATH/src/github.com/rook
  # Clone your fork, where <user> is your GitHub account name
  git clone https://github.com/<user>/rook.git
  cd rook

Build

  # build all rook storage providers
  make
  # build a single storage provider, where the IMAGES can be a subdirectory of the "images" folder:
  # "cassandra", "ceph", "cockroachdb", "edgefs", "minio", or "nfs"
  make IMAGES="cassandra" build
  # multiple storage providers can also be built
  make IMAGES="cassandra ceph" build

Development Settings

To keep whitespace and other formatting consistent in your Go and other source files, it is recommended that you apply the following settings in your IDE:

  • Format with the goreturns tool
  • Trim trailing whitespace

For example, in VS Code this translates to the following settings:

  {
    "editor.formatOnSave": true,
    "go.buildOnSave": "package",
    "go.formatTool": "goreturns",
    "files.trimTrailingWhitespace": true,
    "files.insertFinalNewline": true,
    "files.trimFinalNewlines": true
  }
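
If you do not already have the goreturns tool installed, one common way to get it (assuming a Go 1.11-era toolchain with a configured GOPATH) is:

  # Install the goreturns formatting tool into $GOPATH/bin
  go get -u github.com/sqs/goreturns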

Add Upstream Remote

First you will need to add the upstream remote to your local git:

  # Add 'upstream' to the list of remotes
  git remote add upstream https://github.com/rook/rook.git
  # Verify the remote was added
  git remote -v

Now you should have at least origin and upstream remotes. You can also add other remotes to collaborate with other contributors.
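
For example, to add another contributor's fork as a remote (the remote name and account here are placeholders):

  # Add a fellow contributor's fork as an additional remote
  git remote add <contributor> https://github.com/<contributor>/rook.git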

Layout

A source code layout is shown below, annotated with comments about the use of each important directory:

  rook
  ├── build                # build makefiles and logic to build, publish and release all Rook artifacts
  ├── cluster
  │   ├── charts           # Helm charts
  │   │   └── rook-ceph
  │   └── examples         # Sample yaml files for Rook cluster
  ├── cmd                  # Binaries with main entrypoint
  │   ├── rook             # Main command entry points for operators and daemons
  │   └── rookflex         # Main command entry points for Rook flexvolume driver
  ├── design               # Design documents for the various components of the Rook project
  ├── Documentation        # Rook project Documentation
  ├── images               # Dockerfiles to build images for all supported storage providers
  ├── pkg
  │   ├── apis
  │   │   ├── ceph.rook.io         # ceph specific specs for cluster, file, object
  │   │   │   └── v1
  │   │   ├── cockroachdb.rook.io  # cockroachdb specific specs
  │   │   │   └── v1alpha1
  │   │   ├── minio.rook.io        # minio specific specs for cluster, object
  │   │   │   └── v1alpha1
  │   │   ├── nfs.rook.io          # nfs server specific specs
  │   │   │   └── v1alpha1
  │   │   └── rook.io              # rook.io API group of common types
  │   │       └── v1alpha2
  │   ├── client           # auto-generated strongly typed client code to access Rook APIs
  │   ├── clusterd
  │   ├── daemon           # daemons for each storage provider
  │   │   ├── ceph
  │   │   └── discover
  │   ├── operator         # all orchestration logic and custom controllers for each storage provider
  │   │   ├── ceph
  │   │   ├── cockroachdb
  │   │   ├── discover
  │   │   ├── k8sutil
  │   │   ├── minio
  │   │   ├── nfs
  │   │   └── test
  │   ├── test
  │   ├── util
  │   └── version
  └── tests                # integration tests
      ├── framework        # the Rook testing framework
      │   ├── clients      # test clients used to consume Rook resources during integration tests
      │   ├── installer    # installs Rook and its supported storage providers into integration tests environments
      │   └── utils
      ├── integration      # all test cases that will be invoked during integration testing
      ├── longhaul         # longhaul tests
      ├── pipeline         # Jenkins pipeline
      └── scripts          # scripts for setting up integration and manual testing environments

Development

To add a feature or to make a bug fix, you will need to create a branch in your fork and then submit a pull request (PR) from the branch.

Design Document

For new features of significant scope and complexity, a design document is recommended before work begins on the implementation. For smaller, straightforward features and bug fixes, there is no need for a design document. Authoring a design document for big features has many advantages:

  • Helps flesh out the approach by forcing the author to think critically about the feature, and can surface potential issues early on
  • Gets agreement from the community before code is written that might otherwise be wasted effort in the wrong direction
  • Serves as an artifact of the architecture that is easier for newcomers to the project to read than the code by itself

Note that writing code to prototype the feature while working on the design may be very useful to help flesh out the approach.

A design document should be written as a markdown file in the design folder. You will see many examples of previous design documents in that folder. Submit a pull request for the design to be discussed and approved by the community before being merged into master, just like any other change to the repository.

An issue should be opened to track the work of authoring and completing the design document. This issue is in addition to the issue that is tracking the implementation of the feature. The design label should be assigned to the issue to denote it as such.

Create a Branch

From a console, create a new branch based on your fork and start working on it:

  # Ensure all your remotes are up to date with the latest
  git fetch --all
  # Create a new branch that is based off upstream master. Give it a simple, but descriptive name.
  # Generally it will be two to three words separated by dashes and without numbers.
  git checkout -b feature-name upstream/master

Now you are ready to make the changes and commit to your branch.
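
For example, a typical edit-and-commit cycle might look like the following (the file name and commit message are placeholders):

  # Stage the files you changed and commit them with a descriptive message
  git add <files-you-changed>
  git commit -m "component: short description of the change"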

Updating Your Fork

During the development lifecycle, you will need to keep up-to-date with the latest upstream master. As others on the team push changes, you will need to rebase your commits on top of the latest. This avoids unnecessary merge commits and keeps the commit history clean.

Whenever you need to update your local repository, never merge; always rebase. Otherwise you will end up with merge commits in the git history. If you have any modified files, you will first have to stash them (git stash save -u "<some description>").

  git fetch --all
  git rebase upstream/master
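
If you stashed changes before rebasing, restore them once the rebase is complete:

  # Re-apply your stashed changes
  git stash pop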

Rebasing is a very powerful feature of Git. You need to understand how it works or else you will risk losing your work. Read about it in the Git documentation; it will be well worth it. In a nutshell, rebasing does the following:

  • “Unwinds” your local commits. Your local commits are removed temporarily from the history.
  • The latest changes from upstream are added to the history.
  • Your local commits are re-applied one by one.
  • If there are merge conflicts, you will be prompted to fix them before continuing. Read the output closely; it will tell you how to complete the rebase. A typical sequence is sketched below.
  • When done rebasing, you will see all of your commits in the history.
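
As a sketch, resolving a conflict during a rebase usually looks like this (the file name is a placeholder):

  # Edit the conflicting files, then mark them as resolved
  git add <conflicting-file>
  # Continue applying the remaining commits
  git rebase --continue
  # Or, to start over, abort the rebase entirely
  git rebase --abort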

Submitting a Pull Request

Once you have implemented the feature or bug fix in your branch, you will open a PR to the upstream rook repo. Before opening the PR, ensure you have added unit tests, passed the integration tests, cleaned up your commit history, and rebased on the latest upstream.

To open a PR, your branch must be up to date with the latest changes upstream. If other commits are pushed upstream before your PR is merged, you will also need to rebase again before it can be merged.

Regression Testing

All pull requests must pass the unit and integration tests before they can be merged. These tests automatically run as a part of the build process. The results of these tests along with code reviews and other criteria determine whether your request will be accepted into the rook/rook repo. It is prudent to run all tests locally on your development box prior to submitting a pull request to the rook/rook repo.

Unit Tests

From the root of your local Rook repo execute the following to run all of the unit tests:

  make test

Unit tests for individual packages can be run with the standard go test command. Before you open a PR, confirm that you have sufficient code coverage on the packages that you changed. View coverage.html in a browser to inspect your new code.

  go test -coverprofile=coverage.out
  go tool cover -html=coverage.out -o coverage.html

Running the Integration Tests

For instructions on how to execute the end to end smoke test suite, follow the test instructions.

Commit History

To prepare your branch to open a PR, you will need to have the minimal number of logical commits so we can maintain a clean commit history. Most commonly a PR will include a single commit where all changes are squashed, although sometimes there will be multiple logical commits.

  # Inspect your commit history to determine if you need to squash commits
  git log
  # Rebase the commits and edit, squash, or even reorder them as you determine will keep the history clean.
  # In this example, the last 5 commits will be opened in the git rebase tool.
  git rebase -i HEAD~5

Once your commit history is clean, ensure your branch is rebased on the latest upstream before you open the PR.
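
Note that after rebasing or squashing, your branch history has been rewritten, so pushing it to your fork requires a force push. For example:

  # Safely force-push the rewritten branch to your fork
  git push origin feature-name --force-with-lease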

Submitting

Go to the Rook GitHub repo to open the PR. If you have pushed recently, you should see an obvious link to open the PR. If you have not pushed recently, go to the Pull Requests tab and select your fork and branch for the PR.

After the PR is open, you can make changes simply by pushing new commits. Your PR will track the changes in your fork and update automatically.

Backport a Fix to a Release Branch

Manual flow

The flow for getting a fix into a release branch is to first make the commit to master following the process outlined above. After the commit is in master, you’ll need to cherry-pick the commit to the intended release branch. You can do this by first creating a local branch that is based off the release branch, for example:

  git fetch --all
  git checkout -b backport-my-fix upstream/release-0.6

Then go ahead and cherry-pick the commit using the hash of the commit itself, not the merge commit hash:

  git cherry-pick -x 099cc27b73a8d77e0504831f374a7e117ad0a2e4

This will immediately create a cherry-picked commit with a nice message saying where the commit was cherry-picked from. Now go ahead and push to your origin:

  git push origin HEAD

Automated flow

There is a script at contrib/backport_to_stable_branch.sh in the Rook repository. Execute it and it will perform the necessary steps and push the backport branch. Then go to the Rook GitHub page and create your pull request.

Create the backport pull request

The last step is to open a PR with the base set to the intended release branch. If you don’t know how to do this, read the GitHub documentation on changing the base branch of a pull request. Once the PR is approved and merged, your backported change will be available in the next release.

Debugging operators locally

Operators are meant to be run inside a Kubernetes cluster. However, this makes it harder to use debugging tools and slows down the edit-build-test developer cycle, since testing requires building a container image, pushing it to the cluster, restarting the pods, getting logs, etc.

A common practice among operator developers is to run the operator locally on the developer machine in order to leverage the comfort and convenience of local developer tools.

To support this external operator mode, Rook detects whether the operator is running outside of the cluster (using the standard cluster environment) and changes its behavior as follows:

  • Connecting to the Kubernetes API will load the config from the user's ~/.kube/config.
  • Instead of the default CommandExecutor, this mode uses a TranslateCommandExecutor that runs every command issued by the operator as a Kubernetes job inside the cluster, so that any tools the operator needs from its image can still be called. For example, see the CockroachDB notes below.

Building locally

Building a single rook binary for all operators:

  make GO_STATIC_PACKAGES=github.com/rook/rook/cmd/rook go.build

Note: the binary output location is _output/bin/linux_amd64/rook on Linux and _output/bin/darwin_amd64/rook on Mac.

Running locally

The command-line flag --operator-image <image> should be used to allow running outside of a pod, since some operators read the image from the pod. This is a pattern where the operator pod is based on the image of the actual storage provider (currently used by ceph, edgefs, cockroachdb, minio). The image URL should be passed manually (for now) to match the operator’s Dockerfile FROM statement.

The next sections describe the supported operators and their notes.

CockroachDB:

  _output/bin/darwin_amd64/rook cockroachdb operator --operator-image cockroachdb/cockroach:v2.0.2
  • Set --operator-image to the base image of the cockroachdb Dockerfile.
  • The execution of /cockroach/cockroach init in initCluster() runs as a Kubernetes job to complete the cluster initialization of its pods.

Minio:

  _output/bin/darwin_amd64/rook minio operator --operator-image minio/minio:RELEASE.2019-04-23T23-50-36Z