PL/Container enables users to run Greenplum procedural language functions inside a Docker container, to avoid security risks associated with running Python or R code on Greenplum segment hosts. For Python, PL/Container also enables you to use the Compute Unified Device Architecture (CUDA) API with NVIDIA GPU hardware in your procedural language functions. This topic describes the architecture, installation, and setup of PL/Container.

For detailed information about using PL/Container, refer to PL/Container Functions and PL/Container Resource Management.

The PL/Container language extension is available as an open source module. For information about the module, see the README file in the GitHub repository at https://github.com/greenplum-db/plcontainer.

About the PL/Container Language Extension

The Greenplum Database PL/Container language extension allows you to create and run PL/Python or PL/R user-defined functions (UDFs) securely, inside a Docker container. Docker provides the ability to package and run an application in a loosely isolated environment called a container. For information about Docker, see the Docker web site.

Running UDFs inside the Docker container ensures that:

  • The function runs in a separate environment, decoupling data processing: SQL operators such as “scan,” “filter,” and “project” run on the query executor (QE) side, while the advanced data analysis runs on the container side.
  • User code cannot access the operating system or the file system of the local host.
  • User code cannot introduce any security risks.
  • Functions cannot connect back to the Greenplum Database if the container is started with limited or no network access.

PL/Container Architecture

Figure: PL/Container architecture

Example of the process flow:

Consider a query that selects table data using all available segments and transforms the data using a PL/Container function. On the first call to a function in a segment container, the query executor on the master host starts the container on that segment host. It then contacts the running container to obtain the results. The container might respond with a Server Programming Interface (SPI) request - a SQL query run by the container to fetch data from the database - before returning the result to the query executor.
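For example, here is a minimal sketch of a function that issues an SPI query from inside the container. It assumes a Python runtime named plc_python_shared has been configured (as shown later in this topic) and that the container's Python client supports the plpy.execute SPI call as in PL/Python; the function name pyRowCount is illustrative:

  postgres=# CREATE FUNCTION pyRowCount() RETURNS bigint AS $$
  # container: plc_python_shared
  # SPI call: the container sends this query back to the database
  rv = plpy.execute("SELECT count(*) AS n FROM pg_class")
  return rv[0]["n"]
  $$ LANGUAGE plcontainer;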

A container running in standby mode waits on the socket and does not consume any CPU resources. PL/Container memory consumption depends on the amount of data cached in global dictionaries.

When the Greenplum Database session that started the container closes, the container connection closes and the container shuts down.

About PL/Container 3 Beta

Greenplum Database 6.5 introduces PL/Container version 3 Beta, which:

  • Provides support for the new GreenplumR interface.
  • Reduces the number of processes created by PL/Container, in order to save system resources.
  • Supports more containers running concurrently.
  • Includes improved log messages to help diagnose problems.
  • Supports the DO command (anonymous code block); a sketch follows below.

PL/Container 3 is currently a Beta feature, and provides only a Beta R Docker image for running functions; Python images are not yet available. Save and uninstall any existing PL/Container software before you install PL/Container 3 Beta.
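For example, a sketch of an anonymous code block, assuming an R runtime named plc_r_shared has been added for the Beta R image:

  DO $$
  # container: plc_r_shared
  print("hello from an anonymous R block")
  $$ LANGUAGE plcontainer;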

Install PL/Container

This topic describes how to install Docker, install the PL/Container language extension, and install the PL/Container Docker images.

The following sections describe these tasks in detail.

Prerequisites

  • For PL/Container 2.1.x use Greenplum Database 6 on CentOS 7.x (or later), RHEL 7.x (or later), or Ubuntu 18.04.

    Note

    PL/Container 2.1.x supports Docker images with Python 3 installed.

  • For PL/Container 3 Beta use Greenplum Database 6.1 or later on CentOS 7.x (or later), RHEL 7.x (or later), or Ubuntu 18.04.

  • The minimum Linux OS kernel version supported is 3.10. To verify your kernel release use:

    $ uname -r
  • The minimum supported Docker version on all hosts is Docker 19.03.
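As a quick check of these prerequisites, you can run the following on each host (Docker reports its version only after it is installed, as described in the next section):

    $ uname -r          # kernel release must be 3.10 or later
    $ docker --version  # reported version must be 19.03 or later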

Install Docker

To use PL/Container you must install Docker on all Greenplum Database host systems. These instructions show how to set up the Docker service on CentOS 7; the process for RHEL 7 is similar.

These steps install the docker package and start the Docker service as a user with sudo privileges.

  1. Ensure the user has sudo privileges or is root.
  2. Install the dependencies required for Docker:

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  3. Add the Docker repo:

    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  4. Update the yum cache:

    sudo yum makecache fast
  5. Install Docker:

    sudo yum -y install docker-ce
  6. Start the Docker daemon:

    sudo systemctl start docker
  7. On each Greenplum Database host, the gpadmin user must be a member of the docker group to manage Docker images and containers. Add the Greenplum Database administrator gpadmin to the docker group:

    sudo usermod -aG docker gpadmin
  8. Exit the session and log in again to update the privileges.
  9. Configure Docker to start when the host system starts:

    sudo systemctl enable docker.service
    sudo systemctl start docker.service
  10. Run a Docker command to test the Docker installation. This command lists the currently running Docker containers:

    docker ps
  11. After you install Docker on all Greenplum Database hosts, restart the Greenplum Database system to give Greenplum Database access to Docker:

    gpstop -ra

For a list of observations while using Docker and PL/Container, see the Notes section. For a list of Docker reference documentation, see Docker References.

Install PL/Container

Install the PL/Container language extension using the gppkg utility.

  1. Download the “PL/Container for RHEL 7” package that applies to your Greenplum Database version, from the VMware Tanzu Network. PL/Container is listed under Greenplum Procedural Languages.
  2. As gpadmin, copy the PL/Container language extension package to the master host.
  3. Follow the instructions in Verifying the Greenplum Database Software Download to verify the integrity of the Greenplum Procedural Languages PL/Container software.
  4. Run the package installation command:

    gppkg -i plcontainer-2.1.1-rhel7-x86_64.gppkg
  5. Source the file $GPHOME/greenplum_path.sh:

    source $GPHOME/greenplum_path.sh
  6. Make sure Greenplum Database is up and running:

    gpstate -s

    If it’s not, start it:

    gpstart -a
  7. For PL/Container version 3 Beta only, add the plc_coordinator shared library to the Greenplum Database shared_preload_libraries server configuration parameter. Be sure to retain any previous setting of the parameter. For example:

    $ gpconfig -s shared_preload_libraries
    Values on all segments are consistent
    GUC              : shared_preload_libraries
    Coordinator value: diskquota
    Segment value    : diskquota
    $ gpconfig -c shared_preload_libraries -v 'diskquota,plc_coordinator'
  8. Restart Greenplum Database:

    gpstop -ra
  9. Log in to one of the available databases, for example:

    psql postgres
  10. Register the PL/Container extension, which installs the plcontainer utility:

    CREATE EXTENSION plcontainer;

    You must register the extension separately in each database that requires the PL/Container functionality.
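    You might register the extension in several databases at once from the shell; a minimal sketch, where the database names db1 and db2 are placeholders:

    for db in db1 db2; do psql -d "$db" -c 'CREATE EXTENSION plcontainer;'; done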

Install PL/Container Docker Images

Install the Docker images that PL/Container will use to create language-specific containers to run the UDFs.

Note

The PL/Container open source module contains dockerfiles to build Docker images that can be used with PL/Container. You can build a Docker image to run PL/Python UDFs and a Docker image to run PL/R UDFs. See the dockerfiles in the GitHub repository at https://github.com/greenplum-db/plcontainer.

  • Download the files that contain the Docker images from the VMware Tanzu Network. For example, for Greenplum 6.22, click “PL/Container Image for Python 2.2.0”, which downloads plcontainer-python3-image-2.2.0-gp6.tar.gz with Python 3.9 and the Python 3.9 Data Science Module Package.

    If you require different images from the ones provided by Tanzu Greenplum, you can create custom Docker images, install the image, and add the image to the PL/Container configuration.

  • If you are using PL/Container 3 Beta, note that this Beta version is compatible only with the associated plcontainer-r-image-3.0.0-beta-gp6.tar.gz image.

  • Follow the instructions in Verifying the Greenplum Database Software Download to verify the integrity of the Greenplum Procedural Languages PL/Container Image software.

  • Use the plcontainer image-add command to install an image on all Greenplum Database hosts. Provide the -f option to specify the file system location of a downloaded image file. For example:

    # Install a Python 2 based Docker image
    plcontainer image-add -f /home/gpadmin/plcontainer-python-image-2.2.0-gp6.tar.gz
    # Install a Python 3 based Docker image
    plcontainer image-add -f /home/gpadmin/plcontainer-python3-image-2.2.0-gp6.tar.gz
    # Install an R based Docker image
    plcontainer image-add -f /home/gpadmin/plcontainer-r-image-2.1.3-gp6.tar.gz
    # Install the Beta R image for use with PL/Container 3.0.0 Beta
    plcontainer image-add -f /home/gpadmin/plcontainer-r-image-3.0.0-beta-gp6.tar.gz

    The utility displays progress information, similar to:

    20200127:21:54:43:004607 plcontainer:mdw:gpadmin-[INFO]:-Checking whether docker is installed on all hosts...
    20200127:21:54:43:004607 plcontainer:mdw:gpadmin-[INFO]:-Distributing image file /home/gpadmin/plcontainer-python-images-1.5.0.tar to all hosts...
    20200127:21:54:55:004607 plcontainer:mdw:gpadmin-[INFO]:-Loading image on all hosts...
    20200127:21:55:37:004607 plcontainer:mdw:gpadmin-[INFO]:-Removing temporary image files on all hosts...

    By default, the image-add command copies the image to each Greenplum Database segment and standby master host, and installs the image. When you specify the [-ulc | --use_local_copy] option, plcontainer installs the image only on the host on which you run the command. Use this option when the PL/Container image already resides on disk on a host; see the sketch after this list.

    For more information on image-add options, see the plcontainer reference page.

  • To display the installed Docker images on the local host use:

    $ plcontainer image-list
    REPOSITORY                               TAG     IMAGE ID       CREATED
    pivotaldata/plcontainer_r_shared         devel   7427f920669d   10 months ago
    pivotaldata/plcontainer_python_shared    devel   e36827eba53e   10 months ago
    pivotaldata/plcontainer_python3_shared   devel   y32827ebe55b   5 months ago
  • Add the image information to the PL/Container configuration file using plcontainer runtime-add, to allow PL/Container to associate containers with specified Docker images.

    Use the -r option to specify your own user-defined runtime ID name, the -i option to specify the Docker image, and the -l option to specify the Docker image language. When there are multiple versions of the same Docker image, for example 1.0.0 or 1.2.0, specify the TAG version using “:” after the image name.

    # Add a Python 2 based runtime
    plcontainer runtime-add -r plc_python_shared -i pivotaldata/plcontainer_python_shared:devel -l python
    # Add a Python 3 based runtime that is supported with PL/Container 2.2.x
    plcontainer runtime-add -r plc_python3_shared -i pivotaldata/plcontainer_python3_shared:devel -l python3
    # Add an R based runtime
    plcontainer runtime-add -r plc_r_shared -i pivotaldata/plcontainer_r_shared:devel -l r

    The utility displays progress information as it updates the PL/Container configuration file on the Greenplum Database instances.

    For details on other runtime-add options, see the plcontainer reference page.

  • Optional: Use Greenplum Database resource groups to manage and limit the total CPU and memory resources of containers in PL/Container runtimes. In this example, the Python runtime uses a preconfigured resource group with ID 16391:

    plcontainer runtime-add -r plc_python_shared -i pivotaldata/plcontainer_python_shared:devel -l python -s resource_group_id=16391

    For more information about enabling, configuring, and using Greenplum Database resource groups with PL/Container, see PL/Container Resource Management.
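As an illustration of the --use_local_copy option described above, the following sketch installs an image only on the current host, assuming the downloaded file already resides on that host's disk:

    # The image file is already on this host; install it locally without distribution
    plcontainer image-add -f /home/gpadmin/plcontainer-python3-image-2.2.0-gp6.tar.gz --use_local_copy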

You can now create a simple function to test your PL/Container installation.

Test the PL/Container Installation

List the names of the runtimes you created and added to the PL/Container XML file:

  plcontainer runtime-show

which will show a list of all installed runtimes:

  PL/Container Runtime Configuration:
  ---------------------------------------------------------
  Runtime ID: plc_python_shared
  Linked Docker Image: pivotaldata/plcontainer_python_shared:devel
  Runtime Setting(s):
  Shared Directory:
  ---- Shared Directory From HOST '/usr/local/greenplum-db/./bin/plcontainer_clients' to Container '/clientdir', access mode is 'ro'
  ---------------------------------------------------------

You can also view the PL/Container configuration information with the plcontainer runtime-show -r <runtime_id> command. You can view the PL/Container configuration XML file with the plcontainer runtime-edit command.
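For example, using the runtime ID created earlier:

  $ plcontainer runtime-show -r plc_python_shared
  $ plcontainer runtime-edit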

Use the psql utility to connect to an existing database:

  psql postgres

If the PL/Container extension is not registered with the selected database, first enable it using:

  postgres=# CREATE EXTENSION plcontainer;

Create a simple function to test your installation. In this example, the function uses the runtime plc_python_shared:

  postgres=# CREATE FUNCTION dummyPython() RETURNS text AS $$
  # container: plc_python_shared
  return 'hello from Python'
  $$ LANGUAGE plcontainer;

Then test the function:

  postgres=# SELECT dummyPython();
      dummypython
  -------------------
   hello from Python
  (1 row)

Similarly, to test the R runtime:

  postgres=# CREATE FUNCTION dummyR() RETURNS text AS $$
  # container: plc_r_shared
  return ('hello from R')
  $$ LANGUAGE plcontainer;
  CREATE FUNCTION
  postgres=# select dummyR();
      dummyr
  --------------
   hello from R
  (1 row)

For further details and examples about using PL/Container functions, see PL/Container Functions.

Upgrade PL/Container

To upgrade PL/Container, you save the current configuration, upgrade PL/Container, and then restore the configuration after upgrade. There is no need to update the Docker images when you upgrade PL/Container.

Note

Before you perform this upgrade procedure, ensure that you have migrated your PL/Container package from your previous Greenplum Database installation to your new Greenplum Database installation. Refer to the gppkg command for package installation and migration information.

You cannot upgrade to PL/Container 3 Beta. To install PL/Container 3 Beta, first save and then uninstall your existing PL/Container software. Then follow the instructions in Install PL/Container.

To upgrade, perform the following procedure:

  1. Save the PL/Container configuration. For example, to save the configuration to a file named plcontainer202-backup.xml in the local directory:

    $ plcontainer runtime-backup -f plcontainer202-backup.xml
  2. Use the Greenplum Database gppkg utility with the -u option to update the PL/Container language extension. For example, the following command updates the PL/Container language extension to version 2.2.0 on a Linux system:

    $ gppkg -u plcontainer-2.2.0-gp6-rhel7_x86_64.gppkg
  3. Source the Greenplum Database environment file $GPHOME/greenplum_path.sh.

    $ source $GPHOME/greenplum_path.sh
  4. Restore the PL/Container configuration that you saved in a previous step:

    $ plcontainer runtime-restore -f plcontainer202-backup.xml
  5. Restart Greenplum Database.

    $ gpstop -ra
  6. You do not need to re-register the PL/Container extension in databases where you previously created it, but you must register the PL/Container extension in each new database that will run PL/Container UDFs. For example, the following command registers PL/Container in a database named mytest:

    $ psql -d mytest -c 'CREATE EXTENSION plcontainer;'

    The command also creates PL/Container-specific functions and views.

Uninstall PL/Container

To uninstall PL/Container, remove Docker containers and images, and then remove the PL/Container support from Greenplum Database.

When you remove support for PL/Container, the plcontainer user-defined functions that you created in the database will no longer work.

Uninstall Docker Containers and Images

On the Greenplum Database hosts, uninstall the Docker containers and images that are no longer required.

The plcontainer image-list command lists the Docker images that are installed on the local Greenplum Database host.

The plcontainer image-delete command deletes a specified Docker image from all Greenplum Database hosts.
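For example, a minimal sketch that deletes one of the images installed earlier in this topic (the -i option names the image; verify the option syntax on the plcontainer reference page for your version):

  $ plcontainer image-delete -i pivotaldata/plcontainer_python_shared:devel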

Some Docker containers might exist on a host if the containers were not managed by PL/Container. You might need to remove those containers with Docker commands. The following Docker commands manage containers and images on a local host (see the sketch after this list):

  • The command docker ps -a lists all containers on a host. The command docker stop stops a container.
  • The command docker images lists the images on a host.
  • The command docker rmi removes images.
  • The command docker rm removes containers.
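A minimal cleanup sketch that combines these commands; the container ID is a placeholder obtained from the docker ps -a output, and the image name follows the examples earlier in this topic:

  $ docker ps -a                # list all containers on this host
  $ docker stop <container_id>  # stop a running container
  $ docker rm <container_id>    # remove the stopped container
  $ docker images               # list images on this host
  $ docker rmi pivotaldata/plcontainer_python_shared:devel  # remove an image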

Remove PL/Container Support for a Database

To remove support for PL/Container, drop the extension from the database. Use the psql utility with the DROP EXTENSION command (via the -c option) to remove PL/Container from the mytest database:

  psql -d mytest -c 'DROP EXTENSION plcontainer CASCADE;'

The CASCADE keyword drops PL/Container-specific functions and views.

Remove PL/Container 3 Beta Shared Library

This step is required only if you have installed PL/Container 3 Beta. Before you remove the extension from your system with gppkg, remove the shared library configuration for the plc_coordinator process:

  1. Examine the shared_preload_libraries server configuration parameter setting.

    $ gpconfig -s shared_preload_libraries
    • If plc_coordinator is the only library listed, remove the configuration parameter setting:

      $ gpconfig -r shared_preload_libraries

      Removing a server configuration parameter comments out the setting in the postgresql.conf file.

    • If there are multiple libraries listed, remove plc_coordinator from the list and re-set the configuration parameter. For example, if shared_preload_libraries is set to 'diskquota,plc_coordinator':

      $ gpconfig -c shared_preload_libraries -v 'diskquota'
  2. Restart the Greenplum Database cluster:

    $ gpstop -ra

Uninstall the PL/Container Language Extension

If no databases have plcontainer as a registered language, uninstall the Greenplum Database PL/Container language extension with the gppkg utility.

  1. Use the Greenplum Database gppkg utility with the -r option to uninstall the PL/Container language extension. This example uninstalls the PL/Container language extension on a Linux system:

    $ gppkg -r plcontainer-2.1.1

    You can run the gppkg utility with the options -q --all to list the installed extensions and their versions.

  2. Reload greenplum_path.sh.

    $ source $GPHOME/greenplum_path.sh
  3. Restart the database.

    $ gpstop -ra

Notes

Docker Notes

  • If a PL/Container Docker container exceeds the maximum allowed memory, it is terminated and an out-of-memory warning is displayed.
  • PL/Container does not limit the Docker base device size (the size of the Docker container). In some cases, the Docker daemon controls the base device size. For example, if the Docker storage driver is devicemapper, the Docker daemon --storage-opt option flag dm.basesize controls the base device size. The default base device size for devicemapper is 10GB. The Docker command docker info displays Docker system information, including the storage driver; the base device size is displayed in Docker 1.12 and later. For information about Docker storage drivers, see the Docker daemon storage driver documentation. A sketch for inspecting these settings appears at the end of this section.

    When setting the Docker base device size, the size must be set on all Greenplum Database hosts.

  • Known issue:

    Occasionally, when PL/Container is running in a high concurrency environment, the Docker daemon hangs with log entries that indicate a memory shortage. This can happen even when the system seems to have adequate free memory.

    The issue seems to be triggered by the aggressive virtual memory requirements of the Go language (golang) runtime that PL/Container uses, combined with the Greenplum Database Linux server kernel parameter setting for overcommit_memory, which is set to 2 and does not allow memory overcommit.

    A workaround that might help is to increase the amount of swap space and increase the Linux server kernel parameter overcommit_ratio. If the issue still occurs after these changes, there might be a genuine memory shortage: check free memory on the system and add more RAM if needed, or decrease the cluster load.
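The following sketch shows how you might inspect the settings discussed in these notes; the overcommit_ratio value shown is illustrative, not a recommendation:

  $ docker info | grep -iE 'storage driver|base device size'  # storage driver and base device size
  $ cat /proc/sys/vm/overcommit_memory                        # Greenplum sets this to 2 (no overcommit)
  $ sudo sysctl -w vm.overcommit_ratio=95                     # example: raise the overcommit ratio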

Docker References

Docker home page: https://www.docker.com/

Docker command line interface: https://docs.docker.com/engine/reference/commandline/cli/

Dockerfile reference: https://docs.docker.com/engine/reference/builder/

Installing Docker on Linux systems (including CentOS): https://docs.docker.com/engine/installation/linux/centos/

For a list of Docker commands, see the Docker engine run reference.

Control and configure Docker with systemd: https://docs.docker.com/engine/admin/systemd/