Seldon Core Serving

Model serving using Seldon

This Kubeflow component has stable status. See the Kubeflow versioning policies.

Seldon Core comes installed with Kubeflow. The Seldon Core documentation site provides full documentation for running Seldon Core inference.

Seldon presently requires a Kubernetes cluster running version >= 1.12 and <= 1.17.
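
You can check which server version your cluster is running before installing; a quick way, assuming kubectl is already configured for the target cluster:

  # Print client and server versions; the Server Version line should fall in the 1.12-1.17 range
  kubectl version --short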

If you have a saved model in a PersistentVolume (PV), Google Cloud Storage bucket, or Amazon S3 storage, you can use one of the prepackaged model servers provided by Seldon Core.
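
As a minimal sketch, the following SeldonDeployment serves a scikit-learn model with the prepackaged SKLEARN_SERVER. The modelUri below points at Seldon's public example bucket and is an assumption; replace it with your own PV, GCS, or S3 location:

  cat <<EOF | kubectl create -n seldon -f -
  apiVersion: machinelearning.seldon.io/v1
  kind: SeldonDeployment
  metadata:
    name: sklearn-iris
  spec:
    predictors:
    - graph:
        children: []
        implementation: SKLEARN_SERVER
        modelUri: gs://seldon-models/sklearn/iris
        name: classifier
      name: default
      replicas: 1
  EOF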

Seldon Core also provides language-specific model wrappers that package your inference code so it can run in Seldon Core.
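
As a sketch of the Python wrapping workflow: you write a class exposing a predict method and build it into a servable image with Seldon's s2i builder. The builder image tag and the image name my-model below are assumptions; check the Seldon Core documentation for the builder image matching your Seldon version:

  # Model.py in the current directory defines a class with a predict(self, X, features_names) method.
  # .s2i/environment tells the builder which class to load, for example:
  #   MODEL_NAME=Model
  #   API_TYPE=REST
  #   SERVICE_TYPE=MODEL
  s2i build . seldonio/seldon-core-s2i-python3:1.1.0 my-model:0.1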

Kubeflow specifics

  • The namespace where you run inference must be labeled with serving.kubeflow.org/inferenceservice=enabled

The following example creates a namespace called seldon and labels it for serving:

  kubectl create namespace seldon
  kubectl label namespace seldon serving.kubeflow.org/inferenceservice=enabled

Istio Gateway

By default Seldon will use the kubeflow-gateway in the kubeflow namespace. If you wish to use a separate Gateway, update the Kubeflow Seldon kustomize configuration by changing the ISTIO_GATEWAY environment variable in the seldon-manager Deployment.
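
As a sketch, assuming the seldon-manager Deployment runs in the kubeflow namespace and seldon/my-gateway names a Gateway you have already created (both are assumptions to adapt to your install), you could set the variable directly:

  # Point the Seldon controller at a different Istio Gateway
  kubectl set env -n kubeflow deployment/seldon-manager ISTIO_GATEWAY=seldon/my-gateway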

Kubeflow 1.0.0, 1.0.1, 1.0.2

For the above versions you need to create an Istio Gateway called kubeflow-gateway in the namespace where you want to run inference. For example, for a namespace seldon:

  cat <<EOF | kubectl create -n seldon -f -
  apiVersion: networking.istio.io/v1alpha3
  kind: Gateway
  metadata:
    name: kubeflow-gateway
  spec:
    selector:
      istio: ingressgateway
    servers:
    - hosts:
      - '*'
      port:
        name: http
        number: 80
        protocol: HTTP
  EOF
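
You can confirm the Gateway exists before deploying any models:

  kubectl get gateway kubeflow-gateway -n seldon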

Simple example

Create a new namespace:

  kubectl create ns seldon

Label that namespace so you can run inference tasks in it:

  kubectl label namespace seldon serving.kubeflow.org/inferenceservice=enabled

For Kubeflow versions 1.0.0, 1.0.1, and 1.0.2, create an Istio Gateway as shown above.

Create an example SeldonDeployment with a dummy model:

  cat <<EOF | kubectl create -n seldon -f -
  apiVersion: machinelearning.seldon.io/v1
  kind: SeldonDeployment
  metadata:
    name: seldon-model
  spec:
    name: test-deployment
    predictors:
    - componentSpecs:
      - spec:
          containers:
          - image: seldonio/mock_classifier_rest:1.3
            name: classifier
      graph:
        children: []
        endpoint:
          type: REST
        name: classifier
        type: MODEL
      name: example
      replicas: 1
  EOF

Wait for the deployment state to become Available:

  kubectl get sdep seldon-model -n seldon -o jsonpath='{.status.state}\n'
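
Once all replicas are ready, the command should print:

  Available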

Port-forward local port 8004 to the Istio ingress gateway:

  kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8004:80

Send a prediction request. The URL follows Seldon's pattern /seldon/<namespace>/<deployment-name>/api/v1.0/predictions, here with namespace seldon and deployment seldon-model:

  curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8004/seldon/seldon/seldon-model/api/v1.0/predictions -H "Content-Type: application/json"

You should see a response like the following (the puid will differ per request):

  {
    "meta": {
      "puid": "i2e1i8nq3lnttadd5i14gtu11j",
      "tags": {
      },
      "routing": {
      },
      "requestPath": {
        "classifier": "seldonio/mock_classifier_rest:1.3"
      },
      "metrics": []
    },
    "data": {
      "names": ["proba"],
      "ndarray": [[0.43782349911420193]]
    }
  }

Further documentation

See the Seldon Core documentation site for full details on running inference with Seldon Core.
