NVIDIA TensorRT Inference Server

Model serving with TRT Inference Server

Kubeflow currently doesn’t have a specific guide for the NVIDIA TensorRT Inference Server. See the NVIDIA documentation for instructions on running the NVIDIA inference server on Kubernetes.
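As a rough illustration of what deploying the inference server on Kubernetes involves, the sketch below uses the official Kubernetes Python client to create a Deployment running the TensorRT Inference Server container. This is not an official Kubeflow or NVIDIA manifest: the image tag (`nvcr.io/nvidia/tensorrtserver:19.10-py3`), the `trtserver --model-store` flag, the port numbers, and the `trt-models` PersistentVolumeClaim name are all assumptions for illustration. Take the authoritative values from the NVIDIA documentation for your release.

```python
# Minimal sketch: deploy the TensorRT Inference Server on Kubernetes with the
# official Python client. Image tag, flags, ports, and the "trt-models" PVC
# are assumptions -- consult the NVIDIA documentation for your release.
from kubernetes import client, config


def make_trt_server_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="tensorrt-inference-server",
        # Hypothetical image tag; pick a real one from NVIDIA NGC.
        image="nvcr.io/nvidia/tensorrtserver:19.10-py3",
        args=["trtserver", "--model-store=/models"],
        ports=[
            client.V1ContainerPort(container_port=8000, name="http"),
            client.V1ContainerPort(container_port=8001, name="grpc"),
            client.V1ContainerPort(container_port=8002, name="metrics"),
        ],
        resources=client.V1ResourceRequirements(
            # Requires GPU nodes with the NVIDIA device plugin installed.
            limits={"nvidia.com/gpu": "1"}
        ),
        volume_mounts=[
            client.V1VolumeMount(name="model-store", mount_path="/models")
        ],
    )
    pod_spec = client.V1PodSpec(
        containers=[container],
        volumes=[
            # Assumes a PersistentVolumeClaim named "trt-models" holds the
            # model repository; substitute your own model storage.
            client.V1Volume(
                name="model-store",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="trt-models"
                ),
            )
        ],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "trt-server"}),
        spec=pod_spec,
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="trt-inference-server"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "trt-server"}),
            template=template,
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # uses your local kubeconfig
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(
        namespace="default", body=make_trt_server_deployment()
    )
```

Running the script against a cluster with GPU nodes schedules one GPU-backed replica; you would still add a Service to expose the HTTP (8000) and gRPC (8001) inference ports to clients.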

