Service metrics

Every Knative Service has a proxy container (the queue proxy) that proxies connections to the application container. A number of metrics are reported for queue proxy performance.

Using the following metrics, you can measure whether requests are queued at the proxy side (indicating a need for backpressure) and what the actual delay is in serving requests at the application side.
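To see what these metrics look like, you can scrape the queue proxy directly. The Go sketch below is a minimal, hedged example: it assumes the pod's IP is reachable from where the program runs and that the queue proxy serves Prometheus text format on port 9090 at `/metrics` (the port and path depend on your installation and are assumptions here, not guarantees). It simply prints every sample whose name starts with `revision_`, which covers the metric families listed in the table below.

```go
// Minimal sketch: scrape the queue proxy's metrics endpoint and print the
// revision_* samples. The pod IP, port 9090, and the /metrics path are
// assumptions about the installation; adjust them for your cluster.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Placeholder IP of a pod backing the Knative Service.
	const podIP = "10.0.0.1"

	resp, err := http.Get(fmt.Sprintf("http://%s:9090/metrics", podIP))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only samples from the queue proxy metric families described
	// in the table below (they all share the "revision_" prefix).
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "revision_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```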

Queue proxy metrics

The following metrics are reported for the requests endpoint.

| Metric Name | Description | Type | Tags | Unit | Status |
|:------------|:------------|:-----|:-----|:-----|:-------|
| revision_request_count | The number of requests that are routed to queue-proxy | Counter | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Dimensionless | Stable |
| revision_request_latencies | The response time in milliseconds | Histogram | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Milliseconds | Stable |
| revision_app_request_count | The number of requests that are routed to user-container | Counter | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Dimensionless | Stable |
| revision_app_request_latencies | The response time in milliseconds | Histogram | configuration_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Milliseconds | Stable |
| revision_queue_depth | The current number of items in the serving and waiting queue, or not reported if unlimited concurrency | Gauge | configuration_name, container_name, namespace_name, pod_name, response_code_class, revision_name, service_name | Dimensionless | Stable |
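If these metrics are collected by a Prometheus server, the proxy-side and app-side latency histograms can be compared to estimate how much time requests spend queued at the proxy. The sketch below is one way to do that, assuming a standard Prometheus HTTP API, that the histograms are stored with the conventional `_bucket` series suffix, and a placeholder server address; the metric and label names come from the table above, but the exact stored names can vary with your exporter configuration.

```go
// Hedged sketch: compare the p95 of revision_request_latencies (measured at the
// queue proxy) with the p95 of revision_app_request_latencies (measured at the
// user container) via the Prometheus HTTP API. The Prometheus address, the
// "_bucket" suffixes, and the one-minute rate window are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// queryScalar runs an instant PromQL query and returns the first sample value.
func queryScalar(promURL, query string) (string, error) {
	resp, err := http.Get(promURL + "/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var body struct {
		Data struct {
			Result []struct {
				Value [2]interface{} `json:"value"` // [timestamp, value-as-string]
			} `json:"result"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return "", err
	}
	if len(body.Data.Result) == 0 {
		return "", fmt.Errorf("no samples for query: %s", query)
	}
	return fmt.Sprintf("%v", body.Data.Result[0].Value[1]), nil
}

func main() {
	const prom = "http://prometheus.example:9090" // placeholder Prometheus address

	queries := map[string]string{
		"queue-proxy p95": `histogram_quantile(0.95, sum(rate(revision_request_latencies_bucket[1m])) by (le, revision_name))`,
		"app p95":         `histogram_quantile(0.95, sum(rate(revision_app_request_latencies_bucket[1m])) by (le, revision_name))`,
	}
	for name, q := range queries {
		v, err := queryScalar(prom, q)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s ms\n", name, v)
	}
}
```

A proxy-side p95 that stays well above the app-side p95, together with a non-zero revision_queue_depth, suggests that requests are waiting in the queue proxy rather than in the application.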