Enable Redis Stream Monitors
Overview
To set up end-to-end monitoring for Redis, you need to gather metrics from two sources: the redis-exporter and the redis-stream-monitor. The redis-exporter provides basic functional metrics, while the redis-stream-monitor runs as a separate service and publishes more descriptive metrics.
This guide focuses on deploying the redis-stream-monitor to capture detailed event framework metrics, which are useful for monitoring all event activity in Harness.
Part 1: Deploy Redis Stream Monitor Chart
The redis-stream-monitor chart provides descriptive, meaningful metrics about the Redis streams used by the Harness event framework.
Pull the Chart
helm pull oci://us-west1-docker.pkg.dev/gcr-prod/harness-helm-artifacts/redis-stream-monitor --version 1.4.0 --untar
Configure Override Values
Create a file named override.yaml with the following content:
global:
  monitoring:
    enabled: true
    port: 8889
    path: /metrics
  managedPlatform: "oss"
resources:
  limits:
    memory: 4096Mi
  requests:
    cpu: 1
    memory: 4096Mi
config:
  EVENTS_FRAMEWORK_ENV_NAMESPACE: ""
  STACK_DRIVER_METRICS_PUSH_ENABLED: "true"
  ENV: "SMP"
Install the Chart
helm install redis-stream-monitor redis-stream-monitor -n <namespace> -f override.yaml
Available Metrics
The following metrics are captured and made available through this service:
- redis_streams_length - Number of messages in the stream
- redis_streams_memory_usage - Memory used by the stream
- redis_streams_average_message_size - Average size of messages in the stream
- redis_streams_events_framework_deadletter_queue_size - Size of the dead letter queue
- redis_streams_consumer_group_pending_count - Number of pending messages in the consumer group
- redis_streams_consumer_group_behind_by_count - How far behind the consumer group is
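These metrics are served in the standard Prometheus text exposition format. As a rough sketch of what a scrape returns, the snippet below parses a hypothetical sample of that output; the label values shown are invented for illustration and the real label sets may differ:

```python
import re

# Hypothetical sample of the Prometheus text exposition this service emits.
# The usecaseName/consumergroupName values here are illustrative only.
SAMPLE = """\
redis_streams_length{usecaseName="entity_crud"} 42
redis_streams_memory_usage{usecaseName="entity_crud"} 10240
redis_streams_consumer_group_pending_count{usecaseName="entity_crud",consumergroupName="ng"} 3
"""

METRIC_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def parse_metrics(text):
    """Return {metric_name: [(labels_dict, value), ...]} from exposition text."""
    out = {}
    for line in text.splitlines():
        m = METRIC_RE.match(line)
        if not m:
            continue
        name, labels_raw, value = m.groups()
        labels = dict(kv.split('=', 1) for kv in labels_raw.split(','))
        labels = {k: v.strip('"') for k, v in labels.items()}
        out.setdefault(name, []).append((labels, float(value)))
    return out

metrics = parse_metrics(SAMPLE)
print(sorted(metrics))  # metric names present in the sample
```

This is only a minimal parser for eyeballing output; in practice Prometheus (or your scraper) consumes this format directly.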
PromQL Dashboard Queries
Single Stream Dashboards
Use these queries to monitor a specific use case by setting the $usecase variable:
avg by(usecaseName) (redis_streams_memory_usage{usecaseName="$usecase"})
avg by(usecaseName) (redis_streams_length{usecaseName="$usecase"})
avg by(usecaseName) (redis_streams_average_message_size{usecaseName="$usecase"})
avg by(usecaseName, consumergroupName) (redis_streams_consumer_group_behind_by_count{usecaseName="$usecase"})
avg by(usecaseName, consumergroupName) (redis_streams_consumer_group_pending_count{usecaseName="$usecase"})
avg by(exported_namespace, usecaseName) (redis_streams_events_framework_deadletter_queue_size{usecaseName="$usecase"})
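In dashboard tools such as Grafana, $usecase is a dashboard variable that is substituted into each query before it is sent to Prometheus. The sketch below shows that substitution with plain string templating; the QUERY_TEMPLATES list and render helper are illustrative, and "entity_crud" is just an example value:

```python
# Two of the single-stream queries above, kept as templates with the
# $usecase placeholder still in place.
QUERY_TEMPLATES = [
    'avg by(usecaseName) (redis_streams_memory_usage{usecaseName="$usecase"})',
    'avg by(usecaseName) (redis_streams_length{usecaseName="$usecase"})',
]

def render(template: str, usecase: str) -> str:
    """Substitute the dashboard variable into a PromQL template."""
    return template.replace("$usecase", usecase)

queries = [render(t, "entity_crud") for t in QUERY_TEMPLATES]
print(queries[0])
```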
Multiple Stream Dashboards
Use these queries to monitor all streams across all use cases:
avg by(usecaseName) (redis_streams_memory_usage{})
avg by(usecaseName) (redis_streams_length{})
avg by(usecaseName) (redis_streams_average_message_size{})
avg by(usecaseName, consumergroupName) (redis_streams_consumer_group_behind_by_count{})
avg by(usecaseName, consumergroupName) (redis_streams_consumer_group_pending_count{})
avg by(exported_namespace, usecaseName) (redis_streams_events_framework_deadletter_queue_size{})
Available Use Cases
The usecaseName label can be one of the following values:
- DEBEZIUM_gitOpsMongo.harness-gitops.applications
- DEBEZIUM_gitOpsMongo.harness-gitops.utilization_snapshot
- DEBEZIUM_harnessMongo.harness.applications
- DEBEZIUM_ng-harness.ng-harness.moduleLicenses
- DEBEZIUM_ngMongo.ng-harness.moduleLicenses
- DEBEZIUM_pms-harness.pms-harness.planExecutionsSummary
- DEBEZIUM_pmsMongo.pms-harness.planExecutionsSummary
- DEBEZIUM_sscaMongo.ng-harness.instanceNG
- LICENSES_USAGE_REDIS_EVENT_CONSUMER
- async_filter_creation
- cache_refresh
- cd_deployment_event
- cf_archive_ff_activation_audit
- cf_archive_ff_audit
- cf_create_env
- cf_create_ff_activation_audit
- cf_create_ff_audit
- cf_create_seg_audit
- cf_delete_env
- cf_delete_ff_activation_audit
- cf_delete_ff_audit
- cf_delete_seg_audit
- cf_dismiss_anomaly
- cf_feature_metrics_data
- cf_git_sync
- cf_git_sync_now_events
- cf_patch_ff_activation_audit
- cf_patch_ff_audit
- cf_patch_seg_audit
- cf_proxy_key
- cf_restore_ff_activation_audit
- cf_restore_ff_audit
- cf_svc_updates
- cf_target_metrics
- cg_general_event
- cg_notify_event
- chaos_change_events
- ci_orchestration_notify_event
- entity_activity
- entity_crud
- full_sync_stream
- git_branch_hook_event_stream
- git_config_stream
- git_pr_event_stream
- git_push_event_stream
- iacm_orchestration_notify_event
- instance_stats
- ldap_group_sync
- modulelicense
- observer_event_channel
- orchestration_log
- pipeline_initiate_node
- pipeline_interrupt
- pipeline_interrupt_cd
- pipeline_interrupt_ci
- pipeline_interrupt_cv
- pipeline_interrupt_iacm
- pipeline_interrupt_pms
- pipeline_interrupt_sto
- pipeline_node_advise
- pipeline_node_advise_cd
- pipeline_node_advise_ci
- pipeline_node_advise_cv
- pipeline_node_advise_iacm
- pipeline_node_advise_pms
- pipeline_node_advise_sto
- pipeline_node_facilitation
- pipeline_node_facilitation_cd
- pipeline_node_facilitation_ci
- pipeline_node_facilitation_cv
- pipeline_node_facilitation_iacm
- pipeline_node_facilitation_pms
- pipeline_node_facilitation_sto
- pipeline_node_progress
- pipeline_node_progress_cd
- pipeline_node_progress_ci
- pipeline_node_progress_cv
- pipeline_node_progress_iacm
- pipeline_node_progress_pms
- pipeline_node_progress_sto
- pipeline_node_resume
- pipeline_node_resume_cd
- pipeline_node_resume_ci
- pipeline_node_resume_cv
- pipeline_node_resume_iacm
- pipeline_node_resume_pms
- pipeline_node_resume_sto
- pipeline_node_start
- pipeline_node_start_cd
- pipeline_node_start_ci
- pipeline_node_start_cv
- pipeline_node_start_iacm
- pipeline_node_start_pms
- pipeline_node_start_sto
- pipeline_orchestration
- pipeline_partial_plan_response
- pipeline_sdk_response
- pipeline_sdk_spawn
- pipeline_sdk_step_response
- pipeline_start_plan
- plan_notify_event
- pms_orchestration_notify_event
- polling_events_stream
- saml_authorization_assertion
- setup_usage
- srm_custom_change
- srm_statemachine_event
- sto_orchestration_notify_event
- trigger_execution_events_stream
- usermembership
- webhook_events_stream
- webhook_request_payload_data
Part 2: Configure Metric Scraping
Option 1: Using Prometheus Operator (monitoring.coreos.com/v1)
If you are using the Prometheus Operator with monitoring.coreos.com/v1 CRDs, create a PodMonitor resource to scrape metrics from the redis-stream-monitor pods.
If you encounter an error that the CRD is not present, ensure that the Prometheus Operator CRDs were installed when you deployed your Prometheus instance. This resource is required for Prometheus to discover and scrape the metrics.
Create a file named podmonitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: redis-stream-monitor
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: redis-stream-monitor
  podMetricsEndpoints:
    - port: "8889"
      interval: 120s
      path: "/metrics"
Apply the PodMonitor:
kubectl apply -f podmonitor.yaml
Alternatively, if managedPlatform: "oss" is configured in your override values, the Helm chart creates the PodMonitor resource automatically and you can skip the manual apply.
Option 2: Using Other Observability Tools
For other observability platforms, configure your scraper to:
- Target port: 8889
- Metrics path: /metrics
- Pod selector: app.kubernetes.io/name: redis-stream-monitor
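For example, with a vanilla Prometheus server (without the Operator), a scrape job along these lines would pick up the pods. This is a sketch, not a definitive configuration: the job name is illustrative, and you should substitute your actual namespace.

```yaml
scrape_configs:
  - job_name: redis-stream-monitor   # illustrative job name
    metrics_path: /metrics
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - <namespace>
    relabel_configs:
      # Keep only pods carrying the chart's app label
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        regex: redis-stream-monitor
        action: keep
      # Point the scrape address at the metrics port
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        target_label: __address__
        replacement: $1:8889
```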
Summary
You have now deployed the redis-stream-monitor to capture detailed Redis stream metrics. These metrics provide valuable insights into the Harness event framework, including stream length, memory usage, consumer lag, and dead letter queue sizes, enabling proactive monitoring and troubleshooting of event processing in your Harness environment.