Kubernetes provides a Horizontal Pod Autoscaler, which allows you to scale your deployment based on metrics such as CPU usage or custom metrics. This lets you scale your Kubernetes environment based on criteria that you define. The only shortcoming is that the built-in autoscaler, the Horizontal Pod Autoscaler, can only monitor a pod's CPU usage out of the box. In this blog post, you will learn how to use the open-source tool KEDA (Kubernetes Event-Driven Autoscaler) to monitor metrics coming out of the HAProxy Kubernetes Ingress Controller instead.

TL;DR: The Horizontal Pod Autoscaler is a control loop, managed by the controller manager, that queries the metrics and compares resource utilization against the target values of those metrics.

The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled by the controller manager's --horizontal-pod-autoscaler-sync-period flag (default value: 15 seconds). During each period, the controller manager queries the resource utilization against the metrics specified in each HorizontalPodAutoscaler definition. It obtains the metrics from either the resource metrics API (for per-pod resource metrics) or the custom metrics API (for all other metrics).

Furthermore, I highly suggest you check out the algorithm behind the Horizontal Pod Autoscaler here.

Example

By just running a command, we can easily scale our deployment.
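As a minimal sketch of what that looks like, here is a CPU-based HorizontalPodAutoscaler manifest; the deployment name, replica bounds, and utilization target are placeholders, not values from this post:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # assumed target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale when average CPU exceeds 60%
```

An equivalent object can be created with a single command: kubectl autoscale deployment my-app --cpu-percent=60 --min=2 --max=10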
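To preview where this post is heading, KEDA replaces that CPU-only view with event-driven triggers. A ScaledObject using KEDA's Prometheus scaler against HAProxy metrics might look like the following sketch; the deployment name, Prometheus address, metric query, and threshold are all assumptions for illustration:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: haproxy-scaledobject
spec:
  scaleTargetRef:
    name: my-app          # assumed Deployment name
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        # assumed in-cluster Prometheus endpoint scraping the ingress controller
        serverAddress: http://prometheus.default.svc:9090
        # assumed HAProxy metric exposed by the ingress controller
        query: sum(rate(haproxy_backend_http_requests_total[2m]))
        threshold: "100"
```

KEDA then feeds this metric to the Horizontal Pod Autoscaler through the external metrics API, so the same control loop described above drives the scaling decision.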
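The scaling algorithm mentioned above boils down to one formula from the Kubernetes documentation, desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), which can be sketched in a few lines of Python (the function name is illustrative):

```python
from math import ceil

def desired_replicas(current_replicas: int, current_value: float, target_value: float) -> int:
    """Sketch of the HPA scaling formula:
    desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    """
    return ceil(current_replicas * (current_value / target_value))

# 3 replicas averaging 90% CPU against a 60% target -> scale up to 5
print(desired_replicas(3, 90, 60))  # 5
# 4 replicas averaging 30% CPU against a 60% target -> scale down to 2
print(desired_replicas(4, 30, 60))  # 2
```

Note that if the current value equals the target, the ratio is 1 and the replica count is left unchanged.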