In today's dynamic and cloud-native landscape, monitoring and observability are essential aspects of managing and maintaining applications running on Kubernetes.
With the proliferation of microservices and containerized architectures, a comprehensive monitoring solution becomes crucial for ensuring reliability, performance, and scalability, and Prometheus has become the standard choice for Kubernetes monitoring.
In this blog post, our goal is to enable users to set up a self-managed Prometheus instance using Helm in a Kubernetes environment. This setup aims to provide users with more control over their monitoring infrastructure, allowing for customization and flexibility in managing Prometheus and Grafana.
To achieve this goal, we will cover installing Prometheus with Helm, configuring Grafana with Cognito authentication, Postgres persistence, and SMTP, setting up email and Slack notifications, importing dashboards, and cleaning up the setup.
This guide is designed to be accessible for users with basic knowledge of Kubernetes and Helm, offering step-by-step instructions for a successful setup.
Prometheus is an open-source monitoring and alerting toolkit originally built at SoundCloud. It is widely adopted for monitoring applications and services in modern cloud-native environments and is particularly well suited to monitoring Kubernetes clusters. A self-managed Prometheus setup refers to an instance of Prometheus that is configured and maintained by the user within their Kubernetes environment using Helm charts. Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. It allows users to define, install, and upgrade complex Kubernetes applications using pre-configured package definitions called charts. Unlike managed services, a self-managed Prometheus instance gives users full control over the configuration, customization, and maintenance of their monitoring infrastructure.
By opting for a self-managed Prometheus setup, users gain the advantage of a highly customizable and scalable monitoring solution that aligns with their specific operational needs, integrating seamlessly with Grafana for comprehensive monitoring and visualization capabilities.
Before you begin with the installation of a self-managed Prometheus setup, ensure that you have the following prerequisites in place:
Meeting these prerequisites ensures a smooth installation and configuration process for your self-managed Prometheus setup.
1. Add Prometheus Helm Repo:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
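After adding the repository, it is worth refreshing the local chart index so the install step that follows pulls the latest chart version (this assumes Helm is already installed and configured, per the prerequisites):

```shell
# Refresh the local Helm chart index so the latest prometheus chart is used.
helm repo update
```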
2. Install Prometheus with Helm:
helm install prometheus prometheus-community/prometheus \
  --namespace prometheus \
  --create-namespace \
  --set alertmanager.enabled=false \
  --set prometheus-pushgateway.enabled=false \
  --set configmapReload.prometheus.enabled=false
NAME: prometheus
LAST DEPLOYED: Mon Apr 15 12:05:12 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
#################################################################################
###### WARNING: Pod Security Policy has been disabled by default since #####
###### it deprecated after k8s 1.25+. use #####
###### (index .Values "prometheus-node-exporter" "rbac" #####
###### . "pspEnabled") with (index .Values #####
###### "prometheus-node-exporter" "rbac" "pspAnnotations") #####
###### in case you still need it. #####
#################################################################################
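Before moving on, it is worth confirming that the release came up. A minimal check, assuming the namespace and release name used in the install command above (adjust both if yours differ):

```shell
# List the Prometheus workloads for this release; all pods should
# eventually reach the Running/Ready state.
kubectl get pods --namespace prometheus -l app.kubernetes.io/instance=prometheus
```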
3. Navigate to 'microservices-boilerplate' Directory:
cd infra/config-files/amp-files
4. Configure Grafana:
5. To set up Postgres as persistence for Grafana and add SMTP credentials to grafana.ini, follow these steps:
DOMAIN_NAME="test-yourproject.com"
COGNITO_AUTH_DOMAIN="https://microservices-test-yourproject-users.auth.eu-west-2.amazoncognito.com"
COGNITO_POOL_CLIENT_ID="your_id_here"
COGNITO_POOL_CLIENT_SECRET="your_secret_here"
GRAFANA_DB_URL="postgres://postgres:Qdfg5d4g54Yd5gQd4sfG@microservices-db.cluster-cm16afbs4pai.eu-west-2.rds
SMTP_PASSWORD="your_password_here"
GRAFANA_ENV="grafana-mb-test"
cd config-files/amp-files
cp amp_query_override_values.yaml amp_query_override_values.tmp.yaml
sed -i "s/\[DOMAIN_NAME\]/$DOMAIN_NAME/g" amp_query_override_values.tmp.yaml
sed -i "s|\[COGNITO_AUTH_DOMAIN\]|$COGNITO_AUTH_DOMAIN|g" amp_query_override_values.tmp.yaml
sed -i "s|\[COGNITO_POOL_CLIENT_ID\]|$COGNITO_POOL_CLIENT_ID|g" amp_query_override_values.tmp.yaml
sed -i "s|\[COGNITO_POOL_CLIENT_SECRET\]|$COGNITO_POOL_CLIENT_SECRET|g" amp_query_override_values.tmp.yaml
sed -i "s|\[GRAFANA_DB_URL\]|$GRAFANA_DB_URL|g" amp_query_override_values.tmp.yaml
sed -i "s|\[SMTP_PASSWORD\]|$SMTP_PASSWORD|g" amp_query_override_values.tmp.yaml
sed -i "s|\[GRAFANA_ENV\]|$GRAFANA_ENV|g" amp_query_override_values.tmp.yaml
cd ../../
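After the substitutions above, it is easy to miss a placeholder whose environment variable was never exported. A quick sanity check is to grep the rendered file for any remaining `[PLACEHOLDER]` tokens; the sketch below demonstrates the check on an inline sample file, but in practice you would point it at `amp_query_override_values.tmp.yaml`:

```shell
# Report any unreplaced [PLACEHOLDER] tokens left in a rendered file.
check_placeholders() {
  grep -n '\[[A-Z_]*\]' "$1" && echo "Unreplaced placeholders found" || echo "All placeholders substituted"
}

# Demonstration on a sample file with one placeholder left in place.
printf 'client_secret: [COGNITO_POOL_CLIENT_SECRET]\n' > /tmp/sample.yaml
check_placeholders /tmp/sample.yaml
```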
# serviceAccount:
#   name: "iamproxy-service-account"
grafana.ini:
  server:
    domain: grafana.test-yourproject.com
    root_url: https://grafana.your-amazing-project.com/
    router_logging: true
  auth:
    sigv4_auth_enabled: true
    login_cookie_name: grafana_session
    login_maximum_inactive_lifetime_duration: 12h
    login_maximum_lifetime_duration: 24h
    disable_login_form: true
    disable_signout_menu: false
    signout_redirect_url: https://microservices-dev.auth.eu-west-2.amazoncognito.com/logout?client_id=your_id_here&logout_uri=https://grafana.my-amazing-project.com/login
    oauth_auto_login: false
  aws:
    assume_role_enabled: true
  auth.generic_oauth:
    enabled: true
    name: OAuth
    allow_sign_up: true
    auto_login: false
    client_id: your_id_here
    client_secret: your_secret_here
    scopes: email aws.cognito.signin.user.admin openid profile
    auth_url: https://microservices-dev.auth.eu-west-2.amazoncognito.com/oauth2/authorize
    token_url: https://microservices-dev.auth.eu-west-2.amazoncognito.com/oauth2/token
    api_url: https://microservices-dev.auth.eu-west-2.amazoncognito.com/oauth2/userInfo
    role_attribute_path: >-
      ("cognito:groups" |
      contains([*], 'grafana-admin') && 'Admin' ||
      contains([*], 'grafana-editor') && 'Editor' ||
      contains([*], 'grafana-viewer') && 'Viewer')
    role_attribute_strict: true
  security:
    cookie_samesite: lax
  database:
    url: postgres://postgres:sdfsdfsfsdf@sdfsdfsdf-db.cluster-sdfsdf.eu-west-2.rds.amazonaws.com:5432/grafana-dev
    ssl_mode: disable
  smtp:
    enabled: true
    host: email-smtp.eu-west-2.amazonaws.com:587
    startTLS_policy: MandatoryStartTLS
    password: sdfsdfsfs+pHt+s0
    user: fdsfsdsdfs
    from_address: contact@seaflux.com
    from_name: grafana-mb-dev
resources:
  limits:
    cpu: "500m"
    memory: "1G"
  requests:
    cpu: "85m"
    memory: "256Mi"
assertNoLeakedSecrets: false
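The install command below references the grafana/grafana chart, so the official Grafana chart repository must be available locally. If you have not added it yet, do so first:

```shell
# Add the official Grafana Helm repository and refresh the local index.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```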
helm upgrade --install grafana-for-amp grafana/grafana -n default -f ./amp_query_override_values.tmp.yaml
Release "grafana-for-amp" does not exist. Installing it now.
NAME: grafana-for-amp
LAST DEPLOYED: Mon Apr 15 12:00:54 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
kubectl get secret --namespace default grafana-for-amp -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
grafana-for-amp.default.svc.cluster.local
Get the Grafana URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana-for-amp" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000
3. Login with the password from step 1 and the username: admin
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
###### the Grafana pod is terminated. #####
#################################################################################
6. Create a Slack Channel and App for Notifications:
7. Get Notifications on Email and Slack:
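Before wiring Slack into a Grafana contact point, it helps to verify that the incoming webhook created for your Slack app actually works. The sketch below is a dry run: the webhook URL and message text are placeholders, and the script prints the curl command rather than sending it, so you can inspect it before swapping in your real webhook and executing it:

```shell
# Hypothetical Slack incoming-webhook URL -- replace with the one Slack
# generates for your app's Incoming Webhooks feature.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

# JSON payload Slack expects for a simple text message.
payload='{"text": "Test notification from Grafana"}'

# Dry run: print the curl command instead of calling Slack. Remove the
# leading echo to actually send the test message.
echo curl -X POST -H "Content-Type: application/json" -d "$payload" "$SLACK_WEBHOOK_URL"
```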
8. Import Grafana Metrics:
Import the sample dashboard JSON file from this GitHub repository to get the metrics onto your Grafana dashboard. You can then edit the panels and queries to match your specific data.
If you want to remove the self-managed Prometheus setup and associated components, follow these steps:
1. Delete Prometheus Deployment:
Use the following Helm command to delete the Prometheus deployment:
helm delete prometheus -n prometheus
2. Delete Prometheus Namespace (Optional):
If you created a separate namespace for Prometheus, you can delete the namespace:
kubectl delete namespace prometheus
3. Verify Cleanup:
Ensure that all associated resources, such as Pods, Services, and ConfigMaps, are removed. Use kubectl commands to verify the cleanup.
kubectl get pods,svc,configmap -n prometheus
Congratulations! Now you know what Prometheus is and how it can be leveraged to monitor your Kubernetes microservices. By installing and configuring it with Helm, you can effectively self-manage monitoring for your Kubernetes applications. Keep exploring and experimenting with different configurations to optimize your monitoring setup further.
We at Seaflux are your dedicated partners in the ever-evolving landscape of Cloud Computing. Whether you're contemplating a seamless cloud migration, exploring the possibilities of Kubernetes deployment, or harnessing the power of AWS serverless architecture, Seaflux is here to lead the way.
Have specific questions or ambitious projects in mind? Let's discuss! Schedule a meeting with us here, and let Seaflux be your trusted companion in unlocking the potential of cloud innovation. Your journey to a more agile and scalable future starts with us.
DevOps Engineer