K8s on AWS Management Scripts

  • Post category: Column
  • Post last modified: February 8, 2020

July 30, 2018. Taking a moment to write up a few scripts.

Changing the Reclaim Policy to Retain

If a Persistent Volume's Reclaim Policy is Delete, you can lose data. So we usually set the StorageClass's Reclaim Policy to Retain, but because the default StorageClass's Reclaim Policy is Delete, you occasionally have to fix the Reclaim Policy of individual PVs. The script below finds every PV whose Reclaim Policy is not Retain and changes it to Retain.

#!/bin/bash
# See https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
set -eo pipefail
echo ">>> BEFORE"
kubectl get pv
# Collect the names of PVs whose policy is not Retain; skip the patch if there are none.
PVS=$(kubectl get pv -o json | jq -r '.items[] | select(.spec.persistentVolumeReclaimPolicy != "Retain") | .metadata.name')
if [ -n "${PVS}" ]; then
  kubectl patch pv ${PVS} -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
fi
echo ">>> AFTER"
kubectl get pv
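The jq filter is the only non-obvious part, and it can be sanity-checked locally against a hand-written PV list without touching a cluster (the PV names below are fabricated for illustration):

```shell
# Sanity-check the jq filter against a fabricated PV list.
# Only the PV whose policy is not "Retain" should be printed.
cat <<'EOF' > /tmp/pv-sample.json
{
  "items": [
    {"metadata": {"name": "pv-keep"},
     "spec": {"persistentVolumeReclaimPolicy": "Retain"}},
    {"metadata": {"name": "pv-fix"},
     "spec": {"persistentVolumeReclaimPolicy": "Delete"}}
  ]
}
EOF
jq -r '.items[] | select(.spec.persistentVolumeReclaimPolicy != "Retain") | .metadata.name' /tmp/pv-sample.json
# → pv-fix
```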

Running this automatically about once a day can save a lot of people.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: reclaim-policy
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: reclaim-policy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reclaim-policy
subjects:
- kind: ServiceAccount
  name: reclaim-policy
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reclaim-policy
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - patch
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: reclaim-policy-scripts
  namespace: monitoring
data:
  set-to-retain.sh: |
    #!/bin/bash
    # See https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
    echo ">>> BEFORE"
    kubectl get pv
    kubectl patch pv $(kubectl get pv -o json | jq -r '.items[] | select(.spec.persistentVolumeReclaimPolicy != "Retain") | .metadata.name') -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
    echo ">>> AFTER"
    kubectl get pv
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: reclaim-policy
  namespace: monitoring
  labels:
    app: reclaim-policy
spec:
  schedule: "@hourly"
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: reclaim-policy
        spec:
          serviceAccount: reclaim-policy
          containers:
          - name: reclaim-policy
            image: solsson/[email protected]:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2
            args:
            - /bin/bash
            - -c
            - /scripts/set-to-retain.sh
            volumeMounts:
            - name: scripts-d
              mountPath: /scripts
          volumes:
          - name: scripts-d
            projected:
              defaultMode: 500
              sources:
              - configMap:
                  name: reclaim-policy-scripts
                  items:
                  - key: set-to-retain.sh
                    path: set-to-retain.sh
          restartPolicy: Never

Viewing Application Distribution by AZ

This is useful when you want to know whether your applications are spread evenly across AWS Availability Zones.

#!/bin/bash -e
type csvjson || brew install csvkit
type jq || brew install jq
RED='\033[0;31m'
NC='\033[0m' # No Color
OUTPUT_PODS=pods.csv
truncate -s 0 "${OUTPUT_PODS}"
echo "Namespace,App,Node" > "${OUTPUT_PODS}"
kubeall get pod -o json | jq -r '.items[] | [ .metadata.namespace, if (.metadata.labels.app?) then (.metadata.labels.app) elif (.metadata.labels."k8s-app"?) then (.metadata.labels."k8s-app") else "" end, .spec.nodeName ] | @csv' >> "${OUTPUT_PODS}"
OUTPUT_NODES=nodes.csv
truncate -s 0 "${OUTPUT_NODES}"
echo "Region,AZ,Node" > "${OUTPUT_NODES}"
kubeall get node -o json | jq -r '.items[] | [ .metadata.labels."failure-domain.beta.kubernetes.io/region", .metadata.labels."failure-domain.beta.kubernetes.io/zone", .metadata.name ] | @csv' >> "${OUTPUT_NODES}"
TABLE_NAME=tmp
TMP_FILE=${TABLE_NAME}.csv
truncate -s 0 "${TMP_FILE}"
OUTPUT_FILE=output.csv
truncate -s 0 "${OUTPUT_FILE}"
csvjoin -c Node "${OUTPUT_PODS}" "${OUTPUT_NODES}" | csvcut -c Namespace,App,Node,AZ > "${TMP_FILE}"
csvsql --query "select Namespace, App, AZ, count(*) as cnt from '${TABLE_NAME}' group by Namespace, App, AZ order by Namespace asc, App asc, AZ asc" "${TMP_FILE}" > "${OUTPUT_FILE}"
echo -e "The following are the first 3 rows of the result:"
echo -e "\n"
head -n 3 "${OUTPUT_FILE}" | cut -d, -f1-9
echo -e "\n"
echo -e "Open ${OUTPUT_FILE} to read all the records you wanted"
echo -e "\n"
echo "done!"
exit 0
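If csvkit isn't available, the join-and-count step at the heart of this script can be approximated with coreutils alone. A minimal sketch, using fabricated sample data in place of the real pods.csv/nodes.csv:

```shell
# Fabricated sample data: pods as Namespace,App,Node and nodes as AZ,Node.
cat <<'EOF' > /tmp/pods-demo.csv
default,web,node-a
default,web,node-b
default,api,node-a
EOF
cat <<'EOF' > /tmp/nodes-demo.csv
ap-northeast-2a,node-a
ap-northeast-2c,node-b
EOF
# join(1) requires both inputs sorted on the join field.
sort -t, -k3,3 /tmp/pods-demo.csv > /tmp/pods-sorted.csv
sort -t, -k2,2 /tmp/nodes-demo.csv > /tmp/nodes-sorted.csv
# Join on the node name (field 3 of pods, field 2 of nodes),
# keep Namespace,App,AZ, and count pods per combination.
join -t, -1 3 -2 2 /tmp/pods-sorted.csv /tmp/nodes-sorted.csv \
  | awk -F, '{print $2","$3","$4}' | sort | uniq -c
```

The output is one line per (Namespace, App, AZ) combination prefixed with its pod count, which is the same aggregation the csvsql query produces.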

Viewing EC2 Instance Distribution by AZ

This one isn't Kubernetes-specific, but it's useful for checking whether your EC2 instances are spread evenly across AZs.

#!/bin/bash
type csvsql || brew install csvkit
type jq || brew install jq
type aws || brew install awscli
TABLE_NAME=tmp
TMP_FILE=${TABLE_NAME}.csv
OUTPUT_FILE="output.csv"
truncate -s 0 "${OUTPUT_FILE}"
printf "Name\tAZ\n" > "${TMP_FILE}"
aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | select(.State.Name == "running") | [ (.Tags[]|select(.Key=="Name")|.Value), .Placement.AvailabilityZone ] | @tsv' >> "${TMP_FILE}"
# -t tells csvsql the input is tab-delimited.
csvsql -t --query "select Name, AZ, count(*) as cnt from '${TABLE_NAME}' group by Name, AZ" "${TMP_FILE}" > "${OUTPUT_FILE}"
echo -e "The following are the first 3 rows of the result:"
echo -e "\n"
head -n 3 "${OUTPUT_FILE}" | cut -d, -f1-3
echo -e "\n"
echo -e "Open ${OUTPUT_FILE} to read all the records you wanted"
echo -e "\n"
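The jq filter over the describe-instances output can likewise be checked against a canned response. The JSON below mirrors the AWS CLI's response structure, with fabricated instance names and AZs:

```shell
# Check the jq filter against a canned describe-instances response.
# Only the running instance's Name tag and AZ should come out.
cat <<'EOF' > /tmp/ec2-sample.json
{
  "Reservations": [
    {"Instances": [
      {"State": {"Name": "running"},
       "Placement": {"AvailabilityZone": "ap-northeast-2a"},
       "Tags": [{"Key": "Name", "Value": "web-1"}]},
      {"State": {"Name": "stopped"},
       "Placement": {"AvailabilityZone": "ap-northeast-2c"},
       "Tags": [{"Key": "Name", "Value": "web-2"}]}
    ]}
  ]
}
EOF
jq -r '.Reservations[].Instances[] | select(.State.Name == "running") | [ (.Tags[]|select(.Key=="Name")|.Value), .Placement.AvailabilityZone ] | @tsv' /tmp/ec2-sample.json
```

Only `web-1` with its AZ is printed; the stopped instance is filtered out before it can skew the per-AZ counts.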
