
Using Existing Ceph Cluster for Kubernetes Persistent Storage

 louy2 2019-01-14


I wrote about Rook storage a few weeks ago, but maybe you already have a Ceph cluster running in your datacenter. Or you may prefer to run Ceph on separate nodes, without Kubernetes. Also, Rook is currently alpha software and not ready for production use. If you have a large Ceph cluster, chances are it also serves other services outside Kubernetes. Whatever the case, it is simple to connect Ceph and Kubernetes together to provision persistent volumes on Kubernetes.


Connect Ceph and Kubernetes

The RBD client handles the interaction between Kubernetes and Ceph. Unfortunately, it is not available in the official kube-controller-manager container. You could change the kube-controller-manager image to include RBD, but that is not recommended. Instead, I will use the external storage plugin for Ceph. This creates a separate rbd-provisioner pod which has rbd installed. My Kubernetes test cluster has RBAC enabled; if yours does not, you only need to create the Deployment resource and can skip the rest. In that case, don't forget to delete the service account from the deployment definition. Let's create all resources for rbd-provisioner with RBAC in the kube-system namespace:

$ cat <<EOF | kubectl create -n kube-system -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccountName: rbd-provisioner
EOF

Please check that the quay.io/external_storage/rbd-provisioner:latest image has the same Ceph version installed as your Ceph cluster. You can check this on any machine running Docker:

$ docker pull quay.io/external_storage/rbd-provisioner:latest
$ docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION
<missing>           15 hours ago        /bin/sh -c #(nop)  ENV CEPH_VERSION=luminous    0B

Wait a few minutes for the RBD volume provisioner to be up and running:

$ kubectl get pods -l app=rbd-provisioner -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
rbd-provisioner-77d75fdc5b-mpbpn   1/1       Running   1          1m

The RBD volume provisioner needs the admin key from Ceph to provision storage. To get the admin key from the Ceph cluster, use this command:

sudo ceph --cluster ceph auth get-key client.admin

NOTE: Run all commands that start with sudo on the Ceph MON node. Also, I'm using the Jewel version of Ceph, and rbd-provisioner is based on Jewel as well.

Then add this key to Kubernetes secrets:

$ kubectl create secret generic ceph-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key='AQBwruNY/lEmCxAAKS7tzZHSforkUE85htnA/g==' \
    --namespace=kube-system
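
If kubectl happens to be configured on the same machine where the Ceph CLI runs (an assumption about your setup, not a requirement), you can pipe the key straight into the secret instead of copy-pasting it:

$ kubectl create secret generic ceph-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key="$(sudo ceph --cluster ceph auth get-key client.admin)" \
    --namespace=kube-system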

I will also create a separate Ceph pool for Kubernetes and a new client key, as this Ceph cluster has cephx authentication enabled:

sudo ceph --cluster ceph osd pool create kube 1024 1024
sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
sudo ceph --cluster ceph auth get-key client.kube
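
One caveat: on Luminous and newer releases, Ceph expects a pool to be tagged with the application that uses it before I/O is allowed; on Jewel, as used here, this step does not exist. If your cluster is on Luminous or later, something like this should do it:

sudo ceph --cluster ceph osd pool application enable kube rbd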

Add the new client secret for the kube pool into Kubernetes secrets:

$ kubectl create secret generic ceph-secret-kube \
    --type="kubernetes.io/rbd" \
    --from-literal=key='AQC/c+dYsXNUNBAAMTEW1/WnzXdmDZIBhcw6ug==' \
    --namespace=kube-system
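
Before moving on, it doesn't hurt to confirm that both secrets exist:

$ kubectl get secret ceph-secret ceph-secret-kube -n kube-system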

When both secrets are present, create the new storage class. Let's call it fast-rbd:

$ cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: <monitor-1-ip>:6789,<monitor-2-ip>:6789,<monitor-3-ip>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
EOF
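
Optionally, you can mark fast-rbd as the default storage class, so that PVCs without an explicit storageClassName also land on Ceph:

$ kubectl patch storageclass fast-rbd \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'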

And the last step is to create a simple PVC to test the RBD volume provisioner:

$ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: fast-rbd
EOF

That's it. The new volume has been created on the Ceph cluster:

$ kubectl get pvc myclaim
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound     pvc-11559e19-2541-11e8-94dc-525400474652   8Gi        RWO            fast-rbd       1h
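
To confirm the volume actually attaches and mounts, you could run a throwaway pod against the claim (the pod name and busybox image here are just for illustration):

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim
EOF

On the Ceph side, the provisioned image should also show up in the kube pool:

sudo rbd ls -p kube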

For troubleshooting, describe the particular PVC and check the rbd-provisioner logs.
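
For example, with the myclaim PVC from above (the label selector matches the deployment created earlier):

$ kubectl describe pvc myclaim
$ kubectl logs -l app=rbd-provisioner -n kube-system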

Summary

This was a simple how-to guide to help you connect an existing Ceph cluster and Kubernetes. The RBD volume provisioner is simple to deploy, but I might create a Helm chart for it later. Stay tuned for the next one.
