Openstack over k8s
1) Install OpenStack (with Ceph as storage) on top of Kubernetes (all-in-one installation) using the openstack-helm project.
2) Afterwards, change the Keystone token expiration time to 24 hours (a hedged sketch of this override follows below).
3) Deploy 3 VMs connected to each other using Heat.
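For step 2, one possible approach (not performed in this section) is a Helm values override: openstack-helm charts typically render keystone.conf from the conf.keystone values tree, so 24 hours (86400 seconds) could be set as in the sketch below. The release name, chart path and the exact values key are assumptions, not something recorded in these notes.

# Hypothetical values override for the openstack-helm keystone chart
# (conf.keystone.token.expiration is assumed to map to [token]/expiration in keystone.conf)
cat > keystone-token.yaml <<EOF
conf:
  keystone:
    token:
      expiration: 86400
EOF

# Apply it to an already-deployed keystone release (release and chart names assumed):
helm upgrade keystone ./keystone --values=keystone-token.yaml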
Install minikube
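The exact minikube invocation is not recorded in these notes; a typical single-node start with enough resources for Ceph and OpenStack might look like the sketch below (memory, CPU and disk sizes are assumptions, adjust for the host).

# Start a single-node cluster; resource sizes are illustrative only
minikube start --memory 8192 --cpus 4 --disk-size 40g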
Install helm
brew install kubernetes-helm
helm init
$HELM_HOME has been configured at /Users/mmazur/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-fc47s           1/1     Running   0          2d19h
coredns-86c58d9df4-jpl6m           1/1     Running   0          2d19h
etcd-minikube                      1/1     Running   0          2d19h
kube-addon-manager-minikube        1/1     Running   0          2d19h
kube-apiserver-minikube            1/1     Running   0          2d19h
kube-controller-manager-minikube   1/1     Running   0          2d19h
kube-proxy-9tg5l                   1/1     Running   0          2d19h
kube-scheduler-minikube            1/1     Running   0          2d19h
storage-provisioner                1/1     Running   0          2d19h
tiller-deploy-69ffbf64bc-vspfs     1/1     Running   0          108s
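As an optional sanity check once the tiller-deploy pod is Running, helm version confirms that the client can reach Tiller; both a Client and a Server version should be reported.

# Optional: verify client/Tiller connectivity
helm version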
helm list
helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Ceph
helm serve &
[1] 35959
14:16:09-mmazur@Mac18:~/WORK_OTHER/Mirantis_test_task/ceph-helm$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
helm repo add local http://localhost:8879/charts
"local" has been added to your repositories
git clone https://github.com/ceph/ceph-helm
cd ceph-helm/ceph
make
===== Processing [helm-toolkit] chart =====
if [ -f helm-toolkit/Makefile ]; then make --directory=helm-toolkit; fi
find: secrets: No such file or directory
echo Generating /Users/mmazur/WORK_OTHER/Mirantis_test_task/ceph-helm/ceph/helm-toolkit/templates/_secrets.tpl
Generating /Users/mmazur/WORK_OTHER/Mirantis_test_task/ceph-helm/ceph/helm-toolkit/templates/_secrets.tpl
rm -f templates/_secrets.tpl
for i in ; do printf '{{ define "'$i'" }}' >> templates/_secrets.tpl; cat $i >> templates/_secrets.tpl; printf "{{ end }}\n" >> templates/_secrets.tpl; done
if [ -f helm-toolkit/requirements.yaml ]; then helm dependency update helm-toolkit; fi
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 0 charts
Deleting outdated charts
if [ -d helm-toolkit ]; then helm lint helm-toolkit; fi
==> Linting helm-toolkit
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
if [ -d helm-toolkit ]; then helm package helm-toolkit; fi
Successfully packaged chart and saved it to: /Users/mmazur/WORK_OTHER/Mirantis_test_task/ceph-helm/ceph/helm-toolkit-0.1.0.tgz
===== Processing [ceph] chart =====
if [ -f ceph/Makefile ]; then make --directory=ceph; fi
if [ -f ceph/requirements.yaml ]; then helm dependency update ceph; fi
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading helm-toolkit from repo http://localhost:8879/charts
Deleting outdated charts
if [ -d ceph ]; then helm lint ceph; fi
==> Linting ceph
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
if [ -d ceph ]; then helm package ceph; fi
Successfully packaged chart and saved it to: /Users/mmazur/WORK_OTHER/Mirantis_test_task/ceph-helm/ceph/ceph-0.1.0.tgz
kubectl describe node minikube
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=minikube
                    node-role.kubernetes.io/master=
The ceph charts place their pods via node labels (see the NODE SELECTOR column in the output below), so label the single minikube node for the mon, mgr and osd roles and for the OSD devices /dev/sdb and /dev/sdc:

kubectl label node minikube ceph-mon=enabled ceph-mgr=enabled ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-osd-device-dev-sdc=enabled
kubectl describe node minikube
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    ceph-mgr=enabled
                    ceph-mon=enabled
                    ceph-osd=enabled
                    ceph-osd-device-dev-sdb=enabled
                    ceph-osd-device-dev-sdc=enabled
                    kubernetes.io/hostname=minikube
                    node-role.kubernetes.io/master=
Install the ceph chart
helm install --name=ceph local/ceph
NAME:   ceph
LAST DEPLOYED: Sat Feb  2 14:31:32 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
ceph-mon  ClusterIP  None          <none>       6789/TCP  1s
ceph-rgw  ClusterIP  10.96.102.37  <none>       8088/TCP  0s

==> v1beta1/DaemonSet
NAME      DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR     AGE
ceph-mon  1        1        0      1           0          ceph-mon=enabled  0s

==> v1beta1/Deployment
NAME                  DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ceph-mds              1        1        1           0          0s
ceph-mgr              1        1        1           0          0s
ceph-mon-check        1        1        1           0          0s
ceph-rbd-provisioner  2        2        2           0          0s
ceph-rgw              1        1        1           0          0s

==> v1/Job
NAME                                 COMPLETIONS  DURATION  AGE
ceph-mon-keyring-generator           0/1          0s        0s
ceph-rgw-keyring-generator           0/1          0s        0s
ceph-mgr-keyring-generator           0/1          0s        0s
ceph-osd-keyring-generator           0/1          0s        0s
ceph-mds-keyring-generator           0/1          0s        0s
ceph-namespace-client-key-generator  0/1          0s        0s
ceph-storage-keys-generator          0/1          0s        0s

==> v1/Pod(related)
NAME                                        READY  STATUS             RESTARTS  AGE
ceph-mon-7bjnx                              0/3    Pending            0         0s
ceph-mds-85b4fbb478-9sr5d                   0/1    Pending            0         0s
ceph-mgr-588577d89f-gwgjx                   0/1    Pending            0         0s
ceph-mon-check-549b886885-gqhsg             0/1    Pending            0         0s
ceph-rbd-provisioner-5cf47cf8d5-d55wt       0/1    ContainerCreating  0         0s
ceph-rbd-provisioner-5cf47cf8d5-j6zhk       0/1    Pending            0         0s
ceph-rgw-7b9677854f-rfz5w                   0/1    Pending            0         0s
ceph-mon-keyring-generator-x8dxx            0/1    Pending            0         0s
ceph-rgw-keyring-generator-x4vrz            0/1    Pending            0         0s
ceph-mgr-keyring-generator-4tncv            0/1    Pending            0         0s
ceph-osd-keyring-generator-qfws4            0/1    Pending            0         0s
ceph-mds-keyring-generator-lfhnc            0/1    Pending            0         0s
ceph-namespace-client-key-generator-ktr6d   0/1    Pending            0         0s
ceph-storage-keys-generator-ctxvt           0/1    Pending            0         0s

==> v1/Secret
NAME                    TYPE    DATA  AGE
ceph-keystone-user-rgw  Opaque  7     1s

==> v1/ConfigMap
NAME              DATA  AGE
ceph-bin-clients  2     1s
ceph-bin          26    1s
ceph-etc          1     1s
ceph-templates    5     1s

==> v1/StorageClass
NAME     PROVISIONER   AGE
general  ceph.com/rbd  1s
Pods stuck in Pending: missing ceph-mds label
kubectl get pods
NAME                                        READY   STATUS              RESTARTS   AGE
ceph-mds-85b4fbb478-9sr5d                   0/1     Pending             0          2m
ceph-mds-keyring-generator-lfhnc            0/1     ContainerCreating   0          2m
ceph-mgr-588577d89f-gwgjx                   0/1     Init:0/2            0          2m
ceph-mgr-keyring-generator-4tncv            0/1     ContainerCreating   0          2m
ceph-mon-7bjnx                              0/3     Init:0/2            0          2m
ceph-mon-check-549b886885-gqhsg             0/1     Init:0/2            0          2m
ceph-mon-keyring-generator-x8dxx            0/1     ContainerCreating   0          2m
ceph-namespace-client-key-generator-ktr6d   0/1     ContainerCreating   0          2m
ceph-osd-keyring-generator-qfws4            0/1     ContainerCreating   0          2m
ceph-rbd-provisioner-5cf47cf8d5-d55wt       0/1     ContainerCreating   0          2m
ceph-rbd-provisioner-5cf47cf8d5-j6zhk       0/1     ContainerCreating   0          2m
ceph-rgw-7b9677854f-rfz5w                   0/1     Pending             0          2m
ceph-rgw-keyring-generator-x4vrz            0/1     ContainerCreating   0          2m
ceph-storage-keys-generator-ctxvt           0/1     ContainerCreating   0          2m
kubectl describe of the Pending ceph-mds pod shows that its node selector does not match any node label:

Node-Selectors:  ceph-mds=enabled
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  95s (x2 over 95s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match node selector.
The ceph-mds=enabled label was missing from the original labeling, so add it (the --overwrite flag is needed because the other labels already exist on the node):

kubectl label node minikube --overwrite ceph-mon=enabled ceph-mgr=enabled ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-osd-device-dev-sdc=enabled ceph-mds=enabled
Keyring generator jobs fail: missing RBAC permissions
kubectl get pods
NAME                                        READY   STATUS              RESTARTS   AGE
ceph-mds-85b4fbb478-9sr5d                   0/1     Pending             0          4m24s
ceph-mds-keyring-generator-lfhnc            0/1     ContainerCreating   0          4m24s
ceph-mgr-588577d89f-gwgjx                   0/1     Init:0/2            0          4m24s
ceph-mgr-keyring-generator-4tncv            0/1     ContainerCreating   0          4m24s
ceph-mon-7bjnx                              0/3     Init:0/2            0          4m24s
ceph-mon-check-549b886885-gqhsg             0/1     Init:0/2            0          4m24s
ceph-mon-keyring-generator-x8dxx            0/1     CrashLoopBackOff    4          4m24s
ceph-namespace-client-key-generator-ktr6d   0/1     ContainerCreating   0          4m24s
ceph-osd-keyring-generator-qfws4            0/1     CrashLoopBackOff    4          4m24s
ceph-rbd-provisioner-5cf47cf8d5-d55wt       0/1     ContainerCreating   0          4m24s
ceph-rbd-provisioner-5cf47cf8d5-j6zhk       0/1     ContainerCreating   0          4m24s
ceph-rgw-7b9677854f-rfz5w                   0/1     Pending             0          4m24s
ceph-rgw-keyring-generator-x4vrz            0/1     CrashLoopBackOff    4          4m24s
ceph-storage-keys-generator-ctxvt           0/1     ContainerCreating   0          4m24s
The keyring generator jobs are crash-looping because the default service account is not allowed to create secrets:

kubectl logs ceph-rgw-keyring-generator-x4vrz
... skipped ...
Error from server (Forbidden): error when creating "STDIN": secrets is forbidden: User "system:serviceaccount:default:default" cannot create resource "secrets" in API group "" in the namespace "default"
As a workaround, bind the default service account to the admin ClusterRole (saved as p.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: grant-all-to-default-service-account-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
kubectl create -f p.yaml
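Binding the default service account to the cluster-wide admin role is the quick fix used here; a narrower alternative would be a namespaced Role that only allows managing secrets, for example (a sketch only, not what was applied in this walkthrough, and the jobs may need further rules depending on what else they create):

# Hypothetical least-privilege alternative to the ClusterRoleBinding above
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-manager
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "list", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-manager-default-sa
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-manager
subjects:
- kind: ServiceAccount
  name: default
  namespace: default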
kubectl get pods
NAME                                        READY   STATUS      RESTARTS   AGE
ceph-mds-85b4fbb478-9sr5d                   0/1     Pending     0          14m
ceph-mgr-588577d89f-gwgjx                   0/1     Init:0/2    0          14m
ceph-mon-7bjnx                              0/3     Init:0/2    0          14m
ceph-mon-check-549b886885-gqhsg             0/1     Init:0/2    0          14m
ceph-namespace-client-key-generator-ktr6d   0/1     Completed   6          14m
ceph-rbd-provisioner-5cf47cf8d5-d55wt       1/1     Running     0          14m
ceph-rbd-provisioner-5cf47cf8d5-j6zhk       1/1     Running     0          14m
ceph-rgw-7b9677854f-rfz5w                   0/1     Pending     0          14m
Reinstall the ceph chart
With the RBAC binding in place, delete the release and install it again so that all keyring generator jobs run with the new permissions:

helm delete --purge ceph
helm install --name=ceph local/ceph
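After the reinstall, the keyring generator jobs should complete and the remaining pods should start. A possible way to verify progress and then check cluster health from inside the mon pod is sketched below (the pod name is a placeholder; the container may need to be selected explicitly with -c).

# Watch pods come up after the reinstall
kubectl get pods -w

# Once ceph-mon is Running, query cluster health from inside it
# (replace <ceph-mon-pod> with the actual pod name)
kubectl exec -it <ceph-mon-pod> -- ceph -s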