[[Категория:Openstack]]
[[Категория:Linux]]
=A comprehensive K8s/Helm/OpenStack knowledge task, sized at 8 working hours (1 day)=

<PRE>
* Install openstack(ceph as storage) on top of K8s(All-in-one-installation) using openstack-helm project
* change Keystone token expiration time afterwards to 24 hours
* deploy 3 VMs connected to each other using heat
</PRE>
==TL;DR==

* It took me about 13 working hours to finish the task.

The good:
* The task can be done in 8 hours (and even faster).

The bad:
* It is practically impossible to do on a laptop without Linux.
* About half of the time was spent on an attempt to "take off" directly on Mac OS, using an already existing minikube as the K8s cluster.
This was a clear failure: at the very least the ceph charts are not compatible with minikube at all (https://github.com/ceph/ceph-helm/issues/73), and I never got as far as the rest. <BR>
Deploying without the script tooling would clearly have taken more than 1 day (realistically, at least a week if you do not cut corners by using the scripts).
* When I realized I would not manage to deploy onto minikube in the allotted time, I decided to set up an Ubuntu VM and continue working with it.
The second clear failure (not mine, though =) ): the task states a minimum of 8 GB of free memory, while in reality even a VM with 16 GB and 8 cores was crawling. (Someone with an 8 GB laptop will not complete this task, simply for lack of memory.)
<BR> As a consequence, the scripts regularly died on timeouts.
<BR>I will also note that with a not-too-fast internet connection I ran into the problem that image pulls were slow; the scripts did not wait long enough and failed on timeouts. <BR>
A good idea would have been to download the images in advance, but that occurred to me only in the middle of the process, and I did not want to spend time analyzing which images were needed; a possible shortcut is sketched below.
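A rough idea of such a pre-pull, assuming the cluster is already partially up so the image list can be harvested from the pod specs (the jsonpath expression and docker as the container runtime are assumptions here):
<PRE>
# Collect every image referenced by the pods in all namespaces and pull them
# ahead of time, so the deployment scripts do not die waiting for slow pulls.
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' \
  | sort -u \
  | while read -r img; do
      docker pull "$img" || echo "failed to pull $img" >&2
    done
</PRE>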
==Solution==

The deployment logs are moved to a separate section at the end, so as not to clutter the document.

===Creating the Ubuntu VM===
https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html
<PRE>
System Requirements

The recommended minimum system requirements for a full deployment are:

16GB of RAM
8 Cores
48GB HDD
</PRE>
[[Изображение:VM-1.png|600px]]

At this stage I initially made two mistakes:
* Created the machine too small (too few CPU cores)
* Did not check for network overlaps
<PRE>
Warning
By default the Calico CNI will use 192.168.0.0/16 and Kubernetes services will use 10.96.0.0/16 as the CIDR for services.
Check that these CIDRs are not in use on the development node before proceeding, or adjust as required.
</PRE>
Incidentally, this looks like a typo in the mask in the documentation: the mask actually in use is <B>/12</B><BR>
Slightly edited (for readability) ps output:
<PRE>
root 5717 4.0 1.7 448168 292172 ? Ssl 19:27 0:51 | \_ kube-apiserver --feature-gates=MountPropagation=true,PodShareProcessNamespace=true
--service-node-port-range=1024-65535
--advertise-address=172.17.0.1
--service-cluster-ip-range=10.96.0.0/12
</PRE>
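Before deploying, a quick overlap check on the host would have caught this mistake (a sketch; the second pattern approximates the 10.96.0.0/12 service range, i.e. 10.96–10.111):
<PRE>
# Any match below means the host already uses addresses inside the ranges
# that Calico (192.168.0.0/16) and the K8s services (10.96.0.0/12) want.
ip route | grep -E '192\.168\.|10\.(9[6-9]|10[0-9]|11[01])\.'
ip addr  | grep -E '192\.168\.|10\.(9[6-9]|10[0-9]|11[01])\.'
</PRE>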
===Preparation===
* https://docs.openstack.org/openstack-helm/latest/install/developer/kubernetes-and-common-setup.html
Following the instructions and not trying to change anything, I ran into no problems at all on Ubuntu 18.04. <BR>
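For reference, the preparation boils down to roughly the following (a sketch from memory, not a substitute for the linked instructions; the repository URLs are the current ones and may differ from what existed at the time):
<PRE>
sudo apt-get update
sudo apt-get install -y git make
# Both repositories are needed: the infra charts are referenced as ../openstack-helm-infra
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/openstack/openstack-helm.git
cd openstack-helm
</PRE>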
===Installing OpenStack===
If you follow the instructions, no problems come up, with the exception of the timeouts.
<BR>
As far as I could tell, all the scripts clean up correctly after themselves, so restarting them is reasonably safe.
<BR>
Unfortunately, I did not keep the list of the scripts that had to be restarted.
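Since the scripts clean up after themselves, a trivial retry wrapper around the deployment steps saves a lot of babysitting on a slow machine (a sketch; the exact script directory depends on which variant of the developer deployment is being followed):
<PRE>
# Re-run each deployment step until it succeeds; a run that died
# on a timeout can simply be repeated.
for script in ./tools/deployment/developer/ceph/*.sh; do
    until "$script"; do
        echo "step $script failed, retrying" >&2
        sleep 60
    done
done
</PRE>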
===Checking OpenStack===
When it was all done, I checked in the simplest way possible: is keystone working?
<PRE>
root@openstack:~# export OS_CLOUD=openstack_helm
root@openstack:~# openstack token issue
</PRE>
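(OS_CLOUD points at an entry in clouds.yaml generated by the deployment scripts; judging by the shell prompts further down, on this host it lives in /etc/openstack/clouds.yaml. Roughly, the entry looks like the following; this is a sketch, the real file is generated and the values may differ:)
<PRE>
clouds:
  openstack_helm:
    identity_api_version: 3
    auth:
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      project_name: 'admin'
      username: 'admin'
      password: 'password'
      user_domain_name: 'default'
      project_domain_name: 'default'
</PRE>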
   
At first glance, at least all the pods started:
<PRE>
kubectl -n openstack get pods
NAME READY STATUS RESTARTS AGE
ceph-ks-endpoints-hkj77 0/3 Completed 0 3h
ceph-ks-service-l4wdx 0/1 Completed 0 3h
ceph-openstack-config-ceph-ns-key-generator-z82mk 0/1 Completed 0 17h
ceph-rgw-66685f585d-st7dp 1/1 Running 0 3h
ceph-rgw-storage-init-2vrpg 0/1 Completed 0 3h
cinder-api-85df68f5d8-j6mqh 1/1 Running 0 2h
cinder-backup-5f9598868-5kxxx 1/1 Running 0 2h
cinder-backup-storage-init-g627m 0/1 Completed 0 2h
cinder-bootstrap-r2295 0/1 Completed 0 2h
cinder-db-init-nk7jm 0/1 Completed 0 2h
cinder-db-sync-vlbcm 0/1 Completed 0 2h
cinder-ks-endpoints-cnwgb 0/9 Completed 0 2h
cinder-ks-service-6zs57 0/3 Completed 0 2h
cinder-ks-user-bp8zb 0/1 Completed 0 2h
cinder-rabbit-init-j97b7 0/1 Completed 0 2h
cinder-scheduler-6bfcd6476d-r87hm 1/1 Running 0 2h
cinder-storage-init-6ksjc 0/1 Completed 0 2h
cinder-volume-5fccd4cc5-dpxqm 1/1 Running 0 2h
cinder-volume-usage-audit-1549203300-25mkf 0/1 Completed 0 14m
cinder-volume-usage-audit-1549203600-hnh54 0/1 Completed 0 8m
cinder-volume-usage-audit-1549203900-v5t4w 0/1 Completed 0 4m
glance-api-745dc74457-42nwf 1/1 Running 0 3h
glance-bootstrap-j5wt4 0/1 Completed 0 3h
glance-db-init-lw97h 0/1 Completed 0 3h
glance-db-sync-dbp5s 0/1 Completed 0 3h
glance-ks-endpoints-gm5rw 0/3 Completed 0 3h
glance-ks-service-64jfj 0/1 Completed 0 3h
glance-ks-user-ftv9c 0/1 Completed 0 3h
glance-rabbit-init-m7b7k 0/1 Completed 0 3h
glance-registry-6cb86c767-2mkbx 1/1 Running 0 3h
glance-storage-init-m29p4 0/1 Completed 0 3h
heat-api-69db75bb6d-h24w9 1/1 Running 0 3h
heat-bootstrap-v9642 0/1 Completed 0 3h
heat-cfn-86896f7466-n5dnz 1/1 Running 0 3h
heat-db-init-lfrsb 0/1 Completed 0 3h
heat-db-sync-wct2x 0/1 Completed 0 3h
heat-domain-ks-user-4fg65 0/1 Completed 0 3h
heat-engine-6756c84fdd-44hzf 1/1 Running 0 3h
heat-engine-cleaner-1549203300-s48sb 0/1 Completed 0 14m
heat-engine-cleaner-1549203600-gffn4 0/1 Completed 0 8m
heat-engine-cleaner-1549203900-6hwvj 0/1 Completed 0 4m
heat-ks-endpoints-wxjwp 0/6 Completed 0 3h
heat-ks-service-v95sk 0/2 Completed 0 3h
heat-ks-user-z6xhb 0/1 Completed 0 3h
heat-rabbit-init-77nzb 0/1 Completed 0 3h
heat-trustee-ks-user-mwrf5 0/1 Completed 0 3h
heat-trusts-7x7nt 0/1 Completed 0 3h
horizon-5877548d5d-27t8c 1/1 Running 0 3h
horizon-db-init-jsjm5 0/1 Completed 0 3h
horizon-db-sync-wxwpw 0/1 Completed 0 3h
ingress-86cf786fd8-fbz8w 1/1 Running 4 18h
ingress-error-pages-7f574d9cd7-b5kwh 1/1 Running 0 18h
keystone-api-f658f747c-q6w65 1/1 Running 0 3h
keystone-bootstrap-ds8t5 0/1 Completed 0 3h
keystone-credential-setup-hrp8t 0/1 Completed 0 3h
keystone-db-init-dhgf2 0/1 Completed 0 3h
keystone-db-sync-z8d5d 0/1 Completed 0 3h
keystone-domain-manage-86b25 0/1 Completed 0 3h
keystone-fernet-rotate-1549195200-xh9lv 0/1 Completed 0 2h
keystone-fernet-setup-txgc8 0/1 Completed 0 3h
keystone-rabbit-init-jgkqz 0/1 Completed 0 3h
libvirt-427lp 1/1 Running 0 2h
mariadb-ingress-5cff98cbfc-24vjg 1/1 Running 0 17h
mariadb-ingress-5cff98cbfc-nqlhq 1/1 Running 0 17h
mariadb-ingress-error-pages-5c89b57bc-twn7z 1/1 Running 0 17h
mariadb-server-0 1/1 Running 0 17h
memcached-memcached-6d48bd48bc-7kd84 1/1 Running 0 3h
neutron-db-init-rvf47 0/1 Completed 0 2h
neutron-db-sync-6w7bn 0/1 Completed 0 2h
neutron-dhcp-agent-default-znxhn 1/1 Running 0 2h
neutron-ks-endpoints-47xs8 0/3 Completed 1 2h
neutron-ks-service-sqtwg 0/1 Completed 0 2h
neutron-ks-user-tpmrb 0/1 Completed 0 2h
neutron-l3-agent-default-5nbsp 1/1 Running 0 2h
neutron-metadata-agent-default-9ml6v 1/1 Running 0 2h
neutron-ovs-agent-default-mg8ln 1/1 Running 0 2h
neutron-rabbit-init-sgnwm 0/1 Completed 0 2h
neutron-server-9bdc765c9-bx6sf 1/1 Running 0 2h
nova-api-metadata-78fb54c549-zcmxg 1/1 Running 2 2h
nova-api-osapi-6c5c6dd4fc-7z5qq 1/1 Running 0 2h
nova-bootstrap-hp6n4 0/1 Completed 0 2h
nova-cell-setup-1549195200-v5bv8 0/1 Completed 0 2h
nova-cell-setup-1549198800-6d8sm 0/1 Completed 0 1h
nova-cell-setup-1549202400-c9vfz 0/1 Completed 0 29m
nova-cell-setup-dfdzw 0/1 Completed 0 2h
nova-compute-default-fmqtl 1/1 Running 0 2h
nova-conductor-5b9956bffc-5ts7s 1/1 Running 0 2h
nova-consoleauth-7f8dbb8865-lt5mr 1/1 Running 0 2h
nova-db-init-hjp2p 0/3 Completed 0 2h
nova-db-sync-zn6px 0/1 Completed 0 2h
nova-ks-endpoints-ldzhz 0/3 Completed 0 2h
nova-ks-service-c64tb 0/1 Completed 0 2h
nova-ks-user-kjskm 0/1 Completed 0 2h
nova-novncproxy-6f485d9f4c-6m2n5 1/1 Running 0 2h
nova-placement-api-587c888875-6cmmb 1/1 Running 0 2h
nova-rabbit-init-t275g 0/1 Completed 0 2h
nova-scheduler-69886c6fdf-hcwm6 1/1 Running 0 2h
nova-service-cleaner-1549195200-7jw2d 0/1 Completed 1 2h
nova-service-cleaner-1549198800-pvckn 0/1 Completed 0 1h
nova-service-cleaner-1549202400-kqpxz 0/1 Completed 0 29m
openvswitch-db-nx579 1/1 Running 0 2h
openvswitch-vswitchd-p4xj5 1/1 Running 0 2h
placement-ks-endpoints-vt4pk 0/3 Completed 0 2h
placement-ks-service-sw2b9 0/1 Completed 0 2h
placement-ks-user-zv755 0/1 Completed 0 2h
rabbitmq-rabbitmq-0 1/1 Running 0 4h
swift-ks-user-ktptt 0/1 Completed 0 3h
</PRE>
===Accessing Horizon===
(I looked the settings up in the ingress; 10.255.57.3 is the address of the virtual machine.)
<PRE>
cat /etc/hosts
10.255.57.3 os horizon.openstack.svc.cluster.local
</PRE>
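(The hostname itself can be looked up directly from the ingress objects, for example:)
<PRE>
kubectl -n openstack get ingress
</PRE>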
[[Изображение:Horizon first login.png|600px]]

===Configuring Keystone===
The task says:
<PRE>
change Keystone token expiration time afterwards to 24 hours
</PRE>
First, let us check what is actually there:
<PRE>
openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2019-02-04T00:25:34+0000 |
| id | gAAAAABcVt2-s8ugiwKaNQiA9djycTJ2CoDZ0sC176e54cjnE0RevPsXkgiZH0U5m_kNQlo0ctunA_TvD1tULyn0ckRkrO0Pxht1yT-cQ1TTidhkJR2sVojcXG3hiau0RMm0YOfoydDemyuvGMS7mwZ_Z2m9VtmJ-F83xQ8CwEfhItH6vRMzmGk |
| project_id | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 |
| user_id | 42068c166a3245208b5ac78965eab80b |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
</PRE>
Looks like TTL=12h (the token was requested around 12:25 UTC and expires at 00:25 the next day). <BR>
A quick read of the documentation ( https://docs.openstack.org/juno/config-reference/content/keystone-configuration-file.html ) led me to the conclusion that the section to change is
<PRE>
[token]
expiration = 3600
</PRE>
<BR>
At this point I decided to do it "quick and dirty"; in the real world you would most likely not get away with this.
   
1. Compute the new value (instead of 24 I mistyped 34):
<PRE>
bc
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
34*60*60
122400
quit
</PRE>
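For the record, the value actually needed for 24 hours would have been 24*60*60 = 86400 seconds.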
   
2. Check that we have a single keystone instance.
<BR><I> It would have been amusing if there were several, each issuing tokens with a different TTL. </I>
<PRE>
docker ps | grep keystone
41e785977105 16ec948e619f "/tmp/keystone-api.s…" 2 hours ago Up 2 hours k8s_keystone-api_keystone-api-f658f747c-q6w65_openstack_8ca3a9ed-279f-11e9-a72e-080027da2b2f_0
6905400831ad k8s.gcr.io/pause-amd64:3.1 "/pause" 2 hours ago Up 2 hours k8s_POD_keystone-api-f658f747c-q6w65_openstack_8ca3a9ed-279f-11e9-a72e-080027da2b2f_0
</PRE>
Strictly speaking this calls for <B>kubectl exec ...</B>, but I decided to cut a corner:
<PRE>
docker exec -u root -ti 41e785977105 bash
</PRE>
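The by-the-book equivalent would have been something along these lines:
<PRE>
kubectl -n openstack exec -it keystone-api-f658f747c-q6w65 -- bash
</PRE>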
Check what is running:
<PRE>
ps -auxfw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 566 0.0 0.0 18236 3300 pts/0 Ss 12:34 0:00 bash
root 581 0.0 0.0 34428 2872 pts/0 R+ 12:36 0:00 \_ ps -auxfw
keystone 1 0.0 1.1 263112 185104 ? Ss 10:42 0:01 apache2 -DFOREGROUND
keystone 11 3.5 0.5 618912 95016 ? Sl 10:42 4:03 (wsgi:k -DFOREGROUND
keystone 478 0.1 0.0 555276 9952 ? Sl 12:23 0:00 apache2 -DFOREGROUND
keystone 506 0.2 0.0 555348 9956 ? Sl 12:24 0:01 apache2 -DFOREGROUND
</PRE>
Matches expectations.
<BR>
So does the content of /etc/keystone/keystone.conf: as assumed, it is 12h.
<PRE>
[token]
expiration = 43200
</PRE>
Changing the file in place (inside the container) did not work, so, to keep doing everything against the rules, I changed it from the outside, on the host:
<PRE>
root@openstack:~# find /var -name keystone.conf
/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~empty-dir/etckeystone/keystone.conf
/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~secret/keystone-etc/..2019_02_03_12_37_10.041243569/keystone.conf
/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~secret/keystone-etc/keystone.conf
</PRE>
and, giving thanks to the developers for running keystone under Apache (which allowed a reload instead of recreating the container, something I was not sure I knew how to do properly):
<PRE>
docker exec -u root -ti 41e785977105 bash
</PRE>
   
<PRE>
ps -auxfw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 566 0.0 0.0 18236 3300 pts/0 Ss 12:34 0:00 bash
root 581 0.0 0.0 34428 2872 pts/0 R+ 12:36 0:00 \_ ps -auxfw
keystone 1 0.0 1.1 263112 185104 ? Ss 10:42 0:01 apache2 -DFOREGROUND
keystone 11 3.5 0.5 618912 95016 ? Sl 10:42 4:03 (wsgi:k -DFOREGROUND
keystone 478 0.1 0.0 555276 9952 ? Sl 12:23 0:00 apache2 -DFOREGROUND
keystone 506 0.2 0.0 555348 9956 ? Sl 12:24 0:01 apache2 -DFOREGROUND
root@keystone-api-f658f747c-q6w65:/etc/keystone# kill -HUP 1
</PRE>
   
 
<PRE>
root@keystone-api-f658f747c-q6w65:/etc/keystone# ps -auxfw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 566 0.0 0.0 18236 3300 pts/0 Ss 12:34 0:00 bash
root 583 0.0 0.0 34428 2888 pts/0 R+ 12:36 0:00 \_ ps -auxfw
keystone 1 0.0 1.1 210588 183004 ? Ss 10:42 0:01 apache2 -DFOREGROUND
keystone 11 3.5 0.0 0 0 ? Z 10:42 4:03 [apache2] <defunct>
root@keystone-api-f658f747c-q6w65:/etc/keystone# ps -auxfw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 566 0.0 0.0 18236 3300 pts/0 Ss 12:34 0:00 bash
root 955 0.0 0.0 34428 2904 pts/0 R+ 12:36 0:00 \_ ps -auxfw
keystone 1 0.0 1.1 263120 185124 ? Ss 10:42 0:01 apache2 -DFOREGROUND
keystone 584 12.0 0.0 290680 8820 ? Sl 12:36 0:00 (wsgi:k -DFOREGROUND
keystone 585 14.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 586 14.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 587 17.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 588 13.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 589 14.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 590 10.0 0.0 555188 10020 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 591 12.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 592 10.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 593 15.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 594 14.0 0.0 265528 8572 ? R 12:36 0:00 apache2 -DFOREGROUND
keystone 595 13.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 596 11.0 0.0 266040 8832 ? R 12:36 0:00 apache2 -DFOREGROUND
keystone 597 19.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 598 14.0 0.0 555188 9956 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 599 18.0 0.0 555188 9952 ? Sl 12:36 0:00 apache2 -DFOREGROUND
keystone 600 11.0 0.0 265528 8376 ? R 12:36 0:00 apache2 -DFOREGROUND
</PRE>
   
Check whether the changes took effect:
<PRE>
openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2019-02-04T22:37:10+0000 |
| id | gAAAAABcVuB2tQtAX56G1_kqJKeekpsWDJPTE19IMhWvNlGQqmDZQap9pgXQQkhQNMQNpR7Q6XR_w5_ngsx_l36vKXUND75uy4fimAbaLBDBdxxOzJqDRq4NLz4sEdTzLs2T3nyISwItLloOj-8sw7x1Pg2-9N-9afudv_jcYLVCq2luAImfRpY |
| project_id | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 |
| user_id | 42068c166a3245208b5ac78965eab80b |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
root@openstack:/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~secret/keystone-etc/..data# date
Sun Feb 3 12:37:18 UTC 2019
</PRE>
34 hours (because of the typo), but I did not bother changing it to 24 after that.
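For the record, the proper way would be to change the value through the chart and let helm re-render the config and roll the pod; presumably something like the following (a sketch, untested; the values path assumes the chart passes arbitrary oslo.config sections through under conf.keystone):
<PRE>
helm upgrade keystone ./keystone --namespace=openstack \
  --set conf.keystone.token.expiration=86400
</PRE>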
===Heat Deploy===
<B>deploy 3 VMs connected to each other using heat</B>

The easiest part: everything is written in the documentation, https://docs.openstack.org/openstack-helm/latest/install/developer/exercise-the-cloud.html
<BR> In my case the VM did not get created on the first try:
 
<PRE>
[instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Instance failed to spawn
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Traceback (most recent call last):
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2133, in _build_resources
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] yield resources
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/compute/manager.py", line 1939, in _build_and_run_instance
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] block_device_info=block_device_info)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2786, in spawn
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] block_device_info=block_device_info)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3193, in _create_image
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] fallback_from_host)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3309, in _create_and_inject_local_root
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] instance, size, fallback_from_host)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6953, in _try_fetch_image_cache
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] size=size)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 242, in cache
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] *args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 584, in create_image
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] prepare_template(target=base, *args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] return f(*args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 238, in fetch_func_sync
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] fetch_func(target=target, *args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 458, in fetch_image
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] images.fetch_to_raw(context, image_id, target)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/images.py", line 132, in fetch_to_raw
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] fetch(context, image_href, path_tmp)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/images.py", line 123, in fetch
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] IMAGE_API.download(context, image_href, dest_path=path)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/api.py", line 184, in download
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] dst_path=dest_path)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 533, in download
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] _reraise_translated_image_exception(image_id)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 1050, in _reraise_translated_image_exception
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] six.reraise(type(new_exc), new_exc, exc_trace)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 531, in download
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] image_chunks = self._client.call(context, 2, 'data', image_id)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 168, in call
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] result = getattr(controller, method)(*args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/utils.py", line 535, in inner
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] return RequestIdProxy(wrapped(*args, **kwargs))
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/v2/images.py", line 208, in data
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] resp, body = self.http_client.get(url)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 285, in get
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] return self._request('GET', url, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 277, in _request
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] resp, body_iter = self._handle_response(resp)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 107, in _handle_response
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] raise exc.from_response(resp, resp.content)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]
2019-02-03 13:21:22,418.418 21157 INFO nova.compute.resource_tracker [req-cdb3800a-87ba-4ee9-88ad-e6914522a847 - - - - -] Final resource view: name=openstack phys_ram=16039MB used_ram=576MB phys_disk=48GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[]
2019-02-03 13:21:27,224.224 21157 INFO nova.compute.manager [req-c0895961-b263-4122-82cc-5267be0aad8f 42068c166a3245208b5ac78965eab80b 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 - - -] [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Terminating instance
2019-02-03 13:21:27,332.332 21157 INFO nova.virt.libvirt.driver [-] [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Instance destroyed successfully.
</PRE>
   
Since I suspected the problem was general sluggishness (the scheduler was getting 503 from the placement API)
<PRE>
2019-02-03 12:57:35,835.835 21157 WARNING nova.scheduler.client.report [req-cdb3800a-87ba-4ee9-88ad-e6914522a847 - - - - -] Failed to update inventory for resource provider 57daad8e-d831-4271-b3ef-332237d32b49: 503 503 Service Unavailable
The server is currently unavailable. Please try again at a later time.
</PRE>
I simply cleaned up the stack and commented out the network creation in the script.
After that the VM was created successfully.
<BR>
<PRE>
openstack server list
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------------------------+---------------------+----------------------------------------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------------------------+---------------------+----------------------------------------------+
| 155405cd-011a-42a2-93d7-3ed6eda250b2 | heat-basic-vm-deployment-server-ynxjzrycsd3z | ACTIVE | heat-basic-vm-deployment-private_net-tbltedh44qjv=10.0.0.4, 172.24.4.5 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-flavor-3kbmengg2bkm |
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------------------------+---------------------+----------------------------------------------+
</PRE>
<PRE>
root@openstack:~# openstack stack list
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| 1f90d25b-eb19-48cd-b623-cc0c7bccc28f | heat-vm-volume-attach | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:36:15Z | None |
| 8d4ce486-ddb0-4826-b225-6b7dc4eef157 | heat-basic-vm-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:35:11Z | None |
| d0aeea69-4639-4942-a905-ec30ed99aa47 | heat-subnet-pool-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:29Z | None |
| 688585e9-9b99-4ac7-bd04-9e7b874ec6c7 | heat-public-net-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:09Z | None |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
</PRE>
   
Now we need to create three VMs and check connectivity between them.
<BR>
To avoid doing it by hand, I used the same script, wrapping it in <B>for I in $(seq 1 3); do</B>
<PRE>
for I in $(seq 1 3); do
openstack stack create --wait \
--parameter public_net=${OSH_EXT_NET_NAME} \
--parameter image="${IMAGE_NAME}" \
--parameter ssh_key=${OSH_VM_KEY_STACK} \
--parameter cidr=${OSH_PRIVATE_SUBNET} \
--parameter dns_nameserver=${OSH_BR_EX_ADDR%/*} \
-t ./tools/gate/files/heat-basic-vm-deployment.yaml \
heat-basic-vm-deployment-${I}
...
</PRE>
The result:
<PRE>
# openstack stack list
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| a6f1e35e-7536-4707-bdfa-b2885ab7cae2 | heat-vm-volume-attach-3 | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:48:31Z | None |
| ccb63a87-37f4-4355-b399-ef4abb43983b | heat-basic-vm-deployment-3 | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:47:25Z | None |
| 6d61b8de-80cd-4138-bfe2-8333a4b354ce | heat-vm-volume-attach-2 | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:47:12Z | None |
| 75c44f1c-d8da-422e-a027-f16b8458e224 | heat-basic-vm-deployment-2 | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:46:08Z | None |
| 95da63ac-9e20-4492-b3a6-fab74649bbf9 | heat-vm-volume-attach-1 | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:45:54Z | None |
| 447881bb-6c93-4b92-9765-578782ee2ef5 | heat-basic-vm-deployment-1 | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:44:42Z | None |
| 1f90d25b-eb19-48cd-b623-cc0c7bccc28f | heat-vm-volume-attach | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:36:15Z | None |
| 8d4ce486-ddb0-4826-b225-6b7dc4eef157 | heat-basic-vm-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:35:11Z | None |
| d0aeea69-4639-4942-a905-ec30ed99aa47 | heat-subnet-pool-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:29Z | None |
| 688585e9-9b99-4ac7-bd04-9e7b874ec6c7 | heat-public-net-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:09Z | None |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
</PRE>
<PRE>
openstack server list
+--------------------------------------+------------------------------------------------+--------+---------------------------------------------------------------------------+---------------------+------------------------------------------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------------------------------------------------+--------+---------------------------------------------------------------------------+---------------------+------------------------------------------------+
| 412cbd8b-e4c1-46e8-b48c-065e9830bfa8 | heat-basic-vm-deployment-3-server-v5lwzoyotkwo | ACTIVE | heat-basic-vm-deployment-3-private_net-4unttrj2lq6z=10.0.0.6, 172.24.4.18 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-3-flavor-3gncn5vwfu6z |
| e7a4e42c-aa9c-47bc-ba7b-d229af1a2077 | heat-basic-vm-deployment-2-server-vhacv5jz7dnt | ACTIVE | heat-basic-vm-deployment-2-private_net-2gz44w5rjy7s=10.0.0.6, 172.24.4.11 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-2-flavor-hxr5eawiveg5 |
| 52886edd-be09-4ba1-aebd-5563f25f4f60 | heat-basic-vm-deployment-1-server-wk5lxhhnhhyn | ACTIVE | heat-basic-vm-deployment-1-private_net-hqx3dmohj3n5=10.0.0.5, 172.24.4.12 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-1-flavor-6aiokzvf4qaq |
| 155405cd-011a-42a2-93d7-3ed6eda250b2 | heat-basic-vm-deployment-server-ynxjzrycsd3z | ACTIVE | heat-basic-vm-deployment-private_net-tbltedh44qjv=10.0.0.4, 172.24.4.5 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-flavor-3kbmengg2bkm |
+--------------------------------------+------------------------------------------------+--------+---------------------------------------------------------------------------+---------------------+------------------------------------------------+
</PRE>
Network check (port 22 is definitely open):
<PRE>
root@openstack:/etc/openstack# ssh -i /root/.ssh/osh_key cirros@172.24.4.18

$ nc 172.24.4.11 22
SSH-2.0-dropbear_2012.55
^Cpunt!

$ nc 172.24.4.12 22
SSH-2.0-dropbear_2012.55
^Cpunt!

$
</PRE>
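To check all three VMs against each other rather than one pair by hand, a small loop from inside any of them would do (a sketch; the floating IPs are taken from the server list above, and cirros ships a busybox nc):
<PRE>
# Probe port 22 on each of the VMs; -w 3 bounds the wait per host.
for ip in 172.24.4.11 172.24.4.12 172.24.4.18; do
    echo "--- $ip ---"
    nc -w 3 $ip 22 </dev/null
done
</PRE>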
===Horizon with VMs===
[[Изображение:Horizon With VMs.png|1200px]]
<BR>Surprisingly, even the VNC console worked.
<BR>
[[Изображение:Horizon VNC.png|1000px]]
   
===Logs===
====MariaDB====
<PRE>
+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status mariadb
LAST DEPLOYED: Sun Feb 3 10:25:00 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
mariadb-server 0 N/A 1 13h

==> v1/ConfigMap
NAME DATA AGE
mariadb-bin 5 13h
mariadb-etc 5 13h
mariadb-services-tcp 1 13h

==> v1/ServiceAccount
NAME SECRETS AGE
mariadb-ingress-error-pages 1 13h
mariadb-ingress 1 13h
mariadb-mariadb 1 13h

==> v1beta1/RoleBinding
NAME AGE
mariadb-mariadb-ingress 13h
mariadb-ingress 13h
mariadb-mariadb 13h

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mariadb-discovery ClusterIP None <none> 3306/TCP,4567/TCP 13h
mariadb-ingress-error-pages ClusterIP None <none> 80/TCP 13h
mariadb ClusterIP 10.104.164.168 <none> 3306/TCP 13h
mariadb-server ClusterIP 10.107.255.234 <none> 3306/TCP 13h

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
mariadb-netpol application=mariadb 13h

==> v1/Secret
NAME TYPE DATA AGE
mariadb-dbadmin-password Opaque 1 13h
mariadb-secrets Opaque 1 13h

==> v1beta1/Role
NAME AGE
mariadb-ingress 13h
mariadb-openstack-mariadb-ingress 13h
mariadb-mariadb 13h

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mariadb-ingress-error-pages 1 1 1 1 13h
mariadb-ingress 2 2 2 2 13h

==> v1/StatefulSet
NAME DESIRED CURRENT AGE
mariadb-server 1 1 13h

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mariadb-ingress-error-pages-5c89b57bc-twn7z 1/1 Running 0 13h
mariadb-ingress-5cff98cbfc-24vjg 1/1 Running 0 13h
mariadb-ingress-5cff98cbfc-nqlhq 1/1 Running 0 13h
mariadb-server-0 1/1 Running 0 13h
</PRE>
====RabbitMQ====
<PRE>
+ helm upgrade --install rabbitmq ../openstack-helm-infra/rabbitmq --namespace=openstack --values=/tmp/rabbitmq.yaml --set pod.replicas.server=1
Release "rabbitmq" does not exist. Installing it now.
NAME: rabbitmq
LAST DEPLOYED: Sun Feb 3 10:27:01 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-dsv-7b1733 ClusterIP None <none> 5672/TCP,25672/TCP,15672/TCP 2s
rabbitmq-mgr-7b1733 ClusterIP 10.111.11.128 <none> 80/TCP,443/TCP 2s
rabbitmq ClusterIP 10.108.248.80 <none> 5672/TCP,25672/TCP,15672/TCP 2s

==> v1/StatefulSet
NAME DESIRED CURRENT AGE
rabbitmq-rabbitmq 1 1 2s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
rabbitmq-mgr-7b1733 rabbitmq-mgr-7b1733,rabbitmq-mgr-7b1733.openstack,rabbitmq-mgr-7b1733.openstack.svc.cluster.local 80 2s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
rabbitmq-rabbitmq-0 0/1 Pending 0 2s

==> v1/ConfigMap
NAME DATA AGE
rabbitmq-rabbitmq-bin 4 3s
rabbitmq-rabbitmq-etc 2 2s

==> v1/ServiceAccount
NAME SECRETS AGE
rabbitmq-test 1 2s
rabbitmq-rabbitmq 1 2s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
rabbitmq-netpol application=rabbitmq 2s

==> v1beta1/Role
NAME AGE
rabbitmq-openstack-rabbitmq-test 2s
rabbitmq-rabbitmq 2s

==> v1beta1/RoleBinding
NAME AGE
rabbitmq-rabbitmq-test 2s
rabbitmq-rabbitmq 2s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status rabbitmq
LAST DEPLOYED: Sun Feb 3 10:27:01 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-dsv-7b1733 ClusterIP None <none> 5672/TCP,25672/TCP,15672/TCP 2m21s
rabbitmq-mgr-7b1733 ClusterIP 10.111.11.128 <none> 80/TCP,443/TCP 2m21s
rabbitmq ClusterIP 10.108.248.80 <none> 5672/TCP,25672/TCP,15672/TCP 2m21s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
rabbitmq-mgr-7b1733 rabbitmq-mgr-7b1733,rabbitmq-mgr-7b1733.openstack,rabbitmq-mgr-7b1733.openstack.svc.cluster.local 80 2m21s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
rabbitmq-netpol application=rabbitmq 2m21s

==> v1/ConfigMap
NAME DATA AGE
rabbitmq-rabbitmq-bin 4 2m22s
rabbitmq-rabbitmq-etc 2 2m21s

==> v1/ServiceAccount
NAME SECRETS AGE
rabbitmq-test 1 2m21s
rabbitmq-rabbitmq 1 2m21s

==> v1beta1/RoleBinding
NAME AGE
rabbitmq-rabbitmq-test 2m21s
rabbitmq-rabbitmq 2m21s

==> v1beta1/Role
NAME AGE
rabbitmq-openstack-rabbitmq-test 2m21s
rabbitmq-rabbitmq 2m21s

==> v1/StatefulSet
NAME DESIRED CURRENT AGE
rabbitmq-rabbitmq 1 1 2m21s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
rabbitmq-rabbitmq-0 1/1 Running 0 2m21s
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
====Memcached====
<PRE>
+ helm upgrade --install memcached ../openstack-helm-infra/memcached --namespace=openstack --values=/tmp/memcached.yaml
Release "memcached" does not exist. Installing it now.
NAME: memcached
LAST DEPLOYED: Sun Feb 3 10:30:32 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
memcached-memcached-bin 1 3s

==> v1/ServiceAccount
NAME SECRETS AGE
memcached-memcached 1 3s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
memcached ClusterIP 10.96.106.159 <none> 11211/TCP 3s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
memcached-memcached 1 1 1 0 3s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
memcached-netpol application=memcached 3s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
memcached-memcached-6d48bd48bc-7kd84 0/1 Init:0/1 0 2s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status memcached
LAST DEPLOYED: Sun Feb 3 10:30:32 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
memcached-netpol application=memcached 78s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
memcached-memcached-6d48bd48bc-7kd84 1/1 Running 0 77s

==> v1/ConfigMap
NAME DATA AGE
memcached-memcached-bin 1 78s

==> v1/ServiceAccount
NAME SECRETS AGE
memcached-memcached 1 78s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
memcached ClusterIP 10.96.106.159 <none> 11211/TCP 78s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
memcached-memcached 1 1 1 1 78s
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
====Keystone====
<PRE>
+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status keystone
LAST DEPLOYED: Sun Feb 3 10:36:19 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME SECRETS AGE
keystone-credential-rotate 1 7m20s
keystone-fernet-rotate 1 7m20s
keystone-api 1 7m20s
keystone-bootstrap 1 7m20s
keystone-credential-setup 1 7m19s
keystone-db-init 1 7m19s
keystone-db-sync 1 7m19s
keystone-domain-manage 1 7m19s
keystone-fernet-setup 1 7m19s
keystone-rabbit-init 1 7m19s
keystone-test 1 7m19s

==> v1beta1/RoleBinding
NAME AGE
keystone-keystone-credential-rotate 7m19s
keystone-credential-rotate 7m19s
keystone-fernet-rotate 7m19s
keystone-keystone-fernet-rotate 7m18s
keystone-keystone-api 7m18s
keystone-keystone-bootstrap 7m18s
keystone-credential-setup 7m18s
keystone-keystone-db-init 7m18s
keystone-keystone-db-sync 7m18s
keystone-keystone-domain-manage 7m18s
keystone-fernet-setup 7m18s
keystone-keystone-rabbit-init 7m18s
keystone-keystone-test 7m18s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keystone-api ClusterIP 10.110.158.186 <none> 5000/TCP 7m18s
keystone ClusterIP 10.108.1.22 <none> 80/TCP,443/TCP 7m18s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
keystone-api 1 1 1 1 7m18s

==> v1beta1/CronJob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
keystone-credential-rotate 0 0 1 * * False 0 <none> 7m18s
keystone-fernet-rotate 0 */12 * * * False 0 <none> 7m18s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
keystone keystone,keystone.openstack,keystone.openstack.svc.cluster.local 80 7m18s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
keystone-api 0 N/A 1 7m20s

==> v1/Secret
NAME TYPE DATA AGE
keystone-etc Opaque 9 7m20s
keystone-credential-keys Opaque 2 7m20s
keystone-db-admin Opaque 1 7m20s
keystone-db-user Opaque 1 7m20s
keystone-fernet-keys Opaque 2 7m20s
keystone-keystone-admin Opaque 8 7m20s
keystone-keystone-test Opaque 8 7m20s
keystone-rabbitmq-admin Opaque 1 7m20s
keystone-rabbitmq-user Opaque 1 7m20s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
keystone-netpol application=keystone 7m18s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
keystone-api-f658f747c-q6w65 1/1 Running 0 7m18s
keystone-bootstrap-ds8t5 0/1 Completed 0 7m18s
keystone-credential-setup-hrp8t 0/1 Completed 0 7m18s
keystone-db-init-dhgf2 0/1 Completed 0 7m18s
keystone-db-sync-z8d5d 0/1 Completed 0 7m18s
keystone-domain-manage-86b25 0/1 Completed 0 7m18s
keystone-fernet-setup-txgc8 0/1 Completed 0 7m18s
keystone-rabbit-init-jgkqz 0/1 Completed 0 7m18s

==> v1/Job
NAME COMPLETIONS DURATION AGE
keystone-bootstrap 1/1 7m13s 7m18s
keystone-credential-setup 1/1 2m6s 7m18s
keystone-db-init 1/1 3m46s 7m18s
keystone-db-sync 1/1 6m11s 7m18s
keystone-domain-manage 1/1 6m51s 7m18s
keystone-fernet-setup 1/1 3m52s 7m18s
keystone-rabbit-init 1/1 5m33s 7m18s

==> v1/ConfigMap
NAME DATA AGE
keystone-bin 13 7m20s

==> v1beta1/Role
NAME AGE
keystone-openstack-keystone-credential-rotate 7m19s
keystone-credential-rotate 7m19s
keystone-fernet-rotate 7m19s
keystone-openstack-keystone-fernet-rotate 7m19s
keystone-openstack-keystone-api 7m19s
keystone-openstack-keystone-bootstrap 7m19s
keystone-credential-setup 7m19s
keystone-openstack-keystone-db-init 7m19s
keystone-openstack-keystone-db-sync 7m19s
keystone-openstack-keystone-domain-manage 7m19s
keystone-fernet-setup 7m19s
keystone-openstack-keystone-rabbit-init 7m19s
keystone-openstack-keystone-test 7m19s


+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ sleep 30
+ openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
| 0f9f179d90a64e76ac65873826a4851e | RegionOne | keystone | identity | True | internal | http://keystone-api.openstack.svc.cluster.local:5000/v3 |
| 1ea5e2909c574c01bc815b96ba818db3 | RegionOne | keystone | identity | True | public | http://keystone.openstack.svc.cluster.local:80/v3 |
| 32e745bc02af4e5cb20830c83fc626e3 | RegionOne | keystone | identity | True | admin | http://keystone.openstack.svc.cluster.local:80/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
  +
  +
====Heat====
<PRE>
+ :
+ helm upgrade --install heat ./heat --namespace=openstack --set manifests.network_policy=true
Release "heat" does not exist. Installing it now.
NAME: heat
LAST DEPLOYED: Sun Feb 3 10:57:10 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/CronJob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
heat-engine-cleaner */5 * * * * False 0 <none> 4s

==> v1/Secret
NAME TYPE DATA AGE
heat-etc Opaque 10 7s
heat-db-user Opaque 1 7s
heat-db-admin Opaque 1 7s
heat-keystone-user Opaque 8 7s
heat-keystone-test Opaque 8 7s
heat-keystone-stack-user Opaque 5 7s
heat-keystone-trustee Opaque 8 7s
heat-keystone-admin Opaque 8 7s
heat-rabbitmq-admin Opaque 1 7s
heat-rabbitmq-user Opaque 1 7s

==> v1/ServiceAccount
NAME SECRETS AGE
heat-engine-cleaner 1 7s
heat-api 1 7s
heat-cfn 1 7s
heat-engine 1 7s
heat-bootstrap 1 6s
heat-db-init 1 6s
heat-db-sync 1 6s
heat-ks-endpoints 1 6s
heat-ks-service 1 6s
heat-ks-user-domain 1 6s
heat-trustee-ks-user 1 6s
heat-ks-user 1 6s
heat-rabbit-init 1 6s
heat-trusts 1 6s
heat-test 1 6s

==> v1beta1/RoleBinding
NAME AGE
heat-heat-engine-cleaner 5s
heat-heat-api 5s
heat-heat-cfn 5s
heat-heat-engine 5s
heat-heat-db-init 5s
heat-heat-db-sync 5s
heat-heat-ks-endpoints 5s
heat-heat-ks-service 5s
heat-heat-ks-user-domain 5s
heat-heat-trustee-ks-user 5s
heat-heat-ks-user 5s
heat-heat-rabbit-init 5s
heat-heat-trusts 5s
heat-heat-test 5s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heat-api ClusterIP 10.107.126.110 <none> 8004/TCP 5s
heat-cfn ClusterIP 10.103.165.157 <none> 8000/TCP 5s
heat ClusterIP 10.106.167.63 <none> 80/TCP,443/TCP 5s
cloudformation ClusterIP 10.107.173.42 <none> 80/TCP,443/TCP 5s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
heat-api 1 1 1 0 5s
heat-cfn 1 1 1 0 5s
heat-engine 1 1 1 0 5s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
heat-netpol application=heat 4s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
heat-api-69db75bb6d-h24w9 0/1 Init:0/1 0 5s
heat-cfn-86896f7466-n5dnz 0/1 Init:0/1 0 5s
heat-engine-6756c84fdd-44hzf 0/1 Init:0/1 0 5s
heat-bootstrap-v9642 0/1 Init:0/1 0 5s
heat-db-init-lfrsb 0/1 Pending 0 5s
heat-db-sync-wct2x 0/1 Init:0/1 0 5s
heat-ks-endpoints-wxjwp 0/6 Pending 0 5s
heat-ks-service-v95sk 0/2 Pending 0 5s
heat-domain-ks-user-4fg65 0/1 Pending 0 4s
heat-trustee-ks-user-mwrf5 0/1 Pending 0 4s
heat-ks-user-z6xhb 0/1 Pending 0 4s
heat-rabbit-init-77nzb 0/1 Pending 0 4s
heat-trusts-7x7nt 0/1 Pending 0 4s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
heat-api 0 N/A 0 7s
heat-cfn 0 N/A 0 7s

==> v1/ConfigMap
NAME DATA AGE
heat-bin 16 7s

==> v1beta1/Role
NAME AGE
heat-openstack-heat-engine-cleaner 6s
heat-openstack-heat-api 6s
heat-openstack-heat-cfn 6s
heat-openstack-heat-engine 6s
heat-openstack-heat-db-init 6s
heat-openstack-heat-db-sync 6s
heat-openstack-heat-ks-endpoints 6s
heat-openstack-heat-ks-service 6s
heat-openstack-heat-ks-user-domain 6s
heat-openstack-heat-trustee-ks-user 6s
heat-openstack-heat-ks-user 6s
heat-openstack-heat-rabbit-init 6s
heat-openstack-heat-trusts 5s
heat-openstack-heat-test 5s

==> v1/Job
NAME COMPLETIONS DURATION AGE
heat-bootstrap 0/1 5s 5s
heat-db-init 0/1 4s 5s
heat-db-sync 0/1 5s 5s
heat-ks-endpoints 0/1 4s 5s
heat-ks-service 0/1 4s 5s
heat-domain-ks-user 0/1 4s 4s
heat-trustee-ks-user 0/1 4s 4s
heat-ks-user 0/1 4s 4s
heat-rabbit-init 0/1 4s 4s
heat-trusts 0/1 4s 4s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
heat heat,heat.openstack,heat.openstack.svc.cluster.local 80 4s
cloudformation cloudformation,cloudformation.openstack,cloudformation.openstack.svc.cluster.local 80 4s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ openstack service list
+----------------------------------+----------+----------------+
| ID | Name | Type |
+----------------------------------+----------+----------------+
| 5c354b75377944888ac1cc9a3a088808 | heat | orchestration |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity |
+----------------------------------+----------+----------------+
+ sleep 30
+ openstack orchestration service list
+------------------------------+-------------+--------------------------------------+-------------+--------+----------------------------+--------+
| Hostname | Binary | Engine ID | Host | Topic | Updated At | Status |
+------------------------------+-------------+--------------------------------------+-------------+--------+----------------------------+--------+
| heat-engine-6756c84fdd-44hzf | heat-engine | 7a396564-9fce-43f9-aedd-3a48101925e8 | heat-engine | engine | 2019-02-03T11:03:45.000000 | up |
+------------------------------+-------------+--------------------------------------+-------------+--------+----------------------------+--------+
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
====Horizon====
<PRE>
+ helm status horizon
LAST DEPLOYED: Sun Feb 3 11:05:26 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
horizon-5877548d5d-27t8c 1/1 Running 0 6m22s
horizon-db-init-jsjm5 0/1 Completed 0 6m23s
horizon-db-sync-wxwpw 0/1 Completed 0 6m23s

==> v1/ConfigMap
NAME DATA AGE
horizon-bin 6 6m26s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
horizon ClusterIP 10.111.206.119 <none> 80/TCP,443/TCP 6m24s
horizon-int NodePort 10.107.139.114 <none> 80:31000/TCP 6m23s

==> v1/Job
NAME COMPLETIONS DURATION AGE
horizon-db-init 1/1 37s 6m23s
horizon-db-sync 1/1 3m27s 6m23s

==> v1beta1/Role
NAME AGE
horizon-openstack-horizon 6m25s
horizon-openstack-horizon-db-init 6m24s
horizon-openstack-horizon-db-sync 6m24s

==> v1beta1/RoleBinding
NAME AGE
horizon-horizon 6m24s
horizon-horizon-db-init 6m24s
horizon-horizon-db-sync 6m24s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
horizon 1 1 1 1 6m23s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
horizon horizon,horizon.openstack,horizon.openstack.svc.cluster.local 80 6m23s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
horizon-netpol application=horizon 6m23s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
horizon 0 N/A 1 6m28s

==> v1/Secret
NAME TYPE DATA AGE
horizon-etc Opaque 10 6m27s
horizon-db-admin Opaque 1 6m27s
horizon-db-user Opaque 1 6m26s

==> v1/ServiceAccount
NAME SECRETS AGE
horizon 1 6m26s
horizon-db-init 1 6m26s
horizon-db-sync 1 6m26s
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
====Rados GW====
<PRE>
+ helm upgrade --install radosgw-openstack ../openstack-helm-infra/ceph-rgw --namespace=openstack --values=/tmp/radosgw-openstack.yaml
Release "radosgw-openstack" does not exist. Installing it now.
NAME: radosgw-openstack
E0203 11:14:02.662467 18336 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:33061->127.0.0.1:35352: write tcp4 127.0.0.1:33061->127.0.0.1:35352: write: broken pipe
LAST DEPLOYED: Sun Feb 3 11:14:02 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME SECRETS AGE
ceph-rgw 1 0s
ceph-ks-endpoints 1 0s
ceph-ks-service 1 0s
swift-ks-user 1 0s
ceph-rgw-storage-init 1 0s
radosgw-openstack-test 1 0s

==> v1/Job
NAME COMPLETIONS DURATION AGE
ceph-ks-endpoints 0/1 0s 0s
ceph-ks-service 0/1 0s 0s
swift-ks-user 0/1 0s 0s
ceph-rgw-storage-init 0/1 0s 0s

==> v1beta1/RoleBinding
NAME AGE
radosgw-openstack-ceph-rgw 0s
radosgw-openstack-ceph-ks-endpoints 0s
radosgw-openstack-ceph-ks-service 0s
radosgw-openstack-swift-ks-user 0s
ceph-rgw-storage-init 0s
radosgw-openstack-radosgw-openstack-test 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
radosgw ClusterIP 10.98.97.193 <none> 80/TCP,443/TCP 0s
ceph-rgw ClusterIP 10.98.50.234 <none> 8088/TCP 0s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
ceph-rgw 1 1 1 0 0s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
radosgw radosgw,radosgw.openstack,radosgw.openstack.svc.cluster.local 80 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
ceph-rgw-66685f585d-st7dp 0/1 Init:0/3 0 0s
ceph-ks-endpoints-hkj77 0/3 Init:0/1 0 0s
ceph-ks-service-l4wdx 0/1 Init:0/1 0 0s
swift-ks-user-ktptt 0/1 Init:0/1 0 0s
ceph-rgw-storage-init-2vrpg 0/1 Init:0/2 0 0s

==> v1/Secret
NAME TYPE DATA AGE
ceph-keystone-user-rgw Opaque 10 0s
ceph-keystone-user Opaque 8 0s
ceph-keystone-admin Opaque 8 0s
radosgw-s3-admin-creds Opaque 3 0s

==> v1/ConfigMap
NAME DATA AGE
ceph-rgw-bin-ks 3 0s
ceph-rgw-bin 7 0s
radosgw-openstack-ceph-templates 1 0s
ceph-rgw-etc 1 0s

==> v1beta1/Role
NAME AGE
radosgw-openstack-openstack-ceph-rgw 0s
radosgw-openstack-openstack-ceph-ks-endpoints 0s
radosgw-openstack-openstack-ceph-ks-service 0s
radosgw-openstack-openstack-swift-ks-user 0s
ceph-rgw-storage-init 0s
radosgw-openstack-openstack-radosgw-openstack-test 0s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status radosgw-openstack
LAST DEPLOYED: Sun Feb 3 11:14:02 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Role
NAME AGE
radosgw-openstack-openstack-ceph-rgw 3m54s
radosgw-openstack-openstack-ceph-ks-endpoints 3m54s
radosgw-openstack-openstack-ceph-ks-service 3m54s
radosgw-openstack-openstack-swift-ks-user 3m54s
ceph-rgw-storage-init 3m54s
radosgw-openstack-openstack-radosgw-openstack-test 3m54s

==> v1beta1/RoleBinding
NAME AGE
radosgw-openstack-ceph-rgw 3m54s
radosgw-openstack-ceph-ks-endpoints 3m54s
radosgw-openstack-ceph-ks-service 3m54s
radosgw-openstack-swift-ks-user 3m54s
ceph-rgw-storage-init 3m54s
radosgw-openstack-radosgw-openstack-test 3m54s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
radosgw ClusterIP 10.98.97.193 <none> 80/TCP,443/TCP 3m54s
ceph-rgw ClusterIP 10.98.50.234 <none> 8088/TCP 3m54s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
ceph-rgw 1 1 1 1 3m54s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
ceph-rgw-66685f585d-st7dp 1/1 Running 0 3m54s
ceph-ks-endpoints-hkj77 0/3 Completed 0 3m54s
ceph-ks-service-l4wdx 0/1 Completed 0 3m54s
swift-ks-user-ktptt 0/1 Completed 0 3m54s
ceph-rgw-storage-init-2vrpg 0/1 Completed 0 3m54s

==> v1/Secret
NAME TYPE DATA AGE
ceph-keystone-user-rgw Opaque 10 3m54s
ceph-keystone-user Opaque 8 3m54s
ceph-keystone-admin Opaque 8 3m54s
radosgw-s3-admin-creds Opaque 3 3m54s

==> v1/ConfigMap
NAME DATA AGE
ceph-rgw-bin-ks 3 3m54s
ceph-rgw-bin 7 3m54s
radosgw-openstack-ceph-templates 1 3m54s
ceph-rgw-etc 1 3m54s

==> v1/ServiceAccount
NAME SECRETS AGE
ceph-rgw 1 3m54s
ceph-ks-endpoints 1 3m54s
ceph-ks-service 1 3m54s
swift-ks-user 1 3m54s
ceph-rgw-storage-init 1 3m54s
radosgw-openstack-test 1 3m54s

==> v1/Job
NAME COMPLETIONS DURATION AGE
ceph-ks-endpoints 1/1 3m43s 3m54s
ceph-ks-service 1/1 3m22s 3m54s
swift-ks-user 1/1 3m50s 3m54s
ceph-rgw-storage-init 1/1 70s 3m54s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
radosgw radosgw,radosgw.openstack,radosgw.openstack.svc.cluster.local 80 3m54s


+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ sleep 30
+ openstack service list
+----------------------------------+----------+----------------+
| ID | Name | Type |
+----------------------------------+----------+----------------+
| 4dc87bba6ab94fd3b70e9d4493ef4e44 | swift | object-store |
| 5c354b75377944888ac1cc9a3a088808 | heat | orchestration |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity |
+----------------------------------+----------+----------------+
+ openstack container create mygreatcontainer
+--------------------------------------+------------------+-------------------------------------------------+
| account | container | x-trans-id |
+--------------------------------------+------------------+-------------------------------------------------+
| KEY_2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | mygreatcontainer | tx000000000000000000018-005c56ce05-12f8-default |
+--------------------------------------+------------------+-------------------------------------------------+
+ curl -L -o /tmp/important-file.jpg https://imgflip.com/s/meme/Cute-Cat.jpg
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 35343 100 35343 0 0 168k 0 --:--:-- --:--:-- --:--:-- 168k
+ openstack object create --name superimportantfile.jpg mygreatcontainer /tmp/important-file.jpg
+------------------------+------------------+----------------------------------+
| object | container | etag |
+------------------------+------------------+----------------------------------+
| superimportantfile.jpg | mygreatcontainer | d09dbe3a95308bb4abd216885e7d1c34 |
+------------------------+------------------+----------------------------------+
</PRE>
<PRE>
... skipped ...
OS_CLOUD=openstack_helm openstack object list mygreatcontainer
+------------------------+
| Name |
+------------------------+
| superimportantfile.jpg |
+------------------------+
</PRE>
====Glance====
<PRE>
+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ openstack service list
+----------------------------------+----------+----------------+
| ID | Name | Type |
+----------------------------------+----------+----------------+
| 0ef5d6114769472a896e7d5bfc2eb41a | glance | image |
| 4dc87bba6ab94fd3b70e9d4493ef4e44 | swift | object-store |
| 5c354b75377944888ac1cc9a3a088808 | heat | orchestration |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity |
+----------------------------------+----------+----------------+
+ sleep 30
+ openstack image list
+--------------------------------------+---------------------+--------+
| ID | Name | Status |
+--------------------------------------+---------------------+--------+
| ccfed5c7-b652-4dbc-8fc9-6dc4fec4985c | Cirros 0.3.5 64-bit | active |
+--------------------------------------+---------------------+--------+
+ openstack image show 'Cirros 0.3.5 64-bit'
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2019-02-03T11:26:45Z |
| disk_format | qcow2 |
| file | /v2/images/ccfed5c7-b652-4dbc-8fc9-6dc4fec4985c/file |
| id | ccfed5c7-b652-4dbc-8fc9-6dc4fec4985c |
| min_disk | 1 |
| min_ram | 0 |
| name | Cirros 0.3.5 64-bit |
| owner | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 |
| properties | hypervisor_type='qemu', os_distro='cirros' |
| protected | False |
| schema | /v2/schemas/image |
| size | 13267968 |
| status | active |
| tags | |
| updated_at | 2019-02-03T11:26:48Z |
| virtual_size | None |
| visibility | private |
+------------------+------------------------------------------------------+
</PRE>
====Cinder====
<PRE>
+ ./tools/deployment/common/wait-for-pods.sh openstack
+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ openstack service list
+----------------------------------+----------+----------------+
| ID | Name | Type |
+----------------------------------+----------+----------------+
| 0ef5d6114769472a896e7d5bfc2eb41a | glance | image |
| 151bcbb92c854322ae154447cc58662b | cinderv2 | volumev2 |
| 4dc87bba6ab94fd3b70e9d4493ef4e44 | swift | object-store |
| 5c354b75377944888ac1cc9a3a088808 | heat | orchestration |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| f25bb35c9bf147e4b0d10487c8e8eeaf | cinderv3 | volumev3 |
| f32899524d4b46248ca82d317748bbfd | cinder | volume |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity |
+----------------------------------+----------+----------------+
+ sleep 30
+ openstack volume type list
+--------------------------------------+------+-----------+
| ID | Name | Is Public |
+--------------------------------------+------+-----------+
| 25ae326e-f840-4cb9-802e-4646dd237cad | rbd1 | True |
+--------------------------------------+------+-----------+
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
====OpenVSwitch====
<PRE>
+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status openvswitch
LAST DEPLOYED: Sun Feb 3 11:43:52 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
openvswitch-bin 3 113s

==> v1/ServiceAccount
NAME SECRETS AGE
openvswitch-db 1 113s
openvswitch-vswitchd 1 113s

==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
openvswitch-db 1 1 1 1 1 openvswitch=enabled 113s
openvswitch-vswitchd 1 1 1 1 1 openvswitch=enabled 113s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
openvswitch-netpol application=openvswitch 113s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
openvswitch-db-nx579 1/1 Running 0 113s
openvswitch-vswitchd-p4xj5 1/1 Running 0 113s
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
====LibVirt====
<PRE>
+ helm upgrade --install libvirt ../openstack-helm-infra/libvirt --namespace=openstack --set manifests.network_policy=true --values=/tmp/libvirt.yaml
Release "libvirt" does not exist. Installing it now.
NAME: libvirt
LAST DEPLOYED: Sun Feb 3 11:46:46 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
libvirt-427lp 0/1 Init:0/3 0 0s

==> v1/ConfigMap
NAME DATA AGE
libvirt-bin 3 0s
libvirt-etc 2 0s

==> v1/ServiceAccount
NAME SECRETS AGE
libvirt 1 0s

==> v1beta1/Role
NAME AGE
libvirt-openstack-libvirt 0s

==> v1beta1/RoleBinding
NAME AGE
libvirt-libvirt 0s

==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
libvirt 1 1 0 1 0 openstack-compute-node=enabled 0s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
libvirt-netpol application=libvirt 0s


+ helm status libvirt
LAST DEPLOYED: Sun Feb 3 11:46:46 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Role
NAME AGE
libvirt-openstack-libvirt 1s

==> v1beta1/RoleBinding
NAME AGE
libvirt-libvirt 1s

==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
libvirt 1 1 0 1 0 openstack-compute-node=enabled 1s

==> v1/NetworkPolicy
NAME POD-SELECTOR AGE
libvirt-netpol application=libvirt 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
libvirt-427lp 0/1 Init:0/3 0 1s

==> v1/ConfigMap
NAME DATA AGE
libvirt-bin 3 1s
libvirt-etc 2 1s

==> v1/ServiceAccount
NAME SECRETS AGE
libvirt 1 1s
</PRE>
<PRE>
root@openstack:~/mira/openstack-helm# echo $?
0
</PRE>
Incidentally, the Calico warning in the docs about the 10.96.0.0/16 service CIDR seems to have the wrong mask: the mask actually in use is /12. Here is a ps output, lightly edited for readability:

<PRE>
root      5717  4.0  1.7 448168 292172 ?       Ssl  19:27   0:51      |   \_ kube-apiserver --feature-gates=MountPropagation=true,PodShareProcessNamespace=true 
                                                                                                      --service-node-port-range=1024-65535 
                                                                                                      --advertise-address=172.17.0.1 
                                                                                                      --service-cluster-ip-range=10.96.0.0/12
</PRE>
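The same thing can be checked without digging through ps. A sketch, assuming a kubeadm-style cluster where the apiserver runs as a static pod in kube-system with kubeadm's component=kube-apiserver label:

<PRE>
# Pull the apiserver's actual flags out of its pod spec and look for the CIDRs
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml \
    | grep -E 'service-cluster-ip-range|cluster-cidr'
</PRE>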

===Preparation===

Following the instructions without trying to change anything, no problems came up at all on Ubuntu 18.

===Installing OpenStack===

If you follow the instructions, no problems come up apart from timeouts.
As far as I could tell, all the scripts clean up after themselves correctly, so re-running them is reasonably safe.
Unfortunately I did not keep the list of scripts that had to be re-run. A retry wrapper like the sketch below would have saved some babysitting.
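Since the failures were plain timeouts, a dumb retry loop around a failed step is usually enough. A sketch: the wait-for-pods.sh path is the readiness gate the deployment scripts themselves call (it appears throughout the logs above); the retry loop itself is my addition:

<PRE>
# Re-run the readiness check until it succeeds; most failures here were
# image pulls that simply took longer than the script's timeout.
until ./tools/deployment/common/wait-for-pods.sh openstack; do
    echo "pods not ready yet, waiting and retrying..."
    sleep 60
done
</PRE>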

===Checking OpenStack===

When everything finished, I checked in the simplest possible way: is Keystone working?

<PRE>
root@openstack:~# export OS_CLOUD=openstack_helm
root@openstack:~# openstack token issue
</PRE>

At first glance, all the pods had at least started:

<PRE>
kubectl -n openstack get pods
NAME                                                READY     STATUS      RESTARTS   AGE
ceph-ks-endpoints-hkj77                             0/3       Completed   0          3h
ceph-ks-service-l4wdx                               0/1       Completed   0          3h
ceph-openstack-config-ceph-ns-key-generator-z82mk   0/1       Completed   0          17h
ceph-rgw-66685f585d-st7dp                           1/1       Running     0          3h
ceph-rgw-storage-init-2vrpg                         0/1       Completed   0          3h
cinder-api-85df68f5d8-j6mqh                         1/1       Running     0          2h
cinder-backup-5f9598868-5kxxx                       1/1       Running     0          2h
cinder-backup-storage-init-g627m                    0/1       Completed   0          2h
cinder-bootstrap-r2295                              0/1       Completed   0          2h
cinder-db-init-nk7jm                                0/1       Completed   0          2h
cinder-db-sync-vlbcm                                0/1       Completed   0          2h
cinder-ks-endpoints-cnwgb                           0/9       Completed   0          2h
cinder-ks-service-6zs57                             0/3       Completed   0          2h
cinder-ks-user-bp8zb                                0/1       Completed   0          2h
cinder-rabbit-init-j97b7                            0/1       Completed   0          2h
cinder-scheduler-6bfcd6476d-r87hm                   1/1       Running     0          2h
cinder-storage-init-6ksjc                           0/1       Completed   0          2h
cinder-volume-5fccd4cc5-dpxqm                       1/1       Running     0          2h
cinder-volume-usage-audit-1549203300-25mkf          0/1       Completed   0          14m
cinder-volume-usage-audit-1549203600-hnh54          0/1       Completed   0          8m
cinder-volume-usage-audit-1549203900-v5t4w          0/1       Completed   0          4m
glance-api-745dc74457-42nwf                         1/1       Running     0          3h
glance-bootstrap-j5wt4                              0/1       Completed   0          3h
glance-db-init-lw97h                                0/1       Completed   0          3h
glance-db-sync-dbp5s                                0/1       Completed   0          3h
glance-ks-endpoints-gm5rw                           0/3       Completed   0          3h
glance-ks-service-64jfj                             0/1       Completed   0          3h
glance-ks-user-ftv9c                                0/1       Completed   0          3h
glance-rabbit-init-m7b7k                            0/1       Completed   0          3h
glance-registry-6cb86c767-2mkbx                     1/1       Running     0          3h
glance-storage-init-m29p4                           0/1       Completed   0          3h
heat-api-69db75bb6d-h24w9                           1/1       Running     0          3h
heat-bootstrap-v9642                                0/1       Completed   0          3h
heat-cfn-86896f7466-n5dnz                           1/1       Running     0          3h
heat-db-init-lfrsb                                  0/1       Completed   0          3h
heat-db-sync-wct2x                                  0/1       Completed   0          3h
heat-domain-ks-user-4fg65                           0/1       Completed   0          3h
heat-engine-6756c84fdd-44hzf                        1/1       Running     0          3h
heat-engine-cleaner-1549203300-s48sb                0/1       Completed   0          14m
heat-engine-cleaner-1549203600-gffn4                0/1       Completed   0          8m
heat-engine-cleaner-1549203900-6hwvj                0/1       Completed   0          4m
heat-ks-endpoints-wxjwp                             0/6       Completed   0          3h
heat-ks-service-v95sk                               0/2       Completed   0          3h
heat-ks-user-z6xhb                                  0/1       Completed   0          3h
heat-rabbit-init-77nzb                              0/1       Completed   0          3h
heat-trustee-ks-user-mwrf5                          0/1       Completed   0          3h
heat-trusts-7x7nt                                   0/1       Completed   0          3h
horizon-5877548d5d-27t8c                            1/1       Running     0          3h
horizon-db-init-jsjm5                               0/1       Completed   0          3h
horizon-db-sync-wxwpw                               0/1       Completed   0          3h
ingress-86cf786fd8-fbz8w                            1/1       Running     4          18h
ingress-error-pages-7f574d9cd7-b5kwh                1/1       Running     0          18h
keystone-api-f658f747c-q6w65                        1/1       Running     0          3h
keystone-bootstrap-ds8t5                            0/1       Completed   0          3h
keystone-credential-setup-hrp8t                     0/1       Completed   0          3h
keystone-db-init-dhgf2                              0/1       Completed   0          3h
keystone-db-sync-z8d5d                              0/1       Completed   0          3h
keystone-domain-manage-86b25                        0/1       Completed   0          3h
keystone-fernet-rotate-1549195200-xh9lv             0/1       Completed   0          2h
keystone-fernet-setup-txgc8                         0/1       Completed   0          3h
keystone-rabbit-init-jgkqz                          0/1       Completed   0          3h
libvirt-427lp                                       1/1       Running     0          2h
mariadb-ingress-5cff98cbfc-24vjg                    1/1       Running     0          17h
mariadb-ingress-5cff98cbfc-nqlhq                    1/1       Running     0          17h
mariadb-ingress-error-pages-5c89b57bc-twn7z         1/1       Running     0          17h
mariadb-server-0                                    1/1       Running     0          17h
memcached-memcached-6d48bd48bc-7kd84                1/1       Running     0          3h
neutron-db-init-rvf47                               0/1       Completed   0          2h
neutron-db-sync-6w7bn                               0/1       Completed   0          2h
neutron-dhcp-agent-default-znxhn                    1/1       Running     0          2h
neutron-ks-endpoints-47xs8                          0/3       Completed   1          2h
neutron-ks-service-sqtwg                            0/1       Completed   0          2h
neutron-ks-user-tpmrb                               0/1       Completed   0          2h
neutron-l3-agent-default-5nbsp                      1/1       Running     0          2h
neutron-metadata-agent-default-9ml6v                1/1       Running     0          2h
neutron-ovs-agent-default-mg8ln                     1/1       Running     0          2h
neutron-rabbit-init-sgnwm                           0/1       Completed   0          2h
neutron-server-9bdc765c9-bx6sf                      1/1       Running     0          2h
nova-api-metadata-78fb54c549-zcmxg                  1/1       Running     2          2h
nova-api-osapi-6c5c6dd4fc-7z5qq                     1/1       Running     0          2h
nova-bootstrap-hp6n4                                0/1       Completed   0          2h
nova-cell-setup-1549195200-v5bv8                    0/1       Completed   0          2h
nova-cell-setup-1549198800-6d8sm                    0/1       Completed   0          1h
nova-cell-setup-1549202400-c9vfz                    0/1       Completed   0          29m
nova-cell-setup-dfdzw                               0/1       Completed   0          2h
nova-compute-default-fmqtl                          1/1       Running     0          2h
nova-conductor-5b9956bffc-5ts7s                     1/1       Running     0          2h
nova-consoleauth-7f8dbb8865-lt5mr                   1/1       Running     0          2h
nova-db-init-hjp2p                                  0/3       Completed   0          2h
nova-db-sync-zn6px                                  0/1       Completed   0          2h
nova-ks-endpoints-ldzhz                             0/3       Completed   0          2h
nova-ks-service-c64tb                               0/1       Completed   0          2h
nova-ks-user-kjskm                                  0/1       Completed   0          2h
nova-novncproxy-6f485d9f4c-6m2n5                    1/1       Running     0          2h
nova-placement-api-587c888875-6cmmb                 1/1       Running     0          2h
nova-rabbit-init-t275g                              0/1       Completed   0          2h
nova-scheduler-69886c6fdf-hcwm6                     1/1       Running     0          2h
nova-service-cleaner-1549195200-7jw2d               0/1       Completed   1          2h
nova-service-cleaner-1549198800-pvckn               0/1       Completed   0          1h
nova-service-cleaner-1549202400-kqpxz               0/1       Completed   0          29m
openvswitch-db-nx579                                1/1       Running     0          2h
openvswitch-vswitchd-p4xj5                          1/1       Running     0          2h
placement-ks-endpoints-vt4pk                        0/3       Completed   0          2h
placement-ks-service-sw2b9                          0/1       Completed   0          2h
placement-ks-user-zv755                             0/1       Completed   0          2h
rabbitmq-rabbitmq-0                                 1/1       Running     0          4h
swift-ks-user-ktptt                                 0/1       Completed   0          3h
</PRE>

===Accessing Horizon===

(I looked the settings up in the ingress; 10.255.57.3 is the virtual machine's address.)

<PRE>
cat /etc/hosts
10.255.57.3 os horizon.openstack.svc.cluster.local
</PRE>
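The ingress settings can be read straight from the cluster; the horizon ingress name below is the one from the helm output in the logs section:

<PRE>
kubectl -n openstack get ingress horizon
# the HOSTS column lists the names (horizon, horizon.openstack,
# horizon.openstack.svc.cluster.local) to map to the node's address in /etc/hosts
</PRE>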

[[Изображение:Horizon first login.png|600px]]

===KeyStone Configuration===

The task says:

<PRE>
change Keystone token expiration time afterwards to 24 hours
</PRE>

First, let's check what it actually is:

<PRE>
openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-02-04T00:25:34+0000                                                                                                                                                                |
| id         | gAAAAABcVt2-s8ugiwKaNQiA9djycTJ2CoDZ0sC176e54cjnE0RevPsXkgiZH0U5m_kNQlo0ctunA_TvD1tULyn0ckRkrO0Pxht1yT-cQ1TTidhkJR2sVojcXG3hiau0RMm0YOfoydDemyuvGMS7mwZ_Z2m9VtmJ-F83xQ8CwEfhItH6vRMzmGk |
| project_id | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44                                                                                                                                                        |
| user_id    | 42068c166a3245208b5ac78965eab80b                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

</PRE>

Looks like the TTL is 12h.
A quick read of the documentation ( https://docs.openstack.org/juno/config-reference/content/keystone-configuration-file.html ) led me to conclude that the section to change is:

<PRE>
[token]
expiration = 3600
</PRE>


At this point I decided to do it "quick and dirty"; in the real world this would most likely not fly.

1. Calculate the new value (instead of 24 I typed 34 by mistake; 24 hours would be 86400):

<PRE>
bc
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
34*60*60
122400
quit
</PRE>

2. Check that there is only one Keystone instance.
It would have been amusing if there were several, each issuing tokens with a different TTL.

<PRE>
docker ps | grep keystone
41e785977105        16ec948e619f                                                     "/tmp/keystone-api.s…"   2 hours ago          Up 2 hours                              k8s_keystone-api_keystone-api-f658f747c-q6w65_openstack_8ca3a9ed-279f-11e9-a72e-080027da2b2f_0
6905400831ad        k8s.gcr.io/pause-amd64:3.1                                       "/pause"                 2 hours ago          Up 2 hours                              k8s_POD_keystone-api-f658f747c-q6w65_openstack_8ca3a9ed-279f-11e9-a72e-080027da2b2f_0

</PRE>

Of course this really calls for kubectl exec ..., but I decided to cut a corner:

<PRE>
docker exec -u root -ti 41e785977105  bash
</PRE>
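Without cutting the corner it would have been a plain kubectl exec, with the pod name taken from the kubectl get pods listing above:

<PRE>
kubectl -n openstack exec -it keystone-api-f658f747c-q6w65 -- bash
</PRE>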

Checking what is running:

<PRE>
ps -auxfw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       566  0.0  0.0  18236  3300 pts/0    Ss   12:34   0:00 bash
root       581  0.0  0.0  34428  2872 pts/0    R+   12:36   0:00  \_ ps -auxfw
keystone     1  0.0  1.1 263112 185104 ?       Ss   10:42   0:01 apache2 -DFOREGROUND
keystone    11  3.5  0.5 618912 95016 ?        Sl   10:42   4:03 (wsgi:k -DFOREGROUND
keystone   478  0.1  0.0 555276  9952 ?        Sl   12:23   0:00 apache2 -DFOREGROUND
keystone   506  0.2  0.0 555348  9956 ?        Sl   12:24   0:01 apache2 -DFOREGROUND

</PRE>

This matches expectations, and so does the content of /etc/keystone/keystone.conf: 12h, just as assumed.

<PRE>
[token]
expiration = 43200
</PRE>

Changing the file in place inside the container did not work, so, to do everything against the rules for good measure, I modified it from outside, on the host:

<PRE>
root@openstack:~# find /var -name keystone.conf
/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~empty-dir/etckeystone/keystone.conf
/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~secret/keystone-etc/..2019_02_03_12_37_10.041243569/keystone.conf
/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~secret/keystone-etc/keystone.conf

</PRE>

Then, having thanked the developers for running Keystone under Apache (which made a reload possible instead of recreating the container, something I was not sure I knew how to do correctly):

<PRE>
docker exec -u root -ti 41e785977105  bash
ps -auxfw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       566  0.0  0.0  18236  3300 pts/0    Ss   12:34   0:00 bash
root       581  0.0  0.0  34428  2872 pts/0    R+   12:36   0:00  \_ ps -auxfw
keystone     1  0.0  1.1 263112 185104 ?       Ss   10:42   0:01 apache2 -DFOREGROUND
keystone    11  3.5  0.5 618912 95016 ?        Sl   10:42   4:03 (wsgi:k -DFOREGROUND
keystone   478  0.1  0.0 555276  9952 ?        Sl   12:23   0:00 apache2 -DFOREGROUND
keystone   506  0.2  0.0 555348  9956 ?        Sl   12:24   0:01 apache2 -DFOREGROUND
root@keystone-api-f658f747c-q6w65:/etc/keystone# kill -HUP 1
root@keystone-api-f658f747c-q6w65:/etc/keystone# ps -auxfw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       566  0.0  0.0  18236  3300 pts/0    Ss   12:34   0:00 bash
root       583  0.0  0.0  34428  2888 pts/0    R+   12:36   0:00  \_ ps -auxfw
keystone     1  0.0  1.1 210588 183004 ?       Ss   10:42   0:01 apache2 -DFOREGROUND
keystone    11  3.5  0.0      0     0 ?        Z    10:42   4:03 [apache2] <defunct>
root@keystone-api-f658f747c-q6w65:/etc/keystone# ps -auxfw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       566  0.0  0.0  18236  3300 pts/0    Ss   12:34   0:00 bash
root       955  0.0  0.0  34428  2904 pts/0    R+   12:36   0:00  \_ ps -auxfw
keystone     1  0.0  1.1 263120 185124 ?       Ss   10:42   0:01 apache2 -DFOREGROUND
keystone   584 12.0  0.0 290680  8820 ?        Sl   12:36   0:00 (wsgi:k -DFOREGROUND
keystone   585 14.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   586 14.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   587 17.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   588 13.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   589 14.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   590 10.0  0.0 555188 10020 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   591 12.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   592 10.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   593 15.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   594 14.0  0.0 265528  8572 ?        R    12:36   0:00 apache2 -DFOREGROUND
keystone   595 13.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   596 11.0  0.0 266040  8832 ?        R    12:36   0:00 apache2 -DFOREGROUND
keystone   597 19.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   598 14.0  0.0 555188  9956 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   599 18.0  0.0 555188  9952 ?        Sl   12:36   0:00 apache2 -DFOREGROUND
keystone   600 11.0  0.0 265528  8376 ?        R    12:36   0:00 apache2 -DFOREGROUND

</PRE>

Checking whether the changes took effect:

<PRE>
openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-02-04T22:37:10+0000                                                                                                                                                                |
| id         | gAAAAABcVuB2tQtAX56G1_kqJKeekpsWDJPTE19IMhWvNlGQqmDZQap9pgXQQkhQNMQNpR7Q6XR_w5_ngsx_l36vKXUND75uy4fimAbaLBDBdxxOzJqDRq4NLz4sEdTzLs2T3nyISwItLloOj-8sw7x1Pg2-9N-9afudv_jcYLVCq2luAImfRpY |
| project_id | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44                                                                                                                                                        |
| user_id    | 42068c166a3245208b5ac78965eab80b                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
root@openstack:/var/lib/kubelet/pods/8ca3a9ed-279f-11e9-a72e-080027da2b2f/volumes/kubernetes.io~secret/keystone-etc/..data# date
Sun Feb  3 12:37:18 UTC 2019

</PRE>

34 hours (because of the typo), but at that point I did not bother changing it to 24.
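A cleaner fix, and one that survives pod restarts, would be to change the value through the chart instead of behind helm's back. A sketch, assuming the keystone chart maps conf.keystone.* into keystone.conf the way openstack-helm charts normally do:

<PRE>
# 24 hours = 86400 seconds; helm re-renders the keystone-etc secret
# and rolls the keystone-api pods with the new config
helm upgrade keystone ./keystone --namespace=openstack \
    --set conf.keystone.token.expiration=86400
</PRE>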

===Heat Deploy===

<PRE>
deploy 3 VMs connected to each other using heat
</PRE>

The easiest part: everything is covered by the documentation, https://docs.openstack.org/openstack-helm/latest/install/developer/exercise-the-cloud.html
For me the VM did not get created on the first try:

<PRE>
[instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Instance failed to spawn
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Traceback (most recent call last):
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2133, in _build_resources
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     yield resources
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/compute/manager.py", line 1939, in _build_and_run_instance
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     block_device_info=block_device_info)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2786, in spawn
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     block_device_info=block_device_info)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3193, in _create_image
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     fallback_from_host)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3309, in _create_and_inject_local_root
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     instance, size, fallback_from_host)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6953, in _try_fetch_image_cache
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     size=size)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 242, in cache
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     *args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 584, in create_image
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     prepare_template(target=base, *args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     return f(*args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 238, in fetch_func_sync
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     fetch_func(target=target, *args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 458, in fetch_image
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     images.fetch_to_raw(context, image_id, target)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/images.py", line 132, in fetch_to_raw
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     fetch(context, image_href, path_tmp)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/virt/images.py", line 123, in fetch
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     IMAGE_API.download(context, image_href, dest_path=path)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/api.py", line 184, in download
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     dst_path=dest_path)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 533, in download
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     _reraise_translated_image_exception(image_id)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 1050, in _reraise_translated_image_exception
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     six.reraise(type(new_exc), new_exc, exc_trace)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 531, in download
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     image_chunks = self._client.call(context, 2, 'data', image_id)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/nova/image/glance.py", line 168, in call
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     result = getattr(controller, method)(*args, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/utils.py", line 535, in inner
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     return RequestIdProxy(wrapped(*args, **kwargs))
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/v2/images.py", line 208, in data
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     resp, body = self.http_client.get(url)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 285, in get
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     return self._request('GET', url, **kwargs)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 277, in _request
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     resp, body_iter = self._handle_response(resp)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]   File "/var/lib/openstack/local/lib/python2.7/site-packages/glanceclient/common/http.py", line 107, in _handle_response
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]     raise exc.from_response(resp, resp.content)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
2019-02-03 13:20:56,466.466 21157 ERROR nova.compute.manager [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad]
2019-02-03 13:21:22,418.418 21157 INFO nova.compute.resource_tracker [req-cdb3800a-87ba-4ee9-88ad-e6914522a847 - - - - -] Final resource view: name=openstack phys_ram=16039MB used_ram=576MB phys_disk=48GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[]
2019-02-03 13:21:27,224.224 21157 INFO nova.compute.manager [req-c0895961-b263-4122-82cc-5267be0aad8f 42068c166a3245208b5ac78965eab80b 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 - - -] [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Terminating instance
2019-02-03 13:21:27,332.332 21157 INFO nova.virt.libvirt.driver [-] [instance: 6aa5979e-1e03-4c8c-92bf-b1c1a43022ad] Instance destroyed successfully.

</PRE>

Since I suspected the problem was general slowness:

<PRE>
2019-02-03 12:57:35,835.835 21157 WARNING nova.scheduler.client.report [req-cdb3800a-87ba-4ee9-88ad-e6914522a847 - - - - -] Failed to update inventory for resource provider 57daad8e-d831-4271-b3ef-332237d32b49: 503 503 Service Unavailable
The server is currently unavailable. Please try again at a later time.
</PRE>

I simply cleaned up the stack and commented out the network creation in the script, after which the VM was created successfully:

<PRE>
 openstack server list
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------------------------+---------------------+----------------------------------------------+
| ID                                   | Name                                         | Status | Networks                                                               | Image               | Flavor                                       |
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------------------------+---------------------+----------------------------------------------+
| 155405cd-011a-42a2-93d7-3ed6eda250b2 | heat-basic-vm-deployment-server-ynxjzrycsd3z | ACTIVE | heat-basic-vm-deployment-private_net-tbltedh44qjv=10.0.0.4, 172.24.4.5 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-flavor-3kbmengg2bkm |
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------------------------+---------------------+----------------------------------------------+
root@openstack:~# openstack stack list
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name                  | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| 1f90d25b-eb19-48cd-b623-cc0c7bccc28f | heat-vm-volume-attach       | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:36:15Z | None         |
| 8d4ce486-ddb0-4826-b225-6b7dc4eef157 | heat-basic-vm-deployment    | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:35:11Z | None         |
| d0aeea69-4639-4942-a905-ec30ed99aa47 | heat-subnet-pool-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:29Z | None         |
| 688585e9-9b99-4ac7-bd04-9e7b874ec6c7 | heat-public-net-deployment  | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:09Z | None         |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+

Now three VMs need to be created and connectivity between them verified.
To avoid doing this by hand, I reused the same script, wrapping the stack creation in a loop:

for I in $(seq 1 3); do
    openstack stack create --wait \
        --parameter public_net=${OSH_EXT_NET_NAME} \
        --parameter image="${IMAGE_NAME}" \
        --parameter ssh_key=${OSH_VM_KEY_STACK} \
        --parameter cidr=${OSH_PRIVATE_SUBNET} \
        --parameter dns_nameserver=${OSH_BR_EX_ADDR%/*} \
        -t ./tools/gate/files/heat-basic-vm-deployment.yaml \
        heat-basic-vm-deployment-${I}
    ...          # remaining stack-create calls (the volume-attach stacks) elided
done

The result:

# openstack stack list
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name                  | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| a6f1e35e-7536-4707-bdfa-b2885ab7cae2 | heat-vm-volume-attach-3     | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:48:31Z | None         |
| ccb63a87-37f4-4355-b399-ef4abb43983b | heat-basic-vm-deployment-3  | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:47:25Z | None         |
| 6d61b8de-80cd-4138-bfe2-8333a4b354ce | heat-vm-volume-attach-2     | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:47:12Z | None         |
| 75c44f1c-d8da-422e-a027-f16b8458e224 | heat-basic-vm-deployment-2  | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:46:08Z | None         |
| 95da63ac-9e20-4492-b3a6-fab74649bbf9 | heat-vm-volume-attach-1     | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:45:54Z | None         |
| 447881bb-6c93-4b92-9765-578782ee2ef5 | heat-basic-vm-deployment-1  | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:44:42Z | None         |
| 1f90d25b-eb19-48cd-b623-cc0c7bccc28f | heat-vm-volume-attach       | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:36:15Z | None         |
| 8d4ce486-ddb0-4826-b225-6b7dc4eef157 | heat-basic-vm-deployment    | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:35:11Z | None         |
| d0aeea69-4639-4942-a905-ec30ed99aa47 | heat-subnet-pool-deployment | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:29Z | None         |
| 688585e9-9b99-4ac7-bd04-9e7b874ec6c7 | heat-public-net-deployment  | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | CREATE_COMPLETE | 2019-02-03T13:16:09Z | None         |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
openstack server list
+--------------------------------------+------------------------------------------------+--------+---------------------------------------------------------------------------+---------------------+------------------------------------------------+
| ID                                   | Name                                           | Status | Networks                                                                  | Image               | Flavor                                         |
+--------------------------------------+------------------------------------------------+--------+---------------------------------------------------------------------------+---------------------+------------------------------------------------+
| 412cbd8b-e4c1-46e8-b48c-065e9830bfa8 | heat-basic-vm-deployment-3-server-v5lwzoyotkwo | ACTIVE | heat-basic-vm-deployment-3-private_net-4unttrj2lq6z=10.0.0.6, 172.24.4.18 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-3-flavor-3gncn5vwfu6z |
| e7a4e42c-aa9c-47bc-ba7b-d229af1a2077 | heat-basic-vm-deployment-2-server-vhacv5jz7dnt | ACTIVE | heat-basic-vm-deployment-2-private_net-2gz44w5rjy7s=10.0.0.6, 172.24.4.11 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-2-flavor-hxr5eawiveg5 |
| 52886edd-be09-4ba1-aebd-5563f25f4f60 | heat-basic-vm-deployment-1-server-wk5lxhhnhhyn | ACTIVE | heat-basic-vm-deployment-1-private_net-hqx3dmohj3n5=10.0.0.5, 172.24.4.12 | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-1-flavor-6aiokzvf4qaq |
| 155405cd-011a-42a2-93d7-3ed6eda250b2 | heat-basic-vm-deployment-server-ynxjzrycsd3z   | ACTIVE | heat-basic-vm-deployment-private_net-tbltedh44qjv=10.0.0.4, 172.24.4.5    | Cirros 0.3.5 64-bit | heat-basic-vm-deployment-flavor-3kbmengg2bkm   |
+--------------------------------------+------------------------------------------------+--------+---------------------------------------------------------------------------+---------------------+------------------------------------------------+

Network check (port 22 is definitely open):

root@openstack:/etc/openstack# ssh -i /root/.ssh/osh_key cirros@172.24.4.18

$ nc 172.24.4.11 22
SSH-2.0-dropbear_2012.55
^Cpunt!

$ nc 172.24.4.12  22
SSH-2.0-dropbear_2012.55
^Cpunt!

$
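The same check can be scripted from inside the first VM; a minimal sketch, assuming the BusyBox nc shipped with Cirros supports the -w timeout flag:

<PRE>
# Sketch: probe TCP/22 on the two neighbours and print the SSH banner.
for ip in 172.24.4.11 172.24.4.12; do
    echo "--- ${ip}"
    nc -w 3 ${ip} 22 </dev/null
done
</PRE>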

Horizon with the VMs:

[[Изображение:Horizon With VMs.png|600px]]

Surprisingly, even the VNC console worked:

[[Изображение:Horizon VNC.png|600px]]

==Logs==

===MariaDB===

+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status mariadb
LAST DEPLOYED: Sun Feb  3 10:25:00 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/PodDisruptionBudget
NAME            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
mariadb-server  0              N/A              1                    13h

==> v1/ConfigMap
NAME                  DATA  AGE
mariadb-bin           5     13h
mariadb-etc           5     13h
mariadb-services-tcp  1     13h

==> v1/ServiceAccount
NAME                         SECRETS  AGE
mariadb-ingress-error-pages  1        13h
mariadb-ingress              1        13h
mariadb-mariadb              1        13h

==> v1beta1/RoleBinding
NAME                     AGE
mariadb-mariadb-ingress  13h
mariadb-ingress          13h
mariadb-mariadb          13h

==> v1/Service
NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
mariadb-discovery            ClusterIP  None            <none>       3306/TCP,4567/TCP  13h
mariadb-ingress-error-pages  ClusterIP  None            <none>       80/TCP             13h
mariadb                      ClusterIP  10.104.164.168  <none>       3306/TCP           13h
mariadb-server               ClusterIP  10.107.255.234  <none>       3306/TCP           13h

==> v1/NetworkPolicy
NAME            POD-SELECTOR         AGE
mariadb-netpol  application=mariadb  13h

==> v1/Secret
NAME                      TYPE    DATA  AGE
mariadb-dbadmin-password  Opaque  1     13h
mariadb-secrets           Opaque  1     13h

==> v1beta1/Role
NAME                               AGE
mariadb-ingress                    13h
mariadb-openstack-mariadb-ingress  13h
mariadb-mariadb                    13h

==> v1/Deployment
NAME                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mariadb-ingress-error-pages  1        1        1           1          13h
mariadb-ingress              2        2        2           2          13h

==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
mariadb-server  1        1        13h

==> v1/Pod(related)
NAME                                         READY  STATUS   RESTARTS  AGE
mariadb-ingress-error-pages-5c89b57bc-twn7z  1/1    Running  0         13h
mariadb-ingress-5cff98cbfc-24vjg             1/1    Running  0         13h
mariadb-ingress-5cff98cbfc-nqlhq             1/1    Running  0         13h
mariadb-server-0                             1/1    Running  0         13h
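
A quick way to confirm the database actually answers queries, not just that the pod is Running; a sketch, assuming the chart stores the admin password in the mariadb-dbadmin-password secret under the key MYSQL_DBADMIN_PASSWORD:

<PRE>
# Sketch: pull the DB admin password from the secret and run a trivial query.
PASS=$(kubectl -n openstack get secret mariadb-dbadmin-password \
    -o jsonpath='{.data.MYSQL_DBADMIN_PASSWORD}' | base64 -d)
kubectl -n openstack exec mariadb-server-0 -- mysql -uroot -p"${PASS}" -e 'SELECT VERSION();'
</PRE>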


===RabbitMQ===

+ helm upgrade --install rabbitmq ../openstack-helm-infra/rabbitmq --namespace=openstack --values=/tmp/rabbitmq.yaml --set pod.replicas.server=1
Release "rabbitmq" does not exist. Installing it now.
NAME:   rabbitmq
LAST DEPLOYED: Sun Feb  3 10:27:01 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                       AGE
rabbitmq-dsv-7b1733  ClusterIP  None           <none>       5672/TCP,25672/TCP,15672/TCP  2s
rabbitmq-mgr-7b1733  ClusterIP  10.111.11.128  <none>       80/TCP,443/TCP                2s
rabbitmq             ClusterIP  10.108.248.80  <none>       5672/TCP,25672/TCP,15672/TCP  2s

==> v1/StatefulSet
NAME               DESIRED  CURRENT  AGE
rabbitmq-rabbitmq  1        1        2s

==> v1beta1/Ingress
NAME                 HOSTS                                                                                              ADDRESS  PORTS  AGE
rabbitmq-mgr-7b1733  rabbitmq-mgr-7b1733,rabbitmq-mgr-7b1733.openstack,rabbitmq-mgr-7b1733.openstack.svc.cluster.local  80       2s

==> v1/Pod(related)
NAME                 READY  STATUS   RESTARTS  AGE
rabbitmq-rabbitmq-0  0/1    Pending  0         2s

==> v1/ConfigMap
NAME                   DATA  AGE
rabbitmq-rabbitmq-bin  4     3s
rabbitmq-rabbitmq-etc  2     2s

==> v1/ServiceAccount
NAME               SECRETS  AGE
rabbitmq-test      1        2s
rabbitmq-rabbitmq  1        2s

==> v1/NetworkPolicy
NAME             POD-SELECTOR          AGE
rabbitmq-netpol  application=rabbitmq  2s

==> v1beta1/Role
NAME                              AGE
rabbitmq-openstack-rabbitmq-test  2s
rabbitmq-rabbitmq                 2s

==> v1beta1/RoleBinding
NAME                    AGE
rabbitmq-rabbitmq-test  2s
rabbitmq-rabbitmq       2s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status rabbitmq
LAST DEPLOYED: Sun Feb  3 10:27:01 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                       AGE
rabbitmq-dsv-7b1733  ClusterIP  None           <none>       5672/TCP,25672/TCP,15672/TCP  2m21s
rabbitmq-mgr-7b1733  ClusterIP  10.111.11.128  <none>       80/TCP,443/TCP                2m21s
rabbitmq             ClusterIP  10.108.248.80  <none>       5672/TCP,25672/TCP,15672/TCP  2m21s

==> v1beta1/Ingress
NAME                 HOSTS                                                                                              ADDRESS  PORTS  AGE
rabbitmq-mgr-7b1733  rabbitmq-mgr-7b1733,rabbitmq-mgr-7b1733.openstack,rabbitmq-mgr-7b1733.openstack.svc.cluster.local  80       2m21s

==> v1/NetworkPolicy
NAME             POD-SELECTOR          AGE
rabbitmq-netpol  application=rabbitmq  2m21s

==> v1/ConfigMap
NAME                   DATA  AGE
rabbitmq-rabbitmq-bin  4     2m22s
rabbitmq-rabbitmq-etc  2     2m21s

==> v1/ServiceAccount
NAME               SECRETS  AGE
rabbitmq-test      1        2m21s
rabbitmq-rabbitmq  1        2m21s

==> v1beta1/RoleBinding
NAME                    AGE
rabbitmq-rabbitmq-test  2m21s
rabbitmq-rabbitmq       2m21s

==> v1beta1/Role
NAME                              AGE
rabbitmq-openstack-rabbitmq-test  2m21s
rabbitmq-rabbitmq                 2m21s

==> v1/StatefulSet
NAME               DESIRED  CURRENT  AGE
rabbitmq-rabbitmq  1        1        2m21s

==> v1/Pod(related)
NAME                 READY  STATUS   RESTARTS  AGE
rabbitmq-rabbitmq-0  1/1    Running  0         2m21s
root@openstack:~/mira/openstack-helm# echo $?
0
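
Beyond the pod being 1/1 Running, rabbitmqctl can be asked directly; a sketch:

<PRE>
# Sketch: query the broker's cluster status from inside the pod.
kubectl -n openstack exec rabbitmq-rabbitmq-0 -- rabbitmqctl cluster_status
</PRE>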

===Memcached===


+ helm upgrade --install memcached ../openstack-helm-infra/memcached --namespace=openstack --values=/tmp/memcached.yaml
Release "memcached" does not exist. Installing it now.
NAME:   memcached
LAST DEPLOYED: Sun Feb  3 10:30:32 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                     DATA  AGE
memcached-memcached-bin  1     3s

==> v1/ServiceAccount
NAME                 SECRETS  AGE
memcached-memcached  1        3s

==> v1/Service
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)    AGE
memcached  ClusterIP  10.96.106.159  <none>       11211/TCP  3s

==> v1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
memcached-memcached  1        1        1           0          3s

==> v1/NetworkPolicy
NAME              POD-SELECTOR           AGE
memcached-netpol  application=memcached  3s

==> v1/Pod(related)
NAME                                  READY  STATUS    RESTARTS  AGE
memcached-memcached-6d48bd48bc-7kd84  0/1    Init:0/1  0         2s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status memcached
LAST DEPLOYED: Sun Feb  3 10:30:32 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/NetworkPolicy
NAME              POD-SELECTOR           AGE
memcached-netpol  application=memcached  78s

==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
memcached-memcached-6d48bd48bc-7kd84  1/1    Running  0         77s

==> v1/ConfigMap
NAME                     DATA  AGE
memcached-memcached-bin  1     78s

==> v1/ServiceAccount
NAME                 SECRETS  AGE
memcached-memcached  1        78s

==> v1/Service
NAME       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)    AGE
memcached  ClusterIP  10.96.106.159  <none>       11211/TCP  78s

==> v1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
memcached-memcached  1        1        1           1          78s
root@openstack:~/mira/openstack-helm# echo $?
0
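
memcached has no CLI of its own, but its text protocol is enough for a smoke test over a port-forward; a sketch run on the host (assumes a kubectl new enough to port-forward a Service, and an nc binary on the VM):

<PRE>
# Sketch: forward the service port locally and ask memcached for its stats.
kubectl -n openstack port-forward svc/memcached 11211:11211 & PF=$!
sleep 2
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head
kill ${PF}
</PRE>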

===Keystone===

+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status keystone
LAST DEPLOYED: Sun Feb  3 10:36:19 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                        SECRETS  AGE
keystone-credential-rotate  1        7m20s
keystone-fernet-rotate      1        7m20s
keystone-api                1        7m20s
keystone-bootstrap          1        7m20s
keystone-credential-setup   1        7m19s
keystone-db-init            1        7m19s
keystone-db-sync            1        7m19s
keystone-domain-manage      1        7m19s
keystone-fernet-setup       1        7m19s
keystone-rabbit-init        1        7m19s
keystone-test               1        7m19s

==> v1beta1/RoleBinding
NAME                                 AGE
keystone-keystone-credential-rotate  7m19s
keystone-credential-rotate           7m19s
keystone-fernet-rotate               7m19s
keystone-keystone-fernet-rotate      7m18s
keystone-keystone-api                7m18s
keystone-keystone-bootstrap          7m18s
keystone-credential-setup            7m18s
keystone-keystone-db-init            7m18s
keystone-keystone-db-sync            7m18s
keystone-keystone-domain-manage      7m18s
keystone-fernet-setup                7m18s
keystone-keystone-rabbit-init        7m18s
keystone-keystone-test               7m18s

==> v1/Service
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
keystone-api  ClusterIP  10.110.158.186  <none>       5000/TCP        7m18s
keystone      ClusterIP  10.108.1.22     <none>       80/TCP,443/TCP  7m18s

==> v1/Deployment
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
keystone-api  1        1        1           1          7m18s

==> v1beta1/CronJob
NAME                        SCHEDULE      SUSPEND  ACTIVE  LAST SCHEDULE  AGE
keystone-credential-rotate  0 0 1 * *     False    0       <none>         7m18s
keystone-fernet-rotate      0 */12 * * *  False    0       <none>         7m18s

==> v1beta1/Ingress
NAME      HOSTS                                                             ADDRESS  PORTS  AGE
keystone  keystone,keystone.openstack,keystone.openstack.svc.cluster.local  80       7m18s

==> v1beta1/PodDisruptionBudget
NAME          MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
keystone-api  0              N/A              1                    7m20s

==> v1/Secret
NAME                      TYPE    DATA  AGE
keystone-etc              Opaque  9     7m20s
keystone-credential-keys  Opaque  2     7m20s
keystone-db-admin         Opaque  1     7m20s
keystone-db-user          Opaque  1     7m20s
keystone-fernet-keys      Opaque  2     7m20s
keystone-keystone-admin   Opaque  8     7m20s
keystone-keystone-test    Opaque  8     7m20s
keystone-rabbitmq-admin   Opaque  1     7m20s
keystone-rabbitmq-user    Opaque  1     7m20s

==> v1/NetworkPolicy
NAME             POD-SELECTOR          AGE
keystone-netpol  application=keystone  7m18s

==> v1/Pod(related)
NAME                             READY  STATUS     RESTARTS  AGE
keystone-api-f658f747c-q6w65     1/1    Running    0         7m18s
keystone-bootstrap-ds8t5         0/1    Completed  0         7m18s
keystone-credential-setup-hrp8t  0/1    Completed  0         7m18s
keystone-db-init-dhgf2           0/1    Completed  0         7m18s
keystone-db-sync-z8d5d           0/1    Completed  0         7m18s
keystone-domain-manage-86b25     0/1    Completed  0         7m18s
keystone-fernet-setup-txgc8      0/1    Completed  0         7m18s
keystone-rabbit-init-jgkqz       0/1    Completed  0         7m18s

==> v1/Job
NAME                       COMPLETIONS  DURATION  AGE
keystone-bootstrap         1/1          7m13s     7m18s
keystone-credential-setup  1/1          2m6s      7m18s
keystone-db-init           1/1          3m46s     7m18s
keystone-db-sync           1/1          6m11s     7m18s
keystone-domain-manage     1/1          6m51s     7m18s
keystone-fernet-setup      1/1          3m52s     7m18s
keystone-rabbit-init       1/1          5m33s     7m18s

==> v1/ConfigMap
NAME          DATA  AGE
keystone-bin  13    7m20s

==> v1beta1/Role
NAME                                           AGE
keystone-openstack-keystone-credential-rotate  7m19s
keystone-credential-rotate                     7m19s
keystone-fernet-rotate                         7m19s
keystone-openstack-keystone-fernet-rotate      7m19s
keystone-openstack-keystone-api                7m19s
keystone-openstack-keystone-bootstrap          7m19s
keystone-credential-setup                      7m19s
keystone-openstack-keystone-db-init            7m19s
keystone-openstack-keystone-db-sync            7m19s
keystone-openstack-keystone-domain-manage      7m19s
keystone-fernet-setup                          7m19s
keystone-openstack-keystone-rabbit-init        7m19s
keystone-openstack-keystone-test               7m19s


+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ sleep 30
+ openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                                     |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
| 0f9f179d90a64e76ac65873826a4851e | RegionOne | keystone     | identity     | True    | internal  | http://keystone-api.openstack.svc.cluster.local:5000/v3 |
| 1ea5e2909c574c01bc815b96ba818db3 | RegionOne | keystone     | identity     | True    | public    | http://keystone.openstack.svc.cluster.local:80/v3       |
| 32e745bc02af4e5cb20830c83fc626e3 | RegionOne | keystone     | identity     | True    | admin     | http://keystone.openstack.svc.cluster.local:80/v3       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------------------+
root@openstack:~/mira/openstack-helm# echo $?
0
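
This is also the natural place to check the token lifetime from the task: openstack token issue prints the expiry, so after setting [token]/expiration = 86400 in keystone.conf the expires field should land about 24 hours ahead. A sketch:

<PRE>
# Sketch: issue a token and print only its expiry timestamp.
export OS_CLOUD=openstack_helm
openstack token issue -f value -c expires
</PRE>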

===Heat===

+ :
+ helm upgrade --install heat ./heat --namespace=openstack --set manifests.network_policy=true
Release "heat" does not exist. Installing it now.
NAME:   heat
LAST DEPLOYED: Sun Feb  3 10:57:10 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/CronJob
NAME                 SCHEDULE     SUSPEND  ACTIVE  LAST SCHEDULE  AGE
heat-engine-cleaner  */5 * * * *  False    0       <none>         4s

==> v1/Secret
NAME                      TYPE    DATA  AGE
heat-etc                  Opaque  10    7s
heat-db-user              Opaque  1     7s
heat-db-admin             Opaque  1     7s
heat-keystone-user        Opaque  8     7s
heat-keystone-test        Opaque  8     7s
heat-keystone-stack-user  Opaque  5     7s
heat-keystone-trustee     Opaque  8     7s
heat-keystone-admin       Opaque  8     7s
heat-rabbitmq-admin       Opaque  1     7s
heat-rabbitmq-user        Opaque  1     7s

==> v1/ServiceAccount
NAME                  SECRETS  AGE
heat-engine-cleaner   1        7s
heat-api              1        7s
heat-cfn              1        7s
heat-engine           1        7s
heat-bootstrap        1        6s
heat-db-init          1        6s
heat-db-sync          1        6s
heat-ks-endpoints     1        6s
heat-ks-service       1        6s
heat-ks-user-domain   1        6s
heat-trustee-ks-user  1        6s
heat-ks-user          1        6s
heat-rabbit-init      1        6s
heat-trusts           1        6s
heat-test             1        6s

==> v1beta1/RoleBinding
NAME                       AGE
heat-heat-engine-cleaner   5s
heat-heat-api              5s
heat-heat-cfn              5s
heat-heat-engine           5s
heat-heat-db-init          5s
heat-heat-db-sync          5s
heat-heat-ks-endpoints     5s
heat-heat-ks-service       5s
heat-heat-ks-user-domain   5s
heat-heat-trustee-ks-user  5s
heat-heat-ks-user          5s
heat-heat-rabbit-init      5s
heat-heat-trusts           5s
heat-heat-test             5s

==> v1/Service
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
heat-api        ClusterIP  10.107.126.110  <none>       8004/TCP        5s
heat-cfn        ClusterIP  10.103.165.157  <none>       8000/TCP        5s
heat            ClusterIP  10.106.167.63   <none>       80/TCP,443/TCP  5s
cloudformation  ClusterIP  10.107.173.42   <none>       80/TCP,443/TCP  5s

==> v1/Deployment
NAME         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
heat-api     1        1        1           0          5s
heat-cfn     1        1        1           0          5s
heat-engine  1        1        1           0          5s

==> v1/NetworkPolicy
NAME         POD-SELECTOR      AGE
heat-netpol  application=heat  4s

==> v1/Pod(related)
NAME                          READY  STATUS    RESTARTS  AGE
heat-api-69db75bb6d-h24w9     0/1    Init:0/1  0         5s
heat-cfn-86896f7466-n5dnz     0/1    Init:0/1  0         5s
heat-engine-6756c84fdd-44hzf  0/1    Init:0/1  0         5s
heat-bootstrap-v9642          0/1    Init:0/1  0         5s
heat-db-init-lfrsb            0/1    Pending   0         5s
heat-db-sync-wct2x            0/1    Init:0/1  0         5s
heat-ks-endpoints-wxjwp       0/6    Pending   0         5s
heat-ks-service-v95sk         0/2    Pending   0         5s
heat-domain-ks-user-4fg65     0/1    Pending   0         4s
heat-trustee-ks-user-mwrf5    0/1    Pending   0         4s
heat-ks-user-z6xhb            0/1    Pending   0         4s
heat-rabbit-init-77nzb        0/1    Pending   0         4s
heat-trusts-7x7nt             0/1    Pending   0         4s

==> v1beta1/PodDisruptionBudget
NAME      MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
heat-api  0              N/A              0                    7s
heat-cfn  0              N/A              0                    7s

==> v1/ConfigMap
NAME      DATA  AGE
heat-bin  16    7s

==> v1beta1/Role
NAME                                 AGE
heat-openstack-heat-engine-cleaner   6s
heat-openstack-heat-api              6s
heat-openstack-heat-cfn              6s
heat-openstack-heat-engine           6s
heat-openstack-heat-db-init          6s
heat-openstack-heat-db-sync          6s
heat-openstack-heat-ks-endpoints     6s
heat-openstack-heat-ks-service       6s
heat-openstack-heat-ks-user-domain   6s
heat-openstack-heat-trustee-ks-user  6s
heat-openstack-heat-ks-user          6s
heat-openstack-heat-rabbit-init      6s
heat-openstack-heat-trusts           5s
heat-openstack-heat-test             5s

==> v1/Job
NAME                  COMPLETIONS  DURATION  AGE
heat-bootstrap        0/1          5s        5s
heat-db-init          0/1          4s        5s
heat-db-sync          0/1          5s        5s
heat-ks-endpoints     0/1          4s        5s
heat-ks-service       0/1          4s        5s
heat-domain-ks-user   0/1          4s        4s
heat-trustee-ks-user  0/1          4s        4s
heat-ks-user          0/1          4s        4s
heat-rabbit-init      0/1          4s        4s
heat-trusts           0/1          4s        4s

==> v1beta1/Ingress
NAME            HOSTS                                                                               ADDRESS  PORTS  AGE
heat            heat,heat.openstack,heat.openstack.svc.cluster.local                                80       4s
cloudformation  cloudformation,cloudformation.openstack,cloudformation.openstack.svc.cluster.local  80       4s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ openstack service list
+----------------------------------+----------+----------------+
| ID                               | Name     | Type           |
+----------------------------------+----------+----------------+
| 5c354b75377944888ac1cc9a3a088808 | heat     | orchestration  |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity       |
+----------------------------------+----------+----------------+
+ sleep 30
+ openstack orchestration service list
+------------------------------+-------------+--------------------------------------+-------------+--------+----------------------------+--------+
| Hostname                     | Binary      | Engine ID                            | Host        | Topic  | Updated At                 | Status |
+------------------------------+-------------+--------------------------------------+-------------+--------+----------------------------+--------+
| heat-engine-6756c84fdd-44hzf | heat-engine | 7a396564-9fce-43f9-aedd-3a48101925e8 | heat-engine | engine | 2019-02-03T11:03:45.000000 | up     |
+------------------------------+-------------+--------------------------------------+-------------+--------+----------------------------+--------+
root@openstack:~/mira/openstack-helm# echo $?
0

===Horizon===

+ helm status horizon
LAST DEPLOYED: Sun Feb  3 11:05:26 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                      READY  STATUS     RESTARTS  AGE
horizon-5877548d5d-27t8c  1/1    Running    0         6m22s
horizon-db-init-jsjm5     0/1    Completed  0         6m23s
horizon-db-sync-wxwpw     0/1    Completed  0         6m23s

==> v1/ConfigMap
NAME         DATA  AGE
horizon-bin  6     6m26s

==> v1/Service
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
horizon      ClusterIP  10.111.206.119  <none>       80/TCP,443/TCP  6m24s
horizon-int  NodePort   10.107.139.114  <none>       80:31000/TCP    6m23s

==> v1/Job
NAME             COMPLETIONS  DURATION  AGE
horizon-db-init  1/1          37s       6m23s
horizon-db-sync  1/1          3m27s     6m23s

==> v1beta1/Role
NAME                               AGE
horizon-openstack-horizon          6m25s
horizon-openstack-horizon-db-init  6m24s
horizon-openstack-horizon-db-sync  6m24s

==> v1beta1/RoleBinding
NAME                     AGE
horizon-horizon          6m24s
horizon-horizon-db-init  6m24s
horizon-horizon-db-sync  6m24s

==> v1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
horizon  1        1        1           1          6m23s

==> v1beta1/Ingress
NAME     HOSTS                                                          ADDRESS  PORTS  AGE
horizon  horizon,horizon.openstack,horizon.openstack.svc.cluster.local  80       6m23s

==> v1/NetworkPolicy
NAME            POD-SELECTOR         AGE
horizon-netpol  application=horizon  6m23s

==> v1beta1/PodDisruptionBudget
NAME     MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
horizon  0              N/A              1                    6m28s

==> v1/Secret
NAME              TYPE    DATA  AGE
horizon-etc       Opaque  10    6m27s
horizon-db-admin  Opaque  1     6m27s
horizon-db-user   Opaque  1     6m26s

==> v1/ServiceAccount
NAME             SECRETS  AGE
horizon          1        6m26s
horizon-db-init  1        6m26s
horizon-db-sync  1        6m26s


root@openstack:~/mira/openstack-helm# echo $?
0
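
The horizon-int NodePort (80:31000) from the listing above makes the dashboard reachable straight from the VM; a sketch of a smoke test:

<PRE>
# Sketch: the login page title should come back over the published NodePort.
curl -sL http://127.0.0.1:31000/ | grep -io '<title>[^<]*'
</PRE>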

===Rados GW===


+ helm upgrade --install radosgw-openstack ../openstack-helm-infra/ceph-rgw --namespace=openstack --values=/tmp/radosgw-openstack.yaml
Release "radosgw-openstack" does not exist. Installing it now.
NAME:   radosgw-openstack
E0203 11:14:02.662467   18336 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:33061->127.0.0.1:35352: write tcp4 127.0.0.1:33061->127.0.0.1:35352: write: broken pipe
LAST DEPLOYED: Sun Feb  3 11:14:02 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                    SECRETS  AGE
ceph-rgw                1        0s
ceph-ks-endpoints       1        0s
ceph-ks-service         1        0s
swift-ks-user           1        0s
ceph-rgw-storage-init   1        0s
radosgw-openstack-test  1        0s

==> v1/Job
NAME                   COMPLETIONS  DURATION  AGE
ceph-ks-endpoints      0/1          0s        0s
ceph-ks-service        0/1          0s        0s
swift-ks-user          0/1          0s        0s
ceph-rgw-storage-init  0/1          0s        0s

==> v1beta1/RoleBinding
NAME                                      AGE
radosgw-openstack-ceph-rgw                0s
radosgw-openstack-ceph-ks-endpoints       0s
radosgw-openstack-ceph-ks-service         0s
radosgw-openstack-swift-ks-user           0s
ceph-rgw-storage-init                     0s
radosgw-openstack-radosgw-openstack-test  0s

==> v1/Service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
radosgw   ClusterIP  10.98.97.193  <none>       80/TCP,443/TCP  0s
ceph-rgw  ClusterIP  10.98.50.234  <none>       8088/TCP        0s

==> v1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ceph-rgw  1        1        1           0          0s

==> v1beta1/Ingress
NAME     HOSTS                                                          ADDRESS  PORTS  AGE
radosgw  radosgw,radosgw.openstack,radosgw.openstack.svc.cluster.local  80       0s

==> v1/Pod(related)
NAME                         READY  STATUS    RESTARTS  AGE
ceph-rgw-66685f585d-st7dp    0/1    Init:0/3  0         0s
ceph-ks-endpoints-hkj77      0/3    Init:0/1  0         0s
ceph-ks-service-l4wdx        0/1    Init:0/1  0         0s
swift-ks-user-ktptt          0/1    Init:0/1  0         0s
ceph-rgw-storage-init-2vrpg  0/1    Init:0/2  0         0s

==> v1/Secret
NAME                    TYPE    DATA  AGE
ceph-keystone-user-rgw  Opaque  10    0s
ceph-keystone-user      Opaque  8     0s
ceph-keystone-admin     Opaque  8     0s
radosgw-s3-admin-creds  Opaque  3     0s

==> v1/ConfigMap
NAME                              DATA  AGE
ceph-rgw-bin-ks                   3     0s
ceph-rgw-bin                      7     0s
radosgw-openstack-ceph-templates  1     0s
ceph-rgw-etc                      1     0s

==> v1beta1/Role
NAME                                                AGE
radosgw-openstack-openstack-ceph-rgw                0s
radosgw-openstack-openstack-ceph-ks-endpoints       0s
radosgw-openstack-openstack-ceph-ks-service         0s
radosgw-openstack-openstack-swift-ks-user           0s
ceph-rgw-storage-init                               0s
radosgw-openstack-openstack-radosgw-openstack-test  0s


+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status radosgw-openstack
LAST DEPLOYED: Sun Feb  3 11:14:02 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Role
NAME                                                AGE
radosgw-openstack-openstack-ceph-rgw                3m54s
radosgw-openstack-openstack-ceph-ks-endpoints       3m54s
radosgw-openstack-openstack-ceph-ks-service         3m54s
radosgw-openstack-openstack-swift-ks-user           3m54s
ceph-rgw-storage-init                               3m54s
radosgw-openstack-openstack-radosgw-openstack-test  3m54s

==> v1beta1/RoleBinding
NAME                                      AGE
radosgw-openstack-ceph-rgw                3m54s
radosgw-openstack-ceph-ks-endpoints       3m54s
radosgw-openstack-ceph-ks-service         3m54s
radosgw-openstack-swift-ks-user           3m54s
ceph-rgw-storage-init                     3m54s
radosgw-openstack-radosgw-openstack-test  3m54s

==> v1/Service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
radosgw   ClusterIP  10.98.97.193  <none>       80/TCP,443/TCP  3m54s
ceph-rgw  ClusterIP  10.98.50.234  <none>       8088/TCP        3m54s

==> v1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ceph-rgw  1        1        1           1          3m54s

==> v1/Pod(related)
NAME                         READY  STATUS     RESTARTS  AGE
ceph-rgw-66685f585d-st7dp    1/1    Running    0         3m54s
ceph-ks-endpoints-hkj77      0/3    Completed  0         3m54s
ceph-ks-service-l4wdx        0/1    Completed  0         3m54s
swift-ks-user-ktptt          0/1    Completed  0         3m54s
ceph-rgw-storage-init-2vrpg  0/1    Completed  0         3m54s

==> v1/Secret
NAME                    TYPE    DATA  AGE
ceph-keystone-user-rgw  Opaque  10    3m54s
ceph-keystone-user      Opaque  8     3m54s
ceph-keystone-admin     Opaque  8     3m54s
radosgw-s3-admin-creds  Opaque  3     3m54s

==> v1/ConfigMap
NAME                              DATA  AGE
ceph-rgw-bin-ks                   3     3m54s
ceph-rgw-bin                      7     3m54s
radosgw-openstack-ceph-templates  1     3m54s
ceph-rgw-etc                      1     3m54s

==> v1/ServiceAccount
NAME                    SECRETS  AGE
ceph-rgw                1        3m54s
ceph-ks-endpoints       1        3m54s
ceph-ks-service         1        3m54s
swift-ks-user           1        3m54s
ceph-rgw-storage-init   1        3m54s
radosgw-openstack-test  1        3m54s

==> v1/Job
NAME                   COMPLETIONS  DURATION  AGE
ceph-ks-endpoints      1/1          3m43s     3m54s
ceph-ks-service        1/1          3m22s     3m54s
swift-ks-user          1/1          3m50s     3m54s
ceph-rgw-storage-init  1/1          70s       3m54s

==> v1beta1/Ingress
NAME     HOSTS                                                          ADDRESS  PORTS  AGE
radosgw  radosgw,radosgw.openstack,radosgw.openstack.svc.cluster.local  80       3m54s


+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ sleep 30
+ openstack service list
+----------------------------------+----------+----------------+
| ID                               | Name     | Type           |
+----------------------------------+----------+----------------+
| 4dc87bba6ab94fd3b70e9d4493ef4e44 | swift    | object-store   |
| 5c354b75377944888ac1cc9a3a088808 | heat     | orchestration  |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity       |
+----------------------------------+----------+----------------+
+ openstack container create mygreatcontainer
+--------------------------------------+------------------+-------------------------------------------------+
| account                              | container        | x-trans-id                                      |
+--------------------------------------+------------------+-------------------------------------------------+
| KEY_2cb7f2c19a6f4e148bc3f9d0d0b7ed44 | mygreatcontainer | tx000000000000000000018-005c56ce05-12f8-default |
+--------------------------------------+------------------+-------------------------------------------------+
+ curl -L -o /tmp/important-file.jpg https://imgflip.com/s/meme/Cute-Cat.jpg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 35343  100 35343    0     0   168k      0 --:--:-- --:--:-- --:--:--  168k
+ openstack object create --name superimportantfile.jpg mygreatcontainer /tmp/important-file.jpg
+------------------------+------------------+----------------------------------+
| object                 | container        | etag                             |
+------------------------+------------------+----------------------------------+
| superimportantfile.jpg | mygreatcontainer | d09dbe3a95308bb4abd216885e7d1c34 |
+------------------------+------------------+----------------------------------+


OS_CLOUD=openstack_helm openstack object list mygreatcontainer
+------------------------+
| Name                   |
+------------------------+
| superimportantfile.jpg |
+------------------------+
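
Since the etag of a simple (non-multipart) Swift object is its MD5, the round trip can be verified by downloading the object back; a sketch (/tmp/check.jpg is just a scratch path):

<PRE>
# Sketch: fetch the object and compare its MD5 with the etag shown above.
openstack object save --file /tmp/check.jpg mygreatcontainer superimportantfile.jpg
md5sum /tmp/check.jpg   # expect d09dbe3a95308bb4abd216885e7d1c34
</PRE>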


===Glance===

+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ openstack service list
+----------------------------------+----------+----------------+
| ID                               | Name     | Type           |
+----------------------------------+----------+----------------+
| 0ef5d6114769472a896e7d5bfc2eb41a | glance   | image          |
| 4dc87bba6ab94fd3b70e9d4493ef4e44 | swift    | object-store   |
| 5c354b75377944888ac1cc9a3a088808 | heat     | orchestration  |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity       |
+----------------------------------+----------+----------------+
+ sleep 30
+ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| ccfed5c7-b652-4dbc-8fc9-6dc4fec4985c | Cirros 0.3.5 64-bit | active |
+--------------------------------------+---------------------+--------+
+ openstack image show 'Cirros 0.3.5 64-bit'
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2019-02-03T11:26:45Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/ccfed5c7-b652-4dbc-8fc9-6dc4fec4985c/file |
| id               | ccfed5c7-b652-4dbc-8fc9-6dc4fec4985c                 |
| min_disk         | 1                                                    |
| min_ram          | 0                                                    |
| name             | Cirros 0.3.5 64-bit                                  |
| owner            | 2cb7f2c19a6f4e148bc3f9d0d0b7ed44                     |
| properties       | hypervisor_type='qemu', os_distro='cirros'           |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2019-02-03T11:26:48Z                                 |
| virtual_size     | None                                                 |
| visibility       | private                                              |
+------------------+------------------------------------------------------+
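
Glance keeps the checksum too, so the image can be verified the same way; a sketch:

<PRE>
# Sketch: download the image back and compare with the checksum field above.
openstack image save --file /tmp/cirros-check.qcow2 'Cirros 0.3.5 64-bit'
md5sum /tmp/cirros-check.qcow2   # expect f8ab98ff5e73ebab884d80c9dc9c7290
</PRE>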

===Cinder===

+ ./tools/deployment/common/wait-for-pods.sh openstack
+ export OS_CLOUD=openstack_helm
+ OS_CLOUD=openstack_helm
+ openstack service list
+----------------------------------+----------+----------------+
| ID                               | Name     | Type           |
+----------------------------------+----------+----------------+
| 0ef5d6114769472a896e7d5bfc2eb41a | glance   | image          |
| 151bcbb92c854322ae154447cc58662b | cinderv2 | volumev2       |
| 4dc87bba6ab94fd3b70e9d4493ef4e44 | swift    | object-store   |
| 5c354b75377944888ac1cc9a3a088808 | heat     | orchestration  |
| a55578c3ca8948d28055511c6a2e59bc | heat-cfn | cloudformation |
| f25bb35c9bf147e4b0d10487c8e8eeaf | cinderv3 | volumev3       |
| f32899524d4b46248ca82d317748bbfd | cinder   | volume         |
| fa7df0be3e99442d8fe42bda7519072f | keystone | identity       |
+----------------------------------+----------+----------------+
+ sleep 30
+ openstack volume type list
+--------------------------------------+------+-----------+
| ID                                   | Name | Is Public |
+--------------------------------------+------+-----------+
| 25ae326e-f840-4cb9-802e-4646dd237cad | rbd1 | True      |
+--------------------------------------+------+-----------+
root@openstack:~/mira/openstack-helm# echo $?
0
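
With the rbd1 type in place, creating a small test volume exercises the Ceph backend end to end; a sketch (test-vol is an arbitrary name):

<PRE>
# Sketch: create a 1 GB volume of the rbd1 type and check it becomes "available".
openstack volume create --size 1 --type rbd1 test-vol
openstack volume list
</PRE>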

===OpenVSwitch===

+ ./tools/deployment/common/wait-for-pods.sh openstack
+ helm status openvswitch
LAST DEPLOYED: Sun Feb  3 11:43:52 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME             DATA  AGE
openvswitch-bin  3     113s

==> v1/ServiceAccount
NAME                  SECRETS  AGE
openvswitch-db        1        113s
openvswitch-vswitchd  1        113s

==> v1/DaemonSet
NAME                  DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR        AGE
openvswitch-db        1        1        1      1           1          openvswitch=enabled  113s
openvswitch-vswitchd  1        1        1      1           1          openvswitch=enabled  113s

==> v1/NetworkPolicy
NAME                POD-SELECTOR             AGE
openvswitch-netpol  application=openvswitch  113s

==> v1/Pod(related)
NAME                        READY  STATUS   RESTARTS  AGE
openvswitch-db-nx579        1/1    Running  0         113s
openvswitch-vswitchd-p4xj5  1/1    Running  0         113s

root@openstack:~/mira/openstack-helm# echo $?
0
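
Beyond the DaemonSet being ready, vswitchd can be queried directly; a sketch (the pod name is taken from the listing above and will differ per deployment):

<PRE>
# Sketch: dump the OVS configuration from inside the vswitchd pod.
kubectl -n openstack exec openvswitch-vswitchd-p4xj5 -- ovs-vsctl show
</PRE>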

===LibVirt===

+ helm upgrade --install libvirt ../openstack-helm-infra/libvirt --namespace=openstack --set manifests.network_policy=true --values=/tmp/libvirt.yaml
Release "libvirt" does not exist. Installing it now.
NAME:   libvirt
LAST DEPLOYED: Sun Feb  3 11:46:46 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME           READY  STATUS    RESTARTS  AGE
libvirt-427lp  0/1    Init:0/3  0         0s

==> v1/ConfigMap
NAME         DATA  AGE
libvirt-bin  3     0s
libvirt-etc  2     0s

==> v1/ServiceAccount
NAME     SECRETS  AGE
libvirt  1        0s

==> v1beta1/Role
NAME                       AGE
libvirt-openstack-libvirt  0s

==> v1beta1/RoleBinding
NAME             AGE
libvirt-libvirt  0s

==> v1/DaemonSet
NAME     DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR                   AGE
libvirt  1        1        0      1           0          openstack-compute-node=enabled  0s

==> v1/NetworkPolicy
NAME            POD-SELECTOR         AGE
libvirt-netpol  application=libvirt  0s


+ helm status libvirt
LAST DEPLOYED: Sun Feb  3 11:46:46 2019
NAMESPACE: openstack
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Role
NAME                       AGE
libvirt-openstack-libvirt  1s

==> v1beta1/RoleBinding
NAME             AGE
libvirt-libvirt  1s

==> v1/DaemonSet
NAME     DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR                   AGE
libvirt  1        1        0      1           0          openstack-compute-node=enabled  1s

==> v1/NetworkPolicy
NAME            POD-SELECTOR         AGE
libvirt-netpol  application=libvirt  1s

==> v1/Pod(related)
NAME           READY  STATUS    RESTARTS  AGE
libvirt-427lp  0/1    Init:0/3  0         1s

==> v1/ConfigMap
NAME         DATA  AGE
libvirt-bin  3     1s
libvirt-etc  2     1s

==> v1/ServiceAccount
NAME     SECRETS  AGE
libvirt  1        1s
root@openstack:~/mira/openstack-helm# echo $?
0
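
At the moment of this capture the libvirt pod was still in Init:0/3; once it goes Running, a sanity check is one exec away (a sketch, pod name from the listing above):

<PRE>
# Sketch: virsh should answer from inside the pod once it is Running.
kubectl -n openstack exec libvirt-427lp -- virsh list --all
</PRE>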