Icehouse, 1 physical, separate compute, native virtualization
Howto: RHOSP5 install on RHEL7, with the controller on a VM, a native compute node for efficient virtualization, and instances that can be accessed externally.
This unofficial guide differs from other single-node evaluation deployments in that it still uses only one physical system, but it gives you experience with a separate compute node and is a good next step as you learn more about Neutron.
This document covers how to configure a RHEL7 system to use nova-compute for native virtualization while running the other OpenStack services on a VM. In the end you will be able to boot instances that can be accessed over the same network as your RHEL7 system. If you normally run RHEL or Fedora on your laptop and boot VMs of different flavors for development or experimentation using virt-manager, but want to switch to using OpenStack, then you might like the setup described in this document.
This documentation is heavily network oriented and explains the network configuration in detail, while mentioning the installation of KVM for virtualization, the LVM allocation of block storage, and the installation of OpenStack via packstack only in passing. I assume that you have already used KVM and LVM, have already done a packstack all-in-one install, and are not seeing these things for the first time. Instead, I assume you are reading this so that you can do more with your own personal OpenStack; e.g. use external compute nodes or provide instances that can be accessed from outside of your internal network. Thus, I focus on the network, since I find that to be the hard part of such a configuration.
The idea for this setup came from the Red Hat CL210 class (http://www.redhat.com/training/courses/cl210), where a similar setup was used.
I. Configure a RHEL7 base system
I am starting with kirk.example.com, a vanilla, minimally installed RHEL7.0 system which has 125G of free raw PV space that I will later make available to OpenStack. I will assume you have a similar system and will not cover installing RHEL7. We pick up after the install by registering to get your Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP) entitlements.
Register via subscription-manager according to Procedure 1.2.1 of the Single-Node Deployment guide. In summary:
subscription-manager register
subscription-manager list --available > available.txt
Find a pool from available.txt containing OpenStack and set it to a variable POOL.
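A quick way to scan available.txt for the right pool (just a convenience sketch; the field names match subscription-manager's usual output, but double-check against your own file):

grep -E 'Subscription Name|Pool ID' available.txt   # scan for an OpenStack-capable subscription
POOL=<paste-the-pool-id-you-chose>
echo $POOL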
subscription-manager attach --pool=$POOL
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms
Confirm you have rhel-7-server-openstack-5.0-rpms via 'yum repolist':
[root@kirk ~]# yum repolist
Loaded plugins: product-id, subscription-manager
repo id                                          repo name                    status
rhel-7-server-htb-rpms/x86_64                    Red Hat Enterprise Linux         0
rhel-7-server-openstack-5.0-rpms/7Server/x86_64  Red Hat OpenStack 5.0 for      381
rhel-7-server-rpms/7Server/x86_64                Red Hat Enterprise Linux     4,574
rhel-ha-for-rhel-7-server-htb-rpms/x86_64        Red Hat Enterprise Linux         0
rhel-ha-for-rhel-7-server-rpms/7Server/x86_64    Red Hat Enterprise Linux        45
rhel-lb-for-rhel-7-server-htb-rpms/x86_64        Red Hat Enterprise Linux         0
rhel-rs-for-rhel-7-server-htb-rpms/x86_64        Red Hat Enterprise Linux         0
rhel-rs-for-rhel-7-server-rpms/7Server/x86_64    Red Hat Enterprise Linux        56
rhel-sap-for-rhel-7-server-rpms/7Server/x86_64   Red Hat Enterprise Linux        14
repolist: 5,070
[root@kirk ~]#
II. Configure the network and virtual OpenStack controller node
There will be four phases to configuring the network.
- Configure a public bridge for the physical system
- Configure a virtual system to use the public bridge
- Configure Open vSwitch with internal/external bridges
- Plug ports of the compute/controller nodes into the bridges
Each of the above four steps corresponds to an orange number in the following diagram. In my network kirk's IP is 172.16.2.5, and I am going to configure my VM and my OpenStack instances with floating IPs in the same 172.16.2.0/24 subnet so they can all communicate.
Part 1. Configure a public bridge for the physical system
The default install of RHEL7 configures the device enp0s25 with NetworkManager.
[root@kirk network-scripts]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:21:9b:42:80:20 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.5/24 brd 172.16.2.255 scope global dynamic enp0s25
       valid_lft 431sec preferred_lft 431sec
    inet6 fe80::221:9bff:fe42:8020/64 scope link
       valid_lft forever preferred_lft forever
[root@kirk network-scripts]#
[root@kirk network-scripts]# cat ifcfg-enp0s25
TYPE="Ethernet"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
NAME="enp0s25"
UUID="fabe4c0c-5d92-4a6a-9aa2-8f983da403e0"
ONBOOT="yes"
HWADDR="00:21:9B:42:80:20"
PEERDNS="yes"
PEERROUTES="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
[root@kirk network-scripts]#
We will update this device's settings to configure a bridge as described in the RHEL7 Networking Guide, chapters 2.4 and 6.3:
Disable NetworkManager and back up ifcfg-enp0s25, which we will replace using the conventions of the guide referenced above.
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
systemctl start network.service
systemctl enable network.service
cd /etc/sysconfig/network-scripts/
cp ifcfg-enp0s25 /root/
mv ifcfg-enp0s25 ifcfg-eth0
sed s/\"//g -i ifcfg-eth0
Verify that "NM_CONTROLLED" isn't in /etc/sysconfig/network-scripts/*.
Update ifcfg-eth0 to the following, reboot and verify it still works.
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=00:21:9B:42:80:20
Kirk consistently gets the same IP from my DHCP server based on its MAC address, and that's the only reason I'm not configuring a static address.
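The DHCP server side is outside the scope of this guide, but for reference, if your DHCP server happens to be ISC dhcpd, such a MAC-based reservation looks roughly like this (the hostname and addresses are just my example values):

host kirk {
    hardware ethernet 00:21:9B:42:80:20;
    fixed-address 172.16.2.5;
}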
Install bridge support with `yum install bridge-utils` and then make a bridge by creating ifcfg-br100 with the following content:
DEVICE=br100
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
Modify ifcfg-eth0 so it uses bridge br100:
DEVICE=eth0
BRIDGE=br100
ONBOOT=yes
HWADDR=00:21:9B:42:80:20
Restart networking with `systemctl restart network` and verify that it works. If you are using a static address, replace BOOTPROTO=dhcp in ifcfg-br100 with entries for IPADDR, PREFIX, GATEWAY and DNS1, as sketched after the output below.
[root@kirk ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast master br100 state UP qlen 1000
    link/ether 00:21:9b:42:80:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::221:9bff:fe42:8020/64 scope link
       valid_lft forever preferred_lft forever
3: br100: mtu 1500 qdisc noqueue state UP
    link/ether 00:21:9b:42:80:20 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.5/24 brd 172.16.2.255 scope global dynamic br100
       valid_lft 587sec preferred_lft 587sec
    inet6 fe80::221:9bff:fe42:8020/64 scope link
       valid_lft forever preferred_lft forever
[root@kirk ~]#
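If you go the static route, ifcfg-br100 might look like this instead (a sketch using addresses from my subnet; substitute your own):

DEVICE=br100
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.2.5
PREFIX=24
GATEWAY=172.16.2.1
DNS1=8.8.8.8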
At present a packstack install of OSP5 on RHEL7 configures iptables, not firewalld, so switch the host over now. (If the iptables service is missing from your minimal install, it is provided by the iptables-services package.)
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl start iptables.service
systemctl enable iptables.service
We can now think of eth0 on kirk (the physical machine) as a port in br100. In the next step we will create a VM whose virtual eth0 will also plug into this bridge. The VM will run our OpenStack controller and will use its eth0 to reach br100; this connection will be our external network as far as OpenStack is concerned. The same VM will also have an eth1, which will carry the OpenStack controller's internal network. eth1 will connect back to kirk via kirk's br101, which we will configure later.
Part 2. Configure a virtual system to use the public bridge
We will now install KVM so we can run a VM called controller, which will have the hostname controller.example.com. Even though we'll be using kirk as our nova compute node later, we have a chicken-and-egg problem to solve, so we're going to use KVM directly to get our controller VM running first. The presumption is that any other VMs run on kirk will be managed via OpenStack.
Install KVM, verify the module and start/enable libvirtd.
yum install kvm libvirt libvirt-python python-virtinst virt-install qemu-kvm
lsmod | grep kvm
systemctl start libvirtd.service
systemctl enable libvirtd.service
After a default KVM server install as described above, KVM sets up a virtual network that exists only between the KVM server and its VMs. The most basic VM installation examples in the KVM documentation demonstrate setting up a VM that will exist on this private network and will be NAT'd by the KVM host. The rest of this section explains how to set up the KVM host so that the VMs can exist on the same network as the KVM host as virtual servers without any NAT'ing.
The private network uses a bridge called 'virbr0'.
[root@kirk init.d]# ip addr show virbr0
4: virbr0: mtu 1500 qdisc noqueue state DOWN
    link/ether 8a:ed:16:6a:51:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
[root@kirk init.d]#
virsh can interface with this device directly.
[root@kirk init.d]# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>c1326049-5c00-46b1-8728-28c3bd2a3342</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0' />
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
    </dhcp>
  </ip>
</network>
[root@kirk init.d]#
The default KVM network is defined in XML, as seen above, and the VM we create will also be defined in XML. We will modify that XML so the VM plugs not into the default network's bridge virbr0 but directly into br100. Note that `virsh net-list` only shows libvirt-defined networks, so you won't see the non-libvirt br100 there, but `brctl show` lists both bridges.
[root@kirk ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

[root@kirk ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.00219b428020       no              eth0
virbr0          8000.000000000000       yes
[root@kirk ~]#
The controller VM will use br100 as its network device, but it will also need a disk. A raw 125G PV is available, and we will carve a 20G subset of it out for the controller.
[root@kirk ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--   22.55g      0
  /dev/sda3       lvm2 a--  125.97g 125.97g
[root@kirk ~]#
[root@kirk ~]# vgcreate vgcontroller /dev/sda3
  Volume group "vgcontroller" successfully created
[root@kirk ~]# lvcreate -n lvcontroller0 -L 20G vgcontroller
  Logical volume "lvcontroller0" created
[root@kirk ~]#
[root@kirk ~]# lvs /dev/mapper/vgcontroller-lvcontroller0
  LV            VG           Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lvcontroller0 vgcontroller -wi-a----- 20.00g
[root@kirk ~]#
Install a RHEL7 VM called controller. In my case I have a USB disk containing a RHEL7 ISO mounted at the path given to the --location option in the command below.
virt-install --accelerate --hvm --vcpus=1 --virt-type kvm \
  --arch x86_64 --debug --connect qemu:///system \
  --network network:default --name controller --ram 1024 \
  --nographics --extra-args="console=ttyS0 text" \
  --location=/mnt/iso/rhel-server-7.0-x86_64-dvd.iso \
  --disk=/dev/mapper/vgcontroller-lvcontroller0
Note that I am specifying the raw LV I defined earlier and that I am using the default network; later we will switch this to br100. Follow the text-based installation.
After the installation you should have a VM called controller with an XML definition file at /etc/libvirt/qemu/controller.xml. Do not edit that file directly, however; instead run `virsh edit controller`. You will then see the content of the XML file, and you should change this:
<interface type='network'>
  <mac address='52:54:00:2e:47:a5'/>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
to this:

<interface type='bridge'>
  <mac address='52:54:00:2e:47:a5'/>
  <source bridge='br100'/>
</interface>
Write your changes and quit the editor. Then start the VM and console into it.
[root@kirk qemu]# virsh list
 Id    Name                           State
----------------------------------------------------

[root@kirk qemu]# virsh start controller
Domain controller started

[root@kirk qemu]# virsh list
 Id    Name                           State
----------------------------------------------------
 4     controller                     running

[root@kirk qemu]# virsh console 4
....
Once controller is up and running, console into it as above, disable NetworkManager as we did on kirk, and configure a static address by changing /etc/sysconfig/network-scripts/ifcfg-eth0 to contain the following:
DEVICE=eth0
HWADDR=52:54:00:2E:47:A5
ONBOOT=yes
IPADDR=172.16.2.105
PREFIX=24
GATEWAY=172.16.2.1
DNS1=8.8.8.8
DNS2=8.8.4.4
Reboot your VM and then verify that you can SSH to controller from outside of kirk, e.g. from your personal workstation. This will be important later when you need to access Horizon.
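For example, from your workstation (using the static address configured above):

ssh root@172.16.2.105     # run this from your workstation, not from kirk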
SSH to your VM and register it.
subscription-manager register
subscription-manager list --available > available.txt
Find a pool from available.txt and set it to a variable POOL.
subscription-manager attach --pool=$POOL
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms
You should now have two RHEL7 systems, ready for an OpenStack install, that can communicate using IPs on the same VLAN.
Part 3. Configure Open vSwitch with internal/external bridges
A. Explanation of Open vSwitch bridges and ports
We are now at the point where we will create the Open vSwitch bridges and ports shown in green as step 3 in the diagram. Many of these will be created by default via the packstack install, provided we direct packstack to do so. After that it's just a matter of plugging ports into the Open vSwitch bridges to get the full system working together.
The bridges below will exist by default after an OpenStack install.
- br-int (internal bridge) exists on all compute and networking nodes
- br-ex (external bridge) exists on networking nodes
br-eth1 will be created during the packstack install because we will add the following directives to our answers.txt file.
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:1000:2999
The first line above specifies that vlan (not local, as used for an all-in-one install) will be how the compute and controller nodes communicate. OVS will then automatically create the 'int' and 'phy' interfaces that tie the physical bridge to the integration bridge, and this is how traffic is shared among hosts. The physnet1 above is just a label used to map external networks and assign VLAN ranges. The trick is that packstack will use the above to run `ovs-vsctl add-br br-eth1` on both systems. This command creates the bridge br-eth1; later we will plug kirk's br101 port into the br-eth1 bridge and plug controller's eth1 port into it as well.
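For reference, packstack effectively runs the equivalent of the following on each node (the second command is handy afterwards for confirming what was created):

ovs-vsctl add-br br-eth1    # run by packstack on both kirk and controller
ovs-vsctl list-br           # afterwards, should list br-int and br-eth1 (plus br-ex on the controller)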
In the end we will have the following:
- br-eth1 as an OVS bridge for internal communication
- kirk will use br101 to communicate on br-eth1
- controller will use eth1 to communicate on br-eth1
- br-int can be thought of as a patch panel between compute nodes
Note that the internal network, which allows the compute node to communicate with the controller node, uses the 192.168.32.0/22 network. This is simply the default in a generic answers.txt, per the following:
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
Hopefully the above explains where the components of the network in the diagram will come from.
B. Install OpenStack
We will install packstack on the controller and then use it to install OpenStack components on the controller itself as well as on kirk: packstack will connect to kirk over SSH and install the compute components from the same session running on the controller.
To enable the controller to easily connect to kirk, we'll generate an SSH keypair with `ssh-keygen` on controller. Packstack will later ssh-copy-id this key onto the hosts it manages, including kirk. Next we'll install packstack and generate a default answers.txt file.
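For example (a sketch that accepts the default key location; check CONFIG_SSH_KEY in your generated answers.txt to confirm the path packstack expects):

[root@controller ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa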
[root@controller ~]# yum install openstack-packstack
...
[root@controller ~]# packstack --gen-answer-file /root/answers.txt
[root@controller ~]#
Make the following changes to /root/answers.txt
CONFIG_COMPUTE_HOSTS=172.16.2.5
CONFIG_HORIZON_SSL=y
CONFIG_PROVISION_DEMO_FLOATRANGE=172.16.2.1/24
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:1000:2999
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
The first line tells packstack to make kirk provide the compute resources, since it has more resources than the VM. The second line makes our web interface use HTTPS. You should also see a reference in answers.txt to the SSH keypair created in the previous step. The last three lines were explained in the previous section. Let's look more closely at the third line:
CONFIG_PROVISION_DEMO_FLOATRANGE=172.16.2.1/24
In my network, 172.16.2.0/24 is the IP range that the systems I experiment with get from my DHCP server. A packstack install configures a "public" network for you for a demo project. I am setting the above because I want the demo project to have a floating IP range that will actually work; the default value, 172.24.4.224/28, has no meaning in my network. We are creating plenty of virtual networks, but OpenStack's public network is where we make a connection to the outside world, so it should be a real range you can communicate with. In my case I want my instances to get floating IPs that are visible outside of OpenStack, in the same subnet as my other machines like kirk and controller. You'll want to set this relative to your environment.
Start the packstack install from the controller node and give it some time to complete; e.g. on my system it took about 40 minutes. You should see messages about puppet running on both the controller and kirk.
packstack --answer-file /root/answers.txt
When the install is finished, point your browser at the controller's IP and read the /root/keystonerc_admin file for the admin username and password. If you're able to log in to the dashboard then proceed.
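A few quick sanity checks at this point (a sketch; these are the Icehouse-era clients that packstack installs on the controller):

[root@controller ~]# source /root/keystonerc_admin
[root@controller ~(keystone_admin)]# keystone service-list    # keystone, nova, neutron, cinder, glance, ...
[root@controller ~(keystone_admin)]# nova service-list        # nova-compute should be listed on kirk
[root@controller ~(keystone_admin)]# ovs-vsctl list-br        # br-ex, br-eth1 and br-int created by packstack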
Part 4. Plug ports of the compute/controller nodes into the bridges
First we'll connect the private networks and tie the bridges together on kirk and controller:
[root@controller ~]# ovs-vsctl add-port br-eth1 eth1
[root@controller ~]#

[root@kirk ~]# ovs-vsctl add-port br-eth1 br101
[root@kirk ~]#
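These two commands assume that br101 already exists on kirk and that the controller VM has a second NIC attached to kirk's br101, appearing inside the VM as eth1. Neither step is shown elsewhere in this guide, so here is a minimal sketch of how I would set them up; treat it as my assumption about the intended layout rather than a tested recipe. On kirk, create /etc/sysconfig/network-scripts/ifcfg-br101 with no IP address (it only carries tenant VLAN traffic) and restart the network:

DEVICE=br101
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes

Then add a second interface to the controller via `virsh edit controller` (libvirt generates a MAC for it) and restart the VM so the new NIC shows up as eth1 inside the guest:

<interface type='bridge'>
  <source bridge='br101'/>
  <model type='virtio'/>
</interface>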
Now we'll connect the public networks by making the following changes in the controller VM's /etc/sysconfig/network-scripts/ directory. You should recognize this pattern from the work we did on kirk earlier.
[root@controller network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
HWADDR=52:54:00:2E:47:A5
[root@controller network-scripts]#
Note that there's no BRIDGE=br-ex because Open vSwitch will handle it.
[root@controller network-scripts]# cat ifcfg-br-ex
DEVICE=br-ex
ONBOOT=yes
IPADDR=172.16.2.105
PREFIX=24
GATEWAY=172.16.2.1
DNS1=8.8.8.8
DNS2=8.8.4.4
[root@controller network-scripts]#
Note that there's no TYPE=Bridge because Open vSwitch will handle it.
Now we need to plug controller's eth0 port into the br-ex bridge and restart the VM's network immediately afterward, in the same command:
[root@controller network-scripts(keystone_admin)]# ovs-vsctl add-port br-ex eth0 ; service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@controller network-scripts(keystone_admin)]#
You MUST do this as one line, as you will be disconnected and (hopefully) reconnected by this command.
C. Give more Cinder space to the controller
This step is slightly ad hoc, as I overlooked it until now. If you do not create a VG called cinder-volumes before the packstack run, packstack will create one for you backed by a loopback device on the original install disk. I forgot about this, so I'm going to allocate the space still available on kirk to the controller.
I have roughly 105G free in vgcontroller, as I'm only using 20G of it for the controller's LV.
[root@kirk ~]# vgdisplay vgcontroller | grep Free
  Free  PE / Size       27128 / 105.97 GiB
[root@kirk ~]#
[root@kirk ~]# lvdisplay /dev/vgcontroller/lvcontroller0 | grep Size
  LV Size                20.00 GiB
[root@kirk ~]#
I'm going to make another LV and push it up to controller.
[root@kirk ~]# lvcreate -n lvcinder -l 27128 vgcontroller
  Logical volume "lvcinder" created
[root@kirk ~]#
[root@kirk ~]# lvdisplay /dev/mapper/vgcontroller-lvcinder | grep Size
  LV Size                105.97 GiB
[root@kirk ~]#
Attach the disk to the controller VM
[root@kirk ~]# virsh attach-disk controller /dev/mapper/vgcontroller-lvcinder vdb --persistent
Disk attached successfully
[root@kirk ~]#
The --persistent option updates /etc/libvirt/qemu/controller.xml so it will be there when the system is rebooted. I can see the new disk from the VM.
[root@controller network-scripts(keystone_admin)]# fdisk -l /dev/vdb

Disk /dev/vdb: 113.8 GB, 113783078912 bytes, 222232576 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@controller network-scripts(keystone_admin)]#
Add it to the LVM pool.
[root@controller network-scripts(keystone_admin)]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created
[root@controller network-scripts(keystone_admin)]#
Add it to cinder-volumes:
[root@controller network-scripts(keystone_admin)]# vgextend cinder-volumes /dev/vdb
  Volume group "cinder-volumes" successfully extended
[root@controller network-scripts(keystone_admin)]#
Remove /dev/loop0:
[root@controller network-scripts(keystone_admin)]# vgreduce cinder-volumes /dev/loop0
  Removed "/dev/loop0" from volume group "cinder-volumes"
[root@controller network-scripts(keystone_admin)]#
Now I have more space for cinder.
[root@controller network-scripts(keystone_admin)]# vgdisplay cinder-volumes
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               105.96 GiB
  PE Size               4.00 MiB
  Total PE              27127
  Alloc PE / Size       0 / 0
  Free  PE / Size       27127 / 105.96 GiB
  VG UUID               bMme1S-qxgf-GG5X-Oo9W-jJIu-gaPe-VnhMDw
[root@controller network-scripts(keystone_admin)]#
III. Test OpenStack
Follow Part III, Using OpenStack, from the Getting Started with Red Hat Enterprise Linux OpenStack Platform guide.
The documentation above will have you create an instance. You should be able to see your instance running on your hypervisor right next to the controller VM. Just take care to let OpenStack manage your instances on KVM and use virt-manager (or virsh) only to manage your controller.
[root@kirk nova]# virsh list
 Id    Name                           State
----------------------------------------------------
 4     controller                     running
 5     instance-00000002              running
[root@kirk nova]#
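To see the same instance from the OpenStack side rather than from libvirt, something like this works (using the admin credentials packstack created):

[root@controller ~]# source /root/keystonerc_admin
[root@controller ~(keystone_admin)]# nova list --all-tenants   # the instance backing instance-00000002 above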
The documentation above will also have you create a network and a router to which your instance attaches. Set the gateway of your router to the public network (172.16.2.0/24) and allocate a floating IP from this range. Then associate that floating IP with your instance. You should get an IP in the public range that you can use to SSH to your instance.
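If you prefer the CLI to the dashboard for the floating IP steps, the Icehouse-era commands look roughly like this (the network name comes from packstack's demo provisioning and the instance name is a placeholder; substitute your own):

source /root/keystonerc_demo
neutron floatingip-create public                     # allocate an address from the external network
nova add-floating-ip <instance-name> <floating-ip>   # associate the address returned above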
[jfulton@kreacher ~]$ ssh 172.16.2.109 -l cirros
cirros@172.16.2.109's password:
$ uname -a
Linux cirros 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 GNU/Linux
$
Your instance should also be able to ping hosts on the Internet.
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=59 time=37.428 ms
64 bytes from 8.8.8.8: seq=1 ttl=59 time=28.661 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 28.661/33.044/37.428 ms
$
IV. Troubleshooting
If you reboot your controller and compute node and are unable to reach the controller VM or instances via the network, try running the following command from the compute node:
iptables --flush FORWARD
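Flushing the whole FORWARD chain is a blunt instrument; a narrower alternative I would consider (my own workaround, not part of the original setup) is to explicitly accept traffic that is only being bridged, then persist it with `service iptables save` as described below:

iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT   # accept traffic crossing the Linux bridge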
You may also need to run the following from the controller node:
ovs-vsctl del-port br-ex eth0; service network restart
ovs-vsctl --may-exist add-port br-ex eth0; service network restart
The ovs commands on the controller node are done because of the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1115151
The iptables command is needed because of the way we're virtualizing our controller node. When the hypervisor reboots, it may not forward traffic to the controller node; if you run `iptables -L` and see that the FORWARD chain contains the following rule, then you have this problem.
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Running `iptables --flush FORWARD` flushes the above rule so that you can SSH to your controller node and other instances. If you run the above and then `service iptables save`, then you shouldn't need to run the flush each time. You might also want to configure your controller VM to start when the hypervisor starts:
[root@kirk ~]# virsh autostart controller
Domain controller marked as autostarted
[root@kirk ~]#
After following the above, you should be able to reboot kirk, and then point a browser at your web interface to start using OpenStack.