
External network access

Every tenant will have at least one network to launch instances on, which will be built as we
have just built a network. Whenever a new tenant is created, the steps that have just been
performed will need to be performed for that new tenant. All tenants will share a network that
provides external access to the outside world. Let's work through creating this external
network.

Preparing a network
Earlier, we discussed how Neutron is an API layer that manages virtual networking
resources. The preparation for external network access will be different for different Neutron
plugins; talk to your networking vendor about your specific implementation. In general, what
this preparation accomplishes is connecting the networking node to a set of externally
routable IP addresses. External just means external to the OpenStack cluster. These may be a
pool within your company's 10.0.0.0/8 network or a pool of IP addresses that are public on
the Internet. The tenant network IP addresses are not publicly routable. The floating IP
addresses allocated from the external network will be public and mapped to the tenant IP
addresses on the instances to provide access to the instances from outside your OpenStack
deployment. This is accomplished using Network Address Translation (NAT) rules.
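
If you are curious what those NAT rules end up looking like, you can peek inside the router's
network namespace on the network node once a tenant router and floating IPs exist. This is
only an illustrative sketch: qrouter-<router-id> is a placeholder for the namespace name,
which is created by the L3 agent and will differ on your deployment:

network# ip netns
network# ip netns exec qrouter-<router-id> iptables -t nat -S

The DNAT and SNAT rules that map floating IP addresses to tenant instance addresses show up
in that namespace's nat table.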
NOTE

In future versions of Packstack, part of this process may already be completed for you. If you
find some of it already completed by your installation, just use this section to gain an
understanding of what has been done for you.

Since we are using Open vSwitch (OVS) for this deployment, let's take a look at how to set it
up. Start by looking at the virtual switches defined on the networking node as follows:

network# ovs-vsctl show
a621d2b2-a4cb-4cbd-8d4a-f3e802125445
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-2"
            Interface "vxlan-2"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.123.102", out_key=flow, remote_ip="192.168.123.103"}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.0.1"

In this output, you can see three bridges. You can think of each of these exactly as you would
think of a switch: a network appliance with a set of ports that Ethernet cables plug into. A
port is just something plugged into one of these virtual switches. Each of these virtual
switches has a port to itself; br-int is patched to br-tun and br-tun is patched to br-int.
You can see the VXLAN tunnel established between the network node and the compute node
on br-tun. br-ex is just a switch that is not plugged into anything right now. br-int is known
as the integration bridge and is used for local attachments to OVS. br-tun is the tunnel bridge
used to establish tunnels between nodes, and br-ex is the external bridge, which is what we
need to focus on.
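
If you would rather query these bridges individually than read the full ovs-vsctl show dump,
a few standard ovs-vsctl subcommands are handy; the bridge and port names below are the ones
from the preceding output:

network# ovs-vsctl list-br
network# ovs-vsctl list-ports br-tun
network# ovs-vsctl port-to-br patch-tun

The first lists the bridges, the second lists the ports attached to a bridge, and the third
tells you which bridge a given port belongs to.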
The network node has interfaces for its actual network devices, which are
probably eth0 and eth1, or em1 and em2, depending on your distribution and hardware. What
needs to happen is for the device on your network node that can route to the external pool of
IP addresses to be plugged into br-ex. When this happens, it is important to make sure that
traffic flowing through that Ethernet device goes to OVS and not directly to the host itself.
To make sure this happens, the IP address associated with the Ethernet device must be moved
off the device and onto the br-ex OVS bridge. To do this, we will create a network device
configuration for br-ex and let Linux networking bring up both this OVS device and the
physical Ethernet device. Then, OVS will be used to bridge these two devices together. This is
not a traditional Linux networking bridge; it is attaching the physical device to an OVS
switch as a port.
Let's walk through what this looks like.

First, look at the IP configuration and the configuration files for our nodes in the following
table. Start by recalling the networking configuration defined in Chapter 2, RDO Installation:

        Control node      Networking node   Compute node
eth0    192.168.123.101   192.168.123.102   192.168.123.103
eth1    192.168.122.101   192.168.122.102   192.168.122.103

In this example, 192.168.122.0/24 is the external IP pool and 192.168.123.0/24 is the
internal subnet the OpenStack nodes use to communicate with each other. That means that the
VXLAN tunnels will be established over 192.168.123.0/24, as we saw in the OVS output, and the
external floating IP addresses will be allocated from 192.168.122.0/24. The network
configuration file for eth1 should look something like this:

network# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
NM_CONTROLLED=no
ONBOOT=yes
IPADDR=192.168.122.100
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=192.168.122.1

A configuration file for br-ex will not exist yet. A simple way to create one is to copy the
eth1 file, as shown in the following commands, because almost all of the configuration needed
for br-ex is already in that file:

network# cd /etc/sysconfig/network-scripts/
network# cp ifcfg-eth1 ifcfg-br-ex

To complete the device configuration preparation, remove all of the IP address settings from
the eth1 file and update the device name in the br-ex file. The final result will look like
this:

network# cat ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
NM_CONTROLLED=no
ONBOOT=yes

network# cat ifcfg-br-ex
DEVICE=br-ex
BOOTPROTO=static
NM_CONTROLLED=no
ONBOOT=yes
IPADDR=192.168.122.100
GATEWAY=192.168.122.1
NETMASK=255.255.255.0
DNS1=192.168.122.1
When networking is restarted, eth1 will be brought up and operate at layer 2 only, and br-
ex will be brought up ready to communicate on layer 3. If you are not familiar with the
difference between layer 2 and layer 3, layer 2 is communication at the MAC address level
and layer 3 is communication at the IP address level. The last piece of this puzzle is
associating them together with OVS. When eth1 gets plugged in as a port to br-ex in OVS,
OVS will take control of the interface and traffic traveling on it will be interrupted until the
devices are restarted. I am usually SSHed into a machine over my external device. To avoid
this loss in connectivity, you can perform the following OVS command and the network
restart in the same line; SSH will do a reconnect, and it will appear as though you never lost
connection:

network# ovs-vsctl add-port br-ex eth1 && service network restart
Restarting network (via systemctl):  [  OK  ]
network#

The first command adds the eth1 port to the br-ex bridge, or in other words, plugs eth1
into br-ex. When the prompt comes back, it means you have successfully prepared the underlying
network infrastructure in OVS for an external OpenStack network.
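
Before moving on, it is worth a quick sanity check. Roughly, you should see eth1 listed as a
port on br-ex, no IP address left on eth1, and 192.168.122.100 now configured on br-ex; the
exact output will vary by system:

network# ovs-vsctl list-ports br-ex
network# ip addr show eth1
network# ip addr show br-ex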
Creating an external network
Now that OVS has connectivity to the externally routeable IP pool that will be managed by
OpenStack, it's time to tell OpenStack about this set of resources it can manage. Because
an external network is a general purpose resource, it must be created by the administrator.

Go ahead and source your keystonerc_admin file on your control node so that you can create
the external network as a privileged user. Then, create the external network, as shown in the
following commands:

control# neutron net-create --tenant-id services ext \
  --router:external=True --shared

control# neutron subnet-create --tenant-id services ext 192.168.122.0/24 \
  --disable-dhcp \
  --allocation-pool start=192.168.122.2,end=192.168.122.99 \
  --allocation-pool start=192.168.122.110,end=192.168.122.254

You'll notice a few things here. First, the tenant that the network and subnet are created in is
the services tenant. As mentioned in Chapter 3, Identity Management, everything is a member
of a tenant, and general-purpose resources like these are no exception. They are put into
the services tenant because users don't have direct access to networks in this tenant, so
they cannot create instances attached directly to the external network. If they could, things
would not work, because the underlying virtual networking infrastructure is not structured to
allow instances to be plugged straight into the external network. Second, the network is marked
as external and shared. Third, note the allocation pools; the nodes use the addresses ending
in 101, 102, and 103, so I've left the addresses 100–109 out of the pools. This way, OpenStack
won't allocate the IP addresses already assigned to the nodes. Finally, DHCP is disabled. If
DHCP were not disabled, OpenStack would try to start and attach a dnsmasq service to the
external network, which could conflict with a DHCP service already running outside OpenStack
on that network.
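
To double-check that Neutron recorded the network and subnet the way you intended, you can
read them back; the IDs in the output will of course be unique to your deployment:

control# neutron net-show ext
control# neutron subnet-list

The net-show output should show router:external and shared set to True, and the subnet list
should show the two allocation pools with the 100–109 gap.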
The final step to make this network accessible to the instances that will be launched on a
tenant network is setting the tenant router's gateway to the external network. Let's do that for
the router created earlier, as shown in the following command:

control# neutron router-gateway-set my_router ext
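
A quick way to verify the gateway is to look at the router itself; its external_gateway_info
field should now reference the ext network. Once that is in place, tenants can allocate
floating IP addresses from ext and associate them with their instances' ports. The commands
below are only a sketch of that workflow; the floating IP and port IDs are placeholders you
would take from the output of floatingip-create and port-list:

control# neutron router-show my_router
control# neutron floatingip-create ext
control# neutron floatingip-associate <floatingip-id> <port-id>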
