Building a Private Cloud With OpenStack Part 3 (Installing OpenStack with PackStack)

At this point, you are ready to install OpenStack on the three separate nodes. OpenStack will be installed using the PackStack utility. PackStack is delivered as an RPM package and uses Puppet manifests to deploy a multi-node or all-in-one OpenStack environment.

Run the following commands to prepare and install PackStack:

NOTE: commands are to be run on each node

  1. systemctl stop firewalld
  2. systemctl disable firewalld
  3. systemctl stop NetworkManager
  4. systemctl disable NetworkManager
  5. systemctl enable network
  6. systemctl start network

NOTE: a static IP address needs to be configured on each node's internal interface so that the OpenStack services can communicate with each other.
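
For example, a minimal ifcfg file for the controller/storage node's internal interface could look like the following (the interface name enp2s0 is an assumption; the address matches the scheme used in the answer file below). Create or edit /etc/sysconfig/network-scripts/ifcfg-enp2s0:

DEVICE=enp2s0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.2.10
PREFIX=24

Then restart networking with systemctl restart network.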

**The following commands should be run on the controller/storage node**

yum install openstack-packstack – installs the RPM package that contains the Puppet manifests

packstack --gen-answer-file=~/answers.txt – generates an answer file used to specify which node in the internal network each service will reside on

Edit the answer file and change the parameters listed below. Ensure that each IP address corresponds to the node on which you want that OpenStack service installed. For this example, the answer file is configured to deploy based on the network diagram in Part 1.

CONFIG_AMQP_HOST=192.168.2.10

CONFIG_MARIADB_HOST=192.168.2.10

CONFIG_KEYSTONE_LDAP_URL=ldap://192.168.2.10

CONFIG_MONGODB_HOST=192.168.2.10

CONFIG_REDIS_HOST=192.168.2.10

*CONFIG_CONTROLLER_HOST=192.168.2.10

*CONFIG_COMPUTE_HOSTS=192.168.2.30

*CONFIG_NETWORK_HOSTS=192.168.2.20

*CONFIG_STORAGE_HOST=192.168.2.10

CONFIG_SAHARA_HOST=192.168.2.10

NOTE: The parameters marked with an * are where each node's IP address is specified, indicating which services will be installed on which node
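
A quick way to verify (or set) these values non-interactively is shown below; this is only a sketch using the parameter names from the list above:

grep -E 'CONFIG_(CONTROLLER_HOST|COMPUTE_HOSTS|NETWORK_HOSTS|STORAGE_HOST)=' ~/answers.txt
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.2.30/' ~/answers.txt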

Once the answer file is configured correctly, run the following command on the controller/storage node.

packstack --answer-file=~/answers.txt

The installation will begin, and at the end an “Installation has completed successfully” message will be displayed. Be aware that the installation can take an hour or more.
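
Once it completes, PackStack leaves an admin credentials file (keystonerc_admin) in root's home directory on the node where it was run. Assuming the OpenStack command-line client was installed as part of the deployment, a quick sanity check is:

source ~/keystonerc_admin
openstack service list

The Horizon dashboard should then be reachable at http://<controller-public-IP>/dashboard.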

Building a Private Cloud With OpenStack Part 2 (CentOS Minimal Installation and Local Yum Repository Configuration)

Installation:

To make things simple, I decided to install CentOS with GNOME desktop on the Controller Node and install CentOS Minimal on the i3 Servers. All the configuration work will be completed on the HP desktop with SSH access to the i3 Servers. During the installations, all defaults were accepted, such as automatic partitioning, and the usual manual configurations were completed, such as setting the hostname, time, date, and timezone.

Yum Local Repository:

We will configure a local Yum repository on a VM hosted on the Controller Node so that the i3 servers can pull the required OpenStack packages during the OpenStack installation (future posts will cover the installation itself). Hosting a local repository on the public-facing Controller Node is a common practice to save Internet bandwidth and avoid downloading the same repository packages over and over for other internal networks. From a security perspective, it also keeps the Network and Compute nodes off the external network, since packages are obtained from the local HP desktop on the internal network. It also allows a more restrictive environment, controlling which internal hosts can download which packages.

Configuration Requirements:

Root privileges in the hosted repository system (HP Desktop in this case)

CentOS 7 Everything DVD ISO File – this will be used to host packages from the regular CentOS Installation DVD

Running Apache Web Server to share the repository to our local network

Internet Access from the local repository host (HP Desktop)

Step-By-Step Configuration:

The first local repository will contain the standard CentOS packages that the CentOS project provides:

(Make sure that all commands listed below are executed by root)

  1. yum install createrepo – the createrepo package is required for creating the repository
  2. mkdir -p /var/www/html/repository/centos/7 – directories are created in the Apache document root to store the packages and share them on the local network
  3. mount ~/CentOS-7-x86_64-Everything-1503-01.iso /mnt/ – depending on which version of CentOS was downloaded, make sure the correct ISO filename is used. This mounts the ISO file to the /mnt directory, giving access to all packages that reside in the ISO file. Ensure that the ISO file is located in the /root directory.
  4. cp -r /mnt/Packages/* /var/www/html/repository/centos/7/ – this copies all the packages from the ISO file to the Apache directory created above so they can be shared on the local network.
  5. restorecon -v -R /var/www/html – this updates the SELinux security contexts for all the new files that were copied.
  6. createrepo --database /var/www/html/repository/centos/7 – this creates the repository database that internal hosts read from when attempting to download a package. Database files are generated based on the RPM packages in the directory; Yum then searches these database files when searching for or installing a package.
  7. rsync -avz rsync://mirror.csclub.uwaterloo.ca/centos/7/os/x86_64/Packages/ /var/www/html/repository/centos/7 – this pulls all updates to the RPM packages from the closest external repository mirror; in this case, the Waterloo mirror was used. In a real production environment, a script or cron job could do this every day during off-peak business hours – a sketch of such a cron entry follows this list.
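
A sketch of such a cron entry, added with crontab -e as root (the 02:00 schedule is only an example), re-syncs the packages nightly and rebuilds the repository metadata:

0 2 * * * rsync -avz rsync://mirror.csclub.uwaterloo.ca/centos/7/os/x86_64/Packages/ /var/www/html/repository/centos/7 && createrepo --update /var/www/html/repository/centos/7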

All CentOS packages from the ISO file are now obtainable locally from any internal host while saving significant network bandwidth. The next steps will be to create another local repository that shares all the OpenStack packages required for OpenStack installation and setting up the i3 servers to install packages only from these local repositories.

Commands to create the openstack-ocata repository:

NOTE: The openstack-ocata repository contains all the required installation packages for OpenStack.

  1. yum install createrepo (if not installed already)
  2. mkdir -p /var/www/html/repository/centos/cloud/7/x86_64/openstack-ocata
  3. rsync -avz rsync://mirror.csclub.uwaterloo.ca/centos/7/cloud/x86_64/openstack-ocata/ /var/www/html/repository/centos/cloud/7/x86_64/openstack-ocata
  4. createrepo --database /var/www/html/repository/centos/cloud/7/x86_64/openstack-ocata
  5. restorecon -v -R /var/www/html

NOTE: Additional packages are needed for a successful OpenStack installation; they are located in the extras repository, so that repository needs to be mirrored locally as well.

  1. yum install createrepo (if not installed already)
  2. mkdir -p /var/www/html/repository/centos/7/x86_64/extras
  3. rsync -avz rsync://mirror.csclub.uwaterloo.ca/centos/7/extras/x86_64/Packages/ /var/www/html/repository/centos/7/x86_64/extras
  4. createrepo --database /var/www/html/repository/centos/7/x86_64/extras
  5. restorecon -v -R /var/www/html

Ensure that the Apache service is installed and running on both the VM and the Controller node. This will allow the repositories to be shared on the local network and will later allow access to the Horizon dashboard provided by OpenStack.

  1. yum install httpd
  2. systemctl enable httpd
  3. systemctl start httpd
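
A quick way to confirm that Apache is serving the repository is to request the repository metadata over HTTP (the URL assumes the mirror path created earlier):

curl -I http://<IP-Address-of-VM>/repository/centos/7/repodata/repomd.xml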

Connecting to the Apache web page may fail. In that case, ensure that firewalld and/or iptables allow traffic on port 80. There is also an SELinux boolean that needs to be enabled to allow connections from other networks.

setsebool -P httpd_can_network_connect on – enables the SELinux boolean that allows httpd to make network connections
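
If firewalld is running on the repository host, port 80 can be opened with the following (a sketch; the default zone is assumed):

firewall-cmd --permanent --add-service=http
firewall-cmd --reload

With plain iptables, an equivalent rule is iptables -I INPUT -p tcp --dport 80 -j ACCEPT.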

The repositories are now created locally. The next step is to direct the Compute and Network nodes to use the local repositories when installing packages with YUM.

(Steps are to be completed on the Network and Compute Nodes)

  1. cd /etc/yum.repos.d
  2. mkdir hold
  3. mv *.repo hold
  4. vi myCentosMirror.repo
  5. vi openstack-ocata.repo
  6. vi centosextras.repo

File Contents:

 

myCentosMirror.repo:

[myCentosMirror]
name=my CentOS 7 Mirror
baseurl=http://<IP-Address-of-VM>/repository/centos/7
gpgcheck=0
enabled=1

 

openstack-ocata.repo:

[openstack-ocata]
name=openstack-ocata-repository
baseurl=http://<IP-Address-of-VM>/repository/centos/cloud/7/x86_64/openstack-ocata
gpgcheck=0
enabled=1

 

centosextras.repo:

[centosextras]
name=CentOS Extras
baseurl=http://<IP-Address-of-VM>/repository/centos/7/x86_64/extras
gpgcheck=0
enabled=1

 

The Compute and Network Nodes are now set up to search the local repositories for any packages the user requests. However, there is no route to the Virtual Machine on the Controller Node yet. Therefore, IP forwarding needs to be enabled on the Controller Node, two IPTABLES rules need to be added to the Controller Node, and static routes need to be set on the Compute and Network Nodes.

Run the following commands on the Controller Node:

vi /etc/sysctl.conf – add the following line: net.ipv4.ip_forward = 1

sysctl -p /etc/sysctl.conf – applies the change

iptables -I FORWARD -i <VM-network-interface> -j ACCEPT

iptables -I FORWARD -o <VM-network-interface> -j ACCEPT
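
To confirm the settings took effect, the following can be run on the Controller Node (output will vary with your interface names):

sysctl net.ipv4.ip_forward
iptables -L FORWARD -n -v

Note that iptables rules added this way do not survive a reboot; if the iptables-services package is in use (an assumption about this setup), iptables-save > /etc/sysconfig/iptables will persist them.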

Run the following commands on the Compute and Network Nodes:

vi /etc/sysconfig/network-scripts/route-<interface-that-packets-leave-from>

Contents of route-<interface> file:

10.0.0.0/24 via 192.168.2.10 dev <interface-that-packets-leave-from>

NOTE: 10.0.0.0/24 should be changed to the correct VM network that is used.
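
As a concrete example using the addressing in this series (the interface name enp3s0 is an assumption; substitute the interface that faces the controller), the route file on the Compute Node could contain:

10.0.0.0/24 via 192.168.2.10 dev enp3s0

After restarting the network service, the route and repository access can be verified:

systemctl restart network
ip route show
yum clean all
yum repolist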


Building a Private Cloud With OpenStack Part 1 (Overview)

This project implements a technology that is growing every day and continuing to expand into many enterprises across the IT industry. OpenStack was chosen because many companies are moving to a public, private, or hybrid cloud to reduce business costs and pay only for what is actually consumed.

The diagram below displays the hardware requirements that will be used to create a private cloud with OpenStack.

The first hardware requirement is the Netgear switch, which is configured with three VLANs. The first VLAN (VLAN1) carries the public traffic through which tenant users reach their private virtual networks and/or create new projects. The second VLAN (VLAN10) carries the virtual networking traffic that OpenStack's networking service (Neutron) provides to tenant users. The third VLAN (VLAN30) carries the OpenStack internal traffic and is used for switch management. This VLAN allows the OpenStack services to communicate with each other when required, for example, to run a VM or create a virtual network; every request made by the administrator or a tenant user requires the services to communicate with one another.

Three separate server nodes will be used to host OpenStack services instead of using an all-in-one server approach. The Controller/Storage node is hosted on a regular HP desktop computer with 8GB RAM and 1TB HDD. This node is used for hosting the Horizon dashboard for the administrator and tenant users. Tenant users are able to access the dashboard externally and create, for example, instances (VMs), virtual networks, storage volumes, etc. Two i3-3240 3.40Ghz servers are used for the Compute node and the Network node with 8GB RAM and a 256GB Samsung 850 Pro SSD in each server.

The IP addressing scheme for VLAN30 is shown in the diagram below. In short, the controller/storage node uses 192.168.2.10, the network node 192.168.2.20, and the compute node 192.168.2.30, matching the answer file used in Part 3.

OpenStackDiagram

VLAN30 internal OpenStack traffic is carried over fiber media to provide fast communication between the OpenStack services. The other interface networks use regular copper RJ-45 connections.

Switch Port Assignment:

VLAN1 – Gigabit ports 5 – 22

VLAN10 – Gigabit ports 1 – 4

VLAN30 – Gigabit ports 23 – 28

Basic Nagios Install and Configuration Using Ansible


--- # Installing and Configuring Nagios
- hosts: local
  remote_user: ansible
  become: yes
  become_method: sudo
  connection: local
  gather_facts: yes

  tasks:
    - name: Install epel repository
      yum:
        name: epel-release
        state: latest

    - name: Install Apache
      yum:
        name: httpd
        state: latest

    - name: Turn on HTTP and enable at boot
      service:
        name: httpd
        state: started
        enabled: yes

    - name: Install Nagios Server
      yum:
        name: nagios
        state: latest

    - name: Install Nagios Plugins
      yum:
        name: nagios-plugins-all
        state: latest

    - name: Nagios Config
      copy:
        remote_src: true
        src: /home/ansible/NagiosFiles/nagios.cfg
        dest: /etc/nagios/nagios.cfg

    - name: Nagios Host Config
      copy:
        remote_src: true
        src: /home/ansible/NagiosFiles/objects/
        dest: /etc/nagios/conf.d/

    - name: Turn on Nagios service and enable at boot
      service:
        name: nagios
        state: started
        enabled: yes
*NOTE: an error currently occurs during the task that copies the Nagios configuration files (likely because the copy module with remote_src does not copy directories recursively on older Ansible versions); this will be fixed.
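
For reference, a playbook like the one above would be run from the Ansible host with something like the following (the playbook filename is an assumption):

ansible-playbook nagios.yml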