Linux Cloud Technologies 2013

  Build the cloud on Linux!  This year looks very promising for Linux when it comes to building your private cloud using open source technologies.  Finally, Linux-based software and application…


OpenStack Foundation Fractures the Summit

Posted on September 6, 2016 at 5:06 pm

It was announced on the OpenStack-dev mailing list that the splitting of the summit is now official.  Barcelona will be the last ‘traditional’ summit for OpenStack devs and operators.  In its place will be two separate summits, at separate locations and times: the “Forum” and the “Project Teams Gathering”.  Let’s discuss how this will impact the community.


in cloud

Deploying OpenStack with OpenStack-Ansible

Posted on April 13, 2016 at 7:05 pm

Deploying OpenStack with OpenStack-Ansible will have your organization running a production-ready OpenStack cloud in a matter of minutes. In this tutorial, I’m going to break down the steps of using OpenStack-Ansible to get you up and running in no time!




How To Write a Python Web Framework From Scratch

Posted on November 18, 2015 at 11:48 pm

In recent years, Python has become a very popular web-programming language.  Unlike PHP, how to go about writing a web application is a little less straightforward in Python.  Most administrators are familiar with the LAMP stack, but there does not seem to be a de facto standard in the Python world.  In this article, I’ll break down the different layers of the Python web stack (on Linux, of course), as well as how to start your own framework.
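To make the layering concrete, here is a minimal sketch of the bottom of that stack: a bare WSGI application, the callable interface (PEP 3333) that most Python frameworks are built on top of.  The function name and port below are just placeholders for illustration.

```python
# A minimal WSGI application: a callable that takes the request environ
# and a start_response callback, and returns an iterable of bytes.
def app(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # The stdlib reference server is enough to try it out locally;
    # in production a server such as uWSGI or Gunicorn plays this role.
    from wsgiref.simple_server import make_server
    make_server("", 8000, app).serve_forever()
```

A framework, at its core, is just machinery that routes the environ to the right handler and builds the response for you.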


in How-To


Ubuntu 15.10 No GUI

Posted on November 14, 2015 at 2:32 am

Another Ubuntu release, another disappointment.  I maintain that, in my personal experience, Ubuntu is the buggiest of the mainstream distros.  This week I installed Ubuntu Server 15.10.  I wanted to test out Juju and the MAAS software, and get a nicely automated OpenStack cloud operating in my lab using said software.  Unfortunately, I ran into a couple of problems.

First, there was no option to install the GUI from the default install media.  Second, my attempt to install desktop software using the limited guides published by Ubuntu proved problematic.  There is no obvious way to get the GNOME desktop environment running, nor the Unity desktop.

I’m no Linux lightweight, but I expect basic services such as the GUI to work out of the box, especially if they are pushing the Juju and MAAS software, which is GUI-based.

I suspect that this release suffers from ‘permissions hell’.  There is an effort to make the CLI behave more like the desktop, prompting for the admin’s password without having to run sudo.  Perhaps this is a systemd thing?

I’ll revisit Ubuntu for its next LTS release, 16.04.  For now, that install is getting wiped for Debian.

in Ubuntu

Building and Installing the latest kernel on CentOS 7

Posted on September 23, 2015 at 7:47 pm

The Linux kernel is a constantly developing piece of software; new features and drivers are being added all the time.  Fortunately for administrators, the system call API is very stable, so using a newer kernel with your distribution is typically quite painless.  Building and installing a new kernel from source sounds quite intimidating, but in reality it could not be easier.  While you can find a third-party repo to install a newer kernel version from, I’m going to walk you through the steps to accomplish such a process.


in CentOS, How-To

kdbus is merged into linux-next

Posted on September 13, 2015 at 4:09 am

kdbus is a somewhat contentious kernel patch that is intended to provide the D-Bus API in kernel space.  It is slated to be a drop-in replacement for user-space dbus, with the initial beneficiary of the merge being the systemd software present on most recent distributions.  With Linux 4.3-rc1 out (which does not include kdbus), linux-next (patches proposed for inclusion in kernel 4.4) has been made available, and it does indeed include kdbus.


in Linux News

openvswitch GRE over IPv6 on CentOS 7

Posted on August 19, 2015 at 10:53 pm

At my day job, I help maintain a series of network checks that need to run in parallel.  Currently, we use one small VM to run each process.  At first, this was a fine arrangement, as there were only a handful of VMs, but now there are over 60.  That consumes more resources than I would like, and adds significantly to troubleshooting and administration costs.

Since Docker is the latest and greatest thing, my colleagues wanted to transition to using it.  While it would probably fit the bill, we don’t really need such a heavy-handed (in my opinion) solution to this problem.  The processes do not have to be isolated, just their network stacks.  Furthermore, it would be a one-off use of the technology that everyone would have to learn, and we already use Puppet for configuration management and code deployment.

Enter network namespaces.  Network namespaces solve the network isolation problem quite well, at least in our use case.  I did some reading on http://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/, which greatly helped my understanding of how to set up a network namespace and have it communicate with the outside world.  There are a couple of follow-ups to that post on his blog that describe using openvswitch, which I recommend reading as well, as they are very detailed.

In my particular case, I need my processes to communicate via a GRE interface.  Additionally, they need to be able to request an IP via DHCP over said interface, and to make matters worse, it needs to be done over IPv6.

openvswitch does not yet have an IPv6 GRE implementation, though you can add a properly configured interface to an openvswitch bridge to accomplish the same thing.

Scott’s article references Ubuntu 12.04.  After many failed attempts on CentOS 7, I first got this working on Ubuntu 14.04 with kernel 3.13 (the kernel it currently ships with and that apt updates to).  Ubuntu 14.04 also ships with iproute2 v3.12.0, and its repos include openvswitch 2.0.2.

CentOS / RHEL 7 ships with kernel 3.10.  While the necessary kernel module, ip6_gre.ko, is included in the mainline kernel source, it is not compiled by default in CentOS’s kernel.  Copying the source code from the mainline kernel and building the module succeeds, but the oldest version of iproute2 that seems to include the necessary functionality is 3.12, and that does not mesh well with 3.10.  Some things work, but ip6gretap does not.

The solution is to build and install kernel 3.13 (I used v3.13.11 to be specific; be sure to select the necessary IPv6 GRE module using make menuconfig) and the corresponding iproute2 v3.12.0.  These steps are detailed later in the article for your convenience.  I also built openvswitch v2.4 from source.

After you have the aforementioned software built and installed, here is how to bring it all together:

modprobe ip6_gre                # load the IPv6 GRE kernel module
ip -6 link add tap1 type ip6gretap local <local ipv6> remote <remote ipv6>
ovs-vsctl add-br b1             # create the openvswitch bridge
ovs-vsctl add-port b1 int0 -- set interface int0 type=internal
ovs-vsctl add-port b1 tap1      # attach the GRE tap to the bridge
ip link set tap1 up
ip link set b1 up
ip netns add ns1                # create the network namespace
ip link set netns ns1 int0      # move the internal port into the namespace
ip netns exec ns1 ip link set int0 up
ip netns exec ns1 ip link set lo up

After running those commands, you should have a functioning layer 2 GRE tunnel over IPv6.  In my case, I’m using an IPv4 connection inside the tunnel; I have not tested IPv6-in-IPv6 at this time.  If you have a dhcp server on the LAN at the other end of your tunnel, you should be able to run ip netns exec ns1 dhclient int0 to get your IP and associated dhcp information assigned to int0.

The modprobe is necessary in this instance because iproute2 does not seem to ask the kernel to load that module for us.  My guess is that this was fixed in a later version.

Please note that type ip6gretap will also forward layer 2 traffic, which may or may not be what you want.  After you have installed the updated iproute2 command, you can see other options, such as ip6gre.

Another gotcha: with IPv4, you do not have to designate the local address for the tunnel; either iproute2 or the kernel uses the system’s routing table to determine the correct outbound interface.  This does not work for IPv6: you must specify the local address for the IPv6 GRE endpoint, or it will silently fail (or possibly emit an error message in dmesg).

Building Kernel From Source

For the purposes of this article, this section is going to be brief; I won’t be going into a lot of detail.

Disclaimer: this may break who knows what on your specific system; on my systems, everything worked just great.  Also, building a later kernel from source means you won’t be getting updates from the standard CentOS / RHEL repos when you run yum update.  It will be up to you to ensure your kernel is patched with the latest security patches and bug fixes.  It might be worth your time to use Ubuntu or another mainstream distro that provides the updated software you need.  In my case, these are not public-facing servers and are for internal reports only, and I just don’t have the time to port a bunch of custom RPMs over to .debs.

yum groupinstall "Development Tools"
yum install openssl-devel
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable
git checkout v3.13.11
make oldconfig
make menuconfig # note: this is where you select the IPv6 GRE module.  I also selected IPv6 Source Address Routing; whether that makes a difference, I'm not sure.

make rpm

Please note, you technically shouldn’t build software as root (it might break your system if something goes wrong), so feel free to do everything after the yum install steps as a regular user.  I won’t tell if you build it as root though ;)

So, that should build 3 RPMs containing kernel 3.13: the kernel itself, the kernel headers (kernel-devel), and kernel-debug.  I installed all 3.

After you have built the kernel, you need to update grub2.  You can run the following:

grub2-editenv /boot/grub2/grubenv unset saved_entry

This will update grub to have the new kernel boot by default.  Go ahead and reboot your system so you can continue to build the other software.

Building iproute2 from source

iproute2 is closely tied to the kernel version.  In this particular case, there is no v3.13 available (perhaps there was at some point, I don’t know).  However, Ubuntu 14.04 uses v3.12.0 successfully, so that’s what we’ll be using as well.

As mentioned in the kernel section, there are risks associated with building software from source instead of the repos.  These steps may break your system, but they worked just fine on mine.

git clone git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git
cd iproute2
git checkout v3.12.0
./configure prefix=/usr
make && make install

That should build and install the software into the normal directories, overwriting existing files at the destination.  If you prefer not to overwrite the iproute2 software already installed on your system, omit the make install step, or change the configure line to ./configure prefix=/usr/local.

Building openvswitch from source

git clone https://github.com/openvswitch/ovs.git
cd ovs
./configure prefix=/usr
make dist

Those steps are from openvswitch’s INSTALL.RHEL.md file.  I recommend reading that file; there might be a package you need to install to make it work.  I highly recommend using make dist and installing the resulting RPM, as openvswitch will then install cleanly with systemd.

I’m sure I missed a package or development library somewhere along the way.  Closely watch the output of make and see what it complains about.  Oftentimes you can fix the problem by appending -devel to whatever missing library it complains about and letting yum install what you need; e.g., for “library foo not found”, run yum install foo-devel.

I hope you found this useful, as I spent a good deal of time getting this all to work.

Why not just use the latest kernel, 4.2, if I’m going to be using a newer kernel anyway?  Well, I tried that, and it didn’t work.  I think that using a kernel version supported long-term by another major distribution is as safe a bet as possible.  They will likely contribute security and bug fixes upstream to the mainline kernel, so you will be able to update your kernel as time goes on if absolutely necessary.

in CentOS, How-To

Deploying Apache Virtual Hosts using Puppet on CentOS 6

Posted on October 13, 2014 at 7:03 pm

Scaling a website to serve thousands or even tens of thousands of users simultaneously is a challenge often best tackled by horizontal scaling – distributing workloads across dozens or even hundreds of servers. As a tool for preparing servers for that task, Puppet offers low deployment costs, ease of use, and automated configuration management.

After a successful deployment of a new hardware farm, how can you assure a static configuration across your entire environment? Puppet addresses that problem. Let’s see how to install Puppet and use it to deploy, as an example, an Apache web server virtual host on CentOS 6. This tutorial shows how to deploy virtual hosts on only one server, but the same steps can be replicated to manage many servers. I’ll assume you’re familiar with the Linux command line, basic networking concepts, and using Apache.


in Uncategorized

AWS: Use instance role credentials to query ec2 API

Posted on September 16, 2014 at 3:58 pm

I was having some issues including a token in v4 signing requests using the ec2 query API.  With the help of the excellent AWS support, I now have a working example based on the documentation provided by Amazon.

# AWS Version 4 signing example

# EC2 API (DescribeRegions)

# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a GET request and passes the signature
# in the Authorization header.
import sys, os, base64, datetime, hashlib, hmac, json
import requests # pip install requests

# ************* REQUEST VALUES *************
method = 'GET'
service = 'ec2'
host = 'ec2.amazonaws.com'
region = 'us-east-1'
endpoint = 'https://ec2.amazonaws.com'
request_parameters = 'Action=DescribeRegions&Version=2013-10-15'

# Get the role name and temporary credentials from the EC2 instance
# metadata service (the standard link-local endpoint)
credentials_url = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
r = requests.get(credentials_url)
role = r.text
r = requests.get(credentials_url + role)
decoded_data = json.loads(r.text)
access_key = decoded_data['AccessKeyId']
secret_key = decoded_data['SecretAccessKey']
token = decoded_data['Token']

# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
 return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
 kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
 kRegion = sign(kDate, regionName)
 kService = sign(kRegion, serviceName)
 kSigning = sign(kService, 'aws4_request')
 return kSigning

# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope
# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

# Step 1 is to define the verb (GET, POST, etc.)--already done.

# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = '/'

# Step 3: Create the canonical query string. In this example (a GET request),
# request parameters are in the query string. Query string values must
# be URL-encoded (space=%20). The parameters must be sorted by name.
# For this example, the query string is pre-formatted in the request_parameters variable.
canonical_querystring = request_parameters

# Step 4: Create the canonical headers and signed headers. Header names
# and value must be trimmed and lowercase, and sorted in ASCII order.
# Note that there is a trailing \n.
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'

# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers lists those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
signed_headers = 'host;x-amz-date'

# Step 6: Create payload hash (hash of the request body content). For GET
# requests, the payload is an empty string ("").
payload_hash = hashlib.sha256('').hexdigest()

# Step 7: Combine elements to create the canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash
# ************* TASK 2: CREATE THE STRING TO SIGN*************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request).hexdigest()
# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, datestamp, region, service)

# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()
# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# The signing information can be either in a query string value or in
# a header named Authorization. This code shows how to use a header.
# Create authorization header and add to request headers
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature

# The request can include any headers, but MUST include "host", "x-amz-date",
# and (for this scenario) "Authorization". "host" and "x-amz-date" must
# be included in the canonical_headers and signed_headers, as noted
# earlier. Order here is not significant.
# Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'x-amz-date':amzdate , 'Authorization':authorization_header, 'X-Amz-Security-Token':token}
# ************* SEND THE REQUEST *************
request_url = endpoint + '?' + canonical_querystring

print '\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++'
print 'Request URL = ' + request_url
r = requests.get(request_url, headers=headers)

print '\nRESPONSE++++++++++++++++++++++++++++++++++++'
print 'Response code: %d\n' % r.status_code
print r.text
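If you want to sanity-check the key-derivation step on its own, the two helper functions above can be exercised in isolation.  Here is a small sketch (written for Python 3, where HMAC requires bytes; the secret key and credential scope values are made-up examples, not real credentials):

```python
import hashlib
import hmac

def sign(key, msg):
    # HMAC-SHA256 of msg, keyed with key; returns the raw digest bytes
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def get_signature_key(key, date_stamp, region_name, service_name):
    # The chain of HMACs defined by AWS Signature Version 4:
    # date -> region -> service -> "aws4_request"
    k_date = sign(('AWS4' + key).encode('utf-8'), date_stamp)
    k_region = sign(k_date, region_name)
    k_service = sign(k_region, service_name)
    return sign(k_service, 'aws4_request')

# Hypothetical secret key and scope, for illustration only
key = get_signature_key('EXAMPLEKEY', '20140916', 'us-east-1', 'ec2')
print(len(key))  # a SHA-256 digest is 32 bytes
```

Deriving the key is deterministic for a given secret key and credential scope, which is why the scope (date, region, service) must match between the signing key and the credential string in the Authorization header.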

Hopefully you find this useful.

in Uncategorized


Join Ubuntu 14.04 to Active Directory Domain using realmd

Posted on April 29, 2014 at 9:15 pm

This proved to be a difficult task.  I spent several hours scouring the internet for various bugs in this process to little avail.  I’m going to summarize what I did to actually get this puppy up and running.

Started with a clean install of Ubuntu 14.04 LTS Server Edition.  Pointed my DNS to my AD controller.


in Uncategorized