Linux Cloud Technologies 2013

  Build the cloud on Linux!  This year looks very promising for Linux when it comes to building your private cloud using open source technologies.  Finally, Linux-based software and application…


Building and Installing the latest kernel on CentOS 7

by on September 23, 2015 at 7:47 pm

The Linux kernel is a constantly evolving piece of software; new features and drivers are added all the time.  Fortunately for administrators, the system call API is very stable, so running a newer kernel on your distribution is typically quite painless.  Building and installing a new kernel from source sounds intimidating, but in reality it could not be easier.  While you could find a third-party repo that packages a newer kernel version, I’m going to walk you through building one yourself.


in CentOS, How-To

kdbus is merged into linux-next

by on September 13, 2015 at 4:09 am

kdbus is a somewhat contentious kernel patch that is intended to provide the D-Bus API in kernel space.  It is slated to be a drop-in replacement for user-space dbus, with the initial beneficiary of the merge being systemd, which is present on most recent distributions.  With Linux 4.3-rc1 out (which does not include kdbus), linux-next (the staging tree for patches proposed for kernel 4.4) has been made available, and it does indeed include kdbus.


in Linux News

openvswitch GRE over IPv6 on CentOS 7

by on August 19, 2015 at 10:53 pm

At my day job, I help maintain a series of network checks that need to run in parallel.  Currently, we use one small VM to run each process.  At first, this was a fine arrangement, as there were only a handful of VMs, but now there are over 60.  That consumes more resources than I would like, and adds significantly to troubleshooting and administration costs.

Since docker is the latest and greatest thing, my colleagues wanted to transition to it.  While it would probably fit the bill, we don’t really need such a heavy-handed (in my opinion) solution to this problem.  The processes do not have to be isolated, just their network stacks.  Furthermore, it would be a one-off use of a technology that everyone would have to learn, and we already use puppet for configuration management and code deployment.

Enter network namespaces.  Network namespaces solve the network isolation problem quite well, at least in our use case.  I did some reading on http://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/, which greatly helped my understanding of how to set up a network namespace and have it communicate with the outside world.  There are a couple of follow-ups to that post on his blog that describe using openvswitch; I recommend reading those as well, as they are very detailed.
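To make the idea concrete, here is a minimal sketch of my own (the names and the 10.0.0.x addresses are purely illustrative): a namespace connected to the host via a veth pair.

ip netns add demo
ip link add veth0 type veth peer name veth1    # veth pair: one end stays on the host
ip link set veth1 netns demo                   # move the other end into the namespace
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec demo ip addr add 10.0.0.2/24 dev veth1
ip netns exec demo ip link set veth1 up
ip netns exec demo ping -c 1 10.0.0.1          # the namespace can now reach the host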

In my particular case, I need my processes to communicate via a GRE interface.  Additionally, they need to be able to request an IP via DHCP over said interface, and to make matters worse, it needs to be done over IPv6.

openvswitch does not yet have a native IPv6 GRE implementation, but you can add a properly configured kernel GRE interface to an openvswitch bridge to accomplish the same thing.

Scott’s article references Ubuntu 12.04.  I first got this working (after many failed attempts on CentOS 7) on Ubuntu 14.04 with kernel 3.13 (the kernel it currently ships with and that apt updates to).  Ubuntu 14.04 also ships with iproute2 v3.12.0, and its repos include openvswitch 2.0.2.

CentOS / RHEL 7 ships with kernel 3.10.  While the necessary kernel module, ip6_gre.ko, is included in the mainline kernel source, it is not compiled by default in CentOS’s kernel.  Copying the source code from the mainline kernel and building the module succeeds, but the oldest version of iproute2 that seems to include the necessary functionality is 3.12, and that does not mesh well with kernel 3.10.  Some things work, but ip6gretap does not.
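For reference, a quick way to check whether your running kernel was built with the module:

grep IP6_GRE /boot/config-$(uname -r)     # "is not set" (or no output) means you need to build it
modprobe ip6_gre && lsmod | grep ip6_gre  # will fail on a stock CentOS 7 kernel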

The solution is to build and install kernel 3.13 (I used v3.13.11 to be specific; be sure to select the necessary IPv6 GRE module using make menuconfig) and the corresponding iproute2 v3.12.0.  These steps are detailed later in the article for your convenience.  I also built openvswitch v2.4 from source.

After you have the aforementioned software built and installed, here is how to bring it all together:

modprobe ip6_gre                            # load the IPv6 GRE module built earlier
ip -6 link add tap1 type ip6gretap local <local ipv6> remote <remote ipv6>
ovs-vsctl add-br b1                         # create the openvswitch bridge
ovs-vsctl add-port b1 int0 -- set interface int0 type=internal
ovs-vsctl add-port b1 tap1                  # attach the GRE tap to the bridge
ip link set tap1 up
ip link set b1 up
ip netns add ns1                            # create the namespace
ip link set netns ns1 int0                  # move the internal port into it
ip netns exec ns1 ip link set int0 up
ip netns exec ns1 ip link set lo up

After running those commands, you should have a functioning layer 2 GRE tunnel over IPv6.  In my case, I’m running IPv4 inside the tunnel; I have not tested IPv6-in-IPv6 at this time.  If you have a DHCP server on the LAN at the other end of your tunnel, you should be able to run dhclient int0 to get an IP and the associated DHCP information assigned to int0.
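For example, assuming your DHCP client is dhclient and the placeholder is filled in with an address on the remote LAN, the checks from inside the namespace look like this:

ip netns exec ns1 dhclient int0              # request a lease over the tunnel
ip netns exec ns1 ip addr show int0          # confirm the address was assigned
ip netns exec ns1 ping -c 3 <remote LAN IP>  # verify reachability through the tunnel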

modprobe is necessary in this instance because iproute2 does not seem to ask the kernel to load the module for us.  My guess is that this was fixed in a later version.
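If you want the module loaded automatically at boot on CentOS 7, systemd’s modules-load mechanism should handle it; a one-liner, assuming the module is named ip6_gre as above:

echo ip6_gre > /etc/modules-load.d/ip6_gre.conf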

Please note, type ip6gretap will also forward layer 2 traffic.  This may or may not be what you want; after you have installed the updated iproute2, you can see other options, such as ip6gre.
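For instance, a layer 3 only tunnel would be created with the ip6gre type instead (same placeholders as before; a sketch only, since I used ip6gretap in production):

ip -6 link add gre1 type ip6gre local <local ipv6> remote <remote ipv6>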

Another gotcha: with IPv4, you do not have to designate the local address for the tunnel; either iproute2 or the kernel uses the system’s routing table to determine the correct outbound interface.  This does not work for IPv6: you must specify the local address for the IPv6 GRE endpoint, or it will fail silently (or possibly emit an error message in dmesg).

Building Kernel From Source

For the purposes of this article, this section is going to be brief; I won’t go into a lot of detail.

Disclaimer:  This may break who knows what on your specific system.  On my systems, everything worked just great.  Also, building a later kernel from source means you won’t be getting updates from the standard CentOS / RHEL repos when you run yum update.  It will be up to you to ensure your kernel is patched with the latest security patches and bug fixes.  It might be worth your time to use Ubuntu or another mainstream distro that provides the updated software you need.  In my case, these are not public-facing servers and are for internal reports only, and I just don’t have the time to port a bunch of custom RPMs over to .debs.

yum groupinstall "Development Tools"
yum install openssl-devel
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable
git checkout v3.13.11
make oldconfig
make menuconfig  # select the IPv6 GRE module here; I also selected IPv6 Source Address Routing, though I'm not sure whether that makes a difference
make rpm

Please note, you technically shouldn’t build software as root (it might break your system if something goes wrong), so feel free to do everything after the yum install steps as a regular user.  I won’t tell if you build it as root though ;)

So, that should build three RPMs containing kernel 3.13: the kernel itself, the kernel headers (kernel-devel), and kernel-debug.  I installed all three.
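Installing them is the usual rpm invocation; the output path below is rpmbuild’s default and may differ on your system:

rpm -ivh ~/rpmbuild/RPMS/x86_64/kernel-*3.13*.rpm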

After you have built and installed the kernel, you need to update grub2.  You can run the following:

grub2-editenv /boot/grub2/grubenv unset saved_entry

This will update grub to boot the new kernel by default (with saved_entry unset, grub falls back to the first menu entry, which is the newest installed kernel).  Go ahead and reboot your system so you can continue building the other software.
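To double-check before and after the reboot (the exact kernel version string may carry a local suffix):

grub2-editenv list   # saved_entry should no longer appear
reboot
uname -r             # after the reboot, should report 3.13.11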

Building iproute2 from source

iproute2 is closely tied to the kernel version.  In this particular case, there is no v3.13 release available; perhaps there was at some point, I don’t know.  However, Ubuntu 14.04 uses v3.12.0 successfully, so that’s what we’ll be using as well.

As mentioned in the kernel section, there are risks associated with building software from source instead of the repos.  These steps may break your system, but they worked just fine on mine.

git clone git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git
cd iproute2
git checkout v3.12.0
./configure prefix=/usr
make && make install

That should build and install the software into the normal directories, overwriting the existing files at the destination.  If you prefer not to overwrite the iproute2 software already installed on your system, you can either skip the make install step and run the binaries from the build directory, or change to ./configure prefix=/usr/local so the new version installs alongside the old one.
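A quick sanity check that your shell is now picking up the new binary (hash -r clears bash’s cached command path in case the old binary lived elsewhere):

hash -r
ip -V    # should now report the v3.12.0 version string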

Building openvswitch from source

git clone https://github.com/openvswitch/ovs.git
cd ovs
./boot.sh            # generates the configure script (needed when building from git)
./configure --prefix=/usr
make dist

Those steps are based on openvswitch’s INSTALL.RHEL.md file.  I recommend reading that file; there may be a package you need to install to make it work.  I highly recommend using make dist and installing the resulting RPM, as openvswitch will then install cleanly with systemd.
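make dist itself only produces a source tarball; roughly, the RPM build then looks like this (the spec file name varies between openvswitch versions, so check the rhel/ directory and INSTALL.RHEL.md rather than trusting my sketch):

mkdir -p ~/rpmbuild/SOURCES
cp openvswitch-*.tar.gz ~/rpmbuild/SOURCES/
rpmbuild -bb rhel/openvswitch-fedora.spec   # spec name may differ; see the rhel/ directory
yum localinstall ~/rpmbuild/RPMS/x86_64/openvswitch-*.rpm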

I’m sure I missed a package or development library somewhere along the way.  Closely watch the output of make and see what it complains about.  Often you can fix the problem by appending -devel to whatever missing library it complains about and installing that with yum, e.g., for “library foo not found”, run yum install foo-devel.
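When the -devel guess fails, yum can usually tell you which package owns a missing file (libfoo here is just a stand-in for whatever make complains about):

yum provides '*/libfoo.so'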

I hope you found this useful, as I spent a good deal of time getting this all to work.

Why not just use the latest kernel, 4.2, if I’m going to be using a newer kernel? Well, I tried that, and it didn’t work. I think that using a kernel version supported long-term by another major distribution is as safe a bet as possible. They will likely contribute security and bug fixes upstream to the mainline kernel, so you will be able to update your kernel as time goes on if absolutely necessary.

in CentOS, How-To

Starting Your Linux Career: 10 Steps

by on June 5, 2015 at 6:09 pm

I cannot lie: learning Linux took a lot of time.  I spent approximately three years as a daily Linux user before landing my first full-time Linux-based position.  However, my time was not spent as well as it could have been.  Knowing then what I know now (and what I’m attempting to share with you), I could have been in a Linux position within 6 months.

Like many, my first experiences with Linux came from my own curiosity.  I was a Microsoft Windows Vista user at the time and, overall, very happy with the experience.  I had previously tried Ubuntu with marginal success; drivers in those days were much less readily available than they are today.  One of my work friends turned me on to Ultimate Edition, an Ubuntu-based distro, which was essentially a DVD jam-packed with some of the better software and all of the non-free drivers my system needed out of the box.  I was dazzled by the desktop effects I was able to use, and I quickly found the platform suitable for web development.


in Uncategorized

Deploying Apache Virtual Hosts using Puppet on CentOS 6

by on October 13, 2014 at 7:03 pm

Scaling a website to serve thousands or even tens of thousands of users simultaneously is a challenge often best tackled by horizontal scaling – distributing workloads across dozens or even hundreds of servers. As a tool for preparing servers for that task, Puppet offers low deployment costs, ease of use, and automated configuration management.

After a successful deployment of a new hardware farm, how can you assure a static configuration across your entire environment? Puppet addresses that problem. Let’s see how to install Puppet and use it to deploy, as an example, an Apache web server virtual host on CentOS 6. This tutorial shows how to deploy virtual hosts on only one server, but the same steps can be replicated to manage many servers. I’ll assume you’re familiar with the Linux command line, basic networking concepts, and using Apache.


in Uncategorized

AWS: Use instance role credentials to query ec2 API

by on September 16, 2014 at 3:58 pm

I was having some issues including a token in v4 signing requests using the EC2 query API.  With the help of the excellent AWS support, I now have a working example based on the documentation provided by Amazon.

# AWS Version 4 signing example

# EC2 API (DescribeRegions)

# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a GET request and passes the signature
# in the Authorization header.
import sys, os, base64, datetime, hashlib, hmac, json
import requests # pip install requests

# ************* REQUEST VALUES *************
method = 'GET'
service = 'ec2'
host = 'ec2.amazonaws.com'
region = 'us-east-1'
endpoint = 'https://ec2.amazonaws.com'
request_parameters = 'Action=DescribeRegions&Version=2013-10-15'

# Get the role name and temporary credentials from the EC2 instance
# metadata service. The endpoint below is the standard metadata URL; it
# was blank in the original post, so treat it as a reconstruction.
r = requests.get('http://169.254.169.254/latest/meta-data/iam/security-credentials/')
role = r.text
r = requests.get('http://169.254.169.254/latest/meta-data/iam/security-credentials/' + role)
decoded_data = json.loads(r.text)
access_key = decoded_data['AccessKeyId']
secret_key = decoded_data['SecretAccessKey']
token = decoded_data['Token']

# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning

# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope
# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

# Step 1 is to define the verb (GET, POST, etc.)--already done.

# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = '/'

# Step 3: Create the canonical query string. In this example (a GET request),
# request parameters are in the query string. Query string values must
# be URL-encoded (space=%20). The parameters must be sorted by name.
# For this example, the query string is pre-formatted in the request_parameters variable.
canonical_querystring = request_parameters

# Step 4: Create the canonical headers and signed headers. Header names
# and value must be trimmed and lowercase, and sorted in ASCII order.
# Note that there is a trailing \n.
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'

# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers lists those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
signed_headers = 'host;x-amz-date'

# Step 6: Create payload hash (hash of the request body content). For GET
# requests, the payload is an empty string ("").
payload_hash = hashlib.sha256('').hexdigest()

# Step 7: Combine elements to create the canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash
# ************* TASK 2: CREATE THE STRING TO SIGN*************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request).hexdigest()
# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, datestamp, region, service)

# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()
# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# The signing information can be either in a query string value or in
# a header named Authorization. This code shows how to use a header.
# Create authorization header and add to request headers
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature

# The request can include any headers, but MUST include "host", "x-amz-date",
# and (for this scenario) "Authorization". "host" and "x-amz-date" must
# be included in the canonical_headers and signed_headers, as noted
# earlier. Order here is not significant.
# Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'x-amz-date':amzdate , 'Authorization':authorization_header, 'X-Amz-Security-Token':token}
# ************* SEND THE REQUEST *************
request_url = endpoint + '?' + canonical_querystring

print '\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++'
print 'Request URL = ' + request_url
r = requests.get(request_url, headers=headers)

print '\nRESPONSE++++++++++++++++++++++++++++++++++++'
print 'Response code: %d\n' % r.status_code
print r.text

Hopefully you find this useful.

in Uncategorized


Join Ubuntu 14.04 to Active Directory Domain using realmd

by on April 29, 2014 at 9:15 pm

This proved to be a difficult task.  I spent several hours scouring the internet for workarounds to the various bugs in this process, to little avail.  I’m going to summarize what I did to actually get this puppy up and running.

Started with a clean install of Ubuntu 14.04 LTS Server Edition.  Pointed my DNS to my AD controller.


in Uncategorized

Ubuntu 14.04 Web Server Tutorial

by on April 29, 2014 at 4:38 pm

In this article, I’m going to outline the steps to install and configure a complete web server on a base install of Ubuntu 14.04 LTS server edition.  Not only will you learn how to install a complete web server or “LAMP stack” from the command line, you’ll also understand a little bit more about how each service works.  Ubuntu LTS releases are proven server platforms, and 14.04 brings many needed updates to the LAMP stack, most notably Apache HTTP Server 2.4.

I personally prefer not to install “web server” package groups at server install time.  I like to install each necessary package one by one to ensure I only have the software that I require for my operation.  This tutorial is also useful if you’re running the Ubuntu 14.04 desktop version and want to install a LAMP stack for testing or development purposes.


in How-To, Ubuntu


Join Ubuntu 12.04LTS to Active Directory Domain

by on January 17, 2014 at 6:07 pm

Preliminary Steps

DNS must be configured properly.  You should be able to ping “mydomain.xx” from the CLI and the host name must resolve.  Generally speaking, entries in /etc/hosts are not sufficient.  You should be able to use whatever DNS server the Windows computers on the network use.

While entries in /etc/resolv.conf will allow you to temporarily adjust DNS settings, these settings will typically be overwritten if you’re using DHCP to obtain an IP.  You must make an entry for the interface in the /etc/network/interfaces file.  It is also helpful to add the dns-search parameter as well.  E.g., with placeholder addresses:

auto eth0
iface eth0 inet static
    address 192.168.1.50          # example values only; substitute your own
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.10  # your AD DNS server
    dns-search mydomain.xx

The above example sets a static IP for the Linux host and points DNS at the Active Directory DNS server.  Obviously, you must edit these settings to fit your environment.  The DNS server does not have to be an Active Directory DNS server, but it must be able to resolve the domain names and host names.  For instance, if your Linux host is on a private subnet, you might put in the gateway’s IP address, as the gateway will forward the packets upstream to an actual DNS server.

A reboot after adjusting network settings on Ubuntu is recommended.

Additionally, you will need either a Domain Admin or other Active Directory user that has access to add machines to an OU.

Install Required Packages

First, run apt-get update

This will ensure that you have the current package listings from the repository.

Next, install the required packages:  apt-get install samba winbind krb5-user libpam-winbind

You may receive an error while attempting to install one or more of these packages, and the installation will refuse to proceed.  I have only observed this on existing servers, not on a clean install of 12.04LTS.  If this is the case, you may install the packages using aptitude install <package>.  At first the install will fail, and aptitude will prompt you to leave the packages uninstalled; type “N”.  The next prompt will ask you to downgrade a handful of packages to allow the install; type “Y”.  This downgrade does not appear to affect the operation of your software and allows the necessary packages to be installed.

Editing Config Files

Add the following settings to the [global] section of /etc/samba/smb.conf.

workgroup = MYDOMAIN
realm = MYDOMAIN.XX
password server = dc1.mydomain.xx dc2.mydomain.xx
security = ads
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
template shell = /sbin/nologin
winbind use default domain = true
winbind offline logon = false
winbind enum users = yes
winbind enum groups = yes
client ntlmv2 auth = yes
client use spnego principal = no

Let’s talk about some of the important settings.

workgroup is the name of the domain without the top-level domain.  If the domain is a tertiary domain, such as MY.DOMAIN.XX, then the workgroup would be MY.

realm is the name of the Kerberos realm for the domain.  This should be in all caps and contain the entire domain name, for example MY.DOMAIN.XX or MYDOMAIN.XX.

security is the setting that tells Samba to use Winbind.

idmap uid/gid can be any valid range of numbers.  Generally speaking, these numbers should be above 100k.

template shell controls what shell Active Directory users get when they try to log in via console or ssh.  /sbin/nologin will allow the users to access Samba shares, but otherwise gives them no login access on the Linux system.

winbind use default domain tells Samba to use bare usernames for lookups.  If this is set to false, you would have to address AD accounts as myuser@mydomain.xx or MYDOMAIN\myuser.

client ntlmv2 auth enables Winbind and Samba to communicate using NTLMv2.  If you do not set this to yes, you won’t be able to join the domain.
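Before restarting anything, it’s worth letting Samba validate the file; testparm will flag typos and unknown parameters:

testparm -s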

Join the Active Directory Domain

Now that winbind is installed and Samba’s config file has been updated, we should restart the smbd and winbind services:  service smbd restart && service winbind restart

Next, let’s generate a Kerberos ticket for our AD user:  kinit myadmin

You will be prompted for a password as follows:  Password for myadmin@MYDOMAIN.XX:

After entering the password, the command should complete with no output or errors.
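You can confirm the ticket was actually cached with klist:

klist    # should list a krbtgt/MYDOMAIN.XX@MYDOMAIN.XX ticket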

Now that we have verified Kerberos is working by requesting a ticket, we can join the server to the domain using the net command as follows:  net ads join -U myadmin

At the prompt, enter your password.  You should see “Joined <Server Name> to realm ‘MYDOMAIN.XX’”.  You will likely also see “No DNS domain configured for <servername>.  Unable to perform DNS Update.  DNS update failed!”  This is normal; it just means that the DNS server was not updated with your Ubuntu server’s A record.  That record will have to be created manually by the DNS administrator, if desired (it is not required for AD integration).

If our join was successful, we need to update a couple more things: NSS and PAM.  Edit /etc/nsswitch.conf to enable winbind for the passwd, group, and shadow services:

passwd: compat winbind
group: compat winbind
shadow: compat winbind


Now, we should be able to update our PAM configs automatically by running pam-auth-update.  This will open a TUI (text user interface) screen; select Winbind NT/AD if it is not already selected, and press OK.  This should update the requisite PAM files to enable winbind integration with PAM.

To check that everything is running as expected, run the command getent passwd myadmin and you should see an entry similar to one in /etc/passwd.
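A couple of additional winbind checks are useful here (wbinfo ships with the winbind package):

wbinfo -t    # verify the trust secret with the domain controller
wbinfo -u    # list domain users
wbinfo -g    # list domain groups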

in How-To, Ubuntu


CentOS 6 Google App Engine Python Development with Eclipse

by on November 26, 2013 at 5:34 pm

With more and more companies moving applications to the cloud, Google App Engine makes a lot of sense.  GAE is a Platform as a Service (PaaS) offering that runs on Google’s infrastructure.  Some of its touted capabilities are seamless, limitless, and completely automated application scaling.  In this article, you’ll learn how to set up a basic development environment for Google App Engine’s Python SDK on CentOS 6 using PyDev and Eclipse.

