Git with an alternate SSH key…

So I use both GitHub and Bitbucket: for my day job, and also for managing my private Git repos. (Bitbucket is free for personal private repo use, whereas GitHub charges for that…)

However, when I go to push to Bitbucket for my personal use, I need to make sure that my SSH keys for work aren't loaded. This has resulted in me doing things like running "ssh-add -D" to wipe out all the keys in my ssh-agent, then manually loading my personal key for Git use. Then when I start work again, I have to reload my work keys. Rather annoying.

I came across a solution here: git admin: An alias for running git commands as a privileged SSH identity

However, it didn't work for me. It took a bit to figure out why, but it came full circle back to the use of ssh-agent: even though I was properly specifying my SSH identity file, the keys from my ssh-agent were being offered first. All I had to do was disable the use of ssh-agent inside the script, like so:

set -e
set -u

# Unset the agent socket so ssh ignores any keys loaded in ssh-agent
# and only offers the identity file specified below.
unset SSH_AUTH_SOCK

ssh -i $SSH_KEYFILE git@bitbucket.org

That did the trick for me. Hope that helps someone else out there as well!
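Incidentally, ssh's IdentitiesOnly option can get the same effect without a wrapper script, via a host alias in ~/.ssh/config (the alias and key path below are examples, not anything from my actual setup):

```
Host bitbucket-personal
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/id_personal
    IdentitiesOnly yes
```

With that in place, "git clone bitbucket-personal:user/repo.git" uses only the configured identity file; IdentitiesOnly tells ssh not to offer the keys sitting in the agent.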

Running a system RabbitMQ on a server with Chef Server

One of the things that I’ve been working on getting set up here at home is Logstash to analyze all the various log files from the home network and the servers that I admin. As far as I can tell, that seems to be the Open Source equivalent of Splunk (which is a great tool, but expensive, and the free version is missing some features that I’d be interested in).

However, I recently migrated my systems from Opscode’s Hosted Chef to the Open Source Chef server running on the box that I had been setting logstash up on. Logstash uses RabbitMQ for messaging, as does Chef. I thought that things would be relatively easy to get working together, but I don’t know much about RabbitMQ. While the ultimate solution was actually trivial, I didn’t find it easy to figure out what to do.

I tried many things that ultimately didn’t pan out. I’m still not entirely sure why, since RabbitMQ is a bit of a black box to me. I tried various combinations of the following settings in /etc/chef-server/chef-server.rb, trying to configure the two systems differently enough so that they wouldn’t conflict with each other:

rabbitmq['consumer_id'] = 'curry'            # Chef default: hotsauce
rabbitmq['nodename'] = '[email protected]'         # Chef default: [email protected]
rabbitmq['node_ip_address'] = ''  # Chef default:
rabbitmq['node_port'] = 5673                 # RabbitMQ default: 5672

None of those did the trick. Even after applying all those settings, I still got this back when I did rabbitmqctl status:

root@rain:/var/log/rabbitmq# rabbitmqctl status
Status of node rabbit@rain ...
Error: unable to connect to node rabbit@rain: nodedown

nodes in question: [rabbit@rain]

hosts, their running nodes and ports:
- rain: [{bookshelf,56170},

current node details:
- node name: [email protected]
- home dir: /var/lib/rabbitmq
- cookie hash: vkNSjkgIyXIKaNLguSEV7A==

root@rain:/var/log/rabbitmq#

Eventually I came across the solution here: All I had to do was create /etc/rabbitmq/rabbitmq-env.conf and add the following lines:

# I am a complete /etc/rabbitmq/rabbitmq-env.conf file.
# Comment lines start with a hash character.
# This is a /bin/sh script file - use ordinary envt var syntax
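The lines that do the actual work go below that header; the key is giving the system-wide broker its own Erlang node name (and, if necessary, port) so it can't collide with the Chef server's broker. Something along these lines works (the node name and port here are illustrative, not necessarily the exact values I used):

```
# Run the system-wide RabbitMQ as its own Erlang node so it does not
# collide with the Chef server's rabbit node.
NODENAME=bunny@localhost
# Only needed if the default AMQP port 5672 is already taken:
# NODE_PORT=5673
```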

This ends up with RabbitMQ operating as a different messaging node on the system and co-existing peacefully with the RabbitMQ setup that Chef is running.

I am still curious as to how rabbitmqctl was able to see the config that the Chef server was using, even though it was running on a different port and the databases are stored in two completely different directories (/var/opt/chef-server/rabbitmq and /var/lib/rabbitmq). My best guess after the fact: every Erlang node on a host registers with the same epmd daemon, and rabbitmqctl resolves node names through epmd rather than through ports or config files, so any broker on the box is visible to it. If anyone knows for sure, I'd love to find out!

Running the cacti cookbook under Ubuntu

So something I've been loving lately as I dive into the world of DevOps is the large community that Opscode has built up around Chef. While Puppet and Chef aim to solve the same problem, and share many ideas about solutions (with many differences, of course), one of the swaying factors for many people like myself is the community. Puppet mostly gave me the tools to reinvent the wheel for my infrastructure; Chef gives me the tools to make a wheel, plus a shop full of free wheels already made. Sometimes you need to do a bit of work to make one fit, but sometimes you can just hook it up and go. That's an invaluable thing in today's fast-paced IT world.

My "itch" to scratch today was Cacti. I've been having some problems with the local Comcast connection, and the temperature has been rising here in the PNW, so my mind has returned to getting my local network monitoring set back up. And indeed, this is a great chance to set up a local testbed for Chef work unrelated to my day job. So I got Nagios and rsyslog bootstrapped with Chef here at home yesterday, and worked on Cacti today.

(I got a little derailed when I found out that for some reason my Linux box's swap partition had an incorrect entry in /etc/fstab. After fixing that, I got an error when trying to turn swap back on: "swapon: /dev/sda5: read swap header failed: Invalid argument". This article pointed me to the solution: "mkswap /dev/sda5; swapon -a". That recreated the swap header, and then I was able to enable swap and have a stable system again.)

There was already a cookbook for Cacti, but it looks like it was designed for Redhat package names and file paths. I spent some time stepping through things and making it work with Ubuntu. For the most part, it was a matter of taking some hard-coded settings, replacing them with attributes, and setting the default values for those attributes to be the same as the old hard-coded values. This allows me to then override them locally, and anyone else already using the cookbook will see no change. I did add a few platform-specific checks, for things like the Ubuntu package names.
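As a sketch of that pattern (illustrative, not the cookbook's actual code), the hard-coded values become attributes whose defaults vary by platform family, which a role or node can then override:

```ruby
# attributes/default.rb (names here are illustrative)
case node['platform_family']
when 'debian'
  default['cacti']['user'] = 'www-data'
  default['cacti']['apache2']['conf_dir'] = '/etc/apache2/conf.d'
else
  # RedHat-style defaults, matching the old hard-coded values
  default['cacti']['user'] = 'apache'
  default['cacti']['apache2']['conf_dir'] = '/etc/httpd/conf.d'
end
```

Since RedHat-family users keep the old values by default, the change is backwards-compatible; Ubuntu users just pick up (or override) the debian branch.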

In all likelihood, anyone running a Debian system can take the spots where I added Ubuntu support and extend them for Debian as well (since most of the core Ubuntu packages either come from or get merged upstream into Debian). However, I don't have a Debian test environment yet, so I didn't want to make assumptions. It's on the list of things to get up and running in a VM… CentOS, Oracle Linux, and Debian.

Here’s the role that I ended up with, when all was said and done. In my case, this server is only available over the internal network, so I didn’t need SSL support.

{
  "name": "cacti-server",
  "description": "Role to configure Cacti server.",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "default_attributes": {
  },
  "override_attributes": {
    "cacti": {
      "user": "www-data",
      "group": "www-data",
      "cron_minute": "*",
      "apache2": {
        "conf_dir": "/etc/apache2/conf.d",
        "doc_root": "/var/www",
        "ssl": {
          "force": false,
          "enabled": false
        }
      }
    }
  },
  "run_list": [
  ],
  "env_run_lists": {
  }
}

I sent over a pull request to get the changes merged in, but until then, feel free to grab the cookbook from GitHub (note that you'll want the ubuntu branch). If you're using Berkshelf, you can add this to your Berksfile:
cookbook 'cacti', github: 'stormerider/chef-cacti', branch: 'ubuntu'

I hope this helps someone else!

Building an Ubuntu AMI with Elastic Beanstalk Support

First off, this would not have been possible without help from Defusal in #eventmachine on Freenode IRC. When I hit the wall and beat my head against it for multiple hours, they were the one who gave me the key that let me take the final step and complete this conversion.

Secondly, I'll disclose some biases and assumptions straight up. I've done this with Ubuntu 12.04 on a 64-bit platform. I chose this because I wanted to standardize all of my EC2 nodes on Ubuntu as a distribution, and Amazon Linux is RedHat-based. RedHat is a great company that does a lot of great work, but I've worked with CentOS 5 for some time now and kept running into problems because things were so out of date in the name of stability. I realize that a lot got updated with RHEL 6, but that doesn't mean they will keep things up to date moving forward; that's not the goal of RHEL, that's more the aim of a distribution like Fedora. I can respect that dedication to stability, but working at a small startup, I want a lot of flexibility and maneuverability from a technical perspective; I don't want a distribution to limit me to a list of applications that has essentially ossified. However, I'm assuming you're not here because you like RedHat as a distro, or you wouldn't be checking this out. :) I also assume that you've got your instance set up with Apache2 and Ruby 1.9 (in my case, 1.9.3).

To begin, boot up an existing Elastic Beanstalk AMI. I did this from an instance that was actually running as part of an EB project. (Yes, I know that everyone says "you can't do it from a deployed project." Well, I did.) Tar up the contents of /opt/aws, /opt/elasticbeanstalk, /etc/httpd/sites, and /opt/tomcat7 (or /opt/tomcat6 if that's what you want to run, but I'll refer to Tomcat 7 in everything that follows). I also snagged /etc/init.d/functions, /etc/init.d/tomcat7, and /etc/init.d/hostmanager. Copy the tarball over to your new build host.

Create users for tomcat7 and elasticbeanstalk with the home directories pointed to their respective directories in /opt. Add tomcat7 to the elasticbeanstalk group (this is important). Extract the files into /opt on your new build host. (Do it in this order to preserve file permissions, or chown them appropriately.) Remove /opt/tomcat7/webapps/ROOT.war, /opt/tomcat7/webapps/ROOT/, and /opt/elasticbeanstalk/srv/hostmanager/db/hostmanager.db. These steps are important to be able to deploy your new application code to the instance, and the hostmanager.db file seems to be why most people have problems creating an image from an active EB node.

Copy the init scripts to their correct locations. Note that you could skip the functions file if you want to rewrite the hostmanager init script; I didn't feel like bothering with it, personally. You'll need to modify the hostmanager init script slightly; I put the functions file in /opt/aws and updated the hostmanager script to look there for it. Don't forget to enable them with:

update-rc.d tomcat7 defaults
update-rc.d hostmanager defaults

Hostmanager expects things to be a certain way, and the RedHat way is not the Ubuntu way. The Apache configs obviously go into /etc/apache2/sites-available and are then symlinked into /etc/apache2/sites-enabled (either manually or using a2ensite). However, you will need to snap a few symlinks to get hostmanager to recognize the Apache config:

cd /etc ; ln -s apache2 httpd
cd /var/run ; ln -s apache2 httpd
cd apache2 ; ln -s ../

This is the one part where I think I might be missing a detail or two; I was pretty tired when I worked on this, so I'm not 100% sure whether there were other symlinks to snap or just those. It should be fairly obvious in the hostmanager logs if one is missing (look in /opt/elasticbeanstalk/var/log). If you really want, you might be able to avoid these symlinks by customizing the hostmanager config files, but I didn't trust that Amazon doesn't have an updating mechanism in place. To a large extent, I have tried to install this in a compatible way, so that if Amazon pushes an update of some sort, it likely won't break on your custom image. Not knowing exactly what Amazon can do in that regard, however, I can't guarantee anything.
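If you want to sanity-check that symlink layout before touching the real filesystem, the same shape can be exercised in a throwaway directory (purely illustrative; the real commands above operate on /etc and /var/run directly):

```shell
#!/bin/sh
set -e
# Build the RedHat-style compatibility aliases in a scratch root
# instead of the real /etc and /var/run.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/apache2" "$ROOT/var/run/apache2"
# /etc/httpd -> apache2 and /var/run/httpd -> apache2, as relative links
(cd "$ROOT/etc" && ln -s apache2 httpd)
(cd "$ROOT/var/run" && ln -s apache2 httpd)
# Both names now resolve to the same directories:
ls -ld "$ROOT/etc/httpd" "$ROOT/var/run/httpd"
```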

So, one thing about hostmanager and EB: Amazon has embedded an entire Ruby installation into their elasticbeanstalk directory. This means that you don't have to worry about it conflicting with anything else, and you don't need to install a long list of gems on your system to get things to work. However, I did run into the fact that Amazon Linux uses different versions of the OpenSSL and libcrypto libraries than Ubuntu 12.04 does (if you're using an older Ubuntu version, this may or may not apply to you; YMMV). To fix that, I did something along the lines of:

mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
cp /usr/lib/ruby/1.9.1/x86_64-linux/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/

I also ran into a HUGE issue that stumped me for hours involving EventMachine and rb-inotify. I'm not sure why the same code and gem versions work perfectly fine on the Amazon setup and not on the Ubuntu host I was working on, but it simply would not work for me. I kept getting errors from hostmanager like this:

[2012-08-09 16:55:15 +0000] Host Manager startup complete
[2012-08-09 16:55:15 +0000] Stopping DaemonManager
>> Exiting!
/opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:795:in `attach_io': undefined method `attach_fd' for EventMachine:Module (NoMethodError)
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:769:in `watch'
from /opt/elasticbeanstalk/srv/hostmanager/lib/elasticbeanstalk/hostmanager/daemon/logdirectorymonitor.rb:101:in `run'
from /opt/elasticbeanstalk/srv/hostmanager/lib/elasticbeanstalk/hostmanager/daemon.rb:36:in `block in start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `call'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `block in run_deferred_callbacks'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `each'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `run_deferred_callbacks'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:1449:in `event_callback'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:898:in `eventable_read'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:369:in `block in crank_selectables'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:369:in `each'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:369:in `crank_selectables'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:324:in `block in run'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:318:in `loop'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:318:in `run'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:64:in `run_machine'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/backends/base.rb:61:in `start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/server.rb:159:in `start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/controllers/controller.rb:86:in `start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/runner.rb:185:in `run_command'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/runner.rb:151:in `run!'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/bin/thin:6:in `<top (required)>'
from /opt/elasticbeanstalk/bin/thin:19:in `load'
from /opt/elasticbeanstalk/bin/thin:19:in `<main>'

Ultimately when I hopped into #eventmachine to ask about it, Defusal suggested upgrading eventmachine from the last stable version (which is very, very old) to the newest beta.

[email protected]# /opt/elasticbeanstalk/bin/gem uninstall eventmachine
You have requested to uninstall the gem:
thin-1.3.1 depends on [eventmachine (>= 0.12.6)]
If you remove this gems, one or more dependencies will not be met.
Continue with Uninstall? [Yn] Y
Successfully uninstalled eventmachine-1.0.0.rc.4
[email protected]# /opt/elasticbeanstalk/bin/gem install eventmachine --pre
Building native extensions. This could take a while...
Successfully installed eventmachine-1.0.0.rc.4
1 gem installed
Installing ri documentation for eventmachine-1.0.0.rc.4...
Installing RDoc documentation for eventmachine-1.0.0.rc.4...
[email protected]#

I did that, and hostmanager started up perfectly!

I personally updated the rest of the gems as well, when I was troubleshooting the error with the ssl/crypto libraries, so I don’t know if there are other “gotchas” left if you don’t do that. Your mileage may vary.

At that point, I built a new AMI off of that instance, and once it was done, I set it as a custom AMI in Elastic Beanstalk. It chewed on it a bit, swapping instances around and such, and loaded up fine with the sample AWS Tomcat WAR. Hurray! (Edit: I should also clarify that we’ve been using this image since with no problems on our real webapps, so it’s not just the sample WAR that works.)

I hope this helps some other people, because when I was doing my research on the subject, I couldn't find anything from anyone else out there about this (beyond people who got stuck a few steps before the point that blocked me). I want to thank my work for allowing me to detail the process that I developed while working for them and contribute that knowledge back to the community, which has helped us in so many ways.