Git with an alternate SSH key…

So I use both GitHub and Bitbucket for my day job, and also for managing my private Git repos. (Since Bitbucket is free for personal private repo use, whereas GitHub charges for that…)

However, when I go to push to BitBucket for my personal use, I need to make sure that my SSH keys for work aren’t loaded. This has resulted in me doing things like “ssh-add -D” to wipe out all the keys in my ssh agent, then manually loading my personal key for git use. Then when I start work again, I have to reload my other keys. Rather annoying.

I came across a solution in “git admin: An alias for running git commands as a privileged SSH identity”.

However, it didn’t work for me. It took a bit to figure out why, but it came full circle back to ssh-agent: even though I was properly specifying my SSH identity file, the keys from my ssh-agent were being offered first. All I had to do was disable the use of ssh-agent inside of the script, like so:

#!/bin/sh
set -e
set -u
# SSH_KEYFILE is expected in the environment; unsetting SSH_AUTH_SOCK hides
# the agent so only the explicitly specified identity is offered.
unset SSH_AUTH_SOCK
ssh -i "$SSH_KEYFILE" "$@"

That did the trick for me. Hope that helps someone else out there as well!
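In case it helps, here is one way the wrapper can be wired into git; the paths and key name below are illustrative assumptions, not values from the original article:

```shell
# Install a wrapper script (path and key name are hypothetical examples)
mkdir -p "$HOME/bin"
cat > "$HOME/bin/ssh-personal" <<'EOF'
#!/bin/sh
set -e
set -u
# Hide the agent so only the explicitly named key is offered
unset SSH_AUTH_SOCK
exec ssh -i "$HOME/.ssh/id_personal" "$@"
EOF
chmod +x "$HOME/bin/ssh-personal"

# Then, for a one-off personal push, point git at the wrapper:
#   GIT_SSH="$HOME/bin/ssh-personal" git push origin master
```

GIT_SSH is a standard git environment variable, so no alias is strictly required; you can also bake it into a shell function if you push often.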

Running a system RabbitMQ on a server with Chef Server

One of the things that I’ve been working on getting set up here at home is Logstash to analyze all the various log files from the home network and the servers that I admin. As far as I can tell, that seems to be the Open Source equivalent of Splunk (which is a great tool, but expensive, and the free version is missing some features that I’d be interested in).

However, I recently migrated my systems from Opscode’s Hosted Chef to the Open Source Chef server running on the box that I had been setting logstash up on. Logstash uses RabbitMQ for messaging, as does Chef. I thought that things would be relatively easy to get working together, but I don’t know much about RabbitMQ. While the ultimate solution was actually trivial, I didn’t find it easy to figure out what to do.

I tried many things that ultimately didn’t pan out. I’m still not entirely sure why, since RabbitMQ is a bit of a black box to me. I tried various combinations of the following settings in /etc/chef-server/chef-server.rb, trying to configure the two systems differently enough so that they wouldn’t conflict with each other:

rabbitmq['consumer_id'] = 'curry'            # Chef default: hotsauce
rabbitmq['nodename'] = 'rabbit@chef'         # Chef default: rabbit@localhost
rabbitmq['node_ip_address'] = ''  # Chef default:
rabbitmq['node_port'] = 5673                 # RabbitMQ default: 5672

None of those did the trick. Even after applying all those settings, I still got this back when I did rabbitmqctl status:

root@rain:/var/log/rabbitmq# rabbitmqctl status
Status of node rabbit@rain ...
Error: unable to connect to node rabbit@rain: nodedown


nodes in question: [rabbit@rain]

hosts, their running nodes and ports:
- rain: [{bookshelf,56170},

current node details:
- node name: rabbitmqctl13530@rain
- home dir: /var/lib/rabbitmq
- cookie hash: vkNSjkgIyXIKaNLguSEV7A==


Eventually I came across the solution: all I had to do was create /etc/rabbitmq/rabbitmq-env.conf and add the following lines:

# I am a complete /etc/rabbitmq/rabbitmq-env.conf file.
# Comment lines start with a hash character.
# This is a /bin/sh script file - use ordinary envt var syntax
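(The lines above are the file’s stock comment header; the actual settings would have been along these lines, where the node name and port are illustrative assumptions rather than the exact values used:)

```shell
# rabbitmq-env.conf is sourced as a /bin/sh script; RabbitMQ prepends
# RABBITMQ_ to these names internally, so no prefix is needed here.
NODENAME=bunny@localhost   # anything that doesn't collide with Chef's rabbit node
NODE_PORT=5673             # move the system broker off the default 5672
```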

This ends up with RabbitMQ operating as a different messaging node on the system and co-existing peacefully with the RabbitMQ setup that Chef is running.

I am still curious as to how rabbitmqctl was able to see the config that the Chef server was using, even when it was running on a different port and the databases are stored in two completely different directories (/var/opt/chef-server/rabbitmq and /var/lib/rabbitmq). If anyone knows the answer to that, I’d love to find out!

Running the cacti cookbook under Ubuntu

So something I’ve been loving lately as I dive into the world of DevOps is the large community that Opscode has built up around Chef. While Puppet and Chef aim towards solving the same problem, and have many similarities in thought towards solutions (and many differences, of course), one of the swaying factors for many people like myself is the community. Puppet mostly gave me the tools to reinvent the wheel for my infrastructure; Chef gives me the tools to make a wheel and a shop full of free wheels already made. Sometimes you need to do a bit of work to make it fit, but sometimes you can just hook it up and go. That’s an invaluable thing in today’s fast paced IT world.

My “itch” to scratch today was Cacti. I’ve been having some problems with the local Comcast connection, and the temperature has been rising here in the PNW, and as a result my mind has returned to getting my local network monitoring set back up. And indeed, this is a great chance to set up a local testbed for Chef work unrelated to my day job. So I got Nagios and rsyslog bootstrapped with Chef here at home yesterday and worked on Cacti today.

(I got a little derailed when I found out that for some reason my Linux box’s swap partition had an incorrect entry in /etc/fstab. After fixing that, I got an error when trying to turn swap back on:

swapon: /dev/sda5: read swap header failed: Invalid argument

This article pointed me to the solution:

mkswap /dev/sda5; swapon -a

That recreated the swap header, and then I was able to enable it and have a stable system again.)

There was already a cookbook for Cacti, but it looks like it was designed for Redhat package names and file paths. I spent some time stepping through things and making it work with Ubuntu. For the most part, it was a matter of taking some hard-coded settings, replacing them with attributes, and setting the default values for those attributes to be the same as the old hard-coded values. This allows me to then override them locally, and anyone else already using the cookbook will see no change. I did add a few platform-specific checks, for things like the Ubuntu package names.

In all likelihood, anyone running a Debian system can probably change the spots I added Ubuntu support and extend them for Debian support as well (since most of the core Ubuntu packages either come from or get merged upstream into Debian). However, I don’t have a Debian test environment yet, so I didn’t want to make assumptions. It’s on the list of things to get up and running in a VM… CentOS, Oracle Linux, and Debian.

Here’s the role that I ended up with, when all was said and done. In my case, this server is only available over the internal network, so I didn’t need SSL support.

{
  "name": "cacti-server",
  "description": "Role to configure Cacti server.",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "default_attributes": {
  },
  "override_attributes": {
    "cacti": {
      "user": "www-data",
      "group": "www-data",
      "cron_minute": "*",
      "apache2": {
        "conf_dir": "/etc/apache2/conf.d",
        "doc_root": "/var/www",
        "ssl": {
          "force": false,
          "enabled": false
        }
      }
    }
  },
  "run_list": [
    "recipe[cacti]"
  ],
  "env_run_lists": {
  }
}

I sent over a pull request to get the changes merged in, but until then feel free to grab the cookbook from github (note that you’ll want the ubuntu branch). If you’re using Berkshelf, you can add this to your Berksfile:
cookbook 'cacti', github: 'stormerider/chef-cacti', branch: 'ubuntu'

I hope this helps someone else!

Using my fork of htop-osx…

I’ve been futzing with htop-osx in my spare time to add support for CPU temperature monitoring and fan speed… these are things I like to know when I’m using a laptop, and I figured other folks here might as well. If you use Homebrew, just do brew edit htop-osx, paste in the values from my fork’s formula, and then brew install htop-osx (or, if you already have it installed, brew upgrade htop-osx).

Otherwise, you can clone the fork and build it manually. Once you’ve done so, run htop and hit F2 to enter setup, navigate over to Available Meters, and add the new meters to whichever column you want (left or right). I normally make htop suid anyway to get full process details; I’m not sure whether that’s also required to probe the SMC keys for temperature/fan speed, but it’s possible.

(Most folks will only have one fan; the newer MacBook Pros and the 27″ iMacs have just one, and I believe the Airs do as well. Older MBPs have two, like the loaner I used when getting my MBP repaired. Some Mac Pros– the desktops pre-iMac integration– have up to four fans; the code currently only displays three of them, the fourth being the PSU fan.)


Creating and managing secure passwords

If you’re like me, you probably have dozens of accounts on various different websites. Hopefully that also means you have dozens of passwords, right? Using one password is one of the quickest ways to have your entire digital life go up in flames, because most sites will track your account by your email address (which is often used as your username, or can be used to find your username), and this can quickly become a series of cascading failures. Attackers will leapfrog from one system to another, and if they get access to your email account itself, then it’s usually game over with how most password reset forms work. For a very scary account of how bad things like this can be, check out this article here covering what happened to a Wired reporter.

But who can keep track of multiple passwords in their head in a truly secure fashion? I know I can’t. For a long time, I used a three-tier system: one quick and dirty password for sites I didn’t care about, another for sites that I used a lot, and then a few specific passwords for individual sites (like my email) that I wanted to keep ultra-secure. But as time wore on, I realized that some of the sites that I considered disposable became anything but, and human nature (read: laziness :)) meant that I wasn’t “reclassifying” them with a newer tier of password security.

When I started working at Livemocha, I was introduced to a wonderful tool called KeePass. KeePass is a simple piece of software that allows you to create an encrypted database to store your passwords in. I’d seen similar things before for PDAs, back in the PalmOS days, but KeePass is open source software (and free as in beer, to boot) and is supported on virtually every major operating system out there: iOS, Android, OS X, Linux, Blackberry, Windows (including Windows Phone), and a generic version for devices running J2ME. They also offer a portable version for Windows, so you can toss the software onto a USB flash drive along with your database and access it wherever you go.

The main downside to KeePass is that it assumes you already know how to use it. There’s no introductory wizard that I’ve found, no walkthrough to get you familiar with the application; it just dumps you in and expects you to find your way around. While the UI isn’t exactly un-intuitive, it isn’t exactly simple either, and it can be overwhelming to someone trying it for the first time. Really, it’s much simpler than it looks, and I hope to show you how to use its various features without fear of the learning curve.

KeePass is what I use to manage my passwords at this point. I’ve since seen other services aimed at doing the same kind of thing, such as LastPass, but one advantage that KeePass has over online services is that it’s not online unless you put it online, which means it can’t be attacked if people can’t reach your database file. If you carry it around on a USB drive (insert obligatory disclaimer about flash drive failure rates and the need to back up your data), then it’s very hard for someone to attack it when it’s unplugged from a computer. They would have to have access to a computer you plugged it into and copy it while it was inserted, and then they’d still have to deal with the formidable encryption that KeePass offers. Given that LastPass had a security breach a while back, that was a selling point for me.

I do compromise a bit myself; I use multiple computers, multiple operating systems, and multiple devices. I can’t plug in a USB drive to my phone, but I can run KeePassDroid on it. My solution was to store my KeePass database files in Dropbox. Dropbox (which I plan on covering in more detail in a future blog post) allows me to synchronize my files across all of my devices and computers, which means that I can access my KeePass database on any of them as well. I’ll be more than happy to answer any questions about how to handle that kind of setup in the comments if you want.

So, that said, how does one use KeePass? For illustration purposes, I will be using the KeePassX client for OS X in my screenshots. I also use the 1.x KeePass protocol, as it is supported better across different platforms than the newer 2.x protocol.

First, you need to create a database. When you do so, it asks you for a “master password”. Choose something secure!! If someone learns this password, they’ve got access to your entire vault. That is, needless to say, a Very Bad Thing. You can also specify a key file, which will then be required in addition to the password. (KeePass does not allow you to use a file instead of a password, only in addition to one, in case you need an extra layer of security. If you are storing the database on a network share of some kind, you could save the key file on a flash drive and/or your mobile devices to increase the difficulty someone would have in cracking the database.)

[Screenshot: creating a new database and setting the master password]

After you repeat the master password and finish that up, it will create a basic database like so:

[Screenshot: a freshly created, empty database]

The first thing that you want to do is to create a set of folders (KeePass calls them groups) to organize your passwords so that you can find them quickly. But also notice that there is a search box up top as well; that’s come in very handy for me in the past, especially for the database I use at work with credentials of all the accounts we have with various vendors and such. Here’s what my personal database looks like:

[Screenshot: the group structure of my personal database]

Obviously your folder structure will vary. This is my personal DB, I have one that I use at work, and I also have another household DB that my wife and housemate also have access to (via Dropbox sharing). You can move entries between folders readily, so don’t sweat the structure too much, just do whatever makes sense to you, and reorganize it later if you want/need to.

Once you’ve got that sorted out, you’ll want to create a new entry to add an account to your database. You can right-click in the entry pane on the right, or use the keyboard shortcut (Command-Y under OS X, Control-Y under Windows). You’ll get a window with a bunch of fields to fill out. Here’s an example of one of my entries:

[Screenshot: an example entry]

This is my account for the digital section of Seattle Public Library (SPL). I’ve slightly obfuscated the username, since the account uses a 4 digit PIN, and I don’t really want someone trying combinations until they get in. :) But you get the basic idea. The title explains what the entry is for; username/password are obvious. Not all passwords you want to store have to be for websites; I have used it to store things like server room door combinations and the like in the past. However, if it is for a website, it just makes sense to put in the URL so that you can easily open it with a click when you’re looking up the password. The comments field is very handy to store additional information, such as what answers you put in for security questions, or serial numbers for devices, whatever you fancy. I haven’t played around with the expiration feature personally, but I assume it warns you if a password is nearing expiration when you open the database.

You can also attach files to an entry. This is very handy if you want to store non-standard information securely. Once the file is attached to the entry, it is encrypted along with the rest of the database. I’ve used this in the past to attach text files with information about gift debit cards, where I needed to provide more information than KeePass natively allows for the fields, especially if the entry was the login information for the site managing the cards.

Note the quality bar. This is a very handy feature of KeePass, in that it gives you a quick and easy visual indicator of how secure your password is. Obviously, a 4 digit PIN is, well, less than secure. To choose a secure password, I highly recommend using one of two methods, depending on the site in question.

Option one: an online passphrase generator based on the principles explained in this comic. The basic gist is that the longer a password is, the more secure it is, so a password based on four distinct words is more secure than a shorter password mixing cases, numbers, and special characters. The comic covers the math involved for those who are curious. The upside is that not only is it easier to type and more secure, but it’s also easier to remember (particularly handy for places where you can’t paste the password in, one of my personal pet peeves…).
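The idea can be sketched in a couple of lines of shell; the eight-word list here is just a stand-in for a real dictionary (e.g. /usr/share/dict/words), whose size is what provides the actual security:

```shell
# Pick four distinct words at random and join them with hyphens.
# With a real multi-thousand-word list, the resulting search space dwarfs
# that of a short mixed-character password.
words='correct horse battery staple orbit garden window copper'
passphrase=$(echo "$words" | tr ' ' '\n' | shuf -n 4 | paste -sd- -)
echo "$passphrase"
```

Swap in a larger wordlist and this becomes a perfectly serviceable passphrase generator for sites that allow long passwords.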

Option two: the password generator included in KeePass itself. Just click on the “Gen” button and adjust the properties of the password you want to generate to the policies of the site in question. (For example, Blizzard Entertainment’s Battle.Net system has a maximum character limit of 15, which is somewhat surprising. Others will allow or disallow special characters, etc.) There are a lot of options to tune in this generator, but I can guarantee you’re not likely to remember this kind of password– but that’s what you have KeePass for, isn’t it? :) To see the password generated, you need to click on the eyeball icon to reveal it– it’s a handy security feature for preventing people from looking over your shoulder.

[Screenshot: the password generator]

That’s a basic intro to password security principles in general and KeePass in specific. I’m sure that you all will have questions about aspects that I didn’t cover, and I’ll be more than happy to address them in the comments. Fire away!

Building an Ubuntu AMI with Elastic Beanstalk Support

First off, this would not have been possible without help from Defusal on #eventmachine in Freenode IRC. When I hit the wall and beat my head against it for multiple hours, they were the one to give me the key that allowed me to take the final step forward into completing this conversion.

Secondly, I’ll disclose some biases and assumptions straight up. I’ve done this with Ubuntu 12.04 on a 64-bit platform. I chose this because I wanted to standardize all of my EC2 nodes on Ubuntu as a distribution, and Amazon Linux is Redhat based. Redhat is a great company that does a lot of great work, but I’ve worked with CentOS 5 for some time now and kept running into problems because things were so out of date to ensure stability. I realize that a lot has gotten updated with RHEL 6, but that doesn’t mean that they will keep things up to date moving forward; that’s not the goal of RHEL, that’s more the aim of a distribution like Fedora. I can respect that dedication to stability, but working in a small startup, I want a lot of flexibility and maneuverability from a technical perspective; I don’t want a distribution to limit me to a list of applications that has essentially ossified. However, I’m assuming you’re not here because you like Redhat as a distro, or you wouldn’t be checking this out. :) I also assume that you’ve got your instance set up with Apache2 and ruby-1.9 (in my case, 1.9.3).

To begin, boot up an existing Elastic Beanstalk AMI. I did this from an instance that was actually running as part of an EB project. (Yes, I know that everyone says “you can’t do it from a deployed project.” Well, I did.) Tar up the contents of /opt/aws, /opt/elasticbeanstalk, /etc/httpd/sites, and /opt/tomcat7 (or /opt/tomcat6 if that’s what you want to run, but everything that follows will refer to Tomcat 7). I also snagged /etc/init.d/functions, /etc/init.d/tomcat7, and /etc/init.d/hostmanager. Copy the tarball over to your new build host.

Create users for tomcat7 and elasticbeanstalk with their home directories pointed to their respective directories in /opt. Add tomcat7 to the elasticbeanstalk group (this is important). Extract the files into /opt on your new build host. (Do it in this order to preserve file permissions, or chown them appropriately.) Remove /opt/tomcat7/webapps/ROOT.war, /opt/tomcat7/webapps/ROOT/, and /opt/elasticbeanstalk/srv/hostmanager/db/hostmanager.db. These steps are important for being able to deploy your new application code to the instance, and the hostmanager.db file seems to be why most people have problems creating an image from an active EB node.

Copy the init scripts to their correct locations. Note that you could skip the functions file if you want to rewrite the hostmanager init script; I didn’t feel like bothering with it, personally. You’ll need to modify the hostmanager init script slightly; I put the functions file in /opt/aws and updated the hostmanager script to look there for it. Don’t forget to enable them with:

update-rc.d tomcat7 defaults
update-rc.d hostmanager defaults

Hostmanager expects things to be a certain way, and the RedHat way is not the Ubuntu way. The Apache configs obviously go into /etc/apache2/sites-available and then get symlinked into /etc/apache2/sites-enabled (either manually or using a2ensite). However, you will need to snap a few symlinks to get hostmanager to recognize the Apache config:

cd /etc ; ln -s apache2 httpd
cd /var/run ; ln -s apache2 httpd
cd apache2 ; ln -s ../

This is the one part where I think I might be missing a detail or two; I was pretty tired when I worked on this, so I’m not 100% sure if there were other symlinks to be snapped or just those. It should be fairly obvious in the hostmanager logs if one is missing (look in /opt/elasticbeanstalk/var/log). If you really want, you might be able to avoid these symlinks by customizing the hostmanager config files, but I didn’t trust that Amazon doesn’t have an updating mechanism in place. To a large extent, I have tried to install this in a compatible way, so that if Amazon pushes an update of some sort, it likely won’t break on your custom image. Not knowing exactly what Amazon can do in that regard, however, I can’t guarantee anything.

So, one thing about hostmanager and EB: Amazon has embedded an entire Ruby installation into their elasticbeanstalk directory. This means that you don’t have to worry about it conflicting with anything else, and you don’t need to install a long list of gem files on your system to get things to work. However, I did run into the fact that Amazon Linux uses a different version of the OpenSSL and libcrypto libraries than Ubuntu 12.04 does (if you’re using an older Ubuntu version, this may or may not apply to you; YMMV). To fix that, I did something along the lines of:

mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
mv /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/digest/
cp /usr/lib/ruby/1.9.1/x86_64-linux/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/
cp /usr/lib/ruby/1.9.1/x86_64-linux/digest/ /opt/elasticbeanstalk/lib/ruby/1.9.1/x86_64-linux/

I also ran into a HUGE issue that stumped me for hours regarding EventMachine and rb-inotify. I’m not sure why the code & gem versions work perfectly fine on the Amazon setup and not on the Ubuntu host I was working on, but it simply would not work for me. I kept getting errors with hostmanager like so:

[2012-08-09 16:55:15 +0000] Host Manager startup complete
[2012-08-09 16:55:15 +0000] Stopping DaemonManager
>> Exiting!
/opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:795:in `attach_io': undefined method `attach_fd' for EventMachine:Module (NoMethodError)
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:769:in `watch'
from /opt/elasticbeanstalk/srv/hostmanager/lib/elasticbeanstalk/hostmanager/daemon/logdirectorymonitor.rb:101:in `run'
from /opt/elasticbeanstalk/srv/hostmanager/lib/elasticbeanstalk/hostmanager/daemon.rb:36:in `block in start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `call'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `block in run_deferred_callbacks'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `each'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:996:in `run_deferred_callbacks'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:1449:in `event_callback'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:898:in `eventable_read'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:369:in `block in crank_selectables'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:369:in `each'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:369:in `crank_selectables'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:324:in `block in run'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:318:in `loop'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:318:in `run'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/pr_eventmachine.rb:64:in `run_machine'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/backends/base.rb:61:in `start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/server.rb:159:in `start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/controllers/controller.rb:86:in `start'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/runner.rb:185:in `run_command'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/lib/thin/runner.rb:151:in `run!'
from /opt/elasticbeanstalk/lib/ruby/gems/1.9.1/gems/thin-1.3.1/bin/thin:6:in `'
from /opt/elasticbeanstalk/bin/thin:19:in `load'
from /opt/elasticbeanstalk/bin/thin:19:in `'

Ultimately when I hopped into #eventmachine to ask about it, Defusal suggested upgrading eventmachine from the last stable version (which is very, very old) to the newest beta.

root@host# /opt/elasticbeanstalk/bin/gem uninstall eventmachine
You have requested to uninstall the gem:
thin-1.3.1 depends on [eventmachine (>= 0.12.6)]
If you remove this gems, one or more dependencies will not be met.
Continue with Uninstall? [Yn] Y
Successfully uninstalled eventmachine-1.0.0.rc.4
root@host# /opt/elasticbeanstalk/bin/gem install eventmachine --pre
Building native extensions. This could take a while...
Successfully installed eventmachine-1.0.0.rc.4
1 gem installed
Installing ri documentation for eventmachine-1.0.0.rc.4...
Installing RDoc documentation for eventmachine-1.0.0.rc.4...

I did that, and hostmanager started up perfectly!

I personally updated the rest of the gems as well, when I was troubleshooting the error with the ssl/crypto libraries, so I don’t know if there are other “gotchas” left if you don’t do that. Your mileage may vary.

At that point, I built a new AMI off of that instance, and once it was done, I set it as a custom AMI in Elastic Beanstalk. It chewed on it a bit, swapping instances around and such, and loaded up fine with the sample AWS Tomcat WAR. Hurray! (Edit: I should also clarify that we’ve been using this image since with no problems on our real webapps, so it’s not just the sample WAR that works.)

I hope this helps some other people, because when I was doing my research on the subject, I couldn’t find anything from anyone else out there about this (beyond people getting stumped a few steps before the point that blocked me). I want to thank my work for allowing me to detail the process that I developed while working for them and contribute that knowledge back to the community, which has helped us in so many ways.

Long hiatus

I know I haven’t posted here in a long while, but my health issues have kept me busy focusing on getting the day job done. I’ve been learning a lot of new technologies and playing around a lot with Amazon’s cloud offerings. My work now uses WordPress MU on AWS to host some sites, including our homepage, which has been a challenge, getting the scaling right for that technology and finding how all the plugins work with WPMU.

It reminded me that I haven’t been good about keeping my own plugins up to date. I’m going to go through and see what is still relevant; I have a feeling that most of the Widgets I made are no longer needed, as WordPress has evolved and folded a lot of the alterations I was making into the basic options. As I find these deprecated plugins, I’ll take down the associated pages for them (although I’ll leave the blog posts up for posterity). For plugins that are still functional (such as the WIP Manager), I think I’m going to spend a little time (not too much, but enough) making sure they’re up to date with WP 3.4 and do things the new way. The sidebar offers a lot more room to organize things, so I’ll likely create a tab of my own.

You may have also noticed the site URL change. I’ve gone by “stormerider” as my online handle for many, many years, so it was only appropriate that I actually pick up the domain name. I’ve used it especially in the coding community, so I figured I would put my tech blog here. For the longest time I went by the pen name of “Alan Morgan” as well, but that is changing. I’m in the process (once I collect the money to do so, along with my wife) of changing my name to “Morgan Blackthorne”.

I have a set of technical blog posts that I intend to publish over at Romance Divas (which I am a technical administrator for). I intend on cross-posting them here, although some of them may be a bit beginner-ish for this blog. I also think that I’m going to try to take some of the lessons that I’ve learned while working at the day job and post them up here, if my work will allow me to, so that others can learn from what I’ve done and blaze their own trail in the clouds.

StormChat Web 0.2.1 Release

StormChat Web 0.2.1, released on 6/2/2011

Changes include:

  • Many fixes in the SMF FALs.
  • Fixed up access to the admin console in the various FALs, in order to work on expanding the admin console (#3).
  • Abstracted the showModeratorGroups function into the various FALs instead of trying to code support for multiple FALs into the room library.
  • Add the ability to set debug level in the database, which will eventually be part of the admin console.
  • Log backtraces with each log message, for fine-grained detail of what’s going on.
  • Add filtering to the logging code, so you can tag log messages with a debug level (long overdue).
  • Enable/disable plugins via the admin console (#3).
  • Adjust timeout values via the admin console (#3).
  • Adjust general settings via the admin console (#3).
  • Modify/remove front page announcement value via the admin console (#3).
  • Adjust cron settings (which will generate crontab syntax for the user to use) via the admin console (#3).
  • Add link in the submit panel to the Admin Console if the user is a moderator.
  • Add idle/limbo/warning code to the script-based session handler.
  • Fixed (hopefully) the issue with ghosting sessions (#15).
  • Disable timing out of moderators. That was annoying.
  • Add a social network filter plugin for twitter:username and fb:username expansion. More to come, perhaps.
  • Add a StormChat Bug filter plugin to expand bug:id to be a link to the issue tracker.
  • Implement chat font face and size options directly (#26).
  • Implement the ability to control the frame layout values. This allows people to do things like hide the avatar pane when they are using chat in the Firefox sidebar (#24).

You can check out the code via Subversion, or download one of the following files:

NOTE: When upgrading from 0.2.0 to 0.2.1, you must run update.php (open it in your browser) to update your database schema to work properly with the code changes. Annoying things will happen if you do not!

StormChat Web 0.2.0 Release

StormChat Web 0.2.0, released on 5/30/2011

Changes include:

  • Database migration updater that handles schema updates transparently.
  • Removal of the need to run the server to handle session timeouts and event timers with the addition of two standalone scripts to be run via cron (#5).
  • Initial support for phpBB 3.x added (phpbb3 FAL — #21).
  • Fixed several FALs to support updating the user_lastvisit_chat column in the prefs table when a user logs in. It had been missing from several of the FALs (and incorrect in the phpBB FAL).

You can check out the code via Subversion, or download one of the following files:

Lustre tip

Since I had a hard time finding this… If you’re setting up Lustre on a system with multiple interfaces, it seems to default to picking eth0. If you’re looking to route traffic over another interface instead (such as an internal network, a VPN, etc.) you’ll need to make a tweak on your MGS. In /etc/modprobe.conf (or equivalent file, depending on your distro), add:

options lnet networks=tcp(eth1)

Substitute eth1 with whatever your applicable interface is. Note that you’ll need to reboot for this to take effect. (There may be a way to change it without a reboot using the lctl tool, but I didn’t find one.)
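As an aside, lnet isn’t limited to a single network; if the MGS needs to listen on several interfaces, the module option accepts a comma-separated list (the interface names here are illustrative):

```shell
# /etc/modprobe.conf: expose lnet on both a LAN interface and a VPN interface
options lnet networks="tcp0(eth1),tcp1(tun0)"
```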

Failure to make this change will result in OST mounts failing like so:

mount.lustre: mount /dev/lustre/ost1 at /opt/lustre/ost1 failed: Input/output error
Is the MGS running?

This is because, having selected eth0 as its only network ID, it refuses connections from other hosts (even though the actual daemon binds to… a little odd, but, whatever). The appropriate snippet from dmesg:

LustreError: 120-3: Refusing connection from for No matching NI
LustreError: 4695:0:(socklnd_cb.c:1714:ksocknal_recv_hello()) Error -104 reading HELLO from
LustreError: 11b-b: Connection to at host on port 988 was reset: is it running a compatible version of Lustre and is one of its NIDs?

Hopefully this will help solve someone else’s headaches :)