Tuesday, January 25, 2011

Python: SSL Hell

I was having a hard time getting SSL to work with gevent on Python 2.6. It turns out I had two problems.

The first resulted in this error message:
SSLError: [Errno 336265218] _ssl.c:337: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
It turned out to be a permissions issue. I ran "cat" on the file and discovered that I didn't have access to it:
cat: /etc/mycompany/certs/httpd/mycompany-wildcard.key: Permission denied
I ran the command with sudo, and the problem went away.
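Permissions are easy to rule out up front. Here's a small sketch (not from the original post) that pre-checks whether the current process can read a key file before handing it to the SSL layer, so you get a clear failure instead of a cryptic SSLError:

```python
import os


def readable(path):
    """Return True if the file exists and the current user may read it."""
    return os.path.isfile(path) and os.access(path, os.R_OK)
```

You could call this on your .key and .crt paths at startup and log a "fix your permissions" message if it returns False.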

The second error was related to using urllib2 under gevent:
URLError: <urlopen error [Errno 2] _ssl.c:490: The operation did not complete (read)>
<Greenlet at 0x2add8d0: start_publisher> failed with URLError
SSLError: [Errno 8] _ssl.c:490: EOF occurred in violation of protocol
<Greenlet at 0x2add958: <bound method WSGIServer.wrap_socket_and_handle of <WSGIServer at 0x2b48750 fileno=3 address=>>(<socket at 0x2b48a10 fileno=5 sock=, ('', 37858))> failed with SSLError
This problem was because I was using gevent to monkeypatch the socket module, but I wasn't using it to monkeypatch the ssl module. Once I monkeypatched the ssl module, everything worked.

I had a heck of a time writing nosetests that would fire up a server using gevent and connect to it over SSL using urllib2. However, those nosetests proved very valuable in helping me figure out when and where SSL was breaking for me.

Here's what one of those nose tests looked like:
# Unfortunately, this monkey patching is not isolated to just this module.

from gevent import monkey
monkey.patch_all(thread=False)  # Nose uses threads.

import urllib2

import gevent

from myproj import server

TEST_INTERFACE = "127.0.0.1"  # Use whatever interface your server binds to.
TEST_PORT = 34848
URL = "https://%s:%s" % (TEST_INTERFACE, TEST_PORT)


def test_server():

    test_successful_box = [False]

    def start_server():
        server.main(interface=TEST_INTERFACE, port=TEST_PORT)

    def start_publisher():
        response = urllib2.urlopen(URL)
        assert response.msg == "OK"
        test_successful_box[0] = True

    server_greenlet = gevent.spawn(start_server)
    publisher_greenlet = gevent.spawn(start_publisher)
    publisher_greenlet.join()  # Wait for the request to finish.
    server_greenlet.kill()     # The server loops forever; shut it down.
    assert test_successful_box[0]
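As an aside, the one-element list (`test_successful_box`) is the standard Python 2 workaround for the lack of `nonlocal`: an inner function can't rebind an outer function's local variable, but it can mutate one. A minimal illustration of the pattern:

```python
def make_counter():
    count_box = [0]  # A one-element list acts as a mutable cell.

    def increment():
        count_box[0] += 1  # Mutation, not rebinding, so no nonlocal needed.
        return count_box[0]

    return increment
```

Each call to `make_counter()` gets its own box, and the inner function can update it freely.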

Friday, January 07, 2011

Python: use_twisted Decorator

This is a decorator that you can put on a Python function that will temporarily fire up Twisted's reactor and run a function under Twisted. This is useful if most of your program doesn't use Twisted, but you have a function that must use Twisted.

Here's an example of using it:
from twisted.internet import defer


@use_twisted
@defer.inlineCallbacks  # Needed since the function uses yield and returnValue.
def do_something_twisted():
    value = yield do_something_else_twisted()
    other_value = yield do_more_stuff()
    defer.returnValue(value + other_value)
Here's the decorator:
def use_twisted(twisted_function):

    """This is a decorator to run a function under Twisted.

    Temporarily fire up the reactor and run the function under Twisted.
    It should return a deferred, of course.

    Unfortunately, there's a bug in Twisted that only allows you to
    start and stop the reactor once. Hence, this decorator will
    prevent you from calling it twice. So sorry :(

    """

    from twisted.internet import reactor
    from twisted.python.failure import Failure

    captured_value = []  # Think of this as a box.

    def wrapper(*args, **kargs):
        if globals().get("_reactor_run", False):
            raise AssertionError(
                "The use_twisted decorator can only be used once")
        globals()["_reactor_run"] = True
        reactor.callLater(0, call_twisted_function, twisted_function, args,
                          kargs)
        reactor.run()
        assert captured_value
        value = captured_value[0]
        if isinstance(value, Failure):
            value.raiseException()
        if isinstance(value, Exception):
            raise value
        return value

    def call_twisted_function(twisted_function, args, kargs):
        deferred = twisted_function(*args, **kargs)

        def capture_and_stop(value):
            captured_value.append(value)
            reactor.callLater(0, reactor.stop)
            return value

        deferred.addBoth(capture_and_stop)

    return wrapper
Updated: Added code to prevent the decorator from being called twice.

Thursday, January 06, 2011

Linux: pssh

Have you ever needed to run a bunch of shell commands over ssh on a bunch of servers? I know there are probably a ton of tools out there to do this, but when I asked my operations buddy Geoff which he preferred, he told me to try out pssh (aka parallel-ssh). I tried it, and I was pleased to discover it was easy to set up and easy to use.

It's a Python package. Make sure you have setuptools installed (on Ubuntu, use "sudo apt-get install python-setuptools"). Then run "sudo easy_install pssh". It creates the following binaries in /usr/local/bin: prsync, pssh, pnuke, pslurp, pscp, and pssh-askpass.

It's best if you install your ssh key on each system. I have a shell script called ssh-installkey to do that:
#!/bin/sh

# Install my ssh key on a remote system.

[ -n "$1" ] || {
    echo "usage: ssh-installkey username@host" >&2
    exit 1
}
ssh $1 "mkdir -p -m 700 .ssh"
ssh $1 "cat >> ~/.ssh/authorized_keys2" < ~/.ssh/id_dsa.pub
ssh $1 "chmod 600 ~/.ssh/authorized_keys2"
Unfortunately, you'll have to run this script manually for each of the servers, which involves typing in your password a bunch of times. However, once you have your ssh key installed, your life will be much more pleasant.

To use pssh, you should create a hosts file with the hosts that you want to control. It's a simple file with one host per line. If you need to specify a username, you can use the format username@host, just like ssh.
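For example, a hosts file for a small cluster might look like this (the hostnames here are made up):

```
ubuntu@ec2-host-1.example.com
ubuntu@ec2-host-2.example.com
ec2-host-3.example.com
```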

Now you can try out pssh: "pssh -h hosts.txt -i ls". The "-i" tells pssh to output the results from each server "inline" (which looks nice). If you don't care about the output of the command (for instance, if you're compiling something), you can just leave out the -i.

There are a few gotchas to be aware of. First of all, each command starts with a fresh login. That means using "cd" in one command doesn't help at all for the next command. I tend to use commands like "cd dir && do_something" when running pssh. Secondly, if your command takes a long time to run, pass "-t -1" to turn off timeouts.

Lastly, you'll need to do some more work if you want to use sudo. By default, sudo won't run if you don't have a tty, which you won't if you're using pssh. To fix this, you'll have to manually log into each server and edit /etc/sudoers. Comment out the line that says "Defaults requiretty". Once you do that, you'll be able to use sudo with pssh.
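For reference, the edit in /etc/sudoers (made with visudo) is just commenting out one line:

```
# Disable the tty requirement so sudo works over pssh's non-tty sessions.
#Defaults    requiretty
```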

I was able to use pssh to control a cluster of 10 EC2 instances in order to install ZeroMQ. (In real life, I'd add ZeroMQ to the AMI so that it was already installed on each server, but using pssh helped me get something up quickly so that I could experiment.)

Linux: Unity

I tried out Unity, which is going to be the user interface for Ubuntu 11.04. It's built on top of GNOME, and it's currently used in Ubuntu's Netbook edition. Overall, I liked it. It's definitely one step closer to OS X. For instance, menus are shown at the top of the screen rather than at the top of each window. Unfortunately, I encountered several bugs when trying to use an external monitor, so clearly Canonical's programmers have their work cut out for them to hit Ubuntu's 11.04 release (due in April, of course).

Monday, January 03, 2011

Linux: CrunchBang Linux 10 on a MacBook Pro

I tried CrunchBang Linux 10 on a MacBook Pro. Previously, I had a lot of trouble dual booting with OS X, so I did the same thing I did for Ubuntu--I told it to use the entire disk. This turned out to be a big mistake.

I put GRUB on the MBR since I wasn't dual booting. I also set up an encrypted LVM. The system wouldn't boot. I just got a flashing folder with a "?" icon. I think this is a known problem with Debian right now.

I also wanted to try MEPIS. It's based on Debian as well. It even has a utility that you run from within OS X that sets stuff up for dual booting. Unfortunately, since I wiped OS X, that wasn't available.

I was doing all this on my company's spare MacBook Pro (while my MacBook Pro was in the shop). Unfortunately, the DVD for my MacBook Pro wouldn't boot on the other MacBook Pro (which was just a few months newer). Hence, I couldn't reinstall OS X.

My solution was to install Ubuntu. I gave it the whole disk, and everything worked out okay. I was a little bummed since I wanted to try a new distro (either CrunchBang or MEPIS), but it's hard to argue with a working system ;)