Thursday, January 30, 2014

Python: A lightning quick introduction to virtualenv, nose, mock, monkey patching, dependency injection, and doctest

virtualenv

virtualenv is a tool for installing Python packages locally (i.e. local to a particular project) instead of globally. Here's how to get everything set up:

# Make sure you're using the version of Python you want to use.
which python

sudo easy_install -U setuptools
sudo easy_install pip
sudo pip install virtualenv

Now, let's set up a new project:

mkdir ~/Desktop/sfpythontesting
cd ~/Desktop/sfpythontesting
virtualenv env

# Do this anytime you want to work on the application.
. env/bin/activate

# Make sure that pip is running from within the env.
which pip

pip install nose
pip install mock
pip freeze > requirements.txt

# Now that you've created a requirements.txt, other people can just run:
# pip install -r requirements.txt

nose

Nose is a popular Python testing library. It's simple and powerful.

Create a file, ~/Desktop/sfpythontesting/sfpythontesting/main.py, with the following (you'll also want an empty __init__.py in the same directory so that sfpythontesting is importable as a package):

import random

def sum(a, b):
  return a + b

Now, create another file, ~/Desktop/sfpythontesting/tests/test_main.py with the following:

from nose.tools import assert_equal, assert_raises
import mock

from sfpythontesting import main

def test_sum():
  assert_equal(main.sum(1, 2), 3)

To run the tests:

nosetests --with-doctest

Testing a function that raises an exception

Add the following to main.py:

def raise_an_exception():
  raise ValueError("This is a ValueError")

And the following to test_main.py:

def test_raise_an_exception():
  with assert_raises(ValueError) as context:
    main.raise_an_exception()
  assert_equal(str(context.exception), "This is a ValueError")

Your tests should still be passing.

Monkeypatching

Sometimes parts of your code are difficult to test because they involve randomness, are time-dependent, or depend on external things such as third-party web services. One approach to this problem is to use a mocking library to mock out those sorts of things:

Add the following to main.py:

def make_a_move_with_mock_patch():
  """Figure out what move to make in a hypothetical game.

  Use random.randint in part of the decision making process.

  In order to test this function, you have to use mock.patch to monkeypatch random.randint.

  """
  if random.randint(0, 1) == 0:
    return "Attack!"
  else:
    return "Defend!"

Now, add the following to test_main.py. This code dynamically replaces random.randint with a mock (that is, a fake version), thereby allowing you to make it return the same value every time.

@mock.patch("sfpythontesting.main.random.randint")
def test_make_a_move_with_mock_patch_can_attack(randint_mock):
  randint_mock.return_value = 0
  assert_equal(main.make_a_move_with_mock_patch(), "Attack!")

@mock.patch("sfpythontesting.main.random.randint")
def test_make_a_move_with_mock_patch_can_defend(randint_mock):
  randint_mock.return_value = 1
  assert_equal(main.make_a_move_with_mock_patch(), "Defend!")

Your tests should still be passing.
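As an aside, mock.patch also works as a context manager, which is handy when you only want the patch active for part of a test. Here's a minimal, self-contained sketch (it uses unittest.mock, which on Python 3.3+ provides the same API as the standalone mock package, and a simplified make_a_move rather than the exact function above):

```python
import random
from unittest import mock  # the standalone "mock" package has the same API


def make_a_move():
    # Same idea as make_a_move_with_mock_patch above.
    if random.randint(0, 1) == 0:
        return "Attack!"
    else:
        return "Defend!"


# The patch is only in effect inside the with block.
with mock.patch("random.randint") as randint_mock:
    randint_mock.return_value = 0
    print(make_a_move())  # Attack!
```

Outside the with block, random.randint is automatically restored to the real function.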

Here's a link to a more detailed article on the mock library.

Dependency injection

Another approach to this same problem is to use dependency injection. Add the following to main.py:

def make_a_move_with_dependency_injection(randint=random.randint):
  """This is another version of make_a_move.

  Accept the randint *function* as a parameter so that the test code can inject a different
  version of the randint function.

  This is known as dependency injection.

  """
  if randint(0, 1) == 0:
    return "Attack!"
  else:
    return "Defend!"

And add the following to test_main.py. Instead of letting make_a_move_with_dependency_injection use the normal version of randint, we pass in our own special version:

def test_make_a_move_with_dependency_injection_can_attack():
  def randint(a, b): return 0
  assert_equal(main.make_a_move_with_dependency_injection(randint=randint), "Attack!")

def test_make_a_move_with_dependency_injection_can_defend():
  def randint(a, b): return 1
  assert_equal(main.make_a_move_with_dependency_injection(randint=randint), "Defend!")

To learn more about dependency injection in Python, see this talk by Alex Martelli.

Since monkeypatching and dependency injection can solve similar problems, you might be wondering which one to use. This turns out to be sort of a religious argument akin to asking whether you should use Vi or Emacs. Personally, I recommend using a combination of PyCharm and Sublime Text ;)

My take is to use dependency injection when you can, but fall back to monkeypatching when using dependency injection becomes impractical. I also recommend that you not get bent out of shape if someone disagrees with you on this subject ;)

doctest

One benefit of using nose is that it automatically supports a wide range of testing APIs. For instance, it works with the unittest testing API as well as its own. It also supports doctests, which are tests embedded inside the docstrings of normal Python code. Add the following to main.py:

def hello_doctest(name):
  """This is a Hello World function for using Doctest.

  >>> hello_doctest("JJ")
  'Hello, JJ!'

  """
  return "Hello, %s!" % name

Notice that the docstring serves as both a useful example and an executable test. Doctests have fallen out of favor in the last few years because, if you overuse them, they can make your docstrings really ugly. However, if you use them to make sure your usage examples keep working, they can be very helpful.

Conclusion

Ok, there's my lightning quick introduction to virtualenv, nose, mock, monkey patching, dependency injection, and doctest. Obviously I've only just scratched the surface. However, hopefully I've given you enough to get started!

As I mentioned above, people tend to have really strong opinions about the best approaches to testing, so I recommend being pragmatic with your own tests and tolerant of other people's strong opinions on testing. Furthermore, testing is a skill, kind of like coding Python is a skill. To get really good at it, you're going to need to learn a lot more (perhaps by reading a book) and practice. It'll get easier with time.

If you enjoyed this blog post, you might also enjoy my other short blog post, The Zen of Testing. Also, here's a link to the code I used above.

Tuesday, January 14, 2014

Interesting Computer Failures from the Annals of History

If you're young enough to not know what "Halt and Catch Fire", "Killer poke", and "lp0 on fire" are, here's a fun peek at some of the more interesting computer failures, failure modes, and failure messages from the annals of computer history:

Thanks go to Chris Dudte for first introducing me to "Halt and Catch Fire" ;)

Thursday, January 09, 2014

Humor: More Knuth Jokes

When Knuth implements tail call optimization, it's actually faster than iteration.

All of Knuth's loops terminate...with extreme prejudice.

The NSA is permanently parked outside of Knuth's house, hoping that he might help them crack public key encryption. Sometime last year, Knuth gave them a copy of "The Art of Computer Programming", but refused to tell them which page the algorithm was on.

Knuth taught a group of kids how to use their fingers as abacuses. It turns out that his method is Turing Complete.

Python: My Notes from Yesterday's SF Python Meetup

Here are my notes from yesterday's SF Python Meetup:

Embed Curl

John Sheehan.

embedcurl.com

It creates a pretty version of a curl command and the output. You can embed it in your site.

shlex is a module in Python to do simple lexical analysis.

1-Click Deployment with Launch and Docker

Nate Aune @natea

There are 10 million repos on GitHub. The curve is exponential.

appsembler.com

It launches a Docker container.

It makes it easy to deploy certain types of apps.

You can embed a widget on your app that says "Launch demo site".

docker.io (see also docker.io/learn_more/)

He talked about Containers vs. VMs.

Containers share the OS, so they launch very quickly.

You can create new containers, and each container is just a diff of another container, so it uses very little space.

Yelp: Building a Python Service Stack

Julian Krause, John Billings

They're moving toward a Service Oriented Architecture.

There are over 100 engineers at Yelp.

They have about 180k lines of code in a Python webapp called yelp-main.

This has increased the amount of time to come up to speed and release new features.

They're splitting their large codebase into a lot of little Python codebases that speak HTTP/REST.

Example: metrics = json.loads(urllib2.urlopen(url).read())

They were using Tornado 1.

"Global dependencies considered harmful."

They couldn't upgrade to Tornado 2 because there were too many dependencies on Tornado 1.

They're using virtualenv now.

He thinks that virtualenv's bin/activate is doing it wrong. It should work slightly differently.

I mentioned that one of the problems he was trying to solve could be solved by PEX.

Future directions for isolation: Docker.

They're using pip.

wheel is a built-package format for Python.

pip install -r requirements.txt

Always use specific versions in your requirements.txt. Use ==.
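A pinned requirements.txt looks like this (the version numbers here are illustrative, not recommendations):

```
nose==1.3.0
mock==1.0.1
```

Pinning with == means everyone who runs `pip install -r requirements.txt` gets exactly the same versions.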

Originally, they were using git submodules. They're not great.

They have separate repos for everything, and they release libraries for everything. They have a tool that monitors git tags.

They use Jenkins.

They use pypiserver.

They're switching from Tornado to Pyramid. It's been a successful migration.

There were issues in Tornado including testing.

Application servers: gunicorn, mod_wsgi, Circus, Pylons/waitress, and uWSGI

They evaluated all of them and picked uWSGI. It's working well. It's stable. It's fast. A lot of it is written in C. It has good documentation. They can integrate the logging with Scribe. The community is good. They have proper rolling restarts for their Java apps. uWSGI has hot reloading.

Metrics, metrics, metrics!

What is the 99th percentile time for this endpoint?

Are all service instances slow, or is it just one?

How many QPS is this endpoint handling?

Which downstream service is killing our performance?

Are any clients still using the old API?

Did the new service version introduce a performance regression?

They use a Metrics package for their Java code.

They wrote a package for Python called uwsgi_metrics. It's not open source yet, but they'll open source it shortly.

Example: with uwsgi_metrics.timing('foo'): ...
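uwsgi_metrics wasn't open source at the time of the talk, so its actual implementation may differ, but the general shape of a timing context manager like that can be sketched with the standard library's contextlib:

```python
import contextlib
import time

# Hypothetical in-memory store of recorded timings, keyed by metric name.
timings = {}


@contextlib.contextmanager
def timing(name):
    # Record the elapsed wall-clock time under the given metric name.
    start = time.time()
    try:
        yield
    finally:
        timings.setdefault(name, []).append(time.time() - start)


with timing('foo'):
    total = sum(range(1000))

print(len(timings['foo']))  # 1
```

The real package presumably also handles aggregation (percentiles, rates) and exposing the results over the JSON endpoint mentioned below.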

They have a JSON endpoint on all their services that exposes metrics.

uWSGI uses a prefork worker model.

uWSGI has mule processes. They're processes that don't handle interactive traffic, but you can use them as worker processes.

They measured a 50us overhead for recording a metric. You don't want to do this for too many metrics. 10s of metrics is okay. 1000s of metrics isn't.

airbnb/nerve is a service registration daemon that performs health checks.

airbnb/synapse is a transparent service discovery framework for connecting an SOA.

He showed service registration using Nerve. It sends stuff to ZooKeeper.

ZooKeeper is a highly available key value store with nice consistency guarantees.

They use HAProxy.

ZooKeeper -> Synapse -> HAProxy

Client -> HAProxy -> Service hosts

They have an operations dashboard.

There's no static configuration. If a service is running, it appears in the dashboard.

They're not using Smart Stack in production yet.

They have a service called Service Docs. When you build a service, all the docs get put on this website, keyed by service name.

People almost always end up writing client libraries for services. If you don't write one up front, you'll end up writing one implicitly anyway.

The nerve thing should be running on the same machines as the services.

They use memcache within services. They don't yet put caches in front of services.

They're thinking about putting Nginx in front of their services to add a little HTTP caching.

They're still investigating security between services.

WTF is PEX

Brian Wickman.

This is the first time Brian has really formally announced PEX.

This is a shortened version of an internal talk he gave, so I'm not going to take notes for everything he said.

You can create a __main__.py, and then run "python .", and it'll work.

pip search twitter.common

pip install twitter.common.python

This gives you the pex command.

A .pex is a ZIP file containing Python code that's executable. It's used to "compile" your Python projects down to a single file.
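The mechanism PEX builds on, Python's ability to execute a ZIP file (or directory) containing a __main__.py at its top level, can be demonstrated with the standard library alone (the file name demo.zip is just for illustration):

```python
import os
import subprocess
import sys
import zipfile

# Build a ZIP file whose top level contains a __main__.py.
with zipfile.ZipFile("demo.zip", "w") as zf:
    zf.writestr("__main__.py", 'print("hello from a zip")\n')

# Python executes the ZIP's __main__.py when given the archive directly.
output = subprocess.check_output([sys.executable, "demo.zip"])
print(output.decode().strip())  # hello from a zip
os.remove("demo.zip")
```

A .pex file is essentially this idea taken further: the archive also bundles dependencies and bootstrap code.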

You can also use it to create a file that acts like a Python interpreter with all the requirements bundled ahead of time into it.

pex -r flask -p flask.pex

./flask.pex hello_world.py

You can use PEX to easily create self-contained Python applications.

Twitter uses pants. It's a build tool.

Aurora is a service scheduler built on top of Mesos.

Download Aurora to see an example of something that uses pants.

Pants builds modularity into your monorepo.

pants is like blaze at Google.

Pants is multi-interpreter and multi-platform. The pex files work on multiple platforms.

Aurora is half Java and half Python.

A .pex file is similar to a Java .war file.

Thursday, January 02, 2014

Shooting a Screencast that Shows an IDE

Many moons ago, I had to record a screencast. Most of the screencast was spent looking at code in an IDE. I wanted the IDE to be fullscreen, but I also wanted the text to be readable, even when the viewer wasn't watching fullscreen. Furthermore, I didn't want to spend all day zooming in on the cursor; I wanted things to "just work". After playing around with settings way too much, this is what worked for me:

  • I used Camtasia for Mac.
  • I plugged my laptop into my TV (using an HDMI cable) and configured the screen to be 720p. That's the only easy way I know of to get the screen to be exactly 720p.
  • I recorded the video at 720p. Hence, what was in the video matched 1 to 1 with what was on my screen.

Here's a link to the original video. The video is quite viewable in a normal YouTube window, but if you go fullscreen, it looks even better. The text is very crisp once YouTube switches over to the 720p version, but it's still readable even in lower bandwidth environments.