Why I left Heroku, and notes on my new AWS setup

Written by Adrian Holovaty on May 20, 2013

On Friday, we migrated Soundslice from Heroku to direct use of Amazon Web Services (AWS). I'm very, very happy with this change and want to spread the word about how we did it and why you should consider it if you're in a similar position.

My Heroku experience

Soundslice had been on Heroku since the site launched in November 2012. I decided to use it for a few reasons:

  • Being a sysadmin is not my thing. I don't enjoy it, and I'm not particularly good at it.
  • Soundslice is a two-man operation (developer and designer), and my time is much better spent working on the product than doing sysadmin work.
  • Heroku had the promise of easy setup and easy scaling in cases of high traffic.

While I was getting Soundslice up and running on Heroku, I ran into problems immediately. For one, their automatic detection of Python/Django didn't work. I had to rejigger my code four or five times ("Should settings.py go in this directory? In a subdirectory? In a sub-subdirectory?") in order for it to pick up my app -- and this auto-detection stuff is the kind of thing that's very hard to debug.

Then I spent a full day and a half (!) trying to get Django error emails working. I verified that the server could send email, and all the necessary code worked from the Python shell, but errors just didn't get emailed out from the app for some reason. I never did figure out the problem -- I ended up punting and using Sentry/Raven (highly recommended).

These experiences, along with a few other oddities, made me wary of Heroku, but I stuck with it.

To its credit, Heroku handled the Soundslice launch well, with no issues -- and using heroku ps:scale from the command line was super cool. In December, Soundslice made it to the Reddit homepage and 350,000 people visited the site in a period of a few hours. Heroku handled it nicely, after I scaled up the number of dynos.

But over the next few months, I got burned a few more times.

First, in January, they broke deployment. Whenever I tried to deploy, I got ugly error messages. I ended up routing around their bug by installing a different "buildpack" thanks to a tip from Jacob, but this left a sour taste in my mouth.

Then, one April evening, I deployed my app, and Heroku decided to upgrade the Python version during the deploy, from 2.7.3 to 2.7.4. (In itself, that's vaguely upsetting, as I didn't request an upgrade. But my app code worked just as well on the new version, so I was OK with it.) When the deployment was done, my site was completely down -- a HARD failure with a very ugly Heroku error message being shown to my users. I had no idea what happened. I raced through my recent commits, looking for problems. I looked at the Heroku log output, and it just said some stuff about my "soundslice" package not being found. I ran the site locally to make sure it was working. It was working fine. I had deployed successfully earlier in the day, and I had made no fundamental changes to package layout.

After several minutes of this futzing around, with the site being completely down, after I had just sent the link to some potential partners who, for all I know, were evaluating the site that very moment -- I deployed again and the site worked again. So it was nothing on my end. Clearly just something busted with the Heroku deployment process.

That's when Heroku lost my trust. From then on, whenever I deployed, I got a little nervous that something bad would happen, out of my control.

Around the same time, Soundslice began using some Python modules with compiled C extensions and other various non-Python code that was not deployable on Heroku with their standard requirements.txt process. Heroku offers a way to compile and package binaries, which I used successfully, but it was more work using that proprietary process than running a simple apt-get command on a server I had root access to.

With all of this, I decided it was time to leave Heroku. I'm still using Heroku for this blog, and I might use it in the future for small/throwaway projects, but I personally wouldn't recommend using it for anything more substantial. Especially now that I know how easy it is to get a powerful AWS stack running.

My AWS setup

I'm lucky to be friends with Scott VanDenPlas, who was director of dev ops for the Obama reelection tech team -- you know, the one that got a ton of attention for being awesome. Scott helped me set up a fantastic infrastructure for Soundslice on AWS. Despite having used Amazon S3 and EC2 a fair amount over the years, I had no idea how powerful Amazon's full suite of services really was until Scott showed me. Unsolicited advertisement: You should definitely hire Scott if you need any AWS work done. He's one of the very best.

The way we set up Soundslice is relatively simple. We made a custom AMI with our code/dependencies, then set up an Elastic Load Balancer with auto-scaling rules that instantiate app servers from that AMI based on load. I also converted the app to use MySQL. In detail:

Step 1: "Bake" an AMI. I grabbed an existing vanilla Ubuntu AMI (basically a frozen image of a Linux box) and installed the various packages Soundslice needs with apt-get and pip. I also compiled a few bits of code I needed that aren't in apt-get, and I got our app's code on there by cloning our Git repository. After that instance had all my code/dependencies on it, I created an AMI from it ("Create Image (EBS AMI)" in the EC2 dashboard).
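
If you'd rather script that final "freeze" step instead of clicking through the dashboard, it's a single API call. Here's a rough sketch using the boto 2.x library of the era -- the region, instance ID and image name are placeholders, not my real values:

"""
import boto.ec2

# Connect to the region the configured instance lives in.
conn = boto.ec2.connect_to_region('us-east-1')

# Freeze the hand-configured instance into a new AMI.
ami_id = conn.create_image(
    instance_id='i-0123abcd',       # placeholder: the instance you just set up
    name='app-server-2013-05-20',   # placeholder AMI name
    description='App server with code and dependencies baked in',
)
print ami_id
"""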

Step 2: Set up auto-scaling rules. This is the real magic. We configured a load balancer (using Amazon ELB) to automatically spawn app servers based on load. This involves setting up things called "Launch configurations" and "scaling policies" and "metric alarms." Check out my Python code here to see the details. Basically, Amazon constantly monitors the app servers, and if any of them reaches a certain CPU usage, Amazon will automatically launch X new server(s) and associate them with the load balancer when they're up and running. Same thing applies if traffic levels go down and you need to terminate an instance or two. It's awesome.
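
The linked code has the real details, but to give a flavor of what those "launch configurations," "scaling policies" and "metric alarms" look like, here's a simplified boto 2.x sketch. The group names, AMI ID, instance type and CPU threshold below are placeholders, not my actual settings:

"""
import boto.ec2.autoscale
import boto.ec2.cloudwatch
from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup, ScalingPolicy
from boto.ec2.cloudwatch import MetricAlarm

as_conn = boto.ec2.autoscale.connect_to_region('us-east-1')
cw_conn = boto.ec2.cloudwatch.connect_to_region('us-east-1')

# Launch configuration: which AMI and instance type new app servers use.
lc = LaunchConfiguration(name='app-lc', image_id='ami-0123abcd',
                         instance_type='m1.small', key_name='my-keypair')
as_conn.create_launch_configuration(lc)

# Auto-scaling group: ties the launch configuration to the load balancer.
group = AutoScalingGroup(group_name='app-asg', load_balancers=['my-elb'],
                         availability_zones=['us-east-1a', 'us-east-1b'],
                         launch_config=lc, min_size=2, max_size=8)
as_conn.create_auto_scaling_group(group)

# Scaling policy: add one instance when triggered.
policy = ScalingPolicy(name='scale-up', adjustment_type='ChangeInCapacity',
                       as_name='app-asg', scaling_adjustment=1, cooldown=300)
as_conn.create_scaling_policy(policy)
policy = as_conn.get_all_policies(as_group='app-asg', policy_names=['scale-up'])[0]

# Metric alarm: fire that policy when average CPU stays above 60% for five minutes.
alarm = MetricAlarm(name='cpu-high', namespace='AWS/EC2', metric='CPUUtilization',
                    statistic='Average', comparison='>', threshold=60,
                    period=300, evaluation_periods=1,
                    alarm_actions=[policy.policy_arn],
                    dimensions={'AutoScalingGroupName': 'app-asg'})
cw_conn.create_alarm(alarm)
"""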

Step 3: Change app not to use shared cache. Up until the AWS migration, Soundslice used memcache for Django session data. This introduces a few wrinkles in an auto-scaled environment, because it means each server needs access to a common memcache instance. Rather than have to deal with that, I changed the app to use cookie-based sessions, so that session data is stored in signed cookies rather than in memcache. This way, the web app servers don't need to share any state (other than the database). Plus it's faster for end users because the app doesn't have to hit memcache for session data.
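
In Django this is a one-line settings change (the signed-cookie session backend has been built in since Django 1.4):

"""
# settings.py: keep session data in signed cookies instead of memcache.
# The cookie is signed with SECRET_KEY, so users can't tamper with it --
# but they can read its contents, so don't put anything secret in the session.
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
"""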

Step 4: Migrate to MySQL. Eeeek, I know. I have been a die-hard PostgreSQL fan since Frank Wiles showed me the light circa 2003. But the only way to use Postgres on AWS is to do the maintenance/scaling yourself...and my distaste for doing sysadmin work is greater than my distaste for MySQL. :-) Amazon offers RDS, which is basically hosted MySQL, with point-and-click replication. I fell in love with it the moment I scaled it from one to two availability zones with a couple of clicks on the AWS admin console. The simplicity is amazing. (UPDATE, spring 2014: Amazon now supports PostgreSQL in its RDS product!)
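
From the app's point of view, RDS is just a MySQL endpoint, so the Django side is nothing special -- roughly this, with a placeholder hostname and credentials:

"""
# settings.py sketch: point Django at the RDS-provided MySQL host.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'soundslice',
        'USER': 'app',
        'PASSWORD': 'not-my-real-password',
        'HOST': 'mydb.abc123xyz.us-east-1.rds.amazonaws.com',  # placeholder RDS endpoint
        'PORT': '3306',
    }
}
"""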

Step 5: Add a nice API with Fabric. Deployment was stupidly simple with Heroku, but it's easy to make it equally simple using a custom AWS environment -- I just had to do some upfront work by writing Fabric tasks. The key is, because you don't know how many servers you have at a given moment, or what their hostnames are, you query the Amazon API (using the excellent boto library) to get the hostnames dynamically. See here for the relevant parts of my fabfile.
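
The fabfile linked above has the real thing; the general shape of the trick, sketched here with placeholder names and paths, is to ask the load balancer which instances are currently registered and hand their hostnames to Fabric:

"""
# fabfile.py sketch: build Fabric's host list from whatever instances
# the ELB currently knows about (boto 2.x / Fabric 1.x).
import boto.ec2
import boto.ec2.elb
from fabric.api import env, run, task

def _live_hostnames(elb_name='my-elb', region='us-east-1'):
    elb_conn = boto.ec2.elb.connect_to_region(region)
    ec2_conn = boto.ec2.connect_to_region(region)
    lb = elb_conn.get_all_load_balancers(load_balancer_names=[elb_name])[0]
    instance_ids = [i.id for i in lb.instances]
    reservations = ec2_conn.get_all_instances(instance_ids=instance_ids)
    return [inst.public_dns_name for r in reservations for inst in r.instances]

env.user = 'ubuntu'
env.hosts = _live_hostnames()

@task
def deploy():
    run('cd /path/to/soundslice/code && git pull')
    run('sudo service soundslice restart')
"""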

Ongoing: Update AMI as needed. Whenever there's a new bit of code that my app needs -- say, a new apt-get package -- I make a one-off instance of the AMI, install the package, then freeze it as a new AMI. Then I associate the load balancer with the new AMI, and each new app server from then on will use the new AMI. I can force existing instances to use the new AMI by simply terminating them in the Amazon console; the load balancer will detect that they're terminated and, based on the scaling rules, will bring up a new instance with the new AMI.
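
Under the hood, "associate the load balancer with the new AMI" amounts to registering a new launch configuration that points at the freshly baked AMI and switching the auto-scaling group over to it. A boto 2.x sketch, again with placeholder names and IDs:

"""
import boto.ec2.autoscale
from boto.ec2.autoscale import LaunchConfiguration

conn = boto.ec2.autoscale.connect_to_region('us-east-1')

# Register a launch configuration that uses the new AMI...
new_lc = LaunchConfiguration(name='app-lc-v2', image_id='ami-9876fedc',
                             instance_type='m1.small', key_name='my-keypair')
conn.create_launch_configuration(new_lc)

# ...then point the auto-scaling group at it. Instances launched from now on
# use the new AMI; existing instances keep running until you terminate them.
group = conn.get_all_groups(names=['app-asg'])[0]
group.launch_config_name = 'app-lc-v2'
group.update()
"""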

Another approach would be to use Chef or Puppet to automatically install the necessary packages on each new server at instantiation time, instead of "baking" the packages into the AMI itself. We opted not to do this, because it was unnecessary complexity. The app is simple enough that the baked-AMI approach works nicely.

Put all this together, and you have a very powerful setup that I would argue is just as easy to use as Heroku (once it's set up!), with the full power of root access on your boxes, the ability to install whatever you want, set your scaling rules, etc. Try it!

Comments

Posted by Karan on May 20, 2013, at 4:10 p.m.:

Hey Adrian,

Postgres is available as a service. There are a couple of options -
1. Offered by EnterpriseDB itself for AWS -
http://www.enterprisedb.com/cloud-database
2. Beta version of another DBaaS provider -
http://www.cloudpostgres.com/

Posted by Cheyne on May 20, 2013, at 4:11 p.m.:

Nice write-up, I'm considering AWS myself as a Heroku alternative. One question though: how did you find the switch price-wise compared to Heroku?

Posted by Chris Streeter on May 20, 2013, at 4:12 p.m.:

When auto-scaling launches a new server, are there scripts that then update your code to the latest version? Otherwise, wouldn't you need to create a new AMI anytime you pushed new code?

Posted by Adrian Holovaty on May 20, 2013, at 4:14 p.m.:

Cheyne: Ah, I should have mentioned this in the post. The price is going to end up being basically the same. I could have done it even cheaper, but I'm paying extra for multi-AZ stuff (i.e., making the database and load balancer available in multiple availability zones, for failover protection).

Posted by Adrian Holovaty on May 20, 2013, at 4:17 p.m.:

Chris: Yes, you set "user data" for the instances. Check out "USER_DATA" in the code I posted at https://gist.github.com/adrianholovaty/4e354645dcf34ee0da92

Here's the relevant bit of what my user data looks like:

"""
#!/bin/sh
su ubuntu
cd /path/to/soundslice/code
git pull
sudo service soundslice start
"""

Posted by Brantley Harris on May 20, 2013, at 4:29 p.m.:

I don't do the AMI baking thing; I prefer to create Fabric scripts that will set up just about any Debian server, which means I'm not locked into any service. It does mean I have to manage more on my own, but with Fabric I'm able to do it pretty easily. Also, with boto, I have AWS management in my Fabric scripts as well, which means I can literally deploy a server and a website with one command -- it's pretty sweet.

I will be releasing the scripts soon.

Posted by Adrian Holovaty on May 20, 2013, at 4:43 p.m.:

Brantley: Yeah, that's the tradeoff there -- simplicity vs. flexibility. I found myself asking: at what point does flexibility for the sake of flexibility stop being worth it? Everybody's got a different answer, depending on their site, their sysadmin chops, etc.

Posted by Brantley Harris on May 20, 2013, at 4:50 p.m.:

Definitely. But I do believe decoupling our servers from the various services is of great importance. The fact that within a few minutes I could transfer completely from AWS to Linode is a tremendous boon. Ideally we'd have a layer of separation; that's what saltstack and some other projects are doing.

Posted by Pierre on May 20, 2013, at 5:01 p.m.:

You may want to consider using ElastiCache for sessions. It works great, is simple to set up and is inexpensive.

Posted by Cameron on May 20, 2013, at 6:39 p.m.:

What happens to users' sessions when auto-scaling down triggers and app servers are removed from the pool?

Posted by Dmitry on May 20, 2013, at 6:43 p.m.:

Great writeup, I wish they had Postgres in the cloud :)

What happens when you have X instances running and you update your AMI?

Posted by David Song on May 20, 2013, at 6:56 p.m.:

Hi Adrian,
Can you talk about the differences in cost? If not a detailed breakdown, just a ballpark figure. What size instances are you currently launching on AWS?

Posted by Khash Sajadi on May 20, 2013, at 7:03 p.m.:

We have set up Cloud 66 (www.cloud66.com) exactly to allow fellow developers to take advantage of the flexibility and control they deserve.

Posted by Justin Abrahms on May 20, 2013, at 7:08 p.m.:

I'm curious if you weighed AMI vs a configuration management thing like Chef. I'm guessing this comes down to "I don't want to be a sysadmin"?

Posted by Jason on May 20, 2013, at 7:10 p.m.:

Just to add to what Karan said about Postgres services - We are a current customer of EnterpriseDB's Postgres Plus Cloud Database, and I'm in the process of migrating our application to MySQL and RDS.

We've experienced several issues with EnterpriseDB that made me lose all confidence in the service, and here's why:

1) Even though you're paying a per-hour usage fee, support is not included. If you have any troubles at all, they will not talk to you. I can work through most issues myself, but there were a few (which I'll mention below) where I needed to talk to an engineer, because the problem was on their end. I finally had to get nasty and talk to their Product Managers because support refused to help me.
2) The first problem was that I couldn't connect to the DBs from my application through the load balancer port. When I contacted support, I was told they couldn't help me. After talking to the product manager, he told me it was a known issue and that I had to restart the LB service in order to connect for the first time.
3) They just started warning customers when they would be applying updates to their instances. Back in February, there was a major problem with an update that ruined the integrity of many customer clusters - ours being one of them. When I started getting flooded with Django errors, I knew it was them, but did all my homework first so I had a good argument when I went to their support. I contacted support to ask them if they had a recent problem because my cluster was toast. Again, I was told they couldn't talk to me because I didn't have a support contract. Finally got through to a PM again and was told an unannounced update broke the clusters. Had they been honest and up front and warned us they were planning an update, I would have only lost an hour or less of customer data rather than 8.
4) I posted on their forum a while back asking if anyone else was having issues with automated backups suddenly not working. Again, I was told that an update broke it and it would be back up shortly. I realized I'd been working 3 days with no cluster backups.

I wasn't expecting free support - I understand the need to make money. However, when the problem is caused by your own updates, you need to communicate that to the customers. I can't justify paying a support fee when every time I've had to contact you it's been your fault.

On the other hand, I will say their Product Managers are awesome.

I'm in the same boat - I don't want to be a full-time sysadmin or DBA, but unfortunately I'm all my company has right now. I thought EnterpriseDB would resolve that, but all I've gotten from them has been more stress. So far RDS has addressed our needs. While it's not as versatile, it strikes the balance I need so I can concentrate on other things.

Posted by Gabe da Silveira on May 20, 2013, at 7:15 p.m.:

@Cameron - Sessions are stored in cookies, not on the servers.

Posted by Adrian Holovaty on May 20, 2013, at 7:21 p.m.:

Cameron: Session data is stored in cookies, not on the servers, so adding/removing web servers has no effect on that. :-)

Dmitry: When I update an AMI, I can terminate existing instances, and they'll automatically come back up per my scaling rules. In practice, depending on the change I'm making to the AMI, I may or may not want to actually do that.

David Song: It's around $120/month, which is more or less what I was paying with Heroku.

Justin Abrahms: Yes, see the next-to-last paragraph of my post. We figured Chef would be too much complexity for my needs. You're right, though, that it comes down to sysadmin tastes.

Jason: Good to know. Thanks for the info.

Posted by Tor Erik Linnerud on May 20, 2013, at 7:33 p.m.:

It isn't clear from your writeup how you keep the problem with a system/Python upgrade from repeating on your new setup. If you were to upgrade something, it could still break.

Heroku's architecture makes it very easy to set up a staging environment identical to your production environment. There is even a fork command to automate this. Pushing the code to a staging environment before pushing it to production dramatically reduces the chance of a bad deploy. Should you still have one, you can immediately roll back with the heroku rollback command.

Posted by Vivek Sancheti on May 20, 2013, at 8:07 p.m.:

Thanks for the nice rundown of why we should prefer AWS over Heroku.
I am beginning to develop a few apps and will surely prefer AWS for them after reading this article.

Posted by Adrian Holovaty on May 20, 2013, at 8:14 p.m.:

Tor Erik Linnerud: In the new setup, if I want to make a big upgrade, I'll spin off a one-off instance with the latest AMI, do the upgrade on that instance and make sure it works.

If it works, I'll just freeze that instance as the new AMI and set up AWS to use it for all new instances. If it doesn't work, I'll tweak until it works -- all on the one-off instance. Note that the one-off instance has its own hostname, such that I can poke at it directly via a web browser.

Seems to be a reasonable approach, no? Hope that makes sense.

Posted by theTimmy on May 20, 2013, at 8:28 p.m.:

Great post - We have a similar setup on AWS, and use Capistrano to deploy - there's an ec2 gem to read your EC2 instances (we filter on status and security group) and then it's a simple git push && cap deploy to watch the code go to all of the relevant EC2 instances.

One argument for baking the AMI rather than Chef - starting up the AMI and adding it to the pool is usually faster than starting with a blank image, having Chef install the codebase, and then deploying. Helpful if you need a lot of servers fast.

Posted by Cameron on May 20, 2013, at 8:29 p.m.:

@Adrian - My mistake. I was mixing up session state in the cookies with setting the ELB to use cookies for session affinity. Makes sense. (Reading comprehension ftw!)

Posted by Oz on May 20, 2013, at 9:30 p.m.:

Man I wish Amazon would add Postgres support to RDS. There is a discussion thread requesting this feature dating back to 2009 (with 8 pages of +1's):
https://forums.aws.amazon.com/thread.jspa?threadID=37834

The idea of using boto to dynamically pull the list of servers used by fabric is so simple and awesome. I've been maintaining a static list like a sap. Thanks for that!

Posted by Bill on May 20, 2013, at 9:32 p.m.:

I have always heard bad things about using cookies to store session data: http://wonko.com/post/why-you-probably-shouldnt-use-cookies-to-store-session-data

I certainly like not having to use memcache or Redis to store user state, but am worried about the tradeoffs with speed and security. What are people's thoughts?

Posted by waxzce on May 20, 2013, at 10:10 p.m.:

I can offer you some http://www.clever-cloud.com credits if you want; Python works very nicely on it.

Posted by Rune Kaagaard on May 20, 2013, at 10:35 p.m.:

For deployment, check out http://ansible.cc/. No really, just do it. It's as fast or faster to get started with than Fabric, but so much lighter and simpler than Puppet/Chef. And it does the whole install-server-from-bare-metal routine, which is not really Fabric's sweet spot.

Posted by David Allison on May 20, 2013, at 10:44 p.m.:

Hi Adrian,

We're also big AWS users. I strongly encourage you to write to them (you have a pretty big name and big sway in the community) and ask them to support Postgres fully in RDS. I also use MySQL in RDS and would instantly switch to Postgres in RDS if it were available.

I have written to them and contacted our AWS reps many times in the past year about this, and I think if they hear from enough people, they will begin to support it.

Best,
David Allison
CTO + Co-Founder
Nulu, Inc.

Posted by Philip Dorrell on May 20, 2013, at 11:22 p.m.:

I have suffered from hard-to-debug "auto-detection" in completely different circumstances.

Auto-detection sounds good because supposedly "you just put file X in location Y, and the system automatically configures itself", but then you put what you think is file X in what you think is location Y, and nothing happens. And then what?

If you're going to provide auto-detection in your service/platform/framework/whatever, you should also provide an auto-detection logging option, which would log the auto-detection process something like this:

* I will attempt to auto-detect Ruby, Python
* I looked for files in this directory "/" and I looked at the extensions and I found ".css" ".py", so because of ".py" I detected Python.
* Assuming Python, I attempt to auto-detect Django, Flask, etc.
* For Django I am searching for settings.py
* I looked for settings.py in this directory and this directory and that directory, but it was not in any of those directories.
* I then moved on to looking for Flask
* etc etc

(This is all a bit made up, just to give the general idea, because I can't be bothered researching in detail whatever it is that Heroku claims to do when it auto-detects.)

Posted by Hassan Shahid on May 21, 2013, at 1:32 a.m.:

Not sure if you thought about this or not, but Heroku's Postgres-as-a-service can be used entirely independently of using Heroku for your app, and they provide all the nifty replication services (followers, forks, etc...).

Posted by Geoff Hill on May 21, 2013, at 3:50 a.m.:

You might not necessarily have to drop the shared-state cache. AWS offers a service called ElastiCache, a key-value memory cache, with servers located a couple of racks away from your EC2 instances.

http://aws.amazon.com/elasticache/

Posted by wcdolphin on May 21, 2013, at 4:22 a.m.:

I personally have my issues with Heroku, but these do not seem like issues with Heroku, as far as I can tell.

"For one, their automatic detection of Python/Django didn't work"

What didn't work about it? I am under the impression that the location of the settings.py file is not specific to Heroku in any way, and you could in fact place it almost wherever you want, as long as you correctly reference it from within your app? The Python buildpack is simple and available for developers to read and hack on, on GitHub.

"Then, one April evening, I deployed my app, and Heroku decided to upgrade the Python version during the deploy, from 2.7.3 to 2.7.4. (In itself, that's vaguely upsetting, as I didn't request an upgrade. But my app code worked just as well on the new version, so I was OK with it.)"

If you didn't expect to run the latest version of Python, you should have specified it in a runtime.txt, as described in their Python getting started guide. This behavior is consistent with pip and requirements files without version numbers specified: the latest released version is assumed.

Posted by John Digweed on May 21, 2013, at 6:31 a.m.:

Having used Heroku, I can confirm it is a really bad service. Do not be fooled into thinking it's great. ** You will end up spending more time in the long run debugging their build setup than you will save in the short term by not having to get involved with sysadmin/dev-ops stuff. ** Do not take a shortcut here. Invest some time to learn dev ops or sysadmin work; it will definitely pay off in the future.

As far as AWS goes, I don't like being locked into a particular cloud vendor so I opted not to learn the AWS platform.

I would recommend going down the dev ops route - personally I spent about a week learning Puppet, and can now bring up a fixed RoR environment on any cloud service (I use Rackspace) in 5 minutes. Having got this far, I don't see scaling as being an issue in the future, but I guess I'll cross that bridge when it occurs.

Posted by Dave on May 21, 2013, at 9:01 a.m.:

I have a very similar setup; I just don't use Fabric yet. I agree it is a nice setup, but I felt like there was a lot of sysadmin work to do before everything was up and running. And debugging in production is just not an option.

What do you do with logs? Do you use sticky sessions with ELB? Did you consider any other cloud provider beside AWS and Heroku?

Posted by John Doe on May 21, 2013, at 11:49 a.m.:

@wcdolphin, your arguments are moot.

The Python buildpack detection has changed a few times: https://github.com/heroku/heroku-buildpack-python/commits/master/bin/detect
One of those changes was to limit the search for settings.py to 2 subdirectories deep instead of 3. Just because this is simple and available to developers, it doesn't mean they'll take any care about breaking changes, especially warning you about them when they're really necessary.

Then, the Python version upgrade was not the problem. The author made sure the local code worked with either version, and he deployed again later, without changes, and the second time it worked.
So it must have been something on Heroku's end. Nothing indicates that the deployment wouldn't have been just as broken on either 2.7.3 or 2.7.4, so a runtime.txt wouldn't be of any use. You're speculating; this is speculation too, but at least it's based on observed behavior instead of on good intentions.

Posted by Torsten Engelbrecht on May 21, 2013, at 1:05 p.m.:

Great post.

I would like to hear more from you about how it is on AWS now compared to Heroku before. My concern is that there will be more overhead in sysadmin work sooner or later, which you could have avoided on Heroku. If not, even better.

Posted by Patrick Glandien on May 21, 2013, at 5:44 p.m.:

I would be careful with using cookies as general session storage, as storing even just small amounts of data adds significant overhead to every single web request and thus results in more latency and bandwidth usage.
As Pierre already mentioned, ElastiCache is a great candidate for your scenario.

Posted by David Robertson on May 22, 2013, at 1:20 a.m.:

What are the advantages of doing all of this manually instead of using Amazon Elastic Beanstalk? With Elastic Beanstalk, you can just push to a Git repository, similar to Heroku.

Posted by Katie C. on May 22, 2013, at 7:44 p.m.:

And people look at me strange when I tell them that I dislike dealing with managed services. Yes, I have to do a bit more work with my Rackspace slice, but no one's ever going to upgrade me without me knowing.

Posted by william on May 25, 2013, at 12:48 p.m.:

I can add a nugget of extra goodness to your plan.

As a long time fabric + boto user I recently began using saltstack. It's extremely easy and allows you to stay 100% in python for remote execution of "everything". It's also good at config management.

Posted by Dmitry on May 26, 2013, at 8:56 a.m.:

@David, Beanstalk is still on Python 2.6, uses mod_wsgi, and the documentation on building your own AMI for it is not that good. Correct me if I'm wrong?

Posted by Shawn on July 19, 2013, at 4:53 p.m.:

Hi, thanks for the write-up. I have been on Heroku for Play Framework apps and I have to say it was flawless. I am however doing my next app on AWS. The only downside I see with AWS is that it's a bit of a maze and constantly evolving. I have had some pricing shocks on AWS after configuring Beanstalk. I was done with my dev instances and shut them down via the console, but Beanstalk was configured to start up instances if any failed, so they just started up again. That mistake cost me $1000.

Posted by Chris Streeter on September 21, 2013, at 2:08 a.m.:

For what it is worth, Heroku does offer its Postgres DB-as-a-service, just like RDS (https://postgres.heroku.com/). And since they run on AWS too, it's very similar to using RDS, just you get Postgres instead.

Posted by Scott VanDenPlas on September 24, 2013, at 8:25 p.m.:

The problem with the Postgres offerings available is that all of them take management and configuration for any sort of redundancy, as far as I know.

If there is an RDS-like postgres service that handles Multi-AZ failover and redundancy, I am unaware of it. If you know of one, please share... I am actively looking.

Posted by Patrick on November 13, 2013, at 7:13 p.m.:

Thanks for this. I was curious what the all-in time effort was and if anyone has ever migrated from Heroku to AWS' custom HIPAA solution. Thanks!

Posted by Scott on November 14, 2013, at 9:42 p.m.:

RDS Postgres is now available in beta!

Posted by Ryan on December 18, 2013, at 3:01 p.m.:

Hey Adrian,

Excellent post, I've read it a few times now and it grounds me every time :) I like AWS, and am definitely going with it.

I have a few questions:

1. Have you heard of elastic beanstalk for AWS? What are your thoughts on it?
2. What kind of instances do you have, and what are your scaling rules primarily based on?
3. If possible, do you mind sharing how much it costs in total to run soundslice every month, please? I'd be extremely grateful, as it would give me a really good benchmark.

Thanks again for a great writeup.

Ryan

Comments have been turned off for this page.