The Business Model Sketch

I was helping a friend work through some business ideas, and realized that just as writing an outline helps you structure an essay, doing a “business model sketch” can help you break apart a business idea and evaluate its viability. Just like an essay needs a thesis, a body, and a conclusion or it falls flat, a business model needs specific components or it is not viable. I identified six components that roughly draw from The Personal MBA and the “lean startup” philosophy. If you are thinking of starting a new business, consider taking one page of paper, drawing six boxes, and filling out these six areas: customer, value proposition, marketing, sales, value delivery, and finance.

The problem with many business ideas is that, while they sometimes hit on a few of these areas, they are weak in others. That can be okay, but as a startup, your goal should be to develop a plan about how to address your idea’s weak points, and quickly test whether that plan is viable. If it is, great! If not, reconsider or proceed with caution!

The template

The customer

The customer is the specific person to whom value is being provided and who makes the purchasing decision. A business’s value proposition will often touch many people. For example, if you are selling family vacations, your business is offering value to the father (maybe you bundle in a few rounds of golf), the mother (there’s a reason cruises have spas), and the kids (I don’t think Disney pays actors to dress as Mickey Mouse for dad). This is important to understand, especially when you get to the value delivery portion of the business model sketch. However, first identify the decision maker!

Value proposition

The value proposition is a description of the value you are providing your customer. Remember that your customer is the person making the purchasing decision, so when crafting your value proposition, you need to understand the wants and needs of that particular person above any secondary party.

Ideally, focus your value proposition on addressing core human drives. The Personal MBA identifies five core human drives: the drives to acquire, bond, learn, defend, and feel. Many consumer products clearly target their value propositions to these core human drives. This can be difficult if you are not selling consumer products or services, but if you look closely, many business-to-business or business-to-government sales are targeted the same way. Living in Washington, DC, I notice this every time I go through the Pentagon metro station. Large defense contractors clearly target their buyer’s core human drive to defend with advertisements that depict the strength and sophistication of their offerings. The advertisements for Northrop Grumman’s unmanned drones are a great example.

Marketing

Marketing is fundamentally your strategy for getting your customer’s attention to deliver your value proposition. When you create a business model sketch, you need to attempt to identify how you will reach enough customers at an economical cost. If you have an excellent value proposition but your customers do not know about it, your business model will fail.

In the marketing portion of your sketch, you should also attempt to identify in rough terms the market size and dynamics: how many potential customers do you have and is that number increasing and/or gaining purchasing power?

Sales

If marketing is your strategy for reaching your target customer, sales is your strategy for negotiating a contract to deliver your value proposition in exchange for money. For an online business, this could be as simple as “sign up for PayPal and put a Buy Now button on my website”. If you are selling enterprise software, this could mean extensive contract negotiations.

Sales and marketing are closely intertwined, but separating them out to different boxes in the business model sketch should help you to separate the act of reaching your customer and delivering your value proposition (marketing) and the mechanics of “closing the deal” (sales).

Value delivery

This is what many people think of as the meat of your business model, but the previous sections are also key to determining its viability. Value delivery comprises the processes that deliver the promised value to your customers. In the family vacation example, this is the operations of your hotel, from hiring staff to procuring food and drinks for the restaurant. If you are an online business, this is the cost of developing your website as well as the operational cost of running your servers and supporting your customers.

Finance

The finance section should answer “what financial resources do I need to support this business model, and what is the return on investment?”. This is hard to answer in specifics in a business model sketch, and should be tackled in more detail in a complete business plan. However, once you complete the other sections, you should be able to answer these questions:

  • Is this a financial capital-intensive business to set up? To run? If yes, what is the risk to capital committed to the business? What return can I offer owners of capital given that risk? How can I source that capital, loans or equity investment?
  • Is this a human capital-intensive business to set up? If yes, what is my recruitment strategy? Can I attract the right talent? Would I compensate them with equity or a salary?

Writing Zero-Downtime Migrations for Rails and Postgres

Let’s suppose you are building an app. It is under heavy development and the dev team is cranking out new features left and right. Developers need to continually change the database schema, but you don’t want to take down the app for database migrations if at all possible. How the heck do you do this with Rails?

We had this problem recently, and have come up with a procedure that solves it for most small database migrations. The goals of this procedure are to:

  • Avoid downtime by running database migrations while the app is live
  • Avoid too many separate deployments to production
  • Keep the application code as clean as possible
  • Balance the cost of additional coding with the benefit of having a zero-downtime migration. If the cost or complexity of coding a migration in this way is too great, then a maintenance window is scheduled and the migration is written in a non-zero downtime fashion.

The first thing to understand when writing a zero-downtime migration is which types of Postgres data definition language (DDL) statements can be run without locking tables. As of Postgres 9.3, the following can be run without taking a table lock:

Postgres can create indexes concurrently (without table locks) in most cases. CREATE INDEX CONCURRENTLY can take significantly longer than CREATE INDEX, but it will allow both reads and writes while the index is being generated.

You can add a column to a table without a table lock if the column being added is nullable and has no default value or other constraints.

If you want to add a column with a constraint or a column with a default value, one option is to add the column first with no default value and no constraint, then, in a separate transaction, backfill the existing rows (using UPDATE) or use CREATE INDEX CONCURRENTLY to build an index that will back the constraint. Finally, a third transaction can add the constraint or the default to the table. If the third transaction adds a constraint that uses an existing index, no table scan is required.
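
For example, a sketch of that pattern (table and column names are made up; each statement runs in its own transaction):

    -- 1. Add a nullable column with no default: a metadata-only change, no long-held lock
    ALTER TABLE users ADD COLUMN login_count integer;

    -- 2. In a separate transaction, backfill existing rows (in batches on a big table)
    UPDATE users SET login_count = 0 WHERE login_count IS NULL;

    -- 3. In a third transaction, add the default for newly inserted rows
    ALTER TABLE users ALTER COLUMN login_count SET DEFAULT 0;

    -- Constraint variant: build the index concurrently, then attach the constraint
    -- to it so no table scan is needed
    CREATE UNIQUE INDEX CONCURRENTLY index_users_on_email ON users (email);
    ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE USING INDEX index_users_on_email;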

Dropping a column only results in a metadata change, so it is non-blocking. The data is actually removed later, when the table is vacuumed.

Creating a table or a function is obviously safe because no one will have a lock on these objects before they are created.

Process for Coding the Migration

The guidelines I have been using for writing a zero-downtime migration are to:

  • Step 1: Write the database migration in Rails.
  • Step 2: Modify the application code in such a way that it will work both before and after the migration has been applied (more details on this below). This will probably entail writing code that branches depending on that database state.
  • Step 3: Run your test suite with the modified code in step 2 but before you apply the database migration!
  • Step 4: Run your test suite with the modified code in step 2 after applying the database migration. Tests should pass in both cases.
  • Step 5: Create a pull request on Github (or the equivalent in whatever tool you are using). Tag this in such a way that whoever is reviewing your code knows that there is a database migration that needs careful review.
  • Step 6: Create a separate pull request on Github that cleans up the branching code you wrote in step 2. The code you write in this step can assume that the DB is migrated.

When the migration is deployed, you’ll first deploy the code reviewed in step 5. This code will be running against the non-migrated database, but that is a-ok because you tested that case in step 3. Next, you will run the migration “live”. Once the migration is applied, you will still be running the code reviewed in step 5, but against the migrated database. Again, this is fine because you tested that in step 4.

Finally, once the production database has been migrated, you should merge your pull request from step 6. This eliminates the dead code supporting the unmigrated version of the database. You should write the code for step 6 at the same time you write the rest of this code. Then just leave the pull request open until you are ready to merge. The advantage of this is that you will be “cleaning up” the extraneous code while it is still fresh in your mind.

Branching Application Code to Support Multiple DB States

The key to making this strategy work is that you’ll need to write your application code in step 2 in a way that supports two database states: the pre-migrated state and the post-migrated state. The way to do this is to check the database state in the models and branch accordingly.

Suppose you are dropping a column called “deleted”. Prior to dropping the column, you have a default scope that excludes deleted rows. After dropping the column, you want the default scope to include all rows.

You would code a migration to do that like this:
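
Something along these lines would do it (a sketch assuming a posts table; not the original migration):

    class RemoveDeletedFromPosts < ActiveRecord::Migration
      def up
        # Dropping a column is a metadata-only change in Postgres, so this is
        # safe to run while the app is live
        remove_column :posts, :deleted
      end

      def down
        add_column :posts, :deleted, :boolean
      end
    end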

Then, in your Post model, you’d add branching like this:
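
And a sketch of the branching (again assuming a Post model):

    class Post < ActiveRecord::Base
      # Branch on the actual database state so the same code works both
      # before and after the migration has been applied
      if column_names.include?("deleted")
        # Pre-migration schema: keep hiding soft-deleted rows
        default_scope { where(deleted: false) }
      end
      # Post-migration schema: no default scope, so all rows are returned
    end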

But doesn’t this get complicated for larger migrations?

Yes, absolutely it does. When branching like this gets too complicated, we either sequence the DB changes over multiple deployments (and multiple sprints, in the Agile sense) or “give up” and schedule a maintenance window (downtime) to make the change.

Writing zero-downtime migrations is not easy, and you’ll need to do a cost-benefit analysis between scheduling downtime and writing lots of hairy branching code to support a zero-downtime deploy. That decision will depend on how downtime impacts your customers and your development schedule.

Hopefully, if you decide to go the zero-downtime route, this procedure will make your life easier!

Querying Inside Postgres JSON Arrays

Postgres JSON support is pretty amazing. I’ve been using it extensively for storing semi-structured data for a project and it has been great for that use case. In Postgres 9.3, the maintainers added the ability to perform some simple queries on JSON structures and a few functions to convert from JSON to Postgres arrays and result sets.

One feature that I couldn’t figure out how to implement using the built-in Postgres functions was the ability to query within a JSON array. This is fairly critical for lots of the reporting queries that I’ve been building over the past few days. Suppose you have some JSON like this, stored in two rows in a table called “orders”:
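
For illustration, imagine each row has a json column called “data” that looks something like this (made-up values):

    {"id": 1, "products": [{"id": 1, "name": "Widget A"}, {"id": 2, "name": "Widget B"}]}
    {"id": 2, "products": [{"id": 2, "name": "Widget B"}, {"id": 3, "name": "Widget C"}]}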

If you want to run a query like “find all distinct IDs in the products field”, you can’t do that with the built-in JSON functions that Postgres currently supplies (as far as I’m aware!). This is a fairly common use case, especially for reporting.

To get this to work, I wrote a simple PL/pgSQL function to map over a JSON array.
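
A minimal version of that function might look like this (json_array_pluck is a placeholder name, not necessarily the original):

    -- Given a JSON array and a path, pull the value at that path out of each
    -- element and return the results as a native Postgres array of json values
    CREATE OR REPLACE FUNCTION json_array_pluck(json_arr json, path text)
    RETURNS json[] AS $$
    DECLARE
      result json[] := '{}';
      elem   json;
    BEGIN
      FOR elem IN SELECT value FROM json_array_elements(json_arr) LOOP
        result := result || json_extract_path(elem, path);
      END LOOP;
      RETURN result;
    END;
    $$ LANGUAGE plpgsql IMMUTABLE;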

Given a JSON array as “json_arr” and a JSON path as “path”, the function loops through all elements of the JSON array, extracts the value at the path, and collects the results into a native Postgres array of JSON elements. You can then use other Postgres array functions to aggregate it.

For the query above where we want to find distinct product IDs in the orders table, we could write something like this:
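
With the made-up rows and the function above, something like:

    -- Distinct product ids across all orders; cast to text because the json
    -- type has no equality operator for DISTINCT to use
    SELECT DISTINCT unnest(json_array_pluck(data->'products', 'id'))::text AS product_id
    FROM orders;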

That would give you the result:
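
For the sample rows above, that comes out to:

     product_id
    ------------
     1
     2
     3
    (3 rows)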

Pretty cool!

Volatility of Bitcoin Index

A while back when I was a research assistant at the Federal Reserve, I worked on a project to make exchange rate volatility indexes for the major world currencies. We basically had some high frequency data for USD/EUR, USD/CHF, and USD/JPY and wanted to see how the financial crisis affected volatility. With all of the hype and turmoil around Bitcoin, I thought it would be interesting to make a similar index for the Bitcoin/USD exchange rate.

Before Bitcoin is ever able to become a viable “currency”, volatility needs to come down a lot. Low volatility isn’t sufficient for it to take off, but it probably is necessary. If you take the traditional definition of a currency as a “store of value”, a “medium of exchange”, and a “unit of account”, persistently high volatility is absolutely a death knell. This is especially true in Bitcoin’s case, where there is no government backing and there are attractive substitutes in traditional currencies.

One of the cool things about Bitcoin, however, is that lots of the data is fairly open. Most of the rest of the financial market data in the U.S. is behind the copyright lock and seal of the major exchanges and electronic trading networks. Both the NYSE and NASDAQ make lots of money off of selling market data, and they recently won a court case to raise their rates even further. That makes doing this kind of analysis on other securities an expensive endeavor for armchair quarterbacks like myself!

Bitcoin, however, has a rather open ethos and most of the exchanges publish realtime or near-realtime price data for free. CoinDesk aggregates this into their Bitcoin Price Index, which they graciously agreed to send over for this analysis.

The Volatility of Bitcoin Index (VOB Index—I’m open to suggestions on the name) is a simple rolling standard deviation of minutely log-returns. That is not nearly as complicated as it sounds. To create the index, I started with a time series of minutely Bitcoin price data and calculated the log returns (basically percent increase or decrease for each minute).  Then for each period, I took all of the returns within a trailing window and computed the standard deviation. I did this for three window lengths: 30, 60 and 90 days, and then annualized it. The Bitcoin markets are 24/7/365, so there are no holiday or weekend adjustments, which makes things a bit easier.
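
To make the mechanics concrete, here is a rough Ruby sketch of the calculation (not the code used to build the actual index, and ignoring any gaps in the price series):

    # prices: chronologically ordered array of minutely BPI prices
    def rolling_annualized_volatility(prices, window_days)
      minutes_per_day = 24 * 60
      window = window_days * minutes_per_day

      # Minutely log-returns: ln(p_t / p_t-1)
      returns = prices.each_cons(2).map { |a, b| Math.log(b / a) }

      # For each period, take the standard deviation of the trailing window of
      # returns and annualize it (the Bitcoin markets run 24/7/365)
      returns.each_cons(window).map do |sample|
        mean     = sample.reduce(:+) / sample.size
        variance = sample.reduce(0.0) { |s, r| s + (r - mean)**2 } / (sample.size - 1)
        Math.sqrt(variance) * Math.sqrt(minutes_per_day * 365)
      end
    end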

Here’s what the index looks like for the period August 2011 to January 31, 2014.

Volatility of Bitcoin Index

Here’s the BPI (Bitcoin Price Index) over that period.

BPI Index
Bitcoin volatility looks like it is dropping over time, especially from the early days in 2011 and 2012, except for a large bump in April 2013. That was probably sparked by problems at Bitcoin’s largest exchange, Mt. Gox, during that period. This decrease is despite an increase in speculative interest in Bitcoin and moves by China to curtail the currency’s use.

Whether Bitcoin takes off is anyone’s guess and predicting if it does is probably a fool’s errand. There are lots of smart people working on it, but big questions about its viability still remain. In either case, it should be exciting to watch!

Tech Predictions for 2014

This is just a bunch of fun tech predictions for 2014. I can’t claim any insider knowledge, but it will be interesting to look back on them at the end of the year to see what the outcomes are!

Bitcoin will become “legitimate”, but not widespread

In 2014, regulators will likely issue guidance on how exactly companies and people can deal in Bitcoin without running afoul of anti-money laundering and money transmitter regulations, and how it is to be taxed. This will “legitimize” Bitcoin, but because of the difficulty of using the cryptocurrency and the insane amount of volatility of Bitcoin against fiat money, it will not become mainstream in 2014. Instead, Bitcoin will be relegated to particular types of transactions that the mainstream banks are either not accepting (due to the high probability of fraud or high compliance costs) or are charging exorbitant fees to facilitate. These will likely be cross-border transactions, especially in countries with capital controls (Argentina), legally grey-area transactions, or transactions that the major payment processors don’t accept.

Docker plus Deis or Flynn will result in an “open-source Heroku” and lots of dev-ops innovation

Heroku is probably one of the biggest dev-ops innovations in the last five years, making it vastly easier to deploy web applications, freeing development teams to actually build applications instead of focusing on DevOps. Heroku’s 12 Factor App architecture is now widely used and 2014 will see that continuing. However, Heroku has a few problems.

First, there currently isn’t a good way to “build your own Heroku” out of open source components. If Heroku is too constraining, you are forced to spin up your own servers on Amazon Web Services for part of your application, which eliminates lots of the advantages Heroku brings to the table. Last year there was a ton of excitement about Linux containers (LXC) and Docker, which is an abstraction on top of LXC that makes containers easier to manage. Both Heroku and Google Cloud use Linux containers internally, but they are closed-source implementations. Docker will likely begin to change that this year.

However, Docker alone is not a Heroku replacement. Much of the innovation in Heroku lies in the build packs and routing mesh, which Docker does not provide. Two other open source projects aim to become that layer on top of Docker, and these are the ones I’m most excited to watch. The first is Deis, which seems to be the furthest along in creating an open-source Heroku. Deis has both a routing mesh and an application layer created as well as scripts to automatically scale your “personal Heroku” on AWS, Digital Ocean and Rackspace. Flynn has many of the same goals, but doesn’t appear to be as far along. Deis has commercial support from Opsware, while Flynn is raising money Kickstarter-style to build out their platform. In any case, while Heroku is great, it is very exciting to see open source competitors come to the scene.

AngularJS and EmberJS will win the front-end wars for web apps

For highly interactive web apps, both AngularJS and EmberJS will become the clear choices in 2014. Backbone, Knockout and other front-end JS frameworks will see declining usage simply because they don’t provide as much of a framework as AngularJS or EmberJS. For new sites except for “CMS-like” apps, people will stop generating HTML on the server and push the page rendering to Javascript on the browser. Because of this, back-end frameworks will pivot towards being better REST API servers and focus less on their HTML template rendering abilities. The wildcard is “CMS-style” sites that need to be indexed by Google. While Google’s crawler can execute Javascript, content-heavy sites will still need a mechanism to serve HTML from the server for reliable SEO. This means that full-stack Rails apps will still be important in 2014. I think the writing is on the wall for this kind of app, however.

Mobile will continue to be “write twice” for the highest quality

Unfortunately, while HTML5 is great, it still won’t deliver the highest quality apps on mobile in 2014. As a cost-saving measure, or for apps that don’t need lots of interaction, HTML5 will be a viable choice. However, to create the highest-quality mobile apps in 2014, we’ll still need to write them twice: once for Android and once for iOS.

Wearable tech won’t be mainstream; in fact, society will push back from being “always connected”

Google Glass and the like will remain curiosities and not mainstream. In fact, I think that people are beginning to push back from being always connected to the Internet. Smartphone usage in many social situations is becoming a faux pas, and the word “glassholes” has already been coined for people who wear Google Glass in public. That being said, we will see the Internet smartly integrated into more consumer products, including continued innovation in automobile technology and home automation. The key for the “Internet of things” in 2014 will be unobtrusive, discreet, and large value-add, which probably isn’t wearable technology in its current form.

This is a dirt road leading from a waterfall outside of Vang Vieng, Laos back into town. The photo was taken in the rainy season, during a break in the afternoon showers.

Motorcycling in Laos

Last month I spent a week motorcycling around Laos with a friend, starting at the capital and heading north through probably one of the most scenic parts of Southeast Asia. This is still a fairly undeveloped part of the world, meaning empty roads, not many other tourists, and an unspoiled landscape. It also means the logistics of this trip were not easy, but the ride was definitely worth it. If you are interested in doing a trip like this, read on!

Preparing for the trip

Roads in Laos range from two lanes and decent blacktop on the main highways to muddy dirt tracks in the villages, so you will want to be a fairly proficient rider with at least some dirt biking experience before you go. We went early in the rainy season, which contributed to the mud, but even in the dry season, a lot of the smaller roads will require some dirt riding skills.

The two most sensible starting points for a trip like this are the capital, Vientiane, and Luang Prabang, a UNESCO World Heritage Town about 340km to the north. Both cities have international airports, but they are very small, so expect to fly through a major Asian hub like Bangkok or Seoul if coming from the U.S. or Europe. You can also take a bus from Thailand, Cambodia or Vietnam if you are already in Southeast Asia, but expect it to be an overnight bus (12 hours+) on mountainous, winding roads.

Renting bikes

We rented a Honda CRF250 and CRF230 from Remote Asia in Vientiane, who were excellent. The bikes were in great shape and Jim from Remote Asia was very helpful when we had a minor mishap (I lost an ignition key to my bike!). You really will want to rent bikes from someone who can also provide a cell phone and “roadside assistance”, and by that I mean basically translating via cell phone and recommending mechanics. English is not widely spoken in Laos and emergency services are non-existent, so you will want an English speaking contact in-country from your rental agency.

We had no issues with Laotian police (in fact we did not see any outside of Vientiane), so it would seem that you don’t strictly need an International Driver’s License or a motorcycle license at all. Your rental bike should come with a Laotian number plate and turn signals, and riding with a headlight during the day is apparently illegal, so it should be avoided.

The route

Laos has a few excellent motorcycle routes, and the one we took seems to be the most popular. You can order an excellent map of Laos with elevation profiles, road conditions, and city maps from GT Rider in Thailand here. This route took us 6 days of riding at a fairly leisurely pace. There are three long days of riding (over 150km) and one day each spent in Vang Vieng, Ponsavan, and Luang Prabang that you can use to explore the area surrounding these cities.

In addition to the GT Rider map, you should also get an Open Street Map app for your smart phone. OSM actually had pretty good coverage of Laos, including some of the caves and waterfalls outside of the cities. For Android, try OSMAnd, which lets you download country maps for offline use.

Here’s a map of the route we took and details of each leg.

 

Vientiane to Vang Vieng, 154km

If picking up your bikes in Vientiane, likely your first stop will be Vang Vieng. The ride to Vang Vieng is 154 km and fairly easy and flat until the last 50km or so where there are some small hills. Leaving Vientiane is where you will probably encounter the most traffic of the trip (not much!), but within 50km, it drops off a lot.

Vang Vieng is a tourist town along the Nam Song river with lots of caving, waterfalls, and rafting available nearby. It is definitely worth at least one whole day exploring outside of the town. Of the three towns we visited, I think Vang Vieng had the most spectacular scenery. The town is surrounded by a dramatic karst landscape that is even more surreal in the rainy season, when the peaks are shrouded in fog. In Vang Vieng, we went to a cave, a waterfall, and a “blue lagoon” (swimming hole) on our one full day there. These were all reached on dirt paths leading out from the town, which OpenStreetMap was a real help in locating.

Vang Vieng to Ponsavan, 233km

The second long day of riding, Vang Vieng to Ponsavan, is 233 km of challenging riding with tons of very steep switchbacks and incredible views of some of the highest peaks in Laos. On this part of the ride, you are heading to Ponsavan, the capital of the province containing the Plain of Jars, a neolithic archeological site with large stone jars (some over 4 feet tall). The Plain of Jars was also bombed heavily during the Vietnam War as part of the CIA’s covert war in Laos, so there is lots of war history in this area as well. A few NGOs working in Ponsavan to clear unexploded ordnance have interesting exhibits on the war in town.

Ponsavan to Luang Prabang, 259km

The last long day of riding will take you 259 km from Ponsavan to Luang Prabang (backtracking to Pho Khoun, then north to Luang Prabang). Backtracking in this way isn’t all that bad: the scenery and riding between Pho Khoun and Ponsavan are so great that you’ll have no problem doing it twice.

Pho Khoun makes a good stopping place for lunch, as it’s roughly the midpoint of this ride and there are a few places to refuel and eat. From Pho Khoun, you’ll have similar riding (which is to say great!) on to Luang Prabang.

Luang Prabang is a UNESCO World Heritage Site and has by far the best developed tourist infrastructure of the three cities we visited. Luang Prabang is at the intersection of the Mekong River and the Nam Khan River and has a large temple in the center of the city with great views of both rivers. There are waterfalls about 20km outside of town as well as rafting, elephant tours, and kayaking if you choose to stay here for a few days.

Route: Vientiane - Vang Vieng - Pho Khoun - Ponsavan - Pho Khoun - Luang Prabang

Elevation Profile: Vientiane – Vang Vieng – Pho Khoun – Ponsavan – Pho Khoun – Luang Prabang

Using NGINX to proxy TileStache

I’m working on a re-rendering of OpenStreetMap for hiking, with hill shading and topographic lines, and I decided to use TileStache to render the tiles. TileStache has the nice ability to render tiles from a bunch of different sources and then overlay them with different amounts of transparency and layer masks (just like Photoshop). TileStache is a Python WSGI server that you can run in mod_python or GUnicorn to serve tiles directly over HTTP. TileStache can cache map tiles to the file system and serve the static PNGs if they exist or render them from scratch using Mapnik if they don’t. It’s pretty fast, especially if the tiles are pre-rendered.

However, GUnicorn is a pre-forking server, which means a separate worker process is tied up for each client connection. If a slow client connects, TileStache worker processes are rapidly used up serving it (clients typically make up to 8 separate HTTP connections for slippy maps, so that’s up to 8 processes each!). This is the case even if the tiles are being served from cache.

What you need to do is add a reverse proxy in front of GUnicorn, using something like NGINX. The reverse proxy uses an evented I/O model, which enables it to manage sending data back to a slow client without tying up an operating system process. NGINX can also directly serve static assets from the filesystem, which means we can serve the cached tiles without even hitting GUnicorn/TileStache.

Getting this to work requires a bit of NGINX HttpRewriteModule voodoo, though. The issue is that TileStache saves cached tiles in a slightly different path than the URI path that comes in via HTTP. Say you have an OpenStreetMap-style URL like this: myserver.com/tiles/layer/$z/$x/$y.png. In this URL, $z is the zoom level (usually 1-19), and $x and $y are tile coordinates. For higher zoom levels, you can have 10,000+ by 10,000+ tiles in the x and y directions. That’s way too many PNG files to store in one folder on the filesystem. So, TileStache splits the x and y paths into two levels. Say you have a URL like /tiles/layer/7/12345/67890.png. TileStache will store that in the filesystem path /tiles/layer/7/012/345/067/890.png. Notice how 12345 is broken into 012/345? That means there will be at most 1,000 files or folders in each directory, a manageable amount. The issue is that we need to get NGINX to rewrite URLs to serve these static assets. Here’s how I accomplished that:
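
The gist is below. This is a simplified sketch that only handles five-digit x and y values; the real config repeats the pattern for each digit-length combination, and the paths and layer names are placeholders:

    # Cached tiles live under /osm/cache; e.g.
    # /tiles/layer/7/12345/67890.png -> /osm/cache/tiles/layer/7/012/345/067/890.png
    location /tiles/ {
        root /osm/cache;

        # Split 5-digit x and y into zero-padded 3/3 directory levels
        rewrite "^/tiles/([^/]+)/(\d+)/(\d{2})(\d{3})/(\d{2})(\d{3})\.png$"
                "/tiles/$1/$2/0$3/$4/0$5/$6.png" break;

        # Serve the rewritten path from disk, falling back to TileStache on a miss
        try_files $uri @tilestache;
    }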

This mountain of rewrite lines will rewrite the request URL to the filesystem format, then look for tiles in the filesystem tree starting at /osm/cache. The last line tells NGINX to look for the rewritten URL and, if the file is not found, to send the request to the @tilestache location block, which looks like this:
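
Something along these lines, assuming GUnicorn is bound to 127.0.0.1:8080:

    location @tilestache {
        # Cache misses get proxied to the GUnicorn/TileStache workers
        proxy_pass       http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }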

That location block proxies the request to the GUnicorn server listening on localhost:8080.

This seems to be working great. NGINX is far faster in serving static assets, and if all of the worker TileStache processes are busy rendering, the cached zoom levels of the map work fine!

Link

Craigslist is on board: OpenStreetMap soars to new heights

Craigslist is now using OpenStreetMap for showing the location of apartment listings! This is great for two reasons. First, looking for apartments will become much easier on Craigslist. Second, it’s validation that OpenStreetMap’s dataset is hugely valuable and robust enough for commercial use. Hopefully, I’ll be rolling out my OpenStreetMap project within the next week or so. The project is a re-rendering of the basic slippy map with topographic lines and hiking trails taking more prominence. More to come!

Backbone.js: Thankfully, a great MVC framework for the frontend

Frameworks, frameworks…

On the backend, web development frameworks have been growing quickly in popularity. Rails, Django, CakePHP, and others are popular because they save developers a ton of time. Someone once said that a good developer is a lazy developer, and frameworks enable developers to kick back with a Corona on a beach (well, not quite, but close) by making a lot of the architectural decisions for them. Rails is a great example of this, with a clear MVC structure, a preset file system hierarchy, and even database schema conventions. If you were coding a site in pure Ruby, you’d need to make all of these decisions yourself.

While backend frameworks are really popular, there has been a dearth of good choices in front-end frameworks. As web apps move more and more processing to client-side Javascript, this is becoming a growing problem.

The front-end Javascript jungle

Front-end Javascript tends to be a huge mess of jQuery callbacks, DOM elements stuffed with extra object properties, and a polluted root window object. As client-side applications get larger, this is completely unsustainable. It makes code hard to maintain, hard to understand, and un-testable with unit testing libraries. Backbone.js greatly helps with all of these issues.

Enter Backbone.js

Backbone is a minimalist MVC framework for Javascript. It adds models and collections with persistence code that works well with JSON REST APIs, as well as views that automatically re-render on model updates and controllers that handle hash-bang URLs or HTML5 pushState.

All of this comes in 4KB of minified Javascript that ties in with jQuery, Underscore, or any other Javascript library.
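
As a taste of what that looks like, here is a toy model/collection/view setup (illustrative only, not from the project described below):

    // A model and collection that sync with a JSON REST API
    var Post = Backbone.Model.extend({ urlRoot: "/api/posts" });
    var Posts = Backbone.Collection.extend({ model: Post, url: "/api/posts" });

    // A view that re-renders whenever its model changes
    var PostView = Backbone.View.extend({
      template: _.template("<h2><%= title %></h2>"),
      initialize: function() {
        this.model.on("change", this.render, this);
      },
      render: function() {
        this.$el.html(this.template(this.model.toJSON()));
        return this;
      }
    });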

Backbone dos and don’ts

I’m currently working on a small side project to brush up on some Javascript coding, and decided to use Backbone as a front-end framework (I’m using Rails on the backend). Here are some brief notes from my first impressions:

Do

  • Put most of your front-end application (or business) logic in the models. This is basically the same thing you would do with an MVC app on the server.
  • Use a templating library. Underscore.js has a pretty decent _.template() function. Inline HTML strings really clutter your Javascript code.
  • Try using the Rails asset pipeline or some other way to minify and compile your JS. This way, you can spread your Backbone code out into many files. I tended to use a folder hierarchy similar to Rails (folders for models, collections, controllers, and views).

Don’t

  • Put much logic in your views. It is very hard to debug view code because the function that renders the view is typically created programmatically by the templating library.
  • Prematurely optimize your view code. The easiest way to render a view is to just create an HTML fragment from a templating library and then insert it into the DOM with jQuery. This is fine for most cases. You can also manipulate the DOM by changing the inner text of elements on a view re-render, which might be faster but often isn’t worth the extra work.

Using Capistrano to deploy Rails apps from Gitolite

Here, I’ll show you how to deploy a Rails app from a Gitolite repository via Capistrano. In this example, I’m running Phusion Passenger on NGINX on Ubuntu 10.04. The instructions should be very similar for Ubuntu 12.04.

First, understand what we’re doing here. I’m assuming you are using Gitolite for version control (although similar instructions would probably work for Github). We’re going to add a read-only deployment key to the Gitolite repository. When you run cap deploy, Capistrano will log into your production server via SSH (using public key authentication). Then the Capistrano script will instruct the production server to check out the latest version of your app from the Gitolite repository into a directory on that server. Finally, the Capistrano script will change a symlink to the new version of your app and instruct Phusion Passenger to reload the app into memory on the next hit.

Setting up your production server

Create a new user for deployment-related tasks on your production server. Switch to that user.
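
On Ubuntu, that might look something like this (the name deployuser is just the one used throughout this post):

    sudo adduser deployuser
    sudo su - deployuser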

Now, generate some SSH keys for that user. Run as the deployuser:
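
For example (accepting the default key location of ~/.ssh/id_rsa):

    ssh-keygen -t rsa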

I don’t typically enter a password for this keypair. The reason is that this keypair is only used for read-only access to your code repository in Gitolite. If your code is highly sensitive, you might want a password. If you enter one here, you will be prompted for it each time you deploy code.

Now, wherever you have your Gitolite admin repository checked out, open it up and add the public key to your keydir folder. I like to keep my deployment keys in a subfolder called something like “deployment”.

Say, for example, your Gitolite admin repository is at ~/repos/gitolite-admin. Switch to that path. Now enter the folder keydir. Make a new subfolder called deployment, and then a new file in that folder called something like MyDeploymentKey.pub. Open that file in your editor and paste the public key that you just created from your deployment server. Typically, that key is found at ~/.ssh/id_rsa.pub.

Now, open your gitolite.conf file (in the conf folder in your Gitolite repository). Find your project and add a directive to grant your deployment key read-only access. Here’s an example project section:
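
For instance (repo and user names are placeholders; MyDeploymentKey matches the key file added above):

    repo myproject
        RW+     =   alice bob
        R       =   MyDeploymentKey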

Note that even though the deployment key could be in a subfolder, you still just enter the filename minus the “.pub”.

Save the Gitolite files, commit and push to your Gitolite server.

Setting up Capistrano

Now, open up your Rails project you want to deploy. Add these gems:
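
For a Capistrano 2-style setup (which is what the deploy script below assumes), that means something like this in your Gemfile:

    group :development do
      gem 'capistrano', '~> 2.15'
    end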

Run bundle install and then, from the top directory of your project, run capify .. This adds Capistrano to your project. Open up config/deploy.rb and add something like this:
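
A sketch of a Capistrano 2 deploy script matching the setup described here (server names, repository URL, and paths are placeholders):

    require "bundler/capistrano"

    set :application, "myproject"
    set :repository,  "gitolite@mygitoliteserver.com:myproject.git"
    set :scm,         :git
    set :branch,      "master"
    set :deploy_via,  :remote_cache

    set :user,      "deployuser"
    set :use_sudo,  false
    set :deploy_to, "/srv/mydomain.com/public"

    server "mydomain.com", :app, :web, :db, :primary => true

    # Phusion Passenger picks up new code when tmp/restart.txt is touched
    namespace :deploy do
      task :restart, :roles => :app, :except => { :no_release => true } do
        run "touch #{File.join(current_path, 'tmp', 'restart.txt')}"
      end
    end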

This deploy script will check out your code from the project myproject on mygitoliteserver.com and deploy it to /srv/mydomain.com/public on your production server (make sure you create this directory). Whenever you deploy, Capistrano will touch tmp/restart.txt so that Phusion Passenger restarts with the new code.

Once you are finished editing this script, commit your changes and push your latest code to your Gitolite server.

Deciding who gets to deploy

For each user you want to allow to deploy code, have them generate an SSH key. Then, on your deployment server, open or create ~deployuser/.ssh/authorized_keys and add each of their public keys (one key per line) to this file.

Deploying!

Now, to test out deployment, from your Rails root on your development machine (the machine whose SSH key you added to ~deployuser/.ssh/authorized_keys), run cap deploy.
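
With Capistrano 2, a first deploy typically looks like this:

    cap deploy:setup   # one-time: creates the releases/ and shared/ directories on the server
    cap deploy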