If you’re like me, you installed PostgreSQL (or MySQL or whatever) locally for development purposes. Then you copied data into your instance, and along you went.
Eventually, you wrote a migration that touched nearly all rows in your database, and you waited, and waited, and waited some more (grab some coffee!) for the migration to finish.
In my case, I had a pure-Ruby migration that took 30 minutes. It ran along these lines:
```ruby
def self.up
  add_column :segments, :pathname, :string

  # There are 200,000 segments in the DB
  Segment.find_each do |segment|
    segment.pathname = segment.full_pathname
    segment.save!
  end

  change_column :segments, :pathname, :string, :null => false
end

class Segment < ActiveRecord::Base
  belongs_to :parent, :class_name => "Segment"

  def full_pathname
    full_name = []
    m = self
    while m
      full_name << m.name
      m = m.parent
    end

    full_name.reverse.join(" / ")
  end
end
```
The intent was to cache each segment's pathname in its record (think of a directory tree). Since we had 200,000 segments, and each one walked up its parent chain, the migration took around 30 minutes to run. I rewrote it to use a recursive CTE (common table expression), which PostgreSQL supports out of the box, and the migration now ran in 17 minutes or so. I warned my teammates about this migration, and let it be.

Eventually, a colleague took my branch and ran the migration. When he reported the migration took 30 seconds for him, at first I thought his SSD was the major difference between our machines. It turns out it wasn't…
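For the curious, the CTE version looked roughly like this. This is a sketch, not the exact migration I ran: it assumes the segments table has id, name, and parent_id columns (the ones the Ruby code above relies on), with roots marked by a NULL parent_id.

```sql
-- Sketch of the CTE-based rewrite: build every pathname in one
-- recursive pass, then write them all back with a single UPDATE.
WITH RECURSIVE paths (id, pathname) AS (
  -- Roots: segments with no parent
  SELECT id, name::text
    FROM segments
   WHERE parent_id IS NULL
  UNION ALL
  -- Children: append each child's name to its parent's pathname
  SELECT s.id, p.pathname || ' / ' || s.name
    FROM segments s
    JOIN paths p ON s.parent_id = p.id
)
UPDATE segments
   SET pathname = paths.pathname
  FROM paths
 WHERE segments.id = paths.id;
```

One pass over the tree instead of 200,000 separate walks, and no per-row ActiveRecord save, which is where most of the Ruby version's time went.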
```
# PostgreSQL installed through Homebrew
# /usr/local/var/postgres/postgresql.conf
# Your file location will vary
# …
shared_buffers = 2400kB
# …
```
Changing that single line from 2400kB to 96MB let me run the migration in 30 seconds as well… I was very impressed at my stupidity. Let it be a lesson learned!
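If you want to check what your own server is using before you go hunting for the config file, you can ask the running server directly:

```sql
-- In psql: report the current value of the setting
SHOW shared_buffers;
```

Keep in mind that shared_buffers only takes effect on server start, so after editing postgresql.conf you'll need to restart PostgreSQL (e.g. through Homebrew services) before re-checking the value.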