This is a series of posts on migration from Apache and MySQL to Nginx+uwsgi and PostgreSQL. In this post, I’ll be detailing the steps we took to migrate the database from MySQL to PostgreSQL, with as little downtime as possible. Please leave a comment if you have suggestions!
One-time Pre-migration Steps
UTF-8
The default encoding on PostgreSQL is SQL_ASCII, and you probably want UTF-8. If you don’t know what you want, you want UTF-8 (trust me). The easiest way is to blow away the default cluster and re-create it (thanks, jacobian!):
sudo pg_dropcluster --stop 9.1 main
sudo pg_createcluster --start -e UTF-8 9.1 main
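If you want to double-check the new cluster before going further, psql can report the server encoding (this assumes the default postgres superuser that the cluster tools create):

```shell
# Should print UTF8 once the cluster has been re-created.
sudo -u postgres psql -tAc 'SHOW server_encoding;'
```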
Make PostgreSQL listen on all interfaces
Edit /etc/postgresql/9.1/main/postgresql.conf and ensure PostgreSQL is listening on all interfaces:
listen_addresses = '0.0.0.0'
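Changes to listen_addresses need a full restart to take effect, after which you can confirm the server is bound to all interfaces (the netstat check is just one way to verify; adjust to taste):

```shell
sudo service postgresql restart
# Expect a LISTEN entry on 0.0.0.0:5432 rather than 127.0.0.1:5432.
sudo netstat -lntp | grep 5432
```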
Allow access to PostgreSQL from the old server
Edit /etc/postgresql/9.1/main/pg_hba.conf and add an entry for the old server (where 123.123.123.123 is the IP address of the old server):
host all all 123.123.123.123/32 md5
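A reload is enough for pg_hba.conf changes, and it’s worth sanity-checking the connection from the old server before the real migration (124.124.124.124 is a placeholder for the new server’s IP, and mydb for your app’s user and database):

```shell
# On the new server: pick up the pg_hba.conf change without a restart.
sudo service postgresql reload

# On the old server: this should prompt for the password and connect.
psql -h 124.124.124.124 -U mydb mydb
```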
Install client libraries on the old server
We use SQLAlchemy for database access, so on the old server I had to apt-get install python-psycopg2.
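A quick import check confirms the driver installed cleanly before the cutover (assuming the Python 2 tooling that ships on this era of Ubuntu):

```shell
# Should print the psycopg2 version rather than an ImportError.
python -c "import psycopg2; print(psycopg2.__version__)"
```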
Creating Users and Databases
Our process is to create a user for each app and have that app’s database owned by that user. Here’s a script that automates creating the user and database:
#!/bin/bash
sudo -u postgres createuser -d -R -S $1
sudo -u postgres createdb $1 -O $1
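For example, creating the user and database for an app called mydb, then setting the password the old server will authenticate with (the script name and 'mypassword' are placeholders for your own):

```shell
./create_pg_app.sh mydb
sudo -u postgres psql -c "ALTER USER mydb WITH PASSWORD 'mypassword';"
```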
The move
Import Preparation
Create user and database on the new server with the script above. Remember to set a password for this new user.
Exporting
The most worrisome bit of the whole migration was exporting the data from MySQL and importing it into PostgreSQL. We used mysql2psql, and it didn’t give us much trouble except that floats got a little messed up. My personal recommendation is to not use real, but numeric(7,4), with the precision adjusted to what you need (this particular definition is what we use for our lat/long columns).
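As a sketch, here’s the kind of column definition this gives you (the table and column names are illustrative, not from our schema):

```shell
sudo -u mydb psql mydb <<'SQL'
-- numeric(7,4) stores up to 3 digits before and exactly 4 after the decimal
-- point, which comfortably covers -180.0000..180.0000 at this precision.
CREATE TABLE places (
    id serial PRIMARY KEY,
    lat numeric(7,4),
    lng numeric(7,4)
);
SQL
```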
First, run mysql2psql on the command line; this will create the config file.
Now edit the mysql2psql.yml file and add your appropriate entries. Here’s what ours looked like:
mysql:
  hostname: localhost
  port: 3306
  socket: /var/run/mysqld/mysqld.sock
  username: mysuperuser
  password: mypassword
  database: mydb

destination:
  # if file is given, output goes to file, else postgres
  file: mydb.sql
  postgres:
    hostname: localhost
    port: 5432
    username: mysql2psql
    password:
    database: mysql2psql_test

# if tables is given, only the listed tables will be converted.
# leave empty to convert all tables.
#tables:
#- table1
#- table2

# if exclude_tables is given, exclude the listed tables from the conversion.
#exclude_tables:
#- table3
#- table4

# if supress_data is true, only the schema definition will be
# exported/migrated, and not the data
supress_data: false

# if supress_ddl is true, only the data will be exported/imported,
# and not the schema
supress_ddl: false

# if force_truncate is true, forces a table truncate before table loading
force_truncate: false
When you run mysql2psql again, it will export the database mydb into mydb.sql. Before we did that, we removed this particular site from /etc/apache2/sites-enabled and restarted Apache; we didn’t want the SQL file to go stale as soon as it was exported. This is where the downtime starts.
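The export sequence on the old server ends up looking roughly like this (mysite is a placeholder for your Apache site name; a2dissite is just the standard shortcut for removing a site from sites-enabled):

```shell
# Take the site offline first so the dump can't go stale.
sudo a2dissite mysite
sudo service apache2 restart

# With `file: mydb.sql` set in mysql2psql.yml, this writes the dump.
mysql2psql
```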
Importing
Copy the file over to the new server and import it into PostgreSQL with psql.
sudo -u mydb psql mydb < mydb.sql
In retrospect, I should have just imported it directly with mysql2psql. I was initially hesitant because it involved creating a user that could access that machine from outside, but I later realized I needed that anyway.
Go live!
Now change the settings on the old server to use the PostgreSQL database as the backend, re-enable the site in Apache, and you’re all set to serve this site from PostgreSQL!
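For instance, if your app reads its SQLAlchemy connection string from an environment variable (a common pattern; the variable name and credentials here are placeholders, not our actual setup), the switch is just a URL change:

```shell
# Before: MySQL via MySQLdb
# DATABASE_URL="mysql://mydb:mypassword@localhost/mydb"

# After: PostgreSQL via psycopg2
DATABASE_URL="postgresql+psycopg2://mydb:mypassword@localhost/mydb"
echo "$DATABASE_URL"
```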