Migrate PostgreSQL to MySQL
Although PostgreSQL offers more advanced features than MySQL, some companies and organizations (including world-famous brands such as Uber) migrate their databases from PostgreSQL to MySQL because of the following limitations:
Writing into the database. One of the core concepts of PostgreSQL is immutable row data. Immutable rows are called "tuples", and each tuple is uniquely identified by a "ctid". The ctid literally represents the on-disk location (in other words, the physical disk offset) of a tuple. Several ctids may describe a single logical row (for instance, when multiple versions of the row exist for MVCC purposes). A table, which is a collection of tuples, may have indexes, organized as B-tree data structures, that map indexed fields to a ctid payload.
Now assume we have to update a row in a table containing personal data, for example, change the birth date of some person. Since tuples are immutable, PostgreSQL adds a new tuple with a new ctid to the table. Postgres then needs to distinguish the new tuple from the old one, so it stores a version and a pointer to the previous version for each tuple.
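The versioning scheme above can be sketched in a few lines of Python. This is a deliberately simplified model (the heap layout, field names, and helper functions are assumptions for illustration, not PostgreSQL internals): rows are never modified in place, an UPDATE appends a new tuple under a new ctid, and the new tuple keeps a pointer to the previous version.

```python
# Minimal sketch (simplified, assumed layout) of PostgreSQL-style tuple
# versioning: an UPDATE appends a new tuple rather than changing the old one.

heap = []  # the table: an append-only list of tuple versions

def insert_tuple(data):
    heap.append({"prev": None, "data": dict(data)})
    return len(heap) - 1  # the new tuple's "ctid" (its position in the heap)

def update_tuple(ctid, changes):
    new_data = dict(heap[ctid]["data"], **changes)
    heap.append({"prev": ctid, "data": new_data})  # old version stays on disk
    return len(heap) - 1  # a brand-new ctid for the same logical row

old = insert_tuple({"id": 4, "birth_date": "1970-01-01"})
new = update_tuple(old, {"birth_date": "1970-04-08"})

# Both versions now coexist; in real PostgreSQL, VACUUM eventually
# reclaims the dead tuple.
print(heap[old]["data"], heap[new]["data"])
```

Note how the logical row now has two ctids, which is exactly why every index that stores a ctid is affected by the update.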
When we make a small update to a row as described above, PostgreSQL has to issue at least three physical updates: write the new tuple to the tablespace, update the primary key index so it points to the new ctid, and update every secondary index, because all of them still reference the old ctid.
These three updates only reflect the writes made to the primary tablespace; each of them also needs to be recorded in the write-ahead log (WAL), so the total number of disk writes is even higher.
MySQL does not have this write bottleneck due to an important architectural difference from PostgreSQL. While Postgres maps index records directly to on-disk locations, secondary index records in MySQL (InnoDB) hold a pointer to the primary key value. This means that a row update in MySQL (InnoDB) only needs to update the index records over columns actually modified by that update.
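The difference in write amplification can be made concrete with a toy model (this is an illustration under stated assumptions, not engine code): in the ctid-based design every index entry must be rewritten because the row's on-disk location changed, while in the primary-key-pointer design only indexes covering a modified column are touched.

```python
# Toy model of index-write amplification for a single-column UPDATE.
# Each index is modeled as the set of column names it covers.

def postgres_style_writes(indexes, changed_columns):
    # New tuple gets a new ctid, so EVERY index must point at it:
    # 1 heap write + one write per index.
    return 1 + len(indexes)

def innodb_style_writes(indexes, changed_columns):
    # Secondary indexes store the primary key, which did not change,
    # so only indexes covering a modified column are rewritten.
    touched = [cols for cols in indexes if cols & changed_columns]
    return 1 + len(touched)

indexes = [{"id"}, {"first_name", "last_name"}, {"birth_date"}]
changed = {"birth_date"}

print(postgres_style_writes(indexes, changed))  # 4 physical writes
print(innodb_style_writes(indexes, changed))    # 2 physical writes
```

With three indexes on the table, changing one column costs four tablespace writes in the ctid model but only two in the primary-key model, and the gap widens as indexes are added.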
Replication. The PostgreSQL write issue described above naturally carries over into replication as well, because replication occurs at the level of on-disk changes. Instead of replicating a small piece of data (the birth date in the example above), PostgreSQL writes out WAL entries for all three updates described above, and all three of these entries propagate over the network. Therefore, the write problem turns into a replication problem: the volume of Postgres replication data grows extremely fast and occupies a large amount of bandwidth.
Also, Postgres replicas do not support true MVCC. Because replicas apply WAL updates, they hold a copy of the on-disk data identical to the master's at any point in time.
On the other hand, MySQL supports different replication modes:

- statement-based replication, which replicates the logical SQL statements themselves, e.g.:

UPDATE users
SET birth_date = '1970-04-08' WHERE id = 4;

- row-based replication, which replicates the changed row records;
- mixed replication, which combines the two approaches.

There are pros and cons to all these modes. For instance, statement-based replication is usually the most compact, yet it may require replicas to apply expensive statements to update small amounts of data. At the same time, row-based replication, which is similar to PostgreSQL WAL replication, is more verbose but leads to more predictable and efficient updates on the replicas.
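The size trade-off between the two modes can be sketched as follows (the payload formats here are assumptions for illustration, not actual binlog encodings): the statement event stays constant in size no matter how many rows match, while the row event grows with the number of affected rows.

```python
# Rough sketch of the payload each MySQL replication mode ships
# for the same logical change (formats are illustrative assumptions).

# Statement-based: the SQL text is sent once, however many rows it touches.
stmt = "UPDATE users SET birth_date = '1970-04-08' WHERE birth_date = '1970-01-01'"

def row_events(n):
    # Row-based: a (before, after) image pair per affected row.
    return [({"id": i, "birth_date": "1970-01-01"},
             {"id": i, "birth_date": "1970-04-08"}) for i in range(n)]

print(len(stmt))                     # statement payload: constant size
print(len(repr(row_events(1000))))  # row payload: grows with affected rows
```

The flip side, as noted above, is that each replica must re-execute the statement, which may be far more expensive than applying the pre-computed row images.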
1. CSV Files

This is the most straightforward method, requiring a lot of manual work:
pg_dump -st table_name database_name > table_schema.sql

COPY table_name TO 'filename.csv' DELIMITER ',' CSV HEADER;
It is important to validate the string representation of dates and binary data before loading the data into MySQL.

LOAD DATA LOCAL INFILE '/path/to/file/filename.csv'
INTO TABLE table_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';
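The date validation mentioned above can be done with a short pre-flight pass over the exported CSV. This is a sketch (the file name, column name, and date format are assumptions for the example); it flags values that would not parse as a MySQL DATE in ISO format:

```python
# Hedged pre-flight check: find date values in the exported CSV that MySQL's
# LOAD DATA would not accept as a DATE (column and format are assumptions).
import csv
from datetime import datetime

def invalid_dates(path, column, fmt="%Y-%m-%d"):
    """Return (line_number, value) pairs whose date column fails to parse."""
    bad = []
    with open(path, newline="") as f:
        # The COPY ... CSV HEADER export above includes a header row,
        # so data starts on physical line 2.
        for lineno, row in enumerate(csv.DictReader(f), start=2):
            try:
                datetime.strptime(row[column], fmt)
            except ValueError:
                bad.append((lineno, row[column]))
    return bad

# Usage: fix or quarantine the reported rows before running LOAD DATA, e.g.
# invalid_dates("filename.csv", "birth_date")
```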
Note that MySQL Shell includes a parallel table import utility that may be useful for rapid import of large data files into a MySQL table. It analyzes the source data file, splits it into fragments, and uploads those fragments to the target table through simultaneous connections. This approach can complete a large import many times faster than a plain LOAD DATA statement.
2. MySQL Workbench
Since version 5.2.41, MySQL Workbench includes a migration wizard that allows migrating databases from various RDBMS to MySQL via a user-friendly interface. Prerequisites:
As soon as all these requirements are fulfilled, the Postgres to MySQL database migration can be started with the following steps:
3. Commercial Tools
For those who consider the previous migration methods too complicated, it is reasonable to use dedicated commercial tools that can fully automate the database migration procedure through an intuitive graphical interface in just a few clicks. One such tool is the PostgreSQL to MySQL converter developed by Intelligent Converters. It has all the features necessary to handle large and complex database migration projects and does not require deep technical skills from its users.
Have questions? Contact us