Proposed Migration Process: MySQL to PostgreSQL/TimescaleDB (Seeking Feedback)
Hello everyone,
We recently migrated a production Traccar environment from a legacy MySQL database to PostgreSQL with the TimescaleDB extension. Our entire stack is running in Docker (using docker-compose), so the commands below reflect that containerized approach. It was quite a journey, and we hit a few roadblocks along the way. I wanted to share the process we ended up using to see if anyone else has experienced similar issues, or if @Anton Tananaev has any feedback or corrections on this approach.
We are hoping this can be refined into a solid community guide for others making the jump!
The Challenges We Faced
During our migration attempts, we encountered a few specific issues:
1. The org.traccar.tools.Copy Tool
We initially tried the official DB copy tool, but on modern Traccar versions (6.x) we hit a ClassNotFoundException. Has this tool been deprecated, or is it simply incompatible with current builds?
2. pgloader and MySQL 8 Authentication
We really wanted to use pgloader for a direct transfer, but it threw Lisp SIMPLE-ARRAY errors when connecting to MySQL 8 (likely due to caching_sha2_password). Because of this, we fell back to a standard mysqldump.
3. MySQL Binary Booleans (_binary '\0')
When mysqldump exported our boolean columns (like disabled and administrator), it emitted them as binary literals (e.g., _binary '\0'). PostgreSQL rejected these during the INSERT, so we had to heavily sanitize the SQL file before importing.
4. The UI Crash Bug (React charAt)
This was the most confusing part. After successfully importing the data, the UI crashed on the "Stops" report with a React charAt error. After a lot of debugging, we found the root cause: on PostgreSQL 10+, Traccar's schema uses identity columns (is_identity) for auto-increment ids. Because we forced the historical IDs from MySQL during the INSERT, the backing Postgres sequences never advanced and stayed at 1. New positions were being inserted with IDs 1, 2, 3..., which completely broke the chronological sorting in the Reports API and sent undefined data to the frontend React components.
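The mechanism is easy to reproduce in a scratch database. The demo table below is hypothetical, but its id column behaves analogously to Traccar's identity columns:

```sql
-- Hypothetical scratch table with an identity column
CREATE TABLE demo (
    id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    name text
);

-- Migrated rows carry their historical ids, so the backing sequence never moves
INSERT INTO demo (id, name) VALUES (500, 'migrated row');

-- A normal insert now draws from the untouched sequence and gets id = 1, not 501
INSERT INTO demo (name) VALUES ('fresh row');
```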
Our Proposed Migration Steps
Here is the step-by-step process we used to resolve these issues. We'd love to know if there's a simpler or more "official" way to handle this!
Step 1: Export Data without Schema
We exported the data from MySQL without creating tables (--no-create-info) to let Traccar generate the clean Postgres schema itself. We also ignored tc_servers.
docker exec -i traccar-db mysqldump -u traccar -p"YOUR_PASSWORD" --no-create-info --complete-insert --skip-extended-insert --compact --ignore-table=traccar.DATABASECHANGELOG --ignore-table=traccar.DATABASECHANGELOGLOCK --ignore-table=traccar.tc_keystore --ignore-table=traccar.tc_servers traccar > dump.sql
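As a cheap guard against schema statements sneaking into the dump, we grep it before moving on. The snippet below fabricates a one-line stand-in for dump.sql purely so the check is self-contained; on a real run you would point grep at the actual dump:

```shell
# Fabricated one-line stand-in for the real dump.sql (illustration only)
printf 'INSERT INTO tc_users VALUES (1);\n' > dump_sample.sql

# A --no-create-info dump should contain zero DDL statements
if grep -q 'CREATE TABLE' dump_sample.sql; then
  echo 'schema statements leaked into the dump'
else
  echo 'data-only dump, safe to sanitize'
fi
```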
Step 2: Sanitize the SQL Dump
We used sed to replace the binary boolean artifacts with standard false/true and to clean up literal 'null' strings. We also used perl to convert MySQL's escaped quotes (\") back into standard double quotes ("), which is crucial for PostgreSQL to properly parse JSON objects stored in the Traccar attributes table.
sed -i 's/_binary \x27\\0\x27/false/g' dump.sql
sed -i 's/_binary \x27\\1\x27/true/g' dump.sql
sed -i 's/_binary \x27\x00\x27/false/g' dump.sql
sed -i 's/_binary \x27\x01\x27/true/g' dump.sql
sed -i 's/`//g' dump.sql
sed -i "s/'null'/NULL/g" dump.sql
perl -pi -e 's/\\"/"/g' dump.sql
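Before running the substitutions against the real dump, we dry-ran them on a fabricated sample line to confirm the escaping behaves as intended (the table name and values below are made up; only the _binary and \" artifacts matter):

```shell
# Fabricated sample line carrying the two artifacts we need to scrub
printf '%s\n' "INSERT INTO tc_users (name, disabled, attributes) VALUES ('Bob', _binary '\0', '{\\\"x\\\":1}');" > sample.sql

# Same substitutions as above, applied to the sample
sed -i "s/_binary '\\\\0'/false/g" sample.sql
perl -pi -e 's/\\"/"/g' sample.sql

# The line should now read:
# INSERT INTO tc_users (name, disabled, attributes) VALUES ('Bob', false, '{"x":1}');
cat sample.sql
```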
Step 3: Initialize the Postgres Schema
We started up a clean PostgreSQL database and let Traccar boot up once. As soon as Liquibase finished (Liquibase: Update has been successful), we stopped Traccar.
Step 4: Truncate Default Generated Data
Traccar generated a default admin user, so we cleared it out to prepare for our dump.
docker exec -i traccar-timescaledb psql -U traccar -d traccar -c "
DO \$\$ DECLARE r RECORD;
BEGIN
FOR r IN (SELECT tablename FROM pg_tables WHERE schemaname = 'public' AND tablename NOT LIKE 'databasechangelog%' AND tablename != 'tc_servers') LOOP
EXECUTE 'TRUNCATE TABLE ' || quote_ident(r.tablename) || ' CASCADE;';
END LOOP;
END \$\$;"
Step 5: Inject the Dump
We injected the SQL dump while setting `session_replication_role = 'replica'` so that foreign key constraints didn't block the out-of-order inserts.
```bash
cat << 'EOF' > full_import.sql
SET session_replication_role = 'replica';
BEGIN;
\i /workspace/dump.sql
COMMIT;
SET session_replication_role = 'origin';
EOF
docker run --rm -v $(pwd):/workspace -e PGPASSWORD="YOUR_PASSWORD" --network traccar-network timescale/timescaledb:latest-pg16 psql -h traccar-timescaledb -U traccar -d traccar -f /workspace/full_import.sql
```
Step 6: Sequence Resynchronization (Crucial Fix)
To fix the charAt UI crashes, we had to resynchronize the is_identity sequences to the new MAX(id).
The best part of this script is that it is fully dynamic: it queries the PostgreSQL information_schema to automatically find and fix every table in the Traccar database that has an auto-increment column (positions, events, users, geofences, etc.), so you don't need to list the tables manually.
docker exec -i traccar-timescaledb psql -U traccar -d traccar -c "
DO \$\$
DECLARE
r RECORD;
BEGIN
FOR r IN (
SELECT c.table_name, c.column_name,
pg_get_serial_sequence(c.table_name, c.column_name) AS seq_name
FROM information_schema.columns c
WHERE c.table_schema = 'public' AND c.is_identity = 'YES'
) LOOP
IF r.seq_name IS NOT NULL THEN
EXECUTE format(
'SELECT setval(%L, COALESCE((SELECT MAX(%I) FROM %I), 1))',
r.seq_name, r.column_name, r.table_name
);
END IF;
END LOOP;
END \$\$;
ANALYZE;
"
Step 7: Convert to TimescaleDB Hypertables
Finally, we converted tc_positions and tc_events into Hypertables with a 1 day chunk interval and added compression policies.
docker exec -it traccar-timescaledb psql -U traccar -d traccar
Then execute:
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
ALTER TABLE tc_positions DROP CONSTRAINT IF EXISTS tc_positions_pkey;
ALTER TABLE tc_positions ADD PRIMARY KEY (id, fixtime);
SELECT create_hypertable('tc_positions', 'fixtime', chunk_time_interval => INTERVAL '1 day', migrate_data => true);
ALTER TABLE tc_events DROP CONSTRAINT IF EXISTS tc_events_pkey;
ALTER TABLE tc_events ADD PRIMARY KEY (id, servertime);
SELECT create_hypertable('tc_events', 'servertime', chunk_time_interval => INTERVAL '1 day', migrate_data => true);
ALTER TABLE tc_positions SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'deviceid',
timescaledb.compress_orderby = 'fixtime DESC'
);
ALTER TABLE tc_events SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'deviceid',
timescaledb.compress_orderby = 'servertime DESC'
);
SELECT add_compression_policy('tc_positions', INTERVAL '2 months');
SELECT add_compression_policy('tc_events', INTERVAL '2 months');
\q
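If you want to double-check the conversion before going back into production, TimescaleDB 2.x ships information views you can query (reconnect with the same psql command as above):

```sql
-- Should list tc_positions and tc_events
SELECT hypertable_name FROM timescaledb_information.hypertables;

-- Should show one policy_compression job per hypertable
SELECT hypertable_name, schedule_interval
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';
```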
Is there anything we missed, or could this process be simplified? Specifically, is there a recommended alternative to scrubbing mysqldump outputs for the boolean inserts?
Looking forward to your thoughts!