I think pgloader is the best option, and it's what most people recommend. If auth is the only issue, it's probably easier to just fix that by using mysql_native_password just for the migration. Have you tried that?
Hi Anton, thanks for the feedback!
You are completely right, temporarily downgrading the Auth to mysql_native_password is definitely the easiest way to bypass the pgloader connection error on MySQL 8.
To be honest, the main reason we ended up choosing the manual mysqldump route was to ensure absolute Liquibase Schema Integrity. When using pgloader, it handles the schema translation on the fly. We wanted to avoid any risk of creating a slightly mismatched PostgreSQL schema (like missing specific indexes, incorrect data types, or having different constraints) that might break future Traccar updates. By letting the native Traccar Liquibase engine create the pristine Postgres schema first (our Step 3) and then injecting the raw data, we felt it was the safest bet for long-term stability.
Also, there's another major factor we encountered: the Sequence Synchronization bug (Step 6). Even when migration tools successfully port the data, they frequently fail to correctly update the is_identity sequences on modern PostgreSQL 10+ databases. If those sequences stay at 1, the backend starts inserting new records out of chronological order compared to the historic MySQL data. This is exactly what caused our UI to completely crash on the Stops Report (throwing a React .charAt() error) until we ran the dynamic setval script to resynchronize the database.
For context, the migrated instance using the mysqldump and scrubbing method described above has been running in production validation for 3-4 days now, without a single bug or missing data issue.
Why this post?
When I started this migration, I couldn't really find a centralized, up-to-date process or official guide for moving from MySQL to TimescaleDB for modern Traccar versions. The goal of this thread is exactly to start this conversation, share workarounds, and gather feedback from people like you and the community.
Hopefully, by pooling everyone's experience (whether using pgloader with auth fixes or mysqldump with scrubbing), we can establish a definitive, go-to migration process for everyone!
I'd love to hear whether anyone who used pgloader successfully avoided the Postgres 10+ identity sequence reset issue out of the box.
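For anyone who wants to check whether their own instance is affected, here's a quick sketch (assuming PostgreSQL 10+, where the `pg_sequences` view is available):

```sql
-- List current sequence positions for the Traccar tables.
-- last_value is NULL if the sequence has never been used.
SELECT sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'public' AND sequencename LIKE 'tc_%';
```

If `last_value` is 1 (or NULL) while the corresponding table already holds historical rows, the reset issue applies.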
Definitely thanks for sharing. We actually might be interested in migrating some of our old servers from MySQL to TimescaleDB.
+1 interest
+1 interest
Can I ask why you went down the PostgreSQL + TimescaleDB path? We are about to create a Traccar instance using the same configuration for our large mining customers, and I would like to know the rationale behind your choice.
+1 interest
TimescaleDB is one of the most popular time series databases, so it's a good option for historical data, like positions and events.
PostgreSQL is a good option for storing regular relational data, like devices, users, etc.
Together they make one of the best options available, and the combination is very compatible with the existing Traccar infrastructure.
We are deploying this combo for most of our new servers and so far haven't had any issues. A nice benefit is that you can set the data retention period automatically without any custom scripts, and truncation is super fast.
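For reference, the retention mentioned here is a one-liner in TimescaleDB 2.x (the 12-month interval below is just an example, not a recommendation):

```sql
-- Automatically drop chunks older than the chosen interval.
SELECT add_retention_policy('tc_positions', INTERVAL '12 months');
SELECT add_retention_policy('tc_events', INTERVAL '12 months');
```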
Hello everyone,
Just a quick update for anyone following this migration process!
During another migration from MySQL 8.4 to PostgreSQL, I encountered a silent error that abruptly stopped the data import during Step 5 (leaving tables like tc_devices completely empty).
The issue:
It turns out that MySQL exports geographic locations or names containing apostrophes as escaped single quotes (e.g., 'Saint-Andre-L\'Allier'). PostgreSQL strictly refuses this \' syntax and expects doubled single quotes instead (e.g., 'Saint-Andre-L''Allier'). This single character caused the PostgreSQL import to fail silently for the rest of the file.
The Fix:
I have updated Step 2 (Data Sanitization) to include an additional perl command that safely replaces all escaped single quotes with doubled single quotes across the entire dump file.
If you are using this playbook, please make sure your Step 2 looks like this:
```bash
sed -i 's/_binary \x27\\0\x27/false/g' dump.sql
sed -i 's/_binary \x27\\1\x27/true/g' dump.sql
sed -i 's/_binary \x27\x00\x27/false/g' dump.sql
sed -i 's/_binary \x27\x01\x27/true/g' dump.sql
sed -i 's/`//g' dump.sql
sed -i "s/'null'/NULL/g" dump.sql
# Fix escaped double quotes for JSON arrays
perl -pi -e 's/\\"/"/g' dump.sql
# NEW: Fix escaped single quotes (convert \' to '') for PostgreSQL compatibility
perl -pi -e "s/\\\'/''/g" dump.sql
```
Hope this saves someone a headache!
Proposed Migration Process: MySQL to PostgreSQL/TimescaleDB (Seeking Feedback)
Hello everyone,
We recently migrated a production Traccar environment from a legacy MySQL database to PostgreSQL with the TimescaleDB extension. Our entire stack is running in Docker (using docker-compose), so the commands below reflect that containerized approach. It was quite a journey, and we hit a few roadblocks along the way. I wanted to share the process we ended up using to see if anyone else has experienced similar issues, or if @Anton Tananaev has any feedback or corrections on this approach.
We are hoping this can be refined into a solid community guide for others making the jump!
The Challenges We Faced
During our migration attempts, we encountered a few specific issues:
1. The `org.traccar.tools.CopyTool`
We initially tried using the official DB copy tool, but on modern Traccar versions (6.x) we hit a `ClassNotFoundException`. It seems this tool might have been deprecated or is incompatible with the new TimescaleDB architecture?

2. `pgloader` and MySQL 8 Authentication
We really wanted to use `pgloader` for a direct transfer, but it threw Lisp `SIMPLE-ARRAY` errors when connecting to MySQL 8 (likely due to `caching_sha2_password`). Because of this, we fell back to a standard `mysqldump`.

3. MySQL Binary Booleans (`_binary '\0'`)
When `mysqldump` exported our boolean columns (like `disabled` and `administrator`), it output them as binary strings (e.g., `_binary '\0'`). PostgreSQL rejected these during the `INSERT`, so we had to heavily sanitize the SQL file before importing.

4. The UI Crash Bug (React `charAt`)
This was the most confusing part. After successfully importing the data, the UI crashed on the "Stops" report with a React `charAt` error. After a lot of debugging, we found the root cause: PostgreSQL 10+ uses `is_identity` for auto-increment columns. Because we forced the historical IDs from MySQL during the `INSERT`, the Postgres sequences didn't update and stayed at 1. New positions were being inserted with IDs 1, 2, 3..., which completely broke the chronological sorting in the Reports API, sending `undefined` data to the frontend React components.

Our Proposed Migration Steps
Here is the step-by-step process we used to resolve these issues. We'd love to know if there's a simpler or more "official" way to handle this!
Step 1: Export Data without Schema
We exported the data from MySQL without creating tables (`--no-create-info`) to let Traccar generate the clean Postgres schema itself. We also ignored `tc_servers`.

```bash
docker exec -i traccar-db mysqldump -u traccar -p"YOUR_PASSWORD" \
  --no-create-info --complete-insert --skip-extended-insert --compact \
  --ignore-table=traccar.DATABASECHANGELOG \
  --ignore-table=traccar.DATABASECHANGELOGLOCK \
  --ignore-table=traccar.tc_keystore \
  --ignore-table=traccar.tc_servers \
  traccar > dump.sql
```

Step 2: Sanitize the SQL Dump
We used `sed` to replace the binary boolean artifacts with standard `false`/`true` and to clean up literal `'null'` strings. We also used `perl` to convert MySQL's escaped quotes (`\"`) back into standard double quotes (`"`), which is crucial for PostgreSQL to properly parse the JSON objects stored in the Traccar attributes columns.

```bash
sed -i 's/_binary \x27\\0\x27/false/g' dump.sql
sed -i 's/_binary \x27\\1\x27/true/g' dump.sql
sed -i 's/_binary \x27\x00\x27/false/g' dump.sql
sed -i 's/_binary \x27\x01\x27/true/g' dump.sql
sed -i 's/`//g' dump.sql
sed -i "s/'null'/NULL/g" dump.sql
perl -pi -e 's/\\"/"/g' dump.sql
```

Step 3: Initialize the Postgres Schema
We started up a clean PostgreSQL database and let Traccar boot once. As soon as Liquibase finished (`Liquibase: Update has been successful` in the logs), we stopped Traccar.

Step 4: Truncate Default Generated Data
Traccar generated a default admin user, so we cleared it out to prepare for our dump.

```bash
docker exec -i traccar-timescaledb psql -U traccar -d traccar -c "
DO \$\$
DECLARE
  r RECORD;
BEGIN
  FOR r IN (SELECT tablename FROM pg_tables
            WHERE schemaname = 'public'
              AND tablename NOT LIKE 'databasechangelog%'
              AND tablename != 'tc_servers') LOOP
    EXECUTE 'TRUNCATE TABLE ' || quote_ident(r.tablename) || ' CASCADE;';
  END LOOP;
END \$\$;"
```

Step 5: Inject the Dump

We injected the SQL dump while setting `session_replication_role = 'replica'` so that foreign key constraints didn't block the out-of-order inserts.

```bash
cat << 'EOF' > full_import.sql
SET session_replication_role = 'replica';
BEGIN;
\i /workspace/dump.sql
COMMIT;
SET session_replication_role = 'origin';
EOF

docker run --rm -v $(pwd):/workspace -e PGPASSWORD="YOUR_PASSWORD" \
  --network traccar-network timescale/timescaledb:latest-pg16 \
  psql -h traccar-timescaledb -U traccar -d traccar -f /workspace/full_import.sql
```

Step 6: Sequence Resynchronization (Crucial Fix)
To fix the `charAt` UI crashes, we had to resynchronize the `is_identity` sequences to the new `MAX(id)`.

The best part of this script is that it is fully dynamic: it queries the PostgreSQL `information_schema` to automatically find and fix every table in the Traccar database that has an auto-increment column (positions, events, users, geofences, etc.). You don't need to list the tables manually.

```bash
docker exec -i traccar-timescaledb psql -U traccar -d traccar -c "
DO \$\$
DECLARE
  r RECORD;
BEGIN
  FOR r IN (
    SELECT c.table_name, c.column_name,
           pg_get_serial_sequence(c.table_name, c.column_name) AS seq_name
    FROM information_schema.columns c
    WHERE c.table_schema = 'public'
      AND c.is_identity = 'YES'
  ) LOOP
    IF r.seq_name IS NOT NULL THEN
      EXECUTE format(
        'SELECT setval(%L, COALESCE((SELECT MAX(%I) FROM %I), 1))',
        r.seq_name, r.column_name, r.table_name
      );
    END IF;
  END LOOP;
END \$\$;
ANALYZE;
"
```

Step 7: Convert to TimescaleDB Hypertables
Finally, we converted `tc_positions` and `tc_events` into hypertables with a 1-day chunk interval and added compression policies.

```bash
docker exec -it traccar-timescaledb psql -U traccar -d traccar
```

Then execute:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

ALTER TABLE tc_positions DROP CONSTRAINT IF EXISTS tc_positions_pkey;
ALTER TABLE tc_positions ADD PRIMARY KEY (id, fixtime);
SELECT create_hypertable('tc_positions', 'fixtime', chunk_time_interval => INTERVAL '1 days', migrate_data => true);

ALTER TABLE tc_events DROP CONSTRAINT IF EXISTS tc_events_pkey;
ALTER TABLE tc_events ADD PRIMARY KEY (id, servertime);
SELECT create_hypertable('tc_events', 'servertime', chunk_time_interval => INTERVAL '1 days', migrate_data => true);

ALTER TABLE tc_positions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'deviceid',
  timescaledb.compress_orderby = 'fixtime DESC'
);
ALTER TABLE tc_events SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'deviceid',
  timescaledb.compress_orderby = 'servertime DESC'
);

SELECT add_compression_policy('tc_positions', INTERVAL '2 months');
SELECT add_compression_policy('tc_events', INTERVAL '2 months');

\q
```

Is there anything we missed, or could this process be simplified? Specifically, is there a recommended alternative to scrubbing `mysqldump` output for the boolean inserts?

Looking forward to your thoughts!
mysqldumpoutputs for the boolean inserts?Looking forward to your thoughts!