Traccar Postgres to MySQL migration

Hello Anton,

First of all, I'd like to take this chance to congratulate you on your success. You've done a tremendous job with Traccar, and it's a very big thing that you've made it open source.

Coming to the point: I've been using Traccar for over six months, and during installation I chose Postgres as the database. Now we're experiencing significant lag when loading historical data, as our database has grown to over 60 GB. I read in the forum that you recommend MySQL over other databases and that it's around 5x faster, so I decided to migrate our current database to MySQL. I've managed to back up the data and tried a couple of tools to convert it to MySQL, but unfortunately it didn't work. I didn't see any topic about migrating from Postgres to MySQL in this forum (maybe I've missed it), so I thought I'd ask here. Is there any recommended method/tool I can use to do this? Any information will be appreciated.

Thanks again.

Anton Tananaev · 3 years ago

Postgres should work just fine. You need to analyze why performance degraded.

Thanks for your reply, Anton.

I've found software to do the conversion, and it's now running.

If the lag in my case isn't caused by the database itself, what else do you think it could be?

I'm running Traccar on an 8-core, 16 GB VPS. I have around 100 devices online, each sending data every 5 seconds while moving. I had to set the message interval to 5 seconds because we wanted the option to count the number of tolls for each car. Also, the distance setting is around 150 meters. Do you think that could be the problem?
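
To put a number on it, if every device were moving the whole time, that rate works out to:

100 devices × (86,400 s / 5 s) = 1,728,000 positions per day

so the table size adds up quickly even with a modest fleet.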

Thanks in advance, and please pardon the novice question.

Anton Tananaev · 3 years ago

You have to check execution plans for SQL queries to see why they are slow.
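
For example, prefixing the slow statement with EXPLAIN (ANALYZE, BUFFERS) makes Postgres print the actual plan with row counts and timings. A generic sketch (the device id and dates are placeholders; run it on whatever query is actually slow):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM tc_positions
WHERE deviceid = 123
  AND fixtime BETWEEN '2021-01-01' AND '2021-02-01';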

select * from tc_positions where fixtime > now() - interval '30 days' took 52 s 12 ms to execute.
I'll do a full execution plan and update here; I hope I can fix it.
Below is the EXPLAIN result:

Seq Scan on public.tc_positions  (cost=0.00..8419701.24 rows=15265164 width=516)
  Output: id, protocol, deviceid, servertime, devicetime, fixtime, valid, latitude, longitude, altitude, speed, course, address, attributes, accuracy, network
  Filter: (tc_positions.fixtime > (now() - '30 days'::interval))
JIT:
  Functions: 2
  Options: Inlining true, Optimization true, Expressions true, Deforming true

Anton Tananaev · 3 years ago

That is not a type of request Traccar would send. You should check actual requests.

I've been checking the reports/summary source code. In our case, we hit the issue mainly when we call the reports/summary API.

private static Collection<SummaryReport> calculateSummaryResults(
            long userId, long deviceId, Date from, Date to, boolean daily) throws SQLException {

        // Fetches every position for the device in the requested period in a single
        // query and materializes the whole result set in memory
        ArrayList<Position> positions = new ArrayList<>(Context.getDataManager().getPositions(deviceId, from, to));

        ArrayList<SummaryReport> results = new ArrayList<>();
        if (daily && !positions.isEmpty()) {
            // Walk the positions in order and cut a new summary whenever the day changes
            int startIndex = 0;
            int startDay = getDay(userId, positions.iterator().next().getFixTime());
            for (int i = 0; i < positions.size(); i++) {
                int currentDay = getDay(userId, positions.get(i).getFixTime());
                if (currentDay != startDay) {
                    results.add(calculateSummaryResult(deviceId, positions.subList(startIndex, i)));
                    startIndex = i;
                    startDay = currentDay;
                }
            }
            // Last (possibly partial) day
            results.add(calculateSummaryResult(deviceId, positions.subList(startIndex, positions.size())));
        } else {
            // Single summary covering the whole requested period
            results.add(calculateSummaryResult(deviceId, positions));
        }

        return results;
    }

The execution of this function takes a very long time to finish.
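
From what I can tell, the getPositions call at the top boils down to a single range query, roughly like this (paraphrased; deviceid and fixtime come from the standard tc_positions schema):

SELECT * FROM tc_positions
WHERE deviceid = :deviceId
  AND fixtime BETWEEN :from AND :to
ORDER BY fixtime;

So the whole result set for the period is loaded into memory before any summaries are calculated, and without a usable index the table is scanned sequentially on every report call.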

Tony Shelver · 2 years ago

Have you tried tuning your Postgres DB?
60GB isn't that big as far as Postgres goes.
Much bigger datasets, such as OpenStreetMap, run fine on Postgres.

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

@Tony Shelver Thanks for the reply. I've tuned my DB as far as seems reasonable given my server capacity.
I've read in this forum that indexing fixtime in tc_positions will improve performance. I'm not very experienced with databases; I'd like to try this, but I'm still researching how to do it.
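
From what I've found so far, it would be something like the statement below. This is only a sketch I haven't applied yet: the index name is my own choice, and the column pair assumes the device/time lookup in the report code above.

CREATE INDEX CONCURRENTLY position_deviceid_fixtime
    ON tc_positions (deviceid, fixtime);

CONCURRENTLY should let Postgres build the index without blocking the devices that are still writing positions, which matters on a live 60 GB table.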

max_connections = 100
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 300
work_mem = 10485kB
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_parallel_maintenance_workers = 4

The above is my postgresql.conf.
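
In case anyone repeats this: most of these values can also be applied without editing the file, using standard Postgres administration commands, e.g.:

ALTER SYSTEM SET work_mem = '10485kB';
SELECT pg_reload_conf();

Note that shared_buffers, max_connections, wal_buffers and max_worker_processes only take effect after a full restart, not a reload.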

Tony Shelver · 2 years ago

I've read in this forum that indexing fixtime in tc_positions will improve performance. I'm not very experienced with databases; I'd like to try this, but I'm still researching how to do it.

That was one of the issues I found on an earlier build of Traccar, but on the build I'm on now it seems to have been sorted out.
It's still useful to have a look at your common queries and then evaluate the available indexing.
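
For example, a standard catalog query (nothing Traccar-specific) lists what indexes already exist on the positions table:

SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'tc_positions';

If the deviceid/fixtime combination used by the report queries isn't covered, that's the first candidate to add.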

@Tony Shelver Sorry for the late reply, and thanks for your response. I've migrated to a new server with much better capacity, and the performance seems to have improved drastically, as expected. Until I get around to adding a large number of devices, I think the current performance will be fine for me. Still looking forward to any performance-improvement tweaks.