We have many servers running version 5.6 and we don't have any issues like that. Are you using the default configuration? Have you checked the logs for warnings?
Hello Anton, thank you for responding.
I use the default configuration, with the exception of filter.zero set to true and web.path pointing to the legacy web interface.
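For reference, these are the only two overrides in my conf/traccar.xml (the web.path value below is just an illustration of how I point it at the legacy web interface; the actual path differs per installation):

<entry key='filter.zero'>true</entry>
<entry key='web.path'>/opt/traccar/legacy</entry> <!-- example path, yours will differ -->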
After comparing around 1 million log lines to look for obvious differences, I can say that at some point the server starts sending 01 to the devices and only receives short answers like 000f333539363332313031323336323532; there are no log entries for decoded locations anymore.
Then many devices try to reconnect multiple times.
The first concrete error messages are (IPs have been replaced):
2023-03-13 16:04:55 WARN: handleException /app.min.js java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 71895/30000 ms
2023-03-13 16:04:25 INFO: [Te0183fd7: teltonika > 2.0.0.117] 01
2023-03-13 16:03:31 WARN: Failed to send content - Idle timeout expired: 49784/30000 ms - TimeoutException (...)
2023-03-13 16:02:22 WARN: Failed to send content - Idle timeout expired: 102765/30000 ms - TimeoutException (...)
2023-03-13 16:00:24 INFO: [T48beeb09: teltonika > 2.0.0.168] 01
2023-03-13 16:00:29 WARN: HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=46s773ms53µs730ns).
2023-03-13 16:00:10 INFO: [T0c80e059: teltonika > 2.0.0.70] 01
2023-03-13 15:59:47 INFO: [T265e24d3: teltonika > 2.0.0.151] 01
2023-03-13 16:13:14 INFO: [Te4c16ed6: teltonika < 2.0.0.181] 000f333539363333313033353030333135
2023-03-13 16:13:09 WARN: HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=12m52s361ms450µs35ns).
2023-03-13 16:12:53 INFO: [T1bd51511] error - HikariPool-1 - Connection is not available, request timed out after 30284ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:72 < Storage:49 < ...)
2023-03-13 16:12:23 WARN: /api/devices - Java heap space - OutOfMemoryError
2023-03-13 16:12:20 WARN: /api/devices - Java heap space - OutOfMemoryError
2023-03-13 16:11:51 INFO: [Tb4d41f1e] error - Java heap space - OutOfMemoryError
2023-03-13 16:11:41 WARN: /api/positions - Java heap space - OutOfMemoryError
2023-03-13 16:11:54 WARN: /api/socket - Java heap space - OutOfMemoryError
2023-03-13 16:10:48 ERROR: An exception has been thrown from an exception mapper class org.traccar.api.ResourceErrorHandler. - Java heap space - OutOfMemoryError
2023-03-13 16:10:24 WARN: Failed to register an accepted channel: [id: 0x1c74a202, L:0.0.0.0/0.0.0.0:5027] - Java heap space - OutOfMemoryError
2023-03-13 16:09:43 WARN: /api/socket - Java heap space - OutOfMemoryError
2023-03-13 16:09:46 ERROR: Thread exception - Java heap space - OutOfMemoryError
2023-03-13 16:09:46 WARN: Accept Failure - Java heap space - OutOfMemoryError
2023-03-13 16:09:44 WARN: /api/socket - Java heap space - OutOfMemoryError
2023-03-13 16:06:57 INFO: [T8f2c0909: teltonika > 2.0.0.14] 01
2023-03-13 16:16:54 WARN: unhandled due to prior sendError - Java heap space - OutOfMemoryError
2023-03-13 16:16:46 WARN: /api/positions - Java heap space - OutOfMemoryError
2023-03-13 16:16:46 WARN: /api/devices - Java heap space - OutOfMemoryError
2023-03-13 16:16:11 ERROR: An exception was not mapped due to exception mapper failure. The HTTP 500 response will be returned. - InterruptedException (... < QueryBuilder:67 < *:140 < DatabaseStorage:72 < PositionUtil:68 < ...)
2023-03-13 16:15:35 WARN: unhandled due to prior sendError - Java heap space - OutOfMemoryError
2023-03-13 16:15:15 INFO: [Tc9800776: teltonika < 2.0.0.247] 000f333537303733323933383638343136
2023-03-13 16:15:01 WARN: unhandled due to prior sendError - Java heap space - OutOfMemoryError
2023-03-13 16:14:43 WARN: unhandled due to prior sendError - Java heap space - OutOfMemoryError
2023-03-13 16:14:36 WARN: HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=1m27s185ms369µs411ns).
2023-03-13 16:14:31 WARN: Find device error - HikariPool-1 - Connection is not available, request timed out after 31122ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:72 < Storage:49 < ...)
2023-03-13 16:14:26 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 31122ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:30 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 37754ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:30 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 31122ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:20 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 32721ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:20 INFO: [T132ac983: teltonika < 2.247.251.54] 000f333532363235363931363334393039
2023-03-13 16:14:25 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 32720ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:25 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 32719ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:25 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 32719ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:25 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 37480ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:13 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30984ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:10 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30982ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:17 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 31054ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:17 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 31054ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:10 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30982ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:08 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30589ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:14:03 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30295ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:13:58 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30123ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:13:58 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30123ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
2023-03-13 16:13:58 WARN: Update device status error - HikariPool-1 - Connection is not available, request timed out after 30123ms. - SQLTransientConnectionException (... < QueryBuilder:67 < *:140 < DatabaseStorage:110 < ConnectionManager:266 < ...)
The MySQL log shows these errors around the same time (the logs use a different timezone):
2023-03-13T17:15:47.524504Z 4855 [Note] Got timeout reading communication packets
2023-03-13T17:17:53.807867Z 4852 [Note] Aborted connection 4852 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.808608Z 4851 [Note] Aborted connection 4851 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.808617Z 4853 [Note] Aborted connection 4853 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.808740Z 4849 [Note] Aborted connection 4849 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.808878Z 4848 [Note] Aborted connection 4848 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.822300Z 4854 [Note] Aborted connection 4854 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.825113Z 4838 [Note] Aborted connection 4838 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.857054Z 4857 [Note] Aborted connection 4857 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
2023-03-13T17:17:53.857745Z 4856 [Note] Aborted connection 4856 to db: 'traccar' user: 'traccar' host: '172.31.0.3' (Got an error reading communication packets)
That would indicate that max_allowed_packet could be too low on the MySQL server, but at 16 MB it should be high enough.
So I was thinking it is more of a wait_timeout problem, with the Traccar server not responding in time.
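For what it's worth, this is how I checked both values on the MySQL side (plain MySQL commands, nothing Traccar-specific):

-- packet size limit (bytes) and idle-connection timeout (seconds)
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'wait_timeout';
-- counters that grow when the server drops client connections
SHOW GLOBAL STATUS LIKE 'Aborted_c%';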
Is there anything else I can look for?
Maybe your MySQL database or the connection to the database is too slow. Newer versions query data from the database more frequently than before.
Hey, did you get this resolved?
I am getting the same Java heap space - OutOfMemoryError, and on the server side everything looks OK.
Restarting the server fixes the issue; however, I'd like to fix the root cause rather than rebooting the server each time.
I forgot to mention: this is Traccar 5.6, with no changes made to the original code.
Sadly I could not resolve this.
I could rule out that the MySQL server caused this problem (why should it, when Traccar 5.2 runs fine and 5.6 does not?).
There isn't much load on the MySQL server, and that would also not explain why the memory usage of Traccar keeps creeping up until it consumes everything it can obtain.
I resorted to restarting Traccar every 7 days. Additionally, it can now use the 32 GB of RAM, as I mentioned above.
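The weekly restart is nothing fancy, just a cron entry on the Docker host (the container name traccar is simply what we called it):

# restart the Traccar container every Sunday at 04:00
0 4 * * 0 docker restart traccar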
We tested this, and there are no missing entries in the database caused by restarting Traccar, which in turn means that every entry is sent correctly to the MySQL server to be stored; the explanation that the memory leak is caused by a slow MySQL connection is therefore not plausible to us.
Did you try upgrading to version 5.8?
I will try that later this week if time allows; hopefully the issue gets solved.
In my case, with 2 GB of RAM allocated, the server needs one or two restarts per day, otherwise it runs into the Java heap space - OutOfMemoryError.
For example, the Traccar Java process has been running for about 4 hours and its memory usage has been slowly creeping up, reaching 20% as of this moment.
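In case it is useful, I'm watching the heap like this (assuming a JDK with the standard tools is installed on the host; the pgrep pattern is simply how the process shows up here):

# sample JVM heap/GC utilization every 10000 ms
jstat -gcutil $(pgrep -f tracker-server.jar) 10000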
Can you share your MySQL connection status? I am using this command: show status like '%onn%';
There is a huge amount of Aborted_connects, and I am not sure whether that is due to the Java memory issue or whether the aborted connections are causing the issue:
+--------------------------------+--------+
| Variable_name                  | Value  |
+--------------------------------+--------+
| Aborted_connects               | 20614  |
| Connection_errors_peer_address | 515    |
| Connections                    | 302094 |
| Max_used_connections           | 52     |
| Threads_connected              | 12     |
+--------------------------------+--------+
We have now doubled innodb_buffer_pool_size and innodb_log_file_size and will keep monitoring.
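Concretely, the change is just a my.cnf edit along these lines (the absolute values are examples for our server, not recommendations):

# my.cnf excerpt - both values doubled from their previous settings
innodb_buffer_pool_size = 2G
# note: changing innodb_log_file_size requires a MySQL restart
innodb_log_file_size = 512M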
Hi, upgrading to 5.8 solved the memory issue. It appears that the JEXL library used for computed attributes had a memory leak, which was fixed in version 5.7.
FYI, the memory usage still increases, but much more slowly than before. It's at about 23% after running for 14 hours. I will keep monitoring to see whether it stops at some point or continues to increase.
Hello there,
we recently upgraded from Traccar 5.2 to 5.6, and somehow the server is consuming all the memory.
With 5.2 we allocated 2 GB to Traccar via the Java arguments, which the server never fully used while managing our 250 objects.
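By "Java arguments" I mean the standard heap flags on the start command (the paths shown are just how our installation looks):

java -Xms512m -Xmx2g -jar tracker-server.jar conf/traccar.xml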
After the update we noticed that the server used more and more memory, became unresponsive after 1-2 days, and had to be restarted.
We thought that maybe the overall memory usage was a bit higher now and that 5.2 had been at its limit before, so we gave the process 4 GB. That only helped to keep the server running for around 3-5 days; then we were back to the out-of-memory problem.
Finally we gave the process 32 GB of RAM, which should be plenty for the 250 objects, but the server just consumed it all and stopped responding after around 14-16 days. For your information, we are running inside the official Docker container. Docker reported a RAM utilization of 40 GB before the out-of-memory messages started to appear.
Is there any known bug that eats all available memory? Is there any way I can track this down?
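If it helps with tracking this down, we could also capture a heap dump on the next crash using the standard JVM flags (the dump path is only an example) and analyze it afterwards, e.g. in Eclipse MAT:

# write a .hprof heap dump when the JVM hits OutOfMemoryError
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/traccar/logs \
     -Xmx4g -jar tracker-server.jar conf/traccar.xml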