Server Instability on EC2 t2.micro

maxw3ll · 6 years ago

I am running Traccar 3.14, MySQL, and a Node.js service on an EC2 t2.micro with 1 GB of RAM.

About once a month the Traccar service shuts down and I have to restart it. I suspect this is due to the limited RAM, so I will move to a larger instance.
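One way to confirm the memory theory would be to check the kernel log for OOM-killer events. A minimal sketch, assuming Python 3 is installed and dmesg output is readable on the instance:

#!/usr/bin/env python3
# Scan kernel messages for signs that the OOM killer terminated a process (e.g. the Traccar JVM).
import subprocess

messages = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in messages.splitlines():
    # Typical kernel wording is "Out of memory: Kill process ..." / "Killed process ...".
    if "out of memory" in line.lower() or "killed process" in line.lower():
        print(line)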

Just to make sure this is the cause, I would like to share the entries from tracker-server.log and wrapper.log right before the service terminates:

WARNING|23450/0|Service traccar|18-02-14 14:54:56|ping between java application and wrapper timed out. if this this is due to server overload consider increasing wrapper.ping.timeout

2018-02-14 14:59:18  WARN: HikariPool-1 - Connection is not available, request timed out after 436106ms. - SQLTransientConnectionException (... < QueryBuilder:56 < *:132 < DataManager:310 < DeviceManager:200 < ...)

INFO|23450/0|Service traccar|18-02-14 14:59:19|[HikariPool-1 housekeeper] WARN com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=1m10s224ms757µs657ns).


WARNING|8045/0|Service traccar|18-02-03 02:02:31|ping between java application and wrapper timed out. if this this is due to server overload consider increasing wrapper.ping.timeout
INFO|8045/0|Service traccar|18-02-03 02:02:37|[HikariPool-1 housekeeper] WARN com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=52s24ms450µs57ns).

2018-02-03 02:08:22  WARN: HikariPool-1 - Connection is not available, request timed out after 289400ms. - SQLTransientConnectionException (... < QueryBuilder:56 < *:137 < DataManager:453 < NotificationManager:53 < ...)


2017-12-23 16:15:18  WARN: Communications link failure
The last packet successfully received from the server was 29,104 milliseconds ago.  The last packet sent successfully to the server was 29,026 milliseconds ago. - CommunicationsException (... < QueryBuilder:477 < DataManager:455 < NotificationManager:53 < ...)
2017-12-23 16:15:19  INFO: Shutting down server...
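
If it is just temporary overload rather than an actual memory shortage, the wrapper message itself suggests raising wrapper.ping.timeout. This is roughly what I would try, assuming the bundled wrapper reads conf/wrapper.conf in the Traccar install directory and accepts the standard Tanuki-style property (value in seconds, default 30):

# conf/wrapper.conf (path assumed): give the JVM more time to answer wrapper pings.
# This only hides the symptom if the real cause is memory pressure.
wrapper.ping.timeout=120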
Anton Tananaev · 6 years ago

It does look like a resource issue. Usually we recommend a 1 GB server just for Traccar. If you have something else running there, you probably need a bit more, or spin up a separate instance just for Traccar.

maxw3ll · 6 years ago

Thanks, Anton, for the quick response. I did some further investigation and memory usage was always above 85% before Traccar shut down. I will switch to 2 GB of RAM now.
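
For reference, this is roughly how I sampled the memory usage, as a minimal sketch assuming a Linux host with /proc/meminfo and Python 3 available:

#!/usr/bin/env python3
# Log overall memory usage once a minute so spikes before a Traccar shutdown are visible.
import time

def memory_used_percent():
    # Parse /proc/meminfo; values are in kB.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.strip().split()[0])
    total = info["MemTotal"]
    # MemAvailable is the kernel's estimate of memory free for new workloads.
    available = info.get("MemAvailable", info["MemFree"])
    return 100.0 * (total - available) / total

while True:
    print(time.strftime("%Y-%m-%d %H:%M:%S"), f"memory used: {memory_used_percent():.1f}%", flush=True)
    time.sleep(60)

Redirecting the output to a file next to tracker-server.log makes it easy to line up the usage spikes with the shutdown times.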