Traccar Docker on Kubernetes (AKS) – Repeated Liquibase lock & pod restarts after production rollout (previously stable setup)

Yash Ahirrao · 6 days ago

Hello Traccar Team,
We are running Traccar using the official Docker image (traccar/traccar:latest) on Kubernetes (AKS) and are looking for guidance on a production issue that appeared suddenly, even though the same setup was running smoothly earlier.
Environment
Traccar version: latest Docker image
Deployment platform: Kubernetes (AKS)
Database: PostgreSQL (Azure Database for PostgreSQL)
TimescaleDB extension: enabled
Traccar replicas: 1
Database type: External DB (not H2)
Current Kubernetes Setup
Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traccar
  namespace: cargopro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traccar
  template:
    metadata:
      labels:
        app: traccar
    spec:
      containers:
        - name: traccar
          image: traccar/traccar:latest
          ports:
            - containerPort: 8082
            - containerPort: 5055
          envFrom:
            - secretRef:
                name: traccar-secret
          volumeMounts:
            - name: traccar-config-volume
              mountPath: /opt/traccar/conf/traccar.xml
              subPath: traccar.xml
      volumes:
        - name: traccar-config-volume
          secret:
            secretName: traccar-secret

Secret (includes traccar.xml)

apiVersion: v1
kind: Secret
metadata:
  name: traccar-secret
  namespace: cargopro
type: Opaque
stringData:
  TRACCAR_URL: http://traccar:8082
  TRACCAR_USER: info@cargopro.ai
  TRACCAR_PASSWORD: ********

  traccar.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM 'http://java.sun.com/dtd/properties.dtd'>
    <properties>
      <entry key="database.driver">org.postgresql.Driver</entry>
      <entry key="database.url">
        jdbc:postgresql://cargopro-db.postgres.database.azure.com:5432/traccar_cargopro
      </entry>
      <entry key="database.user">cargopro_db_admin</entry>
      <entry key="database.password">********</entry>
      <entry key="database.timescale">false</entry>
    </properties>
Service

apiVersion: v1
kind: Service
metadata:
  name: traccar
  namespace: cargopro
spec:
  type: ClusterIP
  ports:
    - name: web
      port: 8082
      targetPort: 8082
    - name: osmand
      port: 5055
      targetPort: 5055
  selector:
    app: traccar

Problem Description
This exact setup was running smoothly earlier, connected to the same PostgreSQL database.
However, during a recent production rollout, the Traccar pod suddenly started restarting repeatedly with the following logs:

Waiting for changelog lock....
Waiting for changelog lock....
ERROR: LockException
Could not acquire change log lock.
Currently locked by <pod-name> (<pod-ip>)

The pod enters CrashLoopBackOff.
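
As far as we understand, the lock the error refers to lives in Liquibase's standard lock table inside our PostgreSQL database (on PostgreSQL the unquoted table name is lower-cased). A minimal query to inspect the current lock holder, assuming the standard Liquibase schema, would be:

-- Inspect the Liquibase changelog lock row (standard Liquibase table,
-- created by Traccar's schema migration on first start)
SELECT id, locked, lockgranted, lockedby
FROM databasechangeloglock;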

Anton Tananaev · 6 days ago

The database lock means that you restarted the service before the database migration finished. There are many threads about it.

Yash Ahirrao · 6 days ago

Hi Anton, thanks for the response.
Just to clarify, this was not an intentional or manual restart during migration.
This was a normal production rollout using the same Kubernetes Deployment, same PostgreSQL database, and same configuration that had been running stably before.
We did not change replicas (still 1), did not scale up/down, and did not manually restart the pod during startup.
What surprised us is that:
The same database had already been initialized and working earlier
The same deployment manifest had been used previously without issues
The issue appeared suddenly on a new rollout, even though no DB reset or parallel instance was introduced
From Kubernetes’ perspective, this rollout only recreated the pod once (standard rolling replace), but it seems the Liquibase lock persisted and caused subsequent restarts to fail.
We wanted to understand:
Whether Kubernetes pod recreation (even with replicas=1) can still trigger this lock scenario
If there are best practices for Traccar on Kubernetes to avoid Liquibase lock persistence (e.g., startup probes, init containers, or migration disabling after first run)
And what the recommended recovery is in production without risking data integrity
Appreciate any guidance on how to make this setup more resilient for Kubernetes-based deployments; a rough sketch of what we are currently considering is included below for reference.
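
This is only a sketch of the direction we are thinking about on our side; the pinned image tag, Recreate strategy, and startup-probe path/timings are our own assumptions rather than anything from the Traccar documentation:

# Sketch of the Deployment changes we are considering (values are assumptions)
spec:
  strategy:
    type: Recreate                    # ensure the old pod is gone before the new one starts
  template:
    spec:
      containers:
        - name: traccar
          # Pin a released tag; with :latest the default imagePullPolicy is Always,
          # so a rollout can silently pull a newer Traccar that runs new migrations.
          image: traccar/traccar:6.5
          startupProbe:               # give a long migration time to finish before any restarts
            httpGet:
              path: /
              port: 8082
            periodSeconds: 10
            failureThreshold: 60      # up to ~10 minutes for startup/migration

The intent is that an image upgrade only happens when we deliberately change the tag, and that the single pod gets enough time to finish migrations before anything restarts it.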
Thanks again.

Anton Tananaev · 5 days ago

Any restart could potentially cause this, but the most common case is an actual Traccar version upgrade.

Yash Ahirrao · 5 days ago

Hi Anton, thanks for the clarification.
Understood; this was likely caused by a Traccar version upgrade pulled in via the latest image, combined with a restart during the database migration.
At this point, we would like to reset and start clean in a safe way. Could you please advise the recommended process to start fresh in this situation for a Kubernetes-based deployment?
Specifically, we want to know the correct steps to:
Cleanly recover from the Liquibase lock state
Ensure database migrations complete successfully
We want to follow the Traccar-recommended approach rather than applying ad-hoc fixes.
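
If it helps frame the question: the generic Liquibase way to release a stale lock is to reset the row in its lock table, roughly as below. This is only a sketch based on Liquibase's documented table layout; we have not applied it yet because we first want to confirm it is the approach recommended for Traccar and that it is safe for data integrity:

-- Run only while no Traccar instance is connected to the database
-- (e.g. kubectl scale deployment traccar -n cargopro --replicas=0),
-- then start Traccar again so the pending migration can complete normally.
UPDATE databasechangeloglock
SET locked = FALSE,
    lockgranted = NULL,
    lockedby = NULL
WHERE id = 1;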
Thanks for your guidance.
Best regards,
Yash

Anton Tananaev · 4 days ago

Have you found other threads about this issue?

Yash Ahirrao · 4 days ago

Yes, but I did not find any relevant solution.