I am using Kubernetes to spin up a postgres pod/container. I also want to make it persistent, so I am using an SMB server (TrueNAS) to create a PV/PVC to be used by this pod.
Here is my simple setup:
Config map to store some credentials.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-secret
  labels:
    app: postgres
data:
  POSTGRES_DB: ps_db
  POSTGRES_USER: postgres
  PGUSER: postgres
  POSTGRES_PASSWORD: <password>
---
Persistent volume and persistent volume claim to be used by the pod
apiVersion: v1
kind: PersistentVolume
metadata:
  name: services-postgres
spec:
  storageClassName: freenas-smb-csi
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - username=postgres
    - password=<password>
    - uid=999       # Map the user running the postgres process to the owner of the SMB share
    - dir_mode=0700 # Postgres demands that the PGDATA directory have 0700 or 0750 permissions
    - noperm        # TrueNAS uses ACLs for permissions. Need this so that Postgres ignores them.
  csi:
    driver: org.democratic-csi.smb
    readOnly: false
    fsType: cifs
    volumeHandle: postgres
    volumeAttributes:
      server: <ip_addr>
      share: postgres
      node_attach_driver: smb
      provisioner_driver: smb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: services-postgres-claim
spec:
  storageClassName: freenas-smb-csi
  volumeName: services-postgres
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 50Gi
---
Service to expose the server
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  sessionAffinity: ClientIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres-tcp
      protocol: TCP
Main deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 1008
      containers:
        - name: postgres
          image: 'postgres:14'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-secret
          env:
            - name: PGDATA
              value: /var/lib/postgresql/mounted_data/pgdata # PGDATA directory is different from the default /var/lib/postgresql/data
          # Mounting a subdirectory under the mount point, as it has been said
          # that postgres doesn't like PGDATA directly on a mount point.
          # Something to do with permissions.
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres
              subPath: var_lib_postgresql
          # Not necessarily needed, but from the logs I could see that postgres was failing once before restarting.
          # I tried to prevent that by adding this check,
          # but it seems that this is expected behavior from postgres (?)
          readinessProbe:
            exec:
              command:
                - pg_isready
            initialDelaySeconds: 5
            timeoutSeconds: 5
            failureThreshold: 5
      volumes:
        - name: postgres
          persistentVolumeClaim:
            claimName: services-postgres-claim
Now, the issue is that, for some reason, the Postgres container has the /var/lib/postgresql/data directory mounted to some other virtual device (probably created by the containerd runtime), even though the /var/lib/postgresql directory is already mounted to the PVC.
From inside the container, I see the following:
$ df -hT /var/lib/postgresql
Filesystem Type Size Used Avail Use% Mounted on
//<ip-addr>/postgres cifs 8.7T 96K 8.7T 1% /var/lib/postgresql
$ df -hT /var/lib/postgresql/data
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda6 xfs 99G 12G 88G 12% /var/lib/postgresql/data
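The df output above is a consequence of how path-to-mount resolution works: the mount with the longest matching mount point wins, so a mount at /var/lib/postgresql/data shadows that subdirectory of the PVC mount at /var/lib/postgresql. A minimal sketch of that resolution rule, using a hypothetical mount table that mirrors the df output (not real code from any tool):

```python
def mount_point_for(path, mounts):
    """Return the (device, mount_point) pair backing `path`.

    mounts: list of (device, mount_point) pairs, e.g. as parsed from
    /proc/self/mounts. The longest mount point that is a prefix of the
    path wins, which is how a child mount shadows its parent.
    """
    best = None
    for device, mp in mounts:
        if path == mp or path.startswith(mp.rstrip("/") + "/"):
            if best is None or len(mp) > len(best[1]):
                best = (device, mp)
    return best

# Hypothetical mount table mirroring the df output above:
mounts = [
    ("/dev/sda6", "/"),
    ("//<ip-addr>/postgres", "/var/lib/postgresql"),
    ("/dev/sda6", "/var/lib/postgresql/data"),
]

# A path beside `data` resolves to the SMB share...
print(mount_point_for("/var/lib/postgresql/mounted_data/pgdata", mounts))
# ...but anything under `data` resolves to the shadowing local mount.
print(mount_point_for("/var/lib/postgresql/data/base", mounts))
```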
Initially, I had PGDATA set to /var/lib/postgresql/data/data, but this led me to lose the entire data set because of the exact same problem. Because of this, I cannot use the data directory to store anything, as it gets mounted over later on.
Is this expected behavior? Is there any way I can resolve this issue, or is this something related to Kubernetes rather than Postgres?
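In case it helps diagnose: one way to check whether the shadow mount could come from the image itself is to inspect the image's declared volumes (a sketch, assuming a Docker CLI is available on a workstation; a volume declared at /var/lib/postgresql/data would be auto-mounted by the runtime):

```shell
# Print any VOLUMEs declared in the image metadata; a declared volume at
# /var/lib/postgresql/data would be mounted by the container runtime and
# would shadow that subdirectory of the PVC mount.
docker image inspect postgres:14 --format '{{json .Config.Volumes}}'
```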