Deploying a PostgreSQL Replica Cluster with the Barman Cloud Plugin in a CloudNativePG Environment – Part 1
Swapnil Suryawanshi
April 15, 2026
This blog walks through the step-by-step process of setting up an EDB Postgres® AI for CloudNativePG replica cluster that uses the Barman Cloud plugin for backups and WAL archiving.
📌 Environment Details
- Operator: EDB Postgres® AI for CloudNativePG 1.28.1
- Database: EDB Postgres Advanced Server 18 (EPAS)
- Backup setup: Barman Cloud Plugin v0.11.0
- Storage: AWS S3
1. Prerequisites: Creating Namespaces and Credentials
First, create separate namespaces for the primary and replica clusters, and store the S3 credentials the Barman plugin needs to access the object store.
Bash
user% kubectl create ns primary
namespace/primary created
user% kubectl create ns replica
namespace/replica created
Create the AWS credentials in the primary namespace.
Bash
user% kubectl create secret generic aws-creds \
  --from-literal=ACCESS_KEY_ID=xxxxxxxxN3GE5FSxxxxxx \
  --from-literal=ACCESS_SECRET_KEY=xxxxxxxxGrS+xlfTlCZTaTxxxxxx -n primary
secret/aws-creds created
Create the same AWS credentials in the replica namespace.
Bash
user% kubectl create secret generic aws-creds \
  --from-literal=ACCESS_KEY_ID=xxxxxxxxN3GE5FSxxxxxx \
  --from-literal=ACCESS_SECRET_KEY=xxxxxxxxGrS+xlfTlCZTaTxxxxxx -n replica
secret/aws-creds created
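If you manage manifests in Git, the same secret can also be expressed declaratively instead of with `kubectl create secret`. A minimal sketch (the key values are placeholders; apply one copy per namespace):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
  namespace: primary   # repeat with namespace: replica for the second copy
type: Opaque
stringData:            # stringData lets you supply plain-text values; the API server base64-encodes them
  ACCESS_KEY_ID: <your-access-key-id>
  ACCESS_SECRET_KEY: <your-secret-access-key>
```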
2. Setting Up the Barman Plugin
The Barman plugin requires cert-manager for secure communication. Verify the cmctl installation, install cert-manager, and then deploy the Barman Cloud plugin.
Bash
user% brew install cmctl
==> Auto-updating Homebrew...
Adjust how often this is run with `$HOMEBREW_AUTO_UPDATE_SECS` or disable with `$HOMEBREW_NO_AUTO_UPDATE=1`.
Hide these hints with `$HOMEBREW_NO_ENV_HINTS=1` (see `man brew`).
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> New Formulae
notchi: Notch companion for Claude Code
nvidia-sync: Utility for launching applications and containers on remote Linux systems
You have 30 outdated formulae and 1 outdated cask installed.
Bash
user% kubectl create namespace cert-manager
namespace/cert-manager created
user% kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
Warning: resource namespaces/cert-manager is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/cert-manager configured
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io unchanged
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-tokenrequest created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-tokenrequest created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager-cainjector created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Bash
user% kubectl get pods -n cert-manager
NAME                                       READY   STATUS    RESTARTS      AGE
cert-manager-6bfcb455c7-q75dr              1/1     Running   1 (37m ago)   107m
cert-manager-cainjector-84d45cd8f4-j4zvg   1/1     Running   0             107m
cert-manager-webhook-5bb447875c-lsn7s      1/1     Running   0             107m
user% cmctl check api
The cert-manager API is ready
Apply the plugin manifest in the operator's namespace.
Bash
user% kubectl apply -f https://github.com/cloudnative-pg/plugin-barman-cloud/releases/download/v0.11.0/manifest.yaml
customresourcedefinition.apiextensions.k8s.io/objectstores.barmancloud.cnpg.io created
serviceaccount/plugin-barman-cloud created
role.rbac.authorization.k8s.io/barman-plugin-leader-election-role created
clusterrole.rbac.authorization.k8s.io/barman-plugin-metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/barman-plugin-metrics-reader created
clusterrole.rbac.authorization.k8s.io/barman-plugin-objectstore-editor-role created
clusterrole.rbac.authorization.k8s.io/barman-plugin-objectstore-viewer-role created
clusterrole.rbac.authorization.k8s.io/plugin-barman-cloud created
rolebinding.rbac.authorization.k8s.io/barman-plugin-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/barman-plugin-metrics-auth-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/plugin-barman-cloud-binding created
secret/plugin-barman-cloud-52ggkcd52d created
service/barman-cloud created
deployment.apps/barman-cloud created
certificate.cert-manager.io/barman-cloud-client created
certificate.cert-manager.io/barman-cloud-server created
issuer.cert-manager.io/selfsigned-issuer created
user% kubectl rollout status deployment -n postgresql-operator-system barman-cloud
deployment "barman-cloud" successfully rolled out
3. Creating the ObjectStore Resources
Define an ObjectStore resource in both namespaces. It tells the plugin where to store and retrieve backups and WAL files.
Create the ObjectStore for the primary cluster in the primary namespace.
YAML
user% vi ObjectStore-primary.yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: s3-store
  namespace: primary   # Also apply a copy to the 'replica' namespace
spec:
  configuration:
    destinationPath: s3://swapnil-backup/cnp/
    s3Credentials:
      accessKeyId:
        name: aws-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
Bash
user% kubectl apply -f ObjectStore-primary.yaml
objectstore.barmancloud.cnpg.io/s3-store created
Create the ObjectStore for the replica cluster in the replica namespace.
YAML
user% vi ObjectStore-replica.yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: s3-store
  namespace: replica
spec:
  configuration:
    destinationPath: s3://swapnil-backup/cnp/
    s3Credentials:
      accessKeyId:
        name: aws-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
Bash
user% kubectl apply -f ObjectStore-replica.yaml
objectstore.barmancloud.cnpg.io/s3-store created
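Beyond destinationPath and credentials, the ObjectStore resource accepts further tuning options. For example, the plugin supports a retention policy so that old backups and WALs are pruned automatically; a hedged sketch (the resource name is made up for illustration, and you should verify the exact field names against the plugin documentation for v0.11.0):

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: s3-store-with-retention   # hypothetical name, for illustration only
  namespace: primary
spec:
  retentionPolicy: "30d"          # keep what is needed to recover any point in the last 30 days
  configuration:
    destinationPath: s3://swapnil-backup/cnp/
    s3Credentials:
      accessKeyId:
        name: aws-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
```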
4. Deploying and Verifying the Primary Cluster
Deploy the primary cluster in the primary namespace and verify the WAL archiving status through the Barman plugin.
YAML
user% vi cluster-primary.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-primary
  namespace: primary
spec:
  instances: 3
  imageName: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
  primaryUpdateStrategy: unsupervised
  storage:
    size: 1G
  replica:
    primary: cluster-primary
    source: cluster-primary
  plugins:
  - name: barman-cloud.cloudnative-pg.io
    isWALArchiver: true
    parameters:
      barmanObjectName: s3-store
  externalClusters:
  - name: cluster-primary
    plugin:
      name: barman-cloud.cloudnative-pg.io
      parameters:
        barmanObjectName: s3-store
        serverName: cluster-primary
  - name: cluster-replica
    plugin:
      name: barman-cloud.cloudnative-pg.io
      parameters:
        barmanObjectName: s3-store
        serverName: cluster-replica
Bash
user% kubectl apply -f cluster-primary.yaml
cluster.postgresql.k8s.enterprisedb.io/cluster-primary created
user% kubectl get pods -L role -n primary
NAME                READY   STATUS    RESTARTS   AGE     ROLE
cluster-primary-1   2/2     Running   0          3m11s   primary
cluster-primary-2   2/2     Running   0          2m10s   replica
cluster-primary-3   2/2     Running   0          102s    replica
Check the CNP cluster status and verify that WAL archiving reports "OK".
Bash
user% kubectl cnp status cluster-primary -n primary
Cluster Summary
Name                     primary/cluster-primary
System ID:               7621517035557347356
PostgreSQL Image:        docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
Primary instance:        cluster-primary-1
Primary promotion time:  2026-03-26 10:59:23 +0000 UTC (3m8s)
Status:                  Cluster in healthy state
Instances:               3
Ready instances:         3
Size:                    169M
Current Write LSN:       0/8000060 (Timeline: 1 - WAL File: 000000010000000000000008)

Continuous Backup status (Barman Cloud Plugin)
No recovery window information found in ObjectStore 's3-store' for server 'cluster-primary'
Working WAL archiving:        OK
WALs waiting to be archived:  0
Last Archived WAL:            000000010000000000000007.00000028.backup @ 2026-03-26T11:00:11.577402Z
Last Failed WAL:              -

Streaming Replication status
Replication Slots Enabled
Name               Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority  Replication Slot
----               --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------  ----------------
cluster-primary-2  0/8000060  0/8000060  0/8000060  0/8000060   00:00:00   00:00:00   00:00:00    streaming  async       0              active
cluster-primary-3  0/8000060  0/8000060  0/8000060  0/8000060   00:00:00   00:00:00   00:00:00    streaming  async       0              active

Instances status
Name               Current LSN  Replication role  Status  QoS         Manager Version  Node
----               -----------  ----------------  ------  ---         ---------------  ----
cluster-primary-1  0/8000060    Primary           OK      BestEffort  1.28.1           replicacluster-control-plane
cluster-primary-2  0/8000060    Standby (async)   OK      BestEffort  1.28.1           replicacluster-control-plane
cluster-primary-3  0/8000060    Standby (async)   OK      BestEffort  1.28.1           replicacluster-control-plane

Plugins status
Name                            Version  Status  Reported Operator Capabilities
----                            -------  ------  ------------------------------
barman-cloud.cloudnative-pg.io  0.11.0   N/A     Reconciler Hooks, Lifecycle Service
5. Performing a Manual Backup
Perform a manual backup through the Barman plugin so that the S3 object store is populated with the data needed to bootstrap the replica cluster.
Trigger a backup using the plugin method.
Bash
user% kubectl cnp backup cluster-primary --method=plugin --plugin-name=barman-cloud.cloudnative-pg.io -n primary
backup/cluster-primary-20260326163523 created
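The same backup can also be requested declaratively with a Backup resource, which is convenient for GitOps workflows. A sketch, assuming the Backup CRD fields of operator 1.28 (the resource name is arbitrary):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: cluster-primary-manual    # arbitrary name, for illustration
  namespace: primary
spec:
  cluster:
    name: cluster-primary
  method: plugin                  # route the backup through the plugin rather than the in-tree method
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```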
Check that the backup has completed.
Bash
user% kubectl get backup -n primary
NAME                             AGE   CLUSTER           METHOD   PHASE       ERROR
cluster-primary-20260326163523   33s   cluster-primary   plugin   completed
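In production you would normally not rely on manual backups alone; a ScheduledBackup resource can trigger the plugin on a cron schedule. A sketch, assuming the six-field cron syntax used by the operator (seconds first) and an arbitrary resource name:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: cluster-primary-daily    # arbitrary name, for illustration
  namespace: primary
spec:
  schedule: "0 0 0 * * *"        # every day at midnight (sec min hour day month weekday)
  cluster:
    name: cluster-primary
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```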
Verify the First Point of Recoverability and the last successful backup.
Bash
user% kubectl cnp status cluster-primary -n primary
Cluster Summary
Name                     primary/cluster-primary
System ID:               7621517035557347356
PostgreSQL Image:        docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
Primary instance:        cluster-primary-1
Primary promotion time:  2026-03-26 10:59:23 +0000 UTC (7m27s)
Status:                  Cluster in healthy state
:
:
Continuous Backup status (Barman Cloud Plugin)
ObjectStore / Server name:      s3-store/cluster-primary
First Point of Recoverability:  2026-03-26 16:35:25 IST
Last Successful Backup:         2026-03-26 16:35:25 IST
Last Failed Backup:             -
Working WAL archiving:          OK
WALs waiting to be archived:    0
Last Archived WAL:              000000010000000000000008 @ 2026-03-26T11:05:08.855493Z
Last Failed WAL:                -
6. Deploying and Verifying the Replica Cluster
The replica cluster is configured to bootstrap via recovery from the primary's object store and then stays in sync. (The replica cluster points at the cluster-primary source for recovery.)
YAML
user% vi cluster-replica.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-replica
  namespace: replica
spec:
  instances: 3
  imageName: docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
  storage:
    size: 1G
  bootstrap:
    recovery:
      source: cluster-primary
  replica:
    primary: cluster-primary
    source: cluster-primary
  plugins:
  - name: barman-cloud.cloudnative-pg.io
    isWALArchiver: true
    parameters:
      barmanObjectName: s3-store
  externalClusters:
  - name: cluster-primary
    plugin:
      name: barman-cloud.cloudnative-pg.io
      parameters:
        barmanObjectName: s3-store
        serverName: cluster-primary
  - name: cluster-replica
    plugin:
      name: barman-cloud.cloudnative-pg.io
      parameters:
        barmanObjectName: s3-store
        serverName: cluster-replica
Bash
user% kubectl apply -f cluster-replica.yaml
cluster.postgresql.k8s.enterprisedb.io/cluster-replica created
user% kubectl get pods -L role -n replica
NAME                READY   STATUS    RESTARTS   AGE    ROLE
cluster-replica-1   2/2     Running   0          116s   primary
cluster-replica-2   2/2     Running   0          88s    replica
cluster-replica-3   2/2     Running   0          60s    replica
Check the replica status and the connection to the source cluster.
Bash
user% kubectl cnp status cluster-replica -n replica
Replica Cluster Summary
Name                     replica/cluster-replica
System ID:               7621517035557347356
PostgreSQL Image:        docker.enterprisedb.com/k8s/edb-postgres-advanced:18-standard-ubi9
Designated primary:      cluster-replica-1
Source cluster:          cluster-primary
Primary promotion time:  2026-03-26 11:08:43 +0000 UTC (2m8s)
Status:                  Cluster in healthy state
Instances:               3
Ready instances:         3
Size:                    88M

Continuous Backup status (Barman Cloud Plugin)
No recovery window information found in ObjectStore 's3-store' for server 'cluster-replica'
Working WAL archiving:        OK
WALs waiting to be archived:  0
Last Archived WAL:            000000010000000000000008 @ 2026-03-26T11:08:49.722535Z
Last Failed WAL:              -

Instances status
Name               Current LSN  Replication role              Status  QoS         Manager Version  Node
----               -----------  ----------------              ------  ---         ---------------  ----
cluster-replica-1  0/9000000    Designated primary            OK      BestEffort  1.28.1           replicacluster-control-plane
cluster-replica-2  0/9000000    Standby (in Replica Cluster)  OK      BestEffort  1.28.1           replicacluster-control-plane
cluster-replica-3  0/9000000    Standby (in Replica Cluster)  OK      BestEffort  1.28.1           replicacluster-control-plane

Plugins status
Name                            Version  Status  Reported Operator Capabilities
----                            -------  ------  ------------------------------
barman-cloud.cloudnative-pg.io  0.11.0   N/A     Reconciler Hooks, Lifecycle Service
7. Validating Replication
Create data on the primary cluster and confirm that it replicates correctly to the replica cluster.
Read/write test on the primary:
Bash
user% kubectl cnp psql cluster-primary -n primary
psql (18.3.0)
Type "help" for help.
postgres=# create table test(id int);
CREATE TABLE
postgres=# insert into test values (1);
INSERT 0 1
postgres=# select * from test;
 id
----
  1
(1 row)
postgres=# checkpoint;
CHECKPOINT
postgres=# select * from pg_switch_wal();
 pg_switch_wal
---------------
 0/9020208
(1 row)
postgres=# select * from pg_switch_wal();
 pg_switch_wal
---------------
 0/A000078
(1 row)
Read test on the replica:
Bash
user% kubectl cnp psql cluster-replica -n replica
psql (18.3.0)
Type "help" for help.
postgres=# \dt
List of relations
Schema | Name | Type | Owner
--------+------+-------+----------
public | test | table | postgres
(1 row)
postgres=# select * from test;
id
----
1
(1 row)
Conclusion
The successful deployment of cluster-replica in the replica namespace, bootstrapped from cluster-primary in the primary namespace, demonstrates the robustness of the Barman Cloud plugin within a CloudNativePG environment.
By leveraging S3-compatible storage for full binary backups and continuous WAL archiving, we achieved a decoupled architecture in which the replica cluster can be initialized and kept in sync without a direct network dependency on the primary's live pods during the initial phase.
🚀 Advanced Operations: Lifecycle Management
With the distributed topology in place and data synchronized through the Barman Cloud plugin, traffic can now be switched between the clusters either manually or automatically.
Next steps: for how to promote the replica cluster to primary and demote the existing primary without data loss, see the runbook [Switchover and Switchback of CloudNativePG Replica Clusters in a Distributed Topology (K8s) – Part 2].
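As a preview of Part 2: in this topology, which cluster acts as primary is controlled by the .spec.replica.primary field on each Cluster. A hedged sketch of the change that promotes the replica (Part 2 covers the full, safe procedure, including demoting the old primary first):

```yaml
# Fragment of cluster-replica.yaml (namespace: replica); the mirrored change is
# made in cluster-primary.yaml so both clusters agree on who leads.
replica:
  primary: cluster-replica   # was: cluster-primary -- this cluster now acts as primary
  source: cluster-primary    # where to replicate from if it is later demoted again
```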
Email: salesinquiry@enterprisedb.com

