# Decoupling MinIO from fossa-core
As we prepare to upgrade and migrate MinIO, we need to decouple it from the fossa-core Helm chart, where it is currently bundled as a sub-chart dependency. This is needed so that MinIO can be independently managed.
## Version 5.0.0 changes
MinIO will be deployed via its Helm chart for both fossa-core and hubble, requiring a scheduled maintenance window of approximately 30 minutes to 1 hour. The following configuration changes will also be necessary:
- `storage.endpoint` must include the `http` prefix
- `iam.kind` must be set to `AccessKey` and `iam.region` must be set to `minio`
- `storage.forcePathStyle` must be set to `true` when using MinIO
- `hubble.storage.endpoint` must include the `http` prefix
- `hubble.iam.kind` must be set to `AccessKey` and `hubble.iam.region` must be set to `minio`
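Expressed as a values-file fragment, the required settings look like the following (the endpoint hostnames shown are examples; the decoupling steps below include a complete file):

```yaml
iam:
  kind: AccessKey
  region: minio
storage:
  endpoint: http://fossa-core-minio      # must include the http prefix
  forcePathStyle: true                   # required when using MinIO
hubble:
  iam:
    kind: AccessKey
    region: minio
  storage:
    endpoint: http://fossa-hubble-minio  # must include the http prefix
```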
## Decoupling steps
Assumptions:

- Namespace: `fossa`
- fossa-core release name: `fossa`
- Values files:
  - MinIO bundled config: `fossa-core-config.yml`
  - New MinIO decoupled config: `decoupled-fossa-core-config.yml`
- Persistent Volume Claims (PVCs):
  - `fossa-core-minio`
  - `fossa-hubble-minio`
- Existing buckets:
  - `storage.bucket`: `fossa.test`
  - `hubble.storage.bucket`: `hubble.fossa.test`
- `storage.endpoint`: `http://fossa-core-minio`
- `storage.forcePathStyle`: `true`
- `hubble.storage.endpoint`: `http://fossa-hubble-minio`
- iam overrides:
  - `iam.kind`: `AccessKey`
  - `iam.region`: `minio`
  - `hubble.iam.kind`: `AccessKey`
  - `hubble.iam.region`: `minio`
1. Set the namespace where fossa-core is deployed as your current namespace:

   ```shell
   kubectl config set-context --current --namespace=fossa
   ```
2. Find the MinIO PVCs and make note of them. They should follow the pattern `RELEASE-minio` for fossa-core and `RELEASE-hubble-minio` for hubble:

   - `fossa-core-minio`
   - `fossa-hubble-minio`
3. Ensure that the PVCs have the resource policy annotation set to `keep`. If this is not the case, **DO NOT PROCEED**. You can confirm this using the following commands:

   ```shell
   kubectl get pvc fossa-core-minio -o jsonpath='{.metadata.annotations.helm\.sh/resource-policy}'
   kubectl get pvc fossa-hubble-minio -o jsonpath='{.metadata.annotations.helm\.sh/resource-policy}'
   ```
4. Enable maintenance mode with the current values config file (this will trigger a downtime event):

   ```shell
   helm upgrade -i fossa fossa/fossa-core --values fossa-core-config.yml --set global.maintenanceMode.enabled=true --version "^4.0.0"
   ```
5. Delete the fossa-core release (this will trigger a downtime event):

   ```shell
   helm delete fossa
   ```
6. Deploy externally managed MinIO releases for fossa-core and hubble, reusing the existing PVCs via the `persistence.existingClaim` flag:

   ```shell
   helm upgrade -i fossa-core-minio fossa/minio \
     --set bucket=fossa.test \
     --set auth.accessKey=minio \
     --set auth.secretKey=minio123 \
     --set persistence.existingClaim=fossa-core-minio

   helm upgrade -i fossa-hubble-minio fossa/minio \
     --set bucket=hubble.fossa.test \
     --set auth.accessKey=minio \
     --set auth.secretKey=minio123 \
     --set persistence.existingClaim=fossa-hubble-minio
   ```
7. Upgrade fossa-core to the next major release, ensuring that `storage.endpoint`, `iam.kind`, `iam.region`, `hubble.storage.endpoint`, `hubble.iam.kind`, and `hubble.iam.region` are defined in your values file.

   Example `decoupled-fossa-core-config.yml`:

   ```yaml
   # decoupled-fossa-core-config.yml
   # ...
   iam:
     kind: AccessKey
     region: minio
   storage:
     endpoint: http://fossa-core-minio
     bucket: fossa.test
     forcePathStyle: true
     auth:
       accessKey: minio
       secretKey: minio123
   # ...
   hubble:
     iam:
       kind: AccessKey
       region: minio
     storage:
       bucket: hubble.fossa.test
       endpoint: http://fossa-hubble-minio
       auth:
         accessKey: minio
         secretKey: minio123
   # ...
   ```

   Upgrade:

   ```shell
   helm upgrade -i fossa fossa/fossa-core --values decoupled-fossa-core-config.yml --version "^5.0.0"
   ```
8. Ensure all the pods become healthy and that your data is still available:

   ```shell
   kubectl exec fossa-core-minio-0 -- /opt/bitnami/minio-client/bin/mc ls local/fossa.test/BUILDS/
   kubectl exec fossa-hubble-minio-0 -- /opt/bitnami/minio-client/bin/mc ls local/hubble.fossa.test/BUILDS/
   ```
9. Log in to the UI and ensure data is available.
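If you prefer not to pass `--set` flags when deploying the external MinIO releases, the same overrides can live in a small values file; the filename `fossa-core-minio-values.yml` below is illustrative:

```yaml
# fossa-core-minio-values.yml (illustrative name)
bucket: fossa.test
auth:
  accessKey: minio
  secretKey: minio123
persistence:
  existingClaim: fossa-core-minio
```

It would then be applied with `helm upgrade -i fossa-core-minio fossa/minio --values fossa-core-minio-values.yml`, with an analogous file for the hubble release using `hubble.fossa.test` and `fossa-hubble-minio`.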
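The PVC resource-policy check above is a natural candidate for scripting. The sketch below assumes the PVC names from this guide; `require_keep` is a helper named here for illustration, and the cluster is only queried when `kubectl` is available and reachable:

```shell
#!/bin/sh
# Gate script: refuse to continue unless each MinIO PVC carries the
# Helm "keep" resource policy annotation.
set -u

# require_keep POLICY PVC_NAME -> succeeds only when POLICY is exactly "keep"
require_keep() {
  if [ "$1" != "keep" ]; then
    echo "PVC $2 resource-policy is '$1', not 'keep' -- DO NOT PROCEED" >&2
    return 1
  fi
}

# Only query the cluster when kubectl exists and can reach it.
if command -v kubectl >/dev/null 2>&1 && kubectl get pvc >/dev/null 2>&1; then
  for pvc in fossa-core-minio fossa-hubble-minio; do
    policy=$(kubectl get pvc "$pvc" \
      -o jsonpath='{.metadata.annotations.helm\.sh/resource-policy}')
    require_keep "$policy" "$pvc" || exit 1
  done
  echo "Both MinIO PVCs are annotated keep; safe to proceed."
fi
```

Running this before deleting the release turns the manual "DO NOT PROCEED" check into a hard stop.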
