# Enable Ceph RGW Object Storage
Pelagia enables you to deploy Ceph RADOS Gateway (RGW) Object Storage instances and automatically manage their resources, such as users and buckets. Pelagia also integrates Ceph Object Storage with OpenStack Object Storage (Swift) provided by Rockoon.
## Ceph RGW Object Storage parameters
- `name` - Required. Ceph Object Storage instance name.
- `dataPool` - Required if `zone.name` is not specified. Mutually exclusive with `zone`. Must be used together with `metadataPool`.

  Object storage data pool spec that must only contain `replicated` or `erasureCoded`, `deviceClass`, and `failureDomain` parameters. The `failureDomain` parameter may be set to `host`, `rack`, `room`, or `datacenter`, defining the failure domain across which the data will be spread. The `deviceClass` must be explicitly defined. For `dataPool`, we recommend using an `erasureCoded` pool. For example:

  ```yaml
  spec:
    objectStorage:
      rgw:
        dataPool:
          deviceClass: hdd
          failureDomain: host
          erasureCoded:
            codingChunks: 1
            dataChunks: 2
  ```
- `metadataPool` - Required if `zone.name` is not specified. Mutually exclusive with `zone`. Must be used together with `dataPool`.

  Object storage metadata pool spec that must only contain `replicated`, `deviceClass`, and `failureDomain` parameters. The `failureDomain` parameter may be set to `host`, `rack`, `room`, or `datacenter`, defining the failure domain across which the data will be spread. The `deviceClass` must be explicitly defined. Can use only `replicated` settings. For example:

  ```yaml
  spec:
    objectStorage:
      rgw:
        metadataPool:
          deviceClass: hdd
          failureDomain: host
          replicated:
            size: 3
  ```

  where `replicated.size` is the number of full copies of data on multiple nodes.

  **Warning:** When using a non-recommended `replicated.size` of less than `3`, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified `replicated.size`. For example, if `replicated.size` is `2`, the minimal replica size is `1`, and if `replicated.size` is `3`, then the minimal replica size is `2`. A replica size of `1` allows Ceph to have PGs with only one Ceph OSD in the `acting` state, which may cause a `PG_TOO_DEGRADED` health warning that blocks Ceph OSD removal. We recommend setting `replicated.size` to `3` for each Ceph pool.
- `gateway` - Required. The gateway settings corresponding to the `rgw` daemon settings. Includes the following parameters:

  - `port` - the port on which the Ceph RGW service listens for HTTP.
  - `securePort` - the port on which the Ceph RGW service listens for HTTPS.
  - `instances` - the number of pods in the Ceph RGW ReplicaSet. If `allNodes` is set to `true`, a DaemonSet is created instead.

    **Note:** We recommend using 3 instances for Ceph Object Storage.

  - `allNodes` - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The `instances` parameter is ignored if `allNodes` is set to `true`.
  - `splitDaemonForMultisiteTrafficSync` - Optional. For a multisite setup, defines whether to split the RGW daemon into a daemon responsible for synchronization between zones and a daemon serving client requests.
  - `rgwSyncPort` - Optional. The port on which the RGW multisite traffic service listens for HTTP. Takes effect only in a multisite configuration.
  - `resources` - Optional. Represents Kubernetes resource requirements for Ceph RGW pods. For details, see Kubernetes docs: Resource Management for Pods and Containers.
  - `externalRgwEndpoint` - Required for an external Ceph cluster setup. Represents the external RGW endpoint to use when an external Ceph cluster is used (see the sketch after the example below). Contains the following parameters:

    - `ip` - the IP address of the RGW endpoint.
    - `hostname` - the DNS-addressable hostname of the RGW endpoint. This field is preferred over `ip` if both are given.

  For example:

  ```yaml
  spec:
    objectStorage:
      rgw:
        gateway:
          allNodes: false
          instances: 3
          port: 80
          securePort: 8443
  ```
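  For an external Ceph cluster, the endpoint could hypothetically be configured as follows. This sketch assumes `externalRgwEndpoint` is nested under `gateway` as in the parameter list above, and all values are placeholders; verify the exact nesting against your `CephDeployment` CRD.

  ```yaml
  spec:
    objectStorage:
      rgw:
        gateway:
          externalRgwEndpoint:
            ip: 10.0.0.10             # placeholder external RGW IP
            hostname: rgw.example.com # preferred over ip when both are set
  ```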
- `preservePoolsOnDelete` - Optional. Defines whether to preserve the data and metadata pools of the `rgw` section if the Object Storage is deleted. Set this parameter to `true` if you need to keep the data even after the object storage is deleted. However, we recommend setting this parameter to `false`.
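  A minimal example, matching the recommended setting:

  ```yaml
  spec:
    objectStorage:
      rgw:
        preservePoolsOnDelete: false
  ```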
- `objectUsers` and `buckets` - Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.

  - `objectUsers` - a list of user specifications to create for object storage. Contains the following fields:

    - `name` - a user name to create.
    - `displayName` - the Ceph user name to display.
    - `capabilities` - user capabilities:

      - `user` - admin capabilities to read/write Ceph Object Store users.
      - `bucket` - admin capabilities to read/write Ceph Object Store buckets.
      - `metadata` - admin capabilities to read/write Ceph Object Store metadata.
      - `usage` - admin capabilities to read/write Ceph Object Store usage.
      - `zone` - admin capabilities to read/write Ceph Object Store zones.

      The available options are `*`, `read`, `write`, and `read, write`. For details, see Ceph documentation: Add/remove admin capabilities.

    - `quotas` - user quotas:

      - `maxBuckets` - the maximum bucket limit for the Ceph user. Integer, for example, `10`.
      - `maxSize` - the maximum size limit of all objects across all the buckets of a user. String size, for example, `10G`.
      - `maxObjects` - the maximum number of objects across all buckets of a user. Integer, for example, `10`.

    For example:

    ```yaml
    spec:
      objectStorage:
        rgw:
          objectUsers:
          - name: test-user
            displayName: test-user
            capabilities:
              bucket: '*'
              metadata: read
              user: read
            quotas:
              maxBuckets: 10
              maxSize: 10G
    ```
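    Pelagia manages RGW through Rook, so the credentials of a created user typically land in a Kubernetes secret following the upstream Rook naming scheme `rook-ceph-object-user-<store-name>-<user-name>`. The secret name below is an assumption derived from the examples in this section:

    ```bash
    # Assumed Rook-style secret name; the secret holds AccessKey and SecretKey fields
    kubectl -n rook-ceph get secret rook-ceph-object-user-rgw-store-test-user \
      -o jsonpath='{.data.AccessKey}' | base64 -d
    ```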
  - `buckets` - a list of strings that contain bucket names to create for object storage.
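    For example, a sketch with a placeholder bucket name:

    ```yaml
    spec:
      objectStorage:
        rgw:
          buckets:
          - test-bucket
    ```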
- `zone` - Required if `dataPool` and `metadataPool` are not specified. Mutually exclusive with these parameters.

  Defines the Ceph Multisite zone where the object storage must be placed. Includes the `name` parameter that must be set to one of the `zones` items. For details, see the Ops Guide: Enable Multisite for Ceph Object Storage. For example:

  ```yaml
  spec:
    objectStorage:
      multisite:
        zones:
        - name: master-zone
          ...
      rgw:
        zone:
          name: master-zone
  ```
- `SSLCert` - Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated. For example:

  ```yaml
  spec:
    objectStorage:
      rgw:
        SSLCert:
          cacert: |
            -----BEGIN CERTIFICATE-----
            ca-certificate here
            -----END CERTIFICATE-----
          tlsCert: |
            -----BEGIN CERTIFICATE-----
            private TLS certificate here
            -----END CERTIFICATE-----
          tlsKey: |
            -----BEGIN RSA PRIVATE KEY-----
            private TLS key here
            -----END RSA PRIVATE KEY-----
  ```
- `SSLCertInRef` - Optional. Flag indicating that a TLS certificate for accessing the Ceph RGW endpoint is used but not exposed in `spec`. For example:

  ```yaml
  spec:
    objectStorage:
      rgw:
        SSLCertInRef: true
  ```

  The operator must manually provide TLS configuration using the `rgw-ssl-certificate` secret in the `rook-ceph` namespace of the managed cluster. The secret object must have the following structure:

  ```yaml
  data:
    cacert: <base64encodedCaCertificate>
    cert: <base64encodedCertificate>
  ```

  When removing an already existing `SSLCert` block, no additional actions are required, because this block uses the same `rgw-ssl-certificate` secret in the `rook-ceph` namespace.

  When adding a new secret directly without exposing it in `spec`, the following rules apply:

  - `cert` - base64 representation of a file with the server TLS key, server TLS cert, and CA certificate.
  - `cacert` - base64 representation of a CA certificate only.
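  For illustration, such a secret could be created with `kubectl`. This is a sketch with hypothetical file names: per the rules above, the `cert` file is expected to contain the server TLS key, the server TLS certificate, and the CA certificate concatenated, while `cacert` contains only the CA certificate.

  ```bash
  # Hypothetical file names; kubectl base64-encodes file contents automatically
  kubectl -n rook-ceph create secret generic rgw-ssl-certificate \
    --from-file=cert=rgw-bundle.pem \
    --from-file=cacert=ca.pem
  ```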
## To enable the RGW Object Storage
1. Open the `CephDeployment` resource for editing:

   ```bash
   kubectl -n pelagia edit cephdpl <name>
   ```

   Substitute `<name>` with the name of your `CephDeployment`.

2. Using the Ceph RGW Object Storage parameters above, update the `objectStorage.rgw` section specification. For example:

   ```yaml
   rgw:
     name: rgw-store
     dataPool:
       deviceClass: hdd
       erasureCoded:
         codingChunks: 1
         dataChunks: 2
       failureDomain: host
     metadataPool:
       deviceClass: hdd
       failureDomain: host
       replicated:
         size: 3
     gateway:
       allNodes: false
       instances: 3
       port: 80
       securePort: 8443
     preservePoolsOnDelete: false
   ```

3. Save the changes and exit the editor.
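After the changes are saved, you can check that the RGW pods start up. The label selector below follows the upstream Rook convention and is an assumption here; adjust it if your deployment uses different labels:

```bash
# Rook typically labels RGW pods with app=rook-ceph-rgw
kubectl -n rook-ceph get pods -l app=rook-ceph-rgw
```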