Discussion:
Cinder-Ceph Multi-Backend Example
James Beedy
2018-04-14 01:25:24 UTC
Looking for examples that describe how to consume multiple ceph backends
using the cinder-ceph charm.

Thanks
alex barchiesi
2018-04-16 12:28:28 UTC
Hi James,
at GARR we recently tested Cinder multi-backend with the following idea in
mind: support three different backends (a rough Juju sketch follows the
list):

- a default one, for general-purpose disks like virtual machine boot
disks: replicated pool with replica factor equal to 3
- a reduced redundancy one: replicated pool with replica factor 2, which
should slightly improve latency
- a large capacity one: erasure-coded (possibly with a small frontend
replicated pool)
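
With the cinder-ceph charm, one way to express this is one named cinder-ceph
application per backend, each related to cinder and ceph-mon. Purely a
sketch, not our exact bundle: the application names are made up, and the
pool-type / ec-profile-* options only exist on newer charm revisions, so
check which options your revision actually exposes.

    # Sketch only: one cinder-ceph application per backend
    juju deploy cinder-ceph cinder-ceph                 # default, replica 3
    juju deploy cinder-ceph cinder-ceph-reduced \
        --config ceph-osd-replication-count=2           # reduced redundancy
    juju deploy cinder-ceph cinder-ceph-ec \
        --config pool-type=erasure-coded \
        --config ec-profile-k=4 --config ec-profile-m=2 # large capacity

    # Each backend needs the usual relations (shown for one of them):
    juju add-relation cinder-ceph-reduced:storage-backend cinder:storage-backend
    juju add-relation cinder-ceph-reduced:ceph ceph-mon:client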

Premise: we have a Juju-deployed OpenStack (spanning 3 geographical data centers).

We configured Cinder such that it allows selection between multiple “Volume
Types”, where each Volume Type points to a distinct Ceph pool within the
same Ceph cluster.
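
The mapping itself is just Volume Types whose volume_backend_name property
matches the backend name. A minimal sketch, assuming the hypothetical
application names above and that each backend is registered under its
application name (verify the backend names in your cinder.conf):

    # Minimal sketch of the Volume Type mapping (names are illustrative)
    openstack volume type create ceph-reduced
    openstack volume type set \
        --property volume_backend_name=cinder-ceph-reduced ceph-reduced
    openstack volume type create ceph-ec
    openstack volume type set \
        --property volume_backend_name=cinder-ceph-ec ceph-ec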

This is the simplest configuration, as it involves Cinder configuration
alone. Volumes created this way can later be attached to running instances,
but all instances still have their boot disk on the default pool
(cinder-ceph).
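
Day-to-day use then looks like this (a sketch with illustrative names,
using the hypothetical Volume Types from above):

    # Create a volume on the large-capacity backend and attach it
    openstack volume create --type ceph-ec --size 100 big-data-vol
    openstack server add volume my-instance big-data-vol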

We faced some issues, reported in detail here:
https://docs.google.com/document/d/1VSS28cvZBIOEzTOmVMWZ0o9FiVFkVLvu__ZOLxneMqQ/edit#

It would be interesting to find a way to select the pool also for the boot
disk of a VM.
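
One partial workaround (sketched below, not something we validated) is
boot-from-volume: if the root disk lives on a Cinder volume of the desired
type it ends up in that pool, though this still does not cover Nova
ephemeral disks. Names are illustrative:

    # Put the root disk on the reduced-redundancy pool via boot-from-volume
    openstack volume create --image bionic --type ceph-reduced --size 20 boot-vol
    openstack server create --volume boot-vol --flavor m1.small my-vm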

Any comment, idea, or "whatever" (also on the doc) is very much appreciated.

best Alex



Dr. Alex Barchiesi
____________________________________
Senior cloud architect
Responsible for Art-Science relationships

GARR CSD department

Rome GARR: +39 06 4962 2302
Lausanne EPFL: +41 (0) 774215266

linkedin: alex barchiesi
<http://www.linkedin.com/profile/view?id=111538190&goback=%2Enmp_*1_*1_*1_*1_*1_*1_*1_*1_*1_*1&trk=spm_pic>
_____________________________________
I started with nothing and I still have most of it.
James Beedy
2018-04-17 15:05:58 UTC
Alex,

Thanks for the response, and nice work on the write-up.

~James