The ceph-ansible playbooks include a switch-from-non-containerized-to-containerized.yml.

Is there any advantage to containerizing Ceph?
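
For anyone searching for it, a minimal invocation of that playbook might look like the sketch below. The checkout path, inventory file name, and the infrastructure-playbooks/ location are assumptions based on a stock ceph-ansible checkout, so adjust to your environment:

    # run from the root of a ceph-ansible checkout
    # (both paths here are assumptions, not fixed locations)
    cd /usr/share/ceph-ansible
    ansible-playbook -i hosts \
        infrastructure-playbooks/switch-from-non-containerized-to-containerized.yml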

Noticed that ceph-deploy install sets up repositories, which is nice in many cases.

But when running Spacewalk with the proper channels already in place, that is not cool at all.

Is the idea to edit the scripts to suit one's installation needs?
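
If memory serves, you shouldn't need to edit anything for the repository case: ceph-deploy has a --no-adjust-repos flag that skips the repo setup and installs from whatever channels the host already has attached. A sketch, where node1/node2/node3 are placeholder hostnames:

    # install Ceph packages without touching repo configuration,
    # so the existing Spacewalk channels stay in charge
    ceph-deploy install --no-adjust-repos node1 node2 node3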

I also have another question: what about multi-tenant use of the monitor machines?

Allen Samuels talking about Bluestore

Expect a great performance improvement!

https://www.youtube.com/watch?v=bFGoxOfMnuw

If you are in Berlin and interested in +Ceph, particularly the new features and functions in CephFS, do not miss Jan's talk - and the group discussion and Q&A - on Nov 28th!

I'll be there too. (I hope that doesn't scare anyone off.)

(Please don't be deterred if the meet-up appears crowded. Folks tend to drop out before the event starts.)

If you are running a Ceph cluster (be it in production or for testing purposes), we'd be interested in your feedback on using openATTIC for monitoring and managing it. Thanks in advance!

What happens when someone accidentally shuts the switch down?

We're making good progress on adding Ceph support to openATTIC. Feedback is very welcome!
Version 2.0.13 has been released, including a number of improvements and enhancements to the Ceph management and monitoring capabilities. See the release notes for additional details.

The recording of my SUSE MOST webinar about managing Ceph and storage with +openATTIC is available on demand now. I ran out of time and could not address the questions from the audience, but I have now posted a blog post answering them.

Dear all, our Ceph cluster is in very bad condition. It is showing 14% degraded, too many OSDs are in the full state, and a couple more are about to reach full.
388 osds: 238 up and 384 in

Can anyone assist in bringing the system back to a stable state, with all OSDs up and running?
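
Hard to say much without logs, but a first triage pass usually looks something like the following sketch. The OSD id is a placeholder, and on a cluster this full, be careful with anything that moves data around:

    # get the full picture first
    ceph -s
    ceph health detail
    ceph osd tree | grep -i down

    # on the affected host, try restarting a down OSD
    # (id 12 is a placeholder; this assumes a systemd-based install)
    systemctl start ceph-osd@12

    # if some OSDs are much fuller than others, rebalance by utilization
    ceph osd reweight-by-utilization

If an OSD refuses to start, its log under /var/log/ceph/ is the next place to look.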

Hello, everyone! I have a really big problem with our Ceph cluster.

We run Ceph to back our cloud block storage. Last Friday, it seems all three Ceph monitors shut down. That was a problem, but not a disaster. The disaster is that my naive colleague removed all three monitors' cluster and PG map database files in /var/lib/ceph/mon/HostName/store.db. He thought these files were cache files; I think he didn't know English. I don't know Ceph well, but even I know the words "store.db". He has been fired for this, but now we have to clean up the mess.

I am really new to Ceph. Is it possible to restore the data saved in the cluster? I googled it, and it seems nobody has had a problem like this before, so I've come here for help. Any suggestion is welcome.
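
If all your OSDs and their data are intact, the monitor store can usually be rebuilt from the copies of the cluster maps the OSDs carry, using ceph-objectstore-tool and ceph-monstore-tool. This follows the documented monitor disaster-recovery procedure; the paths below are placeholders, the OSDs must be stopped while the tool runs, and stores collected from every OSD host need to be merged before the rebuild step:

    # on each OSD host, with the OSDs stopped, collect the map data
    systemctl stop ceph-osd.target
    mkdir /tmp/mon-store
    for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path "$osd" \
            --op update-mon-db --mon-store-path /tmp/mon-store
    done

    # then, with the stores from all hosts merged into one directory,
    # rebuild the monitor store (requires the admin keyring)
    ceph-monstore-tool /tmp/mon-store rebuild -- \
        --keyring /etc/ceph/ceph.client.admin.keyring

Be aware the rebuilt store does not recover everything (some auth and MDS metadata can be lost), so read the monitor troubleshooting section of the Ceph docs carefully before copying the result into /var/lib/ceph/mon/, and avoid writing anything new to the cluster until the monitors are back.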