Yesterday I attended the Ceph science meeting. I noticed the meeting advertised on the ceph-users mailing list and decided to go along to see what it was about. Ceph sysadmins from various institutions across the world attended to exchange information and discuss issues arising with scientific workloads. The meeting was very interesting. Our Ceph installation is comparatively small, but there was some interest in how we use CephFS for our general storage.

There was also some discussion of the "clients failing to respond to cache pressure" warning, which we are seeing too. The most recent release of Nautilus seems to add a new parameter to deal with this issue by starving misbehaving clients of new caps. I am not sure we would want to use that for the Ganesha NFS file server.

Another topic was cephadm, the new tool used by the next major version of Ceph to manage the cluster. It seems Ceph will be containerised to make managing the system easier and, I guess, more uniform. We will need to consider how this plays with our LCFG-managed systems. Physics is looking into using a newer version of Ceph on LCFG Ubuntu, so we can see what they come up with.
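For reference, Nautilus exposes its MDS cap-recall behaviour through runtime config options. The sketch below shows how one of these might be inspected and adjusted; which exact option the meeting was referring to is my guess (mds_recall_max_caps is one of the recall throttles in Nautilus), and the value shown is only illustrative, so check the release notes before changing anything in production.

```shell
# Sketch only: inspect and adjust an MDS cap-recall throttle on a
# Nautilus cluster. The option name is my best guess at the one
# discussed in the meeting; the value 30000 is purely illustrative.
ceph config get mds mds_recall_max_caps
ceph config set mds mds_recall_max_caps 30000

# Revert to the default if it causes trouble for clients such as the
# Ganesha NFS server:
ceph config rm mds mds_recall_max_caps
```

Since Ganesha holds a large working set of caps on behalf of its NFS clients, any throttle that penalises cap-hungry clients would need testing against it first.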
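To give a flavour of the cephadm workflow mentioned above, this is roughly how the upstream documentation describes bootstrapping a containerised cluster; the IP address is a placeholder, and this is a sketch of the documented commands rather than anything we have tried on our LCFG systems.

```shell
# Sketch only: bootstrapping a containerised cluster with cephadm,
# per the upstream docs. 192.0.2.10 is a placeholder monitor IP.
cephadm bootstrap --mon-ip 192.0.2.10

# Daemons then run as containers; the orchestrator reports on them:
ceph orch ps        # list the containerised daemons
ceph orch host ls   # list hosts under cephadm management
```

How this container-based model coexists with LCFG's file-and-package management is the open question for us.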