Description
What should the feature do:
In the existing cluster, I deployed Ceph mostly with default settings. All devices are SAS (device class hdd).
For performance reasons, I now need to use SSDs for the MDS metadata pool.
I expect the newly added SSD OSDs to be used only for the CephFS metadata pool. So I tried setting the device class of the metadata pool's CRUSH rule to ssd, but some PGs of the other pools are then transferred to the SSDs, because those pools still use the default rule, which spans all devices including the new SSD OSDs. If I instead pin the other pools to the hdd device class, all of their PGs get rebalanced, since switching to a class-aware rule changes the CRUSH mapping even though the underlying device set is the same. That is unacceptable in a large cluster.
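Roughly, the class-aware approach I tried looks like this (the rule and pool names are placeholders, not the actual names in my cluster):

```sh
# Class-aware rule for the metadata pool; the new SSD OSDs
# still receive PGs from pools using the default rule
ceph osd crush rule create-replicated meta-ssd default host ssd
ceph osd pool set cephfs-metadata crush_rule meta-ssd

# Pinning the remaining pools to the hdd class the same way is
# what triggers the cluster-wide rebalance
ceph osd crush rule create-replicated data-hdd default host hdd
ceph osd pool set cephfs-data crush_rule data-hdd
```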
As a workaround, I operated on the cluster manually: create a new CRUSH root, move the SSD OSDs under that root, and then point the CephFS metadata pool's CRUSH rule at it. This lets the CephFS metadata pool use the SSDs while the other data pools remain unchanged.
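A minimal sketch of that manual workaround, assuming a single SSD OSD (osd.12) on host node1; the bucket, rule, and pool names, the OSD id, and the weight are all placeholders:

```sh
# Create a separate CRUSH root and a host bucket under it
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd root=ssd-root

# Relocate the SSD OSD into the new tree
# (the CRUSH weight should roughly match the device size in TiB)
ceph osd crush set osd.12 0.5 root=ssd-root host=node1-ssd

# Create a rule confined to the new root and point the metadata pool at it
ceph osd crush rule create-replicated cephfs-meta-ssd ssd-root host
ceph osd pool set cephfs-metadata crush_rule cephfs-meta-ssd
```

Because the rule is scoped to ssd-root, no device class needs to be set on it, and the existing pools under the default root are not remapped.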
Based on the above, we need a way to automatically set up the OSD CRUSH root for each device, so that specific pools can be isolated onto specific devices without rebalancing the rest of the cluster.