Ceph Storage
============

The Ceph deployment is not managed by StackHPC Ltd.

Working with Ceph deployment tool
=================================

.. ifconfig:: deployment['cephadm']

   .. include:: include/cephadm.rst

Operations
==========

.. include:: include/ceph_operations.rst

Troubleshooting
===============

.. include:: include/ceph_troubleshooting.rst

.. Contents of the new include/ceph_operations.rst file:

Replacing a drive
-----------------

See the upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd

If the disk holding the DB and/or WAL fails (usually an NVMe drive), it is
necessary to recreate, using the replacement procedure above, all OSDs that
are associated with that disk. The following command shows which OSDs are
tied to which physical disks:

.. code-block:: console

   ceph# ceph device ls

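As a rough sketch of the upstream replacement procedure, the commands below
remove a failed OSD while preserving its ID for the replacement disk (the OSD
ID ``0`` is an example; check flags against your Ceph release):

.. code-block:: console

   ceph# ceph orch osd rm 0 --replace --zap
   ceph# ceph orch osd rm status

With ``--replace``, the OSD is marked ``destroyed`` rather than permanently
removed, so cephadm can redeploy an OSD with the same ID once the new disk is
inserted and matched by an OSD service specification.
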
Host maintenance
----------------

See the upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/host-management/#maintenance-mode

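For example, to stop all Ceph daemons on a host before planned work and
restart them afterwards (the hostname ``storage-0`` is a placeholder):

.. code-block:: console

   ceph# ceph orch host maintenance enter storage-0
   ceph# ceph orch host maintenance exit storage-0
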
Upgrading
---------

See the upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/upgrade/
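
For example, to begin an orchestrated upgrade and monitor its progress (the
version shown is illustrative; pick the release you are targeting):

.. code-block:: console

   ceph# ceph orch upgrade start --ceph-version 17.2.6
   ceph# ceph orch upgrade status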