Ceph Storage
============

The Ceph deployment is not managed by StackHPC Ltd.

Working with Ceph deployment tool
=================================

.. ifconfig:: deployment['cephadm']

   .. include:: include/cephadm.rst

Operations
==========

.. include:: include/ceph_operations.rst

Troubleshooting
===============

.. include:: include/ceph_troubleshooting.rst
Replacing drive
---------------

See upstream documentation:
https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd

If a disk holding the DB and/or WAL fails, it is necessary to recreate
(using the replacement procedure above) all OSDs associated with that
disk, which is usually an NVMe drive. A single command identifies which
OSDs are tied to which physical disks:

.. code-block:: console

   ceph# ceph device ls

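The replacement itself follows the upstream procedure linked above: removing
an OSD with the ``--replace`` flag keeps its id reserved for the new drive.
A sketch, with OSD id ``42`` as a placeholder:

.. code-block:: console

   ceph# ceph orch osd rm 42 --replace
   ceph# ceph orch osd rm status

The second command can be re-run to watch the draining progress; once the
failed drive is swapped, cephadm redeploys the OSD under the same id.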
Host maintenance
----------------

https://docs.ceph.com/en/quincy/cephadm/host-management/#maintenance-mode
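As a sketch of the workflow described there (the hostname is a placeholder),
a host is placed into and taken out of maintenance mode with:

.. code-block:: console

   ceph# ceph orch host maintenance enter <hostname>
   ceph# ceph orch host maintenance exit <hostname>

Entering maintenance mode stops all cephadm-managed daemons on the host and
sets the relevant OSD flags, so it should be done one host at a time.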

Upgrading
---------

https://docs.ceph.com/en/quincy/cephadm/upgrade/
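As a sketch based on the upstream guide (the target version is a
placeholder), a cephadm-managed cluster orchestrates its own upgrade:

.. code-block:: console

   ceph# ceph orch upgrade start --ceph-version <version>
   ceph# ceph orch upgrade status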
Running Ceph commands
=====================

Ceph commands are usually run inside a ``cephadm shell`` utility container:

.. code-block:: console

   ceph# cephadm shell

Operating a cluster requires a keyring with admin access to be available for
Ceph commands. Cephadm copies such a keyring to the nodes carrying the
`_admin <https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels>`__
label, which is present on MON servers by default when using the
`StackHPC Cephadm collection <https://github.com/stackhpc/ansible-collection-cephadm>`__.
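To run Ceph commands from an additional host, the ``_admin`` label can be
added to it, for example (the hostname is a placeholder):

.. code-block:: console

   ceph# ceph orch host label add <hostname> _admin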

Adding a new storage node
=========================