For this guide, I created three virtual machines in VirtualBox with the following specifications:
• 2 CPUs
• 4GB of memory
• 64GB of system storage
• 8GB of additional storage
• CentOS 7 latest
• Network topology: cephcluster1 (172.20.20.101), cephcluster2 (172.20.20.102), and cephcluster3 (172.20.20.103) on the 172.20.20.0/24 network; a sample /etc/hosts sketch follows below.
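If name resolution is not already handled elsewhere, every node needs to be able to resolve the other hostnames. A minimal /etc/hosts sketch for each node, assuming the addresses above:
cat << EOM >> /etc/hosts
172.20.20.101 cephcluster1
172.20.20.102 cephcluster2
172.20.20.103 cephcluster3
EOM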
Let’s begin.
The following steps apply to all nodes (cephcluster1, cephcluster2, and cephcluster3).
1. Install Ceph Nautilus repository.
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
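To double-check that the repository is registered before updating, you can list the enabled repositories:
yum repolist enabled | grep -i ceph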
2. Update.
yum -y update
3. Create a Ceph user (cephadm) with password (cephadm).
useradd cephadm && echo "cephadm" | passwd --stdin cephadm
4. Allow the Ceph user (cephadm) to sudo without a password.
echo "cephadm ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
5. Set permissions on the Ceph user (cephadm) sudoers file.
chmod 0440 /etc/sudoers.d/cephadm
6. Reboot.
reboot
The following steps apply only to the cephcluster1 node.
7. Change to Ceph user (cephadm).
su cephadm
8. Generate an SSH key in the default location with no passphrase.
ssh-keygen
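A non-interactive equivalent, in case you want to script this step (a sketch, assuming an RSA key at the default path):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa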
9. Copy the generated SSH key to the other nodes (cephcluster2 and cephcluster3), entering the Ceph user’s password (cephadm) when prompted.
ssh-copy-id cephadm@cephcluster2
ssh-copy-id cephadm@cephcluster3
10. Make sure that SSH to the other nodes (cephcluster2 and cephcluster3) no longer asks for a password.
ssh cephadm@cephcluster2
ssh cephadm@cephcluster3
11. Create SSH config
vi ~/.ssh/config
with the following configuration.
Host cephcluster2
    Hostname cephcluster2
    User cephadm
Host cephcluster3
    Hostname cephcluster3
    User cephadm
12. Set permissions on the SSH config.
chmod 644 ~/.ssh/config
13. Install ceph-deploy.
sudo yum -y install ceph-deploy python-setuptools
14. Create a Ceph working directory and change into it.
mkdir /home/cephadm/cephcluster; cd /home/cephadm/cephcluster
15. Deploy new Ceph cluster.
ceph-deploy new cephcluster1
16. Set Ceph public network
vi ceph.conf
and insert the following line at the bottom of the [global] section.
public network = 172.20.20.0/24
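For reference, the resulting ceph.conf should look roughly like this (fsid, mon_initial_members, and mon_host are generated by ceph-deploy; the fsid below is a placeholder):
[global]
fsid = <generated-by-ceph-deploy>
mon_initial_members = cephcluster1
mon_host = 172.20.20.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 172.20.20.0/24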
17. Install Ceph on all nodes (cephcluster1, cephcluster2, and cephcluster3).
ceph-deploy install --release nautilus cephcluster1
If an error occurs, simply retry the command. Once it’s done, continue to the next node.
ceph-deploy install --release nautilus cephcluster2
ceph-deploy install --release nautilus cephcluster3
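Optionally confirm that every node ended up on the same Nautilus release:
ceph --version
ssh cephadm@cephcluster2 ceph --version
ssh cephadm@cephcluster3 ceph --version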
18. Deploy Ceph monitor on cephcluster1.
ceph-deploy mon create-initial
19. Deploy Ceph admin to all nodes (cephcluster1, cephcluster2, and cephcluster3).
ceph-deploy admin cephcluster1 cephcluster2 cephcluster3
20. Deploy Ceph manager to all nodes (cephcluster1, cephcluster2, and cephcluster3).
ceph-deploy mgr create cephcluster1 cephcluster2 cephcluster3
21. Deploy Ceph OSDs on the additional storage device (/dev/sdb) on all nodes (cephcluster1, cephcluster2, and cephcluster3).
ceph-deploy osd create --data /dev/sdb cephcluster1
Once it’s done, continue to the next node.
ceph-deploy osd create --data /dev/sdb cephcluster2
ceph-deploy osd create --data /dev/sdb cephcluster3
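Once all three OSDs are created, you can optionally check that they are up and in:
sudo ceph osd tree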
22. Check Ceph health status for the first time.
sudo ceph -s
23. Deploy Ceph metadata server to all nodes (cephcluster1, cephcluster2, and cephcluster3).
ceph-deploy mds create cephcluster1 cephcluster2 cephcluster3
24. Deploy Ceph monitors to the other nodes (cephcluster2 and cephcluster3).
ceph-deploy mon add cephcluster2
Once it’s done, continue to the next node.
ceph-deploy mon add cephcluster3
25. Check Ceph quorum status.
sudo ceph quorum_status --format json-pretty
26. Exit from the Ceph user (cephadm) and return to root.
exit
27. Create Ceph pools and file system.
ceph osd pool create cephclusterfs_data 128
ceph osd pool create cephclusterfs_metadata 64
ceph fs new cephclusterfs cephclusterfs_metadata cephclusterfs_data
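Optionally verify that the pools and file system exist and that a metadata server has become active:
ceph osd pool ls
ceph fs ls
ceph mds stat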
28. Obtain the Ceph client key from the admin keyring and copy it (the value after key =, not the whole file).
more /home/cephadm/cephcluster/ceph.client.admin.keyring
29. Create a Ceph client key file and paste the key value obtained above.
vi cephclusterfs.secret
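As an alternative to copy-pasting by hand, a one-liner sketch that extracts just the key value (assuming the standard keyring format with a line such as key = AQ...):
awk '/key = / {print $3}' /home/cephadm/cephcluster/ceph.client.admin.keyring > cephclusterfs.secret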
30. Create a mount directory for the Ceph file system.
mkdir /data
31. Mount the Ceph file system to the directory using the Ceph client key file, listing all three nodes as monitor addresses (monitors run on every node, which provides redundancy)
mount -t ceph 172.20.20.101,172.20.20.102,172.20.20.103:/ /data -o name=admin,secretfile=cephclusterfs.secret
and verify the mounted Ceph file system.
df -h
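If you want the mount to survive reboots, an /etc/fstab entry could look like this (a sketch, assuming the secret file is stored at /root/cephclusterfs.secret; adjust the path to wherever you created it):
172.20.20.101,172.20.20.102,172.20.20.103:/ /data ceph name=admin,secretfile=/root/cephclusterfs.secret,noatime,_netdev 0 0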
32. Repeat steps 28 through 31 on the other nodes (cephcluster2 and cephcluster3) to mount the file system there as well.
The following steps apply to all nodes (cephcluster1, cephcluster2, and cephcluster3).
33. Install Ceph dashboard.
yum -y install ceph-mgr-dashboard
The following steps apply only to the cephcluster1 node.
34. Enable the Ceph dashboard on all nodes over insecure HTTP and create a user with the administrator role.
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/cephcluster1/server_addr 172.20.20.101
ceph config set mgr mgr/dashboard/cephcluster2/server_addr 172.20.20.102
ceph config set mgr mgr/dashboard/cephcluster3/server_addr 172.20.20.103
ceph dashboard ac-user-create cephadm cephadm administrator
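You can confirm which address the dashboard is being served on with:
ceph mgr services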
The following steps apply only to the VirtualBox host.
35. Add hostname entries for all nodes (cephcluster1, cephcluster2, and cephcluster3) to the hosts file on the VirtualBox host (a sketch for doing so follows the entries below).
172.20.20.101 cephcluster1
172.20.20.102 cephcluster2
172.20.20.103 cephcluster3
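On a Linux or macOS VirtualBox host this can be done by appending the entries to /etc/hosts (a sketch; on a Windows host edit C:\Windows\System32\drivers\etc\hosts instead):
cat << EOM | sudo tee -a /etc/hosts
172.20.20.101 cephcluster1
172.20.20.102 cephcluster2
172.20.20.103 cephcluster3
EOM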
36. Access the Ceph dashboard (http://cephcluster1:8080) from the VirtualBox host with a browser and log in with the previously created user (cephadm).
37. Ceph is now ready to use as basic central storage.