
Ceph Object Storage Service Setup

Ceph object storage setup (I did this directly on node 113):

 

# Install ceph-radosgw

[ceph-admin@ceph113 my-cluster]$ sudo yum -y install ceph-radosgw    [already installed on this server, so I skipped it; I will note every step that was done previously, but on a fresh machine you cannot skip them]
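If this were a brand-new node with no gateway instance yet (nothing listening on port 7480 in the next step), the RGW instance itself would still need to be created; with ceph-deploy that is normally done from the deploy directory, roughly like this (the node name is just my example):

[ceph-admin@ceph113 my-cluster]$ ceph-deploy rgw create ceph113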

 

# Check whether port 7480 is listening

[ceph-admin@ceph113 my-cluster]$ netstat -tunlp | grep 7480

(No info could be read for "-p": geteuid()=1000 but you should be root.)

tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN
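The warning about "-p" only means the owning process name cannot be shown without root; rerunning with sudo should show that the listener is the radosgw process, roughly like this (the PID is illustrative):

[ceph-admin@ceph113 my-cluster]$ sudo netstat -tunlp | grep 7480
tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      12345/radosgw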

# Create the pools

[ceph-admin@ceph113 my-cluster]$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/pool

[ceph-admin@ceph113 my-cluster]$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/create_pool.sh
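Before running anything it is worth a quick look at what was downloaded, and the script will not have the execute bit set by wget, so:

[ceph-admin@ceph113 my-cluster]$ cat ./pool                 # should be a plain list of RGW pool names, one per line
[ceph-admin@ceph113 my-cluster]$ chmod +x ./create_pool.sh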

[ceph-admin@ceph113 my-cluster]$ vi ./create_pool.sh

// Edit it as follows (the PG_NUM, PGP_NUM and SIZE values are the parts that were changed):

#!/bin/bash

PG_NUM=250

PGP_NUM=250

SIZE=3

for i in `cat ./pool`

        do

        ceph osd pool create $i $PG_NUM

        ceph osd pool set $i size $SIZE

        done

for i in `cat ./pool`

        do

        ceph osd pool set $i pgp_num $PGP_NUM

        done

[ceph-admin@ceph113 my-cluster]$ ./create_pool.sh
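To see what was actually created (and whether the note below applies), the pools with their size and pg_num can be listed:

[ceph-admin@ceph113 my-cluster]$ ceph osd pool ls detail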

 

Note: if this step reports an error, i.e. the pg_num allocation is rejected as unreasonable (typically because the total PG count would be too high), you need to delete the pools and re-create them with adjusted values, as follows (skip the bracketed section if there was no error):

[

    [ceph-admin@ceph113 my-cluster]$ sudo vi /etc/ceph/ceph.conf

Add the following line and save:

mon_allow_pool_delete = true

    [ceph-admin@ceph113 my-cluster]$ sudo systemctl restart ceph-mon.target
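On Luminous the same option can usually also be injected at runtime, without editing ceph.conf or restarting the monitors (shown only as an alternative to the two steps above):

[ceph-admin@ceph113 my-cluster]$ ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'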

    [ceph-admin@ceph113 my-cluster]$ ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

    pool 'rbd' removed    // deleted successfully

    [ceph-admin@ceph113 my-cluster]$ cp create_pool.sh delete_pool.sh

[ceph-admin@ceph113 my-cluster]$ vi delete_pool.sh

Edit it so that each pool is deleted instead of created, then save (a rough sketch follows the placeholder):

[image: modified delete_pool.sh; original screenshot not preserved]
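A minimal sketch of what the edited delete_pool.sh presumably looks like, derived from create_pool.sh above (ceph osd pool delete needs the pool name given twice plus the confirmation flag, and mon_allow_pool_delete must already be enabled as shown earlier):

#!/bin/bash
for i in `cat ./pool`
        do
        # remove each pool listed in ./pool; the name is repeated to confirm the deletion
        ceph osd pool delete $i $i --yes-i-really-really-mean-it
        done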

[ceph-admin@ceph113 my-cluster]$ ./delete_pool.sh

// Deletion finished. The pools that had been created successfully are now removed; pools that were never created will just report that they do not exist, which can be ignored.

 

// The following re-creates the pools:

[ceph-admin@ceph113 my-cluster]$ ceph osd pool create rbd 512    // 512 here is just for the demo; set it according to your actual cluster

[ceph-admin@ceph113 my-cluster]$ vi create_pool.sh

// set PG_NUM and PGP_NUM according to your actual cluster

[ceph-admin@ceph113 my-cluster]$ ./create_pool.sh

    // re-creation complete

]
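As a rough rule of thumb for choosing PG_NUM (a guideline only; the official PG calculator is the authoritative source): total PGs across all pools ≈ OSDs × 100 / replica size, then divided among the pools and rounded to a power of two. For this cluster:

    total PGs ≈ 36 OSDs × 100 / 3 replicas ≈ 1200
    per pool  ≈ 1200 / 18 pools ≈ 66, rounded to the nearest power of two: 64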

 

# Test that the RGW keyring can access the Ceph cluster

[ceph-admin@ceph113 my-cluster]$ sudo cp /var/lib/ceph/radosgw/ceph-rgw.ceph113/keyring ./           // make sure this path is correct; you can open another terminal to check it [ceph113 is my hostname; it will differ on each machine]
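Printing the copied keyring confirms the copy worked and shows the exact client name to pass to --name in the next command (key value redacted):

[ceph-admin@ceph113 my-cluster]$ cat ./keyring
[client.rgw.ceph113]
        key = AQ...==        (redacted)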

[ceph-admin@ceph113 my-cluster]$ ceph -s -k ./keyring --name client.rgw.ceph113

  cluster:

    id:     6c06d762-3762-462f-8219-e66c1f953025

    health: HEALTH_OK

 

  services:

    mon: 3 daemons, quorum ceph113,ceph114,ceph115

    mgr: ceph113(active), standbys: ceph115, ceph114

    osd: 36 osds: 36 up, 36 in

    rgw: 3 daemons active

 

  data:

    pools:   18 pools, 1236 pgs

    objects: 183 objects, 0B

    usage:   37.1GiB used, 196TiB / 196TiB avail

    pgs:     1236 active+clean
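As a final sanity check of the gateway itself (not part of the original steps), an anonymous HTTP request to port 7480 should return an empty S3 bucket-listing XML, something like:

[ceph-admin@ceph113 my-cluster]$ curl http://127.0.0.1:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">...</ListAllMyBucketsResult>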

 


