I. Ceph installation

1. Preparation

The environment for this installation:

ceph1 (cluster command distribution and control; also provides disk services)
CentOS 7.5
192.168.3.61

ceph2 (provides disk services)
CentOS 7.5
192.168.3.62

ceph3 (provides disk services)
CentOS 7.5
192.168.3.63

2. Edit the hosts file and add the following entries

# ceph1-master and ceph1-osd1
192.168.3.61 ceph1
# ceph2-osd2
192.168.3.62 ceph2
# ceph3-osd3
192.168.3.63 ceph3
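To confirm the entries resolve on every node, a quick check (the same pings appear in the command history at the end of section I) is:

ping -c 1 ceph1
ping -c 1 ceph2
ping -c 1 ceph3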

3. Ceph pushes commands to the nodes over SSH

First, configure the admin node for passwordless access to the storage nodes: generate a key pair and create the authorized_keys file (run on ceph1).

ssh-keygen -t rsa                          # generate the public/private key pair
touch /root/.ssh/authorized_keys
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 644 /root/.ssh/authorized_keys
# copy the key to the other hosts
ssh-copy-id ceph2
ssh-copy-id ceph3
# disable SELinux and the firewall on all nodes
egrep -v "^$|^#" /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
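Before continuing, it is worth verifying that passwordless SSH works from ceph1; each command should return a hostname without prompting for a password:

ssh ceph2 hostname
ssh ceph3 hostname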

4. Install ceph-deploy on each Ceph node

(This must be done on every node.)

Set up the package repositories (EPEL / Aliyun mirror) and install the Ceph release package:

yum install -y wget
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
rpm -Uvh http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Point ceph.repo at the Aliyun mirror:

sed -i 's#htt.*://download.ceph.com#http://mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo

Install ceph-deploy and python-setuptools:

yum update
yum install ceph-deploy --disablerepo=epel
yum install python-setuptools

5. Set up the NTP service

CentOS 7 ships chronyd as its default NTP daemon, and it is installed out of the box. Point ceph1 at the Aliyun NTP server, point the other machines at ceph1, and then start the service.

ceph1:

vi /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst
.......
allow 192.168.3.0/24

ceph2,ceph3:

vi /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph1 iburst
........

Restart the NTP service on all nodes:

systemctl restart chronyd
systemctl enable chronyd
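After the restart, time synchronization can be confirmed with chronyc (also visible in the command history later); ceph1 should list ntp1.aliyun.com as its source and ceph2/ceph3 should list ceph1:

chronyc sources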

6. Set up the Ceph MON nodes

Create the ceph-install directory on the ceph-master node (ceph1):

mkdir /ceph-install && cd /ceph-install

From the ceph-install directory on ceph-master, create the MON nodes:

ceph-deploy new ceph1 ceph2 ceph3

The newly created cluster is automatically named ceph. Once creation finishes, the ceph-install directory contains the Ceph configuration file, the monitor keyring, and a log file:

# ls .
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

The default ceph.conf looks like this:

[root@ceph1 ceph-install]# cat ceph.conf
[global]
fsid = 319d2cce-b087-4de9-bd4a-13edc7644abc
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.3.61,192.168.3.62,192.168.3.63
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Install with ceph-deploy on ceph1. Note that ceph-deploy now defaults to installing the N (Nautilus) release, so point the repo environment variables at the Aliyun Luminous mirror before installing:

export CEPH_DEPLOY_REPO_URL=http://mirrors.aliyun.com/ceph/rpm-luminous/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.aliyun.com/ceph/keys/release.asc

On ceph1, enter the ceph-install directory and install with ceph-deploy:

ceph-deploy install ceph1 ceph2 ceph3

Initialize the monitor nodes and gather the keys:

[root@ceph1 ceph-install]# ceph-deploy mon create-initial

After initialization completes, several new keyring files appear under ceph-install:

[root@ceph1 ceph-install]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring

Copy the keyrings from the ceph-install directory to /etc/ceph/, and copy them to the other nodes as well:

cp -p /ceph-install/*keyring /etc/ceph/
scp /ceph-install/*keyring root@ceph2:/etc/ceph/
scp /ceph-install/*keyring root@ceph3:/etc/ceph/

Check the cluster health:

# ceph -s
  cluster:
    id:     319d2cce-b087-4de9-bd4a-13edc7644abc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

7. Create the OSD storage nodes

Partition the NVMe device; run this on every node:

# nvme0n1 serves as the journal disk; split it into 4 partitions
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart primary 0% 25%
parted /dev/nvme0n1 mkpart primary 26% 50%
parted /dev/nvme0n1 mkpart primary 51% 75%
parted /dev/nvme0n1 mkpart primary 76% 100%

The result looks like this:

[root@ceph2 ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   200G  0 disk
├─sda1            8:1    0     1G  0 part /boot
└─sda2            8:2    0 153.9G  0 part
  ├─centos-root 253:0    0   150G  0 lvm  /
  └─centos-swap 253:1    0   3.9G  0 lvm  [SWAP]
sdb               8:16   0   300G  0 disk
sdc               8:32   0   300G  0 disk
sdd               8:48   0   300G  0 disk
nvme0n1         259:0    0   100G  0 disk
├─nvme0n1p1     259:1    0    25G  0 part
├─nvme0n1p2     259:2    0    24G  0 part
├─nvme0n1p3     259:3    0    24G  0 part
└─nvme0n1p4     259:4    0    24G  0 part

Create the OSDs; run the following on ceph1:

ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb --journal /dev/nvme0n1p1 ceph1
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal /dev/nvme0n1p2 ceph1
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdd --journal /dev/nvme0n1p3 ceph1
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb --journal /dev/nvme0n1p1 ceph2
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal /dev/nvme0n1p2 ceph2
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdd --journal /dev/nvme0n1p3 ceph2
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb --journal /dev/nvme0n1p1 ceph3
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal /dev/nvme0n1p2 ceph3
ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdd --journal /dev/nvme0n1p3 ceph3
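As a quick sanity check (not part of the original walkthrough), each filestore OSD keeps a journal link inside its data directory; on ceph1, for example, osd.0 should point at the first NVMe partition:

ls -l /var/lib/ceph/osd/ceph-0/journal   # should resolve to /dev/nvme0n1p1 (possibly via a by-partuuid path)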

Push the configuration file and the admin keyring to the admin node and the Ceph nodes:

ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3

Install the MGR daemons:

ceph-deploy mgr create ceph1 ceph2 ceph3

Query the cluster state:

# ceph -s
  cluster:
    id:     319d2cce-b087-4de9-bd4a-13edc7644abc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3
    mgr: ceph1(active), standbys: ceph3, ceph2
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   968MiB used, 2.63TiB / 2.64TiB avail
    pgs:

# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       2.63507 root default
-3       0.87836     host ceph1
 0   hdd 0.29279         osd.0      up  1.00000 1.00000
 1   hdd 0.29279         osd.1      up  1.00000 1.00000
 2   hdd 0.29279         osd.2      up  1.00000 1.00000
-5       0.87836     host ceph2
 3   hdd 0.29279         osd.3      up  1.00000 1.00000
 4   hdd 0.29279         osd.4      up  1.00000 1.00000
 5   hdd 0.29279         osd.5      up  1.00000 1.00000
-7       0.87836     host ceph3
 6   hdd 0.29279         osd.6      up  1.00000 1.00000
 7   hdd 0.29279         osd.7      up  1.00000 1.00000
 8   hdd 0.29279         osd.8      up  1.00000 1.00000

8. Full command history on ceph1

[root@ceph1 ceph-install]# history
    1  ip add
    2  poweroff
    3  ip add
    4  ip add
    5  cd /etc/sysconfig/network-scripts/
    6  ls -ls
    7  mv ifcfg-ens3 ifcfg-eth0
    8  vi ifcfg-eth0
    9  vi /etc/default/grub
   10  grub2-mkconfig -o /boot/grub2/grub.cfg
   11  reboot
   12  ip add
   13  poweroff
   14  ip add
   15  lsblk
   16  poweroff
   17  hostnamectl set-hostname ceph1
   18  exit
   19  ip add
   20  vi /etc/sysconfig/network-scripts/ifcfg-eth0
   21  ifdown eth0 && ifup eth0
   22  vi /etc/hosts
   23  scp /etc/hosts root@ceph2:/etc/hosts
   24  scp /etc/hosts root@ceph3:/etc/hosts
   25  ping ceph1
   26  ping ceph2
   27  ping ceph3
   28  ssh-keygen -t rsa
   29  touch /root/.ssh/authorized_keys
   30  cat id_rsa.pub > authorized_keys
   31  cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
   32  chmod 700 /root/.ssh
   33  chmod 644 /root/.ssh/authorized_keys
   34  ssh-copy-id ceph2
   35  ssh-copy-id ceph3
   36  ssh ceph1
   37  ssh ceph2
   38  ssh ceph3
   39  egrep -v "^$|^#" /etc/selinux/config
   40  systemctl stop firewalld
   41  systemctl disable firewalld
   42  lsblk
   43  poweroff
   44  exit
   45  cd /etc/yum.repos.d/
   46  ls
   47  cd ..
   48  vi /etc/yum.conf
   49  cd /etc/yum.repos.d/
   50  ls
   51  mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
   52  wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
   53  mv /etc/yum.repos.d/CentOS-Base.repo.bak /etc/yum.repos.d/CentOS-Base.repo
   54  yum install -y wget
   55  mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
   56  wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
   57  rpm -Uvh http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
   58  cat ceph.repo
   59  sed -i 's#htt.*://download.ceph.com#http://mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
   60  cat ceph.repo
   61  ssh ceph1
   62  ssh ceph2
   63  ssh ceph3
   64  systemctl status chronyd.service
   65  vi /etc/chrony.conf
   66  systemctl restart chronyd
   67  systemctl enable chronyd
   68  chronyc sources
   69  vi /etc/chrony.conf
   70  systemctl restart chronyd
   71  chronyc sources
   72  vi /etc/chrony.conf
   73  systemctl restart chronyd
   74  vi /etc/chrony.conf
   75  systemctl restart chronyd
   76  chronyc sources
   77  systemctl status firewalld
   78  yum install -y ansible
   79  ls
   80  cat CentOS-Base.repo
   81  vi epel.repo
   82  yum install -y ansible
   83  vi /etc/chrony.conf
   84  systemctl restart chronyd
   85  cd /root
   86  cd /etc/ansible/
   87  ls
   88  vi hosts
   89  ls
   90  cp -p hosts hosts.bak
   91  echo 1 > hosts
   92  vi hosts
   93  ansible -i hosts -m ping all
   94  ansible -i hosts -m shell -a 'date'
   95  ansible -i hosts -m shell -a 'date' all
   96  chronyc sources
   97  systemctl restart chronyd
   98  cd /etc/yum.repos.d/
   99  ls
  100  scp epel.repo root@:/etc/yum.repos.d/
  101  scp epel.repo root@ceph2:/etc/yum.repos.d/
  102  scp epel.repo root@ceph3:/etc/yum.repos.d/
  103  yum update
  104  yum install ceph-deploy --disablerepo=epel
  105  yum install python-setuptools -y
  106  mkdir /ceph-install && cd /ceph-install
  107  ceph-deploy new ceph1 ceph2 ceph3
  108  ls
  109  cat ceph.conf
  110  export CEPH_DEPLOY_REPO_URL=http://mirrors.aliyun.com/ceph/rpm-luminous/el7
  111  export CEPH_DEPLOY_GPG_URL=http://mirrors.aliyun.com/ceph/keys/release.asc
  112  ceph-deploy install ceph1 ceph2 ceph3
  113  ceph-deploy mon create-initial
  114  ls
  115  cp -p /ceph-install/*keyring /etc/ceph/
  116  scp /ceph-install/*keyring root@ceph2:/etc/ceph/
  117  scp /ceph-install/*keyring root@ceph3:/etc/ceph/
  118  ceph -s
  119  cat ceph.conf
  120  netstat -lpnt
  121  yum search netstat
  122  ss -lpnt
  123  yum install -y net-tools.x86_64
  124  yum search netstat
  125  netstat -lpnt
  126  poweroff
  127  ceph -s
  128  ls /
  129  cd /ceph-install/
  130  ls
  131  ceph-deploy osd create --help
  132  lsblk
  133  parted /dev/nvme0n1 mklabel gpt
  134  parted /dev/nvme0n1 mkpart primary 0% 25%
  135  lsblk
  136  cat /etc/fstab
  137  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb --journal /dev/nvme0n1p1 ceph1
  138  lsblk
  139  cd /var/lib/ceph/osd/ceph-0
  140  ls
  141  history
  142  ceph osd tree
  143  parted /dev/nvme0n1 mkpart primary 26% 50%
  144  parted /dev/nvme0n1 mkpart primary 51% 75%
  145  lsblk
  146  parted /dev/nvme0n1 mkpart primary 76% 100%
  147  lsblk
  148  cd /
  149  cd /ceph-install/
  150  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal /dev/nvme0n1p2 ceph1
  151  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdd --journal /dev/nvme0n1p3 ceph1
  152  ceph -s
  153  ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
  154  ceph-deploy mgr create ceph1 ceph2 ceph3
  155  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb --journal /dev/nvme0n1p1 ceph2
  156  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal /dev/nvme0n1p2 ceph2
  157  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdd --journal /dev/nvme0n1p3 ceph2
  158  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb --journal /dev/nvme0n1p1 ceph3
  159  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal /dev/nvme0n1p2 ceph3
  160  ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdd --journal /dev/nvme0n1p3 ceph3
  161  lsblk
  162  ceph -s
  163  ceph osd tree
  164  history

9. Tune ceph.conf and push it out

cat ceph.conf
[global]
fsid = 319d2cce-b087-4de9-bd4a-13edc7644abc
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.3.61,192.168.3.62,192.168.3.63
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon max pg per osd = 3000
mon_allow_pool_delete = true
rbd_default_features = 1
public_network = 192.168.3.0/24

[client]
rbd cache = true
rbd cache size = 536870912
rbd cache max dirty = 26738688
rbd cache max dirty age = 15

[osd]
filestore min sync interval = 10
filestore max sync interval = 15
filestore queue max ops = 25000
filestore queue max bytes = 1048576000
filestore queue committing max ops = 50000
filestore queue committing max bytes = 10485760000
filestore fd cache size = 1024
filestore op threads = 32
journal max write bytes = 1073714824
journal max write entries = 10000
journal queue max ops = 50000
journal queue max bytes = 10485760000

Push the configuration out:

ceph-deploy --overwrite-conf config push ceph{1..3}
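Pushing the file only refreshes /etc/ceph/ceph.conf on each node; the [osd] and [client] tuning above is read when the daemons start, so the OSDs have to be restarted to pick it up. A minimal sketch, assuming the stock ceph-osd.target systemd unit shipped with the Luminous packages:

# run on each of ceph1, ceph2 and ceph3
systemctl restart ceph-osd.target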

II. Integrating Ceph with community OpenStack

1. Integrate Cinder

(1) Install the Ceph client on the cinder-volume node

yum install -y ceph-common

Note: Glance needs the python-rbd package.
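A hedged example of installing that dependency on the Glance node (the python-rbd package comes from the same Ceph repository configured earlier):

yum install -y python-rbd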

Once the Ceph client is installed on the cinder-volume node, the /etc/ceph/ directory is created, containing a single file:

# ls -ls /etc/ceph/
total 4
4 -rwxr-xr-x. 1 root root 2910 Oct 31 2018 rbdmap

(2) Create the volumes pool in Ceph ahead of time, and create the cinder account with its authorizations.

ceph1:

# ceph osd pool create volumes 128
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes-cache, allow rwx pool=vms, allow rwx pool=vms-cache, allow rx pool=images, allow rx pool=images-cache'
[client.cinder]
        key = AQC0Ma5favxeBRAA4qIdM04lzflv7VF7guntJQ==
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQA5Mq5f7GGqORAAu7b2EyHTjuBDUm0e45QqSg==
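To confirm the two accounts and their capabilities were stored as intended, they can be read back with ceph auth (a verification step, not part of the original):

# ceph auth get client.cinder
# ceph auth get client.glance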

In /etc/ceph/ on ceph1, create two keyring files containing the account names and keys from the last two lines of each auth command above:

# cat ceph.client.cinder.keyring
[client.cinder]
        key = AQC0Ma5favxeBRAA4qIdM04lzflv7VF7guntJQ==
# cat ceph.client.glance.keyring
[client.glance]
        key = AQA5Mq5f7GGqORAAu7b2EyHTjuBDUm0e45QqSg==

Send the generated files to the nodes running glance-api and cinder-volume. glance-api usually runs on the controller node, and in this example cinder-volume also runs on the controller node.

# scp ceph.client.cinder.keyring root@192.168.3.10:/etc/ceph/
# scp ceph.client.glance.keyring root@192.168.3.10:/etc/ceph/

Change the ownership on the controller and storage nodes:

# chown glance:glance ceph.client.glance.keyring
# chown cinder:cinder ceph.client.cinder.keyring

Nodes running nova-compute need the client.cinder keyring, which must be copied to the compute nodes; in this example compute and storage share the same node, so that step is skipped. The nova-compute nodes also need to store the client.cinder user's key in libvirt, because libvirt uses it to access the Ceph cluster when a Ceph-backed Cinder volume is attached to an instance.

............

On the compute node running nova-compute, add the key to libvirt:

# uuidgen
26966ae6-d92f-43d4-b89f-c7327973df36
# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <uuid>26966ae6-d92f-43d4-b89f-c7327973df36</uuid>
>   <usage type='ceph'>
>     <name>client.cinder secret</name>
>   </usage>
> </secret>
> EOF
# virsh secret-define --file secret.xml
Secret 26966ae6-d92f-43d4-b89f-c7327973df36 created
# virsh secret-set-value --secret 26966ae6-d92f-43d4-b89f-c7327973df36 --base64 AQC0Ma5favxeBRAA4qIdM04lzflv7VF7guntJQ==
# AQC0Ma5favxeBRAA4qIdM04lzflv7VF7guntJQ== is the key from ceph.client.cinder.keyring
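To confirm that libvirt stored the secret, list the defined secrets; the UUID generated above should appear with a ceph usage type:

# virsh secret-list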

(3) Copy ceph.conf from the Ceph servers to /etc/ceph/ on the cinder-volume and compute nodes.

ceph1:

scp /etc/ceph/ceph.conf root@192.168.3.10:/etc/ceph/

In this example, the controller and compute roles share a single host.

(4) On the cinder-volume node, add the following to /etc/cinder/cinder.conf:

[DEFAULT]
enabled_backends = lvm,rbd-1
glance_api_version = 2

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-1
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = False
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 26966ae6-d92f-43d4-b89f-c7327973df36
report_discard_supported = True
image_upload_use_cinder_backend = False
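cinder-volume only reads its configuration at startup, so restart it after saving the file. A sketch assuming an RDO/CentOS-style deployment where the service is named openstack-cinder-volume:

systemctl restart openstack-cinder-volume.service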

(5) Create the Cinder volume type rbd-1

# cinder type-create rbd-1
# cinder type-key rbd-1 set volume_backend_name=rbd-1

Check that the type is in effect:

# openstack volume type list
+--------------------------------------+-------------+-----------+
| ID                                   | Name        | Is Public |
+--------------------------------------+-------------+-----------+
| 946d519d-a747-4c63-b083-437a3cb495e9 | rbd-1       | True      |
| 162afe89-b5ed-4ede-b90d-02a354b8253a | __DEFAULT__ | True      |
+--------------------------------------+-------------+-----------+

Create a Ceph-backed volume:

# openstack volume create disk1113 --type rbd-1 --size 1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-11-13T07:51:46.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 4bda7201-dbd5-4274-b375-619f16a7d402 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | disk1113                             |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | rbd-1                                |
| updated_at          | None                                 |
| user_id             | 607785c930c44219b6462b1e2bff7dcc     |
+---------------------+--------------------------------------+

[root@controller cinder]# openstack volume list
+--------------------------------------+----------+-----------+------+-------------+
| ID                                   | Name     | Status    | Size | Attached to |
+--------------------------------------+----------+-----------+------+-------------+
| 4bda7201-dbd5-4274-b375-619f16a7d402 | disk1113 | available | 1    |             |
+--------------------------------------+----------+-----------+------+-------------+
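Because the rbd-1 backend stores Cinder volumes as RBD images in the volumes pool, a quick cross-check on ceph1 is to list that pool; an image named volume-&lt;volume id&gt; should show up (the id below comes from the output above, and the volume-&lt;id&gt; naming is Cinder's usual convention):

# rbd ls volumes
# expected: volume-4bda7201-dbd5-4274-b375-619f16a7d402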

2. Integrate Glance

(1) Create the images pool in Ceph:

# ceph osd pool create images 128

(2) On the OpenStack controller node

Edit /etc/glance/glance-api.conf:

...........
[glance_store]
stores = file,http,rbd
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
rbd_store_ceph_conf = /etc/ceph/ceph.conf
............

Restart the Glance service:

systemctl restart openstack-glance-api.service

Upload an image:

# glance image-create --name cirros.raw --disk-format raw --container-format bare --file cirros.raw --store rbd --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ba3cd24377dde5dfdd58728894004abb                                                 |
| container_format | bare                                                                             |
| created_at       | 2020-11-16T06:15:49Z                                                             |
| disk_format      | raw                                                                              |
| id               | 1430bac6-3e89-476f-b971-a4ac2e3c4083                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | cirros.raw                                                                       |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143 |
|                  | b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e                                 |
| os_hidden        | False                                                                            |
| owner            | e2c91481258f417c83ffb4ea9d7a2339                                                 |
| protected        | False                                                                            |
| size             | 46137344                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2020-11-16T06:15:54Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

(3) Check on ceph1:

# rbd ls images
1430bac6-3e89-476f-b971-a4ac2e3c4083
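For more detail on how the image is stored (size, object layout), rbd info can be run against the image listed above:

# rbd info images/1430bac6-3e89-476f-b971-a4ac2e3c4083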