ceph-deploy osd activate fails when activating an OSD

[root@node1 ceph]# ceph-deploy osd prepare  node2:/data/
[root@node1 ceph]# ceph-deploy osd activate node2:/data/
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd activate node2:/data/
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2137830>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x2129e60>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node2', '/data/', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/data/:
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] activating host node2 disk /data/
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /data/
[node2][WARNIN] /usr/lib/python2.7/site-packages/ceph_disk/main.py:5653: UserWarning: 
[node2][WARNIN] *******************************************************************************
[node2][WARNIN] This tool is now deprecated in favor of ceph-volume.
[node2][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[node2][WARNIN] 
[node2][WARNIN]     http://docs.ceph.com/docs/master/ceph-volume/#migrating
[node2][WARNIN] 
[node2][WARNIN] *******************************************************************************
[node2][WARNIN] 
[node2][WARNIN]   warnings.warn(DEPRECATION_WARNING)
[node2][WARNIN] main_activate: path = /data/
[node2][WARNIN] activate: Cluster uuid is 97e33890-66ea-400e-be50-066dd771107c
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] activate: Cluster name is ceph
[node2][WARNIN] activate: OSD uuid is f35d7c75-82fb-49ea-9c9e-f4d83971b63f
[node2][WARNIN] allocate_osd_id: Allocating OSD id...
[node2][WARNIN] command: Running command: /usr/bin/ceph-authtool --gen-print-key
[node2][WARNIN] __init__: stderr 
[node2][WARNIN] command_with_stdin: Running command with stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f35d7c75-82fb-49ea-9c9e-f4d83971b63f
[node2][WARNIN] command_with_stdin: 1
[node2][WARNIN] 
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph-authtool /data/keyring --create-keyring --name osd.1 --add-key AQBdKzJa5/QgKxAAezK6kpWDzIspdqHxPNztBg==
[node2][DEBUG ] creating /data/keyring
[node2][DEBUG ] added entity osd.1 auth auth(auid = 18446744073709551615 key=AQBdKzJa5/QgKxAAezK6kpWDzIspdqHxPNztBg== with 0 caps)
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /data/keyring
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /data/keyring
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /data/whoami.10891.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /data/whoami.10891.tmp
[node2][WARNIN] activate: OSD id is 1
[node2][WARNIN] activate: Initializing OSD...
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /data/activate.monmap
[node2][WARNIN] got monmap epoch 1
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 1 --monmap /data/activate.monmap --osd-data /data/ --osd-uuid f35d7c75-82fb-49ea-9c9e-f4d83971b63f --setuser ceph --setgroup ceph
[node2][WARNIN] /usr/lib/python2.7/site-packages/ceph_disk/main.py:5677: UserWarning: 
[node2][WARNIN] *******************************************************************************
[node2][WARNIN] This tool is now deprecated in favor of ceph-volume.
[node2][WARNIN] It is recommended to use ceph-volume for OSD deployments. For details see:
[node2][WARNIN] 
[node2][WARNIN]     http://docs.ceph.com/docs/master/ceph-volume/#migrating
[node2][WARNIN] 
[node2][WARNIN] *******************************************************************************
[node2][WARNIN] 
[node2][WARNIN]   warnings.warn(DEPRECATION_WARNING)
[node2][WARNIN] Traceback (most recent call last):
[node2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[node2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5736, in run
[node2][WARNIN]     main(sys.argv[1:])
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5674, in main
[node2][WARNIN]     args.func(args)
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3783, in main_activate
[node2][WARNIN]     init=args.mark_init,
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3595, in activate_dir
[node2][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3701, in activate
[node2][WARNIN]     keyring=keyring,
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3153, in mkfs
[node2][WARNIN]     '--setgroup', get_ceph_group(),
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 570, in command_check_call
[node2][WARNIN]     return subprocess.check_call(arguments)
[node2][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[node2][WARNIN]     raise CalledProcessError(retcode, cmd)
[node2][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '-i', u'1', '--monmap', '/data/activate.monmap', '--osd-data', '/data/', '--osd-uuid', u'f35d7c75-82fb-49ea-9c9e-f4d83971b63f', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 1
[node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /data/

[root@node1 ceph]#
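
The traceback only shows that the mkfs step exited with status 1. The actual failing command is spelled out in the CalledProcessError line, so re-running it by hand on node2 prints the underlying error straight to the terminal, and checking the ownership of the data directory reveals the cause (a diagnostic sketch reusing the paths and OSD id from the log above):

[root@node2 ~]# ls -ld /data/
[root@node2 ~]# /usr/bin/ceph-osd --cluster ceph --mkfs -i 1 --monmap /data/activate.monmap --osd-data /data/ --osd-uuid f35d7c75-82fb-49ea-9c9e-f4d83971b63f --setuser ceph --setgroup ceph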

I was using a plain directory as the OSD backing store. The mkfs step runs as the ceph user (note the --setuser ceph --setgroup ceph arguments above), but the /data/ directory was owned by root, so ceph-osd could not write to it. Changing the owner to the ceph user fixed it:

[root@node2 ~]# chown ceph:ceph /data/
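
If earlier failed runs left root-owned files inside the directory, a recursive chown -R ceph:ceph /data/ covers those as well. With the ownership fixed, re-running the activate step from the admin node should complete, and the new OSD should come up and join the cluster (a verification sketch; exact output depends on your cluster):

[root@node1 ceph]# ceph-deploy osd activate node2:/data/
[root@node2 ~]# systemctl status ceph-osd@1
[root@node1 ceph]# ceph osd tree

As the warning in the log points out, ceph-disk is deprecated in favor of ceph-volume; newer ceph-deploy releases deploy OSDs with ceph-volume, which avoids this kind of manual permission fixing.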
