Notes on Problems Encountered Installing an HDP Cluster on CentOS

This post records the problems I ran into while installing an HDP cluster.
Readers of my earlier posts may know that I previously installed Ambari 2.6.2 with HDP 2.6.5 and hit all kinds of problems at this stage. My lead said the versions were too old, so I reinstalled from scratch with the latest Ambari 2.7.3 and HDP 3.1.0 on CentOS 7.2.
The earlier installation steps are largely the same as before, so I won't repeat them; even so, this stage still produced various errors, which I record below.

Problem 1: Ambari automatically creates a repo file under my /etc/yum.repos.d directory
As the log below shows, Ambari creates a repo file, ambari-hdp-6.repo, under /etc/yum.repos.d instead of using the local repository I had configured. Worse still, the baseurl it writes is empty, which makes yum unusable and causes the installation to fail.
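A quick way to confirm this on an affected host is to inspect the file Ambari generated (a manual check of mine, not part of the original log; the file name matches the one in the log below):

# Show the repo file Ambari wrote; note the empty baseurl= lines
cat /etc/yum.repos.d/ambari-hdp-6.repo
# Any yum command will now fail with "Cannot find a valid baseurl"
yum repolist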

stderr: 
2019-01-03 23:48:43,475 - Reporting component version failed
Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in 
    BeforeInstallHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 33, in hook
    install_packages()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
    retry_count=params.agent_stack_retry_count)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
    self._pkg_manager.install_package(package_name, self.__create_context())
  File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
    shell.repository_manager_executor(cmd, self.properties, context)
  File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
    raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install hdp-select', exited with code '1', message: '
 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: HDP-3.1-repo-6
'
 stdout:
2019-01-03 23:48:42,100 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2019-01-03 23:48:42,107 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-01-03 23:48:42,110 - Group['kms'] {}
2019-01-03 23:48:42,111 - Group['livy'] {}
2019-01-03 23:48:42,111 - Group['spark'] {}
2019-01-03 23:48:42,112 - Group['ranger'] {}
2019-01-03 23:48:42,112 - Group['hdfs'] {}
2019-01-03 23:48:42,112 - Group['zeppelin'] {}
2019-01-03 23:48:42,113 - Group['hadoop'] {}
2019-01-03 23:48:42,113 - Group['users'] {}
2019-01-03 23:48:42,113 - Group['knox'] {}
2019-01-03 23:48:42,114 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,115 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,117 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,118 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,119 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,120 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-03 23:48:42,122 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,123 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,124 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-01-03 23:48:42,126 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-03 23:48:42,127 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2019-01-03 23:48:42,128 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2019-01-03 23:48:42,129 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,131 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-01-03 23:48:42,132 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,133 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-01-03 23:48:42,135 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-03 23:48:42,136 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,137 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-01-03 23:48:42,138 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,140 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,141 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,142 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-03 23:48:42,144 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2019-01-03 23:48:42,145 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-03 23:48:42,147 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-01-03 23:48:42,152 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-01-03 23:48:42,153 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-01-03 23:48:42,154 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-03 23:48:42,156 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-03 23:48:42,157 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-01-03 23:48:42,164 - call returned (0, '1024')
2019-01-03 23:48:42,165 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1024'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-01-03 23:48:42,214 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1024'] due to not_if
2019-01-03 23:48:42,214 - Group['hdfs'] {}
2019-01-03 23:48:42,215 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-01-03 23:48:42,215 - FS Type: HDFS
2019-01-03 23:48:42,215 - Directory['/etc/hadoop'] {'mode': 0755}
2019-01-03 23:48:42,215 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-01-03 23:48:42,216 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2019-01-03 23:48:42,230 - Repository['HDP-3.1-repo-6'] {'base_url': '', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-6', 'mirror_list': None}
2019-01-03 23:48:42,236 - Repository['HDP-UTILS-1.1.0.22-repo-6'] {'base_url': '', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-6', 'mirror_list': None}
2019-01-03 23:48:42,238 - Repository[None] {'action': ['create']}
2019-01-03 23:48:42,238 - File['/tmp/tmpP143Jn'] {'content': '[HDP-3.1-repo-6]\nname=HDP-3.1-repo-6\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-6]\nname=HDP-UTILS-1.1.0.22-repo-6\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-01-03 23:48:42,239 - Writing File['/tmp/tmpP143Jn'] because contents don't match
2019-01-03 23:48:42,251 - Rewriting /etc/yum.repos.d/ambari-hdp-6.repo since it has changed.
2019-01-03 23:48:42,251 - File['/etc/yum.repos.d/ambari-hdp-6.repo'] {'content': StaticFile('/tmp/tmpP143Jn')}
2019-01-03 23:48:42,252 - Writing File['/etc/yum.repos.d/ambari-hdp-6.repo'] because it doesn't exist
2019-01-03 23:48:42,252 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-03 23:48:42,579 - Skipping installation of existing package unzip
2019-01-03 23:48:42,579 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-03 23:48:42,772 - Skipping installation of existing package curl
2019-01-03 23:48:42,772 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-03 23:48:42,904 - Installing package hdp-select ('/usr/bin/yum -y install hdp-select')
2019-01-03 23:48:43,475 - Reporting component version failed
Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select

Command failed after 1 tries

Solution:
Configure the local repository in the Ambari admin section (set the repository Base URLs to your local mirror).
[Screenshots: configuring the local repository Base URLs in the Ambari admin UI]
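For reference, once the Base URLs are set, the generated /etc/yum.repos.d/ambari-hdp-6.repo should contain non-empty baseurl lines. This is the shape of the file from my second (working) run logged further below; http://hdp-3/... is my local mirror, so substitute your own URL:

[HDP-3.1-repo-6]
name=HDP-3.1-repo-6
baseurl=http://hdp-3/ambari/HDP/centos7/3.1.0.0-78/
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-6]
name=HDP-UTILS-1.1.0.22-repo-6
baseurl=http://hdp-3/ambari/HDP-UTILS/centos7/1.1.0.22/
path=/
enabled=1
gpgcheck=0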

Problem 2: Error: Package: 2:postfix-2.10.1-7.el7.x86_64 (base)
The log shows this is a MySQL shared-library problem.

stderr: 
2019-01-04 00:07:03,018 - The 'accumulo-client' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ACCUMULO/package/scripts/accumulo_client.py", line 61, in 
    AccumuloClient().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ACCUMULO/package/scripts/accumulo_client.py", line 33, in install
    self.install_packages(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 849, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
    self._pkg_manager.install_package(package_name, self.__create_context())
  File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
    shell.repository_manager_executor(cmd, self.properties, context)
  File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
    raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install accumulo_3_1_0_0_78', exited with code '1', message: 'Error: Package: 2:postfix-2.10.1-7.el7.x86_64 (base)
           Requires: libmysqlclient.so.18(libmysqlclient_18)(64bit)
Error: Package: 2:postfix-2.10.1-7.el7.x86_64 (base)
           Requires: libmysqlclient.so.18()(64bit)
'
 stdout:
2019-01-04 00:06:40,147 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2019-01-04 00:06:40,152 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-01-04 00:06:40,165 - Group['kms'] {}
2019-01-04 00:06:40,166 - Group['livy'] {}
2019-01-04 00:06:40,166 - Group['spark'] {}
2019-01-04 00:06:40,167 - Group['ranger'] {}
2019-01-04 00:06:40,167 - Group['hdfs'] {}
2019-01-04 00:06:40,167 - Group['zeppelin'] {}
2019-01-04 00:06:40,167 - Group['hadoop'] {}
2019-01-04 00:06:40,167 - Group['users'] {}
2019-01-04 00:06:40,168 - Group['knox'] {}
2019-01-04 00:06:40,168 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,169 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,170 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,171 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,172 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,173 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-04 00:06:40,174 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,175 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,177 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-01-04 00:06:40,177 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-04 00:06:40,178 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2019-01-04 00:06:40,179 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2019-01-04 00:06:40,180 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,181 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-01-04 00:06:40,182 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,183 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-01-04 00:06:40,184 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-04 00:06:40,185 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,186 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-01-04 00:06:40,187 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,188 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,189 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,190 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-04 00:06:40,191 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2019-01-04 00:06:40,191 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-04 00:06:40,193 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-01-04 00:06:40,197 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-01-04 00:06:40,198 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-01-04 00:06:40,199 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-04 00:06:40,200 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-04 00:06:40,201 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-01-04 00:06:40,207 - call returned (0, '1023')
2019-01-04 00:06:40,208 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1023'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-01-04 00:06:40,211 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1023'] due to not_if
2019-01-04 00:06:40,212 - Group['hdfs'] {}
2019-01-04 00:06:40,212 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-01-04 00:06:40,213 - FS Type: HDFS
2019-01-04 00:06:40,213 - Directory['/etc/hadoop'] {'mode': 0755}
2019-01-04 00:06:40,213 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-01-04 00:06:40,228 - Repository['HDP-3.1-repo-6'] {'base_url': 'http://hdp-3/ambari/HDP/centos7/3.1.0.0-78/', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-6', 'mirror_list': ''}
2019-01-04 00:06:40,234 - Repository['HDP-UTILS-1.1.0.22-repo-6'] {'base_url': 'http://hdp-3/ambari/HDP-UTILS/centos7/1.1.0.22/', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-6', 'mirror_list': ''}
2019-01-04 00:06:40,237 - Repository[None] {'action': ['create']}
2019-01-04 00:06:40,237 - File['/tmp/tmpqeQLAk'] {'content': '[HDP-3.1-repo-6]\nname=HDP-3.1-repo-6\nbaseurl=http://hdp-3/ambari/HDP/centos7/3.1.0.0-78/\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-6]\nname=HDP-UTILS-1.1.0.22-repo-6\nbaseurl=http://hdp-3/ambari/HDP-UTILS/centos7/1.1.0.22/\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-01-04 00:06:40,258 - Writing File['/tmp/tmpqeQLAk'] because contents don't match
2019-01-04 00:06:40,258 - Rewriting /etc/yum.repos.d/ambari-hdp-6.repo since it has changed.
2019-01-04 00:06:40,258 - File['/etc/yum.repos.d/ambari-hdp-6.repo'] {'content': StaticFile('/tmp/tmpqeQLAk')}
2019-01-04 00:06:40,259 - Writing File['/etc/yum.repos.d/ambari-hdp-6.repo'] because it doesn't exist
2019-01-04 00:06:40,259 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-04 00:06:40,565 - Skipping installation of existing package unzip
2019-01-04 00:06:40,565 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-04 00:06:40,684 - Skipping installation of existing package curl
2019-01-04 00:06:40,684 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-04 00:06:40,778 - Skipping installation of existing package hdp-select
2019-01-04 00:06:40,907 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-01-04 00:06:40,950 - call returned (0, '')
2019-01-04 00:06:41,306 - Command repositories: HDP-3.1-repo-6, HDP-UTILS-1.1.0.22-repo-6
2019-01-04 00:06:41,307 - Applicable repositories: HDP-3.1-repo-6, HDP-UTILS-1.1.0.22-repo-6
2019-01-04 00:06:41,307 - Looking for matching packages in the following repositories: HDP-3.1-repo-6, HDP-UTILS-1.1.0.22-repo-6
2019-01-04 00:06:48,732 - Adding fallback repositories: HDP-3.1.0.0, HDP-UTILS-1.1.0.22
2019-01-04 00:06:54,629 - Package['accumulo_3_1_0_0_78'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-04 00:06:54,897 - Installing package accumulo_3_1_0_0_78 ('/usr/bin/yum -y install accumulo_3_1_0_0_78')
2019-01-04 00:07:02,973 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-01-04 00:07:03,018 - call returned (0, '')
2019-01-04 00:07:03,018 - The 'accumulo-client' component did not advertise a version. This may indicate a problem with the component packaging.

Command failed after 1 tries
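Before changing anything, you can confirm the diagnosis by asking rpm which package provides the library postfix is missing (a manual check of mine, not from the original log; the capability string is copied from the error above):

# On a broken host this prints "no package provides ..."
rpm -q --whatprovides 'libmysqlclient.so.18()(64bit)'
# Show what postfix actually requires
rpm -q --requires postfix | grep mysql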

Solution:
When I set up MySQL earlier, I had installed:
rpm -ivh mysql-community-common-5.7.24-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.24-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.24-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.24-1.el7.x86_64.rpm
rpm -ivh mysql-community-devel-5.7.24-1.el7.x86_64.rpm

MySQL itself worked fine after this. But as the yum error shows, postfix requires libmysqlclient.so.18, which MySQL 5.7's mysql-community-libs no longer provides; that library comes from mysql-community-libs-compat-5.7.24-1.el7.x86_64.rpm, so this package must be installed as well:

rpm -ivh mysql-community-libs-compat-5.7.24-1.el7.x86_64.rpm 
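After installing the compat package, the same check should now resolve, and the failed install can be retried from Ambari (a sanity check I'd suggest, not from the original post; the accumulo package name comes from the error log above):

rpm -q --whatprovides 'libmysqlclient.so.18()(64bit)'   # should now print mysql-community-libs-compat-5.7.24-1.el7.x86_64
yum -y install accumulo_3_1_0_0_78                      # the yum command that failed earlier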

======================================================

After this, the installation proceeded normally, as shown:
[Screenshot: installation completing successfully]
