https://community.hortonworks.com/questions/118453/ambari-not-using-available-repos.html?page=2&pageSize=10&sort=oldest
Installing package hadoop_2_6_0_3_8-hdfs ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-hdfs')
2017-05-26 17:07:30,977 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-hdfs' returned 1.
Error: Package: hadoop_2_6_0_3_8-hdfs-2.7.3.2.6.0.3-8.x86_64 (HDP-2.6)
       Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2017-05-26 17:07:30,977 - Failed to install package hadoop_2_6_0_3_8-hdfs. Executing '/usr/bin/yum clean metadata'
2017-05-26 17:07:31,544 - Retrying to install package hadoop_2_6_0_3_8-hdfs after 30 seconds
For Red Hat 7.2: yum install libtirpc-devel-0.2.4-0.6.el7.x86_64.rpm -y
For Oracle Linux 7.3: yum install libtirpc-devel-0.2.4-0.6.el7.i686.rpm -y
For CentOS 6.5 with Ambari 2.5.1.0:
yum install libtirpc-0.2.1-13.el6.x86_64.rpm -y
yum install libtirpc-devel-0.2.1-13.el6.x86_64.rpm -y
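As a sanity check (not part of the original fix), you can confirm the dependency is now resolvable before retrying the component install from the Ambari UI:
rpm -q libtirpc libtirpc-devel
yum deplist hadoop_2_6_0_3_8-hdfs | grep -i tirpc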
[root@ambari02 ~]$ sudo service ntpd start Starting ntpd: [ OK ]
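Starting ntpd by hand only covers the current boot; to keep the Ambari warning from returning after a reboot, it is presumably also worth enabling the service at startup:
sudo chkconfig ntpd on       (CentOS/RHEL 6)
sudo systemctl enable ntpd   (CentOS/RHEL 7)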
Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 177, in
# rpm -ivh libtirpc-devel-0.2.4-0.* --nodeps
warning: libtirpc-devel-0.2.4-0.10.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:libtirpc-devel-0.2.4-0.8.el7 ################################# [ 50%]
2:libtirpc-devel-0.2.4-0.10.el7 ################################# [100%]
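With the package forced in via --nodeps, a quick verification that rpm now knows about it, followed by a retry of the exact yum command Ambari runs, might look like this (purely a sanity check, not from the original notes):
rpm -qa | grep libtirpc
/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-hdfs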
snappy download link:
Remove the newer snappy from every machine and install the older version:
rpm -e snappy-1.1.0-3.el7.x86_64
yum install snappy-1.0.5-1.el6.x86_64 -y
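To confirm which snappy build each node ends up with after the downgrade, a loop over the hosts can help; the host names below are placeholders:
for h in node01 node02 node03; do ssh $h 'rpm -qa | grep -i snappy'; done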
Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 123, in
This problem is caused by the conf directory missing under /etc/hadoop/; copy one over from a healthy node with scp -r (a concrete example with placeholder host names follows).
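A concrete form of that copy, with placeholder host names (adjust to the actual healthy node and target host):
scp -r good-node:/etc/hadoop/conf /etc/hadoop/
ls /etc/hadoop/conf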
2016-09-12 16:34:18,905 - User['activity_analyzer'] {'gid': 'hadoop', 'groups': [u'hdfs']}
Deploying activity analyzer
Command: /usr/sbin/hst activity-analyzer setup root:root '/etc/rc.d/init.d'
Exit code: 127
Std Out: None
Std Err: /usr/sbin/hst: line 321: install-activity-analyzer.sh: command not found
Command failed after 1 tries
yum remove smartsense-hst
rm -rf /var/log/smartsense/
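Optionally, verify that nothing from SmartSense is left behind before re-running the operation in Ambari:
rpm -qa | grep -i smartsense
ls /var/log/smartsense/ 2>/dev/null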
2017-06-14 10:03:29,878 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService failed in state INITED; cause: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:351) at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.serviceInit(EmbeddedElectorService.java:103) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceInit(AdminService.java:152) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:281) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1236) Caused by: org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /yarn-leader-election at org.apache.zookeeper.KeeperException.create(KeeperException.java:115) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1050) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(ActiveStandbyElector.java:1067) at org.apache.hadoop.ha.ActiveStandbyElector.setAclsWithRetries(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:349) ... 9 more 2017-06-14 10:03:29,880 INFO ha.ActiveStandbyElector (ActiveStandbyElector.java:processWatchEvent(600)) - Session connected. 2017-06-14 10:03:29,889 INFO ha.ActiveStandbyElector (ActiveStandbyElector.java:processWatchEvent(626)) - Successfully authenticated to ZooKeeper using SASL. 
2017-06-14 10:03:29,890 INFO ha.ActiveStandbyElector (ActiveStandbyElector.java:quitElection(406)) - Yielding from election 2017-06-14 10:03:29,891 INFO ha.ActiveStandbyElector (ActiveStandbyElector.java:terminateConnection(835)) - Terminating ZK connection for elector id=482307698 appData=null cb=Service org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService in state org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService: STOPPED 2017-06-14 10:03:29,910 INFO zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x15ca4396eda001a closed 2017-06-14 10:03:29,911 INFO ha.ActiveStandbyElector (ActiveStandbyElector.java:terminateConnection(832)) - terminateConnection, zkConnectionState = TERMINATED 2017-06-14 10:03:29,911 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service org.apache.hadoop.yarn.server.resourcemanager.AdminService failed in state INITED; cause: org.apache.hadoop.service.ServiceStateException: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election org.apache.hadoop.service.ServiceStateException: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceInit(AdminService.java:152) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:281) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1236) Caused by: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:351) at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.serviceInit(EmbeddedElectorService.java:103) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ... 7 more Caused by: org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /yarn-leader-election at org.apache.zookeeper.KeeperException.create(KeeperException.java:115) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1050) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(ActiveStandbyElector.java:1067) at org.apache.hadoop.ha.ActiveStandbyElector.setAclsWithRetries(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:349) ... 
9 more 2017-06-14 10:03:29,912 INFO service.AbstractService (AbstractService.java:noteFailure(272)) - Service ResourceManager failed in state INITED; cause: org.apache.hadoop.service.ServiceStateException: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election org.apache.hadoop.service.ServiceStateException: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceInit(AdminService.java:152) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:281) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1236) Caused by: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:351) at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.serviceInit(EmbeddedElectorService.java:103) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ... 7 more Caused by: org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /yarn-leader-election at org.apache.zookeeper.KeeperException.create(KeeperException.java:115) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1050) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(ActiveStandbyElector.java:1067) at org.apache.hadoop.ha.ActiveStandbyElector.setAclsWithRetries(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:349) ... 
9 more 2017-06-14 10:03:29,913 INFO resourcemanager.ResourceManager (ResourceManager.java:transitionToStandby(1078)) - Transitioning to standby state 2017-06-14 10:03:29,914 INFO resourcemanager.ResourceManager (ResourceManager.java:transitionToStandby(1085)) - Transitioned to standby state 2017-06-14 10:03:29,914 FATAL resourcemanager.ResourceManager (ResourceManager.java:main(1240)) - Error starting ResourceManager org.apache.hadoop.service.ServiceStateException: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceInit(AdminService.java:152) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:281) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1236) Caused by: java.io.IOException: Couldn't set ACLs on parent ZNode: /yarn-leader-election at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:351) at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.serviceInit(EmbeddedElectorService.java:103) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ... 7 more Caused by: org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /yarn-leader-election at org.apache.zookeeper.KeeperException.create(KeeperException.java:115) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1050) at org.apache.hadoop.ha.ActiveStandbyElector$7.run(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(ActiveStandbyElector.java:1067) at org.apache.hadoop.ha.ActiveStandbyElector.setAclsWithRetries(ActiveStandbyElector.java:1044) at org.apache.hadoop.ha.ActiveStandbyElector.ensureParentZNode(ActiveStandbyElector.java:349) ... 9 more 2017-06-14 10:03:29,910 INFO zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down 2017-06-14 10:03:29,922 INFO resourcemanager.ResourceManager (LogAdapter.java:info(45)) - SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down ResourceManager at bigdata-nn-01/172.16.0.124 ************************************************************/
Log in to ZooKeeper:
[root@bigdata-nn-01 ~]$ su - hadoop
[hadoop@bigdata-nn-01 ~]$ zookeeper-client -server bigdata-nn-01.cars.com:2181
Note: the value passed to -server must be the host's FQDN.
Delete /yarn-leader-election:
[zk: bigdata-nn-01.cars.com:2181(CONNECTED) 1] rmr /yarn-leader-election
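After the znode is removed, restarting both ResourceManagers (e.g. from Ambari) should let the elector recreate /yarn-leader-election with the expected ACLs; from the same zookeeper-client session this can be checked with standard commands:
[zk: bigdata-nn-01.cars.com:2181(CONNECTED) 2] ls /
[zk: bigdata-nn-01.cars.com:2181(CONNECTED) 3] getAcl /yarn-leader-election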
2017-08-16 09:44:19,927 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6 2017-08-16 09:44:19,929 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf User Group mapping (user_group) is missing in the hostLevelParams 2017-08-16 09:44:19,931 - Group['hadoop'] {} 2017-08-16 09:44:19,933 - Group['users'] {} 2017-08-16 09:44:19,934 - User['hadoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']} 2017-08-16 09:44:19,935 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-08-16 09:44:19,937 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hadoop /tmp/hadoop-hadoop,/tmp/hsperfdata_hadoop,/home/hadoop,/tmp/hadoop,/tmp/sqoop-hadoop'] {'not_if': '(test $(id -u hadoop) -gt 1000) || (false)'} 2017-08-16 09:44:19,964 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hadoop /tmp/hadoop-hadoop,/tmp/hsperfdata_hadoop,/home/hadoop,/tmp/hadoop,/tmp/sqoop-hadoop'] due to not_if 2017-08-16 09:44:19,965 - Directory['/tmp/hbase-hbase'] {'owner': 'hadoop', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2017-08-16 09:44:19,967 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-08-16 09:44:19,969 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hadoop /home/hadoop,/tmp/hadoop,/usr/bin/hadoop,/var/log/hadoop,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hadoop) -gt 1000) || (false)'} 2017-08-16 09:44:19,990 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hadoop /home/hadoop,/tmp/hadoop,/usr/bin/hadoop,/var/log/hadoop,/tmp/hbase-hbase'] due to not_if 2017-08-16 09:44:19,991 - Group['hadoop'] {} 2017-08-16 09:44:19,992 - User['hadoop'] {'fetch_nonlocal_groups': True, 'groups': ['users', 'hadoop']} 2017-08-16 09:44:19,993 - FS Type: 2017-08-16 09:44:19,993 - Directory['/etc/hadoop'] {'mode': 0755} 2017-08-16 09:44:20,020 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hadoop', 'group': 'hadoop'} 2017-08-16 09:44:20,021 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hadoop', 'group': 'hadoop', 'mode': 01777} 2017-08-16 09:44:20,041 - Initializing 2 repositories 2017-08-16 09:44:20,042 - Repository['HDP-2.6'] {'base_url': 'http://172.17.109.11/HDP/centos6/', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None} 2017-08-16 09:44:20,052 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://172.17.109.11/HDP/centos6/\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-08-16 09:44:20,053 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://172.17.109.11/HDP-UTILS/', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None} 2017-08-16 09:44:20,057 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://172.17.109.11/HDP-UTILS/\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-08-16 09:44:20,057 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:44:20,147 - 
Skipping installation of existing package unzip 2017-08-16 09:44:20,148 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:44:20,155 - Skipping installation of existing package curl 2017-08-16 09:44:20,155 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:44:20,164 - Skipping installation of existing package hdp-select 2017-08-16 09:44:20,444 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-08-16 09:44:20,453 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20} 2017-08-16 09:44:20,508 - call returned (0, 'hive-server2 - None') 2017-08-16 09:44:20,510 - Failed to get extracted version with /usr/bin/hdp-select 2017-08-16 09:44:20,510 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6 2017-08-16 09:44:20,527 - Package['mysql-connector-java'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:44:20,615 - Installing package mysql-connector-java ('/usr/bin/yum -d 0 -e 0 -y install mysql-connector-java') 2017-08-16 09:45:09,408 - Package['hive_2_6_1_0_129'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:45:09,421 - Installing package hive_2_6_1_0_129 ('/usr/bin/yum -d 0 -e 0 -y install hive_2_6_1_0_129') 2017-08-16 09:45:46,355 - Package['hive_2_6_1_0_129-hcatalog'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:45:46,368 - Installing package hive_2_6_1_0_129-hcatalog ('/usr/bin/yum -d 0 -e 0 -y install hive_2_6_1_0_129-hcatalog') 2017-08-16 09:45:52,828 - Package['hive_2_6_1_0_129-webhcat'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:45:52,844 - Installing package hive_2_6_1_0_129-webhcat ('/usr/bin/yum -d 0 -e 0 -y install hive_2_6_1_0_129-webhcat') 2017-08-16 09:46:00,697 - Package['hive2_2_6_1_0_129'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-08-16 09:46:00,710 - Installing package hive2_2_6_1_0_129 ('/usr/bin/yum -d 0 -e 0 -y install hive2_2_6_1_0_129') 2017-08-16 09:46:05,216 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hive2_2_6_1_0_129' returned 1. There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them. The program yum-complete-transaction is found in the yum-utils package. Error: Package: hive2_2_6_1_0_129-2.1.0.2.6.1.0-129.noarch (HDP-2.6) Requires: python-argparse You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest 2017-08-16 09:46:05,216 - Failed to install package hive2_2_6_1_0_129. Executing '/usr/bin/yum clean metadata' 2017-08-16 09:46:05,400 - Retrying to install package hive2_2_6_1_0_129 after 30 secondsCommand failed after 1 tries
yum install python-argparse-1.2.1-2.1.el6.noarch.rpm -y
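The log above also complains about unfinished yum transactions; if that warning keeps appearing, cleaning them up first is probably worthwhile (yum-complete-transaction ships in yum-utils):
yum install yum-utils -y
yum-complete-transaction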
https://community.hortonworks.com/questions/145/openssl-error-upon-host-registration.html
The good news is that message was referenced in http://docs.hortonworks.com/HDPDocuments/Ambari-1.5.1.0/bk_using_Ambari_book/content/ambari-chap3-4_2x.html, and following the information provided I found out that I needed a newer version of OpenSSL, which I got by running "sudo yum upgrade openssl" on all the boxes to get past these errors.
I also found a warning from each of the four nodes that the ntpd service was not running. I thought I had taken care of this earlier, but either way I just followed the instructions from building a virtualized 5-node HDP 2.0 cluster (all within a Mac) and the warnings cleared up.
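A quick way to confirm both fixes on each node before retrying host registration (standard commands, listed here only as a sanity check):
openssl version
service ntpd status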
Unlike the other cluster install instructions, for this setup we want all services checked on the Choose Services page, and then you can take some creative liberty on the Assign Masters page.
2018-04-24 09:41:52,554 - Installing package hadoop_2_6_0_3_8-yarn ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-yarn')
2018-04-24 09:41:54,545 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_3_8-yarn' returned 1.
Error: Package: hadoop_2_6_0_3_8-2.7.3.2.6.0.3-8.x86_64 (HDP-2.6)
       Requires: nc
Error: Package: hadoop_2_6_0_3_8-2.7.3.2.6.0.3-8.x86_64 (HDP-2.6)
       Requires: redhat-lsb-core
Error: Package: hadoop_2_6_0_3_8-2.7.3.2.6.0.3-8.x86_64 (HDP-2.6)
       Requires: psmisc
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2018-04-24 09:41:54,545 - Failed to install package hadoop_2_6_0_3_8-yarn. Executing '/usr/bin/yum clean metadata'
2018-04-24 09:41:54,680 - Retrying to install package hadoop_2_6_0_3_8-yarn after 30 seconds
Command failed after 1 tries
From the errors above it is clear that nc, redhat-lsb-core, and psmisc are required.
# cd /etc/yum.repos.d
# vi ol73.repo
[Server]
name=Server
baseurl=ftp://192.168.0.70/pub/os/OL-73
gpgcheck=0
# yum clean all
# yum install nc
# yum install redhat-lsb-core
# yum install psmisc
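Since all three dependencies come from the same OS repository, they can also be installed in one command, and a final rpm query confirms they are present before retrying the YARN install from Ambari:
# yum install nc redhat-lsb-core psmisc -y
# rpm -q nc redhat-lsb-core psmisc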