import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Client {

    static void usage() {
        System.out.println("Usage : Client <inFile> <outFile>");
        System.exit(1);
    }

    static void printAndExit(String str) {
        System.err.println(str);
        System.exit(1);
    }

    public static void main(String[] argv) throws IOException {
        for (String arg : argv) {
            System.out.println("arg=" + arg);
        }

        // Validate arguments before touching HDFS
        if (argv.length != 2) usage();

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hadoop DFS deals with Path
        Path inFile = new Path(argv[0]);
        Path outFile = new Path(argv[1]);

        // Check if input/output are valid
        if (!fs.exists(inFile)) {
            printAndExit("Input file not found");
        }
        if (!fs.isFile(inFile)) {
            printAndExit("Input should be a file");
        }
        if (fs.exists(outFile)) {
            printAndExit("Output already exists");
        }

        // Read from the input file and write to the new file
        FSDataInputStream in = fs.open(inFile);
        FSDataOutputStream out = fs.create(outFile);
        byte[] buffer = new byte[256];
        try {
            int bytesRead;
            while ((bytesRead = in.read(buffer)) > 0) {
                out.write(buffer, 0, bytesRead);
            }
        } catch (IOException e) {
            System.out.println("Error while copying file");
        } finally {
            in.close();
            out.close();
        }
    }
}
Running it on Windows produces the following error:
Exception in thread "main" java.io.IOException: Login failed: Cannot run program "whoami": CreateProcess error=2, ?????????
at org.apache.hadoop.dfs.DFSClient.createNamenode(DFSClient.java:124)
at org.apache.hadoop.dfs.DFSClient.<init>
at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:65)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
at org.apache.hadoop.fs.FileSystem.getNamed(FileSystem.java:122)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:94)
at com.chua.hadoop.client.Client.main(Client.java:25)
Searching online, I found that Windows has no whoami command (and even if it did, access would probably still fail because of the way HDFS file permissions work; later I tried simply filling in an arbitrary user and got: org.apache.hadoop.fs.permission.AccessControlException: Permission denied).
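For reference, on the pre-security Hadoop releases of that era (the org.apache.hadoop.dfs.* package names in the stack trace suggest one), the client identity is, as far as I know, read from the hadoop.job.ugi property, and only if it is unset does the client shell out to whoami. A minimal sketch of setting it programmatically instead of in hadoop-site.xml; the hadoop,supergroup value is a placeholder, not something from my cluster:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UgiClient {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Format is "user,group1,group2,..." -- hadoop/supergroup are placeholders;
        // use whatever user and group actually own the files on the namenode side.
        conf.set("hadoop.job.ugi", "hadoop,supergroup");
        // With the property set, the client should no longer need to run `whoami`.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("/ exists: " + fs.exists(new Path("/")));
        fs.close();
    }
}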
So I added the corresponding property to hadoop-site.xml, but it still failed with:
java.net.ConnectException: Connection refused
At that point I copied the code over to Linux and ran it there, and got the same error. I wasted a lot of time on this, read a lot of blog posts without finding anything relevant, and went through the Hadoop documentation on Permissions and Security without spotting the problem either.
So, with the code still on Linux, I changed fs.default.name back to the value the namenode itself was started with, i.e. the localhost:port form. This time it ran successfully, which puzzled me.
I then tried telnet localhost port and telnet ip port: the localhost one connected, while the ip one was refused, so the problem was clearly here. I have no socket programming experience, so I asked the person next to me, who explained that a socket listener can be bound in different ways; in a setup like mine, the listener only accepts local (loopback) connections.
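A tiny sketch of what that means in plain java.net terms (nothing Hadoop-specific; the port numbers 9000 and 9001 are arbitrary): a listener bound to 127.0.0.1 only accepts connections addressed to localhost, while one bound to the wildcard address accepts connections on any interface, including the machine's real IP. That matches what telnet showed, since fs.default.name said localhost and the namenode listener was apparently bound to the loopback address only.

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws IOException {
        // Bound to the loopback address only: "telnet localhost 9000" connects,
        // but "telnet <machine-ip> 9000" gets Connection refused, even from the same machine.
        ServerSocket loopbackOnly = new ServerSocket();
        loopbackOnly.bind(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9000));

        // Bound to the wildcard address: reachable via localhost and via the machine's real IP.
        ServerSocket allInterfaces = new ServerSocket();
        allInterfaces.bind(new InetSocketAddress((InetAddress) null, 9001));

        System.out.println("loopback only : " + loopbackOnly.getLocalSocketAddress());
        System.out.println("all interfaces: " + allInterfaces.getLocalSocketAddress());

        loopbackOnly.close();
        allInterfaces.close();
    }
}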
This time I edited hadoop-site.xml on the HDFS side, changed fs.default.name to the ip:port form, and restarted HDFS.
Then I changed fs.default.name in the client's hadoop-site.xml to the ip as well, and it ran successfully. This problem cost me a whole day.
The client on Windows now runs successfully too.
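For anyone running into the same thing, a quick sanity check I could have used much earlier: print the fs.default.name the client Configuration actually resolves and issue one trivial namenode RPC before doing any real work. A minimal sketch, where fs.exists on the root path is just an arbitrary cheap call:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckNamenode {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Should show the ip:port form once hadoop-site.xml is fixed, not localhost:port.
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));

        FileSystem fs = FileSystem.get(conf);
        // One cheap RPC to the namenode; if this returns, login and connection setup are fine.
        System.out.println("/ exists: " + fs.exists(new Path("/")));
        fs.close();
    }
}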