Hadoop Distributed Applications (Personal Study Notes)

1. Introduction to Hadoop
Hadoop is open-source Apache software for distributed computation. It consists of two parts: the MapReduce distributed computing framework (first proposed and applied by Google) and the HDFS distributed file system. Hadoop lets developers build distributed applications easily without having to understand the low-level details.

2. Hadoop Job Submission Flow
1) The JobClient runs the job:
JobClient.runJob(jobConf);
2) The JobClient requests a new JobID from the JobTracker.
3) The job's runtime environment is prepared (the relevant configuration and jar files are copied to the job's staging area):
JobClient.copyRemoteFiles();
4) The job's input is computed (the number of map tasks is derived from the configured InputSplit size).
5) The job is submitted by calling JobTracker.submitJob().
6) The JobTracker hands the job's tasks out to TaskTrackers.
7) Each TaskTracker launches a JVM to run its task (by default, one JVM per task). A minimal submission sketch follows this list.
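Below is a minimal sketch of this flow using the classic org.apache.hadoop.mapred API. The class name, job name, and path arguments are illustrative, not from the notes; with no mapper or reducer set, the old API falls back to the identity classes.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitExample {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SubmitExample.class);
    conf.setJobName("submit-example");                  // illustrative name
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    // runJob() drives steps 1-5 above: it asks the JobTracker for a JobID,
    // copies the job jar and configuration to the staging area, computes
    // the input splits, submits the job, and then polls it to completion.
    JobClient.runJob(conf);
  }
}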

3. How the MapReduce Framework Works (Split, Then Merge)
The map side splits the data apart; the reduce side merges it back together.
3.1) The map side parses the input data. The MapReduce framework schedules map tasks close to their data whenever possible (data locality, which cuts the network traffic of moving data around) to improve efficiency. During a map task, the intermediate <key, value> results are first buffered in memory and spilled to temporary files only after a configurable threshold is reached (reducing the number of disk I/O operations). The map output serves as the intermediate data fed into the reduce side.
3.2) The reduce side produces the final aggregated results. When a reduce task finishes, it writes its <key, value> output to HDFS, after which the map-side intermediate data is deleted. The word-count sketch below illustrates both sides.
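As an illustration, here is the canonical word-count pair written against the classic org.apache.hadoop.mapred API (the class names are mine, not from the notes): the mapper splits each line into intermediate <word, 1> pairs, and the reducer merges all values for one word into a final count.

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Map side: parse each input line into intermediate <word, 1> pairs.
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> out, Reporter reporter)
      throws IOException {
    for (String token : value.toString().split("\\s+")) {
      word.set(token);
      out.collect(word, ONE);  // buffered in memory, spilled past the threshold
    }
  }
}

// Reduce side: merge all values for one key into the final count.
class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> out, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    out.collect(key, new IntWritable(sum));  // final result, written to HDFS
  }
}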

4. Studying the JobTracker Task-Assignment Source
1) List<Task> tasks = getSetupAndCleanupTasks(taskTrackerStatus);
        if (tasks == null) {
          tasks = taskScheduler.assignTasks(taskTrackers.get(trackerName));
        }
        if (tasks != null) {
          for (Task task : tasks) {
            expireLaunchingTasks.addNewTask(task.getTaskID());
            if(LOG.isDebugEnabled()) {
              LOG.debug(trackerName + " -> LaunchTask: " + task.getTaskID());
            }
            actions.add(new LaunchTaskAction(task));
          }
        }
1.1) getSetupAndCleanupTasks(taskTrackerStatus); // auxiliary (job/task setup and cleanup) tasks are assigned first
1.2) tasks = taskScheduler.assignTasks(taskTrackers.get(trackerName));
// if no auxiliary task is pending, the pluggable Hadoop task scheduler assigns the compute tasks (see the configuration sketch just below)
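Which scheduler runs here is pluggable. A hedged sketch, assuming Hadoop 1.x property names: the JobTracker instantiates the class named by mapred.jobtracker.taskScheduler, defaulting to the FIFO JobQueueTaskScheduler; swapping in the contrib FairScheduler additionally requires its jar on the JobTracker classpath. In practice this property is set cluster-wide in mapred-site.xml rather than in code.

import org.apache.hadoop.mapred.JobConf;

public class SchedulerConfigSketch {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    // The JobTracker instantiates the class named by this property as its
    // TaskScheduler; JobQueueTaskScheduler (FIFO) is the default.
    conf.set("mapred.jobtracker.taskScheduler",
             "org.apache.hadoop.mapred.FairScheduler");
    System.out.println(conf.get("mapred.jobtracker.taskScheduler"));
  }
}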

2) public void offerService(); // starts the JobTracker service (spinning up its internal threads)
3) public synchronized HeartbeatResponse heartbeat(TaskTrackerStatus status, boolean restarted,
   boolean initialContact, boolean acceptNewTasks, short responseId) throws IOException {}
// the method with which the JobTracker processes a TaskTracker's heartbeat: it first checks whether the TaskTracker is on the node blacklist, then whether it is healthy, ... and finally hands the newly assigned tasks back to the TaskTracker in the heartbeat response. A simplified sketch follows.
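A much-simplified, hypothetical sketch of that heartbeat flow; the helper names (isBlacklisted, isHealthy) and the string-based actions are invented stand-ins for the real JobTracker internals, kept only to show the order of the checks.

import java.util.ArrayList;
import java.util.List;

public class HeartbeatSketch {
  // stand-in for HeartbeatResponse: a response id plus the actions to run
  static class Response {
    short responseId;
    List<String> actions = new ArrayList<String>();
  }

  static boolean isBlacklisted(String tracker) { return false; } // stub check
  static boolean isHealthy(String tracker)     { return true;  } // stub check

  static Response heartbeat(String tracker, boolean acceptNewTasks,
                            short responseId) {
    Response resp = new Response();
    resp.responseId = (short) (responseId + 1);
    // blacklisted or unhealthy trackers get an empty response: no new work
    if (isBlacklisted(tracker) || !isHealthy(tracker)) {
      return resp;
    }
    if (acceptNewTasks) {
      // setup/cleanup tasks first, then the task scheduler (see 1.1/1.2 above)
      resp.actions.add("LaunchTaskAction");
    }
    return resp;
  }

  public static void main(String[] args) {
    System.out.println(heartbeat("tracker_host1", true, (short) 0).actions);
  }
}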

5. Some Related JobTracker Fields
1) Map<JobID, JobInProgress> jobs = 
    Collections.synchronizedMap(new TreeMap<JobID, JobInProgress>());
// holds the jobs submitted by clients; the JobTracker builds one JobInProgress object to track each submitted job. Job tracking uses a three-level tree: one JobInProgress owns multiple TaskInProgress objects, and each TaskInProgress owns multiple task attempts (see the sketch below).
2) TreeMap<String, ArrayList<JobInProgress>> userToJobsMap =
    new TreeMap<String, ArrayList<JobInProgress>>();
// maps each user name to that user's jobs; a Hadoop cluster may have multiple user accounts
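A toy model of that three-level tracking tree, with made-up classes and ID strings that only mimic the real job/task/attempt formats:

import java.util.ArrayList;
import java.util.List;

public class TrackingTreeSketch {
  static class TaskAttempt { String attemptId; }
  static class TaskInProgress {
    String taskId;
    List<TaskAttempt> attempts = new ArrayList<TaskAttempt>(); // retries, speculation
  }
  static class JobInProgress {
    String jobId;
    List<TaskInProgress> maps = new ArrayList<TaskInProgress>();
    List<TaskInProgress> reduces = new ArrayList<TaskInProgress>();
  }

  public static void main(String[] args) {
    JobInProgress job = new JobInProgress();
    job.jobId = "job_201001010000_0001";                 // illustrative ID
    TaskInProgress tip = new TaskInProgress();
    tip.taskId = "task_201001010000_0001_m_000000";
    TaskAttempt attempt = new TaskAttempt();
    attempt.attemptId = "attempt_201001010000_0001_m_000000_0";
    tip.attempts.add(attempt);
    job.maps.add(tip);
    System.out.println(job.jobId + " tracks " + job.maps.size() + " map TIP(s)");
  }
}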

6. JobTracker Task Assignment
The scheduler iterates over the running jobs in the queue. Before assigning a task, it checks whether the requesting TaskTracker still has the capacity to take on new work. If it does, the scheduler first tries to give each job a node-local or rack-local map task; only when no local task exists does it assign a non-local one. Map tasks are assigned before reduce tasks. The snippets below are from the FIFO JobQueueTaskScheduler.
Collection<JobInProgress> jobQueue =
      jobQueueJobInProgressListener.getJobQueue(); // fetch the job queue
synchronized (jobQueue) {
      for (JobInProgress job : jobQueue) {
        if (job.getStatus().getRunState() == JobStatus.RUNNING) {
          remainingMapLoad += (job.desiredMaps() - job.finishedMaps());
          if (job.scheduleReduces()) {
            remainingReduceLoad +=
              (job.desiredReduces() - job.finishedReduces());
          }
        }
      }
    } // tally the remaining map and reduce tasks across all running jobs
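Those remaining-load totals bound how many of the requesting tracker's slots may be filled in this heartbeat. A hedged arithmetic sketch (the variable names follow the scheduler source as I recall it; the numbers are invented):

public class LoadFactorSketch {
  public static void main(String[] args) {
    int remainingMapLoad = 40;     // pending maps across all running jobs
    int clusterMapCapacity = 100;  // total map slots in the cluster
    int trackerMapCapacity = 4;    // map slots on the requesting tracker
    int trackerRunningMaps = 1;    // maps it is already running

    double mapLoadFactor = (double) remainingMapLoad / clusterMapCapacity;
    int trackerCurrentMapCapacity =
        Math.min((int) Math.ceil(mapLoadFactor * trackerMapCapacity),
                 trackerMapCapacity);
    int availableMapSlots = trackerCurrentMapCapacity - trackerRunningMaps;
    // 0.4 * 4 -> ceil = 2 usable slots; 1 already busy -> 1 new map task
    System.out.println("availableMapSlots = " + availableMapSlots);
  }
}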
    scheduleMaps:  // label from the original source; target of the breaks below
    for (int i=0; i < availableMapSlots; ++i) {
      synchronized (jobQueue) {
        for (JobInProgress job : jobQueue) {
          if (job.getStatus().getRunState() != JobStatus.RUNNING) {
            continue;
          }

          Task t = null;
         
          // Try to schedule a node-local or rack-local Map task
          t =
            job.obtainNewNodeOrRackLocalMapTask(taskTrackerStatus,
                numTaskTrackers, taskTrackerManager.getNumberOfUniqueHosts());
          if (t != null) {
            assignedTasks.add(t);
            ++numLocalMaps;
           
            // Don't assign map tasks to the hilt!
            // Leave some free slots in the cluster for future task-failures,
            // speculative tasks etc. beyond the highest priority job
            if (exceededMapPadding) {
              break scheduleMaps;
            }
          
            // Try all jobs again for the next Map task
            break;
          }
         
          // Try to schedule a non-local Map task
          t =
            job.obtainNewNonLocalMapTask(taskTrackerStatus, numTaskTrackers,
                                   taskTrackerManager.getNumberOfUniqueHosts());
         
          if (t != null) {
            assignedTasks.add(t);
            ++numNonLocalMaps;
           
            // We assign at most 1 off-switch or speculative task
            // This is to prevent TaskTrackers from stealing local-tasks
            // from other TaskTrackers.
            break scheduleMaps;
          }
        }
      }
    } // map-task assignment
boolean exceededReducePadding = false;
    if (availableReduceSlots > 0) {
      exceededReducePadding = exceededPadding(false, clusterStatus,
                                              trackerReduceCapacity);
      synchronized (jobQueue) {
        for (JobInProgress job : jobQueue) {
          if (job.getStatus().getRunState() != JobStatus.RUNNING ||
              job.numReduceTasks == 0) {
            continue;
          }

          Task t =
            job.obtainNewReduceTask(taskTrackerStatus, numTaskTrackers,
                                    taskTrackerManager.getNumberOfUniqueHosts()
                                    );
          if (t != null) {
            assignedTasks.add(t);
            break;
          }
         
          // Don't assign reduce tasks to the hilt!
          // Leave some free slots in the cluster for future task-failures,
          // speculative tasks etc. beyond the highest priority job
          if (exceededReducePadding) {
            break;
          }
        }
      }
    } // reduce-task assignment; reduce tasks have no data-locality dimension
