What is the JobTracker and what does it do in a Hadoop cluster?

The JobTracker is a daemon service that submits and tracks MapReduce jobs in a Hadoop cluster. It runs in its own JVM process, usually on a separate machine, and each slave node is configured with the JobTracker node's location.
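In classic MapReduce (MRv1), pointing the slave nodes at the JobTracker is done with the `mapred.job.tracker` property in `mapred-site.xml`. A minimal sketch, where the hostname and port are placeholders for your own cluster:

```xml
<!-- mapred-site.xml (MRv1): tells each node where the JobTracker runs.
     The host and port below are example values, not defaults. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>
</configuration>
```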
The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted.
The JobTracker performs the following actions in Hadoop:
=> Client applications submit jobs to the JobTracker.
=> The JobTracker talks to the NameNode to determine the location of the data.
=> The JobTracker locates TaskTracker nodes with available slots at or near the data.
=> The JobTracker submits the work to the chosen TaskTracker nodes.
=> The TaskTracker nodes are monitored. If they do not send heartbeat signals often enough, they are deemed to have failed and their work is scheduled on a different TaskTracker.
=> A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
=> When the work is completed, the JobTracker updates its status.
=> Client applications can poll the JobTracker for information.
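The heartbeat-monitoring step above can be sketched in a few lines. This is a simplified illustration, not Hadoop's actual code; the class and method names are hypothetical, and only the 10-minute expiry mirrors the default `mapred.tasktracker.expiry.interval` setting:

```python
# Illustrative sketch (NOT Hadoop source): a JobTracker-style scheduler that
# detects dead TaskTrackers via missed heartbeats and reclaims their tasks.
HEARTBEAT_TIMEOUT = 10 * 60  # seconds; mirrors Hadoop's default 10-minute expiry

class Scheduler:
    def __init__(self):
        self.last_heartbeat = {}   # tracker name -> timestamp of last heartbeat
        self.assignments = {}      # tracker name -> list of assigned task ids

    def heartbeat(self, tracker, now):
        # A live TaskTracker periodically reports in.
        self.last_heartbeat[tracker] = now

    def assign(self, tracker, task, now):
        # Hand a task to a TaskTracker (ideally one near the data).
        self.last_heartbeat.setdefault(tracker, now)
        self.assignments.setdefault(tracker, []).append(task)

    def expire_trackers(self, now):
        # Trackers silent for longer than the timeout are deemed failed;
        # their tasks are returned so they can be rescheduled elsewhere.
        orphaned = []
        for tracker, last in list(self.last_heartbeat.items()):
            if now - last > HEARTBEAT_TIMEOUT:
                orphaned.extend(self.assignments.pop(tracker, []))
                del self.last_heartbeat[tracker]
        return orphaned

sched = Scheduler()
sched.assign("tt1", "task_0001", now=0)
sched.assign("tt2", "task_0002", now=0)
sched.heartbeat("tt2", now=700)        # tt2 keeps reporting; tt1 goes silent
lost = sched.expire_trackers(now=700)  # 700s since tt1's last heartbeat > 600s
print(lost)                            # -> ['task_0001']
```

The real JobTracker does much more (speculative execution, blacklisting, job-level bookkeeping), but the core failure-detection loop follows this shape.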