Wednesday, June 5, 2019

MapReduce for Distributed Computing

1.) Introduction

A distributed computing system can be defined as a collection of processors interconnected by a communication network, such that each processor has its own local memory. Communication between any two or more processors of the system takes place by passing messages over the communication network. Distributed computing is applied in various technologies such as Hadoop and MapReduce, which we will be discussing further in detail.

Hadoop is becoming the technology of choice for enterprises that need to effectively collect, store and process large amounts of structured and complex data. The purpose of this thesis is to study the MapReduce programming model as implemented by Hadoop.

All of this is made practical by the file system that Hadoop uses: HDFS, the Hadoop Distributed File System. HDFS is a distributed file system capable of running on commodity hardware. It is similar to existing distributed file systems, but its main advantages over them are that it is designed to be deployed on low-cost hardware and to be highly fault-tolerant. HDFS provides high-throughput access for applications with huge data sets.

HDFS was originally built as infrastructure support for the Apache Nutch web search engine. Applications that run on HDFS have extremely large data sets, from a few gigabytes up to terabytes in size. Thus, HDFS is designed to support very large files. It provides high aggregate data bandwidth, can connect hundreds of nodes in a single cluster, and supports tens of millions of files in a system at a time.

We will take up all the issues mentioned above in detail, and we will also discuss various places where Hadoop is being deployed, such as the storage systems of Facebook and Twitter, Hive, Pig, etc.
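To make the MapReduce model concrete before we turn to Hadoop itself, here is a minimal single-machine sketch of the map, shuffle/sort, and reduce phases in plain Python. No Hadoop APIs are used; the function names are illustrative, not part of any Hadoop interface:

```python
from itertools import groupby
from operator import itemgetter

def mapper(document):
    # Map phase: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # Reduce phase: sum all partial counts for one key.
    return (word, sum(counts))

def map_reduce(documents):
    # Shuffle/sort phase: group intermediate pairs by key, mimicking
    # what Hadoop does between the map and reduce phases.
    intermediate = sorted(
        (pair for doc in documents for pair in mapper(doc)),
        key=itemgetter(0),
    )
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(intermediate, key=itemgetter(0))
    )

print(map_reduce(["the quick brown fox", "the lazy dog"]))
# {'brown': 1, 'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```

In a real Hadoop job the map and reduce functions run on different nodes of the cluster and the input splits come from HDFS, but the three-phase structure is exactly the one sketched here.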
2.) Serial vs. Parallel Programming

In the early decades of computing, programs were serial (sequential): a program consisted of a list of instructions, each executed one after the other, as the name suggests. It ran from start to finish on a single processor.

Parallel programming developed as a means of improving performance and efficiency. In a parallel program, the processing is broken up into several parts, each of which is executed concurrently. The instructions from each part run simultaneously on different CPUs. These CPUs can exist on a single machine, or they can be CPUs in a set of computers connected via a network.

Not only are parallel programs faster, they can also be used to solve problems on large datasets using non-local resources. When you have a set of computers connected on a network, you have a vast pool of CPUs, and you often have the ability to read and write very large files (assuming a distributed file system is also in place).

Parallelism is nothing but a strategy for performing large, complex tasks faster than the traditional serial approach. A large task can either be performed serially, one step following another, or it can be decomposed into smaller tasks to be performed simultaneously using the concurrency mechanisms of parallel systems.

Parallelism is achieved by:
- Breaking up the task into smaller tasks
- Assigning the smaller tasks to multiple processors to work on simultaneously
- Coordinating the processors

Parallel problem solving can be seen in real life too. Examples: an automobile assembly plant, running a large organization, and building construction.

3.) History of Clusters

Clustering is the use of a cluster of computers (typically PCs or workstations, together with storage devices and interconnections) that appears to an outside user as a single, highly capable system. Cluster computing can be used for high availability and load balancing.
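The three parallelism steps listed in section 2 (break up, assign, coordinate) can be sketched on a single machine with a thread pool. This is only an illustrative sketch: CPython threads do not speed up CPU-bound work, and on a real cluster the workers would be separate machines rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Step 2: each worker handles one smaller piece of the task.
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    # Step 1: break the task into smaller parts.
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # Steps 2 and 3: assign the parts to workers, then coordinate by
    # combining the partial results into the final answer.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1, 101))))  # 5050, same as the serial sum
```

The decomposition gives the same answer as a serial loop; the point is that the partial sums are independent and could run anywhere.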
It can also be used as a relatively low-cost form of parallel processing for scientific and other related applications.

Computer clustering technology puts groups of systems together to provide better system reliability. Cluster server systems connect a group of systems in order to provide combined processing service for the clients of the cluster. Cluster operating systems distribute the tasks amongst the available systems. Clusters of systems or workstations can connect a group of systems together to share critically demanding and tough tasks. Theoretically, a cluster operating system should provide seamless optimization in every case.

At the present time, cluster server and workstation systems are mostly used in high availability applications and in scientific applications such as numerical computations.

A cluster is a type of parallel or distributed system that consists of a collection of interconnected whole computers and is used as a single, unified computing resource. The "whole computer" in this definition can have one or more processors built into a single operating system image.

Why a Cluster?

Cost: In general, small systems benefit from commodity technology; both hardware and software costs tend to be significantly lower for smaller systems. However, one must consider the total cost of ownership of the computing environment when making a purchasing decision. The next section points to some issues which may offset some of the gains of a cluster's lower initial acquisition cost.

Vendor independence: Though it is usually best to use similar components across the servers in a cluster, it is worthwhile to retain a certain degree of vendor independence, especially if the cluster is being assembled for long-term usage.
A Linux cluster built mostly from commodity hardware allows for much better vendor independence than a large multi-processor system running a proprietary operating system.

Scalability: In several environments the problem load is so large that it simply cannot be processed on a single system within the time limits of the organization. Clusters also provide a hassle-free path for increasing computational capacity as the load rises over time. Most large systems scale up to a fixed number of processors, beyond which a costly upgrade is required.

Reliability, Availability and Serviceability (RAS): A large system is typically more vulnerable to failure than a smaller system; a major hardware or software component failure brings the whole system down. Hence, if a large single system is deployed as the computational resource, a module failure will take down considerable computing power. In a cluster, a single module failure only affects a small part of the overall computational resources. A system in the cluster can be repaired without bringing the rest of the cluster down, and additional computational resources can be added to a cluster while it is running the user workload. Hence a cluster maintains continuity of user operations in both of these cases. In similar situations an SMP system would require a complete shutdown and restart.

Adaptability: It is much easier to adapt the topology of a cluster (the pattern in which the compute nodes are linked together) to best suit the application requirements of a computer center. Vendors typically support only a limited set of MPP topologies because of design, or sometimes testing, issues.

Faster technology innovation: Clusters benefit from thousands of researchers all around the world, who typically work on smaller systems rather than expensive high-end systems.

Limitations of Clusters

It is important to mention certain shortcomings of using clusters as opposed to a single large system.
These should be carefully weighed when defining the best computational resource for the organization, and the organization's system managers and programmers should take an active part in evaluating the following trade-offs.

A cluster increases the number of individual components in a computer center. Every server in a cluster has its own independent network ports, power supplies, and so on. The increased number of components and cables running across servers in a cluster partially offsets some of the RAS advantages stated above.

It is easier to manage a single system than numerous servers in a cluster. There are many more system services available for managing computing resources within a single system than there are for managing a cluster. As clusters increasingly find their way into commercial organizations, more cluster-savvy tools will become available over time, which will bridge some of this gap.

In order for a cluster to scale to make effective use of numerous CPUs, the workload needs to be properly balanced across the cluster. Workload imbalance is easier to handle in a shared-memory environment, because switching tasks across processors does not involve much data movement. On a cluster, by contrast, it tends to be very hard to move an already running task from one node to another. If the environment is such that workload balance cannot be controlled, a cluster may not provide good parallel efficiency.

Programming models used on a cluster are typically different from those used on shared-memory systems. It is relatively easy to exploit parallelism in a shared-memory system, since the shared data is readily available. On a cluster, as in an MPP system, either the programmer or the compiler has to explicitly move data from one node to another.
Before deploying a cluster as a key resource in your environment, you should make sure that your system administrators and programmers are comfortable working in a cluster environment.

Getting Started With a Linux Cluster

Although clustering can be performed on various operating systems (Windows, Macintosh, Solaris, etc.), Linux has its own advantages, which are as follows:
- Linux runs on a wide range of hardware.
- Linux is exceptionally stable.
- Linux source code is freely distributed.
- Linux is relatively virus free.
- A wide variety of tools and applications are available for free.
- It is a good environment for developing cluster infrastructure.

Cluster Overview and Terminology

A compute cluster comprises many different hardware and software modules with complex interfaces between them. In fig 1.3 we show a simplified view of the key layers that form a cluster. The following sections give a brief overview of these layers.

4.) Parallel Computing and Distributed Computing Systems

Parallel computing

Parallel computing is the concurrent execution of some combination of multiple instances of programmed instructions and data on multiple processors in order to obtain results faster.

A parallel computing system is a computer with more than one processor used for parallel processing. In the past, each processor of a multiprocessing system came in its own package, but recently introduced multicore processors contain multiple logical processors in a single package. There are many different kinds of parallel computers, distinguished by the kind of interconnection between the processors (processing elements, or PEs) and memory.

Distributed Computing Systems

There are two types of distributed computing systems:

Tightly coupled systems: In these systems there is a single, system-wide primary memory (address space) that is shared by all the processors. Any communication between the processors usually takes place through the shared memory.
In tightly coupled systems, the number of processors that can be usefully deployed is usually small and is limited by the bandwidth of the shared memory. Tightly coupled systems are referred to as parallel processing systems.

Loosely coupled systems: In these systems the processors do not share memory, and each processor has its own local memory. All physical communication between the processors is done by passing messages across the network that interconnects them. In this type of system the number of processors is expandable and can, in principle, be unlimited. Loosely coupled systems are referred to as distributed computing systems.

Various models are used for building distributed computing systems:

4.1) Minicomputer Model

The minicomputer model is a simple extension of the centralized time-sharing system. A distributed computing system based on this model consists of a few minicomputers or large supercomputers interconnected by a communication network. Each minicomputer usually has several users simultaneously logged on to it through terminals attached to it. Each user is logged on to one specific minicomputer, with remote access to the other minicomputers: the network allows a user to access remote resources that are available on some machine other than the one onto which the user is currently logged.

The minicomputer model is used when resource sharing with remote users is desired. The early ARPAnet is an example of a distributed computing system based on the minicomputer model.

4.2) Workstation Model

The workstation model consists of several workstations interconnected by a communication network.
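The loosely coupled style, in which nodes share no memory and cooperate only by exchanging messages, can be sketched as follows. This is purely an illustration: the two "nodes" are simulated by threads on one machine, and the interconnection network by a queue.

```python
import threading
import queue

def node_a(outbox):
    # "Node" A owns its data locally and communicates only by sending
    # a message; no shared address space is assumed.
    local_data = [1, 2, 3, 4]
    outbox.put(("sum_request", local_data))

def node_b(inbox, results):
    # "Node" B receives the message and replies with a computed value.
    kind, payload = inbox.get()
    if kind == "sum_request":
        results.append(sum(payload))

channel = queue.Queue()   # stands in for the interconnection network
results = []
t1 = threading.Thread(target=node_a, args=(channel,))
t2 = threading.Thread(target=node_b, args=(channel, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [10]
```

In a real loosely coupled system the queue would be replaced by network sockets or a message-passing library, but the share-nothing structure is the same.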
The best example of the workstation model is a company's office or a university department, which may have quite a few workstations scattered throughout a building or campus, each equipped with its own disk and serving a single user. At certain times, especially at night, many of these workstations are idle (not being used), resulting in the waste of large amounts of CPU time. The idea behind the workstation model is to connect all these workstations by a high-speed LAN so that idle workstations may be used to process the jobs of users who are logged on to other workstations and do not have sufficient processing power at their own workstations to get their jobs done efficiently.

A user logs onto one of the workstations, called his home workstation, and submits jobs for execution. If the system does not have sufficient processing power to execute the processes of the submitted jobs efficiently, it transfers one or more of those processes from the user's workstation to some other workstation that is currently idle and gets them executed there; finally, the result of execution is returned to the user's workstation without the user being aware of it.

The main issue arises if a user logs onto a workstation that was idle until now and was being used to execute a process of another workstation. How should the remote process be handled at this point? There are three solutions to this problem:

The first method is to allow the remote process to share the resources of the workstation along with the logged-on user's own processes. This method is easy to implement, but it defeats the main idea of workstations serving as personal computers, because if remote processes are permitted to execute concurrently with the logged-on user's own processes, the logged-on user does not get his or her guaranteed response.

The second method is to kill the remote process.
The main disadvantage of this technique is that all the processing done for the remote process is lost, and the file system may be left in an inconsistent state, making this method unattractive.

The third method is to migrate the remote process back to its home workstation, so that its execution can be continued there. This method is difficult to implement because it requires the system to support a preemptive process migration facility, that is, the ability to stop a running process when a higher-priority process arrives for execution.

Thus we can say that the workstation model is a network of individual workstations, each with its own disk and a local file system. The Sprite system and an experimental system developed at Xerox PARC are two examples of distributed computing systems based on the workstation model.

4.3) Workstation-Server Model

The workstation-server model consists of a few minicomputers and numerous workstations (both diskful and diskless, though most of them are diskless) connected by a high-speed communication network. A workstation with its own local disk is called a diskful workstation, and a workstation without a local disk is called a diskless workstation.

The file systems used by these workstations are implemented either by a diskful workstation or by a minicomputer equipped with a disk for file storage. One or more of the minicomputers are used for implementing the file system. Other minicomputers may be used for providing other types of services, such as database service and print service. Thus, each minicomputer is used as a server machine to provide one or more types of services.
Therefore, in the workstation-server model, in addition to the workstations there are dedicated machines (possibly specialized workstations) for running server processes (called servers) that manage and provide access to shared resources.

A user logs onto a workstation called his home workstation. Normal computation activities required by the user's processes are performed at the home workstation, but requests for services provided by special servers, such as a file server or a database server, are sent to the server providing that type of service, which performs the requested activity and returns the result to the user's workstation. Therefore, in this model, the user's processes need not be migrated to the server machines to get the work done by those machines.

For better overall system performance, the local disk of a diskful workstation is normally used for purposes such as storage of temporary files, storage of unshared files, storage of shared files that are rarely changed, paging activity in virtual-memory management, and caching of remotely accessed data.

The workstation-server model is better than the workstation model in the following ways:

It is much cheaper to use a few minicomputers equipped with large, fast disks than a large number of diskful workstations, each with a small, slow disk.

Diskless workstations are also preferred to diskful workstations from a system maintenance point of view.
Backup and hardware maintenance are easier to perform with a few large disks than with many small disks scattered all over the building or campus. Furthermore, installing new releases of software (such as a file server with new functionality) is easier when the software only has to be installed on a few file server machines rather than on every workstation.

In the workstation-server model, since all files are managed by the file servers, users have the flexibility to use any workstation and access files in the same manner regardless of which workstation they are currently logged onto. This is not true of the workstation model, in which each workstation has its own local file system and different mechanisms are needed to access local and remote files.

Unlike the workstation model, this model does not need a process migration facility, which is difficult to implement. In this model, a client process (on a workstation) sends a request to a server process (on a minicomputer) for some service, such as reading a block of a file. The server executes the request and sends back a reply to the client containing the result of the request processing.

Users get guaranteed response time because workstations are not used for executing remote processes. However, the model does not utilize the processing capability of idle workstations.

The V-System (Cheriton 1988) is an example of a distributed computing system based on the workstation-server model.

4.4) Processor-Pool Model

In the processor-pool model, the processors are pooled together to be shared by the users as needed. The pool of processors consists of a large number of microcomputers and minicomputers attached to the network. Each processor in the pool has its own memory in which to load and run a system program or an application program of the distributed computing system.
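The request/reply interaction just described can be sketched with a toy "file server". This is an illustrative sketch using TCP sockets on one machine, with a made-up READ protocol rather than any real file service:

```python
import socket
import threading

def file_server(sock):
    # Server process: executes the client's request and sends back a
    # reply, as in the workstation-server model.
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        if request.startswith("READ "):
            conn.sendall(b"contents of " + request[5:].encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))          # bind to any free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=file_server, args=(server,), daemon=True).start()

# Client process: a workstation requesting a block of a file.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"READ /etc/motd")
reply = client.recv(1024).decode()
client.close()
print(reply)  # contents of /etc/motd
```

Note that the client never migrates anywhere: only the small request and reply messages cross the network, which is exactly the property that lets this model avoid process migration.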
The processor-pool model is based on the observation that most of the time a user does not need any computing power, but once in a while he may need a very large amount of it for a short time (e.g., when recompiling a program consisting of a large number of files after changing a basic shared declaration).

In the processor-pool model, the processors in the pool have no terminals attached directly to them; users access the system from terminals that are attached to the network via special devices. These terminals are either small diskless workstations or graphics terminals. A special server, called a run server, manages and allocates the processors in the pool to different users on a demand basis. When a user submits a job for computation, an appropriate number of processors are temporarily assigned to his or her job by the run server. In this model there is no concept of a home machine; when a user logs on, he is logged on to the whole system by default.

The processor-pool model allows better utilization of the available processing power of a distributed computing system, since the entire processing power of the system is available to the currently logged-on users. This is not true of the workstation-server model, in which several workstations may be idle at a particular time yet cannot be used for processing the jobs of other users.

Furthermore, the processor-pool model provides greater flexibility than the workstation-server model, since the system's services can be easily expanded without the need to install any more computers; processors in the pool can be allocated to act as extra servers to carry any additional load arising from an increased user population, or to provide new services.

However, the processor-pool model is usually considered unsuitable for high-performance interactive applications, mainly because of the slow communication between the computer on which a user's program is being executed and the terminal via which the user is interacting with the system.
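The run server's job of temporarily assigning processors on demand can be sketched as follows. This is a minimal sketch; the class and names are invented for illustration and do not correspond to any real system's API:

```python
import queue

class RunServer:
    # A toy "run server": it owns the pool and temporarily assigns
    # processors to jobs on a demand basis.
    def __init__(self, processor_ids):
        self.pool = queue.Queue()
        for pid in processor_ids:
            self.pool.put(pid)

    def allocate(self, n):
        # Assign n processors to a job (blocks if none are free).
        return [self.pool.get() for _ in range(n)]

    def release(self, pids):
        # Return processors to the pool when the job finishes.
        for pid in pids:
            self.pool.put(pid)

server = RunServer(["p0", "p1", "p2", "p3"])
job = server.allocate(2)     # e.g. a large compile gets 2 processors
print(job)                   # ['p0', 'p1']
print(server.pool.qsize())   # 2 processors remain available
server.release(job)          # all 4 are available again
```

Because jobs borrow processors only for their duration, the whole pool is available to whichever users are active, which is the utilization advantage described above.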
The workstation-server model is generally considered more suitable for such applications. Amoeba [Mullender et al. 1990], Plan 9 [Pike et al. 1990], and the Cambridge Distributed Computing System [Needham and Herbert 1982] are examples of distributed computing systems based on the processor-pool model.

5) ISSUES IN DESIGNING A DISTRIBUTED OPERATING SYSTEM

Designing a distributed operating system is a more difficult task than designing a centralized operating system, for several reasons. In the design of a centralized operating system, it is assumed that the operating system has access to complete and accurate information about the environment in which it is functioning. In a distributed system, the resources are physically separated, there is no common clock among the multiple processors, the delivery of messages is delayed, and the system does not have up-to-date, consistent knowledge about the state of its various components. This lack of up-to-date, consistent information makes many things (such as management of resources and synchronization of cooperating activities) much harder in the design of a distributed operating system. For example, it is hard to schedule the processors optimally if the operating system is not sure how many of them are up at the moment.

A distributed operating system must therefore be designed to provide all the advantages of a distributed system to its users. That is, the users should be able to view a distributed system as a virtual centralized system that is flexible, efficient, reliable, secure, and easy to use. To meet this challenge, designers of a distributed operating system must deal with several design issues. Some of the key design issues are:

5.1) Transparency

The main goal of a distributed operating system is to make the existence of multiple computers invisible (transparent), that is, to provide each user with the feeling that he is the only user working on the system.
That is, a distributed operating system must be designed in such a way that a collection of distinct machines connected by a communication subsystem appears to its users as a virtual uniprocessor.

Access Transparency

Access transparency means that users should not need to know, or be able to tell, whether a resource (hardware or software) is remote or local. The distributed operating system should allow users to access remote resources in the same way as local resources. That is, the user should not be able to distinguish between local and remote resources, and it should be the responsibility of the distributed operating system to locate resources and to arrange for servicing user requests in a user-transparent manner.

Location Transparency

Location transparency is achieved if the name of a resource is kept hidden and user mobility is provided, that is:

Name transparency: The name of a resource (hardware or software) should not reveal any hint as to the physical location of the resource. Furthermore, resources that are capable of being moved from one node to another in a distributed system (such as a file) must be allowed to move without having their names changed. Therefore, resource names must be unique system-wide.

User mobility: No matter which machine a user is logged onto, he should be able to access a resource with the same name; he should not require two different names to access the same resource from two different nodes of the system. In a distributed system that supports user mobility, users can freely log on to any machine in the system and access any resource without making any extra effort.

Replication Transparency

Replicas (copies) of files and other resources are created by the system for better performance and for reliability of the data in case of loss. These replicas are placed on different nodes of the distributed system.
Both the existence of multiple copies of a replicated resource and the replication activity should be transparent to the users. Two important issues related to replication transparency are naming of replicas and replication control. It is the responsibility of the system to name the various copies of a resource and to map a user-supplied name of the resource to an appropriate replica. Furthermore, replication control decisions, such as how many copies of a resource should be created, where each copy should be placed, and when a copy should be created or deleted, should be made entirely automatically by the system in a user-transparent manner.

Failure Transparency

Failure transparency deals with hiding partial failures in the system from the users, such as a communication link failure, a machine failure, or a storage device crash. A distributed operating system having the failure transparency property will continue to function, perhaps in a degraded form, in the face of partial failures. For example, suppose the file service of a distributed operating system is to be made failure transparent. This can be done by implementing it as a group of file servers that closely cooperate with each other to manage the files of the system, and that function in such a manner that the users can utilize the file service even if only one of the file servers is up and working. In this case, the users do not notice the failure of one or more file servers, except for slower performance of file access operations. However, not all services can be implemented in this way: an attempt to design a completely failure-transparent distributed system would result in a very slow and highly expensive system, due to the large amount of redundancy required for tolerating all types of failures.

Migration Transparency

An object is migrated from one node to another for better performance, reliability, and security.
The aim of migration transparency is to ensure that the movement of the object is handled automatically by the system in a user-transparent manner. Three important issues in achieving this goal are as follows:

Migration decisions, such as which object is to be moved from where to where, should be made automatically by the system.

Migration of an object from one node to another should not require any change in its name.

When the migrating object is a process, the interprocess communication mechanism should ensure that a message sent to the migrating process reaches it without the sender needing to resend it if the receiver process moves to another node before the message is received.

Concurrency Transparency

In a distributed system, multiple users use the system concurrently. In such a situation, it is economical to share the system resources (hardware or software) among the concurrently executing user processes. However, since the number of available resources in a computing system is restricted, one user's processes must necessarily influence the execution of other concurrently executing processes. For example, concurrent updates to the same file by two different processes should be prevented. Concurrency transparency means that each user has the feeling that he is the sole user of the system and that other users do not exist in the system.
For providing concurrency transparency, the resource sharing mechanisms of the distributed operating system must have the following properties:

An event-ordering property ensures that all access requests to various system resources are properly ordered, to provide a consistent view to all users of the system.

A mutual-exclusion property ensures that, at any time, at most one process accesses a shared resource that must not be used simultaneously by multiple processes if program operation is to be correct.

A no-starvation property ensures that if every process that is granted such a resource eventually releases it, then every request for that resource is eventually granted.

A no-deadlock property ensures that a situation will never occur in which competing processes prevent their mutual progress even though no single one of them requests more resources than are available in the system.

Performance Transparency

The aim of performance transparency is to allow the system to be automatically reconfigured to improve performance as loads vary dynamically in the system.
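The mutual-exclusion property above can be sketched with a lock guarding a shared counter. This is a single-machine sketch using threads; in a real distributed system the same guarantee requires a distributed mutual exclusion algorithm rather than a local lock.

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    # Each process must acquire the lock before touching the shared
    # resource, so at most one of them accesses it at any time.
    global counter
    for _ in range(10_000):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: the concurrent updates did not corrupt the value
```

Without the lock, the four workers could interleave their read-modify-write sequences and lose updates, which is exactly the kind of interference concurrency transparency is meant to hide from users.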
