Peer-Reviewed

Dynamic Load Balancing Using Periodically Load Collection with Past Experience Policy on Linux Cluster System

Received: 21 October 2016     Accepted: 9 January 2017     Published: 9 March 2017
Abstract

Fast execution of applications is achieved through parallel execution of their processes. A high-performance computing (HPC) cluster achieves this easily through concurrent processing on its compute nodes. The HPC cluster delivers supercomputing power by executing a dynamic load balancing algorithm on the compute nodes of the cluster. The main objective of a dynamic load balancing algorithm is to distribute the workload evenly among the compute nodes, increasing the overall efficiency of the clustered system. The logic of a dynamic load balancing algorithm requires parallel programming. Parallel programming on an HPC cluster can be achieved through the Message Passing Interface (MPI) in C. The MPI library plays a very important role in building a new load balancing algorithm. The workload on an HPC cluster system can be highly variable, increasing the difficulty of balancing the load across its compute nodes. This paper proposes a new variant of an existing dynamic load balancing algorithm that mixes the centralized and decentralized approaches; it is implemented on a Rocks cluster and gives better performance most of the time. This paper also compares the previous dynamic load balancing algorithm with the new one.

Published in American Journal of Mathematical and Computer Modelling (Volume 2, Issue 2)
DOI 10.11648/j.ajmcm.20170202.13
Page(s) 60-75
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2017. Published by Science Publishing Group

Keywords

MPI, Parallel Programming, HPC Clusters, DLBA, ARPLCLB, ARPLCPELB

References
[1] Bernd Freisleben, Dieter Hartmann, Thilo Kielmann (1997) "Parallel Raytracing: A Case Study on Partitioning and Scheduling on Workstation Clusters", Thirtieth Annual Hawaii International Conference on System Sciences.
[2] Blaise Barney (1994) Livermore Computing; MPI web pages at Argonne National Laboratory, http://www-unix.mcs.anl.gov/mpi; W. Gropp, E. Lusk and A. Skjellum, "Using MPI", MIT Press.
[3] Erik D. Demaine, Ian Foster, Carl Kesselman, and Marc Snir (2001) "Generalized Communicators in the Message Passing Interface", IEEE Transactions on Parallel and Distributed Systems, pp. 610-616.
[4] Hau Yee Sit, Kei Shiu Ho, Hong Va Leong, Robert W. P. Luk, Lai Kuen Ho (2004) "An Adaptive Clustering Approach to Dynamic Load Balancing", IEEE 7th International Symposium on Parallel Architectures, Algorithms and Networks (ISPAN'04).
[5] Janhavi B., Sunil Surve, Sapna Prabhu (2010) "Comparison of load balancing algorithms in a Grid", International Conference on Data Storage and Data Engineering, pp. 20-23.
[6] M. Snir, S. W. Otto, S. Huss-Lederman, D. W. Walker and J. Dongarra (1995) MPI: The Complete Reference, MIT Press, Cambridge, MA; see also W. Gropp et al., Parallel Computing 22 (1996) 789-828.
[7] Marta Beltrán and Antonio Guzmán (2008) "Designing load balancing algorithms capable of dealing with workload variability", International Symposium on Parallel and Distributed Computing, pp. 107-114.
[8] Parimah Mohammadpour, Mohsen Sharifi, Ali Paikan (2008) "A Self-Training Algorithm for Load Balancing in Cluster Computing", IEEE Fourth International Conference on Networked Computing and Advanced Information Management, pp. 104-110.
[9] Paul Werstein, Hailing Situ and Zhiyi Huang (2006) "Load Balancing in a Cluster Computer", Proceedings of the Seventh International Conference on Parallel and Distributed Computing, Applications and Technologies.
[10] Sharada Patil, Arpita Gopal, Pratibha Mandave (2013) "Parallel programming through Message Passing Interface to improving performance of clusters", International Doctoral Conference (ISSN 0974-0597), SIOM, Wadgaon Budruk, Feb. 2013.
[11] Sharada Patil, Arpita Gopal (2011) "Comparison of Cluster Scheduling Mechanism using Workload and System Parameters", International Journal of Computer Science and Application, ISSN 0974-0767.
[12] Sharada Patil, Arpita Gopal (2011) "Study of Dynamic Load Balancing Algorithms for Linux Clustered System Using Simulator", International Journal of Computer Applications in Engineering Technology and Sciences, ISSN 0974-3588.
[13] Sharada Patil, Arpita Gopal (2011) "Study of Load Balancing Algorithms", National Conference Biztech 2011, DICER, Narhe, Pune, March 2011 (awarded best paper of the conference).
[14] Sharada Patil, Arpita Gopal (2013) "Cluster Performance Evaluation using Load Balancing Algorithm", International Conference on Information Communication and Embedded Systems (ICICES 2013), IEEE, ISBN 978-1-4673-5786-9, Chennai, India, Feb. 2013.
[15] Sharada Patil, Arpita Gopal (2012) "Need of New Load Balancing Algorithms for Linux Clustered System", International Conference on Computational Techniques and Artificial Intelligence (ICCTAI 2012), ISBN 978-81-922428-5-9, Penang, Malaysia, Jan. 2012.
[16] Sharada Patil, Arpita Gopal (2013) "Enhancing Performance of Business by Using Extracted Supercomputing Power from Linux Clusters", International Conference FDI 2013 (ISSN 0974-0597), SIOM, Wadgaon Budruk, Jan. 2013.
[17] Sun Nian, Liang Guangmin (2010) "Dynamic Load Balancing Algorithm for MPI Parallel Computing", pp. 95-99.
[18] William Gropp, Rusty Lusk, Rob Ross, and Rajiv Thakur (2005) "MPI Tutorials", retrieved from www.mcs.anl.gov/research/projects/mpi/tutorial (includes Livermore Computing specific information).
[19] Yanyong Zhang, Anand Sivasubramaniam, Jose Moreira, and Hubertus Franke (2001) "Impact of Workload and System Parameters on Next Generation Cluster Scheduling Mechanisms", IEEE Transactions on Parallel and Distributed Systems, pp. 967-985.
[20] Yongzhi Zhu, Jing Guo, Yanling Wang (2009) "Study on Dynamic Load Balancing Algorithm Based on MPICH", World Congress on Software Engineering, pp. 103-107.
[21] Michel Daydé, Jack Dongarra (2005) "High Performance Computing for Computational Science - VECPAR 2004", ISBN 3-540-25424-2, pp. 120-121.
[22] G. Burns and R. Daoud (1995) "MPI Cubix: Collective POSIX I/O operations for MPI", Tech. Rept. OSC-TR-1995-10, Ohio Supercomputer Center.
[23] Sharada Santosh Patil, Arpita N. Gopal (2013) "Authority Ring Periodically Load Collection for Load Balancing of Cluster System", American Journal of Networks and Communications, Vol. 2, No. 5, pp. 133-139. doi: 10.11648/j.ajnc.20130205.13.
Cite This Article
  • APA Style

    Sharada Santosh Patil, Arpita Nirbhay Gopal. (2017). Dynamic Load Balancing Using Periodically Load Collection with Past Experience Policy on Linux Cluster System. American Journal of Mathematical and Computer Modelling, 2(2), 60-75. https://doi.org/10.11648/j.ajmcm.20170202.13


    ACS Style

    Sharada Santosh Patil; Arpita Nirbhay Gopal. Dynamic Load Balancing Using Periodically Load Collection with Past Experience Policy on Linux Cluster System. Am. J. Math. Comput. Model. 2017, 2(2), 60-75. doi: 10.11648/j.ajmcm.20170202.13


    AMA Style

    Sharada Santosh Patil, Arpita Nirbhay Gopal. Dynamic Load Balancing Using Periodically Load Collection with Past Experience Policy on Linux Cluster System. Am J Math Comput Model. 2017;2(2):60-75. doi: 10.11648/j.ajmcm.20170202.13


  • @article{10.11648/j.ajmcm.20170202.13,
      author = {Sharada Santosh Patil and Arpita Nirbhay Gopal},
      title = {Dynamic Load Balancing Using Periodically Load Collection with Past Experience Policy on Linux Cluster System},
      journal = {American Journal of Mathematical and Computer Modelling},
      volume = {2},
      number = {2},
      pages = {60-75},
      doi = {10.11648/j.ajmcm.20170202.13},
      url = {https://doi.org/10.11648/j.ajmcm.20170202.13},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajmcm.20170202.13},
      abstract = {Fast execution of applications is achieved through parallel execution of their processes. A high-performance computing (HPC) cluster achieves this easily through concurrent processing on its compute nodes. The HPC cluster delivers supercomputing power by executing a dynamic load balancing algorithm on the compute nodes of the cluster. The main objective of a dynamic load balancing algorithm is to distribute the workload evenly among the compute nodes, increasing the overall efficiency of the clustered system. The logic of a dynamic load balancing algorithm requires parallel programming. Parallel programming on an HPC cluster can be achieved through the Message Passing Interface (MPI) in C. The MPI library plays a very important role in building a new load balancing algorithm. The workload on an HPC cluster system can be highly variable, increasing the difficulty of balancing the load across its compute nodes. This paper proposes a new variant of an existing dynamic load balancing algorithm that mixes the centralized and decentralized approaches; it is implemented on a Rocks cluster and gives better performance most of the time. This paper also compares the previous dynamic load balancing algorithm with the new one.},
     year = {2017}
    }
    


  • TY  - JOUR
    T1  - Dynamic Load Balancing Using Periodically Load Collection with Past Experience Policy on Linux Cluster System
    AU  - Sharada Santosh Patil
    AU  - Arpita Nirbhay Gopal
    Y1  - 2017/03/09
    PY  - 2017
    N1  - https://doi.org/10.11648/j.ajmcm.20170202.13
    DO  - 10.11648/j.ajmcm.20170202.13
    T2  - American Journal of Mathematical and Computer Modelling
    JF  - American Journal of Mathematical and Computer Modelling
    JO  - American Journal of Mathematical and Computer Modelling
    SP  - 60
    EP  - 75
    PB  - Science Publishing Group
    SN  - 2578-8280
    UR  - https://doi.org/10.11648/j.ajmcm.20170202.13
    AB  - Fast execution of applications is achieved through parallel execution of their processes. A high-performance computing (HPC) cluster achieves this easily through concurrent processing on its compute nodes. The HPC cluster delivers supercomputing power by executing a dynamic load balancing algorithm on the compute nodes of the cluster. The main objective of a dynamic load balancing algorithm is to distribute the workload evenly among the compute nodes, increasing the overall efficiency of the clustered system. The logic of a dynamic load balancing algorithm requires parallel programming. Parallel programming on an HPC cluster can be achieved through the Message Passing Interface (MPI) in C. The MPI library plays a very important role in building a new load balancing algorithm. The workload on an HPC cluster system can be highly variable, increasing the difficulty of balancing the load across its compute nodes. This paper proposes a new variant of an existing dynamic load balancing algorithm that mixes the centralized and decentralized approaches; it is implemented on a Rocks cluster and gives better performance most of the time. This paper also compares the previous dynamic load balancing algorithm with the new one.
    VL  - 2
    IS  - 2
    ER  - 


Author Information
  • MCA Department, Sinhgad Institute of Business Administration and Research, Kondhwa, Pune, Maharashtra, India

  • MCA Department, Sinhgad Institute of Business Administration and Research, Kondhwa, Pune, Maharashtra, India
