Peer-Reviewed

Reproducing Musicality: Immediate Human-like Musicality Through Machine Learning and Passing the Turing Test

Received: 17 April 2021     Accepted: 12 May 2021     Published: 26 May 2021
Abstract

Musicology is a growing focus in computer science. Past research has succeeded in automatically generating music both through learning-based agents that make use of neural networks and through model- and rule-based approaches. These methods require a significant amount of information, either a large dataset for learning or a comprehensive set of rules grounded in musical concepts. This paper explores a model in which only a minimal amount of musical information is needed to compose music in a desired style. The model draws on two concepts: objectness and evolutionary computation. Objectness, an idea derived from imagery and pattern recognition, is used to extract specific musical objects from single musical inputs; these objects then serve as the foundation for algorithmically producing musical pieces that are similar in style to the original inputs. The pieces are generated by evolutionary algorithms that implement a sequential-evolution approach, in which a generated output may or may not yet fall fully within the fitness thresholds of the input pieces. This method eliminates the need for a large amount of pre-provided data, as well as the long processing times commonly associated with machine-learned artworks. This study aims to provide a proof of concept of the described model.
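The sequential-evolution idea described above, where a generated output is kept even before it fully reaches the fitness thresholds of the input, can be illustrated with a simple hill-climbing evolutionary loop. This is only a minimal sketch under assumed conventions, not the authors' actual implementation: the names (`evolve`, `fitness`, `mutate`) and the toy encoding of a melody as a list of MIDI pitch numbers are illustrative assumptions.

```python
import random

def evolve(seed, fitness, mutate, threshold, generations=200, pop_size=20):
    # Minimal evolutionary loop: mutate the current best candidate,
    # keep strict improvements, and stop early once the fitness
    # threshold is met.
    best = max((mutate(seed) for _ in range(pop_size)), key=fitness)
    for _ in range(generations):
        if fitness(best) >= threshold:
            break  # fully within the fitness threshold of the input
        candidate = max((mutate(best) for _ in range(pop_size)), key=fitness)
        if fitness(candidate) > fitness(best):
            best = candidate
    # Sequential/partial evolution: the result is returned even if it
    # has not yet reached the threshold.
    return best

# Toy usage (illustrative, not the paper's feature set): melodies are
# lists of MIDI pitches; fitness is the negative note-wise distance to
# a target melody, a crude stand-in for stylistic similarity.
random.seed(0)
target = [60, 62, 64, 65]

def fitness(melody):
    return -sum(abs(a - b) for a, b in zip(melody, target))

def mutate(melody):
    out = list(melody)
    i = random.randrange(len(out))
    out[i] += random.choice((-1, 1))  # nudge one note by a semitone
    return out

result = evolve(target, fitness, mutate, threshold=0)
```

Because the loop only ever replaces the current best with a strictly fitter candidate, fitness is non-decreasing across generations, which is what allows intermediate, below-threshold outputs to be returned safely.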

Published in American Journal of Artificial Intelligence (Volume 5, Issue 1)
DOI 10.11648/j.ajai.20210501.13
Page(s) 38-45
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Artificial Intelligence, Computer Music, Turing Testing

Cite This Article
  • APA Style

    Aran Samson, Andrei Coronel. (2021). Reproducing Musicality: Immediate Human-like Musicality Through Machine Learning and Passing the Turing Test. American Journal of Artificial Intelligence, 5(1), 38-45. https://doi.org/10.11648/j.ajai.20210501.13


Author Information
  • Aran Samson, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City, Philippines

  • Andrei Coronel, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City, Philippines
