Here is a list of selected research works exploring the intersection of music and art with computer science, human-computer interaction, and artificial intelligence. They are arranged in reverse chronological order.

Liminal Space (2018)

Liminal Space is an aleatoric piece for cello, motion capture, and interactive software. It explores what happens when the past – J.S. Bach’s Sarabande from Cello Suite No. 1 in G major (BWV1007) – meets the present, i.e., movement computing, stochastic music, and interaction design. Through aleatoric means, the composition creates an interface between a cellist and a dancer. As the dancer moves, she creates sounds. The two performers engage in a musical dialog, utilizing Bach’s original material.

Liminal Space was originally performed as part of the music program of the 15th Sound & Music Computing Conference (SMC 2018), in Limassol, Cyprus, Jul. 2018.

For more information on the technology involved, and a description of the piece, see

The Veil (2017)

The Veil is an experiment in musical group dynamics, i.e., musical interaction and collaboration among performers. It was first presented at the Music Library of Greece, in Athens, Greece, Dec. 2017.

The Veil framework involves stitching together several Kinect sensors and other motion sensors (e.g., Leap Motion) into a cohesive whole, with a common coordinate system for registering user movement and assigning semantics for musical interaction. It is used to explore the musical language and gestures that may emerge, given a particular set of mappings for interaction.
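The registration step can be sketched as a rigid transform per sensor into one shared frame. The calibration values below are hypothetical, for illustration only, and the sketch is 2D for brevity:

```python
import math

def make_transform(angle_deg, tx, ty):
    """2D rigid transform (rotation + translation) taking a sensor's
    local coordinates into the shared room frame.  The calibration
    values used below are illustrative, not the installation's."""
    a = math.radians(angle_deg)
    def transform(x, y):
        rx = x * math.cos(a) - y * math.sin(a) + tx
        ry = x * math.sin(a) + y * math.cos(a) + ty
        return (rx, ry)
    return transform

# Hypothetical calibration: two Kinects viewing the stage from different corners.
sensors = {
    "kinectA": make_transform(0.0, 0.0, 0.0),   # reference sensor
    "kinectB": make_transform(90.0, 3.0, 0.0),  # rotated 90 degrees, offset 3 m
}

def register(sensor_id, x, y):
    """Map a skeleton joint seen by one sensor into the common frame."""
    return sensors[sensor_id](x, y)
```

Once all joints live in one coordinate system, interaction semantics (e.g., which region of the room triggers which sound) can be assigned independently of which sensor saw the performer.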

For more information, see

  • a brief demonstration at the Music Library of Greece (Dec. 2017), and
  • an article (in Greek) on The Veil and Computing in the Arts, in general.

SoundMorpheus (2016)

SoundMorpheus is a sound spatialization and shaping interface, which allows the placement of sounds in space, as well as the altering of sound characteristics, via arm movements that resemble those of a conductor. The interface displays sounds (or their attributes) to the user, who reaches for them with one or both hands, grabs them, and gently or forcefully sends them around in space, in a 360° circle. The system combines MIDI and traditional instruments with one or more myoelectric sensors.

These components may be physically collocated or distributed across locales connected via the Internet. The system also supports the performance of acousmatic and electronic music, enabling performances where the traditionally central mixing board need not be touched at all (or only minimally, for calibration). Finally, the system may facilitate the recording of a visual score of a performance, which can be stored for later playback and additional manipulation.
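The spatialization idea can be sketched as equal-power panning around a circular speaker ring; the speaker count and mapping below are assumptions for illustration, not the system's actual design:

```python
import math

def ring_gains(azimuth_deg, n_speakers=8):
    """Place a sound on a circular speaker ring by crossfading between
    the two nearest speakers (equal-power panning).  The azimuth would
    come from the performer's arm direction; parameters are hypothetical."""
    spacing = 360.0 / n_speakers
    pos = (azimuth_deg % 360.0) / spacing      # fractional speaker index
    lo = int(pos) % n_speakers
    hi = (lo + 1) % n_speakers
    frac = pos - int(pos)
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)   # equal-power crossfade
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains
```

Equal-power (rather than linear) crossfading keeps perceived loudness constant as the sound sweeps around the circle, since the two gains always satisfy g1² + g2² = 1.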

For more information, see

Time is All I Have Now (2016)

This piece combines aesthetic image sonification and computer programming (i.e., sonifying an aesthetically pleasing image with the intent of preserving / mapping that aesthetic onto sound) with traditional music composition techniques (the latter provided by Maggie Dimogiannopoulou). Leslie Jones on cello. Recorded at the 2016 NSF Workshop on Computing in the Arts, UNC Asheville, May 2016.
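A minimal sketch of the sonification idea follows; the brightness-to-pitch mapping is illustrative only, and the piece's actual aesthetic mapping is more elaborate:

```python
def sonify_column(column, low_pitch=48, high_pitch=84):
    """Map one column of grayscale pixel values (0-255) to MIDI pitches,
    brighter pixels -> higher pitches.  An illustrative mapping only."""
    span = high_pitch - low_pitch
    return [low_pitch + round(v / 255 * span) for v in column]

# Scanning the image left to right, each column becomes a chord/event.
image = [[0, 128, 255], [64, 64, 64]]          # tiny stand-in "image"
events = [sonify_column(col) for col in image]
```

Preserving the image's aesthetic would additionally require mapping contrast, color, and spatial structure onto timbre, dynamics, and timing, which this sketch omits.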

For more information, see

Migrant (2015)

Migrant is a cyclic piece combining data sonification, interactivity, and sound spatialization. It utilizes migration data collected over 23 years from 56,976 people across 545 US counties and 43 states. From these, 120 people were selected at random, and each person becomes a single note. Melody, harmony, and dynamics are driven by the data. The composition sets the golden ratio against harmonic density, dissonance, and resolution (hint – listen carefully for the sounds behind the sounds). It looks at people’s lives as interweaving – sometimes consonant, sometimes dissonant, and many times somewhere in between – unresolved.
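The person-becomes-a-note idea can be sketched as follows; the field names, value ranges, and mapping here are hypothetical, not the piece's actual data schema:

```python
import random

def person_to_note(person, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Map one migration record to a single note.  Illustrative mapping:
    pitch from a (hypothetical) state index via a major scale, loudness
    from distance moved.  Not the piece's actual scheme."""
    degree = person["state"] % len(scale)
    octave = 4 + (person["state"] // len(scale)) % 3
    pitch = 12 * octave + scale[degree]
    velocity = min(127, 40 + person["distance_km"] // 50)
    return (pitch, velocity)

random.seed(42)
people = [{"state": random.randrange(43), "distance_km": random.randrange(3000)}
          for _ in range(120)]                 # 120 randomly selected records
notes = [person_to_note(p) for p in people]
```

Harmonic density and dissonance then emerge from how many of these notes sound together at any moment, which is where the compositional decisions (and the golden ratio) come in.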

Migrant was originally composed for Undomesticated, a public-art installation in the context of ArtFields 2015, Lake City, SC, USA.

It was originally performed as part of the ISMIR 2015 music program, in Málaga, Spain, Oct. 2015 (Bill Manaris, guitar).

Also, here is another performance at the American College of Greece, Mar. 2016 (John Bafaloukas, piano).

For more information, see

Diving Into Infinity (2015)

Diving into Infinity is a Kinect-based system which explores ways to interactively navigate M.C. Escher’s works involving infinite regression. It focuses on Print Gallery, an intriguing, self-similar work created by M.C. Escher in 1956.

The interaction design allows a user to zoom in and out, as well as rotate the image to reveal its self-similarity, by navigating prerecorded video material. This material is based on previous mathematical analyses of Print Gallery that reveal / explain the artist’s depiction of infinity. The system utilizes a Model-View-Controller (MVC) architecture over OSC.
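The controller and view can talk over OSC with very little machinery. Below is a minimal OSC 1.0 message encoder (float32 arguments only), enough for a Kinect-reading controller to tell the view to zoom or rotate; the /view/zoom address is a hypothetical example, not the system's actual namespace:

```python
import struct

def osc_pad(b):
    """NUL-pad bytes to a multiple of four, per the OSC 1.0 spec
    (strings are NUL-terminated, then padded)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC message with float32 arguments:
    padded address, padded type-tag string, big-endian floats."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for a in args:
        msg += struct.pack(">f", float(a))     # big-endian float32
    return msg

packet = osc_message("/view/zoom", 1.5)        # ready to send over UDP
```

Decoupling over OSC like this lets the motion-tracking controller and the video-navigation view run as separate processes, or even on separate machines.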

For more information, see

Time Jitters (2014)

Time Jitters is a four-projector interactive installation, which was designed by Los Angeles-based visual artist Jody Zellen for the Halsey Institute of Contemporary Art in Charleston, SC, USA, Jan. 2014.

Time Jitters includes two walls displaying video animation, and two walls with interactive elements. The concept is to create an immersive experience for participants, which confronts them with a bombardment of visual and sound elements. This project synthesizes AI, interaction, music and visual art. It utilizes invisible, computer-based intelligent agents, which interact with participants. Each person entering the installation space is tracked by a computer-based agent. The agent presents a unique image and sounds, which change as the person moves through the space.
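The one-agent-per-visitor idea can be sketched as follows; the media pool and the position-to-parameter mapping are invented for illustration, not the installation's actual logic:

```python
class Agent:
    """One invisible agent per tracked visitor.  Each agent owns an
    image/sound assignment and re-derives presentation parameters from
    the person's position.  (A sketch of the idea only.)"""
    def __init__(self, person_id):
        self.person_id = person_id
        self.image = f"image_{person_id % 8}.png"   # hypothetical media pool

    def update(self, x, y):
        # Position modulates playback: e.g., x pans sound, y picks a layer.
        return {"image": self.image, "pan": x, "layer": int(y) % 4}

agents = {}

def track(person_id, x, y):
    """Called for every person the tracking camera reports each frame:
    new visitors get a fresh agent; returning IDs reuse theirs."""
    if person_id not in agents:
        agents[person_id] = Agent(person_id)
    return agents[person_id].update(x, y)
```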

For more information, see

JythonMusic (2014)

JythonMusic is a software environment for developing interactive musical experiences and systems. It is based on jMusic, a software environment for computer-assisted composition, which was extended over the last decade into a more comprehensive framework providing composers and software developers with libraries for making music, manipulating images, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. This environment is free and open source. It is meant for musicians and programmers alike, of all levels and backgrounds. For instance, here is a first-year university class performing Terry Riley’s “In C”.
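To give a flavor of how “In C” works as an ensemble process (each player repeats the current pattern and advances to the next at their own pace), here is a plain-Python simulation of that pacing logic; JythonMusic itself runs under Jython, so this is only a scheduling sketch:

```python
import random

def perform_in_c(n_players, n_patterns, rng):
    """Simulate the ensemble logic of Terry Riley's 'In C': each player
    repeats the current pattern, moving on at their own pace; the piece
    ends when everyone reaches the final pattern.  A sketch only."""
    position = [0] * n_players
    timeline = []
    while min(position) < n_patterns - 1:
        for p in range(n_players):
            # Each round, each player may choose to advance (30% here).
            if position[p] < n_patterns - 1 and rng.random() < 0.3:
                position[p] += 1
        timeline.append(tuple(position))
    return timeline
```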

JythonMusic is based on Python, and therefore provides more economical syntax relative to Java- and C/C++-like languages. It rests on top of Java, so it provides access to the complete Java API and to external Java-based libraries as needed. It also works seamlessly with other tools, such as Pd, Max/MSP, and Processing. It is being actively used to develop interactive sound art installations, new interfaces for sound manipulation and spatialization, and various explorations of mappings among motion, gesture, and music.

For more information, see

Harmonic Navigator (2013)

Harmonic Navigator is a real-time, interactive system for navigating vast harmonic spaces in music corpora. It provides a high-level view of the harmonic (chord) changes that occur in such corpora, and may be used to generate new pieces, by stitching together chords in meaningful ways.
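The chord-stitching idea can be sketched with a first-order transition table; the toy corpus of Roman-numeral progressions below stands in for the 371 Bach chorales the real system uses:

```python
from collections import defaultdict

def build_transitions(chord_sequences):
    """Count chord-to-chord transitions across a corpus."""
    table = defaultdict(lambda: defaultdict(int))
    for seq in chord_sequences:
        for a, b in zip(seq, seq[1:]):
            table[a][b] += 1
    return table

def suggestions(table, chord):
    """Rank possible next chords, most common first -- the 'meaningful
    stitching' step a user navigates."""
    nxt = table[chord]
    return sorted(nxt, key=nxt.get, reverse=True)

# Toy stand-in corpus (the real system learns from 371 Bach chorales).
corpus = [["I", "IV", "V", "I"], ["I", "V", "I"], ["I", "IV", "I"]]
table = build_transitions(corpus)
```

From any current chord, the ranked suggestions give the high-level view of the harmonic space: follow the most common continuation, or deliberately pick a rare one.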

A Piece

This piece was generated by the system by exploring the harmonic space of 371 Bach chorales. In this recording, it is performed by the Student String Orchestra at the College of Charleston (conducted by Yiorgos Vassilandonakis).


Visual Navigation – example 1

Here is one user interface for navigating harmonic spaces. In this example we use 371 Bach chorales. The system makes suggestions (yellow circle), which the user may follow or ignore. Red denotes dissonance, blue denotes consonance, and shades of purple denote everything in between.

Interesting possibilities emerge, as harmonies flow into new and unexpected places, yielding new ideas for inspiration and exploration.
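The red-to-blue coloring can be sketched as a linear mapping from a consonance score to RGB; this is an illustration of the scheme described above, not the system's actual palette code:

```python
def consonance_color(c):
    """Map a consonance score in [0, 1] to an RGB triple:
    0 -> pure red (dissonant), 1 -> pure blue (consonant),
    mid-range values -> shades of purple."""
    red = round(255 * (1 - c))
    blue = round(255 * c)
    return (red, 0, blue)
```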


Visual Navigation – example 2

Here is another user interface for navigating harmonic spaces. In this example we use 371 Bach chorales. The system generates a harmonic flow, presenting all alternatives at every step (chord). This user interface allows the user to scrub back and forth and select different alternatives. Red denotes dissonance, blue denotes consonance, and shades of purple denote everything in between.

The user makes selections, and the system outputs the generated chord sequence.


For more information, see

Monterey Mirror (2011)

Monterey Mirror is an experiment in interactive music performance. It engages a human performer and a computer (the mirror) in a game of playing, listening, and exchanging musical ideas.

The computer player employs an interactive stochastic music generator, which incorporates Markov models, genetic algorithms, and power-law metrics. This approach combines the predictive power of Markov models with the innovative power of genetic algorithms, using power-law metrics for fitness evaluation.

For more information, see
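Here is a sketch of the Markov core only: the model learns from a phrase it "heard" and generates a statistically similar reply. In the actual system, a genetic algorithm evolves candidate responses and power-law metrics score them; none of that is shown here:

```python
import random
from collections import defaultdict

def train(notes):
    """First-order Markov model of the performer's phrase:
    for each note, remember which notes followed it."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def respond(model, start, length, rng):
    """Generate the 'mirror' response: statistically similar to the
    input phrase, but not identical."""
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:                  # dead end: restart from any state
            choices = list(model)
        out.append(rng.choice(choices))
    return out

rng = random.Random(7)
heard = [60, 62, 64, 62, 60, 62, 64, 65, 64]   # a phrase the system 'heard'
model = train(heard)
reply = respond(model, heard[0], 8, rng)
```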

Armonique (2009)

Armonique is a music search engine where users navigate large musical collections based solely on the similarity of the music itself, as measured by hundreds of music-similarity metrics based on Zipf’s Law. In contrast, the majority of online music-similarity engines rely on user listening habits and tagging by humans. These include systems like Pandora, where musicologists listen to and carefully tag every new song across numerous dimensions, and other systems that capture users’ listening preferences and ratings.

Our approach uses 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we automatically create our own metadata (e.g., artist, style, or timbre data) by analyzing the song content and finding patterns within the music. Since this extraction requires no human interaction (from musicologists or listeners), it can scale with rapidly growing data sets. The main advantages of this technique are that (a) it requires no human pre-processing, and (b) it allows users to discover songs of interest that are rarely listened to and hard to find otherwise.
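One of the simplest power-law measurements is the slope of the rank-frequency distribution on a log-log scale; ideal Zipfian data gives a slope near -1. The sketch below computes it by least squares for a stream of events (e.g., MIDI pitches) and is one illustrative metric, not the engine's actual implementation:

```python
import math
from collections import Counter

def zipf_slope(events):
    """Slope of the log(frequency) vs. log(rank) line for a stream of
    events.  Near -1 suggests a Zipfian distribution; near 0 suggests
    a uniform one.  One illustrative metric out of the 250+ used."""
    freqs = sorted(Counter(events).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den                     # least-squares slope
```

A song's vector of such slopes (over pitch, duration, interval, and so on) is then what the similarity comparisons operate on.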

For more information, see

NEvMuse (2007)

NEvMuse (Neuro-Evolutionary Music environment) is a prototype of an evolutionary music composer, which evolves music using artificial music critics based on power laws.

Tools based on this framework could be utilized by human composers to

  • help generate new ideas,
  • help overcome “writer’s block”, and
  • help explore new compositional spaces.
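The evolutionary loop itself can be sketched as follows; the "critic" here is a toy stand-in (preferring stepwise motion), whereas NEvMuse's critics are power-law metrics, and the real system is considerably more elaborate:

```python
import random

def evolve(critic, generations=30, pop_size=20, length=16, seed=1):
    """A minimal evolutionary loop: score melodies with a critic, keep
    the fitter half, and refill the population by mutating survivors.
    A sketch of the idea, not NEvMuse itself."""
    rng = random.Random(seed)
    pop = [[rng.randrange(60, 72) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=critic, reverse=True)      # best first
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.randrange(60, 72)  # mutation
            children.append(child)
        pop = survivors + children
    return pop[0]

# Toy critic: prefer stepwise motion (smaller melodic leaps score higher).
smooth = lambda m: -sum(abs(a - b) for a, b in zip(m, m[1:]))
best = evolve(smooth)
```

Swapping the toy critic for power-law metrics is what turns this generic loop into an "artificial music critic" guiding composition.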

Several experiments have been conducted, exploring the system’s ability to “compose” novel, aesthetically pleasing music.  For example, here are two pieces composed by humans utilizing output from the above tool:


A piece composed by Bill Manaris using NEvMuse’s Variation H.  For more info, see here.



A piece composed by Patrick Roos using NEvMuse’s Variation Z and Variation Q. For more info, see here.



We use 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we can automatically classify music according to style, composer, and even perceived pleasantness (or popularity).  For example, these figures show calculated differences between J.S. Bach’s pieces BWV 500 through BWV 599, and Beethoven’s piano sonatas (1 through 32).  For more info, see here.

BachScape – a 3D contour map of six metrics over 32 Bach pieces.

BeethovenScape – a 3D contour map of six metrics over 32 Beethoven pieces.

For more information, see

Other Projects

For earlier projects, see Zipf’s Law, SUITEKeys, and NALIGE.

