Research

Below is a selection of research works exploring the intersection of music and art with computer science, human-computer interaction, and artificial intelligence. They are arranged in reverse chronological order.

Liminal Space (2018)

Liminal Space is an aleatoric piece for cello, motion capture, and interactive software. It explores what happens when the past – the Sarabande from J.S. Bach’s Cello Suite No. 1 in G major (BWV 1007) – meets the present: movement computing, stochastic music, and interaction design. Through aleatoric means, the composition creates an interface between a cellist and a dancer. As the dancer moves, she creates sounds. The two performers engage in a musical dialogue built on Bach’s original material.

Liminal Space was first performed as part of the SMC 2018 music program in Limassol, Cyprus, in July 2018.

The Veil (2017)

The Veil is an experiment in musical group dynamics, i.e., musical interaction and collaboration among performers. It was first presented at the Music Library of Greece, in Athens, Greece, in December 2017.

The Veil framework stitches several Kinect and other motion sensors (e.g., Leap Motion) into a cohesive whole, with a common coordinate system that registers user movement and assigns semantics for musical interaction. It is used to explore the musical language and gestures that may emerge, given a particular set of mappings for interaction.
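
For illustration, here is a minimal sketch (an assumption, not The Veil’s actual code) of how multiple sensors can be registered into one coordinate system: each sensor gets a calibrated rigid transform (rotation R, translation t) that maps its local readings into a shared stage frame, so joint positions from different devices become directly comparable.

```python
import numpy as np

# Hypothetical per-sensor calibration: a rotation (3x3) and translation (3,)
# mapping each device's local frame into the shared stage frame.
calibration = {
    "kinect_left":  (np.eye(3), np.array([-1.5, 0.0, 0.0])),
    "kinect_right": (np.eye(3), np.array([ 1.5, 0.0, 0.0])),
    "leap_motion":  (np.eye(3), np.array([ 0.0, 1.0, 0.5])),
}

def to_stage_frame(sensor, point):
    """Map a 3D point from a sensor's local frame into the stage frame."""
    R, t = calibration[sensor]
    return R @ np.asarray(point) + t

# A hand seen by the right Kinect, expressed in stage coordinates:
hand = to_stage_frame("kinect_right", [0.2, 1.1, 2.0])
```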

For more information, see

  • a brief demonstration at the Music Library of Greece (Dec. 2017), and
  • an article (in Greek) on The Veil and Computing in the Arts, in general.

SoundMorpheus (2016)

SoundMorpheus is a sound spatialization and shaping interface that allows the placement of sounds in space, as well as the altering of sound characteristics, via arm movements resembling those of a conductor. The interface displays sounds (or their attributes) to the user, who reaches for them with one or both hands, grabs them, and gently or forcefully sends them around in space, in a 360° circle. The system combines MIDI and traditional instruments with one or more myoelectric sensors.

These components may be physically collocated or distributed across locales connected via the Internet. The system also supports the performance of acousmatic and electronic music, enabling performances where the traditionally central mixing board need not be touched at all (or only minimally, for calibration). Finally, the system may facilitate the recording of a visual score of a performance, which can be stored for later playback and additional manipulation.
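
As a simplified illustration (a sketch under stated assumptions, not SoundMorpheus itself), here is one way an arm-direction azimuth could place a sound on a 360° speaker ring, with per-speaker gains falling off with angular distance:

```python
import math

def ring_gains(azimuth_deg, num_speakers=8, width_deg=90.0):
    """Return per-speaker gains for a sound 'thrown' toward azimuth_deg."""
    gains = []
    for i in range(num_speakers):
        speaker_deg = i * 360.0 / num_speakers
        # shortest angular distance between arm direction and speaker
        diff = abs((azimuth_deg - speaker_deg + 180.0) % 360.0 - 180.0)
        gains.append(max(0.0, 1.0 - diff / width_deg))  # linear falloff
    total = sum(gains) or 1.0
    return [g / total for g in gains]  # normalize overall level

print(ring_gains(45.0))  # most energy in the speakers nearest 45 degrees
```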

Migrant (2015)

Migrant is a cyclic piece combining data sonification, interactivity, and sound spatialization. It utilizes migration data collected over 23 years from 56,976 people across 545 US counties and 43 states; from this set, 120 people were randomly selected for the piece. The sonification design is as follows (a sketch appears after the list):

  • each note represents a single person,
  • melody, harmony, and dynamics are all driven by the data,
  • the length of each note metaphorically represents a person’s lifetime.
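
As a concrete, hypothetical illustration of this mapping (the field names and scalings below are assumptions, not the piece’s actual parameters), one person’s record might become a note like this:

```python
def person_to_note(person):
    """Map one person's record to a (pitch, duration, velocity) triple."""
    pitch = 36 + person["county_index"] % 48           # MIDI pitch from location
    duration = person["years_observed"] / 23.0 * 8.0   # lifetime -> note length (beats)
    velocity = 40 + person["num_moves"] * 10           # more moves -> louder
    return pitch, duration, min(velocity, 127)

# Example record (illustrative values only):
print(person_to_note({"county_index": 101, "years_observed": 14, "num_moves": 3}))
```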

The composition schema explores the golden ratio and notions of density, dissonance and resolution. It looks at people’s lives as interweaving – sometimes consonant, sometimes dissonant, or many times somewhere in between – unresolved.

Migrant was originally composed for Undomesticated, a public-art installation in the context of ArtFields 2015, Lake City, SC, USA (http://www.artfieldssc.org). The performance instructions allow for a performer who spatializes sound and controls the tempo, volume, and position of each note. The piece was first performed as part of the ISMIR 2015 music program in Málaga, Spain, in October 2015.


Diving Into Infinity (2015)

Diving into Infinity is a Kinect-based system that explores ways to interactively navigate M.C. Escher’s works involving infinite regression. It focuses on Print Gallery, an intriguing, self-similar work Escher created in 1956.

The interaction design allows a user to zoom in and out, as well as rotate the image to reveal its self-similarity, by navigating prerecorded video material. This material is based on previous mathematical analyses of Print Gallery that reveal and explain the artist’s depiction of infinity. The system utilizes a Model-View-Controller (MVC) architecture, with components communicating over OSC.
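
As an illustration of the messaging involved, here is a minimal sketch using the python-osc package; the addresses, port, and gesture-to-message mappings are assumptions, not the system’s actual protocol:

```python
from pythonosc.udp_client import SimpleUDPClient

view = SimpleUDPClient("127.0.0.1", 5005)   # the View listens on this port

# Controller reactions to (hypothetical) Kinect gestures:
view.send_message("/zoom", 1.25)    # zoom in by 25%
view.send_message("/rotate", 15.0)  # rotate 15 degrees to reveal self-similarity
view.send_message("/frame", 4321)   # jump to a frame of the prerecorded video
```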


Time Jitters (2014)

Time Jitters is a four-projector interactive installation, designed by Los Angeles-based visual artist Jody Zellen for the Halsey Institute of Contemporary Art in Charleston, SC, USA, in January 2014.

Time Jitters includes two walls displaying video animation and two walls with interactive elements. The concept is to create an immersive experience that confronts participants with a bombardment of visual and sound elements. The project synthesizes AI, interaction, music, and visual art: each person entering the installation space is tracked by an invisible, computer-based intelligent agent, which presents a unique image and sounds that change as the person moves through the space.
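
A minimal sketch of the per-participant agent idea (the class, fields, and update rule below are illustrative assumptions, not the installation’s actual code):

```python
import random

class Agent:
    """One agent per tracked participant, holding that person's media state."""
    def __init__(self, person_id):
        self.person_id = person_id
        self.image = random.choice(["crowd", "city", "static", "text"])
    def update(self, x, y):
        # derive sound parameters from position in the room (0..1 normalized)
        return {"image": self.image, "pitch": 200 + 600 * x, "volume": y}

agents = {}                        # person_id -> Agent
def on_tracking(person_id, x, y):  # called once per tracking frame
    agent = agents.setdefault(person_id, Agent(person_id))
    return agent.update(x, y)
```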


JythonMusic (2014)

JythonMusic is a free and open-source software environment for developing interactive musical experiences and systems. It is based on jMusic, an environment for computer-assisted composition, which was extended within the last decade into a more comprehensive framework providing composers and software developers with libraries for music making, image manipulation, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. It is meant for musicians and programmers alike, of all levels and backgrounds. For instance, here is a first-year university class performing Terry Riley’s “In C”.

JythonMusic is based on Python, so it provides more economical syntax than Java- and C/C++-like languages. Since it rests on top of Java, it also provides access to the complete Java API and to external Java-based libraries as needed. It works seamlessly with other tools, such as Pd, Max/MSP, and Processing. JythonMusic is being actively used to develop interactive sound art installations, new interfaces for sound manipulation and spatialization, and various explorations of mappings among motion, gesture, and music.
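
For a taste of the environment’s idiom, here is a minimal example using JythonMusic’s music library (the arpeggio itself is just an illustration); it builds a short phrase and plays it via MIDI:

```python
# Runs inside the JythonMusic environment.
from music import *   # JythonMusic's music library (Note, Phrase, Play, ...)

phrase = Phrase()                    # an empty phrase (a sequence of notes)
for pitch in [C4, E4, G4, C5]:       # an ascending C-major arpeggio
    phrase.addNote(Note(pitch, QN))  # each note lasts a quarter note (QN)

Play.midi(phrase)                    # render the phrase through MIDI
```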

For more information, see


Harmonic Navigator (2013)

Harmonic Navigator is a real-time, interactive system for navigating vast harmonic spaces in music corpora. It provides a high-level view of the harmonic (chord) changes that occur in such corpora, and may be used to generate new pieces by stitching together chords in meaningful ways.

Here is a piece generated automatically by the system using 371 Bach chorales as input.  It is performed by the College of Charleston Student String Orchestra (Yiorgos Vassilandonakis, conductor).
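
One plausible way to stitch chords together in meaningful ways (a sketch under simplifying assumptions, not necessarily the Navigator’s actual algorithm) is to learn chord-to-chord transitions from a corpus and then walk the resulting graph:

```python
import random
from collections import defaultdict

corpus = [["I", "IV", "V", "I"], ["I", "vi", "IV", "V", "I"]]  # toy "chorales"

transitions = defaultdict(list)
for piece in corpus:
    for a, b in zip(piece, piece[1:]):
        transitions[a].append(b)   # empirical next-chord distribution

def generate(start="I", length=8):
    """Random-walk the transition graph to produce a chord progression."""
    chords, current = [start], start
    for _ in range(length - 1):
        current = random.choice(transitions[current])  # sample next chord
        chords.append(current)
    return chords

print(generate())  # e.g. ['I', 'vi', 'IV', 'V', 'I', 'IV', 'V', 'I']
```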



Monterey Mirror (2011)

Monterey Mirror is an experiment in interactive music performance. It engages a human performer and a computer (the mirror) in a game of playing, listening, and exchanging musical ideas.

The computer player employs an interactive stochastic music generator, which incorporates Markov models, genetic algorithms, and power-law metrics. This approach combines the predictive power of Markov models with the innovative power of genetic algorithms, using power-law metrics for fitness evaluation.
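
A simplified sketch of such a generate-and-evaluate loop (an illustration of the general idea, not the system’s actual implementation): a first-order Markov model proposes melodies, and a Zipf-inspired metric scores them.

```python
import random, math
from collections import Counter, defaultdict

def train(melody):                        # first-order Markov model over pitches
    model = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        model[a].append(b)
    return model

def propose(model, start, n=16):          # sample one candidate melody
    out = [start]
    for _ in range(n - 1):
        out.append(random.choice(model[out[-1]]))
    return out

def zipf_fitness(melody):                 # closeness of rank-frequency slope to -1
    freqs = sorted(Counter(melody).values(), reverse=True)
    slopes = [math.log(f / freqs[0]) / math.log(r)
              for r, f in enumerate(freqs, start=1) if r > 1]
    slope = sum(slopes) / len(slopes) if slopes else 0.0
    return -abs(slope + 1.0)              # best when the slope is near -1

heard = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]   # MIDI notes from the human
model = train(heard)
best = max((propose(model, heard[0]) for _ in range(50)), key=zipf_fitness)
```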


Armonique (2009)

Armonique is a music search engine in which users navigate large musical collections based solely on the similarity of the music itself, as measured by hundreds of music-similarity metrics based on Zipf’s law. In contrast, the majority of online music-similarity engines rely on user listening habits and human tagging. These include systems like Pandora, where musicologists listen to and carefully tag every new song across numerous dimensions, as well as systems that capture users’ listening preferences and ratings.

Our approach uses 250+ metrics based on power laws, which have been shown to correlate with aspects of human aesthetics. Through these metrics, we can automatically create our own metadata (e.g., artist, style, or timbre data) by analyzing song content and finding patterns within the music. Since this extraction requires no interaction by humans (musicologists or listeners), it can scale with rapidly growing data sets. The main advantages of this technique are that (a) it requires no human pre-processing, and (b) it allows users to discover songs of interest that are rarely listened to and hard to find otherwise.
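
As a simplified illustration of a power-law metric in this spirit (a sketch, not one of Armonique’s actual 250+ metrics), one can fit a line to the log-log rank-frequency plot of a piece’s pitches; Zipfian data yields a slope near -1:

```python
import math
from collections import Counter

def zipf_slope(pitches):
    """Least-squares slope of log(frequency) vs. log(rank)."""
    freqs = sorted(Counter(pitches).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) or 1.0
    return num / den

# Toy melody: a slope near -1 suggests a Zipfian pitch distribution.
print(zipf_slope([60, 60, 60, 62, 62, 64, 65, 67]))
```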


Earlier Projects

For earlier projects, such as NEvMuse, Zipf’s Law, SUITEKeys, and NALIGE, see here.

