Fundamentals of Music Composition for AVIA Translation
Professor of Music, Universidad Simón Bolívar, Caracas, Venezuela
Guest Artist, ZKM-Zentrum für Kunst und Medientechnologie, Karlsruhe, Germany
This paper was presented in a reduced format at the 2nd International Symposium on Systems Research in the Arts and Humanities: “On Interaction/Interactivity in Music, Design, Visual and Performing Arts,” held in conjunction with InterSymp 2008: 20th International Conference on Systems Research, Informatics and Cybernetics, July 24-30, 2008, in Baden-Baden, Germany, sponsored by the International Institute for Advanced Studies in Systems Research and Cybernetics. Published in the Symposium Proceedings Vol. II. Canada: The International Institute for Advanced Studies in Systems Research and Cybernetics, 2008, 55-61.
Abstract
This paper briefly analyses the development of visual music and a selection of musical elements in their compositional use, in order to establish an identical relationship with the moving visuals for the translation of a time structure between the two sensory domains. The production of an integrated work of art that can be similarly and simultaneously perceived by the aural and visual senses is named AVIA, Audiovisual Integrated Art, characterized by its closely interrelated musical/visual equivalences within the domain of visual music. The elements analyzed are time, rhythm, space, intensity, pitch and color, indicating the requirements that the music composition must fulfil for its AVIA translation.
Visual Music
In the history of the confluence of Western music with visuals, a search spanning centuries, there are many examples of works that have explicitly tried to create a perceivable link between music and a corresponding visual display, whether still or moving in time. The intention to integrate structured sounds with a logically resulting image has been a long-standing challenge to artists, theorists and entertainers, producing along the way many symbolic and personal solutions, theories, instruments and devices, entertainment and art.
Accompanied by the music video of the MTV revolution since the 1980s, this joint art form known as ‘visual music’ has evolved significantly in recent decades through the advancement of audiovisual digital technology, consolidating its position in both the sound and the visual arts. Also known as color music or lumia, and intermingling with abstract film, audiovisual, multimedia, intermedia and video arts, visual music differs from the wide spectrum of audiovisual forms in its purposeful search for a close connection between the music and the moving visuals.
An audiovisual production with mere rhythmic coincidences between visual and sound events may still fall short of being visual music, and the same applies to the common film soundtrack. Since both sensory domains function on the same time line, there is an implicit basic connection with respect to time occurrences and linear rhythms, the natural and obvious relationship existing between all forms of expression that use sounds and visuals simultaneously.
In visual music, its name and present characteristics give predominance and hierarchy to the music side as the pre-existing entity that will be identically visualized in most of its time-structural aspects, although the efforts, experimentation and technological advancements, as well as the results, are all mainly concentrated on the visual side. Music has had from its birth the expertise of communicating through structures in time, and moving visuals are a relative newcomer to this field, especially when dealing with abstract elements. The hidden agenda may seem to be not simply to make music and visuals a unity, nor to render a faithful visualization of music, but the quest for moving visuals to achieve the same expressive power that music enjoys in all its abstract conveyance in time. Visual music is basically a provisional training station for the moving visuals to mature.
As previously stated, the present goal is to produce a narrow interrelation between music and moving visuals as its main defining characteristic. Recent software development, in which the two sides can be digitally locked together, is bringing this goal to a high degree of sophistication, but at the same time with the risk of rendering ineffective the different perceptual properties intrinsic to each of the two worlds, which are necessary for successful communication. Efforts are concentrated on the interfacial connections rather than on the global results, since the work emerges from the given musical structure. In the formal aspects, which are the overall direct perceptual entities, moving visuals adopt from music its linear rhythmic occurrences and the formal structure of parts with contrasts, repetitions, variations and coherence, usually differentiated by their textures and resultant moods.
The shortcut solution of providing visuals with a complex time-structure taken from music, however, has an essential problem: although in visual music the two sensory domains function simultaneously, and may be digitally shared, they are separate fields with particular, different characteristics and behavior at the perceptual level. Their components, and how they function and can be structured, may not all match directly: certain proprietary music constructions are not necessarily translatable into the moving-visual arts, nor should the translation function seamlessly the other way around. The viewer may be tempted to try, with little success, to watch visual music with the sound off and expect the imagination to supply anything near the corresponding music used, ruling out, of course, any synesthetic experiences.
On the other hand, each side can be developed in complexity far beyond what can be expected under the constraint of any matching equivalences, and the search for a close inter-connectedness may in the end prove a hindrance rather than a worthy purpose. Although some elements can naturally be shared by both worlds, it is expected that the moving-visuals domain will eventually develop independently from music, on its own terms. Coexistence with music may again come to be of the ‘soundtrack’ type, or a complementary interplay through the coincidental aspects as well as separately in their own technologies and behaviors, as in the spirit of silent films of the late 1920s, such as in the work of Hindemith (Schubert, 2007), who also collaborated on Oskar Fischinger’s films in the early 1930s. Visual music should eventually take the ‘music’ part out of its name and be simply called moving visuals, or perhaps visual time-art. Some artists are working in this direction, as expressed by Paul Friedlander (Friedlander, 2008). In fractal and geometrical animations (Gallet, 2007), the visual complexities may be unmatchable by any sound constructions, as also in Bret Battey’s works (Battey, 2007), for example. In these cases, the music is set back to its soundtrack role. If the visuals are moving regularly, just as when looking out of the window from the inside of a moving train, any music will fit as a background, especially if it has a strong rhythmic presence. The coincidence with the music depends on the point of the z-axis perspective on which the viewer fixes their eyes, and it will always match the timing of the visuals. Another tendency in visual music, which may be more dependent on the development of the rhythmic and physical-space connections to music, is live interaction and performance, and video-jockeying (vjing).
AVIA
Within the transient life of visual music, the AVIA Translation Project (Audiovisual Integrated Art) was started within the Music Graduate Program at the Universidad Simón Bolívar, Caracas, in September 2007, in collaboration with the ZKM-Zentrum für Kunst und Medientechnologie, Karlsruhe, Germany, in order to determine the effective interconnections of time structures between what is seen and heard, as a subgroup of visual music. The project considers that the development of the audiovisual arts will continue to produce many different forms, and it maintains that the pursuit of the one-art form in itself, the basis of visual music, is no longer a justifiable endeavor. The conception of the project is to obtain a closer look at how the links may function, in order to fine-tune their use for any audiovisual art form.
For this purpose, the strategy adopted is to concentrate on the music side. The one-way translation, that is, music to moving visuals, was preserved. The aspects relevant to the musical-visual conversion are being examined in detail, in order to elucidate the adequate possibilities for creating a work of art structured in time that functions equally in both language systems: time and rhythm, space, intensity, pitch and color.
First, the music needs to be conceived during the process of composition within a certain restrained frame of possibilities, in terms of its potential for visual translation. It was clear that not all music can be translated into visuals; hence, it has to be custom-made for that purpose. In each case, a music composition was to be created devoid of all elements but the one or two to be translated, achieving the time structure of the music only through them. Two compositions by the author for AVIA translation can briefly serve as examples, concerning timbre/color (Secretos, see Figure 1) and physical space (Susurro, see Figure 2).
Secretos, for wind sextet, was made specifically to achieve a structured music only through the element of instrumental color, that is, with no melodic, harmonic, spatial, dynamic or rhythmic change, as in a true Klangfarbenmelodie (melody of sound color), in homage to Schönberg’s Farben, the third of his Five Pieces for Orchestra, op. 16 of 1909 (Jewanski, 2008). The work evolves only through the performance of a four-note chord (Bb, D, E, A) in an interlaced matrix whereby the six different wind instruments interchange the notes smoothly, so that the chord is always sounding through 24 color fluctuations. AVIA is accomplished by digitally rendering each sound source at one of six horizontally distributed locations on the monitor screen, or on six separate monitors or projected screens.
Susurro is a physical-space composition using white noise from six horizontally distributed locations, with the same visual distribution as in the previous piece. There are no melodic, harmonic or color changes, only simple linear rhythmic appearances. The musical structuring and the AVIA translation are obtained only through sound-source location on a straight horizontal, frontal axis.
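The common visual layout of both pieces, six sound sources distributed evenly along the horizontal screen axis, can be stated as a small sketch; the normalized coordinate convention and the function name below are illustrative assumptions, not the project's actual Max/Jitter implementation.

```python
# Minimal sketch (not the project's actual patch): mapping the six sound
# sources of Secretos/Susurro onto six horizontally distributed screen
# positions. The normalized coordinate system is an assumption.

def source_to_screen_x(source_index: int, num_sources: int = 6) -> float:
    """Return a normalized horizontal position (0.0 = left, 1.0 = right)
    for a sound source, evenly spaced across the screen or monitor row."""
    if not 0 <= source_index < num_sources:
        raise ValueError("source index out of range")
    # Centre each source in its own horizontal slot.
    return (source_index + 0.5) / num_sources

if __name__ == "__main__":
    # Six wind instruments (Secretos) or six white-noise channels (Susurro).
    for i in range(6):
        print(f"source {i}: x = {source_to_screen_x(i):.3f}")
```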
Fine-tuning in the visual rendering may involve changing or adjusting the compositions until effectiveness is achieved; the two will then be merged into a color-and-space composition for visual translation.
Second, a stipulated set of parameters was defined for a selection of the musical/visual elements used in the translation, which are in the process of originating new compositions for AVIA translation. These are outlined below:
The Time-Lines
As music and moving images are perceived, their time-lines are exactly the same as the perceiver’s life time-line, all three chained in real time. They may differ, for example, when the recorded music and moving visuals are stopped, fast-forwarded or slowed down, since the perceiver’s real life-time cannot be changed. However, the visuals can be halted in time, resulting in a still image or frame, and this tiny part of the whole can be perceived, although disconnected from the time-line. In music, this option is not possible, since in order to be heard it needs to be linked to the listener’s time-line: music cannot be perceived, even a small part of it, without some sort of time occurring. The only exception would be silence at the halting point, since it does not need to be heard in order to be perceived as such, and it could work as a “frozen frame of music”; but it is a deceptive image, since the time counting (the perceiver’s and the given music’s time-lines) goes on in the listener’s mind, and this coupled-time perception is still taken as music although nothing sounds. In these terms, the corresponding absence of sound would be equaled by the absence of light, that is, darkness. Since time goes on in the perceiver’s mind, coupled silence/darkness appearances work rhythmically within these three chained time-lines. Silence and darkness need to exist within a structured context of a perceived time-line; otherwise, they are not music at all.
An important achievement in the development of music in our Western civilization was the detachment of music from real-time existence. Music achieved this detachment in two stages: first by the development of graphical notation from the Middle Ages onwards, and second by the invention of sound recording in the twentieth century. In the graphical-notation analogy of music, time as such does not exist, although it is represented on the x-axis, passing from left to right when the observer sets their eyes on the paper and starts to read. Time consciousness in this representation is completely flexible, personal and imaginary, since it is not “hooked” in real time to the reader’s time-line.
Nevertheless, this graphical representation of time on the x-axis can be paralleled in the moving visual images on the screen, producing a double time-line: on the one hand the real time of the moving images in sync with the perceiver’s time-line, and on the other the time occurring along the visual x-axis from left to right, or along any other axis. The graphical representation also includes a memory imprint of what has happened. Although this may be useful for visually portraying a melody in its whole moving contour, as it sounds or as it has sounded, we propose for the AVIA project to discard this graphical representation of time, for the reason that music has no real memory rendition as it is being perceived in the instant called the present. It is understood and enjoyed by relating what the perceiver is hearing to what has just been heard, is expected to be heard, or has been heard and can be recalled from a lifetime of sound memories and experiences. The visuals in this case would have two time-lines, whereas the music has only one. Music as perceived is totally dependent on memory, and in AVIA moving visuals this instant of perception has to be matched with no memory imprint on the screen.
In the sound recording, music functions simultaneously with the perceiver’s time-line but can also be independently stopped and heard in different variations of time such as faster, slower, backwards and even dynamically distorted. These time-line changes were never possible in traditional real-time live-perception of music up to this technological break. In A/V recording and playback, music and visuals are locked simultaneously, allowing for any time-distortions and time processes.
Another aspect that we must consider in the time-line discussion is the agreement on the “instant of perception.” Music is perceived in the instant that it is heard, which is, additionally, one of the narrower descriptions of the “present time.” Notwithstanding, as noted above, music is understood and enjoyed by relating what the perceiver is hearing to what has just been heard, is expected, or can be recalled from a lifetime of sound memories and experiences; the perception takes place, nonetheless, at the instant usually called the present. In moving visuals, an imaginary time-axis can be created by the perspective effect on the screen of an object moving in time between two points on any axis, such as a series of objects moving and increasing in size from the middle of the screen towards both sides of it. This may give the illusion of a time-line within the real-time perceptual line. It opens up an interesting aspect of the unfeasibility of “future listening”: if the time of incidence of the music, its “instant of the present,” is coupled to the events when they reach the front of the screen, then the events that are still to come are seen as the small ones appearing from the middle, something that cannot happen in music. The perceiver may expect things to happen in music, as in the case of a repeated sequence where yet another repetition or scaling of the formula may be expected, but music certainly cannot be heard ahead of its instant!
Rhythm
Visual information is part of the communication experience in live musical performance. The customary relation to the perceiver’s memory bank, that someone is playing the instrument that is heard, may also arise when a performed and recorded sound is heard through speakers, and a visual performance may be induced in the imagination. Even though sounds are losing their visual-production relationship as more music is heard that is not directly produced by human activity, the imagination is in turn open to visual freedom, attending perhaps to the feelings stimulated by the music. Nevertheless, the rhythmic coincidence between sound and visual information is grounded in the link to the music-making process.
The main coupling of the visual production with the music perception, apart from many subsidiary factors, is that both perceptions occur at the same time; that is, their time-appearance structure, or rhythm, is exactly identical. Rhythm is present in almost everything in our lives, and the same could be said about time, so naturally rhythm is by far the main link between music and visuals, and perhaps the one most used between the two sensory worlds. The rhythmic relationship works in both fields as a sequential occurrence; it simply happens in a linear mode. Most of the dance, theatre, film and video arts function through this simultaneous connection with music. The rhythmic coupling between the musical and visual domains is so strong that rhythms may easily be “heard” through a solely visual input, such as a dance group or lights turning on and off rhythmically, these perceptions being channeled as if they were a musical experience. This takes effect only if the tempo induced by the rhythmic structures falls within the “music tempo range” (see below). Outside this range, a time structure through whichever medium will not be perceived as “music.”
Other, more complex time-structural devices such as grouping, phrase-forming and multiple layers, to name a few, which are particular to the musical rhythmic domain, would have to be explored through rhythm-only compositions for AVIA translation, determining whether ‘visual rhythm’ can be perceived from a musical standpoint of rhythmic understanding, but without the sounds. The AVIA project needs to examine further the visual rhythmic potential using, for example, space-marking (as in hocket) and brightness- or color-marking (as for accents or timbre) to induce grouping perception.
On the other hand, the understanding of rhythmic structures through beat induction has a strong association with the physical movement of the body; that is to say, musical rhythm is understood by the inductive process of beat forming, which is movement-related, and this, to say the least, must have a visual connection. So the art of dance may have solutions to offer here. The AVIA group has to date no compositions for the rhythmic translation to moving visuals.
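Purely as an illustration of how such an exploration could begin, the following sketch assigns each rhythmic event a horizontal slot (space-marking, hocket-like) and a brightness value (accent-marking); the event fields, slot count and brightness values are hypothetical choices, not results of the project.

```python
# Illustrative sketch only: one possible parameterization of the visual
# "grouping" markers mentioned above. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class VisualEvent:
    onset: float      # seconds
    accented: bool    # carries a musical accent
    group: int        # index of the rhythmic group the event belongs to

def render_parameters(event: VisualEvent, num_slots: int = 4) -> dict:
    """Map a rhythmic event to screen-space and brightness cues:
    space-marking (hocket-like alternation of horizontal slots per group)
    and brightness-marking (accents shown brighter)."""
    x = (event.group % num_slots + 0.5) / num_slots   # space-marking
    brightness = 1.0 if event.accented else 0.5       # brightness-marking
    return {"onset": event.onset, "x": x, "brightness": brightness}

if __name__ == "__main__":
    events = [VisualEvent(0.0, True, 0), VisualEvent(0.5, False, 0),
              VisualEvent(1.0, True, 1), VisualEvent(1.5, False, 1)]
    for e in events:
        print(render_parameters(e))
```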
Tempo
There is a range of tempo in which rhythm is musically understood: any pulse-inducing structures that fall within the beat-spectrum of approximately 42 to 182 bpm can be perceived as music, even without sound. Outside this range, rhythm is not sensed as such and must be measured by other, non-musical means. There are time-ordered events throughout nature, from the smallest cycling electron to the complex motion of the planetary system. Only a small range of this vibrant universe is musically understood, and this range, the “music tempo range,” is what makes the difference between musical rhythm and any other kind of time order. This range may be different when applied to rhythmic events in the visual world, and we could expect visual group-forming at tempi at which music simply does not work. In order to relate to a faster or slower rhythmic structure happening in moving visuals that falls outside the range of the musical rhythmic field, a simple doubling and halving factor can be applied (see the sketch below). There are no results in this line of development yet.
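A minimal sketch of this doubling-and-halving factor, assuming the approximate boundary values of 42 and 182 bpm quoted above; the function name and the use of Python are illustrative only.

```python
# Sketch of the "double and half factor": fold a visual tempo into the
# music tempo range (approx. 42-182 bpm, as stated in the text above).

def fold_into_music_tempo_range(bpm: float, low: float = 42.0, high: float = 182.0) -> float:
    """Halve or double a tempo until it falls within the musical tempo range."""
    if bpm <= 0:
        raise ValueError("tempo must be positive")
    while bpm > high:
        bpm /= 2.0   # a too-fast visual pulse is related to its half tempo
    while bpm < low:
        bpm *= 2.0   # a too-slow visual pulse is related to its double tempo
    return bpm

if __name__ == "__main__":
    for visual_bpm in (12, 30, 90, 400, 700):
        print(visual_bpm, "->", fold_into_music_tempo_range(visual_bpm))
```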
Space Relationships
In music, two-ear perception allows for the discrimination of the sound-source location within the real, physical space around the listener. However, other imaginary physical spaces can be formed from the perceptual and associative processes deriving from sound intensity and pitch. In the perception of sound intensity, an association with the z-axis can be formed, as of a sound being physically located far or near. In the perception of pitch, an association can be formed with the y-axis, where a sound can be imagined as located high or low. For AVIA translation, the imaginary, intensity-related z-axis space relationships are not yet included: sound intensity is translated as color brilliance, although it may also be used for z-axis space relations in the near future through Wave Field Synthesis. The music compositions consider only physical space and the vertical, pitch-related, imaginary y-axis space. This means that a visual should be seen to appear at the same physical placement from which the sound is heard to come. This may happen in scaled form, that is, within the smaller size of a video screen as a scaled representation of a larger sound environment, or in true-scale dimensions, one-to-one visual and sound, with large video screens or space-distributed smaller monitors. Because of the difference between the visual and musical physical ranges, AVIA-translation composition for space must be restrained to the limitations of the visual range. Although sound can be heard coming from behind the perceiver’s location, the visual matching is impossible, since humans are not capable of simultaneous 360º vision.
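As a hedged illustration of these two mappings, the sketch below projects a frontal sound-source azimuth onto the horizontal screen axis and a pitch onto the imaginary vertical axis; the ±60º frontal range and the MIDI pitch limits are assumptions chosen for the example, not parameters stated by the project.

```python
# Illustrative sketch, not the project's implementation: normalized screen
# coordinates from the two space relationships described above.

def physical_x(azimuth_deg: float, max_abs_deg: float = 60.0) -> float:
    """Map a frontal sound-source azimuth (negative = left) onto the
    horizontal screen axis; sources outside the visual range are clamped."""
    a = max(-max_abs_deg, min(max_abs_deg, azimuth_deg))
    return (a + max_abs_deg) / (2 * max_abs_deg)

def pitch_y(midi_note: float, low: float = 36.0, high: float = 96.0) -> float:
    """Map pitch to the imaginary vertical axis (0.0 = bottom, 1.0 = top)."""
    n = max(low, min(high, midi_note))
    return (n - low) / (high - low)

if __name__ == "__main__":
    # A source 30 degrees to the left of centre, sounding the pitch A4.
    print(physical_x(-30.0), pitch_y(69.0))
```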
Color and Abstraction
In the initial stage of the AVIA project, the pitch translation was approached only in relation to the visual y-axis imaginary space. The timbre-only composition mentioned above, Secretos, is in the process of being rendered into visual color using Cycling '74's Max 5/Jitter software, first adopting and testing the different combining relationships found in the existing literature for relating musical timbre to color theory. Having a music composition ideally suited for color AVIA translation provides an effective reference tool for testing the existing translation possibilities, or for developing new ones. Nevertheless, preliminary tests reconfirmed the "...haphazardous nature of the color-pitch analogy" (Mattis, 2005).
The correspondence between the audio and visual fields with respect to pitch, timbre and color has been expressed in the Western arts in many different ways, from the use of words occurring similarly in both arts such as “color,” “texture” and “chromaticism,” and vague descriptions of “bright” or “dark,” to complex theories relating musical timbres and pitches to visual color. Browsing through the development of music composition and visual music, it can easily be recognized that visual counterparts of music have more often been attempted through, or associated with, color and pitch than through the more basic and obvious elements such as rhythm and space. The theoretical link between pitch and color proposed by Newton in the late seventeenth century exerted an influence on the search through the pitch-color link. He drew an analogy between the light spectrum divided into its seven colors and the distribution of the seven notes in an octave, based on the fact that octaves of notes and their frequencies can be related in proportions of two and matched to the color range (van Campen, 2007). This concept triggered the construction of “color organs” (see below) and was followed by a series of treatises at the end of the 19th century, influenced by wave theory, such as the works of d'Udine on the correspondence of visual art to musical rhythm (Zilczer, 2005). Perhaps the reason for this historic preference for pitch-to-color also lies in the fact that much of the initial production at the turn of the twentieth century was made by artists who “suffered” the sensory-perceptive condition known as synesthesia, receiving color images when hearing music. Another reason for the predominant use of color in the visual translation of music may lie in the assumption, drawn from its historical use, that for a Western ear musical timbre alone does not provide the necessary retention of time structures with the detail and effectiveness that pitch and rhythmic structures do; in this sense, timbre remains a latent, vague and ambiguous vehicle of expression in our culture. This assumption, yet to be proved, may explain why color lends itself to loose-matching translations, or, as previously quoted, why no real equivalence is possible at all. In another field of expression, programmatic music, descriptions of colored scenery, whether real or imaginary, expanded the link within the realm of individual fantasy.
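One computational reading of this proportion-of-two analogy is to fold a pitch frequency by octaves into a single reference octave and map its position there onto a hue; the reference frequency and the hue convention in the sketch below are assumptions for illustration, not Newton's actual division nor the AVIA project's mapping.

```python
import math

# Hedged illustration of the octave (proportion-of-two) analogy described
# above: frequencies an octave apart receive the same hue. The reference
# frequency (C4) and the 0-1 hue scale are assumptions for the example.

def pitch_to_hue(freq_hz: float, ref_hz: float = 261.63) -> float:
    """Return a hue in [0, 1): 0 at the reference pitch, increasing with the
    fractional position of the pitch within its octave above the reference."""
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    octave_position = math.log2(freq_hz / ref_hz) % 1.0   # fold by octaves
    return octave_position

if __name__ == "__main__":
    for f in (261.63, 329.63, 392.00, 523.25):   # C4, E4, G4, C5
        print(f, "->", round(pitch_to_hue(f), 3))
```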
Composers at the beginning of the twentieth century and later may have been impelled towards color by the development of the photography and film technology of the times, by their innate synesthetic aptitudes, or perhaps in search of another language-structure solution to the decaying tonal system, which drove composers at the turn of the century to look for new ways of organizing the musical language. Musical timbre was one of the main elements in this overhaul, adopted by the impressionists and the post-romantic composers, as well as by the later serial and electronic movements of the 1950s. Music creators such as Skryabin, Varèse, Messiaen, Gershwin and Bliss, at the beginning of the twentieth century and later, produced works which dealt directly with the association of pitch and harmony with visual color (Griffiths, 2008). Works such as Skryabin’s Prométhée, le poème du feu from 1910 included the Clavier à Lumière, an organ with lights, as well as dancers and perfumes (Powell, 2007). Schönberg, although the creator of one of these language changes with his twelve-tone composition method, approached the color translation from a different perspective through his concept of Klangfarbenmelodie, and made a concrete attempt with his composition Farben (1909, op. 16). Although not an admitted synesthete, Schönberg was a dual artist involved in the visual arts as an active painter within the Expressionist movement, which was then developing into abstract territory, and he drew a parallel between the structural possibilities of color and what was happening in atonal music at the time. Another important composer-painter to be mentioned is Ciurlionis, whose approach was not theoretical but through cosmological imagery (Zilczer, 2005).
Thus the tendency towards visual music through the use of color translation (musical and visual) was in essence a search towards abstraction in both fields simultaneously. There is a considerably strong output of works dealing with the translation of music to still visuals in the hands of well-known painters such as Kandinsky, Klee, Mondrian, Picabia and Delaunay, as well as significant lesser-known artists such as Hartley, Valensi, Kupka, Baranoff-Rossiné, Russell, Macdonald-Wright, Matiushin, O'Keeffe and Dove, who based their search for abstraction on the utopian quest for music representation on their still canvases. In the hands of Stieglitz, early photographic art had the same intention of music analogy. These visual artists were inspired by the abstract nature of music in the evolution of visual still-art, and the parallel developments of music escaping the tonal system and of visual art moving towards abstraction seem to have nourished each other. These artists must be differentiated from the use of musical objects, instruments, notation and music symbols in the paintings of Picasso, Braque and Matisse (Strick, 2005), or from the work of composers such as Cage, Kagel, Feldman and Stockhausen, among others, in transforming the music score into a graphic art, or, in the case of Xenakis, in translating music into architectural space (Bosseur, 1993).
The language of visual music today falls within this same spectrum, ranging from the use of recognizable figures of real life and geometrical shapes, through hyper-reality figures, to complete abstraction.
Technological Century for Visual Music
Once into the twentieth century, the color-music/visual crusade branched into many possibilities through the continuous advancement of mechanical, electric, electronic and digital devices, and through innovations in the sensory domains such as photography and film, recorded music, television, video, computers and communication networks, which have shaped our lives dramatically, and of course the arts. We have ultimately come to be looking at a video screen for work, creation, communication, socializing and entertainment. There has been a consistent trend in this century towards multimedia and specifically towards the audiovisual art forms, gaining intensity in the last decade and establishing itself in the 21st century thanks to the expected development of the needed digital technology. Preliminary computer-art movements, such as the Nove tendencije in Zagreb in the early sixties, were linked to music; from them grew the Op-Art boom of the mid 1960s, as well as constructive and kinetic art from 1968 onwards, which included movement as a concept in the perception of the art, especially on the side of the onlooker. As an example, the Venezuelan kinetic artist Jesús Soto of the Nove tendencije, whose works are brought to life in time by the movement of the onlooker passing by, was matched by a composer, Antonio Estévez, also from Venezuela, since movement involves time and time involves music. Estévez had by then immersed himself in the electronic music field and created a work to be heard simultaneously at the Soto exhibition in Caracas in 1974, his Cromovibrafonía Multiple.
The twentieth century has been especially fruitful in its production towards this ancient goal. Important achievements in the film domain with the committed intention to present a work of music simultaneously in a visual form include Disney’s two well-known Fantasias (1940, 2000), Kubrick’s 2001: A Space Odyssey (1968) and Fricke’s Baraka (1992). Mechanical devices to attempt the musical-visual conversion had been invented since Arcimboldo (late 16th century), Bertrand Castel (mid 18th century), Kastner and Bishop (19th century), and Rimington and Baranoff-Rossiné (beginning of the 20th century). But it is in the first half of the 20th century that a crop of innovators established a self-standing visual art form in time, parallel but not exclusive to the celluloid domain: Fischinger, Greenewalt, Klein, Wilfred, László and Bentham, among others. This pre-video and pre-computer A/V art form continues with Lye, Hirsh, Smith, Hirschfeld-Mack, Richter, Eggeling, Ruttmann, the Whitney brothers, Brakhage and Belson, to reach a peak in the drug-induced synesthesia of the psychedelic light-shows of the seventies. The artificial synesthetic dream achieved through the use of hallucinogenic drugs in the hands of the music counter-culture from the late 1960s onwards indirectly caused the development of a synesthetic entertainment culture, from live concerts with bubble and laser light-shows to the MTV video-clip presentation of music with an obligatory visual complement. This development has expanded to become the status quo of music consumption, as established by the democratization of video web-publishing through the website YouTube since 2005.
In a retrospective afterthought, once the dissemination of music came into effect through the radio, initiating the consumption of music detached from its original source and location, music perception became a "blind" activity, devoid of its visual source-counterpart. It was followed by a whole technological development of music objects for mass consumption (records, cassettes, CDs, MP3s) alongside a development in the mass dissemination of music (radio, TV, satellite communication, internet), which until recently remained a "blind" music perception. As explained at the beginning of this paper, the visual imagination recreated not only the missing performance images, but any creative visual response to what was being heard. One of the drawbacks of electronic music concerts, where no performance-linked visual imagination is possible because of the absence of any visual relation to instrumental activity, lies in this "blind" perception, which demands much imagination on the part of the listener and which can be helped by any visual aid such as dance, videos or light shows. The music-video link established and provided in a highly democratic and effective way (and still free) by the YouTube site is a logical consequence of re-establishing the natural visual counterpart of music, lost for almost a century. In this sense, there can never again be an electronic music concert without a visual counterpart, and we could presume that the Zeitgeist of audiovisual art expressions is approaching its golden age. Recent films such as Disney/Pixar's WALL-E (2008) achieve the unity of the sound-visual link as the construction of an expressive A/V language devoid of words and without music, yet highly effective in identifying an image with a sound, a process called "sound design" by Ben Burtt, Jr.
Due to its special attraction to the senses, visual music has been part of the "circus circuit" since the days of the color organs and the like, always finding a public directly interested in its novelty. Composers such as Varèse and Messiaen undertook audiovisual productions at world fairs. Other effective but lighter side-products for entertainment include the Son et Lumière complement to famous tourist sites and Rock-Ola's famous line of jukeboxes from 1935, whose carousel magic of light and sound continues in today's software that renders a moving image to sounding music in Media Player or iTunes, in mirror-balls and disco-light apparel, and in the exciting, fast-rising video-jockeys.
Audiovisual media have grown to become an intrinsic influence on our daily lives and on music composition. Visual music has re-emerged as a natural consequence of this overall audiovisual tendency and as a new art form, in accordance with our video-screen age. Its primary purpose is still pending achievement in its new space: to create simultaneously abstract musical and visual expressions with new tools and, eventually, in a new language, an ideal meeting place for the two arts to combine and coexist. From the early color organs to laptops, from the works of Fischinger to video art, from kinetic art to Kraftwerk, from the jukeboxes to iTunes, from disco mirror-balls to video-jockeys, visual music today centers and closes the circle of the integration of sounds and moving images; it is the art of our times, and the AVIA project may perhaps help us learn how.
References
Brougher, K., Mattis, O., Strick, J., Wiseman, A. and J. Zilczer (2005). Visual Music: Synaesthesia in Art and Music since 1900. Thames & Hudson.
Bosseur, J. (1993). Sound and the Visual Arts: Intersections between Music and Plastic. Dis Voir.
Cytowic, R. E. (2002). Synesthesia: A Union of the Senses, 2nd edition. The MIT Press.
DeWitt, T. (1987). “Visual Music: Searching for an Aesthetic.” Leonardo 20 (pp. 115-122).
Evans, B. (2005). “Foundations of a visual music.” Computer Music Journal 29 (pp. 11-24).
Faulkner/D-Fuse, M. (ed.) (2006). VJ: Audio-Visual Art and VJ Culture. Lawrence King Ltd.
Feineman, N. and S. Reiss (2000). Thirty Frames Per Second: The Visionary Art of the Music Video. Harry N. Abrams.
Jones, R. and B. Nevile (2005). “Creating visual music in Jitter: Approaches and techniques.” Computer Music Journal 29 (pp. 55-70).
Mendoza, E. (2007). Dos Composiciones AVIA: Secretos, Susurro. ArteMus, C.A.
Peacock, K. (1988). “Instruments to perform color-music: Two centuries of technological experimentation.” Leonardo 21 (pp. 397-406).
Roeckelein, J. E. (2000). The Concept of Time in Psychology: A Resource Book and Annotated Bibliography. Greenwood Press.
Rudi, J. (2005). “Computer music video: A composer’s perspective.” Computer Music Journal 29 (pp. 36-44).
van Campen, C. (2007). The Hidden Sense: Synesthesia in Art and Science. The MIT Press.
Vernallis, C. (2004). Experiencing Music Video: Aesthetics and Cultural Context. Columbia University Press.
Von Maur, K. (1999). The Sound of Painting: Music in Modern Art. Prestel.
Winick, S. D. (1974). Rhythm: An Annotated Bibliography. Scarecrow Press.
Web References
Battey, B. (2007). Bret Battey/Bat Hat Media. <http://www.mti.dmu.ac.uk/~bbattey>.
Friedlander, P. (2008). What is Visual Music?. <http://www.paulfriedlander.com/text/visualmusic.html>.
Gallet, S. (2007). Fractal Animations by Silvie Gallet. <http://sylvie.gallet.free.fr>.
Griffiths, P. (2008). “Messiaen, Olivier.” Grove Music Online, ed. L. Macy. <http://www.grovemusic.com>.
Jewanski, J. (2008). “Colour and music 2. Music as related to colours.” Grove Music Online, ed. L. Macy. <http://www.grovemusic.com>.
Powell, J. (2007). “Skryabin, Aleksandr.” Grove Music Online, ed. L. Macy. <http://www.grovemusic.com>.
Ox, J. (n.d.). A Complex System for the Visualization of Music. <http://home.bway.net/jackox/complexity.html>.
Schubert, G. (2007). “Hindemith, Paul: Works: Other dramatic.” Grove Music Online, ed. L. Macy. <http://www.grovemusic.com>.
Other Associated Links
http://acg.media.mit.edu/people/golan/aves/
http://cimatics-masterclass.blogspot.com/
http://rhythmiclight.com
http://spotworks.com/
http://vispo.com/misc/ia.htm
http://visualmusic.blogspot.com/
http://www.centerforvisualmusic.org
http://www.generatorx.no/
http://www.iotacenter.org/
http://www.johnadamczyk.com/performance.html
http://www.openendedgroup.com/artworks/biped/biped.htm
http://www.paradise2012.com/visualMusic/musima/
http://www.pixelsumo.com/
http://www.solu.org
http://www.transmediale.de
http://www.vjspain.com/
http://www.vjtheory.net