Enemy of Music

So my thesis is on ‘Irreversible Noise’, where irreversibility should be understood in a double sense: as both following a generalized thermodynamics and as presupposing a unilateral cut between being and thought, so that they are no longer posited as reversible terms in a relation of reciprocal co-constitution. Noise is both observer-relative and observer-independent: the former may be measured by a system according to its degree of randomness, its algorithmic compressibility, or its computability, while the latter may be understood as radical contingency (a hyperchaos that is generally below or beyond our threshold of perception). Both irreversibility and noise then point to the unobjectivizable nature of the real presupposed by scientific reason.
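
One way to make the observer-relative half of this concrete: Kolmogorov complexity (the length of the shortest program that reproduces a sequence) is uncomputable, but an off-the-shelf compressor gives a crude, system-relative upper bound on it. A minimal sketch using only the standard library (the zlib proxy and the toy signals are my illustrations, not anything from the thesis):

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Compressed/raw size ratio: near 0 means highly regular (pattern-like),
    near or above 1 means incompressible (noise-like, for *this* observer)."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
periodic = bytes(i % 16 for i in range(4096))              # a simple repeating pattern
noisy = bytes(random.randrange(256) for _ in range(4096))  # pseudo-random bytes

print(f"periodic: {compressibility(periodic):.3f}")  # close to 0
print(f"noisy:    {compressibility(noisy):.3f}")     # close to 1
```

The hedge ‘for this observer’ is exactly the observer-relativity at stake: a different compressor (a different system) may find structure where zlib sees none.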

To argue that the real is unobjectivizable due to its radical immanence is not to claim that objects or processes are irreducible to scientific explanation, but rather that they are interminably reducible, in the sense that they afford multiple modes of abstraction, any of which are open to contingent revision. On this account, theory and practice are no longer philosophically opposed but superposed according to unilateral duality, so that science and art undergo an epistemological flattening. That is, we radicalize the Marxian concept of a unified science of man, such that science and art are equally conceptually creative theoretical practices, engaged in the underdetermined production of reason.

After making this unilateralizing cut, what is the status of thought or reason with respect to the radical contingency of the real? We may understand the brain as primarily an organ evolved for pattern recognition, a capacity geared towards both the generation of unpredictable behavior and the anticipation of future contingency. Such an evolutionary trajectory is driven by uncertainty within the ecological conditions of competition and cooperation, parasitism and symbiosis. A major strategy guiding the development of intelligence in predator-prey relations is cunning deception (which the Greeks called metis) and the use of noise as a defensive or offensive shield (which evolutionary game theory calls crypsis). However, the cerebral generation and resolution of unpredictability should not be opposed to the compulsive and habitual functions of the body.
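
To see why the generation of unpredictability is itself adaptive, game theory is useful: in a zero-sum pursuit game like matching pennies, any statistical pattern in an agent’s behavior is exploitable, so the only unexploitable strategy is to randomize, that is, to produce noise. A toy simulation (the game, payoffs and both strategies are illustrative constructions of mine):

```python
import random

# Matching pennies as a toy predator-prey game: the predator scores a hit
# when it guesses the prey's move; the prey escapes otherwise.
MOVES = ("left", "right")

def exploit(history):
    """Predator heuristic: guess the prey's most frequent past move."""
    return max(MOVES, key=history.count) if history else random.choice(MOVES)

def hit_rate(prey, rounds=10_000):
    history, hits = [], 0
    for _ in range(rounds):
        guess = exploit(history)
        move = prey(history)
        hits += (guess == move)
        history.append(move)
    return hits / rounds

random.seed(1)
biased = lambda h: random.choices(MOVES, weights=(0.7, 0.3))[0]  # patterned prey
protean = lambda h: random.choice(MOVES)                         # noise-generating prey

print(f"vs biased prey:  {hit_rate(biased):.2f}")   # ~0.70: the pattern is exploited
print(f"vs protean prey: {hit_rate(protean):.2f}")  # ~0.50: noise is unexploitable
```

‘Protean’ behavior is in fact the ethological term for this noise-generating defence, the active counterpart of the crypsis mentioned above.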

Modern neuroscience demonstrates, rather, that phenomenal content supervenes on automatic processes in the brain that are transparent to consciousness and have a long evolutionary history. This is why Laruelle likens the non-philosophical subject to a ‘transcendental computer’ or ‘unimaton’, which he distinguishes from an automaton. Moreover, human intelligence draws on diverse sets of formal systems, or technics, such as mathematics and the phonetically organised alphabet, that function through the discretization of continuous phenomena. Cognitive science has overcome the Promethean myth of man as an exception to the natural order by flattening technics onto evolutionary adaptive functions that capture an invariant in a matrix of shifting matter-energy gradients.

Functions develop according to the MaxEnt principle: the macrostate that results is the one compatible with the greatest number of microstates, that is, the one of maximal entropy. This leads to a (negentropic) local reduction of energetic cost through offsetting entropy to the environment. This global increase in entropy, following Jevons’ paradox, is determined by the observer independence of the Maximum Entropy Production Principle (MEPP). The steeper the gradient that a function taps, the higher the reward; however, this also corresponds to an acceleration of entropic throughput and an increase in risk.
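
To fix the statistical-mechanical terms being invoked: Boltzmann’s formula counts the microstates compatible with a macrostate, and Jaynes’ MaxEnt prescription selects, among all distributions satisfying the known constraints, the one of maximal entropy. The following is standard textbook material rather than anything specific to the thesis:

```latex
% Boltzmann: the entropy of a macrostate counts its compatible microstates
S = k_B \ln W

% Jaynes: maximize Shannon entropy subject to normalization and a fixed mean,
% via Lagrange multipliers
\mathcal{L} = -\sum_i p_i \ln p_i
            - \lambda \Big( \sum_i p_i - 1 \Big)
            - \beta  \Big( \sum_i p_i E_i - \langle E \rangle \Big)

% Setting \partial\mathcal{L}/\partial p_i = 0 yields the Gibbs distribution
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}
```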

The Industrial Revolution saw the abstraction and automation of manual labour, the rise of global financial capitalism, and a huge increase in entropic throughput in the form of pollution and matter-energy resource depletion. The twentieth century was marked by the formalization of thought processes and the automation of cognitive and perceptual faculties (i.e. pattern recognition models geared towards the generation and anticipation of unpredictability). Capitalism precipitates the tapping of gradients through risk modeling and the speculative extraction of value, thus heightening risk in a double feedback cycle. However, this mathematization of the creative and predictive capacities of the intellect is not only the dream of capital.

Russia at the turn of the century might be described in the language of Simondon as entering a critical phase of metastability, a supersaturated state which would require only the smallest of events to act as a structuring germ for an irreversible phase transition. This imminent capacity for revolutionary transformation made itself felt across the sciences and arts as much as it did in politics. It is in this spirit, of the possibility of a collective definition of the future, that Russian cosmism emerges, with the audacious plan to realize an inter-planetary[1] techno-utopian and mortality-transcending[2] communism.

After the failed 1905 revolution, finally suppressed in 1907, the zeal for the future did not cease, and there were heated debates over how to bring about a socialist society that could take full advantage of the emancipatory potential of technology. One of the major proponents of this absolute overthrowing of the inherited traditions of the past was Alexander Bogdanov, a cosmist with a fervent desire to realize Marx’s concept of a unified science of man. It is in this context that the 1910 appearance of Francesco Balilla Pratella’s prosaic ‘Manifesto of Futurist Musicians’, and Russolo’s subsequent 1913 ‘The Art of Noises’ (yawn), are revealed to be disappointingly terrestrial as well as fascist and musically inept.

A much more rigorous conceptualisation of the unity of science and art, and of their relation to noise, is to be found in a 1917 paper by Evgeny Sholpo entitled ‘The Enemy of Music’, in which he detailed the possibility of a single machine, programmed by means of a graphical score, with the capacity to synthesize any sound or combination of sounds in the entire audible spectrum. Using multiple sine-wave oscillators, it would produce an ultrachromatic modulation so fine that pitch transitions would occur below the threshold of human audition. His ideas were so far ahead of their time that it would be thirty-five years before Sholpo’s theoretical description was realized in a physical device, the ANS Synthesizer. Of course, there was more to come…
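
The principle Sholpo sketches, and which the ANS later implemented photo-optically, is additive synthesis driven by a time-frequency image: each horizontal band of the drawn score gates a sine oscillator at a fixed frequency. A minimal sketch of that architecture, assuming NumPy (the toy score and its four rows are mine; the actual ANS encoded its sine banks on rotating optical discs):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def render(score: np.ndarray, freqs: np.ndarray, seconds: float) -> np.ndarray:
    """Additive synthesis from a graphical score: score[row, col] is the drawn
    amplitude of the sine assigned to that row, read left to right in time."""
    n = int(SR * seconds)
    t = np.arange(n) / SR
    col = np.linspace(0, score.shape[1] - 1, n)   # map each sample to a score column
    out = np.zeros(n)
    for row, f in enumerate(freqs):
        amp = np.interp(col, np.arange(score.shape[1]), score[row])  # smooth the curve
        out += amp * np.sin(2 * np.pi * f * t)
    return out / len(freqs)  # normalize

# a toy 'drawing': four closely spaced partials lit one after another, so the
# pitch steps by only 5 Hz -- the ultrachromatic fineness Sholpo envisaged
freqs = np.array([440.0, 445.0, 450.0, 455.0])
score = np.eye(4)
audio = render(score, freqs, 2.0)
```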

What is interesting is the way in which scientific and artistic practices enable a formalization of creativity according to computable rules while continually challenging musical convention and thus necessitating more complex formalizations of composition. Notably, Schoenberg dispenses with the key signature and the reversible relation between harmony and dissonance that had become the modernist convention, formalizing a calculative system for composition that flattens music onto an atonal plane. This dodecaphonic method is enthusiastically taken up by Messiaen and his students (Boulez, Stockhausen and Xenakis), leading to the development of total serialism, where the anti-hierarchical flattening of tone is extended to parameters such as duration, dynamics and timbre.
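
The method really is calculative: a row is a permutation of the twelve pitch classes, and the composer’s material is its orbit under transposition, inversion and retrograde, which is just arithmetic mod 12. A minimal sketch (the example row is arbitrary, not drawn from any actual piece):

```python
# Twelve-tone operations as arithmetic mod 12 (pitch classes 0..11, C = 0).
def transpose(row, n):
    return [(p + n) % 12 for p in row]

def invert(row):
    # mirror each pitch class around the row's first note
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde(row):
    return row[::-1]

row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]   # arbitrary example row
assert sorted(row) == list(range(12))          # a valid row uses each class once

print(transpose(row, 5))               # P5
print(invert(row))                     # I, anchored on the first note
print(retrograde(transpose(row, 5)))   # R5
```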

However, while Boulez and Stockhausen remain in thrall to human experience and creativity, Xenakis alone pursues the radically disenchanting vector of computationally generated algorithmic composition. Of course, there are other pioneers of computer music, but in general they proceed by formalizing existing compositional theory, often according to psychoacoustic analyses of listening experience. Xenakis rejects this rule-based manipulation of sine-waves according to experiential categories for the same reason that he excoriates Fourier synthesis; instead, he formalizes composition according to the idiomatic specificity of the computational medium he is working with, at the atomic level of the sample. This micro-compositional technique extends the computational capacity for synthesis beyond even Sholpo’s cosmist fantasy (i.e. no longer based on sine waves but on probabilistic distributions of micro-sonic grains that occur below the threshold of human audition).

Although Xenakis conceives of automated composition at the granular level through stochastic synthesis in the late 1950s, it is not until 1991 that he is able to fully implement such a system. It is important to note that the resulting program (GENDYN) cannot be considered as an extension, reflection or exteriorisation of human cognitive capacities, but is instead an abstract algorithmic procedure that searches within a phase space of the entire spectral continuum, between the absolute periodicity of the sonologically emergent sine tone and the rigorous aperiodicity of complex modulation noise.
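
The core of dynamic stochastic synthesis is disarmingly simple to state: a waveform cycle is a polygon of breakpoints, and on every repetition both the amplitude and the duration of each breakpoint take a random walk between reflecting barriers. A stripped-down sketch, with simplified uniform distributions standing in for Xenakis’s actual choices of probability distributions and elastic barriers:

```python
import random

def gendyn_like(n_samples, n_breaks=12, amp_step=0.05, dur_step=1, seed=0):
    """Simplified dynamic stochastic synthesis: small steps yield a nearly
    periodic tone, large steps drift the polygon towards broadband noise."""
    rng = random.Random(seed)
    amps = [rng.uniform(-1.0, 1.0) for _ in range(n_breaks)]
    durs = [rng.randint(4, 20) for _ in range(n_breaks)]  # segment lengths in samples
    out = []
    while len(out) < n_samples:
        for i in range(n_breaks):
            # bounded random walk on each breakpoint (clamping = reflecting barriers)
            amps[i] = min(1.0, max(-1.0, amps[i] + rng.uniform(-amp_step, amp_step)))
            durs[i] = min(40, max(2, durs[i] + rng.randint(-dur_step, dur_step)))
            # linear interpolation to the next breakpoint of the polygon
            a0, a1 = amps[i], amps[(i + 1) % n_breaks]
            out.extend(a0 + (a1 - a0) * k / durs[i] for k in range(durs[i]))
    return out[:n_samples]

samples = gendyn_like(44100)  # one second of audio at 44.1 kHz
```

The two step sizes traverse exactly the phase space described above: as amp_step and dur_step tend to zero the polygon freezes into a fixed periodic waveform, and as they grow the output approaches rigorous aperiodicity.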

A rhythm is a temporally extended pattern. The detection of a rhythm is simultaneously and non-problematically theoretical and practical. A rhythm appears to an information-processing system according to several parameters (spatial, temporal, amplitude, frequency, superposition); this is the observer-relative reality of periodicity determined by the system’s Umwelt (the world appearing to the organism’s distinctive sensory array; of course, there has been a techno-scientific expansion of the human Umwelt). However, there is also an observer-independent reality (Umgebung) where periodicity and non-periodicity exist whether or not they are known to exist. Rhythms may emerge from noise, as with stochastic resonance (rainfall – Xenakis, Haswell + Hecker), and indeed noise contains a vast quantity of superposed periodic vibrations occurring at different time scales with a relative degree of identifiability, but noise itself is of a larger magnitude than rhythm.
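
Stochastic resonance is the cleanest case of rhythm emerging from noise: a periodic signal too weak to cross a detector’s threshold becomes detectable once a moderate dose of noise is added, while too much noise drowns it again. A minimal threshold-detector sketch (all parameters illustrative):

```python
import math
import random

def crossing_phase_bias(noise_std, amp=0.4, threshold=1.0,
                        cycles=500, steps=100, seed=0):
    """Drive a threshold detector with a subthreshold sine plus Gaussian noise;
    return (crossings per cycle, fraction of crossings near the sine's peak).
    A bias well above 0.25 means the noise is transducing the hidden rhythm."""
    rng = random.Random(seed)
    hits = peak_hits = 0
    for n in range(cycles * steps):
        phase = (n % steps) / steps
        signal = amp * math.sin(2 * math.pi * phase)  # peak 0.4 < threshold 1.0
        if signal + rng.gauss(0, noise_std) > threshold:
            hits += 1
            if 0.125 <= phase < 0.375:                # quarter-cycle around the peak
                peak_hits += 1
    return hits / cycles, (peak_hits / hits if hits else 0.0)

for std in (0.05, 0.4, 5.0):
    rate, bias = crossing_phase_bias(std)
    print(f"noise std {std}: {rate:.2f} crossings/cycle, peak bias {bias:.2f}")
```

At low noise almost nothing crosses; at moderate noise the crossings cluster around the peaks of the hidden sine, so the rhythm becomes legible precisely because of the noise; at high noise the crossings spread uniformly and the rhythm is buried again.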

Indeed, noise is indifferent to its ontological or epistemological characterization by any information system, and is presupposed by any thought process. Noise may be entropic, in the sense of the destructive destruction that we increasingly witness across both environmental and socio-economic orders. It may be negentropic, manifesting in the creative destruction unleashed by technological transformation (Schumpeter’s gale). It may also serve an anti-entropic function through the non-geodetic tendency in biological evolution that Gould identifies and Longo elaborates on: a push towards complexity that does not obey the geodetic principles governing the movement of inert matter. Complex organisms, like complex musical compositions, owe the possibility of their existence to the huge diversity and fitness of the simpler organisms that they parasitize. This is why I argue that contemporary musicians working in the non-standard phase space between periodic sine tones and non-periodic complex modulation (such as Haswell and Hecker, Mark Fell and many others) are capable of producing a radically inhuman and non-aesthetic music that mobilizes unpredictable complexity across many orders of magnitude.

 


[1] In 1903 Konstantin Tsiolkovsky, a leading representative of the cosmists, published the first detailed scientific study of space flight, ‘The Exploration of Cosmic Space by Means of Reaction Devices’. Like other cosmists he advocated the colonization of space.

[2] Another significant member, Nikolai Fyodorov, argued that scientific use of technology would enable indefinite extension of life (he is thus considered a precursor of transhumanism) and even resurrection of the dead (as Quentin Meillassoux does in ‘Divine Inexistence’).
