Elements of live cinema
Live cinema performances occur in a space shared by the performers, their tools, the projections and the public. A performer covers various spaces simultaneously during her performance. I have divided these spaces into five types according to their characteristics: digital, desktop, performance, projection and physical space.
Optimizing and compressing are two essential activities in digital space. They are especially relevant for live cinema artists who work with video material, as uncompressed digital video occupies huge blocks of digital space. One minute of full-quality video can take up over one gigabyte. Processing "heavy" videos in real time would also demand a lot of RAM (Random Access Memory), a very fast processor and a good graphics card. Without compression techniques it would be practically impossible to work with video on a normal computer, nor would it be possible to watch videos online or on DVDs. Ron Burnett writes about the era of compression in his book How Images Think: "What do compression technologies do to conventional notions of information and image? This is a fascinating issue, since compression is actually about the reduction of information and the removal through an algorithmic process of those elements of an image that are deemed to be less important than others. The more compressed, the more that is missing, as data is eliminated or colors removed. The reduction is invisible to the human eye, but tell that to the image-creator who has worked hard to place "content" in a particular way in a series of images and for whom every aesthetic change is potentially a transformation of the original intent." (Burnett, Ron. How Images Think. The MIT Press, 2005, p. 45)
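The gigabyte-per-minute figure can be checked with simple arithmetic. The sketch below assumes PAL-like dimensions (720 × 576 pixels, 24-bit color, 25 frames per second); these numbers are illustrative, and the exact data rate depends on the video format:

```python
# Rough data rate of uncompressed video. Frame size, color depth and
# frame rate are illustrative assumptions (PAL-like video), not figures
# given in the text.
width, height = 720, 576     # pixels per frame
bytes_per_pixel = 3          # 24-bit RGB color
fps = 25                     # frames per second

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_minute = bytes_per_frame * fps * 60
print(f"{bytes_per_minute / 1e9:.2f} GB per minute")  # 1.87 GB per minute
```

Even at these modest dimensions the stream exceeds a gigabyte per minute, which is why compression is unavoidable when working with video on an ordinary computer.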
Desktop space is the work space for laptop performance artists, as it is the background for the interface of the software. For software with so-called "open architecture", like MAX/MSP/JITTER, PureData or Isadora, desktop space is essential. In these cases the artist creates the interface, or patch, as it is called, by choosing so-called objects from the object library, connecting them to each other with cords and adding different parameters (controls) to the objects. The metaphor for this kind of interface is the video signal (the cord), which passes through all the objects in the patch. If the continuity of the signal is cut, there is no video output.
The interface can occupy more space than is available on the desktop. This is already taken into account in the design of such software, as there are several options available to "compress" the patch using sub-patches. Furthermore, other methods lie at the artist's disposal, like changing the size of the objects (Isadora). Desktop space thus becomes a multiple space where the invisible and the visible can be continuously altered depending on the needs of the performer. The design of the interface/patch should also be optimized for an intuitive and fast way of working.
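The signal-chain metaphor can also be sketched in code. This is only an illustration of the idea; the class and method names below are invented for the example and do not correspond to the API of any real patching software:

```python
class PatchObject:
    """One object in a patch: applies an effect and passes the signal on."""
    def __init__(self, name, effect=lambda s: s):
        self.name = name
        self.effect = effect   # transformation applied to the signal
        self.target = None     # the outgoing "cord"

    def connect(self, other):
        self.target = other

    def process(self, signal):
        out = self.effect(signal)
        if self.target is None:        # the cord is cut: nothing goes out
            return None
        return self.target.process(out)

class Projector(PatchObject):
    """End of the chain: whatever arrives here is the video output."""
    def process(self, signal):
        return self.effect(signal)

player = PatchObject("movie player")
invert = PatchObject("invert", effect=lambda s: f"inverted({s})")
projector = Projector("projector")

player.connect(invert)
invert.connect(projector)
print(player.process("clip.mov"))      # inverted(clip.mov)

invert.target = None                   # cut the cord mid-chain
print(player.process("clip.mov"))      # None: no video output
```

The point of the sketch is the same as in the patch metaphor: the signal must travel through an unbroken chain of objects, and cutting any single cord silences the output.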
Personalizing the interface is one of the most interesting qualities of open architecture software. Basic software like Arkaos, which offers an interface in which video clips and effects are activated with keys on the keyboard, lends itself to a visual show where different clips can be changed rapidly and even randomly. In open architecture software like Isadora, the user has to create a special patch to be able to change clips with the keyboard, and in the process could discover other possibilities.
The performance space is where the performance takes place. Everything that is included in the performance in one way or another belongs to the performance space. This varies according to the performance, although the most usual setup for live cinema is still a stage where the performer is located with her equipment, with the projection screen behind her. In this case the stage is the performance space. Live cinema artists can also work, for example, with dancers, which means that there are more performers and the combined space of action becomes the performance space.
The projection space is the space filled with the projections. Many live cinema performances are presented in a cinematic, 2-dimensional setup, where one or several rectangular screens face the public. There are other possibilities, however, as the projection surface does not have to be flat: it can be a human body, a table, a building, etc.
Cinema remains a flat-screen based medium, while live cinema and installation artists are exploring the possibilities of expanding the screen and changing our audiovisual experiences into audiovisual environments.
Physical space is the space shared by the audience and the performer. All the other spaces of live cinema lie inside the physical space. The physical space defines the setup of the performance. The space can have arches or other architectural elements which can limit the visibility of the projections for the audience. It is also important to explore the physical space before mounting projectors, as bigger projections require more distance from the screen. Care must also be taken to ensure that projectors are located in such a way that the audience does not obstruct the beam. In site-specific projections the physical space is the starting point for planning the performance.
As the title already suggests, the difference between cinema and live cinema is that in the latter something is done live, in front of an audience. What qualities does liveness give to cinema? Seeing the creator presenting her work is different from watching a movie: there is a possibility of instant feedback both ways. The live context reinforces the possibilities for audience participation. Also, most performances are not documented. They become moments shared between the artist and the audience, unique and difficult to repeat.
The live situation also calls for improvisation. Just as musicians can jam together for hours on an improvisational basis, a similar kind of jamming can happen between live cinema artists and musicians, allowing intuition and collaboration to take precedence over following a previously defined plan. This is an interesting challenge, as communication between the performers becomes literally audible and visible to the audience. Musicians and visualists can improvise on what they see and hear. This is easier said than done. In most audiovisual performances, it seems that the visual artist is improvising to music already composed by the musician. Some visual performers attempt to make the visuals react to the music on a rhythmic basis, while others construct audiovisual performances where the image and the audio are in constant dialogue.
Computer-based work is a real-time environment: for example, the movement of the mouse is rendered as the movement of the cursor without delay and received immediately. Computer games function on the same basis. However, in the live cinema context there are different levels of real-time. Mixing video clips can happen in real time, as the performer makes simultaneous choices. The visuals can also be generated in real time. A further example is the image created by a live camera, which can be modified using real-time video effects, in which case the production, the processing and the reception of the output are simultaneous.
The production of electronic music is based on audio samples, and on their repetitions and variations. Similarly, video clips/samples (or algorithmic programs) are the basic elements of real-time visual performance. I use the term "presentation time" to describe the time a visual element is visible to the public. In cinema, different shots are edited together linearly, and each of them appears only once during the movie. The duration of the shots equals their presentation time. In live cinema the presentation time can be longer than the actual duration of the clip. This is caused by various repetitions of the same visual sequence during the performance. This means that even if a clip's duration is 10 seconds, it can be presented in a loop for a minute or even longer. The clip can also be presented several times during the performance. In a "cinematic" loop, the beginning and the end of the clip are different, which is evident to the audience. Seeing the same loop over and over again could become tiring after several repetitions, although sometimes this can add extra value to the performance, like a repeated movement which becomes ironic in the long run. In this case, the careful selection of the loops and their montage are the basis of the work, and video scratchers like London-based Hexstatic, Cold Cut or Exceeda have done excellent performances using this method. In these cases, the interaction with the music is crucial for the success of the show, and the three groups mentioned are all audiovisual groups who synchronize the music to fit their images perfectly.
Another type of loop is what I call the "seamless loop". In this kind of loop the beginning and the end are so similar that the clip seems to continue without a cut even though it is looping. One example is a landscape where nothing seems to happen, until someone appears in the scenery and then leaves the image. The cut is made once the person has left the image; thus the beginning and the end show the same landscape, and the continuity of the loop appears seamless. With many repetitions, the exact duration of this kind of loop can also become obvious, but until that point, the loop's presentation time has exceeded its actual duration. The seamless loop thus seems to offer more presentation time in the performance.
So why is presentation time so important? Real-time performances are based on looping material; real-time software automatically loops all clips until told otherwise. Let us imagine a performance lasting one hour, where the artist has a library of video clips each lasting 15 seconds. If each clip were shown only once, the artist would need 240 clips, which could be quite a lot to handle during the performance, not to mention the time consumed in the production of the clips.
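The arithmetic of this example, together with the notion of presentation time, can be laid out as a short calculation. The loop count used below is an illustrative assumption, not a figure from the text:

```python
# How many clips a one-hour performance needs, with and without looping.
performance = 60 * 60        # performance length in seconds
clip_length = 15             # duration of each clip in seconds

clips_without_looping = performance // clip_length
print(clips_without_looping)           # 240 clips

# Looping each clip four times (an assumed figure) quadruples its
# presentation time and cuts the required library to a quarter.
loops_per_clip = 4
presentation_time = clip_length * loops_per_clip
clips_with_looping = performance // presentation_time
print(clips_with_looping)              # 60 clips
```

Looping is thus not only an aesthetic device but a practical economy: extending presentation time shrinks both the library the performer must manage on stage and the production work behind it.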
In live cinema performances the cinematic set-up is common, although there are many other ways in which to use projections. Unlike cinema, live cinema incorporates the setting up of projections as part of the creative process. The extended cinema artists, as well as contemporary installation artists, have done plenty of experimentation with projections. One of the goals has been to create spatial experiences. Many visual creators use differently shaped projection surfaces, like balls, or transparent screens which create 3-dimensional effects. Another possibility is surround visual environments such as Pictorama, a 360-degree panoramic projection in which projectors are adjusted to the spherical surfaces of the screens with the help of the free software Light Twist.
In this case, the projections become an environment, and thus call for a spatial narrative, as the viewer cannot see all of the image simultaneously. Surround audio is already a well-known concept. It is a very different concept for visuals, but nevertheless an interesting one, especially as audio can support the visuals in order to draw the public's attention in a certain direction.
A projector is not the only means with which to show visuals. Computers can be directly connected to LED screens, which emit more powerful light. Interactive media facades which use LEDs or other light sources are also interesting from an architectural point of view, as digital skins can be implemented in the design of buildings. Facades can also be reactive, i.e. external input like weather, pollution, noise or the movements of people could determine the content of the visuals.
The projection can also function as an interface, as in the case of Alvaro Cassinelli's Khronos projector, described as a video time-warping machine with a tangible, deformable screen. It explores time-lapse photography: the audience can interact with the image by touching the tangible screen and thereby, effectively, go back and forth in time.
As these examples show, projection is a flexible concept. We can understand projection as an interface, as in the case of the Khronos projector, or as an environment, as in the case of Pictorama. These kinds of projects give an idea of what projections might become in the near future, and of how they could change the concept of performing visuals in real time. One prognosis is that the projected image could turn out to be the best visual instrument for real-time performance, as the performer's body would also become an integrated part of the live show.
What is the role of the performer in live cinema? In most laptop performances the audience sees the performer standing or sitting behind the computer, attentively watching the monitor while moving the mouse and pressing keys on the keyboard. The laptop performer resembles an operator carefully performing tasks with the machine more than a performer in the traditional sense of the word. According to the Club Transmediale 2004 press release: "The laptop hype is over. Laptop performers, who resemble a withdrawn scientist publishing results of laboratory research, are now just one role-model amongst many. Electronic music takes a turn towards being more performance based, towards ironic playfulness with signifiers and identity, and to being a more direct communication between the public and the artists." (http://www.clubtransmediale.de. 2004)
The question arises of how to form a relationship with the audience and create "liveness" during the performance. This can be a challenging issue in a laptop performance, as the audience cannot see what the live cinema artist is actually doing with the laptop. How would the audience know if they were watching the playback of a DVD? It is also challenging for the performers to perform and use the software at the same time, as in the live situation the computer screen normally requires their total attention.
After a performance I am often asked what I did live. I wonder how the experience of watching visuals changes with the knowledge of whether it is done live or as playback. In TV shows, musicians play electric guitars even when it is obvious that it is playback, as the guitar is not even plugged into the amplifier; the musician's presence is more important. On the other hand, there arguably exists a certain sense of betrayal and doubt on the part of the viewer. London-based Slub have resolved this problem by using two projections: one with a view of their desktop, which in their case shows how they use only the command line to create the audio and the visuals, and another with a view of the results. This enables the audience to know what they are doing, which in their case is coding. Even so, their bodies remain static and the attention focuses on the projection screens.
It is quite obvious that a laptop is not the best tool for bringing the body into the performance, as concentrating on what is happening on the screen limits the physical actions to moving the mouse or turning knobs on a MIDI controller, which might not be the most interesting sight for the audience. On the other hand, the necessity to "prove" liveness can lead to performances where live becomes the "content" of the show rather than an integrated part of the performance. There are audiovisual groups who have successfully united liveness and content, including the Swedish audiovisual group AVCENTRALEN. At the Pixelache Festival in Helsinki in 2003, their whole performance was based on live camera work. They had set up a "visual laboratory" of different miniature scenes; in one they dropped colored powders into a glass of water, which was shot in close-up with a video camera. In the projection, the image from the camera was transformed into an abstract visual world resembling space travel. Without having seen the setup, it would have been impossible to tell how the projections were produced. In this case, watching the process of "creative misuse of technology" and its results became interesting for the public.
Justin Manor, an MIT graduate (2003), wrote his thesis on gestural tools for real-time audiovisual performance. He also produced the Cinema Fabrique instrument, which allowed the control of the audiovisual environment with gloves especially designed for real-time visual performance. Data gloves and sensors are also the performance tools of S.S.S Sensors_Sonics_Sights, a performing trio formed by Cecile Babiole, Laurent Dailleau and Atau Tanaka, who take laptop performance to another level by creating audiovisual worlds with their movements and gestures.
There have been various attempts to build instruments which would allow visuals to be played while the performer moves her body. On the other hand, if visuals are played with instruments similar to a guitar or a piano, what does that tell us about the true nature of the image? What actually constitutes playing visuals? What could a visual instrument be that would not be a copy of a musical instrument?
In order to fully implicate the body in the performance, visual instruments, data suits, data gloves and sensors are used to allow the body of the performer to be more active. Using this kind of equipment requires technically demanding set-ups and also programming skills. Controlling the performance with gestures and movements is also a challenge, as gestures can limit the whole range of possible controls available in the software. Another issue is the meaning of the gestures in the performance. Should they have a corresponding effect in the visuals? Without this kind of correspondence the performer's actions can become vague for the audience. In a piano concert, when the pianist presses the keys, the sound immediately corresponds to the actions of her fingers. If the pianist plays faster, the music accelerates. If this correspondence were to suddenly disappear, the audience would immediately think it was a playback. The key concept in gestural interfaces seems to be real-time correspondence between actions and results.
In cinema the public does not generally have a very active role, though the experience of watching a movie cannot be called passive either. The public does not participate in the creative process of movie making, although viewers can decide which films they watch, and thus influence which directors have better possibilities of getting funding for their work in the future.
In the 60s video artists responded to TV's "one to many" formula by transforming the signal and creating video installations where the viewer formed part of the work. Video cameras played a central role in these experiments. In these installations the viewer became the protagonist, and many did not exist without her presence and actions. These installations were also called "video environments", and they paved the way for the interactive installations of the 90s, in which a computer controls the environment. Virtual reality environments are perhaps the most immersive experiences for the public.
Inspired by the possibilities of digital technologies, live cinema artists are also exploring how to involve the audience in their performances. The use of cameras in performances allows the public to become the protagonist of the projections. Cameras are also used as sensors to track motion, which has lately become more and more popular due to applications like SoftVNS, Isadora or MAX/MSP/JITTER, which offer possibilities for tracking movement. The idea of the public as the user/performer of the visuals is an attractive one, although the question arises: would the performance then become an installation?
— Mia Makela