Participatory Interactive Hybrid Performance Environment
Isabel Valverde (choreographer, performer & researcher)
Yiannis Melanitis (performance artist & researcher)
With the participation of:
Tania Barr (Motion Capture expert)
Panos Panagiotopoulos (programmer)
Company: Independent Collaboration Project
Countries: Portugal, Greece, and France
Isabel Valverde: choreographic concept - inter-personal interaction (haptic-visual) interface
Yiannis Melanitis: 3d space, interactions design
Tania Barr : Wearable garment, Mocap system design
Panos Panagiotopoulos: Motion Builder
Wired Physical Space with real objects and Virtual Space with avatars
4 visitor-participants at a time:
- 2 w/ 3D glasses + mocap, inside the physical environment with objects
- 2 w/ data gloves, outside the physical environment with video projection
Touch Terrain is a participatory performance environment where 4 subjects at a time are challenged to experience the loss of references and to alter the dominant perceptive hierarchy while interacting with the space and with one another. While wandering around a room wired with mocap sensors, through your VR vision you see a moving, mutating avatar within a blank space. Will you try to follow it, to approach it? Meanwhile, as you watch this character move, you run into something you cannot see in the physical space, but, to your surprise, it turns into an element of your VR landscape. A tree, a flower, a mountain appear in the virtual space you see through your glasses. The avatar is now clearly located and you can reach it more easily. As you finally get in touch with this mutant, you suddenly see yourself as another avatar. Felt bodies, which only by feeling and being felt become truly seeing bodies, sensing bodies. Inter-subjective kinaesthetic/synaesthetic perceptions.
Preparing for the physical-virtual immersive experience, the 4 participants put on the wearable interfaces. Then the 2 performers enter the space while the 2 puppeteers stay outside and interact with the performers' avatars and space through a screen. The performers are in a blank space, seeing a moving and mutating avatar (the mocap representation of the other performer, manipulated by the puppeteers). The puppeteers make changes to attract or repulse the performers into/from one another, mutating their appearance (dimensions, shape, scale, gender) as well as interfering with their movement, even their type of being or non-being/object, plus the VR environment.
Without spatial references, the performers wander in space until they run into, touch, or intercept the objects in the room. That is when the touched objects enter the VR environment, turning into aspects of its landscape and helping the performers locate themselves and the avatar. The objects are made of natural and artificial materials (including cardboard, foam, fabrics, plastic, artificial flowers, etc.) and are wired with touch sensors. They compose the VR environment, rendered as landscaping elements, such as what seem like grass, flowers, trees, mountains, a country house, etc., along with abstract shapes.
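The touch-to-landscape mapping described above can be sketched as a simple lookup from a wired object's sensor to the landscape element it becomes in VR. This is an illustrative sketch only: the sensor IDs, element names, and scene structure are assumptions, not the project's actual software.

```python
# Hypothetical mapping from wired-object sensors to VR landscape elements.
# All identifiers here are assumed for illustration.
OBJECT_TO_ELEMENT = {
    "sensor_01": "grass",
    "sensor_02": "flower",
    "sensor_03": "tree",
    "sensor_04": "mountain",
    "sensor_05": "country_house",
}

def on_touch(sensor_id, scene):
    """When a performer touches a wired object, reveal its VR counterpart."""
    element = OBJECT_TO_ELEMENT.get(sensor_id)
    if element is not None:
        scene.add(element)  # the touched object now appears in the landscape
    return scene

scene = set()
on_touch("sensor_03", scene)
print(sorted(scene))  # prints ['tree']
```

Each touch event only ever adds to the shared scene, so the landscape accumulates as the performers explore, which matches the idea of touch progressively constructing the visual space.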
Although bodiless, much like in video games, the performers' movement through space also helps them understand whether they are approaching or moving away from the avatar. Will they continue building the VR landscape to get situated within the environment? Or decide to approach the avatar in search of themselves in the other? The fact is that the moment the performers touch each other's avatar, they suddenly see their own avatar body.
Throughout the experience the performers choose between interacting with the environment and/or the avatars through spatial and touch-interactive means, i.e., virtual landscaping through physically touching and moving the objects in space, or getting in touch with the avatars. If they decide to find and touch the avatar, they will discover that it belongs to a real person and, by doing so, they can finally see their own avatar, being able to watch themselves in the experience.
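The rule that vision depends on touch, a performer's own avatar becoming visible only after contact with the other's, can be sketched as a small state machine. Names and structure are assumptions made for illustration, not the production code.

```python
# Hedged sketch of touch-gated avatar visibility: a performer sees their
# own avatar only after touching the other performer's avatar.
class Performer:
    def __init__(self, name):
        self.name = name
        self.has_touched_other = False

    def visible_avatars(self, other):
        """Vision depends on touch: the self-image is unlocked by contact."""
        avatars = [other.name]        # the other's avatar is always visible
        if self.has_touched_other:
            avatars.append(self.name) # own avatar appears only after touch
        return avatars

a, b = Performer("A"), Performer("B")
assert a.visible_avatars(b) == ["B"]       # before contact: only the other
a.has_touched_other = True
assert a.visible_avatars(b) == ["B", "A"]  # after contact: self appears
```

The asymmetry is deliberate: each performer's view changes independently, so one may already see themselves while the other still wanders unseen by their own eyes.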
- Perception / Synaesthesia: Reversing the perceptual dominance of vision over touch. The work attempts to challenge these deeply fixed perceptive hierarchical tendencies by reversing them.
Towards making physical touch the main perceptual reference for constructing a virtual space and body, by building an interactive system that maps spatial proximity and physical touch to a virtual 3D space and body. Given the dominance of vision and image in our Westernized patriarchal societies, Touch Terrain is an attempt to aesthetically experiment with reversing this tendency by constructing an interface that makes vision dependent on touch.
- Intelligent Multimodal Interfaces: Altering the hierarchy of channels/modes of expression and communication from the verbal and logical to the bodily, tactile, and spatial, between subjects and between subjects and their avatars.
- Active Visitor Participation: Requesting active inter-subjective corporeal engagement.
- New applications of 3D VR vision + touch with navigation manipulation of 3D space.
# Focus on designing interfaces that reverse perceptive hierarchies, such as the common dependence of touch on vision, by instead making vision depend on touch and therefore on bodily kinaesthesia, our somatic sense of embodiment. Although senses and perceptions are interdependent, the goal is nonetheless to make touch more autonomous and have the visuals spring from the rich qualities of tactile experience.
# This alteration, by isolating and reversing common (socially constructed) perceptive/sensorial experiences, will contribute to raising awareness and changing concepts of subjective embodied experience within mediated/wired environments.
# By actively and consciously engaging our full bodies in these parallel-reality experiences, we will be challenged and helped to become familiar with hidden aspects of embodiment enabled by information network systems.
# The permanent effect of perceptive alteration. Consciousness alteration.
Space: any regular space (ideally with 2 doors) and an outside space for the puppeteers' interface
Dimensions: minimum 4 m x 4 m
- 1 high-end PC running the 3D space and animation
- 2 pairs of 3D glasses - for feedback from touch + navigation in 3D space
Virtual Reality HMD with orientation tracker (two high-contrast SVGA 3D OLED microdisplays), model: Z800 3DVisor from eMagin
- 1-2 pairs of data gloves - for navigation + control/manipulation of the 3D space and animation
5DT Data Glove Ultra Series from Connexion
- Touch/Pressure Sensors
2 Wireless touch/pressure sensors and/or mocap wearable garment
20 touch/pressure/optical sensors embedded in space and objects.
- Mocap optical system - simplified (5 reflective points)
- Video projector (Requested)
- 2 Web Cameras
- Mocap software
- Motion Builder Pro
Degree of completion:
- conceptual collaborative basis achieved
- 3D space and bodies interaction in progress
- software interactivity - mapping
- scenic space under construction
PROJECT SUPPORT AND FINANCING
Confirmed Referred Equipment and Software by:
- Yiannis Melanitis at Athens School of Fine Arts, Sculpture Department, Greece
3D Glasses + Data Gloves (kindly lent by UTL / Instituto Superior Tecnico / Departamento de Informatica / TeDance Project, supported by the Foundation for Science and Technology, Lisbon, Portugal)
Motion Capture System MOCAP lent by Autodesk/Animazoo-Europe, Tania Barr
Requesting European funding for a residency and testing at Animazoo, and looking for other residencies
Isabel Valverde is a performer, interdisciplinary choreographer, and researcher from Portugal. She holds a Ph.D. in Dance History and Theory from U.C. Riverside and is currently pursuing post-doctoral research in dance technologies as a fellow of the E.U./Fundação para a Ciência e a Tecnologia (Portugal). She holds an MA in Creative Arts: Interdisciplinary Arts from the Inter-Arts Center, SFSU, funded by Fulbright/I.I.E. and the PRAXIS XXI Program. Her dance studies include a BA in Dance from the Lisbon Technical University and the School for New Dance Development in Amsterdam.
Towards familiarization with new embodiments and posthuman interactions, as well as the continuum of actualization and virtualization, Isabel has been collaborating on participatory interactive environment projects, including Blind Date, with Yiannis Melanitis, Monaco Dance Forum (2002), and IN TOUCH, with V. Zordan, K. Chi, V. Sundar, and P. Chagas (first prototype performed at Siggraph 2005 - Cyberfashion Show, Los Angeles). Since 2003 she has been developing My Fado Dance: What Portugueseness?, a work-in-progress solo dance using Portuguese Fado music and video.
She has published several essays, including "Blind Date: a participatory installation," in Body, Space, and Technology Journal, V4, and "Catching the Ghosts in Ghostcatching: Choreographing Gender and Race in Riverbed/Bill T. Jones' Virtual Dance," in Extensions: the online journal of embodied technologies, V2. She is revising her dissertation, "Interfacing Dance and Technology: a theoretical framework for performance in the digital domain," into a book.
He holds degrees in Painting (prof. X. Botsoglou) and Sculpture (prof. G. Lappas) and a Masters degree in Digital Arts from the Athens School of Fine Arts. His art refers to the body in relation to the epistemological (biological) context which defines it. He coined the term bio-performance, based on the conception of the "analogical body" (as opposed to the digital), in an attempt to re-establish the corporeal status of experience, freeing the body from the domination of simulations or virtual allusions. The notion of the body as a soft and malleable unity, forming a "liquid space" within space, is proposed as a neo-appearance of corporeality. In his recent works (The Garden, The Diffusion of the Elements), technology is not visibly apparent, forming a magical interaction between materials and the body.
He presents interactive performances using originally designed software, as well as sculptures and drawings related to them. Recent presentations include two group exhibitions at the Ileana Tounta Center for Contemporary Art with sculptures and video works (1999); the interactive performance "Pleasure Machine" at the 8th New York Digital Salon (2000), also presented at the Blue Stage in the House of World Cultures, Berlin; and the video work "The artist as a bird" at the Deste Foundation (exhibition "Toxic," 2001). At the Media@Terra Festival / De-Globalizing/Re-Globalizing, he presented the interactive installation "Bio-robotic symbiosis" (Lavrion, Sofia, Maribor, Frankfurt, 2001). At Ex Teresa Arte Actual he presented the interactive web performance "Terra Incognita" (a telerobotic piece) for the X International Festival of Performance Art: Life in Another Planet, Mexico City, 2001. One of his most recent performances was "Animal Accessories" for the group exhibition "Living inside," Athens 2002, as well as the interactive performance "Prometheus," performed at the Kourzoum Djami and at the Athens Academy. At the International Science Fiction Conference "Biotechnological and Medical Items in Science Fiction" he presented the paper "Biorobotic environments, interactions and hyperhumans: Interactive robotic performances exploring the potentials of an unknown space towards the potential of a renewed anatomy," Aristotelian University of Thessaloniki, 2001. His latest interest concerns "biotechnology and control of the body," where he uses his term "bio-performance" for an advanced environment in which the control of the performance is handed over to machines governed by biological algorithms. Other performances were presented at Monaco Dance Forum 2002 (the interactive performance "Blind Date"), The Garden at the Eugenidio Foundation (Athens 2005), and the interactive performance "The Diffusion of the Elements" at the D624 project space (2005).
He writes for artzine journal and Futura magazine. Since 2002 he has been teaching as a collaborator of the Sculpture Department, Athens School of Fine Arts. Yiannis Melanitis is a member of the Delphi Society and a Ph.D. fellow (June 2005) at the School of Architecture, Athens.
The "workstation" is made for 4 people to participate at a time: 2 visitor-performers with 3D VR glasses and mocap sensors, and 2 visitor-puppeteers with data gloves.
The 2 performers enter the space from different entrances after putting on a headset and garment (mocap and touch sensors).
The 2 puppeteers (data-glove participants) are outside in front of a screen ready to interact with the performers' (and their own) avatars.
At first, each performer sees only avatars inside a blank space. One of these avatars is the other performer, captured in real-time mocap in the room with them.
The avatars' movement and changing look are controlled by the 2 puppeteers, who try to attract or repulse the live performers toward or away from each other. Like puppeteers, these 2 data-glove participants manipulate the avatars by changing their shape, dimensions, even gender, and movement (and landscape?) to help or hinder the encounter of the 2 performers.
Inside the space, as the performers start touching objects, they create the VR landscape and at the same time reference themselves in relation to the avatars they see. The physical space is filled with different materials and objects. These objects, wired with touch sensors, are natural and artificial (including cardboard, foam, fabrics, plastic, artificial flowers) and are rendered as VR landscape elements.
The performers will have to choose between virtual landscaping through physical touch and getting in touch with the avatars. If they decide to find and touch the avatars, they will find out whether each belongs to a real person or not and, by doing so, they can finally see their own avatar, being able to watch themselves in the experience.
However, they see only the touched body part. Only by continuing to touch the other, engaging and playing with different body parts, do they get to see their whole felt bodies in movement, much like in Contact Improvisation.
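The part-by-part reveal described above amounts to an accumulating set: each touched body part becomes visible, and the whole avatar appears only once every part has been contacted. A minimal sketch, with the part list and function names assumed for illustration:

```python
# Illustrative sketch of the body-part reveal in the finale: touching a
# part of the other's body makes that part of the avatar visible.
BODY_PARTS = {"head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"}

def touch(part, seen):
    """Register contact with one body part; ignore anything unrecognized."""
    if part in BODY_PARTS:
        seen.add(part)
    return seen

def fully_seen(seen):
    """The whole felt body is visible only after every part has been touched."""
    return seen == BODY_PARTS

seen = set()
touch("right_arm", seen)   # only the touched member is seen
touch("torso", seen)
print(fully_seen(seen))    # prints False
```

Sustained, varied contact, as in Contact Improvisation, is what drives the set toward completion and the felt body toward full visibility.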