As of August 1st 2011 I’m full-time employed by United Visual Artists in London to infuse the design team with a tad of nerdiness.
“On the back part of the step, toward the right, I saw a small iridescent sphere of almost unbearable brilliance. At first I thought it was revolving; then I realised that this movement was an illusion created by the dizzying world it bounded.” — Jorge Luis Borges
The Aleph, as described by Jorge Luis Borges in his story “El Aleph” (1945), is a (meta-)physical point in which one can see the entire universe, from every angle, at the same time.
I have used this metaphor to visualise my memory. In ALEPH I’ve tried to capture the past four years of my studies through the 1000+ photographs I took during that period. Each photo has been individually tagged with associations and a perspective, so that, in much the same way as I construct mental images in my head, the installation synthesises dynamic images from a given association. These images are presented as mini-Alephs, or sub-Alephs of the greater Aleph of my memory, which in turn can be browsed by anyone else by perspective or association. The project was inspired as much by the (im-)possibilities of comparing digital memory to the human one as by the (un-)attainable ambition of visual art (and Cubism in particular) to represent a reality in a reduced number of dimensions.
ABOUT THE PROCESS
This project was the result of a long research process that took off in the autumn of 2010 with the question of what a digital memory would look like, compared to the current state of digital environments. The idea was rooted in several previous projects that touched either on the subject of memory or on a fascination with virtual environments (Object Oriented, Comment Condensator, Perspective Print). Gradually, my interest shifted towards visualising the process of memorising through contemporary machine metaphors and, at the same time, the problem of having too many dimensions to display. Digital assemblages of snapshots and perspectives…
I took inspiration from Douwe Draaisma’s quite interesting pop-science books, the pioneering Cubists of the previous century, as well as Archigram’s Temple Island project.
How do you experience a digital landscape?
ESCAPE is a room projection installation that immerses visitors through four walls of projection surface. It transforms a room into a virtual great glass elevator, vertically transporting spectators through an unlimited sequence of generative landscapes. Escape will be a study on the definition of ‘a digital landscape’. We will explore what elements constitute a landscape, and which qualities are required for immersion.
The project consists of multiple layers: static, dynamic, and interactive. A general narrative and cycle length (5 minutes) has been defined, accompanied by a precomposed soundtrack (by Charlie Berendsen). Based on this narrative, landscapes are generated, colour ranges are shifted, special-effect probabilities are set, and generative soundscapes are blended on top of the soundtrack. Finally, the visitor is given control over the vertical navigation through these landscapes.
What’s happening behind the scenes?
One master computer reads values from a precomposed timeline; these values are converted into generic ranges mapped to specific moments in time. Every new ‘scape’ the master instantiates is assigned a position within the current range. Changing the elevator’s position causes new scapes to be created, exploring the variation dictated by the current range (which in turn is set by the timeline).
The parameters of each scape are then sent to four networked slave computers, which render the same scene to each screen from a different perspective, with a certain degree of individual freedom in rendering mode, again as set by the timeline. A fifth slave also listens in, but translates these parameters into soundscapes and noise fields. This sound slave also triggers the soundtrack that accompanies the timeline narrative.
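The timeline-to-range mapping described above can be sketched roughly as follows. This is a minimal, hypothetical reconstruction: the names (`Timeline`, `range_at`, `scape_parameter`) and the piecewise-linear interpolation are my assumptions, not the actual implementation, which ran distributed over the network.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b at fraction t."""
    return a + (b - a) * t

class Timeline:
    """Piecewise-linear keyframes mapping time -> (low, high) parameter range.
    keys: list of (time, low, high) tuples."""
    def __init__(self, keys):
        self.keys = sorted(keys)

    def range_at(self, t):
        keys = self.keys
        if t <= keys[0][0]:
            return (keys[0][1], keys[0][2])
        if t >= keys[-1][0]:
            return (keys[-1][1], keys[-1][2])
        for (t0, lo0, hi0), (t1, lo1, hi1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                f = (t - t0) / (t1 - t0)
                return (lerp(lo0, lo1, f), lerp(hi0, hi1, f))

def scape_parameter(timeline, t, elevator_pos):
    """A newly instantiated scape samples the current range at the
    elevator's position (0..1), so moving the elevator explores the
    variation the timeline currently allows."""
    lo, hi = timeline.range_at(t)
    return lerp(lo, hi, elevator_pos)
```

In this sketch the master would evaluate `scape_parameter` whenever the elevator moves, then broadcast the result to the render and sound slaves.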
Follow us at facebook.com/escapeproject
Using subtly lit relief panels, Linnaeus takes the park’s design as a framework for setting evolutionary conditions, to be evaluated by a generative algorithm. The actual topography is used to divide the panels’ surface area into sections with corresponding ecosystems. As a result, an abstracted genetic tree of botanics runs left to right across the walls.
The second level contains the vegetation that makes its genetics visible. It branches off the abstract first-level branches and generates a more organic visual complexity.
In the end, all this is represented as two giant 10 metre long fossil excavations. The actual park topography and height data is translated to rock texture, with superimposed fossilised vegetation. (Text: UVA)
Linnaeus is the project I was primarily occupied with during my internship at United Visual Artists last summer. I found it quite an interesting project, as it confronted some fundamentals of generative design and the process of evolution while working within the constraints of aesthetics and a historical context, as well as, of course, some serious technical limitations. Rather uniquely, I was given the opportunity to develop the project pretty much on my own, supervised by Alexandros Tsolakis, senior architect, and Matthew Clark, UVA’s creative director. Which was great.
The project itself isn’t interactive, but the two 10-metre milled panels are the result of quite an exhaustive process. As an excerpt from the project manual I left behind describes:
One starts by generating the main branch structure, or genetic tree, in 3dsMax. Once this structure has been properly set up, a Maxscript interpolates between every two nodes, generating leaves and branches whose forms represent the evolution from one node to the next. For every section between two nodes, the generated branches and leaves are rendered and exported as a heightmap.
Using Processing, the hundreds of exported heightmap images are aligned, blended and composited into three layers: leaves, branches and the main structure. The whole process is typically repeated a few times to provide more layers of leaves and add depth. Also in Processing, the rock texture is generated using Perlin noise and some blending modes.
In Photoshop, the layers of generated imagery are composed together and, with some human touch, condensed into a controlled design, which is then superimposed on the background layer. This background has the original park topography (on which the evolution parameters were based) embedded in it. The whole image, which has a resolution of a 0.5 mm stepover, is then divided into 20 slices, each approximately 1000 px wide.
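The compositing step can be illustrated with a minimal sketch. The original work was done in Processing; this Python stand-in only shows the idea of blending aligned heightmaps so the tallest feature wins per pixel (a “lighten” blend), which is one plausible way to stack relief layers. The function name and blend choice are my assumptions.

```python
def composite_max(layers, width, height):
    """Blend aligned heightmaps (lists of rows of height values) by
    taking the per-pixel maximum, so the tallest relief feature in any
    layer survives into the composite."""
    out = [[0.0] * width for _ in range(height)]
    for layer in layers:
        for y in range(height):
            for x in range(width):
                if layer[y][x] > out[y][x]:
                    out[y][x] = layer[y][x]
    return out
```

Running this repeatedly over the leaves, branches and main-structure layers would yield one combined heightmap per panel section.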
Again in Processing, the exported heightmap slices of the panel are converted to 3D models. The program generates one .obj file for every slice, with a corresponding relief depth of 5 mm.
These .obj files are then sent to the CNC milling machine, which drills out the final panels. And that’s it! Easy as pie. Project done!
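The heightmap-to-mesh conversion can be sketched like this. This is a hypothetical illustration, not the original Processing code: it emits one vertex per heightmap sample and one quad face per grid cell, using the 0.5 mm stepover and 5 mm depth mentioned above.

```python
def heightmap_to_obj(heights, step=0.5, depth=5.0):
    """Convert a heightmap (rows of 0..1 values) into a Wavefront .obj
    string: one vertex per sample, one quad face per grid cell.
    step is the XY spacing in mm, depth the maximum relief depth in mm."""
    rows, cols = len(heights), len(heights[0])
    lines = []
    for y in range(rows):
        for x in range(cols):
            lines.append(f"v {x * step} {y * step} {heights[y][x] * depth}")
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x + 1  # .obj vertex indices are 1-based
            lines.append(f"f {i} {i + 1} {i + cols + 1} {i + cols}")
    return "\n".join(lines)
```

Each 1000 px slice would pass through a function like this to produce its own .obj file for the mill.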
It has been quiet for some time, so here’s a heads-up as to why: I’m working on my final projects.
The date for graduation has been set: JULY 7th. The projects will be on display at the graduation show in Arnhem from July 6th through July 10th.
The fourth and last year of our course is divided up into four parts, all of which are presented at the end of the year as my graduation work. These be:
- Internship
- Thesis + visual representation of the thesis
- Practical Assignment
- Research Project
At the moment I’m still busy with the bottom three, in parallel.
Internship: United Visual Artists
During the summer I spent four months in London doing internships. From August till November I was hosted by the great studio of UVA. It has been a blast. I’ve worked on several projects and one in particular: an interesting challenge to design a 3D mural through generative evolution. More details on the project follow.
Thesis
I’m writing my dissertation on the relationships between art, science and technology. In it, I pose several definitions of art and science as documented in the philosophy of art and of science respectively, and propose models of how they might interlink based on these definitions. To visualise these links and connections, I’m creating an interactive way of reading it, where a column-based layout automatically scrolls and provides you with the relevant context from the other fields as you read along. This creates a non-linear reading experience that allows you to follow different reading paths.
Practical Assignment: Escape (TodaysArt 2011)
Charlie Berendsen and I were asked to look into the TodaysArt CineChamber and create a contemporary piece for it. The CineChamber is a room projection installation that immerses visitors through four walls of projection surface. Our take on this interesting architectural projection system was to transform the room into a great glass elevator, vertically transporting spectators through an unlimited sequence of generative landscapes. For us, Escape will be a study on the definition of ‘a landscape’. We will explore what elements constitute a landscape, and which qualities are required for immersion. Ultimately we aim for a real-time generative piece with a narrative timeline (start, middle & end). The project deadline is July 1st.
Research Project: Memory Aleph (working title)
My personal graduation project deals with my fascination with human memory. It’s another attempt at visualising the impossible. But I am trying nonetheless, with the most important tool at my disposal: the virtual.
The project’s problems are two-fold: apart from visualising the process of memorising, I’m creating virtual memory objects, objects that have at least four dimensions. Getting those on a display is nothing less than a contemporary reiteration of the Cubist problem: how to translate something that has more dimensions than its target medium.
I’m trying for a crude Borgesian Aleph of the mind… Luckily I still have time.
Back from China: thousands of books with my signature on the cover! No, all joking aside, LAVA Graphic Design has asked me to help develop a series of book covers for the Mathematics branch of EPN Education called Getal & Ruimte.
I was asked on board to develop the project in Processing, after this was judged to be the right tool for the job. We explored several formulae and printed out the results of 1000+ different combinations of parameters. From these, series were in turn selected to be used for the different learning paths. In addition, we introduced different families for the different math subsets represented by the books. Oh, and as an extra gimmick, the formula used for each cover was printed on the book, so students can recreate the shape on their graphing calculators.
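The parameter-sweep idea can be sketched as follows. The actual cover formulae are not public as far as I know, so this uses a simple rose curve purely as a stand-in: the point is generating a family of shapes by sweeping parameter combinations, the way the 1000+ variations were produced.

```python
import math

def rose_points(n, d, samples=360, radius=100.0):
    """Sample a rose curve r = radius * cos((n/d) * theta).
    A stand-in formula: sweeping n and d yields a family of
    related shapes, analogous to the cover-generation process."""
    k = n / d
    pts = []
    for i in range(samples):
        theta = 2 * math.pi * d * i / samples
        r = radius * math.cos(k * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# Sweep parameter combinations to generate a family of candidate shapes:
family = {(n, d): rose_points(n, d) for n in range(1, 5) for d in range(1, 4)}
```

Each `(n, d)` pair here plays the role of one printed-out parameter combination; a designer would then curate series from the family.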
While recovering from all the food I stuffed myself with during Christmas, I documented some old projects and realised it might be fun to post their functional, interactive prototypes on the blog for anyone interested to download. Some projects just need playing with.
They come as .app (Mac) & .exe (Windows) executables and should run straight out of the box (if you have Java installed, which you most probably do).
First a sneak peek: a brand-new little app, the newest member of the Mathematical Typography series. It’s a fun experiment with interactive typography.
This app revisits last year’s project about a three-dimensional representation of an interactive calendar. This version uses offline bogus data, while the real thing aggregates from an online RSS source. Seeing the objects gradually break the surface of ‘now’, spinning the lot around and performing some casual time travel is definitely part of the experience, so please have a go.
Download: Windows / Mac
This prototype refers to another project, finished last summer. It sketches a 3D node structure constructed from the structure of a text: paragraphs, sentences, words, letters.
Although this app has also been posted to OpenProcessing, downloading it gives you better performance and lets you run it full screen. Which I think is nice. Fact remains that you can see and download the source online here.
Sadly no fancy iPhone apps yet, although I’m currently in the process of making my first (!)
Back home, after four months of internship(s) in London.
The month at FIELD was a fantastic glimpse into the kitchen of a promising young design studio, and the subsequent three months at United Visual Artists were amazing. Although my expectations were high, working at UVA impressed me even more. The workspace itself provides a spacious and pleasant environment, but it was the people and the UVA design culture that I found most fascinating. Studio-wide design discussions, exciting commissions, desks littered with prototypes, a drum kit in the basement and a red-bearded pirate. But most of all it was UVA’s combination of talented individuals that made this an inspiring couple of months.
I spent part of my time developing a single project which combined generative evolution, the aesthetics of organic growth and digital processing, with the tangibility and history of mural reliefs. A generated 3D model is being drilled out of solid rock as I’m typing this; I will post the results in a month or two.
The rest of my time I spent doing research and developing concepts and sketches for a couple of other projects, getting my butt kicked by Ash and posting nonsense to the studio’s mailing list.
From August till November this year, I will be doing an internship at the great studio of United Visual Artists.
UVA is pretty much at the top of their field. But what field is that, exactly? They do architectural, light, responsive or even interactive installations, but don’t shy away from generative animation either. They also house James Medcraft, who not only documents their work but also initiates some extraordinary photography projects. A studio where architects, lighting designers, 3D animators and programmers roam free is an exciting place to be part of at any time.
Their list of impressive projects is long and diverse, and hard to choose from, but the picture from the recent project “Speed of Light” should invite you to explore their portfolio further.
This was an attempt to create a ‘meta-documentary’: tracing original nature documentaries using OpenCV and presenting the result as a documentary of its own, based on ideas from Philip Ball’s pop-science book “Critical Mass”, which covers historical research on finding mathematical laws for social behaviour.
I rendered the images with Processing and edited them afterwards in Final Cut Pro, although the original camera work & cuts provide most of the dynamics.
Please watch on Vimeo in HD & turn your sound on.
2030: Envision a world in which every object has an ‘online’ identity, a digital representation, say, a Facebook page. Every physical object would be reduced to its properties, be it value, energy usage, lifetime, colour, previous owner, parent class or atom arrangement. The concept quickly gets abstract beyond the point of no return, hence this sketch visualising it using a structure that represents ‘the digital genome’ of the real world: a structure that would contain the digital avatars of all physical objects.
As a placeholder I used a text, dividing its hierarchy into branches of objects. Each node in this structure represents a physical object, with its properties attached. In this case, a chapter of the book “The Dark Tower” by Stephen King has been chopped up. The text consists of paragraphs, each paragraph contains sentences, sentences often are compilations of subsentences, and all of these consist of words. And words of letters.
This framework of nodes should represent the idea of the world as a collection of objects, with information attached to them. Every object has a parent from which it inherits certain properties, and has others that make it unique.
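The text-to-tree decomposition described above can be sketched like this. The original sketch was written in Processing; this is a hypothetical Python stand-in (the node shape `(label, children)` and the naive sentence splitting are my assumptions), collapsing the subsentence level for brevity.

```python
def text_tree(text):
    """Split a text into the nested node structure used by the sketch:
    paragraphs -> sentences -> words -> letters.
    Each node is a (label, children) tuple."""
    paragraphs = []
    for para in filter(None, (p.strip() for p in text.split("\n\n"))):
        sentences = []
        raw = para.replace("!", ".").replace("?", ".")
        for sent in filter(None, (s.strip() for s in raw.split("."))):
            words = [(w, [(ch, []) for ch in w]) for w in sent.split()]
            sentences.append((sent, words))
        paragraphs.append((para, sentences))
    return ("text", paragraphs)

def count_nodes(node):
    """Total number of nodes in the tree (object count in the 'genome')."""
    label, children = node
    return 1 + sum(count_nodes(c) for c in children)
```

In the installation, each of these nodes would become an object in the 3D structure, inheriting placement from its parent.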
As part of the last phase of my education at Arnhem I’m very happy to announce that I will be doing an internship at FIELD in London.
We design custom software tools and processes to express an idea across a wide range of media: from print to animation, interactive installations and websites. Our goal is to merge code-based design with established digital content creation methods. And to create room and connection points for a collaborative and iterative process.
Inspired by modern art, nature, science and technology, we aim to create animate images with a life of their own. Generative Processes, Interactive Systems, and Artificial Life are the reference points from which we draw ideas, methods, and mindset into our design process. We believe this FIELD is still largely unexplored, and that there are exciting visual places yet to be discovered in this landscape.
Though FIELD only started last year, they’ve already built a really impressive portfolio. Here are some gems that sucked me in.
A really nice concept that combines dynamic data visualization and 3D visuals into a festival’s graphic identity.
Will be seeing this when I get there!
The last few days were pretty much about OFLab Breda, an openFrameworks workshop by Zach Lieberman & Todd Vanderlin. With the technical guidance of Zach & Todd, we were encouraged to develop, within these few days, a project to be exhibited at the Graphic Design Festival. In line with the festival’s theme “DECODING”, the Arnhem team, consisting of Jasper van Loenen, Bart van Haren and Sander Sturing, came up with the idea of transmitting data by sound, hence making the transmission process physical. Background noise and sounds from the environment provide distortions and artifacts.
Besides the live installation, we presented the project as a series of “Sound Portraits”, portraying several of the Lab’s participants.
It took three hours to send the image data through sound to the laptop depicted above. The image shows the environment as it was hard-coded into the image by the laptop on the table, listening. The colours show recorded sound that falls outside the spectrum used to send the image data. The image was built up gradually, from left to right, top to bottom.
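The core idea of mapping data to tones can be sketched minimally. This is not the workshop’s actual protocol (which was implemented in openFrameworks and had to survive real microphones and room noise); the band, step size and function names below are assumptions, illustrating only the byte-to-frequency mapping.

```python
# Assumed band: 1000-3550 Hz covers byte values 0-255 in 10 Hz steps.
BASE_HZ, STEP_HZ = 1000.0, 10.0

def byte_to_freq(b):
    """Map a byte value (0-255) to a tone frequency in Hz."""
    return BASE_HZ + b * STEP_HZ

def freq_to_byte(f):
    """Map a detected tone frequency back to the nearest byte value."""
    return int(round((f - BASE_HZ) / STEP_HZ))

def encode(data):
    """bytes -> list of tone frequencies to play, one tone per byte."""
    return [byte_to_freq(b) for b in data]

def decode(freqs):
    """Detected tone frequencies -> bytes (assumes clean detection)."""
    return bytes(freq_to_byte(f) for f in freqs)
```

Sounds outside the chosen band, like the background noise mentioned above, would not decode to data, which is what let the installation record the environment as colour artifacts.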