UPDATE: Images from the talk are here
This is the longest post yet.

Context: A few weeks ago, I was asked by Flow Associates to give a small talk to ‘young adults’ on using technology to view and understand cities. This is the talk I intend to give today at 4pm IST at FICA.

The Talk
‘Does anyone remember this?’
(short video of Code Rain from The Matrix)
It was 2001. I was 16-17 and very sick. The operation was over and officially I was in ‘recovery’. What this meant was that I was sitting at home, alone, with an old computer and absolutely nothing to do.
I started renting movies. Going through hours and hours of footage every day. And one day I came across ‘The Matrix’. The concept of living in a computer-generated space completely blew my mind.
Where was this thought coming from? Could we really build something like this? Could we build our reality into a computer?
The web was slow back then. YouTube didn’t exist (that came in 2005), Wikipedia had just been launched and people weren’t sure it would work. I decided, one way or another, I’d find The Matrix.
Turns out, in 1984 (the year I was born) an American science fiction author was writing about the Matrix. William Gibson didn’t call it that; he called it just ‘Matrix’. In his book ‘Neuromancer’, the ‘Matrix’ is a city: it has skyscrapers and security systems, and places run by large corporations. It’s a walled garden, one where people can come and go, but it looks like a very large block of offices. And yes, it was all ‘virtual’; everything was in a place called ‘cyberspace’, accessible from a computer that plugged into your brain. The Internet as we know it didn’t exist at that point. This was one of the first times the word ‘cyberspace’ was ever used. (Display Book Cover)
But it didn’t stop there. Eight years later, in 1992, a book called ‘Snow Crash’ hit the market. And like ‘Neuromancer’, this too had a virtual city. But unlike the ‘Matrix’, the ‘Metaverse’ (from ‘meta’, meaning ‘beyond’, and ‘verse’, from ‘universe’) was a much more social space. One where people, in the form of avatars, met up, chatted, danced at clubs, raced motorcycles, fought with samurai swords. You name it. In many ways it was just like the spaces we live in, but without the constraints of physics and reality. You could fly in the Metaverse, you could dance on the moon, build skyscrapers that went up into space. There was almost no limit. (Display Book Cover)
That sparked off a generation of coders. For many years, company after company tried to create a space divorced from geography but as real as reality. In 2006, a platform called ‘Second Life’ became popular. It was the Metaverse and the Matrix, and it was real. Second Life let people roam imaginary cities, create new islands, play games and explore everything in a 3D virtual world stored on the Internet.
For some people, having a space that looked like a real space but had no constraints worked. For others, it didn’t. In 2008, I started working with two companies that wanted to create a virtual space that was rooted in reality. Instead of letting people create just about anything, they started building virtual versions of real-life cities.
This is what Near looks like.
This is what Twinity looks like.
In many ways virtual spaces started mirroring reality. Now you could roam London, Berlin, Singapore and Miami all from the comfort of your seat. You could discover monuments, public spaces, see buildings, visit museums and do just about anything without moving an inch.
A whole new way of seeing cities had emerged. Cities are places, spaces, buildings, people, interactions merged into one living, breathing organism. And that organism had gone online. The Matrix had arrived.
But it didn’t end there. Reality is very hard to model. There are just too many things. Inanimate things, from dust particles to buildings, each of which behaves differently. There are things that are alive, trees and animals, roaming and growing, acting of their own will. And of course people. Millions of them, each one different from the next: in the way they look, the clothes they wear, the way they talk, and so on.
Somewhere down the line two things became apparent:
One, that no matter how hard we tried, the computing capacity to create something as real as reality just didn’t, and maybe couldn’t, exist.
Two, computers weren’t on our desks anymore. They had merged with phones and settled into the palms of our hands.
In 2009 a company called Layar appeared. It did something else. Instead of recreating reality online (‘virtual reality’), it started layering reality with extra information; it created ‘Augmented Reality’. Layar wasn’t on a computer, it was on a phone. It used your phone’s camera to show you what your eyes saw, and on top of that it would add other information. This information could be images, sounds and animation. It could be pictures of things that don’t exist anymore, or sounds that you don’t usually hear.
All this is confusing. Let me show you a video:
In this video, Layar ‘reveals’ parts of Warsaw in 1944, when it was relentlessly bombed. Instead of recreating Warsaw virtually, Layar understands where you are and knows what you’re looking at. It then brings up relevant images and shows them to you.
We did something similar in Delhi last year. We went to an artists’ association that used to put up pictures and paintings in and around Khirkee Village. The problem was that once an exhibition was over, they would take the images down. How could I see the gallery just as it was in the summer of 2006? Using Layar, we built a system that would reveal the images exactly as they looked, and exactly in the spaces they were placed in, but via a smartphone.
But what exactly is the use of this technology?
Augmented Reality’s ability to layer what is visible with data makes it useful for a variety of things. It can be used as a tourist guide or monument-discovery tool; point your phone at a monument and you get all the information about it. It can be used for mapping; tell the phone where you want to go, and it’ll highlight the route as you walk it. It can be used for research; take out your phone in a neighborhood, and it can pull up census data on it. In each case, it helps visualize data by placing it in the real world.
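The geometry behind ‘point your phone at a monument’ is simple enough to sketch. Here’s a minimal, illustrative example in Python (the place names, coordinates and field-of-view value are made up): given the phone’s position and compass heading, it picks out which points of interest fall inside the camera’s view.

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def visible_pois(lat, lon, heading, pois, fov=60):
    """Return the points of interest inside the camera's horizontal
    field of view. pois is a list of (name, lat, lon) tuples."""
    hits = []
    for name, plat, plon in pois:
        # Signed angle between where the camera points and the POI,
        # wrapped into the range (-180, 180].
        diff = (bearing(lat, lon, plat, plon) - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            hits.append(name)
    return hits

# Hypothetical check: an observer at the origin facing due east (90°)
# sees the POI to the east but not the one to the north.
pois = [("east_poi", 0, 1), ("north_poi", 1, 0)]
print(visible_pois(0, 0, 90, pois))  # → ['east_poi']
```

A real AR browser does much more (distance culling, altitude, screen placement), but this angle test is the heart of deciding what to draw over the camera feed.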
And as with virtual worlds, science fiction has already started pushing the boundaries of our imagination by thinking up new and innovative ways to use this technology. ‘Paintwork’ by Tim Maughan is a great example of this. In the book, AR is used for everything from advertising to playing games, by merging all media with the physical spaces they are in.
And yet this is just the beginning. Google is working on ‘Google Goggles’, a system that will put all this information about the city in your glasses. When you look at a monument, it will tell you how old it is and who built it. When you look for directions, it will overlay them in front of your eyes. When you walk, it will tell you about things of significance around you.
The virtual city and real city aren’t distinct anymore. One no longer mirrors the other. Instead they are now melding into one city; half online and half offline, half organic and half digital.
Our cities no longer house people and buildings, they house information. And how you display this information is how you can see the city.
So how can you use these technologies?
While using virtual worlds or augmented reality applications is quite easy, creating them still requires a lot of very technical work. However, the core idea, combining what’s around you with information, can still be applied fairly easily.
The simplest way is to use Google Maps. On Google Maps, you can create your own ‘points of interest’: little red pins that you can place on the map at locations you find interesting or significant. The pins, when clicked, can be made to reveal information about that space, anything from text, pictures and videos to links to other material.
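If you’d rather script your pins than click them in one by one, they can be written out as a KML file, the format Google Maps and Google Earth read. A small sketch in Python (the places and descriptions here are purely illustrative):

```python
# Generate a KML file of custom pins. Each pin is
# (name, description, longitude, latitude) — note KML puts longitude first.
pins = [
    ("Khirkee Village", "Site of the 2006 exhibition.", 77.219, 28.528),
    ("Hauz Khas Fort", "14th-century fort beside the reservoir.", 77.191, 28.554),
]

placemarks = "".join(
    f"""  <Placemark>
    <name>{name}</name>
    <description>{desc}</description>
    <Point><coordinates>{lon},{lat}</coordinates></Point>
  </Placemark>
""" for name, desc, lon, lat in pins)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
{placemarks}</Document>
</kml>"""

# Save it; the file can then be imported into a custom Google map.
with open("my_city_pins.kml", "w") as f:
    f.write(kml)
```

Each `<Placemark>` becomes one pin, and the `<description>` is what pops up when the pin is clicked, so anything you can express as text or HTML can travel with the location.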
If you want to create three-dimensional spaces to view your cities, you can also look at Google SketchUp and Google Building Maker. Google SketchUp lets you create or add 3D models of buildings and spaces, which you can share with everyone on the web. However, this does require a little practice or some understanding of 3D modeling. Building Maker solves this problem. Instead of making models from scratch, Building Maker lets you stitch pictures of a building or a space together to convert them into a 3D model.
Social media, especially Twitter, usually captures your location when you post something. A couple of interesting sites, like TrendsMap, take the hot topics being discussed in an area and display them over a map. This visualization can be used when you want to see what is being discussed in a place. An app called TouchGraph does something similar for Facebook.
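The idea behind a trends map is just counting: group geotagged posts by area, then tally which topics come up most in each. A toy sketch in Python (the areas, posts and hashtags are invented; a real site would pull these from a social-media feed):

```python
from collections import Counter

# Hypothetical geotagged posts: (area, text).
posts = [
    ("Hauz Khas", "great gig at the village tonight #music"),
    ("Hauz Khas", "#music festival crowd is huge"),
    ("Connaught Place", "stuck in #traffic again"),
    ("Connaught Place", "#traffic is a nightmare near the outer circle"),
    ("Connaught Place", "lovely #food stalls here"),
]

def trending_by_area(posts, top=1):
    """Map each area to its most-used hashtags, trends-map style."""
    counts = {}
    for area, text in posts:
        tags = [word for word in text.split() if word.startswith("#")]
        counts.setdefault(area, Counter()).update(tags)
    return {area: [tag for tag, _ in c.most_common(top)]
            for area, c in counts.items()}

print(trending_by_area(posts))
# → {'Hauz Khas': ['#music'], 'Connaught Place': ['#traffic']}
```

Plot each area’s top tag at its coordinates and you have, in miniature, the kind of picture TrendsMap draws over a city.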
Social media is already popular across our cities, with millions of people sharing images, sounds and text from all sorts of locations. Once you add the mobility of smartphones to it, you get information and updates from many different people as they traverse the city. Their movement, what they share, what they talk about, in itself creates a vast amount of information about the city. This information can help visualize what a city is all about!