The convergence between the Internet and computer graphics has resulted in the emergence of a completely new medium: Massive online environments. While still in constant evolution, mainly driven by the entertainment industry, they are no longer an anecdotal aspect of the digital age: Blizzard Entertainment recently reported 10 million active subscribers worldwide for the game World of Warcraft [Blizzard 2008]. That is roughly the population of Paris and its surroundings; these subscribers pay a monthly fee to visit, explore and contribute to this virtual world. Beyond entertainment, massive online environments have tremendous potential for social development and educational purposes. Among these, virtual urban environments are of particular interest for applications in education, transportation, urban planning and tourism. Unfortunately, virtual exploration of cities is currently impaired by the impossibility of capturing and storing the many images needed to represent surfaces at such a large scale.
Our project aims at unlocking the full potential of these applications, enabling the creation of tools to explore and understand every aspect of our cities. The technologies we aim to develop in SIMILAR-CITIES target a significant enhancement of the visual quality of the virtual cities used in many different applications. Several scientific challenges have to be addressed. Managing content in virtual environments has become one of the main difficulties in computer graphics applications: Highly detailed graphical environments need to be created, stored, transferred and displayed on devices ranging from multi-screen systems to cell phones. Interactive online urban environments concentrate all of these difficulties. Whether they depict real cities, as in Google Earth or Virtual Earth, or imaginary worlds, as in Second Life or World of Warcraft, these applications all let the user freely explore large urban environments made of thousands of buildings and objects of various shapes, sizes and styles. Capturing an existing city is of course different from creating an imaginary one, but both activities share many common challenges: Producing, storing, transferring and displaying the high-resolution data required to accurately represent the buildings, facades, signs, roads and many other human-made structures composing an urban landscape is equally challenging in both cases.
Our key insight for this project is that some parts of these cities are of central interest (historical buildings, shops, main roads, etc.), while others are less important (anonymous buildings, empty spaces, dead-end streets, etc.). Naturally, we want to focus resources on the important elements. However, the less crucial parts of the scene must still be reproduced and displayed with as much detail as necessary to maintain the user's sense of immersion. Instead of compressing the actual data, which often implies losing details, we propose to replace it with an equivalent but much smaller representation. While this representation will carry the same meaning (a red brick wall will remain a red brick wall), it may differ in the details (the actual location and size of the bricks may vary). With our approach, these details will be generated instead of being stored: Immersion will be reinforced by much richer content, while the overall appearance of the initial data will be preserved.
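The generated-rather-than-stored principle can be illustrated with a minimal sketch: instead of storing every brick of a wall, only a seed and a few statistical parameters are kept, and the layout is regenerated on demand. All names and parameter values below are illustrative assumptions, not part of the project's actual method.

```python
import random

def generate_brick_wall(width, height, brick_w=0.21, brick_h=0.05,
                        jitter=0.02, mortar=0.01, seed=0):
    """Return brick rectangles (x, y, w, h) covering a wall of the given size.

    Only the seed and the statistical parameters need to be stored; the
    concrete layout is regenerated deterministically on demand.  Two walls
    built with different seeds share the same overall appearance (a brick
    wall of the same style) while differing in the exact brick placement.
    Illustrative sketch only, not the SIMILAR-CITIES implementation.
    """
    rng = random.Random(seed)
    bricks = []
    y, row = 0.0, 0
    while y < height:
        # Offset every other row by half a brick, as in a running-bond pattern.
        x = -brick_w / 2 if row % 2 else 0.0
        while x < width:
            w = brick_w + rng.uniform(-jitter, jitter)  # slight size variation
            bricks.append((max(x, 0.0), y, w, brick_h))
            x += w + mortar
        y += brick_h + mortar
        row += 1
    return bricks

# Same parameters, different seeds: equivalent walls, differing in detail.
wall_a = generate_brick_wall(2.0, 1.0, seed=1)
wall_b = generate_brick_wall(2.0, 1.0, seed=2)
```

The storage saving is the point: a few numbers replace the full per-brick geometry, and the same seed always reproduces the same wall, so the representation is compact yet stable across viewing sessions.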