While computing power and GPU performance keep increasing, there is still a limit to the amount of 3D data that can be displayed in real time. Before being game-ready, 3D assets generated by photogrammetry need to be optimized to reduce rendering cost and to work within hardware and software limitations.
This article gives an overview of where photogrammetry, game-engine and geospatial visualization technology stand today and where they are heading.
We also interviewed Epic Games about using virtual clones for digital-twin and smart-city applications, if you’d like to learn more about recreating cities in game engines.
Photogrammetry is the process of generating a three-dimensional representation of an asset using overlapping high-resolution photographs taken from multiple angles. The photos are then stitched together using the principle of triangulation through specialized photogrammetry software such as Bentley’s ContextCapture, Capturing Reality’s RealityCapture, Agisoft’s Metashape, Skyline’s PhotoMesh or nFrames’ SURE to produce a detailed, geometrically accurate, textured 3D mesh of the asset.
Photogrammetry scales with object size, so you can digitally recreate anything from a small rock to an entire city. Capture methods range from hand-held cameras and camera rigs on the ground to sensors mounted on aerial platforms.
Photogrammetry was first used to produce topographic maps in the middle of the 19th century. The mapping and surveying industry has relied on it for more than 150 years, and it is now used in architecture and cultural heritage, geology and archaeology, and engineering. More recently, it has been adopted by the film and gaming industries.
Many of these distinct fields are now merging and interacting more widely with easier access to processing power, photogrammetry tools, advanced computer graphics and image-processing capabilities. A great example of increased cooperation between the geospatial and gaming industries is the announcement that Cesium will be making geospatial data available through Unreal Engine. Large 3D city models generated using photogrammetry techniques are also being directly integrated into Unreal Engine and Twinmotion to create realistic digital representations of the real-world environment.
Game engines are rapidly evolving to provide 3D creation environments that deliver immersive and interactive content across many industries. The film and entertainment, training and simulation, architecture, and design sectors are all starting to integrate assets generated in game engines. The two major players are Epic Games’ Unreal Engine and Unity. These engines give developers a broad set of powerful tools that let them focus on the creative process.
Maps, and by extension their spatial dimension, are a core component of the virtual worlds used in video games. These maps serve as a base for creating fantasy worlds or for simulating real-world environments. Geographic data is used as input during a game’s development phases to give developers real-world information to build their maps on and to emulate real environments, improving player immersion and familiarity.
“Call of Duty”, set in Iraq, and “Grand Theft Auto”, set in California, are examples of the real world represented in the virtual one. The most popular game using real-world geographic data, outside of simulation for training and military applications, has been “Microsoft Flight Simulator”, with a vast selection of scenery packs and add-ons based on aerial orthophotography.
Google, Apple and Microsoft have been primarily responsible for the democratization of GIS, turning a specialist-dominated geospatial community of a few hundred thousand users into a general public of millions and growing. In recent years, we have seen these companies use 3D photogrammetry techniques to automatically generate entire cities in 3D.
Geospatial-industry-specific real-time 3D environment platforms able to ingest complex 3D photogrammetry data have been available for more than a decade. Significant players include Cesium, Skyline Software Systems’ TerraExplorer, ArcGIS Pro, ArcGIS CityEngine and Euclideon udStream, which provide a range of analysis and simulation tools on top of powerful 3D visualization capabilities. Other players providing real-time 3D analysis and visualization platforms for a broader range of industries include Twinsity, Bentley LumenRT, OpenCities Planner, Urban Circus Urban-Engine and GeoCirrus.
From a geospatial point of view, game engines provide alternative real-time 3D platforms for creating high-quality geographic visualizations and simulations. They provide tools to animate objects and are highly customizable for creating gameplay elements. Both Unity and Unreal Engine can import spatial data as FBX, elevation data as height maps and corresponding imagery to drape on top. ESRI CityEngine can export data directly to Unreal Engine’s Datasmith format. Datasmith is a collection of tools for bringing CAD, BIM and GIS content into Unreal Engine. Complex 3D environments can also be streamed to any web browser via Pixel Streaming. Unreal Engine provides tools such as Landscape and World Composition to manage large terrain-based worlds.
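To illustrate the height-map route, here is a minimal Python sketch that converts an elevation grid into a 16-bit little-endian RAW file, a layout commonly accepted by engine terrain importers. The grid values, file name and normalization are assumptions for illustration; check your engine’s terrain importer for its exact expected resolution and format.

```python
import struct

def elevation_to_raw16(elevations, path):
    """Normalize an elevation grid (metres) to 16-bit values and write a
    little-endian RAW heightmap, one unsigned 16-bit sample per cell."""
    flat = [v for row in elevations for v in row]
    lo, hi = min(flat), max(flat)
    # Map [lo, hi] onto the full 0-65535 range (flat terrain maps to 0).
    scale = 65535.0 / (hi - lo) if hi > lo else 0.0
    with open(path, "wb") as f:
        for v in flat:
            f.write(struct.pack("<H", int(round((v - lo) * scale))))

# Hypothetical 2x2 elevation grid in metres above sea level.
elevation_to_raw16([[120.0, 125.0], [130.0, 135.0]], "terrain.r16")
```

The resulting file can then be pointed at by the engine’s landscape/terrain import dialog, with the real-world height range re-applied via the terrain’s vertical scale.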
The Philadelphia City Hall rendered in Twinmotion
One of the main priorities of modern games is to give players an immersive and realistic experience. A growing number of studios have made headlines with their innovative use of photogrammetry to produce ultra-realistic assets such as rocks, buildings, street furniture and human bodies to populate their worlds and create their characters.
It is now possible to create almost any environment, whether a virtual clone of a real location or a totally imaginary world. Titles such as The Vanishing of Ethan Carter, Call of Duty: Modern Warfare and Star Wars Battlefront are just a small sample of the growing list of games making heavy use of photogrammetry.
Photogrammetry is disrupting the game-development pipeline. 3D assets are now widely available and are no longer exclusive to big-budget games. Through its acquisition of Quixel, Epic Games has made more than 10,000 2D and 3D photogrammetry assets (the Megascans library) available to Unreal Engine developers.
Games are designed to provide users with an immersive and realistic real-time experience, and many decisions must be made to find the right compromise between aesthetics, realism, design requirements, gameplay and complexity of the 3D assets used.
3D assets generated from photogrammetry contain geometric and texture details that would be very hard and time-consuming for a 3D artist to recreate. These small details and imperfections of the real world are what create the illusion of realism. However, the high complexity of the geometry and the high resolution of the textures can be burdensome and need to be optimized to reduce rendering cost and stay within processing, GPU and video-memory budgets.
This video showcases the use of 3D photogrammetry to develop a next-level immersive experience for the City of Pau within Unreal Engine 4. Large scale photogrammetry and city-wide 3D models can give creators the freedom to develop interactive virtual worlds and photo-realistic rendering by adding weather or light effects and dynamic physics. With large-scale 3D mesh models, it is possible to build design visualizations and cinematic experiences, opening up a world of possibilities for video games, virtual & augmented reality and more.
Different types of optimization must be performed to make 3D assets game-ready.
Typical human-size 3D assets generated by photogrammetry software come out as a single 3D object with millions of triangles and multiple large texture images. Large objects generated from photogrammetry are split into multiple individual tiles, each also made of millions of triangles and multiple large texture images. Many software packages provide the option of generating levels of detail with a reduced number of triangles and a reduced texture size.
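As a rough illustration of such a level-of-detail chain, the sketch below keeps a fixed fraction of the triangles at each successive level. The 25% reduction factor and four-level chain are illustrative assumptions, not fixed industry values; real pipelines tune these per asset.

```python
def lod_triangle_budgets(source_triangles, levels=4, reduction=0.25):
    """Illustrative LOD chain: each level keeps a fixed fraction of the
    previous level's triangle count (never dropping below one triangle)."""
    budgets = [source_triangles]
    for _ in range(levels - 1):
        budgets.append(max(1, int(budgets[-1] * reduction)))
    return budgets

# A 2-million-triangle photogrammetry tile reduced over four levels.
print(lod_triangle_budgets(2_000_000))  # -> [2000000, 500000, 125000, 31250]
```

Each budget would be fed to a mesh-decimation tool as a target triangle count, with texture resolution typically halved alongside each level.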
Some tools, such as Granite (Graphine), provide innovative tile-based texture streaming to optimize memory usage and speed up load times. In the same way, objects can be simplified into multiple levels of detail: the further an object is from the virtual camera, the lower the level of detail that is loaded. This technique is used extensively in the geospatial industry.
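The distance-based selection described above can be sketched in a few lines. The threshold distances (in metres) below are illustrative assumptions; engines expose equivalent settings as per-LOD screen-size or distance parameters.

```python
def select_lod(distance, thresholds=(50.0, 150.0, 400.0)):
    """Pick a level of detail from camera distance: LOD 0 (full detail)
    up close, progressively coarser levels further away."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # coarsest level beyond the last threshold

print(select_lod(30.0))   # -> 0 (close to the camera: full detail)
print(select_lod(600.0))  # -> 3 (far away: coarsest tile)
```

In a tiled city model, a scheduler would run this per tile each frame and stream the matching tile geometry and textures in or out.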
Depending on the quality of the 3D reconstruction, 3D assets need to be retouched to remove artifacts in both the geometry and the texture. Capture conditions are essential to minimize the impact of real-world lighting on the object’s texture, and it can become time-consuming, if not impossible, to properly remove baked light (and shadow) from the texture. Some tools are available to assist with the task, such as Unity’s de-lighting tool.
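As a toy illustration of the idea behind de-lighting (not how Unity’s tool works; real tools also use geometry and ambient-occlusion data), the sketch below estimates shading from each pixel’s luminance relative to the image mean and divides it out. Pixels are assumed to be 0–1 RGB tuples.

```python
def naive_delight(pixels):
    """Very naive de-lighting sketch: treat each pixel's luminance relative
    to the image mean as baked shading, and divide it out to approximate
    the underlying albedo."""
    lum = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels]
    mean = sum(lum) / len(lum)
    out = []
    for (r, g, b), l in zip(pixels, lum):
        shading = max(l / mean, 1e-6)  # avoid division by zero in shadow
        out.append(tuple(min(1.0, c / shading) for c in (r, g, b)))
    return out

# An evenly lit grey patch carries no shading and comes back unchanged.
flat = naive_delight([(0.5, 0.5, 0.5)] * 4)
```

This global estimate flattens real albedo variation along with the lighting, which is exactly why production de-lighting relies on the reconstructed geometry rather than the texture alone.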
Here is a typical workflow to create game-ready assets:
Real-time application technology is evolving very rapidly, and game-engine developers are at the forefront of these developments.
Epic has announced Unreal Engine 5 (due late 2021) and its Nanite virtualized micropolygon geometry technology. Geometry will be streamed and scaled on the fly, allowing extremely complex data to be displayed in real time without the need for levels of detail and without impacting polygon-count, polygon-memory or draw-count budgets.
Aerometrex has produced many large 3D mesh models of entire cities, comprising billions of triangles. Nanite technology will not only revolutionize the way games are designed but also open new opportunities for the geospatial industry, enabling it to integrate entire cities smoothly into ultra-realistic, immersive, real-time solutions.
AUTHOR: Fabrice Marre, Geospatial Innovation Manager, Aerometrex