Multifragment rendering and large scene creation in graphics engines (Master's thesis)
We consider two aspects of rendering. First, we study the problem of rendering all surfaces in a 3D scene, including the hidden ones, by implementing and comparing two well-known multifragment algorithms, using the Lightweight Java Game Library (LWJGL) and GLSL shaders as the development platform. The purpose of this line of work is to compare the visual quality of the final result and the efficiency of mesh rendering for order-independent transparency. Second, we consider the problem of building a large three-dimensional scene that contains both actual (reconstructed) objects and artificial components, using a game/graphics engine. The workflow for achieving this task is presented, along with the setbacks, issues, and solutions related to the magnitude of the scene and the compatibility of the modeling formats employed.

For the rendering part, we implemented transparency for two commonly used models, the Stanford Dragon and the Stanford Bunny, as well as a terrain mesh also used in the stereoscopic video described below. A key difference between the two algorithms is that the A-buffer has unlimited depth, resulting in potentially unlimited memory requirements, whereas the K-buffer works with a fixed depth value K, where K is the number of fragments stored per pixel. For the K-buffer we conducted experiments with three depth values: K=10, K=5, and K=2. The depth level used for the A-buffer was 16, since the largest overall mesh depth does not exceed 10. The A-buffer provides very good visual results, with the transparency visualizing every surface in the scene; this comes at the cost of high memory requirements and merely acceptable frame rates. Across the three K values, the algorithm produces visibly varying quality depending on the chosen depth.
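As a rough illustration of the difference between the two buffers, the following sketch simulates per-pixel fragment storage on the CPU; the class and method names are hypothetical and not taken from the thesis implementation. An A-buffer would simply append every incoming fragment, whereas a K-buffer keeps only the K nearest, so its memory use is bounded at the price of discarding far fragments:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal CPU-side sketch of per-pixel K-buffer insertion (illustrative only).
// Fragments arrive in arbitrary order; the list is kept sorted near-to-far
// and truncated to K entries, so memory per pixel is bounded by K.
public class KBuffer {
    static final int K = 2; // fixed per-pixel capacity

    // Insert a fragment depth into a sorted list holding at most K entries.
    static void insert(List<Float> pixel, float depth) {
        int i = 0;
        while (i < pixel.size() && pixel.get(i) < depth) i++;
        pixel.add(i, depth);                                   // keep sorted, near to far
        if (pixel.size() > K) pixel.remove(pixel.size() - 1);  // discard the farthest
    }

    public static void main(String[] args) {
        List<Float> pixel = new ArrayList<>();
        float[] incoming = {0.8f, 0.3f, 0.5f, 0.1f};
        for (float d : incoming) insert(pixel, d);
        System.out.println(pixel); // prints [0.1, 0.3]: only the K=2 nearest depths survive
    }
}
```

In the actual GPU implementations these per-pixel lists live in shader-accessible buffers and are filled by GLSL fragment shaders; the sketch only captures the bounded-versus-unbounded storage policy the abstract describes.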
Compared with the K-buffer, the A-buffer achieves lower frame rates and requires more memory. In the second part of this thesis we report the development of a large scene depicting the area of Meteora through time. For the Meteora area we used models reconstructed from aerial photographs taken by an unmanned aerial vehicle (a fixed-wing airplane) and by drones. The models were edited to repair missing parts and were decimated (simplified) for use in the graphics engine. For the artificial part of the scene we used several Unreal Engine assets, third-party models, and models developed in house. To produce the stereoscopic video we set up two virtual cameras in the Unreal Engine, together with a third camera that captures their output side by side. The end result is a 3840x1080 video, so on stereoscopic playback devices the output for each eye is 1920x1080, at a frame rate of 60 fps. During the final production of the video, through the video-editing software, 39,000 images were created for the 10-minute run, each with a resolution of 1920x1080 and a size of around 7 MB. We present running-time information for each part of this process.
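The side-by-side packing step described above can be sketched as follows. This is a minimal illustration assuming two already-rendered 1920x1080 per-eye frames; the class and method names are hypothetical, and the real thesis pipeline performs this compositing with a third virtual camera inside the Unreal Engine rather than in application code:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Illustrative side-by-side stereo packing: two per-eye frames of equal size
// are placed into one frame of twice the width (left eye left, right eye right),
// matching the 2 x 1920x1080 -> 3840x1080 layout described in the abstract.
public class SideBySide {
    static BufferedImage compose(BufferedImage left, BufferedImage right) {
        int w = left.getWidth(), h = left.getHeight();
        BufferedImage out = new BufferedImage(2 * w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(left, 0, 0, null);   // left eye fills columns 0 .. w-1
        g.drawImage(right, w, 0, null);  // right eye fills columns w .. 2w-1
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        BufferedImage l = new BufferedImage(1920, 1080, BufferedImage.TYPE_INT_RGB);
        BufferedImage r = new BufferedImage(1920, 1080, BufferedImage.TYPE_INT_RGB);
        BufferedImage sbs = compose(l, r);
        System.out.println(sbs.getWidth() + "x" + sbs.getHeight()); // prints 3840x1080
    }
}
```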
|Institution and School/Department of submitter:||University of Ioannina. School of Sciences. Department of Computer Science & Engineering|
|Appears in Collections:||Master's Research Theses (Masters)|
Files in This Item:
|Μ.Ε. ΕΥΤΑΞΟΠΟΥΛΟΣ ΕΥΑΓΓΕΛΟΣ-ΑΛΕΞΑΝΔΡΟΣ 2016.pdf||27.46 MB||Adobe PDF||View/Open|