Please use this identifier to cite or link to this item: https://olympias.lib.uoi.gr/jspui/handle/123456789/27787
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ευταξόπουλος, Ευάγγελος-Αλέξανδρος | el
dc.date.accessioned | 2017-01-09T11:29:23Z | -
dc.date.available | 2017-01-09T11:29:23Z | -
dc.identifier.uri | https://olympias.lib.uoi.gr/jspui/handle/123456789/27787 | -
dc.identifier.uri | http://dx.doi.org/10.26268/heal.uoi.1785 | -
dc.rights | Default License | -
dc.subject | Πολυθραυσματική | el
dc.subject | Μηχανή | el
dc.subject | Γραφικών | el
dc.subject | Multifragment | en
dc.subject | Rendering | en
dc.subject | Graphics | en
dc.subject | Engine | en
dc.title | Απόδοση πολλαπλών θραυσμάτων (multifragment rendering) και μεγάλων σκηνών σε μηχανές γραφικών | el
dc.title | Multifragment rendering and large scene creation in graphics engines | en
heal.type | masterThesis | -
heal.type.en | Master thesis | en
heal.type.el | Μεταπτυχιακή εργασία | el
heal.classification | Multifragment | en
heal.language | el | -
heal.access | free | -
heal.recordProvider | Πανεπιστήμιο Ιωαννίνων. Σχολή Θετικών Επιστημών. Τμήμα Μηχανικών Η/Υ & Πληροφορικής | el
heal.publicationDate | 2016 | -
heal.bibliographicCitation | Βιβλιογραφία: σ. 47 | el
heal.abstract | For this thesis, multifragment rendering algorithms of the A-buffer and K-buffer type were implemented using the Lightweight Java Game Library and the GLSL programming language. First, an experimental comparative evaluation of the aforementioned multifragment rendering methods, A-buffer and K-buffer, is presented with respect to space and time efficiency (an illustrative GLSL sketch of the K-buffer storage step is given after the metadata record). Next, the workflow for building a large hybrid scene is described, together with an overview of the problems encountered; the scene consists of real 3D models, obtained by reconstructing 3D objects from photographs, as well as artificial 3D models created with graphics tools. Finally, we present the use of such a hybrid 3D scene in the Unreal Engine graphics platform, its rendering, and an overall assessment of the aforementioned procedures. | el
heal.abstract | We consider two aspects of rendering. First, we study the problem of rendering all surfaces in a 3D scene (even the hidden ones) by implementing and comparing two well-known multifragment algorithms, using the Lightweight Java Game Library as the development platform together with GLSL shaders. The purpose of this line of work is to compare the quality of the final result and the efficiency of mesh rendering for order-independent transparency. Then we consider the problem of building a large three-dimensional scene that contains actual (reconstructed) objects and artificial components using a game/graphics engine. The workflow for achieving this task is presented, along with the setbacks, issues, and solutions related to the magnitude of the scene and the compatibility of the modeling formats employed. More specifically, for the rendering we have implemented transparency for two commonly used models, the Stanford Dragon and Bunny, as well as a third terrain mesh used in the aforementioned video. A key difference between the two algorithms is that the A-buffer has unlimited depth, resulting in potentially unlimited memory requirements, whereas the K-buffer works with a fixed depth value K, where K is the number of fragments stored per pixel. For the K-buffer we have conducted experiments with three depth values: K=10, K=5, and K=2. The depth level used for the A-buffer was 16, since the largest overall mesh depth does not exceed 10. The A-buffer provides very good visual results, with the transparency visualizing every surface in the scene, at the cost of high memory requirements and only decent frame rates. Across the three K values, the K-buffer yields varying visual quality depending on the chosen depth; overall, the A-buffer delivers lower frame rates and requires more memory than the K-buffer. In the second part of this thesis we report the development of a large scene depicting the area of Meteora through time. For the Meteora area we have used models reconstructed from aerial photos taken by an unmanned aerial vehicle (airplane) and by drones. The models were edited to fill in missing parts and were decimated for use in the graphics engine. For the artificial part of the scene we have used several Unreal assets, third-party models, and models developed in house. To produce the stereoscopic video we have used a setup of two virtual cameras in the Unreal Engine and a third camera that captures their output side by side (an illustrative GLSL sketch of this packing is given after the metadata record). The end result is a 3840x1080 video, so on stereoscopic playback devices the output for each eye is 1920x1080, at a frame rate of 60 fps. During the final production of the video, through the video editing software, 39,000 images were created for the 10-minute length, each at 1920x1080 resolution and around 7 MB in size. We present running-time information for each part of this process. | en
heal.advisorName | Φούντος, Ιωάννης | el
heal.committeeMemberName | Φούντος, Ιωάννης | el
heal.committeeMemberName | Ζάρρας, Απόστολος | el
heal.committeeMemberName | Βασιλειάδης, Παναγιώτης | el
heal.academicPublisher | Πανεπιστήμιο Ιωαννίνων. Σχολή Θετικών Επιστημών. Τμήμα Μηχανικών Η/Υ & Πληροφορικής | el
heal.academicPublisherID | uoi | -
heal.numberOfPages | 48 σ. | -
heal.fullTextAvailability | true | -
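
The abstracts above contrast the A-buffer (unbounded per-pixel fragment storage) with the K-buffer (a fixed number K of fragments per pixel). The GLSL fragment-shader sketch below illustrates only the K-buffer storage step, under assumptions not taken from the thesis: the image bindings, the uniform K, and the names fragCounter, fragData, and shadedColor are illustrative, and fragments beyond the first K are simply dropped here, whereas a full K-buffer keeps the K nearest fragments. It is a minimal sketch, not the thesis implementation.

    #version 430 core
    // Minimal K-buffer capture sketch (illustrative, not the thesis code).
    // Per-pixel fragment counter; must be cleared to 0 before the geometry pass.
    layout(binding = 0, r32ui) coherent uniform uimage2D fragCounter;
    // K layers of per-pixel fragment data: (r, g, b, depth).
    layout(binding = 1, rgba32f) writeonly uniform image2DArray fragData;

    uniform int K;          // maximum fragments stored per pixel
    in vec4 shadedColor;    // color produced earlier in the pipeline

    void main()
    {
        ivec2 p = ivec2(gl_FragCoord.xy);
        // Atomically reserve a slot in this pixel's fragment array.
        uint slot = imageAtomicAdd(fragCounter, p, 1u);
        if (slot < uint(K)) {
            // Store color and depth; a later full-screen resolve pass sorts the
            // stored fragments by depth and blends them for order-independent
            // transparency. The A-buffer variant instead appends to an unbounded
            // per-pixel list, hence its larger memory footprint.
            imageStore(fragData, ivec3(p, int(slot)), vec4(shadedColor.rgb, gl_FragCoord.z));
        }
    }

Such a capture pass is typically run with depth writes disabled so that all fragments, including the hidden ones, reach the shader.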
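
For the stereoscopic output described in the English abstract (a 3840x1080 frame holding a 1920x1080 view per eye), the thesis uses a third Unreal Engine camera to capture the two eye views side by side. The GLSL sketch below shows the equivalent packing as a full-screen pass; the texture names leftEye and rightEye and the uv input are illustrative assumptions, not the thesis setup.

    #version 330 core
    // Pack two 1920x1080 eye renders into one 3840x1080 side-by-side frame.
    uniform sampler2D leftEye;    // render target of the left virtual camera
    uniform sampler2D rightEye;   // render target of the right virtual camera

    in vec2 uv;                   // 0..1 across the 3840x1080 output frame
    out vec4 outColor;

    void main()
    {
        if (uv.x < 0.5) {
            // Left half of the frame: remap x from [0, 0.5) to [0, 1).
            outColor = texture(leftEye, vec2(uv.x * 2.0, uv.y));
        } else {
            // Right half of the frame: remap x from [0.5, 1) to [0, 1).
            outColor = texture(rightEye, vec2((uv.x - 0.5) * 2.0, uv.y));
        }
    }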
Appears in Collections: Διατριβές Μεταπτυχιακής Έρευνας (Masters) - ΜΥ

Files in This Item:
File | Description | Size | Format
Μ.Ε. ΕΥΤΑΞΟΠΟΥΛΟΣ ΕΥΑΓΓΕΛΟΣ-ΑΛΕΞΑΝΔΡΟΣ 2016.pdf |  | 27.46 MB | Adobe PDF


This item is licensed under a Creative Commons License.