Abstract

This research presents a methodology for reconstructing a 3D object from a single 2D image, using a back-propagation neural network to classify the depicted object into one of four classes: rectangles/boxes, spheres, cylinders, and others. The process currently outputs a correctly textured 3D VRML, X3D, or WebGL file for two classes of objects: boxes and spheres. The approach applies a combination of edge detection and geometry to the 2D input image to locate the object's center of gravity and to calculate a set of perimeter distances around that center of gravity. These calculated values are passed to a trained back-propagation neural network comprising 36 input nodes, 100 intermediate (hidden) nodes, and 4 output nodes corresponding to the four object classes above. Once the object has been classified, it is deconstructed in 2D using a subset of the calculated perimeter points and reconstructed in 3D as a textured model for display.
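The abstract does not spell out how the 36 input values are computed. A plausible reading, given 36 inputs and "perimeter distances around the center of gravity," is one radial distance per 10° sector. The sketch below makes that assumption explicit: the function name `shape_signature`, the NumPy-based implementation, and the synthetic circle demo are all illustrative, not the authors' code.

```python
import numpy as np

def shape_signature(mask, n_angles=36):
    """For each of n_angles angular sectors around the center of
    gravity of a binary silhouette, return the distance to the
    farthest foreground pixel in that sector (a hypothetical
    reading of the paper's 36-value perimeter signature)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()            # center of gravity
    angles = np.arctan2(ys - cy, xs - cx)    # angle of each pixel
    dists = np.hypot(ys - cy, xs - cx)       # radial distance of each pixel
    # assign each pixel to a 360/n_angles-degree sector
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sig = np.zeros(n_angles)
    for b, d in zip(bins, dists):
        sig[b] = max(sig[b], d)              # keep the outermost pixel per sector
    return sig

# demo: a filled circle of radius 20 yields a near-constant signature
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
sig = shape_signature(mask)
```

For a sphere's circular silhouette the 36 values are nearly equal, while a box's silhouette produces a periodic pattern of corners and edges; a signature of this kind (typically normalized by its maximum for scale invariance) would be a natural input vector for the 36-100-4 network described above.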

Files

This is a metadata-only record.

Metadata

  • Subject
    • Computer Science & Information Systems

  • Institution
    • Dahlonega

  • Event location
    • Library Third Floor, Open Area

  • Event date
    • 2 April 2014

  • Date submitted
    • 18 July 2022

  • Additional information
    • Acknowledgements: Bryson Payne, Ph.D., Markus Hitz, Ph.D.