Abstract

The goal of this paper is to analyze and present the discrepancies in performance among different implementations of neural networks. The paper compares a basic feed-forward neural network, a feed-forward neural network with convolutional layers, and a recurrent convolutional neural network on the task of character recognition. Performance will be measured in terms of the maximum accuracy achieved on the MNIST character dataset (with similar training times), training speed, and accuracy in recognizing handwritten digits from outside the MNIST dataset; for this purpose, a custom dataset of handwriting samples will be created. The neural networks will be implemented in Python using TensorFlow. The collected data will serve as a framework for making predictions about solutions to more elaborate deep learning applications, such as object recognition. The paper concludes with an assessment of the potential of each implementation to provide viable solutions to problems that currently concern the deep learning research community.
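The three architectures named above can be sketched in TensorFlow's Keras API. This is a minimal illustration, not the authors' actual models: the layer sizes, and in particular the choice to realize the "recurrent convolutional" variant as an LSTM reading rows of convolutional features, are assumptions made here for concreteness.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def dense_net():
    # Basic feed-forward network: flatten the 28x28 image, then dense layers.
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])


def conv_net():
    # Feed-forward network with convolutional layers before the classifier.
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])


def recurrent_conv_net():
    # One hypothetical recurrent-convolutional layout: extract convolutional
    # features, then treat the 28 image rows as a sequence fed to an LSTM.
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Reshape((28, 28 * 16)),   # (rows, features per row)
        layers.LSTM(64),
        layers.Dense(10, activation="softmax"),
    ])
```

Each model maps a batch of 28x28 grayscale digits to a 10-way softmax over the digit classes, so all three can be trained and timed on MNIST under identical conditions.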

Files

File name            Date Uploaded  Visibility  File size
ARC_Poster_YG.pdf    19 Jul 2022    Public      654 kB
0-ARC_Poster_YG.pdf  19 Jul 2022    Public      647 kB

Metadata

  • Subject
    • Computer Science & Information Systems

  • Institution
    • Dahlonega

  • Event location
    • Library Technology Center 3rd Floor Common Area

  • Event date
    • 24 March 2017

  • Date submitted
    • 19 July 2022

  • Additional information
    • Acknowledgements:

      Bryson Payne, Ph.D.