This study investigates an automated approach based on convolutional neural networks (CNNs) to efficiently distinguish COVID-19 cases from healthy cases using chest X-ray and CT images. Several models with pre-trained weights, including VGG16, VGG19, InceptionV3, InceptionResNetV2, Xception, DenseNet201, ResNet152V2, and NASNetLarge, were investigated. We found that the models trained on the X-ray image dataset outperform models with the same architectures trained on the CT scan image dataset, that the VGG16-based model outperforms all other models trained on the X-ray image dataset, and that model decisions can be made interpretable to humans via local interpretable model-agnostic explanations (LIME). Comparing the cost of the two image sources, an X-ray-based COVID-19 model would be the lower-cost solution: it could offer physicians a second opinion and greater confidence when assessing patients where test kits are in short supply, since X-ray devices are widely available.