Model 12 Skin Tumorous Disorders
Model 12Dx (http://dx.medicalphoto.org) is trained with 19,398 manually cropped images and an additional 159,477 images (approximately 260 classes).
Paper – Journal of Investigative Dermatology (Feb. 2018)
– Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm
Letter – Journal of Investigative Dermatology (June 2018)
– Here we explained the difference between AUC and top-1 accuracy, and between multi-class and binary classification. If results are calculated without considering the operating threshold on the AUC curve and the multi-class nature of the classifier, the sensitivity inevitably appears low.
– Interpretation of the Outputs of Deep Learning Model trained with Skin Cancer Dataset
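The distinction above can be illustrated with a toy example (hypothetical class names and probabilities, not the actual model outputs): a multi-class model's top-1 prediction may name a benign class even when the summed probability over malignant classes exceeds a reasonable screening threshold, so scoring by top-1 alone understates sensitivity.

```python
# Toy illustration (hypothetical numbers): sensitivity looks low if a
# multi-class model is scored by its top-1 output instead of a thresholded
# malignancy score, as a binary classifier would be.

MALIGNANT = {"melanoma", "bcc", "scc"}

def top1(probs):
    """Class with the highest softmax probability."""
    return max(probs, key=probs.get)

def malignancy_score(probs):
    """Summed probability over all malignant classes (binary view)."""
    return sum(p for c, p in probs.items() if c in MALIGNANT)

# Three hypothetical malignant lesions; each dict is one image's output.
cases = [
    {"melanoma": 0.30, "bcc": 0.25, "nevus": 0.35, "wart": 0.10},
    {"scc": 0.40, "bcc": 0.15, "keratosis": 0.45, "wart": 0.00},
    {"melanoma": 0.60, "nevus": 0.30, "wart": 0.10, "bcc": 0.00},
]

# Top-1 (multi-class) view: a benign class wins in two of three cases.
top1_hits = sum(top1(p) in MALIGNANT for p in cases)

# Thresholded (binary) view at an operating point of 0.5:
# the summed malignancy score flags all three cases.
binary_hits = sum(malignancy_score(p) >= 0.5 for p in cases)

print(top1_hits, "/", len(cases))    # top-1 detects 1 of 3
print(binary_hits, "/", len(cases))  # threshold detects 3 of 3
```

The same model thus shows 33% "sensitivity" under top-1 scoring but 100% under a binary threshold, which is the point made in the letter.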
MODEL DERMATOLOGY AND MODEL MELANOMA
Model Dermatology and Model Melanoma use the same AI model, which was trained with 220,680 images (174 disease classes).
Paper – Journal of Investigative Dermatology (March 2020)
– Augmented Intelligence Dermatology: Deep Neural Networks Empower Medical Professionals in Diagnosing Skin Cancer and Predicting Treatment Options for 134 Skin Disorders
Interview – MedicalResearch.com
– AI improved diagnosis of skin disorders, especially distinguishing benign from malignant tumors
NEW MODEL DERMATOLOGY
The current web demo (http://modelderm.com) runs on the newest AI model, which was trained with 1,100k image blobs (178 disease classes). In developing the new model, I mainly tried to reduce false positives in skin cancer diagnosis while keeping sensitivity comparable to that of the JID2020 model (http://jid2020.modelderm.com). This is the same model used in the RCNN paper (JAMA Dermatology, Dec. 2019) below.
We have conducted a large-scale retrospective study (patients N = 10,426; images N = 40,331).
Paper – medRxiv.org (Dec. 2019)
– Retrospective Assessment of Deep Neural Networks for Skin Tumor Diagnosis
A beta version is available at http://t.modelderm.com, in which the disease classifier was trained with 410k image crops. The beta version includes a feed-forward network trained with metadata (age and sex) from 21,711 Asian patients (694,858 image patches). In addition, the algorithm considers basic characteristics of the lesion (onset, hardness, pain, and itching) when diagnosing general skin disorders.
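As a rough sketch of how such a feed-forward network can combine an image classifier's output with patient metadata, consider the minimal MLP below. The layer sizes, weights, and feature encoding are all hypothetical; the real model's architecture is not described in detail here.

```python
import math

# Minimal sketch (hypothetical weights and sizes) of a feed-forward network
# that combines a CNN malignancy score with patient metadata (age, sex).

def forward(x, layers):
    """Plain MLP forward pass: ReLU hidden layers, sigmoid output."""
    for i, (W, b) in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]            # ReLU
    return [1.0 / (1.0 + math.exp(-v)) for v in x]  # sigmoid

# Input features: [CNN malignancy score, age / 100, sex (0 female, 1 male)]
layers = [
    ([[1.5, 0.8, 0.2], [-1.0, 0.5, 0.1]], [0.0, 0.0]),  # 3 -> 2 hidden
    ([[2.0, 1.0]], [-1.5]),                              # 2 -> 1 output
]

young = forward([0.6, 0.30, 0], layers)[0]
old   = forward([0.6, 0.85, 1], layers)[0]
print(round(young, 3), round(old, 3))  # same CNN score, different risk
```

With identical image scores, the metadata shifts the final risk estimate, which is the role such a network plays on top of the image classifier.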
MODEL DERMATOLOGY WITH RCNN
Model Dermatology with RCNN (http://rcnn.modelderm.com) was developed for skin cancer screening. Region-based CNN (R-CNN) technology was used to train a blob detector, and CNNs were used to train a fine image selector and a disease classifier. We collected and annotated approximately 1,100k images by extracting all possible lesional areas from the entire set of training photographs.
DEMO – http://rcnn.modelderm.com
Paper – JAMA Dermatology (Dec. 2019)
– Keratinocytic Skin Cancer Detection on the Face Using Region-Based Convolutional Neural Network
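The three-stage screening pipeline described above (blob detector → fine image selector → disease classifier) can be sketched as follows. The stage functions are hypothetical stand-ins that operate on toy data, not the real trained models.

```python
# Sketch of a three-stage screening pipeline: candidate detection,
# fine-image selection, then disease classification. All stage logic
# and data below are hypothetical stand-ins.

def detect_blobs(photo):
    """Stage 1 (R-CNN blob detector): propose candidate lesion crops."""
    return photo["candidate_crops"]

def is_fine_image(crop):
    """Stage 2 (fine image selector): keep only well-focused crops."""
    return crop["sharpness"] >= 0.5

def classify(crop):
    """Stage 3 (disease classifier): return (diagnosis, confidence)."""
    return crop["label"], crop["score"]

def screen(photo, threshold=0.8):
    """Run all three stages; report crops flagged as possible cancer."""
    flagged = []
    for crop in detect_blobs(photo):
        if not is_fine_image(crop):
            continue  # blurry crops never reach the classifier
        diagnosis, score = classify(crop)
        if diagnosis == "keratinocytic_cancer" and score >= threshold:
            flagged.append((diagnosis, score))
    return flagged

photo = {"candidate_crops": [
    {"sharpness": 0.9, "label": "keratinocytic_cancer", "score": 0.92},
    {"sharpness": 0.2, "label": "keratinocytic_cancer", "score": 0.95},
    {"sharpness": 0.8, "label": "benign_nevus", "score": 0.70},
]}
print(screen(photo))  # only the sharp, high-confidence cancer crop remains
```

The middle stage is what keeps out-of-focus or non-lesional crops from inflating the classifier's false positives, which matches the stated goal of the pipeline.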
Model Onychomycosis (http://nail.medicalphoto.org) is trained with 49,567 images generated by a region-based CNN (R-CNN). To create a deep learning model with diagnostic capabilities beyond those of specialists, we generated a large nail dataset using faster R-CNN.
– http://nail.modelderm.com (+RCNN DEMO)
– Android (Model Onychomycosis)
Original article – PLoS One (Jan. 2018)
– Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network
Magazine – IEEE Spectrum (Feb. 2018)
– AI Beats Dermatologists in Diagnosing Nail Fungus
PROXIMAL HUMERUS FRACTURE STUDY
To obtain specialist-level results with current deep learning models and a small amount of data (500 images per class), the lesion of interest should be cropped out and analyzed separately, rather than classifying the whole image.
Original article – Acta Orthopaedica (Feb. 2018)
– Automated detection and classification of the proximal humerus fracture by using deep learning algorithm
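The crop-first idea above can be shown with a minimal example: extract a small region of interest (here a toy 2D grid standing in for a radiograph; the image and box coordinates are hypothetical) and pass only that patch to the classifier.

```python
# Toy illustration of the crop-first idea: with limited training data,
# classify a small region of interest rather than the whole radiograph.
# The "image" and box coordinates are hypothetical.

def crop(image, box):
    """Extract a rectangular ROI; box = (top, left, height, width)."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]

# A 6x8 "radiograph" as a grid of pixel intensities.
image = [[r * 8 + c for c in range(8)] for r in range(6)]

roi = crop(image, (2, 3, 2, 3))  # 2x3 patch around the lesion
print(roi)  # the classifier would then see only this patch
```

Restricting the input to the cropped patch removes irrelevant anatomy, which is what makes specialist-level accuracy feasible at ~500 images per class.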
MedicalPhoto (http://medicalphoto.org) is a non-commercial medical image management program, developed and maintained by Dr. Han Seung Seog. The core parts were written in standard C++ with the Boost.Asio library, with Unicode support (both UTF-8 and UTF-16LE), and SQLite was used as the main database engine. The source code and binaries of this project were released in 2007.