Posts

How to Maintain Model Effectiveness After Deployment

When we are ready to deploy a predictive model that achieves a good accuracy score on the training and testing datasets, there is one more problem to solve: how long will the model keep solving the problem with the same high accuracy, and what is the strategy for maintaining that accuracy? We also need to know what action to take when the model's effectiveness is on a declining trend. In this article, I am sharing my strategy for validating and maintaining predictive model effectiveness. There are two things we can do before deploying a model to production. First, if possible, add a low percentage of negative test data as part of the model results. Negative testing is a method of testing an application that ensures the model handles unwanted input the way it should. For example, consider a recommendation model that selects potential customers to call for marketing purposes from a large customer dataset. In this model, including a low recom…
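To illustrate the first point, here is a minimal sketch of such a negative-control check, assuming a scikit-learn-style classifier with predict_proba; the function negative_control_check and the arrays X_live and X_negative are hypothetical names introduced for this example, not code from the post itself.

```python
import numpy as np

def negative_control_check(model, X_live, X_negative,
                           control_frac=0.02, threshold=0.5):
    """Seed a small fraction of known-negative records into a scoring
    batch and measure how often the model (correctly) rejects them."""
    # Draw a few known-negative records (e.g. customers who should never
    # be recommended) proportional to the size of the live batch.
    n_controls = max(1, int(len(X_live) * control_frac))
    idx = np.random.choice(len(X_negative), size=n_controls, replace=False)
    controls = X_negative[idx]

    # A healthy model should score these controls below the decision threshold.
    scores = model.predict_proba(controls)[:, 1]
    return float((scores < threshold).mean())
```

Logging this rejection rate on every scoring run turns the negative test data into a drift alarm: when the rate starts falling, the model has begun accepting inputs it should reject, and its effectiveness is declining.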

Deep Learning: Detecting Fraudulent Healthcare Providers Using an Autoencoder

Introduction In this article, I will share my experience of using the power of deep neural networks to identify fraudulent healthcare providers from healthcare transactions, treating them as anomalies in the dataset. For this solution, I used the autoencoder machine learning algorithm and implemented it on the H2O platform. Let us start with a definition. An anomaly is a data instance that is significantly different from the other instances in the dataset. These are often treated as statistical outliers or errors to be cleaned from the data before developing a predictive model. But sometimes an anomaly in the data indicates a potentially harmful event that occurred previously; in health care insurance claim data, these are fraudulent claims. Health Care Fraud Healthcare provider fraud is one of the biggest problems facing Medicare. According to the government, total Medicare spending has increased substantially due to fraud in Medicare claims.
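The core of the approach can be sketched in a few lines of H2O's Python API. H2OAutoEncoderEstimator and anomaly() are H2O's actual autoencoder interface, while the file name claims.csv, the layer sizes, and the 99th-percentile cutoff are illustrative assumptions rather than the exact setup from the full post.

```python
import h2o
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator

h2o.init()

# "claims.csv" is a placeholder for the prepared healthcare transaction data.
claims = h2o.import_file("claims.csv")

# Train an autoencoder to learn how to reconstruct normal claim patterns.
autoencoder = H2OAutoEncoderEstimator(
    activation="Tanh",
    hidden=[50, 20, 50],  # narrow middle layer forces a compressed representation
    epochs=50,
)
autoencoder.train(x=claims.columns, training_frame=claims)

# Per-row reconstruction error: rows the model reconstructs poorly
# are the anomalies, i.e. candidate fraudulent claims.
errors = autoencoder.anomaly(claims).as_data_frame()["Reconstruction.MSE"]

# Flag the worst-reconstructed 1% of claims for review (cutoff is illustrative).
cutoff = errors.quantile(0.99)
suspects = claims.as_data_frame()[errors > cutoff]
print(f"{len(suspects)} claims flagged for review")
```

Because the autoencoder is trained only to reproduce the bulk of the data, claims it cannot reconstruct well stand out with a high reconstruction MSE, which is what makes this an anomaly detector rather than a supervised classifier.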

Deep Learning: Leverage Transfer Learning

For my current project, I was researching how to leverage transfer learning techniques to improve model accuracy. In this blog, I am sharing my experience of how to leverage transfer learning in deep learning models. With recent developments in the deep learning space, we can develop complex neural network models trained on very large datasets. However, the main challenge is the limited resources and time available to train such models. Even a simple image classification model with 1,000 images takes hours of training time and GPU resources. In addition, with limited training data, getting good accuracy is challenging. The transfer learning technique helps with both the resource and the time challenge. Simply put, transfer learning takes a deep learning model that was trained on millions of examples for one problem and reuses it for a similar problem without full retraining. In general, deep learning models are highly re-purposable. Since the original model has already been trained…
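As a concrete illustration, here is a minimal Keras sketch of the feature-extraction style of transfer learning; the VGG16 base, the 224x224 input shape, and the binary classification head are illustrative choices under stated assumptions, not necessarily the setup used in the full post.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load a network pre-trained on ImageNet, dropping its classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the pre-trained layers so only the new head is trained.
base.trainable = False

# Attach a small task-specific head (binary classification, as an example).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# train_ds is a placeholder for a tf.data.Dataset of (image, label) batches.
# model.fit(train_ds, epochs=5)
```

Freezing the base means only the few thousand parameters in the new head are updated, which is why a model like this can reach good accuracy on a small dataset in minutes instead of the hours a full training run would take.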