Fetal Health Classification
Background
Cardiotocography (CTG) is a procedure in which an ultrasound transducer is placed on the mother's abdomen to continuously measure the fetal heart rate. Cardiotocograms (CTGs) are a simple and inexpensive way for healthcare providers to assess fetal health and take measures to reduce infant and maternal mortality. The machine sends ultrasound pulses and reads the response, providing information on the fetal heart rate (FHR), fetal movements, uterine contractions, and more. CTG is commonly used to check fetal well-being during pregnancy, especially in pregnancies with a higher risk of complications, and it enables the diagnosis of fetal distress at an early stage. The interpretation of a CTG can help determine whether a pregnancy is high-risk or low-risk; an abnormal CTG may require further study and, in some cases, intervention. The ability to predict the outcome of the CTG test is essential for ensuring the fetus's safety, and predicting these risks demands a good understanding of cardiotocography.
Objective
This use case is an effort to use cardiotocograms to determine fetal health. The main goals are to understand and analyze the provided data, test it against various classification models, assess and improve the best model, and present the results and discuss next steps.
Relevance with Xceed Analytics
Xceed Analytics provides a single integrated data and AI platform that reduces the friction in bringing data together and building machine learning models rapidly. It empowers everyone, including citizen data engineers and scientists, to combine data and deliver data and ML use cases quickly. Its low-code/no-code visual designer and model builder can be leveraged to bridge the gap and expand the availability of key data science and engineering skills.
This use case showcases how to create, train/test, and deploy a fetal health classification model using the Fetal Health Classification dataset obtained from the UCI Machine Learning Repository. Xceed provides a no-code environment for the end-to-end implementation of this project, from uploading datasets from numerous sources to deploying the model at an endpoint. All of these steps, from analyzing the data to constructing a model and deploying it, are built with the Visual Workflow Designer.
Data Requirements
The dataset used here includes:
- Fetal Health Classification dataset: contains fetal cardiotocogram information.
Columns of interest in the dataset
Model Objective
By analyzing the underlying data, building a classification machine learning model, and deploying it after identifying the model's most important features, we predict whether fetal health is likely to be normal or not.
Steps followed to develop and deploy the model
- Upload the data to Xceed Analytics and create a dataset
- Create the Workflow for the experiment
- Perform initial exploration of data columns.
- Perform Cleanup and Transform operations
- Build/Train a classification model
- Review the model output and Evaluate the model
- Improve the metrics that matter for productionizing
- Deploy/Publish the model
Upload the data to Xceed Analytics and Create the dataset
- From the Data Connections page, upload the dataset to Xceed Analytics. For more information, refer to Data Connections.
- Create a dataset from the uploaded data source in the Data Catalogue. Refer to Data Catalogue for more information on how to generate a dataset.
Create the Workflow for the experiment
- Let's create our workflow by going to the Workflows tab in the navigation. Create Workflow has more information on how to create a workflow.
- Once the workflow is created, we'll see it listed on the Workflows page.
- To navigate to the Workflow Details page, double-click on the workflow list item and then click Design Workflow. Visit the Workflow Designer Main Page for additional information.
- Click '+' to add the Input Dataset; the input step will be added to the Step View.
Perform initial exploration of data columns.
- Examine the output view with Header Profile, paying special attention to the column datatypes. Refer to Output Window for more information about the output window.
- Column Statistics Tab (Refer to Column Statistics for more details on individual KPI)
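Behind the Column Statistics tab, each numeric column is summarized with KPIs such as count, mean, standard deviation, and min/max. A minimal stdlib sketch of how such statistics are computed (the column name and values below are invented for illustration, not taken from the actual dataset):

```python
import statistics

# Hypothetical sample of a numeric CTG column (values are made up)
baseline_value = [120.0, 132.0, 133.0, 134.0, 122.0, 140.0]

stats = {
    "count": len(baseline_value),
    "mean": statistics.mean(baseline_value),
    "std": statistics.stdev(baseline_value),  # sample standard deviation
    "min": min(baseline_value),
    "max": max(baseline_value),
}
print(stats)
```

Comparing these numbers against domain expectations (e.g. a plausible FHR baseline range) is a quick sanity check before modeling.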
Perform Cleanup and Transform Operations
- Before we build our model, we would normally perform a few cleanup modifications. Since the dataset used here is already clean, we proceed to the next steps.
Build/Train a classification Model
- You now have a dataset to work with for creating a classification model. The actions to take before training the model are listed below.
- Feature Selection
- Feature Encoding
- Choose the algorithm and train the model.
Feature Selection
- Go to the Column Profile view and select the multi-variate profile to construct a correlation matrix and manually identify the features of interest. Xceed Analytics displays the Pearson correlation. Select all of the columns that are strongly correlated with the target feature.
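The multi-variate profile's correlation matrix is built from pairwise Pearson coefficients. A rough stdlib sketch of the underlying calculation (the feature and target values below are invented; a coefficient near ±1 flags a strong linear relationship with the target):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: a feature that moves with the target correlates strongly
accelerations = [0.0, 0.001, 0.002, 0.003, 0.004]  # hypothetical column
fetal_health  = [3.0, 2.0, 2.0, 1.0, 1.0]          # hypothetical target labels
r = pearson(accelerations, fetal_health)
print(round(r, 3))
```

Here the strong negative coefficient would mark `accelerations` as a candidate feature; columns with coefficients near zero are weaker candidates.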
Feature Encoding
- Take all of the categorical columns and encode them based on the frequency with which their values occur. For more information on this processor, refer to Feature Encoding.
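Frequency encoding replaces each category with how often it occurs in the column. A minimal stdlib sketch of the idea (the `tendency` column and its values are only an assumption for illustration, not the processor's actual implementation):

```python
from collections import Counter

def frequency_encode(values):
    """Map each category to its relative frequency in the column."""
    counts = Counter(values)
    n = len(values)
    return [counts[v] / n for v in values]

# Hypothetical categorical column
tendency = ["left", "right", "left", "symmetric", "left"]
encoded = frequency_encode(tendency)
print(encoded)  # "left" appears 3/5 of the time, the others 1/5 each
```

The result is a numeric column a classifier can consume directly, at the cost of mapping equally frequent categories to the same value.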
Choose the algorithm and train the model.
- The prediction model estimates a categorical variable: fetal health. From the Transformer View, select Classification (auto pilot) and fill in the relevant information.
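Classification (auto pilot) trains and compares several algorithms automatically. To give a feel for what any single classifier learns, here is a deliberately tiny nearest-centroid classifier in pure Python; this is a stand-in for illustration only (not Xceed's actual implementation), and the training data is invented:

```python
# Minimal nearest-centroid classifier: predict the class whose feature
# centroid (mean vector) is closest to the sample.

def fit_centroids(X, y):
    """Compute one mean feature vector per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lbl in zip(X, y) if lbl == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Two hypothetical features: baseline FHR and an acceleration measure
X_train = [[120, 0.0], [125, 0.1], [170, 0.0], [165, 0.05]]
y_train = ["normal", "normal", "pathological", "pathological"]

model = fit_centroids(X_train, y_train)
print(predict(model, [122, 0.05]))  # lands near the "normal" centroid
```

Auto pilot does this kind of fit-and-predict cycle for multiple, far more capable algorithms and then ranks them, so you only fill in the target column and features.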
Review the model output and Evaluate the model
After you finish building the model, it is time to review the model output. Look at the output window to review your predicted results first. You will see a new column in the view like the one below.
When you finish building your model, you will see another tab in the view called ML Explainer. Click on it to evaluate your model.
- The first view you see when you click on ML Explainer is the Summary view.
- The second view under ML Explainer is the Configuration view.
The Configuration view shows the information you filled in during the Classification step. The view would look like the one below.
- The third view under ML Explainer is the Performance view, where you can see the confusion matrix, ROC curve, precision vs. recall, and cumulative gain curve. Review these charts and decide whether they are good enough for your model. The confusion matrix is a good indicator of how well your model was trained.
- The fourth view under ML Explainer is the Leaderboard. In this view you can see the algorithms trained and the feature engineering applied to each, with a ranking system that surfaces the best-performing algorithm.
- The last view under ML Explainer is Interpretability. In this view you can interpret your model in simple terms, with results for Feature Importance, PDP Plots, Sub-Population Analysis, Independent Explanation, and Interactive Scoring. For more information on these results, refer to Interpretability. The Interpretability tab and the results under it would look like the one below.
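The Performance view's charts all derive from comparing predicted labels against actual labels on evaluation data. A stdlib sketch of how a confusion matrix and per-class precision/recall fall out of that comparison (the labels below are invented for illustration):

```python
from collections import Counter

# Hypothetical actual vs. predicted fetal-health labels
y_true = ["normal", "normal", "suspect", "normal", "suspect", "normal"]
y_pred = ["normal", "suspect", "suspect", "normal", "normal", "normal"]

# Confusion matrix stored sparsely as (actual, predicted) -> count
confusion = Counter(zip(y_true, y_pred))

def precision(label):
    """Of the samples predicted as `label`, how many truly are?"""
    tp = confusion[(label, label)]
    predicted = sum(c for (_, p), c in confusion.items() if p == label)
    return tp / predicted

def recall(label):
    """Of the samples truly `label`, how many were caught?"""
    tp = confusion[(label, label)]
    actual = sum(c for (a, _), c in confusion.items() if a == label)
    return tp / actual

print(precision("normal"), recall("normal"))
```

For a safety-critical screen like fetal health, recall on the abnormal classes usually deserves the most attention, since a missed pathological case is costlier than a false alarm.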