Patient-Specific Models Of Congenital Heart Disease Defect Detection By Using Clustering

CHD results in low oxygenation of the blood and obstruction of flow through the pulmonary valve. To predict Tetralogy of Fallot in the heart, this work uses the GLCM and FELICM clustering techniques, which make predictions by grouping the data into a number of data points. Clustering of numerical data forms the basis of many classification and system-modelling algorithms; its purpose is to identify natural groupings of data in a large data set and produce a concise representation of a system's behaviour. Fuzzy local information C-means is among the best image clustering methods used for image segmentation, and the effects of noise are reduced by analysing the spatial relationships between pixel values. Fuzzy C-Means with Edge and Local Information (FELICM) introduces weights for pixel values within local neighbour windows, which improves edge detection accuracy. In FCM, a dataset is grouped into n clusters, with every data point in the dataset belonging to every cluster to a certain degree.

Introduction

A congenital heart defect (CHD), also known as a congenital heart anomaly or congenital heart disease, is a problem in the structure of the heart that is present at birth. Signs and symptoms depend on the specific type of problem, and can vary from none to life-threatening. When present, they may include rapid breathing, bluish skin, poor weight gain, and feeling tired. It does not cause chest pain.

Most congenital heart problems do not occur alongside other diseases. Complications that can result from heart defects include heart failure. This work aims to predict Tetralogy of Fallot in the heart by using GLCM and FELICM clustering. Clustering in image processing is basically defined as the technique in which groups of identical image primitives are identified.

Clustering is a method in which objects are unified into groups based on their characteristics. A cluster is basically an assembly of objects that are similar to one another and dissimilar to the objects belonging to other clusters. The GLCM described here is used for a series of "second order" texture calculations. First-order texture measures are statistics calculated from the original image values, such as variance, and do not consider pixel-neighbour relationships.

Second-order measures consider the relationship between groups of two (usually neighbouring) pixels in the original image. Third- and higher-order textures (considering the relationships among three or more pixels) are theoretically possible but not commonly implemented due to calculation time and interpretation difficulty, although there has been some recent development of more efficient ways to calculate third-order textures. The texture filter functions provide a statistical view of texture based on the image histogram. These functions can provide useful information about the texture of an image but cannot provide information about shape, i.e., the spatial relationships of pixels in an image. To boost the performance of image segmentation further, an improved Fuzzy C-Means with Edge and Local Information (FELICM) based gray-stretch approach is proposed that helps to obtain an adaptive threshold for segmenting an image. The edges of the input are predicted first, and the feature value is then predicted by grouping the data. Taking the personalised simulation of Congenital Heart Disease as an example, the platform uses advanced and interactive frameworks to provide researchers and clinicians with adapted tools for pre-processing dynamic data and detecting edges using GLCM and FELICM.
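
As a rough illustration of the gray-stretch idea, the Python sketch below linearly stretches the gray levels between two percentiles to the full 8-bit range; the function name and percentile parameters are illustrative choices, not the exact FELICM gray-stretch formulation.

    import numpy as np

    def gray_stretch(img, low_pct=2, high_pct=98):
        # Illustrative percentile-based stretch, not the paper's exact formula:
        # map gray levels between two percentiles linearly onto [0, 255].
        lo, hi = np.percentile(img, (low_pct, high_pct))
        stretched = np.clip((img.astype(np.float64) - lo) / (hi - lo + 1e-10), 0.0, 1.0)
        return (stretched * 255).astype(np.uint8)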

Literature Review

Many cardiovascular diseases are closely associated with the flow conditions in the blood vessels. One major type of arterial disease is coronary artery disease (CAD), which is characterized by localized accumulation of cholesterol. CAD is an occlusion of the coronary arteries resulting in an insufficient supply of blood and oxygen deprivation to the heart muscle. When the blockage of an artery is complete, it results in a heart attack or, in very severe cases, the death of the patient. Detailed knowledge of the associated disordered flow patterns is therefore important for detecting localized arterial disease in its early stages; early detection makes it possible to treat the disease with medication rather than surgery.

Early detection of heart diseases is an absolute necessity because of the serious impact of these illnesses on human health, and one line of work proposes a modified version of the HNPC for this purpose. To fulfil this aim, a panoply of computational intelligence techniques has been evaluated on many heart disease datasets, with two goals: first, to find the classification technique that usually outperforms the others regardless of the decision-making problem, and second, to find the classifier that outperforms the rest in order to further improve classification accuracy for the decision-making problem of heart disease detection.

Expert systems are computer programs that imitate the reasoning of an expert with expertise in a particular area of knowledge. The variable-centered intelligent rule system (VCIRS) is one expert-system method with an advantage in data repair: if an error occurs or the data evolve, updates can be made without having to rebuild the system from scratch. Previous studies discuss the diagnosis of acute coronary heart disease (ACHD) by encoding experts' tacit knowledge into a personal computer. One such study developed an expert system connected to a heart-rate detector combining a pulse sensor, an Arduino ADK, and Android phones; in that research, a prototype early-warning system for heart disease on an Android mobile phone was successfully built.

Coronary artery disease, also known as atherosclerotic heart disease, is the most common type of heart disease and cause of heart attacks. The disease is caused by plaque building up along the inner walls of the arteries of the heart, which narrows the arteries and reduces blood flow to the heart.

Recently, intracoronary Optical Coherence Tomography (OCT) has emerged as one of the most promising intra-coronary diagnostic tools, with a resolution of 15 µm compared with the 150 µm of intravascular ultrasound (IVUS), allowing a level of detail never reached before. OCT acquisition has already been proved to be safe, effective, and highly reproducible. Most existing automatic OCT systems have focused on vessel lumen segmentation or stent strut detection; for example, Tsantis et al. presented an automatic vessel lumen segmentation method based on a Markov Random Field (MRF) model, while other researchers focused on problems such as calcified plaque detection. K-means clustering has several drawbacks, illustrated by the sketch after the list below:

  • It is difficult to predict the number of clusters (the K value).
  • The initial seeds have a strong impact on the final results.
  • The order of the data has an impact on the final results.
  • It is sensitive to scale.
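
The following minimal Python sketch demonstrates the seed-sensitivity drawback, assuming scikit-learn is available; the toy data and seed values are illustrative only.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two overlapping 2-D Gaussian blobs as toy data.
    data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])

    # With a single initialization per run, different random seeds can
    # converge to different final cluster centres.
    for seed in (0, 1, 2):
        km = KMeans(n_clusters=2, n_init=1, random_state=seed).fit(data)
        print(seed, np.round(km.cluster_centers_, 2))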

System Design

Algorithm evaluation

The goal is to predict Tetralogy of Fallot in the heart by using GLCM and FELICM clustering. The edges of the input are predicted first, and the feature value is then predicted by grouping the data. Taking the personalised simulation of Congenital Heart Disease as an example, the platform uses advanced and interactive frameworks to provide researchers and clinicians with adapted tools for pre-processing dynamic data and detecting edges using GLCM and FELICM.

Gray scale conversion

A gray scale image is also known as an intensity or gray level image. It is an array of class uint8, uint16, int16, single, or double whose pixel values specify intensity values. For single or double arrays, values range from [0, 1]. For uint8, values range from [0, 255]. For uint16, values range from [0, 65535]. For int16, values range from [-32768, 32767]. In image formation using sensors and other image acquisition equipment, the brightness or intensity I of the light of an image is denoted as a two-dimensional continuous function F(x, y), where (x, y) are the spatial coordinates when only the brightness of light is considered. Sometimes three-dimensional spatial coordinates are used. Images involving only intensity are called gray scale images.
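
A minimal Python sketch of gray scale conversion is shown below; the luminance weights are the common ITU-R BT.601 coefficients, an illustrative choice rather than a method prescribed by this paper.

    import numpy as np

    def rgb_to_gray(rgb):
        # Weighted sum of the R, G, B channels of a uint8 image,
        # producing intensity values in [0, 255].
        gray = 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]
        return gray.astype(np.uint8)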

Resolution

As with a one-dimensional time signal, sampling for images is done in the spatial domain, and quantization is done for the brightness values. In the sampling process, the domain of the image is divided into N rows and M columns. The region of intersection of a row and a column is known as a pixel, and the value assigned to each pixel is the average brightness of that region. The position of each pixel is described by a pair of coordinates (xi, yj). The resolution of a digital image is the number of pixels it contains, expressed as number of columns × number of rows. For example, an image with a resolution of 640×480 displays 640 pixels on each of its 480 rows. Other common resolutions include 800×600 and 1024×768. Resolution is one of the most commonly used ways to describe the image quality of a digital camera or other optical equipment. The resolution of a display system or printing equipment is often expressed in dots per inch; for example, a display system may have a resolution of 72 dots per inch (dpi).

Gray levels

Gray levels represent the number of quantization intervals in gray scale image processing. At present, the most commonly used storage method is 8-bit storage. There are 256 gray levels in an 8-bit gray scale image, and the intensity of each pixel can range from 0 to 255, with 0 being black and 255 being white. Another commonly used storage method is 1-bit storage, with only two gray levels, 0 being black and 1 being white; such an image is referred to as a binary image and is frequently used in medical imaging. As binary images are easy to operate on, images in other storage formats are often converted into binary images when they are used for enhancement or edge detection.
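
The conversion from an 8-bit gray image to a 1-bit binary image can be sketched in Python as follows; the threshold value of 128 is an illustrative default, not one fixed by this work.

    import numpy as np

    def to_binary(gray, threshold=128):
        # Map an 8-bit gray image (levels 0-255) to a binary image:
        # 0 (black) below the threshold, 1 (white) at or above it.
        return (gray >= threshold).astype(np.uint8)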

A. Edge detection

Edge detection is a well developed field in its own right within image processing. It is basically an image segmentation technique that divides the spatial domain, on which the image is defined, into meaningful parts or regions. Edges characterize boundaries and are therefore a problem of fundamental importance in image processing; they typically occur on the boundary between two different regions in an image. Edge detection allows the user to observe the features of an image where there is a more or less abrupt change in gray level or texture, indicating the end of one region in the image and the beginning of another. It finds practical applications in medical imaging, computer-guided surgery and diagnosis, locating objects in satellite images, face recognition, fingerprint recognition, automatic traffic control systems, the study of anatomical structure, etc. Many edge detection techniques have been developed for extracting edges from digital images.

B. Canny edge detection

The Canny edge detector is an advanced algorithm derived from the earlier work of Marr and Hildreth. It is an optimal edge detection technique, as it provides good detection, a clear response, and good localization. It is widely used in current image processing techniques, with further improvements.

Step I: Noise reduction by smoothing. Noise contained in the image is smoothed by convolving the input image I(i, j) with a Gaussian filter G. Mathematically, the smoothed resultant image is given by F(i, j) = G * I(i, j). (Prewitt operators are simpler to compute than the Sobel operator but more sensitive to noise.)

Step II: Finding gradients. In this step we detect the edges where the change in grayscale intensity is maximum. The required areas are determined with the help of the gradient of the image; the Sobel operator is used to determine the gradient at each pixel of the smoothed image.

Step III: Non-maximum suppression. Non-maximum suppression is carried out to preserve all local maxima in the gradient image and delete everything else, which results in thin edges.

Step IV: Hysteresis thresholding. The output of non-maximum suppression still contains local maxima created by noise. Instead of choosing a single threshold, two thresholds t_high and t_low are used to avoid the problem of streaking.
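
The four steps can be sketched in Python with OpenCV as below, assuming the cv2 package is available; the file name, kernel size, and the two hysteresis thresholds are illustrative placeholders.

    import cv2

    # Read the input as a gray scale image (the path is a placeholder).
    img = cv2.imread("input_scan.png", cv2.IMREAD_GRAYSCALE)

    # Step I: Gaussian smoothing to suppress noise.
    smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)

    # Steps II-IV: gradient computation, non-maximum suppression, and
    # hysteresis thresholding are all performed inside cv2.Canny; the two
    # numeric arguments play the roles of t_low and t_high.
    edges = cv2.Canny(smoothed, 50, 150)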

C. Image analysis

The image is converted into the Lab colour space, which is used to identify the exact image content used in the process. Colour is a powerful descriptor and plays an important role in digital image description. Statistics show that about 90% of the edge information in a colour image is the same as in the gray image; in other words, about 10% of the edge information in the colour image goes undetected, so it is essential to study the problem of colour image edge detection.

In recent years researchers have therefore put forward many algorithms for RGB image edge detection. The method consists of three steps: extending the edge detection method to the three components of the RGB colour space, combining the edges of the three components by a definite logic algorithm, and obtaining the colour image edge. The common shortcomings of RGB image edge detection algorithms are their low speed and the colour losses after each component is processed.

The basic idea of HSV colour image edge detection is to process only one component. The basic colour image edge detection method in the various colour spaces consists of three steps. First, the colour space is converted from RGB to the other colour space. Second, the (H/Y/Y), (S/Cb/I) and (V/Cr/Q) components are computed from the colour image. Third, only the one component is processed, including histogram equalization. Finally, the colour image is produced from the processed separate components.
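
A minimal Python sketch of the single-component idea is given below, assuming OpenCV; the input path and Canny thresholds are illustrative.

    import cv2

    img = cv2.imread("input.png")  # BGR image; the path is a placeholder

    # Step 1: convert from RGB (BGR in OpenCV) to the HSV colour space.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Step 2: split into the H, S, and V components.
    h, s, v = cv2.split(hsv)

    # Step 3: process only the V component (histogram equalization),
    # then detect edges on that single channel.
    v_eq = cv2.equalizeHist(v)
    edges = cv2.Canny(v_eq, 50, 150)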

D. Clustering techniques

As noted above, clustering in image processing identifies groups of identical image primitives: objects are unified into groups based on their characteristics, such that each cluster is an assembly of objects that are similar to one another and dissimilar to objects in other clusters. An image can be segmented based on its keywords (metadata) or its content (description).

GLCM (Gray Level Co-Occurrence Matrix)

With the GLCM, the feature value is predicted: the Gray Level Co-occurrence Matrix and its associated texture features are calculated. The GLCM is a tabulation of how often different combinations of gray levels co-occur in an image, and the texture feature calculation measures the variation in intensity at the pixel of interest. The co-occurrence matrix is a statistical model that is useful in a variety of image analysis applications, such as biomedical imaging, remote sensing, and industrial defect detection systems. FPGAs are reconfigurable hardware devices with the ability to execute many complex computations in parallel; these abilities enable hardware systems dedicated to performing fast co-occurrence matrix computations.
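
A small Python sketch of GLCM construction and texture feature extraction is shown below, assuming a recent scikit-image; the toy 4-level image stands in for a real segmented scan.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Toy 4-level gray image; in practice this would be the input scan.
    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [0, 2, 2, 2],
                    [2, 2, 3, 3]], dtype=np.uint8)

    # Co-occurrence counts for pixel pairs one step to the right (angle 0).
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=4,
                        symmetric=True, normed=True)

    # Texture features derived from the matrix.
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        print(prop, graycoprops(glcm, prop))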

FELICM (Fuzzy Edge and Local Information C-Means Clustering)

In FELICM the image is segmented to predict the tetralogy. Clustering underlies many classification and system-modelling algorithms; it identifies natural groupings of data in a large data set to produce a concise representation of a system's behaviour. Fuzzy c-means (FCM) is a data clustering technique in which a dataset is grouped into n clusters, with every data point in the dataset belonging to every cluster to a certain degree. It is an unsupervised clustering algorithm that is applied to a wide range of problems involving classifier design, clustering, and feature analysis, with applications in astronomy, geology, image analysis, chemistry, shape analysis, medical diagnosis, and target recognition. Plain FCM is sensitive to noise when used without a priori knowledge. FELICM makes the clustering depend on both spectral and spatial information through a fuzzy factor, under the assumption that the label of one pixel is related to the labels of its spatial neighbours.
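
A minimal Python sketch of plain FCM is given below, without the edge and local-information weights that FELICM adds; the function name, fuzzifier, and tolerance values are illustrative assumptions.

    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
        # Plain FCM: X is (n_samples, n_features), c the cluster count,
        # m > 1 the fuzzifier controlling how soft the memberships are.
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)   # memberships sum to 1 per point
        for _ in range(n_iter):
            Um = U ** m
            # Cluster centres are membership-weighted means of the data.
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            # Euclidean distance of every point to every centre.
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            d = np.fmax(d, 1e-10)           # guard against division by zero
            # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U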

Results and Discussion

Hence, there is a requirement for a clustering technique. A new technique is proposed and implemented to improve assessment and support evasive action against Congenital Heart Disease (CHD). This paper discusses the performance of GLCM and FELICM: the edges of the input are predicted first, and the feature value is then predicted by grouping the data. Taking the personalised simulation of Congenital Heart Disease as an example, the platform uses advanced and interactive frameworks to provide researchers and clinicians with adapted tools for pre-processing dynamic data and edge detection. In comparison with existing methods such as K-means clustering, the efficiency of prediction is higher.

Conclusion and Future Research

In this paper we discuss the challenges of improving assessment in congenital heart disease. In general, FELICM and GLCM are more promising than other clustering techniques. The major advantage of the FELICM technique is that it can handle overlapping sets of data; the algorithm gives good results and is more efficient than the k-means algorithm. In k-means, a data point belongs completely to a single cluster centre, whereas in FCM each data point is given a membership to each cluster centre, so a data point can belong to more than one cluster centre. Hence, in future, an efficient clustering technique for the assessment of CHD can be designed and developed based on comparison with FELICM. In the future we also hope to develop priors specifically for Ebstein's anomaly and congenitally corrected transposition of the great arteries using a clustering technique.
