Analysis of Anchor Papers That Predict Floods

The anchor paper by Masturah and Muhaini (2007) uses a Spiking Neural Network to predict floods, comparing three algorithms implemented with different tools: Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Dynamic Evolving Spiking Neural Network (deSNN). The objective of this research is to show that the deSNN method, run in the NeuCube tool, gives the most accurate result compared to the SVM and MLP methods. The historical data of this research is Spatio/Spectro Temporal Data (SSTD), which cannot be used directly in WEKA; the analysis of the data was based on both space and time. As a comparative experiment, conventional machine learning methods such as MLP and SVM are used as baselines for performance and accuracy. The experiments were set up with the same time-length percentages, using NeuCube for the deSNN algorithm and WEKA for the MLP and SVM algorithms.

The researchers conducted two experiments: the first takes the whole 100% time length, while the second takes 80% of the time length. For these baseline algorithms, the time lengths of the training and validation samples need to be equal in the NeuCube and WEKA tools. For the 100% time length, deSNN achieved 83.33% accuracy, SVM 68.75%, and MLP 54.17%; for the 80% time length, deSNN achieved 66.67% while SVM and MLP both achieved 38.89%. This shows that the deSNN algorithm outperforms the other algorithms.
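The paper's deSNN model and NeuCube tool are proprietary research software, but the SVM and MLP baselines it compares against are standard classifiers. As a hedged illustration only (synthetic data, scikit-learn instead of WEKA, all numbers invented), a baseline comparison of this kind can be sketched as:

```python
# Illustrative sketch only: the paper's SSTD flood data and NeuCube/deSNN are
# not reproduced here. This shows how SVM and MLP baselines, like those the
# authors ran in WEKA, can be compared on synthetic data with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the flood dataset (invented, 200 samples, 10 features)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

baselines = {
    "SVM": SVC(kernel="rbf", random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
}
scores = {}
for name, model in baselines.items():
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))
print(scores)
```

The held-out accuracies printed here play the same role as the paper's 100% and 80% time-length accuracy figures: a single comparable number per baseline.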

The next anchor paper examines the value of different precipitation data for flood prediction in an alpine catchment: to accurately predict such events, accurate and representative precipitation data are required. In this study, the researchers investigate the value of three precipitation datasets commonly used in hydrological studies, namely station network precipitation (SNP), interpolated grid precipitation (IGP), and radar-based precipitation (RBP), for flood predictions in an alpine catchment. To quantify their effects on runoff simulations, the researchers perform a Bayesian uncertainty analysis with an improved description of systematic model errors. By using calibration periods of different lengths, they explore the information content of these three datasets for runoff predictions. The results from an alpine catchment showed that using SNP resulted in the largest predictive uncertainty and the lowest model performance as evaluated by the Nash–Sutcliffe efficiency. This performance improved from 0.674 to 0.774 with IGP, and to 0.829 with RBP. The latter two datasets were also much more informative than SNP, as only half as many calibration data points were required to obtain a good model performance. Thus, the results show that the various types of precipitation data differ in their value for flood predictions in an alpine catchment and indicate RBP as the most useful dataset.
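The Nash–Sutcliffe efficiency used to score the runoff simulations above is one minus the ratio of the squared simulation error to the variance of the observations: NSE = 1 means a perfect fit, and NSE ≤ 0 means the model predicts no better than the observed mean. A minimal sketch, with invented runoff values rather than the study's data:

```python
# Nash–Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# NSE = 1 is a perfect fit; NSE <= 0 means no better than predicting the mean.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / variance

# Invented runoff series for illustration only
obs = [3.0, 5.0, 9.0, 6.0, 4.0]
perfect = nash_sutcliffe(obs, obs)            # perfect simulation -> 1.0
mean_model = nash_sutcliffe(obs, [5.4] * 5)   # predicting the mean -> 0.0
print(perfect, mean_model)
```

On this scale, the study's improvement from 0.674 (SNP) to 0.829 (RBP) is a substantial gain in explained runoff variance.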

The paper on Bayesian model averaging for river flow prediction discusses the practical benefits of Bayesian model averaging for a problem with limited data, namely the future flow of five intermittent rivers. This problem is a useful proxy for many others, as the limited amount of data only allows the tuning of small, simple models. Bayesian model averaging is theoretically a good way to cope with these difficulties, but it has not been widely used on this and similar problems. The researchers use real-world data to illustrate why Bayesian model averaging can indeed give a better prediction, but only when the amount of data is small: the weighted votes of the diverse plausible models will then, on average, give a better prediction than the single best model. In contrast, when data are plentiful, only one or a few very similar models fit; since they will vote the same way, Bayesian model averaging gives no practical improvement. Even with limited data that agree with a range of models, the improvement is not very large, but the direction of the improvement stands out as a help for forecasting.
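The weighted-vote mechanism described above can be sketched in a few lines. This is a hedged illustration, not the paper's method: the three candidate "models", the observed flows, and the Gaussian error model with fixed sigma are all assumptions made here, and the weights are simply normalized likelihoods (i.e. a uniform prior over models):

```python
# Minimal Bayesian model averaging sketch over hypothetical flow models.
# Weights are posterior model probabilities, here taken proportional to each
# model's likelihood on a small invented dataset (uniform prior assumed).
import math

observed_flow = [2.0, 0.0, 3.0, 1.0]       # invented intermittent-river flows
model_predictions = {                       # three invented candidate models
    "low":  [1.5, 0.5, 2.5, 1.0],
    "mid":  [2.0, 0.0, 3.0, 1.5],
    "high": [3.0, 1.0, 4.0, 2.0],
}

def log_likelihood(preds, obs, sigma=1.0):
    # Gaussian error model with fixed sigma (an assumption for this sketch)
    return sum(-0.5 * ((p - o) / sigma) ** 2 for p, o in zip(preds, obs))

logls = {m: log_likelihood(p, observed_flow) for m, p in model_predictions.items()}
max_ll = max(logls.values())                 # subtract max for numerical stability
weights = {m: math.exp(ll - max_ll) for m, ll in logls.items()}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}  # posterior probabilities

# BMA forecast for the next time step: average the models' next predictions
next_preds = {"low": 1.2, "mid": 1.8, "high": 2.8}    # also invented
bma_forecast = sum(weights[m] * next_preds[m] for m in weights)
print(weights, bma_forecast)
```

With limited data, several models keep non-negligible weight and the averaged forecast differs from the single best model's; with abundant data, one model's weight approaches 1 and the average collapses to that model, matching the paper's observation.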

In conclusion, Bayesian model averaging can only bring an improvement if the data are such that the distribution of plausible models is asymmetric or multimodal; for borderline cases, looking at the direction of the difference in the prediction brings improved accuracy. If instead the data are consistent with a tight, symmetric distribution of models, then the prediction will be the same as that of the single best-fit model.

15 July 2020