The Reality Of Artificial Intelligence
A lot of us have heard the words "Artificial Intelligence", or as some of us know it, "AI". When we think of AI, what comes to mind is that "monstrous tech or robot" that scientists keep talking about. You must have heard many of them say "AI is the future", "AI will revolutionize the world", "AI is the solution to most of our world's problems, such as the natural disasters that destroy our crops, our houses, and our lives". But almost immediately you also hear: "AI has the potential to destroy the world", "If you give a robot the ability to think and reason like a human, it can only become better than us and eventually destroy us", "AI is more dangerous than nuclear weapons". What's my take on this? Well, I don't blame them; people tend to believe what they see, and most of the movies that have featured AI and robots have shown that AI has the potential for doom.
One such movie is the well-known "I, Robot", featuring Will Smith and Bridget Moynahan. The story is set in the world of 2035, where robots and AI live side by side with humans, but after the death of Dr. Lanning it is discovered that the AI (VIKI) has evolved and now poses a threat to human lives. What started as a peaceful co-existence with AI and robots turns into chaos. A similar story is told in the movie The Terminator. Even in our time, one of the most reputable technologists and investors in the modern world, Elon Musk, speaking at a tech conference, voiced his concern about the potential threat Artificial Intelligence poses in the near future, as reported by Catherine Clifford on CNBC.
The REAL AI
Picture source: Pixabay (CC0)
Rather than jumping to conclusions as blind men do, let us look critically at AI as the scientists that we are.
One of the very first recognized computers was a mechanical computer created by Charles Babbage, "the father of the computer". It could only perform a few numerical calculations such as addition and subtraction, and he called it the "Difference Engine". In 1833 he began work on another, the "Analytical Engine", one of the very first programmable computers known to man. Input to the machine was provided by punch cards, and output was provided by a printer, a curve plotter and a bell. The machine could also punch numbers onto cards so that they could be read at a later time. By 1947 the Analytical Engine was history, as vacuum tubes and then transistors had evolved and changed the course of computing: instructions and algorithms could now be saved on tape, allowing computers to be programmed, more arithmetic to be performed, and at far greater speeds. Then came integrated circuits, allowing more computing power, in the form of transistors, to be placed on a single chip. Very-large-scale integration offered a process for combining hundreds of thousands of transistors on a single chip, greatly increasing the computing power available to the world. By 2007, hundreds of millions of transistors could fit on a chip; the AMD K10 quad-core (2M L3) recorded a transistor count of about 463,000,000.
Looking back, we can see that the computing world, and "tech" in general, has been developing throughout the course of history. We now have devices capable of monitoring the level of pollution in our atmosphere, MRI and CT scanners capable of showing detailed images of the internal organs of the human body, Google, operating systems (Windows or Mac), NASA's technology, robots and so much more. So the underlying question is: is AI, Artificial Intelligence, any different? The answer is no, it is not. AI is not much different from the "tech" we already know and are already used to. The technology behind AI builds on the technology already around us everywhere, with the sole purpose of allowing humans to accomplish more with smart software. Artificial Intelligence seeks to give a human face to the technology that is already available: technology that can learn from the vast amount of available data (data that has accrued over time, throughout history), learn our language, and interpret the world the way we do. Artificial Intelligence may be a new technology for some, but it has been around for some years now; we even use it in our day-to-day lives without knowing it.
One application of AI is the familiar "Google Assistant" we use in our phones, our laptops, our TVs and even our cars. Google Assistant is capable of making our phone calls, texting messages, searching the internet and setting alarms for us at the mere instruction from our "MOUTH": the underlying capability to understand our language, know what we want and carry out those instructions within the shortest possible time. How, you may ask? It's all thanks to Machine Learning.
Learning, no matter the form, involves the process of acquiring knowledge, or in the case of computers "data", through study or experience. But unlike humans, machines don't learn by reading textbooks, listening to audiobooks or using cognitive reasoning; rather, they do so using code, programs and algorithms written by humans. A basic software model is created and then trained using data; the model learns from this data, or as it is preferably called, from cases. Once the model has been trained with the data, it is used to make predictions for new cases. One thing to keep in mind is that computers are very good at math (calculations), and it is on this concept that machine learning builds. To have the model make intelligent predictions, we just have to find a way to train it to perform the correct calculations. We start with a dataset of historical events or observations that includes numerical features quantifying the characteristics of the item we are working with; we can call this dataset X, and we can call the value we want to predict Y. We then use our dataset X to train the model to calculate values for Y. In simple mathematical terms, we are creating a function F(X) = Y that performs an operation on a dataset X to produce predictions of the values Y.
Generally, in machine learning, we have two types of learning: supervised and unsupervised. In supervised learning we make use of datasets or observations with known values called labels, for example {(x1, x2, x3 → y), (x4, x5, x6 → y), ...}. We reserve some of the known observations and train the model with the rest of them using algorithms; we let the model predict the label values Y and then compare the predicted values of Y with the known ones. Once we know that the model is working, we pass new observations with unknown label values through the model to predict their values. In unsupervised learning, by contrast, we have observations with no label values at all; we train the model to find similarities between observations and to create clusters of similar observations. After the model is trained, each new observation passed through the model is grouped into the cluster with the most similar characteristics.
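To make the F(X) = Y idea and the supervised versus unsupervised split a little more concrete, here is a minimal sketch in Python using the scikit-learn library (assumed to be installed). The feature values and labels below are made-up toy numbers used purely for illustration, not data from this article.

```python
# A toy illustration of F(X) = Y: train on known observations,
# check against held-back ones, then predict for new cases.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans

# --- Supervised learning: observations X with known label values Y ---
X = [[1], [2], [3], [4], [5], [6], [7], [8]]   # numerical features (toy data)
Y = [2, 4, 6, 8, 10, 12, 14, 16]               # known labels (toy data)

# Reserve some known observations for checking; train with the rest.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=0)

model = LinearRegression()
model.fit(X_train, Y_train)                    # train the model to calculate Y from X

# Compare predicted label values with the known ones we held back.
print("predicted:", model.predict(X_test), "known:", Y_test)

# Once the model looks right, predict labels for brand-new observations.
print("new cases:", model.predict([[9], [10]]))

# --- Unsupervised learning: observations with no labels at all ---
points = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

# Each new observation is grouped into the cluster it most resembles.
print("clusters for new points:", clusters.predict([[1.05, 0.95], [5.05, 5.0]]))
```

In the supervised half the labels Y are given to the model during training; in the unsupervised half no labels exist, so the model only groups similar observations together, exactly as described above.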
Creating an AI with machine learning that could perform as well as "Google Assistant" requires great technological know-how and time. Teaching a model to identify and learn our language, match an instruction with the necessary output and finally execute that instruction is not just talk but a series of experiments. Artificial Intelligence is still at its beta stage, and there is still room for improvement. Will AI pose a threat to mankind? If we say yes, then it is mere speculation. The capability of AI will depend on us, the programmers and users, because the data comes from us.