Development and Formation of Computer Science

When most people think of computer science, what do they think of? Most people think of computers, stereotypical nerdy guys, and/or Bill Gates (Shaikh). Four years ago, I was one of those people who thought it was a boring, tedious subject, and I could never imagine myself pursuing it in the future. Despite this viewpoint on computer science, I still found myself applying to the IT program at Carver Center because I was still interested in math and science and wanted to experience something new and challenging. Looking back four years later, I’m glad I went out of my comfort zone. I experienced a field that is extremely challenging but equally rewarding. I not only gained more experience, but I also gained more confidence in my interest in STEM. Today, I can’t picture my future without a career in computer science.

Unfortunately, many people still believe computer science is dull and monotonous. However, this viewpoint can change once one sees the reality of computer science. This research paper will focus on the past, present, and future of computer science. We will investigate the game changers then and now and how they will shape the future. The goal is for people to realize that the development of computer science is the reason the world moves one step closer every day to the futuristic reality seen in Tron: Legacy or Blade Runner. This research paper will also detail the key math and science concepts of this field that are vital for success.

In addition, the ethics behind artificial intelligence, cyber security, and privacy will be discussed throughout the research paper. Finally, the research paper will cover the possible career choices a person can pursue in computer science. This section will not only outline the timeline of a career path but also show how this field can be applied to areas ranging from medical science to art.

The foundations of computer science began with the abacus. This tool was used to help merchants in Asia with commerce. The abacus became man’s first attempt at automating the counting process by helping the user remember his current state of calculations in order to perform more complex mathematical operations. The evolution of computation continued when Ada Lovelace translated Charles Babbage’s paper on the Analytical Engine in the 1840’s. In her annotations, she described how codes could be created for machines to handle letters, symbols, and numbers. She also theorized a method for a machine to repeat a series of instructions (a concept called looping today). Her ideas shaped the basic concepts of computer science (Hoyle).

In 1890, computation improved with the introduction of punched cards. Herman Hollerith was working for the U.S. Census Bureau at the time when he created a device that could automatically read census information from a punched card. As a result, the efficiency of the process improved, since reading errors were greatly reduced and stacks of punched cards could be used as an accessible memory collection. Hollerith’s device was so successful that he began his own company, International Business Machines (Hoyle).

However, punched cards were limited because they could not handle complex mathematical computations. Konrad Zuse developed the Z3 in 1941 to solve this problem. This device was the first machine to work on the binary system, which has two states: 0 and 1. This machine was the catalyst for many programming languages and two-state devices like card readers and vacuum tubes. In 1952, Grace Hopper and her team created the first compiler for computers, which could translate word instructions into code for the computers to read. This compiler was a precursor to the Common Business Oriented Language (COBOL), which became a widely adopted language used around the world (Hoyle).

From the 1960’s to the 1980’s, a revolution occurred in the computer science world. Programming languages like FORTRAN and BASIC became more common as they were used in classrooms and businesses. Companies like Microsoft, IBM, and Apple introduced personal computers for people to use at home (Hoyle). These personal computers became the catalyst for today’s laptops and tablets and introduced computer science for personal use (e.g., video games).

Today, artificial intelligence has exploded onto the field of computer science. New innovations occur every day due to the evolution of computer science. For example, Google was one of the first players to introduce a prototype self-driving car. The car has been tested on the streets of Mountain View, California and Austin, Texas (“Google Self-Driving Car Project”). Many other companies like Tesla, BMW, and Mercedes have released, or are planning to release, cars with self-driving features. It is even expected that there will be about 10 million self-driving cars on the road by 2020 (Greenough).

In addition to self-driving cars, robotics has progressed throughout the years. Today, many of these robots have some of the same capabilities humans possess. For example, a startup called Knightscope has developed a fleet of robots that are able to see, taste, hear, and smell. Their main purpose is to patrol the streets to fight crime, and they are already patrolling the streets in Silicon Valley (Iyer). In addition, MIT researchers have developed a robotic cheetah that can see and jump over hurdles as it runs autonomously (Fisher). This was an incredible achievement, since it opens the possibility of robots not only having human capabilities but also possessing animal characteristics.

While these innovations in artificial intelligence have surpassed the expectations of many people, they have also raised some concerns: if artificial intelligence can surpass human intelligence, will it lead to our own downfall? Innovators like Elon Musk, Bill Gates, and Stephen Hawking warn the public of the dangers of the evolution of artificial intelligence. They all describe how it may lead to unexpected consequences and eventually to the downfall of humankind. While companies like Google disagree with this view, the debate continues as the future of artificial intelligence remains unknown (Sainato).

Furthermore, computer security has become a main priority in national defense. Since the dangers of online terrorism are becoming more prominent, many US leaders are meeting with CEOs of technology firms to discuss the best strategy to combat this problem (Newsy). Universities like Carnegie Mellon and Towson University are advocating for software security to be taught in the computer science curriculum. Additionally, more companies are tracking their customers to help improve their business strategies. For example, Google announced that it tracks students who use its services and uses some of that information to sell targeted ads. However, many privacy advocate groups warn school administrations that they may not realize what information Google is collecting (Peterson). This raises the question of whether computer science can lead to the erosion of privacy. Computer security is becoming more prominent to protect the public against hackers and terrorists, but will it also lead to us sacrificing our own privacy?

Finally, another concern in computer science is the gender gap within the field. Even as more and more jobs are created within this field, only 15-20% of undergraduate computer science majors are women (Wing). Many researchers attribute the root of this problem to society portraying the stereotypical computer science major as a nerdy white boy who spends his time on the computer 24/7 (Persaud). While organizations like Girls Who Code and universities like Stanford and Carnegie Mellon try to bridge the gender gap, it will take a lot of effort to increase female involvement in computer science.

Even though the present is filled with amazing innovations, the future will also bring technological advancements that will impact our society. For one, virtual reality will become commonplace. Virtual reality will not only be available through video games, but it can also enhance the experience of watching a movie or be integrated with education. In addition to these applications, virtual reality will be enhanced with 3D imagery: people will be able to see a 3D image of another person in virtual reality. This will offer a unique perspective not only in video games but also in music concerts and theatrical performances (Takahashi).

Artificial intelligence will play a major role in the daily lives of humans in the future. Autonomous cars will be more common on the roads across the nation, if not the world. There will be fewer accidents, since human error will be completely eliminated, and energy consumption and pollution will be reduced (Anderson). In addition to autonomous cars, robots will become prevalent in our lives. From tour guides to receptionists, human-like social robots will have more interaction with humans. These new social robots will not only be friendly in your first interaction, but they will also be able to remember your name and the previous conversations you had with them (Nanyang Technological University). Telepresence robots will also become common in airports, helping passengers find their way around, and on the streets, helping to patrol for crime. These robots could also go on future missions to Mars or other planets to help scientists collect information about potentially habitable worlds (Technical Research Centre of Finland). Finally, these telepresence robots can help the elderly live more active lives or help nurses with their daily tasks in a hospital.

These robots will not only be more present in our lives, but they will also have human physical and emotional characteristics. In the future, it will be almost impossible to tell a human apart from an almost-human robot (Investigación Y Desarrollo). These robots will not only look human, but they will also be able to see, hear, touch, and smell. In addition, human-robot interaction will be stronger than before, since robots will have human characteristics like personality traits and the ability to experience emotions. Furthermore, robots will revolutionize how humans fight wars. There will be robots able to disable or dispose of bombs on the front lines and humanoid robots that can carry out dangerous missions for the army. Cyber security will also improve as many universities prepare students with the skills to secure their programs.

However, there is a downside to this amazing future: robots and humans will now have to compete in the job market. As a result, the unemployment rate may increase and cause instability in the economy (Zolfagharifard). Furthermore, there is a lurking fear that artificial intelligence will surpass human intelligence, which may lead to humankind’s downfall (Sainato). However, only time will tell if these fears are justified.

Finally, the computer science workforce will grow dramatically. As job availability increases through the decade, the gender gap within computer science will slowly shrink as girls begin to be exposed to computer science in non-stereotypical environments (Fonda). Additionally, Silicon Valley may not be the only hub of computer science and startups in the future. Places like Silicon Alley in New York City, San Francisco, Seattle, and Chicago will become major centers for computer science jobs (Adams). Technology startups will continue to emerge on the market as major companies like Google maintain a prominent influence on our society.

The chronological timeline of computer science is expansive and still evolving. Throughout the development of computer science, concepts have evolved that help people today create technological advancements. The following list of math and science concepts used in computer science is provided by Jeanette M. Wing at Carnegie Mellon University.

Math Concepts

Discrete Math: Computer scientists use algebraic structures (i.e. groups and graphs) along with counting, summation, and permutations. For example, they use these concepts to create data structures, algorithms, and state machines.
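
To make this concrete, here is a small illustrative Python sketch (my own example, not from Wing's list) showing counting with permutations and a graph stored as a simple data structure:

    # Counting and graphs: two everyday discrete-math ideas in code.
    from math import factorial

    # The number of ways to order 5 distinct tasks is 5! = 120.
    print(factorial(5))  # 120

    # A graph stored as an adjacency list, the structure behind many algorithms.
    graph = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
    }
    # Count the edges by summing the lengths of the adjacency lists.
    print(sum(len(neighbors) for neighbors in graph.values()))  # 4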

Logic: Programmers also use proof techniques like induction, deduction, case analysis, etc. to develop algorithms related to recursion, invariants, and symbolic computation. This knowledge is constantly used in artificial intelligence, databases, and software engineering.
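
As a rough illustration (again my own example, not from the source), the sketch below shows how an inductive argument justifies a recursive function and its invariant:

    # Induction and recursion: the comments trace the inductive argument.
    def sum_to(n):
        # Base case: the claim "sum_to(n) == n * (n + 1) // 2" holds for n == 0.
        if n == 0:
            return 0
        # Inductive step: assuming the claim holds for n - 1,
        # returning sum_to(n - 1) + n makes it hold for n as well.
        return sum_to(n - 1) + n

    print(sum_to(10))    # 55
    print(10 * 11 // 2)  # 55, matching the closed-form invariant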

Linear Algebra: Concepts like Gaussian elimination, matrices, determinants, and linear transformations are used for linear programming, clustering, and numerical methods. These concepts are very common in algorithms, artificial intelligence, robotics, and computer vision.
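
Below is a minimal sketch of the kind of problem Gaussian elimination solves, using the NumPy library (an assumed dependency, not mentioned in the source):

    # Solve the 2x2 system:  2x + y = 5   and   x + 3y = 10
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    x = np.linalg.solve(A, b)  # uses an elimination-style (LU) factorization internally
    print(x)                   # [1. 3.], i.e. x = 1, y = 3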

Probability: Programmers use conditional probability, limit theorems, regression, and random processes for randomization, performance analysis, and queueing theory. This is very common in algorithms, operating systems, artificial intelligence, and networking.
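
The short simulation below is an illustrative example (the dice scenario is mine, not the source's) of estimating a conditional probability with random processes:

    # Estimate P(first die is 6 | the sum of two dice is at least 9) by simulation.
    import random

    trials = 100_000
    both = given = 0
    for _ in range(trials):
        d1, d2 = random.randint(1, 6), random.randint(1, 6)
        if d1 + d2 >= 9:       # event B: the sum is at least 9
            given += 1
            if d1 == 6:        # event A: the first die shows a 6
                both += 1

    print(both / given)  # approaches the exact value 4/10 = 0.4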

Science Concepts

Physics: Newton’s Laws of Motion, magnetism, material properties, and time and rate are commonly used in robotics. Some video games and mobile applications also use concepts from physics. For example, the mobile app Angry Birds uses kinematics and Newton’s Laws of Motion in the game.
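
As a hypothetical illustration of the kinematics behind a game like Angry Birds, the sketch below traces a projectile's position under constant gravity; the launch values are made up for the example:

    # Projectile motion: sample the position every half second until it lands.
    import math

    g = 9.81                  # gravitational acceleration, m/s^2
    speed = 20.0              # launch speed, m/s
    angle = math.radians(45)  # launch angle

    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)

    t = 0.0
    while True:
        x = vx * t
        y = vy * t - 0.5 * g * t * t
        if y < 0:
            break
        print(f"t={t:.1f}s  x={x:.1f}m  y={y:.1f}m")
        t += 0.5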

With the growth of computer science, many positive impacts on society have arisen throughout the years. For one, computer science has opened many employment opportunities. For example, the employment of software developers is projected to grow 17% from 2014 to 2024, a rate faster than the average for all occupations (Bureau of Labor Statistics). In addition, computer science can help facilitate daily activities for humans through new innovations. For example, autonomous cars can prevent accidents, since human error will be completely eliminated, and they will reduce energy consumption and pollution (Anderson). Telepresence robots will also become common in airports, helping passengers find their way around, and on the streets, helping to patrol for crime. Finally, computer science has increased communication within our society. From Facebook to Google, people have the chance to interact with other people around the world and gain more insight into the world around them.

However, there are negative impacts to computer science. For one, there are suspicions about its growth. Elon Musk, Stephen Hawking, and Bill Gates try to warn society about the dangerous consequences of the growth of artificial intelligence (Sainato). In addition to this growing concern, many worry that robots will begin to dominate the workplace. It may lead to an increase in unemployment and instability in the economy (Zolfagharifard). Furthermore, there is a constant worry that many will become dependent on technology in order to entertain themselves. From Wi-Fi to social networking, dependence on technology has been shown to increase levels of anxiety for many, according to a study (Schwartz). Even though many people are connected with the world around them, this dependence may hinder their social skills in daily face-to-face interactions.

If the concepts, societal impacts, and innovations of computer science interest someone, he or she should consider majoring in computer science in order to have career opportunities in the field. With only a bachelor’s degree, one can become a software engineer, a computer systems analyst, or a computer programmer. With a master’s degree, a person also has the option of becoming a computer and information systems manager. Finally, with a doctorate, a person has the opportunity to become a computer scientist (“Becoming A Computer Scientist”).

A computer scientist comes up with new and innovative ways of improving computers. To best prepare for becoming a computer scientist, one should take as many advanced math and science classes as the school offers. These classes will not only challenge the person but also help him or her think logically. The person should also consider taking computer science classes if they are available at the school in order to gain exposure to the field. A person can also consider the Information Technology: Programming high school completer program offered at Carver Center, Western High School, Milford Mill, and other schools.

After high school, one should consider majoring in computer science or computer engineering. That person would also need a PhD in order to be considered for most positions as a computer scientist. He or she would take courses in artificial intelligence, computer system organization, software engineering, and the theory of formal programming languages. For graduate studies, a person would not only take graduate courses but also get involved in a research project. This can range from writing a research paper with a professor at the university to creating a new, efficient algorithm. For example, the co-founders of Google created the PageRank algorithm during their graduate studies at Stanford University ("Computer Science College Degree Programs"). After their studies, a person would work in a university with a team and specialize in robotics, virtual reality, programming languages, etc. (“Becoming A Computer Scientist”). Dr. Blair Taylor, a computer science professor at Towson University, recommends that students find a job that will pay for their graduate studies and pursue their degree at night after work. She believes this is the most economical strategy a student can follow in order to gain experience while saving money.
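
The source only names the algorithm, but as a rough sketch of the idea, the toy power-iteration example below ranks a tiny, made-up link graph the way PageRank does; the damping factor and graph are illustrative, not from the source:

    # Toy PageRank: repeatedly redistribute each page's rank along its outgoing links.
    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank

    print(rank)  # pages that are linked to more often accumulate higher rank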

There are benefits and disadvantages to working in computer science. According to Dr. Taylor, a person can expect good salaries, diverse working opportunities, and challenging projects in the field. However, the person can also experience long hours when trying to meet the deadlines of a big project. Not to mention, the field is still male-dominated, and some people who work in it lack social skills, which can impede communication. Nevertheless, Dr. Taylor recommends the field for its flexibility and its challenging opportunities.

Below is the typical timeline a person could take to become a computer scientist.
