The Innovations and Advancements: Types of Research Theories

Abstract

Technology has long been responsible for revolutionizing industries, but mankind has now begun to rely on robotics for everyday chores and routine work. Human intelligence and its achievements are remarkable, and they do not stop here. This revolution was only possible because of the abundance of data fed to these machines. But this raises a significant concern: is it safe to provide such private information to a machine and expect it not to be used against us? Technological innovation is necessary, but certain protocols and standards must be maintained and safety regulations enforced. The paper “The Innovations and Advancements: Types of Research Theories” discusses the theories of research based on published materials on the topic. This research is undertaken to evaluate the potential risks involved and to recommend what could be done to address such issues in upcoming proposals.

Introduction

Innovations and advancements exist to support mankind and to make human lives easier, with the ultimate goal of making this world a better place for human beings. To make that happen we need more technologies and innovations, but if we cannot overcome our paranoia of being attacked by these machines, we cannot even proceed to the prototype stage. This research proposal studies and analyzes different safety models and protocols in robotics for human safety.

Mankind has a propensity to be drawn toward the cynicism surrounding these advances without understanding their potential benefits, which in turn stalls the development of technology. The motivation behind this research is to recommend effective safety models and protocols that best answer this suspicion of “robot wars” and that satisfy Asimov’s three laws of robotics. Once the best models and practices are recognized, scientists, innovators, and engineers can push their limits to discover greater ones. By the conclusion of the research, after performing the critical examination, I expect to identify the most productive models and practices that prioritize human safety before any further developments and innovations proceed.

BACKGROUND

To explore the problem context, I reviewed published articles by scholars and researchers whose content was in line with it. The following sections discuss those published materials. I initially proposed to research this problem using Asimov’s three laws of robotics, but I chose other alternatives, for reasons explained in the following discussions.

LITERATURE REVIEW

Different Control Models for Prioritizing Human Safety in Robotics

Researchers provide an analysis of the kind of control that people must have over semi-autonomous systems, with the aim of avoiding irrational dangers, ensuring that human responsibility does not vanish, and that there is someone to turn to in the event of unpredicted results. They state that higher degrees of autonomy can and ought to be combined with human control and obligation. The researchers apply the idea of guidance control, developed by Fischer and Ravizza (1998) in the philosophical discussion about moral responsibility and choice, and adapt it to cover actions mediated by semi-autonomous automated systems. They show that this analysis can be applied to autonomous weapon systems as well as to autonomous systems more generally. This in itself gives a first substantial philosophical account of meaningful human control over autonomous systems.

The researchers first survey the existing material on meaningful human control over autonomous weapon systems and identify three related issues to be addressed by a theory of meaningful human control. They briefly present the distinction between incompatibilist and compatibilist theories of moral responsibility and explain why they consider the compatibilist approaches the most appropriate basis for a theory of meaningful human control over autonomous weapon systems. They also present Fischer and Ravizza’s account of guidance control, then elaborate, integrate, and transform it into a theory of meaningful human control over actions mediated by autonomous systems.

Supervising robotic teams through human control is an existing concept. This line of research centers on the design of systems in which a human operator is in charge of supervising autonomous systems and giving feedback based on sensor information. In the control systems community, the term human supervisory control is regularly used as shorthand for systems with this kind of design. In a typical human supervisory control application, the operator does not communicate directly with the autonomous systems but instead interacts with these components via a central information-processing station. Consequently, system developers have the opportunity to easily incorporate automated functionalities that control how data is presented to the operator and how the operator’s input is used by the robotic systems. The objective of these functionalities is to exploit the inherent strength and flexibility of human operators while mitigating unfavorable effects such as unexpected circumstances and performance variability. In certain circumstances, to meet the objective of single-operator supervision of multiple automated sensor systems, such supporting components are not only valuable but essential for practical use. An effective system design should carefully consider the objectives of each part of the system as a whole and consistently join those parts together using supporting functionalities.

The choice of an efficient safety solution depends on the particular task the robot is performing, the workspace, and the likelihood of injury to the public. The best choice of protective measure is equipment or a framework that ensures maximum safety with minimum impact on normal machine activity. Safety-rated programmable logic controllers (PLCs) are significant parts of a robotic work cell. They gather input from sensors about a person’s status inside the robot workspace, as well as data generated by safety devices such as e-stops, pendants, sensors, and interlock switches. Outputs from the PLC help control the robot power circuit, the robot servos, and other devices inside the cell.
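As an illustration, the fail-safe behavior described above can be sketched as a small simulation (a hypothetical Python sketch, not real PLC ladder logic; the input names and the simple all-or-nothing rule are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class SafetyInputs:
    e_stop_pressed: bool     # emergency-stop button state
    interlock_open: bool     # guard door / interlock switch state
    presence_detected: bool  # light curtain or area-sensor state

def evaluate(inputs: SafetyInputs) -> dict:
    """Return the commanded state of the robot power circuit and servos.

    Fail-safe rule: power and servos are enabled only when every
    safety input reads safe; any tripped input de-energizes both.
    """
    safe = not (inputs.e_stop_pressed
                or inputs.interlock_open
                or inputs.presence_detected)
    return {"power_enabled": safe, "servos_enabled": safe}

print(evaluate(SafetyInputs(False, False, False)))  # normal operation
print(evaluate(SafetyInputs(False, True, False)))   # guard opened: stop
```

A real safety PLC adds redundancy, self-diagnostics, and certified response times; the sketch only captures the logical relationship between the inputs and the power circuit.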

Human safety in robotic cells is one of the most significant aspects when planning a robotic cell. The market today offers a wide range of safety solutions. The final choice depends on the particular robot task, the robot environment, and the degree of human interference. The integration of safety in the VALIP (Virtual Joint Laboratory for Advanced ICT (Information and Communication Technology) in Production) framework represents a new approach to visualizing the robotic cell. Because safety is a significant aspect of robotic cells, it is likewise a significant aspect of virtual reality: securing the robot, the hardware, and the people who may end up in the vicinity. It is moreover important for the user or operator to be aware of these facts, and this is where the VALIP component has an essential role to play.

Proposing Collaborative Safety of Humans and Robots in Coexisting Space

Safety 2.0 is a concept related to the ‘Guidelines on Comprehensive Safety Standards for Machines’, aiming to achieve the well-being and safety of work environments through both machine and human safety practices. Manufacturers should use the Guidelines to verify safety through the design of safe machines and then share the results with users, including data on usage and residual risks. Users can then mitigate those risks by performing safe actions. Essentially, in Safety 2.0, IoT and ICT technology are used to exchange data among devices and machines, these machines are controlled by human-fed data, and humans in turn act on machine-generated data to take any required actions. The coexistence of humans and robots is thus made possible by technology itself.

The concept of Safety 2.0 is to achieve both high efficiency and high safety by exchanging individual data about humans and devices and establishing conditions for their collaboration in real time. Thus, to realize the 4th Industrial Revolution, the ‘Robot Revolution’ in Japan, and ‘Connected Industries’, new systems and machines that fulfill the appropriate levels under the Safety 2.0 concept are essential, alongside traditional safety hardware. To push Safety 2.0 toward wider acceptance in the community, the researchers aspire to propose, in the foreseeable future, the CSL concept as an assessment criterion for ensuring safety in collaborations.

A test methodology has been proposed to assess safety issues for human-robot interaction involving impacts, using a fairly straightforward laboratory setup with low-cost models and hardware. An experimental methodology is outlined for the new index NIR (New Index for Robots), using data also drawn from robot performance in the user’s guide. A validation campaign is reported, with tests of robot impacts against models of a human head and arm; the outcomes were additionally used for HIC (Head Injury Criterion) assessment, which confirmed the validity of the results. The consistently comparable outcomes demonstrate the efficiency of the proposed test method, pointing toward experimental replication of the collision situation as an appropriate way to assess safety indexes for robots, even when using an ordinary, cheaper model such as the ones used in the reported tests. A correlation of computed values of HIC and NIR with the reported test outcomes shows the significance of the NIR design as suitable for the engineering characterization of a safety model for both analysis and design purposes.
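The HIC referenced above has a standard closed form: the maximum, over time windows [t1, t2], of (t2 − t1) · [(1/(t2 − t1)) ∫ a(t) dt]^2.5, with acceleration a in units of g. A minimal sketch of computing it from a sampled acceleration trace follows (the 15 ms window and the constant test pulse are illustrative assumptions, not data from the cited work):

```python
import numpy as np

def hic(t, a_g, max_window=0.015):
    """Head Injury Criterion over acceleration samples a_g (in g) at times t (s).

    HIC = max over windows [t1, t2] with t2 - t1 <= max_window of
          (t2 - t1) * [ (1/(t2 - t1)) * integral of a(t) dt ] ** 2.5
    """
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            # trapezoidal integration of a_g over [t[i], t[j]]
            integral = float(np.sum((a_g[i:j] + a_g[i + 1:j + 1]) / 2.0
                                    * np.diff(t[i:j + 1])))
            best = max(best, dt * (integral / dt) ** 2.5)
    return best

# Constant 50 g pulse lasting 10 ms: HIC = 0.010 * 50**2.5, about 176.8
t = np.linspace(0.0, 0.010, 101)
print(round(hic(t, np.full_like(t, 50.0)), 1))
```

The brute-force O(n²) window search is fine for short impact traces; production crash-test tooling uses more efficient searches and standardized filtering of the acceleration signal first.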

New paradigms in industrial robotics will no longer require physical separation between robotic manipulators and humans. Moreover, to increase manufacturing output, humans and robots are expected to collaborate at some level. In such circumstances, involving a shared workspace between humans and robots, conventional motion-generation algorithms may prove insufficient. This research proposes a kinematic control procedure that enforces safety while maintaining the maximum efficiency of the robot. The resulting motion of the robot, possibly for a redundant task, is obtained as the outcome of a real-time algorithm in which safety is treated as a mandatory criterion to be fulfilled. The concept is experimentally validated on a dual-arm robot with 7 DOF per arm performing a control task. This research discusses an approach to handling safety requirements in human-robot interaction scenarios: a computationally tractable set of constraints on the robot speed is derived to meet the guidelines imposed by the minimum-distance paradigm. Within a safeguarding framework, this constraint can be used in real time to limit the robot’s speed depending on the distance between the human and the robot.
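The distance-dependent speed restriction described here can be sketched as a simple scaling rule (a hypothetical illustration, not the cited paper’s constraint set; the stop distance, full-speed distance, and maximum speed are assumed values):

```python
def speed_limit(distance_m, d_stop=0.3, d_full=1.5, v_max=1.0):
    """Cap the robot speed (m/s) based on the human-robot distance (m).

    Full stop inside d_stop, full speed beyond d_full, and a linear
    ramp in between -- evaluated every control cycle in real time.
    """
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_full:
        return v_max
    return v_max * (distance_m - d_stop) / (d_full - d_stop)

print(speed_limit(0.2))  # human very close: robot must stop
print(speed_limit(0.9))  # mid-range: reduced speed
print(speed_limit(2.0))  # human far away: full speed allowed
```

In a real controller this cap would feed into the kinematic optimization as a hard constraint rather than being applied directly to the commanded velocity.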

Concept of Minimum Separation Between Humans and Robots

The presence of robots around us will become a reality sooner rather than later. In any case, they have to become secure in the way they interact with us humans. Safety is therefore a noteworthy issue in collaborative robotics, since robots and humans will coexist and share the same workspace. A collaborative robot should be able to obtain sensor information in real time.

The capacity for efficient and fast estimation of the minimum separation between humans and robots is essential for ensuring safe human-robot interaction (HRI), where robots and human employees share the same workspace; essentially, they are co-workers. The minimum distance is the primary data input for the vast majority of collision-avoidance strategies. In this research, a novel procedure is introduced to analytically evaluate the minimum distance between cylindrical primitives with spherical ends. Such primitives are significant because their geometrical shape is appropriate for modeling human employees and robotic equipment. The research proposes QR factorization to achieve computational efficiency in evaluating the minimum distance between each pair of cylinders. Tests demonstrated the viability of the proposed methodology.
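For a cylinder with spherical ends (a capsule), the minimum distance between two such primitives reduces to the distance between their axis segments minus both radii. A plain geometric sketch follows; it does not reproduce the paper’s QR-factorization speedup, and the endpoints and radii in the example are illustrative assumptions:

```python
import numpy as np

def segment_segment_distance(p1, q1, p2, q2):
    """Closest distance between segments [p1, q1] and [p2, q2]
    (non-degenerate segments assumed; standard closest-point algorithm)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b  # zero when the segments are parallel
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def capsule_distance(p1, q1, r1, p2, q2, r2):
    """Minimum distance between two capsules: axis-segment distance
    minus both radii, floored at zero (zero means contact/overlap)."""
    return max(0.0, segment_segment_distance(p1, q1, p2, q2) - r1 - r2)

# Two parallel capsules with axes one metre apart, radii 0.10 m and 0.20 m
a0, a1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
b0, b1 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])
print(round(capsule_distance(a0, a1, 0.10, b0, b1, 0.20), 2))  # 0.7
```

This pairwise computation is what a collision-avoidance layer would run every control cycle over all link/limb capsule pairs, which is why the cited work’s emphasis on computational efficiency matters.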

PUTTING PROBLEM IN CONTEXT

I began this research on the basis of Asimov’s three laws: First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

But researchers have declared these to be fictional laws that do not accommodate practical scenarios when implemented on machines. Asimov himself discovered flaws in his laws and introduced the zeroth law, which modifies the first law: a robot may not injure humanity, or, through inaction, allow humanity to come to harm. After this realization, I chose to shift my attention toward rational models that delve deeply into ensuring human safety in human-robot interaction. The papers mentioned in the literature review help this research proceed with the problem at hand. Their theories are derived in line with the problem statement and will help address the problems within the research topic.

APPROACH

This is purely research work and builds a foundation for the final report. The methodology was to consult online published materials and, where required, other physical sources. To gain access to the sources I used the “multi-search” tool of Macquarie University. I looked for research work and articles published online on the problem topics and tried to derive a pattern from the researched materials, so that each item could be categorized into a specified zone and fellow researchers could easily identify their desired field of research. In future work, I hope to come to a decision about the best possible practice or practices that ensure the safety of humans first.

OUTCOMES

Realization of Inefficiency of Asimov’s Laws

The work published on the safety of humans in human-robot interaction divides into individual research approaches, all with different theories for achieving a similar objective. Many theories address different aspects, but ultimately they all concern human safety in human-robot communication. I initially began my research by focusing on Asimov’s three laws, but researchers, and Asimov himself, have zeroed in on the fact that these laws are fictional and cannot be implemented in real-world problems. These laws fail to work even at the most fundamental level of safety, because programming them into an AI system would require a solution to all the ethical queries, concerns, and decisions that currently exist, and the majority of ethical issues are purely hypothetical in nature.

Derivations from the Literature Review

The present picture is that we have many protocols and practices proposed by scholars and researchers, but the main issue remains that these practices are mere theories and hypotheses. I still have not found anything concrete suggesting we are insured against any such unfortunate event as an “attack of robots”. Still, the research material referred to did perform simulations and was successful in achieving positive results. A new category was also explored during the research, which points to safety in a human-robot collaborative environment through maintaining a minimum distance.

CONCLUSION

CONFLICT IN RESEARCH

Protection against potentially risky research in any field is significant. In a study area as convoluted as artificial intelligence, security should be established before the research begins. The end goal is not concrete: almost certainly, scientists will not know when the program goes from being only a program to being genuinely intelligent. It is even conceivable that the inception of artificial intelligence will happen accidentally, with somebody altering a small detail that results in unforeseen consequences. The safety of humans is a rising issue when it comes to advances in artificial intelligence and its ability to learn the behavior and mannerisms of humans from the data fed to it.

But researchers believe there is only a rare chance of robots “going rogue” and considering themselves superior to humans. Researchers strongly believe that any such scenario is scarcely possible, as we are the ones feeding data to the machines, and no such indications or events have been observed or recorded anywhere. According to experts, it is just an elaborate con fed to people’s minds by Hollywood filmmakers and producers. Humans are believed to be the smartest living creatures on this planet, and we will not invest in any experiment that could instead turn on us.

FINAL WORD

In conclusion, among the types of theory in research whose content aligned with the research problem, some researchers discuss users’ moral responsibility and the guidance-control account of Fischer and Ravizza, which allows humans to have meaningful control of autonomous systems. Other researchers, with a different mindset, push the concept of Safety 2.0, which ensures both human safety and machine safety by reading data from machines. The new paradigm of maintaining minimal human-robot distance to avoid collision provides new ground to explore in the field of protecting humans from AI and robotics. New research in the field of human-robot collaborative safety appears to be a more trending solution than the master-slave trend. The master-slave trend is also effective, but after this research I believe it is an ethical paradox.

REFERENCES

  1. Santoni de Sio, F. and van den Hoven, J. (2019). Meaningful Human Control over Autonomous Systems: A Philosophical Account. [online] Frontiers in Robotics and AI. Available at: https://www.frontiersin.org/articles/10.3389/frobt.2018.00015/full#h10 [Accessed 14 Apr. 2019].
  2. Kaminka, G.A., Spokoini-Stern, R., Amir, Y., Agmon, N. and Bachelet, I. (2017) 'Molecular Robots Obeying Asimov's Three Laws of Robotics,' in Artificial Life, vol. 23, no. 3, pp. 343-350, Aug. 2017. doi: 10.1162/ARTL_a_00235
  3. Clarke, R. (1994) 'Asimov's laws of robotics: Implications for information technology. 2,' in Computer, vol. 27, no. 1, pp. 57-66, Jan. 1994. doi: 10.1109/2.248881
  4. Bilancia, L.F. (2014) 'Safe product design, forensic engineering, and Asimov's Laws of Robotics,' 2014 IEEE Symposium on Product Compliance Engineering (ISPCE), San Jose, CA, 2014, pp. 17-24. doi: 10.1109/ISPCE.2014.6841995
  5. Dohi, M., Okada, K., Maeda, I., Fujitani, S., Fujita, T. (2018) 'Proposal of Collaboration Safety in a Coexistence Environment of Human and Robots,' 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, 2018, pp. 1924-1930. doi:10.1109/ICRA.2018.8460869
  6. Fischer, J. M. & Ravizza, M. (1999). Responsibility and Control: A Theory of Moral Responsibility. Cambridge University Press.
  7. Peters, J.R., Srivastava, V., Taylor, G.S., Surana, A., Eckstein, M.P., Bullo, F. (2015) 'Human Supervisory Control of Robotic Teams: Integrating Cognitive Modeling with Engineering Design,' in IEEE Control Systems Magazine, vol. 35, no. 6, pp. 57-80, Dec. 2015. doi: 10.1109/MCS.2015.2471056
  8. Collis, F.E. “Telerobotics, Automation and Human Supervisory Control.” Robotica, vol. 11, no. 3, 1993, pp. 284–284., doi:10.1017/S0263574700016209
  9. Goodrich, M.A., and M.L. Cummings. Human Factors Perspective on next Generation Unmanned Aerial Systems. 2015, pp. 2405–2423.
  10. Dixon, S., & Wickens, C. (2006). Automation Reliability in Unmanned Aerial Vehicle Control: A Reliance-Compliance Model of Automation Dependence in High Workload. Human Factors: The Journal of Human Factors and Ergonomics Society, 48(3), 474-486.
  11. Parasuraman, R., Barnes, M., Cosenzo, K., Mulgund, S. (2007). Adaptive Automation for Human-Robot Teaming in Future Command and Control Systems. Int Command Control J. 1. 31.
  12. Hyken, S. (2019). Will AI Take Over the World? [online] Forbes.com. Available at: https://www.forbes.com/sites/shephyken/2017/12/17/will-ai-take-over-the-world. [Accessed 14 Apr. 2019].
  13. Thomas C., Busch F., Kuhlenkoetter B., Deuse J. (2011) Process and Human Safety in Human-Robot-Interaction - A Hybrid Assistance System for Welding Applications. In: Jeschke S., Liu H., Schilberg D. (eds) Intelligent Robotics and Applications. ICIRA 2011. Lecture Notes in Computer Science, vol 7101. Springer, Berlin, Heidelberg
  14. Kerezovic, T., Sziebig, G., Solvang, B. & Latinovic, T. 2013, 'HUMAN SAFETY IN ROBOT APPLICATIONS - REVIEW OF SAFETY TRENDS', Acta Technica Corviniensis - Bulletin of Engineering, vol. 6, no. 4, pp. 113-118.
  15. A. M. Zanchettin, N. M. Ceriani, P. Rocco, H. Ding and B. Matthias, 'Safety in human-robot collaborative manufacturing environments: Metrics and control,' in IEEE Transactions on Automation Science and Engineering, vol. 13, no. 2, pp. 882-893, April 2016. doi: 10.1109/TASE.2015.2412256.
  16. Wang, J., Li, Y., & Zhao, X. (2010). Inverse Kinematics and Control of a 7-DOF Redundant Manipulator Based on the Closed-Loop Algorithm. International Journal of Advanced Robotic Systems. https://doi.org/10.5772/10495.
  17. Safeea, M., Mendes, N., & Neto, P. (2017). Minimum Distance Calculation for Safe Human Robot Interaction. Procedia Manufacturing, 11, 99-106. http://dx.doi.org/10.1016/j.promfg.2017.07.157
  18. Stokes, C. (2018). “Why the three laws of robotics do not work”. International Journal of Research in Engineering and Innovation Vol-2, Issue-2 (2018), 121-126. 
10 October 2022