Analysis and Implementation of Neural Networks for Error Detection and Classification
Industrial automation is becoming a key area of interest in the field of machine learning. The peg-in-hole problem in robotics has existed in industry for a long time. The highly sensitive joint torque and force sensors of the LBR iiwa give us access to data that can be used to develop a reliable process for inserting screws into holes. The LBR iiwa is a 7-axis robot with payload capacities of 7 to 14 kilograms. Every joint is equipped with a position sensor on the input side and a torque sensor on the output side, and the stiffness and damping values can be configured.
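As a minimal sketch of this configurability, the snippet below sets up a Cartesian impedance mode using the Sunrise.OS RoboticsAPI that ships with the LBR iiwa. The class and method names follow that API, but the numeric stiffness and damping values are illustrative assumptions only and should be checked against the installed API version.

```java
// Hedged sketch: configuring stiffness and damping for the LBR iiwa's
// Cartesian impedance controller. Numeric values are illustrative only.
import com.kuka.roboticsAPI.geometricModel.CartDOF;
import com.kuka.roboticsAPI.motionModel.controlModeModel.CartesianImpedanceControlMode;

public class ImpedanceSetup {
    public static CartesianImpedanceControlMode softZMode() {
        CartesianImpedanceControlMode mode = new CartesianImpedanceControlMode();
        mode.parametrize(CartDOF.X, CartDOF.Y).setStiffness(2000.0); // N/m, stiff in the plane
        mode.parametrize(CartDOF.Z).setStiffness(500.0);             // N/m, soft along the tool axis
        mode.parametrize(CartDOF.ALL).setDamping(0.7);               // dimensionless damping ratio
        return mode;
    }
}
```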
These sensitive sensors and controllers make the robot highly compliant and useful for a wide range of tasks. The key signal for inserting a screw into a hole is the amount of force applied during insertion. Using this data eliminates the need for a camera to identify holes in an object, since mounting a camera on a robot is not helpful in all cases. Cameras are an additional expense, and they restrict certain axis movements of the robot. Training a system on camera images to identify holes also has limitations when the hole areas lie in darkness; depending on the positioning of the robot and the surrounding environment, reliable visual sensing becomes a highly demanding task. By contrast, using the already existing sensor data saves considerable cost, and the force data is more general because it does not depend on the environment.
Existing Automation Techniques: Sensor interpretation for the peg-in-hole problem has been researched extensively over the last 20 years. With compliant robots, these approaches use sensory feedback from the robot to achieve the task. More recently, force and motion data have been applied to assembly tasks using intelligent AI techniques. Most of these works are strictly theoretical and remain to be validated empirically, or they are application specific. Furthermore, most insertion-task research focuses on the jamming of the object in the hole, whereas our research focuses on the search pattern used to insert the object and on the position, torque, and force data collected from the robot. Instead of concentrating on the insertion alone, we study how to improve the search pattern used to reach the hole, and we collect accurate data from identifying the right hole, through the initial contact with the surface, to the exact break condition after a successful screw insertion.
In 1989, Gottschlich and Kak presented a method that invokes straight-line motion goals chosen by interpreting assembly states from sensor values. In our approach, we adopt this philosophy of invoking pattern searches and interpreting sensory data based on such motion commitments. Gottschlich and Kak also presented a quasi-static force/moment balance analysis for a circular peg partially overlapping a circular hole, where the surface of the peg is parallel to the assembly surface. This force analysis is very similar to our problem requirement; however, in a real-time scenario the parallelism assumption does not always hold, because the approached object can present multiple conditions. For instance, a partially covered hole in the assembly surface cannot be treated the same way simply because the peg overlaps the assembly surface. In 1990, Asada described a method for intelligent interpretation of force/moment data to guide peg-in-hole assemblies using a neural network.
This was probably one of the earliest attempts at intelligent automation of the assembly process. Like many of the methods discussed so far, his work interpreted instantaneous measurements of forces and positions to compute incremental motion commands that drive the robot closer to the required assembly state. In our experiments, however, we do not depend solely on the raw data collected, because that could lead to unreliable results. The deep learning architecture we use helps identify the right parameters from the collected data and thus avoids unnecessary noise in the dataset. Recently, a considerable amount of work has been done on error detection for insertion tasks, especially with time- and frequency-domain datasets, but most grasping or fitting tasks in industrial robots are still performed using image datasets.
The image-based work includes pixel-level data association learning, which works best when there are distinct, dense descriptors and therefore falls short in accuracy for industrial applications. Some research has been done without images; for instance, fitting a switch to a board was attempted using the robot's joint torque values. However, the data recording was done manually once the fitting process was completed, and such methods can lead to biased results after training. RNNs have been used as a standard across these problems because the data varies over time. The idea of inserting a peg in a hole is a research topic similar to ours. A few approaches have been implemented without machine learning algorithms: the robotic peg-in-hole task was achieved without using the contact force with the object. Instead, contact points were used to insert the peg; the peg is tilted and moved until three contact points are detected, at which point the tilting angle becomes zero and the peg is dropped in.
The contact conditions and the kinematic information were designed for that particular application and were thus fine-tuned specifically. Other insertion processes include component insertion to reduce manual assembly. For this, a three-layer technique has been used: a vision layer to extract the features of the object to be inserted, a motion layer to capture the motions that led to successful insertions, and a decision layer where SVMs, neural networks, and Bayes classifiers are used for classification. Experimental results show that SVMs work better as a classifier here. Sensor data has been widely used, but the approach is usually fine-tuned to the particular task. In one approach, the force data is collected along with the joint positions, but the hole is approached manually and the operator stops once the hole is reached. This has the advantage of avoiding positional restrictions while inserting the object, but the contact force is not accurate because it is controlled externally. When we train on such a dataset, the model essentially learns the human motion pattern for the given problem.
Proposed Technique: Our proposed technique uses only non-vision data such as forces and torques. We learn a classifier to distinguish a hole from a non-hole using the force patterns, torque values, and a few other parameters such as the joint positions in the collected dataset. Training is often hindered by high dimensionality, so correct feature extraction is a challenge. Initially, training is planned with the force and torque data collected from the joints. Data collection is done over OPC UA, and the samples are stored as CSV files. The main idea is to test different classifiers to see which works best for this type of problem. An RBNN (radial basis function neural network) gives good results on non-linear datasets; other techniques such as a basic MLP and fuzzy ARTMAP networks will also be evaluated.
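As a concrete illustration of the classifier stage, the sketch below implements a minimal RBF network in plain Java: Gaussian hidden units centred on selected training samples, a sigmoid output, and gradient descent on the output weights only. The feature layout (e.g. seven joint torques plus Cartesian force components) and all hyperparameters are assumptions for illustration, not the final design.

```java
/** Minimal RBF classifier sketch: Gaussian hidden units, a single sigmoid
 *  output, and gradient descent on the output weights. Centers would be
 *  chosen from training samples (e.g. by random selection or k-means). */
public class RbfHoleClassifier {
    private final double[][] centers; // one center per hidden unit
    private final double[] weights;   // output weights, bias stored last
    private final double sigma;       // shared Gaussian width

    public RbfHoleClassifier(double[][] centers, double sigma) {
        this.centers = centers;
        this.weights = new double[centers.length + 1];
        this.sigma = sigma;
    }

    private double[] hidden(double[] x) {
        double[] h = new double[centers.length];
        for (int j = 0; j < centers.length; j++) {
            double d2 = 0.0;
            for (int k = 0; k < x.length; k++) {
                double diff = x[k] - centers[j][k];
                d2 += diff * diff;
            }
            h[j] = Math.exp(-d2 / (2.0 * sigma * sigma)); // Gaussian activation
        }
        return h;
    }

    /** Estimated P(hole | feature vector). */
    public double predict(double[] x) {
        double[] h = hidden(x);
        double z = weights[weights.length - 1]; // bias
        for (int j = 0; j < h.length; j++) z += weights[j] * h[j];
        return 1.0 / (1.0 + Math.exp(-z));      // sigmoid output
    }

    /** One epoch of stochastic gradient descent on the output layer (log loss). */
    public void trainEpoch(double[][] xs, int[] labels, double lr) {
        for (int i = 0; i < xs.length; i++) {
            double[] h = hidden(xs[i]);
            double z = weights[weights.length - 1];
            for (int j = 0; j < h.length; j++) z += weights[j] * h[j];
            double err = 1.0 / (1.0 + Math.exp(-z)) - labels[i]; // dLoss/dz
            for (int j = 0; j < h.length; j++) weights[j] -= lr * err * h[j];
            weights[weights.length - 1] -= lr * err;
        }
    }
}
```

Since only the output layer is trained, the optimisation is convex for fixed centers, which makes the RBNN a cheap first baseline before the MLP and fuzzy ARTMAP comparisons.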
Data Collection Process: The Java application we developed runs through all the holes and non-holes in the object, while an OPC UA client retrieves the data from the configured joints. Using the Cartesian impedance controller of the LBR iiwa, we can create the desired movement pattern. We therefore created a spiral movement around the hole with a uniform force of 10 N; when the gripper finds the hole, it slips inside towards the lower surface. On reaching that surface it exceeds the defined force of 10 N, so the robot gripper retracts from the hole and continues to the next one. For a non-hole position there are two possibilities: either there is no hole at all, in which case the spiral movement completes fully and the point is labelled as a true negative sample, or the hole is partly closed, and such points are manually labelled as true negatives.
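The sketch below illustrates how such a spiral search pattern can be generated as a sequence of Cartesian offsets (an Archimedean spiral sampled at fixed angular steps). The growth rate, step angle, and search radius are assumed values for illustration; on the robot, these offsets would be executed under the impedance controller with the 10 N force break condition described above.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a spiral search pattern as XY offsets around the expected hole
 *  position. All parameters are illustrative, not the values of the real cell. */
public class SpiralSearch {
    /** Archimedean spiral r = growth * theta, sampled every dTheta radians.
     *  growthMm is the radial increase per radian (pitch per turn = 2*pi*growth). */
    public static List<double[]> spiralOffsets(double growthMm, double dTheta, double maxRadiusMm) {
        List<double[]> offsets = new ArrayList<>();
        double theta = 0.0;
        while (growthMm * theta <= maxRadiusMm) {
            double r = growthMm * theta;
            offsets.add(new double[] { r * Math.cos(theta), r * Math.sin(theta) });
            theta += dTheta;
        }
        return offsets; // a full pass with no force break => true negative sample
    }

    public static void main(String[] args) {
        // e.g. 0.1 mm/rad growth, 10-degree steps, 5 mm search radius (assumed)
        for (double[] p : spiralOffsets(0.1, Math.toRadians(10), 5.0)) {
            System.out.printf("dx=%.3f mm, dy=%.3f mm%n", p[0], p[1]);
        }
    }
}
```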