International Journal of Industrial Engineering Research and Development, ISSN (Print): 0976-6979, Volume 1, Issue 1 (2010), pp. 84-93

INTELLIGENT PROCESS SELECTION FOR NTM - A NEURAL NETWORK APPROACH

V. Sugumaran, V. Muralidharan, Bharath Kumar Hegde, Ravi Teja C

Department of Mechatronics Engineering, SRM University, Kattankulathur, Kancheepuram Dt., India, [email protected]

ABSTRACT: Decision-making is an important phase for manufacturing enterprises competing in the global market. Rapid industrial expansion demands better-quality decisions in the shortest possible time. The development of non-traditional machining processes is the result of a desire to deal with 'difficult to machine' materials at a faster rate, at lower cost and with the best possible quality. To meet these requirements, research work is going on in manufacturing industries, particularly the nuclear and aerospace engineering industries. Despite their success in solving many manufacturing problems, non-traditional machining processes also pose an important restriction: the selection of an appropriate process for a particular machining problem. In practice, no single process is capable of satisfying a wide variety of machining problems. This non-versatility of the non-traditional machining processes necessitates an intelligent system in this domain. The selection procedure described in this paper is intended as a general-purpose aid to the designer in making preliminary selections of a non-traditional machining process for a given part. In the proposed procedure, work materials, the shape machined and operational capabilities such as minimum tolerance, minimum surface finish, minimum corner radii, minimum hole diameter, maximum depth-to-diameter ratio and maximum thickness of the workpiece are included. Based on the required part characteristics, the proposed neural network generates a list of non-traditional machining processes capable of producing a particular part. This list helps a designer identify possible alternatives early in the design process. The neural network tool "Neuralyst" has been used to develop the system for non-traditional machining; it uses pattern matching / associative memory. The network was trained and its parameters were optimised for better results.

Keywords: Artificial neural network, Neuralyst, Pattern recognition.

1.0 NONTRADITIONAL MACHINING
Since the 1940s, a revolution in manufacturing has been taking place that once again allows manufacturers to meet the demands imposed by increasingly sophisticated designs and by durable, in many cases nearly unmachinable, materials. This manufacturing revolution is now, as it has been in the past, centred on the use of new tools and new forms of energy. The result has been the introduction of new manufacturing processes known as Nontraditional machining (NTM) processes. The

conventional manufacturing processes rely on electric motors and hard tools to perform the desired operation. In contrast, non-traditional machining processes can be accomplished with electrochemical reactions, high-temperature plasmas, high-velocity jets of liquids and abrasives, etc. Over 20 different non-traditional processes have been invented and implemented successfully in production. Each process has its own characteristic attributes and limitations; hence no one process is best for all manufacturing situations, and there is a need for a tool to assist the production/design engineer in selecting an appropriate process for a given situation. In this paper, an attempt is made to use an Artificial Neural Network (ANN) as a tool to perform this task. NTM parameters such as minimum tolerance, minimum surface finish, minimum corner radius, minimum hole diameter, minimum over cut and maximum depth-to-diameter ratio are considered as process capabilities for the process selection. Eleven NTM processes are considered, and the corresponding process capabilities are given in Table 1.

| Process | Min. Tolerance (mm) | Min. Surface Finish (CLA) | Min. Surface Damage (µm) | Min. Corner Radius (mm) | Min. Taper (mm/mm) | Min. Hole Dia (mm) | Min. Width of Cut (mm) | Min. Over Cut (mm) | Max. Depth-to-Dia Ratio |
|---------|-------|------|------|-------|-------|------|------|------|------|
| EDM     | 0.03  | 3    | 20   | 0.4   | 0.001 | 0.2  | 0.05 | 0.03 | 20   |
| ECM     | 0.05  | 1    |      | 0.2   | 0     | 0.5  | 0.1  | 0.13 | 30   |
| ECG     | 0.025 | 0.3  |      | 0.13  | N.A.  | N.A. | N.A. | 0.13 | N.A. |
| ECH     | 0.01  | 0.1  | N.A. | N.A.  | N.A.  | N.A. | N.A. | N.A. | N.A. |
| AJM     | 0.05  | 0.1  | 2.5  | 0.1   | 0.01  | 0.13 | 0.13 | N.A. | 10   |
| WJM     | N.A.  | N.A. |      | 1.5   | N.A.  | 0.15 | 0.15 | N.A. | 30   |
| USM     | 0.013 | 0.5  | 25   | 0.025 | 0.005 | 0.05 | 0.05 | N.A. | 2.5  |
| CHM     | 0.08  | 2    | 5    | 1.25  | 0.3   | 0.5  | 0.5  | N.A. | 3    |
| LBM     | 0.03  | 1    | 100  | 0.25  | 0.05  | 0.05 | 0.1  | N.A. | 15   |
| EBM     | 0.03  | 5    | 10   | N.A.  | 0.02  | 0.04 | 0.04 | N.A. | 20   |
| WEDM    | 0.005 | 1    | 20   | 0.06  | 0.05  | 0.11 | 0.15 | 0.01 | N.A. |

Table 1. NTM processes and corresponding parameters
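For concreteness, Table 1 can be encoded as a numeric matrix before training. The sketch below shows one possible encoding; the column names, the None placeholders for blank or N.A. cells and the worst-case fill rule are illustrative assumptions of ours, not taken from the paper.

```python
# One possible numeric encoding of Table 1 (an assumption, not the paper's
# encoding): each row is a process, each column one capability parameter,
# with None standing in for blank or N.A. entries.
import numpy as np

COLUMNS = ["tolerance_mm", "surface_finish_cla", "surface_damage_um",
           "corner_radius_mm", "taper_mm_per_mm", "hole_dia_mm",
           "width_of_cut_mm", "over_cut_mm", "depth_to_dia_ratio"]

TABLE_1 = {
    "EDM":  [0.03,  3,    20,   0.4,   0.001, 0.2,  0.05, 0.03, 20],
    "ECM":  [0.05,  1,    None, 0.2,   0,     0.5,  0.1,  0.13, 30],
    "ECG":  [0.025, 0.3,  None, 0.13,  None,  None, None, 0.13, None],
    "ECH":  [0.01,  0.1,  None, None,  None,  None, None, None, None],
    "AJM":  [0.05,  0.1,  2.5,  0.1,   0.01,  0.13, 0.13, None, 10],
    "WJM":  [None,  None, None, 1.5,   None,  0.15, 0.15, None, 30],
    "USM":  [0.013, 0.5,  25,   0.025, 0.005, 0.05, 0.05, None, 2.5],
    "CHM":  [0.08,  2,    5,    1.25,  0.3,   0.5,  0.5,  None, 3],
    "LBM":  [0.03,  1,    100,  0.25,  0.05,  0.05, 0.1,  None, 15],
    "EBM":  [0.03,  5,    10,   None,  0.02,  0.04, 0.04, None, 20],
    "WEDM": [0.005, 1,    20,   0.06,  0.05,  0.11, 0.15, 0.01, None],
}

# Build the feature matrix, treating missing cells as NaN.
X = np.array([[np.nan if v is None else v for v in row]
              for row in TABLE_1.values()], dtype=float)

# Fill missing cells with the column-wise worst case (here: the maximum),
# so a missing entry never makes a process look more capable than it is.
col_max = np.nanmax(X, axis=0)
X = np.where(np.isnan(X), col_max, X)
```

Filling the gaps with a conservative worst case is only one option; a dedicated "not applicable" flag input would be another.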

2.0 ARTIFICIAL NEURAL NETWORKS
Artificial neural networks (ANNs) are modeled on biological neurons and nervous systems. They have the ability to learn, and their processing elements, known as neurons, perform their operations in parallel. ANNs are characterized by their topology, weight vectors and activation functions. They have three layers, namely an input layer, which receives signals from the external world; a hidden layer, which processes the signals; and an output layer, which returns the result to the external world. Various neural network structures are available. A review of the literature reveals that both supervised learning and unsupervised learning have been applied to similar problems.

2.1 Multi-layer Perceptron (MLP)
This is an important class of neural networks, namely the feed-forward networks. Typically, the network consists of a set of input parameters that constitute the input layer, one or more hidden layers of computation nodes and an output layer of

computation nodes (Figure 1). The input signal propagates through the network in a forward direction on a layer-by-layer basis.

Figure 1. Multi-layer network

MLPs have been applied to solve some difficult and diverse problems by training them in a supervised manner with a highly popular algorithm known as the error back-propagation algorithm. Each neuron in the hidden and output layers applies an activation function, which is generally a non-linear function such as the logistic function, given by

f(x) = \frac{1}{1 + e^{-x}},    (1)

where f(x) is differentiable and

x = \sum_{i} W_{ij} I_i - \theta_j,    (2)

where W_{ij} is the weight connecting the ith neuron of the input layer to the jth neuron of the hidden layer, I_i is the ith component of the input vector and \theta_j is the threshold of the jth neuron of the hidden layer. Similarly, W_{jk} is the weight connecting the jth neuron of the hidden layer to the kth neuron of the output layer. The subscript i refers to the input layer, j to the hidden layer and k to the output layer. The weights that are important in predicting the process are unknown. The weights of the network to be trained are initialized to small random values; the choice of values obviously affects the rate of convergence. The weights are updated through an iterative learning process known as the error back-propagation (BP) algorithm. Error back-propagation consists of two passes through the different layers of the network: a forward pass, in which an input pattern is presented to the input layer and its effect propagates through the network layer by layer until a set of outputs is produced as the actual response of the network, and a backward pass, in which the weights are adjusted according to the error signal. During the forward pass the synaptic weights of the network are all fixed. The error value is then calculated as the mean square error (MSE), given by

E_{tot} = \frac{1}{N} \sum_{n=1}^{N} E_n, \qquad E_n = \frac{1}{2} \sum_{k=1}^{m} (\tau_{kn} - O_{kn})^2    (3)

where N is the number of training patterns, m is the number of neurons in the output layer, \tau_{kn} is the kth component of the desired (target) output vector and O_{kn} is the kth component of the actual output vector for the nth pattern. The weights W_{jk} in the links connecting the output and the hidden layer are modified as \Delta W_{jk} = -\eta (\partial E / \partial W_{jk}) = \eta \delta_k y_j, where \eta is the learning rate and y_j is the output of the jth hidden neuron. Considering the momentum term (\alpha), \Delta W_{jk} = \eta \delta_k y_j + \alpha \Delta W_{jk}^{old} and W_{jk}^{new} = W_{jk}^{old} + \Delta W_{jk}. Similarly, the weights W_{ij} in the links connecting the hidden and input layers are modified as follows:

\Delta W_{ij} = \eta \delta_j I_i    (4)

W_{ij}^{new} = W_{ij}^{old} + \Delta W_{ij}    (5)

\delta_k = (\tau_k - O_k) O_k (1 - O_k) for output neurons    (6)

\delta_j = y_j (1 - y_j) \sum_{k=1}^{m} \delta_k W_{jk} for hidden neurons    (7)

The training process is carried out until the total error reaches an acceptable level (threshold). If E_tot < E_min, the training process is stopped and the final weights are stored; these are used in the testing phase to determine the performance of the developed network. The training mode adopted was 'batch mode', in which weight updating is performed after the presentation of all training examples that constitute an epoch.

2.2 Neural Network Modeling
The following is a brief introduction to each step of training and validating a neural network; a sketch of steps 2 and 3 follows the list.
1. Determine the structure of the ANN.
2. Divide the known input and output data into two groups: the first to be used to train the network, the second to validate the network in an out-of-sample experiment.
3. Scale all input variables and the desired output variables to the range 0 to 1.
4. Set initial weights and start a training epoch using the training data set.
5. Input the scaled variables.
6. Distribute the scaled inputs to each hidden node.
7. Weight and sum the inputs at the receiving nodes.
8. Transform the hidden-node inputs to outputs.
9. Weight and sum the hidden-node outputs as inputs to the output nodes.
10. Transform the inputs at the output nodes.
11. Calculate the output errors.
12. Back-propagate the errors to adjust the weights.
13. Continue the epoch.
14. Calculate the epoch RMS value of the error.
15. Judge the out-of-sample validity.
16. Use the model for forecasting.
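A minimal sketch of steps 2 and 3 (hold-out split and 0-to-1 scaling) is given below; the 80/20 split fraction, the placeholder data and the array names are illustrative assumptions, not details from the paper.

```python
# Sketch of steps 2-3: train/validation split and min-max scaling to [0, 1].
import numpy as np

rng = np.random.default_rng(seed=42)

# X: rows = training examples, columns = the nine NTM capability parameters.
# y: rows = examples, columns = 11 process indicator outputs (1 = suitable).
X = rng.random((40, 9))                    # placeholder data for illustration
y = (rng.random((40, 11)) > 0.5).astype(float)

# Step 2: split into training and out-of-sample validation sets.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, valid_idx = idx[:split], idx[split:]

# Step 3: min-max scale inputs to [0, 1] using training-set statistics only,
# so the validation set remains a genuine out-of-sample test.
x_min = X[train_idx].min(axis=0)
x_max = X[train_idx].max(axis=0)
X_scaled = (X - x_min) / (x_max - x_min + 1e-12)

X_train, y_train = X_scaled[train_idx], y[train_idx]
X_valid, y_valid = X_scaled[valid_idx], y[valid_idx]
```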

2.3 Neural Network Architecture
The neural network model definition and architecture are as follows:
Network type: feed-forward neural network
No. of nodes in input layer: 9
No. of hidden layers: 1
No. of neurons in hidden layer: 12
No. of nodes in output layer: 11
Transfer function: sigmoid transfer function in hidden and output layers
Training rule: back-propagation
Learning rule: momentum learning method
Momentum learning step size: 0.1
Momentum learning rate: 0.9
No. of epochs: 451
Training termination: minimum mean square error

3.0 TRAINING AND TESTING OF THE NEURAL NETWORK
The data used for training the network are shown in Table 1. The nine capability parameters of the NTM processes are taken as inputs to the network, and each output node represents one process; there are 11 output nodes in the output layer. The basic principle behind the neural network is that the input-space variables are mapped to a higher-dimensional feature space in which the variables are linearly separable. Hence, the hidden layer should have at least one node more than the input layer; in this case the hidden layer has 12 nodes. There is no rule of thumb for setting network parameters such as the number of hidden nodes, the training tolerance and the learning rate. So, keeping the other parameters constant, the effects of the training tolerance and of the number of nodes in the hidden layer were studied over various values, and the results are presented as graphs (Figures 2, 3 and 4). Test inputs close to a particular process's capabilities were given to check the accuracy of the network; the results are shown in Figure 4. An illustrative re-creation of this network configuration is sketched below.
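Since Neuralyst is a commercial spreadsheet add-in, the sketch below re-creates a comparable configuration with scikit-learn's MLPClassifier as an assumed stand-in, wiring in the parameters listed above; the synthetic training data and the ranking step are our own illustration, not the paper's experiment.

```python
# Illustrative re-creation of the 9-12-11 network of Section 2.3 with
# scikit-learn; the paper used Neuralyst, so this is an assumed equivalent.
import numpy as np
from sklearn.neural_network import MLPClassifier

PROCESSES = ["EDM", "ECM", "ECG", "ECH", "AJM", "WJM",
             "USM", "CHM", "LBM", "EBM", "WEDM"]

rng = np.random.default_rng(1)
X_train = rng.random((100, 9))                   # scaled capability inputs
y_train = np.eye(11)[rng.integers(0, 11, 100)]   # one output node per process

net = MLPClassifier(hidden_layer_sizes=(12,),    # 12 hidden neurons
                    activation="logistic",       # sigmoid transfer function
                    solver="sgd",                # back-propagation by SGD
                    learning_rate_init=0.1,      # momentum learning step size
                    momentum=0.9,                # momentum learning rate
                    max_iter=451,                # epochs reported in the paper
                    tol=1e-4,                    # stop on small error change
                    random_state=0)
net.fit(X_train, y_train)

# Rank processes for a new part requirement by output activation.
part = rng.random((1, 9))
scores = net.predict_proba(part)[0]
ranking = [PROCESSES[i] for i in np.argsort(scores)[::-1]]
print(ranking[:3])   # top-3 candidate NTM processes
```

Because every output node fires with some activation, sorting the activations yields the prioritized list of candidate processes discussed in Section 4.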

Figure 2. Training tolerance vs. number of epochs

Figure 3. Number of nodes in the hidden layer vs. number of epochs

4.0 ANALYSIS OF RESULTS
As the training tolerance decreases, the number of epochs needed to learn the pattern (input data) increases, because the RMS error allowed in the convergence of the network is very small and, to achieve it, the network has to redistribute the error back through the back-propagation algorithm. As the training tolerance decreases, the prediction capability of the network increases, but learning takes more time.

As discussed earlier, the minimum number of nodes in the hidden layer should be 12 in this case. To verify the effect of the number of nodes on the training epochs, the experiment was repeated for various numbers of nodes, and the results are presented in Figure 3. The number of training epochs increases as the number of nodes moves above or below 12. This means that in a 12-dimensional space the input variables are linearly separable: going beyond 12 nodes adds unnecessary computation, and going below 12 nodes leads to a lower-dimensional space in which the input variables are not linearly separable.

Figure 4. Network performance: network-predicted value vs. expected value

One should note that the neural network gives results based on its weights. That is, when values near the data used for training are given as input, the network will predict the same process it associated with those values during training. For example, using the EDM process one can achieve a tolerance of up to 0.03 mm, and the network was trained with this data. If an input of 0.027 mm is given as the required tolerance, the network will possibly predict EDM as the suitable process, provided all the other parameters are close to the training data. In reality, a tolerance of 0.027 mm cannot be achieved with EDM. The network can therefore only be used as an aid to decision-making, and the designer has to check the result for practical application. This issue can be solved by an expert system [6]; however, when more than one process satisfies the given specification, the expert system fails to prioritize the processes. The neural network designed here does the additional job of prioritizing as well. From this point of view, the network was found to be better, and the accuracy of its results also matches real-world solutions most of the time.

5.0 CONCLUSION
This investigation highlights the use of a neural network in NTM process selection. The results are very encouraging. Further studies need to be carried out in order to utilize the approach effectively for NTM process selection applications.

REFERENCES:
[1] Benedict G.F., "Nontraditional Manufacturing Processes", Marcel Dekker, Inc., New York, 1987.
[2] Can Cogun, "Computer-Aided Preliminary Selection of Nontraditional Machining Processes", Int. J. Mach. Tools Manufact., Vol. 34, No. 3, (1994), 315-326.
[3] P. Venkateswara Rao, Ch. Nagaraju, Ch. V.V. Rama Rao, "Computer-Aided Selection of Unconventional Machining Processes", 17th AIMTDR, REC, Warangal.
[4] Zurada M.J., "Introduction to Artificial Neural Systems", Jaico Publishing House, 1999.
[5] "Production Technology", HMT.
[6] V. Sugumaran, M.K. Prabakaran, "Expert System for Nontraditional Machining", Proceedings of National Conference at Annamalai University, (2002).
