Cellular Neural Network for Real Time Image Processing

G. Vagliasindi (a), P. Arena (a), L. Fortuna (a), G. Mazzitelli (b), A. Murari (c), and JET-EFDA Contributors*

JET-EFDA, Culham Science Center, OX14 3DB, Abingdon, UK
(a) Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi, Università degli Studi di Catania, I-95125 Catania, Italy
(b) ENEA-Gestione Grandi Impianti Sperimentali, via E. Fermi 45, I-00044 Frascati, Rome, Italy
(c) Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padova, Italy

* See the Appendix of M. L. Watkins et al., Fusion Energy 2006 (Proc. 21st Int. Conf. Chengdu, 2006), IAEA, 2006.

Abstract. Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have played a key role as image processing instruments. Thanks to their structure they are able to process the individual pixels of an image in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among which nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and for the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and at the Joint European Torus (JET).

Keywords: cellular nonlinear networks, image processing, tokamaks, nuclear fusion
PACS: 07.05.Pj, 87.57.N-, 52.55.Fa

INTRODUCTION

CNNs are arrays of simple, identical, locally interconnected nonlinear dynamic circuits, called cells, each of which interacts, via weighted connections, with the cells in a neighborhood of limited radius. Due to their structure, they are particularly suited for image processing tasks. Moreover, thanks to their local interaction over a limited radius, CNNs can be easily implemented as analog and mixed-mode circuits. This kind of implementation, which dedicates an individual circuit to each cell, makes it possible to process all pixels in parallel. CNNs can therefore represent a valid alternative to traditional methods as far as real time image processing is concerned.

To demonstrate the feasibility of the CNN approach to real time image processing in nuclear fusion, several applications of CNN-based hardware and software techniques were developed in recent years at FTU and JET, exploiting the output of the video cameras installed in the two devices. Problems like the real time detection of the strike point positions, hot spots and MARFEs have been successfully addressed. An overview of these techniques is given here, together with a brief introduction to CNN technology.

CELLULAR NONLINEAR NETWORKS

The concept of CNNs was introduced in 1988 by L. O. Chua [1]. The architecture of the CNN is made up of a basic circuit, called cell, containing linear and nonlinear circuit elements. Each cell in a CNN is connected to its local neighboring cells, so a direct interaction occurs only among adjacent cells. The neighborhood of the cell on the i-th row and j-th column, denoted by C(i, j), is defined as

N_r(i,j) = \left\{ C(k,l) \;\middle|\; \max\{|k-i|,\,|l-j|\} \le r,\ 1 \le k \le M,\ 1 \le l \le N \right\} \quad (1)

where r is a positive integer which fixes the dimension of the neighborhood. A CNN is entirely characterized by a set of nonlinear differential equations associated with the cells in the circuit. Several mathematical models for the state equation of the single cell have been proposed since their introduction. The model implemented in the CNN Universal Chip family [2], also called Full Signal Range (FSR), is given by the following state equation:

C_x \frac{dx_{ij}(t)}{dt} = -\frac{1}{R_x}\, g\!\left(x_{ij}(t)\right) + \sum_{C(k,l)\in N_r(i,j)} A(i,j;k,l)\, y_{kl}(t) + \sum_{C(k,l)\in N_r(i,j)} B(i,j;k,l)\, u_{kl}(t) + I \quad (2)

and the output equation is:

y_{ij}(t) = x_{ij}(t) \quad (3)

where u, x, and y denote the input, state, and output of the cell, respectively; R_x and C_x are the values of the linear resistor and linear capacitor, which determine the time constant of the circuit; A(i, j; k, l) and B(i, j; k, l) are the feedback and control templates, respectively; I is the bias term, which is constant for all the CNN cells; g(x) is the nonlinear function in the state equation (2), described in [2].

One of the main applications of CNNs is image processing. In this case the input image is mapped onto the CNN in such a way that each image pixel is associated with the input or initial state of a particular cell. The CNN evolution transforms the input image into the corresponding output image, obtained directly from equation (2). In this context, the template operators work like the instructions of a programming language, and a large number of templates and template algorithms for a variety of tasks is already available in the literature [3].

The hardware prototype system used in this application is based on the CNN Universal Chip prototype, a 128x128 mixed-signal CNN chip [4] developed following the concept of Single Instruction Multiple Data (SIMD) architectures. It is named ACE16K, where ACE is the acronym of Analogic Cellular Engine, to underline the mixed-signal (analog and logic) nature of the chip. ACE16K can be described basically as an array of 128x128 identical, locally interacting, analog processing units designed for high speed image processing tasks requiring moderate accuracy (around 8 bits). The system contains a set of on-chip peripheral circuitries that, on the one hand, allow a completely digital interface with the host and, on the other, provide high algorithmic capability by means of conventional programming memories where the algorithms are stored. Implemented in this way, CNNs have the same computational power as Universal Turing machines and can therefore be applied to a variety of tasks [5].
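To make the template mechanism concrete, the following Python sketch integrates a discretized version of the FSR dynamics (2)-(3) with a simple forward Euler scheme and applies the classic EDGE template from the CNN template library to a small binary image. It is only a minimal software illustration under stated assumptions (3x3 templates, fixed white boundary, hard state limiting in place of g(x), R_x = C_x = 1); it is not the code that runs on the ACE16K.

```python
import numpy as np

def cnn_transient(u, A, B, I, h=0.05, steps=400):
    """Forward-Euler integration of a discretized FSR-like CNN, eqs. (2)-(3).

    u : input image with values in [-1, +1] (white = -1, black = +1),
        also used as the initial state.
    A : 3x3 feedback template, B : 3x3 control template, I : bias term.
    Hard clipping of the state stands in for the limiting function g(x).
    """
    M, N = u.shape
    x = u.copy()
    # Constant forcing term B*u + I, computed once on the padded input
    # (fixed white boundary cells, i.e. value -1).
    up = np.pad(u, 1, mode="constant", constant_values=-1.0)
    w = I * np.ones((M, N))
    for dk in range(3):
        for dl in range(3):
            w += B[dk, dl] * up[dk:dk + M, dl:dl + N]
    for _ in range(steps):
        y = x                               # FSR output equation (3): y = x
        yp = np.pad(y, 1, mode="constant", constant_values=-1.0)
        feedback = np.zeros((M, N))
        for dk in range(3):
            for dl in range(3):
                feedback += A[dk, dl] * yp[dk:dk + M, dl:dl + N]
        x = x + h * (-x + feedback + w)     # Euler step of eq. (2)
        x = np.clip(x, -1.0, 1.0)           # state limiting (role of g in the FSR model)
    return x

# Classic EDGE template from the CNN template library [3]: black pixels with
# at least one white neighbour stay black, interior black pixels turn white.
A = np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
I = -1.0

img = -np.ones((16, 16))
img[4:12, 4:12] = 1.0                       # black square on a white background
edges = cnn_transient(img, A, B, I)
print((edges > 0).astype(int))              # only the contour of the square remains black
```

Changing the A, B, and I values is all that is needed to switch the operation performed, which is the sense in which templates act as the "instructions" of a CNN program.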

STRIKE POINT DETECTION AT JET

In tokamak plasmas, the divertor is the region of the vacuum vessel explicitly designed to handle power losses. The intersection of the separatrix, the last closed flux surface, with the divertor target plate represents a strike point. In JET the position of the strike points is mainly derived from magnetic measurements obtained through loops and pick-up coils located around the vacuum vessel. The code used at JET to determine the plasma shape, and therefore also the position of the strike points, is XLOC [6]. Another very useful diagnostic to derive information about the power deposition in the JET divertor is infrared imaging. At the time of the experiments, two IR cameras framing the divertor region were available at JET, measuring the infrared radiation in the interval 3-5 μm with a resolution of 128x128 pixels [7]. A specific procedure was developed to derive the position of the strike points from the data of the infrared cameras, exploiting the capabilities of CNN technology; a detailed description is given in [8]. It is capable of supplying the position of both the inner and the outer strike points within 20 ms. In particular, the time to identify the strike points in the image is between 13 and 19 ms, depending on the brightness of the starting frame.
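Purely to illustrate the kind of operation such a procedure performs, the sketch below thresholds a single 128x128 IR frame and takes the centroids of the two brightest connected regions as crude inner and outer strike point estimates. It is an assumed, simplified stand-in built on NumPy and SciPy's ndimage utilities, not the CNN algorithm of [8]; the threshold quantile and the synthetic test frame are arbitrary.

```python
import numpy as np
from scipy import ndimage

def strike_point_estimates(frame, quantile=0.98):
    """Very rough strike point localisation on a single IR frame.

    frame : 2-D array of IR intensities (e.g. 128x128).
    Returns the (row, col) centroids of the two largest bright regions,
    ordered by column, as crude inner/outer strike point estimates.
    """
    threshold = np.quantile(frame, quantile)      # keep only the hottest pixels
    mask = frame > threshold
    labels, n = ndimage.label(mask)               # connected bright regions
    if n < 2:
        return None                               # strike points not resolved
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = np.argsort(sizes)[-2:] + 1          # labels of the two biggest blobs
    centroids = [ndimage.center_of_mass(mask, labels, lab) for lab in largest]
    return sorted(centroids, key=lambda c: c[1])  # inner first (smaller column)

# Synthetic 128x128 frame with two bright strips standing in for the strike points.
frame = np.random.default_rng(0).normal(0.0, 0.05, (128, 128))
frame[90:95, 30:40] += 1.0
frame[90:95, 80:95] += 1.0
print(strike_point_estimates(frame))
```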

HOT SPOTS DETECTION IN JET VACUUM VESSEL

One of the future ITER relevant enhancements at JET will be the installation of a beryllium wall. Since Be is much more vulnerable than stainless steel (the present JET wall material), preserving the integrity of the plasma facing components will be one of the main issues in future JET experiments. Detecting in time the presence of hot spots, i.e. regions of the first wall where the temperature approaches dangerous levels, is considered crucial in the safety strategy. CNN technology was applied to the real time identification of hot spots, and two different algorithms were developed. The first one, for the so called static detection, performs the analysis of a single frame at a time. It is more suited to the monitoring of the fixed parts of the machine, like the plasma facing components (limiters and divertor). A second approach, called dynamic detection, is based on the difference between subsequent frames. The latter algorithm is more complicated, but it makes it possible to follow the growth of the hot spots and their movements inside the field of view. A detailed description of both algorithms is reported in [9]. Both algorithms were tested using frames acquired by the new JET wide angle infrared camera [10]. The execution time is of the order of 60 ms to process a 384x384 frame, while the time required to process a 128x128 subframe is about 10 ms. This faster processing could be used to monitor particularly delicate parts of the machine, like the RF antennas, with a higher time resolution.
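As a hedged illustration of the two ideas (and not the algorithms of [9]), static detection can be sketched as thresholding a single calibrated frame against an alarm level, while dynamic detection additionally requires the flagged pixels to be brightening with respect to the previous frame; the alarm and growth thresholds below are assumed values.

```python
import numpy as np

ALARM_LEVEL = 0.8      # assumed normalised temperature alarm threshold
GROWTH_LEVEL = 0.1     # assumed minimum frame-to-frame increase

def static_hot_spots(frame):
    """Static detection: flag pixels of a single frame above the alarm level."""
    return frame > ALARM_LEVEL

def dynamic_hot_spots(prev_frame, frame):
    """Dynamic detection: flag hot pixels whose intensity is still rising,
    so that growing or moving hot spots can be followed frame by frame."""
    rising = (frame - prev_frame) > GROWTH_LEVEL
    return static_hot_spots(frame) & rising

# Toy example: a hot spot appearing and intensifying between two 384x384 frames.
rng = np.random.default_rng(1)
f0 = rng.uniform(0.0, 0.3, (384, 384))
f1 = f0.copy()
f1[200:210, 100:110] = 0.95          # new hot region on the second frame
print(static_hot_spots(f1).sum(), dynamic_hot_spots(f0, f1).sum())
```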

EARLY MARFE DETECTION AT FTU

The MARFE [11] is a radiation instability which appears in tokamaks as a toroidally symmetric ring of enhanced radiation. It usually occurs on the inner side of the torus. An uncontrolled MARFE can limit the maximum density achievable during a discharge or evolve into a disruption. As a consequence, the real time control of MARFEs would be a useful means to extend the plasma operational parameters and to avoid dangerous disruptions. Camera observations of a wide poloidal portion of the plasma edge are a good candidate for the early detection of the MARFE onset and for the development of a feedback control to mitigate its effects. An algorithm has been devised to detect the incipience of a MARFE [12], and it has been entirely implemented on the CNN-based ACE16K chip. The data from the video monitoring system installed at FTU, which provides a frame every 20 ms [13], were used in this work. The time taken by the algorithm to process two consecutive frames is about 16 ms, which is below the interframe time. This real time detection system can help in carrying out a safe termination of the experiment when the probability of the occurrence of a disruption is high, thus protecting the tokamak from mechanical and thermal stresses.
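Again only as an assumed, simplified illustration of processing two consecutive camera frames (the CNN algorithm itself is described in [12]), a MARFE precursor can be looked for as a rapidly brightening band on the high field side of the image; the column range and thresholds below are hypothetical.

```python
import numpy as np

def marfe_indicator(prev_frame, frame, hfs_cols=slice(0, 20),
                    brightness=0.6, growth=0.2, coverage=0.3):
    """Crude MARFE-onset indicator from two consecutive visible-camera frames.

    Looks at the high-field-side columns of the image (hfs_cols, an assumed
    region) and flags the frame when a large fraction of that strip is both
    bright and significantly brighter than in the previous frame.
    """
    strip_now = frame[:, hfs_cols]
    strip_prev = prev_frame[:, hfs_cols]
    hot_and_growing = (strip_now > brightness) & (strip_now - strip_prev > growth)
    return hot_and_growing.mean() > coverage

# Toy pair of frames: a vertical band brightening on the inner side.
rng = np.random.default_rng(2)
f0 = rng.uniform(0.0, 0.2, (100, 160))
f1 = f0.copy()
f1[:, 0:20] += 0.7                    # enhanced radiation band appears
print(marfe_indicator(f0, f1))        # True -> trigger a safe termination
```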

ACKNOWLEDGMENTS

This work was partially supported by the project "Real-time visual extraction from plasma experiments for real time control," funded by EURATOM. The work leading to this article was funded by the European Atomic Energy Community and is subject to the provisions of the European Fusion Development Agreement.


REFERENCES

1. L. O. Chua and L. Yang, IEEE Transactions on Circuits and Systems I, vol. 35, pp. 1257-1290, 1988.
2. S. Espejo et al., International Journal of Circuit Theory and Applications, vol. 24, pp. 341-356, May-June 1996.
3. T. Roska et al., CNN Software Library (Templates and Algorithms), Version 7.3, Hungarian Academy of Sciences, Computer and Automation Institute, Budapest, Hungary, 1999.
4. A. Rodríguez-Vázquez et al., IEEE Transactions on Circuits and Systems I, vol. 51, no. 5, May 2004.
5. L. O. Chua and K. R. Crounse, IEEE Transactions on Circuits and Systems I, vol. 53, no. 4, April 1996.
6. F. Sartori, A. Cenedese and F. Milani, Fusion Eng. Des., vol. 66-68, pp. 735-739, 2003.
7. V. Riccardo et al., Plasma Phys. Control. Fusion, vol. 44, pp. 905-929, 2002.
8. P. Arena et al., Rev. Sci. Instrum., vol. 76, no. 11, art. 113503, November 2005.
9. G. Vagliasindi et al., "First Application of Cellular Neural Network Methods to the Real Time Identification of Hot Spots in JET," submitted to IEEE Transactions on Plasma Science.
10. E. Gauthier et al., 24th Symposium on Fusion Technology (SOFT 2006), 11-15 September 2006, Warsaw, Poland, to be published in Fusion Engineering and Design.
11. B. Lipschultz, Nuclear Fusion, vol. 24, no. 8, pp. 977-989, August 1984.
12. P. Arena et al., IEEE Transactions on Plasma Science, vol. 33, no. 3, pp. 1-9, June 2005.
13. R. De Angelis et al., Rev. Sci. Instrum., vol. 75, no. 10, pp. 4082-4084, October 2004.
