Proceedings of the 33rd Conference on Decision and Control, Lake Buena Vista, FL, December 1994


FP-4 3150

The 4D-Approach to Dynamic Machine Vision
Feedback from experience in a wide range of real-world applications

Ernst D. Dickmanns
Dept. of Aero-Space Technology, Institut für Systemdynamik
Universität der Bundeswehr München
W.-Heisenberg-Weg 39, D-85577 Neubiberg, GERMANY
Fax: +49 89 6004 2082

Abstract

In this survey paper covering a decade of experience with recursive estimation methods for visual dynamic scene understanding, the common methodical background of the 4-D approach, which exploits spatio-temporal models of the objects and processes observed, is discussed. Applications range from 'balancing an inverted pendulum', 'vehicle docking', 'road vehicle guidance with reactions to several other vehicles in a normal highway traffic scene', 'on-board autonomous aircraft landing approaches' and 'landmark navigation for land vehicles' to 'grasping of an object freely floating in a satellite orbit with large delay times (six seconds in the Spacelab mission D2 in May 1993)'. Simultaneous recognition of rough 3-D shape and motion parameters for moving road vehicles, including occlusions, has been achieved in well-structured Autobahn scenes in real time (5 Hz). Recognition and tracking of moving humans in well-structured environments has been advanced to a stage where real-time performance seems possible with the next generation of micro-processors just around the corner. After a brief review of the general method, the aspects of exploiting object-oriented, model-based predictions for very efficient, intelligent image feature extraction are discussed for some of the tasks mentioned above.

Introduction

With the introduction of recursive estimation techniques (well known in systems dynamics and control theory since the early 1960s) into image sequence understanding,

exploiting dynamical models for the representation of knowledge about the motion processes observed, a new era in machine vision has begun [1, 2, 3, 4]; a first review of the approach taken at UniBwM has been given in [5]. The basic ideas on which the approach is grounded are discussed in more detail in [6, 7].

0-7803-1968-0/94 $4.00 © 1994 IEEE

Active viewing direction control has been introduced as early as 1985 in the context of road vehicle guidance [8] and has become a common feature of all our applications taking advantage of bifocal vision [9]; this simultaneous usage of two TV-cameras, fixed relative to each other and equipped with objectives of different focal lengths on a pointing platform, allows both a wide field of view and high resolution in some subarea of this field if properly mounted. The dynamical models allow for motion prediction of objects of interest and, thus, easier visual tracking. Several generations of two-axis pointing platforms for dynamic machine vision have been developed and tested [8, 10]; the newer ones have the option of inertial stabilisation using inexpensive rate sensors for reducing motion blur due to rotational perturbations from the vehicle carrying the cameras. Superimposed on the inertial stabilisation, visual fixation on objects of interest can be realised by centering a feature set in the tele-image. The task orientation exploiting generic models of the typical objects involved alleviates the problem of visual recognition considerably: object hypothesis generation is simplified through probability ranking among object classes and their likely aspect conditions in the situations given. Therefore, recognition of situations, that is, of the relative arrangement of objects and their states, given the own goal and ego-state, is essential for autonomous systems.
This has led to a system architecture with orientation towards physical objects on the one side, for structuring the parallel processing hardware, and towards situation assessment on hierarchical levels on the other side, for efficiently organising the derivation of the system response, as given in figure 1; in [11] the three-layered organisation of reflex-like feedback response to steady perturbations, of event-triggered feed-forward control with adaptive generic stereotypical behavioral elements, and of knowledge-based activation of these modes on the situation level has been detailed.


[Figure 1. Overall object-oriented system architecture: parallel processor system for perception and control.]

This approach allows the realisation in technical vision of what psychologists call the 'Gestalt-idea' in human vision: model-guided feature grouping enables the recognition of objects in a task context in complex visual scenes in which purely bottom-up image processing would be bound to fail. The frequent traversals of the organisation scheme, both data-driven bottom-up and model-driven top-down in each cycle (typically of 40 to 80 ms duration), make the system so efficient.

The paper is organised as follows: In the next section, the basic idea is discussed; then, the role of the Jacobian matrix linking features in the image plane to states in the 4-D object representation will be reviewed. The following sections will then deal with specific application areas and lessons learned.

Basic method

The basic idea of the 4-D approach to dynamic machine vision is to generate an internal model in the interpretation process in 3-D space and time which represents objects in a unified way, consisting of spatial shape (feature distribution) around a center of gravity, typical motion characteristics in 3-D space over time in a differential description, and visual mapping by perspective projection; contrary to many other applications of Kalman filtering to image evaluation, the motion process is not analysed in the image plane but in the basic four dimensions we happen to live in.

By high-frequency prediction error feedback for well visible features in the image sequence, the internal model is continuously adapted to the motion processes observed in the real world; this servo-maintained spatio-temporal world model, with a complete state description of all physical objects observed, allows direct application of control theory for approaching one's goals: (optimal) state feedback and event-triggered stereotypical control time history feed-forward components provide a range of behavioral capabilities which may be used by the higher levels for reducing the discrepancy between actual and desired state of the own body.

The nonlinear mapping from shape parameters and object states into image features is locally approximated by first-order (linear) relationships, schematically written in the form of the 'Jacobian' matrix. The elements of this matrix may be determined numerically by differencing between the nominal and systematically perturbed (component-wise) states. The relative magnitudes of these (properly scaled) elements carry useful information for judging the value of features for determining the corresponding object states; if the matrix has full rank, perspective inversion is possible. The elements have to be sufficiently large in order to allow this inversion in a numerically stable way. If entire columns are close to zero, the corresponding state component or parameter cannot be determined. These mapping conditions may be influenced by proper choice of the camera objectives.

Bifocal vision

Bifocal vision with two cameras having different focal lengths and active viewing direction control efficiently improves visual performance. Guidance of high-speed vehicles requires large look-ahead ranges and a tele lens for sufficiently high resolution at these ranges. With a focal length ratio of about 1 to 4.5, the data stream for simultaneously covering a certain spatial viewing angle and, at the same time, allowing high resolution at a smaller region of interest may be reduced by an order of magnitude. A viewing direction control system has been designed and built which allows 20° saccades to be performed in less than 80 ms, thereby shifting a region of interest in the wide-angle image into the area of highest resolution in the tele-image; during these saccades, no stable bottom-up image interpretation is possible. Therefore, the prediction capabilities of the 4-D approach are exploited for recovering dynamic scene understanding under the new mapping conditions.
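The claimed order-of-magnitude data reduction can be checked with simple arithmetic (our illustration; the 512 x 512 pixel format is an assumption, not stated in the paper):

```python
# Data-rate comparison for bifocal vision; pixel counts are illustrative.
# A single camera covering the wide field of view at tele resolution
# (focal length ratio ~4.5) would need 4.5x the pixels along each axis.
wide = 512 * 512                 # wide-angle camera, full field of view
full_res = (512 * 4.5) ** 2      # hypothetical single high-resolution camera
tele_roi = 512 * 512             # tele camera covering a small region of interest
bifocal = wide + tele_roi        # pixels actually processed per frame
print(full_res / bifocal)        # roughly a factor of ten fewer pixels
```

With these numbers the ratio comes out at about 10, consistent with the "order of magnitude" stated in the text.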




Usage of the Jacobian matrix

The Jacobian matrices (for predicted states) carry most of the information for 4-D process understanding; these matrices allow the selection of those feature combinations which, under the condition of limited computing performance available, will yield the highest accuracies in relative state estimation for objects of known shape. Interpreting the entries in the Jacobian matrices and the distance measures between image features yields information for intelligent control of the iteration process with respect to the state variables or shape parameters to be iterated, depending on the aspect conditions. For image sequence interpretation with the two cameras, the same internal spatio-temporal model is being used; only the mapping from 3-D state to image coordinates and the corresponding Jacobian matrices are being computed specifically for each camera. In general, a Jacobian matrix has to be computed for each sensor/object pair. This fact shows that orientation towards physical objects is essential; knowledge about the real world is affixed to objects, which are observed with different sensory modalities. According to their trustworthiness, these may contribute to the confidence in ascribing a certain state to the object observed. 'Filter tuning' for optimally adjusting the parameters in the recursive estimation algorithm to the actual perturbation environment is a difficult task requiring experience with the system under investigation.

Feature extraction

Robust perception of object states requires steady adjustment of the parameters in the interpretation loop according to the situation actually encountered. In natural environments, changing lighting conditions and background intensity levels have to be mastered. Working predominantly with intensity gradients reduces the dependence on absolute intensity levels; however, for object recognition in visually complex environments, additional region-based feature components will help in achieving more robust performance. Two sets of algorithms which have shown to be efficient for real-time object tracking and relative state recognition in connection with the 4-D approach are in use: one is a generic edge element feature extractor, the other a simplified one-dimensional pyramid (a so-called 'triangle') operator; both are applied with parameters intelligently controlled from the higher recognition levels, thereby implementing elements of the 'Gestalt-idea'.

Application examples

Three application areas will be discussed in more detail:

1. road vehicle guidance in normal highway traffic,
2. on-board autonomous automatic visual landing approaches of aircraft, and
3. visual grasping of an object freely floating in space on board the Space Shuttle Columbia in the Spacelab mission D2 (ROTEX, May 1993).

Road vehicle guidance

UniBwM has equipped six full-scale road vehicles with a visual perception system over the last nine years. VaMoRs, a 5-ton van owned by UniBwM, is the senior system still in operation; in 1987 it demonstrated its capability of driving fully autonomously, i.e. both laterally and longitudinally, at high speeds (up to 100 km/h, limited only by engine power) on a free stretch of Autobahn not yet turned over to the public, over distances of more than 20 km. From 1988 to 92 it learned to react properly in response to an increasing number of obstacles and generically known other vehicles in its environment. Since 1992 it has accumulated, together with the companion vehicle VITA (a 6-ton van) of our industrial partner Daimler-Benz, a total of more than 3000 km of fully autonomous driving in normal Autobahn traffic. The behavioral competences encompass: a) lane keeping with speed automatically adjusted to curvature such that a predefined lateral acceleration level will not be exceeded; b) convoy-driving behind another vehicle at a speed-dependent distance (two-seconds rule), including c) stop-and-go in a traffic jam; d) lane changes triggered by the human operator (who has to take care that the lane changed to is free); and e) transition from free-lane driving to convoy-driving when running up to a vehicle in front [12]. In 1994 an all-terrain vehicle and two passenger cars Mercedes-Benz 500 SEL, one each for Daimler-Benz Research and UniBwM, called VITA-2 and VaMoRs-P respectively, have been equipped with our new transputer-based vision systems; these systems have active bifocal vision capabilities in both forward and rearward directions and will be able to track up to half a dozen other vehicles in their environment [13, 14, 15]. They will possess both high-frequency (12.5 and 25 Hz, rightmost two groups in the lower block row of fig. 1) and lower-frequency (2.5 to 5 Hz, leftmost three in fig.
1) groups of 'object processors' for visual object detection, tracking and recognition (relative state estimation) [16-20].

Visual relative state estimation in aircraft landing approaches

The same transputer and vision hardware as in road vehicle guidance has been used for this very demanding task of state estimation in all six degrees of freedom relative to the landmark 'runway'. In order to be able to deal with atmospheric perturbations from gusts, inertial sensors are included for direct measurement of translational accelerations and rotational rates, thereby alleviating the requirements on image sequence processing, which is prone to larger delay times because of high data rates. The viewing direction is inertially stabilized and fixates the point at the horizon at which the borderlines of the runway intersect the horizon line; this yields somewhat simpler relations for image sequence interpretation. In real flight experiments with a twin turboprop aircraft, different delay times in different data paths occurred; they have to be carefully tracked and taken care of by proper modeling if high performance is to be achieved. With a bifocal vision system, landing approaches with relative state estimation from more than 1500 m distance have been performed at the Brunswick airport in northern Germany in March 1994; the results have been compared to other measurements like Differential GPS and radio altimeter data [21, 22]. In hardware-in-the-loop simulations at UniBwM, closed-loop landing approaches in real time with the real vision hardware in the loop have been performed for several years; the capability of performing fully on-board autonomous automatic landing approaches by machine vision, including flare till touch-down, has been demonstrated. Curved approach trajectories have been shown to be possible with this fixation-type vision even under crosswind and gust conditions. At present, work is in progress for equipping helicopters as well with this type of vision system.

Visual grasping of a free-floating object in weightlessness

During the German Spacelab mission D2, from April 26 till May 6, 1993, one of the many experiments was the 'Robot Technology Experiment ROTEX' under the direction of DLR-Oberpfaffenhofen. A robot arm in a work cell on board the Spacelab in the payload bay of the Space Shuttle Columbia carried two miniature CCD TV-cameras in its hand (beside several other sensors not of interest here); one of these cameras was used in connection with our vision system.
The computers for data and image sequence processing had to be on the ground in the mission control center at Oberpfaffenhofen (near Munich: about 48° northern latitude and 11° eastern longitude). The data had to be routed through three geostationary relay satellites and several ground stations before arriving at the mission control center for further processing; on average, a one-way delay time of up to three seconds occurred. That means that from taking measurements on board the Spacelab until the control commands derived from these data by the ground computers arrived there again, about six seconds of delay time had elapsed. It is almost impossible for a human operator to deal with such large delay times. On the other hand, the motion laws are well known and the perturbation levels between robot arm and the freely

floating object are small in general; therefore, prediction of the motion behavior by the computer is relatively easy. It has always been used, also when the human operator was in charge of controlling the robot arm movements. The object to be caught was a modified cube, the corners of which had been cut off normal to the space diagonals such that just the center points of the cube edges remained. The original cube was painted black; the cuts resulted in triangles of bright metallic appearance which were well visible against the remaining black squares. By tracking these edges of dark-to-bright transitions (or vice versa) over time, the translational position and the angular orientation of the object relative to the robot hand had to be determined. The strategy for automatic grasping was to first move the robot hand linearly such that the centroid of the object stayed in the center of the image; then a decoupled linear motion for closing in onto the object was superimposed. The command for closing the two fingers of the robot hand for grasping was given when the predictions indicated that a firm grasp would occur with the object center at the center of the grasping area of the fingers. On May 2nd, 1993, the free-floating object was automatically grasped on board Spacelab, flying somewhere over Africa and the Indian Ocean, remotely controlled from half a dozen micro-processors on the ground. Details are given in [23].

Lessons learned

1. Modelling of real-world processes to be observed by machine vision in both 3-D space and time simultaneously has proven to be most efficient with respect to the price/performance ratio of the hardware needed and the real-time performance level achieved.
Both the object shape (resulting in the feature distribution in the image after perspective mapping from aspect conditions simultaneously to be determined) and the center-point trajectory (determining the translational aspect conditions) have to be modeled in 3-D space; for understanding self-occlusion of features under rotation, the continuity conditions along the time axis introduced by the dynamical model for the rotational degrees of freedom are of great help.

2. The predictive power of the dynamical models yields guidelines for directing feature extraction in the next image; this allows concentration of valuable computing power on those regions in the image where information for the process to be understood or controlled can be found most efficiently. Regions with ambiguous feature concentrations may be avoided altogether (at least for short time spans), thereby not disturbing a continuous interpretation of the process with the remaining features


until new, no longer ambiguous feature arrangements become available again.

3. The orientation towards physical objects in 3-D space and time as the basic units for dynamic scene understanding is considered most essential, since knowledge about the world is affixed to objects; even though knowledge may be phrased as an abstract attribute of classes of objects or as principles, this 'symbolic' description cannot be made operational in an actual recognition task except through an individual object being depicted in the image, which also triggers a host of other inputs to the recognition system, allowing it to solve the task much more easily. Especially the embedding along the time axis and the interpretation relative to other objects help to solve the grounding problem associated with symbols.

4. For understanding complex dynamic scenes, both differential and integral representations are required in parallel: the differential representations allow both spatially and temporally local subprocesses to be understood more easily without reference to the situational context; they are well suited for fraction-of-a-second motion understanding. With respect to judging ego-motion effects, these representations have to correspond to signals coming from other motion sensors like inertial accelerations or angular rates, and they have to be linked to the control outputs affecting these variables with short but characteristic time delays. In combination with typical control time histories or task-specific feedback control laws, qualitative integrals can be formed linking initial states with final states over longer periods of time. Knowledge about these 'mission element units' may be stored at a more abstract, higher level as typical state transitions and referred to by a single symbol; based on a library of such mission elements, a corresponding interpreter may be able to plan an entire mission on an abstract level. This scheme of layered response has proven to be nicely modular, easily expandable and very efficient with respect to reaction time [11].
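Item 4's notion of mission elements as symbolic state transitions drawn from a library can be illustrated with a toy planner (entirely our construction; the element names and states are invented for illustration):

```python
from collections import deque

# Each mission element links an initial symbolic state to a final one,
# abstracting away the underlying feedforward/feedback control episode.
mission_elements = {
    "enter_lane":   ("on_ramp", "in_lane"),
    "convoy_drive": ("in_lane", "behind_vehicle"),
    "lane_change":  ("behind_vehicle", "in_fast_lane"),
    "exit":         ("in_fast_lane", "at_exit"),
}

def plan(start, goal):
    # Breadth-first search over the element library for a sequence of
    # mission elements transforming the start state into the goal state.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, seq = queue.popleft()
        if state == goal:
            return seq
        for name, (pre, post) in mission_elements.items():
            if pre == state and post not in seen:
                seen.add(post)
                queue.append((post, seq + [name]))
    return None

print(plan("on_ramp", "at_exit"))
# ['enter_lane', 'convoy_drive', 'lane_change', 'exit']
```

The planner manipulates only the abstract state transitions; executing each element would invoke the corresponding control behavior on the lower layers, which is the division of labor item 4 describes.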

5. For a deeper understanding of motion processes in the real world, subjects have to be distinguished from 'objects in general' as a subgroup with the special capability of control activation depending on internal 'mental states'. Thus, the basic definition in systems dynamics with respect to the types of variables encountered in the real world turns out to have far-reaching consequences. In systems dynamics, three basic types of variables are of importance with respect to their proper treatment in a task context: 1. state variables, which cannot be changed at a given point in time, but which evolve over

time through the characteristics of the process, through control input, or through perturbations from the environment; 2. control variables, which may be selected 'at will' (within a certain range) on a time scale short compared to the characteristic time scale of the state variable changes; 3. control parameters, which may be set at certain discrete points in time (usually before the process is started); afterwards they remain fixed. Subjects determine their control output from some internal program; in 'intelligent' subjects this control computation is linked to situation-dependent sensor inputs and to some internal goal function (one actually selected from a set at its disposal) which they try to optimize.
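The three variable types of item 5 can be made concrete with a minimal longitudinal-motion sketch (our illustration; the model and all numbers are arbitrary):

```python
from dataclasses import dataclass

@dataclass
class LongitudinalModel:
    # Control parameter: fixed before the run (here a simple lag constant).
    time_constant: float = 2.0

    # State variable: cannot be set directly at a point in time;
    # it evolves through the process dynamics below.
    speed: float = 0.0

    def step(self, throttle: float, dt: float = 0.1):
        # Control variable: 'throttle' may be chosen freely each step,
        # on a faster time scale than the state evolution.
        self.speed += dt * (throttle - self.speed / self.time_constant)

car = LongitudinalModel()
for _ in range(50):
    car.step(throttle=10.0)
# The state approaches the equilibrium speed jointly set by the control
# variable (throttle) and the control parameter (time_constant).
print(round(car.speed, 2))
```

The same constant throttle with a different `time_constant` yields a different trajectory and equilibrium, which is exactly the parameter/control/state distinction drawn in the text.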

6. Typical behaviors of subjects result from the feedforward and feedback control computation schemes installed and from the decision rules for switching between these different modes. The 4-D approach allows a very easy definition of behaviors, since both time and the physical state variables are represented explicitly by the recursive estimation scheme based on prediction error feedback. [It should be noted that implementations avoiding full spatio-temporal representations, like neural net approaches [25], preclude taking advantage of this type of knowledge; we feel that endowing an autonomous system with this basic knowledge, which humankind succeeded in developing only a few centuries ago (or even decades, depending on the point of view, after millennia of struggling), makes it more efficient, easier to adapt to new tasks and more suited for the integration of software packages available from human science and engineering efforts.]

7. Combined with object-oriented databases (in the computer science sense), this approach will become very flexible and open; especially, the simulation and animation tools developed in computer graphics may become of great help once the basic discrepancies in the specific approaches have been overcome. From this point of view, 'virtual reality' is nothing but the capability of the computer to translate symbolic and numeric inputs into spatio-temporal representations projected onto a video screen for interaction with humans; leaving the human controller out of the loop and gearing his former control inputs to what has been extracted from camera inputs and from other sensors observing the motion states of the own body carrying the camera and the computer system, 'virtual reality' is the computer's (the autonomous system's) internal servo-maintained image of the world, its imagination or visualized internal representation of the reality sensed.

Items 5 to 7 are more recently being used for designing


and developing more complex autonomous systems based on the 4-D approach. These will have the capability of learning from sensory experience; however, this learning shall not start from scratch but be based on explicit knowledge about 3-D space and time, about basic motion laws, and about the properties of perspective projection.

Conclusions

The 4-D approach combines recursive estimation techniques with full dynamical models in 3-D space and time for the representation of physical objects. Combining this with the object-oriented programming paradigm of computer science and with 4-D object databases for generic object classes, taking the imaging process into account, a very flexible approach to dynamic machine vision results. Affordable computing power is now becoming available to allow the realization of systems for practical applications in more complex environments.

The results achieved up to now lead us to the conclusion that the 4-D approach to dynamic machine vision is one of the more promising ones. It can be realized as an open systems approach which is presently being studied.

References

[1] D. Gennery, "Tracking known three-dimensional objects", Proc. American Association for Artificial Intelligence, Pittsburgh, 1982, pp 13-17
[2] H.G. Meissner, E.D. Dickmanns, "Control of an Unstable Plant by Computer Vision", in T.S. Huang (ed), Image Sequence Processing and Dynamic Scene Analysis, Springer-Verlag, Berlin, 1983, pp 532-548
[3] H.-J. Wünsche, "Verbesserte Regelung eines dynamischen Systems durch Auswertung redundanter Sichtinformation unter Berücksichtigung der Einflüsse verschiedener Zustandsschätzer und Abtastzeiten" [Improved control of a dynamic system by evaluation of redundant visual information, considering the influence of different state estimators and sampling times], Report HSBwM/LRT/WEI 3dIB/83-2, 1983
[4] H.-J. Wünsche, "Detection and Control of Mobile Robot Motion in Real-Time Computer Vision", in N. Marquino (ed), Advances in Intelligent Robotics Systems, Proc. SPIE, Vol. 727, Cambridge, Mass., 1986, pp 100-109
[5] E.D. Dickmanns, V. Graefe, a) "Dynamic machine vision", b) "Applications of dynamic monocular machine vision", J. Machine Vision and Applications, Springer, Nov. 1988, pp 223-261
[6] E.D. Dickmanns, "Machine Perception Exploiting High-Level Spatio-Temporal Models", AGARD LS 185, Machine Perception, Hampton, VA, USA; Munich; Madrid, 1992
[7] E.D. Dickmanns, "The 4-D Approach to Visual Control of Autonomous Systems", AIAA/NASA Conf. on Intelligent Robots in Field, Factory, Service and Space (CIRFFSS), Houston, Texas, March 1994
[8] B. Mysliwetz, E.D. Dickmanns, "A Vision System with Active Gaze Control for Real-Time Interpretation of Well Structured Dynamic Scenes", in L.O. Hertzberger (ed), Proc. 1st Conference on Intelligent Autonomous Systems (IAS), Amsterdam, 1986, pp 477-483
[9] E.D. Dickmanns, "Active Bifocal Vision", in S. Impedovo (ed), Progress in Image Analysis and Processing III, World Scientific Publ. Co, Singapore, 1994, pp 481-496
[10] J. Schiehlen, E.D. Dickmanns, "Two-Axis Camera Platform for Machine Vision", AGARD Conf. Proc. 539, Pointing and Tracking Systems, 1993, pp 22-1 - 22-6
[11] E.D. Dickmanns, "4-D Dynamic Vision for Intelligent Motion Control", in C. Harris (ed), Int. Journal for Engineering Applications of AI (IJEAAI), Special Issue on Intelligent Autonomous Vehicles Research, Vol. 4, No. 4, pp 301-307, 1991
[12] C. Brüdigam, "Intelligente Fahrmanöver sehender autonomer Fahrzeuge in autobahnähnlicher Umgebung" [Intelligent driving maneuvers of vision-guided autonomous vehicles in Autobahn-like environments], Dissertation, Universität der Bundeswehr München, Fakultät für LRT, 1994
[13] E.D. Dickmanns, R. Behringer, D. Dickmanns, T. Hildebrandt, M. Maurer, F. Thomanek, J. Schiehlen, "The seeing passenger car VaMoRs-P", Int. Symp. on Intelligent Vehicles, Paris, Oct. 1994
[14] R. Behringer, "Road Recognition from Multifocal Vision", Int. Symp. on Intelligent Vehicles, Paris, Oct. 1994
[15] F. Thomanek, E.D. Dickmanns, "Multiple Object Recognition and Scene Interpretation for Autonomous Road Vehicle Guidance", Int. Symp. on Intelligent Vehicles, Paris, Oct. 1994
[16] J. Schick, E.D. Dickmanns, "Simultaneous Estimation of 3D Shape and Motion of Objects by Computer Vision", IEEE Workshop on Visual Motion, Princeton, NJ, 1991
[17] M. Schmid, "An approach to model-based 3-D recognition of vehicles in real time by machine vision", IEEE Conf. on Intelligent Robots and Systems (IROS), Neubiberg, Sept. 1994
[18] V. v. Holt, "Tracking and Classification of Overtaking Vehicles on Autobahnen", Int. Symp. on Intelligent Vehicles, Paris, 1994
[19] W. Kinzel, "Präattentive und attentive Bildverarbeitungsschritte zur visuellen Erkennung von Fußgängern" [Preattentive and attentive image processing steps for the visual recognition of pedestrians], Dissertation, Universität der Bundeswehr München, Fakultät für LRT, 1994
[20] S. Estable, J. Schick, F. Stein, R. Ott, R. Janssen, W. Ritter, Y.Z. Zhang, "Real-Time Traffic Sign Recognition System", Int. Symp. on Intelligent Vehicles, Paris, Oct. 1994
[21] F.R. Schell, E.D. Dickmanns, "Autonomous landing of airplanes by dynamic machine vision", Machine Vision and Applications, Vol. 7, No. 3, 1994, pp 127-134
[22] E.D. Dickmanns, S. Werner, S. Kraus, F.R. Schell, "Experimental Results in Autonomous Landing Approaches by Dynamic Machine Vision", Proc. SPIE, Vol. 2220, 1994, pp 304-313
[23] C. Fagerer, D. Dickmanns, E.D. Dickmanns, "Visual Grasping with Long Delay Time of a Free Floating Object in Orbit", J. Autonomous Robots, Vol. 1, No. 1, Kluwer Acad. Publishers, Boston, 1994
[24] C. Hock, "Wissensbasierte Fahrzeugführung mit Landmarken für autonome Roboter" [Knowledge-based vehicle guidance with landmarks for autonomous robots], Dissertation, Universität der Bundeswehr München, Fakultät für LRT, 1994
[25] D.A. Pomerleau, "Neural Network Perception for Mobile Robot Guidance", Dissertation, Carnegie Mellon Univ., Pittsburgh, 1992
