A Microscopic Framework For Distributed Object-Recognition & Pose-Estimation

Sathyanarayan Anand, Ahmed Kirmani, Siddharth Shrivastava, Santanu Chaudhury, Basabi Bhaumik
Dept. of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India.
{ [email protected], [email protected], [email protected], [email protected], [email protected] }

Abstract— Effective self-organization schemes lead to the creation of autonomous and reliable robot teams that can outperform a single, sophisticated robot on several tasks. We present here a novel, vision-based microscopic framework for active and distributed object recognition and pose estimation using a team of robots of simple construction. The team performs the task of locating given object(s) in an unknown territory, recognizing them with sufficient confidence and estimating their pose. The larger goal is to experiment with probabilistic frameworks and graph-theoretic methods in the design of robot teams, so as to achieve autonomous self-organization independent of the task at hand. We have chosen 3D object recognition as a first problem area to evaluate the effectiveness of our system design. The system comprises a probabilistic framework for the coordinated detection of the object, together with adaptive measures for machinery failures and the presence of obstacles. A pose estimation method for the detected object and graph-theoretic solutions for optimal field coverage by the robots are also presented. Each robot is provided with a part-based, spatial model of the object. The object to be recognized is taken to be much bigger than the robots and need not fit completely into the field of view of the robot cameras. We assume no knowledge of the internal parameters of the robot cameras and perform no camera calibration procedures. Initial simulation results corroborate our system design and field coverage methods.

I. INTRODUCTION

The design of effective self-organization mechanisms for creating autonomous robot teams is a challenging task. We tackle this problem at the more abstract inter-robot level, without concerning ourselves with the intricacies of constructing and operating a single robot. The goal here is to experiment with probabilistic frameworks and graph-theoretic methods for task characterization, independent of the actual task under consideration. As a first step, we choose 3D object recognition as our problem area, owing to its relevance in real-life applications and the various challenging issues it poses. 3D object recognition is the process of recognizing 3D objects from their views using image-computable features. Most model-based 3D object recognition systems solve this problem using a single view of the object [2], [4], [8], but a recent survey on active vision recognition [13] shows the need for employing multiple views to achieve better system performance.


We present here an active vision-based 3D object recognition and pose estimation system that employs an autonomous team of robots to obtain multiple views of the object, fuses data from multiple sensors, and uses a sensor self-organization mechanism to complete its tasks. Graph-theoretic methods allow it to localize multiple instances of objects in an unknown territory. The overall aim is for a team of robots to explore an unknown field of known dimensions and to perform the localization, recognition and pose estimation of multiple instances of objects on that field.

Recent work by Omar Javed et al. [5] on multiple-camera surveillance systems assumes calibrated cameras fixed at known locations; these cameras track an object through their fields of view. Our system, on the other hand, is inherently mobile and performs the complementary task of localizing and recognizing stationary objects in a given field. Mobility solves the problems of occlusion and segmentation variation. We incorporate data fusion from multiple sensors to allow for a more efficient and robust localization and detection mechanism. Many of the proposed active recognition schemes [2], [4], [8] assume that the object fits completely into a single view taken by a camera, and none of them can handle the case where the internal parameters of the camera vary, whether by accident or by design. A next-view planning scheme using uncalibrated cameras was developed in [12], but it uses only a single mobile camera. Our uncalibrated multi-camera system employs a reactive self-organization scheme and is constrained by none of the limitations described above. The use of multiple cameras allows for a far greater coverage area and also provides redundancy and corrective measures in cases of robot failure.

The rest of the paper is organized as follows. Section II presents the problem scenario in detail and section III outlines the object model that we use in our system. Section IV details our system design and field coverage techniques. Results are presented in section V, followed by the conclusions and future work in section VI.

II. PROBLEM DEFINITION


We consider a field of known dimensions on which we wish to recognize and localize numerous object instances. We construct a robot team whose task is to explore the field, attaining maximum visual coverage, and to recognize and localize the maximum number of object instances whose generic object models are provided to the team as a knowledge base. The robots we employ are small and simple. Each robot has locomotive and odometric devices, a pan-tilt-zoom camera, a ranging device and a controlling circuit capable of wireless communication. The odometric accuracies needed are low and are compensated for by the probabilistic nature of the object model (section III). Each robot carries the object models, object part classifiers and other implementation-dependent data structures. An important constraint to note is that the robots can perform only 2D planar motion. This means that, in the direction perpendicular to the plane of motion, the pixel length of an object can be treated as an invariant for determining its location.

III. THE OBJECT MODEL



The object model is the knowledge base provided to each robot of the team; it is an approximate but reasonably accurate probabilistic description of what the object looks like in real life. We present the mathematical formulation of this model below; a short data-structure sketch follows at the end of the section.
• Radial Probability Distribution Function (pdf) Graph: This is an undirected complete graph Kn = (V, E) where each vertex vi ∈ V represents an identifiable part pi of the object. Each edge e = (vi, vj) ∈ E carries the spatial pdf fij(xij) between the locations of parts pi and pj. Note that the spatial pdf fij(xij) is a function of the physical distance xij between the parts pi and pj. Also, fji(xji) = fij(xij), which preserves the non-directionality of the graph Kn. The spatial pdf gives the likelihood of existence of one part w.r.t. the other at a given separation.
• The Directional PDF Set: This is a set S of tables Ti, one for every part pi of the object. The intent here is to provide directional guidance to the robots in their movement. Taking a line between the origin part and any other part of the object as an axis line, the expected angles to other such lines are stored. Figure 1 shows an example using the rear left tyre of a car as the origin part. The first part that is detected (say pi) becomes the origin part; the table of interest Ti is then the one with pi as its origin part. A second part detection, pj, allows us to form the axis line, and if the line lij is not the axis line of Ti, we recalculate the angles with lij as the axis line.
• The Confidence Level Tree: A confidence level tree is maintained for each part pi; it is essentially a rooted directed acyclic graph (DAG) with n − 1 levels, where n is the number of parts. Formally, each such tree $T_m = (T, W)$ has $m$ nodes, where
$$m = \sum_{k=1}^{n-1} \frac{(n-1)!}{(n-k)!}.$$
Let r be the root and L the set of leaves of Tm. Each node tijk ∈ (T − r) is the (i, j)th node in the kth level and represents the amount by which the confidence in detection rises when part pi is detected in the kth level by taking the jth of the available (n − 1)!/(n − j − 1)! paths in the (j − 1)th level. The root of the tree for part pi is the increase in confidence when the first detected part is pi. Hence each path to a leaf l ∈ L corresponds to a different relevant combination of previously detected parts, ending in successful detection of all parts. For example, the detection of a tyre when three other tyres have already been detected gives much more credence to the existence of a car than the detection of the first tyre.

Fig. 1. The top half shows an example of a directional pdf construction using the rear left tyre as the origin part. The angles between the lines in the top view are stored in the table. Any one line can be taken at random to be the axis line. Here, we take the line joining the two tyres to be the axis line. Examples of the three components of the object model are shown in the bottom half. More details are given in section III.
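To make the model concrete, the knowledge base can be held in a small in-memory structure on each robot. The sketch below is illustrative only: it assumes Gaussian spatial pdfs parametrized by a mean separation and a variance per part pair, which the model above does not prescribe, and the class and method names are ours.

# Illustrative sketch of the object model carried by each robot. The
# Gaussian form of the edge pdfs is an assumption; the model only
# requires some spatial pdf f_ij(x_ij) per part pair.
import math
import itertools

class ObjectModel:
    def __init__(self, parts, mean_dist, var_dist, angle_tables):
        self.parts = parts                # identifiable part names p_i
        self.f = {}                       # edge pdfs of the complete graph K_n
        for pi, pj in itertools.combinations(parts, 2):
            # symmetric by construction: f_ji(x) = f_ij(x)
            self.f[frozenset((pi, pj))] = (mean_dist[(pi, pj)],
                                           var_dist[(pi, pj)])
        self.angle_tables = angle_tables  # directional pdf set: one table T_i per part

    def radial_pdf(self, pi, pj, x):
        """Likelihood of part pj lying at physical distance x from part pi."""
        mu, var = self.f[frozenset((pi, pj))]
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def expected_angle(self, origin, axis_part, target):
        """Expected angle of the origin-target line w.r.t. the axis line."""
        return self.angle_tables[origin][(axis_part, target)]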

IV. THE CORE SYSTEM PROCESS

The first stage in the detection process is for the team to explore the field and locate the desired objects. During this process, every robot continually updates the team regarding its own position and the presence of obstacles on the field. When a classifiable object part is detected, an object may exist at this location and the team enters the self-organization stage.

A. Field Exploration

1) Efficient Camera Operation via Data Fusion: We first ascertain a minimum object-robot distance D that toggles the activation/sleep state of the image classifiers for the current image. This elementary heuristic avoids wasting battery power on futile attempts to detect and classify invalid (too distant) large objects. The robot runs the classifiers only when it detects an object within distance D of itself. For each object part we know the threshold pixel size in the image above which all part classifiers operate successfully; this value is completely determined by the classifiers used. We perform a set of known experimental camera adjustments that relate the size of the object in the captured image to its distance from the camera. The procedure to estimate the value of D is given in algorithm 4.1; a sketch of the idea follows.
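Algorithm 4.1 is not reproduced here, but its idea can be sketched briefly. We assume, as one plausible reading of the experimental adjustments described above, a table of (distance, pixel size) measurements for a reference part, interpolated to find the range at which the pixel size drops to the classifier threshold; the function name and the use of numpy are ours.

# Sketch of estimating the activation distance D from experimentally
# measured (distance, pixel size) pairs. The interpolation is an
# illustrative choice, not the paper's algorithm 4.1 itself.
import numpy as np

def estimate_activation_distance(distances, pixel_sizes, s_min):
    """distances: increasing ranges at which a reference part was imaged;
    pixel_sizes: its measured pixel size at each range (decreasing);
    s_min: threshold pixel size below which the part classifiers fail.
    Returns D, the largest range at which classification is still reliable."""
    d = np.asarray(distances, dtype=float)
    s = np.asarray(pixel_sizes, dtype=float)
    # np.interp needs increasing x-values, so interpolate distance as a
    # function of pixel size over the reversed (increasing) arrays.
    return float(np.interp(s_min, s[::-1], d[::-1]))

# Example: D = estimate_activation_distance([1, 2, 4, 8], [240, 120, 60, 30], 50)
# The robot then runs its classifiers only when the ranging device
# reports a return closer than D.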

2) Optimal Field Coverage: Before detection of the first object part, the aim of the robot team is to minimize the cumulative distance travelled by the team, subject to the constraint that the total area covered equals the area of the field. A naive approach would be a radial propagation of the robots starting from a corner entry point, or a rectilinear grid maze that constrains the movement of the team. Another would be to decompose the field into a set of regular geometrical shapes and then allocate coverage of the individual units to smaller robot teams. Finding an optimal assignment of paths to individual robots is NP-hard, since the number of valid assignments can be exponentially large in the input size and exhaustive evaluation is computationally expensive. Moreover, we wish to maintain a regular coordinated structure and formulate a deterministic policy that allocates and maintains team movements. Hence, we propose a strategy based on the well-studied graph-theoretic problem of finding edge-disjoint paths in a given planar graph. The field is modelled as a rectangular m × n grid graph whose unit dimensions are no greater than the threshold distance D defined in the previous section. The values of m and n reflect the fact that we wish to allocate each unit grid to a robot. The decomposition can be achieved optimally as
$$(m, n) = \arg\min\, |m - n| \quad \text{s.t.} \quad m \times n = N,\ \ m \ge \frac{\text{field length}}{D},\ \ n \ge \frac{\text{field width}}{D},\ \ (m, n) \in \mathbb{Z}^{+}$$
where N is the number of robots. After this decomposition, we achieve optimal coverage in two stages: first, getting each robot to the center of its allocated unit grid using the minimum amount of resources; second, having the robot optimally cover its unit grid. We also attempt to maximize the chances of first-part detection while the robots are moving to their respective centers. This is accomplished by allocating edge-disjoint paths to the robots, which ensures that there is minimum overlap in the robot trajectories and hence maximal coverage.

We formally present the graph model and the problem formulation:
• Input: An undirected grid graph G = (V, E), where V is the set of unit grid centers and the edge set E connects each vertex to its nearest and second-nearest neighbor vertices. Let N = {{s1, t1}, ..., {smn, tmn}} be the set of source-destination pairs. In our initial case all robots begin from the same origin or a set of closely located origins, so the si are not all distinct.
• Objective: Find edge-disjoint paths Pi between si and ti for each i, 1 ≤ i ≤ mn, such that $\sum_{i=1}^{mn} \mathrm{length}(P_i)$ is minimum.
This disjoint-path problem can be viewed as a specific case of the integer multiflow problem. Given a graph G = (V, E), a set of terminals {s1, t1, s2, t2, ..., sk, tk}, demands {di : i = 1, ..., k} and integer capacities on the edges, c : E → Z, the problem is to find, for each i, an (si, ti)-flow fi of value di. Note that even for undirected graphs the flow is directed. Let fi(e) be the amount of flow from si to ti that uses the edge e. A valid flow must obey the capacity constraint for each edge e ∈ E: $\sum_{i=1}^{k} f_i(e) \le c(e)$. To find edge-disjoint paths, we can set c(e) = 1 for all e ∈ E and then find an integer multiflow; a heuristic sketch is given below.
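The exact Wagner-Weihe routine and the approximation algorithm of [1], both discussed in the next paragraphs, are too long to reproduce; the greedy heuristic below merely conveys the flavour of routing under edge capacities. networkx, the greedy pair ordering, and the 4-connected grid (the model above also includes second-nearest neighbors) are all our simplifications.

# Greedy heuristic sketch of path allocation on an m x n grid graph.
# This is a stand-in for the exact/approximate algorithms cited in the
# paper ([15], [1]) and may fail on instances they would solve.
import networkx as nx

def allocate_paths(m, n, pairs, capacity=1):
    """pairs: (source, target) grid cells, one per robot. capacity=1
    seeks edge-disjoint paths; capacity=2 mirrors the relaxed online
    setting where an edge may be used twice (c(e) = 2)."""
    G = nx.grid_2d_graph(m, n)           # 4-connected; diagonals omitted
    for u, v in G.edges:
        G.edges[u, v]["used"] = 0
    paths = []
    for s, t in pairs:
        # subgraph of edges that still have residual capacity
        H = nx.Graph((u, v) for u, v in G.edges
                     if G.edges[u, v]["used"] < capacity)
        H.add_nodes_from(G.nodes)
        try:
            p = nx.shortest_path(H, s, t)
        except nx.NetworkXNoPath:
            paths.append(None)           # heuristic failure: no free route
            continue
        for u, v in zip(p, p[1:]):
            G.edges[u, v]["used"] += 1
        paths.append(p)
    return paths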

Fig. 2. Approximate solution to optimal field coverage using disjoint paths for a 4 × 4 grid. Here there are 3 entry points (1, 2, 5), and a maximum capacity violation of 2 units is allowed for the corresponding integer multiflow problem.

Solving the edge-disjoint path problem in grid graphs is NP-complete [7]. However, if the graph (V, E ∪ N) is Eulerian, the Okamura-Seymour theorem [9] gives a necessary and sufficient condition for solvability. The Wagner-Weihe algorithm [15] either finds the si–ti edge-disjoint paths or produces a proof of infeasibility in the form of a violated cut; its running time is polynomial in |V|. Several heuristics can be applied to minimize the total path length. After the initial set of paths is generated, covering the smaller unit grids can be achieved optimally using simple strategies such as a sequence of translations followed by rotational sweeps. For the online setting, in which an optimal redistribution must be regenerated in the presence of obstacles, blacklisted objects or machinery failures, we alter the above multiflow problem to allow an edge to be used up to two times (c(e) = 2) by different robots. We achieve polynomial-time performance using the poly-logarithmic approximation algorithm proposed in [1].

This strategy ensures that the new solution remains within a constant factor of the optimal. It is used to relocate the robots upon detection of the first part, when the team must deterministically converge to a particular location. It is also used when the self-organization stage of the recognition process completes, successfully or otherwise, and the exploration stage begins again. This approximate strategy provides online, near-optimal path allocations for the robots to continue exploring the field while maintaining maximum visual coverage and accounting for already known obstacles and blacklisted objects on the field.

B. The Object Recognition Process

The goal of the self-organization stage is to identify enough object parts at a location to cross a certain threshold in the confidence of detection, and to confirm the detection via pose estimation. The robot team can be viewed as a finite state machine, as shown in figure 3. The detection of the first part allows us to predict, using the radial pdf graph, the circles on which the other parts are expected to lie. The detection of a second part allows us to utilize the directional pdf tables and predict a more focussed search area for the remaining parts. Knowing the robot positions and the expected part positions on the field, one can allocate parts to robots such that the cumulative distance travelled by the team is minimized (see the sketch below). The processes of blacklisting (section IV-D) and confidence level updating (in accordance with the object model) occur at every stage of the process.
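Minimizing the cumulative distance of such a part-to-robot allocation is an instance of the linear assignment problem. The paper does not name a solver; the Hungarian-method routine from SciPy below is our illustration, assuming straight-line travel distances.

# Sketch of global part allocation: assign each free robot to a
# predicted part location so that total travel distance is minimized.
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_parts(robot_xy, part_xy):
    """robot_xy: (R, 2) robot positions; part_xy: (P, 2) expected part
    positions from the radial/directional pdfs. Returns (robot, part)
    index pairs of a minimum-cost assignment."""
    cost = np.linalg.norm(robot_xy[:, None, :] - part_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian method
    return list(zip(rows, cols))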

C. Pose Estimation

Once the confidence level of detection crosses the defined threshold, the team performs a pose estimation of the object. We use the following method for estimating the pose of the detected object using inner-camera invariants [16], [12], with no knowledge of the internal parameters of the camera. The method uses the common projective camera model [3]:
$$\lambda m = P M = A[R \mid t]M \qquad (1)$$

Here, M is a 3D world point and m is the corresponding image point; A is the matrix of the camera internals, and R and t are the rotation matrix and translation vector that define the camera's external parameters. Let r1, r2, r3 denote the rows of [R | t]. Knowing three 3D world points Mp = (Xp, Yp, Zp, 1)ᵀ, p ∈ {i, j, k}, and their images mp = (up, vp, 1)ᵀ, p ∈ {i, j, k}, we can eliminate the camera internals as follows:

$$J_{ijk} = \frac{u_i - u_j}{u_i - u_k} = \frac{\dfrac{r_1 M_i}{r_3 M_i} - \dfrac{r_1 M_j}{r_3 M_j}}{\dfrac{r_1 M_i}{r_3 M_i} - \dfrac{r_1 M_k}{r_3 M_k}} \qquad (2)$$

$$K_{ijk} = \frac{v_i - v_j}{v_i - v_k} = \frac{\dfrac{r_2 M_i}{r_3 M_i} - \dfrac{r_2 M_j}{r_3 M_j}}{\dfrac{r_2 M_i}{r_3 M_i} - \dfrac{r_2 M_k}{r_3 M_k}} \qquad (3)$$

We use these image measurements Jijk and Kijk, called inner-camera invariants, to estimate the pose of the object. Suppose we know the Euclidean coordinates (Xi, Yi, Zi, 1)ᵀ of five points (in general position) in the world coordinate system. Six independent inner-camera invariant measurements give us six equations in six unknowns: three rotations and three translations. We solve these equations to obtain the pose, using a suitable nonlinear optimization routine; a sketch is given below. For a system with four degrees of freedom such as ours, we adopt the same procedure with four independent inner-camera invariant measurements giving four equations.
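As a sketch of one such routine, the pose can be recovered by least squares on the residuals between measured and predicted invariants. The Euler-angle parametrization, the SciPy solver and all names below are our assumptions; [12], [16] do not prescribe them.

# Sketch of pose recovery from inner-camera invariants (eqs. 2-3) by
# nonlinear least squares. Parametrization and solver are illustrative.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def predicted_invariants(pose, M, triples):
    """pose = (rx, ry, rz, tx, ty, tz); M: (N, 4) homogeneous world points;
    triples: index triples (i, j, k) used to form the invariants."""
    R = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    Rt = np.hstack([R, np.reshape(pose[3:], (3, 1))])   # [R | t]
    proj = Rt @ M.T                        # rows are r1.M, r2.M, r3.M
    x, y = proj[0] / proj[2], proj[1] / proj[2]
    vals = []
    for i, j, k in triples:
        vals.append((x[i] - x[j]) / (x[i] - x[k]))      # J_ijk
        vals.append((y[i] - y[j]) / (y[i] - y[k]))      # K_ijk
    return np.array(vals)

def estimate_pose(M, measured, triples, pose0=np.zeros(6)):
    """measured: the J/K values computed from image coordinates of the
    same triples; solves the six equations in six unknowns of the text."""
    fit = least_squares(
        lambda p: predicted_invariants(p, M, triples) - measured, pose0)
    return fit.x          # (rx, ry, rz, tx, ty, tz)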

Fig. 3. The finite state machine representation of the robot team. The search-for-first-part state is followed by a special search-for-second-part state, because one part is insufficient for utilizing the complete object model. The various states correspond to the system design aspects described throughout this paper.

D. Blacklisting and Termination

With each attempt at detecting a part within the specified search area, we attach a timeout to account for the non-existence of the part or the robot's inability to detect it due to machinery failure. Upon a timeout, the robot is deemed free and moves on to attempt the detection of another part. The part that was being worked on enters the blacklist of that particular robot, but remains open to other robots for investigation. If a certain number of robots attempt and fail to detect a part, it is entered into a global blacklist and no robot will attempt to detect it in the future; a simple bookkeeping sketch is given at the end of this section. If enough parts make their way into the global blacklist that detection of the remaining parts cannot yield adequate confidence in recognition, the object itself is deemed not present. The team then begins the exploration stage again, having marked this area of the field as an obstacle. This ensures that the team does not return and reattempt the detection of the object at a previously failed location. When an instance of an object is recognized on the field, the team goes back into the exploration stage, marking the area of this object as completed. The system terminates when the required number of objects has been detected by the team or the entire field has been covered by the cameras.
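The local/global bookkeeping just described fits in a few lines; the threshold K below stands in for the unspecified "certain number of robots" in the text.

# Sketch of the blacklisting bookkeeping. K is a placeholder for the
# number of distinct robots that must time out on a part before it is
# globally blacklisted.
from collections import defaultdict

class Blacklist:
    def __init__(self, K=3):
        self.K = K
        self.local = defaultdict(set)      # robot -> parts it gave up on
        self.failed_by = defaultdict(set)  # part -> robots that timed out
        self.global_list = set()

    def report_timeout(self, robot, part):
        self.local[robot].add(part)
        self.failed_by[part].add(robot)
        if len(self.failed_by[part]) >= self.K:
            self.global_list.add(part)     # no robot retries this part

    def may_attempt(self, robot, part):
        return part not in self.global_list and part not in self.local[robot]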

V. SYSTEM RUNS AND RESULTS

Figure 4 shows an assorted collection of images on which our part classifiers were run. Only one classifier was run per image to maintain the clarity of the resulting image.

Fig. 4. Some results of car part detections using the part classifiers trained using OpenCV. The collage shows three images of tyre detections, two of headlight detections and three of side door detections.

The simulation of our system design was done using Gazebo [11], an open-source 3D robot simulator, in conjunction with Player [11], an open-source robot control server. We chose Gazebo and Player because code written within their framework is portable to real robots. Gazebo is a multi-robot simulator capable of simulating a population of robots, sensors and objects in a three-dimensional world; it is normally used with the Player device server, which provides simulated data in place of real sensor data. We created a simulated world in Gazebo, as shown in figure 5, with a texture-mapped truck, four robots and bounding walls to limit the field size. We have not assumed the presence of obstacles on the field in our simulations so far. Our system model was coded in C and run via Player to make the robots behave accordingly. Each robot was provided with a camera, a laser for ranging measurements and an object model of a generic four-wheeled vehicle. The classifiers for the parts of the truck are based on [14], [6] and were trained using OpenCV [10]. Figures 5 and 6 show two of the successful system runs made using our system design. Text output of the system design code is also shown and demonstrates the effectiveness of our global part allocation strategy. These two figures show only two of the numerous successful simulation runs performed under different object poses and different initial robot team configurations. The field coverage analysis given in section IV-A assures us of maximum area coverage, and our system design ensures the detection and recognition of the object on the field.

VI. CONCLUSIONS AND FUTURE WORK

We have presented here a novel system design for a distributed object-recognition and pose-estimation system, and have provided a detailed analysis of the path allocation and field coverage problems for the robot team. We assume no knowledge of the internal parameters of the robot cameras, perform no camera calibration procedures, and do not require that the entire object fit into a single camera view. Initial simulations using the standard open-source simulator Gazebo have shown the accuracy and reliability of our system design and the effectiveness of our proposed object model and of the pose-estimation technique that uses it. The effectiveness of the pose-estimation technique using inner-camera invariants has already been shown in [12].

Immediate future plans revolve around implementing this system on a real robot team, which is made easier by the fact that Gazebo and Player code is portable to real robots. Localization errors and the reliability of the odometric devices significantly affect the performance of the system; this dependence is yet to be analyzed and minimized. We are also planning more complex simulations with obstacles present on the field, and the implementation of the pose-estimation technique using inner-camera invariants in our code. Although our field coverage analysis provides an effective solution for a rectangular field, more generic solutions that are invariant to the shape of the field can be investigated. Work also remains to model and simulate the wireless communication between the robots during the entire process.

REFERENCES

[1] C. Chekuri, S. Khanna, and F. B. Shepherd. Edge-disjoint paths in planar graphs. In FOCS '04: Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 71–80, Washington, DC, USA, 2004. IEEE Computer Society.
[2] Sven J. Dickinson, Henrik I. Christensen, John K. Tsotsos, and Göran Olofsson. Active object recognition integrating attention and viewpoint control. Comput. Vis. Image Underst., 67(3):239–260, 1997.
[3] Olivier Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, MA, USA, 1993.
[4] Keith D. Gremban and Katsushi Ikeuchi. Planning multiple observations for object recognition. Int. J. Comput. Vision, 12(2-3):137–172, 1994.
[5] Omar Javed, Zeeshan Rasheed, Orkun Alatas, and Mubarak Shah. Knight-M: A real-time surveillance system for multiple overlapping and non-overlapping cameras. In Proceedings of the Fourth International Conference on Multimedia and Expo (ICME 2003), 2003.
[6] Rainer Lienhart and Jochen Maydt. An extended set of Haar-like features for rapid object detection. In Proceedings of the International Conference on Image Processing, pages 900–903, Rochester, USA, September 2002. IEEE.
[7] Dániel Marx. Eulerian disjoint paths problem in grid graphs is NP-complete. Discrete Applied Mathematics, 143(1-3):336–341, 2004.
[8] J. Maver and R. Bajcsy. Occlusions as a guide for planning the next view. IEEE Trans. Pattern Anal. Mach. Intell., 15(5):417–433, 1993.
[9] H. Okamura and P. D. Seymour. Multicommodity flows in planar graphs. Journal of Combinatorial Theory, Series B, 31:75–81, 1981.
[10] OpenCV. http://sourceforge.net/projects/opencvlibrary, 2005.
[11] Player/Stage. http://playerstage.sourceforge.net, 2005.
[12] Sumantra Dutta Roy, Santanu Chaudhury, and Subhashis Banerjee. Recognizing large 3-D objects through next view planning using an uncalibrated camera. In ICCV, pages 276–281, 2001.
[13] Sumantra Dutta Roy, Santanu Chaudhury, and Subhashis Banerjee. Active recognition through next view planning: a survey. Pattern Recognition, 37(3):429–446, 2004.
[14] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR (1), pages 511–518, 2001.
[15] Dorothea Wagner and Karsten Weihe. A linear-time algorithm for edge-disjoint paths in planar graphs. In ESA '93: Proceedings of the First Annual European Symposium on Algorithms, pages 384–395, London, UK, 1993. Springer-Verlag.
[16] Michael Werman, MaoLin Qiu, Subhashis Banerjee, and Sumantra Dutta Roy. Robot localization using uncalibrated camera invariants. In CVPR, pages 2353–2359, 1999.

Fig. 5. The figure shows a successful simulation run at its completion. The green area is the field with a truck as the object being detected. Two world-views are shown and the three smaller images to the right show the object parts that are detected. The text to the top left is part of the console output of the system code.

Fig. 6. Another successful simulation run, with a different set of object parts detected because the object is placed at a different position on the field. This difference in position and pose leads to a different global part allocation, but the object is recognized nonetheless.
