Decentralized Position and Attitude Estimation Using Angle-of-Arrival Measurements

Gabrielle D. Vukasin, Jason H. Rife, Tufts University

BIOGRAPHIES

Jason H. Rife received his B.S. from Cornell University, Ithaca, NY, and his M.S. and Ph.D. from Stanford University, Stanford, CA. He is currently an Associate Professor of Mechanical Engineering and an Associate Dean of Undergraduate Education (Engineering) at Tufts University in Medford, MA. At Tufts, he directs the Automation Safety and Robotics Laboratory (ASAR), which applies theory and experiment to characterize the integrity of autonomous vehicle systems.

Gabrielle D. Vukasin received her B.A. from Williams College, Williamstown, MA, and is a candidate in the Mechanical Engineering Master's program at Tufts University (expected May 2016). She is a member of the ASAR Laboratory.

ABSTRACT

This paper describes the development of a decentralized position estimation algorithm that computes the relative position and orientation of vehicles by combining noisy angle-of-arrival measurements acquired and broadcast in a network of collaborating vehicles. We envision that one application might be as a backup to GPS. The novelty of this algorithm is that it allows aircraft to estimate their position and orientation relative to each other without requiring vehicles to sense their orientation relative to the Earth.

INTRODUCTION

We present a decentralized algorithm for members of a multi-vehicle team to estimate their relative position and orientation based on angle-of-arrival (AoA) measurements, which are assumed to be acquired from communication signals transmitted within the team. One potential application of this capability is in civil aviation, as a backup for GPS. GPS is increasingly critical to the operation of the air transport system.
GPS is already used for en route navigation and non-precision approach [1], and the FAA has mandated that nearly all conventional aircraft be equipped with Automatic Dependent Surveillance - Broadcast (ADS-B) by 2020, a technology that will communicate an aircraft's GPS position to other aircraft in the vicinity, effectively providing radar-like surveillance to the flight deck [2]. Developments in the Ground Based Augmentation System (GBAS) are poised to enable automated aircraft landing with GPS in the near future [3]. In the longer term, it is envisioned that the next-generation air traffic control system will implement 4D trajectories (position and time) that take an aircraft from the departure gate all the way to the arrival gate using GPS [4-6]. Taken together, these GPS-based capabilities lay the groundwork for a massive increase in the capacity of the national airspace without sacrificing safety.

Despite its potential, this vision suffers from a critical limitation: vulnerability to the loss of GPS [7]. Recent observations of unintentional GPS jamming at Newark International by so-called "personal privacy devices" underscore this risk [8]. Several alternatives to GPS are being considered in aviation, with a particular emphasis on enhancement of legacy systems. Despite being decommissioned in the US in 2010, Loran is emerging as an important potential backup navigation system [9, 10]. Researchers have also considered enhancing VOR and DME systems to increase positioning accuracy for airborne users [11, 12]. Another recently proposed alternative would invert ADS-B, using the ADS-B communication signal for multilateration during a GPS failure [13].

This paper builds on the last concept by generalizing the notion that an existing communication capability might be leveraged as a navigation alternative during a GPS outage. We envision a concept of operations in which aircraft would acquire mutual bearing measurements by inferring angle-of-arrival (AoA) from peer-to-peer communication signals. Distributed estimation would be used to define aircraft position and orientation relative to a common coordinate system. The set of three aircraft in Figure 1 is an example.
In the diagram, the red arrows indicate the heading of aircraft and the green arrows represent bearing vectors (which are an equivalent representation of AoA data). By measuring bearing angles, broadcasting them, and processing the collection of received measurements, it is possible to define a common coordinate system that describes the relative positions of all vehicles (subject to an unknown scale factor) even when the headings of the vehicles are arbitrary or unknown. If desired, sparsely located ground beacons might be introduced as anchors, allowing the common relative coordinate system to be related to Earth-fixed coordinates.

The result would be a combined communication, navigation, and surveillance (CNS) system, through which each aircraft would estimate its own location, estimate the location of nearby aircraft, and deliver communication data at the same time. The signals used to acquire AoA measurements might be ADS-B messages or other networked communication signals. Using communication signals to obtain angle-of-arrival measurements would enhance available communication bandwidth by avoiding the need to dedicate additional spectrum to navigation.

Figure 1: Example of Three Aircraft in a Common Coordinate System

The proposed concept of operations has particular advantages for an emerging world in which unmanned aerial systems (UAS) have become commonplace. Systems involving multiple UAS have a wide range of applications including surveillance, imaging, and package delivery [14]. One issue with UAS applications is that autonomous aircraft may fly at low altitudes, where ground-based navigation beacons (e.g. VOR/DME) provide poor coverage [15, 16]. A collaborative navigation capability could overcome this deficit by providing high-accuracy positioning wherever aircraft (manned or unmanned) operate in density.

The particular approach to collaborative navigation explored here is based on a distributed estimation algorithm dubbed the Alternating-Normals Iterative Method (ANIM), which invokes geometric constraints to determine relative positions of collaborating vehicles from noisy AoA measurements. The ANIM algorithm was first presented in [17]. This paper enhances the original ANIM algorithm to provide orientation information and not just positioning information. The orientation information could, in concept, be used to enhance performance of an Attitude and Heading Reference System (AHRS).

The paper commences with a description of the original ANIM algorithm. We will then describe modifications that enable estimation of orientation. Performance of the modified algorithm is assessed through simulation. A subsequent section describes preliminary work to verify the proposed algorithm in a lab-based setting. A brief summary concludes the paper.

OVERVIEW OF ORIGINAL ANIM ALGORITHM

This section describes the original ANIM algorithm [17], which processes AoA measurements to estimate node position. The term node refers to the transceiver unit on each vehicle. It is assumed that this transceiver unit broadcasts a unique identifier signal and obtains AoA measurements from similar such signals broadcast by collaborating vehicles. ANIM estimates the relative positions of nodes in a sensor network from initially noisy AoA measurements.

ANIM is a decentralized strategy for AoA-based relative positioning for a network of n nodes. It is decentralized in the sense that no single collaborator needs access to all measurements. Each node conducts processing opportunistically, using its own AoA measurements and any broadcast AoA measurements it happens to receive.

The original ANIM algorithm obtained positions from AoA measurements converted to unit pointing vectors in a global coordinate system, for example North-East-Down (NED), under the assumption that the aircraft-based coordinates could be related to world-fixed coordinates using an AHRS. In this paper, we will generalize the original ANIM algorithm so that it functions even if AHRS data are not available. This is the key difference between the method proposed in this paper (see next section) and the original method [17]. To provide context for the generalized ANIM algorithm, the remainder of this section will review the details of the original algorithm.
Assuming AoA measurements can be rotated to a global coordinate system, they can be expressed as pointing vectors in that coordinate system. The simplest case in which ANIM can be used is a network of three nodes, as represented in Figure 2. Each side of the figure shows three nodes labeled 1, 2, and 3. On the left in Figure 2a, the relative position vectors between each pair are labeled p_12, p_23, and p_31. Here the trailing subscripts refer to the measuring (receiving) node first and the detected (broadcasting) node second. The constraint that all three vectors between nodes should be coplanar can be used to reduce measurement noise. Moreover, the triangle geometry can be reconstructed (to a scale factor) using only angle measurements. If the scale factor can be determined by complementary information (as will be discussed later), then this triangle provides relative positioning information.

Figure 2: Three Nodes Define a Plane

As the number of nodes is increased, noise mitigation is enhanced. The sensor noise can be characterized in terms of a positioning error ε_ij. The unit pointing vector estimate û_ij can be written in terms of ε_ij and the true position vector p_ij:

    û_ij = (p_ij + ε_ij) / ||p_ij + ε_ij||    (1)

The hat is used to indicate a noisy estimate.

As mentioned above, ANIM provides a processing benefit in the presence of noise, when the û_ij are not coplanar. ANIM effectively filters the measurements to ensure geometric consistency. The filtering is performed in two steps. First, a vector n̂_ijk is computed as an estimate of the normal vector to each plane containing three nodes. Second, the normal vectors are used to improve the estimated unit pointing vectors û_ij. Both steps are performed by each node, incorporating all edge data available to it (both from direct measurements and broadcast data).

To implement the first step (estimating the normal vector to each set of three nodes), a least-squares method is used. To do this, all the measurements between nodes in a triplet are compiled as row vectors of a matrix A_ijk. Typically, for the triplet i, j, k, this matrix could include as many as six measurements: û_ij, û_ji, û_jk, û_kj, û_ki, and û_ik. For the case in which all six measurements are available, A_ijk would be:

    A_ijk = [û_ij û_ji û_jk û_kj û_ki û_ik]^T    (2)

We would like to find the estimate n̂_ijk that is the most orthogonal to all of the unit pointing vectors in the plane. Another way to say this is that we want the n̂_ijk that minimizes its dot product with each of the unit pointing vectors, because the following is ideally true:

    u_ij · n_ijk = 0    (3)

Considering all of the measurements around a triangle, and minimizing the residual errors of (3) applied to all edges around the triangle, we can obtain an estimate of the normal vector as follows:

    n̂_ijk = argmin_n (n^T A_ijk^T A_ijk n)    (4)

To solve for n̂_ijk in (4), we use least-squares minimization, implemented efficiently using a singular value decomposition (SVD) [17].

Now that we have found n̂_ijk for each triangle observed by a node, the second step of ANIM is to update each unit pointing vector estimate. To do this, a matrix B is assembled of all of the normal vectors from triangles containing a particular pointing vector. Consider the update to the pointing vector from node i to node j. For node i to update û_ij, it considers all triangles containing nodes i, j and any third node k. The nodes k ∈ {k_1, k_2, k_3, ..., k_S} include all S nodes observed by node i (other than node j). Assembling these normal vectors as rows of a matrix gives:

    B_ij = [n̂_ijk1 n̂_ijk2 n̂_ijk3 ... n̂_ijkS]^T    (5)

Based on the identity (3), we want to find the û_ij that minimizes its dot product with each of the normal vectors in B_ij:

    û_ij = argmin_u (u^T B_ij^T B_ij u)    (6)
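As a concrete illustration, the two least-squares steps in (4) and (6) can each be implemented with a singular value decomposition, since each asks for the unit vector closest to the null space of a stacked measurement matrix. The sketch below is ours, not the authors' code; the function names are hypothetical, and the sign-disambiguation step is an implementation choice (an SVD returns each singular vector only up to sign).

```python
import numpy as np

def estimate_normal(A):
    """Eq. (4): unit vector n minimizing the residuals of eq. (3) over one
    triangle. A is (m, 3); its rows are the triangle's unit pointing vectors.
    The minimizer is the right singular vector of A with smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]  # rows of Vt are orthonormal; last row pairs with smallest sigma

def update_pointing(B, u_prev):
    """Eq. (6): unit vector u most orthogonal to every triangle normal in B.
    B is (S, 3); its rows are the normal estimates n_ijk from eq. (4)."""
    _, _, Vt = np.linalg.svd(B)
    u_new = Vt[-1]
    # Resolve the SVD sign ambiguity using the previous pointing estimate.
    return u_new if np.dot(u_new, u_prev) >= 0.0 else -u_new
```

In a noiseless triangle, the recovered normal is exactly orthogonal to all six edge measurements.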

Mirroring the first step, we use SVD in the same way as before to get the optimal value of û_ij. This step is repeated for all of the unit pointing vectors initially known by node i. After this update step, the new values of the estimated û_ij are rebroadcast, so that they are shared with collaborators. All collaborators can then solve (4) and (6) again using the rebroadcast û_ij. The process repeats to convergence. A proof of convergence is provided in [17].

After iterating to convergence, all nodes share a common set of pointing vectors, with inconsistencies removed. In other words, all converged pointing vectors are constrained so that any triangle connecting three nodes has coplanar edges (the p_ij's in Figure 2a). The next step is to estimate distances between nodes. This step can be accomplished by solving simultaneously, at each node, the summation of vectors around all complete triangles. This is to say that

    p_ij + p_jk + p_ki = 0    (7)

Therefore, if the distance between any node pair is labeled d_ij, we can write

    d_ij û_ij + d_jk û_jk + d_ki û_ki = 0    (8)

for any triplet of coplanar pointing vectors. Solving (8) simultaneously for all triangles seen by a node gives the unknown distances subject to a scale factor C, since the set of equations (8) for all triangles is homogeneous. To reflect this scale factor, which applies to the entire network of triangles, it is helpful to rewrite (8) in terms of the normalized distances d′_ij = d_ij / C:

    C d′_ij û_ij + C d′_jk û_jk + C d′_ki û_ki = 0    (9)
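The homogeneous system (8)-(9) can be solved the same way as the earlier steps: stacking the triangle-closure equations into one matrix and extracting its null space with an SVD yields the internode distances up to the common scale factor C. The sketch below is our own minimal illustration (hypothetical function name), not the authors' implementation.

```python
import numpy as np

def solve_scaled_distances(triangles, u, edges):
    """Solve eq. (8) for all triangles simultaneously.
    triangles: list of (i, j, k) node triplets.
    u: dict mapping a directed edge (i, j) to its converged unit pointing vector.
    edges: ordered list of edges whose distances are unknown.
    Returns normalized distances d' (eq. 9), i.e. distances up to scale C."""
    col = {e: idx for idx, e in enumerate(edges)}
    M = np.zeros((3 * len(triangles), len(edges)))
    for t, (i, j, k) in enumerate(triangles):
        # Each triangle contributes d_ij*u_ij + d_jk*u_jk + d_ki*u_ki = 0.
        for a, b in [(i, j), (j, k), (k, i)]:
            M[3 * t:3 * t + 3, col[(a, b)]] = u[(a, b)]
    # M d = 0 is homogeneous: its null space gives the distances up to C.
    _, _, Vt = np.linalg.svd(M)
    d = Vt[-1]
    d = d * np.sign(d[np.argmax(np.abs(d))])  # flip sign so distances are positive
    return d / np.linalg.norm(d)
```

For a single noiseless triangle, the returned vector is proportional to the true edge lengths, so the ratios between distances are recovered exactly.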

In theory, the scale factor C can be determined if at least one distance measurement is available. Alternatively, if the positions of two nodes are known (e.g. if the network is broad enough to span two or more ground stations at surveyed locations), C can be determined. The final result is that each node has an estimate of the relative position of the rest of the nodes using AoA data for a single time step.

ESTIMATING ORIENTATION WITH ROTATIONAL ANIM

Figure 3: Rotational ANIM Processing by Each Node (flowchart: Acquire Measurements → Broadcast Pointing Vectors → Receive Pointing Vectors → Determine Orientation → Adjust Pointing Vectors for Orientation → Apply Original ANIM (4) and (6) → Converged? If no, repeat; if yes, Solve (8) to Obtain Scaled Relative Position Vectors)

We will now take away the assumption that the orientation of each node is measured. Recall that the term node refers to the transceiver, an object that can be associated with a point (e.g. measurement location) and a rigid basis indicating orientation. Together a point and an associated basis are sometimes called a rigid frame [18]. In the original ANIM algorithm it was assumed that the node orientation was registered to global coordinates using external sensing (e.g. an Attitude Heading Reference System or AHRS). In this paper we introduce Rotational ANIM, a modification of the original ANIM algorithm that enables nodes to estimate their orientation relative to each other, even when global orientation is unknown.

The flow chart in Figure 3 describes the steps required to evaluate Rotational ANIM. As shown in the figure, the first step after each node acquires AoA measurements is to broadcast them to other nodes. Each node then individually estimates the relative orientation of the other nodes. Each node can only estimate the relative orientation of other nodes of which it has taken a measurement and from which it has received broadcast measurements. After relative orientation is determined, the rest of ANIM can continue as described in the previous section.

Our process for estimating relative orientation involves the rotation matrices between the rigid frames associated with each pair of nodes. We estimate the rotation matrices by first aligning two pointing vectors that we know should be collinear, for example û_ij and û_ji. Subsequently, we rigidly rotate the set of pointing vectors at one node (node j) about the axis defined by the collinear pointing vectors to align the remaining measurements.

In order to understand the process that aligns the remaining vectors at each node, it is useful to consider a simple example involving the orientation between two nodes (nodes i and j) that each observe a common third node k. Figure 4 represents this scenario. The red arrows represent AoA measurements taken by nodes i and j. In graphical terms, the vector û_ik should be rotated about the axis between i and j until it falls into the same plane as û_jk.

The following is a quantitative description of the steps executed by node i to estimate the relative orientation between nodes i and j. Use the labels A and B to indicate the initial coordinate systems for the measurements obtained from nodes i and j, respectively. Initial coordinate systems are representative of body-fixed coordinates for each node. This means that the AoA measurements, or unit pointing vectors, û_ij and û_ik taken by node i are initially expressed in coordinate frame A, etc.
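For concreteness, the orientation-alignment procedure can be sketched as below. This is our own illustration rather than the authors' code: it builds the intermediate bases of equations (10) and (11) below, accumulates the sums corresponding to (21) and (22), recovers the simple rotation angle with a quadrant-safe arctangent in place of (20), and chains the rotations as in (23). Function names, and the sign and handedness conventions, are implementation choices and may differ from the paper's.

```python
import numpy as np

def _basis_from_axis(e3):
    """Right-handed orthonormal basis whose third column is e3 (cf. eqs. 10-11)."""
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, e3)) > 0.9:     # avoid a helper nearly parallel to e3
        helper = np.array([0.0, 1.0, 0.0])
    r1 = np.cross(helper, e3)
    r1 /= np.linalg.norm(r1)
    r2 = np.cross(e3, r1)                 # r1 x r2 = e3 (right-hand rule)
    return np.column_stack([r1, r2, e3])

def relative_rotation(u_ij, u_ji, u_ik_list, u_jk_list):
    """Estimate the rotation taking frame-B coordinates to frame-A coordinates
    from AoA unit vectors. u_ik_list[m] and u_jk_list[m] must refer to the
    same common node k, expressed in frames A and B respectively."""
    R_Ap = _basis_from_axis(u_ij)         # A' basis in A coordinates (eq. 10)
    R_Bp = _basis_from_axis(-u_ji)        # B' basis in B coords; -u_ji points i -> j
    alpha = beta = 0.0
    for u_ik, u_jk in zip(u_ik_list, u_jk_list):
        a = R_Ap.T @ u_ik                 # measurement in A' coordinates
        b = R_Bp.T @ u_jk                 # measurement in B' coordinates
        # Perpendicular (first two) components, cf. eqs. (13)-(14), (21)-(22)
        alpha += a[0] * b[0] + a[1] * b[1]
        beta += a[1] * b[0] - a[0] * b[1]
    theta = np.arctan2(beta, alpha)       # quadrant-safe form of eq. (20)
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R_Ap @ Rz @ R_Bp.T             # chain of rotations, cf. eq. (23)
```

In a noiseless two-common-neighbor test, the sketch recovers the true relative rotation exactly, which is a useful sanity check before adding measurement noise.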

Figure 4: Three Nodes with Unknown Orientations

The first step in estimating orientation is for node i to rotate û_ji so that it is collinear and opposite in direction to û_ij. The idea is to define intermediate bases A′ and B′ where the third vector in the basis is in the û_ij or û_ji direction, respectively. To do this, it is sufficient to construct the rotation matrix

    R_{A→A′} = [r_a1 r_a2 û_ij]^T    (10)

and the matrix

    R_{B→B′} = [r_b1 r_b2 û_ji]^T    (11)

Here the vectors r_a1 and r_a2 are arbitrary vectors normal to each other and to û_ij, sequenced to obey the right-hand rule. Similarly, r_b1 and r_b2 are arbitrary vectors normal to each other and to û_ji, ordered again to obey the right-hand rule. Since the third axes are assumed equivalent for each of these bases, all that remains is to find the simple rotation angle about this common axis that fully aligns the bases.

In order to find the simple rotation angle, the second step of our rotation algorithm uses planes containing common measurements. Where nodes i and j see many other nodes k in common, it will not be possible to make û_ik and û_jk coplanar for all k, because of measurement noise. For this reason, we frame the process of finding the simple rotation angle as an optimization problem. The optimization problem is described by the following cost function J_ij:

    J_ij = -Σ_{k∈K} (A′)û_ik · R(θ)_{B′→A′} · (B′)û_jk    (12)

where K is the set of all nodes observed by both nodes i and j. Each vector is expressed in a coordinate system indicated by a leading superscript. The rotation matrix R(θ)_{B′→A′} relates the two coordinate systems A′ and B′. The simple rotation about the ij axis is labeled θ.

The simple rotation angle θ that minimizes the above cost function is a good estimate of the true rotation. To understand this, consider each term of the summation. Each term corresponds to one triangle, as illustrated in Figure 4. For any term of (12), the two measurements to k can be split into components parallel and perpendicular to the common axis between nodes i and j:

    (A′)û_ik = (A′)û_ik,∥ + (A′)û_ik,⊥    (13)

    (B′)û_jk = (B′)û_jk,∥ + (B′)û_jk,⊥    (14)

In the above equations, the subscript ∥ implies the parallel component of the vector (i.e. along the third basis vector in A′ or B′) and the subscript ⊥ implies the perpendicular component of the vector (i.e. in the plane of the first two basis vectors in A′ or B′). Figure 5 depicts the components of (B′)û_jk and (A′)û_ik that are parallel (blue) and perpendicular (red) to the common axis between nodes i and j.

Figure 5: Geometry of Nodes After Orientation Algorithm

The meaning of the cost function (12) becomes more clear after substituting equations (13) and (14). After substitution we obtain

    J_ij = -Σ_{k∈K} [((A′)û_ik,∥ + (A′)û_ik,⊥) · R(θ)_{B′→A′} · ((B′)û_jk,∥ + (B′)û_jk,⊥)]    (15)

where R(θ)_{B′→A′} can be expanded as

    R(θ)_{B′→A′} = [  cos θ   sin θ   0
                     -sin θ   cos θ   0
                        0       0     1 ]    (16)

After simplifying J_ij, the result is

    J_ij = -Σ_{k∈K} [((A′)û_ik,∥ · (B′)û_jk,∥) + (A′)û_ik,⊥ · (R(θ)_{B′→A′} · (B′)û_jk,⊥)]    (17)

Finally, we can simplify J_ij even further to

    J_ij = constant + f(θ)    (18)

We define f(θ) as

    f(θ) = -Σ_{k∈K} ||(A′)û_ik,⊥|| · ||(B′)û_jk,⊥|| · cos(φ_k - θ)    (19)

where φ_k is the angle between (A′)û_ik,⊥ and (B′)û_jk,⊥. In the absence of noise, all of the φ_k are equal, and the expression is minimized when θ is set equal to φ_k. In the more general case, the φ_k are noisy, and the estimate θ̂ is chosen to get the best alignment possible, meaning the alignment that minimizes the f(θ) term.

Understanding that minimizing (19) obtains a good estimate of the simple rotation angle θ, we note that the solution that minimizes (19) can be obtained by analytic means. The argument that minimizes (19) is θ̂:

    θ̂ = arctan(β / α)    (20)

where α and β are defined by

    α = Σ_{k∈K} (A′)û_ik,⊥ · (B′)û_jk,⊥    (21)

and

    β = Σ_{k∈K} ||(A′)û_ik,⊥ × (B′)û_jk,⊥||    (22)

To be precise, there are two solutions to the arctan function on the range [0, 2π]. One of these solutions minimizes the cost function and the other maximizes it. In practice, the minimizing solution can be identified by a simple comparison of J_ij for both cases.

With an estimate of θ, the relative orientation between nodes i and j can be expressed by a single rotation matrix R_{B→A}:

    R_{B→A} = R_{A′→A} · R(θ)_{B′→A′} · R_{B→B′}    (23)

This matrix can now be used to convert the measurements from node j (in coordinate system B) into a common coordinate system used by node i (coordinate system A). This process can be repeated to map all available measurement sets into the common coordinate system, so that the ANIM algorithm (as described in the previous section) can be applied. The result is that ANIM can be applied in the absence of global-orientation measurements (e.g. without requiring an AHRS).

SIMULATIONS OF ROTATIONAL ANIM

Rotational ANIM is a powerful extension of the original ANIM algorithm in that it does not require global orientation to be known; however, because Rotational ANIM estimates a greater number of degrees of freedom, its accuracy may be slightly lower than that of conventional ANIM. This section quantifies the change in positioning accuracy that occurs when using Rotational ANIM (with AoA measurements only) as compared to an ideal case when using the original ANIM algorithm (with AoA measurements and perfect knowledge of orientation).

In these simulations, we modeled a system of six nodes distributed in Euclidean space. Table 1 describes the true position of nodes 2 through 6 relative to node 1. We assumed each node communicated with all five other nodes. Also, each node sensed all five other nodes (for a total of 30 AoA measurements at a single time step).

Table 1: Node Positions Relative to Node 1

Node    2     3     4     5     6
X (m)   0     300   -400  -200  400
Y (m)   0     -100  400   -200  100
Z (m)   100   0     100   200   600

Each node was assigned an orientation relative to node 1. Table 2 lists orientation angle values (roll, pitch, and yaw of nodes 2 through 6 relative to node 1). For Rotational ANIM processing, these orientation angles were estimated from AoA measurements. For the original ANIM processing, these orientation angles were assumed known.

Table 2: Node Orientations Relative to Node 1

Node       2     3     4     5      6
Roll (°)   16.2  15.1  -9.8  3.9    -14.2
Pitch (°)  13.9  0.0   -0.8  13.7   8.9
Yaw (°)    18.8  14.0  3.6   -10.4  -9.8

To evaluate performance given random measurement noise ε_ij, a Monte Carlo simulation was performed. The simulations of the two algorithms (original ANIM and Rotational ANIM) used the same set of ε_ij values. For each case, results were compiled over 20 Monte Carlo trials. Error was simulated as proportional in magnitude to the distance between nodes. In other words, each Monte Carlo simulation obtained unit pointing vector measurements by adding a random perturbation to the true pointing vectors u_ij. Two noise levels were considered: 1% and 5% error (one sigma). These noise levels correspond

approximately to angular errors of 0.57° and 2.86°, respectively.

The results of the 20 Monte Carlo trials are expressed in Figure 6. The figure shows results for each of the two noise levels (1% top, 5% bottom). In the figure, the true location of each node is shown as an open blue circle. All locations are relative to node 1, so without loss of generality, node 1 is illustrated at the origin. The positions estimated by Rotational ANIM processing (without knowledge of relative orientation) are shown as green dots. The positions estimated by the original ANIM (with perfect orientation knowledge) are shown as red dots.

Figure 6: Results for 20 Monte Carlo Trials

In order to quantify error, we used the following metric that describes 3-D RMS error:

    Σ_ij = sqrt(σ_xx + σ_yy + σ_zz)    (24)

where σ_xx, σ_yy, and σ_zz are the diagonal elements of the covariance matrix of the x, y, and z position error. Because measurement errors are angular, estimated position errors are a strong function of the distance between nodes. For this reason, it is useful to scale the 3-D RMS error by the internode distance d_ij to define a relative error metric:

    Σ̃_ij = sqrt(σ_xx + σ_yy + σ_zz) / d_ij    (25)

Tables 3 and 4 list the position error and normalized position error for both algorithms and both noise levels (abbreviating the original ANIM as ORG and Rotational ANIM as ROT).

Table 3: Output Error Metrics for 1% Measurement Noise

ORG/ROT  Metric  Units  Node 2  Node 3  Node 4  Node 5  Node 6
ORG      Σ_ij    m      2.2     4.6     6.8     3.3     6.9
ROT      Σ_ij    m      2.9     6.2     11      5.7     14
ORG      Σ̃_ij    –      .42     .85     1.2     .35     1.0
ROT      Σ̃_ij    –      .72     1.3     2.9     1.1     4.3

Table 4: Output Error Metrics for 5% Measurement Noise

ORG/ROT  Metric  Units  Node 2  Node 3  Node 4  Node 5  Node 6
ORG      Σ_ij    m      11      19      35      19      33
ROT      Σ_ij    m      15      28      58      29      63
ORG      Σ̃_ij    –      9.1     12      33      12      25
ROT      Σ̃_ij    –      19      25      81      29      86
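The error metrics (24) and (25) are straightforward to compute from Monte Carlo position errors. A possible sketch of ours follows (the function name is hypothetical; we take the covariance about the sample mean, which is one plausible reading of the paper's definition):

```python
import numpy as np

def position_error_metrics(estimates, truth):
    """3-D RMS error (eq. 24) and distance-normalized error (eq. 25).
    estimates: (trials, 3) array of estimated positions of one node,
    relative to node 1; truth: (3,) true relative position of that node."""
    err = estimates - truth                 # per-trial position error vectors
    cov = np.cov(err, rowvar=False)         # 3x3 covariance of the errors
    sigma = np.sqrt(np.trace(cov))          # eq. (24): sqrt(s_xx + s_yy + s_zz)
    d = np.linalg.norm(truth)               # internode distance d_ij
    return sigma, sigma / d                 # eq. (25): normalized error
```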

There are two important ideas to draw from comparisons of the data. First, there appears to be a strong dependence of the position error on geometry. Second, there is a clear reduction in positioning accuracy introduced when orientation must be estimated simultaneously. By visual inspection, there is roughly a factor of two increase in the 3-D RMS error for the case in which orientation must be estimated (using Rotational ANIM).

As a final consideration, since ANIM algorithms are iterative, it is important to address convergence. The original ANIM algorithm has been proven to converge, with errors decreasing monotonically in each subsequent iteration [17]. Typically the original ANIM algorithm converges well within about 10 iterations. By contrast, the Rotational ANIM algorithm is not guaranteed to converge. In fact, empirical evidence indicates that the algorithm converges poorly when the relative angles between nodes are large (e.g. larger than 20°). For the relative orientations described in Table 2, reliable convergence was observed; however, errors did not converge monotonically.

An example is shown in Figure 7. The figure illustrates convergence in terms of the smallest singular value associated with each ANIM optimization step, solving either equation (4) for the normal, as shown in blue, or equation (6) for the pointing vector, as shown in red. Ideally the minimum singular value in each case would be zero (indicating perfect orthogonality of normal and pointing vectors). Singular values are shown as a function of iteration for each of 20 Monte Carlo trials. Only the results for Rotational ANIM are shown.

Figure 7: Convergence for Sample Set of Data

The illustration shows reasonable convergence properties for the scenario studied in this paper. In fact, the singular values associated with estimating the pointing vector converge monotonically toward zero. The behavior of the singular values for normal-vector estimation, by contrast, diverged initially in several cases before converging again toward zero. It would appear that the divergence is introduced by the rotational correction step (since such divergence is not observed for the original ANIM method). The divergence is a clear problem if the relative orientation angles are too large, as subsequent iterations are not guaranteed to transition from divergence to convergence in those cases. More work is needed to find a variant of Rotational ANIM for which convergence can be guaranteed.

HARDWARE ACQUISITION OF BEARING MEASUREMENTS

Because the ANIM algorithms are sensitive to assumed levels of measurement error, it is important to characterize measurement error using physical hardware. This section describes a prototype that will be used to characterize measurement error for a particular type of AoA sensing: optical sensing using a conventional camera. Because these experiments have not yet been completed, this section will place a particular emphasis on how the pointing vectors û_ij can be generated.

Our proof-of-concept system is an autonomous, decentralized, and homogeneous group of nodes collaborating to complete a task. We have built a few sensor-microprocessor pairs that serve as nodes. The magnified image in Figure 8 is an example of such a sensor-microprocessor pair.

Hardware System

There are four main components to each node: a transmitter, a receiver, a computer, and communication. A red LED acts as the indicator or transmitter for each node. Any node can create angle-of-arrival measurements by identifying the direction of the light arriving from the LED sources on other nodes. To keep the node's LED and camera close (within 3 cm), we put the LED on top.

The receiver is a Hue HD Camera [19]. The camera acts as a receiver because it captures the intensity of LED measurements arriving from different directions.

A myRIO microprocessor [20] acts as the computer of each node. It has the capability of controlling onboard vision processing using an FPGA. The image processing code is written in LabVIEW.

The communication subsystem is WiFi hardware that communicates using Transmission Control Protocol (TCP). Each node is a TCP server as well as a TCP client.

Pointing Vector Measurement

Generating unit pointing vectors begins with an image taken by the camera. The image is thresholded so that only the LEDs of the other nodes will show up in the processed image. The centroids of the LEDs are expressed in (x, y) pixel coordinates. The camera-fixed basis can be described by the orthonormal unit vectors c_x, c_y, and c_z. As seen in Figure 9, x and y are in the c_x and c_y directions. The focal length, denoted f, is the distance from the image plane to the camera focal center (in the c_z direction). We calibrate the distance f in units of pixels. To convert the (x, y) pixel location of each centroid to a unit pointing vector û, we will use

    û = (x/u_0) c_x + (y/u_0) c_y + (f/u_0) c_z    (26)

The scaling parameter u_0 is

    u_0 = sqrt(x² + y² + f²)    (27)

The geometry of (26) is illustrated in Figure 9. To calibrate the focal length f, in pixels, one approach is to place a camera at a known distance from an image plane and measure the distance of the plane from the camera focal point (z_cm) and the horizontal displacement of a point on the plane from the optical axis (x_cm). If the location of that point on the image is x_pixels, then

    f = x_pixels (z_cm / x_cm)    (28)
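Equations (26)-(28) amount to a few lines of arithmetic; the sketch below is our own illustration with hypothetical function names.

```python
import numpy as np

def pixel_to_pointing_vector(x, y, f):
    """Convert an LED centroid at pixel (x, y) to a unit pointing vector
    expressed in the camera-fixed basis (c_x, c_y, c_z), per eqs. (26)-(27)."""
    u0 = np.sqrt(x**2 + y**2 + f**2)   # eq. (27): scaling parameter
    return np.array([x / u0, y / u0, f / u0])

def calibrate_focal_length(x_pixels, x_cm, z_cm):
    """Eq. (28): focal length in pixels, from a point at known displacement
    x_cm on a plane at known distance z_cm that images at pixel x_pixels."""
    return x_pixels * z_cm / x_cm
```

The returned pointing vector has unit norm by construction, since u_0 is exactly the Euclidean length of (x, y, f).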

Figure 8: Top view of a three-node experimental system, with an enlarged image of one node.

Figure 9: Calculating a unit pointing vector from pixel position.

SUMMARY

We have presented a decentralized algorithm that estimates the relative position and orientation of vehicles by combining noisy angle-of-arrival measurements. Simulations were used to compare positioning accuracy with and without the availability of complementary orientation measurements (e.g., from an AHRS). When orientation information is not available, the position error of the algorithm increases by approximately a factor of two. A potential method of measuring angle-of-arrival is described, using a camera and LED pair as a transceiver.

FUTURE WORK

Algorithm Enhancements

The first step of Rotational ANIM assumes that û_ij is aligned with −û_ji. However, this is not quite true, because both measurements are noisy. Future work would relax the assumption that û_ij = −û_ji, perhaps using Wahba's solution [21]. This may be an important step in proving convergence of Rotational ANIM.

Experiment Implementation

In the near future we will use the multi-node hardware system to verify the performance of ANIM experimentally. Our initial experiments will be static indoor tests that obtain ground truth from a calibrated grid. In the longer term we will seek to move the experiments outdoors using a mobile robot system.

ACKNOWLEDGEMENT

This research was supported by the National Science Foundation under Grant No. 1100452.

REFERENCES

[1] Christopher J. Hegarty and Eric Chatre. Evolution of the global navigation satellite system (GNSS). Proceedings of the IEEE, 96(12):1902–1917, 2008.

[2] RTCA. Report No. RTCA/DO-242A, Minimum Aviation System Performance Standards for Automatic Dependent Surveillance Broadcast (ADS-B). RTCA, Inc., Washington, DC, June 2002.

[3] J. Rife and S. Pullen. Aviation Applications. Artech House, Norwood, MA, 2009.

[4] Sergio Ruiz, Miquel A. Piera, and Isabel Del Pozo. A medium term conflict detection and resolution system for terminal maneuvering area based on spatial data structures and 4D trajectories. Transportation Research Part C: Emerging Technologies, 26:396–417, 2013.

[5] Victor H. L. Cheng, Anthony D. Andre, and David C. Foyle. Information requirements for pilots to execute 4D trajectories on the airport surface. In Proceedings of the 9th AIAA Aviation Technology, Integration, and Operations Conference (ATIO), pages 21–23, 2009.

[6] Thomas Prevot, Vernol Battiste, Everett Palmer, and Stephen Shelden. Air traffic concept utilizing 4D trajectories and airborne separation assistance. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA-2003-5770, Austin, TX, USA, 2003.

[7] John A. Volpe Center. Vulnerability Assessment of the Transportation Infrastructure Relying on the Global Positioning System. Technical report, National Transportation Systems Center, August 2001.

[8] Sam Pullen, Grace Gao, Carmen Tedeschi, and John Warburton. The impact of uninformed RF interference on GBAS and potential mitigations. In Proceedings of the 2012 International Technical Meeting of the Institute of Navigation (ION ITM 2012), Newport Beach, CA, pages 780–789, 2012.

[9] Gregory W. Johnson, Peter F. Swaszek, Richard J. Hartnett, Ruslan Shalaev, and Mark Wiggins. An evaluation of eLoran as a backup to GPS. In Technologies for Homeland Security, 2007 IEEE Conference on, pages 95–100. IEEE, 2007.

[10] Resilient Navigation and Timing Foundation. It's in Law - Preserve Loran, Backup GPS. Web, January 2015.

[11] Demoz Gebre-Egziabher, C. O. Lee Boyce, J. David Powell, and Per Enge. An inexpensive DME-aided dead reckoning navigator. Navigation, 50(4):247–263, 2003.

[12] Kuangmin Li and Wouter Pelgrum. Enhanced DME carrier phase: Concepts, implementation, and flight-test results. Navigation, 60(3):209–220, 2013.

[13] S-L. Jheng and S-S. Jan. Sensitivity study of the wide area multilateration using ranging source of 1090 MHz ADS-B signals. In Proceedings of IEEE/ION GNSS+ 2015 (submitted).

[14] Suraj G. Gupta, Mangesh M. Ghonge, and P. M. Jawandhiya. Review of unmanned aircraft system (UAS). International Journal of Advanced Research in Computer Engineering & Technology, 2(4):1646–1658, April 2013.

[15] Albert Helfrick. Principles of Avionics. Avionics Communications Inc., Leesburg, VA, 7th edition, 2012.

[16] David Montoya. The ABCs of VORs. Flight Training, December 2000.

[17] Jason Rife. Design of a distributed localization algorithm to process angle-of-arrival measurements. In IEEE International Conference on Technologies for Practical Robotic Applications (TEPRA), May 2015.

[18] Paul Mitiguy. Advanced Dynamics and Motion Simulation. Prodigy Press, Inc., January 2015.

[19] HUE HD USB Camera. Description. Web, January 2015.

[20] NI myRIO-1900. User Guide and Specifications, National Instruments, August 2013.

[21] G. Wahba. A least squares estimate of satellite attitude. SIAM Review, 7(3):409, July 1965.
