Distributed Averaging with Quantized Communication over Dynamic Graphs

Mahmoud El Chamie1, Ji Liu2, Tamer Başar2, Behçet Açıkmeşe1

1 M. El Chamie and B. Açıkmeşe are with the University of Washington, Department of Aeronautics and Astronautics, Seattle, WA 98195, USA. Emails: [email protected], [email protected]. Work of these two authors was supported in part by the Defense Advanced Research Projects Agency (DARPA) Grant No. D14AP00084, and in part by the National Science Foundation (NSF) Grant No. CNS-1624328.
2 J. Liu and T. Başar are with the University of Illinois at Urbana-Champaign, IL 61801, USA. Emails: [email protected], [email protected]. Work of these two authors was supported in part by the U.S. Air Force Office of Scientific Research (AFOSR) MURI grant FA9550-10-1-0573, and in part by NSF under grant CCF 11-11342.

Abstract— Distributed algorithms are the key to enabling effective large-scale distributed control systems, which present many challenges due to the lack of access to global information. This paper studies the effects of quantized communication among agents on distributed averaging algorithms. The agents rely on peer-to-peer interactions to exchange and update their local agreement variables, which converge to the average of the initial values asymptotically. Our previous work showed that, when the underlying communication graph is static, with certain types of uniform quantization effects, an optimized distributed selection of the algorithm parameters results in convergence of the agreement variables to a small neighborhood around the actual average, independent of the network size. In this paper, we extend this result to time-varying (dynamic) graphs. We first show, using an example, that the deviation from the desired average caused by quantized communication on time-varying graphs can be arbitrarily large even when the graph is connected at every iteration. This is contrary to the case without quantized communication. We then present a large class of randomized time-varying communication graphs for which convergence to a small neighborhood around the average is guaranteed in the presence of quantized communication.

I. INTRODUCTION

Over the past few decades, there has been considerable interest in developing algorithms for distributing information among interactive subsystems via peer-to-peer interactions, each with limited sensing, computation, and communication capabilities. A large number of these algorithms aim at establishing consensus among agents in a distributed manner [1]–[9]. In a typical consensus-seeking process, the agents send their estimates of a certain variable to their neighbors. This local information is then used by each agent to compute a new estimate of the variable, driven by the mismatch between neighbors' estimates. A particular type of consensus process, which aims to reach a consensus at the average of all initial values (or some value close to that average), is called distributed averaging [10]–[18]. Most of the existing algorithms for precise distributed averaging require that agents be able to send and receive real values with infinite precision. In practice, such a communication requirement cannot be satisfied, since a realistic network can only transfer messages with limited
length due to constraints on the capacity of communication links. With such a constraint, when a real value is transmitted between agents, the value is truncated and only a quantized version is received. Due to quantization, the precise average cannot be achieved in general; however, a value close to the true average can be, which is the goal of quantized consensus. Quantized consensus has been widely studied and a number of new approaches have been proposed, including probabilistic algorithms [19]–[24], deterministic algorithms [25]–[30], and those with the additional constraint that the value held by each agent is an integer [31]–[33]. A brief review of these algorithms can be found in [34]. Recently, the performance of a class of deterministic linear distributed averaging algorithms has been thoroughly analyzed when the information exchange between agents is subject to uniform quantization [34]. For time-invariant, connected, and undirected communication graphs, it has been shown that these algorithms will, in finite time, either cause all agents to reach a quantized consensus, or cause the agents' variables to oscillate in a small neighborhood around the true average, where the neighborhood depends on the initial conditions but is independent of the network size. It is well known that linear distributed averaging algorithms guarantee the desired convergence when the communication graph changes over time, as long as the infinitely occurring graphs are jointly connected [13], [15]. A natural question is how this system behaves when it is subject to both quantized communication and time-varying (dynamic) graphs. This paper aims to answer this question. Specifically, we investigate the effects of quantized communication on linear distributed averaging algorithms when the graph changes over time, both deterministically and probabilistically.
We first present a motivating example of a deterministic dynamic graph that is connected at every iteration, yet on which the distributed averaging algorithm does not converge and the deviation from the desired average, caused by quantized communication, can be arbitrarily large. This provides evidence that quantized consensus cannot always be guaranteed on dynamic graphs. We then provide mild conditions on the dynamics of the graph under which convergence to a neighborhood of the average is achieved. In particular, if the “union graph” is connected and, in addition, each link in the union graph has a positive probability of appearing at every iteration, then quantized consensus is guaranteed. This graph model includes, but is not limited to, asynchronous gossiping (one link active at a time [12], [21], [23], [31]), synchronous gossiping [12], graphs with link failures [35], and standard static graphs.

II. PREVIOUS WORK ON STATIC GRAPHS

Consider a group of agents labeled 1 to n. Each agent i has control over a real-valued scalar quantity x_i ∈ R, called an agreement variable, whose value can be updated by the agent at discrete time epochs k = 1, 2, .... Agents can only communicate with their “neighbors”. Agent j is a neighbor of agent i if (i, j) ∈ E is an edge in a given undirected and connected n-vertex graph G = (V, E), where V = {1, 2, ..., n} is the set of vertices and E is the set of edges; an edge provides a communication possibility between the corresponding agents. Let N_i be the set of neighbors of node i in graph G. The graph is called static if it does not change at any iteration k, i.e., G(k) = G; otherwise it is called dynamic or time-varying. Initially, each agent i has a real number x_i(0). Let x_ave(k) = (1/n) Σ_{i∈V} x_i(k) be the average value of all agreement variables in the network at time k. The initial average (at k = 0) is denoted by x_ave.

In distributed averaging, the agents build on neighbor-to-neighbor communications (exchanging their agreement variables) to reach consensus on the average x_ave. Since communication channels have limited bandwidth, the estimates can undergo quantization effects (such as truncation) before being received by the neighboring agents. Quantization introduces nonlinear effects into the system. The quantization considered in this paper uses a truncation quantizer Q = ⌊·⌋, but the results extend to more general cases as well.1 The state equation of the system can be written in a distributed way for every i ∈ V as follows:

x_i(k+1) = x_i(k) + Σ_{j∈N_i} w_ij (⌊x_j(k)⌋ − ⌊x_i(k)⌋),   (1)

where w_ij is the weight selected by node i for its neighbor j, and it is a parameter of the distributed averaging algorithm.

Remark: For each agent i, (1) requires only Q(x_i(k)) to be sent to the neighbors at iteration k.
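To make the per-agent update concrete, here is a minimal Python sketch of one local step of (1); the function name and the dictionary interface are our own illustrative choices, not from the paper. Only the truncated values ⌊x_j(k)⌋ ever cross the channel.

```python
from math import floor

def agent_update(x_i, q_neighbors, weights):
    """One local step of eq. (1): x_i + sum_j w_ij * (floor(x_j) - floor(x_i)).

    q_neighbors[j] is the truncated value floor(x_j(k)) received from
    neighbor j over the quantized channel; weights[j] is w_ij.
    (Illustrative interface; names are ours.)
    """
    q_i = floor(x_i)
    return x_i + sum(w * (q_neighbors[j] - q_i) for j, w in weights.items())
```

For two agents with x_1 = 0.5, x_2 = 2.5 and symmetric weights w_12 = w_21 = 0.2, one step moves the pair toward each other while the sum x_1 + x_2 (hence the average) is preserved, since each agent adds exactly what the other subtracts.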
The number of bits of a quantized message sent by the agents depends on: 1) the maximum and minimum values in the network; and 2) the quantization stepsize ε of the generic quantizer Q^(ε)(x) = Q(x/ε). For a fixed communication data rate, ε would be adjusted to meet the specification. Without any loss of generality, we focus in this paper on the normalized stepsize ε = 1, but the results also hold for a generic stepsize after scaling. □

In matrix form, (1) reads

x(k+1) = W⌊x(k)⌋ + x(k) − ⌊x(k)⌋,   (2)

where W is called the weight matrix and is fixed for static graphs (i.e., it does not change with the iteration). The dynamics of the system depends on the weight matrix. In distributed averaging, it is important to consider weights that can be chosen locally and can guarantee the desired convergence properties. We impose the following assumption on W (designed given G), which can be satisfied in a distributed manner.

1 The algorithm relies on rounding non-integer values to the largest smaller integer. Using similar arguments as in [34], similar results can be obtained when rounding values to the closest integer.

Assumption 1. The weight matrix W satisfies two sets of properties (A and B) given as follows:
A.1) W is a symmetric doubly stochastic matrix: w_ij = w_ji ≥ 0 for all i, j ∈ V, and Σ_i w_ij = Σ_j w_ij = 1;
A.2) network connectivity constraints, i.e., if (i, j) ∉ E, then w_ij = 0;
B.1) dominant diagonal entries of W, i.e., w_ii > 1/2 for all i ∈ V;
B.2) for any link (i, j) ∈ E, we have w_ij ∈ Q+, where Q+ is the set of rational numbers in the interval (0, 1).

The first set of assumptions (A.1 and A.2) is standard for distributed averaging, and the second set (B.1 and B.2) is specifically needed for the convergence results of quantized systems. The choice of rational weights is not restrictive, because any practical implementation satisfies this property intrinsically. Relaxing Assumption B.1 can cause large oscillations in the system (see [34, Appendix A]). We now state a relevant result for static graphs proved in our earlier work [34, Corollary to Theorem 1]:

Theorem 1 ([34]). Consider the quantized system (2) such that W satisfies Assumption 1. Suppose that G(k) = G for all k (i.e., the graph is static and connected). Then, for any initial value x(0), the average is conserved, i.e.,

x_ave(k) = x_ave for k = 1, 2, ...,   (3)

and there exists a finite time iteration k_0 such that for k ≥ k_0,

|x_i(k) − x_j(k)| < 1 for all i, j ∈ V.   (4)
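Theorem 1 can be checked empirically with a short simulation of the matrix update (2). The topology (an 8-node ring), the weights (w_ij = 0.2 per edge, so w_ii = 0.6 > 1/2, satisfying Assumption 1), and the horizon are our illustrative choices, not values from the paper.

```python
import numpy as np

def quantized_step(x, W):
    """Matrix-form update (2): x(k+1) = W*floor(x(k)) + x(k) - floor(x(k))."""
    f = np.floor(x)
    return W @ f + x - f

# Ring graph on n nodes: w_ij = 0.2 on each edge, w_ii = 0.6 > 1/2,
# so W is symmetric doubly stochastic with rational weights (Assumption 1).
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = 0.2
    W[i, i] = 0.6

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, n)
x_ave = float(x.mean())
for _ in range(2000):
    x = quantized_step(x, W)

drift = abs(float(x.mean()) - x_ave)   # eq. (3): the average is conserved
spread = float(x.max() - x.min())      # eq. (4): below 1 after a finite time
```

Because W is doubly stochastic, 1ᵀx(k+1) = 1ᵀW⌊x⌋ + 1ᵀx − 1ᵀ⌊x⌋ = 1ᵀx, so the average is conserved at every step regardless of the quantization.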

In the sequel, we extend this result to dynamic graphs. It turns out that the theorem cannot be generalized completely, due to a counterexample shortly to be presented. Nevertheless, we will characterize a large class of dynamic graphs for which the statement of the theorem remains valid.

III. DYNAMIC GRAPHS

Let G = (V, E) be an undirected and connected union graph. We consider a dynamic graph G(k) = (V, E(k)) such that at each time k it is a subgraph of the union graph, with the same set of vertices V and with a set of edges E(k) ⊆ E which may change over time. G is referred to as the union graph because we consider G = ∪_{k≥0} G(k) with probability one. With a slight abuse of notation, the expression (i, j) ∈ G(k) denotes that the link (i, j) belongs to the set of edges of the graph at iteration k, i.e., (i, j) ∈ E(k). Let {G(0 : k)} ≜ {G(0), ..., G(k)} be the chain of graphs up to iteration k. Let (Ω, F, {F_k}_{k≥0}, P) be a probability space equipped with the filtration {F_k}_{k≥0}; thus {G(0 : k)} is a random process adapted to the natural filtration {F_k}_{k≥0}. Let E[·] be the expectation operator. An expression conditioned on knowing F_k signifies that the expression holds for any given realization of graphs up to iteration k. Let N_i(k) be the set of neighbors of node i in graph G(k). The system equation (2) on dynamic graphs becomes

x(k+1) = W(k)⌊x(k)⌋ + x(k) − ⌊x(k)⌋,   (5)

where the weight matrix W(k) changes over time, depending on the time-varying graph G(k), in that if a link does not appear in G(k), the corresponding weight must be 0. On dynamic graphs, we assume that W(k) satisfies Assumption 1 at every iteration k. One possible construction of the weight matrix (used in this paper) is the following: initially, a weight matrix W corresponding to the graph G is designed to satisfy Assumption 1; the nodes then adjust the weights locally with the time-varying graph to satisfy the connectivity constraints. For time-varying graphs, a weight matrix W(k) is constructed from W as follows, for i ≠ j:

w_ij(k) = w_ij if (i, j) ∈ E(k), and w_ij(k) = 0 if (i, j) ∉ E(k).   (6)

The weights w_ii(k) for i = 1, ..., n are then adjusted so that the weight matrix is doubly stochastic. Since each G(k) is a subgraph of G, w_ii(k) ≥ w_ii for all i and k. Note that with this choice of online weights, W(k) satisfies Assumption 1 at all iterations. Under such a choice of weight matrices {W(k)} and the dynamics given by (5), we refer to {x(k)} as the dynamics driven by a given realization of the random chain {G(k), k ≥ 0} started at an initial point (k_0, v) ∈ N × R^n, where N is the set of non-negative integers.

A. Non-Convergent Dynamic Graph Example

The example in Fig. 1 applies the quantized system dynamics (5) to a given time-varying graph with some initial values. For k = 2, 4, ..., the network estimates at the nodes are the same as at the initial iteration k = 0. Similarly, the estimates are identical for k = 1, 3, ...; that is, the estimates are cyclic and hence do not converge to the average. The non-convergent system in this example can be extended to arbitrarily large networks. Another important remark is that this network is connected at every iteration, and without quantization it is well known [2], [5] that the estimates must converge to the average exponentially fast. This shows that quantized communication can significantly change the behavior of the distributed averaging algorithm over dynamic graphs.

Fig. 1. The union graph G is given in the top figure. The dynamic graph G(k) cycles deterministically between two connected graphs. Given the initial values at k = 0, the system dynamics given by (5), with a weight matrix W defined by w_ij = 0.1 if (i, j) ∈ E, causes the values to cycle at further iterations. This dynamic graph example shows that even if the network is connected at every iteration, convergence of the quantized system close to the average is not guaranteed.

B. Finite Set States

In this section, we show that the values of the agents' local variables evolve in a finite set, the same property as in the static graph case [34].

Proposition 1. Consider system (5). Suppose that W satisfies Assumption 1 and the weight matrix W(k) is constructed using (6). Then, for any initial value x(0), the state vectors {x(k), k ≥ 0} belong to a finite set.

Proof. We give a sketch of the proof. Let c_i(k) = x_i(k) − ⌊x_i(k)⌋ be the decimal part of a node's local estimate; then c_i(k) ∈ [0, 1). The idea is to prove that both c_i(k) and ⌊x_i(k)⌋ can only belong to finite sets of possible values, from which we deduce that x_i(k) belongs to a finite set of possible values as well. It can be shown that, as in the static graph case [34], c_i(k) = c_i(0) + Z_i(k)/B_i, where Z_i(k) is an integer (which can change as a function of k) and B_i > 0 is a fixed positive integer. Thus c_i(k) belongs to a finite set because c_i(k) ∈ [0, 1). Since ⌊x_i(k)⌋ ∈ {min_{j∈V} ⌊x_j(0)⌋, ..., max_{j∈V} ⌊x_j(0)⌋} for all k, which is a finite set, x_i(k) can only take a finite number of possible values, and this completes the proof.

Note that the finite set from Proposition 1 can be constructed from the initial state x(0) and the union graph G, independent of the random graphs {G(k), k ≥ 0}.

C. Lyapunov Function

Consider m(k) and M(k) defined as follows:

m(k) ≜ min_{i∈V} ⌊x_i(k)⌋,   M(k) ≜ max_{i∈V} ⌊x_i(k)⌋.   (7)

Next define

S_k ≜ {y ∈ R^n : |y_i − m(k) − 1| ≤ α_i for i = 1, ..., n},   (8)

where α_i = 1 − w_ii + γ, and γ > 0 is a sufficiently small2 positive scalar that guarantees α_i < 0.5. S_k depends on the iteration k because m(k) does, and it defines a neighborhood set around m(k) + 1. Note that S_k ≠ S_{k+1} if and only if m(k) ≠ m(k+1). Since m(k) is a non-decreasing integer-valued function and is bounded above by M(0), S_k can only change a finite number of times. For this reason, without any loss of generality for the asymptotic convergence properties of the system, intermediary results toward the main convergence theorem can be shown when m(k) is fixed (i.e., m(k+1) = m(k)). Consider the following candidate Lyapunov function:

V(k) = d(x(k), S_k) = min_{y∈S_k} ||y − x(k)||_1.

By minimizing along each component of y independently, we obtain V(k) = Σ_i max{|x_i(k) − m(k) − 1| − α_i, 0}. To study stability using the candidate function V(k), we need the following term:

∇V_k ≜ E[V(k+1) − V(k) | F_k].   (9)

2 In [34], we gave an exact characterization of how small γ should be.
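The componentwise minimization yields a function that is easy to evaluate; the following sketch computes V(k) directly from the state and the designed diagonal weights (the value γ = 0.01 is an arbitrary illustrative choice, small enough that α_i < 0.5 whenever w_ii > 0.51).

```python
import numpy as np

def lyapunov_V(x, w_diag, gamma=0.01):
    """V(k) = sum_i max(|x_i - m(k) - 1| - alpha_i, 0),
    with alpha_i = 1 - w_ii + gamma and m(k) = min_i floor(x_i(k));
    this is the 1-norm distance from x to the set S_k."""
    m = np.floor(x).min()
    alpha = 1.0 - w_diag + gamma
    return float(np.maximum(np.abs(x - m - 1.0) - alpha, 0.0).sum())
```

For w_ii = 0.6 (so α_i = 0.41), a state such as x = (1.9, 2.1, 2.05) lies inside S_k (here m(k) = 1) and gives V = 0, while states farther from m(k) + 1 give V > 0.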

Conditioning on F_k ensures that the random variables V(k+1) and V(k) are driven by the same process x(k) up to iteration k, using the same realization of graphs {G(0), ..., G(k−1)}. The following proposition shows that, for dynamic graphs, the Lyapunov function is non-increasing.

Proposition 2. Consider system (5). Suppose that Assumption 1 holds and W(k) is constructed using (6). If m(k+1) = m(k), then ∇V_k ≤ 0.

Proof. Let G(k) be the graph at iteration k. Since ∇V_k involves only one time step, where W(k) is constructed by (6) and satisfies Assumption 1, the proof follows directly from the static case (see [34, Lemma 2]), because the proof in the static case does not use the connectivity assumption but only the assumption on the weights.

Remark: Note that Proposition 2 implies that, for any given realization of the random graphs {G(k), k ≥ 0}, V(k) is non-increasing with time as long as the minimum does not increase, which also holds for the example in Fig. 1. However, to guarantee the desired convergence, the function should eventually be decreasing with the number of iterations, not just non-increasing. □

The counterexample in Fig. 1 shows that the Lyapunov function may never decrease to zero, and thus the desired convergence cannot be achieved for general deterministically dynamic graphs, even when the graph is connected at each time step. In the sequel, we turn to probabilistically dynamic graphs and show that for a large class of such graphs, the system (5) achieves the desired convergence almost surely.

D. A Randomized Dynamic Graph Model

In this section, we consider a probabilistic dynamic graph model. We assume that each link has a positive probability of appearing in the graph at any iteration.

Assumption 2. Prob[(i, j) ∈ G(k) | F_{k−1}] ≥ p for all k and all (i, j) ∈ G, where p > 0 is a positive constant.

This graph model is quite general and is adopted in many dynamic graph models from the literature.
It includes asynchronous gossiping graphs [12], [21], [31], where one node is first selected at random and then exchanges estimates with one of its neighbors chosen randomly (i.e., only one link is active at a time). The model also includes synchronous gossiping graphs [12], where the set of active links forms a pairwise matching of the graph at every iteration. Another class of time-varying graphs widely used in the literature (e.g., in wireless sensor networks) consists of graphs whose links are subject to failure at any iteration with positive probability [35]. Note that we do not assume any independence in the time-varying G(k); the graphs can be correlated as long as Assumption 2 is satisfied at all iterations. Also note that in the extreme case p = 1, the graph is essentially static, and hence the results here subsume the results in [34].
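The link-failure instance of this model is easy to simulate together with the dynamics (5) and the weight construction (6). The parameters below (an 8-node ring union graph, w_ij = 0.2, link-survival probability p = 0.7, and the horizon) are our illustrative choices; the sketch only checks that the average is conserved and that the deviation stays bounded, in contrast to the deterministic counterexample of Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, T = 8, 0.7, 10000

# Union graph G: a ring with designed weights w_ij = 0.2, w_ii = 0.6 (Assumption 1).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = 0.2
    W[i, i] = 0.6
edges = [(i, (i + 1) % n) for i in range(n)]

x = rng.uniform(0.0, 10.0, n)
x_ave = float(x.mean())
for _ in range(T):
    # Each union-graph link is alive independently with probability p > 0,
    # so Assumption 2 holds (a link-failure model as in [35]).
    alive = [e for e in edges if rng.random() < p]
    Wk = np.zeros((n, n))
    for i, j in alive:                  # eq. (6): keep the surviving links
        Wk[i, j] = Wk[j, i] = W[i, j]
    d = np.arange(n)
    Wk[d, d] = 1.0 - Wk.sum(axis=1)     # return the removed mass to the diagonal
    f = np.floor(x)
    x = Wk @ f + x - f                  # dynamics (5)

drift = abs(float(x.mean()) - x_ave)
spread = float(x.max() - x.min())
```

Since every W(k) built this way is symmetric doubly stochastic, the average is again conserved at each step, and on this randomized model the spread of the values settles below 1.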

The next theorem presents the main technical convergence result of this paper and summarizes the behavior of the system (5).

Theorem 2. Consider system (5). Suppose that W satisfies Assumption 1 and W(k) is constructed using (6). Suppose that Assumption 2 holds, and let α = max_i α_i. Then, for any initial value x(0) and for any realization of the dynamic graph {G(k), k ≥ 0}, almost surely there is a finite time iteration k_0 such that for all k ≥ k_0, either
• the values of the nodes evolve in a bounded neighborhood around the average (note that α < 1/2) such that

|x_i(k) − x_j(k)| ≤ α_i + α_j for all i, j ∈ V,
|x_i(k) − x_ave| ≤ 2α for all i ∈ V,   (10)

• or the quantized values reach a consensus, i.e.,

⌊x_i(k)⌋ = ⌊x_j(k)⌋ for all i, j ∈ V,
|x_i(k) − x_ave| < 1 for all i ∈ V.   (11)

Proof. The proof is given in the Appendix.

Remark: Theorem 2 shows that, for the dynamic graph model considered, the convergence result for static graphs also applies to dynamic networks satisfying Assumption 2. The difference is that on dynamic graphs the system is not necessarily cyclic, because of the probabilistic nature of the appearance and disappearance of links. □

IV. CONCLUSION

In this paper, we have shown that convergence of distributed averaging on time-varying graphs is not guaranteed when quantization is involved, which can cause the agents' agreement variables to deviate far from the desired average. We have provided mild assumptions on the probabilistic model of the dynamic graphs under which the convergence result for static graphs still holds. In particular, we have shown that if the union graph is connected and links appear with a positive probability at every iteration, then the system will either converge to a neighborhood of the average, whose size can be controlled by distributed design of the weights, or reach a quantized consensus almost surely. For future work, it would be interesting to investigate the validity of these results on directed graphs.

APPENDIX: PROOF OF THEOREM 2

The high-level proof of the theorem proceeds as follows. First, we identify situations under which the Lyapunov function V(k) strictly decreases (i.e., ∇V_k ≤ −ε where ε > 0). Second, we analyze convergence by arguing that there will almost surely be an iteration after which none of these situations can occur, because V(k) ≥ 0 for all k; this happens when the system is close to the average. Due to space limitations, we provide only a sketch of the proof.

A. Situations for Strict Decrease of the Lyapunov Function

We identify some situations in the probabilistic graph model under which the Lyapunov function V(k) strictly decreases. These situations play an important role in the proof of the main result. Given the state vector x(k), we group the nodes, depending on their values at iteration k, into five sets X_1(k), X_2(k), X_3(k), X_4(k), and X_5(k):
• node i ∈ X_1(k) if m(k) ≤ x_i(k) < m(k) + 1 − α_i,
• node i ∈ X_2(k) if m(k) + 1 − α_i ≤ x_i(k) < m(k) + 1,
• node i ∈ X_3(k) if m(k) + 1 ≤ x_i(k) < m(k) + 1 + α_i,
• node i ∈ X_4(k) if m(k) + 1 + α_i ≤ x_i(k) < m(k) + 2,
• node i ∈ X_5(k) if m(k) + 2 ≤ x_i(k).

For simplicity, we henceforth drop the index k in the notation of the sets when there is no confusion. The notation {X_s, X_t} indicates the union X_s ∪ X_t. Fig. 2 shows the situations under which ∇V_k ≤ −ε, where ε > 0. Note that if none of the three situations in Fig. 2 can occur, then either (a) all nodes are in {X_1, X_2} or (b) all nodes are in {X_2, X_3}. By the definition of the sets, these two possibilities are the ones given by equations (10) and (11) in the theorem.

Fig. 2. The figure shows the network structure of the nodes when partitioned into the five sets. A link incident to two sets means that there exists a link (i, j) ∈ G(k) where node i belongs to one of these sets and j belongs to the other. The dotted lines represent the situations under which ∇V_k ≤ −ε if the corresponding link (i, j) appears in G(k) (see [34, Lemma 3] for a proof).

B. Convergence Analysis

For the convergence analysis, we need to introduce more notation. For a given chain of graphs {G(0), ..., G(k_0 − 1)}, let R(k_0) be the number of iterations it takes after k_0 for either a) one of the situations in Fig. 2 to occur, or b) the minimum m(k) to strictly increase, i.e., m(k) > m(k_0). Note that R(k_0) is a random variable (contrary to the static case), because the future chain of graphs {G(k_0), G(k_0 + 1), ...} is random on the probability space (Ω, F, P). The proof of convergence of the quantized system on dynamic graphs proceeds by showing that, under Assumption 2, as long as one of the situations (red dotted links in Fig. 2) can eventually occur, we have R(k_0) ≤ k̄Y, where k̄ is a fixed positive constant independent of k_0, and Y ∼ G(p̄) is a random variable following a geometric distribution with probability of success p̄ > 0. We then deduce that the Lyapunov function decreases almost surely until none of the three situations can occur (i.e., convergence near the average is attained). We now state the following important lemma for the dynamic case:

Lemma 1. If {X_1, X_4, X_5} ≠ ∅ at an iteration k_0, and M(k) − m(k) = M(k_0) − m(k_0) > 1 for k ≥ k_0, then almost surely one of the three situations occurs and, in addition, there exists a positive scalar R̄ such that E[R(k_0)] ≤ R̄, where

R̄ = n h^{n−1} (1/p)^{h(1−h^{n−1})/(1−h) + 1},   h = 1 + 1/(2δ),   δ = min_{(i,j)∈E} w_ij.
Proof. To find the number of iterations needed for a dotted (red) link to appear, we define the following functions for nodes in X_23 := {X_2, X_3}:3

T_i^(1)(k_1, k_2) = Σ_{t=k_1}^{k_2} 1_{i∈X_3(t)},   T_i^(2)(k_1, k_2) = Σ_{t=k_1}^{k_2} 1_{i∈X_2(t)},

where 1 is the indicator function (equal to 1 if the condition is satisfied, and 0 otherwise). Thus T_i^(1)(k_1, k_2) is the number of times node i is in X_3, and T_i^(2)(k_1, k_2) is the number of times node i is in X_2. We write T_i when the corresponding statement applies to both T_i^(1) and T_i^(2). Note that T_i(k_1, k_2) is a random variable, because it depends on the specific realization of the random graphs {G(k), k ≥ 0} between the iterations k_1 ≤ k < k_2. The proof of the lemma proceeds by showing that, for any initial iteration k_0 and for any k_1, k_2 ∈ [k_0, k_0 + R(k_0)] such that k_2 − k_1 ≥ L, there exists a node i* such that Prob[T_{i*}(k_1, k_2) ≥ 1] ≥ η > 0, where i* has a neighbor in a set (X_1 or X_4) causing a situation to occur with probability greater than pη, and L is a constant (independent of k_0) to be determined. Having that, we can conclude that R(k_0) ≤ k̄Y, where k̄ = L and Y ∼ G(pη), and E[R(k_0)] ≤ L/(pη). To complete the proof, we have to identify the node i* and give explicit expressions for the constants L and η.

Note that if the sets X_1 and/or {X_4, X_5} are nonempty, then since M(k) − m(k) = M(k_0) − m(k_0) > 1 for k ≥ k_0, at every iteration k there is at least one node in X_2 (and one node in X_3), so that we can write

Σ_{i∈X_23} T_i(k_1, k_2) ≥ k_2 − k_1,   (12)

and there must necessarily be a node v_u* in this sum such that T_{v_u*}(k_1, k_2) ≥ (k_2 − k_1)/n. Consider the path v_u*, v_{u−1}, ..., v_1 in G from this node to the node v_1 ∈ X_23 which has a neighbor in the nonempty set (X_1 or X_4). If k_0 ≤ k_1 ≤ k_2 < k_0 + R(k_0), we have T_j(k_1, k_2) ≥ ⌊(1/h) Σ_{t=1}^{T_i(k_1,k_2)} Y_t⌋ for all j ∈ N_i ∩ X_23, where Y_t ∼ G(p).4 Thus, using this previous equation recursively, we get

Prob[T_{v_1}(k_1, k_2) ≥ 1] ≥ p^h Prob[T_{v_2}(k_1, k_2) ≥ h]   (13)
≥ p^h p^{h^2} Prob[T_{v_3}(k_1, k_2) ≥ h^2]   (14)
≥ p^h p^{h^2} ... p^{h^{u−1}} Prob[T_{v_u*}(k_1, k_2) ≥ h^{u−1}]
≥ p^{h(1−h^{u−1})/(1−h)} 1_{k_2−k_1 ≥ n h^{u−1}}   (15)
≥ p^{h(1−h^{n−1})/(1−h)} 1_{k_2−k_1 ≥ n h^{n−1}},   (16)

where 1 is the indicator function. Inequality (15) follows because T_{v_u*}(k_1, k_2) ≥ (k_2 − k_1)/n, and inequality (16) holds simply because u ≤ n. Therefore, if k_2 − k_1 ≥ n h^{n−1}, then Prob[T_{v_1}(k_1, k_2) ≥ 1] ≥ p^{h(1−h^{n−1})/(1−h)} > 0 (therefore, i* = v_1, L = n h^{n−1}, and η = p^{h(1−h^{n−1})/(1−h)}). This completes the proof.

3 The set X_23 is invariant if none of the situations occurred, i.e., X_23(k_0) = X_23(k) for all k ∈ [k_0, k_0 + R(k_0)].
4 This equation uses Assumption 2, and it does not hold for the cyclic dynamic graph example given in Section III-A.
Remark: In the case p = 1, the underlying graph is essentially static, and the upper bound R̄ = n h^{n−1} = n (1 + 1/(2δ))^{n−1} is the same upper bound derived in [34]. □

We are now in a position to state the following proposition.

Proposition 3. Consider system (5), and suppose that Assumptions 1 and 2 hold. Then, for any initial value x(0), almost surely there is a finite time iteration where either {X_1, X_4, X_5} = ∅ or M(k) − m(k) ≤ 1.

Proof. The value M(k) − m(k) cannot decrease more than M(0) − m(0) times, since M(k) is non-increasing and m(k) is non-decreasing. Therefore, applying Lemma 1, it can be shown that either {X_1, X_4, X_5} = ∅ or M(k) − m(k) ≤ 1 in a finite number of iterations.

Theorem 2 is a direct consequence of Proposition 3, because the proposition provides two possible cases for the system. In the case of a finite time iteration when {X_1, X_4, X_5} = ∅, all nodes are in {X_2, X_3}, and by the definition of the sets the first part of the theorem is established. In the case M(k) − m(k) ≤ 1, we get x_i(k) ∈ [m(k), m(k) + 1) for all i, and the second part of the theorem is established.

REFERENCES

[1] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003.
[2] R. Olfati-Saber and R. M. Murray, “Consensus seeking in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
[3] L. Moreau, “Stability of multi-agent systems with time-dependent communication links,” IEEE Transactions on Automatic Control, vol. 50, no. 2, pp. 169–182, 2005.
[4] W. Ren and R. W. Beard, “Consensus seeking in multiagent systems under dynamically changing interaction topologies,” IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 655–661, 2005.
[5] M. Cao, A. S. Morse, and B. D. O. Anderson, “Reaching a consensus in a dynamically changing environment: a graphical approach,” SIAM Journal on Control and Optimization, vol. 47, no. 2, pp. 575–600, 2008.
[6] J. M. Hendrickx and J. N. Tsitsiklis, “Convergence of type-symmetric and cut-balanced consensus seeking systems,” IEEE Transactions on Automatic Control, vol. 58, no. 1, pp. 214–218, 2013.
[7] B. Touri and A. Nedić, “Product of random stochastic matrices,” IEEE Transactions on Automatic Control, vol. 59, no. 2, pp. 437–448, 2014.
[8] J. Liu, A. S. Morse, A. Nedić, and T. Başar, “Internal stability of linear consensus processes,” in Proceedings of the 53rd IEEE Conference on Decision and Control, pp. 922–927, 2014.
[9] Y. Pu, M. N. Zeilinger, and C. N. Jones, “Quantization design for distributed optimization,” IEEE Transactions on Automatic Control, 2016. Accepted.
[10] D. Kempe, A. Dobra, and J. Gehrke, “Gossip-based computation of aggregate information,” in Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, pp. 482–491, 2003.
[11] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems and Control Letters, vol. 53, pp. 65–78, 2004.
[12] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,” IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2508–2530, 2006.
[13] A. Nedić, A. Olshevsky, A. Ozdaglar, and J. N. Tsitsiklis, “On distributed averaging algorithms and quantization effects,” IEEE Transactions on Automatic Control, vol. 54, no. 11, pp. 2506–2517, 2009.
[14] A. G. Dimakis, S. Kar, J. M. F. Moura, M. G. Rabbat, and A. Scaglione, “Gossip algorithms for distributed signal processing,” Proceedings of the IEEE, vol. 98, no. 11, pp. 1847–1864, 2010.
[15] J. Liu, S. Mou, A. S. Morse, B. D. O. Anderson, and C. Yu, “Deterministic gossiping,” Proceedings of the IEEE, vol. 99, no. 9, pp. 1505–1524, 2011.
[16] K. Avrachenkov, M. El Chamie, and G. Neglia, “A local average consensus algorithm for wireless sensor networks,” in Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems and Workshops, pp. 482–491, 2011.
[17] M. El Chamie, G. Neglia, and K. Avrachenkov, “Distributed weight selection in consensus protocols by Schatten norm minimization,” IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1350–1355, 2015.
[18] B. Açıkmeşe, M. Mandić, and J. L. Speyer, “Decentralized observers with consensus filters for distributed discrete-time linear systems,” Automatica, vol. 50, no. 4, pp. 1037–1052, 2014.
[19] T. C. Aysal, M. Coates, and M. Rabbat, “Distributed average consensus using probabilistic quantization,” in Proceedings of the 14th IEEE/SP Workshop on Statistical Signal Processing, pp. 640–644, 2007.
[20] S. Kar and J. M. F. Moura, “Distributed consensus algorithms in sensor networks: quantized data and random link failures,” IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1383–1400, 2010.
[21] J. Lavaei and R. M. Murray, “Quantized consensus by means of gossip algorithm,” IEEE Transactions on Automatic Control, vol. 57, no. 1, pp. 19–32, 2012.
[22] M. Zhu and S. Martínez, “On the convergence time of asynchronous distributed quantized averaging algorithms,” IEEE Transactions on Automatic Control, vol. 56, no. 2, pp. 386–390, 2011.
[23] R. Carli, F. Fagnani, P. Frasca, and S. Zampieri, “Gossip consensus algorithms via quantized communication,” Automatica, vol. 46, no. 1, pp. 70–80, 2010.
[24] S. Zhu, Y. C. Soh, and L. Xie, “Distributed parameter estimation with quantized communication via running average,” IEEE Transactions on Signal Processing, vol. 63, pp. 4634–4646, September 2015.
[25] P. Frasca, R. Carli, F. Fagnani, and S. Zampieri, “Average consensus on networks with quantized communication,” International Journal of Robust and Nonlinear Control, vol. 19, no. 16, pp. 1787–1816, 2009.
[26] T. Li and L. Xie, “Distributed consensus over digital networks with limited bandwidth and time-varying topologies,” Automatica, vol. 47, no. 9, pp. 2006–2015, 2011.
[27] Q. Zhang and J. F. Zhang, “Quantized data-based distributed consensus under directed time-varying communication topology,” SIAM Journal on Control and Optimization, vol. 51, no. 1, pp. 332–352, 2013.
[28] R. Carli, F. Fagnani, A. Speranzon, and S. Zampieri, “Communication constraints in coordinated consensus problems,” Automatica, vol. 44, no. 3, pp. 671–684, 2008.
[29] D. Thanou, E. Kokiopoulou, Y. Pu, and P. Frossard, “Distributed average consensus with quantization refinement,” IEEE Transactions on Signal Processing, vol. 61, no. 1, pp. 194–205, 2013.
[30] A. Censi and R. M. Murray, “Real-valued average consensus over noisy quantized channels,” in Proceedings of the 2009 American Control Conference, pp. 4361–4366, 2009.
[31] A. Kashyap, T. Başar, and R. Srikant, “Quantized consensus,” Automatica, vol. 43, no. 7, pp. 1192–1203, 2007.
[32] K. Cai and H. Ishii, “Quantized consensus and averaging on gossip digraphs,” IEEE Transactions on Automatic Control, vol. 56, no. 9, pp. 2087–2100, 2011.
[33] S. Etesami and T. Başar, “Convergence time for unbiased quantized consensus over static and dynamic networks,” IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 443–455, 2016.
[34] M. El Chamie, J. Liu, and T. Başar, “Design and analysis of distributed averaging with quantized communication,” IEEE Transactions on Automatic Control, vol. 61, no. 12, 2016. [Available Online] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7409947.
[35] S. Patterson, B. Bamieh, and A. El Abbadi, “Convergence rates of distributed average consensus with stochastic link failures,” IEEE Transactions on Automatic Control, vol. 55, pp. 880–892, April 2010.
