X-CSR: Dataflow Optimization for Distributed XML Process Pipelines

Daniel Zinn∗, Shawn Bowers†, Timothy McPhillips†, Bertram Ludäscher∗,†

∗Dept. of Computer Science, UC Davis; †UC Davis Genome Center
Contact: {dzinn, sbowers, tmcphillips, ludaesch}@ucdavis.edu

Abstract— XML process networks are a simple, yet powerful programming paradigm for loosely coupled, coarse-grained dataflow applications such as data-centric scientific workflows. We describe a framework called ∆-XML that is well-suited for applications in which pipelines of data processors modify parts ("deltas") of XML data collections while keeping the overall collection structure intact. We show how to optimize the execution of ∆-XML process networks by minimizing the data-shipping cost in distributed settings. This X-CSR (XML Cut, Ship, and Reassemble; pronounced "X-scissor") optimization employs static type inference based on XML Schema to determine the XML stream fragments that are relevant to a processor, allowing irrelevant fragments to be bypassed, i.e., shipped directly to downstream pipeline steps. Finally, we present evaluation results for a real-world scientific workflow, which show the practical feasibility of X-CSR. A long version of this paper is available as [1].

I. INTRODUCTION

We adopt a simple and flexible model for designing XML process networks called ∆-XML, and show how ∆-XML pipelines can be efficiently executed within distributed environments. ∆-XML has applications, for example, in scientific workflow settings as discussed in [2]. ∆-XML is well-suited for applications in which pipeline components (or actors) modify only parts (i.e., deltas) of incoming XML data streams (see Fig. 1a). ∆-XML supports explicit typing of data and actors based on XML Schema, leading to opportunities for analysis and optimization of XML process networks. Below, we describe how to minimize the overall cost of shipping data between distributed processing components.

II. ∆-XML ACTOR CONFIGURATIONS

We view XML data as (i) unranked, labeled, ordered trees (for typing purposes), and (ii) token sequences of XML events (for streaming purposes), ignoring XML attributes for simplicity. Data nodes contain (binary) data whose specific representation is unknown to the framework, but understood by actors. ∆-XML schemas are based on XML Schema [3] and are formalized via regular tree grammars [4]. A type declaration (production rule) has the form T ::= ⟨t⟩ R, where the left-hand side (lhs) consists of a type T, and the right-hand side (rhs) consists of a label (tag) ⟨t⟩ and a regular expression R over types. Regular expressions are defined using the symbols "." (sequence), "|" (alternation or union), "?" (optional), "∗" (zero or more), "+" (one or more), and "ε" (empty word). We omit "." when clear from the context. A ∆-XML schema τ is defined as a finite set of type declarations. Every schema τ implies a set of labels Lτ and types Tτ.
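To make the grammar formalism concrete, the following is a minimal Python sketch (our own illustration, not from the paper; all class names are hypothetical) of ∆-XML type declarations, with content models kept as regular-expression strings over type names. The declarations shown are partially reconstructed from Fig. 1a (the text gives the type path S/A/D/F) and are illustrative only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TypeDecl:
    """A type declaration T ::= <t> R: type name, element label, content model."""
    name: str     # the type T (lhs)
    label: str    # the tag <t>
    content: str  # regular expression R over type names (rhs)

# Part of the input schema tau from Fig. 1a; "." (sequence) is left
# implicit, as in the paper. Remaining declarations are elided.
tau = {
    TypeDecl("S", "s", "(A | B)* C"),
    TypeDecl("A", "a", "D+ | E*"),
    TypeDecl("D", "d", "F G"),
    # ... declarations for B, C, E, F, G over base-data types Z
}
```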

Per convention, we use S as our "start type" and require each schema to have a type declaration for S such that S does not occur in the rhs of any declaration in τ. Similar to XML Schema, we also require that content models be deterministic (also called non-ambiguous in [3]). For convenience, we assume schemas are non-recursive. By convention, the types Z, Z1, Z2, . . . are used to denote base data.

Each actor A is assigned a configuration (or signature) ∆A = ⟨σ, τα, τω⟩ that consists of replacement (or match) rules σ for selecting and replacing relevant subtrees of an input stream, an input selection schema (or read scope) τα, and an output replacement schema (or write scope) τω. Each replacement rule is of the form X → R, where X is a type of τα, and R is a regular expression over types of τω. Intuitively, a replacement rule states that for any relevant fragment of type X in the input stream, the actor A will put in place of X an output of type R. We call X a match type, and each type in R a replacement type. We require the match types of a configuration to be unreachable from each other within the input selection schema τα, thus avoiding ambiguous or nested matches. Additionally, each match type occurs in exactly one match rule of σ (non-deterministic actions can be captured using the "|" operator in the replacement expression R). We also require that applying σ to a channel schema τ results again in a well-formed schema as defined above. For example, see Fig. 1a, in which channel schemas are consecutively transformed by actor signatures.
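Continuing the hypothetical representation above, a hedged sketch of an actor configuration and of schema propagation: applying a rule X → R to a channel schema's content model simply rewrites occurrences of the match type X by the replacement expression R (the names ActorConfig and apply_signature are our own):

```python
import re
from dataclasses import dataclass, field

@dataclass
class ActorConfig:
    """Configuration Delta_A = (sigma, tau_alpha, tau_omega) of an actor A."""
    sigma: dict[str, str]                                # rules: match type X -> regex R
    read_scope: set[str] = field(default_factory=set)    # tau_alpha
    write_scope: set[str] = field(default_factory=set)   # tau_omega

def apply_signature(content_model: str, sigma: dict[str, str]) -> str:
    """Rewrite a content model by replacing each match type X with its
    replacement expression (R), as the rules in sigma prescribe."""
    for match_type, replacement in sigma.items():
        # \b ensures whole type names are replaced, e.g. F but not FG.
        content_model = re.sub(rf"\b{match_type}\b", f"({replacement})",
                               content_model)
    return content_model

# The signature of A1 from Fig. 1a: A1 replaces each F by B E G.
sigma_A1 = {"F": "B E G"}
print(apply_signature("F G", sigma_A1))   # -> "(B E G) G", cf. Fig. 1a
```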

III. OPTIMIZING ∆-XML PIPELINES WITH X-CSR

The X-CSR optimization transforms (compiles) ∆-XML pipelines into equivalent process networks optimized for reduced data-shipping costs. The transformation introduces distributors and mergers as new components into the original pipeline. During pipeline execution, distributors route data directly to actors for which the data is relevant, and thus around actors for which the data is irrelevant. Similarly, mergers automatically receive, combine, and provide routed data to subsequent actors. X-CSR exploits schema information and actor configurations to construct the appropriate distributors. Note that the X-CSR optimization is performed prior to workflow execution and thus does not affect the original pipeline design (it is invisible to regular workflow designers and users).

Optimization goals and assumptions. The X-CSR optimization is designed to minimize the total amount of data that is shipped during workflow execution. This is expected to decrease the duration of workflow execution, i.e., the wall-clock time associated with the enactment of a single workflow run, when the following conditions hold:

(1) The computations carried out by individual actors are expensive in terms of memory or other node resources, requiring actors to be distributed across multiple nodes.
(2) The number and/or size of data items passed between actors is large, such that shipping data is time-consuming relative to the time needed to perform computations on the data.
(3) The overall computation presents opportunities for pipeline parallelism (a form of concurrency with multiple dependent actors executing simultaneously).
(4) Initializing the ∆-XML framework on all nodes takes little time relative to the total workflow execution time.

These assumptions often hold for data-driven scientific workflows [5], [6], which typically operate over large and complex data sets. To simplify the presentation below, we assume each actor in a pipeline is staged on a different host, and that data can be shipped between any two hosts.

The general deployment pattern for mergers and distributors is shown in Fig. 1. The original ∆-XML pipeline with input schema τ is shown in Fig. 1a, and the result of applying X-CSR to this pipeline is depicted in Fig. 1b. For example, the first distributor D0 is configured as follows. Because actor A1 contains only F as a match type in σA1, and the relevant type path leading to F is S/A/D/F, each F fragment in the input stream (with its corresponding context) is routed directly to A1. Each G fragment in the original input stream is routed directly to A2, since A2 has match type G and G is not used by A1 (but is in τ). Similarly, E fragments in the original input stream are routed directly to A3. Finally, because fragments of type B and C in τ are not used by any actor in the pipeline, these fragments are bypassed directly to the final merger M4. Fig. 1c depicts how instances of the input schema τ are split by the various distributors inserted into the pipeline by X-CSR. See [1] for a detailed description of the algorithm for partitioning the input schemas of actors in ∆-XML pipelines.

Labeling Holes. To merge bypassed fragments, distributors add hole markers (◦) into the stream, and bypassed fragments are assigned filling tags (•). To match fillings with holes, each hole is labeled according to its corresponding channel number. Without such labels, a merger such as M2 would not know whether a given hole corresponds to a fragment sent via channel d•02 (and should thus be filled), or to a fragment sent via channel d•03 (and should therefore be ignored, because merger M3 will fill that hole). Note that labeling holes using only the index of the merger that will fill in the fragment is not sufficient: the merger M3, when receiving a hole labeled "M3", could not determine which bypassing channel it is supposed to obtain the filling from. However, it is not necessary to uniquely number every hole-filling pair, since the order of tokens is preserved along channels.

Distributors. At runtime, distributors continuously map incoming tokens to type symbols in their input schemas. This is done via a Glushkov automaton, which we generate for the distributor's input schema (cf. [7]). Based on the partitioning of the schema, we associate a destination with each state and add holes and fillings as appropriate. At runtime, we therefore need only O(1) work to determine the destination of any given token.
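The following hedged Python sketch (our own illustration, not the authors' implementation) shows the runtime behavior of a distributor. In X-CSR the token-to-destination mapping is computed in O(1) per token by a Glushkov automaton over the schema; a plain dictionary lookup over type symbols stands in for it here, and the channel names mirror D0 in Fig. 1b:

```python
def distribute(tokens, route, primary, emit):
    """Route each (type_symbol, payload) token to its destination channel.
    `route` maps type symbols to bypass channels; fragments cut out of the
    primary stream leave a labeled hole behind, so that downstream mergers
    can reinsert them at the right position."""
    for typ, payload in tokens:
        dest = route.get(typ, primary)
        if dest == primary:
            emit(primary, (typ, payload))
        else:
            emit(primary, ("HOLE", dest))         # hole labeled with channel
            emit(dest, ("FILL", (typ, payload)))  # bypassed fragment

# Toy run mirroring D0: F stays on the primary channel toward A1,
# G bypasses to A2, E to A3, and B and C go straight to the final merger.
out = []
distribute([("F", 1), ("G", 2), ("B", 3)],
           route={"G": "d02", "E": "d03", "B": "d04", "C": "d04"},
           primary="d01", emit=lambda ch, tok: out.append((ch, tok)))
print(out)
```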

Mergers. Mergers simply align hole and filling pairs. A merger Mi sequentially reads from its primary input stream (i.e., d◦i−1,i), forwarding all tokens in the stream except holes labeled ◦jk with k = i. When such a hole ◦ji is read, the merger reads a new filling from channel j and inserts the fragment back into the stream. Therefore, mergers also have complexity O(1) for each token they read.

IV. THEORETICAL EVALUATION OF X-CSR

Optimality of X-CSR. X-CSR-optimized pipelines do not exhibit unnecessary shippings of XML fragments (including data nodes). Note that optimality is with respect to the available actor configuration information: if the signature of an actor claims that data is used by the actor, then this data will be shipped to the actor, even though the signature might not capture the actor's real behavior. As soon as a fragment is produced by an actor (or provided to the first actor of a pipeline), X-CSR finds the closest downstream actor that has the fragment in scope. The fragment is then sent directly to this actor without passing through intermediate actors (as in the unoptimized case). Because the fragment must be received by this actor to guarantee correctness, this shipping is indeed necessary, and thus optimal.

Saved shipping volume. In an unoptimized workflow, shipping a data node #d from an actor Ai to an actor Ai+n, the closest downstream actor that has #d in scope, involves shipping sizeof(#d) · n bytes. The optimized version sends the data directly to Ai+n and thus ships only sizeof(#d) bytes, a savings of sizeof(#d) · (n − 1) bytes. In other words, the saved shipping cost is linear both in the number of bypassed actors and in the total size of the data nodes (in general, shipped fragments) involved in the optimization. Thus, with X-CSR, the more data is shipped, the larger the savings.

Compile-time work. Constructing the X-CSR-optimized pipeline involves two main steps: inferring distributor specifications and building distributor Glushkov automata. Inferring distributor specifications (cf. [1]) has complexity O(N²), where N is the number of actors in the pipeline. Since there are O(N²) direct connections in a pipeline, this complexity is optimal: it is the theoretical lower bound for considering all direct connections. Building the internal Glushkov automata for the distributors is possible in quadratic time with respect to the length of the schema [8]. This PTIME complexity is reasonable in practice since, in many scientific workflows, the schemas themselves are rather small, in contrast to the amount and size of data (within a schema instance) that is typically shipped.

Run-time work. When the X-CSR-optimized pipeline is run, roughly three times the number of actors are started compared to the unoptimized (original) pipeline, because a distributor and a merger are started for each workflow actor. Because starting actors is only a small portion of the overall pipeline execution (and is also done in parallel), we do not expect this startup overhead to be significant. While data is streamed through the pipeline, both mergers and distributors perform O(1) work for each token they receive. So although collection tokens must travel through roughly three times as many actors in the transformed pipeline, this overhead is also not significant, since the majority of the time is spent on actor computation and on shipping data between actors.
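To complement the distributor sketch above, here is a matching hedged merger sketch (again with names of our own choosing): the merger forwards its primary stream and, whenever it sees a hole labeled with one of its own bypass channels, splices in the next filling queued on that channel. Because token order is preserved along channels, this pairing is unambiguous and costs O(1) per token:

```python
from collections import deque

def merge(primary, bypass):
    """Merge bypassed fragments back into the primary token stream.
    A ("HOLE", ch) token is replaced by the next ("FILL", fragment) queued
    on bypass channel `ch`; holes labeled with channels this merger does
    not own pass through untouched, to be filled further downstream."""
    queues = {ch: deque(toks) for ch, toks in bypass.items()}
    for tok in primary:
        if tok[0] == "HOLE" and tok[1] in queues:
            _, fragment = queues[tok[1]].popleft()  # ("FILL", fragment)
            yield fragment
        else:
            yield tok

# Continuing the distributor example: M2 owns channel d02 only, so the
# hole labeled d04 is forwarded for the final merger M4 to fill.
primary = [("F'", 1), ("HOLE", "d02"), ("HOLE", "d04")]
print(list(merge(primary, {"d02": [("FILL", ("G", 2))]})))
# -> [("F'", 1), ('G', 2), ('HOLE', 'd04')]
```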

[Figure 1 omitted: schema trees and channel diagrams. Panel (a) shows the original pipeline over input schema τ with actor signatures σA1 = {F → B E G}, σA2 = {G → (E | B)}, σA3 = {E → X}; panel (b) shows the X-CSR-optimized pipeline with distributors D0–D2, mergers M2–M4, primary channels d◦01, d◦12, d◦23, and bypass channels d•02, d•03, d•04, d•13, d•14, d•24; panel (c) shows the schema partitioning performed by distributors D0, D1, and D2.]

Fig. 1. X-CSR ("X-scissor") in action: (a) pipeline design (unoptimized) with actor signatures σAi (part of configurations ∆Ai), initial input channel schema τ, inferred intermediate schemas (dash-boxed schema trees, above channels), and final inferred schema (after A3). (b) Optimized (system-generated) design: the sub-network M2–A2–D2 (M2 and D2 reside on the same host as A2) shows the general pattern: A2 receives, via the merger M2, all parts relevant for its read scope, then performs its local operation. The distributor D2 removes parts that are irrelevant to the subsequent actor A3 and ships them further downstream, but not before leaving marked "holes" behind where the removal was done. This allows downstream mergers to pair bypassed fragments with the holes from which they were originally taken. (c) Distributors D0, D1, and D2 split the schemas on the wire as indicated.

[Figure 2 omitted: fMRI workflow diagram with per-step signatures align_warp: Reg → Reg; reslice: Ref → Ref, A → A . R; softmean: R → R . M; slicer: M → M . S; convert: S → S. The input schema has regions ⟨Reg⟩ containing a reference image ⟨Ref⟩ and anatomy images ⟨A⟩ over base data Z∗; the output schema additionally contains the ⟨R⟩, ⟨M⟩, and ⟨S⟩ collections. Run statistics: input/output data size 400MB/900MB; original shippings 4.5GB vs. X-CSR shippings 2.3GB (−49%); original execution time 160s vs. X-CSR execution time 95s (−40%).]

Fig. 2. fMRI workflow with input/output schemas and configurations; statistics for original and X-CSR-optimized runs.

V. EXPERIMENTAL EVALUATION OF X-CSR

Our implementation of the X-CSR optimization has successfully been used to reduce data-shipment costs in both real-world scientific workflows and synthetic examples [1]. Here we briefly describe the optimization benefits of X-CSR applied to a workflow for studying functional Magnetic Resonance Imaging (fMRI) results [9], [10], in which voxel images of human brain scans are analyzed. The workflow consists of the major steps shown in Fig. 2: align_warp, reslice, softmean, slicer, and convert. In our example, five different regions of the brain are studied, with each region grouped under one ⟨Reg⟩ label. A region contains one reference image, which is stored under ⟨Ref⟩, and multiple anatomy images taken from different subjects. Four of these anatomy images are used, and each is stored under ⟨A⟩. Each image has associated image files. We model all image files and header files as base data.

The first step aligns all anatomy images with the associated reference image (align_warp). This step requires access to all input data and produces a "warp" specification, which we store with the anatomy images; we therefore do not modify the collection structure. In the second step, reslice takes the reference image data and anatomy image data (with warp information) and creates "resliced" versions of the anatomy images. We put these in a new collection labeled ⟨R⟩. The signature models access to the reference images (Ref → Ref) and the creation of the new ⟨R⟩ collection as a sibling to the anatomy collection (A → A . R). Then, the softmean actor averages all resliced images together to create one averaged image ⟨M⟩, which is then sliced to create several (in our example three) 2D images ⟨S⟩. The last step converts each 2D image to a common image format and stores the result also beneath ⟨S⟩.
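As an illustration, the per-step signatures from Fig. 2 can be written down directly in the toy representation from Section II (this snippet reuses the hypothetical ActorConfig class from that sketch and assumes its definition is in scope):

```python
# The fMRI pipeline signatures from Fig. 2. Each rule X -> R appends new
# collections without deleting inputs, preserving intermediate data
# products for provenance.
fmri_pipeline = [
    ("align_warp", ActorConfig(sigma={"Reg": "Reg"})),
    ("reslice",    ActorConfig(sigma={"Ref": "Ref", "A": "A . R"})),
    ("softmean",   ActorConfig(sigma={"R": "R . M"})),
    ("slicer",     ActorConfig(sigma={"M": "M . S"})),
    ("convert",    ActorConfig(sigma={"S": "S"})),
]
```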

For provenance reasons, we want to preserve the intermediate data products and therefore append new data products without deleting the input data in each step. As a consequence, the output schema shown in Fig. 2 contains all input and intermediate data involved in the processing. For performance reasons, each of these steps is run on a single node and data is shipped from node to node. In our example with 400MB of input data, 4.5GB of total data needs to be shipped. Deploying the X-CSR optimization, which sends all intermediate data that is not used by downstream actors directly to the very end, reduces the amount of data shipped to 2.3GB (a reduction of 49%). The optimization not only saves network bandwidth; the workflow also executes in 95s as opposed to 160s in the unoptimized case, reducing execution time by 40%. We averaged the times over 3 workflow runs, in which runtimes varied by less than 4 seconds.

Overhead. The startup time of the unoptimized version was 0.4 seconds, whereas the optimized workflow needed 0.6 seconds to start up (both measures had a variance smaller than 0.05s throughout 10+ runs). That is, all compile-time work for X-CSR is performed in less than 0.2 seconds, which is less than 0.2% of the overall execution time (both optimized and unoptimized). We observed similar optimization results on another workflow, which we describe in [1].

VI. CONCLUSION

We have presented a formal model of ∆-XML data types and actor configurations (read scopes, write scopes, and replacement rules) using standard XML typing approaches [11]. Based on ∆-XML, we described a schema propagation approach used by X-CSR to determine optimal routings of stream fragments to appropriate actors in a pipeline. We have also described the X-CSR optimization and shown its effectiveness on a real-world scientific workflow. Specifically, X-CSR can significantly reduce the overall cost of shipping large volumes of data in cases where actors are staged across distributed hosts.

Acknowledgements. Work supported in part through NSF grants IIS-0630033, OCI-0722079, IIS-0612326, DBI-0533368, and DOE grant DE-FC02-01ER25486.

REFERENCES

[1] D. Zinn, S. Bowers, T. McPhillips, and B. Ludäscher, "Dataflow optimization for distributed XML process pipelines," UC Davis, Tech. Rep. CSE-2008-15, 2008.
[2] T. McPhillips, S. Bowers, D. Zinn, and B. Ludäscher, "Scientific workflow automation for mere mortals," Future Generation Computer Systems, 2008, in press.
[3] M. Murata, D. Lee, M. Mani, and K. Kawaguchi, "Taxonomy of XML schema languages using formal language theory," ACM Transactions on Internet Technology (TOIT), vol. 5, no. 4, pp. 660–704, 2005.
[4] M. Mani and D. Lee, "XML to relational conversion using theory of regular tree grammars," in VLDB Workshop on EEXTT, 2002.
[5] B. Ludäscher, I. Altintas, C. Berkley, D. Higgins, E. Jaeger, M. Jones, E. A. Lee, J. Tao, and Y. Zhao, "Scientific workflow management and the Kepler system," Concurrency and Computation: Practice & Experience, vol. 18, no. 10, pp. 1039–1065, 2006.
[6] N. Podhorszki, B. Ludäscher, and S. Klasky, "Workflow automation for processing plasma fusion simulation data," in 2nd Workshop on Workflows in Support of Large-Scale Science (WORKS), Monterey Bay, June 2007.
[7] A. Brüggemann-Klein and D. Wood, "One-unambiguous regular languages," Information and Computation, vol. 142, no. 2, pp. 182–206, 1998.
[8] A. Brüggemann-Klein, "Regular expressions into finite automata," Theoretical Computer Science, vol. 120, pp. 197–213, 1993.
[9] Y. Zhao, J. Dobson, I. Foster, L. Moreau, and M. Wilde, "A notation and system for expressing and executing cleanly typed workflows on messy scientific data," SIGMOD Record, vol. 34, no. 3, pp. 37–43, 2005.
[10] L. Moreau et al., "The first provenance challenge," Concurrency and Computation: Practice and Experience, 2007.
[11] H. Hosoya, J. Vouillon, and B. C. Pierce, "Regular expression types for XML," ACM TOPLAS, 2005.
