A High-availability and Fault-tolerant Distributed Data Management Platform for Smart Grid Applications

Ni Zhang, Student Member, IEEE, Yu Yan, Student Member, IEEE, Shengyao Xu, Student Member, IEEE, and Wencong Su*, Member, IEEE
Department of Electrical and Computer Engineering, University of Michigan-Dearborn, Dearborn, MI, USA
{niz, yuya, shengyao, wencong}@umich.edu

Abstract - In the next-generation power system, the massive volume of real-time data will help the power system operators gain a better understanding of a large-scale and highly dynamic power grid. In order to maintain the reliability and stability of the grid, a large amount of data, collected by local devices referred to as smart meters, is needed for monitoring and controlling the system as well as predicting the electricity price. Therefore, how to store and process this great amount of data becomes a critical issue. In conventional power systems, the system operation is performed using purely centralized data storage and processing approaches. As the number of smart meters increases to hundreds of thousands, a huge amount of data will be generated, and it is rather intuitive that the state-of-the-art centralized information processing architecture will no longer be sustainable under such a data explosion. On the other hand, to protect privacy and keep market competition fair, some users do not wish to disclose their operational conditions to each other, so a fully distributed scheme may not be practically possible either. In this paper, we investigate a radically different approach through a Hadoop-based high-availability and fault-tolerant distributed data management framework to handle a massive amount of Smart Grid data in a timely and reliable fashion. All the system operational information is processed in the distributed middleware, which consists of a cluster of low-cost commodity servers.
We further substantiate the proposed distributed data management framework for Smart Grid operations on a proof-of-concept testbed. This high-availability distributed file system and data processing framework can be easily tailored and extended to support data-intensive Smart Grid applications in a full-scale power system.

Index Terms-- Smart Grid, Big Data, Hadoop

I. INTRODUCTION

In the next-generation power system, the massive volume of real-time data will help the power system operators gain a better understanding of a large-scale and highly dynamic power grid. In the electrical distribution system, it is important to maintain the reliability and stability of the grid; to do so, a large amount of data, collected by local devices referred to as smart meters, will be necessary for monitoring and controlling the system as well as predicting the price of energy. Therefore, how to store and process this huge amount of data becomes a critical issue. The data needs to be stored in a reliable and cost-effective way, and the storage needs a protective mechanism to guarantee data privacy. Another challenge is that the large volume of data is very difficult to interpret within a short period of time. The desired data storage system should provide key information rather than raw data, which moves Smart Grid operations from being data-intensive to information-directed [1]-[3]. For instance, Austin Energy in Austin, Texas has implemented 50,000 smart meters, which send data to the data center every 15 minutes; this requires 200 TB of storage capacity as well as significant recovery redundancy, and it takes more disk space to manage the data than just to store it. If Austin Energy moves from 15-minute to 5-minute interval data exchange, its data storage needs will grow to 800 TB [4]. Another example is the phasor measurement unit (PMU) data collection in North America. PMU data is collected directly from field devices at 30 samples per second. As of 2009, there were more than 100 active PMU devices placed around the Eastern United States that actively send data to the Tennessee Valley Authority (TVA). As a result, TVA had roughly 20 TB of archived data in 2009, and expected 120 TB per year by 2012. In conventional power systems, the system operation is performed using purely centralized optimization-based dispatching schemes that consider the problem at various time-scales. As the number of smart meters increases to hundreds of thousands in a future electrical distribution system, it is rather intuitive that the current state-of-the-art centralized information processing architecture (e.g., SCADA) will no longer be sustainable under such a data explosion.

* Corresponding Author. This work is supported by the New Faculty Startup Fund at University of Michigan-Dearborn. This work is also supported in part by the National Science Foundation under Award Number EEC-0812121.
The big challenge is how to reliably store power system data and make it available at all times.

When it comes to large-scale data storage and processing, centralized data storage with a relational database is a common choice. However, the cost of a relational database, including the basic expenses of the licensed software and the maintenance cost of the database center, keeps rising. Another challenge is caused by weak plug-and-play functionality: when the amount of data increases greatly, a lot of expensive data storage hardware has to be installed and the relational database becomes more complex, so the tools available to access the data become limited. With massive, heterogeneous, and complex data that may span many disks, conventional data storage and management methods may experience hardware failures more frequently [5]. A single point of failure in the whole physical machine (e.g., a centralized supercomputer) may even cause the loss of the entire operational file. Therefore, a sophisticated and up-to-date data management system is needed for fast analysis with high efficiency and reliability. On the other hand, to protect privacy and keep electricity transactions fair, some producers naturally do not wish to disclose their operational conditions and bidding strategies to each other, so a fully distributed data processing and management scheme may not be practically possible either. In order to maintain a comparatively low cost while managing the large data volume effectively and safely, we have to find a new way of dealing with the data explosion. Instead, in this paper, we investigate a radically different approach through scalable, fault-tolerant distributed software agents (e.g., Hadoop) to process a massive amount of market data in a timely fashion. All the system operational information is processed in the distributed middleware, which consists of a cluster of low-cost commodity servers. We further substantiate the proposed distributed data management framework for Smart Grid applications on a proof-of-concept testbed.
Figure 1 illustrates the envisioned distributed middleware for smart meter implementation in Smart Grid.

Fig. 1. The envisioned distributed data management framework

Compared with several alternatives such as SAN, NAS, and RDBMS systems, Hadoop has drawn much more attention because of its low cost, superior reliability, fast processing time, and scalability. Hadoop can manage complex data flows with multiple inputs and outputs through a low-cost distributed middleware. Previous studies have discussed a variety of methods for storing and processing big data in the power grid. In [6], the authors compared conventional database systems with Hadoop HBase, and introduced the features of the Hadoop architecture and MapReduce. In [7], the authors compared classical sequential multithreaded time series data processing with a distributed processing system using Pig on a Hadoop cluster; according to the authors, the results showed that when the size of the data set exceeds 6.2 GB, inclusive of measured and simulated data, the processing ability of the Hadoop cluster is superior to that of multi-core systems. In [8], the authors investigated the privacy and security issues of energy management data; they classified several security and privacy issues, which guide us in building a more secure and private environment for the electric power system on a cloud platform. In [9], the authors demonstrated a Smart Grid marketing architecture based on the Hadoop platform. In [5], the authors proposed a processing method for monitoring Smart Grid data based on the Hadoop platform. To the best of our knowledge, the distributed data management framework for the Smart Grid is still not well studied. The remainder of this paper is organized as follows: Section II introduces the working principle of the Hadoop platform and the proposed framework. Section III presents the proof-of-concept testbed using low-cost credit-card-sized single-board computers (e.g., Raspberry Pi boards). Section IV concludes the paper and discusses future work.

II. THE WORKING PRINCIPLE AND SYSTEM ARCHITECTURE

A. The working principle of Hadoop

Hadoop is an open-source software framework for large-scale storage and processing of data sets on clusters of commodity hardware. It was originally developed to support the Nutch search engine project, building on Google's published file system and MapReduce designs; Yahoo also played an important role in developing Hadoop for enterprise applications. Hadoop is particularly designed to handle a mixture of complex and structured data sets for fast analysis using a group of commodity servers. Hadoop is architected as a master-slave system. The master runs the NameNode, which stores metadata such as file names and block locations in the cluster, and the slaves run DataNodes, which store the replicated blocks [10]. The NameNode can create, open, remove, and rename a file or a directory, and it manages the file system namespace. The DataNodes manage the storage attached to the nodes they run on, and serve read and write requests from the file system's users. For the reliability of the system, the DataNodes report Heartbeat and Blockreport messages to the NameNode at regular intervals, and the NameNode controls the DataNodes through these messages [11]. The master is also responsible for the JobTracker, while each slave runs a TaskTracker. Since the slaves do not have to share any memory or disks with each other, the DataNodes and the NameNode can run on a whole bunch of low-cost commodity servers independently, and we can easily plug in additional commodity hardware as the data repository grows.

One key feature of Hadoop is the Hadoop Distributed File System (HDFS), which gives it scalable, efficient, and reliable properties. HDFS is a special distributed file system because of its high fault tolerance and support for low-cost hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets [11]. Data in a Hadoop cluster is broken into small blocks and distributed across the cluster; in this way, functions such as Map and Reduce can be executed on subsets of the larger data set, which is a useful method for big data processing. Another core component, called MapReduce, is a parallel programming technique [12]. The programming model is divided into two functions, namely Map and Reduce. The grid operators supply keywords to the program; the Map function processes these keywords and produces a set of intermediate key/value pairs, then passes these pairs to the Reduce function, which accepts the key/value pairs, merges the values together to generate a smaller set of values as the output, and sends the output back to the users. The working principle of Hadoop is shown in Fig. 2.

Fig. 2. The working principle of Hadoop
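As a toy illustration of the Map/Reduce flow described above, the following sketch (plain Python rather than the Hadoop Java API, with a hypothetical record layout) counts abnormal-frequency readings per place ID; in a real Hadoop job, the same two functions would be distributed across the TaskTrackers.

```python
from collections import defaultdict

# Hypothetical smart meter records: (place_id, frequency_hz).
records = [(1, 60.3), (1, 59.9), (2, 59.7), (2, 60.0), (2, 60.25)]

def map_phase(record):
    """Map: emit an intermediate (place_id, 1) pair for every
    reading whose frequency falls outside the normal band."""
    place_id, freq = record
    if freq > 60.2 or freq < 59.8:
        yield (place_id, 1)

def reduce_phase(pairs):
    """Reduce: merge the intermediate pairs by key and sum the
    values, producing a smaller output set."""
    merged = defaultdict(int)
    for key, value in pairs:
        merged[key] += value
    return dict(merged)

intermediate = [pair for rec in records for pair in map_phase(rec)]
counts = reduce_phase(intermediate)
print(counts)  # {1: 1, 2: 2}
```

The key point of the model is that `map_phase` touches each record independently, so different blocks of records can be processed on different DataNodes in parallel before the Reduce step merges the results.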

Hive is an open-source data warehouse solution built on top of Hadoop; it was initially developed by the Facebook team and is currently adopted and developed by other companies such as Amazon and Netflix. The Hive architecture is presented in Figure 3. Hive supports queries expressed in a Structured Query Language (SQL)-like declarative language [13],[14] and helps translate SQL-style commands into map-reduce jobs. With this tool, we can easily create a table, insert the data received from various measurement units into tables, and then select specific data for useful data exploration. Hive is just one of the powerful tools in the Hadoop ecosystem. Due to the page limit, we perform an initial case study with Hadoop and Hive in the following section.

Fig. 3. Hive architecture

B. The framework of the proof-of-concept platform

Figure 4 shows the system framework of the proposed proof-of-concept platform, which can be divided into three layers:

Layer 1: Smart meter. We assume that smart meter devices are located at every transformer. We simulate the time-series data acquired and collected by the smart meters at every sampling time. The smart meter data is embedded into a Lego Mindstorm EV3 Intelligent Brick (ARM9 processor with a Linux-based operating system). We use the illuminated, three-color, six-button interface to indicate the transformer's state, and the built-in 178x128 pixel display enables detailed graph viewing of grid operations.

Layer 2: Hadoop cluster. The Hadoop-based distributed middleware is implemented on a cluster of low-cost Raspberry Pi Model B boards (512 MB/Revision 2). Each Raspberry Pi is a credit-card-sized single-board computer. The Hadoop Distributed File System is modeled on the Google File System; it distributes the smart meter data across multiple Raspberry Pi boards and maintains duplicate copies of all of it. We set a "replication factor" of 3, such that every HDFS block of data is copied to three DataNodes in the cluster. Even in the event of a server failure, the grid operators can still access all the information, thanks to Hadoop's aggressive replication capability.

Layer 3: Hadoop master for grid operators. The NameNode of the Hadoop cluster is implemented on a workstation running a Linux-based operating system. This node acts as the master of the Hadoop cluster, managing the directory of the data and collecting the desired data from the Hadoop cluster for fast processing. At the same time, this node is also one of the DataNodes of the Hadoop cluster. In the following section, the case study will demonstrate the advantage of MapReduce in terms of indexing capability (e.g., fast scan of a great amount of power grid information without loss of accuracy).
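The fault tolerance claimed for Layer 2 can be illustrated with a toy placement model. The node names and the round-robin placement policy below are simplifications of HDFS's real rack-aware placement, chosen only to show why a replication factor of 3 keeps every block readable after one server failure.

```python
import itertools

# Toy model of HDFS block placement: each block is copied to
# `replication` distinct nodes (hypothetical node names).
NODES = ["master", "pi-01", "pi-02"]
REPLICATION = 3

def place_blocks(block_ids, nodes=NODES, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes, round-robin."""
    placement = {}
    ring = itertools.cycle(range(len(nodes)))
    for block in block_ids:
        start = next(ring)
        placement[block] = {nodes[(start + i) % len(nodes)]
                            for i in range(replication)}
    return placement

placement = place_blocks(["blk_1", "blk_2", "blk_3"])

# Simulate one server failure: every block must still have at least
# one replica on a surviving node.
failed = "pi-02"
alive = set(NODES) - {failed}
assert all(replicas & alive for replicas in placement.values())
print("all blocks readable after losing", failed)
```

With three nodes and a replication factor of 3, every block ends up on all three nodes, which matches the behavior observed later when one DataNode is shut down and the grid operational files remain readable.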

Fig. 4. The framework of the proposed proof-of-concept demonstration (simulated smart meter data is implemented on a Lego Mindstorm EV3 Intelligent Brick with an ARM9 processor running a Linux-based operating system)

III. CASE STUDY

A. Hadoop cluster setup

We set up a Hadoop cluster containing three nodes: one NameNode performs as the master and, at the same time, also serves as a DataNode; the other two nodes are purely DataNodes acting as slaves. The master node runs on a virtual machine (Ubuntu 12.04) hosted on an Intel i7 @ 2.20 GHz computer with 16 GB of memory. The two slave nodes (DataNodes) run on low-cost Raspberry Pi boards (700 MHz ARM1176JZF-S core, 256 MB memory, 4 GB SD card, Raspbian operating system with a Linux kernel). Hadoop is written in Java, so a Java virtual machine needs to be installed on the platform; in this paper, we use java-7-openjdk-amd64, and the Hadoop version is Hadoop-1.2.1. All three nodes are connected to the same router, and the NameNode is configured to access the DataNodes without entering a password. After setting up the Hadoop cluster, we install Hive-0.11.0 on the NameNode.

B. Case Study

In this paper, since we set up only a small-scale Hadoop cluster, its data processing ability is not as powerful as that of prevailing commercial Hadoop clusters. We processed a certain amount of smart meter data in a specific format recognizable by Hive, and then compared the processing speeds of Hadoop Hive and a conventional database. We control the NameNode and monitor the DataNodes from another PC via SSH (PuTTY), which makes it convenient to monitor all three nodes on the same screen. The data is appended to a table that contains the place ID, voltage, current, temperature, and frequency; in each row, the different categories of information are separated by a blank space. All the simulated smart meter data is stored in a ".txt" file in the Hadoop cluster in the format described above. We use Hive to create a table on the NameNode side and load the data into the table, making it ready for further processing with MapReduce. Once the table is created, the data is organized as: place ID: integer; voltage: real; current: real; temperature: double; frequency: double. Fig. 5 shows the data format; due to the page limit, we show only the first ten rows of this table.

Fig. 5. Smart meter data format
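To make the data format above concrete, the following sketch parses the space-separated rows into typed records. The row values are hypothetical, and in Hive the corresponding table would be declared with fields delimited by a blank space (roughly `ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '`) before loading the ".txt" file.

```python
# Hypothetical rows in the space-separated ".txt" format:
# place_id voltage current temperature frequency
raw_lines = [
    "1 120.1 15.2 25.3 60.01",
    "2 119.8 14.7 26.1 59.75",
]

def parse_row(line):
    """Split one row on blank spaces and cast each field to the
    type declared for the Hive table."""
    place_id, voltage, current, temperature, frequency = line.split()
    return {
        "place_id": int(place_id),
        "voltage": float(voltage),
        "current": float(current),
        "temperature": float(temperature),
        "frequency": float(frequency),
    }

table = [parse_row(line) for line in raw_lines]
print(table[0]["frequency"])  # 60.01
```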

Fig. 6. The first 10 rows of the selected dataset

In order to manage the power grid, the grid operator needs to know where and when abnormal frequencies and voltages occur, at a constant time interval. Here, we simulate one of these processes using the proof-of-concept testbed. We select the rows whose frequency is higher than 60.2 Hz or lower than 59.8 Hz; these values are regarded as abnormal frequencies. Then, we load these specific rows of data into a new table, as shown in Fig. 6. In this table, we are able to identify which rows contain an abnormal frequency, and each such row also includes the place ID, current, voltage, and temperature. We can then perform more complex data mining techniques to better understand how the power system responds to variations and fluctuations, based on a mixture of smart meter data across the entire distribution system.

C. Comparison and Discussion

In order to compare the performance of Hadoop and a conventional database, we install the MySQL relational database on the DataNode, and then process identical groups of data for a number of trials through both Hadoop Hive (red) and the conventional database (blue). We collect ten groups of data, with sizes ranging from 172.8 MB to 5529.6 MB.
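The abnormal-frequency selection described in part B corresponds to a HiveQL query of roughly the form `SELECT * FROM meter WHERE frequency > 60.2 OR frequency < 59.8` (the table name `meter` is hypothetical). The same filter logic can be sketched in plain Python:

```python
# Hypothetical meter rows: (place_id, voltage, current, temperature, frequency)
rows = [
    (1, 120.1, 15.2, 25.3, 60.01),
    (2, 119.8, 14.7, 26.1, 59.75),
    (3, 121.0, 15.0, 24.9, 60.31),
]

def is_abnormal(freq_hz, low=59.8, high=60.2):
    """A reading is abnormal when it falls outside the [59.8, 60.2] Hz band."""
    return freq_hz < low or freq_hz > high

# Keep the full rows so place ID, voltage, current, and temperature
# remain available for further data mining.
abnormal = [row for row in rows if is_abnormal(row[4])]
print([row[0] for row in abnormal])  # [2, 3]
```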

Fig. 7. Comparison on average data processing time

As shown in Figure 7, with the conventional database the processing time grows exponentially with respect to the data size. In contrast, Hadoop's processing time does not rise dramatically as the data size increases, because the MapReduce technique splits the data into smaller pieces (64 MB blocks) for parallel data processing. However, Hadoop's initialization time is noticeable even for small data sizes. Due to the hardware limitations of this initial work, the data processing ability of Hadoop for Smart Grid applications needs to be further investigated. Given that Hadoop is an open-source platform, we can easily duplicate DataNodes and scale up the Hadoop cluster without reconfiguring the entire data management system. In addition, we manually shut down one of the DataNodes to mimic a switch failure or server power outage. The entire set of grid operational files remained readable, which indicates the reliability of the proposed distributed data middleware. After a fault, the DataNodes communicate with each other to rebalance the data and maintain the desired replication level.

IV. CONCLUSION AND FUTURE WORK

In this paper, we presented a high-availability and fault-tolerant distributed data management platform for Smart Grid applications using low-cost Linux-powered mini-PCs (e.g., Raspberry Pi). In the case study, we compared the proposed distributed data processing approach with a centralized approach in terms of processing time. As shown in our experimental results, the processing time of our approach does not grow exponentially with respect to the data size, demonstrating that the Hadoop-based approach is particularly suitable for large-scale Smart Grid applications. However, it is also worth mentioning that the improved performance is only noticeable when dealing with a relatively large amount of data. In the future, we will build up a massive array of distributed middleware based on low-cost, small-sized PCs to handle complex data storage and processing tasks. The generalized framework can be easily extended and tailored to facilitate Smart Grid applications in a full-scale system. A user-friendly graphical user interface will be developed to visualize and archive the collected data and to process commands and additional information. In addition, we will customize the MapReduce schemes in the Hadoop infrastructure to meet specific application requirements.

V. REFERENCES

[1] W. Su and J. Wang, "Energy Management Systems in Microgrid Operations," The Electricity Journal, vol. 25, no. 8, pp. 45-60, Oct. 2012.
[2] W. Su, J. Wang, and D. Ton, "Smart Grid Impact on Operation and Planning of Electric Energy Systems," Handbook of Clean Energy Systems, A. Conejo and J. Yan, Eds., Wiley, 2013. (in press)
[3] P. Zhang, F. Li, and N. Bhatt, "Next-Generation Monitoring, Analysis, and Control for the Future Smart Control Center," IEEE Trans. on Smart Grid, vol. 1, no. 2, pp. 186-192, Sept. 2010.
[4] H. Mc and E. Stanley, "Grid Analytics: How Much Data Do You Really Need," 2013 IEEE Rural Electric Power Conference (REPC), Apr. 2013.
[5] D. Wang and L. Xiao, "Storage and Query of Condition Monitoring Data in Smart Grid Based on Hadoop," Fourth International Conference on Computational and Information Sciences (ICCIS), pp. 377-380, Aug. 2012.
[6] K. Bakshi, "Considerations for Big Data: Architecture and Approach," 2012 IEEE Aerospace Conference, Mar. 2012.
[7] F. Bach, H. K. Cakmak, H. Maass, and U. Kuehnapfel, "Power Grid Time Series Data Analysis with Pig on a Hadoop Cluster Compared to Multi Core Systems," 21st Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 208-212, Feb. 2013.
[8] Y. Simmhan, A. G. Kumbhare, C. Baohua, and V. Prasanna, "An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds," 2011 IEEE International Conference on Cloud Computing (CLOUD), pp. 582-589, July 2011.
[9] Y. Song, M. M. Wu, and L. Ma, "Design and Realization of the Smart Grid Marketing System Architecture Based on Hadoop," 2012 International Conference on Control Engineering and Communication Technology (ICCECT), pp. 500-503, Dec. 2012.
[10] M. Zaharia, "Introduction to MapReduce and Hadoop," UC Berkeley RAD Lab, Berkeley, CA, USA. [Online].
[11] Official HDFS Architecture Guide. [Online]. Available: http://hadoop.apache.org/docs/stable1/hdfs_design.html#NameNode+and+DataNodes
[12] J. Ekanayake, S. Pallickara, and G. Fox, "MapReduce for Data Intensive Scientific Analyses," IEEE Fourth International Conference on eScience, pp. 277-284, Dec. 2008.
[13] A. Thusoo, J. S. Sarma, N. Jain, Z. Shao, P. Chakka, S. Anthony, H. Liu, P. Wyckoff, and R. Murthy, "Hive - a Warehousing Solution Over a Map-Reduce Framework," Facebook Data Infrastructure Team. [Online]. Available: http://www.vldb.org/pvldb/2/vldb09-938.pdf
[14] T. White, Hadoop: The Definitive Guide, 2nd ed., M. Loukides, Ed. Sebastopol, CA: O'Reilly Media, Inc., Oct. 2010.
