IJRIT International Journal of Research in Information Technology, Volume 2, Issue 9, September 2014, Pg. 558-566

International Journal of Research in Information Technology (IJRIT)

www.ijrit.com

ISSN 2001-5569

Data Structure and Algorithm for Big Database

Nwakanma I. C., Egwudei Emm-Ani, Amadi Chinonye Faith, Judith Ogbeuka, Ogbonna Dave E.
Department of Information Technology, Federal University of Technology, Owerri, Nigeria.
[email protected], [email protected], [email protected], [email protected]

ABSTRACT
We define big data as data that is too big to fit in main memory, which imposes a structure requirement on the data. The words "index" and "metadata" suggest that there are underlying data and structures, and these structures must scale to much larger sizes while using the memory hierarchy efficiently. This paper describes the study and outlines the underlying data structures for managing big data, with particular attention to linear data structures (stacks and queues) and non-linear data structures (trees and graphs). Furthermore, it explains how we can work with large amounts of data and still achieve high performance in a small time frame and with low space usage; in other words, favourable time and space complexity. Finally, the paper poses some pressing questions and issues on big data, gives recommendations for further exploration, and closes with a reading list and some research questions.
Keywords: Data, Big data, Structure, Stack, Queue, Tree, Graph

1. INTRODUCTION
The problem is rooted in the expression, "create indexes on a table with 1 billion users in 20 minutes and use 1 year to build the indexes on it". We define big data as data that is too big to fit in main memory, which imposes a structure requirement on the data. The words "index" and "metadata" suggest that there are underlying data and structures, and these structures must scale to much larger sizes while using the memory hierarchy efficiently. According to McKinsey [1], Big Data refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze. There is no explicit definition of how big a dataset should be in order to be considered Big Data, and new technology has to be in place to manage this Big Data phenomenon. IDC defines Big Data technologies as a new generation of technologies and architectures designed to extract value economically from very large volumes of a wide variety of data by enabling high-velocity capture, discovery and analysis. According to O'Reilly, "Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or does not fit the structures of existing database architectures. To gain value from these data, there must be an alternative way to process it." [2]


Reference [4] defined a data structure as a collection of data elements whose organization is characterized by the accessing operations used to store and retrieve the individual data elements. This is straightforward for small amounts of data and, as this paper will show, remains surprisingly manageable for big data once a proper structure is in place. We also know that data is transferred in blocks between RAM and disk, and these block transfers dominate the running time of a processor. Therefore, for computation over large numbers of elements to perform faster, the data is handled in blocks sized to fit the available memory for optimal performance. This undoubtedly results in a more complex structure of the data, but with reduced processing time and faster sorting. Moreover, a computer uses the fetch-execute cycle to carry out pipelined operations on the instruction register: it performs instructions one after another, block by block, and when a task is too big it queues or stacks the work. There can be problems with these linear data structures, but fortunately there are ways to work around them, which are explored in the succeeding parts of this article. Random access, in turn, goes with trees and graphs.

Figure 1: Performance bounds parameterized by block size B, memory size M, and data size N (data moves between RAM and disk in blocks of size B)
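As an illustration of these bounds (our own back-of-the-envelope arithmetic, not figures from the source): scanning N records laid out contiguously on disk costs about N/B block transfers, while a search that follows one root-to-leaf path in a B-tree of fanout B costs about log_B N transfers. With N = 10^9 records and B = 10^3 records per block, a full scan needs on the order of 10^6 transfers, whereas a single search needs only about 3.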

2. Characteristics of Big Data
Big Data is not just about the size of data but also includes data variety and data velocity. Together, these three attributes form the three Vs of Big Data.


Figure 2: The three V's of Big Data

Volume is synonymous with the "big" in the term "Big Data". Volume is a relative term: some smaller-sized organizations are likely to have mere gigabytes or terabytes of data storage, as opposed to the petabytes or exabytes of data that big global enterprises have. Data volume will continue to grow, regardless of the organization's size. There is a natural tendency for companies to store data of all sorts: financial data, medical data, environmental data and so on. Many of these companies' datasets are within the terabytes range today, but soon they could reach petabytes or even exabytes. Data can come from a variety of sources (typically both internal and external to an organization) and in a variety of types. With the explosion of sensors and smart devices, as well as social networking, data in an enterprise has become complex because it includes not only structured traditional relational data, but also semi-structured and unstructured data.


Structured data: This type describes data which is grouped into a relational scheme (e.g., rows and columns within a standard database). The data's configuration and consistency allow it to respond to simple queries and yield usable information, based on an organization's parameters and operational needs.

Semi-structured data [6]: This is a form of structured data that does not conform to an explicit and fixed schema. The data is inherently self-describing and contains tags or other markers to enforce hierarchies of records and fields within the data. Examples include weblogs and social media feeds.

Unstructured data: This type of data consists of formats which cannot easily be indexed into relational tables for analysis or querying. Examples include images, audio and video files.

The velocity of data, in terms of the frequency of its generation and delivery, is also a characteristic of big data. The conventional understanding of velocity typically considers how quickly the data arrives and is stored, and how quickly it can be retrieved. In the context of Big Data, velocity should also be applied to data in motion: the speed at which the data is flowing. The various information streams and the increase in sensor network deployment have led to a constant flow of data at a pace that has made it impossible for traditional systems to handle. Handling the three Vs helps organizations extract the value of Big Data. The value comes in turning the three Vs into the three Is:
• Informed intuition: predicting likely future occurrences and which courses of action are more likely to be successful.
• Intelligence: looking at what is happening now, in real time (or close to real time), and determining the action to take.
• Insight: reviewing what has happened and determining the action to take.

2.1 Linear Data Structure Approach
In a linear data structure, memory allocation may be classified as static allocation or dynamic allocation. In static allocation, a fixed size of memory is reserved before loading and executing a program; if that reserved memory is insufficient or too large, it may cause the program to fail or waste memory space. There are two methods of stack implementation: static and dynamic. A static stack implementation uses an array, while a dynamic stack implementation can be achieved using a linked list, as the linked list is a dynamic data structure. The limitations of the static data structure can be removed using the dynamic implementation: memory is efficiently utilized with pointers, and memory is allocated only when an element is inserted onto the stack. The stack can grow or shrink, but for our purposes the stack will continue to grow, with a proportionate growth in processing speed, and hence the linked list approach is pertinent. However, if a small and/or fixed amount of data is being dealt with, it is often simpler to implement the stack as an array; a minimal array-based sketch follows.
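A minimal sketch of the static, array-based stack in PHP (the class name ArrayStack and the default capacity are our own illustration, not from the source):

<?php
// Static stack: a fixed-capacity array with an index 'top' marking the
// most recent element; top = -1 means the stack is empty.
class ArrayStack {
    private $items = array();
    private $top = -1;
    private $capacity;

    function __construct($capacity = 100) {
        $this->capacity = $capacity;
    }

    function push($value) {
        if ($this->top == $this->capacity - 1) {
            return false;                      // overflow: reserved space exhausted
        }
        $this->items[++$this->top] = $value;
        return true;
    }

    function pop() {
        if ($this->top == -1) {
            return null;                       // underflow: nothing to remove
        }
        return $this->items[$this->top--];
    }
}

$s = new ArrayStack(3);
$s->push(5);
$s->push(7);
echo $s->pop();                                // prints 7
?>

If the reserved capacity turns out to be too small, pushes simply fail, which is exactly the limitation of static allocation discussed above.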

3. Implementation of Stack Using a Linked List
When a stack is implemented as a linked list, each node in the linked list contains the data and a pointer that gives the location of the next node in the list. In this implementation there is no need to declare the size of the stack in advance, since nodes are created and deleted dynamically. A pointer variable TOP is used to point to the top element of the stack. Initially, TOP is set to NULL to indicate an empty stack. Whenever a new element is to be inserted into the stack, a new node is created, the element is placed in the node, and TOP is modified to point to this new node.
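A minimal sketch of this dynamic, linked-list stack in PHP (the class names Node and LinkedStack are our own illustration):

<?php
// Each node holds the data and a pointer to the next node; TOP points to
// the most recently inserted node and is null for an empty stack.
class Node {
    public $data;
    public $next;
    function __construct($data, $next = null) {
        $this->data = $data;
        $this->next = $next;
    }
}

class LinkedStack {
    private $top = null;

    function push($data) {
        // Create a new node whose 'next' is the old top, then repoint TOP.
        $this->top = new Node($data, $this->top);
    }

    function pop() {
        if ($this->top === null) {
            return null;                       // empty stack
        }
        $data = $this->top->data;
        $this->top = $this->top->next;         // unlink the old top node
        return $data;
    }
}
?>

Memory is allocated only when push() creates a node, so the stack grows and shrinks with the data, as described above.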


4. Application of Stack
Stacks have many applications. For example, when a processor executes a program and a function call is made, the called function must know where to return control, so the current address of program execution is pushed onto the stack. Once the function finishes, the saved address is popped from the stack and execution of the program resumes. The reason for using functions in a program is to segment a large or long program into blocks of code; in doing so, the blocks become distinct, readable, easily maintained and fast to run. If a series of function calls occurs, the successive return addresses are pushed onto the stack in Last-In-First-Out order so that each function returns to its calling program. Stacks support recursive calls in the same manner as conventional non-recursive calls. Recursive functions are functions that call themselves each time they run, as in the following PHP function, which prints a string in reverse:

<?php
// Recursive function using PHP: prints the characters of $str in reverse order.
function recursive($str) {
    if (strlen($str) > 0) {
        recursive(substr($str, 1));   // recurse on the rest of the string first
    }
    echo substr($str, 0, 1);          // then print the first character
    return;
}
?>

Stacks are also used by compilers in the process of evaluating expressions and generating machine language code. Two simple applications of stacks are described below.

Reversal of a string: We can reverse a string by pushing its characters onto a stack; when the whole string has been pushed, we pop the characters off the stack and obtain the reversed string.

INPUT: EVAD*HTIAF*IEDUWGE

OUTPUT: EGWUDEI*FAITH*DAVE

Figure 3: LIFO operation on the stack (the characters pop off as E G W U D E I * F A I T H * D A V E)
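The same reversal can be written with an explicit stack, using PHP's built-in array_push and array_pop; the function name reverseWithStack is our own illustration:

<?php
// Reverse a string by pushing each character onto a stack and then
// popping them off in Last-In-First-Out order.
function reverseWithStack($str) {
    $stack = array();
    for ($i = 0; $i < strlen($str); $i++) {
        array_push($stack, $str[$i]);          // push characters left to right
    }
    $reversed = '';
    while (count($stack) > 0) {
        $reversed .= array_pop($stack);        // pop in reverse order
    }
    return $reversed;
}

echo reverseWithStack("EVAD*HTIAF*IEDUWGE");   // prints EGWUDEI*FAITH*DAVE
?>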

Evaluation of arithmetic expressions: Stacks are also applicable to the evaluation of arithmetic expressions. Given an expression in postfix notation, it can be evaluated using a stack as follows (a short PHP sketch follows the list):
• Scan or traverse the symbols from left to right.


• If the scanned symbol is an operand, push it onto the stack.
• If the scanned symbol is an operator, pop two elements from the stack.
• Perform the indicated operation.
• Push the result onto the stack.
• Repeat the above procedure till the end of the input is encountered.
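A minimal sketch of this procedure in PHP; it assumes a space-separated, well-formed postfix expression, and the function name evalPostfix is our own illustration:

<?php
// Evaluate a space-separated postfix expression such as "3 4 + 2 *".
function evalPostfix($expr) {
    $stack = array();
    foreach (explode(' ', $expr) as $token) {
        if (is_numeric($token)) {
            array_push($stack, (float)$token);     // operand: push it
        } else {
            $b = array_pop($stack);                // operator: pop two operands
            $a = array_pop($stack);
            switch ($token) {
                case '+': array_push($stack, $a + $b); break;
                case '-': array_push($stack, $a - $b); break;
                case '*': array_push($stack, $a * $b); break;
                case '/': array_push($stack, $a / $b); break;
            }
        }
    }
    return array_pop($stack);                      // the final result
}

echo evalPostfix("3 4 + 2 *");                     // prints 14
?>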

5. The Queue Approach
A queue is a linear data structure in which elements are removed in the same order in which they were inserted: the element that was inserted first is removed first, that is, First In First Out (FIFO). Insertion takes place at the rear and deletion takes place at the front.

5.1 Implementation of Queue Using a Circular Queue
In the straightforward array implementation of a queue, the elements must be shifted towards the leftmost end of the array to make free space available at the rear for new insertions once 'rear' equals (size of the array) - 1 and 'front' is not equal to 0. For a queue with a large number of elements, shifting the elements in this way is wasteful, which is a drawback of the simple queue; to remove this drawback, the circular queue implementation is used. In the circular queue implementation, if 'rear' is equal to (size of the queue) - 1 and 'front' is not equal to 0, then a new element is inserted into the array at subscript 0. The array is thus imagined as a circular list: if the value of 'rear' or 'front' is (size of the queue) - 1, then the next value of 'rear' or 'front' is 0; otherwise the next value is one more than the current value. In this implementation, 'front' and 'rear' are initialized to -1. The overflow condition is "'front' is equal to the next value of rear's current value", and the underflow condition is "'front' is equal to -1". In this implementation, if 'front' is equal to 'rear' it means there is exactly one element in the circular queue (in some other implementations, 'front' equal to 'rear' instead means the circular queue is empty). A short PHP sketch follows the illustrations below.

Table 1: Illustration with an array of size 6 (subscripts 0 to 5):

Subscripts:   0   1   2   3   4   5
Empty queue:  _   _   _   _   _   _     front = -1, rear = -1
Insert 10:    10  _   _   _   _   _     front = 0,  rear = 0
Insert 11:    10  11  _   _   _   _     front = 0,  rear = 1
Insert 12:    10  11  12  _   _   _     front = 0,  rear = 2
Insert 13:    10  11  12  13  _   _     front = 0,  rear = 3
Delete 10:    _   11  12  13  _   _     front = 1,  rear = 3

Table 2: Continued operation, showing the rear wrapping around to subscript 0:

Subscripts:   0   1   2   3   4   5
Insert 14:    _   11  12  13  14  _     front = 1, rear = 4
Delete 11:    _   _   12  13  14  _     front = 2, rear = 4
Insert 15:    _   _   12  13  14  15    front = 2, rear = 5
Insert 16:    16  _   12  13  14  15    front = 2, rear = 0   (rear wraps to subscript 0)
Insert 17:    16  17  12  13  14  15    front = 2, rear = 1   (queue is now full)
Delete 12:    16  17  _   13  14  15    front = 3, rear = 1
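A minimal sketch of the circular queue in PHP, following the conventions used above (front and rear start at -1; the class name CircularQueue is our own illustration):

<?php
class CircularQueue {
    private $items;
    private $size;
    private $front = -1;
    private $rear = -1;

    function __construct($size) {
        $this->size = $size;
        $this->items = array_fill(0, $size, null);
    }

    function insert($value) {
        $next = ($this->rear + 1) % $this->size;   // next value of rear wraps to 0
        if ($next == $this->front) {
            return false;                          // overflow: queue is full
        }
        $this->rear = $next;
        if ($this->front == -1) {
            $this->front = $this->rear;            // first element inserted
        }
        $this->items[$this->rear] = $value;
        return true;
    }

    function remove() {
        if ($this->front == -1) {
            return null;                           // underflow: queue is empty
        }
        $value = $this->items[$this->front];
        $this->items[$this->front] = null;
        if ($this->front == $this->rear) {
            $this->front = $this->rear = -1;       // queue becomes empty again
        } else {
            $this->front = ($this->front + 1) % $this->size;
        }
        return $value;
    }
}

$q = new CircularQueue(6);
foreach (array(10, 11, 12, 13) as $v) {
    $q->insert($v);
}
echo $q->remove();                                 // prints 10; front advances to 1
?>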

Non-Linear Data Structure Approach

6. The Tree Approach
A tree [8] is a non-linear data structure in which a specially designated node is called the root and the remaining nodes are divided into n subsets, where each subset itself satisfies the properties of a tree and n is greater than or equal to zero. Again our attention is on conserving memory space as a data structure grows complex and quite large. A binary tree is a tree which is either empty or in which each node has at most two subtrees; that is, each node in a binary tree can have 0, 1 or 2 children. A binary tree is said to be full if every level contains the maximum possible number of nodes.

In a full binary tree [9]:
• Level i will have 2^i nodes.
• If h is the height of the tree, the number of leaf nodes is 2^h.
• The number of internal nodes is 1 + 2 + 2^2 + ... + 2^(h-1) = 2^h - 1.
• The total number of nodes is (number of internal nodes) + (number of leaf nodes) = (2^h - 1) + 2^h = 2^(h+1) - 1.
• Hence, for a full binary tree, number of internal nodes = number of leaf nodes - 1.
• For a full binary tree having n nodes: n = number of internal nodes + number of leaf nodes, so number of leaf nodes = n - (number of leaf nodes - 1), giving number of leaf nodes = (n + 1)/2.
• The height of a full binary tree is log2(number of leaf nodes) = log2((n + 1)/2).

For a general binary tree of height h, level i can have at most 2^i nodes, so:
• The maximum number of nodes in the tree is 1 + 2 + 2^2 + ... + 2^h = 2^(h+1) - 1.
• The minimum number of nodes in the tree is h + 1.
For a binary tree having n nodes:
• The maximum height of the tree is n - 1.
• The minimum height of the tree is log2((n + 1)/2).

6.1 Implementation of Tree Using a Linked List
A binary tree, according to [10], can be represented in memory in two ways: sequential representation and linked list representation. Sequential representation can be redundant and waste a lot of memory space. We are going to focus


on the latter representation for our intent. In the linked list representation, each data item is stored in a structure called a "node"; each node contains three fields: a data part and two pointers that hold the addresses of the left child and the right child. If a node's left or right child is empty, the corresponding link field holds a null value; a leaf node has null values in both of its links (a short sketch follows the list of advantages below). Advantages of the linked list representation of a binary tree:

• No wastage of memory.
• Enhancement of the tree is possible.
• Insertion and deletion involve no data movement, only rearrangement of pointers.
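A minimal sketch of this linked representation in PHP (the class name TreeNode and the traversal function are our own illustration):

<?php
// Each node has a data field and two link fields; an empty link holds null,
// and a leaf node holds null in both of its links.
class TreeNode {
    public $data;
    public $left = null;
    public $right = null;
    function __construct($data) {
        $this->data = $data;
    }
}

// Build a small tree with root 1 and children 2 (left) and 3 (right).
$root = new TreeNode(1);
$root->left = new TreeNode(2);
$root->right = new TreeNode(3);

// In-order traversal: only pointers are followed, no data is moved.
function inorder($node) {
    if ($node === null) {
        return;
    }
    inorder($node->left);
    echo $node->data . " ";
    inorder($node->right);
}
inorder($root);                                    // prints 2 1 3
?>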

Figure 4: Balanced binary tree with buffers of size B

Insert and delete:
• Insert/delete messages are sent down from the root and stored in the buffers.
• When a buffer fills up, it is flushed to the next level down.

Point queries cost O(log_B N) I/Os:
• This is the tree height.

Inserts cost O((log_B N)/B) I/Os amortized:
• Each flush costs O(1) I/Os and flushes B elements.

This means that the data structure can be made platform independent, i.e., it works simultaneously for all values of B and M.
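To put these bounds in concrete terms (our own illustrative arithmetic, not figures from the source): with N = 10^9 items and buffers of B = 10^3 items, a point query touches about log_B N = 3 nodes, so roughly 3 I/Os, while an insert costs about (log_B N)/B = 3/1000 of an I/O amortized, meaning a single block transfer pays for on the order of a few hundred insertions.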

7. The Graph Approach
A graph database contains nodes, edges and properties to represent and store data. In a graph database, every entity contains a direct pointer to its adjacent elements and no index look-ups are required. A graph database is useful when large-scale, multi-level relationship traversals are common, and it is best suited for processing complex many-to-many connections such as social networks. A graph may be captured by a table store that supports recursive joins, such as


BigTable and Cassandra [7]. Graphs are frequently used in all walks of life. A map is a well-known example of a graph: in a map, various connections are made between cities, which are linked by road, railway and air networks. For example, a graph may contain the cities of Nigeria connected by roads, so that the graph is the interconnection of cities by roads. If we give someone our office or home address by drawing the roads, shops and so on on a piece of paper for easy representation, that drawing is also a good example of a graph.

Practical Problem Illustration Using the Third Mainland Bridge
Two land areas, A and B (A is Lagos Island and B is the Lagos Mainland), are separated by water in Lagos State and are connected by three bridges, as shown in the figure below. The township people wondered whether they could start from either land area, walk over each bridge exactly once and return to the starting point.


Figure 5: The 3rd Mainland Bridge problem (land areas A and B joined by three bridges, with the labels 3rd Mainland, Obalende and CMS)

Memory efficiency
There are several different ways to represent a graph in computer memory; the two main representations are (a) the adjacency matrix and (b) the adjacency list. As usual, our attention is focused on the representation that makes the best use of memory, which is the adjacency list representation.

Adjacency list representation of a graph
Suppose G(V, E) is a simple graph (directed or undirected) with n vertices and e edges. The adjacency list has n head nodes corresponding to the n vertices of G, each of which points to a singly linked list of the nodes adjacent to the vertex represented by that head node. In contrast to the adjacency matrix representation, the adjacency list requires only O(n + e) or O(n + 2e) space, depending on whether the graph is directed or undirected respectively, thereby rendering it memory-efficient (a short sketch in PHP is given after the research questions below).

Research questions
There are still open research questions:

• What if blocks have different sizes?
• What is the write-back cost? (Complexity unknown.)
• Is Least Recently Used (LRU) replacement too costly to implement (clock algorithm)?
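Returning to the adjacency list representation described above, here is a minimal sketch in PHP using an associative array of vertex => list of adjacent vertices (the class name Graph and the vertex labels are our own illustration):

<?php
class Graph {
    private $adj = array();

    function addVertex($v) {
        if (!isset($this->adj[$v])) {
            $this->adj[$v] = array();              // head entry for this vertex
        }
    }

    function addEdge($u, $v) {
        // Undirected graph: store the edge in both adjacency lists,
        // which is why the space used is O(n + 2e).
        $this->addVertex($u);
        $this->addVertex($v);
        $this->adj[$u][] = $v;
        $this->adj[$v][] = $u;
    }

    function neighbours($v) {
        return isset($this->adj[$v]) ? $this->adj[$v] : array();
    }
}

// The two land areas of Figure 5 joined by three bridges (parallel edges).
$g = new Graph();
$g->addEdge('A', 'B');                             // 3rd Mainland
$g->addEdge('A', 'B');                             // second bridge
$g->addEdge('A', 'B');                             // third bridge
print_r($g->neighbours('A'));                      // B appears once per bridge
?>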


8. Conclusion
Big data is data that is too big for main memory or that does not fit the structures of existing database architectures. This article has explained that the volume, velocity and variety of data used by any organization should be considered before building a database for that organization, so as to achieve efficient performance and on-time service delivery. We also explained the static and dynamic ways of allocating memory to a data structure using the stack and queue methods, so as to ensure efficient insertion and removal of elements in a sequence.

9. Further reading list
• Seymour Lipschutz, Data Structures, Tata McGraw-Hill.
• Kamthane, Introduction to Data Structures in C, Pearson Education.
• T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Second Edition, Prentice Hall of India Pvt. Ltd, 2006.
• Yedidyah Langsam, Moshe J. Augenstein, Aaron M. Tenenbaum, Data Structures Using C and C++, Prentice Hall India.
• Ellis Horowitz, Sartaj Sahni, Fundamentals of Data Structures, Galgotia Publications.
• Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran, Fundamentals of Data Structures in C, Second Edition, Universities Press, 2009.
• Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran, Computer Algorithms/C++, Second Edition, Universities Press, 2007.

References
[1] James Manyika, et al. Big data: The next frontier for innovation, competition, and productivity. [Online] Available from: http://www.mckinsey.com/insights/mgi/research/technology_and_innovation/big_data_the_next_frontier_for_innovation [Accessed 9th July 2012].
[2] Edd Dumbill. What is big data? [Online] Available from: http://radar.oreilly.com/2012/01/what-is-big-data.html [Accessed 9th July 2012].
[3] Michael A. Bender and Bradley C. Kuszmaul. 'Data Structure and Algorithm for Big Database', Stony Brook University (State University of New York), Tokutek and MIT.
[4] Arup Kr. Bhaumik, Santanu Haldar, Subhrajit Sinha Roy (2010). 'Data Structure Using C', textbook for 2nd-year students, U.P. Technical University (UPTU) and Uttarakhand Technical University (UTU), S. Chand, 1st edition.
[5] Arabinda Saikia, KKHSOU, Nabajyoti Sarma, Swapnanil Gogoi, Tapashi Kashyap Das (2011). 'Master of Computer Application: Data Structure through C Language', Krishna Kanta Handiqui State Open University.
[6] Peter Buneman. Semistructured Data. [Online] Available from: http://homepages.inf.ed.ac.uk/opb/papers/PODS1997a.pdf [Accessed 9th July 2012].
[7] InfoGrid. Operations on a Graph Database. [Online] Available from: http://infogrid.org/blog/2010/03/operations-on-a-graphdatabase-part-4/ [Accessed 9th July 2012].
[8] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, Second Edition, Prentice Hall of India Pvt. Ltd, 2006.
[9] Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran. Fundamentals of Data Structures in C, Second Edition, Universities Press, 2009.
[10] Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran. Computer Algorithms/C++, Second Edition, Universities Press, 2007.

