IJRIT International Journal of Research in Information Technology, Volume 3, Issue 3, March 2015, Pg. 178-185

International Journal of Research in Information Technology (IJRIT) www.ijrit.com ISSN 2001-5569

Data Mining with Big Data

Neha Borse 1, Gunjan Patil 2, Mayur Patil 3

1 Student, IT Department, North Maharashtra University, Jalgaon, Maharashtra, India. [email protected]

2 Student, IT Department, North Maharashtra University, Jalgaon, Maharashtra, India. [email protected]

3 Assistant Professor, IT Department, North Maharashtra University, Jalgaon, Maharashtra, India. [email protected]

Abstract

Visualization of massively large datasets presents two important problems. First, the dataset must be prepared for visualization, and traditional dataset manipulation methods fail because of a lack of temporary storage or memory. The second problem is the presentation of the data in the visual medium, particularly real-time visualization of streaming time series data. An ongoing research effort addresses both of these issues using data from two national repositories. That work is presented here, with the results of the effort summarized and future plans, including 3D visualization, outlined.

Keywords: Visualization, Data Streams, Time Series Data, Visual Media, 3D.

1. Introduction

Visualization of data patterns, particularly 3D visualization, represents one of the most important emerging areas of study. Especially for geographic and environmental systems, knowledge discovery and 3D visualization are very active areas of inquiry. Recent advances in association rule mining for time series data, or data streams, make 3D visualization and pattern identification on time series data possible. In streaming time series data the problem is made harder by the dynamic nature of the data. Emerging algorithms allow the identification of time-series motifs that can be used as part of a real-time processing and visualization application. Geographic and environmental systems frequently use sensor networks or other unmanned reporting stations to gather huge volumes of data, which are archived in very large databases. Not all the data gathered is equally important, but the sheer volume of data often clouds and obscures important knowledge, causing it to be neglected or lost. The research presented here outlines an ongoing project working to examine the data held in two very large datasets from national repositories. Problems encountered include dataset navigation, along with storage and searching, data preparation for visualization, and presentation.

2. Theory

Data filtering and analysis are essential tasks in the process of identifying and visualizing the information contained in large datasets, which is required for informed decision-making. The research is developing approaches for time series data that will allow pattern identification and 3D visualization. Research outcomes include an assessment of mining techniques for streaming time series data, as well as supporting algorithms and visualization methods that will allow relevant information to be extracted and understood quickly and appropriately.

2.1 Large scale data for visualization

This research works with datasets from the National Oceanic and Atmospheric Administration (NOAA), a U.S. federal agency focused on the condition of the oceans and the atmosphere. The purpose of the research project is to take meteorological data and analyze it to identify patterns that could help predict future weather events. Data from the GHCN (Global Historical Climatology Network) dataset was initially used. Earlier research had worked with NOAA's Integrated Surface Dataset (ISD). Both of these datasets are open access, and the volume of streaming time series data is significant and growing. The GHCN dataset consists of meteorological data from over 76,000 stations worldwide with over 50 different searchable element types. Examples of element types include minimum and maximum temperature, precipitation amounts, cloudiness levels, and 24-hour wind movement. Each station collects data on different element types.
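To make the structure of the dataset concrete, the sketch below models a single station observation (station, date, element type, value) as it might be held for searching; the class, field, and method names are illustrative assumptions, not part of NOAA's distribution or of the system described in this paper. Element codes such as TMAX, TMIN, and PRCP follow GHCN-Daily's four-character element convention, and -9999 is NOAA's missing-value sentinel (discussed further in Section 4).

```java
// Illustrative sketch (not NOAA code): one searchable observation as stored per station.
// Values are kept as integers in the units used by NOAA (e.g., tenths of degrees Celsius).
public final class DailyObservation {
    private final String stationId;   // e.g., an 11-character GHCN station identifier
    private final int year;
    private final int month;
    private final int day;
    private final String element;     // four-character element type, e.g., "TMAX"
    private final int value;          // -9999 marks a missing or invalid reading

    public DailyObservation(String stationId, int year, int month, int day,
                            String element, int value) {
        this.stationId = stationId;
        this.year = year;
        this.month = month;
        this.day = day;
        this.element = element;
        this.value = value;
    }

    /** True when NOAA's missing-value sentinel is present. */
    public boolean isMissing() {
        return value == -9999;
    }

    public String getElement() { return element; }
    public int getValue()      { return value; }
}
```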

2.2 Time series data analysis

Searching for temporal association rules in time series databases is important for discovering relationships between the various attributes contained in the database and time. Association rule mining provides information in an "if-then" format. Because time series data is being analyzed for this research, time lags are included to find more interesting relationships within the data. A software package from Universidad de La Rioja's EDMANS Group was used to pre-process and analyze the time series from the NOAA datasets. The software package is called KDSeries and was created using R, a language for statistical computing. The KDSeries package contains several functions that pre-process the time series data so knowledge discovery becomes easier and more efficient. The first step in pre-processing is filtering. The time series are filtered using a sliding-window filter chosen by the user. The filters included in KDSeries are Gaussian, rectangular, maximum, minimum, median, and a filter based on the Fast Fourier Transform. Important minimum and maximum points of the filtered time series are then identified. These optima are used to identify important episodes in the time series. The episodes include increasing, decreasing, and horizontal trends, as well as values above, below, or between user-defined thresholds. After simple and complex episodes are defined, each episode is viewed as an item in order to create a transactional database. Another R-based software package, arules, makes this possible. The arules package provides algorithms that seek out items that appear within a window of a width defined by the user. Temporal association rules are then extracted from this database. A simplified sketch of the pre-processing steps appears below.
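The pre-processing described above was carried out with the R packages KDSeries and arules; the Java sketch below is only an illustration of the first two steps (a sliding-window median filter followed by simple episode labeling) and is not the authors' implementation. The window width, tolerance value, and method names are assumptions chosen for the example; the actual rule extraction was done by arules in R.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a sliding-window median filter and simple episode labeling,
// mirroring the kind of pre-processing KDSeries performs before association rule mining.
public final class SeriesPreprocessing {

    /** Smooth a series with a sliding-window median filter of odd width. */
    public static double[] medianFilter(double[] series, int window) {
        double[] out = new double[series.length];
        int half = window / 2;
        for (int i = 0; i < series.length; i++) {
            int from = Math.max(0, i - half);
            int to = Math.min(series.length, i + half + 1);
            double[] win = Arrays.copyOfRange(series, from, to);
            Arrays.sort(win);
            out[i] = win[win.length / 2];
        }
        return out;
    }

    /** Label each step of the filtered series as an INCREASING, DECREASING, or HORIZONTAL episode. */
    public static List<String> labelEpisodes(double[] filtered, double tolerance) {
        List<String> episodes = new ArrayList<>();
        for (int i = 1; i < filtered.length; i++) {
            double delta = filtered[i] - filtered[i - 1];
            if (delta > tolerance) {
                episodes.add("INCREASING");
            } else if (delta < -tolerance) {
                episodes.add("DECREASING");
            } else {
                episodes.add("HORIZONTAL");
            }
        }
        return episodes;   // each label can then be treated as an item in a transaction
    }
}
```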

3. Big Data Characteristics: HACE Theorem

HACE Theorem: Big Data starts with large-volume, heterogeneous, autonomous sources with distributed and decentralized control, and seeks to explore complex and evolving relationships among data. These characteristics make it an extreme challenge to discover useful knowledge from Big Data. In a naive sense, we can imagine that a number of blind men are trying to size up a giant elephant (see Fig. 1), which will be the Big Data in this context. The goal of each blind man is to draw a picture of the elephant according to the part of the information he collects during the process. Because each person's view is limited to his local region, it is not surprising that the blind men will each conclude independently that the elephant "feels" like a rope, a hose, or a wall, depending on the region each of them is limited to. To make the problem even more complicated, let us assume that the elephant is growing rapidly and its pose changes constantly, and that each blind man may have his own (possibly unreliable and inaccurate) information sources that tell him biased knowledge about the elephant (e.g., one blind man may exchange his feeling about the elephant with another blind man, where the exchanged knowledge is inherently biased). Exploring the Big Data in this scenario is equivalent to aggregating heterogeneous information from the different sources (blind men) to help draw the best possible picture revealing the genuine gesture of the elephant in a real-time fashion. Indeed, this task is not as simple as asking each blind man to describe his feelings about the elephant and then getting an expert to draw one single picture with a combined view, because each individual may speak a different language (heterogeneous and diverse information sources) and they may even have privacy concerns about the messages they deliver in the information exchange process.

Figure 1. The blind men and the giant elephant: the localized (limited) view of each blind man leads to a biased conclusion.

3.1 Huge Data with Heterogeneous and Diverse Dimensionality

One of the fundamental characteristics of Big Data is the huge volume of data represented by heterogeneous and diverse dimensionalities. This is because different information collectors use their own schemata or protocols for data recording, and the nature of different applications also results in diverse data representations. For example, each single human being in a biomedical setting can be represented by simple demographic information such as gender, age, family disease history, and so on. For X-ray examination and CT scans of each individual, images or videos are used to represent the results because they provide visual information for doctors to carry out detailed examinations. For a DNA or genomic-related test, microarray expression images and sequences are used to represent the genomic information because this is the way that current techniques acquire the data. Under such circumstances, the heterogeneous features refer to the different types of representations for the same individuals, and the diverse features refer to the variety of features involved in representing each single observation. Imagine that different organizations (or health practitioners) have their own schemata to represent each patient; the data heterogeneity and diverse dimensionality issues become major challenges if we are trying to enable data aggregation by combining data from all sources.
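As a small illustration of this heterogeneity (all type and field names below are invented for the example and do not come from any real medical schema), the same patient may be described by a demographic record at one organization and by imaging or genomic references at another, so aggregation requires reconciling structurally different records:

```java
import java.util.List;

// Invented example types: the same person represented under different schemata.
// Aggregating them means mapping structurally different records to one patient.
record DemographicRecord(String patientId, String gender, int age, List<String> familyDiseaseHistory) {}

record ImagingRecord(String patientId, String modality, String imagePath) {}          // e.g., "CT", "XRAY"

record GenomicRecord(String patientId, String microarrayImagePath, String sequence) {}

// One possible aggregated view combining the heterogeneous sources for a single patient.
record AggregatedPatient(DemographicRecord demographics,
                         List<ImagingRecord> imaging,
                         List<GenomicRecord> genomics) {}
```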

3.2 Autonomous Sources with Distributed and Decentralized Control

Autonomous data sources with distributed and decentralized controls are a main characteristic of Big Data applications. Being autonomous, each data source is able to generate and collect information without involving (or relying on) any centralized control. This is similar to the World Wide Web (WWW) setting, where each web server provides a certain amount of information and each server is able to fully function without necessarily relying on other servers. On the other hand, the enormous volumes of data also make an application vulnerable to attacks or malfunctions if the whole system has to rely on a centralized control unit. For major Big Data-related applications, such as Google, Flickr, Facebook, and Walmart, a large number of server farms are deployed all over the world to ensure nonstop services and quick responses for local markets. Such autonomous sources are not only the outcome of technical designs, but also the result of legislation and regulation in different countries and regions. For example, the Asian markets of Walmart are inherently different from its North American markets in terms of seasonal promotions, top-selling items, and customer behaviors. More specifically, local government regulations also affect the wholesale management process and result in restructured data representations and data warehouses for local markets.

3.3 Complex and Evolving Relationships

While the volume of Big Data increases, so do the complexity and the relationships underneath the data. In the early stage of centralized information systems, the focus was on finding the best feature values to represent each observation. This is similar to using a number of data fields, such as age, gender, income, education background, and so on, to characterize each individual. This type of sample-feature representation inherently treats each individual as an independent entity without considering their social connections, which are one of the most important factors of human society. Our friend circles may be formed based on common hobbies, or people may be connected by biological relationships. Such social connections commonly exist not only in our daily activities but are also very popular in cyberworlds. For example, major social network sites, such as Facebook or Twitter, are mainly characterized by social functions such as friend connections and followers (in Twitter). The correlations between individuals inherently complicate the whole data representation and any reasoning process on the data. In the sample-feature representation, individuals are regarded as similar if they share similar feature values, whereas in the sample-feature-relationship representation, two individuals can be linked together (through their social connections) even though they might share nothing in common in the feature domains at all. In a dynamic world, the features used to represent the individuals and the social ties used to represent our connections may also evolve with respect to temporal, spatial, and other factors. Such a complication is becoming part of the reality for Big Data applications, where the key is to take the complex (nonlinear, many-to-many) data relationships, along with the evolving changes, into consideration in order to discover useful patterns from Big Data collections.
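To make the distinction concrete, the sketch below (with illustrative names only) contrasts a plain sample-feature representation, a map from each individual to feature values, with a sample-feature-relationship representation that also stores social links, so two individuals can be related even when their feature vectors share nothing in common:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: sample-feature vs. sample-feature-relationship representations.
public final class SocialRepresentation {

    // Sample-feature view: each individual is an independent feature vector.
    private final Map<String, Map<String, Double>> features = new HashMap<>();

    // Relationship view: explicit social links between individuals (e.g., friends, followers).
    private final Map<String, Set<String>> links = new HashMap<>();

    public void setFeature(String person, String feature, double value) {
        features.computeIfAbsent(person, p -> new HashMap<>()).put(feature, value);
    }

    public void addLink(String personA, String personB) {
        links.computeIfAbsent(personA, p -> new HashSet<>()).add(personB);
        links.computeIfAbsent(personB, p -> new HashSet<>()).add(personA);
    }

    /** Two people are related if a social link exists, regardless of feature similarity. */
    public boolean related(String personA, String personB) {
        return links.getOrDefault(personA, Set.of()).contains(personB);
    }
}
```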

4. Methodology

The data used is found on NOAA's FTP site in the form of .dly files. Each station has its own .dly file, which is updated daily (if the station still collects data). Each .dly file contains all the data that has ever been collected for that station. Whenever new data is added for a station, it is appended to the end of the existing .dly file, which presents the problem that each file must be downloaded again to keep the local database current. To obtain the data, a Java-based program was designed to download each file in the folder holding the .dly files. The Java program used the Apache Commons Net library to transfer the files from the NOAA FTP server. After downloading all of the .dly files, the Java program opens an input stream to each of the downloaded files (one at a time). Each line of a .dly file contains a separate data record, so the Java program reads in each line and uses it to build a MySQL "INSERT" statement that places the data into a local database. At one point, disk space was exhausted on the local machine, and the researchers had to upgrade the disk from ~200 GB to 256 GB to continue inserting data into the relational database. Once all of the data was placed into the local database, a web interface was designed that allowed users to search the dataset. The interface permits users to search by country, state, date range, and values that are <, <=, >, >=, !=, or == to any chosen value. Because the dataset contains the value -9999 for any record that is invalid or was not collected, the web interface also has the option to exclude any -9999 values from the results. The results are output with each line containing a different data result, and each result consisting of month, day, year, and data value. A minimal sketch of this pipeline appears below.
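Since the program itself is not reproduced in the paper, the following is a minimal sketch of the pipeline described above, assuming the Apache Commons Net FTPClient and a MySQL JDBC connection are available. The FTP host and directory, the station file name, the table name ghcn_daily and its columns, and the JDBC connection string are placeholders, and the fixed-width offsets follow the published GHCN-Daily .dly layout (station, year, month, element, then 31 value/flag groups); treat all of these as assumptions rather than details taken from the authors' program.

```java
import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.commons.net.ftp.FTPClient;

// Minimal sketch of the described pipeline (not the authors' program): download a .dly file
// over FTP, parse its fixed-width records, and insert the values into a local MySQL database.
public final class DlyLoader {

    /** Download one .dly file from the NOAA FTP server (host, path, and file are placeholders). */
    static void download(String host, String remoteDir, String fileName, String localPath) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect(host);
        ftp.login("anonymous", "anonymous");   // GHCN-Daily is served via anonymous FTP
        ftp.enterLocalPassiveMode();
        ftp.changeWorkingDirectory(remoteDir);
        try (OutputStream out = new FileOutputStream(localPath)) {
            ftp.retrieveFile(fileName, out);
        }
        ftp.logout();
        ftp.disconnect();
    }

    /** Parse a local .dly file and insert its daily values, skipping the -9999 sentinel. */
    static void load(String localPath, Connection db) throws Exception {
        // Assumed target table: ghcn_daily(station, year, month, day, element, value).
        String sql = "INSERT INTO ghcn_daily (station, year, month, day, element, value) VALUES (?,?,?,?,?,?)";
        try (BufferedReader reader = new BufferedReader(new FileReader(localPath));
             PreparedStatement insert = db.prepareStatement(sql)) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Fixed-width layout per the GHCN-Daily readme: station (cols 1-11), year (12-15),
                // month (16-17), element (18-21), then 31 value/flag groups of 8 characters each.
                String station = line.substring(0, 11);
                int year = Integer.parseInt(line.substring(11, 15).trim());
                int month = Integer.parseInt(line.substring(15, 17).trim());
                String element = line.substring(17, 21);
                for (int day = 1; day <= 31; day++) {
                    int start = 21 + (day - 1) * 8;
                    int value = Integer.parseInt(line.substring(start, start + 5).trim());
                    if (value == -9999) continue;   // invalid or uncollected reading
                    insert.setString(1, station);
                    insert.setInt(2, year);
                    insert.setInt(3, month);
                    insert.setInt(4, day);
                    insert.setString(5, element);
                    insert.setInt(6, value);
                    insert.addBatch();
                }
            }
            insert.executeBatch();
        }
    }

    public static void main(String[] args) throws Exception {
        // Example values only: a single station file, a local path, and a local MySQL database.
        download("ftp.ncdc.noaa.gov", "/pub/data/ghcn/daily/all", "USW00094728.dly", "USW00094728.dly");
        try (Connection db = DriverManager.getConnection("jdbc:mysql://localhost/noaa", "user", "password")) {
            load("USW00094728.dly", db);
        }
    }
}
```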

5. Results

Figure 2. Real-time visualization of a user query.

Figure 3. NOAA reporting stations in Google Earth.

Figure 4. Home page of Data Mining with Big Data.

Figure 5. Login page for user.

Figure 6. User registration.

6. CONCLUSION

This system performs visualization of Big Data from NOAA, and the result is a graph that can support decision-making for weather-related activities. The data used is found on NOAA's FTP site in the form of .dly files. Each station has its own .dly file, which is updated daily. Each .dly file contains all the data that has ever been collected for that station. Whenever new data is added for a station, it is appended to the end of the current .dly file, which presents the problem that each file must be downloaded all over again to keep the local database current.

REFERENCES

[1] A. Mueen and E. Keogh, "Online Discovery and Maintenance of Time Series Motifs," Proceedings of the 16th ACM Conference on Knowledge Discovery and Data Mining (KDD '10), pp. 1089-1098.

[2] P. Morreale, F. Qi, P. Croft, R. Suleski, B. Sinnicke, and F. Kendall, "Real-Time Environmental Monitoring and Notification for Public Safety," IEEE Multimedia, vol. 17, no. 2, 2010, pp. 4-11.

[3] NOAA Global Historical Climatology Network (GHCN) Database, http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/

[4] NOAA Integrated Surface Database (ISD), http://www.ncdc.noaa.gov/oa/climate/isd/index.php

[5] "IBM What Is Big Data: Bring Big Data to the Enterprise," http://www-01.ibm.com/software/data/bigdata/, IBM, 2012.

[6] M.H. Alam, J.W. Ha, and S.K. Lee, "Novel Approaches to Crawling Important Pages Early," Knowledge and Information Systems, vol. 33, no. 3, pp. 707-734, Dec. 2012.

[7] J. Bughin, M. Chui, and J. Manyika, Clouds, Big Data, and Smart Assets: Ten Tech-Enabled Business Trends to Watch, McKinsey Quarterly, 2010.

[8] S. Banerjee and N. Agarwal, "Analyzing Collective Behavior from Blogs Using Swarm Intelligence," Knowledge and Information Systems, vol. 33, no. 3, pp. 523-547, Dec. 2012.

[9] R. Ahmed and G. Karypis, "Algorithms for Mining the Evolution of Conserved Relational States in Dynamic Networks," Knowledge and Information Systems, vol. 33, no. 3, pp. 603-630, Dec. 2012.

Authors

Name: Neha Borse. Birth Place: Pune, Maharashtra. Birth Date: 5 February 1994. Completed a Diploma in Information Technology (MSBTE) and is pursuing a Bachelor's degree in Information Technology at NMU, Jalgaon, Maharashtra.

Name: Gunjan Patil. Birth Place: Jalgaon, Maharashtra. Birth Date: 8 October 1992. Pursuing a Bachelor's degree in Information Technology at NMU, Jalgaon, Maharashtra.

Name: Mayur Patil. Birth Place: Chalisgaon, Maharashtra. Birth Date: 13 May 1991. Completed a Bachelor of Engineering in Information Technology at the University of Pune, is pursuing a Master's degree in Computer Science and Engineering at NMU, and is working as an Assistant Professor in the IT Department, SSBT's COET, Jalgaon, Maharashtra.
