
International Journal of Advanced Engineering Research and Technology (IJAERT), Volume 3, Issue 6, June 2015, ISSN: 2348-8190

High Performance clustering for Big Data Mining using Hadoop
Rama Satish K V
Assistant Professor,
RNSIT, Bangalore, India

Dr. N P Kavya
Professor & HR Manager
RNSIT, Bangalore, India

Abstract
Nowadays, organizations across the public and private sectors have made a strategic decision to turn big data into competitive advantage. The motivation for, and challenge of, extracting value from big data is similar in many ways to the age-old problem of distilling business intelligence from transactional data. Hadoop is a rapidly growing ecosystem of components, built around the MapReduce programming model and a distributed file system, for implementing MapReduce algorithms in a scalable fashion on commodity hardware with a clustering process. In this paper, we focus on distributed big data storage along with web mining, and on how big data is a solution to many organizational problems. Big data is not only about storing and handling large volumes of data, but also about analyzing the data and extracting the correct information from it in a shorter time span. In this work, we take Hadoop, an open source framework for processing massive datasets on clusters of commodity computers, and demonstrate it using log files for the extraction of information based on user queries. The MapReduce and clustering techniques work together. Finally, we use the clustering technique to discuss a use case showing how enterprises can gain a competitive benefit by being early adopters of big data analytics.
Keywords: Hadoop, data mining, clustering, principal component analysis

I. INTRODUCTION
Cloud computing is emerging as the latest distributed computing paradigm and attracts increasing interest from researchers in the areas of Distributed and Parallel Computing [1] and Service Oriented Computing [2]. Though there is as yet no consensus on what the Cloud is, some of its distinctive aspects as proposed by Ian Foster in [3] can be borrowed for insight: "Cloud computing is a large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically-scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet" [4].
Millions of users share cloud resources by submitting their computing tasks to the cloud system, and scheduling these millions of tasks is a challenge for the cloud computing environment. Cloud computing has emerged as the advanced form of distributed computing, parallel processing and grid computing. As clouds are designed to provide services to external users, providers need to be compensated for sharing their resources and capabilities [7] [14]. Since these computing resources are finite, there is a need for efficient resource allocation algorithms for cloud platforms. Efficient resource and data allocation would help reduce the number of virtual machines used and, in turn, reduce the carbon footprint, leading to significant energy savings [8] [15]. Scheduling in MapReduce can be seen as analogous to this problem [10]. If the scheduling algorithms are designed intelligently, avoiding the overloading of any node and utilizing most of the resources on a particular node, the runtime of jobs can be lowered considerably, again saving energy [11]. These strategies consider different factors, such as a cost matrix generated using the credit of tasks to be assigned to a particular resource, a Quality of Service (QoS) based meta-scheduler, a backfill strategy based lightweight virtual machine scheduler for dispatching jobs, QoS requirements, and the heterogeneity of the cloud environment and workloads [12] [13].
Optimal resource allocation or task scheduling in the cloud should decide the optimal number of systems required in the cloud so that the total cost is minimized and the SLA is upheld. Cloud computing is highly dynamic, and hence resource allocation problems have to be continuously addressed as servers become available or unavailable while, at the same time, customer demand fluctuates. Thus this study focuses on scheduling algorithms in the cloud environment, considering the above-mentioned characteristics, challenges and strategies [18].
MapReduce is a programming model and an associated implementation for processing and generating large data sets [19]. It enables users to specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all the intermediate values associated with the same intermediate key [17]. MapReduce was first used in cloud computing [16]. It was initiated by Google and, together with GFS and BigTable, comprises the backbone of Google's cloud computing platform. MapReduce has achieved increasing success in various applications, ranging from horizontal and vertical search engines to GPUs and multiprocessors. Recently, MapReduce has become a standard programming model for large-scale data analysis. It has seen tremendous growth in recent years, especially for text indexing, log processing, web crawling, data mining, machine learning, and similar workloads.
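To make the roles of the map and reduce functions concrete, the following is a minimal Hadoop Streaming style word-count sketch in Python. It is an illustration only; the file name, the invocation shown in the comment, and the input format are assumptions rather than the setup used in this paper.

#!/usr/bin/env python3
# Minimal Hadoop Streaming word-count sketch: the same script acts as mapper
# or reducer depending on its command-line argument. Illustrative only.
#
# Typical invocation (paths are placeholders):
#   hadoop jar hadoop-streaming.jar \
#       -input /logs/in -output /logs/out \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#       -file wordcount.py
import sys

def mapper():
    # Map: emit an intermediate (word, 1) pair for every token on stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Reduce: Hadoop delivers pairs sorted by key, so counts for the same
    # word arrive consecutively and can be merged with a running total.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()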

II. RELATED WORKS
This section reviews some of the work presented for task scheduling in cloud computing. Rosmy C Jose et al. [21] presented an approach to scheduling scientific workflows on clouds. Scheduling multitask workflows in virtual clusters is an NP-hard problem, and excessive simulation time, possibly weeks, may be needed to produce the optimal schedule using Monte Carlo simulations; reducing this scheduling overhead is necessary in real-time cloud computing. They presented a new workflow scheduling method based on iterative ordinal optimization (IOO). Victoria López et al. [22] presented a heuristic task scheduling algorithm called Balance-Reduce (BAR), in which an initial task allocation is produced first and the job completion time is then reduced gradually by tuning the initial allocation. By taking a global view, BAR can adjust data locality dynamically according to the network state and cluster workload. The simulation results show that BAR is able to deal with large problem instances in a few seconds and outperforms previous related algorithms in terms of job completion time.
Sara del Río et al. [23] presented an idealized Hadoop model to investigate the Hadoop task assignment problem. It was shown that there is no feasible algorithm to find the optimal Hadoop task assignment unless P = NP. Assignments computed by the round-robin algorithm inspired by the then-current Hadoop scheduler were shown to deviate from the optimum by a multiplicative factor in the worst case. A flow-based algorithm was presented that computes assignments that are optimal to within an additive constant. Qingchen Zhang et al. [24] presented the scheduling problem in hybrid clouds, describing the main characteristics to be considered when scheduling workflows, as well as a brief survey of some of the scheduling algorithms used in their systems. To assess the influence of communication channels on job allocation, they compared and evaluated the impact of the available bandwidth on the performance of some of the scheduling algorithms.
Qingchen Zhang et al. [24] presented a Revised Discrete Particle Swarm Optimization (RDPSO) to schedule applications among cloud services that takes both data transmission cost and computation cost into account. Experiments were conducted with a set of workflow applications by varying their data communication costs and computation costs according to a cloud price model. Comparisons were made on makespan, cost optimization ratio and the cost savings achieved with RDPSO, the standard PSO and the BRS (Best Resource Selection) algorithm. Experimental results show that the proposed RDPSO algorithm can achieve much greater cost savings and better performance on makespan and cost optimization. Bogdan Ghit et al. [25] presented an integer linear program (ILP) formulation for the problem of scheduling SaaS customers' workflows onto multiple IaaS providers where SLAs exist at two levels. In addition, they presented heuristics to solve the relaxed version of the presented ILP. Simulation results show that the proposed ILP is able to find low-cost solutions for short deadlines, while the presented heuristics are effective when deadlines are larger.

III. RESEARCH METHODOLOGY
High dimensional data introduce several problems for traditional statistical analysis. Computation time increases more rapidly with the number of dimensions p than with the number of observations n. For combinatorial and projection pursuit algorithms, this increase is of sufficient magnitude that it is not clear how such methods can be made feasible for high dimensional data.
a. Research Objective

• To maintain large datasets for classification and clustering.
• To handle all data processing with MapReduce and Hadoop.
• To efficiently collect and preprocess complex data sets that include huge quantities of data.
• To improve the quality and efficiency of care while cultivating patient centricity through engagement and product personalization.
• To adapt to changing big data processing platforms such as Hadoop, which uses the MapReduce paradigm.


b. Problem definition and Contribution of the paper
A classical supervised clustering problem consists of finding a function which, taking a set of random feature variables as arguments, predicts the value of a one-dimensional discrete random class variable. There exist scenarios, however, where more than one class variable may arise, so the extension of the classical problem to the multidimensional class variable case is increasingly earning the attention of the research community.
Data mining, also popularly known as Knowledge Discovery in Databases, refers to extracting or "mining" knowledge from large amounts of data. Data mining techniques are used to operate on large volumes of data to discover hidden patterns and relationships helpful in decision-making. While data mining and knowledge discovery in databases are frequently treated as synonyms, data mining is actually part of the knowledge discovery process. The sequence of steps involved in extracting knowledge from data is shown in Figure 1.
Recently, various researchers have presented several algorithms for Facebook three product classification based on classification methods. However, the challenge lies not only in finding the Facebook three product words, but also in how dimensionality and scalability are taken into consideration for Facebook three product classification, because in reality the processing involves large and high dimensional data and therefore faces (i) the curse of dimensionality. By handling these criteria, a Facebook three product classification technique is urgently needed for improving classification accuracy. To solve this challenge, feature selection is the main aspect of our research: the feature selection method can address the curse of dimensionality by identifying suitable features. Here, the firefly algorithm is used for this feature selection process.

c. Efficient Facebook three product classification through the proposed feature extraction algorithm for a naïve classifier
The ultimate target of this research is to design and develop a technique for Facebook three product classification using an SVM classifier. The SVM-based Facebook three product filtering is a probabilistic classification technique for Facebook three product filtering, based on the SVM theorem with naïve independence assumptions. Let us consider that each Facebook three product item can be illustrated by a set of features (attributes) $a_n$, where $1 \le n \le N$. Filtering Facebook three product with SVM while considering all features is very difficult and also needs more time. In order to solve this problem, in this paper we propose an efficient algorithm to select the significant features from those available, so as to filter the Facebook three product in an efficient manner. The overall model of the proposed Facebook three product classification system is given in Figure 1, and each part of the framework is elucidated concisely in the following sections.
Figure 1: System architecture. High dimensional data from a high dimensional database is reduced by the proposed PCA algorithm to low dimensional clustered data, which is passed to the classification algorithm to produce the classified output data.
d. Facebook Dataset
The words dataset is taken from the UCI machine learning repository and was created by Mark Hopkins, Erik Reeber, George Forman and Jaap Suermondt of Hewlett-Packard Labs. Their collection of Facebook three product words came from their postmaster and from individuals who had filed Facebook three product, and their collection of non-Facebook three product words came from filed work and personal words; hence the word 'George' and the area code '650' are indicators of non-Facebook three product. This Facebook three product dataset consists of 4601 instances and 58 attributes, of which 57 are continuous real attributes and 1 is a nominal attribute. From the Facebook three product dataset, 80% of the instances (3681 instances) are taken for finding the significant features among the available (58) attributes during the training process, and the other 20% (920 instances) are taken for the testing process with the significant attributes. The descriptions of these attributes are given in Table 1.
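As a small illustration of the 80/20 split described above, the following Python sketch uses pandas and scikit-learn; the file name facebook_three_product.csv and the label column name are hypothetical placeholders for a local export of the dataset, not artifacts provided by the paper.

# Sketch of the 80/20 train/test split described above (assumed local CSV copy).
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("facebook_three_product.csv")   # hypothetical local file
X = data.drop(columns=["label"])                    # 57 feature attributes (assumed column name)
y = data["label"]                                   # 1 nominal class attribute

# 80% (about 3681 instances) for training, 20% (about 920) for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y
)
print(len(X_train), len(X_test))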


Table 1: Description of the attributes of the Facebook three product dataset

Attribute number | Category of attribute       | Description of attribute
A1 to A10        | word_freq_WORD              | Percentage of words in the Like that match WORD
A11 to A20       | char_freq_CHAR              | Percentage of characters in the Comment that match CHAR
A21 to A30       | capital_run_length_average  | Average length of uninterrupted sequences of Follower
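For readers who want to reproduce features of this kind from raw text, the sketch below shows one plausible way to compute the three attribute families of Table 1 for a single text sample. The exact tokenisation and matching rules used by the dataset authors are not given in this paper, so these functions are assumptions for illustration only.

# Rough sketch of the Table 1 attribute families for one text sample (assumed rules).
import re

def word_freq(text: str, word: str) -> float:
    # Percentage of words in the text that match WORD.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    return 100.0 * tokens.count(word.lower()) / len(tokens) if tokens else 0.0

def char_freq(text: str, char: str) -> float:
    # Percentage of characters in the text that match CHAR.
    return 100.0 * text.count(char) / len(text) if text else 0.0

def capital_run_length_average(text: str) -> float:
    # Average length of uninterrupted runs of capital letters, as the attribute name suggests.
    runs = [len(r) for r in re.findall(r"[A-Z]+", text)]
    return sum(runs) / len(runs) if runs else 0.0

sample = "FREE offer!!! Call NOW to claim your free prize"
print(word_freq(sample, "free"), char_freq(sample, "!"), capital_run_length_average(sample))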

IV. PRINCIPAL COMPONENT ANALYSIS (PCA)
The most common derivation of PCA is in terms of a standardized linear projection which maximizes the variance in the projected space [29]. For a set of observed $d$-dimensional data vectors $t_n$, $n \in \{1,\dots,N\}$, the $q$ principal axes $w_j$, $j = 1,\dots,q$, are those ortho-normal axes onto which the retained variance under projection is maximal. It can be shown that the vectors $w_j$ are given by the $q$ dominant eigenvectors (those with the largest associated eigenvalues) of the sample covariance matrix $S = E[(t-\mu)(t-\mu)^T]$, such that $S w_j = \lambda_j w_j$. The $q$ principal components of the observed vector $t_n$ are given by the vector $x_n = W^T (t_n - \mu)$, where $W^T = \{w_1,\dots,w_q\}^T$. The variables $x_j$ are then de-correlated such that the covariance matrix $E[x x^T]$ is diagonal with elements $\lambda_j$. The objective of PCA is to perform dimensionality reduction while preserving as much of the randomness in the high-dimensional space as possible.

Consider an $M \times N$ matrix $D$ whose random vectors are represented as a linear combination of ortho-normal basis vectors $C = \{a_1, a_2, \dots, a_n\}$. The first step is to find the covariance matrix $CV$ from the empirical means, which are obtained by calculating, for each dimension $m$ of the $M \times N$ matrix with $m$ ranging over $1,\dots,M$,

$e(m) = \frac{1}{N} \sum_{n=1}^{N} D[m,n]$    (1)

where $D$ is the matrix of size $M \times N$. Then the deviations from the means are calculated and stored in a matrix $derV$,

$derV = D - e \cdot h, \quad h = 1,\dots,N$    (2)

where $e$ is the vector of empirical means from Eq. (1). Thus, the covariance matrix $CV$ can be given by

$CV = \frac{1}{N} \, derV \cdot derV^{T}$    (3)

The eigenvectors of the covariance matrix are calculated and stored in a matrix $V$. After finding the eigenvalues of the matrix $CV$, we calculate a diagonal matrix of $CV$ which contains the eigenvalues,

$\mathrm{diag}(CV) = V^{-1} \, CV \, V$    (4)

From the above-obtained data, we construct another matrix $W$, represented as

$W(x,y) = V(x,y)$    (5)

where $W$ is an $M \times L$ matrix and $L$ is the number of columns taken from the matrix $V$. The next step of the dimensionality reduction process is to calculate the empirical standard deviation, which contains the square root of each element along the main diagonal of the covariance matrix $CV$; this value is then used to calculate the z-score matrix,

$\sigma(m) = \sqrt{CV[x,y]}, \; x = y = m, \qquad Z_{scr} = \frac{derV}{\sigma \cdot h}$    (6)

Thus, we obtain the reduced-dimension matrix $R$ as the dot product of the conjugate transpose of the matrix $W$, which has dimension $M \times L$, and the z-score matrix, so the dimensionality reduction can be represented as

$R = W^{T} \cdot Z$    (7)
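The steps in Eqs. (1)-(7) can be summarised in a short NumPy sketch. This is a plain restatement of standard PCA using the variable names of the text ($D$ is $M \times N$, $W$ keeps the $L$ dominant eigenvectors); it is not the authors' implementation.

# NumPy sketch of Eqs. (1)-(7); D has M attributes (rows) and N samples (columns).
import numpy as np

def pca_reduce(D: np.ndarray, L: int) -> np.ndarray:
    M, N = D.shape
    e = D.mean(axis=1, keepdims=True)          # Eq. (1): empirical mean per dimension
    derV = D - e                               # Eq. (2): deviations from the mean
    CV = derV @ derV.T / N                     # Eq. (3): covariance matrix (M x M)
    eigvals, V = np.linalg.eigh(CV)            # Eq. (4): eigen-decomposition of CV
    order = np.argsort(eigvals)[::-1]          # sort eigenvectors by decreasing eigenvalue
    W = V[:, order[:L]]                        # Eq. (5): keep the L dominant eigenvectors
    sigma = np.sqrt(np.diag(CV)).reshape(-1, 1)
    Z = derV / sigma                           # Eq. (6): z-score matrix
    return W.T @ Z                             # Eq. (7): reduced matrix R (L x N)

# Example: reduce 57-dimensional data for 100 samples down to 10 dimensions.
R = pca_reduce(np.random.rand(57, 100), L=10)
print(R.shape)   # (10, 100)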


a. Dimensionality Reduction Using PCA
The clustering of high dimensional data deals with many interruptions, so it is quite hard to apply a clustering algorithm to high dimensional data in its actual dimension. The most suitable way is to reduce the dimension of the data for smooth processing of the clustering. A number of methods are used for dimensionality reduction, and those methods are effective to some extent. In the proposed method, we also use a dimensionality reduction algorithm; the PCA algorithm is taken into consideration for the proposed approach. This section gives a complete idea of the PCA algorithm. Let us consider high dimensional data $D$ with dimension $M \times N$. According to the definition of the PCA algorithm, we calculate the empirical mean value $e$ for the high dimensional data $D$. The covariance matrix of the data $D$ is calculated from the deviations of the data from $e$:

$D_{cov} = \frac{1}{N} \, D_e \cdot D_e^{T}$    (8)

The above expression represents the covariance matrix $D_{cov}$, where the deviations of the data $D$ from its empirical mean values and their transpose are represented by $D_e$ and $D_e^{T}$. The empirical standard mean is calculated by virtue of the covariance matrix and the diagonal matrix generated from the data. The empirical standard mean of the data $D$ can be given by the following equation:

$\sigma(D) = \sqrt{D_{cov}[x,y]}$    (9)

where $x, y$ are the subset of row values of the $M \times N$ data matrix and $\sigma(D)$ represents the empirical standard mean value of the data $D$. The reduced dimension is attained for the high dimensional data by calculating the z-score of the data using the empirical standard mean $\sigma(D)$. The values are calculated as per the definition of the PCA algorithm, and the reduced matrix $R$ is obtained as the dot product of the z-score matrix and the conjugate of the matrix $W$, which is obtained from the covariance matrix:

$R = W \cdot Z$

Figure 2: Reduced matrix

The reduced matrix is then considered for the further processing of our proposed approach. The reduced dimensionality helps the smooth processing of the proposed approach.

b. Finding attribute weightage values
The reduced matrix obtained from the previous step is used to find the weightage of the attributes, which are the most relevant part taken for the analysis of the proposed clustering algorithm. The attribute weightage calculation is a separate measure, which we consider as the catalyst for our proposed approach. At first, discretization is performed on the reduced matrix using the attribute threshold, so that the matrix is converted to discretized values. Then the attribute weightage process is carried out. The attribute and data representation obtained from the previous step is shown in Figure 3.

Figure 3: Attribute and data

In order to increase the effectiveness of the algorithm, we find the weightage value for the attributes after the discretization. We have adopted a method in which the weightage for each attribute $W(A_j)$ is calculated using the following expression:

$W(A_j) = \sum_{i=1}^{k} \frac{freq(a_i)\,(freq(a_i) - 1)}{N_{a_i}(N_{a_i} - 1)}$    (10)

where $A_j$ represents the attribute, $a_i$ denotes the unique values in the attribute $A_j$, $freq(a_i)$ signifies the frequency of $a_i$, $N_{a_i}$ represents the total number of values in $A_j$, and $k$ is the number of unique values in the attribute $A_j$.
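To make Eq. (10) concrete, here is a minimal Python sketch (an illustration under the stated assumptions, not the authors' code) that computes the weightage of a single discretized attribute column; the worked example for the matrix in Figure 4 follows after the sketch.

# Minimal sketch of Eq. (10) for one discretized attribute column.
from collections import Counter

def attribute_weightage(column):
    # W(A_j) = sum_i freq(a_i) * (freq(a_i) - 1) / (N * (N - 1))
    n = len(column)
    counts = Counter(column)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

# Columns A2 and A3 of the example matrix shown in Figure 4 below.
print(attribute_weightage(["C", "C", "C"]))   # A2 -> 1.0
print(attribute_weightage(["D", "C", "E"]))   # A3 -> 0.0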
Consider the following example of a reduced matrix after discretization, with attributes A1, A2 and A3:

A1  A2  A3
A   C   D
B   C   C
A   C   E

Figure 4: Example data matrix for attribute weightage calculation

Consider the weightage value for attribute A1; the value can be generated by the following process:

$W(A_1) = \frac{2 \cdot 1}{3 \cdot 2} + \frac{1 \cdot 1}{3 \cdot 2} = \frac{1}{2}$    (11)

$W(A_2) = \frac{3 \cdot 2}{3 \cdot 2} = 1$    (12)

$W(A_3) = \frac{1 \cdot 0}{3 \cdot 2} + \frac{1 \cdot 0}{3 \cdot 2} + \frac{1 \cdot 0}{3 \cdot 2} = 0$    (13)

In a similar way, we calculate the weightage of all the other attributes that constitute the data matrix. The weightage values are then passed to the bisecting k-means algorithm to improve the effectiveness of the proposed algorithm.

3.4. Clustering using SVM
SVMs pertain to the family of generalized linear classifiers. SVMs are also regarded as a special case of Tikhonov regularization. A peculiar property is that they lessen the empirical clustering error and increase the geometric margin at the same time; therefore, they are termed maximum margin classifiers. In this SVM, we utilize the vector $p_r$ for the training process, which identifies the abnormal word. This vector contains parameters such as the mean, the standard deviation, the maximum amplitude value and its id, the minimum amplitude value and its id, and the MFCC length for both the normal and abnormal words, giving 14 inputs in total. Here we detail the SVM clustering for the abnormality of words. The following equation is the SVM's objective function, which identifies the support vectors for the clustering:

$Out = \sum_i \alpha_i \, K(s_i, p_r) + b_i$    (14)

where $s_i$ are the support vectors, $\alpha$ the weights, $b$ the bias, $p_r$ the vector to be classified, and $K$ the kernel function.

The aforementioned objective function utilizes an optimization method to identify the support vectors, weights and bias for classifying the vector $p_r$, where $K$ is a kernel function; in the case of a linear kernel, $K$ is the dot product.

If $Out \ge thresh$ then
    Normal
Else
    Abnormal
End If

The above step indicates that if the value of the variable $Out$ is greater than the threshold value, then the class falls into the normal category; otherwise, it falls into the abnormal category. The SVM also contains an error term, and the error minimization function is as follows:

$\arg\min \; P_t \sum_{x=0}^{n_s'-1} \xi_x + 0.5 \, \omega^{T} \omega$    (15)

with the following constraints,

$cl_x \,(\omega^{T} k(p_{r_x}) + c) \ge 1 - \xi_x$    (16)

$\xi_x \ge 0$    (17)

In Eq. (15), $P_t$ is the penalty constant, $\xi$ is a parameter that handles the data and $\omega$ is a matrix of coefficients. In the constraints given in Eqs. (16) and (17), $cl_x$ is the class label of the $x$-th data point, $c$ is a constant and $k$ is the kernel that transforms the input data to the feature space. Hence, by minimizing the error function, the SVM learns the training dataset $p_r$ well, so that it can classify vectors that are similar to the training set. Once the errors are minimized to a minimum value, we obtain the abnormal words separately.
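As a hedged illustration of the decision rule around Eq. (14), the sketch below uses scikit-learn's SVC, whose decision_function exposes the quantity $Out$. The library choice, the toy 14-dimensional data, the toy labels and the threshold value are assumptions for illustration, not the paper's implementation.

# Sketch of the Eq. (14) decision rule with a thresholded SVM score (assumed setup).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 14))                        # 14 inputs, as described above
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # toy labels for the sketch

clf = SVC(kernel="linear").fit(X_train, y_train)

# Out = sum_i alpha_i * K(s_i, p_r) + b, exposed by decision_function().
p_r = rng.normal(size=(1, 14))
out = clf.decision_function(p_r)[0]

thresh = 0.0                                                # assumed threshold value
print("Normal" if out >= thresh else "Abnormal")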


Table 2: Multi-SVM pseudocode

Number of classes: N
Inputs: TrainData, TestData, Label
For n = 1 : N
    group = (Label == n)                  % one-vs-rest grouping for class n
    method(n) = svmtrain(TrainData, group)
End
For t = 1 : N
    TestResult(t) = svmclassify(method(t), TestData(t))
End


V. EXPERIMENTAL SETUP
The trending topics application identifies recent trends on the web by periodically launching Cloudera's Distribution for Hadoop to process Wikipedia log files. The daily pageview charts on the site were created by running an initial MapReduce job on gigabytes of hourly traffic logs collected from Wikipedia's MySQL dumps. We run a job on the trendingtopics.org server to fetch the latest log files every hour and store a copy in a MySQL database for processing by Hadoop, and we use Sqoop to load the data into Hive.
For the big data experiments, a Hadoop data cluster with the Hadoop Distributed File System (HDFS) for storage was set up. Before moving to a multi-node cluster, a single-node cluster was first configured and tested. We configured our cluster to run MapReduce jobs for finding trends in the log data with Hive. Input and output data for the Map/Reduce programs are stored in HDFS, while input and output data for the data-parallel stack-based implementation are stored directly on the local disks. The software used to set up these hosts is Sun Java 1.7 and the Cloudera QuickStart VM 5.1.0, and for the visualization layer we used Node.js.


VI. RESULTS
a. Data Sets and Descriptive Analysis
The selected high dimensional big data used various measures, time intervals and session lengths, and hence the obtained data were not immediately comparable. To solve this issue, the data were standardized. Using this method, we conducted a series of ordinary participant-specific regression analyses, whereby SVM was predicted by the condition; in that way, the root mean squared errors were estimated. Subsequently, the raw data of each participant were divided by the participant's root mean squared error in order to obtain standardized data. Furthermore, before conducting the meta-analysis, we carried out a descriptive analysis to gain more insight into the data, obtaining the frequencies, means, standard deviations, ranges, and correlations of possible moderators.

b. Selected Drinking Item Clustering Results
From the whole selected set of data, some sets are taken for training and part is taken for testing, and this procedure is repeated for the whole Facebook database. The results are evaluated with both training algorithms on all combinations of sets, but due to space constraints only some of the results are listed and compared. The specifications of the selected drinking items are given in Tables 1-3 for different products with different branches. From the results, the SVM algorithm proves superior, as it takes advantage of the clustering methodology and searching technique for clustering.


Figure 5: Output Result
c. Map Resource Utilization
The map resource utilization of Hadoop and of our proposed model is plotted in Figure 6. We have considered a maximum of 8 maps. We measured the execution time while varying the map size, and the analytical result shows that the proposed model reduces the resource utilization time by approximately 35 seconds from 57 seconds compared with Hadoop.

Figure 6: Map resource utilization

VII. CONCLUSION
In this paper, we have presented an efficient technique to classify data using an SVM classifier. The proposed technique comprises two phases: (i) a MapReduce framework for training with PCA, and (ii) a MapReduce framework for testing with SVM. Initially, the input Facebook three product data is passed to feature selection to choose suitable features for big data classification. A traditional existing algorithm is taken and the optimized feature space is chosen with the best fitness. Once the best feature space is identified through the SVM algorithm, the big data classification is done using the SVM classifier. Here, these two processes are effectively distributed based on the concept of the MapReduce framework. The results for the reduction output are validated through evaluation metrics, namely sensitivity, specificity, accuracy and computation time. For comparative analysis, the proposed big data classification is compared with existing works such as SVM and PCA on Facebook datasets.

REFERENCES

[1] Rama Satish K V and N P Kavya “An approach to
optimize QOS Scheduling of MapReduce in Big Data”,
International Journal of Engineering Research and
Technology, Volume 2, Issue 11, May 2014.
[2] Yadav Krishna R and Purnima Singh “MapReduce
Programming Paradigm Solving Big-Data Problems by
Using Data-Clustering Algorithm”, International Journal
of Advanced Research in Computer Engineering &
Technology (IJARCET), Vol. 3, No. 1, January 2014
[3] Umut A. Acar and Yan Chen “Streaming Big Data
with Self-Adjusting Computation”, In proceedings of the
2013 workshop on Data driven functional programming,
pp. 15-18, New York, NY, USA, 2013.
[4] Benedikt Elser, Alberto Montresor “An Evaluation
Study of Big Data Frameworks for Graph Processing”,
IEEE International Conference on Big Data, 2013, pp.
60-67, 2013.
[5] Jeffrey Dean and Sanjay Ghemawat, “MapReduce: Simplified Data Processing on Large Clusters”, Proceedings of the 6th Conference on Symposium on Operating Systems Design & Implementation, USENIX Association, Berkeley, CA, USA, pp. 137-150, 2004.
[6] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An Efficient k-Means Clustering Algorithm: Analysis and Implementation”, IEEE Computer Society, Washington, DC, USA, 2002.
[7] Rama Satish K V and N P Kavya, “Big Data Processing with harnessing Hadoop - MapReduce for Optimizing Analytical Workloads”, Proc. of IEEE International Conference, Mysore, India, 27-29 November 2014.
[8] Rajagopal Ananthanarayanan, Karan Gupta, Prashant Pandey, Himabindu Pucha, Prasenjit Sarkar, Mansi Shah, and Renu Tewari, “Cloud analytics: Do we really need to reinvent the storage stack?”, In Proceedings of the Workshop on Hot Topics in Cloud Computing, San Diego, California, 2009.
[9] Ricardo Baeza-Yates, Carlos Castillo, Flavio
Junqueira, Vassilis Plachouras, and Fabrizio Silvestri.
Challenges on distributed web retrieval. In Proceedings

of the IEEE 23rd International Conference on Data
Engineering, pp. 6-20, Istanbul, Turkey, 2007.
[10] Luiz Andre Barroso, Jeffrey Dean, and Urs Holzle, “Web search for a planet: The Google cluster architecture”, IEEE Micro, Vol. 23, No. 2, pp. 22-28, 2003.
[11] Bao Rong Chang, Hsiu Fen Tsai, Zih-Yao Lin and Chi-Ming Chen, “Access Security on Cloud Computing Implemented in Hadoop System”, In Proceedings of the Fifth International Conference on Genetic and Evolutionary Computing, pp. 77-80, 2011.
[12] Hongyong Yu and Deshuai Wang, “Research and Implementation of Massive Health Care Data Management and Analysis Based on Hadoop”, In Proceedings of the Fourth International Conference on Computational and Information Sciences, pp. 514-517, 2012.
[13] Dalia Sobhy, Yasser El-Sonbaty and Mohamad
Abou Elnasr “MedCloud : Healthcare Cloud Computing
System”, The 7th International Conference for Internet
Technology and Secured Transaction, pp. 161-166,
2012.
[14] Sung-Hwan Kim, Jung-Ho Eom and Tai-Myoung Chung, “Big data Security Hardening Methodology using Attributes Relationship”, International Conference on Information Science and Applications (ICISA), pp. 1-2, 24-26 June 2013.
[15] Wang Lijun, Huang Yongfeng, Chen Ji, Zhou Ke and Li Chunhua, “Medoop: A medical information platform based on Hadoop”, IEEE 15th International Conference on e-Health Networking, Applications & Services (Healthcom), pp. 1-6, 9-12 Oct. 2013.
[16] Rosmy C Jose and Shaiju Paul “Privacy in Map
Reduce Based Systems: A Review”, International
Journal of Computer Science and Mobile Computing,
Vol. 3, No. 2, pp.463 – 466, 2014.
[17] Victoria López, Sara del Río, José Manuel Benítez and Francisco Herrera, “Cost-sensitive linguistic fuzzy rule based classification systems under the MapReduce framework for imbalanced big data”, Fuzzy Sets and Systems, In Press, 2014.
[18] Sara del Río, Victoria López, José Manuel Benítez,
Francisco Herrera “On the use of MapReduce for
imbalanced big data using Random Forest”, Information
Sciences, In Press, 2014.
[19] Qingchen Zhang, Zhikui Chen, Ailing Lv, Liang
Zhao, Fangyi Liu and Jian Zou “A Universal Storage
Architecture for Big Data in Cloud Environment”, IEEE
International Conference on Green Computing and
Communications, Beijing, pp. 447-480, 2013.
[20] Jiangtao Yin, Yong Liao, Mario Baldi, Lixin Gao and Antonio Nucci, “Efficient analytics on ordered datasets using MapReduce”, In Proceedings of the 22nd International Symposium on High-Performance Parallel and Distributed Computing, pp. 125-126, 2013.
[21] Rosmy C Jose and Shaiju Paul, “Privacy in Map
Reduce Based Systems: A Review”, International
Journal of Computer Science and Mobile Computing,
Vol. 3. No. 2, pp. 463-466.
[22] Victoria López, Sara del Río, José Manuel Benítez,
Francisco Herrera “Cost-sensitive linguistic fuzzy rule
based classification systems under the MapReduce
framework for imbalanced big data”, Fuzzy Sets and
Systems, 2014
[23] Sara del Río, Victoria López, José Manuel Benítez
and Francisco Herrera “On the use of MapReduce for
imbalanced big data using Random Forest”, Information
Sciences, 2014.
[24] Qingchen Zhang, Zhikui Chen, Ailing Lv, Liang
Zhao, Fangyi Liu and Jian Zou “A Universal Storage
Architecture for Big Data in Cloud Environment”, IEEE
International Conference on Green Computing and
Communications and IEEE Internet of Things and IEEE
Cyber, Physical and Social Computing, pp. 476-480,
2013.
[25] Bogdan Ghit, Alexandru Iosup and Dick Epema, “Towards an Optimized Big Data Processing System”, 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, pp. 83-86, 2013.

