August - 2013 (Volume-3 ~ Issue-8 ~ Part-2)

Paper Type :: Research Paper
Title :: Near Sheltered and Loyal storage Space Navigating in Cloud
Country :: India
Authors :: N. Venkata Krishna || M. Venkata Ramana
Page No. :: 01-05
DOI :: 10.9790/3021-03820105
ANED :: 0.4/3021-03820105
IOSRJEN :: 3021-0308-0205
Cloud computing moves application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks to the correctness of the data in the cloud. Prior work has addressed this problem either using public key cryptography or by requiring the client to outsource its data in encrypted form. In this paper we introduce an auditing technique carried out by a third party auditor (TPA). The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of the misbehaving server.
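The paper's auditing protocol is not reproduced on this page. Purely as a loose illustration of third-party spot-checking with error localization, the Python sketch below implements a hypothetical one-time challenge-response audit: the owner precomputes expected digests for random (nonce, block) pairs and hands them to the TPA, which later challenges the server and flags the first block whose response does not match. All class and function names are illustrative, not the authors' scheme.

```python
# Minimal, hypothetical spot-check audit sketch (not the paper's protocol).
import hashlib
import os
import random

BLOCK_SIZE = 4096

def split_blocks(data: bytes, size: int = BLOCK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def precompute_challenges(blocks, n_challenges=16):
    """Owner side: prepare one-time (index, nonce, expected digest) triples for the TPA."""
    table = []
    for _ in range(n_challenges):
        i = random.randrange(len(blocks))
        nonce = os.urandom(16)
        table.append((i, nonce, hashlib.sha256(nonce + blocks[i]).hexdigest()))
    return table

class CloudServer:
    """Stores the outsourced blocks and answers audit challenges."""
    def __init__(self, blocks):
        self.blocks = list(blocks)
    def respond(self, index, nonce):
        return hashlib.sha256(nonce + self.blocks[index]).hexdigest()

def tpa_audit(server, table):
    """TPA side: spot-check random blocks; a mismatch localizes the faulty block index."""
    for index, nonce, expected in table:
        if server.respond(index, nonce) != expected:
            return False, index          # error localization: which block failed
    return True, None

if __name__ == "__main__":
    data = os.urandom(10 * BLOCK_SIZE)        # stand-in for the outsourced file
    blocks = split_blocks(data)
    table = precompute_challenges(blocks)     # kept by the TPA, not the server
    server = CloudServer(blocks)
    print(tpa_audit(server, table))           # (True, None) while the data is intact
    server.blocks[3] = os.urandom(BLOCK_SIZE) # simulate corruption on the server
    print(tpa_audit(server, table))           # fails if block 3 happens to be challenged
```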

[1] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring data storage security in cloud computing," in Proc. of IWQoS'09, July 2009, pp. 1–9.

[2] M. A. Shah, R. Swaminathan, and M. Baker, "Privacy-preserving audit and extraction of digital contents," Cryptology ePrint Archive, Report 2008/186, 2008.

[3] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and efficient provable data possession," in Proc. of SecureComm'08, 2008, pp. 1–10.

[4] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling public verifiability and data dynamics for storage security in cloud computing."

[5] Sun Microsystems, Inc., "Building customer trust in cloud computing with transparent security," Online at https://www.sun.

 

Paper Type :: Research Paper
Title :: Non-invasive Blood Glucose Level Measurement from LASER Reflected Spectral Patterns Images
Country :: India
Authors :: Parminder Singh || Harshit Kaur || Dr. K.V.P. Singh
Page No. :: 06-10
DOI :: 10.9790/3021-03820610
ANED :: 0.4/3021-03820610
IOSRJEN :: 3021-0308-0210

Texture analysis is used in the proposed work to establish the correlation between the glucose level and texture coefficients. The texture image is stored in JPEG format as an RGB image. The RGB image is converted to a gray image and the texture is then analyzed using a gray-level co-occurrence matrix (GLCM). The graycomatrix function creates a GLCM by calculating how often a pixel with intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with value j. In the earlier method, an infrared light source was used to estimate the glucose level in the human body non-invasively. However, it is observed that the IR light source is affected by ambient light noise and the results are not repeatable for the same patient under the same circumstances. To enhance the repeatability of the measurements, a LASER light source can be used instead: LASER light is highly coherent, and a reliable reflected pattern can be obtained using an array of photodiodes or receivers.
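The GLCM step described above can be illustrated in a few lines. The sketch below uses scikit-image rather than the MATLAB graycomatrix function the abstract presumably refers to, and the input file name is hypothetical; it only shows how a JPEG is converted to gray levels and summarised by texture coefficients that could then be correlated with glucose level.

```python
# Rough scikit-image equivalent of the GLCM texture step (illustrative only).
import numpy as np
from skimage import io, color, util
from skimage.feature import graycomatrix, graycoprops

def texture_coefficients(jpeg_path):
    rgb = io.imread(jpeg_path)                     # JPEG read as an RGB array
    gray = util.img_as_ubyte(color.rgb2gray(rgb))  # convert to 8-bit gray levels
    # GLCM: how often gray level i co-occurs with gray level j at the given
    # offsets (distance 1, four directions), normalised to probabilities.
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

if __name__ == "__main__":
    print(texture_coefficients("laser_reflection.jpg"))  # hypothetical file name
```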

 

Keywords: - Non-invasive Blood Sugar Computation, LASER reflected Image

[1] Steven L. Jacques, Ravikant Samatham, and Niloy Choudhury, "Rapid spectral analysis for spectral imaging," Biomed Opt Express. 2010 August 2; 1(1): 157–164.
[2] Mark A. Arnold, Ph.D., Lingzhi Liu, Ph.D., and Jonathon T. Olesberg, Ph.D., "Selectivity Assessment of Noninvasive Glucose Measurements Based on Analysis of Multivariate Calibration Vectors," J Diabetes Sci Technol. 2007 July; 1(4): 454–462.
[3] Yevgeny Beiderman, Raz Blumenberg, "Demonstration of remote optical measurement configuration that correlates to glucose concentration in blood," Biomedical Optics Express. 01/2011; 2(4): 858-70.

[4] German Campetelli, "Improvements on Noninvasive Blood Glucose Biosensors Using Wavelets for Quick Fault Detection," Journal of Sensors, Volume 2011 (2011), Article ID 368015.
[5] Jonas Kottmann, Julien M. Rey, Joachim Luginbühl, Ernst Reichmann, "Glucose sensing in human epidermis using mid-infrared photoacoustic detection," Biomed Opt Express. 2012 April 1; 3(4): 667–680.

 

Paper Type :: Research Paper
Title :: Crack Initiation and Crack Propagation of Pre-corroded Ni-16Cr Alloy in 4.5%NaCl Aqueous Solution
Country :: Canada
Authors :: Aezeden Mohamed
Page No. :: 11-15
DOI :: 10.9790/3021-03821115
ANED :: 0.4/3021-03821115
IOSRJEN :: 3021-0308-0215

This paper examines the characteristics of corrosion fatigue of a Ni-16Cr alloy workpiece soaked in 4.5%NaCl aqueous solution for a period of one week, followed by corrosion fatigue testing in the same aqueous solution. Corrosion, corrosion fatigue crack initiation mechanisms, crack surface morphology and fractographic analysis are discussed and compared to results from a workpiece fatigued in ambient laboratory air. Experimental results reveal susceptibility to surface pits in the Ni-16Cr alloy workpiece in a 4.5%NaCl aqueous solution, a result that is supported by morphology and fractographic analysis.

 

Keywords: - Corrosion, corrosion fatigue, crack initiation, intergranular fracture, pits

[1] R. I. Jaffee and R. A. Wood, Corrosion fatigue of steam turbine-blading alloys in operational environments, EPRI, CS-Report 2932, 10-140, 1984.
[2] R. Ebara, T. Kai, M. Mihara, H. Kino, K. Katayama, and K. Shiota, Corrosion fatigue behavior of 13Cr stainless steel for turbine moving blades. In: Jaffee RI (ed). Corrosion fatigue of steam turbine blade materials. New York: Pergamon Press, 4-150–4-167, 1983.
[3] B. A. Kehler, Crevice corrosion stabilization and repassivation behavior of alloy 625. Corrosion Journal, 57, 2001, 1042-1065.
[4] D. D. Gorhe, Development of an electrochemical reactivation test procedure for detecting microstructural heterogeneity in Ni-Cr-Mo-W alloy welds. Journal of Materials Science, 39, 2004, 2257-2261.
[5] A. Mohamed, Fatigue and corrosion fatigue behavior of nickel alloys in saline solutions, International Journal of Modern Engineering Research, 3(3), 2013, 1529-1533.

 

Paper Type :: Research Paper
Title :: HEDCPP: Data Structure for Application Programs Aimed to Engineering Education-Learning
Country :: Brazil
Authors :: Gilberto Gomes
Page No. :: 16-24
DOI :: 10.9790/3021-03821624
ANED :: 0.4/3021-03821624
IOSRJEN :: 3021-0308-0224

The development and/or adaptation of computer systems using computer graphics resources and object-oriented languages such as C++ and Java has proven substantially important in building tools to aid the teaching and learning process, since it enables both the student and the teacher to explore the essential elements of typical engineering problems, such as shape, configuration and structural behavior. The development of these tools requires, in addition to the resources mentioned, data structures and algorithms that allow sophisticated modeling, perception and solution of these problems. Accordingly, this paper presents a robust data structure, called Hedcpp, originally written in the C++ language, which can be used as a basis for computer programs aimed at engineering education, representing the physical problem in a form as close to reality as possible, assisting learning and making lessons much more productive and motivating. Some application examples are presented.
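Hedcpp itself is a C++ library whose internals are not given in this abstract. Purely as an illustration of the kind of topological structure such modelling tools are typically built on, and on the assumption that a half-edge style boundary representation is involved, the following Python sketch builds a closed 2D boundary loop and walks it; it is not the authors' implementation.

```python
# Hypothetical half-edge-style sketch of a 2D boundary model (illustrative only).
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float
    y: float
    out_edge: "HalfEdge" = None        # one half-edge leaving this vertex

@dataclass
class HalfEdge:
    origin: Vertex                     # vertex the half-edge starts from
    twin: "HalfEdge" = None            # opposite half-edge on the same segment
    next: "HalfEdge" = None            # next half-edge around the same loop

def make_polygon(points):
    """Build the half-edge loop of a simple closed polygon (e.g. a 2D boundary)."""
    verts = [Vertex(x, y) for x, y in points]
    edges = [HalfEdge(v) for v in verts]
    for i, e in enumerate(edges):
        e.next = edges[(i + 1) % len(edges)]
        e.origin.out_edge = e
    return verts, edges

def boundary_length(start):
    """Walk a loop through the 'next' pointers and accumulate its length."""
    total, e = 0.0, start
    while True:
        a, b = e.origin, e.next.origin
        total += ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        e = e.next
        if e is start:
            return total

verts, edges = make_polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
print(boundary_length(edges[0]))   # 14.0 for this rectangle
```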

 

Keywords: – data structure; modeling; education; OOP; C++

[1] Silva, M. A. da, "Protótipo de uma ferramenta para auxiliar no ensino de técnicas de programação". Universidade do Planalto Catarinense, Lages, 2003.
[2] Booch, G., "Object-Oriented Analysis and Design with Applications", The Benjamin/Cumming Publishing Company, Inc., 1994.
[3] Gomes, G., "A data structure for representing two dimensional boundary element models", Master's thesis, University of Brasilia, Brazil, 2000. (In Portuguese)
[4] Gomes, G. & Noronha, M. A. M, "Estrutura de Dados para Geração de Malhas Bidimensionais de Elementos de Contorno", XXI CILAMCE, Rio de Janeiro, 2000.
[5] Mäntylä, M., "An Introduction to Solid Modeling", Computer Science Press, Rockville, Maryland, 1988.

 

Paper Type :: Research Paper
Title :: Comparative analysis of Relational and Graph databases
Country :: India
Authors :: Garima Jaiswal || Arun Prakash Agrawal
Page No. :: 25-27
DOI :: 10.9790/3021-03822527
ANED :: 0.4/3021-03822527
IOSRJEN :: 3021-0308-0227

The relational model has dominated the computer industry since the 1980s, mainly for storing and retrieving data. Lately, however, relational databases have been losing importance because of their reliance on a strict schema, which makes it difficult to add new relationships between objects. Another important reason is that, as the available data grows manifold, working with the relational model becomes complicated, since joining a large number of tables does not perform efficiently. One proposed solution is to move to graph databases, which aim to overcome exactly these problems. This paper provides a comparative analysis of the graph database Neo4j and the most widespread relational database, MySQL.
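To make the schema-versus-relationships point concrete, the sketch below runs the same hypothetical "friends of friends" question against MySQL (explicit joins) and Neo4j (a Cypher path pattern) from Python. The table layout, node labels, hosts and credentials are assumptions for illustration, not taken from the paper; the contrast to notice is in the shape of the two queries.

```python
# Hypothetical side-by-side query: MySQL joins vs. a Neo4j path pattern.
import mysql.connector              # pip install mysql-connector-python
from neo4j import GraphDatabase     # pip install neo4j

SQL = """
SELECT DISTINCT p2.name
FROM person p
JOIN friendship f1 ON f1.person_id = p.id
JOIN friendship f2 ON f2.person_id = f1.friend_id
JOIN person p2     ON p2.id = f2.friend_id
WHERE p.name = %s AND p2.id <> p.id
"""   # every extra hop in the relational model adds another explicit join

CYPHER = """
MATCH (p:Person {name: $name})-[:FRIEND*2]-(fof:Person)
WHERE fof <> p
RETURN DISTINCT fof.name
"""   # the relationship pattern expresses the hops directly

def fof_mysql(name):
    conn = mysql.connector.connect(host="localhost", user="demo",
                                   password="demo", database="social")
    cur = conn.cursor()
    cur.execute(SQL, (name,))
    rows = [r[0] for r in cur.fetchall()]
    conn.close()
    return rows

def fof_neo4j(name):
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "demo"))
    with driver.session() as session:
        rows = [r["fof.name"] for r in session.run(CYPHER, name=name)]
    driver.close()
    return rows

if __name__ == "__main__":
    print(fof_mysql("Alice"), fof_neo4j("Alice"))
```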

 

Keywords: - Flexibility, Maturity, Security, Retrieval, Schema

[1] Chad Vicknair, Michael Macias, Zhendong Zhao, Xiaofei Nan, Yixin Chen, Dawn Wilkins, "A Comparison of a Graph Database and a Relational Database," ACM Southeast Regional Conference, 2010.
[2] Database Trends and Applications. Available: http://www.dbta.com/Articles/Columns/Notes-on-NoSQL/Graph-Databases-and-the-Value-They-Provide-74544.aspx, 2012.
[3] M. Kleppmann, Should you go beyond relational databases? Available: http://carsonified.com/blog/dev/should-you-go-beyond-relational-databadat.
[4] Neo4j Blog. Available: http://blog.neo4j.org/2009/04/current-database-debate-and-graph.html.

[5] M. I. Jordan (Ed.), Learning in Graphical Models, MIT Press, 1998.

 

Paper Type :: Research Paper
Title :: Minimising the Supervision Costs of the Organizations
Country :: Nigeria
Authors :: Charles Ofiabulu || Oliver Charles-Owaba
Page No. :: 28-38
DOI :: 10.9790/3021-03822838
ANED :: 0.4/3021-03822838
IOSRJEN :: 3021-0308-0238

An organisational structure is necessary as a framework for organising, planning, coordinating, controlling and directing organisational activities to achieve organisational goals. Consequently, the organisational design problem has been formulated as that of minimising the costs associated with supervision and coordination subject to some constraints, with heuristics as solution procedures. The difficulty of verifying the effectiveness of such heuristics in producing optimal organisation structures has created model acceptability problems. In this study, the concept of organisational work dynamics was used to reformulate the supervision-cost function of a business organisation. Using this function, a dynamic programming version of the organisational design problem was defined and an associated algorithm developed. The performance (effectiveness and efficiency) of the algorithm was then compared to that of the existing heuristic. The supervision-cost-based organisation structures were also compared with the existing organisational structure.
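The authors' supervision-cost function and algorithm are not reproduced here. The sketch below only illustrates, with an invented cost function, how a grouping problem of this kind can be cast as dynamic programming: subordinates in a fixed order are partitioned into contiguous supervision groups, and the recursion picks the partition with minimum total cost. All numbers and the cost form are hypothetical.

```python
# Generic dynamic-programming sketch for grouping subordinates under supervisors
# (illustrative only; not the paper's supervision-cost model).
from functools import lru_cache

workloads = [3, 1, 4, 1, 5, 9, 2, 6]          # illustrative workload per subordinate
FIXED_COST = 10.0                             # assumed cost of employing one supervisor

def group_cost(i, j):
    """Hypothetical cost of one supervisor overseeing subordinates i..j-1."""
    span = j - i
    return FIXED_COST + span * sum(workloads[i:j]) * 0.1   # grows with span and workload

@lru_cache(maxsize=None)
def min_cost(i):
    """Minimum total supervision cost for subordinates i..n-1."""
    n = len(workloads)
    if i == n:
        return 0.0
    return min(group_cost(i, j) + min_cost(j) for j in range(i + 1, n + 1))

print(round(min_cost(0), 2))    # total cost of the optimal grouping
```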

 

Keywords: - Dynamic programming, Cost Reduction, Organizational Design, Organizational structure, Supervision Costs.

[1] O. E. Charles-Owaba, Organisational Design: A Quantitative Approach. Ibadan: Oputuru Books, 2002.
[2] Hax A. C. and Majiluf N. S., "Organisational Design: Survey and An Approach," Operations Research, vol. 3, no. 29, pp. 417-447, 1981.
[3] J. Robert, Modern firm: Organizational design for performance and growth. New York: Oxford University Press Inc., 2004.
[4] Ofiabulu, C. E. and Charles-Owaba, O. E., "A personnel cost model for Organisational structure Designs," Industrial Engineering Letters, vol. 3, no. 6, pp. 1-11, 2013.
[5] R. L. Daft, Organization Theory and Design. Mason, OH: South-Western, 2004.

 

Paper Type :: Research Paper
Title :: Traditional Group based policy management approach for online social networks
Country :: India
Authors :: V. Surendra Reddy
Page No. :: 39-44
DOI :: 10.9790/3021-03823944
ANED :: 0.4/3021-03823944
IOSRJEN :: 3021-0308-0244
This paper presents a privacy control mechanism based on policy frameworks rich in semantic web technologies to control information flow in social networking applications. The mechanism gives users better control over shared information than state-of-the-art systems, and it combines dynamic user context, for instance the current time, current location or current activity of the user. Online social networking is viewed by many as the next paradigm in personal, professional and organizational networking and marketing. (Social media are the tools people use for social networking, such as Facebook, LinkedIn and Twitter.) The following sections first give a broad overview of the social networking movement, including basics about each of the most popular tools, and then describe how to use the tools for personal, professional and organizational networking and marketing.
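The paper's policy framework is built on semantic web technologies, which are not shown here. The following minimal Python sketch merely illustrates the idea of combining dynamic user context (time, location, activity) into a share/deny decision; the rule fields and example values are hypothetical.

```python
# Illustrative context-aware sharing rule (not the paper's semantic-web framework).
from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    now: time
    location: str
    activity: str

@dataclass
class Rule:
    audience: str              # e.g. "colleagues", "friends"
    allowed_from: time
    allowed_until: time
    blocked_locations: set
    blocked_activities: set

def may_share(rule: Rule, ctx: Context) -> bool:
    """Allow sharing with this audience only when every context condition holds."""
    return (rule.allowed_from <= ctx.now <= rule.allowed_until
            and ctx.location not in rule.blocked_locations
            and ctx.activity not in rule.blocked_activities)

work_rule = Rule("colleagues", time(9, 0), time(18, 0), {"home"}, {"on vacation"})
print(may_share(work_rule, Context(time(14, 30), "office", "working")))   # True
print(may_share(work_rule, Context(time(23, 0), "home", "sleeping")))     # False
```
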
[1]. Accoria. Rock web server and load balancer. http://www.accoria.com.
[2]. Amazon Web Services. Amazon Web Services (AWS). http://aws.amazon.com.
[3]. V. Cardellini, M. Colajanni, and P. S. Yu. Dynamic load balancing on web-server systems. IEEE Internet Computing, 3(3):28-39, 1999.
[4]. L. Cherkasova. FLEX: Load Balancing and Management Strategy for Scalable Web Hosting Service. IEEE Symposium on Computers and Communications, 0:8, 2000.
[5]. F5 Networks. F5 Networks. http://www.f5.com.

 

Paper Type :: Research Paper
Title :: Data Cleaning by Genetic Programming Technique
Country :: India
Authors :: Rajnish Kumar || Pradeep Bhaskar Salve || Pritam Desale
Page No. :: 45-51
DOI :: 10.9790/3021-03824551
ANED :: 0.4/3021-03824551
IOSRJEN :: 3021-0308-0251

Today nearly every website has a search box through which data can be retrieved. When a query is submitted, the database fetches and returns the matching information, but as the amount of data in databases grows rapidly, techniques for extracting clean, non-duplicate data have not kept pace, so it has become very hard to detect duplicates and return clean data effectively. When data is uploaded to a database server from different locations, the chance of duplication increases; users may post the same link, carrying the same information, on the same page. When a link carrying some information is duplicated more than once, memory and storage space are wasted and performance suffers while computational cost increases. In this paper we propose a genetic programming technique with three major operations: selection, crossover and mutation. Executing these operations yields the de-duplication function, and after the duplicate records are removed the suggested function is applied. Compared with previous approaches, the present approach imposes less burden and produces efficient, accurate, evidence-based results.
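The proposed de-duplication function itself is not given in this abstract. The sketch below is a simplified genetic-algorithm stand-in (not full tree-based genetic programming) that uses selection, crossover and mutation to evolve per-field similarity weights and a threshold for marking record pairs as duplicates. The toy records, labels and parameters are invented for illustration.

```python
# Evolving a simple duplicate-detection function with selection, crossover and mutation
# (a simplified stand-in for the paper's genetic programming approach).
import random
from difflib import SequenceMatcher

FIELDS = ("name", "city")
PAIRS = [  # (record_a, record_b, is_duplicate) -- hypothetical labelled pairs
    ({"name": "John Smith", "city": "Pune"},   {"name": "Jon Smith",  "city": "Pune"},   True),
    ({"name": "John Smith", "city": "Pune"},   {"name": "Mary Jones", "city": "Delhi"},  False),
    ({"name": "A. Kumar",   "city": "Nashik"}, {"name": "Anil Kumar", "city": "Nashik"}, True),
    ({"name": "Anil Kumar", "city": "Nashik"}, {"name": "Anil Kumar", "city": "Mumbai"}, False),
]

def similarity(a, b, field):
    return SequenceMatcher(None, a[field], b[field]).ratio()

def classify(ind, a, b):                      # ind = (weight per field..., threshold)
    score = sum(w * similarity(a, b, f) for w, f in zip(ind, FIELDS))
    return score >= ind[-1]

def fitness(ind):                             # fraction of labelled pairs judged correctly
    return sum(classify(ind, a, b) == dup for a, b, dup in PAIRS) / len(PAIRS)

def mutate(ind):
    return tuple(min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in ind)

def crossover(p, q):
    cut = random.randrange(1, len(p))
    return p[:cut] + q[cut:]

population = [tuple(random.random() for _ in range(len(FIELDS) + 1)) for _ in range(30)]
for _ in range(40):                           # selection -> crossover -> mutation
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print("weights + threshold:", best, "accuracy:", fitness(best))
```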

 

Keywords: - Evolutionary Programming, Precision and Recall, XML, XHTML.

[1]. H. Zhao, W. Meng, Z. Wu, and C. Yu, "Automatic Extraction of Dynamic Record Sections from Search Engine Result Pages," Proc. 32nd Int'l Conf. Very Large Data Bases (VLDB), 2006.
[2]. V. Crescenzi, P. Merialdo, and P. Missier, "Clustering Web Pages Based on Their Structure," Data and Knowledge Eng., vol.54, pp. 279-299, 2005.
[3]. B. Liu, R.L. Grossman, and Y. Zhai, "Mining Data Records in Web Pages," Proc. Int'l Conf. Knowledge Discovery and Data Mining (KDD), pp. 601-606, 2003.
[4]. K. Simon and G. Lausen, "ViPER: Augmenting Automatic Information Extraction with Visual Perceptions," Proc. Conf. Information and Knowledge Management (CIKM), pp. 381-388, 2005.
[5]. M. Wheatley, "Operation Clean Data", CIO Asia Magazine.

 

Paper Type :: Research Paper
Title :: Effect of alternating current on electrolytic solutions
Country :: India
Authors :: Parantap Nandi
Page No. :: 52-59
DOI :: 10.9790/3021-03825259
ANED :: 0.4/3021-03825259
IOSRJEN :: 3021-0308-0259

Electrolysis is normally carried out using direct current because the electrodes then have a definite polarity. Generally, low voltage and high current are preferred; for industrial purposes an optimum of 9-12 V at 5-6 A is favorable. In most cases metallic electrodes, e.g. Cu, Fe, Sn or Zn, are used according to the requirement. Ionization occurs at the anode, hence no bubbles are observed there. Under A.C. the behavior of the solution depends mainly on the electrodes. With most metallic electrodes, namely Cu and Zn, the solution behaves much like a resistance and the energy is wasted in heating the solution, which sometimes reaches its boiling point. Aluminum (very cheap and commonly available) acts in a very different manner: liberation of O2 and H2 is possible, just as with D.C., along with liberation of heat. While high-voltage D.C. (220 V) is totally unsuitable for electrolysis, 220 V A.C. can produce useful products, and the power consumption is also reasonable. So the domestic A.C. supply can be used to produce useful products such as O2, H2, different hydroxides and iodine (I2) solution. In this paper the basic features of electrolysis using A.C. have been outlined, based on experiments.
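As a quick back-of-the-envelope check of the quoted figures: at 12 V and 5 A the power drawn is 60 W, and for the D.C. comparison case Faraday's law bounds the hydrogen that can be liberated (with ideal A.C. the net charge per cycle is zero, which is why the electrode material dominates the behavior). The small calculation below is illustrative only and is not taken from the paper's experiments.

```python
# Back-of-the-envelope power and Faraday's-law estimate for the D.C. figures quoted above.
FARADAY = 96485.0          # C per mole of electrons

def power_watts(volts, amps):
    return volts * amps

def hydrogen_volume_litres(amps, seconds, molar_volume=22.4):
    """Ideal H2 volume at STP from Faraday's law: 2 electrons per H2 molecule."""
    moles_h2 = amps * seconds / (2 * FARADAY)
    return moles_h2 * molar_volume

print(power_watts(12, 5))                          # 60 W at the quoted 12 V, 5 A
print(round(hydrogen_volume_litres(5, 3600), 2))   # about 2.09 L of H2 per hour at 5 A
```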

 

Keywords: - A.C; Al; Cu; Capacitor.

[1]. The electrode reactions are taken from
[2]. The electrode potentials are taken from: Petr Vanýsek, Electrochemical Series (CRC Press LLC, 2000).