Table of Contents

Journal of Information Systems and Telecommunication
Volume:4 Issue: 3, Jul-Sep 2016

  • Publication date: 1395/07/24
  • Number of articles: 8
  • Ali Tarihi, Hassan Haghighi*, Fereidoon Shams Aliee Page 134
    The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the emergence of life-like properties, called self-* properties in general, which suit these systems well for pervasive computing. Achieving these properties in organic computing systems is closely related to a proposed general feedback architecture, called the observer/controller architecture, which supports the mentioned properties by interacting with the system components and keeping their behavior under control. As one of these properties, self-configuration is desirable in applications of organic computing systems because it enables adaptation to environmental changes. However, adaptation at the level of the architecture itself has not yet been studied in the organic computing literature, which limits the achievable level of adaptation. In this paper, a self-configuring observer/controller architecture is presented that takes self-configuration to the architecture level. It enables the system to choose the proper architecture from a variety of observer/controller variants available for a specific environment. The validity of the proposed architecture is formally demonstrated, and its applicability is shown through a known case study.
    Keywords: Organic Computing, Observer/Controller Architecture, Self-* Properties, Self-Configuration, Formal Verification
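
    A minimal sketch (not the authors' formal architecture) of the idea described in this abstract: an observer/controller feedback loop in which self-configuration is lifted to the architecture level, so the loop can switch between observer/controller variants according to the observed environment. All class and function names below are hypothetical illustrations, not part of the paper.
```python
from typing import Callable, Dict, List

State = Dict[str, float]  # aggregated metrics of the system under observation


class OCVariant:
    """One observer/controller variant: an observer, a control policy, and a suitability estimate."""

    def __init__(self, name: str,
                 observe: Callable[[State], State],
                 control: Callable[[State], State],
                 suitability: Callable[[State], float]):
        self.name = name
        self.observe = observe          # builds an aggregated view of the observed system
        self.control = control          # derives interventions from that view
        self.suitability = suitability  # how well this variant fits the current environment


def run_loop(system: State, variants: List[OCVariant], steps: int) -> None:
    """Generic observer/controller loop with architecture-level self-configuration."""
    for _ in range(steps):
        # Self-configuration at the architecture level: pick the best-suited variant.
        variant = max(variants, key=lambda v: v.suitability(system))
        observation = variant.observe(system)
        for key, value in variant.control(observation).items():
            system[key] = value         # feed the controller's interventions back into the system
```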
  • Hodjat Hamidi* Page 145
    New technologies and their uses have always had complex economic, social, cultural, and legal implications, with accompanying concerns about negative consequences. So it will probably be with the IoT, its use of data, and the attendant location privacy concerns. It must be recognized that management and control of information privacy may not be sufficient according to traditional user and public preferences. Society may need to balance the benefits of the increased capabilities and efficiencies of the IoT against a possibly inevitable increase in visibility into everyday business processes and personal activities. Much as people have come to accept increased sharing of personal information on the Web in exchange for better shopping experiences and other advantages, they may be willing to accept the increased prevalence and reduced privacy of such information. Because this information is a large component of what the IoT handles, and concerns about its privacy are critical to widespread adoption and confidence, privacy issues must be effectively addressed. This paper looks at five phases of information flow in the IoT, namely the sensing, identification, storage, processing, and sharing of information in technical, social, and legal contexts, and at three areas of privacy controls that may be considered to manage those flows; it is intended to be helpful to practitioners and researchers evaluating the issues involved as the technology advances.
    Keywords: Security Issues, IoT, Information, Technology Advances, Privacy Enhancing
  • Hadi Mahdipour Hossein Abad*, Morteza Khademi, Hadi Sadoghi Yazdi Page 152
    Most popular fusion methods have their own limitations; e.g., OWA (ordered weighted averaging) is restricted by a linear model and by the requirement that the input proportions in the fusion sum to 1. Considering all possible models for fusion, the proposed fusion method determines how each input datum takes part in the fusion process used for segmentation; indeed, the limitations of the proposed method are determined adaptively for each input datum separately. On the other hand, land-cover segmentation using remotely sensed (RS) images is a challenging research subject, because objects in a unique land cover often appear dissimilar in different RS images. In this paper, multiple co-registered RS images are utilized to segment land cover using FCM (fuzzy c-means). As an appropriate tool to model changes, the fuzzy concept is utilized to fuse and integrate the information of the input images. By categorizing the ground points, it is shown in this paper for the first time that fuzzy numbers are needed, and are more suitable than crisp ones, to merge multi-image information for segmentation. Finally, FCM is applied to the fused image pixels (with fuzzy values) to obtain a single segmented image. Beyond the mathematical analysis and the proposed cost function, simulation results also show the significant performance of the proposed method in terms of noise-free and fast segmentation.
    Keywords: Fusion, Land-cover Segmentation, Multiple High-spatial Resolution Panchromatic Remotely Sensed (HRPRS) Images, Fuzzy C-means (FCM)
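
    A minimal sketch related to the abstract above, not the authors' fuzzy fusion: pixels of several co-registered images are stacked into per-pixel feature vectors and segmented with a plain fuzzy c-means loop written in NumPy. The paper's fuzzy-number fusion is replaced here by simple band stacking, the images are synthetic, and the cluster count is an arbitrary choice.
```python
import numpy as np


def fcm(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """Plain fuzzy c-means on the rows of X; returns (cluster centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)                 # memberships of each pixel sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))            # closer centers get larger membership
        U_new /= U_new.sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U


# Three co-registered single-band images of the same scene (toy data).
images = [np.random.rand(64, 64) for _ in range(3)]
pixels = np.stack([im.ravel() for im in images], axis=1)   # shape: (n_pixels, n_images)
_, memberships = fcm(pixels, c=4)
segmentation = memberships.argmax(axis=0).reshape(64, 64)  # one label per pixel
```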
  • Leila Jafar Tafreshi*, Farzin Yaghmaee Page 167
    Data mining and knowledge discovery are important technologies for business and research. Despite their benefits in various areas such as marketing, business, and medical analysis, the use of data mining techniques can also result in new threats to privacy and information security. Therefore, a new class of data mining methods called privacy preserving data mining (PPDM) has been developed. The aim of research in this field is to develop techniques that can be applied to databases without violating the privacy of individuals. In this work we introduce a new approach that uses fuzzy logic to preserve sensitive information in databases with both numerical and categorical attributes. We map a database into a new one that conceals private information while preserving mining benefits. In the proposed method, we apply fuzzy membership functions (MFs) such as Gaussian, P-shaped, Sigmoid, S-shaped and Z-shaped functions to the private data. Then we cluster the modified datasets with the Expectation Maximization (EM) algorithm. Our experimental results show that using fuzzy logic for preserving data privacy guarantees valid data clustering results while protecting sensitive information. The accuracy of the clustering algorithm on the fuzzified data is approximately equivalent to that on the original data and is better than the state-of-the-art methods in this field.
    Keywords: Privacy Preserving, Clustering, Data Mining, Expectation Maximization Algorithm
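
    A minimal sketch of the general approach described in the abstract, under assumed parameters: a sensitive numeric attribute is replaced by its value under a Gaussian membership function and the transformed records are clustered with the EM algorithm (scikit-learn's GaussianMixture). The records, the MF parameters, and the cluster count are toy choices; the paper's treatment of categorical attributes and the other MF shapes are not shown.
```python
import numpy as np
from sklearn.mixture import GaussianMixture


def gaussian_mf(x, center, sigma):
    """Gaussian membership function mapping raw values into [0, 1]."""
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))


rng = np.random.default_rng(0)
data = rng.normal(loc=[30.0, 50.0], scale=[8.0, 12.0], size=(200, 2))  # toy records

# Conceal the sensitive first attribute while keeping some of its structure.
private = data.copy()
private[:, 0] = gaussian_mf(data[:, 0], center=data[:, 0].mean(), sigma=data[:, 0].std())

labels_original = GaussianMixture(n_components=3, random_state=0).fit_predict(data)
labels_private = GaussianMixture(n_components=3, random_state=0).fit_predict(private)
# Comparing the two labelings indicates how much clustering utility survives the fuzzification.
```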
  • Reza Vahedi*, Farhad Hosseinzadeh Lotfi, Seyed Esmaeial Najafi Page 174
    With a mobile banking system and an application installed on a mobile phone, a limited set of banking operations, such as checking the account balance, transferring funds, and paying bills, can be carried out at any hour of the day without visiting a bank branch. The second password of the bank account card is the only security facility provided for mobile banking systems and financial transactions. This alone cannot create reasonable security, and the need for greater protection and for preventing the theft and misuse of citizens' bank accounts is the reason banks limit the services they offer through this channel. Using NFC (Near Field Communication) technology, the identity and biometric information and the key pair stored on the national smart card chip can be exchanged with the mobile phone and the mobile banking system, making identification, authentication, and the digital signing of documents possible, and thus enhancing security and promoting mobile banking services. This research applies library studies, the opinions of experts in information technology and electronic banking, and the DEMATEL analysis method, and aims to investigate the possibility of promoting mobile banking services by using national smart card capabilities and NFC technology to overcome the obstacles and risks mentioned above. The obtained results confirm the research hypothesis and show that the proposed solutions can be implemented in the banking system of Iran.
    Keywords: NFC Technology, National Smart Card, Mobile Banking, Identity, Security
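
    A minimal sketch of the standard DEMATEL computation named as the analysis method in this study: the averaged expert direct-relation matrix is normalized, the total-relation matrix is derived, and prominence and cause/effect indicators are read off. The matrix below is toy data, not the study's expert judgments, and the number of factors is arbitrary.
```python
import numpy as np

# Averaged expert scores of how strongly factor i influences factor j (scale 0..4, toy values).
A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)

N = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalized direct-relation matrix
T = N @ np.linalg.inv(np.eye(len(A)) - N)               # total-relation matrix

influence = T.sum(axis=1)             # D: total influence dispatched by each factor
dependence = T.sum(axis=0)            # R: total influence received by each factor
prominence = influence + dependence   # D + R: overall importance of each factor
relation = influence - dependence     # D - R: net cause (+) or effect (-)
print(prominence, relation)
```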
  • Elham Mohsenifard, Ali Ghaffari* Page 182
    Wireless sensor networks (WSNs) consist of numerous tiny sensors which can be regarded as a robust tool for collecting and aggregating data in different environments. The energy of these small sensors is supplied by a battery with limited power which cannot be recharged. Certain approaches are therefore needed so that the power of the sensors can be utilized efficiently and optimally. One of the notable approaches for reducing energy consumption in WSNs is to decrease the number of packets transmitted in the network. Using a data aggregation method, the mass of data which should be transmitted can be remarkably reduced. One of the related methods in this approach is the data aggregation tree. However, it should be noted that finding the optimal tree for data aggregation in networks with a single sink is an NP-hard problem. In this paper, using the cuckoo optimization algorithm (COA), a data aggregation tree is proposed which can optimize energy consumption in the network. The proposed method was compared with a genetic algorithm (GA), Power Efficient Data gathering and Aggregation Protocol-Power Aware (PEDAPPA) and the energy efficient spanning tree (EESR). The results of simulations conducted in MATLAB indicate that the proposed method performs better than the GA, PEDAPPA and EESR algorithms in terms of energy consumption. Consequently, the proposed method is able to enhance network lifetime.
    Keywords: Wireless Sensor Networks (WSNs), Data Aggregation Technique, Data Aggregation Tree, Cuckoo Optimization Algorithm (COA), Network Lifetime Enhancement
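
    A minimal sketch of the kind of search the abstract describes, not the paper's full COA or energy model: each candidate solution is a parent-assignment vector encoding a data aggregation tree rooted at the sink, scored by a toy distance-based energy cost, and improved with a simplified cuckoo-style "lay eggs near good habitats" loop. The network layout, cost function, and population sizes are all illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.random((20, 2))          # sensor positions in the unit square
nodes[0] = np.array([0.5, 0.5])      # node 0 acts as the sink in this toy setup


def energy(parent):
    """Toy cost: transmission energy grows with the squared distance to each node's parent."""
    return sum(np.sum((nodes[i] - nodes[parent[i]]) ** 2) for i in range(1, len(nodes)))


def random_tree():
    # Node i picks a parent among nodes 0..i-1, which guarantees a tree rooted at the sink.
    return np.array([0] + [rng.integers(0, i) for i in range(1, len(nodes))])


def mutate(parent):
    child = parent.copy()
    i = rng.integers(1, len(nodes))
    child[i] = rng.integers(0, i)    # re-attach one node to another earlier node
    return child


population = [random_tree() for _ in range(15)]
for _ in range(200):
    best = min(population, key=energy)
    # "Eggs" laid near the best habitat replace the worst candidates.
    population = sorted(population, key=energy)[:10] + [mutate(best) for _ in range(5)]
print(min(energy(p) for p in population))
```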
  • Mohammad Hasheminejad, Hassan Farsi* Page 191
    This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method to improve the performance of a classification system, since it gains the advantage of a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score. This score determines the resemblance between the input utterance and the pre-enrolled target speakers. Since there is a variety of information in a speech signal, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision about the verification. Such a system receives several scores as input and takes a binary decision: accept or reject the claimed identity. Most of the recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, whose weights are estimated using logistic regression. Additional research has been performed on ensemble classification by adding different regularization terms to the logistic regression formula. However, two points are missing in this type of ensemble classification: the correlation of the base classifiers and the superiority of some base classifiers for each test instance. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive studies on the NIST 2004 speaker recognition evaluation (SRE) corpus in terms of EER, minDCF and minCLLR show the effectiveness of the proposed method.
    Keywords: Speaker Recognition, Speaker Verification, Ensemble Classification, Classifier Fusion, IBSparse
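
    A minimal sketch of the baseline fusion scheme this paper builds on: the matching scores of several base classifiers are fused by a weighted linear combination whose weights are estimated with logistic regression. The scores below are synthetic and the accept threshold of 0 is an assumption; the instance-based selection and weighting proposed in the paper is not shown.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_classifiers = 500, 3

# Synthetic matching scores: target trials score higher on average than impostor trials.
labels = rng.integers(0, 2, n_trials)                      # 1 = target, 0 = impostor
scores = rng.normal(loc=labels[:, None] * 1.5, scale=1.0,
                    size=(n_trials, n_classifiers))        # one score per base classifier

fusion = LogisticRegression().fit(scores, labels)          # weights estimated by logistic regression
fused = fusion.decision_function(scores)                   # weighted linear combination of scores
decisions = fused > 0                                      # accept or reject the claimed identity
print(fusion.coef_, fusion.intercept_)
```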
  • Javad Paksima, Homa Khajeh* Page 200
    Users who seek results pertaining to their queries come first. To meet users' needs, thousands of webpages must be ranked, which requires an efficient algorithm to place the relevant webpages in the first ranks. In information retrieval, it is highly important to design a ranking algorithm that provides the results pertaining to a user's query, given the great deal of information on the World Wide Web. In this paper, a ranking method is proposed with a hybrid approach that considers both the content and the connections of pages. The proposed model is a smart surfer that passes or hops from the current page to one of the externally linked pages with respect to their content. A probability, obtained using a learning automaton along with the content of and links to pages, is used to select the webpage to hop to. For a transition to another page, the content of the pages linked to it is used. As the surfer moves about the pages, the PageRank score of each page is recursively calculated. Two standard datasets named TD2003 and TD2004, which are subsets of the LETOR3 dataset, were used to evaluate the proposed method. The results indicate the superior performance of the proposed approach over other methods introduced in this area.
    Keywords: Ranking, Web Pages, Surfer Model, Learning Automata, Information Retrieval
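
    A minimal sketch of the underlying idea, not the paper's algorithm: a surfer whose transition probabilities from a page are weighted by the content relevance of the pages it links to, with scores computed by a PageRank-style iteration. The learning-automata update of these probabilities used in the paper is not reproduced; the toy graph, the fixed relevance values, and the damping factor are assumptions.
```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [0]}      # toy web graph: page -> outlinks
relevance = np.array([0.2, 0.9, 0.5, 0.7])          # content score of each page w.r.t. the query
n, damping = len(links), 0.85

# Row-stochastic transition matrix: the probability of hopping from i to j is
# proportional to the relevance of page j among i's outlinks.
P = np.zeros((n, n))
for i, outs in links.items():
    weights = relevance[outs]
    P[i, outs] = weights / weights.sum()

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * rank @ P   # content-weighted PageRank iteration
print(rank)
```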