Table of Contents

International Journal of Information and Communication Technology Research - Volume 4, Issue 4, Autumn 2012

  • Publication date: 1391/10/11
  • Number of titles: 7
  • Mohammad Behdadfar, Hossein Saidi, Masoud Reza Hashemi Page 1
    This paper introduces a new prefix matching algorithm called “Coded Prefix Search” and an improved version called “Scalar Prefix Search”, which use a coding concept for prefixes that can be implemented on a variety of trees, particularly limited-height balanced trees, for both IPv4 and IPv6 prefixes. Using this concept, each prefix is treated as a number. The main advantage of the proposed algorithms over Trie-based solutions is that the number of node accesses does not depend on the IP address length in either the search or the update procedure. Applying this concept to balanced trees therefore yields O(log n) node-access complexity for both search and update, where n is the number of prefixes. Also, compared to existing range-based solutions, it does not need to store both end points of a prefix or to store ranges. Finally, compared to similar tree-based solutions, it exhibits good storage requirements while supporting faster incremental updates. These properties make the algorithm a good candidate for hardware implementation.
    Keywords: Coded Prefix, Scalar Prefix, Route Lookup, Longest Matching Prefix
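The coding idea in this abstract can be illustrated with a minimal sketch, assuming IPv4 and using a plain linear scan rather than the paper's balanced-tree structure: each prefix is encoded as a number plus a length, and longest-prefix matching reduces to comparing masked values. Function names here are illustrative, not from the paper.

```python
import ipaddress

def prefix_key(cidr):
    # Treat the prefix as a number: (integer value of the network bits, prefix length).
    net = ipaddress.ip_network(cidr)
    return int(net.network_address), net.prefixlen

def longest_match(prefixes, addr):
    # Linear scan for illustration only; the paper stores the coded
    # prefixes in a balanced tree to get O(log n) node accesses.
    a = int(ipaddress.ip_address(addr))
    best, best_len = None, -1
    for cidr in prefixes:
        value, plen = prefix_key(cidr)
        # A prefix matches when the top plen bits of the address equal its coded value.
        if (a >> (32 - plen)) == (value >> (32 - plen)) and plen > best_len:
            best, best_len = cidr, plen
    return best
```

For example, with a table containing `10.0.0.0/8`, `10.1.0.0/16`, and a default route, the address `10.1.2.3` matches the `/16` entry, the most specific prefix.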
  • Mohammad Shahram Moin, Alireza Sepas-Moghaddam Page 13
    The JPEG compression standard is widely used for reducing the volume of images that are stored or transmitted via networks. In biometric datasets, face images are usually stored in JPEG compressed format, and must be fully decompressed to be used in a face recognition system. Recently, in order to reduce the time and complexity of the decompression step, face recognition in the compressed domain has emerged as a topic in face recognition systems. In this paper, we have tested different feature spaces, including PCA and ICA, at various stages of the JPEG compressed domain. The goal of these tests was to determine the best stage in the JPEG compressed domain and the best features to be used in the face recognition process, regarding the trade-off between decompression overhead reduction and recognition accuracy. The experiments were conducted on the FERET and FEI face databases, and results have been compared across various stages of the JPEG compressed domain. The results show the superiority of the zigzag-scanned stage compared to other stages, and of the ICA feature space compared to other feature spaces, both in terms of recognition accuracy and computational complexity.
    Keywords: Face Recognition, JPEG Compressed Domain, JPEG Decompression, Face Database, Feature Extraction
  • Ali Abdolhoseini, Farzad Zargar Page 25
    In this paper we propose an automatic image annotation technique. The proposed technique is based on the coverage ratio of tags, employing both image content and metadata. The images in a reference image set are employed to automatically annotate a given image according to the coverage ratio of the tags in the reference images. Tags and content descriptors, as well as the coverage ratio of the tags, are generated for the entire set of images in the reference image set. The color and texture content descriptors are employed to retrieve similar images for an un-annotated image from the reference image dataset, and the tags in the metadata of the retrieved images are used to annotate the un-annotated image. Simulation results indicate that the proposed technique outperforms another automatic annotation technique that uses similar content descriptors in both average precision and average recall.
    Keywords: coverage ratio, image content, image metadata, discrete wavelet transform, color histogram, automatic annotation
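The annotation step described above can be sketched as follows, under one labeled assumption: the abstract does not define "coverage ratio", so here it is taken to be the fraction of retrieved similar images whose metadata contains a given tag. The function name and threshold are illustrative.

```python
from collections import Counter

def annotate_by_coverage(retrieved_tags, threshold=0.5):
    # retrieved_tags: one tag set per image retrieved by the
    # color/texture content descriptors for the un-annotated image.
    # Assumption: coverage ratio of a tag = fraction of retrieved
    # images whose metadata contains that tag; tags at or above the
    # threshold become the new image's annotations.
    counts = Counter(t for tags in retrieved_tags for t in set(tags))
    n = len(retrieved_tags)
    return {t for t, c in counts.items() if c / n >= threshold}
```

For instance, a tag present in all retrieved neighbors has coverage 1.0 and is kept, while a tag appearing in only one of several neighbors is discarded.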
  • Mohammad Amin Golshani, Alimohammad Zarehbidoki Page 33
    Obtaining important pages rapidly can be very useful when a crawler cannot visit the entire Web in a reasonable amount of time. Several crawling algorithms, such as Partial PageRank, Batch PageRank, OPIC, and FICA, have been proposed, but they have high time complexity or low throughput. To overcome these problems, we propose a new crawling algorithm called IECA, which is easy to implement, with low time complexity O(E·log V) and memory complexity O(V), where V and E are the numbers of nodes and edges in the Web graph, respectively. Unlike the aforementioned algorithms, IECA traverses the Web graph only once, and the importance of Web pages is determined based on the logarithmic distance and the weight of incoming links. To evaluate IECA, we use three different Web graphs: UK-2005, the Web graph of the University of California, Berkeley (2008), and Iran-2010. Experimental results show that our algorithm outperforms other crawling algorithms in discovering highly important pages.
    Keywords: search engines, Web crawling, Web graph, logarithmic distance, reinforcement learning, World Wide Web
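The single-traversal idea can be sketched as a one-pass BFS that scores pages as it goes. The abstract does not give IECA's formula, so the link length log2(outdeg + 1) and the distance-decayed contribution below are illustrative assumptions, not the paper's method; they only show how a logarithmic distance and incoming-link weights can be combined in one traversal.

```python
import math
from collections import deque

def ieca_like_scores(graph, seed):
    # graph: {page: [outlinked pages]}; every page appears as a key.
    # Single BFS pass from the seed, mirroring IECA's one-traversal idea.
    # Assumption (illustrative): a link out of u has length
    # log2(outdeg(u) + 1), and an incoming link contributes more to a
    # page's importance the shorter its total logarithmic distance
    # from the seed.
    dist = {seed: 0.0}
    score = {p: 0.0 for p in graph}
    q = deque([seed])
    while q:
        u = q.popleft()
        step = math.log2(len(graph[u]) + 1)
        for v in graph[u]:
            score[v] += 1.0 / (1.0 + dist[u] + step)
            if v not in dist:            # enqueue each page only once
                dist[v] = dist[u] + step
                q.append(v)
    return score
```

Pages reached through many short, light links accumulate higher scores, so the crawler can prioritize them without iterating the graph repeatedly as PageRank-style methods do.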
  • Sajjad Salehi, Fattaneh Taghiyareh, Mohammad Saffar Page 43
    One of the well-known cognitive concepts commonly used in multi-agent systems is the Shared Mental Model (SMM). In this paper we introduce a context-aware mental model sharing strategy for a dynamic, heterogeneous multi-agent environment that maximizes collaboration by minimizing mental conflicts. This strategy uses a context-aware architecture composed of three primary layers and a cross-layer part that facilitates mental model sharing between agents. We model a complex, inaccessible environment with specific dynamisms in which agents must share their mental models in order to make correct decisions. Our proposed strategy is compared with other methods using several important criteria, such as shared information accuracy, communication load, and performance under time constraints. Our findings may be interpreted as strong evidence that our method enables heterogeneous agents to achieve qualified teamwork as well as facilitating collective commitments.
    Keywords: shared mental model, mental model, dynamic environment, context awareness, sharing method
  • S.Amir Ehsani, Amirmasoud Eftekhari Moghadam Page 55
  • Rozita Tavakolian, Mohammad Taghi Hamidi Beheshti, Nasrollah Moghaddam Charkari Page 69
    Highly effective recommender systems may still face users’ interest drifting. One of the main strategies for handling interest drifting is the forgetting mechanism. Current approaches based on the forgetting mechanism have some drawbacks: (i) the times at which user interest drifts are not detected, and (ii) they are not adaptive to the evolving nature of user interest. Until now, there has not been any study to overcome these problems. This paper discusses the above drawbacks and presents a novel recommender system, named WmIDForg, using web usage mining, web content mining techniques, and a forgetting mechanism to address the user interest-drift problem. We try to detect evolving, time-variant patterns of each user's interest individually, and then dynamically use this information to better predict the user's favorite items over time. The experimental results on the EachMovie dataset demonstrate that our methodology increases recommendation precision by 6.80% and 1.42% compared with available approaches with and without interest-drifting, respectively.
    Keywords: web recommender system, user's interest drifting, forgetting mechanism, web usage mining, web content mining, time-based hybrid weight
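The forgetting mechanism the abstract builds on can be sketched with the common exponential-decay form: older ratings are down-weighted so recent interests dominate predictions. This is a generic sketch, not WmIDForg's adaptive, drift-time-aware scheme; the half-life parameter is an illustrative assumption.

```python
import math

def forgetful_prediction(ratings, now, half_life=30.0):
    # ratings: list of (value, timestamp_in_days) pairs for one user/item.
    # Exponential forgetting: a rating's weight halves every `half_life`
    # days, so the prediction tracks the user's recent interest.
    lam = math.log(2) / half_life
    num = sum(r * math.exp(-lam * (now - t)) for r, t in ratings)
    den = sum(math.exp(-lam * (now - t)) for r, t in ratings)
    return num / den if den else 0.0
```

With a fresh 5-star rating and a 100-day-old 1-star rating, the prediction stays close to 5, whereas a plain average would give 3, illustrating why forgetting helps under interest drift.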