Table of Contents

Journal of Information Systems and Telecommunication
Volume: 3, Issue: 2, Apr-Jun 2015

  • Publication date: 1394/04/05
  • Number of titles: 8
  • Hassan Ghassemian*, Azra Rasouli Kenari Page 66
    Congenital heart disease is now the most common severe congenital abnormality found in live births and the cause of more than half the deaths from congenital anomalies in childhood. Heart murmurs are often the first signs of pathological changes of the heart valves, and they are usually found during auscultation in primary health care. Auscultation is widely applied in clinical practice; nonetheless, sound interpretation depends on clinician training and experience. Distinguishing a pathological murmur from a physiological murmur is difficult and prone to error. To address this problem we have devised a simplified approach to pediatric cardiac scanning. This will not detect all forms of congenital heart disease but will help in the diagnosis of many defects. Cardiac auscultatory examinations of 93 children were recorded, digitized, and stored along with the corresponding echocardiographic diagnoses, and automated spectral analysis using discrete wavelet transforms was performed. Patients without heart disease and either no murmur or an innocent murmur (n = 40) were compared to patients with a variety of cardiac diagnoses and a pathologic systolic murmur present (n = 53). A specificity of 100% and a sensitivity of 90.57% were achieved using signal processing techniques and a k-NN classifier (a minimal sketch of such a pipeline follows the keywords).
    Keywords: Phonocardiogram (PCG), Murmur, Cardiac, K-nn Classifier, Pediatric, Wavelet
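    A minimal sketch of the wavelet-feature-plus-k-NN pipeline the abstract describes, assuming PyWavelets and scikit-learn; the wavelet family ('db4'), decomposition level, and relative-energy features are illustrative stand-ins, not necessarily the authors' exact choices.

    ```python
    # Sketch: DWT sub-band energies as features, k-NN as the classifier.
    # 'db4' and level=5 are illustrative, not the paper's stated settings.
    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def wavelet_energy_features(pcg, wavelet="db4", level=5):
        """Relative energy of each DWT sub-band of a 1-D PCG segment."""
        coeffs = pywt.wavedec(pcg, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    def train_verifier(segments, labels, k=3):
        """segments: list of 1-D arrays; labels: 0 innocent, 1 pathologic."""
        feats = np.array([wavelet_energy_features(s) for s in segments])
        return KNeighborsClassifier(n_neighbors=k).fit(feats, labels)
    ```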
  • Hamed Modaghegh*, Seyed Alireza Seyedin Page 75
    This paper presents a new active steganalysis method to break transform domain steganography. Most steganalysis techniques focus on detecting the presence or absence of a secret message in a cover (passive steganalysis), but in some cases we need to extract or estimate the hidden message (active steganalysis). Despite the importance of estimating the message, little research has been conducted in this area. In this study, a new active steganalysis method based on the Sparse Component Analysis (SCA) technique is presented. Here, the sparsity of the cover image and the hidden message is used to extract the hidden message from a stego image. In our method, transform domain steganography is formulated mathematically as a linear combination of sparse sources; thus, active steganalysis can be posed as an SCA problem. The feasibility of solving the SCA problem is confirmed by linear programming methods (a sketch follows the keywords). A fast algorithm is then proposed to decrease the computational cost of steganalysis while maintaining accuracy. The accuracy of the proposed method has been confirmed in different experiments on a variety of transform domain steganography methods. According to these experiments, our method not only reduces the error rate, but also decreases the computational cost compared to previous active steganalysis methods in the literature.
    Keywords: Sparse Component Analysis (SCA), Active Steganalysis, Blind Source Separation (BSS), Transform Domain Steganography
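    A minimal sketch of the sparse-recovery step such an SCA formulation reduces to: L1-minimization (basis pursuit) solved as a linear program with SciPy. The mixing matrix A and the sparse source here are synthetic illustrations, not the paper's stego model.

    ```python
    # Sketch: basis pursuit  min ||s||_1  s.t.  A s = x  as a linear
    # program, via the split s = u - v with u, v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, x):
        m, n = A.shape
        c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
        A_eq = np.hstack([A, -A])           # A u - A v = x
        res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * n))
        u, v = res.x[:n], res.x[n:]
        return u - v

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))       # underdetermined mixing
    s_true = np.zeros(50)
    s_true[[3, 17, 40]] = [1.0, -2.0, 0.5]  # sparse hidden source
    s_hat = basis_pursuit(A, A @ s_true)    # L1 recovers the sparse s
    ```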
  • Mina Alibeigi*, Niloofar Mozafari, Zohreh Azimifar, Mahnaz Mahmoodian Page 85
    Edge detection plays a significant role in image processing, and the performance of high-level tasks such as image segmentation and object recognition depends on its efficiency. Accurate edge map generation is clearly more difficult when images are corrupted with noise. Moreover, most edge detection methods have parameters that must be set manually. Here we propose a new color edge detector based on a statistical test, which is robust to noise; the parameters of this method are set automatically based on image content (a rank-test sketch follows the keywords). To show the effectiveness of the proposed method, four state-of-the-art edge detectors are implemented and the results are compared. Experimental results on five of the most well-known edge detection benchmarks show that the proposed method is robust to noise. The performance of our method at lower levels of noise is comparable to the existing approaches, whose performance highly depends on their parameter tuning stage. However, at higher levels of noise, the observed results significantly highlight the superiority of the proposed method over the existing edge detection methods, both quantitatively and qualitatively.
    Keywords: Edge Detection, Color, Noisy Image, RRO Test, Regression, Pratt's Figure of Merit
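    A minimal sketch of a statistical-test edge decision, assuming SciPy: the two halves of a pixel neighbourhood are compared with a two-sample rank test. The Wilcoxon rank-sum test stands in here for the paper's RRO statistic, and the significance threshold is illustrative.

    ```python
    # Sketch: declare a vertical edge when the left and right halves of
    # a window differ significantly under a rank test (rank-sum stands
    # in for the RRO statistic; alpha is an illustrative threshold).
    import numpy as np
    from scipy.stats import ranksums

    def is_vertical_edge(window, alpha=0.01):
        """window: 2-D array with odd width; centre column is skipped."""
        _, w = window.shape
        left = window[:, : w // 2].ravel()
        right = window[:, w // 2 + 1 :].ravel()
        _, p = ranksums(left, right)
        return p < alpha                    # significant shift => edge

    rng = np.random.default_rng(0)
    patch = np.hstack([np.full((5, 2), 10.0), np.zeros((5, 1)),
                       np.full((5, 2), 200.0)]) + rng.standard_normal((5, 5))
    print(is_vertical_edge(patch))          # True: strong step edge
    ```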
  • Neda Dousttalab*, Muhammad Ali Jabraeil Jamali, Ali Ghaffari Page 95
    Scheduling plays an important role in the quality of networks-on-chip: careful mapping and scheduling remove delays and increase the performance of these systems. The operation scheduling problem in a network-on-chip (NoC) is NP-hard; therefore, effective heuristic methods are needed to provide near-optimal solutions. In this paper, ant colony scheduling is introduced as a simple and effective method to increase allocator matching efficiency and hence network performance, particularly suited to networks with complex topologies and asymmetric traffic patterns (a sketch follows the keywords). The proposed algorithm was studied in torus and flattened-butterfly topologies with multiple types of traffic pattern. Its performance was evaluated with BookSim, a specialized network-on-chip simulator running under the Linux operating system. Evaluation results showed that, in many cases, the algorithm reduced network delays and increased chip performance compared with other algorithms; for instance, on complex topologies, throughput increases of up to 10% at maximum injection rate were observed, on average, relative to other existing algorithms.
    Keywords: On-Chip Interconnection Networks, Switch Allocator, Ant Colony Scheduling
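    A minimal sketch of ant-colony construction of a switch-allocator matching, in the spirit of the abstract: pheromone levels over (input, output) pairs guide ants toward conflict-free matchings, and the best matching found is reinforced. All parameters and the request-matrix interface are illustrative, not BookSim's.

    ```python
    # Sketch: ACO for switch allocation. requests[i, j] = True when
    # input port i requests output port j; ants build conflict-free
    # matchings biased by pheromone tau; the best one is reinforced.
    import numpy as np

    def aco_allocate(requests, n_ants=10, iters=20, rho=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n_in, n_out = requests.shape
        tau = np.ones((n_in, n_out))        # pheromone levels
        best, best_size = {}, -1
        for _ in range(iters):
            for _ in range(n_ants):
                free, match = set(range(n_out)), {}
                for i in rng.permutation(n_in):
                    opts = [j for j in free if requests[i, j]]
                    if opts:
                        p = tau[i, opts] / tau[i, opts].sum()
                        j = int(rng.choice(opts, p=p))
                        match[i] = j
                        free.remove(j)
                if len(match) > best_size:
                    best, best_size = match, len(match)
            tau *= 1.0 - rho                # evaporation
            for i, j in best.items():
                tau[i, j] += 1.0            # reinforce best matching
        return best                         # input -> granted output
    ```

    In a real allocator this construction would run once per cycle over the pending requests; the sketch only shows the matching heuristic itself.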
  • Mohammad Ranjkesh, Reza Hasanzadeh* Page 100
    This paper presents an automatic sound source localization approach based on a combination of two basic time delay estimation sub-methods, namely the Time Difference of Arrival (TDOA) and Steered Response Power (SRP) methods. The TDOA method is fast but vulnerable when locating a sound source at long distances and in reverberant environments, and it is highly sensitive to noise. On the other hand, the conventional SRP method is time consuming but accurately finds the sound source location in noisy and reverberant environments. Another SRP-based method, the SRP Phase Transform (SRP-PHAT), has been suggested for better noise robustness and more accurate sound source localization. In this paper, two approaches combining the TDOA and SRP-based methods are proposed for sound source localization. In the first, called Classical TDOA-SRP, the TDOA method is used to find the approximate sound source direction, and the SRP-based methods are then used to find the accurate location of the sound source within the Field of View (FOV) obtained by the TDOA method. In the second, called Optimal TDOA-SRP, a new criterion is proposed for finding the effective FOV obtained through the TDOA method, to further reduce the computational cost of the SRP-based methods and improve noise robustness. Experiments carried out under different conditions confirm the validity of the proposed approaches (a TDOA sketch follows the keywords).
    Keywords: Steered Response Power, Time Delay Estimation, Steered Response Power Phase Transform, Sound Source Localization, Time Difference Of Arrival, Field Of View
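    A minimal sketch of the TDOA building block, assuming NumPy: GCC-PHAT delay estimation between two microphone signals. GCC-PHAT is a standard choice for this step, not necessarily the exact estimator the paper uses; the sampling rate and synthetic delay are illustrative.

    ```python
    # Sketch: GCC-PHAT delay estimation between two microphone channels.
    import numpy as np

    def gcc_phat(sig, ref, fs):
        """Delay (seconds) of sig relative to ref via PHAT-weighted GCC."""
        n = len(sig) + len(ref)
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        R /= np.abs(R) + 1e-12              # PHAT weighting
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs

    fs = 16000
    x = np.random.default_rng(0).standard_normal(fs)  # 1 s source signal
    delayed = np.concatenate((np.zeros(40), x))[: len(x)]
    print(gcc_phat(delayed, x, fs))         # ~40 / 16000 = 2.5 ms
    ```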
  • Behnam Akbarian*, Saeed Ghazi Maghrebi Page 108
    The goal of the future terrestrial digital video broadcasting (DVB-T) standard is to employ diversity and spatial multiplexing in order to achieve the full multiple-input multiple-output (MIMO) channel capacity. The DVB-T2 standard targets a throughput improvement of at least 30% over DVB-T, using improved coding methods, modulation techniques and multiple antenna technologies. After a brief presentation of the antenna diversity technique and its properties, we show that the well-known Alamouti decoding scheme cannot simply be used over frequency-selective channels. In other words, Alamouti space-frequency coding in DVB-T2 provides additional diversity, but the performance degrades in highly frequency-selective channels because the channel frequency response is not necessarily flat over the entire Alamouti block code (see the sketch after the keywords). The objective of this work is to present an enhanced Alamouti space-frequency block decoding scheme for MIMO and orthogonal frequency-division multiplexing (OFDM) systems using delay diversity techniques over highly frequency-selective channels. We also investigate the properties of the proposed scheme over different channels. Specifically, we show that the Alamouti scheme combined with Cyclic Delay Diversity (CDD) performs better over certain channels, and we then apply this scheme, as an example, to the DVB-T2 system. Simulation results confirm that the proposed scheme has a lower bit error rate (BER), especially at high SNRs, than the standard Alamouti decoder over highly frequency-selective channels such as single frequency networks (SFN). Furthermore, the new scheme offers high reliability and robustness, and the other advantages of the proposed method are its simplicity, flexibility and standard compatibility with respect to conventional methods.
    Keywords: Alamouti Coding, DVB-T2, MIMO, OFDM
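    A minimal sketch of standard Alamouti space-frequency decoding over a pair of adjacent OFDM subcarriers (2 TX, 1 RX), assuming NumPy. The combiner below assumes the channel is flat across the subcarrier pair; the abstract's point is precisely that this assumption breaks down on highly frequency-selective channels.

    ```python
    # Sketch: Alamouti SFBC over subcarriers (k, k+1), 2 TX / 1 RX.
    # Carrier k carries (s0 from TX1, s1 from TX2); carrier k+1 carries
    # (-conj(s1) from TX1, conj(s0) from TX2). h1, h2 assumed flat.
    import numpy as np

    def alamouti_sf_decode(y0, y1, h1, h2):
        s0 = np.conj(h1) * y0 + h2 * np.conj(y1)
        s1 = np.conj(h2) * y0 - h1 * np.conj(y1)
        norm = abs(h1) ** 2 + abs(h2) ** 2
        return s0 / norm, s1 / norm

    s0, s1 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)  # QPSK pair
    h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j        # flat over the pair
    y0 = h1 * s0 + h2 * s1                  # received on carrier k
    y1 = -h1 * np.conj(s1) + h2 * np.conj(s0)  # received on carrier k+1
    print(alamouti_sf_decode(y0, y1, h1, h2))  # ~ (s0, s1)
    ```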
  • Mohammad Esmaeel Yahyatabar, Yasser Baleghi*, Mohammad Reza Karami Page 115
    In this paper, the specific traits of Persian signatures are applied to signature verification, and efficient features that can discriminate among Persian signatures are investigated. Persian signatures, in comparison with signatures in other languages, have more curvature and end in a specific style. An experiment was designed to determine the function indicating the most robust features of Persian signatures. To improve verification performance, a combination of shape-based and dynamic extracted features is applied to Persian signature verification, and a Support Vector Machine (SVM) is used as the classifier (an SVM sketch follows the keywords). The proposed method is examined on two common Persian datasets, the new Persian dataset proposed in this paper (Noshirvani Dynamic Signature Dataset), and an international dataset (SVC2004). For the three Persian datasets the EER values are 3, 3.93 and 4.79 respectively, while for SVC2004 the EER is 4.43. These experiments led to the identification of new feature combinations that are more robust. The results show that these features outperform all previous work on the Persian signature databases; however, they do not reach the best reported results on the international database. From this it can be deduced that language-specific approaches may show better results.
    Keywords: Online Signature Verification, Support Vector Machine, Robust Feature Extraction, Online Signature Dataset
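    A minimal sketch of the verification step, assuming scikit-learn: an RBF-kernel SVM trained on pre-extracted signature feature vectors. Feature extraction itself (the shape-based and dynamic features the paper combines) is abstracted into synthetic vectors here, and all hyperparameters are illustrative.

    ```python
    # Sketch: SVM verification on signature feature vectors.
    # X rows are per-signature feature vectors; y = 1 genuine, 0 forgery.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (40, 12)),   # genuine (synthetic)
                   rng.normal(1.5, 1.0, (40, 12))])  # forgery (synthetic)
    y = np.array([1] * 40 + [0] * 40)

    verifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    verifier.fit(X, y)
    print(verifier.predict(X[:3]))          # genuine samples -> 1
    ```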
  • Navid Bahrami, Amir Hossein Jadidinejad*, Mozhdeh Nazari Page 125
    Exploiting the semantic content of texts has always been an important and challenging issue in Natural Language Processing, owing to its wide range of applications such as finding documents related to a query, document classification, and computing the semantic similarity of documents. In this paper, a novel corpus-based approach for computing the semantic similarity of texts is proposed, using the Wikipedia corpus organized in a three-dimensional tensor structure. First, the semantic vectors of the words in a document are obtained from the vector space derived from the words in Wikipedia articles; then the semantic vector of the document is formed from its word vectors. The semantic similarity of a pair of documents is then computed by comparing their corresponding semantic vectors. Because of the resulting high-dimensional vectors, the vector space of the Wikipedia corpus suffers from the curse of dimensionality; moreover, vectors in a high-dimensional space are usually very similar to each other, making it futile to identify the most appropriate semantic vector for a word. The proposed approach therefore mitigates the curse of dimensionality by reducing the dimensionality of the vector space through random indexing (sketched after the keywords), which also significantly reduces the memory consumption of the proposed approach. Additionally, handling synonymous and polysemous words becomes feasible in the proposed approach by means of structured co-occurrence through random indexing.
    Keywords: Information Retrieval, Natural Language Processing, Random Indexing, Semantic Similarity, Semantic Tensor
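    A minimal sketch of random indexing, assuming NumPy: each word gets a fixed sparse ternary index vector, and a word's semantic vector accumulates the index vectors of its context words. The dimension, number of non-zeros, window size and toy corpus are all illustrative.

    ```python
    # Sketch: random indexing. index[w] is a fixed sparse +-1 vector;
    # sem[w] sums the index vectors of words co-occurring with w.
    import numpy as np

    def index_vector(d, nnz, rng):
        v = np.zeros(d)
        pos = rng.choice(d, size=nnz, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=nnz)
        return v

    def semantic_vectors(docs, d=512, nnz=8, window=2, seed=0):
        rng = np.random.default_rng(seed)
        index, sem = {}, {}
        for doc in docs:
            for w in doc:
                if w not in index:
                    index[w] = index_vector(d, nnz, rng)
            for i, w in enumerate(doc):
                sem.setdefault(w, np.zeros(d))
                lo, hi = max(0, i - window), min(len(doc), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        sem[w] += index[doc[j]]
        return sem

    docs = [["heart", "murmur", "sound"], ["heart", "valve", "sound"]]
    sem = semantic_vectors(docs)
    a, b = sem["murmur"], sem["valve"]
    print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # shared context
    ```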