Table of Contents

Scientia Iranica
Volume: 20, Issue: 3, 2013

  • Transactions D: Computer Science & Engineering and Electrical Engineering
  • Publication date: 1392/06/03
  • Number of titles: 17
  • Y. Baseri, B. Takhtaei, J. Mohajeri Pages 637-646
    Eslami and Talebi (2011) proposed an untraceable electronic cash scheme and claimed that it protects the anonymity of customers, detects the identity of double spenders, and provides date attachability of coins to manage the bank database. In this paper, we review Eslami and Talebi's scheme, one of the latest untraceable electronic cash schemes, and show its weaknesses (in fulfilling double-spender detection, unforgeability and date attachability of coins) and its faults (related to the exchange protocol). We then propose a new untraceable electronic cash scheme that is immune to the weaknesses of the former. Our scheme provides anonymity, double-spending detection, unforgeability and date attachability, and prevents forging. To this end, we describe a special construction that injects the expiration date and the identity of the customer into the coin and recovers the identity in the case of double spending. Lastly, we show that the efficiency of our scheme is comparable with that of other schemes.
  • A. Sharif Ahmadian, M. Hosseingholi, A. Ejlali Pages 647-656
    Recently, the tradeoff between low energy consumption and high fault tolerance has attracted a lot of attention as a key issue in the design of real-time systems. Dynamic Voltage Scaling (DVS) is commonly employed as one of the most effective low-energy techniques for real-time systems, and it has been observed that feedback-based methods can improve the effectiveness of DVS-enabled systems. In this paper, we investigate reducing the energy consumption of fault-tolerant hard real-time systems using feedback control theory. Our proposed method makes the system capable of selecting the proper frequency and voltage settings in order to reduce energy consumption, while guaranteeing hard real-time requirements in the presence of unpredictable workload fluctuations and faults. In the proposed method, the available slack time is exploited by a feedback-based DVS at runtime to reduce energy consumption, and some slack time is reserved for re-execution in case of faults. The simulation results show that, compared with the traditional DVS method, our proposed method not only provides up to 59% energy saving, but also satisfies hard real-time constraints. The proposed method is also effective in reducing static energy, and transition overheads are taken into account in our simulation experiments.
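    The following is a minimal, hedged sketch of the feedback idea described above: the measured slack of the previous job drives the choice of the next normalized frequency, and part of the remaining slack is reserved for re-execution after a fault. The controller gain, frequency range, and reservation ratio are illustrative assumptions, not the paper's design.

```python
# Hypothetical feedback-DVS sketch (not the authors' controller): frequencies are
# normalized to [f_min, 1.0], and slack values are fractions of the deadline.

def next_frequency(f_prev, slack_frac, target_frac, k_p=0.5, f_min=0.4, f_max=1.0):
    """Proportional feedback on the (deadline-normalized) slack error."""
    error = slack_frac - target_frac        # positive -> more slack than needed
    f_new = f_prev - k_p * error            # more slack => scale frequency down
    return min(f_max, max(f_min, f_new))

def schedule_job(wcet, deadline, f, reserve_ratio=0.3):
    """Scale execution time by 1/f and reserve part of the slack for re-execution."""
    exec_time = wcet / f
    slack = max(deadline - exec_time, 0.0)
    reserved = reserve_ratio * slack        # kept aside in case the job must re-run
    return exec_time, (slack - reserved) / deadline, reserved

# Example: a job with WCET 4 ms and deadline 10 ms, run at full speed first.
f = 1.0
exec_t, usable_slack_frac, reserved = schedule_job(4.0, 10.0, f)
f = next_frequency(f, usable_slack_frac, target_frac=0.1)   # next job runs slower
```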
  • E. Ansari, M.H. Sadreddini, B. Sadeghi Bigham, F. Alimardani Pages 657-662
    Devising an efficient general feature selection method to cope with the curse of dimensionality is still an open problem in pattern recognition, and accounting for the cooperation among features during the search process is its most important challenge. In this paper, a combinatorial approach is proposed that consists of three feature reduction algorithms applied in parallel so that they can cooperate. We consider each of these algorithms as a component in a reduction framework. For each component, among the various attribute selection algorithms, Tabu Search (TS), a useful and state-of-the-art algorithm, is used. To take account of the interaction between features, more subsets should be examined; hence, each component explores a local area of the feature space that differs from those of the other components. The proposed algorithm, called Cooperative Tabu Search (CTS), and a revised version of this new method that accelerates convergence, are introduced. After sufficient iterations satisfying the objective function, the final subset is selected by voting among the three reduction phases, and the data are then transformed into the new space, where they are classified with commonly used classifiers such as Nearest Neighbor (NN) and Support Vector Machine (SVM). Benchmarks chosen from the UCI datasets are used to evaluate the proposed method against others. The experimental results show the superior accuracy of the implemented combinatorial approach in comparison with traditional methods.
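    As a hedged illustration of the final voting step described above, the snippet below combines the subsets proposed by three independently run reduction components by keeping any feature selected by a majority of them. The Tabu-Search components themselves are not reproduced, and the "at least 2 votes" rule is an assumed reading of "voting between three reduction phases".

```python
# Majority voting over feature subsets returned by three cooperating components.
from collections import Counter

def majority_vote(subsets, min_votes=2):
    votes = Counter(f for s in subsets for f in set(s))
    return sorted(f for f, v in votes.items() if v >= min_votes)

# Example: subsets proposed by three independently run reduction components.
s1, s2, s3 = {0, 2, 5, 7}, {0, 2, 3, 7}, {2, 4, 7, 9}
print(majority_vote([s1, s2, s3]))   # -> [0, 2, 7]
```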
  • M. Davoodi, A. Mohades Pages 663-669
    Because of constraints in exact modeling, measurement and computation, it is inevitable that algorithms solving real-world problems encounter errors. Hence, proposing models to handle error, and designing algorithms that work well in practice, are challenging fields. In this paper, we introduce a model, called the λ-geometry model, to handle a dynamic form of imprecision, which allows the precision to change monotonically in the input data of geometric algorithms. λ-geometry is a generalization of region-based models and provides the output of problems as functions with respect to the level of precision. This type of output helps in designing exact algorithms and is also useful in decision-making processes. Furthermore, we study the problem of orthogonal range searching in one- and two-dimensional space under the λ-geometry model, and propose efficient algorithms to solve it.
  • Ch.H. Lin, T.H. Chen, Ch.S. Wu Pages 670-681
    Taking batch image encryption into account, existing single-image encryption schemes suffer from either high computational cost or image size expansion, and these drawbacks worsen when a batch of images must be encrypted. This paper proposes a novel encryption scheme based on chaining random grids, suitable for batches of binary, grey-level and color images. Compared with existing random-grid-based encryption schemes, our method encodes n images into n+1 random cipher-grids in total, rather than the 2n cipher-grids produced by existing schemes. The decryption process is done by the human visual system and no computation is required. The method requires neither extra pixel expansion nor any encoding basis matrix, both of which are unavoidable in conventional visual cryptography. A structured analysis and discussion of quality and security are given, and the effectiveness of the method is shown by experimental results.
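    For background, the sketch below shows the classical single-image random-grid step (in the style of Kafri and Keren) that chained random-grid schemes build on; it is not the authors' batch chaining construction, and the 0 = white / 1 = black convention is an assumption.

```python
# Classical random-grid encryption of one binary secret image.
import numpy as np

def random_grid_encrypt(secret, rng=np.random.default_rng()):
    g1 = rng.integers(0, 2, size=secret.shape)   # first cipher grid: pure noise
    g2 = np.where(secret == 0, g1, 1 - g1)       # white: copy, black: complement
    return g1, g2

def stack(g1, g2):
    # Superimposing transparencies acts like pixel-wise OR: black stays black.
    return np.maximum(g1, g2)

secret = np.array([[0, 1], [1, 0]])
g1, g2 = random_grid_encrypt(secret)
# Where the secret is black, the stacked result is always black; where white,
# the stacked pixel is black only about half the time, so the eye perceives the
# contrast without any computation.
print(stack(g1, g2))
```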
  • K. Etminani, M. Naghibzadeh, A.R. Razavi Pages 682-694
    In this paper, designing a Bayesian network structure to maximize a score function based on a learning-from-data strategy is studied. The scoring function is considered to be a decomposable one, such as BDeu, BIC, BD, BDe or AIC. Optimal design of such a network is known to be an NP-hard problem, and the solution rapidly becomes infeasible as the number of variables (i.e., nodes in the network) increases. Several methods, such as hill-climbing, dynamic programming, and branch and bound techniques, have been proposed to tackle this problem. However, these techniques either produce sub-optimal solutions or require unacceptable time to produce an optimal solution; the challenge for the latter is to reduce the computation time required for large-size problems. In this study, a new branch and bound method called PBB (pruned branch and bound) is proposed, which is expected to find the globally optimal network structure with respect to a given score function. It is an any-time method, i.e., if it is stopped externally, it returns the best solution found until that time. Several pruning strategies are proposed to reduce the number of nodes created in the branch and bound tree, and practical experiments show their effectiveness. The performance of PBB on several common datasets is compared with the latest state-of-the-art methods. The results show its superiority in many aspects, especially in the required running time and the number of nodes created in the branch and bound tree.
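    The snippet below is a generic, hedged skeleton of an any-time branch-and-bound loop of the kind PBB instantiates: it keeps the best solution found so far (so an external stop still returns a usable answer) and prunes nodes whose optimistic bound cannot beat the incumbent. The expand, bound, score, and is_complete callables are placeholders, not the paper's Bayesian-network-specific definitions or pruning rules.

```python
# Generic any-time branch and bound with bound-based pruning (maximization).
import heapq
import time

def branch_and_bound(root, expand, bound, score, is_complete, time_limit=None):
    best, best_score = None, float("-inf")
    start = time.time()
    frontier = [(-bound(root), 0, root)]          # best-first on the upper bound
    counter = 1
    while frontier:
        if time_limit and time.time() - start > time_limit:
            break                                  # any-time: return best found so far
        neg_ub, _, node = heapq.heappop(frontier)
        if -neg_ub <= best_score:
            continue                               # prune: bound cannot improve the best
        if is_complete(node):
            if score(node) > best_score:
                best, best_score = node, score(node)
            continue
        for child in expand(node):
            if bound(child) > best_score:          # prune hopeless children immediately
                heapq.heappush(frontier, (-bound(child), counter, child))
                counter += 1
    return best, best_score
```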
  • S.B. Pourpeighambar, M. Sabaei Pages 695-709
    In Wireless Sensor Networks (WSNs), the power consumption of sensor nodes is the main constraint. Emerging in-network aggregation techniques are increasingly being sought to address this key challenge and to save precious energy. One application of WSNs is the data gathering of moving objects. In order to achieve complete coverage, this type of application requires spatially dense sensor deployment, which, under close observation, exhibits important spatial correlation characteristics. Rate Distortion (RD) theory underlies a data aggregation technique that can take advantage of this type of correlation with the help of a cluster-based communication model. Due to object movement, however, rate-distortion based aggregation incurs high computation overhead. This paper first gives an introduction to the rate-distortion based moving-object data aggregation model. Then, to overcome the high computation overhead, several low-overhead protocols based on this model are proposed: a protocol that uses static clustering, a protocol that uses dynamic clustering, and a hybrid protocol that takes advantage of the other two. Simulation results show that, with the hybrid method, it is possible to save more than 36% of the nodes' energy when compared with the other approaches.
  • R. Venkata Rao, Vivek Patel Pages 710-720
    Teaching–Learning-Based Optimization (TLBO) algorithms simulate the teaching–learning phenomenon of a classroom to solve multi-dimensional, linear and nonlinear problems with appreciable efficiency. In this paper, the basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of number of teachers, adaptive teaching factor, tutorial training and self-motivated learning. The performance of the improved TLBO algorithm is assessed by implementing it on a range of standard unconstrained benchmark functions having different characteristics. The results of optimization obtained using the improved TLBO algorithm are validated by comparing them with those obtained using the basic TLBO and other optimization algorithms available in the literature.
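    For reference, the following is a minimal sketch of one iteration of the basic TLBO algorithm (teacher phase followed by learner phase, for minimization); the improvements described above (multiple teachers, adaptive teaching factor, tutorial training, self-motivated learning) are not reproduced here.

```python
# One iteration of basic TLBO on a population `pop` (rows = learners).
import numpy as np

def tlbo_step(pop, f, rng=np.random.default_rng()):
    n, d = pop.shape
    fit = np.array([f(x) for x in pop])
    teacher = pop[np.argmin(fit)]
    mean = pop.mean(axis=0)
    # Teacher phase: move every learner toward the teacher, away from the mean.
    for i in range(n):
        tf = rng.integers(1, 3)                        # teaching factor in {1, 2}
        cand = pop[i] + rng.random(d) * (teacher - tf * mean)
        if f(cand) < fit[i]:                           # greedy acceptance
            pop[i], fit[i] = cand, f(cand)
    # Learner phase: each learner interacts with a random peer.
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        direction = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        cand = pop[i] + rng.random(d) * direction
        if f(cand) < fit[i]:
            pop[i], fit[i] = cand, f(cand)
    return pop, fit

# Example: minimize the sphere function with 20 learners in 5 dimensions.
pop = np.random.default_rng(0).uniform(-5, 5, size=(20, 5))
for _ in range(100):
    pop, fit = tlbo_step(pop, lambda x: float(np.sum(x ** 2)))
```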
  • P. Goudarzi Pages 721-729
    Quality of Experience (QoE) measurement for video sequences transmitted over packet-error-prone channels is a necessity. Because such channels degrade the quality of the transmitted media, and because subjective quality measurement techniques are costly and time-consuming, accurately modeling the impact of packet loss on video quality with online objective measurement methods is an important task. In the current work, a low-complexity objective video quality measurement algorithm is developed that, by considering various factors that may affect video quality, estimates subjective video quality with acceptable accuracy. The performance of the proposed objective algorithm is then compared with the popular objective SSIM and VQM techniques. The simulation results verify that the proposed algorithm has a high level of accuracy, and its low complexity makes it suitable for online implementation. Some important related patents in the field of QoE measurement and monitoring in networks are also reviewed.
  • Estimation of hypnosis susceptibility based on electroencephalogram signal features
    Z. Elahi, R. Boostani, A. Motie Nasrabadi Pages 730-737
    Quantitative estimation of hypnosis susceptibility is a crucial factor for psychotherapists. The Waterloo–Stanford scale is the gold-standard qualitative index for measuring hypnosis depth, but it is still not as accurate as hypnotizers expect. Therefore, a robust criterion is presented that uses electroencephalogram (EEG) signal features to quantitatively estimate hypnosis depth. Thirty-two subjects voluntarily participated in our study, and their EEG signals were recorded from 19 channels during hypnosis induction. Several features, such as fractal dimension, autoregressive (AR) coefficients, wavelet entropy, and band power, were extracted from the signals. Given the high dimensionality of the extracted features, Sequential Forward Selection (SFS) was employed to reduce the size of the input feature set. To categorize the hypnosis susceptibility of the participants based on their EEG features, Nearest Neighbor (NN), Fuzzy NN (FNN), and a Fuzzy Rule-Based Classification System (FRBCS) were utilized. Subjects were classified into three hypnotic ability classes: low, medium and high. Leave-one-subject-out cross-validation was used to validate our results. The experimental results match those of the Waterloo–Stanford index exactly, in that the degrees of hypnotic susceptibility of all 32 subjects were correctly determined.
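    The snippet below is a hedged sketch of the feature-reduction step described above: greedy Sequential Forward Selection wrapped around a 1-nearest-neighbour classifier, with scikit-learn as an assumed stand-in implementation and 5-fold cross-validation in place of the paper's leave-one-subject-out protocol. The EEG feature extraction and fuzzy classifiers are not reproduced.

```python
# Greedy Sequential Forward Selection with a 1-NN wrapper criterion.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def sfs(X, y, n_keep, cv=5):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_keep and remaining:
        def cv_acc(j):
            cols = selected + [j]
            clf = KNeighborsClassifier(n_neighbors=1)
            return cross_val_score(clf, X[:, cols], y, cv=cv).mean()
        best = max(remaining, key=cv_acc)      # add the feature that helps most
        selected.append(best)
        remaining.remove(best)
    return selected
```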
  • M. Shahabinejad, S. Talebi, M. Shahabinejad Pages 738-745
    The success of our previously proposed space–time coding algorithm, the full-rate linear-receiver space–time block code (FRLR STBC), has motivated us to propose, in this paper, a new class of high-rate space–time–frequency block codes (STFBCs) for frequency-selective Rayleigh fading channels. We call these codes FRLR STFBCs with interleaving (FRLR STFBCs-I). FRLR STFBCs-I achieve full diversity over quasi-static channels. Simulation results also verify that the proposed schemes perform well in comparison with recently proposed STFBCs. The most outstanding characteristic of the newly introduced high-rate codes is the linear complexity of the maximum likelihood (ML) receiver, which makes a fast and economical decoding process possible.
  • U.K. Bhowmik, R.R. Adhami Pages 746-759
    Over the past several years, there have been substantial improvements in the area of three-dimensional (3D) cone-beam Computed Tomography (CT) imaging systems. Nevertheless, further improvement is needed to detect and mitigate motion artifacts in the clinical follow-up of neurological patients with multiple sclerosis, tumors, stroke, etc., in whom failure to detect motion artifacts often leads to misdiagnosis. In this paper, we propose an innovative marker-based approach to detect and mitigate motion artifacts in 3D cone-beam brain CT systems without using any external motion tracking sensors. Motion is detected by comparing the motion-free ideal marker projections with the corresponding measured marker projections. Once motion is detected, the motion parameters (six degrees of freedom) are estimated using a numerical optimization technique. Artifacts caused by motion are mitigated in the back-projection stage of the 3D reconstruction process by correcting the position of every reconstruction voxel according to the estimated motion parameters. We refer to this algorithm as the MB_FDK (Marker-Based Feldkamp–Davis–Kress) algorithm. MB_FDK has been evaluated on a modified 3D Shepp–Logan phantom with a range of simulated motion. Simulation results provide quantitative and qualitative validation of the motion detection and artifact mitigation techniques.
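    As a toy, hedged analogue of the motion-estimation step described above, the snippet below recovers a 2-D rigid motion (tx, ty, theta) by least-squares fitting of measured marker positions to their motion-free ideal positions. The paper estimates six degrees of freedom from cone-beam projections; this only illustrates the "compare ideal and measured markers, then optimize" idea, and the marker layout is invented.

```python
# Estimate a planar rigid motion from ideal vs. measured marker positions.
import numpy as np
from scipy.optimize import least_squares

ideal = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 7.0]])  # assumed layout

def apply_motion(p, pts):
    tx, ty, th = p
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + [tx, ty]

true_p = [1.5, -0.8, np.deg2rad(4.0)]
measured = apply_motion(true_p, ideal) + 0.01 * np.random.default_rng(0).standard_normal(ideal.shape)

res = least_squares(lambda p: (apply_motion(p, ideal) - measured).ravel(), x0=[0, 0, 0])
print(res.x)   # ~ [1.5, -0.8, 0.07]; such parameters would then drive voxel correction
```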
  • M. Nasri, S. Saryazdi, H. Nezamabadi-Pour Pages 760-764
    A Switching Non-Local Means (SNLM) filter is presented for high-density salt-and-pepper noise reduction. First, the impulse noises are detected, based on the fact that their values must lie at the extreme gray levels of the image. Then, at the filtering stage, noise-free pixels remain unchanged and noisy pixels are restored using a modified non-local means filter in which only noise-free pixels contribute to the weights. That is, in a search window around a noisy pixel, small patches are taken around the noise-free pixels, and the similarity between these patches and the central patch determines the weights. Experimental results show that the proposed method can outperform many existing impulse denoising methods for high-density impulse noise in terms of PSNR and MAE.
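    The snippet below is a simplified, hedged sketch of the switching idea described above: pixels at the extreme gray levels are flagged as impulse noise, and only those pixels are replaced by a non-local-means-style weighted average in which both the patch comparison and the averaging use noise-free pixels only. The window and patch sizes and the weight kernel are illustrative choices, not the paper's tuning.

```python
# Simplified switching non-local means for salt-and-pepper noise (grayscale).
import numpy as np

def snlm_denoise(img, search=10, patch=3, h=10.0):
    img = img.astype(float)
    noisy = (img == 0) | (img == 255)              # salt-and-pepper detector
    pr = patch // 2
    ip = np.pad(img, pr, mode="reflect")           # pad so every patch is complete
    npad = np.pad(noisy, pr, mode="reflect")
    out = img.copy()
    H, W = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        ref = ip[y:y + patch, x:x + patch]
        ref_ok = ~npad[y:y + patch, x:x + patch]
        num = den = 0.0
        for j in range(max(y - search, 0), min(y + search, H - 1) + 1):
            for i in range(max(x - search, 0), min(x + search, W - 1) + 1):
                if noisy[j, i]:
                    continue                        # only noise-free pixels contribute
                cand = ip[j:j + patch, i:i + patch]
                ok = ref_ok & ~npad[j:j + patch, i:i + patch]
                if ok.any():
                    d2 = ((ref[ok] - cand[ok]) ** 2).mean()
                    w = np.exp(-d2 / (h * h))
                    num += w * img[j, i]
                    den += w
        if den > 0:
            out[y, x] = num / den                   # restore only the noisy pixel
    return out
```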
  • A. Darijani, M. Mohseni Moghadam Pages 765-770
    In this paper, the solutions of nonlinear integral equations, including Volterra, Fredholm, and Volterra–Fredholm equations of the first and second kinds, are approximated as a linear combination of some basis functions. The unknown parameters of an approximate solution are obtained by minimizing the residual function. In addition, the existence and convergence of these approximate solutions are investigated. In order to use Newton's method for minimization of the residual function, a suitable initial point is introduced. Moreover, to confirm the efficiency and accuracy of the proposed method, some numerical examples are presented; the results show considerable improvements over those of existing methods. All numerical computations were performed on a personal computer using Maple 12.
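    The snippet below is a toy, hedged illustration of the residual-minimization idea on a linear Fredholm equation of the second kind, u(x) = f(x) + ∫₀¹ K(x,t) u(t) dt: the solution is written as a combination of monomial basis functions, the integral is approximated by a trapezoidal rule, and the squared residual at sample points is minimized (with SciPy's quasi-Newton BFGS as a stand-in for the Newton iteration and basis choices of the paper).

```python
# Residual minimization for a toy Fredholm equation u(x) = x + ∫_0^1 x t u(t) dt.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 201)            # quadrature nodes
xs = np.linspace(0.0, 1.0, 21)            # residual sample points
f = lambda x: x                            # right-hand side
K = lambda x, tt: x * tt                   # kernel

def u(c, x):                               # u(x) ~ sum_k c_k x^k
    return sum(ck * x**k for k, ck in enumerate(c))

def residual_norm(c):
    integral = np.array([np.trapz(K(x, t) * u(c, t), t) for x in xs])
    r = u(c, xs) - f(xs) - integral
    return float(np.dot(r, r))

res = minimize(residual_norm, np.zeros(3), method="BFGS")
# The exact solution of this toy equation is u(x) = 1.5 x, so the optimized
# coefficients should be close to [0, 1.5, 0].
print(res.x)
```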
  • H. Nasiri Soloklo, M. Maghfoori Farsangi Pages 771-777
    A new method for model reduction of linear systems is presented, based on Chebyshev rational functions and the Harmony Search (HS) algorithm. First, the full-order system is expanded, and then a set of parameters in a fixed structure is determined whose values define the reduced-order system. The values are obtained by minimizing the error between the first coefficients of the Chebyshev rational function expansions of the full and reduced systems, using the HS algorithm. To assure stability, the Routh criterion is used as a constraint in the optimization problem. To demonstrate the ability of the proposed method, three test systems are reduced and the results are compared with those of other existing techniques, showing the accuracy and efficiency of the proposed method.
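    The snippet below is a minimal, hedged sketch of the Harmony Search loop used as the optimizer above; the objective is a toy stand-in rather than the Chebyshev-coefficient error of the paper, and the HMCR, PAR, and bandwidth values are illustrative.

```python
# Minimal Harmony Search for continuous minimization.
import numpy as np

def harmony_search(obj, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, rng=np.random.default_rng(0)):
    hm = rng.uniform(lo, hi, size=(hms, dim))            # harmony memory
    fit = np.array([obj(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                       # pick from memory ...
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                    # ... with pitch adjustment
                    new[d] += bw * (hi - lo) * rng.uniform(-1, 1)
            else:                                         # or improvise randomly
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        f_new = obj(new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:                            # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    return hm[np.argmin(fit)], fit.min()

# Example: a toy quadratic objective standing in for the coefficient-error criterion.
best, val = harmony_search(lambda x: float(np.sum((x - 0.3) ** 2)), dim=4, lo=-1, hi=1)
```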
  • A. Lashkar Ara, J. Aghaei, M. Alaleh, H. Barati Pages 778-785
    This paper presents a new approach to determining the optimal location of an Optimal Unified Power Flow Controller (OUPFC), as a power flow controller, under a single-line contingency (N−1 contingency), to satisfy operational decisions. A contingency analysis is performed to detect and rank the contingencies according to their severity in power transmission systems. Minimization of the average loadability of all transmission lines is taken as the optimization objective, while the network settings are chosen to minimize active power losses under pre-contingency conditions. The optimization problem is modeled in a Non-Linear Programming (NLP) framework and solved using the CONOPT solver. The proposed algorithm is implemented in MATLAB and GAMS and tested on the IEEE 14- and 30-bus test systems. The simulation results demonstrate the effectiveness of the proposed algorithm in enhancing power system security under a single-line contingency. Furthermore, the OUPFC outperforms the Unified Power Flow Controller (UPFC) in normal and contingency operation of power transmission systems, from technical and economic points of view.
  • H. Seraj, M.F. Rahmat, M. Khalid Pages 786-792
    In this paper, the application of the cross-correlation method to estimating the velocity of solid particles in air/solid flow is explained. Electrostatic sensors are used as the primary sensors, and the flow velocity is calculated from the cross correlation of the signals from two electrostatic sensors. A truncated version of the cross correlation is used so that it can be computed in an online mode, and, to improve its accuracy, it is performed only when a particle passes across the sensors. MATLAB is used as the programming tool for the cross correlation algorithm. Different tests have been performed in which the distance between the two electrostatic sensors is set to various specified values, and the calculated velocities of the solid particles are compared. The results of these measurements are very similar, which confirms their correctness.
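    The snippet below is a hedged sketch of the basic estimate described above: the transit-time delay between the two sensor signals is located at the peak of their cross correlation, and velocity = sensor spacing / delay. The synthetic signals, sample rate, and spacing are illustrative, not the experimental setup, and the plain full cross correlation stands in for the truncated, event-gated version used in the paper.

```python
# Cross-correlation transit-time velocity estimation on synthetic sensor signals.
import numpy as np

fs = 10_000.0                      # sampling rate [Hz] (assumed)
spacing = 0.05                     # distance between the two sensors [m] (assumed)
true_delay = 0.004                 # seconds -> true velocity = 12.5 m/s

t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)
upstream = rng.standard_normal(t.size)
shift = int(round(true_delay * fs))
downstream = np.roll(upstream, shift) + 0.2 * rng.standard_normal(t.size)

xc = np.correlate(downstream, upstream, mode="full")
lag = np.argmax(xc) - (upstream.size - 1)     # lag (in samples) of the correlation peak
velocity = spacing / (lag / fs)
print(f"estimated velocity: {velocity:.2f} m/s")   # ~ 12.5 m/s
```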