Table of Contents

Majlesi Journal of Telecommunication Devices
Volume: 9, Issue: 1, Mar 2020

  • Publication date: 1399/01/16
  • Number of articles: 6
  • Majid EskandariShahraki, Mehran Emadi* Pages 1-8

    The main part of the eye is the retina, which covers the entire back of the eye. Eye disease is one of the most important causes of disability and even death in developed as well as developing countries. Retinal disorders caused by particular diseases can be detected from retinal images, and studying the variations in these images over time can help physicians diagnose the associated diseases. In this paper, the detection of blood vessels in retinal images is investigated. For this purpose, a new method is first proposed to improve the quality of retinal images by combining histogram adjustment and gray level grouping. A feature vector is then extracted for each pixel, and a method for classifying the pixels based on this feature vector is required. Neural networks are among the best and most widely used machine learning methods for classification, so a three-layer perceptron is used to classify the pixels.

    Keywords: Retinal images, Histogram modulation, Gray level grouping, Feature extraction vector, Perceptron neural network
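
    The abstract above outlines a pipeline of contrast enhancement followed by per-pixel classification with a small perceptron. As a rough illustration only, the sketch below approximates that pipeline in Python; the gray level grouping step is replaced by plain histogram equalization, and all function names, window sizes, and network settings are illustrative assumptions rather than the authors' implementation.

        # Minimal sketch: contrast enhancement + per-pixel MLP vessel classification.
        # Gray level grouping is approximated here by plain histogram equalization;
        # window size and network settings are illustrative, not the paper's values.
        import numpy as np
        from skimage import exposure
        from sklearn.neural_network import MLPClassifier

        def enhance(green_channel):
            # Stand-in for the histogram adjustment + gray level grouping combination.
            return exposure.equalize_hist(green_channel)

        def pixel_features(img, y, x, win=5):
            # Feature vector: the flattened local window of enhanced intensities.
            half = win // 2
            return img[y - half:y + half + 1, x - half:x + half + 1].ravel()

        def train_vessel_classifier(images, vessel_masks, win=5):
            X, t = [], []
            for img, mask in zip(images, vessel_masks):
                enh = enhance(img)
                half = win // 2
                for y in range(half, img.shape[0] - half, 4):   # subsample pixels
                    for x in range(half, img.shape[1] - half, 4):
                        X.append(pixel_features(enh, y, x, win))
                        t.append(int(mask[y, x]))
            # A three-layer perceptron (one hidden layer) labels each pixel.
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
            clf.fit(np.array(X), np.array(t))
            return clf
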
  • Alaleh Sadat Hosseini Charyani*, Alireza Norouz Pages 9-15

    Sentiment analysis, a relatively new subfield of natural language processing and text mining, categorizes texts based on the sentiment expressed in them. Sentiment plays a significant role in decision-making, so sentiment analysis technology has a broad scope of scientific applications. On the other hand, a huge amount of today's information exists in the form of text, which makes text mining techniques important. Opinion mining, or sentiment analysis, as a branch of text mining means finding the author's perspective on a specific subject. The Internet allows users to easily express their opinions and learn about the opinions of others, but the high volume and lack of proper structure of the comments posted on the web make it difficult to use the knowledge hidden within them. It is therefore important to provide methods that can present this knowledge in a summarized and structured way. In this research, a fuzzy method is proposed for analyzing reader comments on news sites with respect to the text of the news report. The relationship between a commenter's opinion and the subject of the text is investigated using grammatical features of the text, such as nouns and verbs, together with an analysis of the sentimental load of the sentences. The method is then evaluated on a dataset collected from news articles and their comments, and the proposed method achieves 87% classification accuracy.

    Keywords: sentiment analysis, fuzzy method, grammatical features of texts, text-mining techniques
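
    As a loose illustration of how grammatical features and sentiment load might be combined into a fuzzy score, the sketch below computes a comment's polarity toward an article from a toy sentiment lexicon and a noun-overlap relevance measure; the lexicon, membership functions, and combination rule are illustrative assumptions, not the paper's actual rule base.

        # Toy fuzzy polarity score: sentiment load weighted by topical relevance.
        # Lexicon, memberships, and the combination rule are illustrative only.
        SENTIMENT_LEXICON = {"good": 0.8, "great": 0.9, "bad": -0.7, "terrible": -0.9}

        def sentiment_load(tokens):
            # Average lexicon score of the sentiment-bearing tokens, in [-1, 1].
            scores = [SENTIMENT_LEXICON[t] for t in tokens if t in SENTIMENT_LEXICON]
            return sum(scores) / len(scores) if scores else 0.0

        def relevance(comment_nouns, article_nouns):
            # Fuzzy membership "comment is about the article": noun overlap ratio in [0, 1].
            if not comment_nouns:
                return 0.0
            return len(set(comment_nouns) & set(article_nouns)) / len(set(comment_nouns))

        def fuzzy_polarity(comment_tokens, comment_nouns, article_nouns):
            # Simple rule: the sentiment load counts only to the degree that the
            # comment is actually related to the article's subject.
            return sentiment_load(comment_tokens) * relevance(comment_nouns, article_nouns)
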
  • Saeed Talati*, MohammadReza Hassani Ahangar Pages 17-22

    Learning vector quantization can be understood as a special case of an artificial neural network, more precisely a winner-take-all learning approach. In this paper, we investigate this algorithm and show that it is a supervised version of the vector quantization algorithm: for each input, it checks which class the input belongs to and updates the winning prototype according to its distance and class. A common problem with other neural network algorithms is speed; the learning vector quantization algorithm updates about twice as fast as synchronous updating and therefore performs better where sufficient speed is required. The simulation results reflect this, and it is shown that in MATLAB the learning vector quantization simulation runs faster than the self-organizing neural network.

    Keywords: Neural network, learning vector quantization, self-organizing neural network, optimization
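
    The winner-take-all, supervised update described above corresponds to the standard LVQ1 rule: the nearest prototype is pulled toward an input of the same class and pushed away otherwise. A minimal sketch, with an illustrative learning rate and no claim about the paper's exact MATLAB setup, is given below.

        # Minimal LVQ1 training sketch: supervised, winner-take-all prototype updates.
        import numpy as np

        def train_lvq(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
            # prototypes: float array of shape (k, d); proto_labels: length-k class labels.
            W = prototypes.copy()
            for _ in range(epochs):
                for x, label in zip(X, y):
                    # Winner takes all: the nearest prototype in Euclidean distance.
                    winner = np.argmin(np.linalg.norm(W - x, axis=1))
                    if proto_labels[winner] == label:
                        W[winner] += lr * (x - W[winner])   # pull toward same-class input
                    else:
                        W[winner] -= lr * (x - W[winner])   # push away from other-class input
            return W
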
  • Majid Emadi, Mehran Emadi* Pages 23-33

    Face recognition has for many years been one of the most widely used sub-disciplines of machine learning. Face detection has been employed as an effective method in a wide range of machine vision applications, such as surveillance systems and forensic pathology. However, over the past decade the accuracy of face detection has been limited by wide-ranging challenges, such as changes in face angle, the density of the crowd in an image, and the quality of the lighting, which require the special attention of researchers. In the present study, a new approach to face detection based on local features and robust to lighting changes is employed. In this method, the local binary pattern is extracted from face images and principal component analysis is used to reduce the dimension of the feature vectors produced by the descriptor. Finally, the features are classified using AdaBoost. Tests on images collected from the web show that face detection accuracy is 100% in low-density crowds, 96% in high-density crowds under proper lighting conditions, and 90% in high-density crowds under poor lighting conditions.

    Keywords: Face detection, Local binary pattern, light challenge
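
    The described pipeline (local binary pattern descriptor, PCA dimensionality reduction, AdaBoost classification) can be sketched as below; the LBP parameters, number of principal components, and classifier settings are illustrative assumptions, not the values used in the paper.

        # Minimal sketch of the pipeline: LBP histogram -> PCA -> AdaBoost.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.decomposition import PCA
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.pipeline import make_pipeline

        def lbp_histogram(gray_patch, P=8, R=1):
            # Uniform LBP codes of the patch, summarised as a normalised histogram.
            codes = local_binary_pattern(gray_patch, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        def train_face_classifier(face_patches, nonface_patches):
            # face_patches / nonface_patches: lists of grayscale image patches.
            X = np.array([lbp_histogram(p) for p in list(face_patches) + list(nonface_patches)])
            y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
            # PCA shrinks the descriptor before the boosted classifier.
            model = make_pipeline(PCA(n_components=8), AdaBoostClassifier(n_estimators=100))
            model.fit(X, y)
            return model
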
  • Saeed Talati, Pouriya Etezadifar* Pages 35-46

    The human ear perceives a wide range of sound signal amplitudes, so amplitude-based signal marking techniques have their own complexity. This article uses watermarking to insert a message into an audio cover signal, and its main purpose is to keep this information hidden from others. There are many benchmarks for evaluating embedding and extraction algorithms that assess an algorithm's robustness, transparency, and capacity by applying multiple attacks to it. Although the LSB method is superior to other embedding techniques, it is highly vulnerable to various attacks, including additive white Gaussian noise. This article introduces a new idea for audio watermarking based on the LSB method, which matches the message bits with similar bits in the cover signal instead of directly pasting the information. The message is embedded in 16-bit samples, and a distortion-reduction algorithm makes the receiver aware of the changes made to the signal bits. Compared with the standard LSB method, the proposed approach improves resistance to additive white Gaussian noise and improves the transparency of the procedure, at the cost of reduced capacity.

    Keywords: Watermarking, coding, audio signal, LSB
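
    For reference, the baseline LSB scheme that the paper builds on simply overwrites the least significant bit of each 16-bit sample with a message bit; the paper's bit-matching and distortion-reduction refinements are not reproduced in this minimal sketch.

        # Baseline LSB embedding/extraction in 16-bit PCM samples (sketch only).
        import numpy as np

        def embed_lsb(samples, message_bits):
            # samples: np.int16 array; message_bits: iterable of 0/1 values.
            watermarked = samples.copy()
            for i, bit in enumerate(message_bits):
                # Clear the least significant bit and set it to the message bit.
                watermarked[i] = (watermarked[i] & ~np.int16(1)) | np.int16(bit)
            return watermarked

        def extract_lsb(samples, n_bits):
            # The hidden bits are simply the LSBs of the first n_bits samples.
            return [int(s) & 1 for s in samples[:n_bits]]
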
  • Mohsen Ashourian* Pages 47-52

    Quantization watermarking is a technique for embedding hidden copyright information based on dithered quantization. This non-blind scheme is practical only for watermarking applications in which the original signal is available to the detector, as in fingerprinting. The goal of this paper is to analyse quantization watermarking in the three-dimensional wavelet transform. We consider the nonlinear effect of dithered quantization in the time-domain representation of the filter bank and derive a compact, general expression for the distortion introduced into the host video by the encoding and embedding process. The formulation can be simplified and optimized for different filter banks and dither signals. We provide some supporting experiments on the three-dimensional wavelet analysis of the video signal.

    Keywords: quantization watermarking, wavelet transform, dithering
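
    As a small illustration of dithered quantization embedding (independent of the paper's three-dimensional wavelet decomposition and distortion analysis), the sketch below applies standard dither-modulation quantization to a vector of transform coefficients, one bit per coefficient; the step size and dither vector are illustrative assumptions.

        # Dither-modulation (QIM) embedding and detection, one bit per coefficient.
        import numpy as np

        def qim_embed(coeffs, bits, delta=4.0, dither=None):
            if dither is None:
                dither = np.zeros_like(coeffs)
            # Each bit selects one of two interleaved quantizers offset by delta/2.
            offsets = dither + np.asarray(bits) * (delta / 2.0)
            return np.round((coeffs - offsets) / delta) * delta + offsets

        def qim_detect(coeffs, delta=4.0, dither=None):
            if dither is None:
                dither = np.zeros_like(coeffs)
            # Decide each bit by which of the two quantizers the coefficient is closer to.
            err0 = np.abs(coeffs - (np.round((coeffs - dither) / delta) * delta + dither))
            d1 = dither + delta / 2.0
            err1 = np.abs(coeffs - (np.round((coeffs - d1) / delta) * delta + d1))
            return (err1 < err0).astype(int)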