Articles related to the keyword «correlation» in journals of the «Electrical Engineering» group

  • S. Abbasi, D. Nazarpour, S. Golshannavaz *
    Background and Objectives
    Distributed generation (DG) units based on renewable energy, such as PV, are becoming more prevalent in distribution networks due to their technical and environmental benefits. However, the intermittency and uncertainty of these sources lead to technical and operational challenges. Energy storage, uncertainty analysis, and network reconfiguration are suitable remedies for these challenges.
    Methods
    Energy management of modern, smart, renewable-penetrated distribution networks is addressed here, considering the correlations among uncertainties. Network operation costs including switching operations, the expected energy not served (EENS) index as the reliability objective, and node voltage deviation suppression as the technical objective are mathematically modeled. Multi-objective particle swarm optimization (MOPSO) is used as the optimization engine. A scenario generation method and the Nataf transformation are used in the probabilistic evaluation of the problem. Moreover, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is deployed to strike a final balance between the different objectives and yield a unified solution.
    Results
    To show the effectiveness of the proposed approach, the IEEE 33-node distribution network is subjected to extensive simulations. Different cases are simulated and examined to assess the performance of the proposed model.
    Conclusion
    For the different objectives, dealing with different aspects of the network, remarkable achievements are attained. In brief, the final solution shows a 4.50% decrease in operation cost, a 13.07% improvement in the reliability index, and an 18.85% reduction in voltage deviation compared to the initial conditions.
    Keywords: Energy Management, Renewable Energy Resources, Expected Energy Not Supplied, Uncertainty, Correlation
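    The entry above generates correlated uncertainty scenarios with the Nataf transformation. The following minimal sketch, with hypothetical marginals and an illustrative correlation value (not the paper's data), shows the core idea: draw correlated standard normals through a Cholesky factor and map them through the inverse CDFs of the target marginals. A full Nataf transformation would additionally adjust the normal-space correlation so the target correlation survives the marginal mapping, which is omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical correlation between per-unit PV output and load (illustrative only).
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(R)

# Step 1: correlated standard-normal samples (Gaussian copula).
n_scenarios = 1000
z = rng.standard_normal((n_scenarios, 2)) @ L.T
u = stats.norm.cdf(z)                                   # correlated uniforms

# Step 2: map through the inverse CDFs of the assumed marginals
# (Beta for normalized PV output, Normal for load -- illustrative choices).
pv_output = stats.beta.ppf(u[:, 0], a=2.0, b=2.0)       # per-unit PV output
load = stats.norm.ppf(u[:, 1], loc=1.0, scale=0.1)      # per-unit load

print("sample correlation:", np.corrcoef(pv_output, load)[0, 1])
```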
  • Kirti Sharma, Pawan Tiwari *, Sanjay Sinha
    Data is a continually expanding body of information contributed by individuals worldwide. In medical science, this reservoir of data accumulates at an almost exponential rate, roughly doubling in volume annually. Advanced machine learning tools and techniques, built on substantial progress in data mining, make it possible to extract insights and discern hidden patterns from vast datasets. This study applies machine learning algorithms to blood glucose concentration estimation through regression analysis, with the broader aim of benefiting societal well-being. The investigation culminates in establishing a correlation between glucose concentration (GC) and hematocrit volume (HV). The dataset employed is sourced from clinically validated electrochemical glucose sensors (commonly referred to as glucose strips). It covers diverse levels of both glucose concentration and hematocrit volume, the latter furnished by an undisclosed source to ensure copyright compliance. The dataset comprises four distinct variables, and the aim of this research is to train regression models on it to predict two of these variables. Our results indicate that with linear regression, the R² score for GC is approximately 0.916, whereas for HV it reaches around 0.537. In contrast, the support vector regressor yields R² scores of about 0.961 for GC and 0.506 for HV.
    Keywords: Estimation, Correlation, Analysis, Regression, Healthcare, Enlightenment, Machine Learning, Quantum Leap, Data Mining, Insights
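    The entry above reports R² scores for linear regression and a support vector regressor predicting GC and HV. A minimal sketch of that evaluation pipeline with scikit-learn, using synthetic placeholder data since the clinical glucose-strip dataset is not public:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Placeholder features and target standing in for the sensor variables.
X = rng.normal(size=(500, 2))
gc = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, gc, test_size=0.2, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("support vector regressor", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_tr, y_tr)
    print(name, "R2 score:", round(r2_score(y_te, model.predict(X_te)), 3))
```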
  • Abhishek Kumar*, Manoj Sindhwani, Shippu Sachdeva

    Side-channel attacks target cryptographic devices using information leaked by the hardware implementation of a cryptographic algorithm rather than weaknesses in the algorithm itself. Power attacks analyze the power consumed while processing a given input to gain access by this means. The power profile of the encryption circuit retains a relationship with the processed input, allowing an attacker to guess the hidden secrets. In this work, we present a novel architecture of masked logic cells that is resistant to power attacks and uses a reduced number of cells. The presented masking cells weaken the relationship, measured by the Pearson correlation coefficient, between the actual power consumption and a mathematically approximated power model. The correlation coefficients of the proposed mask-XOR and mask-AND cells are 0.0053 and 0.3, respectively, much lower than those of the standard XOR and AND cells, 0.134 and 0.372, respectively.

    Keywords: Side-Channel Attack, Power Attack, Mask Cell, Correlation, Data Hiding, Hardware Security
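    The entry above quantifies cell security with the Pearson correlation coefficient between measured power and an approximate power model, as in correlation power analysis. A self-contained sketch of that metric on simulated traces, using a Hamming-weight model; all numbers here are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def hamming_weight(x):
    # Number of set bits of each 8-bit value: a common approximate power model.
    return np.unpackbits(x[:, None].astype(np.uint8), axis=1).sum(axis=1)

# Synthetic intermediate values and simulated power traces: the "unmasked" trace
# leaks the Hamming weight, while the "masked" trace is decorrelated from it.
values = rng.integers(0, 256, size=2000)
model = hamming_weight(values)
power_unmasked = 0.5 * model + rng.normal(scale=1.0, size=values.size)
power_masked = rng.normal(scale=1.0, size=values.size)

for name, trace in [("unmasked", power_unmasked), ("masked", power_masked)]:
    rho = np.corrcoef(model, trace)[0, 1]
    print(f"{name} cell: Pearson correlation = {rho:.3f}")
```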
  • M. Najjarpour, B. Tousi*, S. Jamali

    Optimal power flow is an essential tool in power system studies. Distributed generation sources increase network uncertainties due to their random behavior, so deterministic optimal power flow is no longer adequate and probabilistic optimal power flow must be used. This paper presents a probabilistic optimal power flow algorithm using the Taguchi method, based on orthogonal arrays, together with a genetic algorithm. The method can account for correlations and is validated by simulation experiments on the IEEE 30-bus network. Its results are compared with those of Monte Carlo simulation and the two-point estimation method. The purpose of this paper is to reduce the losses of the entire IEEE 30-bus network. The accuracy and efficiency of the proposed Taguchi correlation method and the genetic algorithm are confirmed by comparison with Monte Carlo simulation and the two-point estimation method. Finally, this method yields a 5.5 MW reduction in losses.

    Keywords: Correlation, Distributed Generation, Distribution Networks, Orthogonal Arrays, Probabilistic Optimal Power Flow, Taguchi Method
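    The entry above combines Taguchi orthogonal arrays with a genetic algorithm for probabilistic optimal power flow. The sketch below illustrates only the orthogonal-array part of that idea on a toy two-level, three-factor problem with a made-up loss function; the actual power-flow model, correlation handling, and genetic algorithm are outside its scope.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 factors, 2 levels each (0 = low, 1 = high).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical factor levels (e.g., uncertain injections in MW) and a toy loss model.
levels = np.array([[1.0, 2.0],   # factor A
                   [0.5, 1.5],   # factor B
                   [0.2, 0.8]])  # factor C

def losses(a, b, c):
    # Stand-in for a power-flow loss evaluation.
    return (a - 1.4) ** 2 + (b - 0.9) ** 2 + 0.5 * (c - 0.5) ** 2

runs = np.array([losses(*levels[np.arange(3), row]) for row in L4])

# Taguchi analysis: mean response per factor level; pick the level minimizing losses.
for f in range(3):
    means = [runs[L4[:, f] == lvl].mean() for lvl in (0, 1)]
    print(f"factor {f}: level means = {np.round(means, 3)}, best level = {int(np.argmin(means))}")
```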
  • A. Mohammadi, M. Nakhkash *
    This paper presents a reversible data hiding method for encrypted images that exploits the correlation of neighboring pixels. In the proposed method, the original image may be encrypted with any encryption algorithm of choice. The more significant bits of the image pixels are exploited to vacate room for embedding data bits. The image is divided into separate blocks, and the most central pixel of each block is taken as the reference pixel. The prediction error between the intensity of the reference pixel and that of the other pixels in the block is computed and called the local prediction. This error is analyzed to obtain a feature describing the embedding capacity of each block. The features computed for all blocks are compressed by arithmetic coding and embedded in the image along with the data bits. At the receiver, the compressed features are first extracted, then decompressed, and finally used for lossless reconstruction of the original image and extraction of the data bits. Experimental results confirm that the proposed algorithm outperforms state-of-the-art methods.
    Keywords: Encrypted Image, Correlation, Prediction Error, Reversible Data Hiding, Arithmetic Coding
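    The entry above derives a per-block capacity feature from the prediction error between each block's central (reference) pixel and its remaining pixels. A minimal numpy sketch of that local-prediction step on a toy grayscale image; the block size and capacity rule are illustrative, not the paper's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy grayscale image
B = 8                                                        # illustrative block size

capacities = []
for r in range(0, image.shape[0], B):
    for c in range(0, image.shape[1], B):
        block = image[r:r + B, c:c + B].astype(np.int16)
        ref = block[B // 2, B // 2]            # most central pixel as reference
        pred_error = block - ref               # local prediction error
        # Illustrative capacity feature: count pixels whose error fits in 3 bits.
        capacities.append(int(np.sum(np.abs(pred_error) < 8)))

print("per-block capacity feature (first 8 blocks):", capacities[:8])
```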
  • S. Mavaddati*
    Blind voice separation refers to retrieving a set of independent sources combined by an unknown mixing system. The proposed separation procedure is based on processing the observed mixtures without any information about the mixing model or the statistics of the source signals. Moreover, the number of mixed sources is usually assumed to be known in advance, since it is difficult to estimate from the mixtures. In this paper, a new algorithm is introduced to resolve these issues using the empirical mode decomposition technique as a pre-processing step. The proposed method can precisely determine the number of mixed voice signals based on energy and kurtosis criteria applied to the captured intrinsic mode functions. The separation procedure then employs a grey wolf optimization algorithm with a new cost function. The experimental results show that the proposed separation algorithm performs considerably better than earlier methods in this context. Moreover, the simulation results in the presence of white noise confirm the proper performance of the presented method and the prominent role of the presented cost function, especially when the number of sources is high.
    Keywords: Voice Separation, Empirical Mode Decomposition, Grey Wolf Optimization, Equiangular Tight Frames, Correlation
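    The entry above counts the mixed voice signals from the energy and kurtosis of the intrinsic mode functions (IMFs) produced by empirical mode decomposition. The sketch below assumes the IMFs have already been computed (for instance with an EMD implementation such as the PyEMD package) and shows only an illustrative counting step; the thresholds are hypothetical, not the paper's values.

```python
import numpy as np
from scipy.stats import kurtosis

def estimate_source_count(imfs, energy_ratio=0.05, kurt_min=0.5):
    """Count IMFs that look signal-like according to energy and kurtosis criteria."""
    energies = np.sum(imfs ** 2, axis=1)
    rel_energy = energies / energies.sum()
    kurts = kurtosis(imfs, axis=1, fisher=True)
    signal_like = (rel_energy > energy_ratio) & (np.abs(kurts) > kurt_min)
    return int(np.sum(signal_like))

# Toy demonstration: heavy-tailed, high-energy rows stand in for speech-bearing IMFs,
# low-energy Gaussian rows stand in for residual noise modes.
rng = np.random.default_rng(4)
imfs = np.vstack([
    3.0 * rng.laplace(size=4000),
    2.0 * rng.laplace(size=4000),
    1.0 * rng.laplace(size=4000),
    0.1 * rng.normal(size=4000),
    0.05 * rng.normal(size=4000),
])
print("estimated number of sources:", estimate_source_count(imfs))   # expects 3 here
```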
  • Mehdi Farzandvay, Feraydoun Shams *
    Today, continuous change in customer requirements is the main challenge facing enterprises, and service-oriented architecture is considered a practical solution to this problem for service-oriented enterprises. In a service-oriented architecture, the selection and composition of services to respond quickly to complex customer requirements is available to such enterprises. To respond faster to complex and changing customer needs, enterprises use ready-made, outsourced services; one of the emerging technologies in this area is web services. As enterprises' interest in web services has grown, the number of web service providers has increased over time, and web services with the same functionality but different quality attributes have proliferated. The problem of selecting the web service with the best quality attributes has therefore become important for enterprises. Moreover, a single web service cannot meet complex customer requirements, so enterprises need to compose multiple web services. At the same time, as web services with different functions multiply, the correlations, dependencies, and conflicts between web services in a composition also grow. So far, however, no method selects the best web services based on quality of service (QoS) while ensuring that the composition does not violate the dependency, conflict, and correlation constraints between them. In this paper, we build on previous methods that address dependency, conflict, or correlation in simple web service composition settings and propose a comprehensive approach that also supports the complex situations that may arise in composition, finding a suitable composite web service in terms of quality attributes while respecting dependency, conflict, and correlation.
    Keywords: Genetic Algorithm, Web Services, Web Service Selection, Web Service Composition, QoS, Dependency, Conflict, Correlation
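    The entry above selects one candidate service per task so that aggregate QoS is maximized while dependency, conflict, and correlation relations between services are respected. Below is a minimal sketch of a penalized fitness function for that selection; the service names, scores, and penalty weight are hypothetical, and a real genetic algorithm would evolve the `composition` assignment rather than evaluate two fixed ones.

```python
# Hypothetical QoS scores per candidate service (higher is better).
qos = {
    "payment": {"payA": 0.9, "payB": 0.7},
    "shipping": {"shipA": 0.6, "shipB": 0.8},
    "billing": {"billA": 0.75, "billB": 0.65},
}
requires = {("payA", "billA")}          # payA depends on billA
conflicts = {("payB", "shipB")}         # payB cannot be combined with shipB
correlated = {("shipA", "billA"): 0.1}  # QoS bonus when used together

def fitness(composition, penalty=1.0):
    """composition: dict mapping each task to the chosen service name."""
    chosen = set(composition.values())
    score = sum(qos[task][svc] for task, svc in composition.items())
    score -= penalty * sum(1 for a, b in requires if a in chosen and b not in chosen)
    score -= penalty * sum(1 for a, b in conflicts if a in chosen and b in chosen)
    score += sum(bonus for (a, b), bonus in correlated.items()
                 if a in chosen and b in chosen)
    return score

print(fitness({"payment": "payA", "shipping": "shipA", "billing": "billA"}))  # 2.35
print(fitness({"payment": "payB", "shipping": "shipB", "billing": "billB"}))  # 1.15
```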
  • Hamed Aminzadeh *

    Most of the integrated components in current CMOS integrated circuit (IC) technology are inevitably nonlinear. This complicates the matching between such elements and strongly affects the performance of analog circuits. In data converters, the nonlinearity introduced by nonlinear transistors lowers the overall resolution and may limit the number of bits to an unacceptable value. It can be shown that the number of stages that must be calibrated equals the difference between the maximum number of bits achievable without calibration and the desired number of bits. In this article, we model the main sources of gain error in pipeline stages. A modified calibration technique is then applied to estimate and calibrate the nonlinear gain of the initial stages. The effectiveness of the proposed approach is verified through the design and calibration of a 14-bit 65 MS/s converter in a 0.18 µm standard CMOS process; after calibration, the effective number of bits at the Nyquist frequency increases from 8.1 to 13.4 bits.

    Keywords: Analog-to-Digital Converters, Correlation, Digital Calibration, Modeling, Pipelined Converters
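    The entry above models stage gain error in a pipelined ADC before estimating and correcting it digitally. The sketch below is a generic behavioral model of a single 1.5-bit pipeline stage with a gain-error term, not the paper's calibration algorithm; it only illustrates how a non-ideal interstage gain distorts the residue that the following stages digitize.

```python
import numpy as np

VREF = 1.0

def stage_1p5bit(vin, gain_error=0.0):
    """One 1.5-bit pipeline stage: coarse decision d in {-1, 0, +1} and residue.

    The ideal residue is 2*vin - d*VREF; the gain_error term scales it,
    which is the kind of non-ideality a digital calibration must estimate.
    """
    d = np.where(vin > VREF / 4, 1, np.where(vin < -VREF / 4, -1, 0))
    residue = (1.0 + gain_error) * (2.0 * vin - d * VREF)
    return d, residue

vin = np.linspace(-VREF, VREF, 9)
_, res_ideal = stage_1p5bit(vin)
_, res_err = stage_1p5bit(vin, gain_error=-0.01)   # interstage gain 1 % too low
print("residue error caused by the gain error:", np.round(res_err - res_ideal, 4))
```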
  • A. M. Sefidian, N. Daneshpour *
    The presence of missing values in real-world data is a very prevalent and unavoidable problem, so these missing values must be filled in accurately before the data are used in the knowledge discovery process. This paper proposes three novel methods for filling in numeric missing values. All of the proposed methods apply regression models to subsets of attributes with strong mutual correlations. These subsets are selected using forward-selection-based approaches, chosen to maximize the correlation between the attribute with missing values and the other attributes. The correlation coefficient is used to measure the relationships between attributes. The proposed methods also prioritize the order in which missing attributes are imputed. The performance of the proposed methods is evaluated on five real-world datasets with different missing ratios and compared with five estimation methods: mean imputation, k-nearest-neighbor imputation, a fuzzy c-means based imputation, a decision tree based imputation, and a regression based imputation algorithm called "Incremental Attribute Regression Imputation" (IARI). Two well-known evaluation criteria, the root mean squared error (RMSE) and the coefficient of determination (CoD), are used to compare the performance of the proposed methods with the other imputation methods. Experimental results show that the proposed methods perform better than the other compared methods, even when the missing ratio is high.
    Keywords: Missing Values Imputation, Correlation, Regression
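    The entry above imputes each missing attribute with a regression model trained on a highly correlated subset of attributes chosen through forward selection. A compact sketch of that core idea with pandas and scikit-learn; the dataset is synthetic, and the simple top-k selection below stands in for the paper's full forward-selection and prioritization procedure.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

# Toy data with missing values in one attribute.
n = 300
df = pd.DataFrame({"a": rng.normal(size=n), "b": rng.normal(size=n),
                   "c": rng.normal(size=n)})
df["target"] = 2 * df["a"] - df["b"] + 0.1 * rng.normal(size=n)
df.loc[rng.choice(n, 30, replace=False), "target"] = np.nan

def impute_by_correlated_regression(df, missing_col, k=2):
    complete = df.dropna(subset=[missing_col])
    candidates = [c for c in df.columns if c != missing_col]
    # Keep the k attributes most correlated with the attribute being imputed.
    corr = complete[candidates].corrwith(complete[missing_col]).abs()
    selected = list(corr.sort_values(ascending=False).index[:k])
    model = LinearRegression().fit(complete[selected], complete[missing_col])
    out = df.copy()
    mask = out[missing_col].isna()
    out.loc[mask, missing_col] = model.predict(out.loc[mask, selected])
    return out, selected

imputed, used = impute_by_correlated_regression(df, "target")
print("attributes used for imputation:", used)
print("remaining missing values:", int(imputed["target"].isna().sum()))
```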
  • B. Mahboubi*, D. Dideban
    With the advancement of integrated circuit technology and aggressive scaling into the nanometer regime, statistical variability in device electrical characteristics caused by the discreteness of charge and matter and by fabrication process variations has increased significantly. These variations in turn cause fluctuations in the output characteristics of important analog building blocks, in particular amplifiers. In this paper, with the aid of Monte Carlo simulations of a transconductance amplifier using 1000 different compact models of MOSFET transistors in a 35 nm technology node, the statistical variations of important circuit parameters are investigated and analyzed in terms of their statistical distributions, and the statistical correlations between the circuit parameters are extracted. Analyzing the statistical variations of circuit output parameters and their correlations directly reduces design cost and time and is therefore of great significance.
    Keywords: Random Statistical Variability, Transconductance Amplifier, Statistical Distribution, Correlation, Nano-CMOS Technology
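    The entry above characterizes the statistical distributions of the amplifier's output parameters and the correlations between them from Monte Carlo runs. Given a table of per-run results, that post-processing reduces to summary statistics and a correlation matrix, as in this sketch where synthetic numbers stand in for the 1000 compact-model simulations:

```python
import numpy as np

rng = np.random.default_rng(6)
n_runs = 1000

# Synthetic stand-ins for per-run circuit parameters (e.g., gm, DC gain, bandwidth),
# all driven by a shared threshold-voltage variation so that they are correlated.
vth_shift = rng.normal(0.0, 5e-3, n_runs)                                     # V
gm = 1e-3 * (1.0 - 8.0 * vth_shift + 0.02 * rng.normal(size=n_runs))          # S
gain_db = 40.0 + 2.0e3 * vth_shift + 0.5 * rng.normal(size=n_runs)            # dB
bandwidth = 50e6 * (1.0 + 5.0 * vth_shift + 0.01 * rng.normal(size=n_runs))   # Hz

params = np.vstack([gm, gain_db, bandwidth])
print("gm mean / std:", gm.mean(), gm.std())
print("correlation matrix (gm, gain, bandwidth):")
print(np.round(np.corrcoef(params), 3))
```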
  • G. Ghadimi *, M. Nejati Jahromi, E. Ghaemi Ghaemi

    An optimized method for hiding data in a digital color image in the spatial domain is presented. Graph coloring theory with different numbers of colors is applied, and to enhance security, a block-correlation method within the image is used. Experimental results show that, at the same PSNR, capacity is improved by 8% and security is increased compared with other methods. In the correlation block-based method, the data hiding capacity of the host image varies with the image type and the defined threshold level. In the proposed algorithm, independent pixels placed side by side are colored during graph coloring, and data hiding is then performed based on pixel-block correlation. This increases the security and capacity of the hiding process; moreover, the effects of image format and correlation threshold on security and capacity become more pronounced.

    Keywords: Data Hiding, Graph Coloring, Correlation, Threshold, Security, Color Number
  • Shaik Asif Hossain, Avijit Mallik *, Arman Arefin
    This research examines a signal processing approach to estimating underwater network cardinality. Cardinality estimation is of key importance for an underwater network, since the number of active nodes changes repeatedly for numerous natural and artificial reasons arising from harsh underwater conditions, so a proper estimation technique is necessary to keep the network operating properly. To solve the problem, we use cross-correlation, a significant statistical tool in signal processing. We take the mean of the cross-correlation function (CCF) of the node signals as the estimation parameter in order to reduce complexity compared with earlier techniques. A chirp acoustic signal is used for the estimation, which ensures better performance under harsh practical underwater conditions. The process is shown for both the two-sensor and three-sensor cases. Finally, the proposed theory is verified by a simulation in the MATLAB programming environment.
    Keywords: Bins, Chirp Signal, Cross-Correlation, Underwater Network Cardinality (Node), Mean
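    The entry above estimates cardinality from the mean of the cross-correlation function (CCF) between received chirp signals. A minimal sketch of computing that CCF-mean statistic in Python (rather than MATLAB) for a single received signal, with an arbitrary toy delay and noise level:

```python
import numpy as np
from scipy.signal import chirp

fs = 48_000                                   # sampling rate, Hz
t = np.arange(0, 0.05, 1 / fs)
tx = chirp(t, f0=2_000, f1=8_000, t1=t[-1], method="linear")   # probe chirp

rng = np.random.default_rng(7)
delay = 120                                   # toy propagation delay in samples
rx = np.concatenate([np.zeros(delay), tx])[: tx.size] + 0.3 * rng.normal(size=tx.size)

ccf = np.correlate(rx, tx, mode="full") / tx.size
print("CCF mean:", ccf.mean())
print("peak lag (samples):", int(np.argmax(ccf)) - (tx.size - 1))
```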
  • M. Moazedi, M. R. Mosavi, A. Sadr
    Global Positioning System (GPS) spoofing can pose a major threat to GPS navigation systems, so GPS users need a better understanding of its broader implications. In this paper, a comprehensive correlation-based anti-spoofing approach is proposed to detect spoofing effects. The suggested method can easily be implemented in the tracking loop of a GPS receiver. We study real-time spoofing recognition with clear certainty by introducing a reliable novel metric. As a first step, the proposed technique is implemented in a software receiver to prove the concept in a multipath-free scenario. Three rooftop data sets, collected in our GPS laboratory, are used in the performance assessment of the proposed method. The results indicate that the investigated algorithm is able to perform real-time detection in all data sets.
    Keywords: Correlation, GPS Receiver, Tracking Loop
  • M. Opan
    Pollution, which varies according to various standards, is directly linked to water quality in rivers. In this study, data and limits prescribed by water quality standards such as those of the Turkish Standards Institute (TSI), the European Commission (EC), and the World Health Organization (WHO) were used to determine water pollution. For this purpose, correlation analysis was performed to identify strong relationships between the data. Regression analysis and artificial neural network (ANN) models are then developed based on these standards using the data obtained from the correlation analysis. The Lower Sakarya River is selected as the application area, and measured water quality values from this river are used in these models. Pollution control flows in the river are obtained by the ANN models and the regression analysis, and the results are compared with respect to these standards.
    Keywords: Pollution Control Flow, Correlation, Regression Analysis, Artificial Neural Networks, Drought Management, Multi-Reservoir System
  • Mahdi Bashiri, Mohammad Hasan Bakhtiarifar
    Dealing with more than one response in process optimization has been a major issue in recent years, so multiple response optimization studies have grown in the literature. In typical problems, a number of input variables affect the output responses, but the optimization becomes more complex, and more realistic, when the responses are correlated with each other. In such problems the analyst should consider the correlation structure in addition to the effects of the input variables. In some cases, response variables may follow distributions other than the normal, which can also be analyzed by the proposed method. Moreover, in some problems, response variables may have different importance for the decision maker. In this study, we propose an efficient method to find the best treatment in an experimental design whose correlated responses have different weights, either cardinal or ordinal. A heuristic method is also proposed for problems with a considerable number of correlated responses or treatments. The results of several numerical examples confirm the validity of the proposed method. Moreover, a real case on Tehran air pollution is studied to show the applicability of the proposed method to real problems.
    Keywords: Cardinal Weight, Ordinal Weight, Multiple Response Optimization, Correlation, Transformation
  • Y. Daneshbod, M. J. Abedini
    Sensitivity analysis is an important part of evaluating the performance of mathematical or numerical models. One-factor-at-a-time (OAT) and differential methods are among the most popular sensitivity analysis (SA) schemes employed in the literature. The two major limitations of these methods are that they do not address the correlation between model factors and that they are local methods. Given these limitations, their extensive use among modelers raises concern over the credibility of the associated sensitivity analyses. This paper demonstrates the inefficiency of the aforementioned methods using experimental designs and provides a novel technique based on principal component analysis (PCA) to address the correlation among input factors. In addition, guidelines are suggested for handling other conditions.
    Keywords: Sensitivity Analysis, OAT, Correlation, Local SA, Monte Carlo Simulation
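    The entry above uses principal component analysis (PCA) to cope with correlated input factors in sensitivity analysis. The sketch below shows the underlying mechanics only: correlated factors are rotated into uncorrelated principal components, which can then be perturbed one at a time; the toy model and correlation value are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Two strongly correlated input factors (hypothetical correlation of 0.9).
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=2000)

def model(x):
    # Toy model whose sensitivity we want to study.
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)                          # uncorrelated principal components
print("component correlation:\n", np.round(np.corrcoef(Z.T), 3))

# Perturb one principal component at a time and map back to the factor space.
base = Z.mean(axis=0)
for i in range(2):
    z_pert = base.copy()
    z_pert[i] += Z[:, i].std()
    delta = (model(pca.inverse_transform(z_pert[None, :]))
             - model(pca.inverse_transform(base[None, :])))
    print(f"effect of +1 sd on PC{i + 1}: {delta[0]:+.3f}")
```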
  • S.P. Guan, J.Y. Hua, S.Y. Du, S. Zhong
    The acoustic signal is an important medium in communication, damage and leakage detection, and similar applications, and in recent years acoustic signals have also been used for positioning in wireless sensor networks. With the development of mobile communication technology, smartphone-based indoor positioning by acoustic signals has become possible. In the indoor environment, however, acoustic signals are disturbed by environmental noise and multipath effects; moreover, when multiple users are positioning simultaneously, their acoustic signals interfere with each other. In order to eliminate these various forms of interference, we design a robust acoustic signal for smartphone-based indoor positioning in this paper. The signal is generated by using pseudo-random codes (Gold sequences) to modulate a 6 kHz cosine wave, and it is detected in the receiver through its auto-correlation properties. The designed acoustic signal can resist various noises thanks to its excellent cross-correlation characteristics. We conduct experiments on real smartphones, and the results show that the signals work well in the presence of these forms of interference.
    Keywords: Acoustic Signal, Indoor Positioning, Smartphone, Pseudo-Random Code, Cross-Correlation
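    The entry above modulates a 6 kHz cosine with pseudo-random (Gold) codes and detects it through its correlation properties. The sketch below uses scipy's maximum-length sequence as a simplified stand-in for a true Gold sequence (real Gold codes need a preferred pair of m-sequences) and shows PN modulation plus correlation detection in noise:

```python
import numpy as np
from scipy.signal import max_len_seq

fs = 48_000                                   # sampling rate, Hz
fc = 6_000                                    # carrier frequency, Hz
chip_rate = 1_000                             # chips per second (illustrative)
sps = fs // chip_rate                         # samples per chip

# Stand-in PN code: a 127-chip maximal-length sequence mapped to +/-1.
code = 2.0 * max_len_seq(7)[0] - 1.0
chips = np.repeat(code, sps)
t = np.arange(chips.size) / fs
tx = chips * np.cos(2 * np.pi * fc * t)       # PN-modulated 6 kHz tone

rng = np.random.default_rng(9)
delay = 500                                   # toy arrival delay in samples
rx = np.concatenate([np.zeros(delay), tx]) + 0.8 * rng.normal(size=tx.size + delay)

# Correlation detection: the peak location gives the signal's arrival time.
corr = np.correlate(rx, tx, mode="valid")
print("detected delay (samples):", int(np.argmax(np.abs(corr))), "expected:", delay)
```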
  • Issa Shooshpasha, Iman Amiri, Hossein Molaabasi

    The Standard Penetration Test (SPT) is one of the most effective tests for quick and inexpensive evaluation of the mechanical properties of soil layers. Numerous studies have been conducted to evaluate correlations between SPT blow counts (NSPT) and soil properties such as the friction angle (φ). In this paper, the relation between φ and in-situ soil parameters, including NSPT, effective stress, and fines content, is investigated for granular soils. In order to demonstrate the relationship between φ and the corrected SPT blow count (N60), a new polynomial model based on Group Method of Data Handling (GMDH) type neural networks (NNs) was used, built on 195 data sets comprising three soil parameters recorded after the two major earthquakes in Turkey and Taiwan in 1999. This study addresses the question of whether a GMDH-type NN is capable of estimating φ from the specified variables. The results confirm that GMDH-type NNs provide an effective way to recognize data patterns and accurately predict φ for granular soils. Finally, the effect of fines content and effective overburden stress on the correlation between N60 and φ has been studied by sensitivity analysis.

    Keywords: Standard Penetration Test, Friction Angle, Correlation, GMDH, Sensitivity Analysis
  • S. Shafeipour*, M. H. Seyed Arabi, A. Aghagolzadeh
    Video-based face recognition has received significant attention in recent years. It faces problems such as pose, lighting, and expression variations, partial occlusion of the face, aging, and low image resolution. The aim of this work is a video-based face recognition method that handles head rotation, illumination changes, and partial occlusion. First, faces are detected in the video frames to remove the background. Each set of images, distributed nonlinearly on a manifold, is then clustered using appropriate methods. The number of clusters and linear models is determined automatically, so that the number of clusters obtained for each video sequence depends on the head movements. Two methods are proposed for clustering and for obtaining the linear models. Recognition uses the distance between nonlinear manifolds, which combines two measures: the first uses the distance between clusters, treated as linear subspaces, and the second uses the distance between the key frames of each cluster. Finally, the results are compared with other methods.
    Keywords: Face Recognition, Manifold, Clustering, Principal Angles, Correlation, Key Frame, Motion Vector
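    The entry above measures the distance between clusters treated as linear subspaces, which is commonly expressed through principal angles. A short sketch of that computation with scipy, where random feature matrices stand in for the clustered video frames:

```python
import numpy as np
from scipy.linalg import orth, subspace_angles

rng = np.random.default_rng(10)

# Stand-ins for clusters of face frames: columns are (flattened) frames, and each
# cluster is summarized by an orthonormal basis of the subspace its frames span.
cluster_a = rng.normal(size=(400, 12))
cluster_b = cluster_a @ rng.normal(size=(12, 12)) + 0.1 * rng.normal(size=(400, 12))
cluster_c = rng.normal(size=(400, 12))        # an unrelated identity

basis_a, basis_b, basis_c = orth(cluster_a), orth(cluster_b), orth(cluster_c)

# Smallest principal angle (radians): near 0 means the subspaces nearly coincide.
print("angle(a, b):", subspace_angles(basis_a, basis_b).min())
print("angle(a, c):", subspace_angles(basis_a, basis_c).min())
```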
  • E. Bojnordi *, P. Moradi
    One of the fundamental methods used in collaborative filtering systems is correlation-based K-nearest-neighbor prediction. These systems rely on historical rating data and the preferences of users and items in order to propose appropriate recommendations to active users, but they often do not have a complete matrix of input data. This challenge decreases the accuracy of recommendations for new users. The exact matrix completion technique tries to predict the unknown values in a data matrix. This study shows how exact matrix completion can be used as a preprocessing step to tackle the sparsity problem. Compared with using the sparse data matrix, selecting the neighborhood set of an active user from the completed data matrix yields more similar users. The main advantages of the proposed method are higher prediction accuracy and an explicit model representation. The experiments show a significant improvement in prediction accuracy in comparison with other substantial methods.
    Keywords: Recommender Systems, Collaborative Filtering, Correlation, Matrix Completion, Convex Optimization, Nearest Neighborhood
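    The entry above builds a correlation-based K-nearest-neighbor recommender on top of a completed rating matrix. A minimal sketch of the neighborhood prediction step; the matrix-completion preprocessing itself is not shown, and a small dense matrix stands in for its output:

```python
import numpy as np

# Stand-in for a completed user-item rating matrix (rows: users, columns: items).
R = np.array([
    [5.0, 4.0, 1.0, 2.0],
    [4.0, 5.0, 2.0, 1.0],
    [1.0, 2.0, 5.0, 4.0],
    [2.0, 1.0, 4.0, 5.0],
])

def predict(R, user, item, k=2):
    """Predict R[user, item] from the k most Pearson-correlated other users."""
    sims = np.corrcoef(R)[user]                    # Pearson similarity to all users
    neighbors = [u for u in np.argsort(-sims) if u != user][:k]
    weights = sims[neighbors]
    centered = R[neighbors, item] - R[neighbors].mean(axis=1)
    return R[user].mean() + np.dot(weights, centered) / np.abs(weights).sum()

print("predicted rating:", round(predict(R, user=0, item=2), 2))
```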