Amin Babazadeh Sangar
-
International Journal Of Nonlinear Analysis And Applications, Volume:15 Issue: 2, Feb 2024, PP 39-46
Diabetes is a dangerous disease in which the body is incapable of controlling blood sugar due to inadequate insulin hormone levels. This chronic disease raises blood sugar in patients; therefore, if it is not controlled, it causes many complications. A considerable number of people worldwide suffer from this disease, owing to its damaging effects and the lack of early diagnosis. Patients must visit the doctor frequently and undergo various tests, which is tedious and costly. Machine learning approaches, enhanced by heuristics and novel methods, can mitigate these problems to some extent. The current study aims to propose a model that can predict diabetes in patients with high accuracy. The paper introduces a new method based on the combination of the particle swarm metaheuristic algorithm and a fuzzy inference system. The proposed method uses fuzzy systems to binarize the particle swarm algorithm. The resulting model is applied to the diabetes dataset and then evaluated using a neural network classifier. The results indicate an increase in classification accuracy to 95.47% compared to other existing methods.
Keywords: Diabetes, PSO Algorithm, Neural Networks, Fuzzy systems, Meta-heuristic algorithms
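The core idea, binarizing PSO positions through a sigmoid (fuzzy-style) transfer function so the swarm can search a binary feature space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy fitness function, swarm size, and coefficients are all placeholder assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Transfer function mapping a continuous velocity to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso(fitness, n_features, n_particles=10, n_iter=30, w=0.7, c1=1.5, c2=1.5):
    """Minimal binary PSO: velocities stay continuous, positions are binarized
    by comparing sigmoid(velocity) against a random threshold."""
    pos = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_features):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy fitness: prefer selecting exactly the first three of eight features.
target = [1, 1, 1, 0, 0, 0, 0, 0]
best, score = binary_pso(lambda p: sum(1 for a, b in zip(p, target) if a == b), 8)
```

In the paper's pipeline, the fitness would instead be the accuracy of a neural network classifier trained on the selected feature subset.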
-
International Journal Of Nonlinear Analysis And Applications, Volume:14 Issue: 12, Dec 2023, PP 175-186
The present era is marked by rapid improvement and advances in technology. Nowadays, inefficient traffic light management systems cause long delays and waste energy; improving the efficiency of such complex systems can save energy and reduce air pollution in future smart cities. In this paper, we propose to take real-time traffic information from the surrounding environment. Such a process, called profilization, constantly gathers and analyses information about vehicles and pedestrians throughout smart cities in order to fairly predict their actions and behaviours. We develop an efficient multi-level traffic light control system to schedule traffic signals' durations based on a distributed profile database, which is generated by embedding sensors in streets, vehicles, and other infrastructure. We deploy pervasive deep learning models from the cloud to users (vehicles, bikes, and pedestrians) to learn and control the traffic lights. In the cloud-level learning model, the maximum waiting time of different vehicles and pedestrians is calculated based on their profiles. The profilization process is a constant learning process throughout the whole city at the user level. Each vehicle deploys a separate learning (decision-making) model based on its average and maximum speed in different areas, its waiting times at intersections, and its possible trips and destinations. This multi-level deep learning model, at the intersection and cloud levels, aims to locally schedule traffic with deadlines toward destinations within a certain period. The results show that the proposed multi-level traffic light system can significantly improve the efficiency of the traffic system in future smart cities.
Keywords: Traffic light system, Smart Cities, Deep learning
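The scheduling step, choosing green durations from profile-derived demand while serving the approach with the tightest user deadline first, might be sketched like this. All names, demand counts, and deadlines below are hypothetical; the paper's actual deep-learning controller is far richer than this heuristic.

```python
# Hypothetical sketch: allocate green time at one intersection in proportion to
# profile-derived demand, serving the most deadline-critical approach first.
# All numbers and approach names are illustrative placeholders.

def schedule_greens(demand, max_waits, cycle=90, min_green=10):
    """demand: queued vehicles per approach; max_waits: tightest profile-derived
    deadline per approach (seconds). Returns service order and green durations."""
    total = sum(demand.values()) or 1
    # Green time proportional to demand, with a floor so no approach starves.
    greens = {a: max(min_green, round(cycle * d / total)) for a, d in demand.items()}
    # Serve the approach whose most urgent user has the tightest deadline first.
    order = sorted(greens, key=lambda a: max_waits[a])
    return order, greens

order, greens = schedule_greens(
    demand={"north": 12, "south": 4, "east": 8, "west": 2},
    max_waits={"north": 60, "south": 120, "east": 45, "west": 150},
)
```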
-
International Journal Of Nonlinear Analysis And Applications, Volume:14 Issue: 8, Aug 2023, PP 83-94
In spite of the various advantages of Business Intelligence Systems (BIS), implementing them brings different challenges. Implementing BIS without considering the related challenges and determinants increases the total cost and decreases the added value for the organization. In this study, a questionnaire was developed to identify the critical factors affecting the implementation of BIS in automotive parts manufacturing companies and analyzed through a data mining technique, namely association rules, using the WST-WFIM algorithm on weighted data. The algorithm extracts sets of frequent rules and their weights (determined according to their importance), obtained from an expert panel. The Taguchi method was adopted to optimize the algorithm's parameters in order to obtain more effective rules. After applying the new algorithm to the weighted factors, the relation and importance of each set of effective factors were analyzed. The findings showed that, from the experts' viewpoint, the most important factors for successful implementation of BIS include (1) removing potential negative resistances and barriers to implementing BIS, (2) alignment between business strategy and BIS characteristics, and (3) system reliability, flexibility, and scalability.
Keywords: Business Intelligence, Data Mining, Weighted Association Rules, WST-WFIM Algorithm, Taguchi method
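A hedged sketch of the weighted-support idea behind weighted frequent itemset mining: each factor carries an expert-assigned weight, and an itemset's weighted support combines those weights with the itemset's frequency across respondents. The averaging scheme shown is one common WFIM variant; the exact WST-WFIM weighting may differ, and the factor names and weights are illustrative.

```python
def weighted_support(transactions, weights, itemset):
    """Weighted support of an itemset: average weight of its items multiplied by
    the fraction of transactions containing all of them (a common WFIM variant;
    the exact WST-WFIM scheme may differ)."""
    contain = sum(1 for t in transactions if itemset <= t)
    avg_w = sum(weights[i] for i in itemset) / len(itemset)
    return avg_w * contain / len(transactions)

# Toy survey data: each transaction is the set of factors one expert marked.
transactions = [
    {"mgmt_support", "strategy_alignment"},
    {"mgmt_support", "reliability"},
    {"strategy_alignment", "reliability", "mgmt_support"},
    {"reliability"},
]
weights = {"mgmt_support": 0.9, "strategy_alignment": 0.8, "reliability": 0.7}

ws = weighted_support(transactions, weights, {"mgmt_support", "strategy_alignment"})
```

An itemset is kept as "frequent" when its weighted support clears a chosen threshold; rule extraction then proceeds over the surviving itemsets.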
-
Handwriting recognition has always been a challenge; therefore, it has attracted the attention of many researchers. The present study presents an offline system for the automatic recognition of human handwriting under different experimental conditions. The system comprises input data, an image-processing unit, and an output unit. In this study, a right-to-left dataset was designed based on the standards of the American Society for Testing and Materials (ASTM). An improved deep convolutional neural network (DCNN) model based on a pre-trained network was designed to extract features hierarchically from raw handwriting data. A significant advantage of this study is the use of heterogeneous data. Another notable aspect is that the proposed DCNN model is independent of any particular language and can be used for different languages. The results show that the proposed DCNN model performs very well at identifying the writer from heterogeneous handwriting data.
Keywords: Offline Writer Identification, Heterogeneous Data, Feature Learning, Deep Neural Network
-
Due to such disadvantages of current traffic light control methods as wasted time, wasted fuel and resources, and increased air pollution, providing an intelligent traffic light control system that minimizes the waiting time of vehicles and pedestrians becomes very significant. Given the high priority of this issue, this paper presents an intelligent urban traffic system method based on IoT data and fog computing. Fog computing is a platform at the edge of the network that provides powerful services and applications for users. Compared to cloud computing, fog computing is closer to users and therefore collects information faster and disseminates it over a network of sensors. It also assists cloud computing by performing tasks such as preprocessing and data collection. Fog computing is a new type of distributed processing structure used for the Internet of Things. This paper proposes a method called GW-KNN. In this method, we first collect data through the Internet of Things. Then, preprocessing and extraction of effective fields are performed in the cloud processing section using an improved k-nearest neighbor (KNN) machine learning algorithm. Traffic on each road is predicted for the next time slot, and this information is sent to the fog processing layer, where traffic control decisions are made. Gaussian-weighted Euclidean distance was used to predict the future traffic situation, and the KNN model's output was included in the algorithm to increase forecasting accuracy and finally solve the traffic light control problem. The idea was implemented and simulated in MATLAB, on a computer with an i7-10750 processor, 16 GB of main memory, and 1 TB of storage.
The results of the evaluations show that the proposed method performs much better than the two previous methods in terms of the mean absolute percentage error of the traffic forecast and the average waiting time of each vehicle.
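The prediction step, a Gaussian-weighted average over the k nearest historical traffic patterns, can be sketched in a few lines. This is a simplified pure-Python illustration on assumed toy data, not the paper's GW-KNN implementation.

```python
import math

def gw_knn_predict(train, query, k=3, sigma=1.0):
    """Predict the next-slot traffic count for `query` (a vector of recent
    counts) as a Gaussian-weighted average over the k nearest neighbours."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neigh = sorted(train, key=lambda xy: dist(xy[0], query))[:k]
    # Closer historical patterns get exponentially larger weights.
    weights = [math.exp(-dist(x, query) ** 2 / (2 * sigma ** 2)) for x, _ in neigh]
    return sum(w * y for w, (_, y) in zip(weights, neigh)) / sum(weights)

# Toy history: (counts in two earlier slots) -> count in the following slot.
train = [((10, 12), 14), ((30, 28), 27), ((11, 13), 15), ((29, 31), 30)]
pred = gw_knn_predict(train, (10, 13), k=2)
```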
-
Introduction
Telemedicine in the coronavirus disease 2019 (COVID-19) pandemic has supported social distancing in medical treatment, protecting health workers while also managing available resources. To attain best practices in telemedicine, a platform must be functional, and both patients and clinicians must be satisfied with the technology. Usability, that is, how easy the user interfaces of telehealth systems are to use, is essential to realizing the benefits of such systems. In this study, the usability of telemedicine systems has been investigated.
Material and Methods
The authors reviewed studies published from 2015 to 2021 using a combination of the keywords "health", "telemedicine", "telehealth", "mobile health", "usability", "software", "system", and "program", which led to the extraction of 119 articles in this field.
Results
The retrieved articles cover remote health software and usability evaluation of telehealth applications, in the form of apps based on mobile health technologies, web-based applications, or a combination of both, with primary sample devices being wearable electronics, sensors, or robots.
Conclusion
In this study, most of the remote health software is mobile-based, and its usability has been evaluated by questionnaire. Satisfaction is the most important usability attribute to consider when designing mobile health apps.
Keywords: Healthcare, Telemedicine, Telehealth, Mobile Health, System, Software, Application, Usability -
Signature identification plays an important role in many areas such as banking, administrative, and judicial systems. For this purpose, in this paper an automatic intelligent framework is developed by combining a deep pre-trained network with a recurrent neural network. The results of the proposed model were evaluated on several standard datasets as well as a collected dataset. Since no suitable Persian signature dataset existed, we collected one based on US ASTM guidelines and standards, which can be very valuable for deep approaches. Given the very promising results of the proposed model in comparison with recent studies and conventional methods, we evaluated its robustness by adding Gaussian noise, salt-and-pepper noise, speckle noise, and local-variance noise at different SNRs to the raw data. The results show that the proposed model remains resistant across a wide range of SNRs; even at 15 dB, its accuracy is still above 90%.
Keywords: Automatic Identification of the Writer of the Signature, Pre-trained Network, Feature Learning, Convolutional Neural Network (CNN) -
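The noise-robustness experiment relies on scaling the noise power so a target SNR is hit. A minimal sketch of adding Gaussian noise at a requested SNR (here 15 dB, the hardest setting reported above) might look like this; the sine-wave signal is only a stand-in for real signature data.

```python
import math
import random

random.seed(42)

def add_gaussian_noise(signal, snr_db):
    """Add zero-mean Gaussian noise scaled so the result has the requested
    signal-to-noise ratio in decibels."""
    p_signal = sum(x * x for x in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db / 10))  # SNR = 10*log10(Ps/Pn)
    sigma = math.sqrt(p_noise)
    return [x + random.gauss(0.0, sigma) for x in signal]

# Stand-in "signature" signal: a sampled sine wave.
clean = [math.sin(0.1 * i) for i in range(1000)]
noisy = add_gaussian_noise(clean, 15)
```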
Journal of Artificial Intelligence in Electrical Engineering, Volume:9 Issue: 34, Summer 2020, PP 16 -33
Landslide susceptibility analysis provides beneficial information for a wide range of applications. We aimed to explore and compare three machine learning (ML) techniques, namely random forests (RF), support vector machines (SVM), and multilayer perceptron (MLP) neural networks, for landslide susceptibility assessment in the Ahar county of Iran. To achieve this goal, 10 influencing factors related to landslide occurrence were considered. A total of 266 locations with landslide potential were recognized in the study area, and the Pearson correlation technique was utilized to select the influencing factors for the landslide models. The association between landslides and conditioning factors was also evaluated using a probability certainty factor (PCF) model. Three landslide models (SVM, RF, and MLP) were built on the training dataset. Lastly, the receiver operating characteristic (ROC) curve and statistical procedures were employed to validate and compare the predictive capability of the three models. The Pearson correlation ranking of the conditioning factors in the study area revealed that slope, aspect, normalized difference vegetation index (NDVI), and elevation have the highest impact on landslide occurrence. Overall, the MLP model had the highest prediction capability (85.22%), followed by the SVM model (78.26%) and the RF model (75.22%). The study also showed that selecting the optimal machine learning technique can facilitate landslide susceptibility modeling.
Keywords: Random Forest, Support Vector Machine, Multiple Layer Neural Network -
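The ROC validation used to compare the three models reduces to the area under the curve, which equals the probability that a randomly chosen landslide location is scored above a randomly chosen non-landslide location (ties counting half). A minimal sketch with illustrative scores, not the study's data:

```python
def roc_auc(labels, scores):
    """AUC via the rank-statistic definition: the probability that a positive
    example outranks a negative one, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy susceptibility scores: 1 = landslide location, 0 = non-landslide location.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
auc = roc_auc(labels, scores)
```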
Journal of Artificial Intelligence in Electrical Engineering, Volume:8 Issue: 31, Autumn 2019, PP 9-24
Lung cancer is among the deadliest cancers worldwide. One of its indications is lung nodules, which can appear individually or attached to the lung wall; therefore, detecting such nodules is complicated. In such cases, image processing algorithms performed by the computer can aid radiologists in locating and assessing a nodule's features. The significant problems with current systems are increasing accuracy, improving the other result criteria, and optimizing computation costs. The objective of the present paper is to cope with these problems efficiently using a shallow, lightweight network. Convolutional neural networks (CNNs) were utilized to distinguish between benign and malignant lung nodules. In CNNs, complexity increases as the number of layers increases. Accordingly, two scenarios are presented, based on a state-of-the-art and a shallow CNN method, to accurately detect lung nodules in lung CT scans. A subset of the public LIDC dataset including N = 7072 CT slices of varying nodule sizes was used for training and validation. Training and validation of the network took approximately five hours, and the proposed method achieved a detection accuracy of 83.6% in Scenario 1 and 91.7% in Scenario 2. Through the use of validated database images and comparison with previous similar studies in terms of accuracy, the proposed solution achieved a decent trade-off between the criteria and saved computation costs. The present work demonstrates that the proposed network is simple and suitable for these problems. Although the paper attempted to meet the existing challenges and fill the prevailing gaps in the literature, further issues remain that require complementary studies.
Keywords: Computed Tomography, Computer Aided Detection, Deep Learning, Lung Nodules, Medical Image Processing
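The basic operation each CNN layer applies to a CT slice is a 2-D convolution (implemented as cross-correlation in most deep learning libraries). A minimal pure-Python sketch on a toy image with a hypothetical vertical-edge kernel:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN libraries compute it):
    slide the kernel over the image and take elementwise products and sums."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

# A vertical-edge kernel applied to a toy 4x4 "slice" with an edge down the middle.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1]]
feat = conv2d(img, edge)
```

Stacking such layers, with nonlinearities and pooling between them, is what lets a shallow LeNet-style network build nodule-discriminative features from raw pixels.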
-
Background & Aims
One of the symptoms of lung cancer, which is among the deadliest cancers, is lung nodules. It is very difficult to detect these tiny nodules on lung CT scans with the naked eye. Therefore, intelligent systems, or computer-aided detection (CAD) systems, can assist the radiologist in detecting, locating, and evaluating lung nodules. The most important challenge of existing intelligent systems is the balanced improvement of accuracy, sensitivity, and specificity and the reduction of the false-positive rate (FPr); moreover, the complexity of these systems reduces their efficiency and execution speed. The purpose of this study was therefore to provide an agile framework that addresses this challenge.
Materials & MethodsOne of the new subfields of artificial intelligence is the deep learning and orientation of CNN networks, which has been widely used in the analysis of medical images in recent years. In this research, an innovative network based on CNN networks of LeNet type is proposed to extract image features as well as image classification. The used dataset is a subset of 7072 image pieces derived from the LIDC-IDRI standard dataset. The size of nodules of these images, which are used to train and validate the network, are 1 to 4 mm.
Results
The training and validation processes of this network were performed on a computer (2.4 GHz Core i5 processor, 8 GB of memory, and an Intel Graphics 520 GPU) in five hours and eleven minutes, and the achieved accuracy, sensitivity, and specificity are 91.1%, 85.3%, and 92.8%, respectively.
Conclusion
Given the standard basis of the proposed model and the use of valid database images to benchmark the network against previous works, the results strike a good balance between the evaluation criteria and, through faster execution, achieve the capability required for real-time applications.
Keywords: Computer aided detection systems, Medical image processing, Lung nodules, Artificial Neural Networks, Deep learning
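The three criteria reported above follow directly from a binary confusion matrix. A minimal sketch with illustrative counts (not the paper's actual confusion matrix):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on malignant) and specificity (recall on
    benign) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Illustrative counts only, chosen to give round numbers.
acc, sens, spec = binary_metrics(tp=85, fp=7, tn=93, fn=15)
```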