Search for articles related to the keyword "Neural network" in journals of the "Information Technology" group

  • Nazila Mohammadi, Gholamreza Memarzadeh Tehran *, Sedigheh Tootian Isfahani

    Properly managing the implementation of information and communication technology (ICT) policies in a planned way is essential to improving the country's position in science and technology. The purpose of this research is to provide a model of the factors affecting the implementation of Iran's ICT policies with the help of the neural network technique, based on Giddens' structuration theory. The research is a survey in terms of how it was conducted and applied in terms of purpose, since the results are intended to be used by the Ministry of Communication and Information Technology and the Iran Telecommunication Company. Data were collected by library and field methods, the instruments being the research literature and a researcher-made questionnaire. The statistical population comprised the ICT experts at the headquarters of the Iran Telecommunication Company (810 people), of whom 260 were randomly selected as a sample based on Cochran's formula. MATLAB was used for data analysis. According to the findings, the best combination for development is when all input variables are considered at the same time, and the worst case is when the infrastructure development variable is ignored; the network sensitivity analysis assigns the greatest importance to infrastructure development and the least to content provision.

    Keywords: Information and Communication Technology, 6th Development Plan, Giddens, Neural Network, Policy Implementation
  • Babak Nikmard, Azin Pishdad, Golnaz Aghaee Ghazvini, Mehrdad Abbasi

    Organizations generate a significant volume of log files every day that must be processed for condition checking, debugging, and anomaly resolution. Outsourcing such processing is not suitable because of the need for real-time processing and security maintenance. Given the multitude of different software and services, organizations face a substantial volume of logs that should be processed rather than deleted or ignored. In the traditional approach, experts check the log files manually every day; this slows down the process and increases time and inaccuracy on the one hand, and incurs high hiring costs on the other, because it requires expert staff. This article introduces a solution that employs generative neural networks to establish a local structure for log analysis within the organization. The process involves retrieving and parsing text files from various sectors, segmenting them into manageable portions, embedding them, and storing them in a vector database. In this structure, a trained individual without special expertise can quickly access the necessary information by writing appropriate prompts for a large language model that is deployed locally in the organization and accessible at any time. As a result, the proposed method can maintain security, increase the speed of analysis, and reduce human resource costs.

    Keywords: Neural Network, Generative Artificial Intelligence, Large Language Model, LLM, Log File
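
    The retrieve-parse-embed-store pipeline described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the embed() function below is a placeholder for whatever locally hosted embedding model an organization deploys, and the chunk size and in-memory "vector database" are simplifying assumptions.

```python
# Minimal sketch of the log-analysis pipeline described above: chunk log
# files, embed the chunks, store the vectors, and retrieve the chunks most
# similar to an analyst's prompt. embed() is a placeholder for a locally
# hosted embedding model, not the authors' actual model.
import math
from pathlib import Path

CHUNK_SIZE = 512  # characters per chunk; an assumption, not from the paper

def chunk(text: str, size: int = CHUNK_SIZE) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> list[float]:
    # Placeholder embedding: normalized character-frequency vector.
    vec = [0.0] * 256
    for ch in text:
        vec[ord(ch) % 256] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# "Vector database": a list of (embedding, chunk) pairs kept in memory.
store: list[tuple[list[float], str]] = []
for path in Path("logs").glob("*.log"):          # hypothetical log directory
    for piece in chunk(path.read_text(errors="ignore")):
        store.append((embed(piece), piece))

def retrieve(prompt: str, k: int = 3) -> list[str]:
    q = embed(prompt)
    ranked = sorted(store, key=lambda item: cosine(q, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# The retrieved chunks would then be passed, together with the prompt, to
# the locally deployed large language model for analysis.
```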
  • Mehdy Roayaei *

    Contemporary machine learning models, like deep neural networks, require substantial labeled datasets for proper training. However, in areas such as natural language processing, a shortage of labeled data can lead to overfitting. To address this challenge, data augmentation, which involves transforming data points to maintain class labels and provide additional valuable information, has become an effective strategy. In this paper, a deep reinforcement learning-based text augmentation method for sentiment analysis was introduced, combining reinforcement learning with deep learning. The technique uses Deep Q-Network (DQN) as the reinforcement learning method to search for an efficient augmentation strategy, employing four text augmentation transformations: random deletion, synonym replacement, random swapping, and random insertion. Additionally, various deep learning networks, including CNN, Bi-LSTM, Transformer, BERT, and XLNet, were evaluated for the training phase. Experimental findings show that the proposed technique can achieve an accuracy of 65.1% with only 20% of the dataset and 69.3% with 40% of the dataset. Furthermore, with just 10% of the dataset, the method yields an F1-score of 62.1%, rising to 69.1% with 40% of the dataset, outperforming previous approaches. Evaluation on the SemEval dataset demonstrates that reinforcement learning can efficiently augment text datasets for improved sentiment analysis results.

    Keywords: Data Augmentation, Sentiment Analysis, Deep Reinforcement Learning, Neural Network, DQN Algorithm
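
    The four augmentation transformations named above are easy to state in code. The sketch below implements them in plain Python, assuming whitespace-tokenized sentences and a caller-supplied synonym dictionary; the paper's DQN, which learns when to apply each transformation, is not reproduced here.

```python
# Sketch of the four text-augmentation transformations the abstract names:
# random deletion, synonym replacement, random swapping, random insertion.
import random

def random_deletion(words: list[str], p: float = 0.1) -> list[str]:
    kept = [w for w in words if random.random() > p]
    return kept or [random.choice(words)]  # never return an empty sentence

def synonym_replacement(words, synonyms: dict[str, list[str]], n: int = 1):
    words = words[:]
    candidates = [i for i, w in enumerate(words) if w in synonyms]
    for i in random.sample(candidates, min(n, len(candidates))):
        words[i] = random.choice(synonyms[words[i]])
    return words

def random_swap(words: list[str], n: int = 1) -> list[str]:
    words = words[:]
    for _ in range(n):
        i, j = random.randrange(len(words)), random.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_insertion(words, synonyms: dict[str, list[str]], n: int = 1):
    words = words[:]
    pool = [s for w in words for s in synonyms.get(w, [])]
    for _ in range(min(n, len(pool))):
        words.insert(random.randrange(len(words) + 1), random.choice(pool))
    return words

sentence = "the film was surprisingly good".split()
syns = {"good": ["great", "fine"], "film": ["movie"]}  # toy synonym table
print(synonym_replacement(sentence, syns))
```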
  • Farinaz Sanaei, Seyed Abdollah Amin Mousavi, Abbas Toloie Eshlaghy, Ali Rajabzadeh Ghotri
    Background/Purpose

    Melanoma is among the most commonly diagnosed cancers and the second leading cause of cancer-related death, and a growing number of people are becoming its victims. It is also the rarest and most malignant form of skin cancer; in advanced cases it can spread to internal organs and cause death. The American Cancer Society estimated that in the United States in 2022 approximately 99,780 people would be diagnosed with melanoma and approximately 7,650 would die of it. Therefore, this study aims to improve the accuracy of algorithms for predicting melanoma patients' survival.

    Methodology

    This applied research was a descriptive-analytical and retrospective study. The study population consisted of melanoma patients in the database of the National Cancer Research Center at Shahid Beheshti University (2008-2013) who had been followed up for up to five years. The melanoma survival prediction model was selected on the basis of the evaluation metrics of the data mining algorithms.

    Findings

    Neural network, Naïve Bayes, Bayesian network, a combination of decision tree and Naïve Bayes, logistic regression, J48, and ID3 algorithms were selected as the models applied to the national database. Statistically, the neural network outperformed the other selected algorithms on all evaluation metrics.

    Conclusion

    The results of the present study showed that the neural network, with a prediction accuracy of 0.97, performs best. The melanoma survival prediction model thus performed better both in discrimination power and in reliability, and this algorithm was therefore proposed as the melanoma survival prediction model.

    Keywords: Data Mining, Prediction, Melanoma, Disease Survival, Neural Network, Decision Tree
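
    A model comparison of the kind reported in the Findings can be sketched with scikit-learn. Synthetic data stands in for the national registry, which is not public, and J48/ID3 are approximated by a generic decision tree, since they are Weka-specific implementations.

```python
# Sketch of the model comparison described above, using scikit-learn.
# Synthetic data replaces the (non-public) national melanoma registry;
# J48/ID3 are approximated with DecisionTreeClassifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=15, random_state=0)

models = {
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree (J48/ID3 stand-in)": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```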
  • Natalja Osintsev *
    Air pollution is a major environmental hazard that cannot be ignored. The growth in the number of industries and in urbanization has increased air pollutant concentrations in many areas, causing changes in human life, such as health problems, and affecting other living organisms as well. Existing pollutant emission monitoring systems, such as Opsis, Codel, Urac, and TAS-Air metrics, are expensive, and their principle of operation limits where they can be installed on chimneys. This work proposes a system that is easy to use and costs less than the alternatives: an industrial air pollution monitoring system based on Wireless Sensor Network (WSN) technology. The system is integrated with the Global System for Mobile (GSM) communications and uses the ZigBee protocol. It consists of sensor nodes, a control center, and a database in which sensed data can be stored for history and future planning. It monitors carbon monoxide (CO), sulfur dioxide (SO2), and dust concentrations caused by industrial process emissions.
    Keywords: Object Detection Model, Neural Network, Deep Learning, Python
  • Milad Ghasemi *, Maryam Bayati
    The continuous progress of imaging technologies, together with the increase in the number of images and their applications, calls for new algorithms with new and different capabilities. Among the various operations on medical images, segmentation holds a special place and has always been considered and investigated as one of the important problems in medical image processing. Accordingly, this research presents a method for detecting tumors in medical images through a combined approach based on the watershed algorithm, the co-occurrence matrix, and neural networks, so that tumors can be detected with high accuracy. With the method used in this research, implemented in Python using the CV2 and SimpleITK modules, criteria such as accuracy, precision, recall, and F-score, parameters that are routinely examined in research, improved over previous work and reached favorable results. This improves tumor detection in the brain compared with thresholding and TKMeans methods.
    Keywords: Tumor Diagnosis, Image Processing, Medical Images, Neural Network
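
    The watershed stage of the combined method can be illustrated with the standard OpenCV marker-based pipeline below. The thresholds and kernel sizes are illustrative assumptions rather than the authors' parameters, and the input file name is hypothetical.

```python
# Sketch of marker-based watershed segmentation in OpenCV, the kind of
# pipeline the abstract combines with texture features and a neural network.
import cv2
import numpy as np

img = cv2.imread("scan.png")                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure background by dilation, sure foreground by distance transform.
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label markers and run watershed; boundaries are marked with -1.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)                # outline segment borders
cv2.imwrite("segmented.png", img)
```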
  • Ibrahim Mekawy *
    Household object detection is a new computer technique that combines image processing and computer vision to recognize objects in the home. All objects kept in the kitchen, rooms, and other areas are detected by the camera. Techniques for detecting people and objects in video or images on low-end devices are known as object detection.
    Keywords: Object Detection Model, Neural Network, Deep Learning, Python
  • Aziza Algarni *
    We all know that forests are a vital source of oxygen, and saving our environmental resources is a human responsibility. One technique for saving forests is forest fire detection, which detects fires and helps prevent their spread in less time. Forest fires kill wildlife and trees. Other techniques for detecting fires in forests, such as cameras, satellite systems, and manual monitoring, take time to detect a fire, whereas a forest fire detection system detects the fire within seconds and triggers alarms, so trees and wildlife can be saved in far less time.
    Keywords: Object Detection Model, Neural Network, Deep Learning, Python
  • Mohammad Nazarpour, Navid Nezafati, Sajjad Shokouhyar

    The integration and diversity of IoT terminals and their applications make them vulnerable to many intrusive attacks. Designing an intrusion detection model that ensures the security, integrity, and reliability of the IoT is therefore vital. Traditional intrusion detection technology has the disadvantages of low detection rates and weak scalability, and cannot adapt to the complicated and changing environment of the Internet of Things. One of the most widely used traditional methods is neural networks, and training neural networks with evolutionary optimization algorithms can be an efficient and interesting approach. In this paper, we therefore use the PSO algorithm to train a neural network to detect attacks and abnormalities in the IoT system. Although the PSO algorithm has many benefits, in some cases it may reduce population diversity and converge early. To solve this problem, we use a modified PSO algorithm with a new mutation operator, fuzzy systems, and adaptive formulations. The proposed method was tested on the KDD Cup dataset. The simulation results of the proposed model show better performance and 99% detection accuracy in detecting different malicious attacks, such as DoS, R2L, U2R, and Probe.

    Keywords: Attack Detection, Internet of Things (IoT), Neural Network, PSO Algorithm, Fuzzy Rule, Adaptive Formulation
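
    A minimal sketch of the core idea, training a neural network's weights with PSO instead of backpropagation, is given below in NumPy. It implements plain PSO on a toy task; the paper's mutation operator and fuzzy modifications are not reproduced.

```python
# Sketch of training a small neural network with particle swarm optimization,
# as the abstract describes. Plain PSO only; toy data stands in for traffic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # toy traffic features
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy attack label

H = 8                                          # hidden units
DIM = 4 * H + H                                # input->hidden plus hidden->out weights

def loss(w: np.ndarray) -> float:
    W1 = w[:4 * H].reshape(4, H)
    w2 = w[4 * H:]
    out = 1 / (1 + np.exp(-np.tanh(X @ W1) @ w2))   # forward pass
    return float(np.mean((out - y) ** 2))

# Standard PSO: velocities pulled toward personal and global bests.
n, iters = 30, 200
pos = rng.normal(size=(n, DIM))
vel = np.zeros((n, DIM))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("trained MSE:", pbest_val.min())
```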
  • Agyan Panda *, Sheila Maria Muniz
    Household object detection is a new computer technique that combines image processing and computer vision to recognize objects in the home. All objects kept in the kitchen, rooms, and other areas are detected by the camera. Techniques for detecting people and objects in video or images on low-end devices are known as object detection.
    Keywords: Object Detection Model, Neural Network, Deep Learning, Python
  • Alaa Mahdi Alkhafaji, Ghassan Fadhil Smaisim*, Falah mahdi Alobayes, Monireh Houshmand

    With the accelerated development of Internet finance and electronic funds transfer and the rapid growth of credit card activity, credit cards now play a very important role in every area of life. There are risks in this regard that pose serious threats to both issuers and cardholders: the increasing number of fraudulent credit card transactions, forged credit cards, and fraudulent uses of expired credit cards have led to growing losses. Finding accurate and fast fraud detection techniques has therefore become an important topic in current research. In this study, after normalizing the data and reducing its dimensionality with the PCA algorithm, we classify it using a modified perceptron neural network, with the grasshopper algorithm adjusting the weights and biases of the neural network. In the end, we were able to achieve 99.20% accuracy.

    Keywords: Fraud Detection, Grasshopper Optimization Algorithm, Neural Network
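
    The preprocessing and classification pipeline described above can be sketched with scikit-learn. The grasshopper algorithm the authors use to tune weights and biases is replaced here by ordinary gradient training, and the imbalanced toy data merely mimics the shape of fraud data.

```python
# Sketch of the pipeline the abstract describes: normalize, reduce
# dimensionality with PCA, then classify with a perceptron-style network.
# Gradient training stands in for the paper's grasshopper optimization.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.98],
                           random_state=0)     # imbalanced, fraud-like toy data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),                 # normalization step
    PCA(n_components=10),             # dimensionality-reduction step
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```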
  • Zahra Abbasnejad*, Milad Ghahari Bidgoli

    The growing amount of information on the web and the addition of new web pages and websites have created problems for users: when trying to obtain information on a particular topic, finding all the pages suggested to them is a difficult and time-consuming process. In the current research, a profile is first created from the behavioral characteristics of users across different sessions, derived from web server logs; these include the frequency of user page views, the length of time the user spent on different pages, and the date each page was viewed. Users are then grouped using a clustering method; next, a fuzzy inference system extracts fuzzy rules according to the users' interests and their clusters; and after the users' movement patterns are obtained, they are fed into a neural network in vector format. Other tools, such as bio-inspired algorithms, can help by finding optimal parameters, optimizing the predictions and increasing the accuracy of the fuzzy neural network. The evaluation criterion in this study is accuracy.

    Keywords: Data Mining, Web Mining, User Behavior Patterns, Neural Network, Fuzzy System, MFO Algorithm
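
    The first stage of this pipeline, turning sessions into feature vectors and clustering them, can be sketched as below. The three session features are assumptions based on the abstract; the fuzzy inference and neural network stages are not reproduced.

```python
# Sketch of the session-clustering stage described above: represent each
# user session as a feature vector and group the sessions with k-means.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: page views per session, seconds on page, days since last visit
sessions = rng.normal(loc=[10, 120, 5], scale=[3, 40, 2], size=(500, 3))

X = StandardScaler().fit_transform(sessions)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("sessions per cluster:", np.bincount(labels))
```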
  • Mehrdad Fadaei Pellehshahi, Sohrab Kordrostami, AmirHosein Refahi Sheikhani*, Marzieh Faridi Masouleh, Soheil Shokri

    In this study, an alternative method is proposed based on recurrent deep learning with limited steps and preprocessing, in which the data is divided into unit classes in order to adapt a long short-term memory network and resolve the existing challenges. The goal is to obtain predictions that are closer to the real-world outcomes of COVID-19 patients. To achieve this goal, four existing challenges are resolved: heterogeneous data, imbalanced data distribution across the predicted classes, a low rate of data allocation to a class, and the existence of many features in a process. The proposed method is simulated using the real data of COVID-19 patients hospitalized in 2020 in treatment centers of the Tehran treatment management affiliated with the Social Security Organization of Iran, whose outcomes were recovery or death. The results are compared against three valid advanced methods and show that memory usage and CPU time increase slightly relative to similar methods while accuracy increases by an average of 12%.

    Keywords: Long Short-Term Memory, Recurrent Deep Learning, Prediction, COVID-19, Neural Network
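
    A minimal LSTM classifier over patient time series, the model family the paper modifies, might look like the PyTorch sketch below. Input shapes and layer sizes are illustrative assumptions, and the paper's class-splitting scheme is not reproduced.

```python
# Minimal sketch of an LSTM classifier over patient time series.
# Random tensors stand in for real (non-public) patient records.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # recovery vs. death

    def forward(self, x):                       # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

model = LSTMClassifier(n_features=12)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 14, 12)                    # 64 patients, 14 days, 12 signals
y = torch.randint(0, 2, (64,))
for _ in range(10):                            # a few toy training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final toy loss:", float(loss))
```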
  • Pejman Peykani, Farzad Eshghi *, Alireza Jandaghian, Hamed Farrokhi-Asl, Farid Tondnevis
    Providing an efficient and powerful approach to liquidity management in bank branches has always been one of the most important and challenging issues for researchers and scholars in the banking field. In other words, estimating the amount of cash required in different branches of the bank is one of the basic and important questions for managers of the banking system: if the amount of cash is less than required, the bank runs default risk, and if it is more than required, the bank incurs opportunity costs. Therefore, the purpose of this study is to provide a practical approach for predicting the optimal amount of cash required in bank branches. For this purpose, the concepts of time series analysis, the neural network approach, and the vector autoregressive (VAR) model are used. The effectiveness of the proposed approach is examined using real data.
    Keywords: Banking System, Cash Prediction, Liquidity Requirement, Neural Network, Time Series
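
    The VAR component of the proposed approach can be sketched with statsmodels. The two toy series below stand in for branch-level cash flows, which are not public.

```python
# Sketch of a VAR-based cash forecast of the kind the abstract describes.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
inflow = np.cumsum(rng.normal(size=n)) + 100          # toy cash-inflow series
withdrawals = 0.6 * inflow + rng.normal(scale=2, size=n)
data = pd.DataFrame({"inflow": inflow, "withdrawals": withdrawals})

model = VAR(data)
results = model.fit(maxlags=10, ic="aic")      # lag order chosen by AIC
forecast = results.forecast(data.values[-results.k_ar:], steps=7)
print(forecast)                                # next 7 periods, both series
```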
  • Mahdieh Salari *, Vahid Khatibi Bardsiri, Amid Khatibi Bardsiri

    Estimating criteria is a crucial task in software projects, and effort estimation in the early stages of software development is one of the most important challenges in managing software projects. Incorrect estimation can lead a project to failure, so accurate estimation of software costs is a key activity in the effective and efficient development of software projects. This research therefore presents two methods for effort estimation in software projects, which attempt to increase estimation accuracy by analyzing effort drivers and applying metaheuristic algorithms in combination with neural networks. The first method applies the cuckoo search algorithm to optimize the estimation coefficients of the COCOMO model, and the second combines neural networks with the cuckoo search optimization algorithm to increase the accuracy of software development effort estimation. The results obtained on two real-world datasets demonstrate the favorable performance of the proposed methods compared with similar methods.

    Keywords: COCOMO, Cost Estimation, Cuckoo Algorithm, Neural Network
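
    The first method tunes the coefficients of the basic COCOMO equation, effort = a · KLOC^b · EAF (person-months). The sketch below states the equation in Python and uses a simple random search as a stand-in for the cuckoo search the paper actually employs; the historical projects are toy data.

```python
# Sketch of the COCOMO effort equation and a coefficient search over toy
# historical projects. Random search stands in for the paper's cuckoo search.
import random

def cocomo_effort(kloc: float, a: float, b: float, eaf: float = 1.0) -> float:
    return a * kloc ** b * eaf    # effort in person-months

# Toy historical projects: (KLOC, actual effort in person-months).
history = [(10, 25.0), (50, 160.0), (120, 450.0), (7, 18.0)]

def mre(a: float, b: float) -> float:
    # mean magnitude of relative error over the historical projects
    return sum(abs(cocomo_effort(k, a, b) - e) / e for k, e in history) / len(history)

best = (2.4, 1.05)                 # the classic organic-mode coefficients
random.seed(0)
for _ in range(5000):              # random search as a cuckoo-search stand-in
    cand = (random.uniform(1.0, 4.0), random.uniform(0.9, 1.3))
    if mre(*cand) < mre(*best):
        best = cand
print("tuned (a, b):", best, "MRE:", round(mre(*best), 3))
```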
  • Tanzia Ahmed, Tanvir Rahman, Bir Ballav Roy, Jia Uddin*

    This paper presents a vision-based drone detection method. Many object detection studies use various feature extraction methods, each applied separately; in the proposed model, a hybrid feature extraction method combining SURF and GLCM, not previously tried, feeds a neural network detector. Speeded-Up Robust Features (SURF) is a blob detection algorithm that extracts points of interest from an integral image, converting the image into a 2D vector, and allows fast matching of images. The Gray-Level Co-occurrence Matrix (GLCM) counts occurrences of consecutive pixels in the same spatial relationship and represents the best attributes of an image as an 8 × 8 matrix. In the proposed model, images are first preprocessed to fit the feature extraction methods; SURF then extracts features into a 2D vector, and GLCM extracts the best features from that vector into an 8 × 8 matrix. Combining the two ensures the quality of the training dataset, extracting features quickly with SURF while keeping the best points of interest with GLCM. The extracted features are used to train and test a neural network, with a pattern recognition algorithm as the machine learning tool. In the experimental evaluation, performance is measured by per-instance cross-entropy and percentage error; on the tested drone dataset, the proposed model outperforms state-of-the-art models, exhibiting lower cross-entropy and percentage error.

    Keywords: Feature Extraction, GLCM Method, Image Processing, Neural Network, SURF Algorithm
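
    The GLCM half of the hybrid feature extractor can be sketched with scikit-image. ORB keypoints stand in for SURF, which is patented and shipped only in opencv-contrib builds, and the input frame is hypothetical.

```python
# Sketch of GLCM texture-feature extraction, the second half of the hybrid
# scheme above. ORB replaces SURF as a freely available keypoint detector.
import cv2
from skimage.feature import graycomatrix, graycoprops

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame

# Keypoint stage (SURF in the paper; ORB used here as a free substitute).
orb = cv2.ORB_create(nfeatures=200)
keypoints = orb.detect(img, None)

# GLCM stage: co-occurrence of gray levels at distance 1, angle 0.
glcm = graycomatrix(img, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)
features = [float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(len(keypoints), "keypoints; GLCM features:", features)
```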
  • Sina Labbafi, Alireza Zahedi *
    One of the important factors in the growth of microalgae is the amount of salt needed to feed them. In this study, culture media for the microalga Nannochloropsis oculata were prepared at different salt concentrations, and the growth of the active microalgae was monitored every day with the help of machine vision technology. The maximum and minimum cell densities on the seventh day of cultivation were 286.23 × 10⁴ ± 0.38 × 10⁵ cells per ml (at a salinity of 35 mg per liter) and 168.58 × 10⁴ ± 0.48 × 10⁵ cells per ml (at a salinity of 100 mg per liter), respectively. In the growth system analysis, the simple linear regression (lowest error), linear regression, multilayer perceptron (MLP), and Gaussian process (highest error) algorithms gave good results, with correlation coefficients of 0.9095, 0.9039, 0.8623, and 0.7335, respectively; the minimum error, 54.32, belonged to the simple linear regression algorithm and the greatest error, 70.79, to the MLP algorithm. The system was also evaluated with an artificial neural network over a range of 4 to 20 neurons. The data show that a biosystems view of microalgae cultivation using image processing estimates growth at different salt concentrations successfully, with higher accuracy and lower cost and time than other growth control methods.
    Keywords: Microalgae, Nannochloropsis oculata, Salt Concentration, Machine Vision, Neural Network
  • Ramri Shukla *, Bardia Khalilian, Sara Partouvi
    To lessen the impact of a low student success rate, it is critical to identify students who are in danger of failing early on, so that more targeted remedial intervention can be implemented. Private colleges use a variety of techniques, including increased tuition, expanded laboratory access, and the formation of learning communities. As the discussion presented here shows, prompt identification of students in danger of failing a given programme is important both to the students and to the institutions with which they are registered. In this article, students are classified using artificial neural networks and random forests, on a dataset of 2,000 students provided by a private higher education provider. Artificial neural networks produced the best performing model, with an accuracy of 83.24%.
    Keywords: Education, Neural Network, Monitoring
  • Mojtaba Nasehi*, Mohsen Ashourian, Payman Moallem

    Today, large numbers of vehicles are scattered across different parts of the city and therefore need to be controlled by programmed systems. Applications of these systems include traffic control, urban planning, driverless vehicles, parking lot management that announces the arrival of a vehicle, and detection of stolen or offending vehicles. Identifying the type of vehicle in photos, films, and moving images faces challenges such as the multiplicity of objects in an image, weather conditions, the different colors and designs of vehicles, and the wide variety of views of a vehicle from different angles; these challenges have led to a variety of research, and in this article we examine some of the techniques.

    Keywords: Convolution, Vehicle, Neural Network
  • Saeed Talati*, MohammadReza Hassani Ahangar

    Learning vector quantization (LVQ) can be understood as a special case of an artificial neural network, more precisely a winner-take-all, learning-based approach. In this paper, we investigate this algorithm, a supervised version of vector quantization that checks which class an input belongs to and updates the winning prototype according to its distance from the input and the class in question. A strength relative to other neural network algorithms is the learning vector quantization algorithm's speed, about twice that of synchronous updating, which makes it perform better where sufficient speed is needed. The simulation results bear this out: in MATLAB, the learning vector quantization simulation is faster than the self-organizing neural network.

    Keywords: Neural Network, Learning Vector Quantization, Self-Organizing Neural Network, Optimization
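
    The LVQ1 update rule the abstract describes, attracting the winning prototype on a correct classification and repelling it on an incorrect one, fits in a short NumPy sketch; the learning rate and prototype counts below are illustrative choices.

```python
# Sketch of the LVQ1 update rule: find the nearest prototype, pull it
# toward the input if the classes match, push it away otherwise.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)              # toy two-class data

protos = np.array([[0.0, 0.0], [3.0, 3.0]])    # one prototype per class
proto_labels = np.array([0, 1])
lr = 0.05

for epoch in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winner takes all
        if proto_labels[j] == yi:
            protos[j] += lr * (xi - protos[j])   # attract on correct class
        else:
            protos[j] -= lr * (xi - protos[j])   # repel on wrong class

pred = proto_labels[np.argmin(np.linalg.norm(
    X[:, None, :] - protos[None, :, :], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```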
Note
  • Results are sorted by publication date.
  • Your keyword was searched only in the keywords field of the articles. To remove unrelated results, the search was limited to journals on the same subject as the source journal.