Search results for articles related to the keyword « Machine Learning » in journals of the « Medicine » group

  • Yasaman Vojgani*, Mohadeseh Sadeghinia

    Recently, the medical profession has seen an accelerated integration of devices equipped with artificial intelligence (AI), thanks to significant advances in this area. More than 60 AI-enabled medical devices have already been approved by the United States Food and Drug Administration (FDA). The widespread use of AI in medicine is widely regarded as inevitable in the near future. AI is already in clinical use in oncology, particularly in radiology, and is anticipated to become a crucial core technology in this field. Precision medicine, which involves selecting the most suitable treatment for each patient based on extensive medical data such as genomic information, has gained global popularity, and AI is anticipated to play a crucial role in extracting valuable information from vast medical datasets and applying it to medical care. Cancer, the second most prevalent illness worldwide, relies on both oncogenic mutations and non-mutated genes for its survival. The significant heterogeneity of tumors means that the same medications or surgical procedures may produce different curative results in patients with the same tumor type, highlighting the need for more precise treatment approaches and personalized therapies tailored to individual patients. In this report, we summarize current and noteworthy AI advances in cancer research, discuss AI's limitations, challenges, and potential effects on cancer therapy, and explore AI applications in omics, pathology, and medical imaging.

    Keywords: Artificial Intelligence, Cancer Therapy, Precision Medicine, Machine Learning
  • Nadia Mohammadi Dashtaki, Alireza Mirahmadizadeh, Mohammad Fararouei*, Reza Mohammadi Dashtaki, Mohammad Hoseini, Mohammadreza Nayeb
    Background

    Exposure to air pollution is a major health problem worldwide. This study aimed to investigate the effect of air pollutant levels and meteorological parameters, with their associated lag times, on the transmission and severity of coronavirus disease 2019 (COVID-19), using machine learning (ML) techniques in Shiraz, Iran.

    Study Design:

     An ecological study.

    Methods

    In this ecological research, three main ML techniques, namely decision trees, random forest, and extreme gradient boosting (XGBoost), were applied to correlate meteorological parameters and air pollutants with infection transmission, hospitalization, and death due to COVID-19 from 1 October 2020 to 1 March 2022. These parameters and pollutants included particulate matter (PM2.5), sulfur dioxide (SO2), nitrogen dioxide (NO2), nitric oxide (NO), ozone (O3), carbon monoxide (CO), temperature (T), relative humidity (RH), dew point (DP), air pressure (AP), and wind speed (WS).

    Results

    Based on the three ML techniques, NO2 (lag 5), CO (lag 4), and T (lag 25) were the most important environmental features affecting the spread of COVID-19 infection. The most important features contributing to hospitalization due to COVID-19 were RH (lag 28), T (lag 11), and O3 (lag 10). After adjusting for the number of infections, the most important features affecting the number of deaths caused by COVID-19 were NO2 (lag 20), O3 (lag 22), and NO (lag 23).

    Conclusion

    Our findings suggest that air pollutants and meteorological parameters can be used to predict the burden of COVID-19 epidemics, and possibly of similar virally transmitted infections such as influenza, on the community and health system. Meteorological and air quality data should therefore be incorporated into preventive measures.

    Keywords: Air Pollutants, Meteorological Factors, COVID-19, Machine Learning, Time Factors
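The lag analysis described above pairs each day's outcome with pollutant and weather values recorded several days earlier, so that a model can score features such as NO2 (lag 5). A minimal, library-free sketch of building such lagged feature rows (the NO2 readings below are hypothetical; the study itself fitted decision trees, random forests, and XGBoost to the real data):

```python
def make_lagged_features(series, lags):
    """Return rows of lagged values: the row for day t contains series[t - lag] for each lag."""
    max_lag = max(lags)
    rows = []
    for t in range(max_lag, len(series)):
        rows.append([series[t - lag] for lag in lags])
    return rows

no2 = [10, 12, 11, 15, 14, 13, 16, 18, 17, 19]  # hypothetical daily NO2 readings
X = make_lagged_features(no2, lags=[1, 4, 5])
# The first row corresponds to day 5: [no2[4], no2[1], no2[0]]
```

Each row can then be paired with that day's case count and fed to any tabular learner, whose feature importances indicate which lags matter most.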
  • Mohammadsadegh Sohrabi*, Hassan Khotanlou, Rashid Heidarimoghadam, Iraj Mohammadfam, Mohammad Babamiri, Ali Reza Soltanian
    Background

     Modeling with methods based on machine learning (ML) and artificial intelligence can help understand the complex relationships between ergonomic risk factors and employee health. The aim of this study was to use ML methods to estimate the effect of individual factors, ergonomic interventions, quality of work life (QWL), and productivity on work-related musculoskeletal disorders (WMSDs) in the neck area of office workers.

    Study Design: 

    A quasi-randomized controlled trial.

    Methods

    To measure the impact of the interventions, ML models were fitted to the data of a quasi-randomized controlled trial comprising 311 office workers (mean age 32.04 ± 5.34 years). Neighborhood component analysis (NCA) was used to measure the effect of factors affecting WMSDs, and support vector machines (SVMs) and decision tree algorithms were then used to classify whether disorders decreased or increased.

    Results

    Three classification models were designed according to the follow-up times of the field study, with accuracies of 86.5%, 80.3%, and 69%, respectively. These models identified the most influential factors with acceptable sensitivity, including age, body mass index, the interventions, QWL and some of its subscales, and several psychological factors. The models indicated that relative absenteeism and presenteeism were not related to the outputs.

    Conclusion

     In this study, the focus was on disorders in the neck, and the obtained models revealed that individual and management interventions can be the main factors in reducing WMSDs in the neck. Modeling with ML methods can create a new understanding of the relationships between variables affecting WMSDs.

    Keywords: Ergonomics, Model, Machine Learning, Support Vector Machine
  • Ameneh Javanmard*, Alireza Salehan
    Background

    Coronaviruses were discovered in the 1960s. They are large, enveloped viruses of the Coronaviridae family with single-stranded RNA of animal origin, and in humans they can cause mild or severe respiratory illness. In 2020, the World Health Organization declared COVID-19 a global pandemic. The aim of this study is to use the Jaccard similarity coefficient to determine the similarity of COVID-19 behavior patterns across the seasons of the year.

    Methods

    This study used machine learning systems and a similarity metric to determine the behavior pattern of COVID-19 across the seasons of the year. The study was conducted at Mousa ibn Ja'far Hospital in Mashhad from May 2020 to September 2022. The symptoms of affected patients were compared with the compiled dataset, patient similarities were assembled into a similarity matrix, and the Jaccard correlation coefficient was computed on the data. Finally, the strains were analyzed from their first emergence to the latest strain.

    Results

    The performance indicators of the Jaccard similarity method were a recall of 0.94, a precision of 1, an F1 score of 0.86, and an accuracy of 0.76. The most important factors in the investigation were white blood cells, platelets, RT-PCR, CT scan, shortness of breath, fever, SpO2, and respiratory rate. The transmission of the COVID-19 virus depends on several factors, including human interaction. The collected data show that patients with COVID-19 have a low lymphocyte count, which is highly consistent with recent studies. Because no suitable dataset existed, a comparative study was conducted and a dataset was compiled.

    Conclusion

    This study, leveraging machine learning algorithms, identified a clear seasonal correlation in the spread of COVID-19. Considering geographical and seasonal variations among patients, distinct symptoms were observed in each season corresponding to the prevalent strain during that period.

    Keywords: COVID-19, Disease, Epidemiology, Machine Learning, Strains
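The Jaccard coefficient used in the abstract above measures the overlap of two symptom profiles as the size of their intersection divided by the size of their union. A minimal, library-free sketch with hypothetical symptom sets (not the study's dataset):

```python
def jaccard(a, b):
    """Jaccard similarity: |A ∩ B| / |A ∪ B|; 1.0 for two identical (or both empty) sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical symptom profiles for two patients
patient_1 = {"fever", "cough", "low SpO2", "shortness of breath"}
patient_2 = {"fever", "cough", "headache"}

similarity = jaccard(patient_1, patient_2)  # 2 shared out of 5 distinct symptoms = 0.4
```

Computing this coefficient for every pair of patients yields the similarity matrix the abstract describes.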
  • Günay Rona*, Meral Arifoğlu, Ahmet Tekin Serel, Tamer Baysal
    Background

    Detection of axillary metastases in breast cancer is critical for treatment options and prognosis. The aim of this study is to investigate the value of radiomic features obtained from short tau inversion recovery (STIR) sequences in magnetic resonance imaging (MRI) of primary tumor in breast cancer in predicting axillary lymph node metastasis (ALNM).

    Methods

    Lesions of 165 patients (mean age 51.12 ± 11, range 28-82) with newly diagnosed invasive breast cancer who underwent breast MRI before treatment were manually segmented from STIR sequences in all sections using the 3D Slicer program. Machine learning (ML) analysis of the 851 extracted features was performed in Python 3.11 with the PyCaret library. The data were randomly divided into training (123, 80%) and independent test (63, 20%) sets. The performance of the ML algorithms was compared using area under the curve (AUC), accuracy, recall, precision, and F1 score.

    Results

    Accuracy and AUC in the training set ranged from 57% to 86% and from 0.50 to 0.95, respectively. The best model in the training set was the CatBoost classifier, with an AUC of 0.95 and 84% accuracy. On the test set, the AUC, accuracy, recall, precision, and F1 score of the CatBoost classifier were 0.92, 84%, 89%, 85%, and 86%, respectively.

    Conclusion

    Radiomic features obtained from primary tumors on STIR sequences have the potential to predict ALNM in invasive breast cancer.

    Keywords: Breast Cancer, Lymphatic Metastasis, Magnetic Resonance Imaging, Radiomics, Machine Learning
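Several abstracts in this listing report accuracy, precision, recall, and F1; all four derive from the confusion-matrix counts. A minimal, library-free sketch with hypothetical labels (1 = metastasis), not the study's data:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 computed from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical node-metastasis labels and model predictions
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
```

In practice these values come from library routines (the study used PyCaret), but the underlying arithmetic is exactly this.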
  • Elnaz Sheikhian, Majid Ghoshuni, Mahdi Azarnoosh, Mohammad Mahdikhalilzadeh
    Background

    This study explores a novel approach to detecting arousal levels through the analysis of electroencephalography (EEG) signals. Leveraging the Faller database with data from 18 healthy participants, we employ a 64‑channel EEG system.

    Methods

    The approach we employ entails the extraction of ten frequency characteristics from every channel, culminating in a feature vector of 640 dimensions for each signal instance. To enhance classification accuracy, we employ a genetic algorithm for feature selection, treating it as a multiobjective optimization task. The approach utilizes fast bit hopping for efficiency, overcoming traditional bit‑string limitations. A hybrid operator expedites algorithm convergence, and a solution selection strategy identifies the most suitable feature subset.

    Results

    Experimental results demonstrate the method’s effectiveness in detecting arousal levels across diverse states, with improvements in accuracy, sensitivity, and specificity. In scenario one, the proposed method achieves an average accuracy, sensitivity, and specificity of 93.11%, 98.37%, and 99.14%, respectively. In scenario two, the averages stand at 81.35%, 88.65%, and 84.64%.

    Conclusions

    The obtained results indicate that the proposed method has a high capability of detecting arousal levels in different scenarios. In addition, the advantage of employing the proposed feature reduction method has been demonstrated.

    Keywords: Arousal Level, Feature Selection, Genetic Algorithms, Machine Learning
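Genetic-algorithm feature selection of the kind described above encodes each candidate feature subset as a bit string and evolves a population toward higher fitness. The following much-simplified, library-free sketch uses a toy fitness that rewards selecting "informative" features while penalizing subset size; the paper's fast bit-hopping and hybrid operators are not reproduced, and the INFORMATIVE set is purely illustrative:

```python
import random

random.seed(0)

N_FEATURES = 16
INFORMATIVE = {1, 4, 7, 11}  # toy ground truth: features worth selecting

def fitness(bits):
    # Reward selected informative features, penalize subset size (toy objective).
    hits = sum(1 for i in INFORMATIVE if bits[i])
    size = sum(bits)
    return 3 * hits - size

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

def ga(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection keeps the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
selected = {i for i, b in enumerate(best) if b}
```

In the paper the fitness would instead be a classifier's cross-validated performance on the 640-dimensional EEG feature vectors, which makes each evaluation far more expensive and motivates the efficiency tricks the authors describe.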
  • Boya Fan, Gang Wang, Wei Wu
    Background

    Occupational hearing loss may have different clinical characteristics in workers exposed long-term to impulse noise and in those exposed long-term to steady noise.

    Methods

    As of May 2019, all 92 servicemen working in a weapon experimental field who had been exposed to impulse noise for over 1 year were enrolled as the impulse noise group. As of December 2019, all 78 servicemen working in an engine experimental field who had been exposed to steady noise for over 1 year were enrolled as the steady noise group. A propensity score matching (PSM) model was used to eliminate the imbalance in age and working time between the two groups; after matching, 51 subjects in each group were included in the study. Machine learning models were constructed from pure-tone auditory thresholds, and their performance was evaluated by accuracy, sensitivity, specificity, and AUC.

    Results

    Subjects in both the impulse noise group and the steady noise group had significant hearing loss at high frequencies. The hearing of the steady noise group was worse than that of the impulse noise group at speech frequencies, especially at 1 kHz. Among the machine learning models, XGBoost had the best prediction and classification performance.

    Conclusion

    Subjects in both groups showed elevated pure-tone auditory thresholds at high frequencies. The hearing of the steady noise group at 1 kHz was significantly worse than that of the impulse noise group. XGBoost was the best model for classifying our two groups. Our research can guide the prevention of damage caused by different types of noise.

    Keywords: Noise-Induced Hearing Loss, Impulse Noise, Steady Noise, Machine Learning
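Propensity score matching, as used above, pairs each subject in one group with the most similar subject in the other so that covariates such as age and working time are balanced before outcomes are compared. A minimal, library-free sketch of greedy 1:1 nearest-neighbor matching on precomputed scores within a caliper (the scores below are hypothetical, not the study's model):

```python
def greedy_match(scores_a, scores_b, caliper=0.1):
    """Greedily pair each subject in group A with the nearest unmatched subject
    in group B whose propensity score differs by at most `caliper`."""
    unmatched_b = dict(enumerate(scores_b))
    pairs = []
    for i, sa in enumerate(scores_a):
        if not unmatched_b:
            break
        j, sb = min(unmatched_b.items(), key=lambda kv: abs(kv[1] - sa))
        if abs(sb - sa) <= caliper:
            pairs.append((i, j))
            del unmatched_b[j]  # each control is used at most once
    return pairs

# Hypothetical propensity scores (e.g., from a logistic model of age and working time)
impulse_group = [0.32, 0.55, 0.71, 0.90]
steady_group = [0.30, 0.58, 0.95, 0.20]
matched = greedy_match(impulse_group, steady_group)
```

Subjects left unmatched (here, the one with score 0.71) are dropped, which is how the study arrived at 51 matched subjects per group.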
  • Seyed Saeid Masoomkhah, Khosro Rezaee, Mojtaba Ansari, Hossein Eslami
    Purpose

    Artificial intelligence (AI) simulates intelligence by mimicking the structure and operation of the human brain. Machine learning (ML), a branch of AI, aims to create models by analyzing data. Deep learning (DL), another branch of AI, represents geometric transformations of data through multiple layers of model representations. Since DL began setting records in computational analysis, AI has advanced in many areas.

    Materials and Methods

    Despite the widespread use of conventional ML methodologies, there is still a need to promote the use and popularity of DL in pharmaceutical research and development. Drug discovery and design have been enhanced by ML and DL in major research projects. To fully realize its potential, DL-based drug design must overcome many challenges and issues, and various aspects of medication design must be considered to address them successfully. This review article explains DL's significance both in technological breakthroughs and in the development of effective medications.

    Results

    There are numerous barriers and substantial challenges in drug design related to DL architectures and their key application domains. The article discusses several elements of medication development that have been influenced by existing research. Two widely used and efficient neural network (NN) designs are discussed: convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

    Conclusion

    We describe how these tools can be used to design and discover small molecules for drug discovery, provide an overview of the history of DL approaches, and discuss some of their drawbacks.

    Keywords: Machine Learning, Deep Learning, Drug Design, Drug Discovery, Neural Network
  • H. Malmir, S. Hosseinpour*, P. Mirmiran
    Background and Objectives

    The development of artificial intelligence has provided novel opportunities for research in the nutrition sciences. This study was carried out with the aim of comprehensively reviewing and assessing studies in the field of diet and dietary patterns that have used artificial intelligence techniques and machine learning algorithms.

     Materials & Methods

    All studies published until November 2023 were searched in the PubMed, Cochrane, and SCOPUS databases and the Google Scholar search engine using associated keywords. No language restrictions were applied.

    Results

    After a complete review of the articles, 31 relevant articles consistent with the purpose of the present study were selected. Different machine learning methods vary in their accuracy in predicting dietary patterns; for example, artificial neural networks are more accurate in predicting healthy eating index quintiles, whereas decision trees are more accurate for meals. Another use of machine learning is extracting dietary patterns and investigating their relationships with various diseases such as obesity, heart disease, stroke, risk of death from cardiovascular disease, and cancer. Machine learning methods such as decision trees can also provide models for predicting adherence to diets such as the Mediterranean diet.

    Conclusion

    Various artificial intelligence methods can help improve understanding of dietary patterns linked to chronic diseases. The most important algorithms in the study of dietary patterns are decision trees, random forests, K-means, K-nearest neighbors, regression methods, support vector machines, and artificial neural networks. These methods can help clarify dietary patterns associated with chronic diseases by categorizing food groups and foods and uncovering hidden associations between them. Further studies are needed to better understand these connections.

    Keywords: Artificial Intelligence, Machine Learning, Nutrition, Food Patterns
  • Ramin Farrokhi, Samaneh Hosseinzadeh, Abbas Habibelahi, Akbar Biglarian*
    Background and Objectives

    Identifying pregnant women who are at risk of premature birth, and determining the associated risk factors, is essential because it affects neonatal health. This study aimed to use an interpretable machine learning model to predict premature birth.

    Methods

    In this cross-sectional study, data from 149,350 births in Tehran during the Iranian calendar year 1399 (2020-2021) were utilized from the Iranian Maternal and Neonatal Network (IMaN) dataset. Various factors related to the mother and the fetus were considered, including the mother's demographic variables, health status and medical history, pregnancy conditions, delivery, and associated risks. After data preprocessing, multilayer neural network, random forest, and XGBoost models were employed to predict the occurrence of preterm birth. The models were evaluated based on accuracy, sensitivity, specificity, and area under the ROC curve. Python version 3.10.0 was used to analyze the data.

    Results

    About 8.67% of births were premature. The XGBoost algorithm achieved the highest prediction accuracy (0.90). In the interpretation of the model output for one example pregnant woman with given variable values, the most important variable was multiple pregnancy, with an importance score of 46%, followed by delivery risk factors with a score of 41%; other variables, including neurological and psychiatric illness, preeclampsia, and cardiovascular disease, ranked next in importance for this particular individual.

    Conclusion

    Using an interpretable machine learning method could predict the occurrence of premature birth. Based on risk factors, the interpretable machine learning method can provide personalized preventive recommendations for every pregnant woman, aiming to reduce the risk of preterm birth.

    Keywords: Pregnancy, Premature Birth, Machine Learning, Interpretability, Model-Agnostic
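Interpretable, model-agnostic methods like the one described above assign each feature of a single prediction an importance score. A minimal, library-free sketch of one simple model-agnostic idea, leave-one-feature-out attribution against a baseline; the scorer and its weights below are a toy stand-in, not the study's XGBoost model:

```python
def predict(x):
    # Toy stand-in for a trained model: weighted sum of risk-factor values.
    weights = {"multiple_birth": 3.0, "delivery_risk": 2.5, "preeclampsia": 1.0}
    return sum(weights[k] * v for k, v in x.items())

def leave_one_out_importance(x, baseline):
    """Score each feature by how much the prediction drops when that single
    feature is reset to its baseline value (model-agnostic: only predict() is called)."""
    full = predict(x)
    scores = {}
    for k in x:
        perturbed = dict(x, **{k: baseline[k]})
        scores[k] = full - predict(perturbed)
    return scores

# Hypothetical encoded record for one pregnant woman (1 = factor present)
record = {"multiple_birth": 1, "delivery_risk": 1, "preeclampsia": 0}
baseline = {"multiple_birth": 0, "delivery_risk": 0, "preeclampsia": 0}
importance = leave_one_out_importance(record, baseline)
```

Per-instance scores of this kind are what allow the model to rank, say, multiple pregnancy above delivery risk factors for one specific woman.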
  • Mahlagha Afrasiabi*, Ahmad Movahedi
    Introduction

    Alzheimer's disease is an irreversible neurological condition characterized by cognitive, behavioral, and memory impairments. Early prediction before the transition from mild cognitive impairment to Alzheimer's disease is still a challenging issue. This study aimed to identify factors associated with Alzheimer's disease.

    Method

    This study proposes a framework for predicting Alzheimer's disease using data collected from the OASIS project, made available by the Washington University Research Center. In this study, a deep neural network was used for prediction. A particle swarm optimization (PSO) algorithm was employed for selecting appropriate features. The combination of these two methods increases the accuracy of the proposed prediction method.

    Results

    The results indicate that the proposed method achieves higher accuracy with fewer features. Among the 11 features in this dataset, six (age, socioeconomic status, Mini-Mental State Examination score, Clinical Dementia Rating, estimated total intracranial volume, and normalized whole-brain volume) have a significant impact on predicting the disease; of these six, the Clinical Dementia Rating is the most important.

    Conclusion

    This study investigated the influential factors and prediction of Alzheimer's disease. Early diagnosis of Alzheimer's disease allows for the provision of appropriate diagnostic and therapeutic services, as well as an improvement in patients' quality of life. The proposed method in this study is compared with various machine learning algorithms that have shown good accuracy in predicting Alzheimer's disease. The results indicate that the accuracy of the proposed method is higher with fewer features.

    Keywords: Alzheimer's Disease, Deep Neural Network, Machine Learning, Particle Swarm Optimization
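Particle swarm optimization, used above for feature selection, keeps a swarm of candidate solutions that move under the pull of each particle's personal best and the swarm's global best position. A minimal, library-free sketch minimizing a toy two-dimensional function (the study's combination of PSO with a deep network is not reproduced here):

```python
import random

random.seed(1)

def objective(pos):
    # Toy objective with its minimum of 0 at (3, -2).
    x, y = pos
    return (x - 3) ** 2 + (y + 2) ** 2

def pso(n_particles=20, iters=100, w=0.5, c1=1.5, c2=1.5):
    dims = 2
    pos = [[random.uniform(-10, 10) for _ in range(dims)] for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's personal best
    gbest = min(pbest, key=objective)           # swarm's global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

best = pso()
```

For feature selection, the position vector is instead thresholded into a binary feature mask and the objective becomes the network's validation error on the selected features.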
  • A.H. Esmaielpour, M. Ameli*, A. Mozdgir, O. Ahmadi, M. Zarabi
    Background and Objectives

    Umbilical cord blood is a valuable source of stem cells used in transplants to treat various diseases including leukemia, lymphoma and genetic disorders. However, cord blood clotting during the collection process can reduce sample quality and quantity and impact its efficacy in cord blood banking. This article aims to predict pre-collection cord blood clotting in donors using advanced machine learning techniques.

    Materials and Methods

    In this retrospective study, 928,127 samples collected at the Royan cord blood bank between the Iranian years 1384 and 1400 (2005-2021) were examined. Supervised machine learning classification algorithms, including decision tree, naïve Bayes, K-nearest neighbors, support vector machine, random forest, majority voting, and multilayer perceptron, were applied to predict cord blood clotting on the Royan cord blood bank database, and their performance was compared using accuracy, precision, recall, and F1 score.

    Results

    In this study, the measured accuracies were: decision tree 0.80, naïve Bayes 0.63, K-nearest neighbors 0.83, support vector machine 0.65, random forest 0.84, majority voting classifier 0.81, and multilayer perceptron 0.74.

    Conclusion

    In this study, the random forest and K-nearest neighbors algorithms showed the best performance, indicating that machine learning algorithms can predict pre-collection cord blood clotting with high accuracy. This can help avoid collecting clotted samples, thereby reducing costs and storage problems.

    Keywords: Stem Cells, Machine Learning, Umbilical Cord Blood, Bioinformatics
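The majority-voting classifier evaluated above simply takes the most common prediction among several base models for each sample. A minimal, library-free sketch with hypothetical base-model outputs (1 = clotted, 0 = not clotted):

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model prediction lists by taking the most common label per sample."""
    combined = []
    for sample_preds in zip(*predictions_per_model):
        combined.append(Counter(sample_preds).most_common(1)[0][0])
    return combined

# Hypothetical predictions from three base classifiers on four samples
tree_preds = [1, 0, 1, 1]
knn_preds = [1, 0, 0, 1]
forest_preds = [0, 0, 1, 1]
ensemble = majority_vote([tree_preds, knn_preds, forest_preds])
```

With an odd number of base models the vote is never tied, which is one reason voting ensembles are typically built from three or more classifiers.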
  • Monireh Rahimkhani*, Maryam Gilani

    Antibiotic resistance has increased significantly in recent years. At the same time, machine learning (ML) algorithms are increasingly used in medical research and healthcare and are gradually improving clinical performance. Using ML to fight antimicrobial resistance (AMR) is one of the most critical areas of interest among the various applications of these new methods, since the rise of antibiotic resistance and the management of multidrug-resistant infections that are difficult to treat are important challenges. Both supervised and unsupervised machine learning tools have been successfully used to predict antibiotic resistance early and thus support clinicians in selecting the appropriate treatment. Machine learning and artificial intelligence (AI) are now being applied to antimicrobial resistance prediction; accordingly, antimicrobial stewardship programs (ASPs) should be implemented to optimize antibiotic prescribing and limit AMR.

    Keywords: Antimicrobial Resistance, Machine Learning, Artificial Intelligence}
  • Assef Zare, Narges Shafaei Bajestani*, Masoud Khandehroo

    As a branch of artificial intelligence (AI), machine learning has pioneering applications in public health, ranging from disease diagnosis to epidemic prediction. Machine learning (ML) is a strategic lever to improve the access, quality, and efficiency of care services and to create health systems based on learning and value. In what follows, we mention only some of the ways ML assists public health.

    Keywords: Machine Learning, Public Health}
  • Zohre Ebrahimi-Khusfi, Mohsen Ebrahimi-Khusfi, Ali Reza Nafarzadegan*, Mojtaba Soleimani-Sardo
    Introduction

    This study was carried out to determine the weather parameters and air pollutants affecting seasonal changes in particulate matter smaller than 10 microns (PM10) in Yazd city using Random Forest (RF) and extreme gradient boosting (XGBoost) models.

    Materials and Methods

    The required data were obtained for 2018 to 2022. Levene’s test was applied to investigate significant differences in the variance of PM10 values across the four seasons, and the Boruta algorithm was used to select the best predictive variables. The RF and XGBoost models were trained using two-thirds of the input data and tested on the remaining data. Their performance was evaluated based on R2, Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Nash–Sutcliffe Model Efficiency Coefficient (NSE).
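    The three error metrics used here can be stated compactly. A plain-Python sketch, with illustrative PM10 values rather than the Yazd station data:

```python
import math

def rmse(obs, sim):
    """Root mean squared error between observed and simulated values."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 is no better
    than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

# Illustrative PM10 values (ug/m3) -- not the Yazd monitoring data.
obs = [40.0, 55.0, 70.0, 65.0]
sim = [42.0, 50.0, 68.0, 64.0]
```

    Lower RMSE/MAE and NSE closer to 1 indicate the better model, which is how RF and XGBoost are ranked in the abstract.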

    Results

    The RF model showed higher performance in predicting PM10 in all the study seasons (R2 > 0.85; RMSE < 22). The contribution of dust concentration and relative humidity to spring PM10 changes was greater than that of other variables. For summer, wind direction and ozone were identified as the most important variables affecting PM10 concentration. In autumn and winter, air pollutants and dust concentration had the greatest effect on PM10, respectively.

    Conclusion

    The RF model could explain more than 85% of the seasonal variability of PM10 in Yazd city. It is recommended to use this model to predict changes in this air pollutant in other regions with similar climatic and environmental conditions. The results can also be useful for devising suitable solutions to reduce PM10 pollution hazards in Yazd city.

    Keywords: Air Pollution, Particulate Matter, Dust, Machine Learning, Random Forest}
  • Vahid Mansouri, Saber Gholizadeh, Saeed Hosseinpoor*

    Artificial intelligence (AI) and its techniques form a rapidly growing field and are being used in various areas, including healthcare. Medical entomology is an important sector of healthcare: vector-borne diseases impose a great economic and social burden on society's health. Mosquito-borne diseases pose major challenges to human health, affecting more than 600 million people and killing more than 1 million people each year. In the current study, we reviewed more than 30 papers in PubMed and Google Scholar that dealt with the application of artificial intelligence techniques in medical entomology. The articles were classified based on the use of AI and its techniques in this field and show that this new tool can play an important role in predicting the risk of contracting vector-borne diseases and in accurately monitoring insect vector species.

    Keywords: Artificial intelligence, Deep learning, Machine learning, Medical entomology}
  • Mohammad Maskani, Samaneh Abbasi, Hamidreza Etemad-Rezaee, Hamid Abdolahi, Amir Zamanpour, Alireza Montazerabadi *
    Background
    Gliomas are common Central Nervous System (CNS) tumors, and about 80% of them are malignant. Treatment methods for gliomas, such as surgery, radiation therapy, and chemotherapy, depend on the grade, size, and location of the tumor and the patient’s age.
    Objective
    This study aimed to quantify glioma based on the radiomics analysis and classify its grade into High-grade Glioma (HGG) or Low-grade Glioma (LGG) by various machine-learning methods using contrast-enhanced brain Computerized Tomography (CT) scans.
    Material and Methods
    This retrospective study involved acquiring and segmenting data; selecting and extracting features; and classifying, analyzing, and evaluating classifiers. The study included a total of 62 patients (31 with LGG and 31 with HGG). The tumors were segmented by an experienced CT-scan technologist using 3D Slicer software. A total of 14 shape features, 18 histogram-based features, and 75 texture-based features were computed. The Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) were used to evaluate and compare classification models.
    Results
    A total of 13 out of 107 features were selected to differentiate between LGGs and HGGs and to run various classifier algorithms with different cross-validations. The best classifier algorithm was linear discriminant analysis, with 93.5% accuracy, 96.77% sensitivity, 90.3% specificity, and an AUC of 0.98 in differentiating LGGs from HGGs.
    Conclusion
    The proposed method can identify LGG and HGG with 93.5% accuracy, 96.77% sensitivity, 90.3% specificity, and an AUC of 0.98, supporting the selection of the best treatment for glioma patients using CT scans and radiomics analysis.
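    The AUC reported above equals the probability that a randomly chosen HGG case receives a higher classifier score than a randomly chosen LGG case (ties counted as half). A plain-Python sketch with hypothetical scores, not the study's classifier output:

```python
def auc(labels, scores):
    """AUC by pair counting: fraction of positive/negative pairs where
    the positive (label 1) outscores the negative (label 0); ties = 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical HGG (1) / LGG (0) scores for illustration only.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
area = auc(labels, scores)  # 8/9 here: one positive is outscored once
```
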
    Keywords: Radiomics, CT Scan, Glioma, Cancer, Neoplasms, Tumor, Machine Learning}
  • Hamed Zamanian, Ahmad Shalbaf*
    Purpose

    This study aims to diagnose the severity of important pathological indices, i.e., fibrosis, steatosis, lobular inflammation, and ballooning from the pathological images of the liver tissue based on extracted features by radiomics methods.

    Materials and Methods

    This research uses pathological images obtained from liver tissue samples of 258 laboratory mice. After preprocessing the images and data augmentation, a collection of texture features was extracted by gray-level-based algorithms, including Global, Gray-Level Co-occurrence Matrix (GLCM), Gray-Level Run Length Matrix (GLRLM), Gray-Level Size Zone Matrix (GLSZM), and Neighboring Gray Tone Difference Matrix (NGTDM) algorithms. Then, advanced classification methods, namely Support Vector Machine (SVM), Random Forest (RF), Quadratic Discriminant Analysis (QDA), K-Nearest Neighbors (KNN), Logistic Regression (LR), Naïve Bayes (NB), and Multilayer Perceptron (MLP), were employed. This procedure was carried out separately for each of the four indices: fibrosis in 6 grading classes, steatosis in 5 grading classes, inflammation in 4 grading classes, and ballooning in 3 grading classes. To compare these algorithms, the accuracy obtained on the evaluation data is reported for each method.
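    To illustrate the gray-level features mentioned above, a GLCM and two standard Haralick-style statistics (contrast and homogeneity) can be sketched in plain Python on a toy 4-level image; this is a minimal sketch, not the study's full pipeline or a real pathology slide:

```python
def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for pixel offset (dx, dy),
    normalized to joint probabilities."""
    h, w = len(img), len(img[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Weights co-occurrences by squared gray-level difference."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    """Rewards co-occurrences of similar gray levels."""
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

# Toy 4x4 image with 4 gray levels -- not a liver pathology image.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
```

    Each texture algorithm in the abstract (GLRLM, GLSZM, NGTDM) follows the same pattern: build a gray-level statistic matrix, then summarize it into scalar features.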

    Results

    The results showed that, compared to other methods, the Gaussian SVM algorithm, owing to its structural properties, best classified the grade of liver disease across all the indices from the pathological images. Its accuracy was 84.30% for fibrosis, 90.55% for steatosis, 81.11% for inflammation, and 95.98% for ballooning.

    Conclusion

    This fully automatic framework based on advanced radiomics algorithms and machine learning from pathological images can be very useful in clinical procedures and can be considered an assistant to, or substitute for, pathologists’ diagnoses.

    Keywords: Liver Disease, Machine Learning, Radiomics, Gaussian Support Vector Machine, Pathological Images}
  • E. Sahraee, M. Taghizadeh*, B. Gholami, M. Nourian-Zavareh
    Background & aim

    Extracting information from the heart sound signal and detecting abnormal signals at an early stage can play a vital role in reducing the death rate caused by cardiovascular diseases. Therefore, much research has been done on processing these signals. This study aimed to determine the improvement in the diagnosis of heart abnormalities achieved by extracting features from the heart sound signal and applying machine learning classification algorithms.

    Methods

    The present descriptive-analytical study was conducted at Kazerun Azad University in 2023. The research data were selected from the 2016 PhysioNet Challenge database. After pre-processing and noise removal, 6 new features and 35 features used in previous studies (41 features in total) were extracted from the heart sound signals. The 6 new features are: relative average perturbation, five-point period perturbation quotient, local shimmer (in dB), three-point amplitude perturbation quotient, five-point amplitude perturbation quotient, and the correlation of the time center and the frequency center of the signal. The extracted features were applied as input to four classifiers: random forest, support vector machine, K-nearest neighbors, and linear discriminant analysis. The accuracy, sensitivity, and specificity of each classifier were calculated. To investigate the impact of the new features on the diagnosis of cardiac abnormalities, the results were compared with studies that used similar data and classifiers but extracted fewer features. The collected data were analyzed using t-tests and logistic regression.
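    Of the new features, local shimmer (in dB) is the simplest to state: the mean absolute log-ratio of consecutive peak amplitudes. A sketch assuming the standard Praat-style definition (the paper's exact formula is not reproduced here), with illustrative amplitudes:

```python
import math

def shimmer_db(amplitudes):
    """Local shimmer in dB: mean of |20*log10(A[i+1]/A[i])| over
    consecutive peak amplitudes (assumed Praat-style definition)."""
    diffs = [abs(20 * math.log10(amplitudes[i + 1] / amplitudes[i]))
             for i in range(len(amplitudes) - 1)]
    return sum(diffs) / len(diffs)

# Illustrative peak amplitudes from successive heart-sound cycles.
peaks = [1.0, 1.1, 0.9, 1.0]
s = shimmer_db(peaks)  # about 1.162 dB here
```

    The perturbation quotients in the feature list follow the same idea, averaging deviations over three- or five-point windows instead of adjacent pairs.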

    Results

    The highest accuracy and sensitivity were obtained with the linear discriminant analysis classifier, at 91.52 and 96.19, respectively. The highest specificity was obtained with the random forest classifier, at 88.90. According to these results, adding the new features improves all three indices (accuracy, sensitivity, and specificity) for the K-nearest neighbors and linear discriminant analysis classifiers. Extracting these features also increases the specificity of the random forest classifier.

    Conclusion

    The results indicated that extracting the new features increased the accuracy, sensitivity, and specificity of diagnosing cardiac abnormalities compared to the results of previous studies.

    Keywords: Diagnosis of cardiovascular abnormalities, Machine learning, Feature extraction, Classification, Heart sound signal}
  • Farid Samifar, Soheil Samifar, Farzaneh Vafaee, Ali Gorji*
    Introduction

    In recent years, artificial intelligence (AI) techniques have rapidly been adopted in clinical practice, for tasks such as diagnosis and prognosis, treatment effectiveness evaluation, and disease monitoring. Previous studies have shown interesting results regarding the diagnostic efficiency of artificial intelligence methods in differentiating patients with multiple sclerosis (MS) from healthy individuals or those with other demyelinating diseases. A comprehensive systematic review of the role of AI in MS diagnosis is lacking. Our aim was to conduct a systematic review to document the performance of artificial intelligence in MS diagnosis. In this study, we conducted a comprehensive and systematic search using the PubMed database. All original studies that applied deep learning or other artificial intelligence methods to the diagnosis of MS using MRI images were included.

    Materials and Methods

    For this review, we searched PubMed for studies on the application of artificial intelligence to MS using MRI images, published in English during the period 2010-2023. The search strategy was based on MeSH terms and their combinations. All studies were reviewed, but only the most relevant ones were used in this review.

    Results

    Artificial intelligence, using deep learning methods, can predict the incidence of MS and its complications based on the risk factors of the disease, and it reduces the cost and time spent on various medical tests. Artificial intelligence makes this possible by extracting information and performing the necessary processing using methods such as CNNs.

    Conclusion

    MS diagnosis based on new markers and artificial intelligence is a growing field of research using MRI images. All these results show that, with advances in artificial intelligence, the way MS patients are monitored and diagnosed can change. However, several challenges remain, including a better understanding of the information selected by AI algorithms, appropriate multicenter and longitudinal validation of results, and practical aspects of hardware and software integration. In general, the critical importance of human supervision for optimizing and fully utilizing the potential of artificial intelligence approaches cannot be ignored.

    Keywords: Multiple Sclerosis, Artificial Intelligence, Deep Learning, Machine Learning}