Table of Contents


International Journal of Web Research
Volume: 3, Issue: 2, Autumn-Winter 2020

  • Publication date: 1400/03/13
  • Number of titles: 6
  • Shireen Atarod, Alireza Yari * Pages 1-8
    The volume of Farsi information on the Internet has been increasing in recent years. However, most of this information is in the form of unstructured or semi-structured free text. For quick and accurate access to the vast knowledge contained in these texts, information extraction methods are essential for generating knowledge bases. In recent years, relation extraction, as a sub-task of information extraction, has received much attention. While many such systems have been developed for English and other widely used languages, information extraction systems for Farsi have received less attention from researchers. In this research, a system for semi-automatic relation extraction is presented that uses Persian Wikipedia articles as reliable, semi-structured sources. In this system, relation extraction is performed with the assistance of patterns that are obtained automatically through a distant-supervision approach. To apply distant supervision, the vast Wikidata knowledge base, which is closely synchronized with Wikipedia, has been used as a source. The results show that the average precision over all relations is 76.81%, which indicates an improvement in precision compared to other methods for Farsi.
    Keywords: Relation Extraction, Information Extraction, Distant Supervision, Persian Wikipedia
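    The distant-supervision idea described in the abstract above can be illustrated with a small sketch: sentences that mention both entities of a known knowledge-base triple are treated as noisy positive examples, and the text between the two entity mentions is kept as a candidate extraction pattern. The toy triples, sentences, and relation names below are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Minimal sketch of distant supervision for pattern harvesting.
# All triples, sentences, and relation labels are illustrative only.
from collections import defaultdict
import re

# Known facts from a knowledge base such as Wikidata: (subject, relation, object)
wikidata_triples = [
    ("Tehran", "capital_of", "Iran"),
    ("Isfahan", "located_in", "Iran"),
]

# Plain-text sentences, e.g. drawn from Wikipedia articles (illustrative).
sentences = [
    "Tehran is the capital of Iran.",
    "Isfahan is a large city located in central Iran.",
]

def candidate_patterns(triples, sentences):
    """Collect the text between subject and object as a candidate pattern per relation."""
    patterns = defaultdict(set)
    for subj, rel, obj in triples:
        for sent in sentences:
            if subj in sent and obj in sent:
                # Keep whatever lies between the two entity mentions.
                m = re.search(re.escape(subj) + r"(.*?)" + re.escape(obj), sent)
                if m:
                    patterns[rel].add(m.group(1).strip())
    return patterns

if __name__ == "__main__":
    for rel, pats in candidate_patterns(wikidata_triples, sentences).items():
        print(rel, "->", pats)
```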
  • Meysam Alavi *, Sayed Ali Lajevardy Pages 9-15
    With the growth of academic exchange and the social interaction of researchers, collaboration in writing scientific articles is expanding. Scientific collaboration gives researchers the opportunity to combine the capabilities of different scientific and research disciplines in ways that cannot be achieved individually. Co-authorship is the most formal manifestation of intellectual collaboration between authors in the production of scientific research. At the same time, studying the trend of scientific activities and their dynamics in any specialized field is one of the main concerns of researchers in that field. In recent years, social network analysis has been proposed as a suitable approach for mapping the scientific structure of specialized fields and the co-authorship networks of researchers. In this research, the papers published in six Web Research conferences have been analyzed to uncover the scientific and co-authorship networks using the social network analysis approach. The results show that, over this period, concepts such as social network analysis, the Internet of Things, cloud computing, and deep learning have the largest share in the articles. Moreover, based on the number of communities formed, the authors of the conference papers tended to form small scientific groups within their own universities or research institutes.
    Keywords: Co-authorship Network, Scientific Map, Conference on Web Research, Social Network Analysis
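    As a rough illustration of the kind of co-authorship analysis described above, the following sketch builds a weighted co-authorship graph with networkx and runs a standard community-detection routine. The paper and author data are made up, and greedy modularity maximization is only one possible community algorithm, not necessarily the one used in the paper.

```python
# Minimal sketch of a co-authorship network analysis with networkx.
# The paper/author data below is illustrative, not the conference data from the article.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [
    ["A. Author", "B. Author", "C. Author"],
    ["A. Author", "D. Author"],
    ["E. Author", "F. Author"],
]

G = nx.Graph()
for authors in papers:
    # Every pair of co-authors on a paper gets (or strengthens) an edge.
    for u, v in combinations(authors, 2):
        w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=w)

# Basic centrality and community structure, as in typical SNA studies.
degree = nx.degree_centrality(G)
communities = list(greedy_modularity_communities(G))

print("Most central authors:", sorted(degree, key=degree.get, reverse=True)[:3])
print("Number of communities:", len(communities))
```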
  • Fatemeh Mohammadi Ashnani, Zahra Movahedi, Kazim Fouladi * Pages 16-25

    With the emergence of the World Wide Web, Electronic Commerce (E-commerce) has grown rapidly over the past two decades. Intelligent agents play the main role in automating negotiation between different entities. Automated negotiation allows opposing agents to resolve their mutual concerns and reach an agreement without the risk of losing individual profits. However, because information about the opponent's strategies is unknown, automated negotiation is difficult. The main challenge is how to extract the most useful information about the opponent's strategy during the negotiation process in order to propose the best counter-offer. In this paper, we design a buyer agent that can automatically negotiate with its opponent using artificial intelligence techniques and machine learning methods. The proposed buyer agent learns the opponent's strategies during the negotiation process using four methods: Bayesian learning, kernel density estimation, multilayer perceptron neural networks, and nonlinear regression. Experimental results show that the use of machine learning methods increases negotiation efficiency, which is measured by parameters such as the rate of agreement (RA), average buyer utility (ABU), average seller utility (ASU), and average rounds (AR). The rate of agreement and the average buyer utility increased from 58% to 74% and from 90% to 94%, respectively, and the average rounds decreased from 10% to 0.04%.

    Keywords: Multiagent System, Automatic Negotiation, Machine Learning, Opponent Strategy Learning, Opponent Modeling, E-commerce, Bayesian Learning, Kernel Density Estimation, Artificial Neural Network
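    One of the four opponent-modelling methods named in the abstract above, kernel density estimation, can be sketched as follows: the buyer estimates a density over the seller's observed offers and uses its mode as a rough anchor for the next counter-offer. The offer values and the simple concession rule are illustrative assumptions, not the agent described in the paper.

```python
# Minimal sketch of opponent modelling with kernel density estimation.
# Offer values and the decision rule are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

# Prices offered by the seller in previous negotiation rounds (illustrative).
opponent_offers = np.array([100.0, 97.0, 95.0, 92.0, 91.0, 90.0])

kde = gaussian_kde(opponent_offers)

# Evaluate the estimated density over a price grid and take its mode as a
# rough anchor for where the opponent is likely to concede.
grid = np.linspace(opponent_offers.min() - 10, opponent_offers.max() + 10, 200)
density = kde(grid)
likely_price = grid[np.argmax(density)]

counter_offer = 0.9 * likely_price  # simple concession heuristic (assumption)
print(f"Estimated opponent anchor: {likely_price:.2f}, counter-offer: {counter_offer:.2f}")
```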
  • Ghazale Etemadikhou, Fatemeh Azimzadeh *, Abdolsamad Karamatfar Pages 26-30
    The diversity and high volume of information available on the web make data retrieval a serious challenge in this environment. Moreover, satisfying users is difficult and is one of the main challenges of information retrieval systems. Depending on their interests and needs, different people expect different responses from Information Retrieval (IR) systems for the same keyword. Achieving this requires an effective retrieval method; Personalized Information Retrieval (PIR) is such a method and has attracted the attention of researchers. Folksonomy is the process that allows users to tag resources within a specific information domain in a social environment, where the tags are accessible to other users. Folksonomy systems are built on collaborative tagging. Due to the large volume and variety of the tags produced, resolving ambiguity is a serious challenge in these systems. In recent years, word embedding methods have been regarded by researchers as a successful way to resolve ambiguity in texts. This study proposes a model that uses word embedding methods to resolve tag ambiguity and then combines this disambiguation with sentiment analysis to provide search results personalized to users' interests. In this research, different word embedding models were applied. The experimental results show that, after applying the disambiguation, the mean accuracy improved by 1.93% and the Mean Reciprocal Rank (MRR) by 0.38%.
    Keywords: Personalized Information Retrieval, Folksonomy, Fixing Ambiguity, Word Embedding, Sentiment Analysis
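    A minimal sketch of embedding-based tag disambiguation, under toy assumptions: an ambiguous tag is resolved to the sense whose vector is closest to the centroid of the co-occurring tags. The sense inventory and vectors below are made up; the paper evaluates several real word-embedding models.

```python
# Minimal sketch of disambiguating an ambiguous tag with word embeddings.
# Toy vectors and senses are illustrative; real vectors come from a trained model.
import numpy as np

embeddings = {
    "apple_fruit":   np.array([0.9, 0.1, 0.0]),
    "apple_company": np.array([0.1, 0.9, 0.2]),
    "recipe":        np.array([0.8, 0.2, 0.1]),
    "baking":        np.array([0.7, 0.1, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(senses, context_tags):
    """Pick the sense closest to the centroid of the context-tag vectors."""
    centroid = np.mean([embeddings[t] for t in context_tags], axis=0)
    return max(senses, key=lambda s: cosine(embeddings[s], centroid))

print(disambiguate(["apple_fruit", "apple_company"], ["recipe", "baking"]))
# -> "apple_fruit": the co-occurring tags point to the culinary sense.
```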
  • Yasaman Mashhadi Hashem Marandi, Hedieh Sajedi * Pages 31-44
    Building and maintaining brand loyalty is a vital issue for market research departments. Various means, including online advertising, help promote brand loyalty among users. The present paper studies intelligent web advertisements with an eye-tracking technique that records users' eye movements, gaze points, and heat maps. The paper examines, via eye-tracking, different features of an online ad and their combinations, such as underlining words and personalization. These characteristics include underlining, changing color, the number of words, personalization, inserting a related photograph, and changing the size and location of the advertisement on a website. They help advertisers manage ads more effectively by increasing users' attention. Moreover, the current research investigates the impact of gender on users' visual behavior toward advertising features at different Cognitive Demand (CD) levels of tasks, while using eye-tracking techniques to avoid interrupting users' cognitive processes. It also provides users with the most relevant advertisement compatible with the CD level of a task using a Support Vector Machine (SVM) algorithm with high accuracy. This paper consists of two experiments, one of which has two phases. In the first experiment, a news website with an accompanying advertisement is shown to the users; in the second, an advertising website is shown. The results of the first experiment revealed that personalizing and underlining the words of the ad grab more attention from users in a low-CD task. Furthermore, darkening the background increases the frequency of users' attention in a high-CD task. Analyzing the impact of gender on users' visual behavior shows that males are attracted to an advertisement with red-colored words sooner than females during the high-CD task, while females pay longer and more frequent attention to ads with red-colored words and larger sizes in the low-CD task. The second experiment shows that the gazing start point of users whose mother tongue is written right to left is mainly in the middle of the advertising website.
    Keywords: Web Advertisement, Eye-Tracking, Intelligent Advertisement, Online Advertisement, Human-Computer Interaction, Machine Learning, Website Design, User Experience
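    The SVM step mentioned in the abstract above can be sketched as a small classifier that maps gaze features to the cognitive-demand level of the current task and then picks a matching ad variant. The features, toy data, and the ad mapping (personalized/underlined ads for low CD, darkened background for high CD, following the findings reported in the abstract) are illustrative assumptions about how such a component might look.

```python
# Minimal sketch: SVM predicting the CD level of a task from gaze features,
# then selecting an ad variant. Features and toy data are illustrative.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy training data: [fixation_count, mean_fixation_duration_ms]
X = [[12, 180], [15, 200], [30, 320], [28, 300], [10, 170], [32, 340]]
y = ["low", "low", "high", "high", "low", "high"]  # CD level of the task

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)

# Pick an ad variant suited to the predicted CD level (illustrative rule).
ad_for_level = {"low": "personalized, underlined words",
                "high": "darkened background"}
level = model.predict([[27, 290]])[0]
print("Predicted CD level:", level, "-> ad variant:", ad_for_level[level])
```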
  • Aliakbar Tajari Siahmarzkooh * Pages 45-50

    Today, Intrusion Detection Systems (IDS) are considered key components of network security. However, high false-positive and false-negative rates are important problems of these systems. On the other hand, many of the existing solutions in the literature are restricted to particular classes of datasets because they rely on a specific technique, whereas real applications may involve multi-variant datasets. Motivated by these facts, this paper presents a new anomaly-based intrusion detection system that uses the J48 decision tree, a Support Vector Classifier (SVC), and a k-means clustering algorithm in order to reduce false alarm rates and enhance system performance. The J48 decision tree algorithm is used to select the best features and optimize the dataset. An SVM classifier and a modified k-means clustering algorithm are then used to build profiles of the normal and anomalous behaviors in the dataset. Simulation results on the benchmark NSL-KDD and CICIDS2017 datasets and on a synthetic dataset confirm that the proposed method performs significantly better than previous approaches.

    Keywords: Intrusion Detection, K-Means Clustering, Decision Tree, Support Vector Classifier, NSL-KDD Dataset
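    The pipeline in the abstract above can be sketched in three steps: a decision tree (standing in for J48, Weka's C4.5 implementation) ranks features, an SVC is trained on the selected features, and k-means groups records into clusters that can serve as behavior profiles. The random data below is a placeholder for NSL-KDD or CICIDS2017 records, and the unmodified scikit-learn k-means stands in for the paper's modified variant.

```python
# Minimal sketch of the three-stage IDS pipeline: tree-based feature selection,
# SVC classification, and k-means profiling. Data below is a random placeholder.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # placeholder traffic features
y = rng.integers(0, 2, size=200)        # 0 = normal, 1 = attack (placeholder)

# 1) Feature selection with a decision tree (J48 stand-in).
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(tree.feature_importances_)[-5:]   # keep the 5 highest-ranked features
X_sel = X[:, top]

# 2) Supervised classification of normal vs. anomalous records.
clf = SVC(kernel="rbf").fit(X_sel, y)

# 3) Unsupervised profiling with k-means (the paper uses a modified variant).
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_sel)

print("Selected feature indices:", top)
print("Training accuracy (toy data):", clf.score(X_sel, y))
print("Cluster sizes:", np.bincount(profiles.labels_))
```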