Soft Computing: Theory Innovations and Problem Solving Benefits


Recent Submissions

  • Item
    Rough Sets: A Bibliometric Analysis from 2014 to 2018
    (2020-01-07) Heradio, Ruben; Fernandez-Amoros, David; Moral-Muñoz, Jose A.; Cobo, Manuel J.
    Over almost forty years, considerable research has been undertaken on rough set theory as a tool for dealing with vague information. Rough sets have proven extremely helpful for a diversity of computer-science problems (e.g., knowledge discovery, computational logic, and machine learning) and numerous application domains (e.g., business economics, telecommunications, and neurosciences). Accordingly, the literature on rough sets has grown continuously and is now immense. This paper provides a comprehensive overview of the research published over the last five years. To do so, it analyzes 4,038 records retrieved from the Clarivate Web of Science database, identifying (i) the most prolific authors and their collaboration networks, (ii) the countries and organizations leading research on rough sets, (iii) the journals publishing the most papers, (iv) the most researched topics, and (v) the principal application domains.
  • Item
    Determining Project Contingency Reserve Using a Fuzzy Arithmetic-Based Risk Analysis Method
    (2020-01-07) Fateminia, Seyed Hamed; Siraj, Nasir Bedewi; Fayek, Aminah Robinson; Johnston, Andrew
    Traditional techniques for estimating contingency reserve rely on historical data and fail to capture subjective uncertainty and expert knowledge. This paper proposes a fuzzy risk analysis model (FRAM) that uses fuzzy arithmetic to analyze risk and opportunity events and determine construction project contingency reserve. The FRAM allows experts to use natural language to assess the probability and impact of risk and opportunity events by employing linguistic scales represented by fuzzy numbers, thus addressing the data reliance problem of probabilistic methods. It enables experts to customize linguistic scales and fuzzy numbers for different project types and stages. The FRAM also deals with the challenges associated with deterministic approaches by addressing measurement imprecision and the subjective uncertainty of experts’ opinions. Moreover, the FRAM allows analysts to estimate contingency at different levels of confidence. This paper also illustrates Fuzzy Risk Analyzer© (FRA©), a software tool that implements the fuzzy arithmetic procedure of the FRAM.
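The abstract above mentions linguistic scales represented by fuzzy numbers and contingency estimates at different confidence levels. A minimal sketch of that idea, assuming triangular fuzzy numbers and an invented two-event example (the FRAM procedure and FRA© themselves are not reproduced here):

```python
# Triangular fuzzy numbers as (low, mode, high) tuples; two hypothetical
# risk events are summed, and an alpha-cut gives the contingency interval
# at a chosen confidence (membership) level.

def add_tfn(a, b):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def alpha_cut(tfn, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    low, mode, high = tfn
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# Hypothetical linguistic assessments (expected cost impact, in $k):
risk_a = (10.0, 20.0, 35.0)   # e.g. "moderate" impact
risk_b = (5.0, 15.0, 30.0)    # e.g. "moderate-to-high" impact

total = add_tfn(risk_a, risk_b)     # fuzzy total impact
interval = alpha_cut(total, 0.8)    # narrower interval at high confidence
print(total, interval)
```

A higher alpha tightens the interval around the most plausible value, which is how a confidence level maps onto a contingency range in this sketch.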
  • Item
    Applying Feature Selection to Improve Predictive Performance and Explainability in Lung Cancer Detection with Soft Computing
    (2020-01-07) Potie, Nicolas; Giannoukakos, Stavros; Hackenberg, Michael; Fernandez, Alberto
    The field of biomedicine is focused on the detection and subsequent treatment of various complex diseases. Among these, cancer stands out as one of the most studied, due to the high mortality it entails. The appearance of cancer depends directly on the correct functionality and balance of the genome. Therefore, it is essential to determine which of the approximately 25,000 human genes are linked to this undesirable condition. In this work, we focus on a case study of a population affected by lung cancer. Patient information has been obtained using liquid biopsy technology, i.e., capturing cell information from the bloodstream and applying an RNA-seq procedure to obtain the representation frequency of each gene. The ultimate goal of this study is to find a good trade-off between predictive capacity and interpretability for the discernment of this type of cancer. To this end, we apply a large number of feature selection techniques, using different thresholds for the number of selected discriminant genes. Our experimental results, using Soft Computing techniques, show that model-based feature selection via Random Forest is essential both for improving the predictive capacity of the models and for their explainability over a small subset of genes.
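The paper above ranks genes and keeps only the top discriminant ones. A toy sketch of that threshold-based selection, using a simple class-mean-gap score as a self-contained stand-in for the Random Forest importances the paper actually uses; the expression matrix below is invented:

```python
# Rank features (genes) by how far apart their class means lie and keep
# the top-k, a crude stand-in for model-based importance ranking.

def rank_features(X, y, k):
    """Return indices of the k features with the largest class-mean gap."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        pos = [row[j] for row, label in zip(X, y) if label == 1]
        neg = [row[j] for row, label in zip(X, y) if label == 0]
        gap = abs(sum(pos) / len(pos) - sum(neg) / len(neg))
        scores.append((gap, j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Hypothetical expression matrix: rows = patients, columns = genes.
X = [[5.0, 1.0, 0.2],
     [4.8, 1.1, 0.9],
     [1.0, 1.0, 0.3],
     [1.2, 0.9, 0.8]]
y = [1, 1, 0, 0]   # 1 = tumour sample, 0 = control

print(rank_features(X, y, 1))   # gene 0 separates the classes best
```

Varying `k` here corresponds to the different selection thresholds the study compares: smaller subsets are easier to interpret, larger ones may predict better.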
  • Item
    An Approach Toward a Feedback Mechanism for Consensus Reaching Processes Using Gamification to Increase the Experts' Experience
    (2020-01-07) Pérez, Ignacio; Garcia-Sanchez, Pablo; Cabrerizo, Francisco; Herrera-Viedma, Enrique
    Sometimes, the consensus reaching process in group decision making problems is a challenging task for the people in charge of the final choice (usually called experts). First, consensus is defined as a convergent, iterative process, which means the experts' attention must be kept throughout, even when the process runs longer than expected. Second, some experts tend to be rigid and do not easily change their minds to help the negotiation along. We therefore propose a new feedback mechanism that uses gamification rules, designed as a reward distribution system, to turn that task into a game. This change can improve the consensus reaching process in both respects: keeping the experts' attention on the process and motivating the experts who should adjust their preferences.
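The mechanism above rewards experts for moving toward the group opinion. A minimal sketch of one gamified feedback round, under invented scoring rules (opinions in [0, 1], consensus as one minus the mean deviation from the collective opinion, fixed reward points); the paper's actual rules are richer:

```python
# One round of a gamified consensus process: measure agreement, then
# award points to experts whose revised opinion moved toward the group.

def consensus_level(opinions):
    """Agreement in [0, 1]: 1 minus mean deviation from the collective."""
    collective = sum(opinions) / len(opinions)
    mean_dev = sum(abs(o - collective) for o in opinions) / len(opinions)
    return 1.0 - mean_dev

def reward(old, new, collective, points=10):
    """Grant points when an expert moved closer to the collective opinion."""
    return points if abs(new - collective) < abs(old - collective) else 0

opinions = [0.9, 0.8, 0.2]                # the third expert is an outlier
collective = sum(opinions) / len(opinions)
print(round(consensus_level(opinions), 3))
print(reward(0.2, 0.5, collective))       # outlier adjusts and earns points
```

The reward turns the advice "adjust your preference" into a game move with an immediate payoff, which is the motivational effect the paper targets.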
  • Item
    Adaptive and Concurrent Garbage Collection for Virtual Machines
    (2020-01-07) Haque, Md. Enamul; Zobaed, Sm; Hussain, Razin Farhan; Islam, Aminul
    An important issue for concurrent garbage collection in virtual machines (VMs) is identifying which garbage collector (GC) to use during the collection process. For instance, Java program execution times differ greatly depending on the GC employed. Previously, identifying the optimal GC algorithm for a specific program required exhaustively profiling the execution times of all available GC algorithms. In this paper, we present an adaptive and concurrent garbage collection (ACGC) technique that can predict the optimal GC algorithm for a program without trying all GC algorithms. We implement this technique in the Java virtual machine and test it using standard benchmark suites. ACGC learns usage patterns from the features of training programs and generates a model for future programs. Feature generation and feature selection are two important steps of our technique, creating the attributes used in the learning step. Our experimental evaluation shows an improvement in selecting the best GC. Additionally, our approach helps find better heap size settings for improved program execution.
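The prediction step described above can be sketched as follows, with nearest-neighbour lookup standing in for ACGC's learned model; the feature names and GC labels below are invented for illustration, not taken from the paper:

```python
# Given training programs already profiled as (feature vector -> best GC),
# predict a new program's GC as that of its nearest neighbour in feature
# space, avoiding an exhaustive profiling run for the new program.

def predict_gc(training, features):
    """Return the GC label of the closest training program."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training, key=lambda item: sq_dist(item[0], features))
    return label

# Hypothetical profiles: (allocation rate, live-heap ratio, thread count).
training = [
    ((0.9, 0.2, 8), "Parallel"),
    ((0.3, 0.7, 2), "CMS"),
    ((0.5, 0.5, 16), "G1"),
]

print(predict_gc(training, (0.85, 0.25, 6)))   # closest to the first profile
```

The point of the technique is exactly this shortcut: the expensive exhaustive profiling happens once, on the training programs, and new programs only pay for feature extraction plus a model lookup.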
  • Item
    Similarity-based and Iterative Label Noise Filters for Monotonic Classification
    (2020-01-07) Cano, José Ramón; Luengo, Julian; García, Salvador
    Monotonic ordinal classification has received increasing interest in recent years. Building monotone models for these problems usually requires datasets that satisfy monotonic relationships among the samples. When the monotonic relationships are not met, changing the labels may be a viable option, but the risk is high: wrong label changes can completely alter the information contained in the data. In this work, we tackle the construction of monotone datasets by removing the wrong or noisy examples that violate monotonicity restrictions. We propose two monotonic noise filtering algorithms that preprocess ordinal datasets and improve the monotonic relations between instances. The experiments are carried out on eleven ordinal datasets, showing that applying the proposed filters improves prediction capabilities across different levels of noise.
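A monotonicity restriction of the kind the filters above enforce can be checked directly: if one instance dominates another feature-wise, its label must not be smaller. A minimal sketch of violation-based filtering on an invented dataset (the paper's two algorithms are more elaborate):

```python
# Count, per instance, how many pairs it breaks the monotonicity
# constraint with; the most-violating instance is the removal candidate.

def dominates(a, b):
    """True if a >= b in every feature."""
    return all(x >= y for x, y in zip(a, b))

def violation_counts(X, y):
    counts = [0] * len(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and dominates(X[i], X[j]) and y[i] < y[j]:
                counts[i] += 1
                counts[j] += 1
    return counts

# Hypothetical ordinal dataset; the second instance's label is too high.
X = [(1, 1), (2, 2), (3, 3), (4, 4)]
y = [1, 4, 2, 3]   # both (3,3) and (4,4) dominate (2,2) with lower labels

counts = violation_counts(X, y)
noisy = counts.index(max(counts))   # instance 1 is involved in most violations
print(noisy, counts)
```

Removing (or relabelling) the most-violating instance and recounting is the iterative flavour of filtering; removing in one pass by a count threshold is the similarity-based flavour, loosely mirroring the two families named in the title.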
  • Item
    Introduction to the Minitrack on Soft Computing: Theory Innovations and Problem Solving Benefits
    (2020-01-07) Herrera-Viedma, Enrique; Pérez, Ignacio; Cabrerizo, Francisco