Accountability, Evaluation, and Obscurity of AI Algorithms

Recent Submissions

  • Item
    The Effect of Training Set Timeframe on the Future Performance of Machine Learning-based Malware Detection Models
    (2021-01-05) Galen, Colin; Steele, Robert
    The occurrence of previously unseen malicious code, or malware, is an implicit and ongoing issue for all software-based systems. It has been recognized that machine learning, applied to features statically extracted from binary executable files, offers a number of promising benefits, such as the ability to detect malware that has not been previously encountered. Nevertheless, it is understood that these models will not continue to perform equally well over time as new and potentially less recognizable malware emerges. In this study, we apply a range of machine learning models to features extracted from a large collection of software executables in Portable Executable format, comprising both malware and benign examples and ordered by the date each binary was first encountered, while considering different training set configurations and timeframes. We analyze and quantify the relative performance deterioration of these machine learning models on future test sets of these features, and discuss insights into the characteristics and rate of this deterioration and into training set selection.
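    As a hedged illustration of the evaluation style this abstract describes (a sketch, not the authors' actual pipeline), the snippet below trains a detector on an early timeframe and scores each later month separately; the DataFrame `df`, its `first_seen` timestamp, `label` column, and feature columns are all assumptions introduced here.

```python
# Minimal sketch: measure how a detector trained on an early timeframe degrades on
# later test periods. Assumes a hypothetical DataFrame `df` with static PE features,
# a binary `label`, and a `first_seen` datetime column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def temporal_decay(df: pd.DataFrame, feature_cols: list[str],
                   train_months: int = 6) -> pd.Series:
    df = df.sort_values("first_seen")
    cutoff = df["first_seen"].min() + pd.DateOffset(months=train_months)
    train = df[df["first_seen"] < cutoff]

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(train[feature_cols], train["label"])

    # Score each subsequent month separately to expose the deterioration curve
    # (assumes every month contains both malware and benign examples).
    test = df[df["first_seen"] >= cutoff]
    scores = {}
    for period, chunk in test.groupby(test["first_seen"].dt.to_period("M")):
        preds = model.predict_proba(chunk[feature_cols])[:, 1]
        scores[str(period)] = roc_auc_score(chunk["label"], preds)
    return pd.Series(scores, name="auc_by_month")
```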
  • Item
    How Useful are Hand-crafted Data? Making Cases for Anomaly Detection Methods
    (2021-01-05) Du, Len; Hutter, Marcus
    While the importance of small data has been acknowledged in principle, small data have not been widely adopted as a necessity in current machine learning or data mining research. Predominantly, machine learning methods are evaluated under a “bigger is better” presumption: the more (and the more complex) data we can pour into a method, the better we think we are at estimating its performance. We deem this mindset detrimental to interpretability, explainability, and the sustained development of the field. For example, although new outlier detection methods are often inspired by small, low-dimensional samples, their performance is evaluated almost exclusively on large, high-dimensional datasets resembling real-world use cases. With such “big data” we miss the chance to gain insights from close examination of how exactly the algorithms perform, as we mere humans cannot really comprehend the samples. In this work, we explore the exact opposite direction. We run several classical anomaly detection methods against small, mindfully crafted cases on which the results can be examined in detail. In addition to a better understanding of these classical algorithms, our exploration has, to our surprise, led to the discovery of some novel uses of classical anomaly detection methods.
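    To illustrate the kind of small, hand-crafted case this abstract argues for (an assumption-laden sketch, not the paper's own examples), the snippet below runs two classical detectors on ten two-dimensional points so that every individual score can be read directly.

```python
# Minimal sketch: classical anomaly detectors on a tiny hand-crafted 2-D sample.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Nine points on a tight grid plus one obvious outlier.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5],
              [0, 0.5], [1, 0.5], [0.5, 0], [0.5, 1],
              [5, 5]])

iso = IsolationForest(random_state=0).fit(X)
lof = LocalOutlierFactor(n_neighbors=3)
lof.fit(X)

# Lower scores mean "more anomalous" for both methods.
for point, iso_score, lof_score in zip(X, iso.score_samples(X),
                                       lof.negative_outlier_factor_):
    print(point, round(iso_score, 3), round(lof_score, 3))
```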
  • Item
    Bias in Geographic Information Systems: The Case of Google Maps
    (2021-01-05) Wagner, Ben; Human, Soheil; Winkler, Till
    Users' perception of geographic space depends heavily on geographic information systems (GIS). GIS are the most common way for users to estimate travel time, obtain routing information, and receive recommendations for appropriate forms of transportation. This article analyses how predictions made by Google Maps, one of the most popular GIS, influence users' perceptions and travel choices. To analyze this influence, a pre-study in a classroom setting (n=36) as well as an online survey (n=521) were conducted. We study users' intuitive perception of travel time before using the Google Maps Mobile App as a 'treatment' to see how it influences their perceptions of travel time and choice of transportation type. We then contrast this original Google Maps treatment with a mock-up 'warning label' version of Google Maps, which informs users about biases in Google Maps, and with an 'unbiased' version of Google Maps based on ground-truth data. Our analysis suggests that Google Maps systematically underestimates necessary car driving time, which has an impact on users' choice of transportation.
  • Item
    An Adversarial Training Based Machine Learning Approach to Malware Classification under Adversarial Conditions
    (2021-01-05) Devine, Sean; Bastian, Nathaniel
    The use of machine learning (ML) has become an established practice in the realm of malware classification and other areas within cybersecurity. Characteristic of this contemporary realm of intelligent malware classification is the threat of adversarial ML, in which adversaries target the underlying data and/or models responsible for malware classification in order to map their behavior or corrupt their functionality. The aims of such adversaries are to bypass cybersecurity measures and to increase malware effectiveness. We develop an adversarial-training-based ML approach to malware classification under adversarial conditions that leverages a stacking ensemble method, comparing the performance of 10 base ML models when adversarially trained on three data sets with varying data perturbation schemes. This comparison reveals the best-performing models per data set, which include random forest, bagging, and gradient boosting. Experimentation also includes stacking a mixture of ML models at both the first and second levels of the stack. A first-level stack across all 10 ML models with a second-level support vector machine is the top performer. Overall, this work shows that a malware classifier can be developed to account for potential forms of training data perturbation with minimal effect on performance.
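    A minimal sketch of the kind of stacking setup this abstract describes, not the authors' exact configuration: a few first-level learners feed a second-level support vector machine, and the training arrays `X_train` and `y_train` are assumed placeholders that could include adversarially perturbed examples.

```python
# Minimal sketch: stacking ensemble with a second-level SVM (illustrative only).
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.svm import SVC

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("bag", BaggingClassifier(random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]

# The second-level SVM learns from the base models' cross-validated predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=SVC(kernel="rbf"),
                           cv=5)
# Usage (with placeholder data): stack.fit(X_train, y_train); stack.score(X_test, y_test)
```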
  • Item