Cyber Operations, Defense, and Forensics

Recent Submissions

  • Item
    Towards Hardware-Based Application Fingerprinting with Microarchitectural Signals for Zero Trust Environments
    (2023-01-03) Langehaug, Tor; Graham, Scott
    The interactions between software and hardware are increasingly important to computer system security. This research collects sequences of microprocessor control signals to develop machine learning models that identify software tasks. The proposed approach treats software task identification in hardware as a general problem, with attacks handled as a subset of software tasks. Two lines of effort are presented. First, a data collection approach is described that extracts sequences of control signals, labeled by task identity, during real (i.e., non-simulated) system operation. Second, experimental design is used to select hardware and software configurations for training and evaluating machine learning models. The machine learning models significantly outperform a naive classifier based on Euclidean distances from class means. Various configurations produce balanced accuracy scores between 26.08% and 96.89%.
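    The naive baseline mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes feature vectors have already been extracted from the control-signal sequences, and classifies each vector by Euclidean distance to per-task class means.

    ```python
    # Hedged sketch of a naive nearest-class-mean classifier (feature
    # extraction from raw microprocessor control signals is assumed).

    def class_means(samples):
        """samples: dict mapping task label -> list of feature vectors."""
        means = {}
        for label, vecs in samples.items():
            n = len(vecs)
            means[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
        return means

    def nearest_mean(x, means):
        """Return the task label whose class mean is closest to x."""
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(means, key=lambda label: dist2(x, means[label]))
    ```

    A trained model in the paper replaces this baseline; the sketch only shows the comparison point the reported accuracies are measured against.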
  • Item
    Introduction to the Minitrack on Cyber Operations, Defense, and Forensics
    (2023-01-03) Glisson, William; Grispos, George; McDonald, Jeffrey
  • Item
    Safe Reinforcement Learning via Observation Shielding
    (2023-01-03) McCalmon, Joe; Liu, Tongtong; Goldsmith, Reid; Cyhaniuk, Andrew; Halabi, Talal; Alqahtani, Sarra
    Reinforcement Learning (RL) algorithms have shown success in scaling up to large problems. However, deploying those algorithms in real-world applications remains challenging due to their vulnerability to adversarial perturbations. Existing RL robustness methods remain weak against large perturbations - a scenario that cannot be ruled out for RL adversarial threats, as is the case for deep neural networks in classification tasks. This paper proposes a method called observation-shielding RL (OSRL) to increase the robustness of RL against large perturbations using predictive models and threat detection. Instead of changing the RL algorithms with robustness regularization or retraining them with adversarial perturbations, we depart considerably from previous approaches and develop an add-on safety feature for existing RL algorithms at runtime. OSRL builds on the idea of model predictive shielding, where an observation predictive model is used to override the perturbed observations as needed to ensure safety. Extensive experiments on various MuJoCo environments (Ant, Hopper) and the classical pendulum environment demonstrate that our proposed OSRL is safer and more efficient than state-of-the-art robustness methods under large perturbations.
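    The shielding idea described above can be sketched in a few lines. The names below are illustrative, not the authors' API: a detector flags a suspicious incoming observation, and a learned observation-prediction model supplies a substitute before the RL policy acts.

    ```python
    # Hedged sketch of observation shielding (illustrative names, not OSRL's
    # actual interface): override a flagged observation with a prediction.

    def shield_observation(obs, prev_obs, prev_action, detector, predictor):
        """Return a safe observation for the policy to act on.

        detector(obs) -> bool        # True if obs looks perturbed
        predictor(prev_obs, action)  # model's predicted next observation
        """
        if detector(obs):
            # Threat detected: fall back on the predictive model's estimate.
            return predictor(prev_obs, prev_action)
        # No threat detected: pass the real observation through unmodified.
        return obs
    ```

    The add-on nature of the approach is visible here: the policy and training algorithm are untouched, and only the observation stream is filtered at runtime.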
  • Item
    Image Attribute Estimation for Forensic Image Reconstruction from Fragments
    (2023-01-03) Montambault, Kevin; Kul, Gokhan
    The increasing prevalence of cybercrime has led to a surge of new forensics tools aimed at collecting digital evidence from a suspect’s computer. A suspect’s hard drive can be the largest source of collected information, but the task of collection can be made significantly more difficult when the contents of a hard drive are deleted or damaged. In these circumstances, the information needed to read files normally may be missing, leaving only the raw, often fragmented, data behind. If files could be reliably reconstructed from this raw data, it would be more difficult for suspects to destroy potential evidence. In this paper, we focus on the reconstruction of an image from a set of fragments. This research contributes a novel image reconstruction method that utilizes pre-stitch data extraction on individual data sectors. We show that, when certain attributes are successfully extracted from the data sectors, this method yields high reconstruction accuracy even when used with a naive stitching algorithm on heavily fragmented image files.
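    The kind of naive stitching baseline the abstract refers to can be sketched as a greedy boundary-matching pass. This is an illustrative reconstruction, not the paper's algorithm: fragments are modeled as lists of pixel rows, and each step appends the fragment whose first row best matches the current fragment's last row.

    ```python
    # Hedged sketch of a naive fragment stitcher (illustrative only; the
    # paper's method adds pre-stitch attribute extraction per data sector).

    def boundary_score(a, b):
        """Lower is better: mean absolute difference between the last row
        of fragment a and the first row of fragment b."""
        tail, head = a[-1], b[0]
        return sum(abs(x - y) for x, y in zip(tail, head)) / len(tail)

    def greedy_stitch(fragments):
        """Greedily order fragments so adjacent boundaries are most similar."""
        remaining = list(fragments)
        order = [remaining.pop(0)]       # assume the first fragment is known
        while remaining:
            nxt = min(remaining, key=lambda f: boundary_score(order[-1], f))
            remaining.remove(nxt)
            order.append(nxt)
        return order
    ```

    The attributes extracted before stitching (per the abstract) would constrain which fragments are even candidates at each step, which is why even this naive stitcher can perform well once they are available.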
  • Item
    Data Exfiltration via Flow Hijacking at the Socket Layer
    (2023-01-03) Bergen, Eric; Lukaszewski, Daniel; Xie, Geoffrey
    The severity of data exfiltration attacks is well known, and operators have begun deploying elaborate host and network security controls to counter this threat. Consequently, malicious actors spare no effort finding methods to obfuscate their attacks within common network traffic. In this paper, we expose a new class of application-transparent, kernel-level data exfiltration attacks. By embedding data into application messages while they are held in socket buffers outside of applications, the attacks have the flexibility to hijack flows of multiple distinct applications at a time. Furthermore, we assess the practical implications of the attacks using a testbed emulating a typical data exfiltration scenario. We first prototype the required attack functionalities with existing Layer 4.5 application message customization software, and then perform flow hijacking experiments with respect to six common application protocols. The results confirm the flexibility of socket layer attacks and their ability to evade typical security controls.
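    The embedding step described above can be illustrated conceptually. This user-space sketch is not the paper's kernel-level implementation: it only shows the framing idea of piggybacking covert bytes onto an in-flight application message and stripping them at the receiving end. `MAGIC` is an invented delimiter for illustration.

    ```python
    # Hedged, purely conceptual sketch of message hijacking at a buffer
    # boundary. The real attacks operate on socket buffers in the kernel,
    # transparently to the application; MAGIC is a hypothetical delimiter.

    MAGIC = b"\x00COVERT\x00"

    def embed(message: bytes, covert: bytes) -> bytes:
        """Append a covert payload behind a delimiter the application never sees."""
        return message + MAGIC + covert

    def extract(wire: bytes):
        """Split a hijacked message back into application data and covert data."""
        if MAGIC in wire:
            msg, covert = wire.split(MAGIC, 1)
            return msg, covert
        return wire, b""
    ```

    Because the embedding happens after the application hands its message to the socket layer, the same mechanism can ride on flows from multiple distinct applications at once, which is the flexibility the abstract highlights.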