Cyber Deception and Cyberpsychology for Defense
Recent Submissions
Item: Investigating Cognitive Salience and SHAPley Values for Model Explainability in Intrusion Detection Datasets (2025-01-07)
Thomson, Robert; Cranford, Edward; Lebiere, Christian
We compare a cognitive salience algorithm with Shapley values to show how cognitive salience provides an alternate measure of the extent to which each feature is responsible for a particular decision in a given context. We show how local cognitive salience is computed in near real time. Because it is embedded in cognitive mechanisms, cognitive salience is influenced by recency, frequency, and order effects appropriate to interpreting both cognitive model and arbitrary machine learning output. Applying this technique to intrusion detection datasets (UNSW-NB15, CICIDS2017, NSL-KDD), we compute the salience of each feature for any given classification. By tracing (i.e., clamping) our model's decisions to those of a specific classification technique or individual decision maker, we can provide measures of model-agnostic salience. We then compare our output to SHAP values, a comparable game-theoretic measure of local and global feature importance. We show that our salience measure is more scalable than tree-based SHAP techniques and discuss when their predictions vary.

Item: Deploying Active Defence in a SOC: Analysts’ Perceptions of Cyber Deception (2025-01-07)
Reeves, Andrew; Ashenden, Debi
Security Operations Centres (SOCs) are pivotal in safeguarding an organisation's network infrastructure. While existing technologies focus on reactive measures, the emergence of deception tools presents an opportunity for a more proactive defence against cyber threats. Integrating such tools into SOCs, however, requires understanding their impact, value, and implementation challenges. To explore this, we conducted fifteen interviews with analysts from a leading SOC provider in Australia.
Our thematic analysis revealed key insights: implementing cyber deception requires a shift in organisational risk tolerance, its efficacy hinges on proper implementation, and it introduces new risks that require strategic management. Analysts suggested that in-house SOCs or threat intelligence teams might be better suited than a Managed Service Provider (MSP) SOC for deploying cyber deception. This study sheds light on the implications of cyber deception for SOC operations. We conclude with recommendations to guide the integration of deception tools into SOCs.

Item: Containerized Cozenage: Exploring the Effectiveness of High Interaction ICS Honeypot Containers (2025-01-07)
Norton, Gregory; Landsborough, Jason; Orozco, Emmanuel; Fields, Joseph
Industrial control systems (ICS) provide critical functionality and are responsible for ensuring water treatment and electrical grid operation, among other vital services. They are therefore prime targets for cyber attackers. As attackers employ techniques such as living off the land to remain undetected, defenders need to adapt their tools to protect ICS networks. We present a high interaction honeypot that runs the Sedona Framework within a Docker container. Containerized honeypots could mitigate the costs associated with deployment and maintenance. Fingerprint evasion methods are implemented in our honeypot's design, including a physics-aware simulated centrifuge device and compatibility with human machine interface (HMI) software. The honeypot's behavior was compared against a physical system, the Contemporary Controls BASC-20T, and the honeypot was found to fully interoperate with two HMI control applications.
Network traffic analysis reveals that the honeypot's network response time signature can be made to closely resemble the BASC-20T's.

Item: Cyberpsychology: Integrating Cyber Behavioral Sciences with Adaptive Environments for Enhanced Cyber Deception (2025-01-07)
Rich, Marshall
As cyber threats evolve in complexity, there is a growing need to integrate Cyber Behavioral Sciences (CBS) and adaptive defense strategies into cybersecurity frameworks. This paper explores the potential of CBS and real-time adaptive testing environments (ATEs), such as moving target defense (MTD), to enhance cyber deception strategies. A comprehensive literature review and thematic analysis identify critical gaps in current research, including the lack of empirical validation, the absence of standardized performance metrics, and the underexplored role of organizational culture in cyber deception. The findings suggest that while theoretical models of CBS offer valuable insights into attacker behavior, practical implementations remain limited. However, by addressing these challenges, cybersecurity practitioners can create more dynamic, behaviorally informed defenses capable of responding to evolving adversarial tactics.

Item: Sludge for Good: Slowing and Imposing Costs on Cyber Attackers (2025-01-07)
Dykstra, Josiah; Shortridge, Kelly; Met, Jamie; Hough, Douglas
Choice architecture describes the design by which choices are presented to people. Nudges are design elements intended to make “good” outcomes easy, such as password meters that encourage strong passwords. Sludge, by contrast, is friction that raises transaction costs and is often perceived negatively by users. Turning this concept around, we propose applying sludge for positive cybersecurity outcomes by using it offensively against attackers to consume their time and other resources. Most cyber defenses have been designed to be optimally strong and effective, prohibiting or eliminating attackers as quickly as possible.
Our complementary approach is to deploy defenses that seek to maximize the consumption of attackers’ time and other resources while causing as little damage as possible to the victim. This approach is consistent with zero trust and similar mindsets that assume breach. The Sludge Strategy introduces cost-imposing cyber defense by strategically deploying friction for attackers before, during, and after an attack, using deception and authentic design features. We present the characteristics of effective sludge and show a continuum from light to heavy sludge. We describe the quantitative and qualitative costs to attackers and offer practical considerations for deploying sludge in practice. Finally, we examine real-world examples of U.S. government operations to frustrate and impose costs on cyber adversaries. We encourage research and further exploration of how sludge can slow attackers.

Item: Puppeteer: Leveraging a Large Language Model for Scambaiting (2025-01-07)
Charnsethikul, Pithayuth; Mirkovic, Jelena; Saiya, Rishit; Liu, Jeffrey; Crotty, Benjamin; Bartlett, Genevieve
Scambaiting is a defense that engages with scammers to waste their resources and gain information about their fraud campaigns. This defense needs automation to scale to the vast number of scams seen today. In this paper, we propose a scalable, automated scambaiting system, Puppeteer, which leverages a large language model for response generation and state machines for conversation tracking. We measure Puppeteer’s effectiveness via a user study in which participants play the role of a scammer in two scam scenarios. Puppeteer convinced more than 72% of participants that they were interacting with a human and extracted information from 68% of participants. In comparison, the same large language model without conversation tracking convinced only 54% of participants that they were interacting with a human and obtained information from 54% of participants.
Our results show Puppeteer's potential for real-world use. To the best of our knowledge, we are also the first to systematically evaluate a large language model for a scambaiting task.

Item: ‘It's Not Paranoia If They're Really After You’: When Announcing Deception Technology Can Change Attacker Decisions (2025-01-07)
Reeves, Andrew; Ashenden, Debi
As organisations continue to adopt deception technology, adversaries are becoming aware of it. Little is known, however, about how this awareness changes attackers’ behaviour as they navigate a victim's network. Concurrently, work is being done to build algorithms that predict attacker paths in order to recommend where to place deceptive assets, but it is not clear whether attacker awareness of deception alters their behaviour enough to render these algorithms ineffective. We present an ongoing mixed-methods study to better understand how attackers move through a network when they are aware of the presence of deception. Thematic analysis of think-aloud sessions revealed three key decision-making themes, which suggest that several industry heuristics for the use of decoys may be inaccurate and may undermine decoy placement strategies. In addition, effect sizes indicate that awareness of deception leads attackers to take longer paths through the network, although no more decoys were required to detect them.

Item: Introduction to the Minitrack on Cyber Deception and Cyberpsychology for Defense (2025-01-07)
Patel, Tejas; Fugate, Sunny; Wang, Cliff; Ferguson-Walter, Kimberly
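The SHAP values discussed in the first abstract above are a game-theoretic feature-importance measure built on Shapley values. As a minimal illustration of the underlying idea (a toy linear anomaly score over hypothetical flow features, not the authors' model or datasets), the sketch below computes exact Shapley attributions by enumerating feature coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution of predict(x) relative to a baseline.
    Feature i's attribution is the weighted average, over all coalitions S
    of the other features, of the output change from adding i to S."""
    n = len(x)

    def v(S):
        # Coalition value: features in S take their values from x,
        # all other features are held at the baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return predict(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy anomaly score over three hypothetical flow features
# (e.g. duration, byte count, flag count) -- purely illustrative.
score = lambda z: 0.5 * z[0] + 0.2 * z[1] + 0.3 * z[2]
x, base = [1.0, 2.0, 0.0], [0.0, 0.0, 0.0]
phi = shapley_values(score, x, base)
# For a linear model, phi == [w_i * (x_i - base_i)] == [0.5, 0.4, 0.0],
# and by the efficiency property sum(phi) == score(x) - score(base).
```

This brute-force enumeration is exponential in the number of features, which is why practical SHAP implementations rely on approximations such as tree-specific algorithms, the scalability point the abstract raises.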
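The Puppeteer abstract above pairs an LLM with state machines for conversation tracking. The sketch below is a hypothetical illustration of that pattern (the state names, events, and API are assumptions, not Puppeteer's actual design): a transition table tracks which stage a baiting dialogue has reached, so a response generator could be prompted with a stage-specific goal.

```python
class ConversationTracker:
    """Tracks the stage of a scambaiting dialogue so a response generator
    can be given a stage-specific goal (stall, feign interest, extract)."""

    # (current_state, observed_event) -> next_state; names are illustrative.
    TRANSITIONS = {
        ("greeting", "offer_made"): "show_interest",
        ("show_interest", "asked_for_payment"): "stall_payment",
        ("stall_payment", "asked_for_payment"): "request_details",
        ("request_details", "details_given"): "extracted",
    }

    def __init__(self):
        self.state = "greeting"
        self.extracted = []  # information harvested from the scammer

    def observe(self, event, payload=None):
        """Advance the state machine on an event; unknown events are
        ignored so the conversation simply stays in its current stage."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is not None:
            self.state = nxt
        if event == "details_given" and payload is not None:
            self.extracted.append(payload)
        return self.state

tracker = ConversationTracker()
tracker.observe("offer_made")         # -> "show_interest"
tracker.observe("asked_for_payment")  # -> "stall_payment"
tracker.observe("asked_for_payment")  # -> "request_details"
tracker.observe("details_given", payload="payment account")
# tracker.state == "extracted"; the payload is recorded for reporting
```

A tracker like this keeps the LLM's replies goal-directed across turns, which is one plausible reading of why the abstract reports better human-likeness and extraction rates with conversation tracking than without.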