
ITS Nook 2025 Agenda

Friday - November 14, 2025

8:30 AM - 9:00 AM - Check-in and Networking

9:00 AM - 9:45 AM - Eric Lang, Ph.D.

The Social Psychology of AI Algorithm Weights: Biases, Real-World Problems, and Solutions

Abstract Pending 

9:45 AM - 10:15 AM - Joshua Lenzini

Title Pending

Abstract Pending 

10:15 AM - 10:45 AM - Coffee Break

10:45 AM - 11:00 AM - Sponsor Speed Round

Advanced Onion & Cogility

11:00 AM - 11:30 AM - Dustin Burns

Bridging Global AI Standards and Insider Risk Applications

As algorithmic risk scoring tools become increasingly embedded in government and industry, concerns around fairness, transparency, and unintended bias have moved from academic discourse to operational urgency. Drawing on my experience as a U.S. delegate to the ISO/IEC JTC 1/SC 42 Artificial Intelligence standards committee and lead representative to the ANSI/INCITS AI Technical Committee, this presentation will bridge the gap between international AI standards and real-world applications in insider risk scoring. I will outline key principles from emerging standards frameworks—including those addressing data quality, trustworthiness, and risk management—and demonstrate how they can be operationalized to assess and mitigate bias in algorithmic systems. Using case studies from healthcare, defense, and cybersecurity domains, I will explore how auditability, explainability, and fairness can be embedded into the design and deployment of insider threat models. This talk is intended for practitioners, researchers, and policymakers seeking to ground their algorithmic systems in scientifically rigorous, standards-informed methods.

11:30 AM - 1:00 PM - Lunch

1:00 PM - 1:45 PM - Jeremey Parkhurst & Victoria Liu

Title Pending

As insider threat programs evolve beyond traditional perimeter-based monitoring, the integration of external intelligence—such as third-party data feeds, geopolitical risk indicators, and open-source threat assessments—has become a powerful tool for enhancing detection and response. However, this expansion introduces complex challenges around bias, defensibility, and legal permissibility.

This talk explores how organizations can responsibly score external data to inform insider risk decisions, while avoiding unintended consequences. We will examine the tension between operational urgency and analytic rigor, highlighting how codified Priority Intelligence Requirements (PIRs) can complement internal Potential Risk Indicators (PRIs) to create a more holistic and defensible risk framework.

Real-world examples will illustrate how geopolitical triggers—such as PRC talent recruitment programs or concerns over DPRK IT worker infiltration—can influence employment decisions, sometimes in ways that raise legal and ethical questions.

Attendees will leave with a deeper understanding of:
- How to balance external threat intelligence with internal risk scoring.
- The role of PIRs in shaping organizational risk appetite.
- Legal and privacy constraints on using external data in hiring pipelines.
- Strategies for building analytic objectivity and defensibility into insider threat programs.

1:45 PM - 2:15 PM - Brad Morris

Connecting the Dots: Graph-Based Methods for Explainable Insider Threat Detection

How might we accelerate the development and adoption of counter-insider-threat tools that not only detect anomalous behavior, but also provide explainable risk assessments to guide investigations and ensure legal defensibility? In this presentation, Advanced Onion’s CTO, Brad Morris, shares his ongoing research on explainable anomaly detection and risk scoring using probabilistic graph-based methods. Specifically, his presentation addresses the feasibility and efficacy of graph-based approaches for detecting insider threats by modeling how people, computers, and data connect, while providing defensible explanations of why individuals are flagged and what actions justify investigation.

2:15 PM - 2:45 PM - Coffee Break

2:45 PM - 3:15 PM - Frank Greitzer, Ph.D.

Title Pending

Abstract Pending 

3:15 PM - 3:45 PM - Speaker Pending

Title Pending

Abstract Pending 

3:45 PM - 4:45 PM - Open-Panel Discussion 

Regulatory Frameworks for Accountability in AI Bias

As artificial intelligence systems become increasingly embedded in decision-making processes across sectors—from finance and healthcare to hiring and law enforcement—the urgent need for robust regulatory frameworks to address algorithmic bias has come into sharp focus. This open-panel discussion, “Regulatory Frameworks for Accountability in AI Bias,” brings together policymakers, technologists, ethicists, and legal experts to explore the evolving landscape of governance and accountability in AI. Panelists will examine emerging legislation and standards, assess the challenges of enforcing transparency and fairness, and debate the balance between innovation and oversight. By comparing international approaches and identifying gaps in existing frameworks, the discussion aims to chart actionable pathways toward equitable, accountable, and trustworthy AI systems.

5:00 PM - 7:00 PM - Networking Reception with food & beverages
