
Agenda


Wednesday - November 6, 2024

8:30 AM

Check-in and Networking

Welcome to Nook!

9:00 AM - 9:15 AM

Michael Douglass, Advanced Onion

Welcome Address

9:15 AM - 9:45 AM

Dustin Burns, Exponent

Technical Introduction

In this technical introduction, Dr. Dustin Burns provides a visual and intuitive presentation of AI with Large Language Models (LLMs), including an answer to the question: What does the “GPT” in ChatGPT stand for? Building from an understanding of neural networks, the audience will gain practical knowledge of the concepts and algorithms behind LLMs, including training and validation methodology, fine-tuning, inference, and prompt engineering. We will conclude by discussing how users can better apply AI tools in their work while weighing the risks and pitfalls.
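
To make a couple of these terms concrete, here is a minimal, hedged sketch (not taken from the talk) of inference and prompt engineering using the open-source Hugging Face transformers library; the model name "gpt2" and the prompts are placeholders chosen only for illustration.

```python
# Illustrative sketch: running inference on a small local model and comparing a
# bare prompt with a more structured, "engineered" prompt. Model choice and
# prompt wording are assumptions for demonstration, not material from the talk.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small public stand-in model

bare_prompt = "Summarize the risks of using AI tools at work."
engineered_prompt = (
    "You are a security analyst. In three bullet points, summarize the main "
    "risks of using AI tools at work and name one mitigation for each."
)

for prompt in (bare_prompt, engineered_prompt):
    # max_new_tokens caps the generated length; do_sample=False gives deterministic (greedy) decoding
    result = generator(prompt, max_new_tokens=80, do_sample=False)
    print(result[0]["generated_text"])
```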

9:45 AM - 10:00 AM

COFFEE BREAK


10:00 AM - 10:30 AM

John Massey, USAF

Fireside chat

10:30 AM - 10:45 AM

SPONSOR SPEED ROUND

A quick highlight of all our valued sponsors.

EchoMark, Exponent, and Advanced Onion

10:45 AM - 11:30 AM

David Wong, EchoMark

Balancing Innovation and Risk: Navigating the Challenges of Generative AI

Generative AI is transforming business operations across industries, from customer engagement to software development, unlocking unprecedented opportunities for innovation. However, it also introduces serious risks, including data exfiltration, compliance and confidentiality challenges, and exposure to deepfakes and inappropriate content. In this session, we’ll explore key risk factors associated with Generative AI and discuss actionable strategies to help organizations navigate and mitigate these emerging threats effectively.

11:30 AM - 1:00 PM

LUNCH BREAK

Please enjoy a delicious selection from the catering team at the InterContinental.

1:00 PM - 2:00 PM

Dr. Kristin Schneider, Rhonda Maluia, and Erick Miyares, DITMAC 

Beyond UAM: LLMs and AI in DoD InT

Within the U.S. Department of Defense (DoD), insider threats, ranging from unauthorized disclosure of classified information, espionage, and terrorism to workplace violence, suicide, and other forms of targeted violence, are managed by the DoD Insider Threat Management and Analysis Center (DITMAC). When a concern arises regarding a military Service member, DoD civilian employee, or DoD contractor, the DITMAC Behavioral Threat Analysis Center (BTAC) conducts a comprehensive multidisciplinary behavioral threat assessment to inform case management and mitigation. As DoD grapples with the growing complexity of insider threats, identifying and integrating innovative methodologies that aid in the detection, assessment, and mitigation of threats has become increasingly essential. LLMs, renowned for their capabilities in natural language processing and contextual understanding, present a transformative opportunity, if used properly, to enhance counter-insider-threat capabilities and foster a more secure operational environment.

 

In this presentation, multidisciplinary BTAC team members describe DoD threat assessment cases and explore opportunities for, and challenges with, integrating LLM technologies into DoD insider threat detection, assessment, and mitigation practices. They describe the types of information needed to inform comprehensive insider threat assessments and the guiding behavioral threat assessment methodology, which emphasizes systematic evaluation of risk factors, contextual information, and behavioral patterns. They highlight key advantages of employing LLMs to support this methodology, for example by efficiently analyzing vast amounts of unstructured data from disparate sources, including social media, public records, and incident reports, thereby informing a comprehensive view of an individual’s behavior and evaluation of risk. They also address challenges associated with integrating LLMs, including privacy and ethical considerations, potential biases in data interpretation, and the necessity of human oversight to ensure responsible use.
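
As a rough illustration of the unstructured-data use case described above, the following hedged sketch shows an LLM being asked to pull structured indicators out of free-text report material, with the output routed to a human analyst rather than acted on automatically. It is not DITMAC or BTAC tooling; the OpenAI client, model name, and prompt wording are assumptions for illustration only.

```python
# Hypothetical sketch: extract structured behavioral indicators from unstructured
# report text with an LLM, then hand the result to a human analyst for review.
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def extract_indicators(report_text: str) -> dict:
    """Ask the model to list indicators mentioned in a single free-text report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract behavioral indicators from the report as JSON with "
                    "keys 'indicators' (list of strings) and 'context' "
                    "(one-sentence summary)."
                ),
            },
            {"role": "user", "content": report_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Human oversight: the model's output only informs the analyst; it never triggers action.
draft = extract_indicators("Employee repeatedly accessed files outside assigned projects ...")
print("For analyst review:", draft)
```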

2:00 PM - 2:15 PM

COFFEE BREAK


2:15 PM - 3:00 PM

Michael Roldan & Lester Kwok, FBI

The Use of Large Language Models (LLMs) in Criminal Activity

This presentation discusses how Large Language Models (LLMs) are increasingly being exploited for malicious activities. Key points include:

  1. Writing Malware: LLMs can generate harmful code, such as Distributed Denial of Service (DDoS) scripts, when manipulated.

  2. Phishing and Social Engineering: LLMs help generate sophisticated and convincing phishing attempts that target specific individuals, making scams harder to detect.

  3. Credential Harvesting: Cybercriminals use LLMs to create phishing emails or fake websites that trick users into giving up credentials.

  4. Business Email Compromise (BEC): Scammers leverage AI to impersonate executives in emails, leading to financial theft and sensitive data breaches.

  5. Synthetic Identity Fraud: AI helps criminals create synthetic identities for financial fraud, posing a growing threat to financial institutions.

  6. Deepfakes: AI-generated deepfakes of trusted individuals are used in scams, like directing fraudulent financial transfers or extracting confidential information.

  7. Vulnerability Identification: AI aids attackers in identifying system vulnerabilities and selecting high-value targets more efficiently.

  8. Confidence/Romance Scams: AI-powered chat scripts and voice cloning make romance scams more believable, leading to substantial financial losses.

  9. Crypto-Investment Scams: AI is used to enhance fraudulent crypto investment schemes, resulting in billions of dollars in losses.

  10. Employment Scams: AI-generated fake interviews trick companies into hiring fraudulently, contributing to large-scale employment fraud.

3:00 PM - 3:15 PM

COFFEE BREAK


3:15 PM - 3:45 PM

Brad Morris, Advanced Onion

It's a Nice Day for A Random Walk: A Non-Expert's Journey to Develop a Self-Contained Generative AI Chat Assistant Application

In this presentation, Advanced Onion’s CTO, Brad Morris, will share his recent experience developing an innovative GenAI chat assistant application using non-proprietary resources. Specifically, he will present several open-source frameworks that have enabled him to quickly develop a people-analytics prototype combining LLMs, retrieval systems, and custom prompts on local hardware. His presentation aims to dispel common misconceptions and help accelerate the development and adoption of GenAI across greenfield and brownfield initiatives.
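
For readers curious what such a self-contained stack can look like, here is a hedged sketch of the retrieval-augmented pattern the abstract describes, assembled from common open-source pieces (sentence-transformers, FAISS, and transformers). It is an assumption for illustration, not Mr. Morris’s actual architecture, and the documents, model names, and prompt are placeholders.

```python
# Illustrative local retrieval-augmented generation (RAG) sketch using only
# open-source components; model choices and data are placeholders.
import faiss
from sentence_transformers import SentenceTransformer
from transformers import pipeline

docs = [
    "Policy A: contractors must renew badges annually.",
    "Policy B: remote access requires a hardware token.",
]

# 1. Embed the documents locally and index them for similarity search.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, convert_to_numpy=True)
index = faiss.IndexFlatL2(doc_vecs.shape[1])
index.add(doc_vecs)

# 2. Retrieve the document most relevant to the user's question.
question = "How often do contractor badges need renewal?"
q_vec = embedder.encode([question], convert_to_numpy=True)
_, hits = index.search(q_vec, 1)
context = docs[hits[0][0]]

# 3. Combine a custom prompt (retrieved context + question) and run a small local model.
generator = pipeline("text-generation", model="gpt2")  # stand-in for a local LLM
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"])
```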

4:00 PM - 6:00 PM

NETWORKING RECEPTION

Please join us for cocktails, food, and plenty of conversation on the day's content!
