David Mohaisen

2021-2023 Distinguished Visitor

David Mohaisen earned his M.Sc. and Ph.D. degrees in Computer Science from the University of Minnesota. He is currently an Associate Professor at the University of Central Florida, where he directs the Security and Analytics Lab (SEAL) and has been the sole advisor of 15 Ph.D. students, 7 of whom have graduated and joined academia. Before joining UCF, he held several posts in academia and industry: Assistant Professor at the University at Buffalo, (Senior) Research Scientist at Verisign Labs, Member of the Engineering Staff at the Electronics and Telecommunication Research Institute (ETRI), Visiting Researcher at the International Computer Science Institute (ICSI) of UC Berkeley, Visiting Researcher in the Electrical and Computer Engineering Department at the Georgia Institute of Technology, and Visiting Research and Faculty Fellow at the United States Air Force Research Laboratory. His research interests span networked systems and their security, adversarial machine learning, IoT security, AI security, and blockchain security, and his work has appeared in more than 150 peer-reviewed papers in leading IEEE conferences and journals. Among other services, he is currently an Associate Editor of IEEE Transactions on Mobile Computing (TMC) and IEEE Transactions on Parallel and Distributed Systems (TPDS). He is a senior member of ACM (2018) and IEEE (2015).

University of Central Florida

Email: mohaisen@ucf.edu

DVP term expires December 2023


Presentations

Malware Analysis and Classification Using Machine Learning

Malicious software (malware) is a vehicle for adversaries to launch various types of attacks, and there has been a constant stream of malware samples in the wild over the past few years. Per one study, the number of malware samples grew to almost 1.1 billion in 2020, compared to 100 million samples only 8 years earlier, and attacks launched by malware impose significant costs on the world economy, on the order of hundreds of billions of dollars. The rise of this attack vector, coupled with the deployment of new systems of unprecedented scale, e.g., the Internet of Things, calls for techniques to identify malware samples for detection and classification. In this regard, machine learning has shown some promise, including high accuracy in filtering unwanted families, as well as operational systems for tracking families of interest over time, or for making use of threat intelligence to reduce manual analysis efforts. In this talk, we will review recent results on the application of machine learning to a broad class of malware analysis, detection, and classification tasks using various program analysis modalities, such as strings, graphs, and functions. We will then explore the robustness of such defenses to a new class of attacks on machine learning, and discuss broad directions for defenses.
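To make the string-modality idea concrete, here is a minimal sketch (not taken from the talk) of classifying malware families from printable strings extracted from binaries, using a bag-of-words model and a random forest. The samples, tokens, and family labels are synthetic stand-ins for illustration only.

```python
# Illustrative sketch: malware family classification from string features.
# The "extracted strings" and labels below are synthetic examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Each sample stands in for the printable strings pulled from one binary.
samples = [
    "CreateRemoteThread WriteProcessMemory keylog.dll",
    "GetAsyncKeyState SetWindowsHookEx keylog.dll",
    "CryptEncrypt ransom_note.txt DeleteShadowCopies",
    "CryptGenKey ransom_note.txt vssadmin",
]
families = ["keylogger", "keylogger", "ransomware", "ransomware"]

vec = CountVectorizer()          # token counts over the string dump
X = vec.fit_transform(samples)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, families)

# Classify an unseen (synthetic) string dump.
unseen = vec.transform(["SetWindowsHookEx GetAsyncKeyState keylog.dll"])
print(clf.predict(unseen)[0])
```

The same pipeline generalizes to the other modalities the talk mentions: graph- or function-level features replace the token counts, while the classifier stays the same.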

Towards Taming Adversarial Machine Learning: Security Applications Perspectives

The recent rapid advances in machine and deep learning algorithms have found many applications in the security space, including intrusion detection systems, malware detection, and attribution. Despite their extraordinary, sometimes superhuman, performance on various tasks, machine learning algorithms are prone to adversarial examples: carefully crafted inputs that fool the algorithms by, for example, reducing their confidence or even causing misclassification. In this talk, we review advances in the adversarial machine learning space as they pertain to various application security tasks. We further highlight and review several recent studies demonstrating the success of adversarial examples against various applications, including website fingerprinting, malicious binary classification, source code authorship identification, and intrusion detection systems. We discuss various defenses and conclude with open directions.
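The "carefully crafted input" idea can be sketched with a tiny gradient-based attack in the style of the fast gradient sign method, here against a hand-trained logistic regression rather than the deep models the talk discusses. All data and the perturbation budget are toy assumptions.

```python
import numpy as np

# Toy adversarial example: nudge a correctly classified point in the
# direction that increases the model's loss until the label flips.
rng = np.random.default_rng(0)

# Synthetic 2-D data: class 0 near (-2,-2), class 1 near (2,2).
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

x = np.array([1.5, 1.5])                              # classified as class 1
grad_x = (1 / (1 + np.exp(-(x @ w + b))) - 1.0) * w   # d(loss)/dx, true label 1
x_adv = x + 2.5 * np.sign(grad_x)                     # FGSM-style step

pred = lambda v: int(1 / (1 + np.exp(-(v @ w + b))) > 0.5)
print(pred(x), pred(x_adv))   # the perturbation flips the prediction
```

Against deep networks the gradient is taken through the whole model, and the perturbation budget is kept small enough to be imperceptible; the mechanics are otherwise the same.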

How Secure are Blockchain Systems?

Blockchains promise many applications in distributed computing systems, ranging from supply chains to critical infrastructure (e.g., cyber-physical systems). For foundations-driven guarantees of blockchain systems, a better understanding of their security is necessary. In this talk, we will explore questions surrounding the security of blockchain technology. For a distributed peer-to-peer system whose operation relies on the behavior of its peers, multiple security threats arise from various fundamental realities, such as stale and orphan blocks, blockchain forks, selfish mining, equivocation, and privacy issues through blockchain ingestion. To this end, we present a systematic exploration of the attack surface of blockchain technology from network and application standpoints, along with several potential solutions to address those attacks and enable reliable use of blockchain for provenance applications. Specifically, we investigate the fundamental properties of blockchains as distributed systems, how they can be violated through both intended (malicious) and unintended (reliability-constrained) behaviors, and how these attacks can be mitigated by enhancing current protocols. We conclude with open questions and directions.
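One fork-related property is easy to quantify: the classic Nakamoto-style estimate of an attacker with fraction `alpha` of the hash power catching up from `z` blocks behind. The sketch below (an illustration, not material from the talk) checks the closed form `(alpha/(1-alpha))**z` against a Monte Carlo random walk; all parameter values are arbitrary choices.

```python
import random

def catch_up_closed_form(alpha, z):
    """Probability an attacker z blocks behind ever catches up."""
    return 1.0 if alpha >= 0.5 else (alpha / (1 - alpha)) ** z

def catch_up_simulated(alpha, z, trials=5000, max_steps=300, seed=0):
    """Monte Carlo random walk: each block goes to the attacker with
    probability alpha, shrinking the deficit, else to honest miners."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        deficit = z
        for _ in range(max_steps):
            deficit += -1 if rng.random() < alpha else 1
            if deficit == 0:
                wins += 1
                break
    return wins / trials

print(round(catch_up_closed_form(0.3, 3), 3))   # (0.3/0.7)**3 ≈ 0.079
print(round(catch_up_simulated(0.3, 3), 2))
```

The simulation truncates walks at `max_steps`, which slightly undercounts very long catch-up attempts; since the deficit drifts upward for `alpha < 0.5`, the truncation error is negligible at these settings.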


Presentations

  • All-Domain Awareness According to AI/ML-Driven Big Data Analytics
  • 5 Questions Every Decision Maker Should Ask (and answer with the help of AI/ML)
  • Distilling AI: The Hitchhiker’s Guide
