Nael Abu-Ghazaleh

2021-2023 Distinguished Visitor

Nael Abu-Ghazaleh is a Professor with a joint appointment in the CSE and ECE departments at the University of California, Riverside, and the director of the Computer Engineering program. His research interests include architecture support for security, high-performance computing architectures, and networking and distributed systems. His group’s research has led to the discovery of a number of vulnerabilities in modern architectures and operating systems, which have been reported to companies and have impacted commercial products. He has published over 200 papers, several of which have been nominated for or recognized with best paper awards. He is a Distinguished Member of the ACM.

University of California Riverside

Email: nael@cs.ucr.edu

DVP term expires December 2023


Presentations

Security challenges and opportunities at the intersection of architecture and ML/AI

Machine learning is an increasingly important computational workload, as data-driven deep learning models are deployed across a wide range of application spaces. Computer systems, from the architecture up, have been impacted by ML in two primary directions: (1) ML as a workload, with new accelerators and systems targeted to support both training and inference at scale; and (2) ML supporting architecture decisions, with machine-learning-based algorithms controlling systems to optimize their performance, reliability, and robustness. In this talk, I will explore the intersection of security, ML, and architecture, identifying both security challenges and opportunities. Machine learning systems are vulnerable to new classes of attacks, including adversarial attacks crafted to fool a classifier to the attacker’s advantage, membership inference attacks that attempt to compromise the privacy of the training data, and model extraction attacks that seek to recover the hyperparameters of a (secret) model. Architecture can be a target of these attacks when it supports ML, but it also provides an opportunity to develop defenses against them, which I will illustrate with three examples from our recent work. First, I will show how ML-based hardware malware detectors can be attacked with adversarial perturbations to the malware, and how we can develop detectors that resist these attacks. Second, I will show an example of a microarchitectural side-channel attack that can be used to extract the secret parameters of a neural network, along with potential defenses against it. Finally, I will discuss how architecture can be used to make ML more robust against adversarial and membership inference attacks using the idea of approximate computing. I will conclude by describing some remaining open problems.
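As a concrete illustration of the first attack class, the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression classifier standing in for a malware detector: it perturbs the input features in the direction that increases the classifier's loss, pushing the detector's score toward the benign class. This is a minimal, hypothetical sketch; the weights, features, and model are synthetic and are not the detectors or attacks from the talk.

    import numpy as np

    # Toy FGSM sketch: perturb an input so a logistic-regression
    # "malware detector" scores it closer to benign. All values are
    # synthetic stand-ins, not the models discussed in the talk.
    rng = np.random.default_rng(0)
    w = rng.normal(size=8)   # stand-in for trained detector weights
    b = 0.1
    x = rng.normal(size=8)   # feature vector of a malicious sample
    y = 1.0                  # true label: malware

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # For a linear model, the gradient of the cross-entropy loss
    # with respect to the input is (sigmoid(w.x + b) - y) * w.
    grad_x = (sigmoid(w @ x + b) - y) * w

    eps = 0.25
    x_adv = x + eps * np.sign(grad_x)  # FGSM: step in the loss-increasing direction

    print("clean score:      ", sigmoid(w @ x + b))
    print("adversarial score:", sigmoid(w @ x_adv + b))

One reason the malware setting is harder for the attacker than the image setting is that the perturbed sample must remain a working program, a constraint that detector hardening can exploit.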

Secure speculative execution in the age of Spectre and Meltdown

Modern computing systems are under attack by increasingly motivated and sophisticated attackers. Recently, the Meltdown and Spectre attacks demonstrated that security is not only a software problem: hardware components can expose software-exploitable vulnerabilities. These attacks exploit the core paradigm used to build modern high-performance CPUs, speculative out-of-order execution, and it is not clear how to build CPUs that are both secure and performant. In this talk, I will first introduce these attacks using the example of SpectreRSB, a speculation attack that exploits the Return Stack Buffer (RSB), a structure in modern processors used to predict the return address of a function. I will then discuss two ideas for building next-generation CPUs that enable secure speculation without sacrificing performance. In particular, SafeSpec hides the effects of speculation until an instruction is committed. In contrast, SpecCFI leverages principles of control-flow integrity to restrict speculation to legal targets within the program.
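To make the Return Stack Buffer concrete, here is a toy Python model of a small circular RSB alongside the architectural call stack; it assumes nothing about any specific CPU and is not the attack code from the talk. Once a chain of nested calls overflows the buffer, the oldest entries are overwritten, so returns from the outer frames are predicted at stale targets. On real hardware, the CPU transiently executes at those wrong targets before the misprediction is caught, which is the window SpectreRSB-style attacks exploit.

    # Toy model of a circular Return Stack Buffer (RSB). This is a
    # conceptual sketch of the overflow behavior, not real hardware.
    class ToyRSB:
        def __init__(self, size=4):
            self.entries = [None] * size
            self.top = 0  # index of the next free slot, modulo size

        def on_call(self, return_addr):
            # A call pushes its return address, overwriting the oldest
            # entry once the buffer wraps around.
            self.entries[self.top % len(self.entries)] = return_addr
            self.top += 1

        def predict_return(self):
            # A return pops the predicted target.
            self.top -= 1
            return self.entries[self.top % len(self.entries)]

    rsb = ToyRSB(size=4)
    call_stack = []  # architectural ground truth

    # Six nested calls overflow the four-entry RSB.
    for addr in ["ret0", "ret1", "ret2", "ret3", "ret4", "ret5"]:
        call_stack.append(addr)
        rsb.on_call(addr)

    # Unwinding: the innermost four returns predict correctly; the two
    # outermost hit entries that were overwritten when the buffer wrapped.
    while call_stack:
        actual = call_stack.pop()
        predicted = rsb.predict_return()
        ok = "ok" if predicted == actual else "MISPREDICT (transient execution at stale target)"
        print(f"actual={actual} predicted={predicted} {ok}")

SafeSpec and SpecCFI close this window from two directions: the former keeps the side effects of transient execution invisible until commit, while the latter constrains where speculation is allowed to go in the first place.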
