This Friday, I’m headed over to SysML to present “Toward Scalable Verification for Safety-Critical Deep Networks”, co-authored with Guy Katz, Justin Gottschlich, Kyle Julian, Clark Barrett, and Mykel Kochenderfer. Here’s our poster!

Our poster, “Toward Scalable Verification for Safety-Critical Deep Networks”, appearing at SysML this year.

Part of this is just a summary of my co-authors’ very cool work on Reluplex, which they’ve been working on since well before I got involved. For me, though, this isn’t just about Reluplex or even just about neural network verification, but about a more general idea that I think Reluplex exemplifies: the idea of exploiting high-level domain-specific abstractions for efficiency. If there’s one theme that ties together all the stuff I’ve been working on or studying in the last several years, it’s that a high-level representation of programmer intent (commutative operations on lattice elements in the case of LVars; aggregate array operations in the case of ParallelAccelerator; the ReLU predicate added to the theory of linear real arithmetic in the case of Reluplex) enables optimizations or smart scheduling choices that would be difficult or impossible otherwise.
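To make that concrete for the Reluplex case, here’s a minimal sketch – using the Z3 Python bindings rather than Reluplex itself – of how a single ReLU can be written as a constraint on top of linear real arithmetic:

```python
# Minimal sketch (Z3 Python bindings, not Reluplex): a single ReLU
# expressed as a constraint on top of linear real arithmetic.
from z3 import Real, Solver, If

x = Real('x')   # pre-activation value
y = Real('y')   # post-activation value

s = Solver()
s.add(y == If(x > 0, x, 0))   # y = ReLU(x)

# Ask whether the ReLU could ever produce a negative output.
s.add(y < 0)
print(s.check())   # expected: unsat -- it never can
```

An off-the-shelf solver handles that `If` by case-splitting on whether the input is positive, which blows up as the number of ReLUs grows; the point of Reluplex is to handle ReLU constraints natively inside a modified simplex procedure, splitting only when it has to.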

The team’s experience with Reluplex suggests that to make headway on hard verification problems, it won’t be enough to use off-the-shelf, general-purpose solvers, and that we need to instead develop domain-specific solvers that are suited to the specific verification task at hand. That suggests to me that we (or, you know, someone) should try to create tools that make it really easy to develop those domain-specific solvers (analogously to how Delite aimed to make it really easy to develop high-performance DSLs) – and perhaps Rosette, which I’m now learning about through my work on the CAPA program, will play a role here. It’s all connected!

What makes a network verification-friendly?

In addition to making solvers better, there’s also the other side of things: designing networks that are easier for solvers to chew on in the first place. I think we’re just beginning to learn how to do this, but what I find promising is that some design choices that are desirable for other reasons – like pruning and quantizing networks to reduce storage requirements, speed up inference, or improve energy efficiency – may also make those networks more amenable to verification.
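As a toy illustration of the kinds of design choices I mean (this is just NumPy, not anything from the poster, and the 50% sparsity target and 8-bit width are arbitrary):

```python
# Toy illustration of two compression techniques mentioned above:
# magnitude pruning and uniform quantization of a weight matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)

# Magnitude pruning: zero out the smallest-magnitude half of the weights.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Uniform quantization: snap the surviving weights onto 2^8 evenly spaced levels.
scale = np.abs(W_pruned).max() / 127.0
W_quantized = np.round(W_pruned / scale) * scale

print(f"nonzero weights: {np.count_nonzero(W_pruned)} of {W.size}")
```

The hope is that a pruned, quantized network gives a solver fewer and simpler constraints to reason about – but how much that actually helps verification is exactly the open question.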

It may even be the case that the same hardware accelerator techniques that optimize inference on quantized or low-precision networks can also optimize verification of those same networks! This is honestly just pure speculation on my part, but if you’re interested in trying to figure out if it’s true, I hope you’ll come talk to me at SysML.

What’s “deep” about the networks we can handle?

Baris Kasikci asked a good question on Twitter recently about the Reluplex work: “How difficult is it to produce the SMT formula for the property that you want to prove?” Part of the answer is that it depends on the property. We want properties that can be expressed in terms of “if the input to the network falls into such-and-such class of inputs, then its output will fall into so-and-so class of outputs”. For the prototype ACAS Xu aircraft collision avoidance network that Reluplex used as a case study, the input is relatively high-level, expressing things like how far apart two planes are, and the output is one of five recommendations for how much an aircraft should turn (or not turn). In that setting, it’s relatively easy to at least state the properties one wants to prove. For a network where the input is something more low-level, like pixels, it may be a lot harder to state properties of interest.
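To show the shape of such a property, here’s a sketch using the Z3 Python bindings on a made-up two-input network. The weights, bounds, and property are invented purely for illustration and have nothing to do with the actual ACAS Xu networks:

```python
# Sketch of an "if the input is in this region, the output is in that region"
# property for a made-up network with 2 inputs, 2 hidden ReLUs, and 1 output.
# Weights, bounds, and the property are invented for illustration only.
from z3 import Reals, Solver, If, And

def relu(e):
    return If(e > 0, e, 0)

x1, x2 = Reals('x1 x2')
h1 = relu(1.0 * x1 - 0.5 * x2)
h2 = relu(-0.3 * x1 + 2.0 * x2)
y = 0.7 * h1 + 0.7 * h2 - 1.0

s = Solver()
# Input region: a simple box.
s.add(And(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1))
# Negation of the desired property (y <= 2): an unsat answer means the
# property holds for every input in the box.
s.add(y > 2)
print(s.check())   # expected: unsat
```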

That should give us at least a moment’s pause, since the whole point of deep learning is that it’s possible to start with relatively low-level inputs and allow the computer to figure out for itself what features are important, rather than having to do a lot of complicated feature engineering to provide input. But the mechanism by which a deep neural network accomplishes that is mostly “lots of layers”, with successive layers operating on increasingly abstract representations of the input. Julian et al. describe the prototype ACAS Xu network as “deep” more because it has a lot of layers than because the input is particularly low-level.

But let’s say that we do have a network that takes low-level inputs like pixels: can Reluplex verify anything interesting about it? Yes, it can! There are still interesting properties we can express to the solver – like, say, robustness to adversarial inputs. I’ve claimed in the past (although I suspect some of my co-authors would disagree with me) that adversarial robustness properties are actually kind of boring, because we already know ways to state those properties for arbitrary networks, and that the more interesting properties are the ones that we’ll have to work hard to even figure out how to express, much less prove. But even if we’re “just” looking at verifying adversarial robustness, it’s not like there’s any shortage of work to do, both in coming up with meaningful robustness metrics and in making solving a lot more efficient.
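For concreteness, the local robustness property people usually have in mind looks something like this – stated here in the standard l-infinity form for a network f and a fixed input x, with nothing about it specific to Reluplex:

$$\forall x'.\ \|x' - x\|_\infty \le \epsilon \implies \arg\max_i f_i(x') = \arg\max_i f_i(x)$$

The solver query is the negation: does there exist an x' within ε of x that changes the network’s answer? An unsat result certifies robustness only at that particular x, for that particular ε, which is part of why both the metric question and the efficiency question still have plenty of room for work.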
