It’s been nearly half a year since it happened, so it’s probably about time I mentioned it: the group I originally joined at Intel Labs, the Programming Systems Lab or PSL, has been absorbed into a larger group, the Parallel Computing Lab, or PCL. I was a proud member of the PSL, and it was a little sad to lose our old lab’s unique identity. However, joining the PCL has been a good thing for me and for us. Our old projects continue; in particular, the PCL is now home to the High Performance Scripting project, which ParallelAccelerator is a part of. Some of our PCL colleagues had been involved with Julia already, so it makes sense that we’re now in the same lab and can more easily coordinate our efforts.

The PCL is a big lab, and its members are spread out across various sites in the US and India, including a couple at the Intel Science and Technology Center for Big Data at MIT. Being part of a globally distributed lab has its advantages – for instance, when I visited MIT for JuliaCon last week and needed a quiet place to work on my talk, the presence of local PCL folks made me feel more at home and less like I was squatting in someone else’s office. The PCL also does a better job of maintaining an externally facing web presence than the PSL did (I heard rumors that a PSL website existed at some point, but I was never able to find it), so I finally have an Intel Labs web page now!

Historically, the PCL’s focus has been making code run really, really fast on parallel Intel architectures. Lately the lab has been concentrating specifically on machine learning applications, which is part of what prompted me to finally start learning more about machine learning on my own. (It also didn’t hurt that some of my previously-PSL colleagues were doing really cool work on a DSL for neural networks, and I wanted to understand that work better.) To get up to speed, I started taking the Coursera machine learning course taught by Andrew Ng. I actually only completed the first half of the course, but as it turns out, even a half-a-Coursera-course level of understanding of machine learning is much better than nothing.[1]

As for me, I continue to contribute to the High Performance Scripting project, but I’m also getting underway with some projects on topics that are new to me. I know a whole lot more about distributed consensus, for instance, than I did a few months ago (thanks to Marco Serafini for answering some very long emails), and I’m even digging into the literature (such as it is!) on verification of neural networks. I’m a beginner at all of this, and not too long ago I was frustrated that I didn’t seem to be using most of the specialized skills I spent so much time and effort developing during grad school. I felt a bit like I did during the first few years of grad school, when I wasn’t an expert in anything. What I’ve now come to realize is that changing topics after one’s Ph.D. is pretty normal, and that I should take advantage of the opportunity to develop depth in new areas. It helps to keep in mind that expertise is a process, and that I’ve been in this situation before and ended up fine.

[1] Before this, the last AI course I took was in grad school in 2009, and as recently as then, they were still telling us that neural networks were out of style.