Hi, I'm Matthew.
I'm currently learning about safety and security in agentic artificial intelligence.
How can we ensure that deep neural networks, especially those deployed in sensitive or life-critical systems, will be reliable enough to avoid potentially significant harms? How do we determine that we can trust them to function justly, fairly and predictably? What new challenges do autonomous physical and digital agents present to our societies and practices? These are a few of the questions I'm trying to answer.
Previously, I've:
- directed growth engineering at Bitly,
- founded, scaled and led the international infrastructure security groups at Tesla,
- grown EMEA technical security for Uber,
- helped build high-trust, SOTA-challenging, distributed systems at M.C. Dean, Inc., and
- helped edit the first volumes of the emerging (and 2008 award-winning) Community Literacy Journal.
While at Tesla, I was fortunate to earn a Master's degree in software engineering from the Harvard Extension School. I wrote a thesis on improving the generalization and robustness of convolutional neural networks for question answering, which you can find here if you like reading that sort of thing.
Quite recently, I was honored to speak about the increasing need for functional certification in AI systems during the AI for Nuclear seminar series at the University of Stuttgart.
A call to action: if you are working on problems in AI safety, security, and robustness, or building services that support public safety and security, I'd love to help. You can send me an email here, or say hi on LinkedIn.
I also have a small presence around the web: