I was interested in math and science, but I also had a passion for law, and I was fortunate to pursue all of these during my junior year of high school through an internship in the patent law office of a digital watermarking company. I had a great mentor and learned a lot about the logistics of securing ideas and inventions, and about what makes a technology truly innovative. It was my first exposure to engineering outside of the equations I learned in school, as well as to the field of machine learning. It was a foundational experience for me, and it cemented my desire to go into engineering.
I began my PhD in electrical engineering at Stanford in 2019, and today my research focuses on designing hardware for state-of-the-art models used in machine learning and cryptography, and on learning how we can make them more efficient. In autonomous driving, for example, there are multiple things going on at once and conditions are always changing. If a car is changing lanes to the right, it might be useful in that moment to devote more resources to determining whether another car is to its right, rather than to determining whether one is to its left. Our research looks at these sorts of tradeoffs between accuracy, time, and energy spent – and the impact of those tradeoffs – when running machine learning models that make such precise decisions.
When it comes to cryptography applications, we’re usually interested in efficiency – how we can design hardware that implements security protocols but doesn’t cause delays for people when they browse the internet or send a text, for example. It’s also important for us to understand just how fast cryptographic algorithms can be run, so that no one can gain an unfair advantage by running them faster than everyone else.
My parents have always said that while it’s good to have technical skills, it’s just as important to be able to communicate clearly and effectively about what you’re working on, and to make sure your research is accessible. Today, I am passionate about improving the comprehensibility of these algorithms and how they work so that everyone can have confidence in them when they’re used – especially for applications like autonomous driving, where safety is paramount. At the end of the day, the goal of all this technology is to benefit society and help the world. We cannot do that if people are reluctant to adopt technologies because we have not properly communicated how they work. Researchers and engineers need to take responsibility for this communication as they do the work.
It’s also important that as we innovate, we think about policymaking and regulation in tandem with technological questions. I’m involved with the Stanford Science Policy Group, which gives students an opportunity to ask what role scientists should play in policy development. I am a huge fan of superhero movies, and I always think about the Spider-Man quote: “With great power comes great responsibility.” Having scientists and researchers in conversation with policymakers and legislators to help govern the consequences of this research should be part of our responsibility as we build new technology.