Brian Kulis is a professor of Machine Learning at Boston University (and my former adviser). He has also worked in industry, on the crack team responsible for Amazon’s wake word. We discuss the history of AI, the role of industry vs. academia in research, and AI safety. He has watched his field go from a small group of math nerds failing to classify images of dogs and cats to the hottest technology on the planet (one that may kill us all). As a grad student, I read sci-fi-tinged discussions of AI doom online and would sometimes share them with Brian and other professors (I was fun at parties). Now, those scenarios are taken seriously by the likes of Yoshua Bengio.
We mostly hear about AI safety from a vocal minority with well-developed views on the subject: people who have made arguing about it in public one of their things. But those are, by and large, not the people who answer the widely cited surveys about AGI and its consequences. This conversation is a valuable snapshot of how one such quieter voice, a tenured ML researcher, thinks about it.
But I don’t want to sell this as an hour-long chat about p(doom); we spend most of the time on less fantastic matters. ChatGPT’s summary of the conversation:
Faculty of Computing and Data Science at BU: Kulis discussed the establishment of this new unit at BU, which facilitates cross-disciplinary collaboration and recruits students from across the university’s divisions.
Machine Learning's Evolution: The history of machine learning, starting from the 1940s with McCulloch and Pitts' mathematical model for neurons, was discussed. Kulis explained the progression from the early models to the current deep learning wave, highlighting the shifts in ML paradigms over the decades.
Transition from Expert Systems to Current ML Models: Kulis described the transition from expert systems, which were prevalent in the 70s and 80s, to current machine learning models. He noted how early expert systems were limited and brittle, making them unsuitable for real-world applications.
Machine Learning Research and Industry: The conversation touched on the movement of AI and ML talent between academia and industry. Kulis spoke about the challenges academia faces in retaining AI faculty, who are often drawn to industry roles.
Statistical AI and Kernel Methods: The interview discussed statistical AI, emphasizing its role in machine learning's history. Kulis explained the concept of probabilistic models in statistical AI and its strengths in unsupervised learning.
Academia's Role in ML Research: Kulis reflected on the role of academia in the current landscape of machine learning research. He talked about the challenges academia faces, such as the inability to train large models due to resource constraints, and the importance of long-term thinking in research.
Existential Risks of AI: The discussion also covered the topic of AI and existential risks, with references to debates and opinions within the ML community, including those of notable figures like Hinton and LeCun.
The Future of Machine Learning: Kulis expressed his views on various aspects of machine learning, including large language models, reinforcement learning, AGI, and personalized medicine. He opined on what he considers overrated or underrated in the field.