Can Developmental Neuroscience Save Us from Rogue AI?

development

#1

I recently wrote a blog entry on Artificial Intelligence after seeing Avengers: Age of Ultron with my family. I enjoyed the movie, but felt a little let down by how the AI villain was portrayed. Still, it got me thinking about whether a real superintelligent AI would harm us, how it might actually go about subduing humanity, and whether we could do anything about it. In almost every portrayal of AI, the system springs into being as a fully integrated personality, which flies in the face of what we know about intelligent systems and learning. My hypothesis is that more work (a new field?) at the interface of Developmental Neuroscience and AI could yield enormous insights into brain development and advanced AI architectures, and in the end could be more protective of humanity than Asimov’s Three Laws of Robotics.


#2

The key issue in building an AI is the purpose it is built for. An AI program that uses machine learning to scour cooking videos and learn to cook cannot switch purposes if the rules and functions on which it operates are hard-coded. It could watch surfing videos all day, but it would still be trying to learn to cook.
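To make that concrete, here is a toy sketch (all names and numbers are hypothetical, not any real system): the program’s “purpose” is its hard-coded objective function, so swapping the training data never changes what it is optimizing for.

```python
# Toy sketch, not a real system: the program's "purpose" is the
# hard-coded loss below, and nothing in the data can rewrite it.

def loss(weights, features, target):
    """Fixed objective: squared error against a 'recipe quality' target."""
    prediction = sum(w * x for w, x in zip(weights, features))
    return (prediction - target) ** 2

def train(weights, data, lr=0.01, steps=100):
    """Gradient descent on the fixed loss above, whatever the videos depict."""
    for _ in range(steps):
        for features, target in data:
            prediction = sum(w * x for w, x in zip(weights, features))
            grad = [2 * (prediction - target) * x for x in features]
            weights = [w - lr * g for w, g in zip(weights, grad)]
    return weights

cooking_videos = [([1.0, 0.5], 2.0), ([0.2, 1.0], 1.0)]  # (features, target) pairs
surfing_videos = [([0.9, 0.1], 0.5), ([0.3, 0.7], 0.8)]

w = train([0.0, 0.0], cooking_videos)
# Show it surfing videos and it still minimizes the same objective;
# it just fits "recipe quality" targets to surfing features.
w = train(w, surfing_videos)
```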

If we remove the “artificial” from AI, and the program is free to learn its own purpose, then developmental psychology may come into play. However, our developmental goals are in many ways determined by our biology and the need to survive; our developmental progress has guiding goals and constraints. Without those, it is anyone’s guess whether, or into what, a learning program will develop. If we impose goals and constraints, we are back to a purpose-driven program performing according to its programming.


#3

Interesting blog post; you made me want to see Avengers: Age of Ultron! :smile:

I think we anthropomorphize AI just as we do non-human animals. I would compare an AI that outperforms human intelligence to the cells of a multicellular organism rather than to a single human consciousness. A programmer launches different jobs to accomplish different tasks; think of AI as simply eliminating the need for the programmer. Jobs can launch themselves without a central decision-maker, just as the cells of the body execute their jobs without any homunculus to guide them.
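A toy sketch of that picture (everything in it is hypothetical): a plain work queue where each job decides for itself which follow-up jobs to spawn, with no central scheduler making the decisions.

```python
from collections import deque

# Each job decides what follow-up jobs to spawn, the way cells trigger
# other cells without a homunculus coordinating them.

def sense(env):
    """Emit an 'analyze' job for every reading; nobody told it to."""
    return [("analyze", r) for r in env["readings"]]

def analyze(reading):
    """Spawn an 'act' job only when the reading looks interesting."""
    return [("act", reading * 2)] if reading > 0.5 else []

def act(value):
    print(f"acting on {value}")
    return []

HANDLERS = {"sense": sense, "analyze": analyze, "act": act}

queue = deque([("sense", {"readings": [0.2, 0.9, 0.7]})])
while queue:
    name, payload = queue.popleft()
    queue.extend(HANDLERS[name](payload))  # jobs launch jobs; no central scheduler
```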

I disagree that developmental neuroscience will give us much insight into AI. I think the actual AI algorithms that correspond to the developing-neural-network level (essentially, learning to learn) won’t be far from Elman’s “starting small” results (and similar ones) that seem to pop up over and over. I don’t see how earlier implementation details, such as neural proliferation, migration, etc., will tell us anything about learning to learn, since I don’t think the system is ready to learn anything at that stage.
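For anyone who hasn’t read Elman: the “starting small” result is that networks learned a complex grammar better when the training input (or the network’s memory span) started simple and grew. A toy sketch of that curriculum idea, with made-up data and names, nothing like Elman’s actual recurrent network:

```python
# "Starting small" as a curriculum: train on short, simple sequences
# first, then progressively longer ones, instead of dumping the whole
# corpus on the learner at once.

def train_on(model, sequences):
    """Stand-in for one training pass; here it just tallies bigram counts."""
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            model[(a, b)] = model.get((a, b), 0) + 1
    return model

corpus = ["ab", "abab", "ababab", "abababab"]  # toy 'grammar' sentences

model = {}
for max_len in (2, 4, 6, 8):  # the "starting small" schedule
    stage = [s for s in corpus if len(s) <= max_len]
    model = train_on(model, stage)

print(model)  # bigram counts, accumulated from simple input to complex
```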

That’s my five-minute thought for the day …