Post written by Michelle Gill:

This post is from the inaugural O’Reilly Artificial Intelligence (AI) Conference, held on September 26–27, 2016 in New York City! I was able to attend, courtesy of Women in Machine Learning & Data Science and O’Reilly Media, and would like to share some of the themes present in the talks.

Human and Machine, not Human vs Machine.

A common theme among news stories about AI is that it will be used to replace jobs. However, a very different reality was presented at the O’Reilly AI Conference. Instead of replacing humans, AI was positioned as a supplement to our work, freeing us to do more of, and better versions of, the specific tasks that humans can do better than machines.

Peter Norvig made this point in the opening session when he described Show and Tell, Google’s recently open-sourced image caption composer. In many cases, suitable or even excellent captions are generated. However, sometimes these captions lack important context or are (hilariously) wrong altogether (see image below). Clearly, additional human intervention would be needed wherever the correctness of these captions is critical.

Peter Norvig describes some amusing mistakes made by Google’s Show and Tell when captioning images. Here, “A man riding a skateboard” was generated for a picture of Elvis Presley.
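For readers curious about what sits under the hood, Show and Tell follows the now-familiar encoder-decoder pattern: a convolutional network summarizes the image into a vector, and a recurrent network emits the caption one word at a time. The toy PyTorch sketch below shows only the shape of that idea; the layer sizes, vocabulary, and training setup are my own placeholders, not Google’s released code.

```python
# A minimal sketch of the encoder-decoder pattern behind caption generators like
# Show and Tell: a CNN encodes the image, and an LSTM decodes a word sequence.
# All sizes here are made up for illustration.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                  # image -> single embedding
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.to_vocab = nn.Linear(embed_dim, vocab_size)

    def forward(self, images, captions):
        img = self.encoder(images).unsqueeze(1)        # (batch, 1, embed_dim)
        words = self.embed(captions)                   # (batch, T, embed_dim)
        seq = torch.cat([img, words], dim=1)           # image token starts the sequence
        out, _ = self.lstm(seq)
        return self.to_vocab(out)                      # per-position word scores

# Shapes only; real training pairs images with reference captions and uses a
# cross-entropy loss over the generated words.
logits = TinyCaptioner()(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 6, 1000])
```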

One of my favorite talks, by Anirudh Koul and Saqib Shaikh of Microsoft Research, was about Seeing AI, which uses computer vision to help visually impaired individuals understand events happening around them. In cases like this, rather than taking existing jobs, AI can provide life-enriching, and even life-saving, functionality in a way that is difficult for another human to provide.

The use of deep learning to uncover disease mechanisms was also presented. Laura Deming and Sasha Targ described their research, which uses neural networks to understand links between gene expression and aging. They noted that deep learning is often better at prediction than biologists are, but it doesn’t provide mechanistic insight into those predictions.
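The talk didn’t go into implementation details, but the kind of model described, one that predicts age from a gene-expression profile, can be sketched in a few lines. Everything below (the synthetic data, the scikit-learn network, the layer sizes) is my own illustrative assumption, not the speakers’ setup.

```python
# Illustrative sketch only: predict age from gene-expression profiles with a
# small neural network. The data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))     # 500 samples x 200 genes (placeholder)
y = rng.uniform(20, 80, size=500)   # chronological ages (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

A model like this can do well on the prediction task, yet nothing in its weights tells a biologist which pathways drive aging, which is exactly the gap between prediction and mechanism that the speakers highlighted.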

Future Paradigm Shifts.

A second theme at the conference was the push to make AI practical, or “go beyond demos,” as Hilary Mason advocated in her talk about “Practical AI Product Development.” She showcased Pictograph.us, which uses deep learning to analyze Instagram photographs.

There was also discussion of methodological advances in AI. Yann LeCun described Facebook’s efforts to use deep learning for prediction, such as predicting the frames of a video from the ones that came before. Yann noted that the success of these endeavors would likely require alternative or entirely new types of neural networks.
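To make the video example concrete, here is a minimal sketch of next-frame prediction: a small convolutional network takes the last few frames and regresses the next one. This is purely an illustration under my own assumptions (frame count, resolution, plain MSE loss), not any of Facebook’s models, which, as Yann suggested, may ultimately require quite different architectures.

```python
# Minimal sketch of next-frame prediction: given the previous k frames, a small
# convolutional network regresses the pixels of the next frame.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, k_frames=4, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k_frames * channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, frames):             # frames: (batch, k*channels, H, W)
        return self.net(frames)            # predicted next frame: (batch, channels, H, W)

model = NextFramePredictor()
past = torch.randn(8, 4, 64, 64)           # 8 clips, 4 grayscale 64x64 frames each
target = torch.randn(8, 1, 64, 64)         # the frame we want to predict
loss = nn.functional.mse_loss(model(past), target)
loss.backward()                            # a real training loop would step an optimizer here
print(loss.item())
```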

A Roadmap for Responsible AI.

Finally, Tim O’Reilly’s talk advocated for the conscientious implementation of AI, in a way that allows more individuals to participate meaningfully in the economy. Tim drew parallels between the current AI revolution and the Industrial Revolution, and urged us to use the resources freed by AI in ways that reduce or alleviate inequalities in society.

I felt O’Reilly AI painted a pragmatic view of the current accomplishments and remaining challenges for deep learning. The seminars and discussions were well positioned to give relative newcomers, such as myself, a glimpse of the field while also letting longtime practitioners delve into the details of recent advances. It was also fun to attend a brand-new conference and experience a field that is growing so quickly. Next year’s conference will be held in New York City on June 27–29, and I’ve already marked my calendar!