I’ve attempted to link to the person or people who inspired a particular thought, but there’s a lot of variation in how direct the connection is, and any particular item may not reflect the opinion of the linked person.
5680 people registered for NIPS this year

Applied machine learning
Neural nets
Interacting with humans
Bayes in the time of neurons
Planning & reinforcement learning
Reinforcement learning, in more depth
Generative adversarial nets
Chat bots
- On the other hand, Kaggle’s Allen AI Science Challenge, which required algorithmic participants to answer multiple-choice questions from a standardized 8th-grade science exam, was won using information retrieval methods, not RNNs.
- In dialog automation, one of the biggest challenges is building up an accurate picture (or state) that summarizes the dialog so far (a minimal state-tracking sketch follows this list).
- At Facebook, people are pursuing multiple approaches to dialog automation, but the main one is to go directly from dialog history to the next response, without a transparent intermediate state that can be used for training and evaluation.
- Facebook’s dialog datasets cover a variety of tasks, including transactions (making a restaurant reservation), Q&A, recommendation, and chit-chat. The Ubuntu Dialogue Corpus, with almost 1 million tech-support troubleshooting dialogs, is another useful resource.
- At the moment, some researchers build user simulators to train their dialog systems, but those are difficult to create: a simulator is effectively another dialog system, except that it needs to mimic user behavior, and it’s hard to evaluate how well it is doing (unlike for the dialog system being trained, there’s no notion of “task completion”).
- If you can’t collect huge numbers of dialogs from real users, what can you do? One strategy is to first learn a semantic representation from other datasets to “create a space in which reasoning can happen”, and then start using this pre-trained system for dialogs (see the pre-training sketch after this list).
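
To make the state-tracking point above concrete, here is a minimal sketch of an explicit dialog state for a hypothetical restaurant-booking scenario. The slot names and keyword rules are mine, purely for illustration; a real tracker would use a learned model rather than string matching.

```python
# Minimal sketch of explicit dialog state tracking (hypothetical slot-filling
# restaurant scenario; slot names and keyword rules are illustrative only).
from dataclasses import dataclass, field

@dataclass
class DialogState:
    """Summary of the conversation so far, as slot/value pairs."""
    slots: dict = field(default_factory=dict)

    def update(self, user_utterance: str) -> None:
        # A real tracker would use a learned model; here, trivial keyword rules.
        text = user_utterance.lower()
        for cuisine in ("italian", "thai", "mexican"):
            if cuisine in text:
                self.slots["cuisine"] = cuisine
        for size in ("two", "four", "six"):
            if f"table for {size}" in text:
                self.slots["party_size"] = size

state = DialogState()
state.update("Can I get a table for two?")
state.update("Somewhere Italian, please.")
print(state.slots)  # {'party_size': 'two', 'cuisine': 'italian'}
```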
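And here is a rough sketch of the “learn a representation first, then reuse it for dialog” strategy from the last bullet. The `embed()` function is just a hashed bag-of-words stand-in for an encoder pre-trained on other datasets, and the candidate responses are made up; the point is only the shape of the approach: embed the dialog history in a pre-trained space, then score candidate responses in that same space (a real system would learn a matching model on top rather than using raw similarity).

```python
# Sketch of "pre-train a representation, then reuse it for dialog".
# embed() is a stand-in for a pre-trained sentence encoder; the candidate
# responses and the dialog history are invented for illustration.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a pre-trained encoder: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

candidates = [
    "Have you tried turning it off and on again?",
    "Which version of the driver are you running?",
    "Glad it works now!",
]

history = "My wifi driver stopped working after the update."
scores = [float(embed(history) @ embed(c)) for c in candidates]
print(candidates[int(np.argmax(scores))])
```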
Idea generators
- Everything is an algorithm: It may be useful to view web experiments in the social sciences more explicitly as algorithms. Among other things, this makes it clearer that experimental design can take inspiration from existing algorithms, as in the case of . See also: if we formalize existing RL approaches such as training in simulation and reward shaping by writing them down as explicit protocols, maybe we can make it easier to improve these protocols incrementally (a toy protocol sketch follows this list). (I did some work on this project.)
- Take some computation where you usually wouldn’t keep around intermediate states, such as a planning computation (say value iteration, where you only keep your most recent estimate of the value function) or stochastic gradient descent (where you only keep your current best estimate of the parameters). Now keep those intermediate states around as well, perhaps reifying the unrolled computation in a neural net, and take gradients to optimize the entire computation with respect to some loss function. Instances: Value Iteration Networks, Learning to learn by gradient descent by gradient descent. (A numeric toy of this idea follows this list.)
- If we can overcome adversarial examples, we can train a neural net by giving it the scores for a few prototypes (say, designs for cars, together with the rating a human designer assigned to each), and then use gradient descent on the inputs to synthesize exemplars that score better than any of the ones we can imagine. We would have a “universal engineering machine”, if you like (see the gradient-ascent-on-inputs sketch below).
- How can we implement high-level symbolic architectures using biological neural nets? now calls this “the modern mind-body problem”.
- Neural nets still contain a lot of discrete structure: how many neurons there are, how many layers, which activation functions we use, and what’s connected to what. Is there a way to make it all continuous, so that we can run gradient descent on both parameters and structure, with no discrete parts at all? (A toy relaxation sketch follows this list.)
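
First, the “write the RL training protocol down explicitly” idea. Everything below is a toy of my own construction: the environment and agent are trivial stubs, and the shaping function is arbitrary. The point is only that once pre-training in simulation and reward shaping are ordinary code, they become artifacts you can inspect, version, and improve incrementally.

```python
# Sketch of an RL training protocol written as an explicit program
# (all names hypothetical; Env/Agent are stubs standing in for real ones).
import random

class Env:
    def step(self, action):                 # returns (observation, reward)
        return random.random(), float(action > 0.5)

class Agent:
    def act(self, obs): return random.random()
    def update(self, obs, action, reward): pass

def shaped(reward, obs):
    """Reward shaping as an explicit, inspectable step of the protocol."""
    return reward + 0.1 * obs

def training_protocol(agent, sim_env, real_env, sim_steps=100, real_steps=10):
    """Pre-train in simulation with shaping, then continue on the real environment."""
    for env, steps, shape in ((sim_env, sim_steps, True), (real_env, real_steps, False)):
        obs = 0.0
        for _ in range(steps):
            action = agent.act(obs)
            obs, reward = env.step(action)
            agent.update(obs, action, shaped(reward, obs) if shape else reward)
    return agent

training_protocol(Agent(), Env(), Env())
```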
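Next, a numeric toy of the “keep the intermediate states and differentiate through the whole computation” idea: plain gradient descent on a one-dimensional quadratic is unrolled for a fixed number of steps, and the step size of that unrolled computation is itself optimized against the final loss. I use finite differences for the outer gradient to keep the snippet dependency-free; the papers mentioned above backpropagate through the unrolled steps instead.

```python
# Toy version of differentiating through an unrolled computation (made-up numbers).
def inner_loss(w):
    """The problem the inner optimizer is solving: a 1-D quadratic."""
    return (w - 3.0) ** 2

def unrolled_gd(lr, steps=10, w0=0.0):
    """Run gradient descent and keep the whole trajectory, not just the final iterate."""
    trajectory = [w0]
    for _ in range(steps):
        grad = 2.0 * (trajectory[-1] - 3.0)
        trajectory.append(trajectory[-1] - lr * grad)
    return trajectory

def meta_loss(lr):
    # Loss of the entire unrolled computation, viewed as a function of the step size.
    return inner_loss(unrolled_gd(lr)[-1])

# Optimize the unrolled computation itself; finite differences stand in for
# backpropagating through the unrolled steps.
lr, eps, meta_lr = 0.05, 1e-4, 0.01
for _ in range(20):
    meta_grad = (meta_loss(lr + eps) - meta_loss(lr - eps)) / (2 * eps)
    lr -= meta_lr * meta_grad

print(f"tuned step size: {lr:.3f}, loss after unrolled descent: {meta_loss(lr):.2e}")
```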
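A toy of the prototype-scoring idea. I stand in a kernel-regression scorer for the neural net and invent the data, so this shows only the shape of the approach: fit a differentiable score model to a handful of (design, rating) pairs, then run gradient ascent on the design itself and see whether the model rates the result above the prototypes.

```python
# Sketch of the "universal engineering machine" idea: fit a differentiable scorer
# to a few (design, human rating) prototypes, then gradient-ascend the *input*.
# All data is made up; the RBF-kernel regressor is a stand-in for a neural net.
import numpy as np

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 3))           # 5 hypothetical car designs, 3 features
ratings = np.array([0.2, 0.5, 0.1, 0.8, 0.4])  # hypothetical designer ratings

sigma, ridge = 1.0, 1e-3

def k(a, b):
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * sigma ** 2))

K = np.array([[k(p, q) for q in prototypes] for p in prototypes])
alpha = np.linalg.solve(K + ridge * np.eye(len(prototypes)), ratings)

def score(x):
    return float(alpha @ k(prototypes, x))

def score_grad(x):
    w = alpha * k(prototypes, x)
    return (w[:, None] * (prototypes - x)).sum(axis=0) / sigma ** 2

x = prototypes[np.argmax(ratings)].copy()      # start from the best known design
for _ in range(200):
    x += 0.05 * score_grad(x)                  # gradient *ascent* on the input

print("best prototype rating:", ratings.max(),
      "| scorer's value for synthesized design:", round(score(x), 3))
```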
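Finally, a toy of relaxing one discrete architectural choice, which activation function to use, into a continuous one: the model outputs a softmax-weighted mixture of activations, so “structure” parameters get gradients just like ordinary weights. The task, the candidate activations, and the finite-difference training loop are all made up for illustration.

```python
# Sketch of making a discrete architecture choice continuous: the "which activation
# function?" decision becomes a softmax-weighted mixture, so both the weights and
# the structure parameters get gradients. Toy 1-D regression with finite-difference
# gradients to stay dependency-free; everything here is illustrative.
import numpy as np

activations = [np.tanh, lambda z: np.maximum(z, 0.0), np.sin]

def predict(params, x):
    w, b, theta = params[0], params[1], params[2:]
    mix = np.exp(theta) / np.exp(theta).sum()          # soft architecture choice
    z = w * x + b
    return sum(m * act(z) for m, act in zip(mix, activations))

def loss(params, x, y):
    return float(np.mean((predict(params, x) - y) ** 2))

x = np.linspace(-2, 2, 40)
y = np.sin(1.5 * x)                                    # toy regression target

params = np.array([1.0, 0.0, 0.0, 0.0, 0.0])           # w, b, theta (3 mixture logits)
for _ in range(500):
    grad = np.zeros_like(params)
    for i in range(len(params)):                       # finite-difference gradient
        e = np.zeros_like(params)
        e[i] = 1e-5
        grad[i] = (loss(params + e, x, y) - loss(params - e, x, y)) / 2e-5
    params -= 0.1 * grad

mix = np.exp(params[2:]) / np.exp(params[2:]).sum()
print("learned activation mixture (tanh, relu, sin):", np.round(mix, 2))
```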
Tidbits and factoids
- 20 years ago, Jürgen Schmidhuber’s first submission on LSTMs got rejected from NIPS.
- For some products at , the main purpose is to acquire data from users, not revenue.
- Boston Dynamics doesn’t use any learning in their robots (so far), including the new Spot Mini; it’s all manually programmed.
- For speech recognition, ML algorithms are now benchmarked against teams of humans, not individuals.
- When Zoubin Ghahramani asked who in the audience knew the , essentially no hands went up and he was sad.