Welcome! Please contribute your ideas for what challenges we might aspire to solve, changes in our community that can improve machine learning impact, and examples of machine learning projects that have had tangible impact.
Lacking context for this site? Read the original paper: Machine Learning that Matters (ICML 2012). You can also review the slides.
Some humbler challenges
  • I was struck by how ambitious the list of goals was.  I'd be glad, personally, to accomplish one of them in a lifetime.

    I'd suggest some humbler challenges for things we still don't know how to do:

    1. Read a basic textbook (junior high level, maybe, or lower) and do the problems at the end of each chapter.
    2. Take a description of a simple game and learn to play it well.  (There's already a General Game Playing competition that does this from the logic side).
    3. Given the ability to experiment, have a system learn the effects of basic Unix commands (ls, rm, cp, cd, etc.)
    4. Learn to moderate discussions and flag postings as inflammatory or off-topic.
  • Those seem like a great list of challenges for NLP as well.
  • I recall some discussion of #1 at AAAI last year.  I think it would be great to consolidate these ideas into a central location.  A paper may be too static.  Ideas?

    #3 is scary :)  Have you thought about what it would be like to learn what Unix commands do by blindly running them?  I guess you could set up a sandbox system to let the ML system explore, but you'd probably have to go in and patch it up frequently!  Still, it could be an interesting experiment (possibly using RL?).  What would be the "impact" of a system succeeding?  Could I deploy it to manage some aspect of system maintenance, like updating security patches or other admin tasks?  What would it take to really TRUST it?  (This example is nice in that it really highlights the trust angle.)
  • You could set up such a sandbox with a virtual machine, and snapshot it in a known good state.   Restoring to a snapshot takes only a few seconds.   With VirtualBox, this process is scriptable as well, so you could integrate it into a learning loop.  (It may also be scriptable with VMWare; I just have more experience with VB.)
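    A minimal sketch of that restore step, assuming VirtualBox's real `VBoxManage` CLI is installed, and using a hypothetical VM named `ml-sandbox` with a snapshot named `known-good` (both names are placeholders you'd set up by hand first):

    ```python
    import shutil
    import subprocess

    VM_NAME = "ml-sandbox"    # hypothetical VM name
    SNAPSHOT = "known-good"   # snapshot taken once, by hand, of a clean state

    def restore_sandbox():
        """Roll the sandbox VM back to its known-good snapshot.

        Returns False if VirtualBox's CLI is not installed, True otherwise.
        A learning loop would call this between episodes of experimentation.
        """
        if shutil.which("VBoxManage") is None:
            return False
        # Power off (ignore failure if the VM is already stopped) ...
        subprocess.run(["VBoxManage", "controlvm", VM_NAME, "poweroff"],
                       capture_output=True)
        # ... restore the clean snapshot, then boot headless for the next episode.
        subprocess.run(["VBoxManage", "snapshot", VM_NAME, "restore", SNAPSHOT],
                       check=True)
        subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"],
                       check=True)
        return True
    ```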
  • Hi Tom,

    Regarding #4, there is ongoing research:
    http://www.uni-weimar.de/cms/index.php?id=8582 and http://www.uni-weimar.de/cms/index.php?id=21410
    It is mainly focused on uncovering plagiarism, automatic vandalism detection in Wikipedia, and analyzing and predicting quality flaws in Wikipedia. Finding off-topic comments, or rather measuring the descriptiveness of a comment relative to the original post (http://www.uni-weimar.de/medien/webis/publications/papers/potthast_2009a.pdf), was one side track in research on how IR and ML can help tackle "social software misuse".

    Greetings from Germany
  • By #3 I am imagining something pretty simple (and yes, in a sandbox directory that can be regenerated).  You have a directory with 3 files and a small handful of unix commands, such as cp, rm, ls, touch, etc.  The task is to learn what each command does -- what are the preconditions for its success, and what are the effects after it has executed.  There is nothing tricky here.  This isn't system maintenance.  I'm ignoring all the flags and options that these commands have.  Just figure out what each command basically does.  An intelligent school kid could probably learn what these commands do with ten minutes and a little experimentation.  This isn't a Unix puzzle or even a computer puzzle, this is a "figure out what something does by playing around with it" puzzle.  Unix commands just happen to be a convenient domain.   I'm not aware of any ML system that could do this.
  • PS.  What I'd want is some description like this:

    Command: rm X
    - Summary: Removes the file X.
    - Precondition: X must already exist or an error occurs.
    - Postcondition: X is gone.

    Reinforcement learning might be applicable but I don't really see this as a state value estimation problem.  And I'd be disappointed if the system required thousands of training epochs. :-)  These commands are deterministic.
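    The experimentation loop being described could start from something as simple as this sketch: probe a command in a throwaway sandbox directory and record its exit status and the directory's contents afterwards. (This only gathers the raw observations; inducing the precondition/postcondition description from many such probes is the open ML part. It assumes a POSIX `rm` on the PATH.)

    ```python
    import os
    import subprocess
    import tempfile

    def observe(command, setup_files):
        """Run `command` in a fresh sandbox containing `setup_files`.

        Returns (exit_code, directory_contents_afterwards) -- the raw
        evidence from which preconditions and effects would be induced.
        """
        with tempfile.TemporaryDirectory() as sandbox:
            for name in setup_files:
                open(os.path.join(sandbox, name), "w").close()
            result = subprocess.run(command, cwd=sandbox,
                                    capture_output=True, text=True)
            return result.returncode, sorted(os.listdir(sandbox))

    # Precondition probe: "rm X" fails when X does not exist ...
    code, state = observe(["rm", "X"], setup_files=[])
    assert code != 0

    # Postcondition probe: ... and X is gone (and Y untouched) when it does.
    code, state = observe(["rm", "X"], setup_files=["X", "Y"])
    assert code == 0 and state == ["Y"]
    ```

    The sandbox is regenerated for every probe, matching the "directory that can be regenerated" setup above, so even a destructive command can be run blindly.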
  • Good points.  I agree that it isn't really an RL problem since the goal is to learn some "understanding" of the actions, which is not the same as a policy for applying them in support of some task.  Hmm.  Is there work on learning the semantics of an action?  It feels a bit akin to logic learning.

    I could see a solution to this problem having other benefits, like being able to automatically "read" (and execute, for experimentation) source code and annotate it with an "interpretation" of each function (auto-documentation).  
