Welcome! Please contribute your ideas for what challenges we might aspire to solve, changes in our community that can improve machine learning impact, and examples of machine learning projects that have had tangible impact.
Lacking context for this site? Read the original paper: Machine Learning that Matters (ICML 2012). You can also review the slides.
Machine learning already matters
  • I thoroughly enjoyed reading the position paper, and decided to give a positive view about how much impact machine learning already has had. I posted this on the mloss.org blog.
  • I think much of the community views applications as the "users group" of ML.   Neil Lawrence's blog posting takes this view.  They're glad the group exists because the successes promote continued funding of the field, but they think ML researchers have little to learn from applications and don't want app papers taking up slots at ICML. 

    What we tried to point out in our essay from 1996 (which Kiri mentioned in another thread) was that research and applications exist together in a mutual feedback loop. Application work is not simply a "users group". In ICML the loop is broken, though in other forums (MLJ, KDD, etc.) it seems to be in better shape.
  • Excellent!  I definitely think a constructive and positive view is merited.  There are lots of ways ML does matter now, but doesn't get enough press.  Let's publicize, and celebrate, these successes.

    Your deeper point about ML becoming more invisible as it is adopted is well taken, and it is related to a phenomenon we observe for AI in general (i.e., if a machine can do X, then X no longer really requires "intelligence", so the bar for "artificial intelligence" keeps moving higher).
  • I agree with Cheng, and further I think the list of challenges is simultaneously too narrow and not ambitious enough at some levels (saving $100M has certainly already been done) and far too ambitious at others (avoiding a war).

    It is not hard to find problems where ML will help. What is harder is to find problems where it will not. But ML will only be a part of the solution. A brief list from my own organisation:
    • Inferring good sites for geothermal power production
    • Navigating the biomedical literature by learning topics
    • Predicting when water mains may fail to save money on maintenance
    • Inferring the waveforms needed for neurostimulation to block chronic pain
    • Early detection of deadly fungal outbreaks in hospitals by mining nurses' daily notebooks
    • Predicting the output of rooftop photovoltaic power generation to better manage the electricity grid
    • Predicting demand for media content to enable distributed serving of high demand content and reduce power requirements in data centers
    • Inferring cognitive load in people manning control rooms, etc.
    • Predicting traffic intensity on city roads to enable better traffic light control
    • Predicting efficacy of different cancer treatments on individuals based on their genome
    All of these have complex and non-standard evaluation criteria. They are not toy problems.

    I think the main problem behind the issues being discussed here is that (not unlike in any other subdiscipline) specialists tend to think of problems that will be solved with ML, rather than problems whose solution will include ML but also rely on many other disciplines. Karl Popper made this point very well at the beginning of Realism and the Aim of Science, where he said that subjects do not exist, only problems.

    Once ML researchers understand that they are doing engineering, the normal discipline of engineering applies, which implies looking at the system as a whole. And of course, if you extract toy problems out of context, that will not necessarily help. Another consequence (which I have spoken about before) is that ML research (at least as evidenced by the core ML conferences) tends to be very technique driven. Many of the points Kiri makes are resolved if you are problem focussed.

    Finally, a pointer to an opinion piece I co-wrote a couple of years ago which also took issue with the UCI dataset business.

    Well done Kiri for focussing the discussion.

    Bob Williamson
  • I thoroughly enjoyed reading the paper and could not have expressed my opinion better - thanks Kiri for such a great contribution!

    Regarding section 5, I think one additional way to get more adoption is to make ML algorithms re-usable by researchers and engineers in other disciplines. Since ML algorithms rely on data to be effective, and large datasets are increasingly stored and processed in distributed storage systems, one way to achieve this is to look into "ML-as-a-Service" with concise, well-known interfaces. For example, the task of 'training' can be viewed as (a generalization of) 'storing' items, and the task of 'prediction' can be viewed as (a generalization of) 'retrieving' items.
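
    To make the analogy concrete, here is a minimal sketch of what such a storage-like interface could look like (purely illustrative: the MLService and NearestNeighborService names, and the 1-nearest-neighbor "model", are my own assumptions, not an existing service):

        from abc import ABC, abstractmethod
        from typing import Any, Sequence

        class MLService(ABC):
            """Hypothetical storage-like facade over an ML model."""

            @abstractmethod
            def store(self, items: Sequence[tuple[Any, Any]]) -> None:
                """'Store' labeled items, i.e. train/update the model."""

            @abstractmethod
            def retrieve(self, query: Any) -> Any:
                """'Retrieve' a value for a query, i.e. predict a label."""

        class NearestNeighborService(MLService):
            """Toy instance: 1-nearest-neighbor over numeric feature tuples."""

            def __init__(self) -> None:
                self._memory: list[tuple[tuple[float, ...], Any]] = []

            def store(self, items):
                self._memory.extend(items)  # training == storing examples

            def retrieve(self, query):
                # prediction == retrieving the label of the closest stored item
                def dist(x):  # squared Euclidean distance to the query
                    return sum((a - b) ** 2 for a, b in zip(x, query))
                features, label = min(self._memory, key=lambda fl: dist(fl[0]))
                return label

        svc = NearestNeighborService()
        svc.store([((0.0, 0.0), "cold"), ((10.0, 10.0), "hot")])
        print(svc.retrieve((1.0, 2.0)))  # -> "cold"

    A real deployment would swap the toy learner for any model while keeping the same store/retrieve contract.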

    Looking forward to your talk on Friday!
  • ... "ML-as-a-Service" with a concise and known interfaces.  For example, the task of 'training' can be viewed as (a generalization
    of) 'storing' items, and the task of 'prediction' can be viewed as (a
    generalization of) 'retrieving' items.
    @Ralf: This is indeed the viewpoint being taken by the "probabilistic databases" folks. 

    There's also a bunch of work on probabilistic programming languages, although, AFAIK, it is limited to exact (or MCMC) Bayesian inference, where the program defines the generative process.
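
    As a rough illustration of that idea (not any particular PPL; just a hand-rolled sketch in which an ordinary program defines the generative process and a few lines of Metropolis MCMC perform the inference; the coin-flip model and all names are my own assumptions):

        import math, random

        # Generative process as ordinary code: a coin with unknown bias
        # theta ~ Uniform(0, 1) produces the observed flips below.
        data = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads

        def log_likelihood(theta):
            # log P(data | theta) under the Bernoulli model above
            if not 0 < theta < 1:
                return float("-inf")
            return sum(math.log(theta if x else 1 - theta) for x in data)

        # Metropolis sampler over theta (the uniform prior cancels in the ratio)
        theta, samples = 0.5, []
        for _ in range(20000):
            proposal = theta + random.gauss(0, 0.1)
            log_accept = log_likelihood(proposal) - log_likelihood(theta)
            if log_accept >= 0 or random.random() < math.exp(log_accept):
                theta = proposal
            samples.append(theta)

        print(sum(samples[5000:]) / len(samples[5000:]))  # posterior mean of theta

    A real PPL automates exactly this step: you write the generative program, and the language supplies the inference engine.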

  • Neil Lawrence claims that machine learning should really not be measured with respect to applications.