AI and new jobs

Last year, when we were preparing for the AI and ML panel at the Markets Group meeting, we put a lot of effort into preparing for questions on potential and actual adverse effects – but no one asked. The audience was institutional investors, many of them managing pension funds for employees, so we had really expected pointed questions about the potential loss of existing jobs and about how new occupations might arise.

Prompted by a blog post from Timothy Taylor, and quoting from a paper titled ‘The Wrong Kind of AI’, it seems useful to think “about the future of work as a race between automation and new, labor-intensive tasks. Labor demand has not increased steadily over the last two centuries because of technologies that have made labor more productive in everything. Rather, many new technologies have sought to eliminate labor from tasks in which it previously specialized. All the same, labor has benefited from advances in technology, because other technologies have simultaneously enabled the introduction of new labor-intensive tasks. These new tasks have done more than just reinstate labor as a central input into the production process; they have also played a vital role in productivity growth.”
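The “race” in the quote can be made concrete with some toy arithmetic: automation removes tasks from labor, while new technologies add labor-intensive tasks. The numbers below are invented purely for illustration, not taken from the paper.

```python
# Toy illustration of the race between automation and new labor-intensive
# tasks. All task counts are invented; this is not the paper's model.

def labor_share(total_tasks, automated, new_labor_tasks):
    """Fraction of all tasks performed by labor after automation displaces
    some tasks and new labor-intensive tasks are introduced."""
    labor_tasks = total_tasks - automated + new_labor_tasks
    return labor_tasks / (total_tasks + new_labor_tasks)

# Automation alone shrinks labor's share of tasks...
print(labor_share(100, 20, 0))   # 0.8
# ...but enough new labor-intensive tasks can largely offset the displacement.
print(labor_share(100, 20, 25))  # 0.84
```

The point of the sketch is only that displacement and reinstatement pull in opposite directions, which is the tension the paper's title refers to.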


Daron Acemoglu (MIT and IZA) and Pascual Restrepo (Boston University), “The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand”, IZA DP No. 12292, Institute of Labor Economics, April 2019

More machine learning – ScaledML

27 – 28 March 2019

The ScaledML conference is growing up: from a Saturday at Stanford to a two-day event at the Computer History Museum, with sponsors.

Two big new themes emerged:

  • Concern for power efficiency – Simon Knowles (Graphcore) talked about megawatts; Pete Warden (TensorFlow) talked about milliwatts and energy harvesting
  • Development platforms – Adam D’Angelo (Quora) was particularly clear on how Quora organizes development to efficiently support a small number of good developers

David Patterson gave the first talk, on domain-specific architectures for neural networks – an updated version of this talk

The roofline performance model is a useful way to visualize comparative performance. Future performance improvements will have to come from domain-specific architectures; this requires both hardware updates (what Google is doing with its TPUs) and improved compiler front and back ends.
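The roofline model itself is simple enough to state in a few lines: attainable performance is the lesser of peak compute and memory bandwidth times arithmetic intensity. The peak and bandwidth figures below are invented for illustration, not those of any real chip.

```python
# Minimal sketch of the roofline performance model. Attainable throughput is
# capped either by peak compute or by memory bandwidth times arithmetic
# intensity (FLOPs performed per byte moved). Hardware numbers are invented.

PEAK_FLOPS = 90e12   # hypothetical peak compute: 90 TFLOP/s
BANDWIDTH = 900e9    # hypothetical memory bandwidth: 900 GB/s

def attainable(intensity):
    """Roofline: min(peak compute, bandwidth * arithmetic intensity)."""
    return min(PEAK_FLOPS, BANDWIDTH * intensity)

# The "ridge point" is the intensity at which a kernel stops being
# memory-bound and becomes compute-bound.
ridge = PEAK_FLOPS / BANDWIDTH  # 100 FLOPs/byte for these numbers

for name, intensity in [("sparse mat-vec", 0.5), ("dense matmul", 200.0)]:
    bound = "compute-bound" if intensity >= ridge else "memory-bound"
    print(f"{name}: {attainable(intensity) / 1e12:.2f} TFLOP/s ({bound})")
```

Plotting `attainable` against intensity on log-log axes gives the familiar slanted-roof shape, and placing each kernel at its measured intensity shows at a glance which ceiling it is hitting.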

Fig 3 from the Domain Specific Architectures paper linked above.

Intel recognizes this trend – Wei Li described the work his team is doing to incorporate domain specific support into Xeon processors. This blog post has the gist of what he presented.

Most of the talks are here on YouTube

Machine Learning snapshot, June 2018

Kian Katanforoosh and Andrew Ng have been teaching CS230: Deep Learning at Stanford. The list of project reports and posters has just come out, summarizing work done by the students with help from the teaching team – more than 160 projects. It will be interesting to see which of these mature into applications.


Image from Painting Outside the Box: Image Outpainting with GANs (Mark Sabini, Gili Rusak), which was awarded first place among the Outstanding Posters


Reports and posters

The 4th Research and Applied AI Summit has just finished in London. The 125-slide deck on the State of AI is a decent current snapshot of work much more evolved than the Stanford posters.

Artificial Intelligence for Institutional Investors


Venue – the National Press Club, 13th floor ballroom

The Markets Group runs events for institutional investors. I took part in a panel about Artificial Intelligence and Machine Learning, and in a round table discussion.

Diagrams and useful background sources are listed below.

Definition of Artificial Intelligence, illustrating that machine learning is a subset of AI. Sourced from the thorough review of AI in the NHS.


Examples of AI and ML in use

At least 75% of the audience uses Netflix, which applies machine learning to improve its users’ streaming experience as well as for content selection. Its results are driven by an extreme emphasis on keeping existing customers and attracting new ones. The data its customers generate by using the service are used to make recommendations to them. See Netflix’s posts on artwork personalization and image discovery.
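The recommendations-from-usage-data idea can be sketched with a toy item-based collaborative filter. This is a generic illustration of the family of techniques, not Netflix's actual system; the titles and viewing histories below are invented.

```python
# Toy item-based collaborative filtering: recommend the unseen title whose
# viewer set is most similar to those of titles the user has already watched.
# Titles and histories are invented for illustration.
from math import sqrt

views = {
    "alice": {"Stranger Things", "Dark", "The Crown"},
    "bob":   {"Stranger Things", "Dark"},
    "carol": {"The Crown", "Bridgerton"},
}

# Invert the history: title -> set of viewers.
viewers = {}
for user, titles in views.items():
    for title in titles:
        viewers.setdefault(title, set()).add(user)

def cosine(a, b):
    """Cosine similarity between two titles' viewer sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(user):
    """Best unseen title, scored by similarity to the user's watched titles."""
    seen = views[user]
    scores = {
        title: max(cosine(viewers[title], viewers[s]) for s in seen)
        for title in viewers if title not in seen
    }
    return max(scores, key=scores.get) if scores else None

print(recommend("bob"))  # prints "The Crown"
```

Real systems layer far more on top (implicit feedback, context, ranking models), but the core loop is the same: behavior data in, personalized suggestions out.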


Alibaba and Tencent: unlike the US companies, they have built their support for retailing primarily around the customer’s mobile device. They use facial recognition for identification, image scanning and matching for item selection, precise location within shopping venues, and integration with payment apps to enhance the customer’s buying experience, both online and in person at a store (joint Bain and Alibaba report).

Recent reviews – longer reads

Recent developments are a direct result of the enormous improvement in computing capability at drastically reduced cost, driven by Moore’s Law – which continues.

Longer term effects of automation – 10 – 20 year horizon – Bain March 2018

13 artificial intelligence trends reshaping industries and economies – CBInsights February 2018

Capabilities at the end of 2017 – summary in slide format from Jeff Dean (Stanford, Google Brain team)


1950 – Alan Turing asked “Can machines think?” in his paper Computing Machinery and Intelligence
1956 – Artificial Intelligence first defined at a workshop at Dartmouth College
Remember the AI winters of the 1970s and 1990s


Scaled Machine Learning


Bill Dally, Nvidia

Matroid and the Stanford Center for Image Systems Engineering ran the third year of the ScaledML conference yesterday, 24 March 2018. It was a concentrated survey of work in progress in machine learning, with admirably little overt advertising. The overall impression is of enormous real potential (orthogonal to enormous hype and inflated expectations), significant uncertainty about what will actually get done, and a great deal of work in progress on the necessary infrastructure: hardware, architecture, languages, systems, and education.

Agenda and speaker list

08:45 – 09:00 Introduction Reza Zadeh Matroid
09:00 – 10:00 Ion Stoica Databricks
10:00 – 11:00 Reza Zadeh Matroid
11:00 – 11:30 Andrej Karpathy Tesla
11:30 – 12:00 Jennifer Chayes Microsoft Research
13:00 – 14:00 Jeff Dean Google
14:00 – 14:30 Anima Anandkumar Amazon
14:30 – 15:00 Ilya Sutskever Open AI
15:00 – 15:30 Francois Chollet Google
16:00 – 17:00 Bill Dally Nvidia
17:00 – 17:30 Simon Knowles Graphcore
17:30 – 18:00 Yangqing Jia Facebook

From my notes :
The successor to the AMPLab at Berkeley is the RISELab, building Real-time Intelligent Secure Explainable applications to make low-latency decisions on live data with strong security (Ion Stoica). Note the remark about Explainable; this came up as a common theme.
Being able to examine detector errors and mistakes came up again in Reza Zadeh’s Matroid demonstration – this was the only live product shown. A user can build a detector with multiple attributes to pick out images from streaming video.
Bill Dally (Chief Scientist, Nvidia) reckons that Moore’s Law is dead; Simon Knowles (Graphcore) gave a more reasoned explanation about possible performance gains from hardware improvements over the next 10 years.


Graphcore hardware, use of BSP – Simon Knowles

Jeff Dean’s slides

Bill Dally’s slides

Anima Anandkumar

Ion Stoica

Francois Chollet on Keras

Ilya Sutskever

Jennifer Chayes

Yangqing Jia