Time for Special Ethics in Machine Learning?

Will Oremus posted an interesting article yesterday on Slate called Move Fast and Break Trust about the risks of pilot-phase machine learning.  The article touches on machine learning deployments that are drawing attention and a little concern.

The first deployment described is Google Home, the new mischievous “elf on the shelf” that listens to you instead of watching you.  In its product description, Google Home is described as “… a voice-activated speaker powered by the Google Assistant,” and Google encourages you to “ask it questions.”  If you ask it objective questions about the time and the weather, you can expect reasonable answers.  Adrianne Jeffries reports in The Outline that if you ask it questions about politics, current events, public figures, or any number of other interesting topics, Google Home will happily and confidently answer you based on a broad range of erroneous information and conspiracy theories that are likely to show up in Google searches.  In other words, Google Home can be a wonderful font of in-home “fake news.”

The second deployment described is Uber’s controversial rollout of self-driving cars in San Francisco last year.  One self-driving car was filmed running a red light and driving through a pedestrian crosswalk.  Oremus’ article explains that machine learning algorithms require a pilot phase in which they are expected to make many errors, with the expectation that their performance will improve to desirable (acceptable?) levels after sufficient “real world experience” has been acquired.  San Francisco would clearly be a dangerous place for humans if a large number of autonomous vehicle producers were all doing early-phase deployments at the same time.  The same would be true if a single producer deployed a large number of vehicles at once in order to speed up the “learn-in” time.  Oremus makes a soft call for regulation of this kind of testing.

Oremus’ article touches on one particularly interesting detail.  Google has been working on autonomous vehicles in a methodical way, including the machine learning and testing parts.  Google has presumably captured plenty of real-world, street-level data, not only from its autonomous vehicle project but also from its long-established street mapping operations.  In order to catch up quickly, Uber and Tesla chose a more aggressive path, putting code into production and onto the streets quickly in order to accelerate data collection and machine learning.

We can be optimistic that new machine learning methods will make this kind of pilot phase faster and safer.  Still, perhaps it is time for a code of ethics drafted specifically for machine learning.  I would welcome your thoughts on what it should contain and how it would work.  In scenarios like these, it is common to invoke Isaac Asimov’s Laws of Robotics, but they have limited immediate utility.  We need something more concrete that can directly influence the coding and the testing.


(Image provided by geralt at Pixabay.)
