The Hill – Artificial intelligence is unavoidable these days.
Companies like Google, Apple and Facebook are all investing in building the technologies — long the stuff of science fiction — into their products. AI technologies allow computers to use large amounts of data to make decisions or to learn on their own.
The companies have been working to educate the public about AI’s capabilities and limitations in an attempt to fend off perceptions of the technologies rooted in movies like "Terminator" and "War Games."
Here’s a guide to some of the ways the growth of AI might change your life over the next five to 10 years.
Companies from Google to Uber to General Motors all want a piece of the fast-growing market for driverless vehicles.
That could take the form of passenger cars, like the kind Uber now has on the road in Pittsburgh and that Google has been testing for years. The cars use artificial intelligence to navigate through city streets.
But there is also interest in developing autonomous vehicles for other purposes. Uber, for example, recently acquired Otto, a startup working on driverless trucks. That could revolutionize the way products are shipped around the country, but could also lead to job losses in the trucking industry.
There are hotly debated questions about how to adapt the rules of the road to this new reality. Major companies in the space are already aggressively lobbying regulators and lawmakers in the hopes that federal guidance can help them dodge a patchwork of state-level regulations.
“One is of course the legal status, one is regulating these technologies,” said Bryant Walker Smith, a professor at the University of South Carolina School of Law, when asked about the questions hovering around the technology. “Another is promoting, another is preparing for the broader changes.”
Machine learning is also having an impact in another broad area of American life: policing.
More departments around the country are adopting “predictive policing” systems, which forecast where crime might occur or identify people who are likely to become victims of crime.
According to Upturn, a tech policy consultancy that works with the civil rights community, 20 of the nation’s 50 largest police departments have used predictive policing software.
Like the broader battle over policing, however, the new machines raise questions for law enforcement about discrimination.
“Automated predictions based on such biased data — although they may seem objective or neutral — will further intensify unwarranted discrepancies in enforcement,” said a coalition of civil rights groups in August. “Because of the complexity and secrecy of these tools, police and communities currently have limited capacity to assess the risks of biased data or faulty prediction systems.”
The White House has also weighed in, broadly, saying that federal agencies that give grants out for artificial intelligence systems that have a major effect on people’s lives should “review the terms of grants to ensure that AI-based products or services purchased with Federal grant funds produce results in a sufficiently transparent fashion and are supported by evidence of efficacy and fairness.”
Weapons that can think for themselves
Artificial intelligence is also coming to the battlefield.
Governments and companies around the world are developing weapons that are capable of targeting and firing on their own. That sets them apart from vehicles such as drones, which must be remotely piloted by a human operator.
But experts say there is a general consensus that humans should retain ultimate control over the weaponry.
“The main thing is people talk about maintaining meaningful human control over weapons,” said Ryan Calo, a law professor at the University of Washington. “There’s a lot of questions that remain.”
Several leading voices in technology have called for an outright ban on autonomous weapons that lack human control.
“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” a group including Tesla CEO Elon Musk and physicist Stephen Hawking said in a statement this year.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."
AI in the exam room…
Artificial intelligence isn’t likely to replace your doctor. But it might become her sidekick.
IBM’s Watson computer system currently has a variant on the market that processes patient data for oncologists and helps them make decisions about treatment. The company is looking to bolster the product and give it credibility.
One research firm estimates that healthcare spending on artificial intelligence will cross $6 billion by 2021, up from an estimated $633.8 million in 2014.
But experts say that the products are far from a panacea. A 2015 study found, for example, that a system that calculated which pneumonia patients should be sent home from the hospital had wrongly calculated that people with asthma should be treated as low-risk.
...and in every classroom
Soon, artificial intelligence might help a teacher spot a student who is struggling with their French or Chinese vocabulary, according to a major education company.
Pearson predicted in a report this year that artificial intelligence will one day be used to process data related to students and flag when they need extra attention from a teacher or a prompt on a digital device. The company, which has a stake in the education market, said those possible developments come with both positives and negatives.