The MIT article highlighted a self-driving car powered by deep learning, a powerful technology that some hope will someday cure disease, make high-value financial trading decisions, and improve our quality of life. The problem it presents, however, is that if developers do not yet fully understand the technology they're working with, it's hard for them to anticipate or be accountable for its failures. And with AI, the failures could be big and costly, both in money and in human life.
The self-driving car, and the admission that humans do not fully understand it, underscores the fact that AI, though incredibly powerful, is still relatively new and misunderstood. It seems businesses in the AI field have much more than complex technology to manage; they also have a potential PR nightmare on their hands. The following are just a few reasons why AI is not being fully embraced in the greater tech, or human, communities.
It’s Getting a Bad Rap
The pace at which AI has developed concerns even some of the top names in technology. Figures like Elon Musk, Bill Gates, and Stephen Hawking have already expressed concerns about AI's rapid development. In 2015, Musk even donated $10 million to the Future of Life Institute to help ensure AI doesn't turn evil. Despite many promising developments, including the entirely nonthreatening use of AI to generate recommended viewing, reading, and purchases on Amazon, Google, and Netflix, many remain concerned about what AI has in store for the human race.
It’s Learning to Make Decisions—for Better or Worse
While some are excited about AI's potential to help diagnose disease or speed up parole decisions and investment selections, others caution that handing decision-making over to robots (including, in the military's case, the decision to take a life) is irresponsible at best. According to one BBC article, Danah Boyd, a principal researcher at Microsoft Research, noted there remain concerns about "the values being written into such systems—and who is ultimately responsible for them." Once those decisions are delegated to machines, the question of responsibility only becomes murkier.
There Is Still No Universal Ethics Panel or Standard Governing AI's Growth
Although certain organizations like Google have their own ethics boards governing the development of AI technology within their own realms, the fact remains that no universal body ensures the ethical advancement of AI. For many, this is a huge concern, as there is no telling what kind of AI could be developed if the technology falls into the wrong hands. The result could be a loss of jobs, a loss of relationships, or, as Hawking has warned, an end to human society itself.
Wherever you stand on the AI issue, the fact remains that the technology isn't going anywhere but up, albeit a bit more slowly than many believe. I'd encourage those troubled by AI's development to consider that its progress thus far has been fractured at best. The striking technologies we see in the media are developed independently, and none have yet been shown to work together to create the superhuman force we fear. That isn't to say the opportunity isn't there; we just aren't there yet. There is still time to put the needed checks and balances in place so that AI can earn a positive public embrace, provided an ethics committee is formed and we can show people the big picture of how this amazing technology can continue to help, not hurt, the human race.
Additional Resources on This Topic:
The Ethical Side of Artificial Intelligence
Artificial Intelligence and Automation: Predictions for the Future
AI is Going to Become Your New BFF
This article was first published on Converge.xyz