The headlines are everywhere: artificial intelligence is coming, and it’s going to change the world. From automating jobs, to predicting patterns in climate change, to driving cars, to the mundane such as making breakfast, the message is clear: AI will do your job, and it’ll do it even better than you can. Based on these bullish predictions, AI feels inevitable, poised to take the world by storm.
However, these corporate testimonies don’t reflect the reality of artificial intelligence. In many fields the technology simply isn’t ready, and a lot of the advancements frequently touted by companies aren’t anywhere near ready to replace human workers. Sai Balasubramanian, writing about radiology, notes this about current AI technology:
“The most significant issue is that the technology simply isn’t ready, as many of the existing systems have not yet been matured to compute and manage larger data sets or work in more general practice and patient settings, and thus, are not able to perform as promised.”
It would be misleading to market artificial intelligence and automation, in their current form, as an alternative to human supervision and operation. Medical professionals can attest to that: adoption in their field is still far from the norm.
Another observation worth making about AI in radiology is that the view of artificial intelligence is shifting from replacement for radiologists to assistant. The algorithms will not be making the decisions; only the radiologist will. But the algorithms are what provide the radiologist with the information needed to reach that conclusion.
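A minimal sketch of that assistant pattern might look like the following. Everything here, the Finding type, the triage function, and the stand-in review callback, is hypothetical, meant only to show the model suggesting while a human decides:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    region: str
    model_confidence: float  # the algorithm's suggestion, never a verdict

def triage(findings, radiologist_review):
    """Surface model suggestions; keep only human-confirmed findings."""
    confirmed = []
    for finding in findings:
        if radiologist_review(finding):  # a human makes every final call
            confirmed.append(finding)
    return confirmed

# Stand-in for a radiologist's judgment; in practice this is a person
# reading the scan, not a rule on the model's own confidence score.
def review(finding: Finding) -> bool:
    return finding.model_confidence >= 0.9

suggestions = [Finding("left lung, upper lobe", 0.91),
               Finding("right lung, lower lobe", 0.42)]
print(triage(suggestions, review))
```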
However, when these constraints go unrecognized or are not taken seriously, the consequences can be severe.
The Modern Problems with AI:
One major issue with AI is the potential for bias: algorithms can give people of certain demographics inaccurate results, leading to unfavorable outcomes or misdiagnosis. One research study examined an algorithm for identifying patients in need of care, which estimates need based on predicted medical costs. Because black patients’ recorded costs were more similar to those of healthier white patients, they were less likely to be flagged for high-risk care management, a gap that can translate into a real disparity in quality of healthcare.
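To make that mechanism concrete, here is a small, hedged simulation. The numbers are invented: both groups share the same distribution of true medical need, but one group’s recorded costs run lower (for example, due to reduced access to care), and a cost threshold then flags them less often:

```python
import random

random.seed(0)

def flag_rate(cost_multiplier, n=100_000):
    """Share of patients flagged 'high risk' when need is proxied by cost."""
    flagged = 0
    for _ in range(n):
        need = random.random()         # true medical need: identical across groups
        cost = need * cost_multiplier  # observed spending: not identical
        if cost > 0.7:                 # the algorithm flags by cost, not need
            flagged += 1
    return flagged / n

# Group B's recorded costs run ~15% lower for the same need,
# so fewer of its members clear the cost threshold.
print("flag rate, group A:", flag_rate(1.0))   # ~0.30
print("flag rate, group B:", flag_rate(0.85))  # ~0.18
```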
The biggest, and most obvious, limitation is simply what technology is available at the moment. When the technology underlying an algorithm is not mature enough to support higher-level automated decisions, the quality of that algorithm in action suffers. In an IIHS study of automated safety and driver-assist features in new cars, some tasks, such as adaptive cruise control, were performed reasonably safely and consistently, while other features proved dangerous without human moderation: lane-keeping sensors, for instance, can misread lane markings and fail to handle the many edge cases and unpredictable behaviors that human drivers must still manage.
The Consequences:
One of the most notable failures of AI in recent memory is the fatal collision between an Uber autonomous vehicle and a jaywalking pedestrian. The backup “driver” overseeing the vehicle was distracted by their mobile phone when, at night, a jaywalker stepped into the road from the side. The vehicle could not detect her, the operator was too distracted to intervene manually, and the pedestrian was struck and later died.
This was among the first incidents of its kind, and people immediately began searching for something to blame. Ultimately, the National Transportation Safety Board concluded that the following factors were responsible:
- The vehicle was poorly programmed: it was neither capable of detecting nearby obstacles at night nor designed to account for edge cases such as jaywalking.
- Uber at the time lacked a safety department, and little thought appears to have gone into risk assessment or mitigation.
- The operator had been distracted, in part as a result of Uber’s culture of “automation complacency,” which leads operators to assume the vehicle is fully capable and that there is never a reason to question its decisions or intervene.
The point about automation complacency is especially important: the notion that algorithms are trustworthy by default is concerning on its own, and more so given that Uber did little to address the problem even though its autonomous-driving AI was incomplete.
Bias is also a frequent issue with real-world consequences, and criminal justice offers a glaring case. A study of risk assessment software used in Florida courts found that its questionnaires significantly overestimated the reoffense risk and threat posed by black defendants while underestimating that of white defendants. For example, the study found that even when a white defendant had a lengthy criminal record and a black defendant had a shorter one, or none at all, the software often rated the black defendant as the greater risk for crimes of the same severity.
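One way to surface exactly this kind of skew is to compare false positive rates across groups: how often each group is flagged high-risk yet does not reoffend. A small sketch, on entirely hypothetical records:

```python
def false_positive_rate(records):
    """records: list of (predicted_high_risk: bool, reoffended: bool)."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return fp / negatives if negatives else 0.0

# Hypothetical audit samples per group, for illustration only.
group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(False, False), (True, False), (False, False), (True, True)]

print("FPR group A:", false_positive_rate(group_a))  # 2/3, flagged more often wrongly
print("FPR group B:", false_positive_rate(group_b))  # 1/3
```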
The important takeaway from these incidents is to maintain realistic expectations. Branding prototypes and early algorithms as full AI, and touting them as revolutionary or powerful, invites misuse: people come to believe the machines are more capable than they actually are, and that overestimation leads to accidents.
Solutions and Drawbacks:
Practice Makes Perfect: But what about privacy?
We’ve all heard it: one of the best ways to get better at something is practice. For machines, practice means obtaining data and having developers and machine learning algorithms learn from it.
For most sectors where AI is relevant, more data is the commonly suggested remedy for inexperienced algorithms and insufficient sample sizes. But as the volume of collected data grows, a new issue emerges: skepticism over whether that collection violates user privacy.
One example is Amazon’s famous Alexa assistant, built into its smart speakers. While Amazon insists that Alexa records user data primarily for quality assurance and to help “train” its AI to communicate better with humans, those recordings have been found to be passed on to third parties. Amazon recordings have even appeared as court evidence, as in one New Hampshire murder trial. While it is easy to agree that a suspect should be held accountable, it is also rightfully worrisome that personal data recorded without explicit consent can now constitute evidence and be legally accessed by authorities.
For developers, such incidents risk an irreparable breach of user trust. Amazon has since offered more transparency, giving users the option to opt their devices out of having recordings sent to contractors for manual review, in response to the privacy concerns surrounding its products’ data usage. Even so, some people simply no longer trust the company with their data. This response to an article, for example, is a clear jab at the trust issues clouding big tech:
“And if you actually trust Amazon to do a danged thing to protect your privacy, may I interest you in a nice bridge for sale in Brooklyn?”
It is in the best interest of AI designers to maximize data security and privacy and to properly assuage users’ concerns. At minimum, users should know exactly how their data is being used, without any secrets; any feature that involves data harvesting should be openly announced, and users should be given the option to opt out.
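That principle is simple enough to express in code. Below is a minimal sketch, with an invented PrivacySettings type and a stubbed upload call, of a consent gate that keeps recordings on the device unless the user has explicitly opted in:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    share_recordings_for_review: bool = False  # sharing is off by default

def send_to_review_queue(recording: bytes) -> None:
    # Stand-in for a real upload; just prints instead of sending anywhere.
    print(f"uploading {len(recording)} bytes for manual review")

def maybe_upload(recording: bytes, settings: PrivacySettings) -> bool:
    """Upload a recording for review only if the user explicitly opted in."""
    if not settings.share_recordings_for_review:
        return False  # honor the default: nothing leaves the device
    send_to_review_queue(recording)
    return True

print(maybe_upload(b"...", PrivacySettings()))                                  # False
print(maybe_upload(b"...", PrivacySettings(share_recordings_for_review=True)))  # True
```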
Optimizing Safety: Is it really making us safer?
The trolley problem is famous: do you pull the lever and kill one person who wasn’t otherwise going to die, or do you leave the lever alone and allow five people to die through inaction? This is the very decision automakers and software designers face when programming their self-driving algorithms to respond to an unavoidable accident. One project, the Moral Machine, seeks to collect aggregate opinions on whom a vehicle should sacrifice in such an event, weighing not just the number of people but also socioeconomic status, health, age, and whether or not people are following the law.
However amusing it is as a website, many have criticized it as dangerous when taken as an actual method for answering this question. The questionnaire starts from the premise that not all lives should count equally in the eyes of a vehicle, and treats life in a detached, utilitarian fashion: whichever outcome loses the least net utility wins.
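To see why critics recoil, consider how literally that premise translates into code. The sketch below, with entirely invented weights, is the utilitarian scoring the questionnaire implies, not anything an automaker has published:

```python
# Hypothetical attribute weights: exactly the kind of valuation critics reject.
WEIGHTS = {"adult": 1.0, "child": 1.2, "crossing_legally": 0.3}

def utility_lost(people):
    """Sum the 'utility' of everyone harmed in one candidate outcome."""
    total = 0.0
    for person in people:
        total += WEIGHTS[person["kind"]]
        if person.get("crossing_legally"):
            total += WEIGHTS["crossing_legally"]
    return total

swerve = [{"kind": "adult", "crossing_legally": True}]
stay = [{"kind": "child"}, {"kind": "adult"}]

# The car "chooses" whichever outcome destroys the least total utility.
outcomes = {"swerve": utility_lost(swerve), "stay": utility_lost(stay)}
print(min(outcomes, key=outcomes.get), outcomes)
```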
As Russell Brandom, a writer for The Verge, puts it:
“That’s a bad deal, and it has nothing to do with the way moral choices actually work. I am not generally concerned about the moral agency of self-driving cars — just avoiding collisions gets you pretty far — but this test creeped me out. If this is our best approximation of moral logic, maybe we’re not ready to automate these decisions at all.”
Although this may feel like a flimsy way to test social responsibility in AI, it does bring us back to the topic of safety and responsibility.
So what? What’s the conclusion?
AI should not be replacing humans anytime soon, given the constraints and immaturity of today’s systems. For AI designers and operators, it is important to know the true limitations of your product and not to overextend it.
This quote, from Brendan Dixon of Mind Matters, sums it up perfectly:
“Researchers and computer scientists need to stop believing science fiction; they are not creating some new life form, but machines, machines that can harm as well as help. They should submit their machines to whatever tests, licensing, and certification our society considers necessary before unleashing their creations.”