Artificial Intelligence in Business: Legal Concerns
Advances in artificial intelligence are creating new opportunities for businesses to differentiate their services and, in some cases, drastically alter their competitive landscapes. Current uses range from purely process-improving tools to technologies that enable disruption across industries. Applications of artificial intelligence systems include calculating credit scores for financial services companies, assisting in hiring decisions, and filtering content on social media platforms.
Even as a tool, the decisions artificial intelligence systems make on behalf of the companies and institutions using them are already having a profound impact on people's lives. With artificial intelligence systems still in the early innings of their adoption across society, it is important to recognize the ethical issues that have arisen so far and to consider whether more should be done to ensure that these systems are embraced, from both a legal and a social perspective, in the future.
Many large employers, such as Hilton, Procter & Gamble, Unilever, and Capital One, use artificial intelligence to screen job applicants, and the practice is becoming increasingly commonplace. HireVue is currently the most widely used artificial intelligence system for this purpose; however, the methodology it uses to screen candidates is not publicly known. It is troubling that many applicants have no way to learn how the system reached its decision on a fundamentally important aspect of their lives, and that the system may have denied them employment for potentially arbitrary reasons.
Also concerning is the realistic possibility that, if an entire industry relies on the same artificial intelligence system, such as HireVue, a person could be barred from that industry for reasons they can never know. Amazon ended its use of artificial intelligence in hiring after finding that its system discriminated against women and continued to exhibit bias against them even after several attempts to fix the issue. The same problems may exist in other artificial intelligence systems. If these issues go unaddressed and become more pervasive, the legality of using artificial intelligence systems in the employment context may come into question.
As in the employment context, artificial intelligence systems have already entered the healthcare sector, and they have likewise begun to draw criticism. Recently, Science published findings that widely used commercial algorithms have demonstrated racial bias against patients. Optum, a pharmacy benefits manager owned by UnitedHealth Group, the largest health insurer globally, is just one company that employs one of these algorithms. According to the study, this bias has resulted in patients receiving unequal access to health treatments and coverage, greatly impacting their lives for the worse.
Biases are also emerging in artificial intelligence applications in medical research and drug development. Several upstart biotechnology companies have argued that artificial intelligence can improve the drug discovery and clinical testing process. However, the datasets these systems rely on disproportionately represent certain segments of the population: the vast majority of people included in genomic research studies are of white European ancestry. This skew means that artificial intelligence systems deployed at the forefront of precision oncology, gene therapies, and diagnostic tools, to name a few examples, may be less effective for underrepresented populations. The possibility that medical innovations may not benefit certain groups in society could erode public confidence and limit the use of artificial intelligence systems in healthcare.
Artificial intelligence systems are also being incorporated into lending and credit scoring functions at financial services companies such as PayPal, Equifax, and Kreditech. In recent years, these companies have started including alternative, "non-traditional" data in their artificial intelligence systems, which they argue provides a better assessment of creditworthiness. For some companies, non-traditional data may include an applicant's social media accounts, geographic location, and even the type of cellphone the applicant uses. This opens the door for financial technology companies to judge different segments of the population in an opaque manner, with no opportunity for a disadvantaged individual to contest a decision that could have serious consequences for their life.
This problem became evident when it was made public that the Apple credit card gave much lower credit limits to women than to men, even in the case of a husband and wife with shared bank accounts and assets. The most troubling aspect of this example is that not only was gender not an input to the algorithm, but its creators did not know how it produced gender-biased outcomes from other, correlated factors.
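To make that mechanism concrete, here is a deliberately simplified, fully synthetic sketch; the data, the proxy feature, and the scoring rule are all invented for illustration and do not reflect any real lender's system. It shows how a "gender-blind" rule can still produce gender-skewed outcomes when it relies on a feature that happens to correlate with gender:

```python
# Hypothetical illustration (all data synthetic): a scoring rule that never
# sees gender can still produce gender-skewed credit limits when it relies
# on a proxy feature that correlates with gender.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    gender = random.choice(["F", "M"])
    # Assumed proxy: a spending-pattern flag (e.g., a merchant-category
    # code) that is more common for one gender in this synthetic population.
    proxy = 1 if random.random() < (0.7 if gender == "F" else 0.3) else 0
    applicants.append((gender, proxy))

def credit_limit(proxy_flag):
    # A "gender-blind" rule: lower the limit whenever the proxy flag is set.
    return 5_000 if proxy_flag else 20_000

def avg_limit(g):
    limits = [credit_limit(p) for gg, p in applicants if gg == g]
    return sum(limits) / len(limits)

print(f"average limit, women: ${avg_limit('F'):,.0f}")
print(f"average limit, men:   ${avg_limit('M'):,.0f}")
```

Because the proxy flag fires more often for women in this synthetic population, the average limit for women comes out markedly lower even though gender never enters the rule, which is precisely why removing a protected attribute from a model's inputs does not, by itself, prevent discrimination.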
Putting it All Together
The examples above show that, even at this early stage of artificial intelligence systems' introduction, a pattern is emerging: the speed of deployment may be coming at the expense of building unbiased decision-making tools that can benefit everyone. If this trend continues and more problems are discovered, not only could the introduction of new applications slow down, but the perceived biases these systems contain may erode the public's confidence in them and ultimately limit their potential.
The examples in this article are just a few that illustrate the need for regulation, which could legislatively slow artificial intelligence's implementation in society. This is why it is important to consider shifting the focus from releasing these systems as quickly as possible to implementing them ethically. Considering that artificial intelligence systems can learn new biases on their own and make decisions they were never explicitly trained to make, more should be done now to minimize the risk of unintended biases before their integration into society makes any beneficial course correction untenable.
If you're interested in learning more about ethical issues in artificial intelligence systems and the corresponding legal issues, feel free to reach out to our technology lawyers. The Harvard Business Review has also published a helpful article on the ethical issues surrounding the use of AI in job hiring.