The Problem With Ethical Frameworks and the Need for Regulation

Ethics are essential in the creation of responsible technology, but they're not strong enough to truly hold us accountable.

Yale psychologist Paul Bloom suggests we are born knowing right from wrong. He argues that to children, the idea of fairness is quite simple, but as we grow older and experience more of life, we pick up cultural behaviors and develop rational thinking, shifting our idea of fairness from concrete to contextual.

The tech industry is full of contexts that make it extremely difficult to agree on what's right and what's wrong. While some things seem obvious on the surface, the more you dig into an issue, the less sure you become about what should be done. Let's take a look at the heavy hitters that have become heated topics in 2020:

  • Unfairly biased AI: Should we let AI make decisions for us without fully understanding how those decisions are being made? Machine learning is so complex that even developers cannot always anticipate how parameters will contribute to a system's outcome. This is called the Black Box Problem (a short sketch after this list illustrates the point). Given how opaque these systems can be, perhaps AI should be prohibited in areas such as criminal justice and healthcare, where the risk of negatively affecting human lives is too high.
  • Spread of misinformation: Should social media companies moderate their platforms and censor information that could be considered harmful? Moderation as a solution is flawed: it is inherently subjective, too slow to be practical, and risks silencing dissident voices that may ultimately prove helpful. However, it's not unreasonable to expect some mitigation of algorithms that promote the most engaging content rather than the most factual.
  • Data privacy: One study found that 91% of users consented to Terms and Conditions without reading them, which really makes you wonder why we bother with them at all. To make matters worse, data is sometimes used for purposes other than what was consented to. Should we make these agreements more comprehensible and revocable? Perhaps we should stop using "I consent" checkboxes and buttons altogether, given that we have been conditioned to click them without thinking.

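To make the Black Box Problem mentioned above a little more concrete, here is a minimal sketch using scikit-learn on synthetic data. The dataset, feature names, and models are illustrative assumptions, not anything cited in this article: a shallow decision tree can be printed as a handful of human-readable rules, while an ensemble of hundreds of trees usually scores better but offers no short explanation for any individual decision.

    # A minimal sketch of the Black Box Problem (illustrative only):
    # a shallow decision tree yields readable rules, while a large
    # ensemble yields a score with no concise, human-readable explanation.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-in for a high-stakes dataset: 5 anonymous features,
    # binary outcome. Nothing here comes from a real system.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

    # Interpretable model: a depth-2 tree reduces to a few if/else rules.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=[f"f{i}" for i in range(5)]))

    # Opaque model: 500 trees voting together. It typically scores better,
    # but there is no short rule explaining why any one person was
    # classified the way they were.
    forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    print(forest.predict_proba(X[:1]))  # a probability, with no stated reason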
In response to growing public outrage and internal pressure from employees over these ethical dilemmas, companies have begun to make public commitments to building responsible technology. These commitments, which we can call ethical frameworks, outline the ways in which a company plans to hold itself accountable in the creation and application of AI and other new technologies.

Google's AI Principles

Google outlined seven objectives they are aiming for and listed four areas of technology they will not design or deploy. While the principles themselves are vague and open to interpretation, they've published case studies and research backing up their commitment to the principles elsewhere on ai.google, as well as within the complementary People + AI Research (PAIR) guidebook and related research. Importantly, they state that they plan to approach their work with "humility, a commitment to internal and external engagement, and a willingness to adapt."

Microsoft's AI Principles

Microsoft defined six categories that describe their approach to building responsible AI. For each category there is a short video where someone from the company explains what it means. Microsoft.com/ai is filled with case studies and stories showing how they've put their principles into action under the guidance of two internal teams: the Office of Responsible AI (ORA) and the AI, Ethics, and Effects in Engineering and Research (Aether) Committee. Because they build AI technology for consumers, they've also created guidelines for how to use their technology responsibly.

While these ethical initiatives appear robust and sincere, they're hardly a solution to the industry-wide problem of technology innovators deciding what's right and wrong. In fact, Margaret Mitchell, a senior research scientist at Google, says they aren't meant to define right and wrong at all; rather, they "give you the tools to understand different values." Essentially, they provide guardrails, and researchers at Microsoft agree:

“Despite their popularity, the abstract nature of AI ethics principles makes them difficult for practitioners to operationalize. As a result, and in spite of even the best intentions, AI ethics principles can fail to achieve their intended goal if they are not accompanied by other mechanisms for ensuring that practitioners make ethical decisions.”

Besides being more aspirational than practical, ethical frameworks have three major problems:

They lack consequences

The biggest issue with ethical frameworks is that they are, by nature, voluntary. For that reason, they should never replace regulation but instead supplement it, urging technologists to go beyond the call of duty.

They are specific to a company, not a technology or industry

Without collectively agreed-upon ethical standards, those who choose not to act ethically will undermine the public's trust not just in that company, but in the entire industry.

They give us a false sense of security

Ethics washing is when a company makes a coordinated effort to appear committed to building fair AI systems and technology, when in actuality they have merely disguised basic rights and decency as good-faith commitments that we should be thankful for.

There is one big way ethical frameworks matter, though: ethics greatly inform our laws and policies.

We need regulations yesterday

There are two kinds of regulation: positive and negative. Positive regulation helps us maintain standards, whereas negative regulation mitigates risk. There is a place for both in the tech industry, but it's crucial that we focus on the latter. Wherever artificial intelligence makes decisions for humans, there is a risk that those decisions will be harmful. The more advanced AI becomes, the bigger that risk gets.

Policies have their own problems, though, such as:

They take too long to create

In the US, it takes on average 215 days for a bill to become a law, and that's only for the ~10% that actually make it through. Regulation moves at a glacial pace compared to technological innovation.

There's money in politics

The amount of money spent on lobbying has tripled since 1998, reaching a grand total of $3.47 billion in 2019. Lobbying goes both ways, of course, and it's possible to lobby for ethical laws. Still, big corporations have a financial interest in avoiding regulation and tend to pay top dollar to protect their interests.

We inevitably let our guard down

As with ethical frameworks, the mere knowledge that an industry is being regulated ultimately makes us feel safer. Jo Wolff calls this "regulatory drift": the phenomenon where "strong initial regulation softens through complacency."

Regulating will be hard, but it isn't impossible

It starts with basic algorithm literacy. You don't need a degree in computer science or law to know how we should and shouldn't use artificial intelligence, but a general understanding of how algorithms work and where they are used will help us make informed decisions about how to protect our rights and where to limit the use of algorithms.

Algorithm literacy applies not just to policy makers, but to everyday people as well. After all, it is our data that is the driving force behind the algorithms. Tae Wan Kim said in an episode of Consequential:

"Data subjects can be considered as a special kind of investors, like shareholders."

As creators, users, and subjects of algorithms, we have the power and responsibility to demand they be regulated.

Once we understand algorithms, we can begin to identify the risks involved and decide what can acceptably remain voluntary and what should be regulated. As an example, every few months Google sends me an email to remind me that I am sharing my location with my husband. Should all companies be required to send semi-frequent reminders that our data is being used in certain ways, or should that remain voluntary?

Finally, we must come together and decide whether we want to form a non-state institution to handle the oversight and enforcement of regulations (similar to the American Bar Association), or whether this type of regulation is better suited to the House Ethics or Science, Space, and Technology Committees. Perhaps a mix of the two would be best, since some areas where AI is applied, like healthcare, transportation, and foreign relations, are already regulated by the government.

Permission to be a pest

Tim O'Brien, who wrote his own job description as an advocate for AI policy within Microsoft, gave these words of encouragement at the Venture Beat Transform conference in 2019:

“If you have a passion for this and you think you can contribute, don’t ask for permission to engage and don’t wait for someone to invite you. Just do it, regardless of what your role is and where you are in the company. Ethics is one of these weird domains in which being a pest, banging on doors, and being an irritant is acceptable.”

Point taken, Tim.
