Why AI is a threat to democracy—and what we can do to stop it

That’s a great question because you could argue that pieces of the AI ecosystem are already impacting our Western democratic ideals in a truly negative way.

Obviously, everything that’s happened with Facebook serves as an example.

But also look what’s going on with the anti-vaxxer community.

They’re spreading totally incorrect information about vaccines and basic science.

Our American tradition holds that speech is free, platforms are just platforms, and we need to let people express themselves.

Well, the challenge with that is that algorithms are making choices about editorial content that are leading people to make very bad decisions and getting children sick as a result.

The problem is that our technology has become more and more sophisticated, but our thinking about what free speech is and what a free-market economy looks like has not kept pace.

We tend to resort to very basic interpretations: Free speech means all speech is free unless it butts up against libel law, and that’s the end of the story.

That’s not the end of the story.

We need to start having a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can get those two to meet in the middle.

In other words, you have faith that we will evolve from where we are now to a more idealized version of Western democracy.

And you would much prefer that to idealized Chinese communism.

Yeah, I have faith that it’s possible.

My huge concern is that everybody is waiting, that we’re dragging our heels, and that it’s going to take a true catastrophe to make people take action, as though the place we’ve arrived at isn’t already catastrophic.

The fact that measles is back in the state of Washington is, to me, a catastrophic outcome.

So is what’s happened in the wake of the election.

Regardless of what side of the political spectrum you’re on, I cannot imagine that anybody today thinks the current political climate is good for our future.

So I absolutely believe that there is a path forward.

But we need to get together and bridge the gap between Silicon Valley and DC so that we can all steer the boat in the same direction.

What do you recommend government, companies, universities, and individual consumers do?

The developmental track of AI is a problem, and every one of us has a stake.

You, me, my dad, my next-door neighbor, the guy at the Starbucks that I’m walking past right now.

So what should everyday people do?

Be more aware of who’s using your data and how.

Take a few minutes to read work written by smart people, and spend a couple of minutes figuring out what it is we’re really talking about.

Before you sign your life away and start sharing photos of your children, do so in an informed manner.

If you’re okay with what it implies and what it could mean later on, fine, but at least have that knowledge first.

Businesses and investors can’t keep rushing products out the door over and over again.

That sets us up for problems down the road.

So they can do things like shore up their hiring processes, significantly increase their efforts to improve inclusivity, and make sure their staff are more representative of what the real world looks like.

They can also put on the brakes.

Any investment made in an AI company or project, whatever it might be, should also include funding and time for checking things like risk and bias.

Universities must create space in their programs for hybrid degrees.

They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments.

They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like.

Ethics should not be taught as a stand-alone class, something to simply check off a list.

Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics into their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation.

At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like.

So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails, to embed not just American or Chinese ideals in AI systems but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer term, not just five years from now.

We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future.

But we can certainly do a better job of planning for it.

