The Issue with Generating Value from Our Own Data

Neural networks simply don’t function in a way that lets us delete the data we teach them.

There would also need to be an additional layer that indicates when a given user’s data is being utilized so that they can receive a payout, but in some ways this option undermines the anonymity of the aggregate data.

The issue is that neural networks are, in general, black boxes.

A black box, in computer science, is any device that takes an input and produces an output in such a way that we don’t know what the underlying function is.

The black box works, but we don’t really understand why.

These devices are usually hypothetical constructs used in theoretical discussions, but neural networks really are black boxes.

We can input data and get a result, but we don’t really understand why that result occurred.
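As a minimal sketch of what "black box" means here, consider a tiny feed-forward network with made-up, already-"trained" weights (the numbers below are hypothetical, chosen only for illustration): we can compute an output for any input, but the weights themselves carry no record of which training examples produced them, so there is nothing per-user to point at or delete.

```python
import math

# Hypothetical fixed weights of a tiny 2-2-1 network. Nothing in these
# numbers explains *why* an input maps to an output, nor which training
# data shaped them.
W1 = [[0.8, -0.4], [0.3, 0.9]]   # input -> hidden weights
B1 = [0.1, -0.2]                 # hidden biases
W2 = [1.2, -0.7]                 # hidden -> output weights
B2 = 0.05                        # output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(x1, x2):
    """Feed an input through the network and return a score in (0, 1)."""
    hidden = [sigmoid(x1 * W1[0][i] + x2 * W1[1][i] + B1[i]) for i in range(2)]
    return sigmoid(hidden[0] * W2[0] + hidden[1] * W2[1] + B2)

score = predict(0.5, 0.25)
print(score)  # a number between 0 and 1, with no human-readable "reason"
```

The point of the sketch is that the model is just arithmetic over opaque weights: the input goes in, a score comes out, and inspecting `W1` and `W2` tells a human nothing about the decision or the people whose data trained it.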

David Weinberger’s article on machine learning covers the topic in a fair amount of detail.

I’m actually a little concerned that certain people are getting too accustomed to letting machine learning try to model reality for them, but the analysis is important, because it not only addresses how useful and important machine learning has become for predicting future events, but also just how difficult it is to reverse engineer these models.

(See “Machine Learning Widens the Gap Between Knowledge and Understanding” on onezero.medium.com.)

There are those who consider machine learning so powerful, and unwrapping these black boxes so complicated, that it represents a paradigm shift away from understanding and toward simply predicting with trained models.

If this technology is so disruptive that it threatens the very way of doing science, how can we control its internal workings and break them apart?

There are some methods to gain a little insight into what a network has learned, and unwrapping that information may yield further insight in the future. But we cannot fully unwrap a neural network back into its training data, extract a specific reason for a decision, or remove a given subset of training data, unless we store all of the data and constantly retrain the entire network, which, aside from being absurdly inefficient, would also create even more privacy issues.

Neural networks that can be unwrapped in such a way carry a greater risk of attack.

A “white box” neural network can be probed in ways that reveal too much about its training data (see “Towards Reverse-Engineering Black-Box Neural Networks”).

But even these methods are simple forms of probing the model for information; they still don’t let us truly understand how the model reached a decision.
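To make the probing idea concrete, here is a toy illustration of confidence-based probing, a crude form of membership inference. This is not the method from the cited paper; the "model" below is a hypothetical stand-in that, like many overfit models, reports higher confidence on records it memorized during training, and the attacker sees only its query interface.

```python
# Hypothetical stand-in for a deployed model's query API. An overfit
# model often returns higher confidence on records it saw in training.
def model_confidence(record):
    memorized = {("alice", 34), ("bob", 51)}  # pretend training set
    return 0.98 if record in memorized else 0.55

def probe(candidates, threshold=0.9):
    """Flag candidate records whose confidence suggests training membership."""
    return [r for r in candidates if model_confidence(r) >= threshold]

candidates = [("alice", 34), ("carol", 27), ("bob", 51)]
print(probe(candidates))
```

Notice what this does and does not achieve: the attacker learns that certain records were probably in the training set, which is exactly the privacy leak described above, yet still learns nothing about *why* the model scores anything the way it does.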

Legal Issues

GDPR, and other legal frameworks that seek to give individuals full control over their data, are in conflict with machine learning, especially of this sort.

The more legal protections an individual has over their data, the closer neural networks come to illegality, simply because of their black box nature.

GDPR itself already opens up a lot of issues for machine learning, given a model’s inability to provide a detailed explanation of its decisions.

This issue is addressed in “Will GDPR Make Machine Learning Illegal?”. While that article is about a year old, I have a feeling that a lot of the answers to these questions still don’t exist.

And GDPR is still, in many ways, weaker than certain individuals want.

ReadWrite suggests that blockchain is a way to give us universal basic income, and while I agree with the goal, I don’t agree with the approach.

In their article, Is Crypto the Missing Ingredient for Universal Basic Income, the author proposes laws that provide “complete and unassailable ownership of data to the individuals that generate it…” Such a practice would make current neural network based solutions wholly illegal.

And I just don’t see blockchain as a solution which would create a feasible way around this issue.

A Final Point

The other issue, as I’ve mentioned in previous discussions, is that the reason these businesses are so interested in our data is that they are expected to give us everything for free.

Do we pay Facebook for the services they provide? No.

Well, I do, because I’m an advertiser on Facebook.

I’m one of their consumers.

And so Facebook et al. have to do what they can to keep their consumers happy and to serve their interests.

It’s not only ethical, but it’s the law.

Navigation: Master Index | Political Theory | Economics

Further Reading: The Psychology of Artificial Intelligence (towardsdatascience.com)
