# Simplicity in the complexity

Let’s break down the term into “neuron” and “network”.

To interpret “neuron”, we can think of it as a black box that maps A to B.

Or more specifically, it is a mathematical function, which takes input A and gives output B.

Say, we want to predict the house price B.

In the simplest case, we assume the house price is linearly dependent on the size of the house A multiplied by a constant C, plus some fixed legal fees L involved in purchasing that house.

To put it into a formula, we have the hypothesis B = C * A + L, which captures our assumption about the ‘ingredients’ of a house price.

The most essential task here is to determine the constants C and L.

Once we have such a formula in the black box, given the size of any house, we can predict its price.
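As a sketch, this linear “neuron” is just a one-line function. The particular values of C (price per unit of size) and L (legal fees) below are made up for illustration:

```python
# A minimal sketch of the linear "neuron" B = C * A + L.
# The values of C and L are illustrative, not learned from data.
def neuron(size, price_per_unit=3000.0, legal_fees=5000.0):
    """Predict a house price from its size: B = C * A + L."""
    return price_per_unit * size + legal_fees

print(neuron(100.0))  # 3000 * 100 + 5000 = 305000.0
```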

As in nearly all cases, we do not have enough knowledge to uncover the true relationship between the input and the output, here the house size and the house price.

So how do we go about finding C and L? What we do have is a bunch of data, where each data point represents a specific house size and the corresponding price.

It is through these data that we hope to learn a model representative enough of the reality we hypothesized.

In other words, for each problem, there is a “right neuron” that can solve the problem.

This was an example of the simplest “neuron”, where the relationship between the input and the output is linear.

In fact, in practice, such neurons usually include an additional nonlinear transformation that helps capture more complicated relationships.

The idea of finding the “right neuron” is indeed a longstanding approach to problem-solving in many domains of science.

Given a problem, we want to model the relationships between the input and output, in an attempt to fully characterize a system.

We write down some formulas to abstract the problem in a solvable fashion.

Often, many practical problems are too complicated to approach in such a way.

Take the example of the house price.

In reality, the price depends not just on the size of the house, but also on many other factors such as the location, the facilities, and the economy.

It’s thus infeasible for even the most knowledgeable expert to write down a fair formula that takes in all relevant factors and calculates the corresponding house price.

If a single neuron cannot handle the complexity, will multiple neurons do the job? That’s where the notion of “network” comes in.

A neural network consists of multiple neurons, each taking some input and giving some output.

The output of a neuron can become the input of some other neurons.

With multiple neurons, a neural network can perform more complicated computation while keeping each neuron relatively “simple”.

Now, the complexity is partly shifted from the structure within a neuron to the structure of the network.

With such a network, the model is capable of representing a more complicated phenomenon.

In essence, a neural network paints a beautiful picture of simplicity in the complexity — “simple” neurons embedded in a complicated network.