It does a great job of matching the data we do have, but it won’t be able to make sensible guesses for any new data.

One of the main concerns in machine learning is finding a best fit line or curve that is just curvy enough to give us the general shape of our data set, but isn’t so curvy that it can’t generalise to making guesses about new pieces of data. This is where polynomial regression falls over. As I mentioned before, we have to explicitly tell polynomial regression how curvy we want the best fit curve to be before we use it, and this isn’t an easy thing to decide.

In our example, our data points have been in only two dimensions (one value for shoe size and one value for height), which means that we have been able to plot them on the graphs above (because screens are two dimensional). When we can do this, it is easy to see the general shape of our data. Unfortunately, this is not the case in machine learning problems with more dimensions, and if we don’t know what shape our data is, then how do we tell polynomial regression how curvy to be? The simple answer is that we can’t. The only option would be to try polynomial regression many times with different levels of flexibility and see which one works best.

What we need is a machine learning technique that has the flexibility to be as curvy as it wants, but limits its curviness so that it can generalise better to new data.

Neural Networks

This is when data scientists generally move on to using a neural network instead of polynomial regression or linear regression. A neural network on its own is very much like polynomial regression in that it is able to learn data sets that have very curvy shapes. On their own, neural networks don’t solve the problem of overfitting, but when you combine them with a technique called regularisation, everything works out. The actual implementation details of how neural networks and regularisation work aren’t really important to somebody just wanting to understand the basics of machine
learning. The key things to remember are that neural networks are very good at “learning” the shapes of complicated data sets (more so than linear or polynomial regression), and that regularisation helps to prevent the neural network from overfitting our data.

Getting computers to answer questions

Hopefully, everything we’ve discussed so far has made sense. For the techniques we have covered (linear regression, polynomial regression, and neural networks), we’ve only looked at how we can train computers to give us a number depending on the data we give them. For example, our shoe size vs. height model gave us a height as a number (maybe in centimetres) when we gave it a shoe size, and our house cost vs. number of rooms model gave us a cost as a number (maybe in £s) when we gave it a number of rooms.

These use-cases are great, but getting a number as an output isn’t always what we want.
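As a concrete footnote to the earlier point about trying polynomial regression many times with different levels of flexibility, here is a minimal sketch of what that trial-and-error process might look like in Python with NumPy. The shoe-size and height numbers are invented purely for illustration, and the specific degrees tried (1, 3, and 9) are an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented shoe-size vs. height data: roughly a straight line plus noise.
shoe = rng.uniform(4, 12, 30)
height = 150 + 4 * shoe + rng.normal(0, 3, 30)

# Hold some points back so we can measure how well each curve
# generalises to data it has never seen.
train_x, val_x = shoe[:20], shoe[20:]
train_y, val_y = height[:20], height[20:]

for degree in (1, 3, 9):
    # Fit a polynomial of the given "curviness" to the training points.
    coeffs = np.polyfit(train_x, train_y, degree)
    # Score it on the held-out points (mean squared error).
    pred = np.polyval(coeffs, val_x)
    val_error = np.mean((pred - val_y) ** 2)
    print(f"degree {degree}: validation error {val_error:.1f}")
```

Because this data is roughly a straight line, the very curvy degree-9 polynomial will typically chase the noise in the training points and score worse on the held-out points than the simpler fits, which is exactly the overfitting problem described above.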