Let’s see.

When we decompose a matrix A into U, D, and V, the few left-most columns of all three matrices represent almost all the information we need to recover the actual data.

Remember, I did not say all; I said almost all. For example, 92% of the information might be captured in just 5% of the total columns,

which is a pretty good deal, given that you have reduced the size of your data set tremendously.

This means that SVD found relations between the columns of the matrix A and represented the same information with fewer columns.

Now, the columns other than the extreme left-most ones are discarded because they contribute mostly noise, and this step reduces the size of the matrix by deleting almost 90% of the columns of the original matrix.
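To make this idea concrete, here is a minimal sketch of a rank-k truncation on a small synthetic matrix (not the flower image used later): only the first k columns of u and v and the first k singular values are kept, yet the matrix comes back almost unchanged.

```r
set.seed(42)
# A 6 x 5 matrix that is essentially rank 2, plus a tiny bit of noise
A <- outer(1:6, 1:5) + outer(sin(1:6), cos(1:5)) +
     0.001 * matrix(rnorm(30), 6, 5)

s <- svd(A)   # s$u, s$d (vector of singular values), s$v
k <- 2        # keep only the first two components
A.k <- s$u[, 1:k] %*% diag(s$d[1:k]) %*% t(s$v[, 1:k])

# The rank-2 reconstruction is very close to the original
max(abs(A - A.k))
```

The discarded columns 3 to 5 carried only the noise, which is why the reconstruction error stays tiny.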

Now let’s see how it works in R.

Keep in mind that after each chunk of code you will see its output and a short explanation of that output.

```r
install.packages("pixmap", repos = "http://cran.us.r-project.org")
library(pixmap)
image <- read.pnm("flower.ppm")
image@size
## [1] 300 400
str(image)
## Formal class 'pixmapRGB' [package "pixmap"] with 8 slots
##   ..@ red     : num [1:300, 1:400] 0.894 0.878 0.851 0.816 0.8 ...
##   ..@ green   : num [1:300, 1:400] 0.29 0.275 0.255 0.235 0.231 ...
##   ..@ blue    : num [1:300, 1:400] 0.525 0.51 0.494 0.471 0.463 ...
##   ..@ channels: chr [1:3] "red" "green" "blue"
##   ..@ size    : int [1:2] 300 400
##   ..@ cellres : num [1:2] 1 1
##   ..@ bbox    : num [1:4] 0 0 400 300
##   ..@ bbcent  : logi FALSE

red.img   <- matrix(image@red,   nrow = image@size[1], ncol = image@size[2])
blue.img  <- matrix(image@blue,  nrow = image@size[1], ncol = image@size[2])
green.img <- matrix(image@green, nrow = image@size[1], ncol = image@size[2])
str(red.img)
## num [1:300, 1:400] 0.894 0.878 0.851 0.816 0.8 ...
```

We see that each color matrix has the same number of rows and columns.

The reason we have separated the image into three matrices is that these three colors form the basis for every color available in R.

But for our example we are going to use the red channel only; the difference between the three channels can be seen below.

```r
image(red.img)    # red channel
image(green.img)  # green channel
image(blue.img)   # blue channel
plot(image)       # original image
```

[Figures: red matrix color, green matrix color, blue matrix color, and the original image]

From the pictures above, I am taking the red matrix for decomposition.

To get a clearer picture, here is the snapshot of the matrix of red color.

Remember, this matrix will be decomposed into three components soon.

```r
View(red.img)
```

You will see that the `svd` command in R, written below, breaks the red matrix into three components. They are d, u, and v, with their respective rows and columns given.

```r
comp <- svd(red.img)
str(comp)
## List of 3
##  $ d: num [1:300] 205.2 37.1 33.1 20.4 15.4 ...
##  $ u: num [1:300, 1:300] -0.0431 -0.0427 -0.0421 -0.0419 -0.0418 ...
##  $ v: num [1:400, 1:300] -0.0305 -0.0304 -0.0303 -0.03 -0.0298 ...
```

To get a clearer picture, below are snapshots of each one of them.

```r
View(comp$v)     # the v matrix
View(t(comp$v))  # transpose of v
```

You can now see that the rows of the `v` matrix become columns in the transposed matrix.

```r
View(comp$d)
```

You see that `d` is a plain vector of singular values, and a vector cannot simply be multiplied with the other matrices, so we need to convert it into a diagonal matrix before multiplying it back with the other components.

But to get a feel for it, look below at how it appears after using the `diag` command:

```r
d <- diag(comp$d)
View(d)  # matrix 'd' after imputing zeros off the diagonal
```

This is how it looks after the singular values have been arranged along the diagonal.

The important thing to notice here is that only the first few columns of `d` carry significant weight, and the weight keeps decreasing as you go from left to right; hence we only need the columns at the left-most end of the matrix.

Let's take 25 of them, which might represent almost 90% of the information. Note: I have not calculated the percentage; it is just an assumption.
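That assumption can actually be checked: the fraction of a matrix's "energy" retained by the first k singular values is the cumulative sum of their squares divided by the total sum of squares. A minimal sketch, using a synthetic low-rank 300 × 400 matrix since the flower image is not bundled here:

```r
set.seed(1)
# Synthetic 300 x 400 matrix dominated by a low-rank pattern
A <- outer(sin((1:300) / 10), cos((1:400) / 10)) +
     0.01 * matrix(rnorm(300 * 400), 300, 400)

s <- svd(A)$d                      # singular values, largest first
energy <- cumsum(s^2) / sum(s^2)   # fraction of energy kept by first k values
energy[25]                         # energy retained by the first 25 components
```

For the real red matrix you would run the same two lines on `comp$d` to replace the guess with an exact percentage.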

Now, before we multiply the first 25 columns from each of these matrices, be aware that `u` remains as it is, but `v` has to be transposed so that the dimensions conform for matrix multiplication.

```r
compressed.image <- comp$u[, 1:25] %*% diag(comp$d[1:25]) %*% t(comp$v[, 1:25])
image(compressed.image)  # the final image we recovered
```

Remember, the number of columns of the left matrix must always equal the number of rows of the right matrix; if an error occurs, check your rows and columns.

This last image is not as clear as the one we had earlier, but it is obviously still the image of a flower. Because we have reduced the number of columns, we need far less memory to show this image than to store the original red color matrix.
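The memory saving is easy to quantify: instead of all 300 × 400 entries, we only store the first 25 columns of u and v plus 25 singular values. A quick back-of-the-envelope check (the 300 × 400 size matches the flower image above):

```r
original   <- 300 * 400                   # entries in the full red matrix
compressed <- 25 * 300 + 25 * 400 + 25    # u[, 1:25], v[, 1:25], and d[1:25]
compressed / original                     # fraction of the original storage
```

That ratio comes out to roughly 0.15, i.e. the truncated factors take about 15% of the space of the original matrix.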
