What’s Wrong With The Brooklyn Bridge Example

[Image: section of the Brooklyn Bridge example image]

This is a piece of the Brooklyn Bridge example image in the Internal Recursive Exhaustion post. It is remotely possible that in my lazy slapdash way I constructed a poor example. If you read the full page, you will see that the final sign should have a sequence of ever-smaller inset photos, each exactly the same relative size and taken from the same perspective. Ideally they should be taken at identical intervals of time, too. Perhaps one every week.

Clearly the smaller inset images contain less information than the larger one.  They have in effect been compressed.

Recovering data from compressed versions is done every day. Look at the example on my Acronymic Languages page. The left-hand image is at full resolution. The right-hand image is a reconstructed version of it, obtained from a JPG file which was compressed to one tenth the size of the original file. Yet it is quite recognizable and not just a blurry version of the first image.
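If you want to reproduce the effect yourself, here is a minimal sketch using Pillow (not necessarily the tool used for the Acronymic Languages images, and the file names are placeholders): save an image as a JPG at a low quality setting so the file shrinks to very roughly a tenth of its size, then decode it again and compare.

```python
from PIL import Image

# Load a full-resolution greyscale image (the path is a placeholder).
original = Image.open("full_resolution.png").convert("L")

# Save as JPG with an aggressive quality setting; for typical photos
# this shrinks the file to something like a tenth of its size.
original.save("compressed.jpg", quality=10)

# Decoding the compressed file recovers a recognizable image,
# not just a blur, even though most of the bytes are gone.
reconstructed = Image.open("compressed.jpg")
reconstructed.show()
```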

The numerical version of this follows if we can assume that some data fields have been thoroughly linearized. Suppose the first ten numbers in each 100-component vector are set to zero to start with, while components 11 through 100 represent some valuable data. For the next iteration, apply a data compression step to the whole 100 numbers, reducing them to 10 components. Replace the first 10 numbers (all zeros at the start) with those newly derived ones, which are a compressed version of the whole 100-component vector. Replace the remaining 90 numbers with the new data.
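Here is a minimal sketch of that iteration. The compressor is a placeholder (simple block averaging, since the SVD version is discussed next), and the names VECTOR_SIZE, SUMMARY_SIZE and step are mine, not from the original post.

```python
import numpy as np

VECTOR_SIZE = 100     # total number of fields, fixed forever
SUMMARY_SIZE = 10     # components reserved for the compressed history

def compress(vector, k=SUMMARY_SIZE):
    """Placeholder compressor: reduce a 100-component vector to k numbers.
    (An SVD-based version is sketched further down.) Here we just average
    consecutive blocks of 10 components; any compressor would do."""
    return vector.reshape(k, -1).mean(axis=1)

def step(state, new_data):
    """One iteration: fold the whole previous vector into the first
    10 slots, then overwrite the remaining 90 slots with new data."""
    summary = compress(state)                     # 100 numbers -> 10
    next_state = np.empty(VECTOR_SIZE)
    next_state[:SUMMARY_SIZE] = summary           # compressed history
    next_state[SUMMARY_SIZE:] = new_data          # fresh 90 values
    return next_state

# Start: first 10 components are zero, the other 90 hold the data.
state = np.zeros(VECTOR_SIZE)
state[SUMMARY_SIZE:] = np.random.rand(VECTOR_SIZE - SUMMARY_SIZE)

for _ in range(5):
    state = step(state, np.random.rand(VECTOR_SIZE - SUMMARY_SIZE))
print(state.shape)   # always (100,): the number of fields never grows
```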

And repeat. If the data is nice linear stuff, a good data compression algorithm is to take the SVD orthonormalization of the 100-component vectors and use the singular vectors corresponding to the largest singular values. If the matrix used to perform the orthonormalization is stored for each step, then it is possible to invert the process and recover approximations of the vectors at each step.
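One way to read the SVD suggestion, sketched below as an interpretation rather than the post's exact method: stack the vectors seen so far into a matrix, keep the right singular vectors with the largest singular values as an orthonormal basis, and use the projection coefficients of the latest vector as its 10-number summary. Storing that basis at each step is what makes the step approximately invertible.

```python
import numpy as np

def svd_compress(history, k=10):
    """Compress the latest 100-component vector to k numbers using the
    top-k singular vectors of the history matrix (one vector per row)."""
    _, _, vt = np.linalg.svd(history, full_matrices=False)
    basis = vt[:k]                       # k x 100 orthonormal basis
    coeffs = basis @ history[-1]         # project latest vector onto basis
    return coeffs, basis

def svd_decompress(coeffs, basis):
    """Approximately recover the 100-component vector from its k coefficients,
    provided the basis stored at that step is available."""
    return basis.T @ coeffs

# Toy check with correlated ("nice linear") data that really lives in 5 dimensions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(20, 5))
mixing = rng.normal(size=(5, 100))
history = latent @ mixing                # 20 vectors of 100 components each

coeffs, basis = svd_compress(history, k=10)
approx = svd_decompress(coeffs, basis)
print(np.allclose(approx, history[-1]))  # True: low-rank data recovers well
```

For data that is genuinely low-rank the recovery is close to exact; for messier data the same code gives the best rank-10 approximation instead.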

This is Internal Recursive Exhaustion because the same basic algorithm is used but the number of fields is kept the same. If you consider the example of a sequence of images representing something like stages in the construction of a building, the first image might be made of 1 Kilo lines of 1 Kilo single-byte greyscale pixels. The resulting image would contain 1 Meg single-byte pixels.

An image of exactly the same size and resolution with an inset representing an earlier stage of construction would contain exactly the same number of pixels. The number of fields does not increase; only the content of the image changes.
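To make the fixed-budget point concrete, here is a small sketch assuming 1K x 1K single-byte greyscale images and an illustrative 4x downscale for the inset; the helper names are mine.

```python
import numpy as np

SIDE = 1024   # 1K x 1K single-byte greyscale pixels, 1 Meg per image

def downscale(image, factor):
    """Crude box downscale by averaging factor x factor blocks (illustration only)."""
    h, w = image.shape
    blocks = image.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def add_inset(current, previous, factor=4):
    """Paste a shrunken copy of the previous stage into a corner of the
    current stage. The result has exactly the same number of pixels."""
    result = current.copy()
    inset = downscale(previous, factor)            # e.g. 256 x 256
    result[:inset.shape[0], :inset.shape[1]] = inset
    return result

stage1 = np.random.randint(0, 256, (SIDE, SIDE), dtype=np.uint8)
stage2 = np.random.randint(0, 256, (SIDE, SIDE), dtype=np.uint8)

stage2_with_history = add_inset(stage2, stage1)
print(stage2_with_history.size == stage2.size)     # True: still 1,048,576 pixels
```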

The numerical example is better, because the first 10 components of the vector would always be used for earlier versions of the whole 100-component vector. None of the following 90 components would be overwritten by any inset data.

In the Brooklyn Bridge example, the insets always cover some of the next larger image, and the insets are not exactly the same size or taken from the same perspective.

I hope it still helps to explain a difficult concept.
