This example matches the header image above.
(I’ll describe it here as a simple iterative process, an “unrolling” of an entirely different implementation.)
Given the initial collection of data in a database, second order data can be added using known facts. For example, your parents, birthplace, current or last known dwelling place could be added to your own record. Second order data is not just added to your own record, but to everyone’s. This can be done recursively, adding third, fourth and higher order information.
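The enrichment step above can be sketched in a few lines. This is a minimal illustration under my own assumptions about the data layout (a dictionary of records and a dictionary of named relations), not a description of any real system:

```python
# Sketch of adding "second order" data: each person's record is
# extended with the fields of every directly related record,
# prefixed by the relation name. The structure here is illustrative.

def add_second_order(records, relations):
    """Return new records where each person also carries the fields
    of every related person, prefixed by the relation name."""
    enriched = {}
    for person, fields in records.items():
        new_fields = dict(fields)
        for relation, other in relations.get(person, []):
            for key, value in records[other].items():
                new_fields[f"{relation}.{key}"] = value
        enriched[person] = new_fields
    return enriched

records = {
    "me":     {"birth_year": 1950},
    "father": {"birth_year": 1915},
}
relations = {"me": [("father", "father")]}

enriched = add_second_order(records, relations)
# "me" now carries the father's fields as well as its own
```

Running the same function again on the enriched records would add third order data, and so on recursively.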
All information must be added in the numerical form most useful for technologies derived from data science. For example, the fact that my grandfather was born in the village of Dodderhill, near Droitwich Spa, Worcestershire, England must not be mentioned in those words, but with geographical coordinates such as latitude and longitude. In adding second order data for Dodderhill, the fact that a male person was born there on October 16th, 1880 can be added as a single numerical field.
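As a concrete illustration of that encoding, the birth fact might become four numbers: latitude, longitude, days since some epoch, and a sex flag. The coordinates and epoch below are my own assumptions for the sketch, not authoritative values:

```python
from datetime import date

# Illustrative encoding of "a male person born in Dodderhill on
# 16 October 1880" as purely numerical fields. Coordinates are
# approximate; the date becomes days since an arbitrary epoch.

DODDERHILL = (52.28, -2.14)   # approximate latitude, longitude

def encode_birth(lat, lon, birth, male):
    epoch = date(1800, 1, 1)
    return [lat, lon, float((birth - epoch).days), 1.0 if male else 0.0]

fields = encode_birth(*DODDERHILL, date(1880, 10, 16), male=True)
# a fact in words has become a short vector of numbers
```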
My grandfather is no longer living. His last known residence would be the place he died, which could also be represented as numerical coordinates. The fact that a person with his first order data died there on that date is second order information for an entirely different kind of entity: that location in Penticton, BC.
All of the data about my grandfather’s parents, together with second order data about Dodderhill and Penticton, would be added to his record, making it much longer. The increase in the number of numerical fields in my grandfather’s record is reflected in my father’s record as third order information, and in mine as fourth.
It may seem as if there is a watering down of data at each step, since the significance of each field added to my own record decreases as the number of fields increases. At a further step back, I have information for eight great-grandparents, which would make the influence of any one of them less important, but there are that many more people to provide information.
The fact that several were born in Ontario but ended up in British Columbia means that their combined contribution more than makes up for the lessened significance of any one of them. The fact that many of my ancestors are from Worcestershire in the English Midlands and worked as farmers tells me a lot about their combined contributions, regardless of how little they contribute individually on each iteration. That applies also to the ancestors of mine who were mariners in or near Newcastle-on-Tyne in Northumberland, England.
If this is done for everybody, then much more can be derived, such as an excellent vector representation of the occupation of farmer. That would be added into the record for each person, increasing again the number of fields in their record (or vector representation). An extremely different vector in the occupational plane would be that of a mariner.
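One crude way to picture such an occupational vector is as the average of the records of everyone who held that occupation. This is a deliberately naive sketch; a real system would use learned embeddings, and the field meanings here are assumptions:

```python
# Sketch: derive a crude vector for an occupation by averaging the
# numerical records of everyone who held it. Farmers and mariners
# would end up with very different averages.

def occupation_vector(people, occupation):
    rows = [p["vector"] for p in people if p["occupation"] == occupation]
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

people = [
    {"occupation": "farmer",  "vector": [52.2, -2.1]},
    {"occupation": "farmer",  "vector": [52.4, -2.3]},
    {"occupation": "mariner", "vector": [55.0, -1.6]},
]

farmer = occupation_vector(people, "farmer")
mariner = occupation_vector(people, "mariner")
```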
At each iteration, the number of fields increases dramatically. The amount of actual information in the technical sense which is added at each step is less, because of redundancy, but still grows quickly.
At each iteration the number of numerical fields might be multiplied by one hundred. I’m certain that this would not mean one hundred times as much information is being found, but I’d be surprised if it was less than ten times as much. Even if a data field multiplication of one hundred only doubled the amount of information actually obtained at each step, it would be an exponential rate of growth — literally exponential, as mathematicians use the term, not as a vague substitute for “fast” as used in popular culture.
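The arithmetic of that claim is easy to check. Taking the conservative case of a hundredfold field multiplication yielding only a doubling of real information:

```python
# Back-of-envelope version of the growth claim: fields multiply by
# 100 per iteration while genuine information merely doubles. Even
# the doubling alone is exponential in the mathematical sense.

fields, info = 1, 1.0
history = []
for step in range(5):
    fields *= 100      # raw numerical fields per record
    info *= 2          # conservative information gain per step
    history.append((fields, info))

# after 5 iterations: 10**10 fields, a 32-fold gain in information
```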
In an unfinished work of fiction I described this as “Repeat until your supercomputer installation runs out of disk space.” Today I might say “until the Cloud runs out of disk space”.
The term recursive is correct, because this is obviously a recursive process. That is easy to see when you consider querying for information about one person. To update the current vector representation, you need to examine and probably update those of all related entities or activities.
The term exhaustion is correct in two ways, one as a name for a common algorithm in computer science, the other for the fact that we are indeed trying to find numerical representations of every person, every place, every occupation, and so on. Perhaps a third reason for applying the term exhaustion is that I get exhausted writing and updating my numerous websites to adapt to its impact.
Among other techniques applicable to society are those for collecting and using personal data by tapping the resources of social media. The most notorious example of this to date is the misuse of Facebook information by Cambridge Analytica. To me, this was almost trivial. They collected a small amount of information on a mere 87 million people.
Through Recursive Exhaustion it is possible to collect a vast amount of information about almost everybody. Indeed the more people studied, the more information can be obtained about each. The best analogy is that of a large radio-telescope installation. The more individual receivers there are and the larger the geographic area over which they spread, the greater the resolving power.
Using one meaning of the term, recursive exhaustion works by repeatedly exhausting the space or tree of known individuals and their attributes. The secondary meaning involves growing that tree.
In computer science there is a method known as an exhaustive search, also known as a brute-force search. Sometimes it is referred to by its fundamental technique and is known as generate and test. It is one of the most powerful of the general problem-solving techniques, but is computationally expensive.
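Generate and test is simple enough to show in miniature. This sketch enumerates every candidate and keeps those passing a test predicate; the data is made up for illustration:

```python
# Generate-and-test in miniature: enumerate every candidate and keep
# those that pass the test predicate. Simple and completely general,
# but the cost grows with the number of candidates.

def exhaustive_search(candidates, test):
    return [c for c in candidates if test(c)]

people = [("Ann", 1880), ("Bob", 1915), ("Cam", 1950)]

born_before_1900 = exhaustive_search(people, lambda p: p[1] < 1900)
```

Its computational expense is exactly what the text says: every candidate is examined, with no shortcuts.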
An exhaustive search is usually conducted on a tree structure, which is a discrete combinatorial object. One might somehow transform a list of people into a tree structure then perform a search to find a person meeting certain characteristics.
The problem with this is that the human population changes. People are born and die. Living people change all the time. A fixed tree structure for the human race is impossible.
Applying an exhaustive search recursively in a social context means that the attributes of one person are reevaluated regularly by considering all changes in his or her social environment. The individuals in that social environment will also have to be reevaluated, so the search for one person requires a data collection step which can propagate recursively throughout the whole population.
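The propagation step can be sketched as a recursive walk over a social graph. Note that this sketch bounds a single query’s propagation with a visited set so it terminates on mutual relations; the overall process described below has no such end. The graph structure is an assumption for illustration:

```python
# Sketch of the propagation step: re-evaluating one person triggers
# re-evaluation of everyone in their social environment. A visited
# set keeps a single pass from looping forever on mutual relations.

def reevaluate(person, neighbours, visited=None):
    """Return the set of people touched when `person` is updated."""
    if visited is None:
        visited = set()
    if person in visited:
        return visited
    visited.add(person)
    for other in neighbours.get(person, []):
        reevaluate(other, neighbours, visited)
    return visited

neighbours = {"a": ["b"], "b": ["a", "c"], "c": []}
touched = reevaluate("a", neighbours)
# updating "a" touches "b", and through "b", "c" as well
```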
As applied to the whole of human society, this violates the most fundamental requirement of a recursive algorithm: it has no end condition.
Nor should it. There is no end to the changes society goes through.
The only way a query could be answered by an exhaustive search of a tree of human information would be if no changes were made to that data. This could be done recursively with an end condition, but it would provide poor answers and be ultimately pointless.
As a powerful tool of social technology, recursive exhaustion would be an unending process. A computer system would operate continuously, accepting new data as it became available and providing information about individuals as requested.
For example, a non-governmental organization devoted to helping people could query for the people most in need of its services. On the other hand, evil people could query for people susceptible to blackmail or intimidation.
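A toy version of such an unending process is a loop that interleaves incoming data with queries against the current state. The event names and record fields here are my own assumptions for the sketch:

```python
# Toy version of the continuous process: a loop that interleaves
# incoming data with queries against the accumulated state.

def run(events):
    state, answers = {}, []
    for kind, payload in events:
        if kind == "data":
            person, fields = payload
            state.setdefault(person, {}).update(fields)
        elif kind == "query":
            # copy, so later updates don't alter earlier answers
            answers.append(dict(state.get(payload, {})))
    return answers

events = [
    ("data", ("ann", {"born": 1880})),
    ("query", "ann"),
    ("data", ("ann", {"died": 1960})),
    ("query", "ann"),
]

answers = run(events)
# the same query gives richer answers as data accumulates
```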
Various implementations of the recursive exhaustion algorithm are discussed on another page. Details of its application to human society will be given elsewhere. An example of its use is given on a page discussing genealogy, since that is easy to explain and could actually be of some use to interested individuals.