Part 1:

Over the years I have seen many people talking about randomness and random numbers. There is a lot to say on this subject, but I will start with the following three real numbers. For each one I describe how to write it down, and I ask you: had you encountered the number some other way, how would you go about telling whether it was generated by a random process? (Whatever precisely that is; that is part of what we will address later.) In every case the number continues, in some way or other, infinitely. Yet your analysis cannot be infinite, so I only give you some of each number to get you started:
1. 0.1592653589793238462643383279502884197169399375105820974944…
2. 0.1010010001000010000010000001000000010000000010000000001…
3. Pick your favourite programming language and consider all the programs you can write in it that take a single integer as a parameter and return a single integer as a result. Convert each of them to a suitable text encoding and sort them all in alphabetical order. Now imagine running them, in that order, with 0 as the value of the parameter. For each program in turn, determine whether it stops on that input: if it does, write 1 for the corresponding digit after the decimal point; if it does not, write 0. For example, if the first 10 programs have the stopping pattern {yes, no, yes, yes, yes, yes, no, yes, no, no}, then the number begins 0.1011110100… (A code sketch of this construction appears at the end of this part.)
If you do not program, imagine instead written instructions that allow you to a) remember where you are in the process, b) perform grade-school arithmetic, and c) perform any of the previous steps repeatedly (including steps of this kind). Sort the collection of such instruction sets into alphabetical order and continue as in (3).
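For concreteness, here is a small Python sketch that prints prefixes of the first two numbers. It is an illustration only: the dependency on the mpmath library, the precision setting, and the prefix lengths are arbitrary choices of mine.

```python
import mpmath

# Number (1): the decimal digits of pi with the leading "3.14" removed.
mpmath.mp.dps = 70                        # work with 70 digits of precision
pi_str = mpmath.nstr(+mpmath.pi, 62)      # "3.14159265358979..." as a string
pi_digits = pi_str.replace(".", "")       # "31415926535..."
print("0." + pi_digits[3:])               # prints 0.1592653589793238...

# Number (2): a 1, then one 0, a 1, then two 0s, a 1, then three 0s, and so on.
digits, zeros = [], 0
while len(digits) < 60:
    digits.append("1")
    zeros += 1
    digits.extend("0" * zeros)
print("0." + "".join(digits[:60]))        # prints 0.101001000100001...
```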
Does it matter that I already told you how to write down (some of) the digits in each case? What if a friend had given you each of them instead? Does it matter that my numbers were between 0 and 1?
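Before moving on, here is a minimal Python sketch of the construction in (3). It comes with an unavoidable cheat: the halting problem is undecidable, so all the sketch can really report is "halted within a step budget"; a 0 only ever means "not yet." The three toy program texts and the budget are my inventions, standing in for the enumeration of all programs, which of course we could never finish.

```python
import sys

# Toy stand-ins for "all programs": each source text defines f(n) -> int.
SOURCES = [
    "def f(n):\n    while True:\n        n += 1\n",                   # never halts
    "def f(n):\n    return n + 1\n",                                  # halts at once
    "def f(n):\n    while n < 1000:\n        n += 1\n    return n\n", # halts eventually
]

def halts_within(source, arg, budget):
    """Run f(arg) from `source`, counting traced events; True if it
    returns within `budget` steps. A False only means "not yet"!"""
    namespace = {}
    exec(source, namespace)
    steps = 0
    def tracer(frame, event, _arg):
        nonlocal steps
        steps += 1
        if steps > budget:
            raise TimeoutError      # abort the traced call
        return tracer
    sys.settrace(tracer)
    try:
        namespace["f"](arg)
        return True
    except TimeoutError:
        return False
    finally:
        sys.settrace(None)

# Alphabetical order of the program texts, 0 as the input, as in (3).
digits = ["1" if halts_within(s, 0, budget=100_000) else "0"
          for s in sorted(SOURCES)]
print("0." + "".join(digits))       # prints 0.101 for these toy programs
```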

Part 2:

Not surprisingly, I received no answers to last month’s “randomness I” conundrums. Generally, people divide into several camps when it comes to “random events.” Part of the confusion is that probability theory uses the word “event” regardless of the category of the “item” in question: formally, an event is just a subset of the sample space, whatever that subset happens to represent.
Consider the weird use of computer programs mentioned in my last column. It involves a number somewhat misleadingly called “Chaitin’s constant.” (Misleading because there is not just one such number: its value is relative to a given representation of programs. But the different representations lead to the same sort of result, so we will pretend, as everyone does, that it really is a single constant.)
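For the record, Chaitin defines the constant for a given (prefix-free) universal machine F as the probability that a randomly generated program halts on it. In the plain notation this column favours:

Omega_F = the sum of 2^(-length(p)) over all programs p that halt on F,

where length(p) is the length of the program in bits. Change the machine F and the digits change, which is why the “constant” is representation-relative.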
This number cannot be computed by any computational device we can build. In fact, most philosophers, computability theorists, logicians, etc., also think that it cannot be computed by anything whatever; this is the so-called Church-Turing thesis. Why do I mention this? Because one approach to “randomness” is to appeal to mechanism (or the lack thereof). It appears that, if that is right, Chaitin’s constant is not a random number. But the mechanism here is a strange, partial one: one runs a program, waits for it to halt, and if it does, records the digit. If it does not halt, one will never know. (One can imagine parallelizing a lot of these runs, so it is not as if one waits forever on the first digit.)
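To make the parallelizing remark concrete, here is one way to interleave the runs: a dovetailing sketch in Python. The generator-based “programs” and the round limit are stand-ins I made up; a real dovetailer would keep admitting new programs forever.

```python
def halts_quickly():
    yield from range(3)        # takes three steps, then halts

def never_halts():
    while True:
        yield                  # one step per round, forever

def dovetail(programs, rounds):
    """Advance every program one step per round, recording which have
    halted so far. None means "has not halted yet", which is all we
    can ever observe for a non-halting program."""
    running = {i: p() for i, p in enumerate(programs)}
    digits = [None] * len(programs)
    for _ in range(rounds):
        for i, gen in list(running.items()):
            try:
                next(gen)              # one step of program i
            except StopIteration:      # program i halted: digit is 1
                digits[i] = 1
                del running[i]
    return digits

print(dovetail([halts_quickly, never_halts, halts_quickly], rounds=10))
# prints [1, None, 1]
```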
This is very similar to a point made by David Bohm: whether something has a mechanism or not depends on how you look at it. He encouraged physicists to look at what people claim are the only random processes in nature (some of the items studied by quantum mechanics); not, as is commonly understood, to dispute their randomness, but instead to look for mechanisms underlying them.
Finding such a mechanism does not, I repeat, remove the randomness; or so it seems. Why do I say that? Well, we have seen one example in the computability case. Probability theory is the other answer: all the resources of probability theory remain applicable regardless. This makes use of an assumption that Bohm’s follower Mario Bunge articulates explicitly: no randomness, no probability.
(In standard symbolism: ~R -> ~P; contraposing, ~~P -> ~~R; eliminating the double negations, P -> R.)
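For the skeptical, the propositional step can be checked mechanically. Here is a throwaway truth-table check in Python (the helper name is mine):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# Check that ~R -> ~P entails P -> R under every truth assignment.
for P, R in product([False, True], repeat=2):
    premise = implies(not R, not P)
    conclusion = implies(P, R)
    assert implies(premise, conclusion)

print("(~R -> ~P) entails (P -> R): verified")
```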
If we grant that, then we have randomness all over, even in classical physics (classical statistical mechanics, for example), to the extent that these theories (or rather, their individual hypotheses) are true and we take a realist attitude toward them. I leave you, then, with the puzzle: is Bunge right in his assumption? Or, if you disagree, where is the mistake in all of this?
(This may appear to be a funny result only to people with certain backgrounds. I apologize if it seems boring or incomprehensible to anyone else. I will return to “river of coke” style conundrums soon enough.)