# To Infinity…and Beyond!

In researching Gödel’s Incompleteness Theorem, I stumbled upon an article that stated no one has proven a line can extend infinitely in both directions. This is shocking, if it’s true, and after a quick Google search, I couldn’t seem to find anything that contradicts the claim. So, in the spirit of intellectual adventure, I’ll offer a fun proof-esque idea here.

Consider a line segment of length $\ell$ that is measured in some standard unit of distance/length (e.g., inches, miles, nanometers, etc.). We convert the length of $\ell$—whatever units and length we’ve chosen (say, 0.5298 meters)—into a fictitious unit of measurement we’ll call Hoppes (hpe) [pronounced HOP-ease]. So, now, one should consider the length of $\ell$ to be 2 hpe such that $\ell/2 = 1$ hpe. We then add some fraction (of the length of) $\ell$ to (both ends of) itself; let’s say the fraction of $\ell$ we’ll use, call it $a$, is $3\ell/4$, which equals 3/2 hpe. The process by which we will lengthen $\ell$ will be governed by the following geometric series:

$s_n(a) = 1+a+a^2+a^3+\dots+a^{n-1} = (1-a^n)(1-a)^{-1}=\frac{a^n-1}{a-1}$.

Let us add the terms of $s_n(a)$ to both ends of $\ell$: first, we add 1 hpe to both ends ($\ell=4$ hpe), then 3/2 hpe ($\ell=7$ hpe), then 9/4 hpe ($\ell=23/2$ hpe), and so forth. If we keep adding units of hpe to $\ell$ based on the series $s_n(a)$, then we’re guaranteed a line that extends infinitely in both directions because $\lim_{n\rightarrow\infty} (a^{n}-1)(a-1)^{-1} = \infty$ when $a \geq 1$ (at $a=1$ the closed form is indeterminate, but the sum is simply $n$, which also diverges).

Now, suppose we assume it is impossible to extend our line segment infinitely in both directions. Then $s_n(a)$ must converge to $(1-a)^{-1}$, giving us a total length of $2+2(1-a)^{-1}$ hpe for $\ell$ (the limit is added to each end), because $\lim_{n\rightarrow\infty} a^{n}=0$, which is only possible when $\vert a\vert < 1$. (We cannot have a negative length, so $a\in \text{R}^+_0$.) But this contradicts our value of $a = 3/2$ above, which means the series $s_n(a)$ is divergent. Q.E.D.
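For the numerically inclined, here is a quick sketch of the partial sums that makes the dichotomy vivid; the cutoff values of $n$ are arbitrary:

```python
# Partial sums s_n(a) = 1 + a + ... + a^(n-1) in the two regimes discussed
# above: a = 3/2 (our chosen a; divergent) and a = 1/2 (convergent to
# (1 - a)^(-1) = 2).

def s_n(a, n):
    """Return the n-th partial sum 1 + a + ... + a**(n - 1)."""
    return sum(a ** k for k in range(n))

for a in (1.5, 0.5):
    print(f"a = {a}:", [round(s_n(a, n), 4) for n in (1, 5, 10, 20)])
```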

N.B. Some might raise the “problem” of an infinite number of discrete points that compose a line (segment), recalling the philosophical thorniness of Zeno’s (dichotomy) paradox; this is resolved, however, by similarly invoking the concept of limits (and is confirmed by our experience of traversing complete distances!):

$\sum_{i=1}^{\infty} (1/2)^i=\frac{1}{2}\sum_{i=0}^{\infty} (1/2)^i=\frac{1}{2}\lim_{n\to\infty} s_n \big(\tfrac{1}{2}\big)=\frac{1}{2}\Big( 1+\frac{1}{2}+\big(\tfrac{1}{2}\big)^2+\cdots\Big)=\frac{1}{2}\Big(\frac{1}{1-\frac{1}{2}}\Big) = 1$,

a single unit we can set equal to our initial line segment $\ell$ with length 2 hpe.
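A few partial sums make that convergence concrete (the cutoffs below are arbitrary):

```python
# Partial sums of Zeno's series, sum over i >= 1 of (1/2)^i, which the
# text shows converges to 1.

def zeno_partial(N):
    """Return the sum of (1/2)**i for i = 1..N."""
    return sum((1 / 2) ** i for i in range(1, N + 1))

for N in (1, 2, 5, 10, 30):
    print(N, zeno_partial(N))
```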

Special thanks to my great friend, Tim Hoppe, for giving me permission to use his name as an abstract unit of measurement.


# A Proposed Proof for the Existence of God

Assume it is impossible to prove God does not exist. Then the probability that God exists, $p(\text{G})$, however minuscule, is greater than zero: $p(\text{G}) = ab^{-1} \in (0,1)$. Also assume, as many important physicists and cosmologists do, that (1) the multiverse exists and is composed of an infinite number of independent universes and (2) our current universe is but one of those infinite universes existing in the multiverse.*

If the probability of the non-existence of God, denoted $p(\lnot\text{G})$, in some universe is defined as

$p(\lnot\text{G}) = (1 - ab^{-1})\in\left(0,1\right)$

then as the number of universes ($n$) approaches infinity,

$\lim_{n \rightarrow \infty} (1 - ab^{-1})^n = 0$.

That is, the sequence $\left(1-ab^{-1}\right)^n\to 0$ as $n\to\infty$. Any event that can happen will ineluctably happen given enough trials. This means God must exist in at least one universe within the multiverse, and if He does, then He must exist in all universes, including our universe, because omnipresence is a necessary condition for God to exist.

Q.E.D.
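The limit at the heart of the argument is easy to check numerically. A minimal Python sketch, where the per-universe probability is set to an arbitrary illustrative value ($10^{-9}$, not anything from the post):

```python
# If the event has probability p > 0 in each of n independent universes,
# then the chance it occurs nowhere is (1 - p)^n, which tends to 0 as
# n grows; equivalently, the chance it occurs at least once tends to 1.

def prob_in_at_least_one(p, n):
    """Return 1 - (1 - p)**n: the chance of at least one occurrence in n trials."""
    return 1 - (1 - p) ** n

for n in (10**6, 10**9, 10**12):
    print(f"n = {n}: {prob_in_at_least_one(1e-9, n)}")
```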

* This is certainly a reasonable, if not ubiquitously held, concept that follows from the mathematics of inflationary theory. In *Our Mathematical Universe*, for example, Max Tegmark suggests if “inflation…made our space infinite[, t]hen there are infinitely many Level I parallel universes” (121).


# Toward a quantification of intellectual disciplines

As a mathematician, I often find myself taking the STEM side of the STEM-versus-liberal-arts-and-humanities debate—this should come as no surprise to readers of this blog—and my principal conceit, that of a general claim to marginal productivity, quite often (and surprisingly, to me) underwhelms my opponents. So, I’ve been thinking about how we might (objectively) quantify the value of a discipline. May we argue, if we can, that quantum mechanics is “more important” than, say, the study of Victorian-period literature? Is the philosophy of mind as essential as the macroeconomics of international trade? Are composers of dodecaphonic concert music as indispensable to the socioeconomic fabric as historians of WWII? Is it really possible to make such comparisons, and should we be making them at all? Are all intellectual pursuits equally justified? If so, why should that be the case, and if not, how can society differentiate among so many disparate modes of inquiry?

To that end, then, I’ve quickly drafted eleven basic categories I believe can aid us in the quantification of an intellectual pursuit:

I. Demand

This will perforce involve a few slippery statistical calculations: average annual salary (scaled to cost-of-living expenses) for similar degree holders (e.g., BSc, PhD, etc.), the size of associated university departments, job-placement rates among graduates with the same terminal degree, the number of relevant publications (both popular and academic), and anything that betrays a clear supply-and-demand approach to the activities of participants within a discipline and the output they generate.

II. Influence

How fertile is the (inter-field) progeny of research? How often are articles cited by other disciplines? Do the articles, conferences, and symposia affect a diverse collection of academic research in different fields with perhaps sweeping consequences, or does the intellectual offspring of an academic discipline rarely push beyond the confines of its participants?

III. Difficulty

What is the effort required for mastery and original contribution? In general, we place a greater value on things that take increased effort to attain. It’s easier, for example, to eat a pizza than to acquire rock-hard abs. (As an aside, and apart from coeval psychosexual aspects of attraction—obesity was considered a desirable trait during the twelfth to fifteenth centuries because it signified wealth and power—being fit holds greater societal value because it, among other things, represents the more difficult, ascetic path, which suggests something of an evolutionary advantage.) Average time to graduation, the number of prerequisite courses for degree candidacy, and the rigor of standardized tests might also play a useful role here.

IV. Applicability

How practical is the discipline’s intellectual import? How much utility does it possess? Does it (at least, eventually) lead to a general increase in the quality of life for the general population (e.g., the creation of plastics), or is it limited in its scope and interest only to those persons with a direct relationship to its machinery (e.g., non-commutative transformational symmetries in the development of Mozart’s Piano Sonata no. 12 in F major K. 332)? A less diplomatic characterization might involve asking the simple question: Who cares?

V. Recognition

Disciplines and academic fields that enjoy major prizes (e.g., Nobel, Pulitzer, Fields, Abel, etc.) must often succumb to more rigorous scrutiny and peer-reviewed analysis than those disciplines whose metrics more heavily rely upon the opinion of a small cadre of informed peers and the publish-or-perish repositories of second-tier journals willing to print marginal material. This isn’t a rigid metric, of course: Many economists now reject the Nobel-winning efficient-market hypothesis, and the LTCM debacle of the late 90s revealed the hidden perniciousness crouching behind the Black-Scholes equation, which also earned its creators a Nobel prize. (Perhaps these examples suggest something deficient about economics.) In general, though, winning a major international prize is a highly valued accomplishment that validates one’s work as enduring and important.

VI. Objectivity

Can we prove the propositions of an academic discipline, or are its claims wholly unfalsifiable? Is the machinery of an intellectual discipline largely based upon subjective and intuitive interpretation or rigorously defined axioms? Can the value and importance of a conceit change if coeval opinion modulates its position? It seems desirable to prefer an objective and provable claim to one based on subjectivity and a mushy, ever-changing worldview.

VII. Future value

What is the potential influence surrounding the field’s unsolved problems? Do experts generally believe resolving those issues might eventually lead to significant breakthroughs (or possibly chaos!), or will the discipline’s elusive solutions effectuate only incremental and localized progress when viewed through the widest possible lens?

VIII. Connectivity

What might be the long-range repercussions of eliminating a discipline? Would anyone beyond its active members notice its absence? How essential is its intellectual currency to our current socioeconomic infrastructure? One or two generations removed from our own? There exists inherent value in the indispensable.

IX. Ubiquity

How many colleges and universities offer formal, on-campus degrees in the field? Is its study limited to regional or localized interests, or is it embraced by a truly international collective? Wider academic availability, regardless of where you live, suggests a greater general value.

X. Labor mobility

Is employment contingent upon a specific geographic area or narrowly defined economies? Does an intellectual discipline provide global opportunity? Do gender gaps or racial-bias issues exist that might impede entry for qualified candidates? How flexible is the discipline’s intellectual infrastructure? Do the skills you acquire permit productivity within a range of disparate occupations and applications, or do they translate poorly to other sectors of the labor market because graduates are pigeonholed into a singular intellectual activity?

Can you find meaningful employment without going to graduate school, or must you finish a PhD in order to be gainfully employed? There are certain exceptions, of course: brain surgeons, for example, enjoy a very limited employment landscape—and earning anything less than an M.D. degree means you can’t practice medicine—but this is an example of an outlier that offers counterbalancing compensation within the larger model.

XI. Automation

What is the probability a discipline will be automated in the future? Can your field easily be replaced by a robot or a sufficiently robust AI (or even new advances in classical computer algorithms) in the next 15 years? (Luddites beware.)

__________

Not perfect, but it’s a pretty good start, I think. The list strikes a decent balance across disciplines and, taken as a whole, doesn’t necessarily privilege any particular field. A communications major, for example, might score near the top in labor mobility, automation, and ubiquity but very low in difficulty and prize recognition (and likely most other categories, too). I also eliminated certain obvious categories (like historical import) because the history of our intellectual landscape has often been marked by hysteria, inaccuracy, and misinformation. To privilege, say, music(ology) because of its membership in the quadrivium when most people believed part of its importance revolved around its ability to affect the four humors seems unhelpful. (It also seems unfair to penalize, say, risk analysts because the stock market didn’t exist in the sixth century.)

Where we go from here is anyone’s guess. Specific quantifying methods might only require the most obvious metric: a function $f : \text{R}^n\to \text{R}$, perhaps with a series of weightings, where $n$ is the total number of individual categories, $c_i$, and the total value of a discipline, $v_j$, is calculated by a geometric mean, provided no category can have a value of zero: $v_j = \left(\prod_{i=1}^n c_i\right)^{1/n}.$
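As a proof of concept, that scoring function is nearly a one-liner. A minimal Python sketch; the eleven category scores below are invented purely for illustration:

```python
# A discipline's value v_j as the geometric mean of its category scores
# c_1..c_n, with the proviso from the text that no category may be zero.
import math

def discipline_value(scores):
    """Return the geometric mean of strictly positive category scores."""
    if any(c <= 0 for c in scores):
        raise ValueError("no category may have a value of zero (or less)")
    return math.prod(scores) ** (1 / len(scores))

scores = [7, 4, 9, 6, 5, 8, 6, 7, 5, 6, 3]  # Demand .. Automation (hypothetical)
print(round(discipline_value(scores), 3))
```

One pleasant property of the geometric mean here: a single weak category drags the overall score down far more than it would under an arithmetic mean, which seems in keeping with the no-zero-categories proviso.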


# On Making “Chewbacca Mom” Disappear

I take great solace in the fact that we could make “Chewbacca Mom” (hereafter CM) vanish—without being hurt in any way—if we could create the required Lorentz contraction using the relativistic length-contraction formula (built from the Lorentz factor $\gamma$):

$L = L_r(1 - (\frac{v}{c})^2)^{1/2}$

where $L_r$ is CM’s length at rest. As her velocity approaches the speed of light (i.e., as $v/c \to 1$), CM (essentially) disappears before our very eyes! (And, yes, if the vehicle were large enough, we could fit the Kardashians inside, too. Don’t you just love science?)
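If you’d like to watch the contraction happen numerically, here’s a quick Python sketch (the 1.7 m rest length is an arbitrary stand-in for CM):

```python
# Length contraction L = L_r * (1 - (v/c)^2)^(1/2): the observed length
# shrinks toward zero as v/c approaches 1.
import math

def contracted_length(L_rest, beta):
    """Return the observed length at speed v = beta * c, for 0 <= beta < 1."""
    return L_rest * math.sqrt(1 - beta ** 2)

for beta in (0.5, 0.9, 0.99, 0.999999):
    print(f"v/c = {beta}: L = {contracted_length(1.7, beta):.6f} m")
```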

Until technology catches up with mathematics and physics, though, I guess I’ll just have to keep filtering my news feed.


# Is Atheism Irrational?

The following is a very interesting (and rather long) FB discussion about a NYTimes link I posted to my wall, which led to an enlightening debate concerning the viability of the Big Bang theory based upon stochastic measurements.  I have done my best to present the discussion in its original form.

Leon:  There’s probably an established name for this fallacy. Just because AP can’t imagine what sort of life might arise under different conditions doesn’t mean that it wouldn’t — and that it wouldn’t buy into the same fallacy: “If conditions had been just a little different, our world could have been overrun with deadly WATER and life as we know it would have been impossible! This clearly proves that everything in the universe was shaped with our welfare in mind!”

Peter:  Leon, I generally agree, but changes to cosmological parameters don’t just lead to “deadly water.” It’s hard to imagine a universe that could sustain any life while precluding lifeforms of equivalent complexity / interest / value to humans. So when we talk about universes different from ours, we might as well be talking about universes with no life at all. This pushes the question back to whether life itself is a worthy criterion by which to judge a universe—and then back to whether the “worth” of a universe is even a coherent concept, absent human judgment. This article gives a sharp analysis.

David:  More important, I think, is the mathematics involved in the (very unlikely) probabilities associated with the current state of the universe—regardless of whether we wish to quantify that approach by burdening it with the concept of anthropocentrism. (And even if we do wish to pursue such an approach, anthropocentrism doesn’t seem to cast a greater shadow over creationism than it does the theory of evolution, which is, essentially, an anthropocentric theory concerned—if not obsessed—with humans qua the teleology of a “trajectory toward perfection.”) The notion of “life” is irrelevant, for example, if we’re limiting our discussion to the stochastic probability of the synthesis of a single protein chain subsequent to the Big Bang (1 in 10^125).

Peter:  David, I suspect both of our minds are already made up, but:

1. Evolution, as I understand it, is absolutely not human-centric or teleological. Quite the opposite: humans aren’t the end or destination of the process, just another branch on the tree.

2. Anthropocentrism (biocentrism, etc.) is still an issue in this discussion of probability.

The set of states containing a protein chain is no more or less improbable than an equivalently “sized” set of states without it. It’s hard to reason dispassionately about it, for the same reason it’s hard to imagine a world in which you had never met your wife. But *things would have happened* in all those other worlds too. When you say, “What are the chances that we would meet and fall in love?” you’re implicitly valuing this scenario above all the others. It’s the same with the probability argument you give above. The article I linked gives a neat rebuttal to *my* point on pp. 173–175. It really is worth a read!

Leon:  Peter, the article you linked to is very well-done (except for one thing that I will mention) and I learned from it. However, when I found my attention drifting halfway through and wondered why, I realized that no one, *really*, is making the pure logical argument that there might be *some* being that created the universe. Mr. Manson does a good job of pointing out that some debunking strategies are not really arguments, they’re rhetorical strategies. What I fault him for is not pointing out that claims such as Plantinga’s above are also, just as much, rhetorical strategies rather than logical arguments. Manson does a good job of showing that what he calls the “moral argument” for some sort of creator requires there to be a moral value to creating conscious beings before anything in the universe existed. He then goes on to say he doesn’t know what arguing for such a value ex nihilo would look like. That’s right, and I don’t think anyone has done it, because anyone who gets to this point is really just providing rhetorical cover for saying that there must be a god. That, or if Manson takes the extra step in honesty and admits this, then he has to say that the moral argument is circular. And, in the spirit of following up on Manson’s analysis of the debunking rhetoric, I’ll point out that a lot of the success in Plantinga’s “argument from design” story is undoubtedly its ecumenical nature: it doesn’t mention any sects, so each listener gets to pencil in the name and characteristics of their own preferred god.

Dave, highly improbable events occur all the time; we don’t feel compelled to find divine explanations for them unless it reinforces our own personal narrative for the universe. The last time I saw a football game on TV, a team that was down by 5 threw a hail-mary pass as time expired. The quarterback threw it squarely to the two defenders. One of them jumped high up, caught the ball, and as he came down his hands hit the top of the helmet of his fellow defender. The ball bounced up and forward, over the head of the intended receiver, who did a visible double-take but managed to grab it and carry it into the end zone. I don’t know what the odds are on this, but no one feels obliged to find a divine explanation for this unless (a) they’re a really big fan of the winning team or (b) I end up getting inspired to become a football player by seeing this play and want to credit God with motivating me. That’s my response to the math: no one would care about the odds (except maybe “Wow! Cool.”) if they weren’t a way of reinforcing the emotional payoff of one’s chosen narrative about the overall situation.

Peter:  So, what about the alleged incompatibility of materialism and evolutionary theory? That seems like the novel part of AP’s argument, and I don’t really know what to make of it. My gut reaction is that there’s a problem with the reasoning around “belief” (in particular, why should we assume a priori that each belief is 50% likely to be true?), but I don’t know enough philosophy to really get it.

David:  @Peter: 1. We certainly know Darwin to have framed the concept of selection as a progression toward a state of “perfection,” and Lamarck even described the evolutionary trajectory as a ladder of ever-increasing moments of such perfection. So, even if a teleology isn’t explicitly stated, it’s heavily implied as an essential component of evolution’s general guiding principle. Also, I know of no examples within evolutionary biology where selection and adaptation have effectuated a regression to a less perfect state, so whether or not there exists intention (i.e., a Platonic teleology, etc.) with respect to the evolutionary process, there exists at least a teleomatic process that, through its natural laws, moves toward something that is “better than it was.” Of course, it might be more than that—say, an Aristotelian teleonomic (i.e., a process of “final causes”)—but what we have is, at least, the law-directed purpose embedded within the process itself. Humans might not finally be a reification of the highest rung of Lamarckian “perfection,” but if we aren’t, that doesn’t necessarily efface the likelihood we exhibit a current state of perfection—“better than we’ve ever been”—which is still a shadow cast by (coeval) evolutionary predilections toward anthropocentrism. 2. I’m not quite sure I understand your point with respect to the mathematics-to-anthropocentrism link. Are you referring to James’s “infinite universes” when you speak of an “equivalent ‘sized’ set of states”? Also, I’m not sure why “valuing” 1:10^125 above some other p-event is necessarily a problem. We privilege it because of its importance to what ostensibly comes next.

@Leon: Sure, no one need hold for a divine explanation of events witnessed, say, during a football game, even in the face of highly improbable events—unless, of course, you’re a hard determinist—but I think you’re (inadvertently) misappropriating causation/intention; there’s no reason to entertain the possibility of design and authorship with respect to the very low odds involved in the path of your football, so an attempt to do so immediately strikes one as extremely odd, which suggests (erroneously) that the argument for the design of (generally) highly unlikely events is logically unsound. It’s easy to imagine the occurrence of unusual events when contemplating the (sum of the) discrete actions of autonomous agents within the confines of physical scientific laws, but in no sense do those events demand the possibility of, or need for, a “designer.”

But consider the following Gedankenexperiment: You are sitting at a table with three boxes in front of you. One box contains identical slips of paper, each with one of the twelve pcs inscribed on it; the second box also contains identical pieces of paper, and on each is written a single registral designation; the third box, like the others, contains identical pieces of paper, but, here, each piece of paper denotes a single rhythmic value (or rest). If you (or a monkey or a robotic arm) were to choose one piece of paper from each box randomly (with replacement) and notate the results, what are the odds you would create a double canon at the twelfth or even a fugue? I’m not going to do the math, but the p-value is an unimaginably small number. Yet if we were to suddenly discover a hitherto unknown sketch of just such a canon, who would presume it to be the result of a stochastic process? None of us. Why? Because the detailed complexity of the result—the canon itself—very strongly suggests a purposeful design (and, thus, a designer), so we would perforce reject any sort of stochastic probability as a feature of its existence. Is it not odd, then, that the canon’s complexity evinces the unimpeachable notion that a composer purposefully exhibited intention (and skill) in its creation, yet the universe—with its infinitely more complex structure and an unbelievably smaller probability of stochastic success—can be rationalized and dispatched by random (and highly improbable) interactions between and among substances that appeared ex nihilo?

Leon: Yes, Dave, I’m very much enjoying the discussion. Peter, I honestly think that Plantinga is just throwing in everything that occurs to him, in the hopes that it will stick. If that seems ad hominem, well, I just see him appealing to “But doesn’t it just seem ridiculous that…[new claim here]” over and over again, without any reasoning other than an appeal to “isn’t it just so improbable…” I think it’s perfectly okay to not address some of that, on the grounds that we’re not here to figure out a coherent argument for his rhetoric for him. Dave, it’s completely wrong to attribute teleology to Darwin and the theory of evolution that comes from him. He is something of a transitional figure, and may not have guarded his language against teleological implications as well as later workers did. But even during his lifetime, he was fiercely opposed by biologists who had explicitly teleological accounts of evolution, like Carl Nägeli; and by the end of the century this had become well-established enough that even people like Mark Twain (certainly not on the cutting edge of biology) could ridicule teleology via an argument by design: he said that if we take the total age of the earth as the height of the Eiffel Tower, then the period of man’s time on earth can be represented as the layer of paint at the top of it — and that saying that all of earth’s history was in service of bringing man into existence is like saying that the purpose of the Eiffel Tower is to hold up that top layer of paint.

David: Oh, I’m not suggesting “evolutionary teleology” ends with humans—though modern scientists often speak of humans with such reverence that they imply such a concept (e.g., Dawkins’s discussion of human brain redundancy, etc.)—but I am saying there exists a teleology of process (toward improvement/perfection) that is built into evolution’s core principles. You can’t have one without the other. Whether that “constant state of improvement” ends with human life is not my concern—though it’s difficult to imagine a change-of-kinds progression beyond human life (could the Singularity be that moment?)—but it seems to occupy the bulk of Plantinga’s conceit.

Leon:  […]and also, Dave, your gedanken experiment is well-taken, but in this and the original question I think you underestimate the vastness and tremendous age of the universe — under our current hegemonic cosmology, there have been planets in existence for ~10 billion years, and there are hundreds of billions of galaxies each containing hundreds of billions of stars. If your experiment is carried out at each star for a comparable length of time, I’m quite certain we’ll end up with thousands of perfectly appropriate canons. I also disagree with this example in that I believe that you’re working under an assumption that I’ll illustrate with the following story, taken from a philosopher that I’m not recalling: the edge of a road erodes, revealing some pebbles that spell out a sonnet of Shakespeare’s. We get very impressed by this, assuming it either to be somehow miraculous or a prank — in either case we take it to demonstrate intentionality of *some* sort. The philosopher’s point is that this reaction is an anthropocentric bias — *any* random arrangement of revealed pebbles is just as unlikely as any other, yet we don’t take the more random-looking ones as evidence of intentionality. It’s not quite that simple, of course; but as you pointed out, we don’t have a lot of space here. But I will say that given a sufficiently large number of roadsides, I’d expect a *lot* of things that “make sense” to appear, especially given that we conflate many things that “make sense” in the same way but have surface features that differ (the “same text” with different fonts or handwritings, for example) but we don’t do that with more “random-looking” arrangements. It also seems to me that you made the gedanken experiment because you think of life (or intelligence) on earth as something like the appearance of a Shakespeare sonnet on the wayside — evidence of intentionality. But to do so already assumes intentionality in the pre-life universe — that is, it’s circular reasoning. 
Teleology is a directionality imposed from without, not one that results from humans seeing a situation and imposing their thought habits. Some species get better at some of their life tasks because more of their handier members survive; in the absence of humans calling that a direction and privileging it as the “essential” nature of evolution, that’s no more teleological than water flowing downhill. Actual evolution theory actually points out that many, many organisms have very and obviously imperfect adaptations, yet as long as they can still survive they are not replaced by “fitter” species, nor do they keep evolving spontaneously just for the sake of evolving. And there are tons and tons of very “primitive” organisms on earth — like nematodes and bacteria, which probably make up 90% of the earth’s biomass — that are so evolutionarily fit that they have not evolved since probably before the dinosaurs. There’s no teleology driving them. Also, this. 🙂

Peter:  Well, and even if Darwin did think selection was teleological (which, I dunno, maybe he did early on at least), theorizing about evolution didn’t stop there. Twain’s quip is clever, but putting humans at the top of the tower still seems like a 19th-century move. We’re probably an *extreme* of something, but I don’t think many evolutionary theorists would say we’re in a state of perfection, in either of the senses Dave outlines. It sounds like you’re thinking of evolutionary fitness as a universal quality that every organism has some amount of. But that’s not how it works: fitness is relative to a habitat. We humans are more “fit” than our predecessors in the sense that if you were to drop one of our hominid ancestors into most present-day human habitats, it wouldn’t do so well. (It would probably be terrible at music theory, for instance.) But that’s not because we’re universally more “fit” or better adapted for life in general. Plenty of organisms survive in habitats that would kill us instantly. Fitness is optimized over shorter spans than environmental change, so we can pretty much assume that everything that survives and reproduces is at a local maximum of its fitness landscape. But that doesn’t mean it’s more fit than its ancestors were, or less fit than its descendants will be. [edit: …in the long term, I mean.] The double canon example is great, but I think it illustrates my point better than yours. If we looked up in the sky and saw the stars and comets arranged into a double canon, or if one were somehow encoded into our DNA, then yes, we’d be compelled to look for some intelligent composer. That would be really cool! And it would be statistically unlikely, because we can imagine scenarios in which things could have gone differently, and we wouldn’t have observed those things. (Our actual world being one of them.) 
But that’s not the same kind of evidence provided by our existence in the universe, because there’s no scenario in which we would have been able to observe our nonexistence. The improbability of our existence just doesn’t bear on the question. [Edit: In oh-so-fashionable Bayesian terms, P(universe|people) is 1, no matter what P(universe) may be.]

David:  @Leon: My thought experiment only meant to suggest that sufficient complexity, beyond the bounds of any sort of reasonable levels of stochastic probability, strongly suggests design. It’s not circular reasoning because we invoke that logic each time a new sketch is discovered. Your counterpoint sounds a lot like the infinite monkey theorem. But, as I’ve described in my blog, the math doesn’t even work; infinite exponentiation on the interval (0,1) always approaches zero. So, we always have a fatal contradiction: the p-value of an event cannot be both certain and impossible. Imagine a boy trying to throw a ping-pong ball into a paper cup 60 yards away. There’s a big difference between 100 billion earths, each with a single boy trying to throw the ball into a cup 60 yards away, and 100 billion boys covering the first earth, each with a ball they will attempt to throw into the cup.

Leon:  Peter, I totally agree with you about evolution not being a single monolithic structure with humans at the end of it. But I would quibble: I do agree that “fitness” is not some abstract quality that everything now has in greater measure than the past and lesser measure than the future. But as far as it proceeding more slowly than environmental change, we’ve certainly upset that. And mass extinctions are a counterexample. And even in a stable environment, one of SJ Gould’s flagship examples of bad adaptation was the panda’s “thumb”, which is certainly not an optimal adaptation. It just works well enough to keep the pandas going, and that’s enough.

Peter:  You’re absolutely right about mass extinctions and catastrophic events—I should have been clearer. But is it still fair to say that the panda thumb is a case of a local maximum in the fitness landscape? Like, small “steps” around it were worse? What I meant to say was just that evolution isn’t solely driven by competition in a stable environment—which is what the teleological, constant-improvement model assumes. Also, yes—this is a super fun conversation! If only I could get this excited about the work I’m *supposed* to be doing.

David:  Quick insertion: Humans have vestigial organs, but that doesn’t mean we must jettison the commonly-held belief that humans represent a “local maximum,” although, if we follow the metaphor precisely, that phrase presumes a decline after the peak, which doesn’t really describe any of the evolutionary biology I’ve read. “Maladaptations” and extinctions, I think, should also be contextualized within the larger trajectory of “progress”—the whole survival-of-the-fittest thing (not the survival-of-everything thing). I’ll have to come back to the canon example!

Peter:  Yay, my favorite conversation is back! Are you sure survival-of-the-fittest should be characterized as “progress”? I’m certainly not an expert on evolutionary theory, but I get the strong impression that that could be true only in the relatively short term, in a stable environment (as I tried to say above). Wikipedia cites S. J. Gould on this: “Although it is difficult to measure complexity, it seems uncontroversial that mammals are more complex than bacteria. Gould (1997) agrees, but claims that this apparent largest-scale trend is a statistical artifact. Bacteria represent a minimum level of complexity for life on Earth today. Gould (1997) argues that there is no selective pressure for higher levels of complexity, but there is selective pressure against complexity below the level of bacteria. This minimum required level of complexity, combined with random mutation, implies that the average level of complexity of life must increase over time.”

David:  I would quickly add, though, that the very high “improbability of our existence”—based on the sheer math involved—has quite a bit of bearing on the probability of design, imo. In fact, that was the whole point of my double-canon example. Why don’t we ever consider a double canon—when we find one—to have been created by stochastic processes? I think the notion of “progress” is inherent in the concept of “survival.” What doesn’t survive obviously cannot progress.

Peter:  Actually, that just shows that “survival” is inherent in “progress.” What survives still does not necessarily progress. And what would a biological definition of progress look like, anyway? The problem with the probability argument is that the universe is a precondition for our observation. When we observe that the universe happens to be just right for us to exist, or that we seem to exist despite incredible odds, what does this tell us? This is exactly the question that Bayes’s theorem is built to answer: “How likely is Y, given X?” How likely is it that the universe would have these properties, given that we exist? If it’s the only sort of universe that could support intelligent life, then, well, 100%. Ha! It turns out my argument has a name, and can be expressed MUCH more clearly. It’s the Weak Anthropic Principle: “…the universe’s ostensible fine tuning is the result of selection bias: i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing any such fine tuning, while a universe less compatible with life will go unbeheld.”

Leon: Dave, it actually strikes me that there are two ways to take your thought experiment. One is, as you say, that the result you discuss is very improbable, therefore perhaps someone did it on purpose. This strikes me as more of the “young earth,” hands-on creation position, not the one that you or AP are floating here. The other approach is less about the system’s ability to generate the canon than the idea that if there’s some process in place that *can* generate such a canon, then the process must have been set up by an intelligence that had such canons in mind. This seems closer to Plantinga’s (and your?) approach in saying that if this universe *can* produce life, it must have been set up that way on purpose. Is that correct? (Though of course, with God as a non-timebound, less anthropomorphic being, perhaps there’s not so much of a difference between these two ways of looking at things).

David:  Okay, I think “evolutionary teleology” derives from its principles. That is, “progress” is an inherent feature of evolutionary design and not some exogenous thing slapped onto its structure ex post facto. That doesn’t mean everything needs to change constantly—there are periods of stasis (i.e., localized temporal optimizations)—but it does suggest that when things move, they move in one direction. When things stop moving (forward), when organisms stop evolving and adapting (in the long run) in ways that are beneficial to their survival, they eventually become extinct; thus, even the notion of extinction becomes a feature of a more diachronic concept of progress. Water flows down the hill (and even pools into puddles of stasis) because of the “teleology” established by the law of gravity. If we reject the teleological notion of progress (that is, if we treat adaptation and fitness as random, non-directed processes), evolutionary biology becomes a much tougher sell, imo. I’m not really interested in fit-for-life arguments about the universe, even though that concept drives Plantinga’s conceit. I do not reject the possibility of stochastic double canons because composers exist; I assume composers exist because the p-value of a stochastic double canon is impossibly small. This allows me to sidestep the problems associated with Bayes’s theorem. I’ll have to come back for the rest…including Leon’s interesting parsing.

Peter:  Okay, I see where you’re coming from re: evolution, and I agree that natural selection does generally lead to greater “fitness.” In fact, I’m pretty sure that’s how we define fitness: that which is maximized by natural selection. But it has nothing to do with a “trajectory toward perfection,” as you said way back at the start of the thread. Fitness isn’t concerned with perfection (in the sense of “freedom from defect”), only with survival and reproduction *in a particular ecosystem*. Incidentally, Wikipedia tells me that the phrase “survival of the fittest” is actually a misquotation: Herbert Spencer’s original formulation was “survival of the best fitted.”

David:  True. Perhaps I should have described it as a “trajectory toward a greater perfection.” Is it possible there exists a permanent-yet-imperfect evolutionary state that just, well, stops? Leon, I’m not quite sure I understand; there IS such a process because I created it, lol: one picks from three boxes, each filled with unique and discrete musical elements. The probability of that process creating the desired result, however, is truly minuscule. In fact, let’s put a face on it. If we assume an eight-octave range, common time, and both note and rest values no shorter in duration than a sixteenth note (and no longer than a whole note), the p-value for generating (only!) a C major scale in quarter notes (within one octave) is given by

$p = [(1/12)(1/8)(1/32)]^7 \approx 3.87 \times 10^{-25}$

A 40-note composition with uniquely determined values approaches 3.19 x 10^-140. (There are only 10^80 atoms in the observable universe.) Imagine the p-value in generating Bach’s Contrapunctus I from BWV 1080! So, okay, what do these numbers mean? Well, it’s simple: you’d have a MUCH better chance of closing your eyes and, with one trial, picking the single atom (within the observable universe) I’ve labeled “Leon” than creating a 20-note dux (with comes) by using the three-box method I’ve described. But, Leon, if you’re suggesting a “process” that with a HIGH PROBABILITY does, in fact, create, say, Baroque-era canons with invertible counterpoint, then I’d say the process IS the intelligence itself, which is my point. I can create the canon, but I can’t generate a larger number from the product. Of course, I could create such a high-valued stochastic process by severely limiting the variables (e.g., controlling p-values for each input, etc.), but rigging the task to be less demanding cannot be evidence for the feasibility of the more difficult one. (And my model could be made even more difficult!)
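For what it’s worth, David’s figures are easy to check with a short script. This is just a sketch of the arithmetic he describes, assuming 12 pitch classes × 8 octaves × 32 note/rest durations = 3072 equally likely choices per note:

```python
from math import log10

# Each note: pitch class (1/12) x octave (1/8) x duration or rest value (1/32),
# i.e. 12 * 8 * 32 = 3072 equally likely outcomes per note.
per_note = (1 / 12) * (1 / 8) * (1 / 32)

p_scale = per_note ** 7    # seven notes of a C major scale
p_forty = per_note ** 40   # a 40-note composition

print(f"{p_scale:.2e}")    # ~3.87e-25
print(f"{p_forty:.2e}")    # ~3.19e-140
print(40 * -log10(per_note))  # ~139.5 orders of magnitude
```

Both results agree with the numbers quoted in the thread, so the arithmetic, at least, checks out.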

Peter:  > “Is it possible there exists a permanent-yet-imperfect evolutionary state that just, well, stops?”< Sure, if the environment holds still and other competing species also stop evolving. More seriously, what about cockroaches, or bacteria? They’ve been around in roughly their current forms a heck of a lot longer than we have. I guess my big point is that in the big picture, evolution isn’t a trajectory *toward* any particular destination—more like an expansion around an origin. See the link above about “largest-scale trends.” > “The probability of that process creating the *desired result*, however, is truly minuscule”<

This analogy depends on our being the “desired result,” which is (I think) what Leon was poking at a few comments ago. It begs the question, IMO.

Leon:  It’s really fascinating to me to discover completely unexpected ways that we misunderstand each other. 🙂

David:  Okay, what am I missing, lol?  I’m not referring to humans as the desired result, Peter. I’m more than content to limit the discussion to protein-chain synthesis—with or without humans. As for the stalled-evolution hypothesis, I feel much more comfortable with the notion that each “thing” is (largely) a discrete entity. Why some evolving bacteria, cockroaches, and fish but not others? Is it somehow “fitter” to be a bacterium rather than a human in 2014? There are sizable hurdles there, imo. As an aside, can we indulge in the notion of a Platonic canon at the tenth, lol? 🙂

Peter:  > Why some evolving bacteria, cockroaches, and fish but not others?<

I’m not sure I understand. If the question is why mutations aren’t possessed by every individual in a species, it’s just the way DNA works: mutations are random. If the question is why populations diversify and speciate, it depends on the degree to which they maintain contact as they split.

> Is it somehow “fitter” to be a bacterium rather than a human in 2014?<

Only if bacteria displace humans. If we’re not in competition, then no. Relative fitness is defined only among competing genotypes (see the Wikipedia link to “Fitness,” above). Okay, I’m gonna try one last time to sum up my objections to the probability argument.

1a. Whenever we describe the probability of an event, we do so in terms of a sample space. For example, when someone rolls two dice together, the chance of getting double sixes is 1/36, because the sample space includes 35 other combinations, all equally likely to occur.

1b. Current physics describes many cosmological configurations, all equally physically valid, the vast majority of which could not sustain intelligent life. In this sample space our universe is improbable, bordering on impossible.

2a. When we observe a spectacularly unlikely event that borders on the impossible, that can give us doubts about the way we’ve constructed the sample space. For example, if we dump out a bucket of dice, and they all come up 6, it’s a pretty fair bet that they were loaded, and not all configurations were in fact equally likely. (Or in your canon analogy, its low entropy suggests to us that it was composed by traditional canonic process, rather than by some stochastic one that would inflate the sample space.)

2b. Analogously, the improbability of our universe suggests a problem with the sample space. For you, the conclusion is that our universe wasn’t created by a random roll of the cosmic dice, but rather was designed with an eye toward this outcome. Another explanation would be that the cosmic dice have been rolled again and again, and this is the only outcome that we (as intelligent beings) could ever observe. From what I can tell, most physicists find this plausible (the debate now seems to be about “where” these other universes are). This improbability (by itself) is NOT evidence of multiple universes, nor of a designer. It just doesn’t weigh on the question either way. It’s analogous to a bucket of dice rolled by someone else in a room that we’re invited into *only* if/when all the dice come up six. Given that scenario, it doesn’t matter how many dice are in the bucket, or whether or not they’re loaded: the only result we will ever see is the one with all sixes.
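Peter’s dice-room scenario can be sketched as a quick simulation. This is an illustrative toy, not anyone’s actual model: the bucket size and trial count are arbitrary (a tiny 3-die bucket stands in for the 200-die one, which would never succeed in a feasible number of simulated rolls):

```python
import random

random.seed(42)

BUCKET_SIZE = 3    # deliberately tiny stand-in for the 200-die bucket
TRIALS = 200_000

# The dice are perfectly fair, but the "observer" is only shown rolls
# in which every die comes up six (the invitation into the room).
observed = []
for _ in range(TRIALS):
    roll = [random.randint(1, 6) for _ in range(BUCKET_SIZE)]
    if all(die == 6 for die in roll):
        observed.append(roll)

# From inside the room, every roll ever seen is all sixes -- even though
# nothing about the dice is loaded or designed.
print(f"{len(observed)} rolls observed, all of them all-sixes")
```

The point of the sketch: conditioning the observation on success makes the observed sample look “rigged” no matter how fair the underlying process is.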

David:  1a. Yes, all p-values within a distribution will sum to one, but if we’re interested in rolling double sixes, 1/36 will be our focus for a single trial, though it might very well take 70 trials to get the desired result.

1b. Yes, and I’d phrase it this way: a single “trial” effectuated by the Big Bang yields a p-value so small that the likelihood of some stochastic design of the current cosmological configuration (or even a configuration without human life) is, for all intents and purposes, zero.

2a. Precisely. A bucket-of-sixes event strongly suggests an intervention of some kind; we do not presuppose that we’ve witnessed some sort of unbelievably rare stochastic moment (i.e., [1/6]^n : n = the total die count). A bucket of 200 dice yields a p-value of about 2.34 x 10^-156. (Again, there are only 10^80 atoms in the observable universe.) The same inference, of course, is drawn when we unearth a double canon at the tenth; though a canon’s p-value is much, much smaller than that of a bucket of sixes.

2b. As a theorist and mathematician, I’m saying, as we did in 2a, that there exists an intervention with respect to such minuscule p-values, that stochastic processes are a very poor explanation for our cosmological result. As a Christian, I believe that intervention involves an omniscient God, just like person X composed the impossibly unlikely canon (with, as you suggest, an incredibly low entropy) rather than a robotic arm pulling pieces of paper from three boxes. Also, mathematics has only proven eleven dimensions, yet that does not simultaneously prove at least eleven “parallel universes.” Four of those dimensions, as you know, are firmly rooted within our present (single) universe. So, there’s no proof that, say, an infinite number of Big Bangs took an infinite number of stochastic cracks at generating our current cosmology. And even if that WERE the case, the math is still restrictive. Each Big Bang attempt would have a near-zero p-value for the current cosmology, and Bernoulli’s law of large numbers essentially guarantees such a near-zero p-value at an infinite number of trials. A single universe-trial does not involve a non-replacement p-value (e.g., pulling a marble out of a bag and putting it in your pocket); you don’t approach p = 1 at an infinite number of trials, though that seems to be a common mistake people make. It’s like the analogy I described earlier—that of a near-infinite number of earths, each with a single child trying to throw a ping-pong ball into a Dixie cup 60 yards away. The p-value for each discrete earth does not change—assuming uniform laws of physics and consistent variables (e.g., wind speed, topography, etc.)—and the earths are not working in tandem to reduce the improbability of the event…as would a single child who could throw 100 billion balls at once.

Anyway, the theory as it is currently taught, however, is that a single Big Bang event (read: trial) created a stochastic chain that created the cosmology that surrounds us—like dumping a near-infinite number of cosmic buckets filled with fair dice and arguing that every die from every bucket lands on six. That’s impossibly unconvincing. We’re also assuming that these bucket rolls never have deleterious effects when twos, threes, and fives emerge. A true p-value for cosmology would have to include the likelihood of internecine stochastic combinations that would immediately end the process. So, there’s serious doubt as to whether the universe would even be “allowed” to get an infinite number of “bucket dumps” before we’re asked to enter the room. I guess I’m just perplexed, too, by the notion that we’re unwilling to give stochastic processes the benefit of the doubt when it comes to canons and bucket-dumps, but we’re more than willing to make them the bedrock of the most statistically improbable event(s) involved in creating the universe. In that limited sense, then, as the article queries, I do believe atheism to be irrational.
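As an aside, the 200-die figure from 2a above can be verified directly. A one-liner suffices, since 10^-156, while tiny, is still comfortably above double-precision underflow (~10^-308):

```python
from math import log10

# Probability that all 200 fair dice land on six.
p_bucket = (1 / 6) ** 200
print(f"{p_bucket:.2e}")           # ~2.34e-156

# Orders of magnitude: ~156 zeros, versus ~80 for the atom count.
print(round(200 * log10(6), 2))    # 155.63
```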

Peter:  The analogy is flawed because our observation of canons and buckets is unrestricted: we can, in principle, observe any result in the sample space. Same with the Dixie cups: if we want to make the analogy work, then we’re not standing next to some arbitrary boy, watching him throw ping-pong balls. We’re a tiny creature that’s generated inside a Dixie cup the moment a ping-pong ball lands inside it. All that’s necessary to explain our existence is that there be enough boys, balls, and cups that it could plausibly happen at least once, in *some* trial. The p-value can be as low as we want for any single trial, thanks to the selection bias that ensures we can only ever observe the successful one. This isn’t just my crazy idea; it’s a fundamental principle of statistics. At this point I’ve explained it as clearly as I can, so if you still have a problem with it, it might be time to appeal to a higher court.

> Anyway, the theory as it is currently taught, however, is that a single Big Bang event (read: trial) created a stochastic chain that created the cosmology that surrounds us […] That’s impossibly unconvincing.<

I agree! As I mentioned above, most physicists seem to agree too, and since they noticed this problem, they’ve proposed various multiverse scenarios that provide an adequate number of “trials.” (This is different from the better-known “parallel universes” of some interpretations of quantum mechanics, which share the same physical properties.) Obviously, I understand virtually none of the real physics here, but it’s so much fun to grapple with the general conceptual outlines—as cool as any science fiction. I hope we now agree that a suitably large number of “trials” would solve this problem. You also seem skeptical that the universe would get that many tries, but I don’t see why not. The eleven dimensions of spacetime aren’t a problem, since more universes doesn’t mean more dimensions: you can “stack” infinite n-dimensional spaces in an (n+1)-dimensional space. (And anyway, that’s irrelevant in current models—see below.) You also mention “internecine stochastic combinations that would immediately end the process.” Could you elaborate on that? It seems like it could make sense in a cyclic model, with one universe at a time—but there are *plenty* of alternatives. So, if you’re curious what physicists say about this, here are a few theories I’ve come across—I will inevitably butcher them, but as always, there are better explanations on Wikipedia and elsewhere:

(a) eternal inflation: the universe actually expands much faster than the speed of light, and different regions of spacetime are “far enough apart” from each other (in some sense) as to be “causally unconnected.” So they have different sets of parameters. This seems to be very popular these days.

(b) black hole cosmology: each black hole is the boundary of a new “universe” that may have its own parameters. Not only does that imply that all the black holes in our universe are themselves baby universes, but it also implies that we ourselves are stuck in some other universe’s black hole! How metal is that?
(c) mathematical universe hypothesis: this one is so crazy that even its creator Max Tegmark claims not to believe it. The idea is that the fundamental level of reality isn’t particles, fields, or branes, but rather math itself. Every mathematical system is its own universe—not just a description of one. Honestly, this sounds kind of dumb and self-defeating to me, but Tegmark is a smart guy who has forgotten more math than I could learn in a lifetime. So hey, if he says it’s possible, that’s cool. As for your final question, which is probably at the heart of this, I prefer physical explanations because they’ve worked well in the past. Maybe they will break down at some point, and the only answer available will be “God did it”—but it hasn’t happened for any such question in the past, and it’s not clear to me that this is the exception. As for why it’s stochastic instead of directed in some way, that’s just the null hypothesis. There may well turn out to be reasons why some configurations are preferred, but AFAIK we have no reason to assume that at this point. Sorry, I should have been clear: I also agree that atheism is irrational. To my thinking, the rational position at this point (which is not to say the best!) is agnosticism.

David:  A quick comment for now, but I’ll write more later: You keep insisting on observation as a necessary condition for my argument, but I’ve never made that assumption. Plantinga did—and you have with the bucket-of-dice metaphor—but I’m really only interested in “Platonic” events. We might never witness the boy on the nth earth trying to get his ball in the cup or the robotic arm reaching into the boxes (or the resulting composition!), but that has no bearing on the p-value of the trial. We don’t need to be there at all—in the cup or next to the boy or even on the same planet! My qualms with extraordinarily low-entropy p-values are distinct from whether or not we ever “observe” them, so neither selection bias nor Bayes’s theorem has any relevance with respect to my arguments. These points, I thought, were obvious because the bulk of our discussion has involved p-values of wholly unobservable events (e.g., protein-chain synthesis after the Big Bang, etc.), but perhaps I should have been clearer.

As for dimensions, I agree…and, as I think you’d agree, too, more dimensions don’t necessarily prove the multiverse, which, some physicists say, is simply the union of all “parallel” universes (as opposed to the “forked” theory proposed by QM). Physicists also suggest such universes might very well have different physical constants, which doesn’t help us much when we’re talking about p-values with respect to the current cosmology. I don’t believe a larger sample space gets us there either. (1) There’s no evidence for the multiverse (or forking parallel universes), (2) the vast majority of the enlarged sample space would involve sterile universes incapable of sustaining any kind of life, (3) an infinite sample space means infinite exponentiation on (0,1) that approaches not one but zero, and (4) current cosmological evidence (e.g., cosmic background radiation (CBR), etc.) only supports a single trial. I’m familiar with the first two theories you mentioned, and I know inflation is very popular because it answers a lot of questions, including (1) the “flatness problem”—a feature of the permissible range for Omega values (the ratio of gravitational to kinetic energy)—and (2) CBR homogeneity. As for cosmological conflicts, there are many…everything from problems of initial inhomogeneity and UV radiation to “permittivity” of free space and interfering cross-reactions within the process of amino acids forming peptide bonds. I guess the “null hypothesis” is the heart of the matter. Though I would never suggest very low p-values are, in and of themselves, proof of design, I feel such extreme improbabilities strongly suggest a designer—or, at least, strongly argue against chance. There’s something extraordinarily unnerving about the idea that the stochastic process involved in generating the human genome equals a range of something like (1 in) 4^-180^110,000 to (1 in) 4^-360^110,000. 
Numbers like that suggest something beyond the merely improbable…well beyond canons, buckets of dice, bouncing footballs, and even protein chains.

Peter:  > (1) There’s no evidence for the multiverse (or forking parallel universes), (2) the vast majority of the enlarged sample space would involve sterile universes incapable of sustaining any kind of life.<

Well, anthropic reasoning more or less interprets (2) as the reply to (1). If universes that can sustain life are statistically unlikely, and we know there is at least one that can (ours), there are probably many others that can’t. And at least some physicists think there may be other ways to observe universes, as crazy as it sounds. So I don’t think these are evidence against it.

> (3) an infinite sample space means infinite exponentiation on (0,1) that approaches not one but zero<

Are you sure about that? If p is the probability that any randomly-generated universe can sustain life, and n is the number of universes, then the probability that *at least one* can sustain life is 1 – (1 – p)^n, which approaches 1 as n goes to infinity. I think you’re taking p^n, which is the probability that *every* universe can sustain life. (And note that I’m not doing any non-replacement stuff either—p is the same for every trial.)

> (4) current cosmological evidence (e.g., cosmic background radiation (CBR), etc.) only supports a single trial.<

AFAIK, none of the multiverse models make any predictions about that. Every universe will have its own background radiation, and we wouldn’t expect it to “leak” from one to another. Unless I’ve missed something?

> There’s something extraordinarily unnerving about the idea that the stochastic process involved in generating the human genome equals a range of something like (1 in) 4^-180^110,000 to (1 in) 4^-360^110,000.<

It’s funny—I think we (as intelligent lifeforms) are pretty insignificant in the bigger picture, and it’s unnerving to imagine that this huge, empty universe would have been created just for us! Oops! I should have read before shooting my mouth off: there are at least some people who claim to have found evidence that our universe bumped into another one (an Arxiv paper linked from Wikipedia). But that’s pretty recent and (I think) controversial.
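The 1 – (1 – p)^n formula from Peter’s comment is easy to play with numerically. A sketch with an arbitrary, purely illustrative p (not a physically meaningful value):

```python
# P(at least one success in n independent trials): 1 - (1 - p)^n
def at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 1e-12  # arbitrary, vanishingly small per-trial probability

# As n grows past ~1/p, the probability of at least one success heads to 1...
for n in (10**6, 10**12, 10**14):
    print(f"n = {n:.0e}: {at_least_one(p, n):.4f}")

# ...while p**n, the probability that *every* trial succeeds, collapses to 0.
print(p ** 3)
```

The crossover near n ≈ 1/p is the crux of the disagreement: a tiny per-trial p is no obstacle to an at-least-once event, provided the number of trials is comparable to its reciprocal.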

David:  > Well, anthropic reasoning more or less interprets (2) as the reply to (1). If universes that can sustain life are statistically unlikely, and we know there is at least one that can (ours), there are probably many others that can’t. And at least some physicists think there may be other ways to observe universes, as crazy as it sounds. So I don’t think these are evidence against it.<

Well, I’ve tried to frame my arguments in a way that bypasses anthropic reasoning. (1) means we don’t really need to think about a very large sample space, and (2) becomes irrelevant to our discussion of p-values that relate to our current cosmology.

> Are you sure about that? If p is the probability that any randomly-generated universe can sustain life, and n is the number of universes, then the probability that *at least one* can sustain life is 1 – (1 – p)^n, which approaches 1 as n approaches infinity. I think you’re taking p^n, which is the probability that *every* universe can sustain life. (And note that I’m not doing any non-replacement stuff either—p is the same for every trial.)<

Well, I was giving cosmologists the benefit of the doubt and assuming the possibility that quantum fluctuations replicate the process involved in our current cosmology (thus, p^n), but if we’re only interested in “at least one” universe—and, perhaps, our current universe is a reification of that event, which might suggest the other universes do not sustain life—the formula is almost too convenient to be helpful; it states that every non-impossible event is guaranteed to occur (at least once) over an infinite number of trials. I’ll leave it to you to imagine at-least-once events that offer fatal contradictions.

> AFAIK, none of the multiverse models make any predictions about that. Every universe will have its own background radiation, and we wouldn’t expect it to “leak” from one to another. Unless I’ve missed something?<

Well, assuming the multiverse requires space to exist, those universes couldn’t exist apart from the BB. That is, current cosmological models suggest the Big Bang created space (i.e., before it, there was REALLY nothing); inflationary models allow space itself to expand faster than the speed of light without violating relativity theory, and AFAIK quantum fluctuations can theoretically create mass without the cosmic fireworks of CBR. Conserving the laws of thermodynamics, though, means the duration of these “spontaneous” masses is incredibly small and unobservable (as are the masses).

> It’s funny—I think we (as intelligent lifeforms) are pretty insignificant in the bigger picture, and it’s unnerving to imagine that this huge, empty universe would have been created just for us!<

Unless you think about a fantastically creative and loving God who chose to have a relationship with us, despite our incredible insignificance!

Peter:  > Well, assuming the multiverse requires space to exist, those universes couldn’t exist apart from the BB.<

Space is complicated. Like I said, our universe is “causally unconnected” to other universes—either by the event horizon of a black hole, or by some of the more subtle general-relativistic stuff in eternal-inflation theory. So no, we wouldn’t share the same Big Bang in any observable sense. As for how fluctuations produce inflationary “bubbles,” my initial guess was that they were a different kind of fluctuation from the standard vacuum uncertainty, which is how I found the paper linked above. The calculations start on p. 2, but they smacked me down pretty hard. I really do want to learn this stuff some day… sigh.

>There’s something extraordinarily unnerving…it’s unnerving to imagine….Unless you think about a fantastically creative and loving God.<

Right. I just meant that “unnerving” is in the eye of the beholder. Sorry for constantly referring to the Neil Manson article—I get that you don’t have the time or inclination to read these things—it’s just hard to summarize. Here’s the relevant bit from the abstract, which follows a defense of the “design argument”: “Lastly, some say the design argument requires a picture of value according to which it was true, prior to the coming-into-being of the universe, that our sort of universe is worthy of creation. Such a picture, they say, is mistaken, though our attraction to it can be explained in terms of anthropocentrism. This is a serious criticism. To respond to it, proponents of the design argument must either defend an objectivist conception of value or, if not, provide some independent reason for thinking an intelligent designer is likely to create our sort of universe.” The full argument appears in pp. 172–175. This is why I keep referring to anthropocentrism.

David:  I do have the inclination…just not as much time, which is why I haven’t responded to date. All apologies.

> I get that you want to bypass the anthropic principle, but as long as we’re reasoning from our actual experience in the universe, you can’t. It’s a general principle of reasoning about observations. If you want to talk about “Platonic events” divorced from our human perspective, that’s great, but then the unlikeliness of our universe doesn’t demand explanation: any other universe would have been equally unlikely, and there’s nothing obviously special about ours a priori.<

I’m not sure why I can’t, lol. I’m interested in discussing the stochastic probability of protein chains and peptide bonds and DNA sequencing subsequent to the BB (but before our emergence onto the scene as conscious, observant beings), all of which are wholly unobservable events. In fact, most of the probabilities in the universe might be considered “Platonic” (i.e., unobserved)—from the imminent explosion of distant quasars and formation of black holes to the 46.3 percent probability that the tree in the wooded midway on my way to work will be uprooted at wind speeds exceeding 72.4 mph. That approach doesn’t necessarily demand anything, but discussing the origin of the universe in the absence of a designer places the burden on physics and mathematics (specifically, probability theory)…and THAT does demand investigation.

> While I’m sympathetic to the complaint that this multiverse stuff is “too convenient”–that it explains everything equally well–the divine-creator explanation has the same flaw. As you may know, there are some physicists who consider multiverse theories “unscientific” for precisely that reason.<

True. The difference, though, is that faith does not require proof…despite Hitchens’s claims to the contrary. In fact, the Bible says faith IS the proof (Hebrews 11:1). I don’t mind “convenient,” but “logically easy” rubs me a bit the wrong way. (The “at least one” (ALO) formula for infinite n is an example.)
I understand some might consider faith to be “logically easy,” but I’m comfortable with the notion that faith is completely different from, and directly opposed to, science. > The question of evidentiary support is well taken. There doesn’t seem to be consensus on what would even count as evidence for a multiverse–though that’s hardly a unique scenario for scientific theories, including some that have gone on to be vindicated (e.g., evolution, quarks, cosmic inflation). So no, I don’t think multiverse theories are self-defeating, at least not at the point you identify, nor do I think it’s driven by a refusal to accept the divine creator. It’s about a commitment to natural explanation before supernatural. < I think it’s a bit problematic to lionize natural explanations as a feature of coeval scientific understanding. We’ve seen many times throughout history that science very often “got it wrong” in light of new evidence. That’s not to say science isn’t incredibly valuable and insightful—it is—but it is finally limited in its capacity to explain events based upon restricted observation(s) and imperfect knowledge. Again, in the absence of a designer, we have no choice but to follow such a path, but that’s why we need to be careful. Many of these theories exist without any serious physical evidence. That’s fine, but that’s also why I’m focusing on abstract p-values because they offer a more substantive and dispassionate line of inquiry with respect to “natural explanations.” > I don’t follow: doesn’t “non-impossible” preclude “fatal contradiction”? Anyway, the anthropic argument doesn’t call for an infinite number of universes, just enough for there to be one that sustains life. If indeed there are infinite universes (as some physicists think), then the situation is even worse than you describe: “anything that can happen will happen an *infinite* number of times.” < Not at all. (And, again, I’m not making an anthropic argument.) 
Here’s a “trivial” example: Assume we can establish a p-value involving whether or not God created the universe. Motive for the creation is irrelevant. (Perhaps this number is simply the complement of the probability that stochastic processes “created” the universe.) According to 1 – (1 – p)^n for an infinite n, even if p is vanishingly small—and, as an agnostic, I imagine you’d argue p > 0 (otherwise, you’d be an atheist)—then 1 – (1 – p)^n approaches 1 as n approaches infinity. And if, according to Guth, every event (that can occur) will occur an infinite number of times, then that’s essentially the same thing as saying every possible event (E) will occur within each of the infinite universes (if not multiple times within a single universe). This is why I invoked p^n = 1, which contradicts the calculation that infinite exponentiation on (0,1) approaches zero.
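As a sanity check on the ALO behavior, here is a minimal sketch (Python; the per-trial probability p is a made-up placeholder, not an actual cosmological estimate) showing that 1 − (1 − p)^n climbs toward 1 as n grows, no matter how small p is:

```python
# "At least one" (ALO) probability over n independent trials.
# p is a hypothetical, vanishingly small per-trial probability.
p = 1e-12

for n in (10**6, 10**12, 10**14):
    alo = 1 - (1 - p)**n  # P(at least one success in n trials)
    print(f"n = 10^{len(str(n)) - 1}: P(at least one) = {alo:.6f}")
```

For very small p, a numerically stabler form is `-math.expm1(n * math.log1p(-p))`, but the naive expression is enough to illustrate the point: the probability passes through 1 − 1/e ≈ 0.63 around n ≈ 1/p and then saturates at 1.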

Anyway, there are different metrics we can use, too.  For example, I think the Poisson distribution

$P(x;\mu) = e^{-\mu}(\mu^x)(x!)^{-1}$

might be a better p-measure for the stochastic probability of the universe; here, the p-value approaches zero as the mean of the sample space approaches zero, even for an infinite x. That seems much more intuitive to me: for an extremely small p-value for a single trial, which, in a real way, becomes the mean case for the stochastic probability of our current universe, the probability of future successes decreases. This is the opposite of the mechanism behind the ALO equation where the probability increases as the number of trials increases. Another model I prefer involves curves like exponential decay (and equations like it); for example, the simple non-homogeneous differential equation

$dp/dt = te^{-3t}-3p$

is one such (general) reification of a curve modeling (what I believe is the basic notion of) probability over time subsequent to the BB. For the sake of completeness, the general solution is

$dp/dt + 3p = te^{-3t}\\ (dp/dt + 3p)e^{\int3 dt} = te^{-3t + \int 3 dt}\\ \int (d(pe^{3t})/dt) dt = \int t dt\\ p(t) = (t^2e^{-3t})/2 + \delta e^{-3t}$

where the Euclidean metric

$[(p(t_n) - p(t_{n-1}))^2]^{1/2} \rightarrow 0$

as t approaches infinity, which is what we want. This is intuitive: if after the BB, space expands faster than the speed of light, pulling matter behind it (though not quite at the SOL) in a nearly homogeneous way, it seems incredibly unlikely that, over time, the necessary material would have the opportunity to create protein chains and the like, especially when the force and velocity of Guth’s slow-roll inflation inexorably pushes that material further apart through the expansion.
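Both decay claims above are easy to check numerically. Here is a minimal sketch (Python; the constant δ is arbitrary and set to 1, and the tolerances are ad hoc) verifying that p(t) = (t²e^(−3t))/2 + δe^(−3t) satisfies dp/dt = te^(−3t) − 3p and decays to zero, and that the Poisson pmf vanishes as the mean μ → 0 for any fixed x ≥ 1:

```python
import math

def p(t, delta=1.0):
    # Closed-form solution: p(t) = (t^2 e^{-3t})/2 + delta * e^{-3t}
    return (t**2 * math.exp(-3 * t)) / 2 + delta * math.exp(-3 * t)

def residual(t, h=1e-6):
    # dp/dt - (t e^{-3t} - 3p) should be ~0; estimate dp/dt by
    # a central finite difference
    dpdt = (p(t + h) - p(t - h)) / (2 * h)
    return dpdt - (t * math.exp(-3 * t) - 3 * p(t))

print(abs(residual(0.7)) < 1e-6)  # the closed form satisfies the ODE
print(p(10.0) < 1e-10)            # p(t) -> 0 as t grows

def poisson_pmf(x, mu):
    # P(x; mu) = e^{-mu} mu^x / x!
    return math.exp(-mu) * mu**x / math.factorial(x)

# For fixed x >= 1, the pmf vanishes as the mean mu -> 0
print(poisson_pmf(3, 1e-4) < 1e-12)
```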

> Space is complicated. Like I said, our universe is “causally unconnected” to other universes–either by the event horizon of a black hole, or by some of the more subtle general-relativistic stuff in eternal-inflation theory. So no, we wouldn’t share the same Big Bang in any observable sense. <

Whence, then, the space for those universes? Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse; in fact, Guth’s paper suggests that very notion. Anyway, I’d much prefer an investigation of objective p-values rather than debating diachronic theories of cosmology. I think part of the issue is that we don’t fully comprehend the magnitude of the improbabilities with which we’re dealing.

Jeff:  A few quick observations on this discussion:

1. I’m enjoying it immensely, while understanding only some of it, and being completely unable to participate in it.

2. It is taking place on the Internet.

3. It is completely civil and, until this moment, focused on the issues of the discussion and not observation of the discussion itself.

4. There are no cats anywhere in this discussion. Not even Schrödinger’s — the poor thing(s).

5. The convergence of factors 2, 3, and 4 above — a civil discussion on the Internet without the inclusion of cats — seems so highly improbable, involving opposing forces of such strength able to co-exist only in conditions at or immediately following the BB (I can’t do the math, but y’all can do it in your sleep, apparently) that I hereby postulate that this discussion is not actually taking place. Now, please, continue.

Peter:  Thanks, Jeff! I can’t believe you (and at least two others) read this far. I’ve learned a lot over the last couple weeks. Dave, we need to find a publisher. I eagerly await your further thoughts! In the meantime, here’s my bid for Longest Post So Far. Sorry in advance.

> Anyway, I’d much prefer an investigation of objective p-values rather than debating diachronic theories of cosmology. I think part of the issue is that we don’t fully comprehend the magnitude of the improbabilities with which we’re dealing. <

It’s one thing to assign a p-value, and another to interpret it as evidence for design. I think we both understand them fine, at least mathematically. (Yeah, it’s probably impossible to understand them intuitively.) We disagree on what they imply. You say the p-value is so low that our universe couldn’t be a random accident. I say no p-value, no matter how small, could ever make that case by itself: our universe is no more or less likely than any other.

Here’s an illustration. I’ll ask my computer for ten random integers:

$ for i in {0..10}; do echo -n "$RANDOM "; done

1967 3496 15853 19457 29526 16109 16229 15867 14059 223 15303

[Edit: Oops! That was eleven. And that’s why I’m not a professional programmer. ]

That sequence is incredibly unlikely: the p-value is just 1 in 10^45. (In other words, if every person on Earth ran that command a billion times a second, it almost certainly wouldn’t come up again before the sun engulfed our planet.) But that, in itself, gives no reason to suspect it was specially chosen. For us to make that leap, it needs to have some properties that a designer would care about—in terms of our older examples, it would have to be the equivalent of a double canon or a bucket of sixes.

In those examples, we recognize canons and high rolls as valuable in the domains of music and gaming. (Manson gives the analogy of poker, where an accusation of cheating is more persuasive if the cheater ends up with a strong hand.) Perhaps there is something special about this sequence, which would be ruined by even a slight change to any number. We still can’t claim that it was specifically chosen without assuming that the chooser also knows and cares about this special quality and would thus be motivated to choose this sequence over any other.

So this is what I meant by saying the low probability of our universe doesn’t inherently “demand explanation.” We agree that our universe appears to be uniquely tuned for life and extremely improbable, but we disagree about the next step. In order to argue for design, we have to assume that life is inherently valuable within the domain of universe-creation, just like canons and sixes are in music and dice games. But (as Neil Manson points out) it’s hard to find people who explicitly defend that assumption, probably because it’s a bit embarrassing and not that easy to do without assuming some amount of theology and thus rendering the argument circular. I found one defense by Richard Swinburne.
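The quoted p-value checks out, assuming bash’s $RANDOM draws uniformly from 0–32767 (2^15 values): the 1-in-10^45 figure corresponds to the intended ten draws, and the accidental eleventh pushes it to roughly 1 in 10^50. A quick sketch:

```python
import math

OUTCOMES = 32768  # bash's $RANDOM range: 0..32767, i.e., 2^15 values

for n in (10, 11):  # ten intended draws; eleven actually printed
    prob = OUTCOMES ** -n  # probability of one specific ordered sequence
    print(f"{n} draws: p = 1 in 10^{-math.log10(prob):.1f}")
```

Ten draws give 10^(150·log10 2) ≈ 10^45.2, which is where the "1 in 10^45" comes from.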

I haven’t gotten to read it all the way through—the Google Books preview cuts out just as he gets to the multiverse issue—but I’m very curious.

*** AND NOW, Some Remaining Ancillary Quibbles *** (no obligation to discuss this stuff if you’re sick of it)

> And if, according to Guth, every event (that can occur) will occur an infinite number of times, then that’s basically the same thing as saying some event E—actually, every event that could occur!—will occur within each of the infinite universes. <

No, it’s totally different. Given infinite universes, even things that only happen in one universe out of a bajillion will still happen an infinite number of times. This makes calculating probabilities really annoying, as Guth says, but it doesn’t mean that everything is equally probable. There’s no contradiction here, and p^n is still irrelevant.

> I doubt anyone would accept that as a “proof” of God’s existence, even though it makes perfect logical and mathematical sense, which is, of course, why the ALO equation is problematic at infinite n. <

It only makes sense if divine creation is “an event that can happen” according to the laws of physics—in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.)

> Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse…<

Yeah, I think the first one is right. But what makes it harder to believe than the second? As far as I can tell, the only difference between them is that the second assumes that our universe is the “grandfather” from which all others spring, rather than one of the later generations. That’s a big assumption, and I’m not sure how it could be justified. As for the rate of universe-generation, the exponential-decay model sounds plausible enough to me (that’s what you were using it for, right? I wasn’t sure), though I’d prefer a model more motivated by the actual physical theory. But even if a single universe does gradually lose its ability to create new ones, that doesn’t put an upper bound on the total number of universes out there, given sufficient time. (Think bunnies.) So it doesn’t limit the explanatory ability of eternal inflation. All that said, I actually have no idea what eternal-inflation theory would say about the ur-origin of the grandfather universe. For all I know, it may dispense altogether with the idea of an origin, and just let every universe bubble out from another one, turtles all the way down! Or maybe that’s ludicrous. If only we had some physics-literate friends who were patient enough to wade through these ramblings.

Jim: I’ve followed but steered clear of participating in this conversation. I did want to put out there, though: isn’t this something we ultimately found a philosophical answer to in modernity? If anything, the 57 comments of back and forth reinforce the idea that all we can agree upon is the notion that there’s an innate uncertainty on the subject. It’s like we’re all holding out that we’ll someday find the answer that justifies our own personal belief through science, when the only thing science has really taught us is that the complexities of the universe(s) in its(their) entirety will always fall beyond the capacity of human reason. Wouldn’t the pursuit of knowledge be bettered if we all called a truce? On Pi Day, can’t we all just get along and agree that we’ll never be able to calculate that last digit of infinity? If there’s a God that created our physical realm, clearly he doesn’t intend for us to ever find the end of the rainbow is all I’m saying.

David:  >>I think we both understand [astronomically low p-values] fine, at least mathematically. (Yeah, it’s probably impossible to understand them intuitively.) We disagree on what they imply. You say the p-value is so low that our universe couldn’t be a random accident. I say no p-value, no matter how small, could ever make that case by itself: our universe is no more or less likely than any other.<<

Well, considering all the variables involved, I’m saying it’s very, very highly unlikely that chance is responsible for the complexities and details of the universe. And I think the fact that very low p-values are so difficult to understand intuitively plays an important role in this. Consider this narrative by James Coppedge from “Evolution: Possible or Impossible?”

“The probability of a protein molecule resulting from a chance arrangement of amino acids is 1 in 10^287. A single protein molecule would not be expected to happen by chance more often than once in 10^262 years on the average, and the probability that one protein might occur by random action during the entire history of the earth is less than 1 in 10^252. For a minimum set of the required 239 protein molecules for the smallest theoretical life, the probability is 1 in 10^119,879. It would take 10^119,841 years on the average to get a set of such proteins. That is 10^119,831 times the assumed age of the earth and is a figure with 119,831 zeroes, enough to fill sixty pages of a book this size.”

“Take the number of seconds in any considerable period. There are just 60 in a minute, but in an hour that increases to 3,600 seconds. In a year, there are 31,558,000, averaged to allow for leap year. Imagine what a tremendous number of seconds there must have been from the beginning of the universe until now (using 15 billion years…). It may be helpful to pause a moment and consider how great that number must be. When written down, however, it appears to be a small figure: less than 10^18 seconds in the entire history of the universe. The weight of our entire Milky Way galaxy, including all the stars and planets and everything, is said to be ‘of the order of 3 x 10^44 grams.’ (A gram is about 1/450th of a pound.) Even the number of atoms in the universe is not impressive at first glance, until we get used to big numbers. It is 5 x 10^78, based on present estimates of the radius at 15 billion light years and a mean density of 1/10^30 grams per cubic centimeter. Suppose that each one of those atoms could expand until it was the size of the present universe so that each had 5 x 10^78 atoms of its own. The total atoms in the resulting super-cosmos would be 2.5 x 10^157. 
By comparison, perhaps the figure for the odds against a single protein forming by chance in earth’s entire history, namely, 10^161, is now a bit more impressive to consider. It is 4,000 times larger than the number of atoms in that super universe we just imagined.”

…and this:

“Imagine an amoeba. This microscopic one-celled animal is something like a thin toy balloon about one-fourth full of water. To travel, it flows or oozes along very slowly. This amoeba is setting forth on a long journey, from one edge of the universe all the way across to the other side. Since the radius of the universe is now speculated by some astronomers to be 15 billion light years, we will use a diameter of double that distance. Let’s assume that the amoeba travels at the rate of one inch a year. A bridge of some sort – say a string – can be imagined on which the amoeba can crawl. Translating the distance into inches, we see that this is approximately 10^28 inches. At the rate of one inch per year, the tiny space traveler can make it across in 10^28 years. The amoeba has a task: to carry one atom across, and come back for another. The object is to transport the mass of the entire universe across the entire diameter of the universe! Each round trip takes 2 x 10^28 years.  To carry all the atoms of the universe across, one at a time, would require the time for one round trip multiplied by the number of atoms in the universe, 5 x 10^78. Multiplying, we get 10^107 years, rounded. That is the length of time for the amoeba to carry the entire universe across, one atom at a time. But wait. The number of years in which we could expect one protein by chance was much larger than that. It was 10^171. If we divide that by the length of time it takes to move one universe by slow amoeba, we arrive at this astounding conclusion: The amoeba could haul 10^64 UNIVERSES across the entire diameter of the known universe during the expected time it would take for one protein to form by chance, [even] under those conditions so favorable to chance. But imagine this. Suppose the amoeba has moved only an inch in all the time that the universe has existed (according to the 15-billion-year estimate). 
If it continues at that rate to travel an inch every 15 billion years, the number of universes it could carry across those interminable miles is still beyond understanding, namely, more than 6 x 10^53, while one protein is forming.  Sooner or later our minds come to accept the idea that it’s not worth waiting for chance to make a protein. That is true if we consider the science of probability seriously.” I think that helps a bit with our intuition! >>Here’s an illustration. I’ll ask my computer for ten random integers:<<

LOVE this example, but I’m not sure how a random integer string is any different from (essentially) rolling eleven dice. It seems like you’re arguing we should disregard (the import of) very low p-values because (1) very low p-values exist and (2) they’re ubiquitous (i.e., we can find them everywhere; thus, they offer no substantive value as highly improbable events). [Edit: I think these are Leon’s main objections, too.]  If I’m understanding you correctly, your string of integers is a random-but-meaningless event (for us) primarily because it cannot distinguish itself—or, rather, we cannot distinguish it—from any other random string (i.e., its “meaning”). (Let’s assume we wouldn’t get a subset of the Fibonacci sequence or something recognizable or meaningful.) I think that’s what you were saying with respect to a “[property]…a designer would care about.”

So, the question then becomes: How do we assign meaning to p-values—on the order of double canons and buckets of sixes—without appeals to anthropic, fit-for-life arguments? I’ve thought about it, and I just don’t know the answer to that question. I am convinced, though, there is one! It’s clear, for example, that temporal perspective matters—ex post facto vs. a priori quantifications of probability; also, the p-value of a bucket roll means one thing when it represents the mathematics of ANY one of the possible bucket rolls (B), given as $\forall x p_x = (1/6)^n$ but, as I’m sure you would agree, it means another as the probability of a specific constellation of dice (p_i), even though

$p_i = p_x = (1/6)^n$

because a bucket of sixes, E_6, is an element of the set of all possible dice constellations:

$E_6 \in B : |B| = \Gamma(n+f)[\Gamma(f)(n!)]^{-1}$

where f is the number of faces on a single die. (This equation provides a cardinality that eliminates all repetitions.) But why can’t we appeal to statistical significance as the “domain” of probability measurements? It seems awkward to suggest the stochastic cosmological p-value—as incredibly and infinitesimally small as that number is—wouldn’t satisfy, say, the 5-sigma measure for rejecting the null hypothesis. Perhaps the formation of protein chains should be equated with Manson’s high poker hand or a bucket of sixes without appealing to anthropic principles. I don’t see that as being unreasonable.
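The Γ-form of |B| is the standard count of multisets of size n drawn from f faces, equal to the binomial coefficient C(n+f−1, n). A small sketch (Python) cross-checking the formula by brute-force enumeration for four six-sided dice:

```python
from itertools import combinations_with_replacement
from math import comb, gamma

def constellations(n, f):
    # |B| = Gamma(n + f) / (Gamma(f) * n!), the number of distinct
    # dice constellations (order ignored, repetitions collapsed)
    return round(gamma(n + f) / (gamma(f) * gamma(n + 1)))

n, f = 4, 6  # four six-sided dice
brute = sum(1 for _ in combinations_with_replacement(range(1, f + 1), n))
assert constellations(n, f) == brute == comb(n + f - 1, n)
print(constellations(n, f))  # 126 distinct constellations of four dice

# Any one specific ordered roll -- e.g., a bucket of all sixes -- has
# probability (1/f)^n, matching p_i = p_x = (1/6)^n above.
print((1 / f) ** n)
```

For large n the Γ quotient overflows floating point, so `math.comb(n + f - 1, n)` is the safer form; the Γ expression is kept here because it mirrors the equation in the text.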

>>No, it’s totally different. Given infinite universes, even things that only happen in one universe out of a bajillion will still happen an infinite number of times. This makes calculating probabilities really annoying, as Guth says, but it doesn’t mean that everything is equally probable. There’s no contradiction here, and p^n is still irrelevant.<<

Well, if we’re supposed to exclude all the universes where the event doesn’t occur, then that falls into the “logically easy” category. It’s as if I said I’m going to roll an infinite number of sixes—just ignore any roll that isn’t a six. The only requirement for that logic to work is that I keep rolling the die. The p-success of every event in that scenario equals one (even if the constellation of universes changes for each event); we just exclude the results we don’t like. Of course, we know that $p^{\infty \pm m} = p^{\infty} \rightarrow 0$, so, in that sense, then, exceptions don’t even really apply.

>>It only makes sense if divine creation is “an event that can happen” according to the laws of physics–in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.)<<

I think my “proof” suggests such a singular event. An event that completely alters (or destroys) the universe would also be another example. (I’ve read this is theoretically possible.) Also, I don’t think the Platonic, universe-by-God p-value is influenced in any way by whether or not God is subject to the laws of physics. (If He is, then He must not exist.) Either God did or didn’t create the universe: as a historical probability, clearly p ∈ {0, 1}.

>>Yeah, I think the first one is right. But what makes it harder to believe than the second? As far as I can tell, the only difference between them is that the second assumes that our universe is the “grandfather” from which all others spring, rather than one of the later generations. That’s a big assumption, and I’m not sure how it could be justified.<<

Well, it’s more difficult to believe for the same reason a single BB is difficult to believe: quantum fluctuations require (at least a vacuum that requires) space-time, which doesn’t emerge until after the BB/fluctuation begins. So, I guess it’s easier to imagine an infinite number of fluctuations within an already-established space-time continuum rather than an infinite number of impossible “space-less” fluctuations emerging outside of space-time. (And nothing in Guth’s article suggests a “space-less” fluctuation.) Consider this quote: “Quantum mechanical fluctuations can produce the cosmos,” said…[physicist] Seth Shostak…. “If you would just, in this room,…twist time and space the right way, you might create an entirely new universe. It’s not clear you could get into that universe, but you would create it.” Oddly, Shostak’s claim presupposes both time and space in order to hold.

>>As for the rate of universe-generation, the exponential-decay model sounds plausible enough to me (that’s what you were using it for, right? I wasn’t sure), though I’d prefer a model more motivated by the actual physical theory. But even if a single universe does gradually lose its ability to create new ones, that doesn’t put an upper bound on the total number of universes out there, given sufficient time. (Think bunnies.) So it doesn’t limit the explanatory ability of eternal inflation.<<

What if the bunnies were expanding away from each other at cosmological speeds (i.e., 74.3 +/- 2.1 km/s/megaparsec), lol?!  (One megaparsec equals roughly 3.26 million light-years.) Not even bunnies can copulate that quickly, lol. Eventually, each bunny would become completely isolated—the center of its own galaxy- or universe-sized space—where it could no longer procreate and repopulate. So, inflation perforce establishes the rate of “cosmological procreation” as inversely proportional to time.

Peter:  > But why can’t we appeal to statistical significance as the “domain” of probability measurements? It seems awkward to suggest the stochastic cosmological p-value—as incredibly and infinitesimally small as that number is—wouldn’t satisfy, say, the 5-sigma measure for rejecting the null hypothesis. <

Okay, if the null hypothesis is, “No other universes exist, and the cosmological parameters were pulled at random from any of the [whatever huge number] possibilities,” then yeah, that’s probably safe to reject. But in rejecting that, we’re still nowhere near an affirmative argument for design. There are plausible alternatives to both prongs of the hypothesis, both of which have been active areas of research for decades: to the first, an account of universe-generation in which our sort of universe is more likely than others (as Sean Carroll describes in the piece you linked); to the second, multiverse theories like eternal inflation. This is the familiar course of “God of the Gaps” arguments. They present a false choice between materialist and theist explanation, and paint God into an ever-diminishing corner: if the proof of the divine rests on what we don’t understand, then what happens when we understand it? I’m much more sympathetic to the Spinozist (I think?) thought that credits God for the astonishing regularity of the universe. (Side note: Coppedge gets the math all wrong… but that deserves its own thread, and is well documented elsewhere in any case.)

> Perhaps the formation of protein chains should be equated with Manson’s high poker hand or a bucket of sixes without appealing to anthropic principles. I don’t see that as being unreasonable. <

But the thing is, even OUR universe is practically devoid of protein chains. If you look at our universe as a whole, why would protein chains be its most important feature? And is there no other possible universe in which protein chains could be more common than in ours? Even if there’s not, can we assume that ANY being capable of universe-creation would necessarily prioritize protein chains above all other arrangements of matter and energy, under any possible set of physical laws? That’s what the design argument requires, and I don’t see how it can be justified. (Though this is basically what Richard Swinburne attempts in that chapter I linked a couple weeks ago, albeit with humans instead of protein chains.) Anyway, one thing I’ve really enjoyed about this discussion is that both sides counsel humility: “Science can’t explain everything” vs. “We’re not the most important thing in the universe.”

> So, inflation perforce establishes the rate of “cosmological procreation” as inversely proportional to time. <

Um, I think the bunny metaphor may have gotten away from us. Universes don’t copulate, and even if universe-creation slows over time within a single universe (a claim that requires a LOT more physics than you and I know), that still wouldn’t limit the number of universes. That’s what I meant to illustrate with bunnies—they get old and die, but their children keep reproducing. I suspect your intuition stems from conservation laws—if there’s only a finite amount of stuff, then it’s going to slow down as it spreads out—but I think that intuition may be mistaken here. Universe-creation doesn’t need to be conservative if they are isolated from each other (i.e., “We can’t get in”). And as long as universes generate more than one baby bubble universe on average, the process tends toward infinity. I’m guessing your next question will be “If the baby universes aren’t inside the parents, then where are they?” [edit: oops—that’s referring to a sentence I deleted! In short, I suspect it’s wrong to think of baby universes as contained within their parents.] Honestly, I haven’t a clue—but I’m guessing it’s the wrong question to ask. Going out on a limb, I’d guess that the path between universes is neither spacelike nor timelike (in the relativistic sense), and it’s kind of meaningless to try and specify “which dimensions” are involved. Suffice it to say they’re isolated from each other.
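The bunny point is the textbook Galton–Watson branching-process result: extinction is certain only when the mean number of offspring is at most one. A small deterministic sketch (Python; the Poisson offspring distribution is an illustrative assumption, not anything from inflation theory) computing the extinction probability q as the smallest fixed point of q = e^(m(q−1)):

```python
import math

def extinction_probability(mean_offspring, iterations=500):
    """Extinction probability of a Galton-Watson branching process with
    Poisson(mean_offspring) children per individual: the smallest fixed
    point of q = exp(mean * (q - 1)), found by iterating upward from 0."""
    q = 0.0
    for _ in range(iterations):
        q = math.exp(mean_offspring * (q - 1.0))
    return q

# Subcritical lineages (mean <= 1) die out with certainty...
print(extinction_probability(0.8))   # -> essentially 1.0
# ...but above one child per parent, unbounded growth has positive probability
print(extinction_probability(2.0))   # -> about 0.2
```

So even if each individual universe’s generative capacity decays over time, the lineage as a whole needn’t: with mean offspring m > 1, the expected population after g generations is m^g, which diverges.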

[END]
